├── Day00 └── README.md ├── Day01 └── README.md ├── Day02 └── README.md ├── Day03 └── README.md ├── Day04 └── README.md ├── Day05 └── README.md ├── Day06 └── README.md ├── Day07 └── README.md ├── Day08 ├── Practicals-README.md └── README.md ├── Day09 ├── README.md └── script.sh ├── Day10 └── README.md ├── Day11 ├── Practicals-README.md └── README.md ├── Day12 ├── README.md └── userdata.sh ├── Day13 └── README.md ├── Day14 ├── README.md ├── app.conf ├── appconf.j2 ├── index.j2 ├── index.nginx-debian.html ├── scorekeeper.js ├── style.css └── ubuntushellscript ├── Day15 └── README.md ├── Day16 └── README.md ├── Day17 └── README.md ├── Day18 ├── README.md ├── trustpolicy.json └── userinlinepolicy.json ├── Day19 ├── README.md └── assume-role-policy.json ├── Day20 └── README.md ├── Day21 ├── MySQL dump file ├── README.md └── app.py ├── Day22 ├── README.md ├── post.json └── put.json ├── Day23 ├── LoadingDataSampleFiles.zip ├── README.md ├── Workbench-Build125.zip └── sample-data ├── Day24 └── README.md ├── Day25 ├── README.md ├── accesspoint.json └── s3-policy-ip.json ├── Day26 ├── README.md ├── rotation.sh └── userdata.sh ├── Day27 └── README.md ├── Day28 └── README.md ├── Day29 └── README.md ├── Day30 ├── Delvol-Lambda.py ├── README.md ├── assignment.py └── iam-policy.json ├── Day31 ├── BuildScript.sh ├── README.md ├── appspec.yml └── tomcat-packages.sh ├── Day32 ├── README.md └── userdata.sh ├── Day33 ├── README.md └── nginx_Ecr.json ├── Day34 └── README.md ├── Day35 ├── README.md ├── eks-cmd.sh ├── eks_deploy.yaml └── votingapp.yaml ├── Day36 ├── README.md └── appspec.yaml ├── Day37 └── README.md ├── Day38 └── README.md └── README.md /Day00/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![Portfolio_Thumbnail](https://github.com/saikiranpi/mastering-aws/assets/109568252/df715451-e120-4ebc-b7e0-d3eadcbbce72) 3 | 4 | 5 | 6 | ######### DONT FORGET TO CHANGE THE DOMAIN NAME WITH YOUR NAME ######### 7 | 8 | - DEPLOY AN 
EC2 INSTANCE IN AWS USING UBUNTU 20 9 | 10 | - sudo apt update && sudo apt install -y nginx 11 | 12 | - sudo apt update && sudo apt install -y certbot python3-certbot-nginx 13 | 14 | - sudo mkdir -p /var/www/pinapathrunisaikiran.co.in/html 15 | 16 | - sudo chown -R $USER:$USER /var/www/pinapathrunisaikiran.co.in/html 17 | 18 | - sudo chmod -R 755 /var/www/pinapathrunisaikiran.co.in 19 | 20 | - nano /var/www/pinapathrunisaikiran.co.in/html/index.html 21 | 22 | 23 | 24 | Welcome to pinapathrunisaikiran.co.in! 25 | 26 | 27 |

<h1>Success! The pinapathrunisaikiran.co.in server block is working!</h1>

28 | 29 | 30 | 31 | 32 | - sudo nano /etc/nginx/sites-available/pinapathrunisaikiran.co.in 33 | 34 | server { 35 | listen 80; 36 | listen [::]:80; 37 | 38 | root /var/www/pinapathrunisaikiran.co.in/html; 39 | index index.html index.htm index.nginx-debian.html; 40 | 41 | server_name pinapathrunisaikiran.co.in www.pinapathrunisaikiran.co.in; 42 | 43 | location / { 44 | try_files $uri $uri/ =404; 45 | } 46 | } 47 | 48 | 49 | - sudo ln -s /etc/nginx/sites-available/pinapathrunisaikiran.co.in /etc/nginx/sites-enabled/ 50 | 51 | - sudo nginx -t 52 | 53 | - sudo systemctl restart nginx 54 | 55 | sudo certbot certonly \ 56 | --agree-tos \ 57 | --email pinapathruni.saikiran@gmail.com \ 58 | --manual \ 59 | --preferred-challenges=dns \ 60 | -d *.pinapathrunisaikiran.co.in \ 61 | --server https://acme-v02.api.letsencrypt.org/directory 62 | 63 | 64 | 65 | FOR HTTP TO HTTPS FORWARDING RUN THE BELOW COMMAND. 66 | 67 | certbot --nginx 68 | -------------------------------------------------------------------------------- /Day01/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![Black Blue Pink Modern Artificial Intelligence YouTube Thumbnail](https://github.com/saikiranpi/mastering-aws/assets/109568252/fddf742a-56a5-4ef0-b93a-466940b3afd4) 3 | 4 | 5 | ###### IP Explained ####### 6 | 7 | # Network Setup Guide 8 | 9 | Welcome to the Network Setup Guide! This guide will help you understand the basics of IP addresses, classes, public and private IPs, and how to configure them for different environments. 10 | 11 | ## Understanding IP Addresses 12 | 13 | In any network setup, devices communicate with each other using IP addresses. There are two types of IP addresses: 14 | 15 | 1. **IPv4:** Shorter addresses, like phone numbers for devices. 16 | 2. **IPv6:** Longer addresses, similar to phone numbers but with more digits. 17 | 18 | ## IP Address Ranges 19 | 20 | IPv4 addresses range from `0.0.0.0` to `255.255.255.255`. 
They are divided into five classes: A, B, C, D, and E. 21 | 22 | - **Class A:** `1.0.0.0` to `126.255.255.255` 23 | - **Class B:** `128.0.0.0` to `191.255.255.255` 24 | - **Class C:** `192.0.0.0` to `223.255.255.255` 25 | 26 | Classes D and E are reserved for specific purposes and not commonly used. 27 | 28 | ## Loopback Address 29 | 30 | You might wonder why `127` is skipped. `127.0.0.1` is reserved for loopback, meaning a device pings itself. 31 | 32 | ## Public and Private IPs 33 | 34 | As IP addresses are limited, there's a concept of public and private IPs. 35 | 36 | - **Public IPs:** Used for communication over external networks. 37 | - **Private IPs:** Used internally within closed infrastructures or office environments. 38 | 39 | ### Private IP Ranges 40 | 41 | Private IPs are reserved within specific ranges: 42 | 43 | - `10.0.0.0` to `10.255.255.255` (`10/8 prefix`) 44 | - `172.16.0.0` to `172.31.255.255` (`172.16/12 prefix`) 45 | - `192.168.0.0` to `192.168.255.255` (`192.168/16 prefix`) 46 | 47 | These addresses are for internal use only and should not be accessible from outside the network. 48 | 49 | ## Configuring IP Addresses 50 | 51 | ### Example 52 | 53 | To demonstrate, you can open CMD and type `ipconfig` to view your IPv4 private address. Then, by searching "my public IP" on Google, you can find your public IP address. 54 | 55 | ![Network Setup](network_setup.png) 56 | 57 | In the diagram above, you can see how public and private IPs are used in different environments. 58 | 59 | Now you have a basic understanding of IP addresses, classes, and how to use public and private IPs effectively. Happy networking! 
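To make the ranges concrete, here is a small shell sketch (illustrative only, with no input validation) that classifies an address by class and by public/private scope using the ranges listed above:

```bash
#!/bin/bash
# Classify an IPv4 address by class, and say whether it is in an RFC 1918 private range.
classify_ip() {
  local ip=$1
  IFS=. read -r o1 o2 rest <<< "$ip"   # split on dots; first two octets are enough here
  local class
  if   (( o1 >= 1   && o1 <= 126 )); then class=A
  elif (( o1 == 127 ));              then class=Loopback
  elif (( o1 >= 128 && o1 <= 191 )); then class=B
  elif (( o1 >= 192 && o1 <= 223 )); then class=C
  else class="D/E (reserved)"
  fi
  local scope=public
  if (( o1 == 10 )) || \
     (( o1 == 172 && o2 >= 16 && o2 <= 31 )) || \
     (( o1 == 192 && o2 == 168 )); then
    scope=private
  fi
  echo "$ip -> Class $class, $scope"
}

classify_ip 10.0.0.5      # Class A, private
classify_ip 172.20.1.1    # Class B, private
classify_ip 8.8.8.8       # Class A, public
```

Class D (`224`-`239`) covers multicast and Class E (`240`-`255`) is experimental, which is why the sketch lumps them together.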
60 | 61 | 62 | ![image](https://github.com/saikiranpi/mastering-aws/assets/109568252/8ffcbeb7-63d3-4df7-9a19-5d9ec9e31629) 63 | -------------------------------------------------------------------------------- /Day02/README.md: -------------------------------------------------------------------------------- 1 | ![a w s-v p c](https://github.com/saikiranpi/mastering-aws/assets/109568252/51f3bbf7-0706-450b-abf5-8c4bd697911b) 2 | 3 | 4 | ############## AWS VPC #################### 5 | 6 | # AWS VPC Setup Guide 7 | 8 | Welcome to the AWS VPC Setup Guide! This guide will walk you through the process of creating a Virtual Private Cloud (VPC) along with its components such as subnets, Internet Gateway, and Routing tables. 9 | 10 | ## What is VPC? 11 | 12 | A Virtual Private Cloud (VPC) is a virtual network environment within AWS that allows you to create a logically isolated section of the AWS Cloud where you can launch AWS resources in a virtual network that you define. 13 | 14 | ## Creating VPC 15 | 16 | To create a VPC: 17 | 1. Go to the AWS Management Console. 18 | 2. Navigate to the VPC dashboard. 19 | 3. Click on "Create VPC" and specify the VPC details such as CIDR block. 20 | 21 | ## Creating Subnets & Internet Gateway 22 | 23 | ### Subnets 24 | Subnets are subdivisions of a VPC's IP address range. They help organize and manage different parts of your network. 25 | 26 | Imagine a large plot of land that you want to develop into a residential area. Subnets are like individual buildings within this plot, each containing multiple flats. 27 | 28 | To create subnets: 29 | 1. Navigate to the VPC dashboard. 30 | 2. Click on "Subnets" and then "Create Subnet". 31 | 3. Specify the subnet details including CIDR block and Availability Zone (AZ). 32 | 33 | ### Internet Gateway (IGW) 34 | An Internet Gateway allows communication between instances in your VPC and the internet. 35 | 36 | To create an Internet Gateway: 37 | 1. Navigate to the VPC dashboard. 38 | 2. 
Click on "Internet Gateways" and then "Create Internet Gateway". 39 | 3. Attach the Internet Gateway to your VPC. 40 | 41 | ## Creating Routing Tables 42 | 43 | Routing tables define how traffic is directed within the VPC. They control the flow of traffic between subnets, internet gateways, and other network devices within the VPC. 44 | 45 | To create a routing table: 46 | 1. Navigate to the VPC dashboard. 47 | 2. Click on "Route Tables" and then "Create Route Table". 48 | 3. Define the routing rules, ensuring that traffic flows efficiently and securely to its intended destination. 49 | 50 | ## Example on VPC 51 | 52 | On a high level, each company's data and applications are kept separate and secure within their own VPC. Subnets help organize different stages of the software development lifecycle. 53 | 54 | ![VPC Setup Diagram](vpc_setup.png) 55 | 56 | Now you have configured VPC and Subnets successfully! 57 | 58 | ## Internet Gateway & Route Tables 59 | 60 | ### Internet Gateway (IGW) 61 | An Internet Gateway allows communication between instances in your VPC and the internet. 62 | 63 | ### Route Tables 64 | Route tables control the flow of traffic within the VPC. They ensure that traffic is directed efficiently and securely to its intended destination. 65 | 66 | To configure Internet Gateway and Route Tables: 67 | 1. Create an Internet Gateway and attach it to your VPC. 68 | 2. Create a Route Table and define routing rules, allowing traffic to flow between subnets and the internet. 69 | 70 | Remember to allow public subnets to access the internet by configuring the route table appropriately. 71 | 72 | ***Note:*** In routing tables, `0.0.0.0/0` means traffic not destined for the local network (e.g., `10.35.0.0/16`) should be routed to the internet gateway. 73 | 74 | Now you have a fully functional VPC with its components set up properly! Happy networking! 
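If you prefer scripting, the console steps above can be sketched with the AWS CLI. This is a hedged sketch, not a full recipe: the `10.35.0.0/16` CIDR echoes the routing-table note above, while the region, AZ, and configured credentials are assumptions.

```bash
# Create the VPC and capture its ID
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.35.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# Create a subnet inside the VPC in one Availability Zone
SUBNET_ID=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.35.1.0/24 --availability-zone us-east-1a \
  --query 'Subnet.SubnetId' --output text)

# Create an Internet Gateway and attach it to the VPC
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

# Create a route table, add a default route (0.0.0.0/0) to the IGW,
# and associate the subnet so it becomes a public subnet
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET_ID"
```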
75 | 76 | ![image](https://github.com/saikiranpi/mastering-aws/assets/109568252/97947faf-5b41-41da-9be0-78fd4e495250) 77 | -------------------------------------------------------------------------------- /Day03/README.md: -------------------------------------------------------------------------------- 1 | ![vpc PEERING](https://github.com/saikiranpi/mastering-aws/assets/109568252/982bf754-b276-4154-8e4a-9c4b1f1294f0) 2 | 3 | 4 | 5 | ####### VPC PEERING ############# 6 | 7 | # VPC Peering Guide 8 | 9 | Welcome to the VPC Peering Guide! This guide will walk you through the process of setting up VPC peering between different Virtual Private Clouds (VPCs) in AWS. 10 | 11 | ## Real-Time Example 12 | 13 | Imagine you work for an MNC with data located in both the US and Europe regions. Your company uses AWS to host various services and critical applications. You have VPCs in the US East (Ohio) and EU (Ireland) regions. 14 | 15 | ### Before VPC Peering 16 | 17 | Without VPC peering, communication between resources in separate VPCs and regions is not possible directly. This may lead to increased latency, security risks, additional costs, and potential data security compromises. 18 | 19 | ### After Peering 20 | 21 | Setting up VPC peering establishes a private connection between the VPCs, enabling seamless communication between resources, reducing latency, and enhancing security. 22 | 23 | ## Setting Up VPC Peering 24 | 25 | To set up VPC peering: 26 | 1. **Draw the Diagram**: Visualize the architecture to understand the network topology. 27 | 2. **Create VPCs**: Create three VPCs, two in us-east-1 (10.1.0.0/16, 172.16.0.0/16) and one in us-east-2 (192.168.0.0/16). 28 | 3. **Create EC2 Instances**: Launch EC2 instances in each VPC, ensuring an Nginx installation script is included in the user data. 29 | 4. **Configure Security Groups**: Allow necessary inbound and outbound traffic in the security groups for all VPCs and EC2 instances. 30 | 5.
**Set Up VPC Peering**: Establish VPC peering connections between the VPCs, ensuring no IP overlap and no transitive routing. 31 | 32 | ![VPC Peering Diagram](vpc_peering.png) 33 | 34 | Now, by setting up VPC peering, you've created a private connection between VPCs, enabling seamless communication between resources across different regions. 35 | 36 | Remember to follow the two important rules when creating VPC peering connections: CIDR ranges must not overlap, and peering is not transitive (traffic cannot hop through an intermediate VPC). 37 | 38 | Happy networking! 39 | 40 | ![image](https://github.com/saikiranpi/mastering-aws/assets/109568252/59795a41-5139-4fed-b43c-793040df240a) 41 | -------------------------------------------------------------------------------- /Day04/README.md: -------------------------------------------------------------------------------- 1 | ![Flow Logs](https://github.com/saikiranpi/mastering-aws/assets/109568252/15614265-ebd8-4769-a7f2-636e56188098) 2 | 3 | 4 | 5 | # VPC Flow Logs Guide 6 | 7 | Welcome to the VPC Flow Logs Guide! This guide will help you understand the importance of VPC flow logs and how to set them up in AWS. 8 | 9 | ## Understanding VPC Flow Logs 10 | 11 | After creating an EC2 instance, how does it connect to the internet? The network interface (ENI) is created, which connects to a subnet, and that subnet is connected to a VPC. There are three types of flows: 12 | 13 | 1. **ENI to Subnet:** Traffic flow between the network interface and the subnet. 14 | 2. **Subnet to VPC:** Traffic flow between the subnet and the VPC. 15 | 3. **ENI to VPC:** Aggregated traffic flow between the network interface and the VPC. 16 | 17 | ## Purpose of VPC Flow Logs 18 | 19 | VPC flow logs are essential for auditing and tracing network traffic. They provide insights into network activities and help detect and investigate security breaches. For example, if there's a breach, the audit team may ask for VPC flow logs to trace the traffic.
Additionally, compliance standards such as PCI DSS require organizations to maintain transaction history for security and governance purposes. 20 | 21 | ## Setting Up VPC Flow Logs 22 | 23 | To set up VPC flow logs: 24 | 1. **Create Instance:** Launch an EC2 instance. 25 | 2. **Create S3 Bucket:** Create an S3 bucket to store the flow logs centrally. 26 | 3. **Configure Flow Logs:** Go to the VPC dashboard and create flow logs for the desired VPCs. 27 | 28 | ## Generating Logs 29 | 30 | To generate logs, you can use the cloud shell and run a script to continuously hit a website and capture traffic: 31 | 32 | ```bash 33 | curl ec2-35-173-233-127.compute-1.amazonaws.com 34 | while true 35 | do 36 | curl ec2-35-173-233-127.compute-1.amazonaws.com | grep -i nginx 37 | sleep 1 38 | done 39 | ``` 40 | 41 | This script will generate continuous traffic hitting the specified website, allowing you to observe and capture flow logs. 42 | 43 | By setting up VPC flow logs, you ensure visibility into your network traffic, aiding in security monitoring and compliance requirements. 44 | 45 | Happy logging! 46 | -------------------------------------------------------------------------------- /Day05/README.md: -------------------------------------------------------------------------------- 1 | ![VPC endpoints](https://github.com/saikiranpi/mastering-aws/assets/109568252/9395305a-78c6-4431-97fd-1856f9139392) 2 | 3 | 4 | # VPC Endpoints Guide 5 | Welcome to the VPC Endpoints Guide! In this guide, we'll explore how VPC endpoints can be used to securely access AWS services without the need for public internet connectivity. 6 | 7 | ## Introduction 8 | Consider a scenario where you have a highly sensitive application deployed within an Amazon VPC (Virtual Private Cloud) in your AWS account. This application needs to securely access AWS services such as Amazon S3 and Amazon DynamoDB without exposing it to the public internet.
Additionally, you want to restrict access to these services to only resources within your VPC. 9 | 10 | ## VPC Endpoints Overview 11 | VPC endpoints enable servers within a VPC to communicate with other AWS services internally, without needing to route traffic through the public internet. There are two types of VPC endpoints: 12 | 13 | - **Gateway Endpoints:** Used for services like S3 and DynamoDB. 14 | - **Interface Endpoints:** Create a network interface in a corresponding subnet for other services. 15 | ## Gateway Endpoints 16 | To set up a gateway endpoint: 17 | 18 | 1. Remove the route to the NAT gateway and disable all public access. 19 | 2. Go to the VPC dashboard, select the S3 gateway endpoint, choose your VPC, and select both the public and private routing tables. Create the endpoint and wait for the route to appear. 20 | 3. Verify by checking the private routing table. 21 | ## Interface Endpoints 22 | To set up interface endpoints: 23 | 24 | 1. Create an IAM role for EC2 instances with the AmazonSSMManagedInstanceCore (SSM) permissions. 25 | 2. Attach the IAM role to both the public and private instances and reboot them. 26 | 3. Create endpoints for ec2messages, ssmmessages, and ssm, selecting the private instance's region, subnet, and security group. Reboot the private server and wait. 27 | 4. Test by checking internet connectivity (should not work) and downloading an object from S3 (should work). 28 | -------------------------------------------------------------------------------- /Day06/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![Security VS NACL](https://github.com/saikiranpi/mastering-aws/assets/109568252/2fe56c98-cc17-4589-821d-9b54e16ac47c) 3 | 4 | 5 | # Security Groups vs Network Access Control Lists (NACLs) 6 | 7 | Let's delve into the differences between security groups (SG) and network access control lists (NACLs) in AWS, using the analogy of firewalls and practical examples to illustrate their functionalities.
8 | 9 | ## Security Groups 10 | 11 | Security groups act as stateful firewalls, controlling traffic at the instance level based on rules. They regulate inbound and outbound traffic and are associated with individual instances. 12 | 13 | ### Practical Example 14 | 15 | Suppose you have an instance with default security group settings: 16 | - All inbound traffic is denied by default. 17 | - Outbound traffic is allowed by default. 18 | 19 | 1. **Remove Outbound Rules:** Delete the outbound rules and test internet connectivity. You'll notice that the instance can no longer connect to the internet. 20 | 2. **Restrict Outbound Access to Websites:** Add outbound rules for HTTP and HTTPS. Test again to ensure only website access is permitted. 21 | 3. **Allow ICMP Protocol for Ping:** As ping uses the ICMP protocol, add an outbound rule to allow ICMP traffic for ping to work. 22 | 23 | Remember, security groups start with a default deny stance and require explicit rules to allow traffic. 24 | 25 | ## Network Access Control Lists (NACLs) 26 | 27 | NACLs, on the other hand, function as stateless firewalls, controlling traffic at the subnet level based on rules. They evaluate inbound and outbound traffic separately and are associated with subnets. 28 | 29 | ### Real-Time Scenario 30 | 31 | Let's consider a scenario where you have a web server that needs to be accessible from the internet. Here's the setup: 32 | - Outbound Rules: Allow all traffic. 33 | - Inbound Rules: Allow TCP port 80 from 0.0.0.0/0 (anywhere) for web traffic and TCP port 22 from your IP address for SSH access. 34 | 35 | By configuring NACLs in this manner, you ensure that web traffic (HTTP) is allowed from anywhere while SSH access is restricted to your IP address only. 36 | 37 | ## Comparison: SG vs NACL 38 | 39 | - **Security Groups:** Work at the instance level. They are stateful and require explicit rules for inbound and outbound traffic. 40 | - **NACLs:** Operate at the subnet level.
They are stateless and evaluate inbound and outbound traffic separately, with the option to allow or deny traffic based on defined rules. 41 | 42 | Remember, in an interview scenario, you may be asked to differentiate between security groups and NACLs. Security groups regulate traffic at the instance level, while NACLs control traffic at the subnet level, offering both allow and deny options based on defined rules. 43 | 44 | ![SG vs NACL](sg_vs_nacl.png) 45 | 46 | This diagram visually represents the differences between security groups and NACLs in AWS, highlighting their respective scopes and functionalities. 47 | 48 | 49 | ![Sg vs NACL](https://github.com/saikiranpi/mastering-aws/assets/109568252/8a623a87-f5f0-4d5d-a84c-268f28bd690a) 50 | 51 | -------------------------------------------------------------------------------- /Day07/README.md: -------------------------------------------------------------------------------- 1 | ![Aws NAT](https://github.com/saikiranpi/mastering-aws/assets/109568252/4991d885-2fa5-4f0d-ab26-88c88dbd8e1d) 2 | # NAT Gateway Guide 3 | 4 | Welcome to the NAT Gateway Guide! In this guide, we'll explore what NAT gateways are, how they work, and how to set them up in AWS. 5 | 6 | ## What is a NAT Gateway? 7 | 8 | A NAT (Network Address Translation) gateway is a managed AWS service that enables instances within a private subnet to connect to the internet or other AWS services while preventing inbound traffic from reaching those instances. 9 | 10 | ## How does a NAT Gateway Work? 11 | 12 | When instances in a private subnet need to access the internet or AWS services, they send their traffic to the NAT gateway. The NAT gateway then forwards the traffic to the internet or the specified AWS service. When the response returns, the NAT gateway sends it back to the instances in the private subnet. 13 | 14 | ## Setting Up a NAT Gateway 15 | 16 | To set up a NAT gateway: 17 | 1. Navigate to the VPC dashboard in the AWS Management Console. 18 | 2. 
Select "NAT Gateways" and click on "Create NAT Gateway." 19 | 3. Choose the subnet where you want to deploy the NAT gateway and allocate an Elastic IP address for it. 20 | 4. Review and create the NAT gateway. 21 | 22 | ## Practical Example 23 | 24 | Let's say you have a VPC with public and private subnets. Your web servers are in the public subnet, and your application servers are in the private subnet. Your application servers need to access the internet to download software updates. 25 | 26 | By deploying a NAT gateway in the public subnet and routing traffic from the private subnet through it, your application servers can securely access the internet while remaining protected from inbound traffic initiated from the internet. 27 | 28 | ## Benefits of Using a NAT Gateway 29 | 30 | - **Security:** NAT gateways help maintain the security of your private instances by preventing direct inbound traffic. 31 | - **Simplicity:** NAT gateways are fully managed by AWS, reducing the operational overhead for managing NAT instances. 32 | - **Scalability:** NAT gateways automatically scale up to meet your traffic demands without manual intervention. 33 | 34 | ## Considerations 35 | 36 | - **Cost:** NAT gateways incur hourly charges as well as data processing charges for traffic routed through them. 37 | - **High Availability:** For high availability, deploy NAT gateways across multiple Availability Zones within your VPC. 38 | 39 | By leveraging NAT gateways, you can securely enable internet access for instances in private subnets, facilitating communication with external resources while maintaining a secure network environment. 40 | 41 | Happy networking with NAT gateways! 
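As an illustration, the setup steps above can also be scripted with the AWS CLI. The subnet and route-table IDs below are placeholders you would replace with your own:

```bash
# Allocate an Elastic IP for the NAT gateway
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query 'AllocationId' --output text)

# Create the NAT gateway in a PUBLIC subnet (subnet ID is a placeholder)
NAT_ID=$(aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 \
  --allocation-id "$ALLOC_ID" \
  --query 'NatGateway.NatGatewayId' --output text)

# Wait until the NAT gateway is ready
aws ec2 wait nat-gateway-available --nat-gateway-ids "$NAT_ID"

# Send the PRIVATE subnet's default route through the NAT gateway
# (route-table ID is a placeholder)
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"
```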
42 | 43 | 44 | ![NAT](https://github.com/saikiranpi/mastering-aws/assets/109568252/bf24800b-401a-473a-8e22-568992114ffc) 45 | 46 | -------------------------------------------------------------------------------- /Day08/Practicals-README.md: -------------------------------------------------------------------------------- 1 | # AWS Transit Gateway (TGW) Practical Demo 2 | 3 | ### What is Transit Gateway (TGW)? 4 | • Transit Gateway is an AWS service that centralises connectivity between multiple VPCs, on-prem networks, and even other AWS accounts 5 | • Reduces the need for VPC peering, simplifying network management 6 | • Also supports site-to-site VPN attachments for tunnel-based connectivity 7 | 8 | # Practical Demo Steps 9 | 10 | ## Create Two VPCs in the Mumbai Region 11 | - **VPC-1**: CIDR - `10.0.0.0/16` 12 | - **VPC-2**: CIDR - `192.168.0.0/16` 13 | 14 | ## Update Security Groups 15 | - Allow "All Traffic" in both VPCs. 16 | 17 | ## Create a Transit Gateway 18 | 19 | ## Attach Transit Gateway to Both VPCs 20 | - Use Transit Gateway attachments, selecting the corresponding Transit Gateway and VPCs. 21 | 22 | ## Launch EC2 Instances 23 | - Deploy one EC2 instance each in **VPC-1** and **VPC-2** for testing. 24 | 25 | ## Configure Route Tables 26 | - Create route tables for **VPC-1** and **VPC-2**, directing traffic to each other's CIDR blocks via the Transit Gateway created earlier. 27 | 28 | ## Test Connectivity 29 | - Use the `ping` command with the private IPs of the EC2 instances in both VPCs to verify connectivity. 30 | 31 | ## Validate Connectivity 32 | - Ensure the `ping` command successfully exchanges packets, confirming proper routing and Transit Gateway configuration. 33 | 34 | ## Create a VPC in the North Virginia Region 35 | - **VPC-3**: CIDR - `172.16.0.0/16` 36 | 37 | ## Update Security Groups 38 | - Allow "All Traffic" in **VPC-3**.
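For reference, the Mumbai-region steps above could be scripted roughly like this with the AWS CLI. All resource IDs are placeholders, and one subnet per attachment is shown for brevity:

```bash
# Create a Transit Gateway (description is illustrative)
TGW_ID=$(aws ec2 create-transit-gateway \
  --description "Mumbai TGW demo" \
  --query 'TransitGateway.TransitGatewayId' --output text)

# Attach each VPC (VPC and subnet IDs are placeholders)
aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id "$TGW_ID" \
  --vpc-id vpc-aaaa1111 --subnet-ids subnet-aaaa1111

aws ec2 create-transit-gateway-vpc-attachment \
  --transit-gateway-id "$TGW_ID" \
  --vpc-id vpc-bbbb2222 --subnet-ids subnet-bbbb2222

# In each VPC's route table, send the other VPC's CIDR to the TGW
aws ec2 create-route --route-table-id rtb-aaaa1111 \
  --destination-cidr-block 192.168.0.0/16 --transit-gateway-id "$TGW_ID"
aws ec2 create-route --route-table-id rtb-bbbb2222 \
  --destination-cidr-block 10.0.0.0/16 --transit-gateway-id "$TGW_ID"
```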
39 | 40 | ## Create a Transit Gateway 41 | 42 | ## Attach Transit Gateway to VPC-3 43 | - Use Transit Gateway attachments to link the Transit Gateway with **VPC-3**. 44 | 45 | ## Launch an EC2 Instance in VPC-3 46 | - Deploy an EC2 instance in **VPC-3** for testing. 47 | 48 | ## Configure Route Tables 49 | - Set up route tables in **VPC-3** for **VPC-1** and **VPC-2** CIDR blocks. 50 | 51 | ## Update Mumbai Region Route Tables 52 | - Configure route tables in **VPC-1** and **VPC-2** to route traffic to **VPC-3** via the Transit Gateway. 53 | 54 | ## Verify Routing Configuration 55 | - Ensure all routing configurations are complete. 56 | 57 | ## Create a Transit Gateway Peering Connection 58 | - Establish a Transit Gateway peering connection from the North Virginia region to the Mumbai region to enable inter-region traffic flow. 59 | 60 | ## Accept the Peering Connection 61 | - Approve the Transit Gateway peering request from the Mumbai region. 62 | 63 | ## Update Static Routes 64 | - Update the Transit Gateway route tables: 65 | - Add **VPC-1** and **VPC-2** CIDRs with their Transit Gateway attachments in the North Virginia region. 66 | - Similarly, add **VPC-3** CIDR and its Transit Gateway attachment in the Mumbai region. 67 | 68 | ## Test Final Connectivity 69 | - Use the `ping` command to validate connectivity between EC2 instances across all VPCs. 70 | - Confirm successful traffic flow. 71 | -------------------------------------------------------------------------------- /Day08/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | # Transit Gateway Configuration Guide 4 | 5 | Welcome to the Transit Gateway Configuration Guide! In this guide, we'll explore what Transit Gateways are, how they work, and how to set them up in AWS. 6 | 7 | ## What is a Transit Gateway? 
8 | 9 | A Transit Gateway is a network transit hub that enables you to connect multiple VPCs, VPNs, and on-premises networks to streamline network connectivity and management within your AWS infrastructure. 10 | 11 | ## How does a Transit Gateway Work? 12 | 13 | Transit Gateways act as a central hub for routing traffic between connected networks. They simplify network architecture by providing a single point of entry and exit for traffic, reducing the need for complex VPC peering configurations. 14 | 15 | ## Setting Up a Transit Gateway 16 | 17 | To set up a Transit Gateway: 18 | 19 | 1. **Navigate to the Transit Gateway Console:** 20 | - Access the AWS Management Console and navigate to the Transit Gateway service. 21 | 22 | 2. **Create a Transit Gateway:** 23 | - Click on "Create Transit Gateway" and provide details such as name, description, and Amazon side ASN (Autonomous System Number). 24 | 25 | 3. **Add Attachments:** 26 | - Attach VPCs, VPN connections, and Direct Connect gateways to the Transit Gateway to establish connectivity. 27 | 28 | 4. **Configure Route Tables:** 29 | - Define route tables to specify how traffic should be routed between attached networks. 30 | 31 | 5. **Propagation of Routes:** 32 | - Propagate routes from attached VPCs or VPN connections to ensure proper routing of traffic. 33 | 34 | 6. **Associate Subnets:** 35 | - Associate subnets from attached VPCs with the Transit Gateway to enable communication between resources. 36 | 37 | 7. **Testing and Validation:** 38 | - Test connectivity between resources in different networks to ensure proper routing through the Transit Gateway. 39 | 40 | ## Benefits of Using a Transit Gateway 41 | 42 | - **Simplified Network Architecture:** Transit Gateways simplify network connectivity by providing a centralized hub for routing traffic. 43 | - **Scalability:** They support the connection of thousands of VPCs and on-premises networks, allowing for scalable network expansion. 
44 | - **Cost-Effective:** Transit Gateways eliminate the need for multiple VPN connections and complex VPC peering arrangements, reducing operational costs. 45 | 46 | ## Considerations 47 | 48 | - **Data Transfer Costs:** Data transfer costs may apply for traffic traversing the Transit Gateway between regions or across AWS services. 49 | - **High Availability:** Deploy Transit Gateways across multiple Availability Zones for high availability and fault tolerance. 50 | 51 | By leveraging Transit Gateways, you can establish a scalable and efficient network architecture in AWS, facilitating seamless communication between VPCs, VPNs, and on-premises networks. 52 | 53 | Happy networking with Transit Gateways! 54 | -------------------------------------------------------------------------------- /Day09/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![EC2](https://github.com/saikiranpi/mastering-aws/assets/109568252/4a762274-7f29-4def-904f-8a589d3a9725) 3 | 4 | 5 | # Understanding EC2 Instance Types and Cost-Saving Techniques 6 | 7 | In AWS, EC2 instances offer various pricing options to suit different usage patterns and budget considerations. Let's explore the different types of EC2 instances and cost-saving techniques you can employ. 8 | 9 | ## EC2 Instance Types 10 | 11 | 1. **On-Demand Instances:** 12 | - Pay-as-you-go pricing model where you pay for compute capacity by the hour or second with no long-term commitments. 13 | - Ideal for short-term workloads, unpredictable usage, or testing environments. 14 | 15 | 2. **Reserved Instances (RIs):** 16 | - Commit to a specific instance type in a region for a one- or three-year term and receive significant discounts compared to On-Demand pricing. 17 | - Suitable for steady-state workloads with predictable usage patterns, providing substantial cost savings over time. 18 | 19 | 3. 
**Spot Instances:** 20 | - Bid for spare Amazon EC2 computing capacity at a significantly lower price compared to On-Demand instances. 21 | - Perfect for fault-tolerant and flexible workloads, such as batch processing, data analysis, and testing. 22 | 23 | 4. **Launch Templates:** 24 | - Define the configuration of an EC2 instance, including the AMI, instance type, network settings, and storage, and then use it to launch instances repeatedly. 25 | - Streamlines instance provisioning and ensures consistency across deployments. 26 | 27 | ## Cost-Saving Techniques 28 | 29 | 1. **Reserved Instances (RIs):** 30 | - Identify long-term workload requirements and purchase RIs to benefit from significant cost savings over On-Demand pricing. 31 | - Opt for All Upfront, Partial Upfront, or No Upfront payment options based on your budget and cash flow preferences. 32 | 33 | 2. **Spot Instances:** 34 | - Leverage Spot Instances for non-critical workloads or tasks with flexible deadlines to take advantage of cost savings. 35 | - Utilize Spot Fleet or Spot Blocks for more predictable and reliable capacity compared to Spot Instances. 36 | 37 | 3. **Scheduled RIs:** 38 | - Utilize Scheduled RIs to reserve capacity for specific time windows, allowing you to optimize costs for predictable workloads. 39 | 40 | 4. **Resource Optimization:** 41 | - Right-size your EC2 instances by selecting instance types that match your workload requirements to avoid over-provisioning. 42 | - Implement Auto Scaling to dynamically adjust capacity based on demand, optimizing resource utilization and reducing costs. 43 | 44 | 5. **Monitoring and Analysis:** 45 | - Monitor resource usage and performance metrics using AWS Cost Explorer, Trusted Advisor, and third-party tools to identify opportunities for optimization. 46 | - Analyze usage patterns and historical data to make informed decisions about purchasing Reserved Instances or utilizing Spot Instances. 
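To see why the pricing model matters, here is a rough back-of-the-envelope comparison for one instance running a full month. The hourly rates are invented example values for illustration, not real AWS prices:

```bash
#!/bin/bash
# Rough monthly cost comparison for one instance (about 730 hours per month).
# The rates below are ASSUMED example values, not real AWS prices.
hours=730
ondemand_rate=0.10   # $/hour, example
reserved_rate=0.06   # $/hour effective, example (~40% saving)
spot_rate=0.03       # $/hour, example (~70% saving)

for name in ondemand reserved spot; do
  rate_var="${name}_rate"
  awk -v n="$name" -v r="${!rate_var}" -v h="$hours" \
    'BEGIN { printf "%s: $%.2f/month\n", n, r * h }'
done
```

With these example rates, the gap compounds quickly across a fleet, which is why mixing Reserved capacity for steady workloads with Spot for interruptible ones is a common pattern.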
47 | 48 | By understanding the different EC2 instance types and implementing cost-saving techniques, you can effectively manage your AWS infrastructure costs while meeting your application requirements. Choose the pricing model and instance type that best aligns with your workload characteristics and budget constraints. 49 | -------------------------------------------------------------------------------- /Day09/script.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | I=1 3 | sgids='sg-0664fad55261dd1fa' 4 | for subnet in 'subnet-0d124b5eb12011584' 'subnet-0aacfa913cba54372' 'subnet-0e595f62126dd8670' 5 | do 6 | echo "Creating EC2 Instance in $subnet ..." 7 | aws ec2 run-instances --instance-type t2.nano --launch-template LaunchTemplateId=lt-0bc152f2d8ccfde3e --security-group-ids $sgids --subnet-id $subnet --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=AWSB28-Server-'${I}'}]' >> /dev/null 2>&1 8 | echo "Created EC2 Machine with the name AWSB28-Server-${I}" 9 | I=$((I+1)) 10 | done 11 | -------------------------------------------------------------------------------- /Day10/README.md: -------------------------------------------------------------------------------- 1 | ![Packer](https://github.com/saikiranpi/mastering-aws/assets/109568252/c26c1269-82ec-49e1-91c6-91dcfd1fe778) 2 | 3 | 4 | PACKER ALL LAB FILES HERE : https://github.com/saikiranpi/packer.git 5 | 6 | 7 | 8 | AMI With PACKER 9 | 10 | 1- Automation of AMI Creation Using Packer 11 | 2- Secure AMI creation Using Inspector. 12 | 1 – Secure AMI creation Using Inspector: 13 | - Create an EC2 instance and install nginx on it from user data. Make sure you are giving the tags as app = Nginx. 14 | 15 | - Download the Inspector agent, give it permissions, and install it.
16 | 17 | - Meanwhile, come back to the console and configure Inspector (scanning will take about 15 minutes). 18 | 19 | - Assessment targets > Create > Name (optional) > 20 | 21 | - Use tags App: nginx – Save 22 | 23 | - Open Amazon Inspector > Go to Classic Inspector. 24 | 25 | - Assessment templates > Create > Name > Target (select the target name you gave earlier) 26 | 27 | - Rules packages > network reachability – security best practices – common vulnerabilities and exposures – CIS operating system benchmarks (CIS = Center for Internet Security). 28 | 29 | - It will take some time; leave it to run. 30 | 31 | 32 | 2 – Automation of AMI Creation Using Packer: 33 | Creation of an AMI: 34 | - Create an EC2 instance 35 | - Log in and install/configure all applications 36 | - Shut down the machine 37 | - Create an AMI image from the machine 38 | - Check the AMI 39 | - Delete the machine 40 | Now we can automate all these steps; we can't do all of this manually every day. To achieve this we will use HashiCorp Packer and Terraform. 41 | - Download Packer from packer.io > Binary file > Save it on the C drive > Copy the path. 42 | - And follow the steps. 43 | 44 | 45 | Go to Control Panel -> System -> System settings -> Environment Variables. 46 | Scroll down in system variables until you find PATH. 47 | Click edit and change accordingly. 48 | BE SURE to include a semicolon at the end of the previous entry, as that is the delimiter, i.e. c:\path;c:\path2 49 | Launch a new console for the settings to take effect. 50 | 51 | ------- 52 | - Now I need two files: create one as packer.json and copy the code from the terraform single-instance GitHub repo, then make the necessary changes. 53 | 54 | - Now you need an access key and secret key. Go to IAM > Users > name > Attach existing policy > EC2FullAccess > Create security credentials and paste them into a variables file. 55 | 56 | - AMI – VPC – subnet > Copy the AMI ID for the region. 57 | 58 | - Now run Packer with the following commands.
59 | 60 | - packer.exe validate --var-file packer-vars.json packer.json 61 | 62 | - packer.exe inspect --var-file packer-vars.json packer.json 63 | 64 | 65 | -------------------------------------------------------------------------------- /Day11/Practicals-README.md: -------------------------------------------------------------------------------- 1 | # Mounting and Attaching EBS Volume to an EC2 Instance 2 | 3 | ## Step 1: Create an EC2 Instance 4 | 5 | Launch an EC2 instance with the following specs: 6 | 7 | - **Instance Type:** t2.micro 8 | - **Root Volume:** 8 GB 9 | 10 | ## Step 2: Create an EBS Volume 11 | 12 | Navigate to the EBS Dashboard and create a volume: 13 | 14 | - **Type:** GP2 15 | - **Size:** 4 GB 16 | - Ensure the volume is in the same Availability Zone (AZ) as your EC2 instance. 17 | 18 | ## Step 3: Attach the Volume 19 | 20 | Attach the newly created EBS volume to your EC2 instance. 21 | 22 | ## Step 4: Login and Verify the Block Device 23 | 24 | 1. SSH into your EC2 instance. 25 | 2. List the block devices using: 26 | ``` 27 | lsblk 28 | ``` 29 | 3. Confirm the new volume is listed and matches the size you created. 30 | 31 | ## Step 5: Format the Disk and Create a Partition 32 | 33 | 1. Start the disk partitioning tool on the new device (the device name, e.g. /dev/xvdf, comes from `lsblk`): 34 | ``` 35 | sudo fdisk <device> 36 | ``` 37 | 2. Follow these steps in `fdisk`: 38 | - Type `n` to create a new partition. 39 | - Type `p` to make it a primary partition. 40 | - Press Enter three times to accept defaults. 41 | - Type `w` to write changes and exit. 42 | 43 | ## Step 6: Validate the Partition 44 | 45 | Run the `lsblk` command again to ensure the partition is listed under the disk. 46 | 47 | ## Step 7: Create a Filesystem 48 | 49 | Format the new partition (e.g. /dev/xvdf1) with the `ext4` filesystem: 50 | ``` 51 | sudo mkfs -t ext4 <partition> 52 | ``` 53 | 54 | ## Step 8: Prerequisite to Mount the Disk 55 | 56 | 1. Create a directory to mount the volume (e.g. /data): 57 | ``` 58 | sudo mkdir /<mount-dir> 59 | ``` 60 | 2.
Add some test data to validate persistence later. 61 | 62 | ## Step 9: Mount the Partition 63 | 64 | Mount the partition to the folder: 65 | ``` 66 | sudo mount <partition> /<mount-dir> 67 | ``` 68 | 69 | ## Step 10: Persist Mount on Reboot 70 | 71 | 1. Edit the `fstab` file: 72 | ``` 73 | sudo vi /etc/fstab 74 | ``` 75 | 2. Add the line below. Note: give the absolute (full) path of the folder 76 | ``` 77 | <partition> /<mount-dir> ext4 defaults 0 0 78 | ``` 79 | 3. Save and verify using: 80 | ``` 81 | cat /etc/fstab 82 | ``` 83 | 84 | ## Step 11: Test Persistence 85 | 86 | 1. Stop and start the EC2 instance. 87 | 2. Verify the mount and test data: 88 | ``` 89 | ls /<mount-dir> 90 | ``` 91 | 92 | -------------------------------------------------------------------------------- /Day11/README.md: -------------------------------------------------------------------------------- 1 | ![11 EBS](https://github.com/saikiranpi/mastering-aws/assets/109568252/dc2a2152-8c80-45f3-be97-444f48af7e81) 2 | 3 | 4 | 5 | Understanding Storage Types 6 | In AWS, storage options vary based on the type of data you're working with and your performance and durability requirements. Let's delve into the different types of storage available, focusing on EBS, which provides block-level storage volumes for use with EC2 instances. 7 | 8 | Storage Types 9 | 1. Block Storage 10 | Elastic Block Storage (EBS) Volumes: 11 | Provides persistent block storage volumes that can be attached to EC2 instances. 12 | Allows you to create, attach, and detach volumes to EC2 instances as needed. 13 | Supports different volume types optimized for various workloads, including General Purpose SSD (gp2/gp3), Provisioned IOPS SSD (io1/io2), and Throughput Optimized HDD (st1). 14 | Instance Storage: 15 | Directly attached storage to EC2 instances. 16 | Provides high I/O performance but is non-persistent. 17 | Data stored in instance storage is lost if the instance is stopped or terminated. 18 | Typically available in fixed sizes and types and limited to specific instance types.
19 | 2. File Storage 20 | AWS Elastic File System (EFS): 21 | Fully managed file storage service that supports NFSv4 protocol. 22 | Offers scalable and highly available file storage for Linux-based workloads, allowing multiple EC2 instances to access the same file system concurrently. 23 | AWS FSx: 24 | Provides fully managed file systems optimized for Windows-based workloads, including Windows File Server and Lustre. 25 | 3. Object Storage 26 | Amazon Simple Storage Service (S3): 27 | Object storage service designed to store and retrieve any amount of data from anywhere on the web. 28 | Ideal for storing unstructured data, such as images, videos, documents, and backups. 29 | Offers high durability, availability, and scalability at a low cost. 30 | Amazon Glacier: 31 | Low-cost storage service designed for long-term data archiving and backup. 32 | Offers multiple retrieval options with varying latency, allowing you to optimize costs based on your access requirements. 33 | Advantages and Use Cases of EBS Volumes 34 | Permanent Storage: 35 | EBS volumes provide persistence, ensuring that data remains intact even if the associated EC2 instance is stopped or terminated. 36 | Flexible Volume Types: 37 | Choose from a variety of EBS volume types optimized for different performance and cost requirements, ranging from high-performance SSDs to cost-effective HDDs. 38 | Scalability and Attachment Flexibility: 39 | Easily scale EBS volumes up to 16TB in size and attach/detach them to different EC2 instances as needed. 40 | Practical Implementation and Best Practices 41 | Volume Provisioning and Mounting: 42 | Provision EBS volumes and mount them to EC2 instances using standard Linux commands like lsblk, fdisk, mkfs, and mount. 43 | Update the /etc/fstab file to automatically mount EBS volumes at boot time. 
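The /etc/fstab entry mentioned above has a fixed six-field layout; a small sketch that builds such a line (the device and mount-point names are assumed examples, not values from this repo):

```python
def fstab_entry(device, mount_point, fs_type="ext4",
                options="defaults", dump=0, fsck_order=0):
    """Build one /etc/fstab line: device, mount point, filesystem type,
    mount options, dump flag, and fsck pass order."""
    if not mount_point.startswith("/"):
        raise ValueError("mount point must be an absolute path")
    return f"{device} {mount_point} {fs_type} {options} {dump} {fsck_order}"

# Assumed example names; substitute your actual partition and folder:
print(fstab_entry("/dev/xvdf1", "/data"))
```

Appending the generated line to /etc/fstab (as root) is what makes the mount survive reboots.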
44 | Performance Optimization: 45 | Utilize different EBS volume types based on your application's performance requirements, ensuring optimal I/O performance and cost-effectiveness. 46 | By understanding the various storage options available in AWS, including EBS volumes, you can architect scalable and reliable storage solutions tailored to your specific workload requirements and budget constraints. 47 | -------------------------------------------------------------------------------- /Day12/README.md: -------------------------------------------------------------------------------- 1 | ![NLB](https://github.com/saikiranpi/mastering-aws/assets/109568252/5e518bf7-931c-4f08-9684-22d63f643d80) 2 | 3 | 4 | Commands for Load balancing checking !!! 5 | 6 | - while true; do curl -sL https://www.cloudvishwakarma.in/ | grep -i 'ip-10-0'; sleep 1; done 7 | 8 | - while true; do curl -sL https://www.cloudvishwakarma.in/ | grep -i 'ip-10-0'; sleep 1; done | tee -a awsnlb.log 9 | 10 | - cat awsnlb.log | grep -i ip-10-0-156-5 | wc -l 11 | 12 | 13 | ## Network Load Balancer (NLB) 14 | 15 | In AWS, load balancers play a crucial role in distributing incoming traffic across multiple targets to ensure high availability, fault tolerance, and scalability of applications. Let's explore the Network Load Balancer (NLB), one of the types of load balancers offered by AWS. 16 | 17 | ### Overview: 18 | NLB operates at Layer 4 (Transport Layer) of the OSI model, making it ideal for handling TCP and UDP traffic. It provides ultra-high performance and low-latency load balancing, making it suitable for use cases that require extreme performance and scalability. 19 | 20 | ### Key Features: 21 | - **Layer 4 Load Balancing:** 22 | - NLB operates at the transport layer, allowing it to efficiently distribute traffic based on IP protocol data (TCP or UDP). 
23 | 24 | - **High Performance:** 25 | - NLB offers high throughput and low latency, making it suitable for latency-sensitive and high-traffic applications. 26 | 27 | - **Cross-Zone Load Balancing:** 28 | - NLB supports cross-zone load balancing, enabling it to distribute traffic evenly across instances in different availability zones within the same region. 29 | 30 | - **Target Groups:** 31 | - NLB forwards incoming traffic to a target group, which can include EC2 instances, IP addresses, or Lambda functions. 32 | 33 | ### Practical Implementation: 34 | 1. **Setting Up NLB:** 35 | - Create a Network Load Balancer in the AWS Management Console, specifying the listeners, target group, and other configuration details. 36 | 37 | 2. **Testing Load Balancing:** 38 | - Use tools like `curl` to send requests to the NLB's DNS name and observe the distribution of traffic across the registered targets. 39 | 40 | 3. **Monitoring and Optimization:** 41 | - Utilize AWS CloudWatch metrics to monitor the performance of the NLB and optimize its configuration based on traffic patterns and application requirements. 42 | 43 | ### Advantages and Use Cases: 44 | - **Highly Scalable Applications:** 45 | - NLB is well-suited for applications that require high scalability and handle a large volume of traffic, such as gaming platforms, media streaming services, and IoT applications. 46 | 47 | - **Latency-Sensitive Workloads:** 48 | - Applications with stringent latency requirements, such as financial trading platforms and real-time communication services, benefit from NLB's low-latency load balancing capabilities. 49 | 50 | - **UDP-Based Applications:** 51 | - NLB is ideal for UDP-based applications like DNS servers, VoIP services, and online gaming platforms that require efficient load balancing of UDP traffic. 
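To check how evenly the NLB spreads traffic, the `awsnlb.log` produced by the curl loop earlier can be tallied per backend instead of grepping one host at a time; the hostname pattern is assumed to match the `ip-10-0-*` names that grep targets in those commands:

```python
import re
from collections import Counter

def backend_hit_counts(log_lines):
    """Count responses per backend private hostname (ip-10-0-x-y)."""
    pattern = re.compile(r"ip-10-0-\d+-\d+")
    hits = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            hits[match.group(0)] += 1
    return hits

sample = ["ip-10-0-156-5", "ip-10-0-42-17", "ip-10-0-156-5"]  # assumed log lines
print(backend_hit_counts(sample))
```

Roughly equal counts per hostname indicate the load balancer is distributing traffic evenly across targets.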
52 | 53 | By leveraging Network Load Balancer, AWS customers can ensure the reliability, scalability, and performance of their applications, especially in scenarios where low latency and high throughput are paramount. 54 | -------------------------------------------------------------------------------- /Day12/userdata.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | sudo apt update 3 | sudo apt install nginx -y 4 | sudo systemctl restart nginx 5 | sudo systemctl enable nginx 6 | echo "

$(cat /etc/hostname)" >> /var/www/html/index.nginx-debian.html 7 | echo "US-EAST-1A-SERVERS

" >> /var/www/html/index.nginx-debian.html 8 | -------------------------------------------------------------------------------- /Day13/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | Creating an Application Load Balancer (ALB) and configuring it with multiple target groups can significantly enhance your application's scalability and reliability. Here's a step-by-step guide along with a diagram: 5 | 6 | **Steps:** 7 | 8 | 1. **Instance Setup:** 9 | - Launch four EC2 instances with private subnets, placing two instances in subnet 1A and the remaining two in subnets 1B and 1C. 10 | - Ensure proper userdata configuration for each instance. 11 | 12 | 2. **Target Group Creation:** 13 | - Create three target groups: 14 | - Home Page: HTTP 80, health check `/homepage/`, add all instances. 15 | - Movies: HTTP 80, health check `/movies/`, add movies server only. 16 | - Shows: HTTP 80, health check `/shows/`, add shows server only. 17 | 18 | 3. **Load Balancer Configuration:** 19 | - Create an ALB (Internet-facing) in your VPC, selecting the public subnets and appropriate security group. 20 | - Configure two listeners: HTTP (80) and HTTPS (443). 21 | - Select the SSL certificate for your domain (e.g., `www.cloudvishwakarma.in`). 22 | 23 | 4. **Route 53 Setup:** 24 | - Create a Route 53 record pointing to the ALB. 25 | 26 | 5. **HTTP to HTTPS Redirection:** 27 | - Configure HTTP to HTTPS redirection: 28 | - Create a rule to redirect HTTP (80) traffic to HTTPS (443). 29 | - Ensure proper HTTPS redirection for all incoming traffic. 30 | 31 | 6. **Path-Based Routing:** 32 | - Implement path-based routing for different content: 33 | - Create rules to direct traffic based on paths (e.g., `/movies/*`, `/shows/*`). 34 | - Ensure each rule forwards traffic to the corresponding target group. 35 | 36 | 7. 
**Error Handling:** 37 | - Set up error handling rules: 38 | - Define rules for specific paths (e.g., `/google/*`) and provide appropriate responses (e.g., 503 error with a message). 39 | 40 | 8. **Virtual Hosting:** 41 | - Optionally, configure virtual hosting for multiple hosts: 42 | - Utilize host headers to direct traffic to different backend services based on the requested host. 43 | 44 | **Diagram:** 45 | 46 | ``` 47 | +---------+ 48 | | ALB | 49 | | | 50 | +----+----+ 51 | | 52 | +------------+------------+ 53 | | | 54 | +-----v-----+ +-----v-----+ 55 | | Target | | Target | 56 | | Group: | | Group: | 57 | | Homepage | | Movies | 58 | | | | | 59 | +-----+-----+ +-----+-----+ 60 | | | 61 | | | 62 | +-----v-----+ +-----v-----+ 63 | | Target | | Target | 64 | | Group: | | Group: | 65 | | Shows | | ... | 66 | | | | | 67 | +-----------+ +-----------+ 68 | ``` 69 | 70 | By following these steps and configuring your ALB accordingly, you can efficiently distribute incoming traffic across your EC2 instances and provide a seamless experience for your users. 
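The path-based routing and error-handling rules above can be modeled as a simple decision function; the target-group names and exact rule order here are assumptions for illustration, not the ALB's actual evaluation engine:

```python
def route_request(path):
    """Toy model of the listener rules: fixed-response error rule first,
    then path-based forwards, then the default homepage rule."""
    if path.startswith("/google/"):
        return ("fixed-response", 503)       # error-handling rule
    if path.startswith("/movies/"):
        return ("forward", "movies-tg")
    if path.startswith("/shows/"):
        return ("forward", "shows-tg")
    return ("forward", "homepage-tg")        # default rule

print(route_request("/movies/trailer"))
```

On a real ALB, rules are evaluated by ascending priority, so place the more specific path patterns before the catch-all default action.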
71 | -------------------------------------------------------------------------------- /Day14/README.md: -------------------------------------------------------------------------------- 1 | # autoscaling_testing 2 | autoscaling_testing 3 | -------------------------------------------------------------------------------- /Day14/app.conf: -------------------------------------------------------------------------------- 1 | #Config File 2 | -------------------------------------------------------------------------------- /Day14/appconf.j2: -------------------------------------------------------------------------------- 1 | Todays date is: {{ todays_date }} 2 | Server hostname is: {{ host_name }} 3 | Server FQDN is: {{ fqdn_name }} 4 | Server IP Address is: {{ ip_address }} 5 | Server OS Version is: {{ os_version }} 6 | 7 | -------------------------------------------------------------------------------- /Day14/index.j2: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Color Game 9 | 10 | 11 |
12 |
13 |

{{ custom_heading }} 14 | We Are Testing AWS AutoScaling 15 | Todays date is: {{ todays_date }} 16 | Server hostname is: {{ host_name }} 17 | Server FQDN is: {{ fqdn_name }} 18 | Server IP Address is: {{ ip_address }}

19 |
20 |
21 |
22 |
23 |
Lets Play
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
33 | 34 | 35 | 36 | 37 | -------------------------------------------------------------------------------- /Day14/index.nginx-debian.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Color Game 9 | 10 | 11 |

{{ custom_heading }} 12 | Todays date is: {{ todays_date }} 13 | Server hostname is: {{ host_name }} 14 | Server FQDN is: {{ fqdn_name }} 15 | Server IP Address is: {{ ip_address }}

16 | 17 | 18 |
19 |
20 |
Lets Play
21 |
22 |
23 |
24 |
25 |
26 |
27 |
28 |
29 |
30 | 31 | 32 | 33 | 34 | -------------------------------------------------------------------------------- /Day14/scorekeeper.js: -------------------------------------------------------------------------------- 1 | // var colors = [ 2 | // "rgb(0, 0, 255)", 3 | // "rgb(0, 255, 0)", 4 | // "rgb(255, 0, 0)", 5 | // "rgb(255, 0, 255)", 6 | // "rgb(0, 255, 255)", 7 | // "rgb(255, 255, 0)", 8 | //] 9 | 10 | var colors = generateRandomNumber(6); 11 | 12 | var squares = document.querySelectorAll(".square"); 13 | var rgb = document.querySelector("#rgb"); 14 | var try1 = document.querySelector("#try1"); 15 | pickedColor = pickedColor(); 16 | for(var i=0; i Directory Service. 43 | Choose AWS Managed Microsoft AD. 44 | Configure directory details like domain name and password. 45 | Select VPC and subnets, then create the directory. 46 | Role Switching Interview Questions: 47 | Discuss efficient role switching strategies within an AWS organization using Active Directory. 48 | Compare and contrast managing multiple AWS accounts with AWS SSO and Active Directory. 49 | Explain the two ways to deploy Active Directory: AWS Managed Active Directory and self-hosted using on-prem or EC2 instances. 
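Role switching between accounts boils down to an `sts:AssumeRole` call. A minimal sketch that only builds the request parameters (the session name and duration are assumptions; actually calling `sts.assume_role(**params)` with boto3 requires valid credentials and the trust policy shown in this repo):

```python
def assume_role_params(role_arn, session_name="role-switch-demo",
                       duration_seconds=3600):
    """Parameters for an STS AssumeRole call into another account's role."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "DurationSeconds": duration_seconds,
    }

# Role ARN taken from the Day19 assume-role policy in this repo:
params = assume_role_params("arn:aws:iam::891377035410:role/staging-role")
print(params)
```

The temporary credentials returned by AssumeRole expire after `DurationSeconds`, which is what makes cross-account role switching safer than sharing long-lived keys.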
50 | -------------------------------------------------------------------------------- /Day19/assume-role-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": { 4 | "Effect": "Allow", 5 | "Action": "sts:AssumeRole", 6 | "Resource": [ 7 | "arn:aws:iam::891377035410:role/staging-role", 8 | "arn:aws:iam::211125710812:role/QA-Role" 9 | ] 10 | } 11 | } 12 | -------------------------------------------------------------------------------- /Day20/README.md: -------------------------------------------------------------------------------- 1 | 20 2 | 3 | 4 | 5 | IAM-SSO-SelfAD-ManageAD-Cognito 6 | Overview 7 | This repository focuses on integrating AWS managed Active Directory (AD) with AWS Single Sign-On (SSO) and managing permissions. Additionally, it explores the possibility of integrating Self-Hosted AD with SSO and configuring private Permissions. 8 | 9 | Prerequisites 10 | Before proceeding with the steps outlined below, ensure the following prerequisites are met: 11 | 12 | Active Directory (AD) has already been installed in a previous session. 13 | Three users have been created in AD using "dsa.msc", and the administrator password has been changed. 14 | A new Windows instance needs to be deployed without installing AD, but with necessary tools. 15 | Steps to Follow 16 | Deploy Windows Instance: 17 | Deploy a new Windows instance without installing AD but with required tools. 18 | Install Microsoft AD: 19 | Follow the steps to install Microsoft AD with any domain name (e.g., saikiran.com). 20 | Connect the Directory Service to AD by providing necessary details. 21 | Explain Microsoft AD and AD Connector: 22 | Clarify the concept of Microsoft AD managed by AWS. 23 | Define AD Connector for connecting existing AD with AWS services. 24 | Configure DNS Settings: 25 | Ensure DNS settings on the Windows instance are pointing to the managed AD service. 
26 | Install Remote Server Admin Tools: 27 | Install Remote Server Admin Tools and Role Admin Tools, specifically Active Directory Domain Services (ADDS). 28 | Join Windows Instance to Domain: 29 | Change system properties to join the Windows instance to the MS AD domain. 30 | Restart the instance once the configuration is complete. 31 | Integrate MS AD with AWS SSO: 32 | Navigate to IAM Identity Center > SSO > Settings to integrate MS AD with AWS SSO. 33 | Customize the URL portal for user access. 34 | User Management: 35 | Log in to the Windows instance using administrator@saikiran.com. 36 | Show the difference between sysdm.cpl and dsa.msc. 37 | Create users in the new instance and add them to groups. 38 | Assign Permissions: 39 | Navigate to IAM Identity Center > Permission Sets to assign IAM policies (e.g., S3, EC2 access) to users. 40 | Configure AWS SSO Identity Source: 41 | Navigate to Identity and Compliance > AWS SSO > Settings to configure the identity source as AD. 42 | This integration automates user provisioning and login to AWS services. 
43 | Conclusion 44 | -------------------------------------------------------------------------------- /Day21/MySQL dump file: -------------------------------------------------------------------------------- 1 | CREATE DATABASE IF NOT EXISTS `myflixdb` /*!40100 DEFAULT CHARACTER SET latin1 */; 2 | USE `myflixdb`; 3 | -- MySQL dump 10.13 Distrib 5.5.16, for Win32 (x86) 4 | -- 5 | -- Host: localhost Database: myflixdb 6 | -- ------------------------------------------------------ 7 | -- Server version 5.5.25a 8 | 9 | /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; 10 | /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; 11 | /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; 12 | /*!40101 SET NAMES utf8 */; 13 | /*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */; 14 | /*!40103 SET TIME_ZONE='+00:00' */; 15 | /*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */; 16 | /*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */; 17 | /*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */; 18 | /*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */; 19 | 20 | -- 21 | -- Table structure for table `movies` 22 | -- 23 | 24 | DROP TABLE IF EXISTS `movies`; 25 | /*!40101 SET @saved_cs_client = @@character_set_client */; 26 | /*!40101 SET character_set_client = utf8 */; 27 | CREATE TABLE `movies` ( 28 | `movie_id` int(11) NOT NULL AUTO_INCREMENT, 29 | `title` varchar(300) DEFAULT NULL, 30 | `director` varchar(150) DEFAULT NULL, 31 | `year_released` year(4) DEFAULT NULL, 32 | `category_id` int(11) DEFAULT NULL, 33 | PRIMARY KEY (`movie_id`), 34 | KEY `fk_Movies_Categories1` (`category_id`), 35 | KEY `title_index` (`title`), 36 | KEY `qw` (`title`), 37 | CONSTRAINT `fk_Movies_Categories1` FOREIGN KEY (`category_id`) REFERENCES `categories` (`category_id`) ON DELETE NO ACTION ON UPDATE NO ACTION 38 | ) ENGINE=InnoDB AUTO_INCREMENT=17 DEFAULT CHARSET=latin1; 39 | /*!40101 SET 
character_set_client = @saved_cs_client */; 40 | 41 | -- 42 | -- Dumping data for table `movies` 43 | -- 44 | 45 | LOCK TABLES `movies` WRITE; 46 | /*!40000 ALTER TABLE `movies` DISABLE KEYS */; 47 | INSERT INTO `movies` VALUES (1,'Pirates of the Caribean 4',' Rob Marshall',2011,1),(2,'Forgetting Sarah Marshal','Nicholas Stoller',2008,2),(3,'X-Men',NULL,2008,NULL),(4,'Code Name Black','Edgar Jimz',2010,NULL),(5,'Daddy\'s Little Girls',NULL,2007,8),(6,'Angels and Demons',NULL,2007,6),(7,'Davinci Code',NULL,2007,6),(9,'Honey mooners','John Schultz',2005,8),(16,'67% Guilty',NULL,2012,NULL); 48 | /*!40000 ALTER TABLE `movies` ENABLE KEYS */; 49 | UNLOCK TABLES; 50 | 51 | -- 52 | -- Table structure for table `payments` 53 | -- 54 | 55 | DROP TABLE IF EXISTS `payments`; 56 | /*!40101 SET @saved_cs_client = @@character_set_client */; 57 | /*!40101 SET character_set_client = utf8 */; 58 | CREATE TABLE `payments` ( 59 | `payment_id` int(11) NOT NULL AUTO_INCREMENT, 60 | `membership_number` int(11) DEFAULT NULL, 61 | `payment_date` date DEFAULT NULL, 62 | `description` varchar(75) DEFAULT NULL, 63 | `amount_paid` float DEFAULT NULL, 64 | `external_reference_number` int(11) DEFAULT NULL, 65 | PRIMARY KEY (`payment_id`), 66 | KEY `fk_Payments_Members1` (`membership_number`), 67 | CONSTRAINT `fk_Payments_Members1` FOREIGN KEY (`membership_number`) REFERENCES `members` (`membership_number`) ON DELETE NO ACTION ON UPDATE NO ACTION 68 | ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=latin1; 69 | /*!40101 SET character_set_client = @saved_cs_client */; 70 | 71 | -- 72 | -- Dumping data for table `payments` 73 | -- 74 | 75 | LOCK TABLES `payments` WRITE; 76 | /*!40000 ALTER TABLE `payments` DISABLE KEYS */; 77 | INSERT INTO `payments` VALUES (1,1,'2012-07-23','Movie rental payment',2500,11),(2,1,'2012-07-25','Movie rental payment',2000,12),(3,3,'2012-07-30','Movie rental payment',6000,NULL); 78 | /*!40000 ALTER TABLE `payments` ENABLE KEYS */; 79 | UNLOCK TABLES; 80 | 81 | -- 82 | 
-- Table structure for table `members` 83 | -- 84 | 85 | DROP TABLE IF EXISTS `members`; 86 | /*!40101 SET @saved_cs_client = @@character_set_client */; 87 | /*!40101 SET character_set_client = utf8 */; 88 | CREATE TABLE `members` ( 89 | `membership_number` int(11) NOT NULL AUTO_INCREMENT, 90 | `full_names` varchar(350) NOT NULL, 91 | `gender` varchar(6) DEFAULT NULL, 92 | `date_of_birth` date DEFAULT NULL, 93 | `physical_address` varchar(255) DEFAULT NULL, 94 | `postal_address` varchar(255) DEFAULT NULL, 95 | `contact_number` varchar(75) DEFAULT NULL, 96 | `email` varchar(255) DEFAULT NULL, 97 | PRIMARY KEY (`membership_number`) 98 | ) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=latin1; 99 | /*!40101 SET character_set_client = @saved_cs_client */; 100 | 101 | -- 102 | -- Dumping data for table `members` 103 | -- 104 | 105 | LOCK TABLES `members` WRITE; 106 | /*!40000 ALTER TABLE `members` DISABLE KEYS */; 107 | INSERT INTO `members` VALUES (1,'Janet Jones','Female','1980-07-21','First Street Plot No 4','Private Bag','0759 253 542','janetjones@yagoo.cm'),(2,'Janet Smith Jones','Female','1980-06-23','Melrose 123',NULL,NULL,'jj@fstreet.com'),(3,'Robert Phil','Male','1989-07-12','3rd Street 34',NULL,'12345','rm@tstreet.com'),(4,'Gloria Williams','Female','1984-02-14','2nd Street 23',NULL,NULL,NULL); 108 | /*!40000 ALTER TABLE `members` ENABLE KEYS */; 109 | UNLOCK TABLES; 110 | 111 | -- 112 | -- Temporary table structure for view `accounts_v_members` 113 | -- 114 | 115 | DROP TABLE IF EXISTS `accounts_v_members`; 116 | /*!50001 DROP VIEW IF EXISTS `accounts_v_members`*/; 117 | SET @saved_cs_client = @@character_set_client; 118 | SET character_set_client = utf8; 119 | /*!50001 CREATE TABLE `accounts_v_members` ( 120 | `membership_number` int(11), 121 | `full_names` varchar(350), 122 | `gender` varchar(6) 123 | ) ENGINE=MyISAM */; 124 | SET character_set_client = @saved_cs_client; 125 | 126 | -- 127 | -- Temporary table structure for view `general_v_movie_rentals` 
128 | -- 129 | 130 | DROP TABLE IF EXISTS `general_v_movie_rentals`; 131 | /*!50001 DROP VIEW IF EXISTS `general_v_movie_rentals`*/; 132 | SET @saved_cs_client = @@character_set_client; 133 | SET character_set_client = utf8; 134 | /*!50001 CREATE TABLE `general_v_movie_rentals` ( 135 | `membership_number` int(11), 136 | `full_names` varchar(350), 137 | `title` varchar(300), 138 | `transaction_date` date, 139 | `return_date` date 140 | ) ENGINE=MyISAM */; 141 | SET character_set_client = @saved_cs_client; 142 | 143 | -- 144 | -- Table structure for table `categories` 145 | -- 146 | 147 | DROP TABLE IF EXISTS `categories`; 148 | /*!40101 SET @saved_cs_client = @@character_set_client */; 149 | /*!40101 SET character_set_client = utf8 */; 150 | CREATE TABLE `categories` ( 151 | `category_id` int(11) NOT NULL AUTO_INCREMENT, 152 | `category_name` varchar(150) DEFAULT NULL, 153 | `remarks` varchar(500) DEFAULT NULL, 154 | PRIMARY KEY (`category_id`) 155 | ) ENGINE=InnoDB AUTO_INCREMENT=9 DEFAULT CHARSET=latin1; 156 | /*!40101 SET character_set_client = @saved_cs_client */; 157 | 158 | -- 159 | -- Dumping data for table `categories` 160 | -- 161 | 162 | LOCK TABLES `categories` WRITE; 163 | /*!40000 ALTER TABLE `categories` DISABLE KEYS */; 164 | INSERT INTO `categories` VALUES (1,'Comedy','Movies with humour'),(2,'Romantic','Love stories'),(3,'Epic','Story acient movies'),(4,'Horror',NULL),(5,'Science Fiction',NULL),(6,'Thriller',NULL),(7,'Action',NULL),(8,'Romantic Comedy',NULL); 165 | /*!40000 ALTER TABLE `categories` ENABLE KEYS */; 166 | UNLOCK TABLES; 167 | 168 | -- 169 | -- Table structure for table `movierentals` 170 | -- 171 | 172 | DROP TABLE IF EXISTS `movierentals`; 173 | /*!40101 SET @saved_cs_client = @@character_set_client */; 174 | /*!40101 SET character_set_client = utf8 */; 175 | CREATE TABLE `movierentals` ( 176 | `reference_number` int(11) NOT NULL AUTO_INCREMENT, 177 | `transaction_date` date DEFAULT NULL, 178 | `return_date` date DEFAULT NULL, 179 | 
`membership_number` int(11) DEFAULT NULL, 180 | `movie_id` int(11) DEFAULT NULL, 181 | `movie_returned` bit(1) DEFAULT b'0', 182 | PRIMARY KEY (`reference_number`), 183 | KEY `fk_MovieRentals_Members1` (`membership_number`), 184 | KEY `fk_MovieRentals_Movies1` (`movie_id`), 185 | CONSTRAINT `fk_MovieRentals_Members1` FOREIGN KEY (`membership_number`) REFERENCES `members` (`membership_number`) ON DELETE NO ACTION ON UPDATE NO ACTION, 186 | CONSTRAINT `fk_MovieRentals_Movies1` FOREIGN KEY (`movie_id`) REFERENCES `movies` (`movie_id`) ON DELETE NO ACTION ON UPDATE NO ACTION 187 | ) ENGINE=InnoDB AUTO_INCREMENT=16 DEFAULT CHARSET=latin1; 188 | /*!40101 SET character_set_client = @saved_cs_client */; 189 | 190 | -- 191 | -- Dumping data for table `movierentals` 192 | -- 193 | 194 | LOCK TABLES `movierentals` WRITE; 195 | /*!40000 ALTER TABLE `movierentals` DISABLE KEYS */; 196 | INSERT INTO `movierentals` VALUES (11,'2012-06-20',NULL,1,1,'\0'),(12,'2012-06-22','2012-06-25',1,2,'\0'),(13,'2012-06-22','2012-06-25',3,2,'\0'),(14,'2012-06-21','2012-06-24',2,2,'\0'),(15,'2012-06-23',NULL,3,3,'\0'); 197 | /*!40000 ALTER TABLE `movierentals` ENABLE KEYS */; 198 | UNLOCK TABLES; 199 | 200 | -- 201 | -- Final view structure for view `accounts_v_members` 202 | -- 203 | 204 | /*!50001 DROP TABLE IF EXISTS `accounts_v_members`*/; 205 | /*!50001 DROP VIEW IF EXISTS `accounts_v_members`*/; 206 | /*!50001 SET @saved_cs_client = @@character_set_client */; 207 | /*!50001 SET @saved_cs_results = @@character_set_results */; 208 | /*!50001 SET @saved_col_connection = @@collation_connection */; 209 | /*!50001 SET character_set_client = utf8 */; 210 | /*!50001 SET character_set_results = utf8 */; 211 | /*!50001 SET collation_connection = utf8_general_ci */; 212 | /*!50001 CREATE ALGORITHM=UNDEFINED */ 213 | /*!50013 DEFINER=`root`@`localhost` SQL SECURITY DEFINER */ 214 | /*!50001 VIEW `accounts_v_members` AS select `members`.`membership_number` AS `membership_number`,`members`.`full_names` 
AS `full_names`,`members`.`gender` AS `gender` from `members` */; 215 | /*!50001 SET character_set_client = @saved_cs_client */; 216 | /*!50001 SET character_set_results = @saved_cs_results */; 217 | /*!50001 SET collation_connection = @saved_col_connection */; 218 | 219 | -- 220 | -- Final view structure for view `general_v_movie_rentals` 221 | -- 222 | 223 | /*!50001 DROP TABLE IF EXISTS `general_v_movie_rentals`*/; 224 | /*!50001 DROP VIEW IF EXISTS `general_v_movie_rentals`*/; 225 | /*!50001 SET @saved_cs_client = @@character_set_client */; 226 | /*!50001 SET @saved_cs_results = @@character_set_results */; 227 | /*!50001 SET @saved_col_connection = @@collation_connection */; 228 | /*!50001 SET character_set_client = utf8 */; 229 | /*!50001 SET character_set_results = utf8 */; 230 | /*!50001 SET collation_connection = utf8_general_ci */; 231 | /*!50001 CREATE ALGORITHM=UNDEFINED */ 232 | /*!50013 DEFINER=`root`@`localhost` SQL SECURITY DEFINER */ 233 | /*!50001 VIEW `general_v_movie_rentals` AS select `mb`.`membership_number` AS `membership_number`,`mb`.`full_names` AS `full_names`,`mo`.`title` AS `title`,`mr`.`transaction_date` AS `transaction_date`,`mr`.`return_date` AS `return_date` from ((`movierentals` `mr` join `members` `mb` on((`mr`.`membership_number` = `mb`.`membership_number`))) join `movies` `mo` on((`mr`.`movie_id` = `mo`.`movie_id`))) */; 234 | /*!50001 SET character_set_client = @saved_cs_client */; 235 | /*!50001 SET character_set_results = @saved_cs_results */; 236 | /*!50001 SET collation_connection = @saved_col_connection */; 237 | /*!40103 SET TIME_ZONE=@OLD_TIME_ZONE */; 238 | 239 | /*!40101 SET SQL_MODE=@OLD_SQL_MODE */; 240 | /*!40014 SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS */; 241 | /*!40014 SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS */; 242 | /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; 243 | /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; 244 | /*!40101 SET 
COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; 245 | /*!40111 SET SQL_NOTES=@OLD_SQL_NOTES */; 246 | 247 | -- Dump completed on 2012-08-07 18:37:36 248 | -------------------------------------------------------------------------------- /Day21/README.md: -------------------------------------------------------------------------------- 1 | 2 | 21 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | This repository documents the process of setting up a SQL database and connecting it to a web application for efficient management and development purposes. It includes step-by-step instructions, best practices, and considerations for database administrators and developers alike. 11 | 12 | Setup Instructions 13 | Creating DB 14 | Create subnet group under RDS. 15 | Create a SQL DB. 16 | Create one Windows instance for SQL workbench. 17 | One Ubuntu instance for testing. 18 | Login to the Windows instance and disable the firewall and enhance IE configuration under Server Manager. 19 | Install Chrome. 20 | Download and install Microsoft Visual C++ redistributable x64 bit. 21 | Install Workbench and paste the code for execution. 22 | Connecting to Web Application 23 | Log in to Ubuntu and copy Python script and required files. 24 | Add more movies data to the schema and ensure automatic updates. 25 | Test primary and secondary databases. 26 | Before testing, check connections from CMD using nslookup and endpoint. 27 | Reboot RDS with failover to showcase failover and failback mechanisms. 28 | Note: Inform the application development team about any backend issues. Caching data is recommended to avoid downtime. 29 | Availability and Durability 30 | Discuss options for availability and duration. 31 | For multi-AZ-DB clusters, explain the possibility of keeping read replicas for connecting to third-party services. 32 | Illustrate scenarios like creating a copy of a production DB server for development purposes. 
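The failover/failback demonstration above (check the endpoint with nslookup, reboot with failover, check again) can be scripted. This is a sketch for a Linux test box rather than the Windows CMD mentioned above, and the RDS endpoint shown in the comment is a placeholder:

```shell
#!/bin/bash
# Resolve a hostname to its first IPv4 address.
resolve_endpoint() {
    getent ahostsv4 "$1" | awk '{print $1; exit}'
}

# Usage against a (placeholder) RDS endpoint:
#   resolve_endpoint mydb.xxxxxxxx.us-east-1.rds.amazonaws.com
# Run it once, trigger "Reboot with failover" in the RDS console, wait a
# couple of minutes, and run it again: in a Multi-AZ deployment the same
# DNS name should now resolve to the standby's IP address.
```

Because the application connects by DNS name, no connection-string change is needed after failover; only cached IPs (hence the note about informing the application team and caching) cause trouble.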
33 | Additional Considerations 34 | Modify DB settings such as backup retention (1 day). 35 | Discuss reserved instances for cost optimization. 36 | Introduce read replicas for efficient data reading without modification capabilities. 37 | Conclusion 38 | This README provides a comprehensive guide for setting up and managing SQL databases, connecting them to web applications, and ensuring high availability and durability. By following these instructions and considering additional features like read replicas and reserved instances, you can streamline your database operations for optimal performance and cost-effectiveness. 39 | -------------------------------------------------------------------------------- /Day21/app.py: -------------------------------------------------------------------------------- 1 | import sqlalchemy as sal 2 | import pymysql 3 | from sqlalchemy import create_engine, text 4 | 5 | # Create engine 6 | engine = create_engine('mysql+pymysql://admin:password@endpointhere/myflixdb') 7 | 8 | # Create connection 9 | connection = engine.connect() 10 | 11 | # Execute query 12 | query = text("select * from movies") 13 | result_proxy = connection.execute(query) 14 | 15 | # Fetch all rows 16 | data = result_proxy.fetchall() 17 | 18 | # Close connection 19 | connection.close() 20 | 21 | # Print fetched data 22 | for item in data: 23 | print(item) 24 | 25 | ## Ubuntu installation (shell commands, not Python) ## 26 | 27 | # Install Python 3 and pip: 28 | # apt update && apt install python3-pip -y 29 | 30 | # pip install sqlalchemy 31 | # pip install pymysql 32 | -------------------------------------------------------------------------------- /Day22/README.md: -------------------------------------------------------------------------------- 1 | ![22](https://github.com/saikiranpi/mastering-aws/assets/109568252/d234cdd4-6113-4e86-a335-c78d6a2e3e44) 2 | 3 | 4 | # Serverless Architecture using AWS DynamoDB, API Gateway, and Lambda 5 | 6 | This repository demonstrates the implementation of a serverless
architecture using AWS services such as DynamoDB, API Gateway, and Lambda. This architecture allows for efficient and scalable handling of data without the need for traditional server infrastructure. 7 | 8 | ## Overview 9 | 10 | The architecture consists of the following components: 11 | 12 | - **DynamoDB**: A key-value pair database service provided by AWS. 13 | - **API Gateway**: Acts as a bridge between the web application and backend services, allowing for the creation of RESTful APIs. 14 | - **Lambda**: AWS Lambda provides serverless computing, allowing you to run code in response to events without provisioning or managing servers. 15 | 16 | ## Explanation 17 | 18 | - When a user visits the hosted website (e.g., hosted on AWS S3), and submits a form, the data is not directly stored in the database. 19 | - Instead, the data is sent to API Gateway, the entry point for requests into AWS services. 20 | - API Gateway cannot directly write data to the database, so an intermediate step is required. 21 | - Lambda functions serve as the application logic in the middle. They process the data received from the API Gateway and interact with the database accordingly. 22 | - This architecture is considered "serverless" because it eliminates the need to provision and manage servers, such as EC2 instances. 23 | 24 | ### Example Scenario 25 | 26 | Suppose you want to create a website for tax calculation. Users fill out forms with their tax information, and the website calculates their tax liabilities. This process requires automated processing, making it suitable for a serverless architecture. 27 | 28 | ## Implementation Steps 29 | 30 | 1. **DynamoDB Setup**: Create a DynamoDB table to store the data. 31 | 2. **Lambda Function Creation**: Create Lambda functions to handle data processing. These functions will interact with DynamoDB. 32 | 3. **Lambda Permissions**: Configure permissions for the Lambda functions to access DynamoDB. 33 | 4. 
**API Gateway Configuration**: Set up RESTful APIs in API Gateway to receive and process requests. 34 | 5. **Testing**: Test the integration between API Gateway, Lambda, and DynamoDB using tools like ARC (Advanced REST Client). 35 | 6. **Deployment**: Deploy the APIs to a development stage for testing and further integration. 36 | -------------------------------------------------------------------------------- /Day22/post.json: -------------------------------------------------------------------------------- 1 | { 2 | "operation": "read", 3 | "payload": { 4 | "TableName": "bookstore", 5 | "Key": { 6 | "id": 10 7 | } 8 | } 9 | } 10 | -------------------------------------------------------------------------------- /Day22/put.json: -------------------------------------------------------------------------------- 1 | { 2 | "operation": "create", 3 | "payload": { 4 | "TableName": "bookstore", 5 | "Item": { 6 | "id": 30, 7 | "author": "Dhaloni", 8 | "bookname": "I lost my mind in this game - Vincent ", 9 | "Location": "USA", 10 | "Hobbies": { 11 | "Act1": "Swimming", 12 | "Act2": "Cycling", 13 | "Act3": "Writing" 14 | } 15 | } 16 | } 17 | } 18 | -------------------------------------------------------------------------------- /Day23/LoadingDataSampleFiles.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/saikiranpi/mastering-aws/22d9bd5b884e3cb22cbdb6e29930f464dd70df22/Day23/LoadingDataSampleFiles.zip -------------------------------------------------------------------------------- /Day23/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![Redshift](https://github.com/saikiranpi/mastering-aws/assets/109568252/97664d8a-aa71-40cc-832f-c34c0d37a5ab) 3 | 4 | 5 | Redshift Setup Guide 6 | This guide provides step-by-step instructions for setting up Amazon Redshift for data warehousing purposes. 
7 | 8 | Prerequisites 9 | AWS account with appropriate permissions 10 | Windows instance (T2 medium) 11 | Installation Steps 12 | Create Windows Instance: 13 | 14 | Launch a Windows instance, select T2 medium. 15 | Configuration: 16 | 17 | Navigate to Subnet group and remove private settings. 18 | Install Dependencies: 19 | 20 | Login to the Windows server. 21 | Install Google Chrome. 22 | Install Java SE Development Kit from Oracle. 23 | Data Preparation: 24 | 25 | Download sample files and load them into S3 under a folder named Data/. 26 | Download Redshift Driver: 27 | 28 | Access Redshift and download the driver version 4.2. 29 | Save the driver under C:\ drive on the Windows machine. 30 | Configure Workbench: 31 | 32 | Copy Workbench to C:\ drive. 33 | Run SQLworkbench64. 34 | Manage drivers, select Amazon Redshift, remove current driver, add the downloaded driver. 35 | Connect to Redshift by selecting the added driver, providing necessary credentials. 36 | Data Loading: 37 | 38 | Execute the command select distinct(tablename) from pg_table_def where schemaname = 'public';. 39 | Copy tables and paste them. 40 | Load data from S3 to Redshift using appropriate IAM role with S3 full access. 41 | QuickSight: 42 | 43 | Utilize Amazon QuickSight to present Redshift details for easy comprehension by stakeholders. 44 | Athena and Glue: 45 | 46 | Organize S3 bucket, keep necessary files. 47 | Use Glue to catalog tables and databases. 48 | Query data using Athena. 49 | Data Size Notations 50 | B (Bytes) 51 | KB (Kilobytes): 1024 Bytes 52 | MB (Megabytes): 1024 KB 53 | GB (Gigabytes): 1024 MB 54 | TB (Terabytes): 1024 GB 55 | PB (Petabytes): 1024 TB 56 | EB (Exabytes): 1024 PB 57 | ZB (Zettabytes): 1024 EB 58 | YB (Yottabytes): 1024 ZB 59 | Advantages 60 | Redshift efficiently handles massive data and allows easy scalability. 61 | SQL databases are suitable for smaller datasets but might struggle with handling petabytes of data.
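Each notation in the list above is 1024 times the previous one; a small helper makes the conversion concrete (a sketch for illustration — bash integers are 64-bit, so anything beyond a few EB overflows):

```shell
#!/bin/bash
# Convert "<n> <unit>" to bytes, using the 1024-based units listed above.
# Note: bash arithmetic is 64-bit, so values past a few EB will overflow.
to_bytes() {
    local n=$1 unit=$2 power=0 i
    local units=(B KB MB GB TB PB EB)
    for i in "${!units[@]}"; do
        if [ "${units[$i]}" = "$unit" ]; then power=$i; break; fi
    done
    echo $(( n * 1024 ** power ))
}

to_bytes 1 KB   # 1024
to_bytes 1 GB   # 1073741824
```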
62 | By following these steps, you can set up Amazon Redshift for your data warehousing needs efficiently. 63 | -------------------------------------------------------------------------------- /Day23/Workbench-Build125.zip: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/saikiranpi/mastering-aws/22d9bd5b884e3cb22cbdb6e29930f464dd70df22/Day23/Workbench-Build125.zip -------------------------------------------------------------------------------- /Day23/sample-data: -------------------------------------------------------------------------------- 1 | https://www.oracle.com/java/technologies/javase/javase8-archive-downloads.html#license-lightbox 2 | 3 | 4 | select distinct(tablename) from pg_table_def where schemaname = 'public'; 5 | 6 | 7 | 8 | drop table part; 9 | drop table supplier; 10 | drop table customer; 11 | drop table dwdate; 12 | drop table lineorder; 13 | 14 | 15 | CREATE TABLE part 16 | ( 17 | p_partkey INTEGER NOT NULL, 18 | p_name VARCHAR(22) NOT NULL, 19 | p_mfgr VARCHAR(6), 20 | p_category VARCHAR(7) NOT NULL, 21 | p_brand1 VARCHAR(9) NOT NULL, 22 | p_color VARCHAR(11) NOT NULL, 23 | p_type VARCHAR(25) NOT NULL, 24 | p_size INTEGER NOT NULL, 25 | p_container VARCHAR(10) NOT NULL 26 | ); 27 | 28 | CREATE TABLE supplier 29 | ( 30 | s_suppkey INTEGER NOT NULL, 31 | s_name VARCHAR(25) NOT NULL, 32 | s_address VARCHAR(25) NOT NULL, 33 | s_city VARCHAR(10) NOT NULL, 34 | s_nation VARCHAR(15) NOT NULL, 35 | s_region VARCHAR(12) NOT NULL, 36 | s_phone VARCHAR(15) NOT NULL 37 | ); 38 | 39 | CREATE TABLE customer 40 | ( 41 | c_custkey INTEGER NOT NULL, 42 | c_name VARCHAR(25) NOT NULL, 43 | c_address VARCHAR(25) NOT NULL, 44 | c_city VARCHAR(10) NOT NULL, 45 | c_nation VARCHAR(15) NOT NULL, 46 | c_region VARCHAR(12) NOT NULL, 47 | c_phone VARCHAR(15) NOT NULL, 48 | c_mktsegment VARCHAR(10) NOT NULL 49 | ); 50 | 51 | CREATE TABLE dwdate 52 | ( 53 | d_datekey INTEGER NOT NULL, 54 | d_date 
VARCHAR(19) NOT NULL, 55 | d_dayofweek VARCHAR(10) NOT NULL, 56 | d_month VARCHAR(10) NOT NULL, 57 | d_year INTEGER NOT NULL, 58 | d_yearmonthnum INTEGER NOT NULL, 59 | d_yearmonth VARCHAR(8) NOT NULL, 60 | d_daynuminweek INTEGER NOT NULL, 61 | d_daynuminmonth INTEGER NOT NULL, 62 | d_daynuminyear INTEGER NOT NULL, 63 | d_monthnuminyear INTEGER NOT NULL, 64 | d_weeknuminyear INTEGER NOT NULL, 65 | d_sellingseason VARCHAR(13) NOT NULL, 66 | d_lastdayinweekfl VARCHAR(1) NOT NULL, 67 | d_lastdayinmonthfl VARCHAR(1) NOT NULL, 68 | d_holidayfl VARCHAR(1) NOT NULL, 69 | d_weekdayfl VARCHAR(1) NOT NULL 70 | ); 71 | 72 | CREATE TABLE lineorder 73 | ( 74 | lo_orderkey INTEGER NOT NULL, 75 | lo_linenumber INTEGER NOT NULL, 76 | lo_custkey INTEGER NOT NULL, 77 | lo_partkey INTEGER NOT NULL, 78 | lo_suppkey INTEGER NOT NULL, 79 | lo_orderdate INTEGER NOT NULL, 80 | lo_orderpriority VARCHAR(15) NOT NULL, 81 | lo_shippriority VARCHAR(1) NOT NULL, 82 | lo_quantity INTEGER NOT NULL, 83 | lo_extendedprice INTEGER NOT NULL, 84 | lo_ordertotalprice INTEGER NOT NULL, 85 | lo_discount INTEGER NOT NULL, 86 | lo_revenue INTEGER NOT NULL, 87 | lo_supplycost INTEGER NOT NULL, 88 | lo_tax INTEGER NOT NULL, 89 | lo_commitdate INTEGER NOT NULL, 90 | lo_shipmode VARCHAR(10) NOT NULL 91 | ); 92 | 93 | 94 | 95 | S3 COPY query 96 | 97 | copy part 98 | from 's3://testing12121212121121212/data/part-csv.tbl' 99 | credentials 'aws_access_key_id=XXXXXXXXXXXXXXX;aws_secret_access_key=XXXXXXXXXXXXXXXXXX' 100 | csv 101 | null as '\000'; 102 | -------------------------------------------------------------------------------- /Day24/README.md: -------------------------------------------------------------------------------- 1 | Setting Up S3 Storage with CloudFront CDN for High-Performance Websites 2 | Welcome to the ultimate guide for setting up AWS S3 storage with CloudFront CDN to optimize your website's performance and security.
3 | 4 | Prerequisites 5 | AWS Account 6 | Basic understanding of AWS services 7 | Steps: 8 | Create S3 Bucket: 9 | 10 | Create a bucket and upload necessary files. 11 | Adjust Bucket Permissions: 12 | 13 | Remove the check mark for "Block all public access". 14 | Enable Static Website Hosting: 15 | 16 | Enable static website hosting for the bucket. 17 | Check ACM Certificate: 18 | 19 | Ensure ACM certificate is available for the domain name used. 20 | Remove ACLs: 21 | 22 | Delete any ACL applied on bucket or object level. 23 | Create CloudFront Distribution: 24 | 25 | Select the bucket as origin. 26 | Configure OAI and Bucket Policy: 27 | 28 | Select "Yes use OAI" and create a new OAI. 29 | Select "Yes, Update Bucket Policy". 30 | Redirect HTTP to HTTPS: 31 | 32 | Select redirection option. 33 | Configure WAF ACL and SSL Cert: 34 | 35 | Choose WAF ACL if exists. 36 | Specify CNAMEs/Alternate names and SSL certificate. 37 | Select Default HTML: 38 | 39 | Set default HTML page. 40 | Create Distribution: 41 | 42 | Wait for successful deployment. 43 | Create DNS Records: 44 | 45 | Create DNS records for the CDN endpoint using CNAMES. 46 | Access Your Website: 47 | 48 | Access your website using the provided URLs and ensure SSL is enforced. 49 | Avoid Direct S3 Access: 50 | 51 | Direct access to S3 website will result in errors. 52 | Follow these steps meticulously to optimize your website's performance with AWS S3 and CloudFront CDN. Happy optimizing! 53 | 54 | Contributing 55 | Contributions are welcome! Fork the repository and submit a pull request. 56 | 57 | License 58 | This project is licensed under the MIT License - see the LICENSE file for details. 
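To double-check the "Avoid Direct S3 Access" step above, it helps to know the website endpoint S3 exposes. This helper is a sketch: it assumes the dash-style `s3-website-<region>` format used by us-east-1 and other older regions (newer regions use a dot, `s3-website.<region>`), and the bucket name is a placeholder. Once CloudFront with OAI is the only permitted reader, fetching this URL directly should fail:

```shell
#!/bin/bash
# Build the direct S3 static-website endpoint for a bucket.
# Assumes the dash-style endpoint format (valid for us-east-1).
s3_website_url() {
    local bucket=$1 region=$2
    echo "http://${bucket}.s3-website-${region}.amazonaws.com"
}

s3_website_url my-site-bucket us-east-1
# http://my-site-bucket.s3-website-us-east-1.amazonaws.com

# After the distribution is live, this direct URL should return an error,
# while the CloudFront/DNS name serves the site over HTTPS:
#   curl -I "$(s3_website_url my-site-bucket us-east-1)"
```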
59 | -------------------------------------------------------------------------------- /Day25/README.md: -------------------------------------------------------------------------------- 1 | ![25](https://github.com/saikiranpi/mastering-aws/assets/109568252/b192a9f2-534a-4b50-a0e1-f59a6fec4d19) 2 | 3 | 4 | 5 | 6 | 7 | 8 | # Mastering S3 Policies: Endpoints, Access Points, and S3Fs 9 | 10 | Welcome to the repository for our comprehensive tutorial on Amazon S3 policies and access management! This guide will walk you through the steps to create and manage S3 bucket policies, use S3 Access Points, and mount S3 as a filesystem using third-party tools. 11 | 12 | ## Table of Contents 13 | 14 | - [Introduction](#introduction) 15 | - [S3 Bucket Policies](#s3-bucket-policies) 16 | - [Access Points](#access-points) 17 | - [S3Fs](#s3fs) 18 | - [Setup and Configuration](#setup-and-configuration) 19 | - [Step-by-Step Guide](#step-by-step-guide) 20 | - [Tags](#tags) 21 | - [Contributing](#contributing) 22 | - [License](#license) 23 | 24 | ## Introduction 25 | 26 | This repository accompanies the YouTube video tutorial on mastering S3 policies, endpoints, and S3Fs. Follow along to learn how to securely manage access to your S3 buckets and integrate them with your workflows. 27 | 28 | ## S3 Bucket Policies 29 | 30 | ### Overview 31 | S3 bucket policies are an integral part of IAM policies, allowing you to control access to your S3 resources. In this section, we will: 32 | - Create a bucket and upload files. 33 | - Set up a bucket policy to grant access to external users. 34 | - Configure policies for specific IP addresses. 35 | - Use presigned URLs for secure access without making the bucket public. 36 | 37 | ### Steps 38 | 1. **Create the Bucket and Upload Files**: 39 | ```bash 40 | aws s3 mb s3://your-bucket-name 41 | aws s3 cp your-file.txt s3://your-bucket-name/ 42 | ``` 43 | 44 | 2. **Create Bucket Policy**: 45 | - Under the bucket permissions, create a policy to allow `GetObject` access to an external user. 46 | 47 | 3. **Enable Public Access**: 48 | - Modify the policy to enable public access as needed. 49 | 50 | 4. **IP-Based Policy**: 51 | - Create a policy that grants access only from specific IP addresses. 52 | 53 | 5. **Presigned URLs**: 54 | - Explain and demonstrate the use of presigned URLs for secure access. 55 | 56 | ## Access Points 57 | 58 | ### Overview 59 | S3 Access Points simplify managing access to shared data sets in S3. In this section, we will: 60 | - Create a bucket with access points. 61 | - Set up policies for different users and folders. 62 | - Manage access through access points. 63 | 64 | ### Steps 65 | 1. **Create a Bucket with Access Points**: 66 | ```bash 67 | aws s3api create-bucket --bucket your-bucket-name 68 | ``` 69 | 70 | 2. **Create Folders and Users**: 71 | - Create two folders in the bucket. 72 | - Create `Testuser1` for `testfolder1` and `Testuser2` for `testfolder2`. 73 | - Attach inline policies for each user. 74 | 75 | 3. **Policy Management**: 76 | - Remove inline policies and delete bucket data as needed. 77 | - Configure access points and set permissions accordingly. 78 | 79 | 4. **Upload Data Using CLI**: 80 | ```bash 81 | aws s3 cp terraform3.zip s3://arn:aws:s3:us-east-1:431064776024:accesspoint/accesspointtesting1/folder1/terraform3.zip 82 | ``` 83 | 84 | ## S3Fs 85 | 86 | ### Overview 87 | S3Fs allows you to mount S3 buckets as filesystems on your local machine using third-party tools. This section covers: 88 | - Setting up an Ubuntu machine. 89 | - Installing and configuring S3Fs. 90 | 91 | ### Steps 92 | 1. **Setup Ubuntu Machine**: 93 | - Create an Ubuntu instance and connect to it. 94 | 95 | 2. **Install S3Fs**: 96 | ```bash 97 | sudo apt-get update 98 | sudo apt-get install s3fs 99 | ``` 100 | 101 | 3. **Mount S3 Bucket**: 102 | ```bash 103 | mkdir /path/to/mount 104 | s3fs your-bucket-name /path/to/mount -o use_cache=/tmp 105 | df -h | grep -i s3 106 | ``` 107 | 108 | ## Setup and Configuration 109 | 110 | ### Prerequisites 111 | - AWS CLI installed and configured. 112 | - AWS IAM roles and policies set up. 113 | - Ubuntu machine for S3Fs setup. 114 | 115 | ### Installation 116 | Follow the steps in the guide to set up your environment and install necessary tools. 117 | 118 | ## Step-by-Step Guide 119 | Follow the detailed instructions in each section to complete the setup and configuration. 120 | 121 | 122 | ## Contributing 123 | 124 | We welcome contributions! Please read our [Contributing Guidelines](CONTRIBUTING.md) for more details. 125 | 126 | ## License 127 | 128 | This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details. 129 | 130 | 131 | 132 | -------------------------------------------------------------------------------- /Day25/accesspoint.json: -------------------------------------------------------------------------------- 1 | // This policy allows any AWS user to perform any action on the specified S3 bucket and its objects 2 | 3 | // But only if the action is performed through a Data Access Point belonging to the AWS account with ID "431064776024" 4 | { 5 | "Version": "2012-10-17", 6 | "Statement": [ 7 | { 8 | "Effect": "Allow", 9 | "Principal": { 10 | "AWS": "*" 11 | }, 12 | "Action": "*", 13 | "Resource": [ 14 | "arn:aws:s3:::saikiran236236", 15 | "arn:aws:s3:::saikiran236236/*" 16 | ], 17 | "Condition": { 18 | "StringEquals": { 19 | "s3:DataAccessPointAccount": "431064776024" 20 | } 21 | } 22 | } 23 | ] 24 | } 25 | 26 | 27 | 28 | 29 | // This policy allows the IAM user "developer1" in the AWS account "431064776024" to perform any action on all objects accessible 30 | // through the S3 access point
"dev1accesspoint" in the "us-east-1" region. 31 | 32 | { 33 | "Version": "2012-10-17", 34 | "Statement": [ 35 | { 36 | "Effect": "Allow", 37 | "Principal": { 38 | "AWS": "arn:aws:iam::431064776024:user/developer1" 39 | }, 40 | "Action": "*", 41 | "Resource": "arn:aws:s3:us-east-1:431064776024:accesspoint/accesspointdev1/object/folder1/*" 42 | } 43 | ] 44 | } 45 | 46 | 47 | 48 | 49 | ####### S3 COMMAND ########## 50 | 51 | 52 | aws s3 cp saikiran.txt s3://arn:aws:s3:us-east-1:431064776024:accesspoint/accesspointdev1/folder1/saikiran.txt 53 | -------------------------------------------------------------------------------- /Day25/s3-policy-ip.json: -------------------------------------------------------------------------------- 1 | ##GIVING ACCESS TO ONLY SPECIFIC IP 2 | 3 | { 4 | "Version": "2012-10-17", 5 | "Statement": [ 6 | { 7 | "Effect": "Allow", 8 | "Principal": "*", 9 | "Action": "s3:*", 10 | "Resource": [ 11 | "arn:aws:s3:::testingggggggggggggggggggggggggggggggggggggg/*" 12 | ], 13 | "Condition": { 14 | "IpAddress": { 15 | "aws:SourceIp": [ 16 | "49.37.154.22/32" 17 | ] 18 | } 19 | } 20 | } 21 | ] 22 | } 23 | -------------------------------------------------------------------------------- /Day26/README.md: -------------------------------------------------------------------------------- 1 | ![26](https://github.com/saikiranpi/mastering-aws/assets/109568252/e21c6307-a46f-41aa-815f-970e8e69af68) 2 | 3 | 4 | AWS Glacier & AWS EFS Tutorial 5 | This repository contains the code and documentation for our video tutorial on AWS Glacier and AWS Elastic File System (EFS). Follow along to learn how to set up and manage these AWS services efficiently. 6 | 7 | AWS Glacier 8 | AWS Glacier is designed for long-term storage of data that is infrequently accessed. Here’s how to get started: 9 | 10 | Steps to Create a Glacier Vault 11 | Create Vault: Start by creating a new vault in AWS Glacier. 12 | Configure S3 Lifecycle Rule: 13 | Go to S3 > Management > Lifecycle Rule. 
14 | Create a lifecycle rule to move current versions to different storage classes over time: 15 | Standard IA (30 days) 16 | Intelligent Tiering (60 days) 17 | Glacier Instant Retrieval (90 days) 18 | Glacier Flexible (180 days) 19 | Delete expired objects after 365 days. 20 | Explain Other Rules: Provide details on other lifecycle rules as needed. 21 | Install Fast Glacier: Install and configure Fast Glacier for faster data transfers. 22 | Pricing: Discuss the cost benefits of using AWS Glacier for long-term storage. 23 | AWS Elastic File System (EFS) 24 | AWS EFS provides scalable file storage for use with AWS cloud services and on-premises resources. Here’s a practical example to understand its setup and use: 25 | 26 | Scenario: Jenkins Setup with EFS 27 | Deploy Instances: Deploy two instances with user data (1A and 1B). 28 | Directory Setup: Log in to the Ubuntu server and create the directory /var/lib/jenkins. 29 | Create NFS File System: Ensure you select Public subnets when creating the NFS. 30 | Mount EFS: 31 | Edit /etc/fstab to include the EFS mount details. 32 | Mount the EFS using: sudo mount -t nfs4 -o _netdev fs-XXXX.efs.us-east-1.amazonaws.com:/ /var/lib/jenkinslogs. 33 | Run Jenkins Jobs: Run jobs in Jenkins and verify they are accessible across instances. 34 | Centralize Syslog: 35 | Concatenate syslog to a centralized location: cat /var/log/syslog. 36 | Use a shell script (logbash.sh) to automate this process. 37 | Automate Backups with Crontab: 38 | Schedule the log backup script using Crontab: * * * * * bash /root/logbackup.sh. 39 | -------------------------------------------------------------------------------- /Day26/rotation.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | f="/var/log/syslog" 3 | 4 | if [ ! -f "$f" ]; then 5 | echo "$f does not exist!" 
6 | exit 7 | fi 8 | 9 | touch "$f" 10 | MAXSIZE=$((1 * 1024))   # rotate once the log exceeds 1 KiB (small value for demo) 11 | 12 | size=$(du -b "$f" | tr -s '\t' ' ' | cut -d' ' -f1) 13 | if [ "$size" -gt "$MAXSIZE" ]; then 14 | echo "Rotating!" 15 | timestamp=$(date +%s) 16 | mv "$f" "/var/lib/jenkins/backup.$timestamp" 17 | touch "$f" 18 | fi 19 | -------------------------------------------------------------------------------- /Day26/userdata.sh: -------------------------------------------------------------------------------- 1 | 2 | 3 | #!/bin/bash 4 | sudo apt update 5 | sudo apt install -y nfs-common 6 | sudo apt install -y openjdk-8-jdk 7 | 8 | ##################################### 9 | mkdir -p /var/lib/jenkins 10 | # Append this line to /etc/fstab (e.g. with nano /etc/fstab): 11 | # fs-0bfdccde40b0f6c5c.efs.us-east-1.amazonaws.com:/ /var/lib/jenkins nfs4 defaults,_netdev 0 0 12 | mount -a 13 | 14 | 15 | 16 | ################################### 17 | -------------------------------------------------------------------------------- /Day27/README.md: -------------------------------------------------------------------------------- 1 | ![27](https://github.com/saikiranpi/mastering-aws/assets/109568252/cd601ddb-6158-45b3-a5e5-b44c6a741710) 2 | 3 | 4 | 5 | 6 | 7 | # AWS FSX and Workspaces Setup Guide 8 | 9 | Welcome to the AWS FSX and Workspaces setup guide! This document will help you configure Active Directory, FSX, and Workspaces on AWS. Follow the steps below to get started. 10 | 11 | ## Table of Contents 12 | 13 | 1. [Prerequisites](#prerequisites) 14 | 2. [Active Directory Setup](#active-directory-setup) 15 | 3. [AWS Workspaces Setup](#aws-workspaces-setup) 16 | 4. [FSX Configuration](#fsx-configuration) 17 | 5. [Network Drive Mapping](#network-drive-mapping) 18 | 6. [Additional Notes](#additional-notes) 19 | 20 | ## Prerequisites 21 | 22 | Before you begin, ensure you have the following: 23 | 24 | - An AWS account with necessary permissions. 25 | - A Windows instance for Active Directory setup. 26 | - Access to AWS Management Console.
27 | 28 | ## Active Directory Setup 29 | 30 | ### 1. Install Active Directory 31 | 32 | 1. **Install AD DS:** 33 | - Open Server Manager and add the Active Directory Domain Services (AD DS) role. 34 | 35 | 2. **Disable Firewall:** 36 | - Open Firewall settings: `firewall.cpl` 37 | - Open Network Connections: `ncpa.cpl` 38 | - In Server Manager, turn off the firewall. 39 | 40 | 3. **Install AD DS Tools:** 41 | - Go to Server Manager > Manage > Add Roles and Features. 42 | - Select `Role Administration Tools` > `AD DS and AD LDS Tools` > `AD DS Tools`. 43 | - Click Next and Install. 44 | 45 | 4. **Configure Domain:** 46 | - Open System Properties: `sysdm.cpl` 47 | - Go to the Computer Name tab and click Change. 48 | - Set the domain to `saikiranpi.in` and enter the admin credentials. 49 | 50 | 5. **Create Users in AD:** 51 | - Open Active Directory Users and Computers: `dsa.msc` 52 | - In your domain, create two users with proper details. 53 | 54 | ## AWS Workspaces Setup 55 | 56 | ### 1. Create and Register Directory 57 | 58 | 1. **Create Directory:** 59 | - In the AWS Management Console, go to End User Computing > Workspaces. 60 | - Create a directory and register it. 61 | 62 | 2. **Register Directory:** 63 | - Select the registered directory and choose the subnets for your Workspaces. 64 | - Confirm the registration. 65 | 66 | ### 2. Launch Workspaces 67 | 68 | 1. **Launch Workspaces:** 69 | - In Workspaces, click Launch Workspaces. 70 | - Search for the users you created earlier. 71 | - Select the user, choose the standard with Windows bundle, and set it to AutoStop. 72 | - Do not select encryption (note: this will speed up the process). 73 | - Click Next and Launch Workspaces (this can take up to 40 minutes). 74 | 75 | 2. **Grant Access to Users:** 76 | - Login to your Windows machine. 77 | - Open Local Users and Groups Management: `lusrmgr.msc` 78 | - Go to Groups > Remote Desktop Users, add the user emails, and confirm. 79 | 80 | 3. 
**User Login:** 81 | - Users can now login with their credentials (e.g., `user@cloudvishwakarma.in`) via RDP. 82 | 83 | ## FSX Configuration 84 | 85 | ### 1. Create Shared Folders 86 | 87 | 1. **Access FSX:** 88 | - Open File Explorer and enter the FSX DNS name: `\\` 89 | - Create three folders: `User1`, `User2`, and `Common`. 90 | 91 | ### 2. Set Permissions 92 | 93 | 1. **Create AD Group:** 94 | - Go to your Windows AD and create a group called `myadmins`. 95 | - Add the two users to this group. 96 | 97 | 2. **Configure Folder Permissions:** 98 | - On the `Common` folder: 99 | - Right-click > Properties > Security > Edit > Add `myadmins` with Full control. 100 | - Go to Advanced > Disable inheritance > Remove all except `myadmins`. 101 | - Repeat for `User1` and `User2` folders: 102 | - Right-click > Properties > Security > Advanced > Disable inheritance. 103 | - Add the respective user with Full control and remove all others. 104 | 105 | ## Network Drive Mapping 106 | 107 | ### 1. Map Network Drive 108 | 109 | 1. **On User Machine:** 110 | - Open File Explorer. 111 | - Click This PC > Map network drive. 112 | - Enter `\\\share` and click Finish. 113 | 114 | ### 2. Mount to Server 115 | 116 | 1. **Access FSX:** 117 | - Copy the FSX DNS name. 118 | - On your server, open File Explorer and paste the DNS name to access the shared folders. 119 | 120 | ## Additional Notes 121 | 122 | - For group login, additional licensing for Terminal Services may be required. 123 | - Ensure proper permissions and security settings to maintain a secure environment. 124 | 125 | --- 126 | 127 | Feel free to contribute, open issues, or ask questions in the repository! 128 | 129 | --- 130 | 131 | ### Contact 132 | 133 | For any queries, contact [Pinapathruni.saikiran@gmail.com] 134 | 135 | --- 136 | 137 | ### License 138 | 139 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. 
140 | 141 | --- 142 | 143 | ### References 144 | 145 | - [AWS Directory Service Documentation](https://docs.aws.amazon.com/directoryservice/latest/admin-guide/what_is.html) 146 | - [AWS Workspaces Documentation](https://docs.aws.amazon.com/workspaces/latest/adminguide/amazon-workspaces.html) 147 | - [AWS FSX Documentation](https://docs.aws.amazon.com/fsx/latest/WindowsGuide/what-is.html) 148 | 149 | --- 150 | -------------------------------------------------------------------------------- /Day28/README.md: -------------------------------------------------------------------------------- 1 | ![28](https://github.com/saikiranpi/mastering-aws/assets/109568252/99d12fd1-b83e-4fb4-9867-03f7ce59a652) 2 | 3 | 4 | 5 | --- 6 | 7 | # Mastering AWS - Day 28 8 | 9 | ## Overview 10 | This document outlines the steps to integrate AWS Systems Manager (SSM) with CloudWatch for enhanced monitoring and management of EC2 instances. 11 | 12 | ## Practical Case 13 | In this scenario, we aim to set up SSM and CloudWatch integration to effectively manage and monitor EC2 instances. 14 | 15 | ### Step-by-Step Instructions 16 | 1. **Instance Setup** 17 | - Launch two instances: one Ubuntu and one Amazon Linux. 18 | - Create an IAM role with the following policies: `AmazonSSMManagedInstanceCore`, `AmazonSSMFullAccess`. 19 | 20 | 2. **SSM Run Command** 21 | - Use SSM Run Command to execute shell scripts on instances. 22 | - Example script: 23 | ```bash 24 | #!/bin/bash 25 | for I in {1..10} 26 | do 27 | echo $(date) > /tmp/FILE-$I 28 | sleep 1 29 | done 30 | ``` 31 | - Select instances and choose CloudWatch Logs as output. 32 | 33 | 3. **Copy Script from GitHub** 34 | - Use SSM to copy scripts from a GitHub repo to instances. 35 | - Select Ubuntu instance only. 36 | 37 | 4. **Install CloudWatch Agent** 38 | - Install CloudWatch Agent for enhanced monitoring. 39 | - Ensure IAM role has policies: `AmazonEC2RoleforSSM`, `CloudWatchAgentAdminPolicy`, `CloudWatchAgentServerPolicy`. 
40 | - Run the SSM `AWS-ConfigureAWSPackage` document (Run Command) to install the `AmazonCloudWatchAgent` package. 41 | - Configure the agent using the wizard. 42 | - Repeat the same steps for both Ubuntu and Amazon Linux instances. 43 | 44 | 5. **Verify Installation** 45 | - Check Parameter Store for configuration details. 46 | - Ensure CloudWatch metrics are available for monitoring. 47 | - Use SSM to manage agent configuration. 48 | 49 | ## Why Install CloudWatch Agent? 50 | Default EC2 monitoring provides limited metrics. To monitor additional metrics like disk space utilization, memory utilization, etc., enhanced monitoring with CloudWatch Agent is necessary. 51 | 52 | ## Conclusion 53 | By following these steps, you can effectively manage your AWS instances using SSM and enhance monitoring with the CloudWatch agent. This setup provides comprehensive insights into your instances' performance and enables efficient management. 54 | 55 | --- 56 | -------------------------------------------------------------------------------- /Day29/README.md: -------------------------------------------------------------------------------- 1 | ![29](https://github.com/saikiranpi/mastering-aws/assets/109568252/681ebb0c-6e58-45d8-8086-7b1c8770050c) 2 | 3 | 4 | 5 | 6 | # Cloud Watch – 1 7 | 8 | This repository provides detailed steps for deploying the CloudWatch Agent on Linux servers, creating alerts for memory and disk space usage, configuring CloudWatch logs, and setting up dashboards to monitor your infrastructure effectively. 9 | 10 | ## Table of Contents 11 | 12 | - [Prerequisites](#prerequisites) 13 | - [Deployment Steps](#deployment-steps) 14 | - [1. Create IAM Role](#1-create-iam-role) 15 | - [2. Setup Amazon Linux Instances](#2-setup-amazon-linux-instances) 16 | - [3. Install and Configure CloudWatch Agent](#3-install-and-configure-cloudwatch-agent) 17 | - [4. Configure CloudWatch Logs](#4-configure-cloudwatch-logs) 18 | - [5. Create CloudWatch Alarms](#5-create-cloudwatch-alarms) 19 | - [6.
Simulate Load for Testing](#6-simulate-load-for-testing) 20 | - [Interview Question](#interview-question) 21 | 22 | ## Prerequisites 23 | 24 | Ensure you have the necessary AWS permissions and access to create IAM roles, EC2 instances, and configure CloudWatch. 25 | 26 | ## Deployment Steps 27 | 28 | ### 1. Create IAM Role 29 | 30 | Create an IAM role with the following policies: 31 | 32 | - `AmazonEC2RoleforSSM` 33 | - `CloudWatchAgentAdminPolicy` 34 | - `CloudWatchAgentServerPolicy` 35 | - `AmazonSSMManagedInstanceCore` 36 | - `AmazonSSMFullAccess` 37 | - `CloudWatchLogsFullAccess` 38 | 39 | ### 2. Setup Amazon Linux Instances 40 | 41 | Create two Amazon Linux instances and install NGINX and the stress testing tool: 42 | 43 | ```bash 44 | yum update -y 45 | amazon-linux-extras install nginx1.12 -y 46 | service nginx start 47 | systemctl enable nginx 48 | echo "
<h1>$(cat /etc/hostname)</h1>
" > /usr/share/nginx/html/index.html 49 | sudo amazon-linux-extras install epel -y 50 | sudo yum install stress -y 51 | ``` 52 | 53 | ### 3. Install and Configure CloudWatch Agent 54 | 55 | Use AWS Systems Manager (SSM) to install and configure the CloudWatch Agent: 56 | 57 | 1. **Install CloudWatch Agent:** 58 | - Navigate to SSM > Run Command > AWS-ConfigureAWSPackage 59 | - Package Name: `AmazonCloudWatchAgent` 60 | - Select Instance and run 61 | 62 | 2. **Manage CloudWatch Agent:** 63 | - Navigate to SSM > Run Command > AmazonCloudWatch-manageAgent 64 | - Parameter store Config Name: specify the name of your configuration 65 | 66 | ### 4. Configure CloudWatch Logs 67 | 68 | To monitor NGINX logs: 69 | 70 | 1. Install AWS logs agent: 71 | 72 | ```bash 73 | sudo yum install -y awslogs 74 | ``` 75 | 76 | 2. Edit the AWS logs configuration file: 77 | 78 | ```bash 79 | sudo nano /etc/awslogs/awslogs.conf 80 | ``` 81 | 82 | Add the following configuration for NGINX logs: 83 | 84 | ```ini 85 | [/var/log/nginx/error.log] 86 | log_group_name = nginx-error-log 87 | log_stream_name = {instance_id}/error.log 88 | file = /var/log/nginx/error.log 89 | 90 | [/var/log/nginx/access.log] 91 | log_group_name = nginx-access-log 92 | log_stream_name = {instance_id}/access.log 93 | file = /var/log/nginx/access.log 94 | ``` 95 | 96 | 3. Start the AWS logs service: 97 | 98 | ```bash 99 | sudo systemctl start awslogsd 100 | ``` 101 | 102 | Repeat the above steps on both servers. 103 | 104 | ### 5. Create CloudWatch Alarms 105 | 106 | 1. **Memory Usage Alarm:** 107 | - Navigate to CloudWatch > Alarms > Create Alarm 108 | - Select metric: `cwagent > instanceID > mem_used_percent` 109 | - Set threshold: `Maximum > 1Min > Greater/Equal 40%` 110 | - Configure SNS notifications 111 | 112 | 2. 
**Disk Usage Alarm:** 113 | - Navigate to CloudWatch > Alarms > Create Alarm 114 | - Select metric: `cwagent > device,fstype,path > disk_used_percent` 115 | - Set threshold: `Maximum > 1Min > Greater/Equal 26%` 116 | - Configure SNS notifications 117 | 118 | ### 6. Simulate Load for Testing 119 | 120 | **Primary Server:** 121 | - Install `htop`: 122 | 123 | ```bash 124 | yum install -y htop 125 | ``` 126 | 127 | **Secondary Server:** 128 | - Simulate disk load: 129 | 130 | ```bash 131 | for i in {1..100} 132 | do 133 | cp terraform_1.zip terraform_$i.zip 134 | sleep 5 135 | done 136 | ``` 137 | 138 | -------------------------------------------------------------------------------- /Day30/Delvol-Lambda.py: -------------------------------------------------------------------------------- 1 | import time 2 | import boto3 3 | 4 | def lambda_handler(event, context): 5 | client = boto3.client("ec2", region_name="us-east-1") 6 | # Give recently detached volumes a moment to settle before inspecting them 7 | time.sleep(30) 8 | volumes = client.describe_volumes().get("Volumes", []) 9 | unattached_vols = [] 10 | for vol in volumes: 11 | vol_id = vol["VolumeId"] 12 | if len(vol["Attachments"]) == 0: 13 | print(f"Volume {vol_id} is not attached and will be deleted") 14 | unattached_vols.append(vol_id) 15 | 16 | print(f"The volumes which are available (unattached) are {unattached_vols}") 17 | ec2_resource = boto3.resource("ec2", region_name="us-east-1") 18 | for vol_id in unattached_vols: 19 | ec2_resource.Volume(vol_id).delete() 20 | -------------------------------------------------------------------------------- /Day30/README.md: -------------------------------------------------------------------------------- 1 | ![30](https://github.com/saikiranpi/mastering-aws/assets/109568252/00f21c0b-e7dc-4965-9494-2e484821791b) 2 | 3 | 4 | 5 | # Cloud Watch 2: EventBridge and Lambda 6 | 7 | This repository showcases a practical implementation of Amazon EventBridge,
Lambda, and CloudWatch to capture and manage logs related to EC2 volumes. The setup involves creating EventBridge rules, Lambda functions, and utilizing CloudWatch to capture logs, with the ultimate goal of automating EC2 volume management. 8 | 9 | ## Table of Contents 10 | - [Introduction](#introduction) 11 | - [Architecture Overview](#architecture-overview) 12 | - [Setup](#setup) 13 | - [Capturing Logs](#capturing-logs) 14 | - [Automating Volume Deletion](#automating-volume-deletion) 15 | - [Contributing](#contributing) 16 | - [License](#license) 17 | 18 | ## Introduction 19 | This project demonstrates the integration of AWS EventBridge with Lambda functions to automate tasks and capture logs in CloudWatch. The key features include: 20 | 1. Capturing EC2 volume events using EventBridge. 21 | 2. Processing these events with Lambda functions. 22 | 3. Capturing logs in CloudWatch for monitoring and debugging. 23 | 4. Automating the deletion of unused EC2 volumes. 24 | 25 | ## Architecture Overview 26 | - **EventBridge**: Captures events related to EC2 volumes (e.g., `CreateVolume`, `DeleteVolume`) and triggers Lambda functions. 27 | - **Lambda**: Processes the events from EventBridge. One function logs the events, while another automates the deletion of unused volumes. 28 | - **CloudWatch**: Stores logs generated by the Lambda functions, providing visibility into the events and actions taken. 29 | 30 | ## Setup 31 | 1. **EventBridge Rules**: Create rules to capture EC2 volume events. 32 | 2. **Lambda Functions**: 33 | - **Log Events**: A Lambda function to print and log EC2 volume events. 34 | - **Delete Volumes**: Another Lambda function to automatically delete unused EC2 volumes. 35 | 3. **CloudWatch Log Groups**: Ensure logs are captured and monitored. 36 | 37 | ## Capturing Logs 38 | 1. **EventBridge Rule**: Capture events related to EC2 volumes. 39 | 2. 
**Lambda Function**: Process and log these events: 40 | ```python 41 | import json 42 | 43 | def lambda_handler(event, context): 44 | print(json.dumps(event)) 45 | return { 46 | 'statusCode': 200, 47 | 'body': json.dumps('Event received') 48 | } 49 | ``` 50 | 3. **CloudWatch**: Verify that logs are captured under the appropriate log groups. 51 | 52 | ## Automating Volume Deletion 53 | 1. **Lambda Function for Deletion**: 54 | ```python 55 | import boto3 56 | 57 | ec2 = boto3.client('ec2') 58 | 59 | def lambda_handler(event, context): 60 | volumes = ec2.describe_volumes(Filters=[{'Name': 'status', 'Values': ['available']}]) 61 | for volume in volumes['Volumes']: 62 | ec2.delete_volume(VolumeId=volume['VolumeId']) 63 | print(f"Deleted volume: {volume['VolumeId']}") 64 | ``` 65 | 2. **Permissions**: Ensure the Lambda function has the necessary EC2 permissions. 66 | 3. **Testing**: Verify the function deletes unused volumes as expected. 67 | 68 | ## Contributing 69 | Contributions are welcome! Please submit a pull request or open an issue to discuss any changes. 70 | 71 | ## License 72 | This project is licensed under the MIT License. See the [LICENSE](LICENSE) file for details. 73 | 74 | --- 75 | 76 | By following this guide, you will have set up an automated system using AWS EventBridge, Lambda, and CloudWatch to manage EC2 volume events and automate volume deletion, enhancing operational efficiency and providing a practical hands-on experience with AWS services. 
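The deletion function above removes every volume it sees in the `available` state the moment it runs. Separating the filtering decision from the AWS calls makes that logic easy to unit-test before granting the function delete permissions. A minimal sketch (the helper name and sample volume IDs are illustrative, not part of this repo):

```python
def select_unattached(volumes):
    """Return the IDs of volumes that have no attachments.

    `volumes` is the "Volumes" list from an EC2 describe_volumes
    response, so this filter can be tested without touching AWS.
    """
    return [v["VolumeId"] for v in volumes if not v.get("Attachments")]


# describe_volumes-shaped sample data (IDs are made up):
sample = [
    {"VolumeId": "vol-unused", "Attachments": []},
    {"VolumeId": "vol-in-use", "Attachments": [{"InstanceId": "i-123"}]},
]
print(select_unattached(sample))  # -> ['vol-unused']
```

In the Lambda, the returned IDs would then be passed to the delete call; keeping the filter pure lets you assert on edge cases (no volumes, all attached) in a plain local test run.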
77 | -------------------------------------------------------------------------------- /Day30/assignment.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | def lambda_handler(event, context): 4 | client = boto3.client("iam") 5 | # The new user's name comes from the CloudTrail CreateUser event detail 6 | uname = event["detail"]["requestParameters"]["userName"] 7 | client.put_user_permissions_boundary( 8 | UserName=uname, 9 | PermissionsBoundary="POLICY-ARN" 10 | ) 11 | -------------------------------------------------------------------------------- /Day30/iam-policy.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Effect": "Allow", 6 | "Action": [ 7 | "s3:*", 8 | "cloudwatch:*", 9 | "ec2:*" 10 | ], 11 | "Resource": "*" 12 | } 13 | ] 14 | } 15 | -------------------------------------------------------------------------------- /Day31/BuildScript.sh: -------------------------------------------------------------------------------- 1 | artifacts: 2 | files: 3 | - app.war 4 | # - appspec.yml 5 | discard-paths: true 6 | phases: 7 | install: 8 | runtime-versions: 9 | java: corretto17 10 | commands: 11 | - "apt-get install git unzip -y" 12 | - "chmod 700 build.sh" 13 | - "sh build.sh" 14 | - "mv ROOT.war app.war" 15 | version: 0.2 16 | -------------------------------------------------------------------------------- /Day31/README.md: -------------------------------------------------------------------------------- 1 | 2 | 3 | ![31](https://github.com/saikiranpi/mastering-aws/assets/109568252/d35247cd-da34-45d4-ae8d-a51ca3a68fdd) 4 | 5 | 6 | 7 | 8 | ```markdown 9 | # AWS Developer Tools Automation 10 | 11 | ![AWS Developer Tools](https://your-logo-link.png) 12 | 13 | [![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE) 14 | [![Build
Status](https://github.com/yourusername/yourproject/workflows/Build/badge.svg)](https://github.com/yourusername/yourproject/actions) 15 | [![Release](https://img.shields.io/github/release/yourusername/yourproject.svg)](https://github.com/yourusername/yourproject/releases) 16 | [![Contributors](https://img.shields.io/github/contributors/yourusername/yourproject.svg)](https://github.com/yourusername/yourproject/graphs/contributors) 17 | [![Issues](https://img.shields.io/github/issues/yourusername/yourproject.svg)](https://github.com/yourusername/yourproject/issues) 18 | 19 | ## Table of Contents 20 | - [Project Overview](#project-overview) 21 | - [Prerequisites](#prerequisites) 22 | - [Installation](#installation) 23 | - [Usage](#usage) 24 | - [Steps](#steps) 25 | - [Cloud9 Setup](#cloud9-setup) 26 | - [CodeCommit](#codecommit) 27 | - [CodeBuild](#codebuild) 28 | - [IAM Roles](#iam-roles) 29 | - [Launch EC2 Instance](#launch-ec2-instance) 30 | - [CodeDeploy](#codedeploy) 31 | - [CodePipeline](#codepipeline) 32 | - [Contributing](#contributing) 33 | - [License](#license) 34 | 35 | ## Project Overview 36 | This project demonstrates the use of AWS Developer Tools for automating the deployment process using Cloud9, CodeCommit, CodeBuild, CodeDeploy, and CodePipeline. The goal is to streamline manual tasks and create a seamless CI/CD pipeline. 37 | 38 | ## Prerequisites 39 | - AWS Account 40 | - Basic knowledge of AWS services 41 | - Git installed 42 | - AWS CLI configured 43 | 44 | ## Installation 45 | 1. Clone the repository: 46 | ```sh 47 | git clone https://github.com/yourusername/yourproject.git 48 | cd yourproject 49 | ``` 50 | 51 | ## Usage 52 | Follow the steps below to set up and automate your deployment process using AWS Developer Tools. 53 | 54 | ## Steps 55 | 56 | ### Cloud9 Setup 57 | 1. 
Create a new Cloud9 environment: 58 | - Name: `your-environment-name` 59 | - Environment: New EC2 60 | - Instance type: `t2.micro` 61 | - Network: Select your network 62 | - Create the environment 63 | 64 | 2. Set up a NAT: 65 | - Create a NAT 66 | - Clone your repo: 67 | ```sh 68 | git clone https://github.com/aws-samples/eb-tomcat-snakes 69 | ``` 70 | 71 | 3. Push changes to CodeCommit: 72 | ```sh 73 | git remote add origin https://your-repo-url.git 74 | git status 75 | git remote -v 76 | git push origin master 77 | ``` 78 | 79 | ### CodeCommit 80 | 1. Create a new repository in CodeCommit. 81 | 2. Push your Cloud9 project to this repository as shown above. 82 | 83 | ### CodeBuild 84 | 1. Create a build project: 85 | - Source: Your CodeCommit repository and branch 86 | - Environment: Ubuntu standard 4.0 87 | - Service role: New service role 88 | - Subnet: Select private subnets and attach the NAT 89 | - Buildspec: Paste your build script 90 | - Artifacts: S3, ZIP format, app.zip 91 | 2. Start the build and check S3 for artifacts. 92 | 93 | ### IAM Roles 94 | 1. Create two IAM roles: 95 | - S3 full access & AWS CodeDeployFullAccess 96 | - CodeDeploy role 97 | 98 | ### Launch EC2 Instance 99 | 1. Launch a new EC2 instance with the tag name `Tomcat-Server`. 100 | 2. SSH into the instance and install necessary packages. 101 | 102 | ### CodeDeploy 103 | 1. Create `appspec.yml` in Cloud9 and push to the repo: 104 | ```sh 105 | git status 106 | git add . 107 | git commit -m "added specFile" 108 | git push origin master 109 | ``` 110 | 111 | 2. Edit and update the buildspec in CodeBuild. 112 | 3. Create a CodeDeploy application and deployment group with the `Tomcat-Server` tag. 113 | 4. Deploy your application using the S3 zip URI. 114 | 115 | ### CodePipeline 116 | 1. 
Create a new pipeline: 117 | - Source: CodeCommit (master branch) 118 | - Build: AWS CodeBuild project 119 | - Deploy: CodeDeploy deployment group 120 | 121 | ## Contributing 122 | Contributions are welcome! Please submit a pull request or open an issue for any changes. 123 | 124 | ## License 125 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. 126 | ``` 127 | 128 | -------------------------------------------------------------------------------- /Day31/appspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | os: linux 3 | files: 4 | - source: app.war 5 | destination: /opt/tomcat/webapps/ 6 | -------------------------------------------------------------------------------- /Day31/tomcat-packages.sh: -------------------------------------------------------------------------------- 1 | sudo apt update -y 2 | sudo apt install ruby -y 3 | sudo apt install wget -y 4 | cd /home/ubuntu 5 | 6 | wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install 7 | chmod +x ./install 8 | sudo ./install auto 9 | 10 | sudo service codedeploy-agent start 11 | sudo service codedeploy-agent status 12 | 13 | --- 14 | apt update 15 | apt install default-jdk -y 16 | apt upgrade -y 17 | java --version 18 | useradd -m -d /opt/tomcat -U -s /bin/false tomcat 19 | wget https://dlcdn.apache.org/tomcat/tomcat-10/v10.1.24/bin/apache-tomcat-10.1.24.tar.gz -O /tmp/tomcat-10.tar.gz 20 | sudo -u tomcat tar -xzvf /tmp/tomcat-10.tar.gz --strip-components=1 -C /opt/tomcat 21 | nano /etc/systemd/system/tomcat.service 22 | 23 | [Unit] 24 | Description=Apache Tomcat 25 | After=network.target 26 | 27 | [Service] 28 | Type=forking 29 | 30 | User=tomcat 31 | Group=tomcat 32 | 33 | Environment=JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 34 | Environment=CATALINA_PID=/opt/tomcat/tomcat.pid 35 | Environment=CATALINA_HOME=/opt/tomcat 36 | Environment=CATALINA_BASE=/opt/tomcat 37 |
Environment="CATALINA_OPTS=-Xms512M -Xmx1024M -server -XX:+UseParallelGC" 38 | 39 | ExecStart=/opt/tomcat/bin/startup.sh 40 | ExecStop=/opt/tomcat/bin/shutdown.sh 41 | 42 | ExecReload=/bin/kill $MAINPID 43 | RemainAfterExit=yes 44 | 45 | [Install] 46 | WantedBy=multi-user.target --- > Paste till here 47 | 48 | 49 | 50 | systemctl daemon-reload 51 | systemctl enable --now tomcat 52 | systemctl status tomcat 53 | -------------------------------------------------------------------------------- /Day32/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![32](https://github.com/saikiranpi/mastering-aws/assets/109568252/b1c24187-1609-4e13-8304-8107a055d0e8) 3 | 4 | 5 | 6 | --- 7 | 8 | # AWS Cloud Config and Elastic Beanstalk Tutorial 9 | Welcome to the AWS Cloud Config and Elastic Beanstalk tutorial repository! This repository accompanies a video tutorial that demonstrates how to use AWS Cloud Config to manage and monitor AWS resources and deploy applications using Elastic Beanstalk. 10 | 11 | ## Table of Contents 12 | - Introduction 13 | - Prerequisites 14 | - Setup Instructions 15 | - AWS Cloud Config 16 | - Elastic Beanstalk Deployment 17 | - Video Tutorial 18 | - Resources 19 | - Contributing 20 | - License 21 | ## Introduction 22 | In this tutorial, you'll learn how to: 23 | 24 | - Use AWS Cloud Config to monitor and manage AWS resources. 25 | - Deploy and manage applications with AWS Elastic Beanstalk. 26 | - Automate deployment processes and ensure compliance using AWS Cloud Config. 27 | ## Prerequisites 28 | Before you begin, make sure you have the following: 29 | 30 | - An AWS account 31 | - AWS CLI installed and configured 32 | - Basic knowledge of AWS services 33 | - An IAM user with sufficient permissions to create and manage AWS resources 34 | ## Setup Instructions 35 | 1. Clone the repository to your local machine. 36 | 2. Configure your AWS CLI with the necessary credentials.
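Before wiring up rules, it helps to know the shape of the compliance data AWS Config returns. The sketch below summarizes a `describe_compliance_by_config_rule`-style response offline; the helper and the rule names are illustrative and are not files from this repo:

```python
def summarize_compliance(rules):
    """Group Config rule names by their compliance type.

    `rules` mimics the "ComplianceByConfigRules" list returned by the
    AWS Config describe_compliance_by_config_rule API, so the logic is
    testable without AWS credentials.
    """
    summary = {}
    for rule in rules:
        ctype = rule["Compliance"]["ComplianceType"]
        summary.setdefault(ctype, []).append(rule["ConfigRuleName"])
    return summary


# Response-shaped sample data (rule names are made up):
sample = [
    {"ConfigRuleName": "s3-bucket-public-read-prohibited",
     "Compliance": {"ComplianceType": "NON_COMPLIANT"}},
    {"ConfigRuleName": "ec2-volume-inuse-check",
     "Compliance": {"ComplianceType": "COMPLIANT"}},
]
print(summarize_compliance(sample))
```

The same grouping could then feed a dashboard or an SNS notification listing only the `NON_COMPLIANT` rules.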
37 | ## AWS Cloud Config 38 | This repository includes configuration files and scripts to set up AWS Cloud Config rules and monitoring. You can find these resources in the `cloudconfig` directory. 39 | 40 | ## Elastic Beanstalk Deployment 41 | The `elasticbeanstalk` directory contains sample application files and configuration for Elastic Beanstalk. 42 | 43 | To deploy an application to Elastic Beanstalk, follow the steps provided in the video tutorial to initialize your Elastic Beanstalk environment and create a new environment using the provided configurations. 44 | 45 | ## Video Tutorial 46 | Watch the full video: https://youtu.be/a_FrAayZslo 47 | 48 | ## Resources 49 | - AWS Cloud Config Documentation 50 | - AWS Elastic Beanstalk Documentation 51 | ## Contributing 52 | Contributions are welcome! Please fork this repository and submit pull requests for any improvements or fixes. 53 | 54 | ## License 55 | This project is licensed under the MIT License. See the LICENSE file for details. 56 | -------------------------------------------------------------------------------- /Day32/userdata.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | apt update 3 | apt install -y openjdk-8-jdk 4 | apt install -y unzip 5 | curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" 6 | unzip awscliv2.zip 7 | sudo ./aws/install 8 | -------------------------------------------------------------------------------- /Day33/README.md: -------------------------------------------------------------------------------- 1 | 2 | ![33](https://github.com/saikiranpi/mastering-aws/assets/109568252/04392e08-6596-4f7a-8885-90f64ea42ef5) 3 | 4 | 5 | 6 | 7 | 8 | # AWS ECS and ECR Tutorial 9 | 10 | Welcome to the AWS ECS and ECR Tutorial! This repository contains all the resources and instructions you need to follow along with the video tutorial on setting up and using AWS ECS (Elastic Container Service) and ECR (Elastic Container Registry).
11 | 12 | ## Table of Contents 13 | 14 | - [Introduction](#introduction) 15 | - [Prerequisites](#prerequisites) 16 | - [Setup AWS ECR](#setup-aws-ecr) 17 | - [Setup AWS ECS](#setup-aws-ecs) 18 | - [Deploying a Containerized Application](#deploying-a-containerized-application) 19 | - [Video Tutorial](#video-tutorial) 20 | - [License](#license) 21 | 22 | ## Introduction 23 | 24 | In this tutorial, you'll learn how to: 25 | - Create a Docker image and push it to AWS ECR. 26 | - Set up AWS ECS to run containerized applications. 27 | - Deploy and manage a containerized application on AWS ECS. 28 | 29 | ## Prerequisites 30 | 31 | Before you begin, ensure you have the following: 32 | - An AWS account. 33 | - Docker installed on your local machine. 34 | - AWS CLI installed and configured on your local machine. 35 | 36 | ## Setup AWS ECR 37 | 38 | 1. **Create a Repository** 39 | - Log in to the AWS Management Console. 40 | - Navigate to the ECR service. 41 | - Click on "Create repository" and follow the prompts. 42 | 43 | 2. **Authenticate Docker to Your ECR Repository** 44 | ```bash 45 | aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-region.amazonaws.com 46 | ``` 47 | 48 | 3. **Build and Push Docker Image** 49 | ```bash 50 | docker build -t your-image-name . 51 | docker tag your-image-name:latest your-account-id.dkr.ecr.your-region.amazonaws.com/your-repository-name:latest 52 | docker push your-account-id.dkr.ecr.your-region.amazonaws.com/your-repository-name:latest 53 | ``` 54 | 55 | ## Setup AWS ECS 56 | 57 | 1. **Create a Cluster** 58 | - Navigate to the ECS service in the AWS Management Console. 59 | - Click on "Create Cluster" and follow the prompts. 60 | 61 | 2. **Register a Task Definition** 62 | - Go to the "Task Definitions" section. 63 | - Click on "Create new Task Definition" and follow the steps, ensuring you reference the ECR image. 64 | 65 | 3. 
**Run a Service** 66 | - Go to the "Clusters" section. 67 | - Select your cluster and click on "Create" under the "Services" tab. 68 | - Follow the prompts to configure and deploy your service. 69 | 70 | ## Deploying a Containerized Application 71 | 72 | 1. **Update Service to Use New Image** 73 | - Whenever you push a new Docker image to ECR, update your ECS service to use the new image. 74 | - Go to the ECS service and select your service. 75 | - Click on "Update" and specify the new image version. 76 | 77 | 2. **Monitor the Service** 78 | - Use the ECS console to monitor the health and status of your service. 79 | - Check logs and metrics to ensure everything is running smoothly. 80 | 81 | ## Video Tutorial 82 | 83 | Watch the full video tutorial on YouTube: [AWS ECS and ECR Tutorial](#) (replace `#` with the actual link to your video). 84 | 85 | ## License 86 | 87 | This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details. 88 | -------------------------------------------------------------------------------- /Day33/nginx_Ecr.json: -------------------------------------------------------------------------------- 1 | { 2 | "requiresCompatibilities": [ 3 | "EC2" 4 | ], 5 | "containerDefinitions": [ 6 | { 7 | "name": "ecr-ecs", 8 | "image": "431064776024.dkr.ecr.us-east-1.amazonaws.com/ecr-ecs-repo:latest", 9 | "memory": 256, 10 | "cpu": 256, 11 | "essential": true, 12 | "portMappings": [ 13 | { 14 | "containerPort": 80, 15 | "protocol": "tcp" 16 | } 17 | ], 18 | "logConfiguration": { 19 | "logDriver": "awslogs", 20 | "options": { 21 | "awslogs-group": "awslogs-nginx-ecs", 22 | "awslogs-region": "us-east-1", 23 | "awslogs-stream-prefix": "nginx" 24 | } 25 | } 26 | } 27 | ], 28 | "volumes": [], 29 | "networkMode": "bridge", 30 | "placementConstraints": [], 31 | "family": "ecr-ecs" 32 | } 33 | -------------------------------------------------------------------------------- /Day34/README.md: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/saikiranpi/mastering-aws/22d9bd5b884e3cb22cbdb6e29930f464dd70df22/Day34/README.md -------------------------------------------------------------------------------- /Day35/README.md: -------------------------------------------------------------------------------- 1 | ![35](https://github.com/user-attachments/assets/29c5945b-d09e-42ec-a8e3-875f41832b04) 2 | -------------------------------------------------------------------------------- /Day35/eks-cmd.sh: -------------------------------------------------------------------------------- 1 | # Run "aws configure" before using any of the commands below 2 | 3 | ## Install kubectl into /usr/local/bin 4 | curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" 5 | chmod +x kubectl 6 | sudo mv kubectl /usr/local/bin/kubectl 7 | kubectl version 8 | 9 | ## Install eksctl into /usr/local/bin 10 | curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s | tr '[:upper:]' '[:lower:]')_amd64.tar.gz" | tar xz -C /tmp 11 | sudo mv /tmp/eksctl /usr/local/bin/eksctl 12 | sudo chmod +x /usr/local/bin/eksctl 13 | eksctl version 14 | 15 | 16 | # Reference: https://documentation.sisense.com/latest/linux/prepeks.htm#gsc.tab=0 17 | ## aws configure 18 | aws configure 19 | AWS_REGION=$(aws configure get region) 20 | AWS_REGION=us-east-1 21 | 22 | 23 | eksctl create cluster \ 24 | --name "first-k8s-cluster" \ 25 | --version 1.30 \ 26 | --zones=us-east-1a,us-east-1b,us-east-1c \ 27 | --without-nodegroup 28 | 29 | 30 | eksctl utils associate-iam-oidc-provider \ 31 | --region us-east-1 \ 32 | --cluster first-k8s-cluster \ 33 | --approve 34 | aws eks describe-cluster --name first-k8s-cluster --query cluster.identity.oidc.issuer --output text 35 | 36 | # For Node Group In Public Subnet 37 | eksctl create nodegroup --cluster=first-k8s-cluster \ 38 | --region=us-east-1 \ 39 | --name=first-k8s-cluster-ng-1 \ 40 | 
--node-type=t3.medium \ 41 | --nodes=2 \ 42 | --nodes-min=2 \ 43 | --nodes-max=4 \ 44 | --node-volume-size=20 \ 45 | --ssh-access \ 46 | --ssh-public-key=LatestPEM \ 47 | --managed \ 48 | --asg-access \ 49 | --external-dns-access \ 50 | --full-ecr-access \ 51 | --appmesh-access \ 52 | --alb-ingress-access 53 | 54 | # For Node Group In Private Subnet 55 | eksctl create nodegroup --cluster=first-k8s-cluster \ 56 | --region=us-east-1 \ 57 | --name=first-k8s-cluster-ng-2 \ 58 | --node-type=t3.medium \ 59 | --nodes=2 \ 60 | --nodes-min=2 \ 61 | --nodes-max=4 \ 62 | --node-volume-size=20 \ 63 | --ssh-access \ 64 | --ssh-public-key=LatestPEM \ 65 | --managed \ 66 | --asg-access \ 67 | --external-dns-access \ 68 | --full-ecr-access \ 69 | --appmesh-access \ 70 | --alb-ingress-access \ 71 | --node-private-networking 72 | 73 | # List EKS Clusters 74 | eksctl get clusters 75 | 76 | # Capture Node Group name 77 | eksctl get nodegroup --cluster= 78 | 79 | # Delete Node Group 80 | eksctl delete nodegroup --cluster=first-k8s-cluster --name=first-k8s-cluster-ng-1 81 | 82 | # Delete Cluster 83 | eksctl delete cluster --name=first-k8s-cluster 84 | 85 | 86 | apiVersion: apps/v1 87 | kind: Deployment 88 | metadata: 89 | name: nginx-deployment 90 | labels: 91 | app: nginx 92 | spec: 93 | replicas: 3 94 | selector: 95 | matchLabels: 96 | app: nginx 97 | template: 98 | metadata: 99 | labels: 100 | app: nginx 101 | spec: 102 | containers: 103 | - name: sreeutils 104 | image: sreeharshav/utils 105 | ports: 106 | - containerPort: 8888 107 | 108 | --- 109 | apiVersion: v1 110 | kind: Service 111 | metadata: 112 | labels: 113 | app: nginx 114 | annotations: 115 | service.beta.kubernetes.io/aws-load-balancer-type: nlb 116 | name: nginx-deployment 117 | spec: 118 | ports: 119 | - port: 80 120 | protocol: TCP 121 | targetPort: 8888 122 | selector: 123 | app: nginx 124 | type: LoadBalancer 125 | 126 | 127 | https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.1.3/docs/examples/2048/2048_full.yaml 128 | 
--- 129 | apiVersion: networking.k8s.io/v1 130 | kind: Ingress 131 | metadata: 132 | name: ingress-2048 133 | annotations: 134 | kubernetes.io/ingress.class: alb 135 | alb.ingress.kubernetes.io/scheme: internet-facing 136 | alb.ingress.kubernetes.io/target-type: ip 137 | spec: 138 | rules: 139 | - http: 140 | paths: 141 | - path: / 142 | pathType: Prefix 143 | backend: 144 | service: 145 | name: nginx-deployment 146 | port: 147 | number: 80 148 | -------------------------------------------------------------------------------- /Day35/eks_deploy.yaml: -------------------------------------------------------------------------------- 1 | # redis 2 | --- 3 | apiVersion: v1 4 | kind: Service 5 | metadata: 6 | labels: 7 | app: redis 8 | name: redis 9 | spec: 10 | clusterIP: None 11 | ports: 12 | - name: redis-service 13 | port: 6379 14 | targetPort: 6379 15 | selector: 16 | app: redis 17 | --- 18 | apiVersion: apps/v1 19 | kind: Deployment 20 | metadata: 21 | name: redis 22 | labels: 23 | app: redis 24 | spec: 25 | replicas: 1 26 | selector: 27 | matchLabels: 28 | app: redis 29 | template: 30 | metadata: 31 | labels: 32 | app: redis 33 | spec: 34 | containers: 35 | - name: redis 36 | image: redis:alpine 37 | ports: 38 | - containerPort: 6379 39 | name: redis 40 | 41 | # db 42 | --- 43 | apiVersion: v1 44 | kind: Service 45 | metadata: 46 | labels: 47 | app: db 48 | name: db 49 | spec: 50 | clusterIP: None 51 | ports: 52 | - name: db 53 | port: 5432 54 | targetPort: 5432 55 | selector: 56 | app: db 57 | --- 58 | apiVersion: apps/v1 59 | kind: Deployment 60 | metadata: 61 | name: db 62 | labels: 63 | 
db 87 | volumeMounts: 88 | - name: db-data 89 | mountPath: /var/lib/postgresql/data 90 | volumes: 91 | - name: db-data 92 | emptyDir: {} 93 | --- 94 | apiVersion: v1 95 | kind: PersistentVolumeClaim 96 | metadata: 97 | name: postgres-pv-claim 98 | spec: 99 | accessModes: 100 | - ReadWriteOnce 101 | resources: 102 | requests: 103 | storage: 1Gi 104 | 105 | # result 106 | --- 107 | apiVersion: v1 108 | kind: Service 109 | metadata: 110 | name: result 111 | labels: 112 | app: result 113 | annotations: 114 | service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 115 | service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip" 116 | service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" 117 | spec: 118 | type: LoadBalancer 119 | ports: 120 | - port: 80 121 | targetPort: 80 122 | name: result-service 123 | selector: 124 | app: result 125 | 126 | --- 127 | apiVersion: apps/v1 128 | kind: Deployment 129 | metadata: 130 | name: result 131 | labels: 132 | app: result 133 | spec: 134 | replicas: 1 135 | selector: 136 | matchLabels: 137 | app: result 138 | template: 139 | metadata: 140 | labels: 141 | app: result 142 | spec: 143 | containers: 144 | - name: result 145 | image: kiran2361993/testing:latestappresults 146 | ports: 147 | - containerPort: 80 148 | name: result 149 | 150 | # vote 151 | --- 152 | apiVersion: v1 153 | kind: Service 154 | metadata: 155 | name: vote 156 | labels: 157 | apps: vote 158 | annotations: 159 | service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 160 | service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: "ip" 161 | service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing" 162 | spec: 163 | type: LoadBalancer 164 | ports: 165 | - port: 80 166 | targetPort: 80 167 | name: vote-service 168 | selector: 169 | app: vote 170 | --- 171 | apiVersion: apps/v1 172 | kind: Deployment 173 | metadata: 174 | name: vote 175 | labels: 176 | app: vote 177 | spec: 178 | replicas: 2 179 | selector: 180 | 
matchLabels: 181 | app: vote 182 | template: 183 | metadata: 184 | labels: 185 | app: vote 186 | spec: 187 | containers: 188 | - name: vote 189 | image: kiran2361993/testing:latestappvote 190 | ports: 191 | - containerPort: 80 192 | name: vote 193 | 194 | # worker 195 | --- 196 | apiVersion: v1 197 | kind: Service 198 | metadata: 199 | labels: 200 | apps: worker 201 | name: worker 202 | spec: 203 | clusterIP: None 204 | selector: 205 | app: worker 206 | --- 207 | apiVersion: apps/v1 208 | kind: Deployment 209 | metadata: 210 | labels: 211 | app: worker 212 | name: worker 213 | spec: 214 | replicas: 1 215 | selector: 216 | matchLabels: 217 | app: worker 218 | template: 219 | metadata: 220 | labels: 221 | app: worker 222 | spec: 223 | containers: 224 | - image: kiran2361993/testing:latestappworker 225 | name: worker 226 | -------------------------------------------------------------------------------- /Day35/votingapp.yaml: -------------------------------------------------------------------------------- 1 | # redis 2 | --- 3 | apiVersion: v1 4 | kind: Service 5 | metadata: 6 | labels: 7 | app: redis 8 | name: redis 9 | spec: 10 | clusterIP: None 11 | ports: 12 | - name: redis-service 13 | port: 6379 14 | targetPort: 6379 15 | selector: 16 | app: redis 17 | --- 18 | apiVersion: apps/v1 19 | kind: Deployment 20 | metadata: 21 | name: redis 22 | labels: 23 | app: redis 24 | spec: 25 | replicas: 1 26 | selector: 27 | matchLabels: 28 | app: redis 29 | template: 30 | metadata: 31 | labels: 32 | app: redis 33 | spec: 34 | containers: 35 | - name: redis 36 | image: redis:alpine 37 | ports: 38 | - containerPort: 6379 39 | name: redis 40 | 41 | # db 42 | --- 43 | apiVersion: v1 44 | kind: Service 45 | metadata: 46 | labels: 47 | app: db 48 | name: db 49 | spec: 50 | clusterIP: None 51 | ports: 52 | - name: db 53 | port: 5432 54 | targetPort: 5432 55 | selector: 56 | app: db 57 | --- 58 | apiVersion: apps/v1 59 | kind: Deployment 60 | metadata: 61 | name: db 62 | labels: 63 | 
app: db 64 | spec: 65 | replicas: 1 66 | selector: 67 | matchLabels: 68 | app: db 69 | template: 70 | metadata: 71 | labels: 72 | app: db 73 | spec: 74 | containers: 75 | - name: db 76 | image: postgres:9.4 77 | env: 78 | - name: PGDATA 79 | value: /var/lib/postgresql/data/pgdata 80 | - name: POSTGRES_USER 81 | value: postgres 82 | - name: POSTGRES_PASSWORD 83 | value: postgres 84 | ports: 85 | - containerPort: 5432 86 | name: db 87 | volumeMounts: 88 | - name: db-data 89 | mountPath: /var/lib/postgresql/data 90 | volumes: 91 | - name: db-data 92 | emptyDir: {} 93 | --- 94 | apiVersion: v1 95 | kind: PersistentVolumeClaim 96 | metadata: 97 | name: postgres-pv-claim 98 | spec: 99 | accessModes: 100 | - ReadWriteOnce 101 | resources: 102 | requests: 103 | storage: 1Gi 104 | 105 | # result 106 | --- 107 | apiVersion: v1 108 | kind: Service 109 | metadata: 110 | name: result 111 | labels: 112 | app: result 113 | spec: 114 | #type: LoadBalancer 115 | ports: 116 | - port: 80 117 | targetPort: 80 118 | name: result-service 119 | selector: 120 | app: result 121 | --- 122 | apiVersion: apps/v1 123 | kind: Deployment 124 | metadata: 125 | name: result 126 | labels: 127 | app: result 128 | spec: 129 | replicas: 1 130 | selector: 131 | matchLabels: 132 | app: result 133 | template: 134 | metadata: 135 | labels: 136 | app: result 137 | spec: 138 | containers: 139 | - name: result 140 | image: kiran2361993/testing:latestappresults 141 | ports: 142 | - containerPort: 80 143 | name: result 144 | imagePullSecrets: 145 | - name: docker-pwd 146 | # vote 147 | --- 148 | apiVersion: v1 149 | kind: Service 150 | metadata: 151 | name: vote 152 | labels: 153 | apps: vote 154 | spec: 155 | #type: LoadBalancer 156 | ports: 157 | - port: 80 158 | targetPort: 80 159 | name: vote-service 160 | selector: 161 | app: vote 162 | --- 163 | apiVersion: apps/v1 164 | kind: Deployment 165 | metadata: 166 | name: vote 167 | labels: 168 | app: vote 169 | spec: 170 | replicas: 2 171 | selector: 172 | 
matchLabels: 173 | app: vote 174 | template: 175 | metadata: 176 | labels: 177 | app: vote 178 | spec: 179 | containers: 180 | - name: vote 181 | image: kiran2361993/testing:latestappvote 182 | ports: 183 | - containerPort: 80 184 | name: vote 185 | imagePullSecrets: 186 | - name: docker-pwd 187 | 188 | # worker 189 | --- 190 | apiVersion: v1 191 | kind: Service 192 | metadata: 193 | labels: 194 | apps: worker 195 | name: worker 196 | spec: 197 | clusterIP: None 198 | selector: 199 | app: worker 200 | --- 201 | apiVersion: apps/v1 202 | kind: Deployment 203 | metadata: 204 | labels: 205 | app: worker 206 | name: worker 207 | spec: 208 | replicas: 1 209 | selector: 210 | matchLabels: 211 | app: worker 212 | template: 213 | metadata: 214 | labels: 215 | app: worker 216 | spec: 217 | containers: 218 | - image: kiran2361993/testing:latestappworker 219 | name: worker 220 | imagePullSecrets: 221 | - name: docker-pwd -------------------------------------------------------------------------------- /Day36/README.md: -------------------------------------------------------------------------------- 1 | ![36](https://github.com/saikiranpi/mastering-aws/assets/109568252/384c5621-7d74-40bf-a5d2-a5e414e2442a) 2 | 3 | 4 | # AWS Amplify Integration with GitHub 5 | 6 | 7 | ## Table of Contents 8 | - [Introduction](#introduction) 9 | - [Prerequisites](#prerequisites) 10 | - [Setup AWS Amplify](#setup-aws-amplify) 11 | - [Connect to GitHub](#connect-to-github) 12 | - [Configure Build Settings](#configure-build-settings) 13 | - [Deploying Your App](#deploying-your-app) 14 | - [Monitoring and Managing](#monitoring-and-managing) 15 | - [Conclusion](#conclusion) 16 | 17 | ## Introduction 18 | AWS Amplify is a powerful toolset for building, deploying, and hosting full-stack web and mobile applications. Integrating AWS Amplify with GitHub allows for automated builds and deployments every time you push changes to your repository, ensuring your application is always up-to-date. 
19 | 20 | ## Prerequisites 21 | Before you begin, ensure you have the following: 22 | - An AWS account 23 | - A GitHub account 24 | - An existing AWS Amplify project 25 | - An existing GitHub repository 26 | 27 | ## Setup AWS Amplify 28 | 1. Log in to the [AWS Management Console](https://aws.amazon.com/). 29 | 2. Navigate to the AWS Amplify console. 30 | 3. If you don't have an Amplify app yet, click on **Get Started** under the **Deploy** section to create a new Amplify app. 31 | 32 | ## Connect to GitHub 33 | 1. In the Amplify Console, click on **Connect app**. 34 | 2. Choose **GitHub** as your repository service. 35 | 3. You will be prompted to authorize AWS Amplify to access your GitHub account. Click **Authorize aws-amplify-console**. 36 | 4. Select the repository and branch you want to connect to Amplify. 37 | 38 | ## Configure Build Settings 39 | 1. AWS Amplify will auto-detect your build settings. Review the settings in the build specification file (`amplify.yml`) provided by Amplify. 40 | 2. You can customize the build settings if needed. This file typically includes the following stages: 41 | - `preBuild`: Install dependencies 42 | - `build`: Build the application 43 | - `postBuild`: Commands to run after the build 44 | 45 | ## Deploying Your App 46 | 1. Once you’ve connected your repository and configured the build settings, click **Save and Deploy**. 47 | 2. AWS Amplify will start the build process. You can monitor the progress in the Amplify Console. 48 | 3. After the build is complete, your app will be deployed, and Amplify will provide a unique URL where your app is hosted. 49 | 50 | ## Monitoring and Managing 51 | - **Monitoring**: Use the Amplify Console to monitor the status of your builds and deployments. You can view detailed logs for each build to troubleshoot any issues. 52 | - **Managing**: In the Amplify Console, you can manage various aspects of your application, including environment variables, custom domains, and access control. 
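The three build stages described above (`preBuild`, `build`, `postBuild`) map onto an `amplify.yml` such as the following — a minimal sketch for a typical Node-based frontend; the `npm` commands and the `build` output directory are assumptions, so adjust them to your framework:

```yaml
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - npm ci                 # install dependencies from the lockfile
    build:
      commands:
        - npm run build          # compile the application
    postBuild:
      commands:
        - echo "Build finished"  # placeholder post-build hook
  artifacts:
    baseDirectory: build         # directory Amplify publishes
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*        # reuse dependencies across builds
```

Note that a committed `amplify.yml` at the repository root takes precedence over build settings saved in the Amplify Console.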
53 | 54 | ## Conclusion 55 | Integrating AWS Amplify with GitHub simplifies the process of continuous deployment, ensuring your application is always live with the latest changes. By following this guide, you can set up a robust CI/CD pipeline for your web or mobile app with minimal effort. 56 | 57 | For more detailed information, refer to the [AWS Amplify Documentation](https://docs.amplify.aws/). 58 | 59 | --- 60 | -------------------------------------------------------------------------------- /Day36/appspec.yaml: -------------------------------------------------------------------------------- 1 | version: 0.1 2 | frontend: 3 | phases: 4 | preBuild: 5 | commands: 6 | - npm install 7 | build: 8 | commands: 9 | - npm run build 10 | artifacts: 11 | baseDirectory: build 12 | files: 13 | - '**/*' 14 | cache: 15 | paths: 16 | - node_modules/**/* 17 | -------------------------------------------------------------------------------- /Day37/README.md: -------------------------------------------------------------------------------- 1 | # Cloud Infrastructure Best Practices 2 | 3 | This repository outlines best practices for cost optimization and security in cloud infrastructure management. The practices covered include resource management, deployment strategies, and security measures to ensure efficient and secure cloud operations. 4 | 5 | ## Cost Optimization 6 | 7 | 1. **Document Resources**: Identify and document resources that need to be operational 24/7. 8 | 2. **Resource Management**: Shut down or pause resources that are not required during off-hours. 9 | 3. **Instance Utilization**: 10 | - Use Reserved Instances for long-term workloads. 11 | - Use Spot Instances to reduce billing for temporary workloads. 12 | 4. **Trusted Advisor**: Use AWS Trusted Advisor to identify and address underutilized resources. 13 | 14 | ## Security 15 | 16 | 1. **Infrastructure as Code (IaC)**: 17 | - Deploy infrastructure using Terraform.
18 | - Whenever possible, use serverless options like AWS Lambda. 19 | 2. **Monitoring and Alerts**: 20 | - Enable CloudTrail in all regions where the infrastructure is deployed. 21 | - Monitor all services with CloudWatch. 22 | 3. **Network Security**: 23 | - Do not deploy servers in public subnets. 24 | - Use Security Groups (SG) and Network ACLs (NACL) to restrict traffic. 25 | - Use VPC endpoints for private connectivity from the VPC to AWS services, keeping traffic off the public internet. 26 | 4. **High Availability**: 27 | - Distribute servers across multiple Availability Zones (AZ) for high availability. 28 | - Implement Auto Scaling to handle traffic spikes. 29 | 5. **Data Backup**: Regularly back up database servers. 30 | 6. **Authentication and Authorization**: 31 | - Use Microsoft Active Directory (AD) for Single Sign-On (SSO). 32 | -------------------------------------------------------------------------------- /Day38/README.md: -------------------------------------------------------------------------------- 1 | ![Final Class](https://github.com/saikiranpi/mastering-aws/assets/109568252/c051714c-77d8-4d94-b9b2-5264070bf19e) 2 | 3 | # AWS Series: How to Introduce Yourself as a DevOps Engineer 4 | 5 | Welcome to the final class in our AWS series! This repository contains all the resources and information covered throughout our AWS journey, focusing on how to effectively introduce yourself as a DevOps Engineer. 6 | 7 | ## Introduction 8 | 9 | In this final class, we discuss the importance of a strong self-introduction as a DevOps Engineer. This is crucial for job interviews, networking events, and professional settings where first impressions matter.
10 | 11 | ## Key Elements of a Strong Introduction 12 | 13 | - Brief personal background 14 | - Professional experience and key roles 15 | - Core skills and expertise in DevOps 16 | - Notable achievements and projects 17 | 18 | ## Highlighting Your Skills and Experience 19 | 20 | When introducing yourself, it’s important to highlight the skills and experience that make you a valuable asset. Mention specific tools, technologies, and methodologies you are proficient in, such as: 21 | - Continuous Integration/Continuous Deployment (CI/CD) 22 | - Infrastructure as Code (IaC) 23 | - Configuration Management 24 | - Cloud Platforms (e.g., AWS, Azure, Google Cloud) 25 | 26 | ## Tips for Making a Memorable Impression 27 | 28 | - Be concise and clear 29 | - Showcase your passion for DevOps 30 | - Use examples and anecdotes to illustrate your points 31 | - Practice your introduction to ensure a smooth delivery 32 | 33 | ## Conclusion 34 | 35 | A well-crafted introduction can set you apart in the competitive field of DevOps. Use the tips and guidelines from this video to make a strong and lasting impression. 
36 | 37 | ## Resources 38 | 39 | - [Video Link to Final Class](https://youtu.be/dkWmxxh99Z8) 40 | - [Complete AWS Series Playlist](https://www.youtube.com/playlist?list=PLMj5OfHGyNU8CpibfqLD7h3nP5tat498U) 41 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![Blue Yellow Red Modern Geometric Funny Moment YouTube Thumbnail](https://github.com/saikiranpi/Aws-Mastery-Journey/assets/109568252/5f6ac0f0-3b4c-409a-b124-f1daf2ba901b) 2 | 3 | 4 | 5 | [![LinkedIn](https://img.shields.io/badge/LinkedIn-%230077B5.svg?logo=linkedin&logoColor=white)](https://www.linkedin.com/in/saikiran-p-a0243569/) 6 | [![Medium](https://img.shields.io/badge/Medium-12100E?logo=medium&logoColor=white)](https://medium.com/@pinapathrunisaikiran) 7 | [![Docker](https://img.shields.io/badge/docker-12100E?logo=docker&logoColor=blue)](https://hub.docker.com/u/kiran2361993) 8 | [![Portfolio](https://img.shields.io/badge/portfolio-green)](https://www.saikiranpi.in) 9 | [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://www.youtube.com/channel/UC0n5QpkSD-UcCOsBuFNEcJQ) 10 | 11 | 12 | 13 | LIST OF SERVICES WE COVER BY THE END OF THIS JOURNEY: 14 | 15 | Day 01: IP Addressing [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/QzYP_5dDPQI?si=UHH8mKsHjZ1P0mNF) 16 | 17 | Day 02: VPC, Subnets, Route Tables [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/0uWnEiuWnXI?si=CkqmwHYGCayNK0Ez) 18 | 19 | Day 03: VPC Peering [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/QtWYT2wE4gA?si=4ex4NqeqFm2ZbClG) 20 | 21 | Day 04: VPC Flow Logs
[![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/6CjIT068Ss0?si=ZJmTory1iB6JSQzu) 22 | 23 | Day 05: VPC Endpoints [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/wSKsJ44PpUo?si=DHSgO8zg97B0TTJb) 24 | 25 | Day 06: SG vs NACL [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/wHxH8kGY_nU?si=pkJr6X-IX0F3ieTP) 26 | 27 | Day 07: NAT Gateway [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/9vwzfyUNMKM?si=j71supOOBHNmFjQU) 28 | 29 | Day 08: Transit Gateway [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/h6woUZlxcp8?si=HJBO-qMt9GRbzI8h) 30 | 31 | Day 09: EC2 [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/xVlDWX4ewdc?si=-Et2JkYjCqV5Npux) 32 | 33 | Day 10: EC2 AMI Images with Packer [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/cZEKWxYeEUA?si=ibdBvUnpF_jLQHr3) 34 | 35 | Day 11: EBS Volumes [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/hwX9zyAAWLs?si=UEnBUODEKdtitqEW) 36 | 37 | Day 12: Network Load Balancer [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/th9K0k_J-W4) 38 | 39 | Day 13: Application Load Balancer [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/I4s4VT6k2DU) 40 | 41 | Day 14: AutoScaling Groups [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/IcBlMtVJekQ) 42 | 43 | Day 15: Route53-Routing Policies
[![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/3ZAbp2gd82Y?si=j1scJw6X_JbwNm9J) 44 | 45 | Day 16: WAF (Web Application Firewall) [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/sCBwaQwZ8xY?si=hQG2iWWycA6OHlz0) 46 | 47 | Day 17: IAM Policies [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/i1WrUy-RxCs?si=80YTIDVfM-M7dpIZ) 48 | 49 | Day 18: IAM Roles [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/XjhrAQdoJow?si=bYQHbhzJnvL0BUDE) 50 | 51 | Day 19: IAM Role Switching Active Directory [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/ayPLTf-svfM?si=0TMna3D-Fe1TpiAc) 52 | 53 | Day 20: IAM-SSO-SelfAD-ManageAD-Cognito [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/wo1Sv47QfXw?si=gfoGuRmLBhzKJJCT) 54 | 55 | Day 21: RDS-MySQL [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/rejfGRBPD_Q?si=AZwXSFuqaADfn95B) 56 | 57 | Day 22: DynamoDB-API Gateway [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/l9J4Amgmz_g?si=sCA5y_oPxxjUqfWw) 58 | 59 | Day 23: Redshift-Data Analytics [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/M3t5i3lNxgc?si=YIg3oa3nub_xbyj2) 60 | 61 | Day 24: S3 [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/Q4LNQSgVWcs?si=2vcoXxWviKJ0F07D) 62 | 63 | Day 25: S3-AccessPoints-EFS [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/S61Ow7TQ-fg?si=dlsmFiGeivjeGVR-) 64 | 65 | Day 26: Glacier -EFS 
[![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/r9KZjWNECwk?si=xtBjytQWnDoYqZ3l) 66 | 67 | Day 27: FSx-Workspaces [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/FgFehTUHn50?si=CbzKpdwQAx0IgYDQ) 68 | 69 | Day 28: Systems Manager [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/LQnMRX8Ow1A?si=VPLZsBq2utk_Ha2f) 70 | 71 | Day 29: CloudWatch - Custom Metrics on Nginx [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/RDJyzIPjyzw?si=R5OpHL-gFgOKw8_H) 72 | 73 | Day 30: CloudWatch-EventBridge-Lambda [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/iS3MIvcV9B0?si=bRKCIuSMlBTZWh6e) 74 | 75 | Day 31: Developer Tools [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/Mef_2wD78Hk?si=3TCyDBiFeeOu1zHU) 76 | 77 | Day 32: Elastic Beanstalk-Config [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/a_FrAayZslo?si=_0y0tj1Xp81-y0UQ) 78 | 79 | Day 33: ECR-ECS [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/a_FrAayZslo?si=_0y0tj1Xp81-y0UQ) 80 | 81 | Day 34: ECS-Fargate [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/a_e16xS-MAk?si=N0WcqZ6TDk5Uk9AG) 82 | 83 | Day 35: EKS [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/sJSnRTquVwA?si=YvPW0X4StdRe2QHa) 84 | 85 | Day 36: AWS Amplify [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/VwlnYdLnmyU) 86 | 87 | Day 37: Cost Optimization & Best Security Practices
[![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/MKF9LMuFv78?si=PoxiAt2IMANQ6TqH) 88 | 89 | Day 38: Interview / Job Market / Resume Discussion [![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?logo=YouTube&logoColor=white)](https://youtu.be/dkWmxxh99Z8?si=hNhE6DzTiEXtwBHy) 90 | 91 | --------------------------------------------------------------------------------