├── LICENSE ├── README.md ├── appspec.yml ├── day-14 ├── README.md └── simple-python-app │ ├── Dockerfile │ ├── app.py │ ├── appspec.yml │ ├── buildspec.yml │ ├── requirements.txt │ ├── start_container.sh │ └── stop_container.sh ├── day-16 ├── README.md ├── custom_metrics_demo │ ├── cloudwatch_metrics.py │ └── requirements.txt └── default_metrics_demo │ └── cpu_spike.py ├── day-17 └── README.md ├── day-18 ├── README.md └── ebs_stale_snapshosts.py ├── day-19 └── README.md ├── day-2 ├── README.md └── interview-questions ├── day-20 └── README.md ├── day-21 ├── Dockerfile ├── README.md ├── app.py ├── commands.md └── requirements.txt ├── day-22 ├── 2048-app-deploy-ingress.md ├── README.md ├── alb-controller-add-on.md ├── configure-oidc-connector.md ├── installing-eks.md ├── prerequisites.md └── sample-app.md ├── day-24 ├── main.tf ├── provider.tf ├── userdata.sh ├── userdata1.sh └── variables.tf ├── day-25 ├── README.md ├── lambda_function.py └── lambda_function_permissions.md ├── day-3 └── README.md ├── day-4 └── README.md ├── day-5 └── README.md ├── day-6 └── README.md ├── day-7 └── vpc-demo-2-tier-app ├── day-8 └── Interview_q&a ├── day-9 ├── README.md └── demos │ └── bucket-policies │ ├── restrict-access-to-owner.json │ └── static-website-basic.json ├── interview-questions ├── 01-ADVANCED.md ├── 01-SCENARIO-BASED.md ├── aws-cli.md ├── aws-terraform.md ├── cloud-migration.md ├── cloudformation.md ├── cloudfront.md ├── cloudtrail.md ├── cloudwatch.md ├── code-build.md ├── code-deploy.md ├── code-pipeline.md ├── dynamodb.md ├── ecr.md ├── ecs.md ├── eks.md ├── elastic-bean-stalk.md ├── elastic-cloud-compute.md ├── elb.md ├── iam.md ├── lambda-functions.md ├── rds.md ├── route53.md ├── s3.md ├── systems-manager.md └── vpc.md └── scripts ├── start_container.sh └── stop_container.sh /appspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | os: linux 3 | 4 | hooks: 5 | ApplicationStop: 6 | - location: scripts/stop_container.sh 7 | timeout: 300 8 | runas: root 9 | AfterInstall: 10 | - location: scripts/start_container.sh 11 | timeout: 300 12 | runas: root -------------------------------------------------------------------------------- /day-14/README.md: -------------------------------------------------------------------------------- 1 | # AWS Continuous Integration Demo 2 | 3 | ## Set Up GitHub Repository 4 | 5 | The first step in our CI journey is to set up a GitHub repository to store our Python application's source code. If you already have a repository, feel free to skip this step. Otherwise, let's create a new repository on GitHub by following these steps: 6 | 7 | - Go to github.com and sign in to your account. 8 | - Click on the "+" button in the top-right corner and select "New repository." 9 | - Give your repository a name and an optional description. 10 | - Choose the appropriate visibility option based on your needs. 11 | - Initialize the repository with a README file. 12 | - Click on the "Create repository" button to create your new GitHub repository. 13 | 14 | Great! Now that we have our repository set up, we can move on to the next step. 15 | 16 | ## Create an AWS CodePipeline 17 | In this step, we'll create an AWS CodePipeline to automate the continuous integration process for our Python application. AWS CodePipeline will orchestrate the flow of changes from our GitHub repository to the deployment of our application. 
Let's go ahead and set it up: 18 | 19 | - Go to the AWS Management Console and navigate to the AWS CodePipeline service. 20 | - Click on the "Create pipeline" button. 21 | - Provide a name for your pipeline and click on the "Next" button. 22 | - For the source stage, select "GitHub" as the source provider. 23 | - Connect your GitHub account to AWS CodePipeline and select your repository. 24 | - Choose the branch you want to use for your pipeline. 25 | - In the build stage, select "AWS CodeBuild" as the build provider. 26 | - Create a new CodeBuild project by clicking on the "Create project" button. 27 | - Configure the CodeBuild project with the necessary settings for your Python application, such as the build environment, build commands, and artifacts. 28 | - Save the CodeBuild project and go back to CodePipeline. 29 | - Continue configuring the pipeline stages, such as deploying your application using AWS Elastic Beanstalk or any other suitable deployment option. 30 | - Review the pipeline configuration and click on the "Create pipeline" button to create your AWS CodePipeline. 31 | 32 | Awesome job! We now have our pipeline ready to roll. Let's move on to the next step to set up AWS CodeBuild. 33 | 34 | ## Configure AWS CodeBuild 35 | 36 | In this step, we'll configure AWS CodeBuild to build our Python application based on the specifications we define. CodeBuild will take care of building and packaging our application for deployment. Follow these steps: 37 | 38 | - In the AWS Management Console, navigate to the AWS CodeBuild service. 39 | - Click on the "Create build project" button. 40 | - Provide a name for your build project. 41 | - For the source provider, choose "AWS CodePipeline." 42 | - Select the pipeline you created in the previous step. 43 | - Configure the build environment, such as the operating system, runtime, and compute resources required for your Python application. 44 | - Specify the build commands, such as installing dependencies and running tests. Customize this based on your application's requirements. 45 | - Set up the artifacts configuration to generate the build output required for deployment. 46 | - Review the build project settings and click on the "Create build project" button to create your AWS CodeBuild project. 47 | 48 | Fantastic! With AWS CodeBuild all set up, we're now ready to witness the magic of continuous integration in action. 49 | 50 | ## Trigger the CI Process 51 | 52 | In this final step, we'll trigger the CI process by making a change to our GitHub repository. Let's see how it works: 53 | 54 | - Go to your GitHub repository and make a change to your Python application's source code. It could be a bug fix, a new feature, or any other change you want to introduce. 55 | - Commit and push your changes to the branch configured in your AWS CodePipeline. 56 | - Head over to the AWS CodePipeline console and navigate to your pipeline. 57 | - You should see the pipeline automatically kick off as soon as it detects the changes in your repository. 58 | - Sit back and relax while AWS CodePipeline takes care of the rest. It will fetch the latest code, trigger the build process with AWS CodeBuild, and deploy the application if you configured the deployment stage. 
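If you prefer to follow the run from a terminal, the same flow can be checked with the AWS CLI. The commands below are only a sketch — the pipeline name `my-python-pipeline` and the file being changed are placeholders, so substitute the names you chose when creating the pipeline.

```
# Commit and push a change to the branch the pipeline watches
git add app.py
git commit -m "Trigger CI: update greeting"
git push origin main

# Inspect the state of each stage as the run progresses
aws codepipeline get-pipeline-state --name my-python-pipeline

# Optionally start a run without pushing a new commit
aws codepipeline start-pipeline-execution --name my-python-pipeline
```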
-------------------------------------------------------------------------------- /day-14/simple-python-app/Dockerfile: -------------------------------------------------------------------------------- 1 | # Base image 2 | FROM python:3.8 3 | 4 | # Set the working directory inside the container 5 | WORKDIR /app 6 | 7 | # Copy the requirements file 8 | COPY requirements.txt . 9 | 10 | # Install the project dependencies 11 | RUN pip install -r requirements.txt 12 | 13 | # Copy the application code into the container 14 | COPY . . 15 | 16 | # Expose the port the Flask application will be listening on 17 | EXPOSE 5000 18 | 19 | # Set environment variables, if necessary 20 | # ENV MY_ENV_VAR=value 21 | 22 | # Run the Flask application 23 | CMD ["python", "app.py"] 24 | -------------------------------------------------------------------------------- /day-14/simple-python-app/app.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | 3 | app = Flask(__name__) 4 | 5 | @app.route('/') 6 | def hello(): 7 | return 'Hello, world!' 8 | 9 | if __name__ == '__main__': 10 | app.run() 11 | 12 | -------------------------------------------------------------------------------- /day-14/simple-python-app/appspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.0 2 | os: linux 3 | 4 | hooks: 5 | ApplicationStop: 6 | - location: scripts/stop_container.sh 7 | timeout: 300 8 | runas: root 9 | AfterInstall: 10 | - location: scripts/start_container.sh 11 | timeout: 300 12 | runas: root 13 | -------------------------------------------------------------------------------- /day-14/simple-python-app/buildspec.yml: -------------------------------------------------------------------------------- 1 | version: 0.2 2 | 3 | env: 4 | parameter-store: 5 | DOCKER_REGISTRY_USERNAME: /myapp/docker-credentials/username 6 | DOCKER_REGISTRY_PASSWORD: /myapp/docker-credentials/password 7 | DOCKER_REGISTRY_URL: /myapp/docker-registry/url 8 | phases: 9 | install: 10 | runtime-versions: 11 | python: 3.11 12 | pre_build: 13 | commands: 14 | - echo "Installing dependencies..." 15 | - pip install -r day-13/simple-python-app/requirements.txt 16 | build: 17 | commands: 18 | - echo "Running tests..." 19 | - cd day-13/simple-python-app/ 20 | - echo "Building Docker image..." 21 | - echo "$DOCKER_REGISTRY_PASSWORD" | docker login -u "$DOCKER_REGISTRY_USERNAME" --password-stdin "$DOCKER_REGISTRY_URL" 22 | - docker build -t "$DOCKER_REGISTRY_URL/$DOCKER_REGISTRY_USERNAME/simple-python-flask-app:latest" . 23 | - docker push "$DOCKER_REGISTRY_URL/$DOCKER_REGISTRY_USERNAME/simple-python-flask-app:latest" 24 | post_build: 25 | commands: 26 | - echo "Build completed successfully!" 
27 | artifacts: 28 | files: 29 | - '**/*' 30 | base-directory: ../simple-python-app 31 | 32 | -------------------------------------------------------------------------------- /day-14/simple-python-app/requirements.txt: -------------------------------------------------------------------------------- 1 | flask -------------------------------------------------------------------------------- /day-14/simple-python-app/start_container.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # Pull the Docker image from Docker Hub 5 | echo 6 | 7 | # Run the Docker image as a container 8 | echo 9 | -------------------------------------------------------------------------------- /day-14/simple-python-app/stop_container.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # Stop the running container (if any) 5 | echo "Hi" 6 | -------------------------------------------------------------------------------- /day-16/README.md: -------------------------------------------------------------------------------- 1 | # AWS CLOUD WATCH 2 | 3 | Welcome back to our "30 Days AWS Zero to Hero" series. Today, on Day 16, we will deep dive into AWS CloudWatch. 4 | 5 | What is AWS CloudWatch? 6 | 7 | AWS CloudWatch is a powerful monitoring and observability service provided by Amazon Web Services. It enables you to gain insights into the performance, health, and operational aspects of your AWS resources and applications. CloudWatch collects and tracks metrics, collects and monitors log files, and sets alarms to alert you on certain conditions. 8 | 9 | Advantages of AWS CloudWatch: 10 | 11 | Comprehensive Monitoring: CloudWatch allows you to monitor various AWS resources such as EC2 instances, RDS databases, Lambda functions, and more. You get a unified view of your entire AWS infrastructure. 12 | 13 | Real-Time Metrics: It provides real-time monitoring of metrics, allowing you to respond quickly to any issues or anomalies that might arise. 14 | 15 | Automated Actions: With CloudWatch Alarms, you can set up automated actions like triggering an Auto Scaling group to scale in or out based on certain conditions. 16 | 17 | Log Insights: CloudWatch Insights lets you analyze and search log data from various AWS services, making it easier to troubleshoot problems and identify trends. 18 | 19 | Dashboards and Visualization: Create custom dashboards to visualize your application and infrastructure metrics in one place, making it easier to understand the overall health of your system. 20 | 21 | Problem Solving with AWS CloudWatch: 22 | 23 | CloudWatch helps address several critical challenges, including: 24 | 25 | Resource Utilization: Tracking resource utilization and performance metrics to optimize your AWS infrastructure efficiently. 26 | Proactive Monitoring: Identifying and resolving issues before they impact your applications or users. 27 | Troubleshooting: Analyzing logs and metrics to troubleshoot problems and reduce downtime. 28 | Scalability: Automatically scaling resources based on demand to ensure optimal performance and cost efficiency. 29 | 30 | Practical Use Cases of AWS CloudWatch: 31 | 32 | Auto Scaling: CloudWatch can trigger Auto Scaling actions based on defined thresholds. For example, you can automatically scale in or out based on CPU utilization or request counts. 
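For example, scaling on CPU typically starts with a CloudWatch alarm similar to the sketch below. The Auto Scaling group name, account ID, and scaling-policy ARN are placeholders, not values from this repo.

```
# Alarm when average CPU across the group stays above 70% for two 5-minute periods
aws cloudwatch put-metric-alarm \
  --alarm-name asg-cpu-high \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=AutoScalingGroupName,Value=my-asg \
  --statistic Average \
  --period 300 \
  --evaluation-periods 2 \
  --threshold 70 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:autoscaling:us-east-1:123456789012:scalingPolicy:example-id:autoScalingGroupName/my-asg:policyName/scale-out
```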
33 | 34 | Resource Monitoring: Monitor EC2 instances, RDS databases, DynamoDB tables, and other AWS resources to gain insights into their performance and health. 35 | 36 | Application Insights: Track application-specific metrics to monitor the performance of your applications and identify potential bottlenecks. 37 | 38 | Log Analysis: Use CloudWatch Logs Insights to analyze log data, identify patterns, and troubleshoot issues in real-time. 39 | 40 | Billing and Cost Monitoring: CloudWatch can help you monitor your AWS billing and usage patterns, enabling you to optimize costs. -------------------------------------------------------------------------------- /day-16/custom_metrics_demo/cloudwatch_metrics.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | import time 3 | import random 4 | import boto3 5 | 6 | 7 | app = Flask(__name__) 8 | 9 | # Initialize AWS CloudWatch client 10 | cloudwatch = boto3.client('cloudwatch', region_name='us-east-1') 11 | 12 | # Sample product data for our online store 13 | products = { 14 | '1': {'name': 'Product 1', 'price': 10.99}, 15 | '2': {'name': 'Product 2', 'price': 19.99}, 16 | '3': {'name': 'Product 3', 'price': 5.49} 17 | } 18 | 19 | @app.route('/') 20 | def index(): 21 | start_time = time.time() 22 | 23 | # Simulate processing time 24 | time.sleep(random.uniform(0.1, 0.5)) 25 | 26 | # Log the page view metric to CloudWatch 27 | log_metric('PageViews', 1) 28 | 29 | # Log the response time metric to CloudWatch 30 | response_time = (time.time() - start_time) * 1000 31 | log_metric('ResponseTime', response_time) 32 | 33 | return "Welcome to our Online Store!" 34 | 35 | @app.route('/product/') 36 | def product(product_id): 37 | start_time = time.time() 38 | 39 | # Simulate processing time 40 | time.sleep(random.uniform(0.2, 0.8)) 41 | 42 | # Log the page view metric to CloudWatch 43 | log_metric('PageViews', 1) 44 | 45 | # Log the response time metric to CloudWatch 46 | response_time = (time.time() - start_time) * 1000 47 | log_metric('ResponseTime', response_time) 48 | 49 | if product_id in products: 50 | return f"Product: {products[product_id]['name']}, Price: ${products[product_id]['price']}" 51 | else: 52 | return "Product not found." 
53 | 54 | def log_metric(metric_name, value): 55 | # Send custom metric to CloudWatch 56 | cloudwatch.put_metric_data( 57 | Namespace='OnlineStore', 58 | MetricData=[{ 59 | 'MetricName': metric_name, 60 | 'Value': value, 61 | 'Unit': 'Count' 62 | }] 63 | ) 64 | 65 | if __name__ == '__main__': 66 | app.run(host='0.0.0.0', port=5000) 67 | -------------------------------------------------------------------------------- /day-16/custom_metrics_demo/requirements.txt: -------------------------------------------------------------------------------- 1 | flask 2 | boto3 -------------------------------------------------------------------------------- /day-16/default_metrics_demo/cpu_spike.py: -------------------------------------------------------------------------------- 1 | import time 2 | 3 | def simulate_cpu_spike(duration=30, cpu_percent=80): 4 | print(f"Simulating CPU spike at {cpu_percent}%...") 5 | start_time = time.time() 6 | 7 | # Calculate the number of iterations needed to achieve the desired CPU utilization 8 | target_percent = cpu_percent / 100 9 | total_iterations = int(target_percent * 5_000_000) # Adjust the number as needed 10 | 11 | # Perform simple arithmetic operations to spike CPU utilization 12 | for _ in range(total_iterations): 13 | result = 0 14 | for i in range(1, 1001): 15 | result += i 16 | 17 | # Wait for the rest of the time interval 18 | elapsed_time = time.time() - start_time 19 | remaining_time = max(0, duration - elapsed_time) 20 | time.sleep(remaining_time) 21 | 22 | print("CPU spike simulation completed.") 23 | 24 | if __name__ == '__main__': 25 | # Simulate a CPU spike for 30 seconds with 80% CPU utilization 26 | simulate_cpu_spike(duration=30, cpu_percent=80) 27 | -------------------------------------------------------------------------------- /day-17/README.md: -------------------------------------------------------------------------------- 1 | # AWS Lambda Deep Dive for Beginners 2 | 3 | ## Introduction to Serverless Computing 4 | 5 | Today, we're going to embark on an exciting journey into the world of serverless computing and explore AWS Lambda, a powerful service offered by Amazon Web Services. 6 | 7 | So, what exactly is "serverless computing"? Don't worry; it's not about eliminating servers altogether. Instead, serverless computing is a cloud computing execution model where you, as a developer, don't have to manage servers directly. You focus solely on writing and deploying your code, while the cloud provider takes care of all the underlying infrastructure. 8 | 9 | ## Understanding AWS Lambda 10 | 11 | In this serverless landscape, AWS Lambda shines as a leading service. AWS Lambda is a compute service that lets you run your code in response to events without the need to provision or manage servers. It automatically scales your applications based on incoming requests, so you don't have to worry about capacity planning or dealing with server maintenance. 12 | 13 | ## How Lambda Functions Fit into the Serverless World 14 | 15 | At the heart of AWS Lambda are "Lambda functions." These are individual units of code that perform specific tasks. Think of them as small, single-purpose applications that run independently. 16 | 17 | Here's how Lambda functions fit into the serverless world: 18 | 19 | 1. **Event-Driven Execution**: Lambda functions are triggered by events. An event could be anything, like a new file being uploaded to Amazon S3, a request hitting an API, or a specific time on the clock. 
When an event occurs, Lambda executes the corresponding function. 20 | 21 | 2. **No Server Management**: As a developer, you don't need to worry about managing servers. AWS handles everything behind the scenes. You just upload your code, configure the trigger, and Lambda takes care of the rest. 22 | 23 | 3. **Automatic Scaling**: Whether you have one user or one million users, Lambda scales automatically. Each function instance runs independently, ensuring that your application can handle any level of incoming traffic without manual intervention. 24 | 25 | 4. **Pay-per-Use**: One of the most attractive features of serverless computing is cost efficiency. With Lambda, you pay only for the compute time your code consumes. When your code isn't running, you're not charged. 26 | 27 | 5. **Supported Languages**: Lambda supports multiple programming languages like Node.js, Python, Java, Go, and more. You can choose the language you are comfortable with or that best fits your application's needs. 28 | 29 | ## Real-World Use Cases 30 | 31 | Now, let's explore some real-world use cases to better understand how AWS Lambda can be applied: 32 | 33 | 1. **Automated Image Processing**: Imagine you have a photo-sharing app, and users upload images every day. You can use Lambda to automatically resize or compress these images as soon as they are uploaded to S3. 34 | 35 | 2. **Chatbots and Virtual Assistants**: Build interactive chatbots or voice-controlled virtual assistants using Lambda. These assistants can perform tasks like answering questions, fetching data, or even controlling smart home devices. 36 | 37 | 3. **Scheduled Data Backups**: Use Lambda to create scheduled tasks for backing up data from one storage location to another, ensuring data resilience and disaster recovery. 38 | 39 | 4. **Real-Time Analytics**: Lambda can process streaming data from IoT devices, social media, or other sources, allowing you to perform real-time analytics and gain insights instantly. 40 | 41 | 5. **API Backends**: Develop scalable API backends for web and mobile applications using Lambda. It automatically handles the incoming API requests and executes the corresponding functions. 42 | 43 | -------------------------------------------------------------------------------- /day-18/README.md: -------------------------------------------------------------------------------- 1 | # AWS Cloud Cost Optimization - Identifying Stale Resources 2 | 3 | ## Identifying Stale EBS Snapshots 4 | 5 | In this example, we'll create a Lambda function that identifies EBS snapshots that are no longer associated with any active EC2 instance and deletes them to save on storage costs. 6 | 7 | ### Description: 8 | 9 | The Lambda function fetches all EBS snapshots owned by the same account ('self') and also retrieves a list of active EC2 instances (running and stopped). For each snapshot, it checks if the associated volume (if exists) is not associated with any active instance. If it finds a stale snapshot, it deletes it, effectively optimizing storage costs. 
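To run the cleanup on a schedule rather than invoking it by hand, one option is an EventBridge rule that triggers the function periodically. This is only a sketch under the assumption that the Lambda is named `ebs-snapshot-cleanup`; the account ID and region are placeholders.

```
# Create a rule that fires once a day
aws events put-rule \
  --name ebs-snapshot-cleanup-daily \
  --schedule-expression "rate(1 day)"

# Allow EventBridge to invoke the function
aws lambda add-permission \
  --function-name ebs-snapshot-cleanup \
  --statement-id allow-eventbridge \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/ebs-snapshot-cleanup-daily

# Point the rule at the function
aws events put-targets \
  --rule ebs-snapshot-cleanup-daily \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:ebs-snapshot-cleanup"
```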
10 | 11 | 12 | 13 | -------------------------------------------------------------------------------- /day-18/ebs_stale_snapshosts.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | 3 | def lambda_handler(event, context): 4 | ec2 = boto3.client('ec2') 5 | 6 | # Get all EBS snapshots 7 | response = ec2.describe_snapshots(OwnerIds=['self']) 8 | 9 | # Get all active EC2 instance IDs 10 | instances_response = ec2.describe_instances(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}]) 11 | active_instance_ids = set() 12 | 13 | for reservation in instances_response['Reservations']: 14 | for instance in reservation['Instances']: 15 | active_instance_ids.add(instance['InstanceId']) 16 | 17 | # Iterate through each snapshot and delete if it's not attached to any volume or the volume is not attached to a running instance 18 | for snapshot in response['Snapshots']: 19 | snapshot_id = snapshot['SnapshotId'] 20 | volume_id = snapshot.get('VolumeId') 21 | 22 | if not volume_id: 23 | # Delete the snapshot if it's not attached to any volume 24 | ec2.delete_snapshot(SnapshotId=snapshot_id) 25 | print(f"Deleted EBS snapshot {snapshot_id} as it was not attached to any volume.") 26 | else: 27 | # Check if the volume still exists 28 | try: 29 | volume_response = ec2.describe_volumes(VolumeIds=[volume_id]) 30 | if not volume_response['Volumes'][0]['Attachments']: 31 | ec2.delete_snapshot(SnapshotId=snapshot_id) 32 | print(f"Deleted EBS snapshot {snapshot_id} as it was taken from a volume not attached to any running instance.") 33 | except ec2.exceptions.ClientError as e: 34 | if e.response['Error']['Code'] == 'InvalidVolume.NotFound': 35 | # The volume associated with the snapshot is not found (it might have been deleted) 36 | ec2.delete_snapshot(SnapshotId=snapshot_id) 37 | print(f"Deleted EBS snapshot {snapshot_id} as its associated volume was not found.") 38 | -------------------------------------------------------------------------------- /day-19/README.md: -------------------------------------------------------------------------------- 1 | # Comprehensive Guide to CDN and CloudFront on AWS for Beginners 2 | 3 | If you've never heard of CDN or CloudFront before, don't worry. we'll start from scratch and gradually build up your understanding. By the end, you'll be well-versed in these technologies. So lets get started. 4 | 5 | ## Table of Contents 6 | 1. Introduction to Content Delivery Networks (CDN) 7 | 2. What is CloudFront? 8 | 3. How Does CloudFront Work? 9 | 4. Benefits of CloudFront 10 | 5. Setting Up CloudFront on AWS 11 | 6. Use Cases and Scenarios 12 | 7. Tips and Best Practices 13 | 8. Conclusion 14 | 15 | ## 1. Introduction to Content Delivery Networks (CDN) 16 | 17 | Imagine you have a website with lots of cool content, like images, videos, and documents. When a user visits your site from a different location far away from your server, the content might take a long time to load. That's where CDN comes to the rescue! 18 | 19 | A CDN is like a network of servers spread across various locations worldwide. These servers store a copy of your website's content. When a user requests your website, the content is delivered from the server closest to the user, making it super fast! It's like having a local store for your website content everywhere in the world. 20 | 21 | ## 2. What is CloudFront? 22 | 23 | CloudFront is Amazon Web Services' (AWS) very own CDN service. 
It integrates seamlessly with other AWS services and allows you to deliver content, videos, applications, and APIs securely with low-latency and high transfer speeds. 24 | 25 | ## 3. How Does CloudFront Work? 26 | 27 | Let's understand how CloudFront works with a simple example: 28 | 29 | Imagine you have a website with images stored on an Amazon S3 bucket (a cloud storage service). When a user requests an image, the request goes to CloudFront first. 30 | 31 | Here's how the process flows: 32 | - **Step 1**: CloudFront checks if it already has the requested image in its cache (storage). If it does, great! It sends the image directly to the user. If not, it proceeds to Step 2. 33 | - **Step 2**: CloudFront fetches the image from the S3 bucket and stores a copy in its cache for future requests. Then, it sends the image to the user. 34 | 35 | The next time someone requests the same image, CloudFront will deliver it from its cache, making it super fast and efficient! 36 | 37 | ## 4. Benefits of CloudFront 38 | 39 | - **Fast Content Delivery**: CloudFront ensures your content reaches users with minimal delay, making your website lightning fast. 40 | - **Global Reach**: With servers in various locations worldwide, CloudFront brings your content closer to users, regardless of where they are. 41 | - **Security**: CloudFront provides security features like DDoS protection and SSL/TLS encryption to keep your content and users safe. 42 | - **Scalability**: CloudFront can handle traffic spikes effortlessly, ensuring a smooth experience for your users. 43 | - **Cost-Effective**: Pay only for the data transfer and requests made, making it cost-effective for businesses of all sizes. 44 | 45 | ## 5. Setting Up CloudFront on AWS 46 | 47 | Now, let's get our hands dirty and set up CloudFront on AWS! 48 | 49 | ### Step 1: Create an S3 Bucket 50 | 1. Go to the AWS Management Console and navigate to Amazon S3. 51 | 2. Create a new bucket to store your website content. 52 | 53 | ### Step 2: Upload Content to the S3 Bucket 54 | 1. Upload images, videos, or any other content you want to serve through CloudFront to your S3 bucket. 55 | 56 | ### Step 3: Create a CloudFront Distribution 57 | 1. Go to the AWS Management Console and navigate to CloudFront. 58 | 2. Click "Create Distribution." 59 | 3. Choose whether you want to deliver a web application or content (like images and videos). 60 | 4. Configure your settings, such as the origin (your S3 bucket), cache behaviors, and security settings. 61 | 5. Click "Create Distribution" to set up CloudFront. 62 | 63 | ### Step 4: Update Website URLs 64 | 1. Once your CloudFront distribution is deployed (it may take a few minutes), you'll get a CloudFront domain name (e.g., `d1a2b3c4def.cloudfront.net`). 65 | 2. Replace the URLs of your website content with the CloudFront domain name. 66 | 67 | That's it! Your content is now being delivered through CloudFront. 68 | 69 | ## 6. Use Cases and Scenarios 70 | 71 | ### Scenario 1: E-Commerce Website 72 | Let's say you have an e-commerce website that sells products globally. By using CloudFront, your product images and videos load quickly for customers all over the world, improving the shopping experience. 73 | 74 | ### Scenario 2: Media Streaming 75 | You're running a video streaming platform. With CloudFront, you can stream videos to users efficiently, regardless of their location, without buffering issues. 
76 | 77 | ### Scenario 3: Software Downloads 78 | If you offer software downloads, CloudFront can distribute your files faster, reducing download times and providing a better user experience. 79 | 80 | ## 7. Tips and Best Practices 81 | 82 | - **Caching Strategies**: Configure cache settings wisely to balance freshness and speed for different types of content. 83 | - **Invalidation**: Learn how to invalidate or clear cached content when you make updates to your website. 84 | - **Monitoring and Reporting**: Use AWS tools to monitor your CloudFront distribution's performance and gain insights into user behavior. 85 | 86 | ## 8. Conclusion 87 | 88 | By using CloudFront, you can dramatically improve your website's performance, making users happier and potentially boosting your application and business. 89 | -------------------------------------------------------------------------------- /day-2/README.md: -------------------------------------------------------------------------------- 1 | # IAM 2 | 3 | AWS IAM (Identity and Access Management) is a service provided by Amazon Web Services (AWS) that helps you manage access to your AWS resources. It's like a security system for your AWS account. 4 | 5 | IAM allows you to create and manage users, groups, and roles. Users represent individual people or entities who need access to your AWS resources. Groups are collections of users with similar access requirements, making it easier to manage permissions. Roles are used to grant temporary access to external entities or services. 6 | 7 | With IAM, you can control and define permissions through policies. Policies are written in JSON format and specify what actions are allowed or denied on specific AWS resources. These policies can be attached to IAM entities (users, groups, or roles) to grant or restrict access to AWS services and resources. 8 | 9 | IAM follows the principle of least privilege, meaning users and entities are given only the necessary permissions required for their tasks, minimizing potential security risks. IAM also provides features like multi-factor authentication (MFA) for added security and an audit trail to track user activity and changes to permissions. 10 | 11 | By using AWS IAM, you can effectively manage and secure access to your AWS resources, ensuring that only authorized individuals have appropriate permissions and actions are logged for accountability and compliance purposes. 12 | 13 | Overall, IAM is an essential component of AWS security, providing granular control over access to your AWS account and resources, reducing the risk of unauthorized access and helping maintain a secure environment. 14 | 15 | ## Components of IAM 16 | 17 | Users: IAM users represent individual people or entities (such as applications or services) that interact with your AWS resources. Each user has a unique name and security credentials (password or access keys) used for authentication and access control. 18 | 19 | Groups: IAM groups are collections of users with similar access requirements. Instead of managing permissions for each user individually, you can assign permissions to groups, making it easier to manage access control. Users can be added or removed from groups as needed. 20 | 21 | Roles: IAM roles are used to grant temporary access to AWS resources. Roles are typically used by applications or services that need to access AWS resources on behalf of users or other services. Roles have associated policies that define the permissions and actions allowed for the role. 
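As a concrete illustration, a role that EC2 instances can assume is created with a trust policy naming the EC2 service as the principal, and then granted permissions by attaching a policy. The role name below is hypothetical — this is only a sketch of the shape such commands take.

```
# Create a role the EC2 service is trusted to assume (hypothetical role name)
aws iam create-role \
  --role-name ec2-s3-readonly-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach an AWS managed policy that grants the actual permissions
aws iam attach-role-policy \
  --role-name ec2-s3-readonly-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
```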
22 | 23 | Policies: IAM policies are JSON documents that define permissions. Policies specify the actions that can be performed on AWS resources and the resources to which the actions apply. Policies can be attached to users, groups, or roles to control access. IAM provides both AWS managed policies (predefined policies maintained by AWS) and customer managed policies (policies created and managed by you). 24 | 25 | -------------------------------------------------------------------------------- /day-2/interview-questions: -------------------------------------------------------------------------------- 1 | # Interview Questions 2 | 3 | Q: What is AWS IAM, and why is it important? 4 | 5 | A: AWS IAM (Identity and Access Management) is a service provided by Amazon Web Services that helps you control access to your AWS resources. It allows you to manage user identities, permissions, and policies. IAM is important because it enhances security by ensuring that only authorized individuals or entities have access to your AWS resources, helping you enforce the principle of least privilege and maintain a secure environment. 6 | 7 | Q: What is the difference between IAM users and IAM roles? 8 | 9 | A: IAM users represent individual people or entities that need access to your AWS resources. They have their own credentials and are typically associated with long-term access. On the other hand, IAM roles are used to grant temporary access to AWS resources, usually for applications or services. Roles have associated policies and can be assumed by trusted entities to access resources securely. 10 | 11 | Q: What are IAM policies, and how do they work? 12 | 13 | A: IAM policies are JSON documents that define permissions. They specify what actions are allowed or denied on AWS resources and can be attached to IAM users, groups, or roles. Policies control access by matching the actions requested by a user or entity with the actions allowed or denied in the policy. If a requested action matches an allowed action in the policy, access is granted; otherwise, it is denied. 14 | 15 | Q: What is the principle of least privilege, and why is it important in IAM? 16 | 17 | A: The principle of least privilege states that users should be granted only the permissions necessary to perform their tasks and nothing more. It is important in IAM because it minimizes the risk of unauthorized access and limits the potential damage that could be caused by a compromised account. Following the principle of least privilege helps maintain a secure environment by ensuring that users have only the permissions they need to perform their job responsibilities. 18 | 19 | Q: What is an AWS managed policy? 20 | 21 | A: An AWS managed policy is a predefined policy created and managed by AWS. These policies cover common use cases and provide predefined permissions for specific AWS services or actions. AWS managed policies are maintained and updated by AWS, ensuring they stay up to date with new AWS services and features. They can be attached to IAM users, groups, or roles in your AWS account. 22 | -------------------------------------------------------------------------------- /day-20/README.md: -------------------------------------------------------------------------------- 1 | # Introduction to AWS ECR (Elastic Container Registry) 2 | 3 | In this video, we will deep dive into the fundamental concepts of ECR and provide you with a step-by-step practical guide on how to use it effectively. So, let's get started! 
4 | 5 | ## Table of Contents 6 | 1. What is AWS ECR? 7 | 2. Key Benefits of ECR 8 | 3. Getting Started with AWS ECR 9 | - Creating an ECR Repository 10 | - Installing AWS CLI 11 | - Configuring AWS CLI 12 | 4. Pushing Docker Images to ECR 13 | 5. Pulling Docker Images from ECR 14 | 6. Cleaning Up Resources 15 | 16 | ## 1. What is AWS ECR? 17 | AWS Elastic Container Registry (ECR) is a fully managed container image registry service provided by Amazon Web Services (AWS). It enables you to store, manage, and deploy container images (Docker images) securely, making it an essential component of your containerized application development workflow. ECR integrates seamlessly with other AWS services like Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). 18 | 19 | ## 2. Key Benefits of ECR 20 | - **Security**: ECR offers encryption at rest, and images are stored in private repositories by default, ensuring the security of your container images. 21 | - **Integration**: ECR integrates smoothly with AWS services like ECS and EKS, simplifying the deployment process. 22 | - **Scalability**: As a managed service, ECR automatically scales to meet the demands of your container image storage. 23 | - **Availability**: ECR guarantees high availability, reducing the risk of image unavailability during critical times. 24 | - **Lifecycle Policies**: You can define lifecycle policies to automate the cleanup of unused or old container images, helping you save on storage costs. 25 | 26 | ## 3. Getting Started with AWS ECR 27 | ### Creating an ECR Repository 28 | 1. Go to the AWS Management Console and navigate to the Amazon ECR service. 29 | 2. Click on "Create repository" to create a new repository. 30 | 3. Enter a unique name for your repository and click "Create repository." 31 | 32 | ### Installing AWS CLI 33 | To interact with ECR from your local machine, you'll need to have the AWS Command Line Interface (CLI) installed. Follow the instructions in the [AWS CLI User Guide](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html) to install it. 34 | 35 | ### Configuring AWS CLI 36 | After installing the AWS CLI, open a terminal and run the following command to configure your CLI with your AWS credentials: 37 | 38 | ``` 39 | aws configure 40 | ``` 41 | 42 | Enter your AWS Access Key ID, Secret Access Key, default region, and preferred output format when prompted. 43 | 44 | ## 4. Pushing Docker Images to ECR 45 | Now that you have your ECR repository set up and the AWS CLI configured, let's push a Docker image to ECR. 46 | 47 | 1. Build your Docker image locally using the `docker build` command: 48 | 49 | ``` 50 | docker build -t 51 | ``` 52 | 53 | 2. Tag the image with your ECR repository URI: 54 | 55 | ``` 56 | docker tag : .dkr.ecr..amazonaws.com/: 57 | ``` 58 | 59 | 3. Log in to your ECR registry using the AWS CLI: 60 | 61 | ``` 62 | aws ecr get-login-password --region | docker login --username AWS --password-stdin .dkr.ecr..amazonaws.com 63 | ``` 64 | 65 | 4. Push the Docker image to ECR: 66 | 67 | ``` 68 | docker push .dkr.ecr..amazonaws.com/: 69 | ``` 70 | 71 | ## 5. Pulling Docker Images from ECR 72 | To pull and use the Docker images from ECR on another system or AWS service, follow these steps: 73 | 74 | 1. Log in to ECR using the AWS CLI as shown in Step 3 of the previous section. 75 | 2. Pull the Docker image from ECR: 76 | 77 | ``` 78 | docker pull .dkr.ecr..amazonaws.com/: 79 | ``` 80 | 81 | ## 6. 
Cleaning Up Resources 82 | As good practice, remember to clean up resources that you no longer need to avoid unnecessary costs. To delete an ECR repository: 83 | 84 | 1. Make sure there are no images in the repository, or delete the images using `docker rmi` locally. 85 | 2. Go to the AWS Management Console, navigate to the Amazon ECR service, and select your repository. 86 | 3. Click on "Delete" and confirm the action. 87 | -------------------------------------------------------------------------------- /day-21/Dockerfile: -------------------------------------------------------------------------------- 1 | # Use the official Python image as the base image 2 | FROM python:3.9 3 | 4 | # Set the working directory in the container 5 | WORKDIR /app 6 | 7 | # Copy the Python dependencies file to the container 8 | COPY requirements.txt . 9 | 10 | # Install the Python dependencies 11 | RUN pip install --no-cache-dir -r requirements.txt 12 | 13 | # Copy the Flask application code to the container 14 | COPY app.py . 15 | 16 | # Expose the port the Flask application will run on 17 | EXPOSE 3000 18 | 19 | # Command to run the Flask application when the container starts 20 | CMD ["python", "app.py"] 21 | -------------------------------------------------------------------------------- /day-21/README.md: -------------------------------------------------------------------------------- 1 | # AWS ECS Deep Dive 2 | 3 | ## Introduction 4 | 5 | In the ever-evolving world of cloud computing, containerization has emerged as a pivotal technology, enabling developers to package their applications along with all dependencies into a single, portable unit. Amazon Elastic Container Service (ECS), a fully managed container orchestration service from AWS, simplifies the deployment, management, and scaling of containerized applications. 6 | 7 | This blog post aims to be your ultimate guide to AWS ECS. We'll start from the fundamentals and gradually delve into the comparisons with its alternatives. We'll also discuss the pros and cons of ECS, provide step-by-step instructions for installation and configuration, and finally, guide you through deploying your first application on ECS. 8 | 9 | ## Table of Contents 10 | 1. What is AWS ECS? 11 | 2. Why Choose ECS Over Other Container Orchestration Tools? 12 | 3. ECS Fundamentals 13 | - Clusters 14 | - Task Definitions 15 | - Tasks 16 | - Services 17 | 4. Pros of Using AWS ECS 18 | 5. Cons of Using AWS ECS 19 | 6. Installation and Configuration 20 | - Prerequisites 21 | - Setting Up ECS CLI 22 | - Configuring AWS Credentials 23 | 7. Deploying Your First Application on ECS 24 | - Preparing the Application 25 | - Creating a Task Definition 26 | - Configuring the Service 27 | - Deploying the Service 28 | - Monitoring the Service 29 | 8. Conclusion 30 | 31 | ## 1. What is AWS ECS? 32 | 33 | AWS ECS is a fully managed container orchestration service that allows you to run Docker containers at scale. It eliminates the need to manage your own container orchestration infrastructure and provides a highly scalable, reliable, and secure environment for deploying and managing your applications. 34 | 35 | ## 2. Why Choose ECS Over Other Container Orchestration Tools? 36 | 37 | Before diving deep into ECS, let's compare it with some popular alternatives like Kubernetes and Docker Swarm. 38 | 39 | ### Comparison with Kubernetes: 40 | 41 | Kubernetes is undoubtedly a powerful container orchestration tool with a vast ecosystem, but it comes with a steeper learning curve. 
ECS, on the other hand, offers a more straightforward setup and is tightly integrated with other AWS services, making it a preferred choice for AWS-centric environments. 42 | 43 | ### Comparison with Docker Swarm: 44 | 45 | Docker Swarm is relatively easy to set up and is suitable for small to medium-scale deployments. However, as your application grows, ECS outshines Docker Swarm in terms of scalability, reliability, and seamless integration with AWS features like IAM roles and CloudWatch. 46 | 47 | ## 3. ECS Fundamentals 48 | 49 | To understand ECS better, let's explore its core components: 50 | 51 | ### Clusters: 52 | 53 | A cluster is a logical grouping of EC2 instances or Fargate tasks on which you run your containers. It acts as the foundation of ECS, where you can deploy your services. 54 | 55 | ### Task Definitions: 56 | 57 | Task Definitions define how your containers should run, including the Docker image to use, CPU and memory requirements, networking, and more. It is like a blueprint for your containers. 58 | 59 | ### Tasks: 60 | 61 | A task represents a single running instance of a task definition within a cluster. It could be a single container or multiple related containers that need to work together. 62 | 63 | ### Services: 64 | 65 | Services help you maintain a specified number of running tasks simultaneously, ensuring high availability and load balancing for your applications. 66 | 67 | ## 4. Pros of Using AWS ECS 68 | 69 | - **Fully Managed Service**: AWS handles the underlying infrastructure, making it easier for you to focus on deploying and managing applications. 70 | 71 | - **Seamless Integration**: ECS seamlessly integrates with other AWS services like IAM, CloudWatch, Load Balancers, and more. 72 | 73 | - **Scalability**: With support for Auto Scaling, ECS can automatically adjust the number of tasks based on demand. 74 | 75 | - **Cost-Effective**: You pay only for the AWS resources you use, and you can take advantage of cost optimization features. 76 | 77 | ## 5. Cons of Using AWS ECS 78 | 79 | - **AWS-Centric**: If you have a multi-cloud strategy or already invested heavily in another cloud provider, ECS's tight integration with AWS might be a limitation. 80 | 81 | - **Learning Curve for Advanced Features**: While basic usage is easy, utilizing more advanced features might require a deeper understanding. 82 | 83 | - **Limited Flexibility**: Although ECS can run non-Docker workloads with EC2 launch types, it is primarily optimized for Docker containers. 84 | 85 | ## 6. Installation and Configuration 86 | 87 | Let's get our hands dirty and set up AWS ECS step-by-step. 88 | 89 | ### Prerequisites: 90 | 91 | - An AWS account with appropriate IAM permissions. 92 | - The AWS CLI and ECS CLI installed on your local machine. 93 | 94 | ### Setting Up ECS CLI: 95 | 96 | ECS CLI is a command-line tool that simplifies the process of creating and managing ECS resources. 97 | 98 | ```bash 99 | $ ecs-cli configure --region --access-key --secret-key --cluster 100 | ``` 101 | 102 | ### Configuring AWS Credentials: 103 | 104 | Ensure you have the necessary AWS credentials configured using `aws configure` command. 105 | 106 | ## 7. Deploying Your First Application on ECS 107 | 108 | In this section, we'll deploy a simple web application using ECS. 109 | 110 | ### Preparing the Application: 111 | 112 | 1. Create a Dockerfile for your web application. 113 | 2. Build the Docker image and push it to Amazon ECR (Elastic Container Registry). 
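As a rough sketch (the repository name `flask-ecs-demo`, account ID, and region are placeholders), the build-and-push sequence looks like this; the equivalent commands for this repo are also listed in `day-21/commands.md`.

```
# Build the image from the Dockerfile in day-21/
docker build -t flask-ecs-demo .

# Authenticate Docker with your ECR registry
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com

# Tag and push the image to an existing ECR repository
docker tag flask-ecs-demo:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/flask-ecs-demo:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/flask-ecs-demo:latest
```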
114 | 115 | ### Creating a Task Definition: 116 | 117 | Define the task using the ECS CLI or the AWS Management Console. 118 | 119 | ### Configuring the Service: 120 | 121 | Create an ECS service to manage the desired number of tasks and set up load balancing. 122 | 123 | ### Deploying the Service: 124 | 125 | Use the ECS CLI or the AWS Management Console to deploy the service. 126 | 127 | ### Monitoring the Service: 128 | 129 | Monitor your ECS service using AWS CloudWatch metrics and logs. 130 | 131 | ## 8. Conclusion 132 | 133 | In conclusion, AWS ECS offers a robust and user-friendly platform for deploying and managing containerized applications. We covered the fundamentals of ECS, compared it with its alternatives, discussed its pros and cons, and walked through the installation, configuration, and deployment of a sample application. 134 | -------------------------------------------------------------------------------- /day-21/app.py: -------------------------------------------------------------------------------- 1 | # app.py 2 | 3 | from flask import Flask 4 | 5 | app = Flask(__name__) 6 | 7 | # Route to the root URL 8 | @app.route('/') 9 | def hello(): 10 | return 'Hello, Flask on Docker!' 11 | 12 | # Route to a custom endpoint 13 | @app.route('/greet/') 14 | def greet(name): 15 | return f'Hello, {name}! Welcome to Flask on Docker.' 16 | 17 | if __name__ == '__main__': 18 | app.run(host='0.0.0.0', port=3000) 19 | -------------------------------------------------------------------------------- /day-21/commands.md: -------------------------------------------------------------------------------- 1 | # Login to ECR (replace and with your actual values) 2 | $ aws ecr get-login-password --region | docker login --username AWS --password-stdin .dkr.ecr..amazonaws.com 3 | 4 | # Build the Docker image (replace with your ECR repository name) 5 | $ docker build -t .dkr.ecr..amazonaws.com/:latest . 
6 | 7 | # Push the Docker image to ECR (replace with your ECR repository name) 8 | $ docker push .dkr.ecr..amazonaws.com/:latest 9 | -------------------------------------------------------------------------------- /day-21/requirements.txt: -------------------------------------------------------------------------------- 1 | Flask==2.0.1 2 | -------------------------------------------------------------------------------- /day-22/2048-app-deploy-ingress.md: -------------------------------------------------------------------------------- 1 | # 2048 App 2 | 3 | ## Create Fargate profile 4 | 5 | ``` 6 | eksctl create fargateprofile \ 7 | --cluster demo-cluster \ 8 | --region us-east-1 \ 9 | --name alb-sample-app \ 10 | --namespace game-2048 11 | ``` 12 | 13 | ## Deploy the deployment, service and Ingress 14 | 15 | ``` 16 | kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml 17 | ``` 18 | 19 | 20 | 21 | ![Screenshot 2023-08-03 at 7 57 15 PM](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/93b06a9f-67f9-404f-b0ad-18e3095b7353) 22 | -------------------------------------------------------------------------------- /day-22/alb-controller-add-on.md: -------------------------------------------------------------------------------- 1 | # How to setup alb add on 2 | 3 | Download IAM policy 4 | 5 | ``` 6 | curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json 7 | ``` 8 | 9 | Create IAM Policy 10 | 11 | ``` 12 | aws iam create-policy \ 13 | --policy-name AWSLoadBalancerControllerIAMPolicy \ 14 | --policy-document file://iam_policy.json 15 | ``` 16 | 17 | Create IAM Role 18 | 19 | ``` 20 | eksctl create iamserviceaccount \ 21 | --cluster= \ 22 | --namespace=kube-system \ 23 | --name=aws-load-balancer-controller \ 24 | --role-name AmazonEKSLoadBalancerControllerRole \ 25 | --attach-policy-arn=arn:aws:iam:::policy/AWSLoadBalancerControllerIAMPolicy \ 26 | --approve 27 | ``` 28 | 29 | ## Deploy ALB controller 30 | 31 | Add helm repo 32 | 33 | ``` 34 | helm repo add eks https://aws.github.io/eks-charts 35 | ``` 36 | 37 | Update the repo 38 | 39 | ``` 40 | helm repo update eks 41 | ``` 42 | 43 | Install 44 | 45 | ``` 46 | helm install aws-load-balancer-controller eks/aws-load-balancer-controller \ 47 | -n kube-system \ 48 | --set clusterName= \ 49 | --set serviceAccount.create=false \ 50 | --set serviceAccount.name=aws-load-balancer-controller \ 51 | --set region= \ 52 | --set vpcId= 53 | ``` 54 | 55 | Verify that the deployments are running. 
56 | 57 | ``` 58 | kubectl get deployment -n kube-system aws-load-balancer-controller 59 | ``` 60 | -------------------------------------------------------------------------------- /day-22/configure-oidc-connector.md: -------------------------------------------------------------------------------- 1 | # commands to configure IAM OIDC provider 2 | 3 | ``` 4 | export cluster_name=demo-cluster 5 | ``` 6 | 7 | ``` 8 | oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5) 9 | ``` 10 | 11 | ## Check if there is an IAM OIDC provider configured already 12 | 13 | - aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4\n 14 | 15 | If not, run the below command 16 | 17 | ``` 18 | eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve 19 | ``` -------------------------------------------------------------------------------- /day-22/installing-eks.md: -------------------------------------------------------------------------------- 1 | # Install EKS 2 | 3 | Please follow the prerequisites doc before this. 4 | 5 | ## Install using Fargate 6 | 7 | ``` 8 | eksctl create cluster --name demo-cluster --region us-east-1 --fargate 9 | ``` 10 | 11 | ## Delete the cluster 12 | 13 | ``` 14 | eksctl delete cluster --name demo-cluster --region us-east-1 15 | ``` 16 | 17 | 18 | 19 | -------------------------------------------------------------------------------- /day-22/prerequisites.md: -------------------------------------------------------------------------------- 1 | # prerequisites 2 | 3 | kubectl – A command line tool for working with Kubernetes clusters. For more information, see [Installing or updating kubectl]("https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html"). 4 | 5 | eksctl – A command line tool for working with EKS clusters that automates many individual tasks. For more information, see [Installing or updating]("https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html"). 6 | 7 | AWS CLI – A command line tool for working with AWS services, including Amazon EKS. For more information, see [Installing, updating, and uninstalling the AWS CLI]("https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html") in the AWS Command Line Interface User Guide. After installing the AWS CLI, we recommend that you also configure it. For more information, see [Quick configuration]("https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html#cli-configure-quickstart-config") with aws configure in the AWS Command Line Interface User Guide. 
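Before creating the cluster, it helps to confirm each tool is installed and on your PATH (output will vary by version):

```
# Verify the tools are available
kubectl version --client
eksctl version
aws --version

# Configure AWS credentials and a default region if you have not already
aws configure
```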
8 | -------------------------------------------------------------------------------- /day-22/sample-app.md: -------------------------------------------------------------------------------- 1 | # Sample App deployment 2 | 3 | ## Copy the deploy.yml to your local and save it with name deploy.yml 4 | 5 | ``` 6 | apiVersion: apps/v1 7 | kind: Deployment 8 | metadata: 9 | name: eks-sample-linux-deployment 10 | labels: 11 | app: eks-sample-linux-app 12 | spec: 13 | replicas: 3 14 | selector: 15 | matchLabels: 16 | app: eks-sample-linux-app 17 | template: 18 | metadata: 19 | labels: 20 | app: eks-sample-linux-app 21 | spec: 22 | affinity: 23 | nodeAffinity: 24 | requiredDuringSchedulingIgnoredDuringExecution: 25 | nodeSelectorTerms: 26 | - matchExpressions: 27 | - key: kubernetes.io/arch 28 | operator: In 29 | values: 30 | - amd64 31 | - arm64 32 | containers: 33 | - name: nginx 34 | image: public.ecr.aws/nginx/nginx:1.23 35 | ports: 36 | - name: http 37 | containerPort: 80 38 | imagePullPolicy: IfNotPresent 39 | nodeSelector: 40 | kubernetes.io/os: linux 41 | ``` 42 | 43 | ## Deploy the app 44 | 45 | ``` 46 | kubectl apply -f deploy.yaml 47 | ``` 48 | 49 | 50 | ## Copy the below file as service.yml 51 | 52 | ``` 53 | apiVersion: v1 54 | kind: Service 55 | metadata: 56 | name: eks-sample-linux-service 57 | labels: 58 | app: eks-sample-linux-app 59 | spec: 60 | selector: 61 | app: eks-sample-linux-app 62 | ports: 63 | - protocol: TCP 64 | port: 80 65 | targetPort: 80 66 | ``` 67 | 68 | ## Deploy the service 69 | 70 | ``` 71 | kubectl apply -f service.yaml 72 | ``` -------------------------------------------------------------------------------- /day-24/main.tf: -------------------------------------------------------------------------------- 1 | resource "aws_vpc" "myvpc" { 2 | cidr_block = var.cidr 3 | } 4 | 5 | resource "aws_subnet" "sub1" { 6 | vpc_id = aws_vpc.myvpc.id 7 | cidr_block = "10.0.0.0/24" 8 | availability_zone = "us-east-1a" 9 | map_public_ip_on_launch = true 10 | } 11 | 12 | resource "aws_subnet" "sub2" { 13 | vpc_id = aws_vpc.myvpc.id 14 | cidr_block = "10.0.1.0/24" 15 | availability_zone = "us-east-1b" 16 | map_public_ip_on_launch = true 17 | } 18 | 19 | resource "aws_internet_gateway" "igw" { 20 | vpc_id = aws_vpc.myvpc.id 21 | } 22 | 23 | resource "aws_route_table" "RT" { 24 | vpc_id = aws_vpc.myvpc.id 25 | 26 | route { 27 | cidr_block = "0.0.0.0/0" 28 | gateway_id = aws_internet_gateway.igw.id 29 | } 30 | } 31 | 32 | resource "aws_route_table_association" "rta1" { 33 | subnet_id = aws_subnet.sub1.id 34 | route_table_id = aws_route_table.RT.id 35 | } 36 | 37 | resource "aws_route_table_association" "rta2" { 38 | subnet_id = aws_subnet.sub2.id 39 | route_table_id = aws_route_table.RT.id 40 | } 41 | 42 | resource "aws_security_group" "webSg" { 43 | name = "web" 44 | vpc_id = aws_vpc.myvpc.id 45 | 46 | ingress { 47 | description = "HTTP from VPC" 48 | from_port = 80 49 | to_port = 80 50 | protocol = "tcp" 51 | cidr_blocks = ["0.0.0.0/0"] 52 | } 53 | ingress { 54 | description = "SSH" 55 | from_port = 22 56 | to_port = 22 57 | protocol = "tcp" 58 | cidr_blocks = ["0.0.0.0/0"] 59 | } 60 | 61 | egress { 62 | from_port = 0 63 | to_port = 0 64 | protocol = "-1" 65 | cidr_blocks = ["0.0.0.0/0"] 66 | } 67 | 68 | tags = { 69 | Name = "Web-sg" 70 | } 71 | } 72 | 73 | resource "aws_s3_bucket" "example" { 74 | bucket = "abhisheksterraform2023project" 75 | } 76 | 77 | 78 | resource "aws_instance" "webserver1" { 79 | ami = "ami-0261755bbcb8c4a84" 80 | instance_type = "t2.micro" 81 | 
vpc_security_group_ids = [aws_security_group.webSg.id] 82 | subnet_id = aws_subnet.sub1.id 83 | user_data = base64encode(file("userdata.sh")) 84 | } 85 | 86 | resource "aws_instance" "webserver2" { 87 | ami = "ami-0261755bbcb8c4a84" 88 | instance_type = "t2.micro" 89 | vpc_security_group_ids = [aws_security_group.webSg.id] 90 | subnet_id = aws_subnet.sub2.id 91 | user_data = base64encode(file("userdata1.sh")) 92 | } 93 | 94 | #create alb 95 | resource "aws_lb" "myalb" { 96 | name = "myalb" 97 | internal = false 98 | load_balancer_type = "application" 99 | 100 | security_groups = [aws_security_group.webSg.id] 101 | subnets = [aws_subnet.sub1.id, aws_subnet.sub2.id] 102 | 103 | tags = { 104 | Name = "web" 105 | } 106 | } 107 | 108 | resource "aws_lb_target_group" "tg" { 109 | name = "myTG" 110 | port = 80 111 | protocol = "HTTP" 112 | vpc_id = aws_vpc.myvpc.id 113 | 114 | health_check { 115 | path = "/" 116 | port = "traffic-port" 117 | } 118 | } 119 | 120 | resource "aws_lb_target_group_attachment" "attach1" { 121 | target_group_arn = aws_lb_target_group.tg.arn 122 | target_id = aws_instance.webserver1.id 123 | port = 80 124 | } 125 | 126 | resource "aws_lb_target_group_attachment" "attach2" { 127 | target_group_arn = aws_lb_target_group.tg.arn 128 | target_id = aws_instance.webserver2.id 129 | port = 80 130 | } 131 | 132 | resource "aws_lb_listener" "listener" { 133 | load_balancer_arn = aws_lb.myalb.arn 134 | port = 80 135 | protocol = "HTTP" 136 | 137 | default_action { 138 | target_group_arn = aws_lb_target_group.tg.arn 139 | type = "forward" 140 | } 141 | } 142 | 143 | output "loadbalancerdns" { 144 | value = aws_lb.myalb.dns_name 145 | } -------------------------------------------------------------------------------- /day-24/provider.tf: -------------------------------------------------------------------------------- 1 | terraform { 2 | required_providers { 3 | aws = { 4 | source = "hashicorp/aws" 5 | version = "5.11.0" 6 | } 7 | } 8 | } 9 | 10 | provider "aws" { 11 | # Configuration options 12 | region = "us-east-1" 13 | } -------------------------------------------------------------------------------- /day-24/userdata.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | apt update 3 | apt install -y apache2 4 | 5 | # Get the instance ID using the instance metadata 6 | INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) 7 | 8 | # Install the AWS CLI 9 | apt install -y awscli 10 | 11 | # Download the images from S3 bucket 12 | #aws s3 cp s3://myterraformprojectbucket2023/project.webp /var/www/html/project.png --acl public-read 13 | 14 | # Create a simple HTML file with the portfolio content and display the images 15 | cat < /var/www/html/index.html 16 | 17 | 18 | 19 | My Portfolio 20 | 31 | 32 | 33 |

<h1>Terraform Project Server 1</h1>
34 | <h2>Instance ID: $INSTANCE_ID</h2>
35 | <p>Welcome to Abhishek Veeramalla's Channel</p>

36 | 37 | 38 | 39 | EOF 40 | 41 | # Start Apache and enable it on boot 42 | systemctl start apache2 43 | systemctl enable apache2 -------------------------------------------------------------------------------- /day-24/userdata1.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | apt update 3 | apt install -y apache2 4 | 5 | # Get the instance ID using the instance metadata 6 | INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id) 7 | 8 | # Install the AWS CLI 9 | apt install -y awscli 10 | 11 | # Download the images from S3 bucket 12 | #aws s3 cp s3://myterraformprojectbucket2023/project.webp /var/www/html/project.png --acl public-read 13 | 14 | # Create a simple HTML file with the portfolio content and display the images 15 | cat < /var/www/html/index.html 16 | 17 | 18 | 19 | My Portfolio 20 | 31 | 32 | 33 |

<h1>Terraform Project Server 1</h1>
34 | <h2>Instance ID: $INSTANCE_ID</h2>
35 | <p>Welcome to CloudChamp's Channel</p>

36 | 37 | 38 | 39 | EOF 40 | 41 | # Start Apache and enable it on boot 42 | systemctl start apache2 43 | systemctl enable apache2 -------------------------------------------------------------------------------- /day-24/variables.tf: -------------------------------------------------------------------------------- 1 | variable "cidr" { 2 | default = "10.0.0.0/16" 3 | } -------------------------------------------------------------------------------- /day-25/README.md: -------------------------------------------------------------------------------- 1 | # AWS Config 2 | 3 | we'll use AWS Config to detect compliant and non-compliant ec2 instances for below rule. 4 | - compliant ec2 instance has monitoring enabled 5 | - non-compliant ec2 instance does not have monitoring enabled 6 | 7 | Step 1: Set Up AWS Config 8 | 9 | Log in to your AWS Management Console. 10 | 11 | Navigate to the AWS Config service. 12 | 13 | Click on "Get started" if you're using AWS Config for the first time. 14 | 15 | Configure the delivery channel settings, which include specifying an Amazon S3 bucket where AWS Config will store configuration history. 16 | 17 | Choose the resource types you want AWS Config to monitor. In this case, select "Amazon EC2 Instances." 18 | 19 | Step 2: Create a Custom Config Rule 20 | 21 | Navigate to the AWS Config console. 22 | 23 | In the left navigation pane, click on "Rules." 24 | 25 | Click on the "Add rule" button. 26 | 27 | Choose "Create a custom rule." 28 | 29 | Give your rule a name and description (e.g., "Monitoring for EC2 Instances"). 30 | 31 | For "Scope of changes," choose "Resources." 32 | 33 | Define the rule trigger. You can use AWS Lambda as the trigger source. If you haven't already created a Lambda function for this rule, create one that checks whether monitoring is enabled for an EC2 instance. The Lambda function will return whether the resource is compliant or not based on monitoring status. 34 | 35 | Step 3: Define the Custom Rule in AWS Config 36 | 37 | Choose your Lambda function from the dropdown list as the evaluator for the rule. 38 | 39 | Specify the trigger type (e.g., "Configuration changes"). 40 | 41 | Save the rule. 42 | 43 | Step 4: Monitor and Alert 44 | 45 | AWS Config will now continuously evaluate your EC2 instances against the rule you've created. 46 | 47 | If any EC2 instance is found without monitoring enabled, the custom rule's Lambda function will mark it as non-compliant. -------------------------------------------------------------------------------- /day-25/lambda_function.py: -------------------------------------------------------------------------------- 1 | import boto3 2 | import json 3 | 4 | def lambda_handler(event, context): 5 | 6 | # Get the specific EC2 instance. 7 | ec2_client = boto3.client('ec2') 8 | 9 | # Assume compliant by default 10 | compliance_status = "COMPLIANT" 11 | 12 | # Extract the configuration item from the invokingEvent 13 | config = json.loads(event['invokingEvent']) 14 | 15 | configuration_item = config["configurationItem"] 16 | 17 | # Extract the instanceId 18 | instance_id = configuration_item['configuration']['instanceId'] 19 | 20 | # Get complete Instance details 21 | instance = ec2_client.describe_instances(InstanceIds=[instance_id])['Reservations'][0]['Instances'][0] 22 | 23 | # Check if the specific EC2 instance has Cloud Trail logging enabled. 
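# Detailed monitoring must be enabled on the instance for it to be considered compliant.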
24 | 25 | if not instance['Monitoring']['State'] == "enabled": 26 | compliance_status = "NON_COMPLIANT" 27 | 28 | evaluation = { 29 | 'ComplianceResourceType': 'AWS::EC2::Instance', 30 | 'ComplianceResourceId': instance_id, 31 | 'ComplianceType': compliance_status, 32 | 'Annotation': 'Detailed monitoring is not enabled.', 33 | 'OrderingTimestamp': config['notificationCreationTime'] 34 | } 35 | 36 | config_client = boto3.client('config') 37 | 38 | response = config_client.put_evaluations( 39 | Evaluations=[evaluation], 40 | ResultToken=event['resultToken'] 41 | ) 42 | 43 | return response 44 | -------------------------------------------------------------------------------- /day-25/lambda_function_permissions.md: -------------------------------------------------------------------------------- 1 | Below are the permissions that you need to grant to the role that executes the lambda function used in the project. 2 | 3 | ![Screenshot 2023-08-10 at 11 41 54 PM](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/99e08bdb-17aa-4962-a96a-3cecdb99ee8d) 4 | -------------------------------------------------------------------------------- /day-3/README.md: -------------------------------------------------------------------------------- 1 | # What will you learn 2 | 3 | ## Introduction to EC2: 4 | 5 | What is EC2, and why is it important? 6 | 7 | ``` 8 | - Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. 9 | - Access reliable, scalable infrastructure on demand. Scale capacity within minutes with SLA commitment of 99.99% availability. 10 | - Provide secure compute for your applications. Security is built into the foundation of Amazon EC2 with the AWS Nitro System. 11 | - Optimize performance and cost with flexible options like AWS Graviton-based instances, Amazon EC2 Spot instances, and AWS Savings Plans. 12 | ``` 13 | 14 | EC2 usecases 15 | 16 | ``` 17 | Deliver secure, reliable, high-performance, and cost-effective compute infrastructure to meet demanding business needs. 18 | Access the on-demand infrastructure and capacity you need to run HPC applications faster and cost-effectively. 19 | Access environments in minutes, dynamically scale capacity as needed, and benefit from AWS’s pay-as-you-go pricing. 20 | Deliver the broadest choice of compute, networking (up to 400 Gbps), and storage services purpose-built to optimize price performance for ML projects 21 | ``` 22 | 23 | EC2 Instance Types 24 | 25 | Recommended to follow [this](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/instance-types.html) page for very detailed and updated information. 26 | 27 | General purpose 28 | 29 | ``` 30 | General Purpose instances are designed to deliver a balance of compute, memory, and network resources. They are suitable for a wide range of applications, including web servers, 31 | small databases, development and test environments, and more. 32 | ``` 33 | 34 | Compute optimized 35 | 36 | ``` 37 | Compute Optimized instances provide a higher ratio of compute power to memory. They excel in workloads that require high-performance processing such as batch processing, 38 | scientific modeling, gaming servers, and high-performance web servers. 39 | ``` 40 | 41 | Memory optimized 42 | 43 | ``` 44 | Memory Optimized instances are designed to handle memory-intensive workloads. 
They are suitable for applications that require large amounts of memory, such as in-memory databases, 45 | real-time big data analytics, and high-performance computing. 46 | ``` 47 | 48 | Storage optimized 49 | 50 | ``` 51 | Storage Optimized instances are optimized for applications that require high, sequential read and write access to large datasets. 52 | They are ideal for tasks like data warehousing, log processing, and distributed file systems. 53 | ``` 54 | 55 | Accelerated computing 56 | 57 | ``` 58 | Accelerated Computing Instances typically come with one or more types of accelerators, such as Graphics Processing Units (GPUs), 59 | Field Programmable Gate Arrays (FPGAs), or custom Application Specific Integrated Circuits (ASICs). 60 | These accelerators offload computationally intensive tasks from the main CPU, enabling faster and more efficient processing for specific workloads. 61 | ``` 62 | 63 | ![image](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/fc8e083c-dba5-41a6-94b9-14ebef0255c1) 64 | 65 | Instance families 66 | 67 | ``` 68 | C – Compute 69 | 70 | D – Dense storage 71 | 72 | F – FPGA 73 | 74 | G – GPU 75 | 76 | Hpc – High performance computing 77 | 78 | I – I/O 79 | 80 | Inf – AWS Inferentia 81 | 82 | M – Most scenarios 83 | 84 | P – GPU 85 | 86 | R – Random access memory 87 | 88 | T – Turbo 89 | 90 | Trn – AWS Tranium 91 | 92 | U – Ultra-high memory 93 | 94 | VT – Video transcoding 95 | 96 | X – Extra-large memory 97 | ``` 98 | 99 | Additional capabilities 100 | 101 | ``` 102 | a – AMD processors 103 | 104 | g – AWS Graviton processors 105 | 106 | i – Intel processors 107 | 108 | d – Instance store volumes 109 | 110 | n – Network and EBS optimized 111 | 112 | e – Extra storage or memory 113 | 114 | z – High performance 115 | ``` 116 | 117 | ## EC2 Instance Basics: 118 | 119 | Understanding the concept of virtual servers and instances. 120 | Key components of an EC2 instance: AMI (Amazon Machine Image), instance types, and instance states. 121 | Differentiating between On-Demand, Reserved, and Spot instances. 122 | 123 | ## Launching an EC2 Instance: 124 | 125 | - Step-by-step guide on launching an EC2 instance using the AWS Management Console. 126 | - Configuring instance details, such as instance type, network settings, and storage options. 127 | - Understanding security groups and key pairs for securing instances. 128 | 129 | ## Managing EC2 Instances: 130 | 131 | - Starting, stopping, and terminating instances. 132 | - Monitoring instance performance and utilization. 133 | - Basic troubleshooting and accessing instances using SSH (Secure Shell). 134 | -------------------------------------------------------------------------------- /day-4/README.md: -------------------------------------------------------------------------------- 1 | # VPC 2 | 3 | Imagine you want to set up a private, secure, and isolated area in the cloud where you can run your applications and store your data. This is where a VPC comes into play. 4 | 5 | A VPC is a virtual network that you create in the cloud. It allows you to have your own private section of the internet, just like having your own network within a larger network. Within this VPC, you can create and manage various resources, such as servers, databases, and storage. 6 | 7 | Think of it as having your own little "internet" within the bigger internet. This virtual network is completely isolated from other users' networks, so your data and applications are secure and protected. 
8 | 9 | Just like a physical network, a VPC has its own set of rules and configurations. You can define the IP address range for your VPC and create smaller subnetworks within it called subnets. These subnets help you organize your resources and control how they communicate with each other. 10 | 11 | To connect your VPC to the internet or other networks, you can set up gateways or routers. These act as entry and exit points for traffic going in and out of your VPC. You can control the flow of traffic and set up security measures to protect your resources from unauthorized access. 12 | 13 | With a VPC, you have control over your network environment. You can define access rules, set up firewalls, and configure security groups to regulate who can access your resources and how they can communicate. 14 | 15 | ![image](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/12cc10b6-724c-42c9-b07b-d8a7ce124e24) 16 | 17 | By default, when you create an AWS account, AWS will create a default VPC for you but this default VPC is just to get started with AWS. You should create VPCs for applications or projects. 18 | 19 | ## VPC components 20 | 21 | The following features help you configure a VPC to provide the connectivity that your applications need: 22 | 23 | Virtual private clouds (VPC) 24 | 25 | A VPC is a virtual network that closely resembles a traditional network that you'd operate in your own data center. After you create a VPC, you can add subnets. 26 | Subnets 27 | 28 | A subnet is a range of IP addresses in your VPC. A subnet must reside in a single Availability Zone. After you add subnets, you can deploy AWS resources in your VPC. 29 | IP addressing 30 | 31 | You can assign IP addresses, both IPv4 and IPv6, to your VPCs and subnets. You can also bring your public IPv4 and IPv6 GUA addresses to AWS and allocate them to resources in your VPC, such as EC2 instances, NAT gateways, and Network Load Balancers. 32 | 33 | Network Access Control List (NACL) 34 | 35 | A Network Access Control List is a stateless firewall that controls inbound and outbound traffic at the subnet level. It operates at the IP address level and can allow or deny traffic based on rules that you define. NACLs provide an additional layer of network security for your VPC. 36 | 37 | Security Group 38 | 39 | A security group acts as a virtual firewall for instances (EC2 instances or other resources) within a VPC. It controls inbound and outbound traffic at the instance level. Security groups allow you to define rules that permit or restrict traffic based on protocols, ports, and IP addresses. 40 | 41 | Routing 42 | 43 | Use route tables to determine where network traffic from your subnet or gateway is directed. 44 | Gateways and endpoints 45 | 46 | A gateway connects your VPC to another network. For example, use an internet gateway to connect your VPC to the internet. Use a VPC endpoint to connect to AWS services privately, without the use of an internet gateway or NAT device. 47 | Peering connections 48 | 49 | Use a VPC peering connection to route traffic between the resources in two VPCs. 50 | Traffic Mirroring 51 | 52 | Copy network traffic from network interfaces and send it to security and monitoring appliances for deep packet inspection. 53 | Transit gateways 54 | 55 | Use a transit gateway, which acts as a central hub, to route traffic between your VPCs, VPN connections, and AWS Direct Connect connections. 
56 | VPC Flow Logs 57 | 58 | A flow log captures information about the IP traffic going to and from network interfaces in your VPC. 59 | VPN connections 60 | 61 | Connect your VPCs to your on-premises networks using AWS Virtual Private Network (AWS VPN). 62 | 63 | 64 | ## Resources 65 | 66 | VPC with servers in private subnets and NAT 67 | 68 | https://docs.aws.amazon.com/vpc/latest/userguide/vpc-example-private-subnets-nat.html 69 | 70 | ![image](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/89d8316e-7b70-4821-a6bf-67d1dcc4d2fb) 71 | 72 | 73 | 74 | -------------------------------------------------------------------------------- /day-5/README.md: -------------------------------------------------------------------------------- 1 | # AWS Security using Security Groups and NACL 2 | 3 | AWS (Amazon Web Services) provides multiple layers of security to protect resources and data within its cloud infrastructure. Two important components for network security in AWS are Security Groups and Network Access Control Lists (NACLs). Let's explore how each of them works: 4 | 5 | Security Groups: 6 | Security Groups act as virtual firewalls for Amazon EC2 instances (virtual servers) at the instance level. They control inbound and outbound traffic by allowing or denying specific protocols, ports, and IP addresses. 7 | Each EC2 instance can be associated with one or more security groups, and each security group consists of inbound and outbound rules. 8 | Inbound rules determine the traffic that is allowed to reach the EC2 instance, whereas outbound rules control the traffic leaving the instance. 9 | Security Groups can be configured using IP addresses, CIDR blocks, security group IDs, or DNS names to specify the source or destination of the traffic. 10 | They operate at the instance level and evaluate the rules before allowing traffic to reach the instance. 11 | Security Groups are stateful, meaning that if an inbound rule allows traffic, the corresponding outbound traffic is automatically allowed, and vice versa. 12 | Changes made to security group rules take effect immediately. 13 | 14 | Network Access Control Lists (NACLs): 15 | NACLs are an additional layer of security that operates at the subnet level. They act as stateless traffic filters for inbound and outbound traffic at the subnet boundary. 16 | Unlike Security Groups, NACLs are associated with subnets, and each subnet can have only one NACL. However, multiple subnets can share the same NACL. 17 | NACLs consist of a numbered list of rules (numbered in ascending order) that are evaluated in order from lowest to highest. 18 | Each rule in the NACL includes a rule number, protocol, rule action (allow or deny), source or destination IP address range, port range, and ICMP (Internet Control Message Protocol) type. 19 | NACL rules can be configured to allow or deny specific types of traffic based on the defined criteria. 20 | They are stateless, which means that if an inbound rule allows traffic, the corresponding outbound traffic must be explicitly allowed using a separate outbound rule. 21 | Changes made to NACL rules may take some time to propagate to all the resources using the associated subnet. 
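To make the difference concrete, below is a minimal sketch of how each layer is configured with the AWS CLI. The group and ACL IDs are placeholders; note that the security group rule is stateful, while the NACL entry only covers one direction and would need a matching outbound rule for return traffic.

```
# Security group (stateful, instance level): allow inbound HTTP
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0

# NACL (stateless, subnet level): numbered rule evaluated at the subnet boundary
aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 100 \
    --protocol tcp \
    --port-range From=80,To=80 \
    --cidr-block 0.0.0.0/0 \
    --rule-action allow
```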
22 | 23 | ## Project Implemented in the video 24 | 25 | 26 | ![Screenshot 2023-06-29 at 12 14 32 AM](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/30bbc9e8-6502-438b-8adf-ece8b81edce9) 27 | 28 | -------------------------------------------------------------------------------- /day-6/README.md: -------------------------------------------------------------------------------- 1 | # Route53 2 | 3 | TODO 4 | -------------------------------------------------------------------------------- /day-7/vpc-demo-2-tier-app: -------------------------------------------------------------------------------- 1 | # VPC Demo for 2 tier app in private subnet 2 | 3 | https://youtu.be/FZPTL_kNvXc 4 | -------------------------------------------------------------------------------- /day-8/Interview_q&a: -------------------------------------------------------------------------------- 1 | # Scenario Based Interview Questions on EC2, IAM and VPC 2 | 3 | 4 | Q: You have been assigned to design a VPC architecture for a 2-tier application. The application needs to be highly available and scalable. 5 | How would you design the VPC architecture? 6 | 7 | A: In this scenario, I would design a VPC architecture in the following way. 8 | I would create 2 subnets: public and private. The public subnet would contain the load balancers and be accessible from the internet. The private subnet would host the application servers. 9 | I would distribute the subnets across multiple Availability Zones for high availability. Additionally, I would configure auto scaling groups for the application servers. 10 | 11 | Q: Your organization has a VPC with multiple subnets. You want to restrict outbound internet access for resources in one subnet, but allow outbound internet access for resources in another subnet. How would you achieve this? 12 | 13 | A: To restrict outbound internet access for resources in one subnet, we can modify the route table associated with that subnet. In the route table, we can remove the default route (0.0.0.0/0) that points to an internet gateway. 14 | This would prevent resources in that subnet from accessing the internet. For the subnet where outbound internet access is required, we can keep the default route pointing to the internet gateway. 15 | 16 | Q: You have a VPC with a public subnet and a private subnet. Instances in the private subnet need to access the internet for software updates. How would you allow internet access for instances in the private subnet? 17 | 18 | A: To allow internet access for instances in the private subnet, we can use a NAT Gateway or a NAT instance. 19 | We would place the NAT Gateway/instance in the public subnet and configure the private subnet route table to send outbound traffic to the NAT Gateway/instance. This way, instances in the private subnet can access the internet through the NAT Gateway/instance. 20 | 21 | Q: You have launched EC2 instances in your VPC, and you want them to communicate with each other using private IP addresses. What steps would you take to enable this communication? 22 | 23 | A: By default, instances within the same VPC can communicate with each other using private IP addresses. 24 | To ensure this communication, we need to make sure that the instances are launched in the same VPC and are placed in the same subnet or subnets that are connected through a peering connection or a VPC peering link. 
25 | Additionally, we should check the security groups associated with the instances to ensure that the necessary inbound and outbound rules are configured to allow communication between them. 26 | 27 | Q: You want to implement strict network access control for your VPC resources. How would you achieve this? 28 | 29 | A: To implement granular network access control for VPC resources, we can use Network Access Control Lists (ACLs). 30 | NACLs are stateless and operate at the subnet level. We can define inbound and outbound rules in the NACLs to allow or deny traffic based on source and destination IP addresses, ports, and protocols. 31 | By carefully configuring NACL rules, we can enforce fine-grained access control for traffic entering and leaving the subnets. 32 | 33 | Q: Your organization requires an isolated environment within the VPC for running sensitive workloads. How would you set up this isolated environment? 34 | 35 | A: To set up an isolated environment within the VPC, we can create a subnet with no internet gateway attached. 36 | This subnet, known as an "isolated subnet," will not have direct internet connectivity. We can place the sensitive workloads in this subnet, ensuring that they are protected from inbound and outbound internet traffic. 37 | However, if these workloads require outbound internet access, we can set up a NAT Gateway or NAT instance in a different subnet and configure the isolated subnet's route table to send outbound traffic through the NAT Gateway/instance. 38 | 39 | Q: Your application needs to access AWS services, such as S3 securely within your VPC. How would you achieve this? 40 | 41 | A: To securely access AWS services within the VPC, we can use VPC endpoints. VPC endpoints allow instances in the VPC to communicate with AWS services privately, without requiring internet gateways or NAT gateways. 42 | We can create VPC endpoints for specific AWS services, such as S3 and DynamoDB, and associate them with the VPC. 43 | This enables secure and efficient communication between the instances in the VPC and the AWS services. 44 | 45 | Q: What is the difference between NACL and Security groups ? Explain with a use case ? 46 | 47 | A: For example, I want to design a security architecture, I would use a combination of NACLs and security groups. At the subnet level, I would configure NACLs to enforce inbound and outbound traffic restrictions based on source and destination IP addresses, ports, and protocols. NACLs are stateless and can provide an additional layer of defense by filtering traffic at the subnet boundary. 48 | At the instance level, I would leverage security groups to control inbound and outbound traffic. Security groups are stateful and operate at the instance level. By carefully defining security group rules, I can allow or deny specific traffic to and from the instances based on the application's security requirements. 49 | By combining NACLs and security groups, I can achieve granular security controls at both the network and instance level, providing defense-in-depth for the sensitive application. 50 | 51 | Q: What is the difference between IAM users, groups, roles and policies ? 52 | 53 | A: IAM User: An IAM user is an identity within AWS that represents an individual or application needing access to AWS resources. IAM users have permanent long-term credentials, such as a username and password, or access keys (Access Key ID and Secret Access Key). 
IAM users can be assigned directly to IAM policies or added to IAM groups for easier management of permissions. 54 | IAM Role: An IAM role is similar to an IAM user but is not associated with a specific individual. Instead, it is assumed by entities such as IAM users, applications, or services to obtain temporary security credentials. IAM roles are useful when you want to grant permissions to entities that are external to your AWS account or when you want to delegate access to AWS resources across accounts. IAM roles have policies attached to them that define the permissions granted when the role is assumed. 55 | IAM Group: An IAM group is a collection of IAM users. By organizing IAM users into groups, you can manage permissions collectively. IAM groups make it easier to assign permissions to multiple users simultaneously. Users within an IAM group inherit the permissions assigned to that group. For example, you can create a "Developers" group and assign appropriate policies to grant permissions required for developers across your organization. 56 | IAM Policy: An IAM policy is a document that defines permissions and access controls in AWS. IAM policies can be attached to IAM users, IAM roles, and IAM groups to define what actions can be performed on which AWS resources. IAM policies use JSON (JavaScript Object Notation) syntax to specify the permissions and can be created and managed independently of the users, roles, or groups. IAM policies consist of statements that include the actions allowed or denied, the resources on which the actions can be performed, and any additional conditions. 57 | 58 | Q: You have a private subnet in your VPC that contains a number of instances that should not have direct internet access. However, you still need to be able to securely access these instances for administrative purposes. How would you set up a bastion host to facilitate this access? 59 | 60 | A: To securely access the instances in the private subnet, you can set up a bastion host (also known as a jump host or jump box). The bastion host acts as a secure entry point to your private subnet. Here's how you can set up a bastion host: 61 | Create a new EC2 instance in a public subnet, which will serve as the bastion host. Ensure that this instance has a public IP address or is associated with an Elastic IP address for persistent access. 62 | Configure the security group for the bastion host to allow inbound SSH (or RDP for Windows) traffic from your IP address or a restricted range of trusted IP addresses. This limits access to the bastion host to authorized administrators only. 63 | Place the instances in the private subnet and configure their security groups to allow inbound SSH (or RDP) traffic from the bastion host security group. 64 | SSH (or RDP) into the bastion host using your private key or password. From the bastion host, you can then SSH (or RDP) into the instances in the private subnet using their private IP addresses. 65 | 66 | 67 | 68 | 69 | 70 | 71 | 72 | 73 | 74 | 75 | -------------------------------------------------------------------------------- /day-9/README.md: -------------------------------------------------------------------------------- 1 | # AWS S3 2 | 3 | ## About 4 | 5 | What is Amazon S3? 6 | 7 | Simple Storage Service is a scalable and secure cloud storage service provided by Amazon Web Services (AWS). It allows you to store and retrieve any amount of data from anywhere on the web. 8 | 9 | What are S3 buckets? 
10 | 11 | S3 buckets are containers for storing objects (files) in Amazon S3. Each bucket has a unique name globally across all of AWS. You can think of an S3 bucket as a top-level folder that holds your data. 12 | 13 | Why use S3 buckets? 14 | 15 | S3 buckets provide a reliable and highly scalable storage solution for various use cases. They are commonly used for backup and restore, data archiving, content storage for websites, and as a data source for big data analytics. 16 | 17 | Key benefits of S3 buckets 18 | 19 | S3 buckets offer several advantages, including: 20 | 21 | Durability and availability: S3 provides high durability and availability for your data. 22 | Scalability: You can store and retrieve any amount of data without worrying about capacity constraints. 23 | Security: S3 offers multiple security features such as encryption, access control, and audit logging. 24 | Performance: S3 is designed to deliver high performance for data retrieval and storage operations. 25 | Cost-effective: S3 offers cost-effective storage options and pricing models based on your usage patterns. 26 | 27 | ## Creating and Configuring S3 Buckets 28 | 29 | Creating an S3 bucket 30 | 31 | To create an S3 bucket, you can use the AWS Management Console, AWS CLI (Command Line Interface), or AWS SDKs (Software Development Kits). You need to specify a globally 32 | unique bucket name and select the region where you want to create the bucket. 33 | 34 | Choosing a bucket name and region 35 | 36 | The bucket name must be unique across all existing bucket names in Amazon S3. It should follow DNS naming conventions, be 3-63 characters long, and contain only lowercase 37 | letters, numbers, periods, and hyphens. The region selection affects data latency and compliance with specific regulations. 38 | 39 | Bucket properties and configurations 40 | 41 | Versioning: Versioning allows you to keep multiple versions of an object in the bucket. It helps protect against accidental deletions or overwrites. 42 | 43 | Bucket-level permissions and policies 44 | 45 | Bucket-level permissions and policies define who can access and perform actions on the bucket. You can grant permissions using IAM (Identity and Access Management) policies, 46 | which allow fine-grained control over user access to the bucket and its objects. 47 | 48 | ## Uploading and Managing Objects in S3 Buckets 49 | 50 | Uploading objects to S3 buckets 51 | 52 | You can upload objects to an S3 bucket using various methods, including the AWS Management Console, AWS CLI, SDKs, and direct HTTP uploads. 53 | Each object is assigned a unique key (name) within the bucket to retrieve it later. 54 | 55 | Object metadata and properties 56 | 57 | Object metadata contains additional information abouteach object in an S3 bucket. It includes attributes like content type, cache control, encryption settings, 58 | and custom metadata. These properties help in managing and organizing objects within the bucket. 59 | 60 | File formats and object encryption 61 | 62 | S3 supports various file formats, including text files, images, videos, and more. You can encrypt objects stored in S3 using server-side encryption (SSE). 63 | SSE options include SSE-S3 (Amazon-managed keys), SSE-KMS (AWS Key Management Service), and SSE-C (customer-provided keys). 64 | 65 | Lifecycle management 66 | 67 | Lifecycle management allows you to define rules for transitioning objects between different storage classes or deleting them automatically based on predefined criteria. 
68 | For example, you can move infrequently accessed data to a lower-cost storage class after a specified time or delete objects after a certain retention period. 69 | 70 | Multipart uploads 71 | 72 | Multipart uploads provide a mechanism for uploading large objects in parts, which improves performance and resiliency. You can upload each part in parallel and then 73 | combine them to create the complete object. Multipart uploads also enable resumable uploads in case of failures. 74 | 75 | Managing large datasets with S3 Batch Operations 76 | 77 | S3 Batch Operations is a feature that allows you to perform bulk operations on large numbers of objects in an S3 bucket. 78 | It provides an efficient way to automate tasks such as copying objects, tagging, and restoring archived data. 79 | 80 | ## Advanced S3 Bucket Features 81 | 82 | S3 Storage Classes 83 | 84 | S3 offers multiple storage classes, each designed for different use cases and performance requirements: 85 | 86 | ![Screenshot 2023-07-06 at 7 16 51 PM](https://github.com/iam-veeramalla/aws-devops-zero-to-hero/assets/43399466/6b1ebcda-5b99-4358-ac1a-5bf559140571) 87 | 88 | 89 | S3 Replication 90 | 91 | S3 replication enables automatic and asynchronous replication of objects between S3 buckets in different regions or within the same region. 92 | Cross-Region Replication (CRR) provides disaster recovery and compliance benefits, while Same-Region Replication (SRR) can be used for data resilience and low-latency access. 93 | 94 | S3 Event Notifications and Triggers 95 | 96 | S3 event notifications allow you to configure actions when specific events occur in an S3 bucket. For example, you can trigger AWS Lambda functions, send messages to Amazon 97 | Simple Queue Service (SQS), or invoke other services using Amazon SNS when an object is created or deleted. 98 | 99 | S3 Batch Operations 100 | 101 | S3 Batch Operations allow you to perform large-scale batch operations on objects, such as copying, tagging, or deleting, across multiple buckets. It simplifies managing large 102 | datasets and automates tasks that would otherwise be time-consuming. 103 | 104 | ## Security and Compliance in S3 Buckets 105 | 106 | S3 bucket security considerations 107 | 108 | Ensure that S3 bucket policies, access control, and encryption settings are appropriately configured. Regularly monitor and audit access logs for unauthorized activities. 109 | 110 | Data encryption at rest and in transit 111 | 112 | Encrypt data at rest using server-side encryption options provided by S3. Additionally, enable encryption in transit by using SSL/TLS for data transfers. 113 | 114 | Access logging and monitoring 115 | 116 | Enable access logging to capture detailed records of requests made to your S3 bucket. Monitor access logs and configure alerts to detect any suspicious activities or unauthorized access attempts. 117 | 118 | 119 | ## S3 Bucket Management and Administration 120 | 121 | S3 bucket policies 122 | 123 | Create and manage bucket policies to control access to your S3 buckets. Bucket policies are written in JSON and define permissions for various actions and resources. 124 | 125 | S3 access control and IAM roles 126 | 127 | Use IAM roles and policies to manage access to S3 buckets. IAM roles provide temporary credentials and fine-grained access control to AWS resources. 128 | 129 | S3 APIs and SDKs 130 | 131 | Interact with S3 programmatically using AWS SDKs or APIs. These provide libraries and methods for performing various operations on S3 buckets and objects. 
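As an illustration, a few common bucket and object operations with the AWS CLI look like this (bucket and file names are placeholders):

```
# create a bucket (names must be globally unique)
aws s3 mb s3://my-example-bucket-12345

# upload and download an object
aws s3 cp ./report.csv s3://my-example-bucket-12345/reports/report.csv
aws s3 cp s3://my-example-bucket-12345/reports/report.csv ./report-copy.csv

# list objects and generate a temporary pre-signed URL (valid for 1 hour)
aws s3 ls s3://my-example-bucket-12345/reports/
aws s3 presign s3://my-example-bucket-12345/reports/report.csv --expires-in 3600
```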
132 | 133 | Monitoring and logging with CloudWatch 134 | 135 | Utilize Amazon CloudWatch to monitor S3 metrics, set up alarms for specific events, and collect and analyze logs for troubleshooting and performance optimization. 136 | 137 | S3 management tools 138 | 139 | AWS provides multiple management tools, such as the AWS Management Console, AWS CLI, and third-party tools, to manage S3 buckets efficiently and perform operations like uploads, downloads, and bucket configurations. 140 | 141 | ## Troubleshooting and Error Handling 142 | 143 | Common S3 error messages and their resolutions 144 | 145 | Understand common S3 error messages like access denied, bucket not found, and exceeded bucket quota. Troubleshoot and resolve these errors by checking permissions, bucket configurations, and network connectivity. 146 | 147 | Debugging S3 bucket access issues 148 | 149 | Investigate and resolve issues related to access permissions, IAM roles, and bucket policies. Use tools like AWS CloudTrail and S3 access logs to identify and troubleshoot access problems. 150 | 151 | Data consistency and durability considerations 152 | 153 | Ensure data consistency and durability by understanding S3's data replication and storage mechanisms. Verify that data is correctly uploaded, retrieve objects using proper methods, and address any data integrity issues. 154 | 155 | Recovering deleted objects 156 | 157 | If an object is accidentally deleted, you can often recover it using versioning or S3 event notifications. Additionally, consider enabling Cross-Region Replication (CRR) for disaster recovery scenarios. 158 | -------------------------------------------------------------------------------- /day-9/demos/bucket-policies/restrict-access-to-owner.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Id": "RestrictBucketToIAMUsersOnly", 4 | "Statement": [ 5 | { 6 | "Sid": "AllowOwnerOnlyAccess", 7 | "Effect": "Deny", 8 | "Principal": "*", 9 | "Action": "s3:*", 10 | "Resource": [ 11 | "arn:aws:s3:::your-bucket-name/*", 12 | "arn:aws:s3:::your-bucket-name" 13 | ], 14 | "Condition": { 15 | "StringNotEquals": { 16 | "aws:PrincipalArn": "arn:aws:iam::AWS_ACCOUNT_ID:root" 17 | } 18 | } 19 | } 20 | ] 21 | } 22 | -------------------------------------------------------------------------------- /day-9/demos/bucket-policies/static-website-basic.json: -------------------------------------------------------------------------------- 1 | { 2 | "Version": "2012-10-17", 3 | "Statement": [ 4 | { 5 | "Sid": "PublicReadGetObject", 6 | "Effect": "Allow", 7 | "Principal": "*", 8 | "Action": [ 9 | "s3:GetObject" 10 | ], 11 | "Resource": [ 12 | "arn:aws:s3:::/*" 13 | ] 14 | } 15 | ] 16 | } 17 | -------------------------------------------------------------------------------- /interview-questions/01-ADVANCED.md: -------------------------------------------------------------------------------- 1 | ### 1. **Question:** Explain the concept of "GitOps" and how it aligns with DevOps principles. 2 | **Answer:** GitOps is a DevOps practice that uses version control systems like Git to manage infrastructure and application configurations. All changes are made through pull requests, which triggers automated deployments. This approach promotes versioning, collaboration, and automation while maintaining a declarative, auditable infrastructure. 3 | 4 | ### 2. **Question:** How does AWS CodeArtifact enhance dependency management in DevOps workflows? 
5 | **Answer:** AWS CodeArtifact is a package management service that allows you to store, manage, and share software packages. It improves dependency management by centralizing artifact storage, ensuring consistency across projects, and enabling version control of packages, making it easier to manage dependencies in DevOps pipelines. 6 | 7 | ### 3. **Question:** Describe the use of AWS CloudFormation Drift Detection and Remediation. 8 | **Answer:** AWS CloudFormation Drift Detection helps identify differences between the deployed stack and the expected stack configuration. When drift is detected, you can use CloudFormation StackSets to automatically remediate drift across multiple accounts and regions, ensuring consistent infrastructure configurations. 9 | 10 | ### 4. **Question:** How can you implement Infrastructure as Code (IaC) security scanning in AWS DevOps pipelines? 11 | **Answer:** You can use tools like AWS CloudFormation Guard, cfn-nag, or open-source security scanners to analyze IaC templates for security vulnerabilities and compliance violations. By integrating these tools into DevOps pipelines, you can ensure that infrastructure code adheres to security best practices. 12 | 13 | ### 5. **Question:** Explain the role of Amazon CloudWatch Events in automating DevOps workflows. 14 | **Answer:** Amazon CloudWatch Events allow you to respond to changes in AWS resources by triggering automated actions. In DevOps, you can use CloudWatch Events to automate CI/CD pipeline executions, scaling actions, incident response, and other tasks based on resource state changes. 15 | 16 | ### 6. **Question:** Describe the use of AWS Systems Manager Automation and its impact on DevOps practices. 17 | **Answer:** AWS Systems Manager Automation enables you to automate common operational tasks across AWS resources. In DevOps, it enhances repeatability and consistency by automating tasks like patch management, application deployments, and configuration changes, reducing manual intervention and errors. 18 | 19 | ### 7. **Question:** How can you implement fine-grained monitoring and alerting using Amazon CloudWatch Metrics and Alarms? 20 | **Answer:** Amazon CloudWatch Metrics provide granular insights into resource performance, while CloudWatch Alarms enable you to set thresholds and trigger actions based on metric conditions. In DevOps, you can use these services to monitor specific application and infrastructure metrics, allowing you to respond to issues proactively. 21 | 22 | ### 8. **Question:** Explain the concept of "Serverless DevOps" and how it differs from traditional DevOps practices. 23 | **Answer:** Serverless DevOps leverages serverless computing to automate and streamline development and operations tasks. It reduces infrastructure management, emphasizes event-driven architectures, and allows developers to focus on code rather than server provisioning. However, it also presents challenges in testing, observability, and architecture design. 24 | 25 | ### 9. **Question:** Describe the use of AWS CloudTrail and AWS CloudWatch Logs integration for audit and security in DevOps. 26 | **Answer:** AWS CloudTrail records API calls, while AWS CloudWatch Logs centralizes log data. Integrating these services allows you to monitor and audit AWS API activities, detect security events, and generate alerts in near real-time. This integration enhances security and compliance practices in DevOps workflows. 27 | 28 | ### 10. 
**Question:** How can AWS AppConfig be used to manage application configurations in DevOps pipelines? 29 | **Answer:** AWS AppConfig is a service that allows you to manage application configurations and feature flags. In DevOps, you can use AppConfig to separate configuration from code, enable dynamic updates, and control feature releases. This improves deployment flexibility, reduces risk, and supports A/B testing. -------------------------------------------------------------------------------- /interview-questions/01-SCENARIO-BASED.md: -------------------------------------------------------------------------------- 1 | ### 1. **Scenario:** You have a microservices application that needs to scale dynamically based on traffic. How would you design an architecture for this using AWS services? 2 | **Answer:** I would use Amazon ECS or Amazon EKS for container orchestration, coupled with AWS Auto Scaling to adjust the number of instances based on CPU or custom metrics. Application Load Balancers can distribute traffic, and Amazon CloudWatch can monitor and trigger scaling events. 3 | 4 | ### 2. **Scenario:** Your application's database is experiencing performance issues. Describe how you would use AWS tools to troubleshoot and resolve this. 5 | **Answer:** I would use Amazon RDS Performance Insights to identify bottlenecks, CloudWatch Metrics for monitoring, and AWS X-Ray for tracing requests. I'd also consider optimizing queries and using read replicas if necessary. 6 | 7 | ### 3. **Scenario:** You're migrating a monolithic application to a microservices architecture. How would you ensure smooth deployment and minimize downtime? 8 | **Answer:** I would adopt a "strangler" pattern, gradually migrating components to microservices. This minimizes risk by replacing pieces of the monolith over time, allowing for testing and validation at each step. 9 | 10 | ### 4. **Scenario:** Your team is frequently encountering configuration drift issues in your infrastructure. How could you prevent and manage this effectively? 11 | **Answer:** I would implement Infrastructure as Code (IaC) using AWS CloudFormation or Terraform. By versioning and automating infrastructure changes, we can ensure consistent and repeatable deployments. 12 | 13 | ### 5. **Scenario:** Your company is launching a new product, and you expect a sudden spike in traffic. How would you ensure the application remains responsive and available? 14 | **Answer:** I would implement a combination of auto-scaling groups, Amazon CloudFront for content delivery, Amazon RDS read replicas, and Amazon DynamoDB provisioned capacity to handle increased load while maintaining performance. 15 | 16 | ### 6. **Scenario:** You're working on a CI/CD pipeline for a containerized application. How could you ensure that every code change is automatically tested and deployed? 17 | **Answer:** I would set up an AWS CodePipeline that integrates with AWS CodeBuild for building and testing containers. After successful testing, I'd use AWS CodeDeploy to deploy the containers to an ECS cluster or Kubernetes on EKS. 18 | 19 | ### 7. **Scenario:** Your team wants to ensure secure access to AWS resources for different team members. How could you implement this? 20 | **Answer:** I would use AWS Identity and Access Management (IAM) to create fine-grained policies for each team member. IAM roles and groups can be assigned permissions based on least privilege principles. 21 | 22 | ### 8. 
**Scenario:** You're managing a complex microservices architecture with multiple services communicating. How could you monitor and trace requests across services? 23 | **Answer:** I would integrate AWS X-Ray into the application to trace requests as they traverse services. This would provide insights into latency, errors, and dependencies between services. 24 | 25 | ### 9. **Scenario:** Your application has a front-end hosted on S3, and you need to enable HTTPS for security. How would you achieve this? 26 | **Answer:** I would use Amazon CloudFront to distribute content from the S3 bucket, configure a custom domain, and associate an SSL/TLS certificate through AWS Certificate Manager. 27 | 28 | ### 10. **Scenario:** Your organization has multiple AWS accounts for different environments (dev, staging, prod). How would you manage centralized billing and ensure cost optimization? 29 | **Answer:** I would use AWS Organizations to manage multiple accounts and enable consolidated billing. AWS Cost Explorer and AWS Budgets could be used to monitor and optimize costs across accounts. 30 | 31 | ### 11. **Scenario:** Your application frequently needs to run resource-intensive tasks in the background. How could you ensure efficient and scalable task processing? 32 | **Answer:** I would use AWS Lambda for serverless background processing or AWS Batch for batch processing. Both services can scale automatically based on the workload. 33 | 34 | ### 12. **Scenario:** Your team is using Jenkins for CI/CD, but you want to reduce management overhead. How could you migrate to a serverless CI/CD approach? 35 | **Answer:** I would consider using AWS CodePipeline and AWS CodeBuild. CodePipeline integrates seamlessly with CodeBuild, allowing you to create serverless CI/CD pipelines without managing infrastructure. 36 | 37 | ### 13. **Scenario:** Your organization wants to enable single sign-on (SSO) for multiple AWS accounts. How could you achieve this while maintaining security? 38 | **Answer:** I would use AWS Single Sign-On (SSO) to manage user access across multiple AWS accounts. By configuring SSO integrations, users can access multiple accounts securely without needing separate credentials. 39 | 40 | ### 14. **Scenario:** Your company is aiming for high availability by deploying applications across multiple regions. How could you implement global traffic distribution? 41 | **Answer:** I would use Amazon Route 53 with Latency-Based Routing or Geolocation Routing to direct traffic to the closest or most appropriate region based on user location. 42 | 43 | ### 15. **Scenario:** Your application is generating a significant amount of logs. How could you centralize log management and enable efficient analysis? 44 | **Answer:** I would use Amazon CloudWatch Logs to centralize log storage and AWS CloudWatch Logs Insights to query and analyze logs efficiently, making it easier to troubleshoot and monitor application behavior. 45 | 46 | ### 16. **Scenario:** Your application needs to store and retrieve large amounts of unstructured data. How could you design a cost-effective solution? 47 | **Answer:** I would use Amazon S3 with appropriate storage classes (such as S3 Standard or S3 Intelligent-Tiering) based on data access patterns. This allows for durable and cost-effective storage of unstructured data. 48 | 49 | ### 17. **Scenario:** Your team wants to enable automated testing for infrastructure deployments. How could you achieve this? 
50 | **Answer:** I would integrate AWS CloudFormation StackSets into the CI/CD pipeline. StackSets allow you to deploy infrastructure templates to multiple accounts and regions, enabling automated testing of infrastructure changes. 51 | 52 | ### 18. **Scenario:** Your application uses AWS Lambda functions, and you want to improve cold start performance. How could you address this challenge? 53 | **Answer:** I would implement an Amazon API Gateway with the HTTP proxy integration, creating a warm-up endpoint that periodically invokes Lambda functions to keep them warm. 54 | 55 | ### 19. **Scenario:** Your application has multiple microservices, each with its own database. How could you manage database schema changes efficiently? 56 | **Answer:** I would use AWS Database Migration Service (DMS) to replicate data between the old and new schema versions, allowing for seamless database migrations without disrupting application operations. 57 | 58 | ### 20. **Scenario:** Your organization is concerned about data protection and compliance. How could you ensure sensitive data is securely stored and transmitted? 59 | **Answer:** I would use Amazon S3 server-side encryption and Amazon RDS encryption at rest for data storage. For data transmission, I would use SSL/TLS encryption for communication between services and implement security best practices. -------------------------------------------------------------------------------- /interview-questions/aws-cli.md: -------------------------------------------------------------------------------- 1 | ### 1. What is the AWS Command Line Interface (CLI)? 2 | The AWS Command Line Interface (CLI) is a unified tool that allows you to interact with various AWS services using command-line commands. 3 | 4 | ### 2. Why would you use the AWS CLI? 5 | The AWS CLI provides a convenient way to automate tasks, manage AWS resources, and interact with services directly from the command line, making it useful for scripting and administration. 6 | 7 | ### 3. How do you install the AWS CLI? 8 | You can install the AWS CLI on various operating systems using package managers or by downloading the installer from the AWS website. 9 | 10 | ### 4. What is the purpose of AWS CLI profiles? 11 | AWS CLI profiles allow you to manage multiple sets of AWS security credentials, making it easier to switch between different accounts and roles. 12 | 13 | ### 5. How can you configure the AWS CLI with your credentials? 14 | You can configure the AWS CLI by running the `aws configure` command, where you provide your access key, secret key, default region, and output format. 15 | 16 | ### 6. What is the difference between IAM user-based credentials and IAM role-based credentials in the AWS CLI? 17 | IAM user-based credentials are long-term access keys associated with an IAM user, while IAM role-based credentials are temporary credentials obtained by assuming a role using the `sts assume-role` command. 18 | 19 | ### 7. How can you interact with AWS services using the AWS CLI? 20 | You can interact with AWS services by using AWS CLI commands specific to each service. For example, you can use `aws ec2 describe-instances` to list EC2 instances. 21 | 22 | ### 8. What is the syntax for AWS CLI commands? 23 | The basic syntax for AWS CLI commands is `aws [options]`, where you replace `` with the service you want to interact with and `` with the desired action. 24 | 25 | ### 9. How can you list available AWS CLI services and commands? 
26 | You can run `aws help` to see a list of AWS services and the corresponding commands available in the AWS CLI. 27 | 28 | ### 10. What is the purpose of output formatting options in AWS CLI commands? 29 | Output formatting options allow you to specify how the results of AWS CLI commands are presented. Common options include JSON, text, table, and YAML formats. 30 | 31 | ### 11. How can you filter and format AWS CLI command output? 32 | You can use filters like `--query` to extract specific data from AWS CLI command output, and you can use `--output` to choose the format of the output. 33 | 34 | ### 12. How can you create and manage AWS resources using the AWS CLI? 35 | You can create and manage AWS resources using commands such as `aws ec2 create-instance` for EC2 instances or `aws s3 cp` to copy files to Amazon S3 buckets. 36 | 37 | ### 13. How does AWS CLI handle pagination of results? 38 | Some AWS CLI commands return paginated results. You can use the `--max-items` and `--page-size` options to control the number of items displayed per page. 39 | 40 | ### 14. What is the AWS SSO (Single Sign-On) feature in the AWS CLI? 41 | The AWS SSO feature in the AWS CLI allows you to authenticate and obtain temporary credentials using an AWS SSO profile, simplifying the management of credentials. 42 | 43 | ### 15. Can you use the AWS CLI to work with AWS CloudFormation? 44 | Yes, you can use the AWS CLI to create, update, and delete CloudFormation stacks using the `aws cloudformation` commands. 45 | 46 | ### 16. How can you debug AWS CLI commands? 47 | You can use the `--debug` option with AWS CLI commands to get detailed debug information, which can help troubleshoot issues. 48 | 49 | ### 17. Can you use the AWS CLI in AWS Lambda functions? 50 | Yes, AWS Lambda functions can use the AWS CLI by packaging it with the function code and executing CLI commands from within the function. 51 | 52 | ### 18. How can you secure the AWS CLI on your local machine? 53 | You can secure the AWS CLI on your local machine by using IAM roles, IAM user-based credentials, and the AWS CLI's built-in encryption mechanisms for configuration files. 54 | 55 | ### 19. How can you update the AWS CLI to the latest version? 56 | You can update the AWS CLI to the latest version using package managers like `pip` (Python package manager) or by downloading the installer from the AWS website. 57 | 58 | ### 20. How do you uninstall the AWS CLI? 59 | To uninstall the AWS CLI, you can use the package manager or the uninstaller provided by the installer you used to install it initially. -------------------------------------------------------------------------------- /interview-questions/aws-terraform.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Terraform? 2 | Terraform is an open-source Infrastructure as Code (IaC) tool that allows you to define, manage, and provision infrastructure resources using declarative code. 3 | 4 | ### 2. How does Terraform work with AWS? 5 | Terraform interacts with the AWS API to create and manage resources based on the configurations defined in Terraform files. 6 | 7 | ### 3. What is an AWS provider in Terraform? 8 | An AWS provider in Terraform is a plugin that allows Terraform to interact with AWS services by making API calls. 9 | 10 | ### 4. How do you define resources in Terraform? 11 | Resources are defined in Terraform using HashiCorp Configuration Language (HCL) syntax in `.tf` files. Each resource type corresponds to an AWS service. 
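A minimal sketch of that workflow from the command line, assuming AWS credentials are already configured (the resource and bucket name are placeholders):

```
# write a small configuration file in HCL
cat > main.tf <<'EOF'
provider "aws" {
  region = "us-east-1"
}

resource "aws_s3_bucket" "demo" {
  bucket = "example-terraform-demo-bucket-12345"
}
EOF

terraform init   # downloads the AWS provider plugin
terraform plan   # previews the resources Terraform would create
terraform apply  # provisions the bucket after confirmation
```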
12 | 13 | ### 5. What is a Terraform state file? 14 | The Terraform state file maintains the state of the resources managed by Terraform. It's used to track the actual state of the infrastructure. 15 | 16 | ### 6. How can you initialize a Terraform project? 17 | You can initialize a Terraform project using the `terraform init` command. It downloads required provider plugins and initializes the backend. 18 | 19 | ### 7. How do you plan infrastructure changes in Terraform? 20 | You can use the `terraform plan` command to see the changes that Terraform will apply to your infrastructure before actually applying them. 21 | 22 | ### 8. What is the `terraform apply` command used for? 23 | The `terraform apply` command applies the changes defined in your Terraform configuration to your infrastructure. It creates, updates, or deletes resources as needed. 24 | 25 | ### 9. What is the purpose of Terraform variables? 26 | Terraform variables allow you to parameterize your configurations, making them more flexible and reusable across different environments. 27 | 28 | ### 10. How do you manage secrets and sensitive information in Terraform? 29 | Sensitive information should be stored in environment variables or external systems like AWS Secrets Manager. You can use variables to reference these values in Terraform. 30 | 31 | ### 11. What is remote state in Terraform? 32 | Remote state in Terraform refers to storing the state file on a remote backend, such as Amazon S3, instead of locally. This facilitates collaboration and enables locking. 33 | 34 | ### 12. How can you manage multiple environments (dev, prod) with Terraform? 35 | You can use Terraform workspaces or create separate directories for each environment, each with its own state file and variables. 36 | 37 | ### 13. How do you handle dependencies between resources in Terraform? 38 | Terraform automatically handles dependencies based on the resource definitions in your configuration. It will create resources in the correct order. 39 | 40 | ### 14. What is Terraform's "apply" process? 41 | The "apply" process in Terraform involves comparing the desired state from your configuration to the current state, generating an execution plan, and then applying the changes. 42 | 43 | ### 15. How can you manage versioning of Terraform configurations? 44 | You can use version control systems like Git to track changes to your Terraform configurations. Additionally, Terraform Cloud and Enterprise offer versioning features. 45 | 46 | ### 16. What is the difference between Terraform and CloudFormation? 47 | Terraform is a multi-cloud IaC tool that supports various cloud providers, including AWS. CloudFormation is AWS-specific and focuses on AWS resource provisioning. 48 | 49 | ### 17. What is a Terraform module? 50 | A Terraform module is a reusable set of configurations that can be used to create multiple resources with a consistent configuration. 51 | 52 | ### 18. How can you destroy infrastructure created by Terraform? 53 | You can use the `terraform destroy` command to remove all resources defined in your Terraform configuration. 54 | 55 | ### 19. How does Terraform manage updates to existing resources? 56 | Terraform applies updates by modifying existing resources rather than recreating them. This helps preserve data and configurations. 57 | 58 | ### 20. Can Terraform be used for managing third-party resources? 59 | Yes, Terraform has the capability to manage resources beyond AWS. 
It supports multiple providers, making it versatile for managing various cloud and on-premises resources. -------------------------------------------------------------------------------- /interview-questions/cloud-migration.md: -------------------------------------------------------------------------------- 1 | ### 1. What is cloud migration? 2 | Cloud migration refers to the process of moving applications, data, and workloads from on-premises environments or one cloud provider to another. 3 | 4 | ### 2. What are the common drivers for cloud migration? 5 | Drivers for cloud migration include cost savings, scalability, agility, improved security, and the ability to leverage advanced cloud services. 6 | 7 | ### 3. What are the six common cloud migration strategies? 8 | The six common cloud migration strategies are Rehost (lift and shift), Replatform, Repurchase (buy a SaaS solution), Refactor (rearchitect), Retire, and Retain (leave unchanged). 9 | 10 | ### 4. What is the "lift and shift" migration strategy? 11 | The "lift and shift" strategy (Rehost) involves moving applications and data as they are from on-premises to the cloud without significant modifications. 12 | 13 | ### 5. How does the "replatform" strategy differ from "lift and shift"? 14 | The "replatform" strategy involves making minor adjustments to applications or databases before migrating them to the cloud, often to optimize for cloud services. 15 | 16 | ### 6. When would you consider the "rebuy" strategy? 17 | The "rebuy" strategy (Repurchase) involves replacing an existing application with a cloud-based Software as a Service (SaaS) solution. It's suitable when a suitable SaaS option is available. 18 | 19 | ### 7. What is the "rearchitect" migration strategy? 20 | The "rearchitect" strategy (Refactor) involves modifying or rearchitecting applications to fully leverage cloud-native features and services. 21 | 22 | ### 8. How do you decide which cloud migration strategy to use? 23 | The choice of strategy depends on factors like business goals, existing technology stack, application complexity, and desired outcomes. 24 | 25 | ### 9. What are some key benefits of the "rearchitect" strategy? 26 | The "rearchitect" strategy can lead to improved performance, scalability, and cost savings by utilizing cloud-native services. 27 | 28 | ### 10. What is the importance of a migration readiness assessment? 29 | A migration readiness assessment helps evaluate an organization's current environment, readiness for cloud migration, and the appropriate migration strategy to adopt. 30 | 31 | ### 11. How can you minimize downtime during cloud migration? 32 | You can use strategies like blue-green deployments, canary releases, and traffic shifting to minimize downtime and ensure a smooth migration process. 33 | 34 | ### 12. What is data migration in the context of cloud migration? 35 | Data migration involves moving data from on-premises databases to cloud-based databases, ensuring data consistency, integrity, and minimal disruption. 36 | 37 | ### 13. What is the "big bang" migration approach? 38 | The "big bang" approach involves migrating all applications and data at once, which can be risky due to potential disruptions. It's often considered when there's a clear deadline. 39 | 40 | ### 14. What is the "staged" migration approach? 41 | The "staged" approach involves migrating applications or components in stages, allowing for gradual adoption and risk mitigation. 42 | 43 | ### 15. How does the "strangler" migration pattern work? 
44 | The "strangler" pattern involves gradually replacing components of an existing application with cloud-native components until the entire application is migrated. 45 | 46 | ### 16. What role does automation play in cloud migration? 47 | Automation streamlines the migration process by reducing manual tasks, ensuring consistency, and accelerating deployments. 48 | 49 | ### 17. How do you ensure security during cloud migration? 50 | Security should be considered at every stage of migration. Ensure data encryption, access controls, compliance, and monitoring are in place. 51 | 52 | ### 18. How can you handle application dependencies during migration? 53 | Understanding application dependencies is crucial. You can use tools to map dependencies and ensure that all necessary components are migrated together. 54 | 55 | ### 19. What is the "lift and reshape" strategy? 56 | The "lift and reshape" strategy involves moving applications to the cloud and then making necessary adjustments for better cloud optimization and cost savings. 57 | 58 | ### 20. What is the importance of testing in cloud migration? 59 | Testing helps identify issues, validate performance, and ensure the migrated applications function as expected in the new cloud environment. -------------------------------------------------------------------------------- /interview-questions/cloudformation.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CloudFormation? 2 | AWS CloudFormation is a service that allows you to define and provision infrastructure as code, enabling you to create, update, and manage AWS resources in a declarative and automated way. 3 | 4 | ### 2. What are the benefits of using AWS CloudFormation? 5 | Benefits of using AWS CloudFormation include infrastructure as code, automated resource provisioning, consistent deployments, version control, and support for template reuse. 6 | 7 | ### 3. What is an AWS CloudFormation template? 8 | An AWS CloudFormation template is a JSON or YAML file that defines the AWS resources and their configurations needed for a particular stack. 9 | 10 | ### 4. How does AWS CloudFormation work? 11 | AWS CloudFormation interprets templates and deploys the specified resources in the order defined, managing the provisioning, updating, and deletion of resources. 12 | 13 | ### 5. What is a CloudFormation stack? 14 | A CloudFormation stack is a collection of AWS resources created and managed as a single unit, based on a CloudFormation template. 15 | 16 | ### 6. What is the difference between AWS CloudFormation and AWS Elastic Beanstalk? 17 | AWS CloudFormation provides infrastructure as code and lets you define and manage resources at a lower level, while AWS Elastic Beanstalk is a platform-as-a-service (PaaS) that abstracts the deployment of applications. 18 | 19 | ### 7. What is the purpose of a CloudFormation change set? 20 | A CloudFormation change set allows you to preview the changes that will be made to a stack before applying those changes, helping to ensure that updates won't cause unintended consequences. 21 | 22 | ### 8. How can you create an AWS CloudFormation stack? 23 | You can create a CloudFormation stack using the AWS Management Console, AWS CLI, or AWS SDKs. You provide a template, choose a stack name, and specify any parameters. 24 | 25 | ### 9. How can you update an existing AWS CloudFormation stack? 
26 | You can update a CloudFormation stack by making changes to the template or stack parameters and then using the AWS Management Console, AWS CLI, or SDKs to initiate an update. 27 | 28 | ### 10. What is the CloudFormation rollback feature? 29 | The CloudFormation rollback feature automatically reverts changes to a stack if an update fails, helping to ensure that your infrastructure remains consistent. 30 | 31 | ### 11. How does AWS CloudFormation handle dependencies between resources? 32 | CloudFormation handles dependencies by automatically determining the order in which resources need to be created or updated to maintain consistent state. 33 | 34 | ### 12. What are CloudFormation intrinsic functions? 35 | CloudFormation intrinsic functions are built-in functions that you can use within templates to manipulate values or perform dynamic operations during stack creation and update. 36 | 37 | ### 13. How can you perform conditionals in CloudFormation templates? 38 | You can use CloudFormation's intrinsic functions, such as `Fn::If` and `Fn::Equals`, to define conditions and control the creation of resources based on those conditions. 39 | 40 | ### 14. What is the CloudFormation Designer? 41 | The CloudFormation Designer is a visual tool that helps you design and visualize CloudFormation templates using a drag-and-drop interface. 42 | 43 | ### 15. How can you manage secrets in CloudFormation templates? 44 | You should avoid hardcoding secrets in templates. Instead, you can use AWS Secrets Manager or AWS Parameter Store to store sensitive information and reference them in your templates. 45 | 46 | ### 16. How can you provision custom resources in CloudFormation? 47 | You can use AWS Lambda-backed custom resources to perform actions in response to stack events that aren't natively supported by CloudFormation resources. 48 | 49 | ### 17. What is stack drift in AWS CloudFormation? 50 | Stack drift occurs when actual resources in a stack differ from the expected resources defined in the CloudFormation template. 51 | 52 | ### 18. How does CloudFormation support rollback triggers? 53 | Rollback triggers in CloudFormation allow you to specify actions that should be taken when a stack rollback is initiated, such as sending notifications or cleaning up resources. 54 | 55 | ### 19. Can AWS CloudFormation be used for creating non-AWS resources? 56 | Yes, CloudFormation supports custom resources that can be used to manage non-AWS resources or to execute arbitrary code during stack creation and update. 57 | 58 | ### 20. What is CloudFormation StackSets? 59 | CloudFormation StackSets allow you to deploy CloudFormation stacks across multiple accounts and regions, enabling centralized management of infrastructure deployments. -------------------------------------------------------------------------------- /interview-questions/cloudfront.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon CloudFront? 2 | Amazon CloudFront is a Content Delivery Network (CDN) service provided by AWS that accelerates content delivery by distributing it across a network of edge locations. 3 | 4 | ### 2. How does CloudFront work? 5 | CloudFront caches content in edge locations globally. When a user requests content, CloudFront delivers it from the nearest edge location, reducing latency and improving performance. 6 | 7 | ### 3. What are edge locations in CloudFront? 8 | Edge locations are data centers globally distributed by CloudFront. 
They store cached content and serve it to users, minimizing the distance data needs to travel. 9 | 10 | ### 4. What types of distributions are available in CloudFront? 11 | CloudFront offers Web Distributions for websites and RTMP Distributions for media streaming. 12 | 13 | ### 5. How can you ensure that content in CloudFront is updated? 14 | You can create invalidations in CloudFront to remove cached content and force the distribution of fresh content. 15 | 16 | ### 6. Can you use custom SSL certificates with CloudFront? 17 | Yes, you can use custom SSL certificates to secure connections between users and CloudFront. 18 | 19 | ### 7. What is an origin in CloudFront? 20 | An origin is the source of the content CloudFront delivers. It can be an Amazon S3 bucket, an EC2 instance, an Elastic Load Balancer, or even an HTTP server. 21 | 22 | ### 8. How can you control who accesses content in CloudFront? 23 | You can use CloudFront signed URLs or cookies to restrict access to content based on user credentials. 24 | 25 | ### 9. What are cache behaviors in CloudFront? 26 | Cache behaviors define how CloudFront handles different types of requests. They include settings like TTL, query string forwarding, and more. 27 | 28 | ### 10. How can you integrate CloudFront with other AWS services? 29 | You can integrate CloudFront with Amazon S3, Amazon EC2, AWS Lambda, and more to accelerate content delivery. 30 | 31 | ### 11. How can you analyze CloudFront distribution performance? 32 | You can use CloudFront access logs stored in Amazon S3 to analyze the performance of your distribution. 33 | 34 | ### 12. What is the purpose of CloudFront behaviors? 35 | CloudFront behaviors help specify how CloudFront should respond to different types of requests for different paths or patterns. 36 | 37 | ### 13. Can CloudFront be used for dynamic content? 38 | Yes, CloudFront can be used for both static and dynamic content delivery, improving the performance of web applications. 39 | 40 | ### 14. What is a distribution in CloudFront? 41 | A distribution represents the configuration and content for your CloudFront content delivery. It can have multiple origins and cache behaviors. 42 | 43 | ### 15. How does CloudFront handle cache expiration? 44 | CloudFront uses Time to Live (TTL) settings to determine how long objects are cached in edge locations before checking for updates. 45 | 46 | ### 16. What are the benefits of using CloudFront with Amazon S3? 47 | Using CloudFront with Amazon S3 reduces latency, offloads traffic from your origin server, and improves global content delivery. 48 | 49 | ### 17. Can CloudFront be used for both HTTP and HTTPS content? 50 | Yes, CloudFront supports both HTTP and HTTPS content delivery. HTTPS is recommended for enhanced security. 51 | 52 | ### 18. How can you measure the performance of CloudFront distributions? 53 | You can use CloudFront metrics in Amazon CloudWatch to monitor the performance of your distributions and analyze their behavior. 54 | 55 | ### 19. What is origin shield in CloudFront? 56 | Origin Shield is an additional caching layer that helps reduce the load on your origin server by caching content closer to the origin. 57 | 58 | ### 20. How can CloudFront improve security? 59 | CloudFront can help protect against DDoS attacks by absorbing traffic spikes and providing secure connections through HTTPS. 
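Tying a few of these answers together, here is a small AWS CLI sketch for invalidating cached content (see question 5 above); the distribution ID is a placeholder:

```bash
# Remove cached copies of specific paths so CloudFront fetches fresh content from the origin
aws cloudfront create-invalidation \
    --distribution-id EDFDVBD6EXAMPLE \
    --paths "/index.html" "/assets/*"

# Check the status of invalidations for the same distribution
aws cloudfront list-invalidations --distribution-id EDFDVBD6EXAMPLE
```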
-------------------------------------------------------------------------------- /interview-questions/cloudtrail.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CloudTrail? 2 | AWS CloudTrail is a service that provides governance, compliance, and audit capabilities by recording and storing API calls made on your AWS account. 3 | 4 | ### 2. What type of information does AWS CloudTrail record? 5 | CloudTrail records API calls, capturing information about who made the call, when it was made, which service was accessed, and what actions were taken. 6 | 7 | ### 3. How does AWS CloudTrail store its data? 8 | CloudTrail stores its data in Amazon S3 buckets, allowing you to easily analyze and retrieve the recorded information. 9 | 10 | ### 4. How can you enable AWS CloudTrail for an AWS account? 11 | You can enable CloudTrail through the AWS Management Console or the AWS CLI by creating a trail and specifying the services you want to track. 12 | 13 | ### 5. What is a CloudTrail trail? 14 | A CloudTrail trail is a configuration that specifies the settings for logging and delivering events. Trails can be applied to an entire AWS account or specific regions. 15 | 16 | ### 6. What is the purpose of CloudTrail log files? 17 | CloudTrail log files contain records of API calls and events, which can be used for security analysis, compliance, auditing, and troubleshooting. 18 | 19 | ### 7. How can you access CloudTrail log files? 20 | CloudTrail log files are stored in an S3 bucket. You can access them directly or use services like Amazon Athena or Amazon CloudWatch Logs Insights for querying and analysis. 21 | 22 | ### 8. What is the difference between a management event and a data event in CloudTrail? 23 | Management events are related to the management of AWS resources, while data events focus on the actions performed on those resources. 24 | 25 | ### 9. How can you view and analyze CloudTrail logs? 26 | You can view and analyze CloudTrail logs using the CloudTrail console, AWS CLI, or third-party tools. You can also set up CloudWatch Alarms to detect specific events. 27 | 28 | ### 10. What is CloudTrail Insights? 29 | CloudTrail Insights is a feature that uses machine learning to identify unusual patterns and suspicious activity in CloudTrail logs. 30 | 31 | ### 11. How can you integrate CloudTrail with CloudWatch Logs? 32 | You can integrate CloudTrail with CloudWatch Logs to receive CloudTrail events in near real-time, allowing you to create CloudWatch Alarms and automate actions. 33 | 34 | ### 12. What is CloudTrail Event History? 35 | CloudTrail Event History is a feature that displays the past seven days of management events for your account, helping you quickly identify changes made to resources. 36 | 37 | ### 13. What is CloudTrail Data Events? 38 | CloudTrail Data Events track actions performed on Amazon S3 objects, providing insight into object-level activity and changes. 39 | 40 | ### 14. What is the purpose of CloudTrail Insights events? 41 | CloudTrail Insights events are automatically generated when CloudTrail detects unusual or high-risk activity, helping you identify and respond to potential security issues. 42 | 43 | ### 15. How can you ensure that CloudTrail logs are tamper-proof? 44 | CloudTrail logs are stored in an S3 bucket with server-side encryption enabled, ensuring that the logs are tamper-proof and protected. 45 | 46 | ### 16. Can CloudTrail logs be used for compliance and auditing? 
47 | Yes, CloudTrail logs can be used to demonstrate compliance with various industry standards and regulations by providing an audit trail of AWS account activity. 48 | 49 | ### 17. How does CloudTrail support multi-region trails? 50 | Multi-region trails allow you to capture events from multiple AWS regions in a single trail, providing a centralized view of account activity. 51 | 52 | ### 18. Can CloudTrail be used to monitor non-AWS services? 53 | CloudTrail primarily monitors AWS services, but you can integrate it with AWS Lambda to capture and log custom events from non-AWS services. 54 | 55 | ### 19. How can you receive notifications about CloudTrail events? 56 | You can use Amazon SNS (Simple Notification Service) to receive notifications about CloudTrail events, such as when new log files are delivered to your S3 bucket. 57 | 58 | ### 20. How can you use CloudTrail logs for incident response? 59 | CloudTrail logs can be used for incident response by analyzing events to identify the cause of an incident, understand its scope, and take appropriate actions. -------------------------------------------------------------------------------- /interview-questions/cloudwatch.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon CloudWatch? 2 | Amazon CloudWatch is a monitoring and observability service that provides insights into your AWS resources and applications by collecting and tracking metrics, logs, and events. 3 | 4 | ### 2. What types of data does Amazon CloudWatch collect? 5 | Amazon CloudWatch collects metrics, logs, and events. Metrics are data points about your resources and applications, logs are textual data generated by resources, and events provide insights into changes and notifications. 6 | 7 | ### 3. How can you use Amazon CloudWatch to monitor resources? 8 | You can use CloudWatch to monitor resources by collecting and visualizing metrics, setting alarms for specific thresholds, and generating insights into resource performance. 9 | 10 | ### 4. What are CloudWatch metrics? 11 | CloudWatch metrics are data points about the performance of your resources and applications. They can include data like CPU utilization, network traffic, and more. 12 | 13 | ### 5. How can you collect custom metrics in Amazon CloudWatch? 14 | You can collect custom metrics in CloudWatch by using the CloudWatch API or SDKs to publish data to CloudWatch using the `PutMetricData` action. 15 | 16 | ### 6. What are CloudWatch alarms? 17 | CloudWatch alarms allow you to monitor metrics and set thresholds to trigger notifications or automated actions when specific conditions are met. 18 | 19 | ### 7. How can you visualize CloudWatch metrics? 20 | You can visualize CloudWatch metrics using CloudWatch Dashboards, which allow you to create customized views of metrics, graphs, and text. 21 | 22 | ### 8. What is CloudWatch Logs? 23 | CloudWatch Logs is a service that collects, stores, and monitors log files from various resources, making it easier to analyze and troubleshoot applications. 24 | 25 | ### 9. How can you store logs in Amazon CloudWatch Logs? 26 | You can store logs in CloudWatch Logs by sending log data from your resources or applications using the CloudWatch Logs agent, SDKs, or directly through the CloudWatch API. 27 | 28 | ### 10. What is CloudWatch Logs Insights? 29 | CloudWatch Logs Insights is a feature that allows you to query and analyze log data to gain insights into your applications and resources. 30 | 31 | ### 11. 
What is the CloudWatch Events service? 32 | CloudWatch Events provides a way to respond to state changes in your AWS resources, such as launching instances, creating buckets, or modifying security groups. 33 | 34 | ### 12. How can you use CloudWatch Events to trigger actions? 35 | You can use CloudWatch Events to trigger actions by defining rules that match specific events and associate those rules with targets like Lambda functions, SQS queues, and more. 36 | 37 | ### 13. What is CloudWatch Container Insights? 38 | CloudWatch Container Insights provides a way to monitor and analyze the performance of containers managed by services like Amazon ECS and Amazon EKS. 39 | 40 | ### 14. What is CloudWatch Contributor Insights? 41 | CloudWatch Contributor Insights provides insights into the top contributors affecting the performance of your resources, helping you identify bottlenecks and optimization opportunities. 42 | 43 | ### 15. How can you use CloudWatch Logs for troubleshooting? 44 | You can use CloudWatch Logs for troubleshooting by analyzing log data, setting up alarms for specific log patterns, and correlating events to diagnose issues. 45 | 46 | ### 16. Can CloudWatch Logs Insights query data from multiple log groups? 47 | Yes, CloudWatch Logs Insights can query data from multiple log groups, allowing you to analyze and gain insights from a broader set of log data. 48 | 49 | ### 17. How can you set up CloudWatch Alarms? 50 | You can set up CloudWatch Alarms by defining a metric, setting a threshold for the metric, and specifying actions to be taken when the threshold is breached. 51 | 52 | ### 18. What is CloudWatch Anomaly Detection? 53 | CloudWatch Anomaly Detection is a feature that automatically analyzes historical metric data to create a baseline and detect deviations from expected patterns. 54 | 55 | ### 19. How does CloudWatch support cross-account monitoring? 56 | You can use CloudWatch Cross-Account Cross-Region (CACR) to set up cross-account monitoring, allowing you to view metrics and alarms from multiple AWS accounts. 57 | 58 | ### 20. Can CloudWatch integrate with other AWS services? 59 | Yes, CloudWatch can integrate with other AWS services like Amazon EC2, Amazon RDS, Lambda, and more to provide enhanced monitoring and insights into resource performance. -------------------------------------------------------------------------------- /interview-questions/code-build.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CodeBuild? 2 | AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software artifacts, such as executable files or application packages. 3 | 4 | ### 2. How does CodeBuild work? 5 | CodeBuild uses build specifications defined in buildspec.yml files. When triggered by a source code change, it pulls the code from the repository, follows the build steps specified, and generates the build artifacts. 6 | 7 | ### 3. What is a buildspec.yml file? 8 | A buildspec.yml file is used to define the build steps, environment settings, and other instructions for CodeBuild. It's stored in the same repository as the source code and provides the necessary information to execute the build. 9 | 10 | ### 4. How can you integrate CodeBuild with CodePipeline? 11 | You can add a CodeBuild action to your CodePipeline stages. This enables you to use CodeBuild as one of the actions in your CI/CD workflow for building and testing code. 12 | 13 | ### 5. 
What programming languages and build environments does CodeBuild support? 14 | CodeBuild supports a wide range of programming languages and build environments, including Java, Python, Node.js, Ruby, Go, .NET, Docker, and more. 15 | 16 | ### 6. Explain the caching feature in CodeBuild. 17 | The caching feature allows you to store certain directories in Amazon S3 to speed up build times. CodeBuild can fetch cached content instead of rebuilding dependencies, improving overall build performance. 18 | 19 | ### 7. How does CodeBuild handle environment setup and cleanup? 20 | CodeBuild automatically provisions and manages the build environment based on the specifications in the buildspec.yml file. After the build completes, CodeBuild automatically cleans up the environment. 21 | 22 | ### 8. Can you customize the build environment in CodeBuild? 23 | Yes, you can customize the build environment by specifying the base image, build tools, environment variables, and more in the buildspec.yml file. 24 | 25 | ### 9. What are artifacts and how are they used in CodeBuild? 26 | Artifacts are the output files generated by the build process. They can be binaries, archives, or any other build output. These artifacts can be stored in Amazon S3 or other destinations for later use. 27 | 28 | ### 10. How can you secure sensitive information in your build process? 29 | Sensitive information, such as passwords or API keys, should be stored in AWS Secrets Manager or AWS Systems Manager Parameter Store. You can retrieve these secrets securely during the build process. 30 | 31 | ### 11. Describe a scenario where you'd use multiple build environments in a CodeBuild project. 32 | You might use multiple build environments to support different stages of the development process. For example, you could have one environment for development builds and another for production releases. 33 | 34 | ### 12. What is the role of build projects in CodeBuild? 35 | A build project defines how CodeBuild should build your source code. It includes settings like the source repository, build environment, buildspec.yml location, and other configuration details. 36 | 37 | ### 13. How can you troubleshoot a failing build in CodeBuild? 38 | You can view build logs and examine the output of build steps to identify issues. If a buildspec.yml file has errors, they can often be resolved by reviewing the syntax and ensuring proper settings. 39 | 40 | ### 14. What's the benefit of using CodeBuild over traditional build tools? 41 | CodeBuild is fully managed and scalable. It eliminates the need to provision and manage build servers, making it easier to set up and scale build processes without infrastructure overhead. 42 | 43 | ### 15. Can you build Docker images using CodeBuild? 44 | Yes, CodeBuild supports building Docker images as part of the build process. You can define build steps to build and push Docker images to repositories like Amazon ECR. 45 | 46 | ### 16. How can you integrate third-party build tools with CodeBuild? 47 | You can define build steps in your buildspec.yml file to execute third-party build tools or scripts. This enables seamless integration with tools specific to your project's needs. 48 | 49 | ### 17. What happens if a build fails in CodeBuild? 50 | If a build fails, CodeBuild can be configured to stop the pipeline in CodePipeline, send notifications, and provide detailed logs to help diagnose and resolve the issue. 51 | 52 | ### 18. Can you set up multiple build projects within a single CodeBuild project? 
53 | A single CodeBuild project defines one build configuration, but you can create several CodeBuild projects (or use batch builds) and run them in parallel, for example as parallel build actions in a CodePipeline stage. This is useful when you want to build different components of your application in parallel. 54 | 55 | ### 19. How can you monitor and visualize build performance in CodeBuild? 56 | You can use Amazon CloudWatch to collect and visualize metrics from CodeBuild, such as build duration, success rates, and resource utilization. 57 | 58 | ### 20. Explain how CodeBuild pricing works. 59 | CodeBuild pricing is based on the number of build minutes consumed. A build minute is billed per minute of code build time, including time spent provisioning and cleaning up the build environment. 60 | -------------------------------------------------------------------------------- /interview-questions/code-deploy.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CodeDeploy? 2 | AWS CodeDeploy is a fully managed deployment service that automates software deployments to a variety of compute platforms, including Amazon EC2 instances, AWS Lambda functions, and on-premises servers. 3 | 4 | ### 2. How does CodeDeploy work? 5 | CodeDeploy coordinates application deployments by pushing code changes to instances, managing deployment lifecycle events, and rolling back deployments if necessary. 6 | 7 | ### 3. What are the deployment strategies supported by CodeDeploy? 8 | CodeDeploy supports in-place and blue/green deployment types; for Lambda and ECS, blue/green deployments can shift traffic all at once, linearly, or with canary configurations. Each strategy determines how new code versions are rolled out to instances. 9 | 10 | ### 4. Explain the Blue-Green deployment strategy in CodeDeploy. 11 | In Blue-Green deployment, two identical environments (blue and green) are set up. New code is deployed to the green environment, and after successful testing, traffic is switched from the blue to the green environment. 12 | 13 | ### 5. How does CodeDeploy handle rollbacks? 14 | If a deployment fails or triggers alarms, CodeDeploy can automatically roll back to the previous version of the application, minimizing downtime and impact. 15 | 16 | ### 6. Can you use CodeDeploy for serverless deployments? 17 | Yes, CodeDeploy can be used to deploy AWS Lambda functions. It facilitates smooth updates to Lambda function code without service interruption. 18 | 19 | ### 7. What is an Application Revision in CodeDeploy? 20 | An Application Revision is a version of your application code that is deployed using CodeDeploy. It can include application files, configuration files, and scripts necessary for deployment. 21 | 22 | ### 8. How can you integrate CodeDeploy with your CI/CD pipeline? 23 | CodeDeploy can be integrated into your CI/CD pipeline using services like AWS CodePipeline. After successful builds, the pipeline triggers CodeDeploy to deploy the new version. 24 | 25 | ### 9. What is a Deployment Group in CodeDeploy? 26 | A Deployment Group is a set of instances or Lambda functions targeted for deployment. It defines where the application should be deployed and how the deployment should be executed. 27 | 28 | ### 10. How can you ensure zero downtime during application deployments? 29 | Zero downtime can be achieved by using strategies like Blue-Green deployments or Canary deployments. These strategies allow you to gradually shift traffic to the new version while testing its stability. 30 | 31 | ### 11. Explain how you can manage deployment configuration in CodeDeploy.
32 | Deployment configuration specifies parameters such as deployment style, traffic routing, and the order of deployment lifecycle events. It allows you to fine-tune deployment behavior. 33 | 34 | ### 12. How can you handle database schema changes during deployments? 35 | Database schema changes can be managed using pre- and post-deployment scripts. These scripts ensure that the database is properly updated before and after deployment. 36 | 37 | ### 13. Describe a scenario where you would use the Canary deployment strategy. 38 | You might use the Canary strategy when you want to gradually expose a new version to a small portion of your users for testing before rolling it out to the entire user base. 39 | 40 | ### 14. How does CodeDeploy handle instances with different capacities? 41 | CodeDeploy can automatically distribute the new version of the application across instances with varying capacities by taking into account the deployment configuration and specified traffic weights. 42 | 43 | ### 15. What are hooks in CodeDeploy? 44 | Hooks are scripts that run at various points in the deployment lifecycle. They allow you to perform custom actions, such as validating deployments or running tests, at specific stages. 45 | 46 | ### 16. How does CodeDeploy ensure consistent deployments across instances? 47 | CodeDeploy uses an agent on each instance that manages deployment lifecycle events and ensures consistent application deployments. 48 | 49 | ### 17. What is the difference between an EC2/On-Premises deployment and a Lambda deployment in CodeDeploy? 50 | An EC2/On-Premises deployment involves deploying code to instances, while a Lambda deployment deploys code to Lambda functions. Both utilize CodeDeploy's deployment capabilities. 51 | 52 | ### 18. How can you monitor the progress of a deployment in CodeDeploy? 53 | You can monitor deployments using the AWS Management Console, AWS CLI, or AWS SDKs. CodeDeploy provides detailed logs and metrics to track the status and progress of deployments. 54 | 55 | ### 19. Can CodeDeploy deploy applications across multiple regions? 56 | Yes, CodeDeploy can deploy applications to multiple regions. However, each region requires its own deployment configuration and setup. 57 | 58 | ### 20. What is the role of the CodeDeploy agent? 59 | The CodeDeploy agent is responsible for executing deployment instructions on instances. It communicates with the CodeDeploy service and manages deployment lifecycle events. -------------------------------------------------------------------------------- /interview-questions/code-pipeline.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS CodePipeline? 2 | AWS CodePipeline is a fully managed continuous integration and continuous delivery (CI/CD) service that automates the release process of software applications. It enables developers to build, test, and deploy their code changes automatically and efficiently. 3 | 4 | ### 2. How does CodePipeline work? 5 | CodePipeline orchestrates the flow of code changes through multiple stages. Each stage represents a step in the release process, such as source code retrieval, building, testing, and deployment. Developers define the pipeline structure, including the sequence of stages and associated actions, to automate the entire software delivery lifecycle. 6 | 7 | ### 3. Explain the basic structure of a CodePipeline. 8 | A CodePipeline consists of stages, actions, and transitions. 
Stages are logical phases of the pipeline, actions are the tasks performed within those stages (e.g., source code checkout, deployment), and transitions define the flow of execution between stages. 9 | 10 | ### 4. What are artifacts in CodePipeline? 11 | Artifacts are the output files generated during the build or compilation phase of the pipeline. These artifacts are the result of a successful action and are used as inputs for subsequent stages. For example, an artifact could be a packaged application ready for deployment. 12 | 13 | ### 5. Describe the role of the Source stage in CodePipeline. 14 | The Source stage is the starting point of the pipeline. It retrieves the source code from a version control repository, such as GitHub or AWS CodeCommit. When changes are detected in the repository, the Source stage triggers the pipeline execution. 15 | 16 | ### 6. How can you prevent unauthorized changes to the pipeline? 17 | Access to CodePipeline resources can be controlled using AWS Identity and Access Management (IAM) policies. By configuring IAM roles and permissions, you can restrict access to only authorized individuals or processes, preventing unauthorized modifications to the pipeline. 18 | 19 | ### 7. Can you explain the concept of a manual approval action? 20 | A manual approval action is used to pause the pipeline and require human intervention before proceeding to the next stage. This action is often employed for production deployments, allowing a designated person to review and approve changes before they are released. 21 | 22 | ### 8. What is a webhook in CodePipeline? 23 | A webhook is a mechanism that allows external systems, such as version control repositories like GitHub, to automatically trigger a pipeline execution when code changes are pushed. This integration facilitates the continuous integration process by initiating the pipeline without manual intervention. 24 | 25 | ### 9. How can you parallelize actions in CodePipeline? 26 | Parallel execution of actions is achieved by using parallel stages. Within a stage, you can define multiple actions that run concurrently, optimizing the pipeline's execution time and improving overall efficiency. 27 | 28 | ### 10. What's the difference between AWS CodePipeline and AWS CodeDeploy? 29 | AWS CodePipeline manages the entire CI/CD workflow, encompassing various stages like building, testing, and deploying. AWS CodeDeploy, on the other hand, focuses solely on the deployment phase by automating application deployment to instances or services. 30 | 31 | ### 11. Describe a scenario where you'd use a custom action in CodePipeline. 32 | A custom action is useful when integrating with third-party tools or services that are not natively supported by CodePipeline's built-in actions. For example, you could create a custom action to integrate with a specialized security scanning tool. 33 | 34 | ### 12. How can you handle different deployment environments (e.g., dev, test, prod) in CodePipeline? 35 | To handle different deployment environments, you can create separate stages for each environment within the pipeline. This allows you to customize the deployment process, testing procedures, and configurations specific to each environment. 36 | 37 | ### 13. Explain how you would set up automatic rollbacks in CodePipeline. 38 | Automatic rollbacks can be set up using CloudWatch alarms and AWS Lambda functions. 
If the deployment triggers an alarm (e.g., error rate exceeds a threshold), the Lambda function can initiate a rollback by deploying the previous version of the application. 39 | 40 | ### 14. How do you handle sensitive information like API keys in your CodePipeline? 41 | Sensitive information, such as API keys or database credentials, should be stored in AWS Secrets Manager or AWS Systems Manager Parameter Store. During pipeline execution, you can retrieve these secrets and inject them securely into the deployment process. 42 | 43 | ### 15. Describe Blue-Green deployment and how it can be achieved with CodePipeline. 44 | Blue-Green deployment involves running two separate environments (blue and green) concurrently. CodePipeline can achieve this by having distinct stages for each environment, allowing testing of the new version in the green environment before redirecting traffic from blue to green. 45 | 46 | ### 16. What is the difference between a pipeline and a stage in CodePipeline? 47 | A pipeline represents the end-to-end workflow, comprising multiple stages. Stages are the individual components within the pipeline, each responsible for specific actions or tasks. 48 | 49 | ### 17. How can you incorporate testing into your CodePipeline? 50 | Testing can be integrated into CodePipeline by adding testing actions to appropriate stages. Unit tests, integration tests, and other types of tests can be performed as part of the pipeline to ensure code quality and functionality. 51 | 52 | ### 18. What happens if an action in a pipeline fails? 53 | If an action fails, CodePipeline can be configured to respond in various ways. It can stop the pipeline, notify relevant stakeholders, trigger a rollback, or continue with the pipeline execution based on predefined conditions and actions. 54 | 55 | ### 19. Explain how you can create a reusable pipeline template in CodePipeline. 56 | To create a reusable pipeline template, you can use AWS CloudFormation. Define the pipeline structure, stages, and actions in a CloudFormation template. This enables you to consistently deploy pipelines across multiple projects or applications. 57 | 58 | ### 20. Can you integrate CodePipeline with on-premises resources? 59 | Yes. Although CodePipeline itself runs in AWS, it can deploy to on-premises servers through AWS CodeDeploy's support for on-premises instances, and you can plug self-hosted tooling (for example, a Jenkins server or a custom action job worker) into pipeline stages. This allows you to connect your existing tools and infrastructure with your AWS-based CI/CD pipeline, facilitating hybrid deployments. -------------------------------------------------------------------------------- /interview-questions/dynamodb.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon DynamoDB? 2 | Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It's designed to handle massive amounts of structured data across various use cases. 3 | 4 | ### 2. How does Amazon DynamoDB work? 5 | DynamoDB stores data in tables, each with a primary key and optional secondary indexes. It automatically replicates data across multiple Availability Zones for high availability and durability. 6 | 7 | ### 3. What types of data models does Amazon DynamoDB support? 8 | DynamoDB supports both a key-value data model and a document data model (tables contain items with flexible, typed attributes), as illustrated in the example below. It's well-suited for a variety of applications, from simple key-value stores to complex data models. 9 |
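As a brief, hedged illustration of the key-value/document model (the table name and attributes below are placeholders, not part of this repository):

```bash
# Write an item: the primary key (Artist + SongTitle) identifies it, the remaining attributes are flexible and typed
aws dynamodb put-item \
    --table-name Music \
    --item '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}, "Year": {"N": "2024"}}'

# Read the same item back by its full primary key
aws dynamodb get-item \
    --table-name Music \
    --key '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Call Me Today"}}'
```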
10 | ### 4. What are the key features of Amazon DynamoDB? 11 | Key features of DynamoDB include automatic scaling, multi-master replication, global tables for global distribution, support for ACID transactions, and seamless integration with AWS services. 12 | 13 | ### 5. What is the primary key in Amazon DynamoDB? 14 | The primary key is used to uniquely identify items within a table. It consists of a partition key (and optional sort key), which determines how data is distributed and stored. 15 | 16 | ### 6. How does partitioning work in Amazon DynamoDB? 17 | DynamoDB divides a table's data into partitions based on the partition key. Each partition can store up to 10 GB of data and handle a certain amount of read and write capacity. 18 | 19 | ### 7. What is the difference between a partition key and a sort key in DynamoDB? 20 | The partition key is used to distribute data across partitions, while the sort key is used to determine the order of items within a partition. Together, they create a unique identifier for each item. 21 | 22 | ### 8. How can you query data in Amazon DynamoDB? 23 | You can use the Query operation to retrieve items from a table based on the primary key or a secondary index. Queries are efficient and support various filter expressions. 24 | 25 | ### 9. What are secondary indexes in Amazon DynamoDB? 26 | Secondary indexes allow you to query the data using attributes other than the primary key. Global secondary indexes can use a different partition key and sort key, while local secondary indexes share the table's partition key with an alternate sort key and must be defined when the table is created. 27 | 28 | ### 10. What is eventual consistency in DynamoDB? 29 | DynamoDB offers both strong consistency and eventual consistency for read operations. With eventual consistency, changes made to items may take some time to propagate across all replicas. 30 | 31 | ### 11. How can you ensure data durability in Amazon DynamoDB? 32 | DynamoDB replicates data across multiple Availability Zones, ensuring data durability and availability even in the event of hardware failures or AZ outages. 33 | 34 | ### 12. Can you change the schema of an existing Amazon DynamoDB table? 35 | Partially. DynamoDB is schemaless for non-key attributes, and on an existing table you can adjust provisioned throughput, add or remove global secondary indexes, and enable features such as Streams. However, the primary key (and any local secondary indexes) cannot be changed after the table is created; that requires creating a new table and migrating the data. 36 | 37 | ### 13. What is the capacity mode in Amazon DynamoDB? 38 | DynamoDB offers two capacity modes: Provisioned and On-Demand. In Provisioned mode, you provision a specific amount of read and write capacity. In On-Demand mode, capacity is automatically adjusted based on usage. 39 | 40 | ### 14. How can you automate the scaling of Amazon DynamoDB tables? 41 | You can enable auto scaling for your DynamoDB tables to automatically adjust read and write capacity based on traffic patterns. Auto scaling helps maintain optimal performance. 42 | 43 | ### 15. What is DynamoDB Streams? 44 | DynamoDB Streams captures changes to items in a table, allowing you to process and react to those changes in real time. It's often used for building event-driven applications. 45 | 46 | ### 16. How can you back up Amazon DynamoDB tables? 47 | DynamoDB provides backup and restore capabilities. You can create on-demand backups or enable continuous backups, which automatically create backups as data changes. 48 | 49 | ### 17. What is the purpose of the DynamoDB Accelerator (DAX)? 50 | DynamoDB Accelerator (DAX) is an in-memory cache that provides high-speed access to frequently accessed items. It reduces the need to read data from the main DynamoDB table. 51 | 52 | ### 18.
How can you implement transactions in Amazon DynamoDB? 53 | DynamoDB supports ACID transactions for multiple item updates. You can use the `TransactWriteItems` operation to group multiple updates into a single, atomic transaction. 54 | 55 | ### 19. What is the difference between Amazon DynamoDB and Amazon S3? 56 | Amazon DynamoDB is a NoSQL database service optimized for high-performance, low-latency applications with structured data. Amazon S3 is an object storage service used for storing files, images, videos, and more. 57 | 58 | ### 20. What are Global Tables in Amazon DynamoDB? 59 | Global Tables enable you to replicate data across multiple AWS regions, providing low-latency access to DynamoDB data from users around the world. -------------------------------------------------------------------------------- /interview-questions/ecr.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon Elastic Container Registry (ECR)? 2 | Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry that makes it easy to store, manage, and deploy Docker container images. 3 | 4 | ### 2. How does Amazon ECR work? 5 | Amazon ECR allows you to push Docker container images to a repository and then pull those images to deploy containers on Amazon ECS, Kubernetes, or other container orchestrators. 6 | 7 | ### 3. What are the key features of Amazon ECR? 8 | Key features of Amazon ECR include secure and private Docker image storage, integration with AWS Identity and Access Management (IAM), lifecycle policies, and image vulnerability scanning. 9 | 10 | ### 4. What is a Docker container image? 11 | A Docker container image is a lightweight, standalone, and executable software package that contains everything needed to run a piece of software, including code, runtime, libraries, and settings. 12 | 13 | ### 5. How do you push Docker images to Amazon ECR? 14 | You can use the `docker push` command to push Docker images to Amazon ECR repositories after authenticating with your AWS credentials. 15 | 16 | ### 6. How can you pull Docker images from Amazon ECR? 17 | You can use the `docker pull` command to pull Docker images from Amazon ECR repositories after authenticating with your AWS credentials. 18 | 19 | ### 7. What is the significance of Amazon ECR lifecycle policies? 20 | Amazon ECR lifecycle policies allow you to define rules that automatically clean up and manage images based on conditions like image age, count, and usage. 21 | 22 | ### 8. How does Amazon ECR support image vulnerability scanning? 23 | Amazon ECR supports image vulnerability scanning by integrating with Amazon ECR Public and AWS Security Hub to provide insights into the security posture of your container images. 24 | 25 | ### 9. How can you ensure private and secure image storage in Amazon ECR? 26 | Amazon ECR repositories are private by default and can be accessed only by authorized users and roles. You can control access using IAM policies and resource-based policies. 27 | 28 | ### 10. How does Amazon ECR integrate with Amazon ECS? 29 | Amazon ECR integrates seamlessly with Amazon ECS, allowing you to use your ECR repositories to store and manage container images for your ECS tasks and services. 30 | 31 | ### 11. What are ECR lifecycle policies? 32 | ECR lifecycle policies are rules you define to manage the retention of images in your repositories. They help keep your image repositories organized and free up storage space. 33 | 34 | ### 12. 
Can you use Amazon ECR for multi-region deployments? 35 | Yes, you can use Amazon ECR in multi-region deployments by replicating images across different regions and using cross-region replication. 36 | 37 | ### 13. What is Amazon ECR Public? 38 | Amazon ECR Public is a feature that allows you to store and share publicly accessible container images. It's useful for distributing open-source software or other public content. 39 | 40 | ### 14. How can you improve image build and deployment speed using Amazon ECR? 41 | You can improve image build and deployment speed by using Amazon ECR's image layer caching and pulling pre-built base images from the registry. 42 | 43 | ### 15. What is the Amazon ECR Docker Credential Helper? 44 | The Amazon ECR Docker Credential Helper is a tool that simplifies authentication to Amazon ECR repositories, allowing Docker to authenticate with ECR using IAM credentials. 45 | 46 | ### 16. How does Amazon ECR support image versioning? 47 | Amazon ECR supports image versioning by allowing you to tag images with different version labels. This helps in maintaining different versions of the same image. 48 | 49 | ### 17. Can you use Amazon ECR with Kubernetes? 50 | Yes, you can use Amazon ECR with Kubernetes by configuring the necessary authentication and pulling container images from ECR repositories when deploying pods. 51 | 52 | ### 18. How does Amazon ECR handle image replication? 53 | Amazon ECR provides cross-region replication to replicate images to different AWS regions, improving availability and reducing latency for users in different regions. 54 | 55 | ### 19. What is the cost structure of Amazon ECR? 56 | Amazon ECR charges based on the amount of data stored in your repositories and the data transferred out to other AWS regions or services. 57 | 58 | ### 20. How can you ensure high availability for images in Amazon ECR? 59 | Amazon ECR provides high availability by replicating images across multiple Availability Zones within a region, ensuring durability and availability of your container images. -------------------------------------------------------------------------------- /interview-questions/ecs.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon ECS? 2 | Amazon Elastic Container Service (Amazon ECS) is a fully managed container orchestration service that allows you to run, manage, and scale Docker containers on a cluster of Amazon EC2 instances or AWS Fargate. 3 | 4 | ### 2. How does Amazon ECS work? 5 | Amazon ECS simplifies the deployment and management of containers by providing APIs to launch and stop containerized applications. It handles the underlying infrastructure and scaling for you. 6 | 7 | ### 3. What is a container in the context of Amazon ECS? 8 | A container is a lightweight, standalone executable package that includes everything needed to run a piece of software, including the code, runtime, libraries, and system tools. 9 | 10 | ### 4. What is a task definition in Amazon ECS? 11 | A task definition is a blueprint for running a Docker container as part of a task in Amazon ECS. It defines container configurations, resources, networking, and more. 12 | 13 | ### 5. How are tasks and services related in Amazon ECS? 14 | A task is a running container or a group of related containers defined by a task definition. A service in ECS manages the desired number of tasks to maintain availability and desired state. 15 | 16 | ### 6. What is the difference between Amazon ECS and AWS Fargate? 
17 | Amazon ECS gives you control over EC2 instances to run containers, while AWS Fargate is a serverless compute engine for containers. With Fargate, you don't need to manage the underlying infrastructure. 18 | 19 | ### 7. How can you schedule tasks in Amazon ECS? 20 | Tasks in Amazon ECS can be scheduled using services, which maintain a desired count of tasks in a cluster. You can also use Amazon ECS Events to trigger task execution based on events. 21 | 22 | ### 8. What is the purpose of the Amazon ECS cluster? 23 | An Amazon ECS cluster is a logical grouping of container instances and tasks. It provides a way to manage and organize your containers within a scalable infrastructure. 24 | 25 | ### 9. How can you scale containers in Amazon ECS? 26 | You can scale containers by adjusting the desired task count of an ECS service. Amazon ECS automatically adjusts the number of tasks based on your scaling policies. 27 | 28 | ### 10. What is Amazon ECS Agent? 29 | The Amazon ECS Agent is a component that runs on each EC2 instance in your ECS cluster. It's responsible for communicating with the ECS control plane and managing tasks on the instance. 30 | 31 | ### 11. What is the difference between a task and a container instance in Amazon ECS? 32 | A task is a running instance of a containerized application, while a container instance is an Amazon EC2 instance that's part of an ECS cluster and runs the ECS Agent. 33 | 34 | ### 12. How can you manage container secrets in Amazon ECS? 35 | You can manage container secrets using AWS Secrets Manager or AWS Systems Manager Parameter Store. Secrets can be injected into containers at runtime as environment variables. 36 | 37 | ### 13. What is the purpose of Amazon ECS Capacity Providers? 38 | ECS Capacity Providers allow you to manage capacity and scaling for your tasks. They define how tasks are placed and whether to use On-Demand Instances or Spot Instances. 39 | 40 | ### 14. Can you use Amazon ECS to orchestrate non-Docker workloads? 41 | Yes, Amazon ECS supports running tasks with the Fargate launch type that allow you to specify images from various sources, including Amazon ECR, Docker Hub, and more. 42 | 43 | ### 15. How does Amazon ECS integrate with other AWS services? 44 | Amazon ECS integrates with other AWS services like Amazon CloudWatch for monitoring, AWS Identity and Access Management (IAM) for access control, and Amazon VPC for networking. 45 | 46 | ### 16. What is the difference between the Fargate and EC2 launch types in Amazon ECS? 47 | The Fargate launch type lets you run containers without managing the underlying infrastructure, while the EC2 launch type gives you control over the EC2 instances where containers are deployed. 48 | 49 | ### 17. How can you manage container networking in Amazon ECS? 50 | Amazon ECS uses Amazon VPC networking for containers. You can configure networking using task definitions, security groups, and subnets to control communication between containers. 51 | 52 | ### 18. What is the purpose of the Amazon ECS Task Placement Strategy? 53 | Task Placement Strategy allows you to define rules for how tasks are distributed across container instances. It can help optimize resource usage and ensure high availability. 54 | 55 | ### 19. What is the role of the ECS Service Scheduler? 56 | The ECS Service Scheduler is responsible for placing and managing tasks across the cluster. It ensures tasks are launched, monitored, and replaced as needed. 57 | 58 | ### 20. How can you ensure high availability in Amazon ECS? 
59 | To achieve high availability, you can use Amazon ECS services with multiple tasks running across multiple Availability Zones (AZs), combined with Auto Scaling to maintain the desired task count. -------------------------------------------------------------------------------- /interview-questions/eks.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon EKS? 2 | Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that makes it easier to deploy, manage, and scale containerized applications using Kubernetes. 3 | 4 | ### 2. How does Amazon EKS work? 5 | Amazon EKS eliminates the need to install, operate, and maintain your own Kubernetes control plane. It provides a managed environment for deploying, managing, and scaling containerized applications using Kubernetes. 6 | 7 | ### 3. What is Kubernetes? 8 | Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. 9 | 10 | ### 4. What are the key features of Amazon EKS? 11 | Key features of Amazon EKS include automatic upgrades, integration with AWS services, high availability with multiple availability zones, security with IAM and VPC, and simplified Kubernetes operations. 12 | 13 | ### 5. What is a Kubernetes cluster? 14 | A Kubernetes cluster is a collection of nodes (Amazon EC2 instances) that run containerized applications managed by Kubernetes. It includes a control plane and worker nodes. 15 | 16 | ### 6. How do you create a Kubernetes cluster in Amazon EKS? 17 | To create an EKS cluster, you use the AWS Management Console, AWS CLI, or AWS CloudFormation. EKS automatically provisions the control plane and worker nodes. 18 | 19 | ### 7. What are Kubernetes nodes? 20 | Kubernetes nodes are the worker machines that run containers. They host pods, which are the smallest deployable units in Kubernetes. 21 | 22 | ### 8. How does Amazon EKS manage Kubernetes control plane updates? 23 | Amazon EKS automatically handles the upgrades of the Kubernetes control plane. It schedules and applies updates while ensuring minimal disruption to the applications running on the cluster. 24 | 25 | ### 9. What is the difference between Amazon EKS and Amazon ECS? 26 | Amazon EKS provides managed Kubernetes clusters, while Amazon ECS provides managed Docker container orchestration. EKS is better suited for complex microservices architectures using Kubernetes. 27 | 28 | ### 10. How can you scale applications in Amazon EKS? 29 | You can scale applications in EKS by adjusting the desired replica count of Kubernetes Deployments or StatefulSets. EKS automatically manages the scaling of underlying resources. 30 | 31 | ### 11. What is the role of Amazon EKS Managed Node Groups? 32 | Amazon EKS Managed Node Groups simplify the deployment and management of worker nodes in an EKS cluster. They automatically provision, configure, and scale nodes. 33 | 34 | ### 12. How does Amazon EKS handle networking? 35 | Amazon EKS uses Amazon VPC for networking. It creates a VPC and subnets for your cluster, and each pod in the cluster gets an IP address from the subnet. 36 | 37 | ### 13. What is the Kubernetes Pod in Amazon EKS? 38 | A Kubernetes Pod is the smallest deployable unit in Kubernetes. It represents a single instance of a running process in the cluster and can consist of one or more containers. 39 | 40 | ### 14. How does Amazon EKS integrate with AWS services? 
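One concrete integration point, sketched with boto3 (the cluster name is an assumption): the cluster's OIDC issuer is what you register with IAM so Kubernetes service accounts can assume IAM roles (IRSA), and control-plane logs can be shipped to CloudWatch Logs.

```python
import boto3

eks = boto3.client("eks")

cluster = eks.describe_cluster(name="demo-cluster")["cluster"]   # assumed cluster name

# OIDC issuer URL used to set up IAM roles for service accounts (IRSA).
print(cluster["identity"]["oidc"]["issuer"])

# Which control-plane log types (api, audit, ...) are being sent to CloudWatch Logs.
print(cluster["logging"]["clusterLogging"])
```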
41 | Amazon EKS integrates with various AWS services like IAM for access control, Amazon VPC for networking, and CloudWatch for monitoring and logging. 42 | 43 | ### 15. Can you run multiple Kubernetes clusters on Amazon EKS? 44 | Yes, you can run multiple Kubernetes clusters on Amazon EKS, each with its own set of worker nodes and applications. 45 | 46 | ### 16. What is the difference between Kubernetes Deployment and StatefulSet? 47 | A Kubernetes Deployment is suitable for stateless applications, while a StatefulSet is designed for stateful applications that require stable network identifiers and ordered, graceful scaling. 48 | 49 | ### 17. How can you secure an Amazon EKS cluster? 50 | You can secure an EKS cluster by using AWS Identity and Access Management (IAM) roles, integrating with Amazon VPC for networking isolation, and applying security best practices to your Kubernetes workloads. 51 | 52 | ### 18. What is the Kubernetes Operator in Amazon EKS? 53 | A Kubernetes Operator is a method of packaging, deploying, and managing an application using Kubernetes-native APIs. It allows for more automated management of complex applications. 54 | 55 | ### 19. How can you automate application deployments in Amazon EKS? 56 | You can use Kubernetes Deployments or other tools like Helm to automate application deployments in Amazon EKS. These tools help manage the lifecycle of containerized applications. 57 | 58 | ### 20. How does Amazon EKS handle high availability? 59 | Amazon EKS supports high availability by distributing control plane components across multiple availability zones. It also offers features like managed node groups and Auto Scaling for worker nodes. 60 | -------------------------------------------------------------------------------- /interview-questions/elastic-bean-stalk.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS Elastic Beanstalk? 2 | AWS Elastic Beanstalk is a platform-as-a-service (PaaS) offering that simplifies application deployment and management. It handles infrastructure provisioning, deployment, monitoring, and scaling, allowing developers to focus on writing code. 3 | 4 | ### 2. How does Elastic Beanstalk work? 5 | Elastic Beanstalk abstracts the infrastructure layer, allowing you to upload your code (web application or microservices) and configuration. It then automatically deploys, manages, and scales your application based on the platform, language, and environment settings you choose. 6 | 7 | ### 3. What languages and platforms does Elastic Beanstalk support? 8 | Elastic Beanstalk supports multiple programming languages and platforms, including Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker. 9 | 10 | ### 4. What is an Elastic Beanstalk environment? 11 | An Elastic Beanstalk environment is a specific instance of your application that includes the runtime, resources, and configuration settings. You can have multiple environments (e.g., development, testing, production) for the same application. 12 | 13 | ### 5. How does Elastic Beanstalk handle updates and deployments? 14 | Elastic Beanstalk supports both All at Once and Rolling deployments. All at Once deploys updates to all instances simultaneously, while Rolling deploys updates in batches to reduce downtime. 15 | 16 | ### 6. Can you customize the infrastructure in Elastic Beanstalk? 
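For example (a hedged sketch; the environment name and values are made up), option settings let you change the underlying instances and Auto Scaling limits of an existing environment through the API:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

eb.update_environment(
    EnvironmentName="my-app-env",      # assumed environment name
    OptionSettings=[
        {"Namespace": "aws:autoscaling:launchconfiguration",
         "OptionName": "InstanceType", "Value": "t3.small"},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MaxSize", "Value": "4"},
    ],
)
```

The same settings can also live in `.ebextensions` configuration files committed alongside the application.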
17 | Yes, Elastic Beanstalk allows you to customize the environment's resources, configuration, and scaling settings through environment configuration files or the AWS Management Console. 18 | 19 | ### 7. How can you monitor the health of an Elastic Beanstalk environment? 20 | Elastic Beanstalk provides health monitoring through CloudWatch. You can set up alarms based on metrics like CPU utilization, latency, and request count. 21 | 22 | ### 8. What is the Elastic Beanstalk Command Line Interface (EB CLI)? 23 | The EB CLI is a command-line tool that provides an interface for interacting with Elastic Beanstalk. It enables developers to manage applications and environments using commands. 24 | 25 | ### 9. How does Elastic Beanstalk handle automatic scaling? 26 | Elastic Beanstalk can automatically scale your application based on the configured scaling triggers, such as CPU utilization, network traffic, or other custom metrics. 27 | 28 | ### 10. Explain the difference between Single Instance and Load Balanced environments in Elastic Beanstalk. 29 | In a Single Instance environment, your application runs on a single EC2 instance. In a Load Balanced environment, your application runs on multiple instances behind a load balancer, improving availability and scalability. 30 | 31 | ### 11. How does Elastic Beanstalk support rolling back deployments? 32 | Elastic Beanstalk supports rolling back to a previous version if an update results in errors or issues. You can initiate a rollback through the AWS Management Console or the EB CLI. 33 | 34 | ### 12. Can Elastic Beanstalk deploy applications to multiple availability zones? 35 | Yes, Elastic Beanstalk can automatically deploy your application to multiple availability zones within a region to enhance high availability. 36 | 37 | ### 13. How can you handle environment-specific configurations in Elastic Beanstalk? 38 | You can use configuration files, environment variables, or Parameter Store to manage environment-specific configurations, ensuring your application behaves consistently across environments. 39 | 40 | ### 14. Describe how you would configure environment variables in Elastic Beanstalk. 41 | Environment variables can be configured using the AWS Management Console, the EB CLI, or Elastic Beanstalk configuration files. They provide a way to pass dynamic values to your application. 42 | 43 | ### 15. Can Elastic Beanstalk deploy applications stored in containers? 44 | Yes, Elastic Beanstalk supports deploying Docker containers. You can specify a Docker image repository and Elastic Beanstalk will handle deployment and management of the containerized application. 45 | 46 | ### 16. How can you automate deployments to Elastic Beanstalk? 47 | You can use the AWS CodePipeline service to automate the deployment process to Elastic Beanstalk. This helps create a continuous integration and continuous delivery (CI/CD) pipeline. 48 | 49 | ### 17. What is the difference between an environment URL and a CNAME in Elastic Beanstalk? 50 | An environment URL is a unique URL automatically generated for each Elastic Beanstalk environment. A CNAME (Canonical Name) is an alias that you can configure to map a custom domain to your Elastic Beanstalk environment. 51 | 52 | ### 18. Can Elastic Beanstalk be used for serverless applications? 53 | While Elastic Beanstalk handles infrastructure provisioning, it is not a serverless service like AWS Lambda. It's designed to manage and scale applications on virtual machines. 54 | 55 | ### 19. 
What are worker environments in Elastic Beanstalk? 56 | Worker environments in Elastic Beanstalk are used for background tasks and processing. They handle tasks asynchronously, separate from the main application environment. 57 | 58 | ### 20. How can you back up and restore an Elastic Beanstalk environment? 59 | Elastic Beanstalk does not provide built-in backup and restore capabilities. However, you can use AWS services like Amazon RDS for database backups and CloudFormation for environment configuration versioning. -------------------------------------------------------------------------------- /interview-questions/elastic-cloud-compute.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon EC2? 2 | Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It allows users to create, configure, and manage virtual servers (known as instances) in the AWS cloud. 3 | 4 | ### 2. How does Amazon EC2 work? 5 | Amazon EC2 enables users to launch instances based on pre-configured Amazon Machine Images (AMIs). These instances run within virtual private clouds (VPCs) and can be configured with various resources like CPU, memory, storage, and networking. 6 | 7 | ### 3. What are the different instance types in EC2? 8 | Amazon EC2 offers a wide range of instance types optimized for different use cases, such as general-purpose, memory-optimized, compute-optimized, and GPU instances. 9 | 10 | ### 4. Explain the differences between on-demand, reserved, and spot instances. 11 | - On-Demand Instances: Pay-as-you-go pricing with no upfront commitment. 12 | - Reserved Instances: Provides capacity reservation at a lower cost in exchange for a commitment. 13 | - Spot Instances: Allows users to bid on unused EC2 capacity, potentially leading to significantly lower costs. 14 | 15 | ### 5. How can you improve the availability of EC2 instances? 16 | To improve availability, you can place instances in multiple Availability Zones (AZs) within a region. This helps ensure redundancy and fault tolerance. 17 | 18 | ### 6. What is an Amazon Machine Image (AMI)? 19 | An Amazon Machine Image (AMI) is a pre-configured template that contains the information required to launch an EC2 instance. AMIs can include an operating system, applications, data, and configuration settings. 20 | 21 | ### 7. How can you secure your EC2 instances? 22 | You can enhance the security of EC2 instances by using security groups, Network ACLs, key pairs, and configuring firewalls. Additionally, implementing multi-factor authentication (MFA) is recommended for account access. 23 | 24 | ### 8. Explain the difference between public IP and Elastic IP in EC2. 25 | A public IP is assigned to an instance at launch, but it can change if the instance is stopped and started. An Elastic IP is a static IP address that can be associated with an instance, providing a consistent public IP even after stopping and starting the instance. 26 | 27 | ### 9. How can you scale your application using EC2? 28 | You can scale your application horizontally by adding more instances. Amazon EC2 Auto Scaling helps you automatically adjust the number of instances based on demand. 29 | 30 | ### 10. What is Amazon EBS? 31 | Amazon Elastic Block Store (EBS) provides persistent block storage volumes for EC2 instances. EBS volumes can be attached to instances and used as data storage. 32 | 33 | ### 11. How can you encrypt data on EBS volumes? 
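A minimal boto3 sketch (IDs are placeholders): new volumes can be created encrypted, and an existing unencrypted volume is typically encrypted by copying its snapshot with encryption enabled and creating a new volume from the copy.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a new, encrypted volume (omit KmsKeyId to use the default aws/ebs key).
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=20,                    # GiB
    VolumeType="gp3",
    Encrypted=True,
)

# Encrypt existing data: copy an unencrypted snapshot with encryption turned on
# (run this in the destination region), then create a volume from the new snapshot.
ec2.copy_snapshot(
    SourceSnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    SourceRegion="us-east-1",
    Encrypted=True,
)
```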
34 | You can encrypt EBS volumes using Amazon EBS encryption. You can choose to create encrypted volumes during instance launch or encrypt existing unencrypted volumes. 35 | 36 | ### 12. How can you back up your EC2 instances? 37 | You can create snapshots of EBS volumes, which serve as backups. These snapshots can be used to create new EBS volumes or restore existing ones. 38 | 39 | ### 13. What is the difference between instance store and EBS-backed instances? 40 | Instance store instances use ephemeral storage that is directly attached to the instance, providing high I/O performance. EBS-backed instances use EBS volumes for storage, offering persistent data storage. 41 | 42 | ### 14. What are instance metadata and user data in EC2? 43 | Instance metadata provides information about an instance, such as its IP address, instance type, and IAM role. User data is information that you can pass to an instance during launch to customize its behavior. 44 | 45 | ### 15. How can you launch instances in a Virtual Private Cloud (VPC)? 46 | When launching instances, you can choose a specific VPC and subnet. This ensures that the instances are launched within the defined network environment. 47 | 48 | ### 16. What is the purpose of an EC2 security group? 49 | An EC2 security group acts as a virtual firewall for instances to control inbound and outbound traffic. You can specify rules to allow or deny traffic based on IP addresses and ports. 50 | 51 | ### 17. How can you automate the deployment of EC2 instances? 52 | You can use AWS CloudFormation to create and manage a collection of related AWS resources, including EC2 instances. This allows you to define the infrastructure as code. 53 | 54 | ### 18. How can you achieve high availability for an application using EC2? 55 | You can use features like Amazon EC2 Auto Scaling and Elastic Load Balancing to distribute incoming traffic and automatically adjust the number of instances to handle changes in demand. 56 | 57 | ### 19. What is Amazon Machine Learning (Amazon ML)? 58 | Amazon ML is a service that enables you to build predictive models using machine learning technology. It's used to perform predictions on data and make informed decisions. 59 | 60 | ### 20. What is Amazon EC2 Instance Connect? 61 | Amazon EC2 Instance Connect provides a simple and secure way to connect to your instances using Secure Shell (SSH). It eliminates the need to use key pairs and allows you to connect using your AWS Management Console credentials. -------------------------------------------------------------------------------- /interview-questions/elb.md: -------------------------------------------------------------------------------- 1 | Certainly! Here are 20 interview questions related to Elastic Load Balancers (ELBs) in AWS, along with detailed answers in Markdown format: 2 | 3 | ## Elastic Load Balancers (ELBs) Interview Questions 4 | 5 | ### 1. What is an Elastic Load Balancer (ELB)? 6 | An Elastic Load Balancer (ELB) is a managed AWS service that automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, or IP addresses, to ensure high availability and fault tolerance. 7 | 8 | ### 2. What are the three types of Elastic Load Balancers available in AWS? 9 | There are three types of Elastic Load Balancers: Application Load Balancer (ALB), Network Load Balancer (NLB), and Gateway Load Balancer (GWLB). 10 | 11 | ### 3. 
What is the main difference between Application Load Balancer (ALB) and Network Load Balancer (NLB)? 12 | ALB operates at the application layer and supports advanced routing, including content-based routing and path-based routing. NLB operates at the transport layer and provides ultra-low latency and high throughput. 13 | 14 | ### 4. What are some key features of Application Load Balancer (ALB)? 15 | ALB supports features like dynamic port mapping, path-based routing, support for HTTP/2 and WebSocket protocols, and content-based routing using listeners and rules. 16 | 17 | ### 5. When should you use Network Load Balancer (NLB)? 18 | NLB is suitable for scenarios that require extreme performance, high throughput, and low latency, such as gaming applications and real-time streaming. 19 | 20 | ### 6. What is a target group in Elastic Load Balancing? 21 | A target group is a logical grouping of targets (such as EC2 instances) registered with a load balancer. ALB and NLB use target groups to route requests to registered targets. 22 | 23 | ### 7. How does health checking work in Elastic Load Balancers? 24 | Elastic Load Balancers perform health checks on registered targets to ensure they are available to receive traffic. Unhealthy targets are temporarily removed from rotation. 25 | 26 | ### 8. How can you route requests to different target groups based on URL paths in Application Load Balancer (ALB)? 27 | ALB supports path-based routing, where you define listeners and rules to route requests to different target groups based on specific URL paths. 28 | 29 | ### 9. What is cross-zone load balancing? 30 | Cross-zone load balancing is a feature that evenly distributes traffic across all registered targets in all availability zones, helping to achieve even distribution and better resource utilization. 31 | 32 | ### 10. How can you enable SSL/TLS encryption for traffic between clients and the load balancer? 33 | You can configure an SSL/TLS certificate on the load balancer, enabling it to terminate SSL/TLS connections and communicate with registered targets over HTTP. 34 | 35 | ### 11. Can you use Elastic Load Balancer (ELB) with resources outside AWS? 36 | Yes, ELB can be used with on-premises resources using Network Load Balancer with IP addresses as targets or with AWS Global Accelerator to route traffic to resources outside AWS. 37 | 38 | ### 12. What is a sticky session, and how can you enable it in Elastic Load Balancers? 39 | Sticky sessions ensure that a user's session is consistently directed to the same target. In ALB, you can enable sticky sessions using the `stickiness` option in the target group settings. 40 | 41 | ### 13. What is the purpose of pre-warming in Elastic Load Balancers? 42 | Pre-warming involves sending a low volume of traffic to a new load balancer to allow it to scale up its capacity and establish connections gradually. 43 | 44 | ### 14. How does Elastic Load Balancer support IPv6? 45 | Elastic Load Balancer (ALB and NLB) supports both IPv4 and IPv6 addresses, allowing applications to be accessed over the IPv6 protocol. 46 | 47 | ### 15. What is connection draining, and when is it useful? 48 | Connection draining is the process of gradually stopping traffic to an unhealthy target instance before removing it from the target group. It's useful to ensure active requests are completed before taking the instance out of rotation. 49 | 50 | ### 16. How can you enable access logs for Elastic Load Balancers? 
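A hedged boto3 sketch of turning on ALB access logs (the load balancer ARN and bucket are placeholders; the bucket also needs a policy that allows the ELB service to deliver logs):

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-elb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "alb"},
    ],
)
```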
51 | You can enable access logs for Elastic Load Balancers to capture detailed information about requests, responses, and client IP addresses. These logs can be stored in an Amazon S3 bucket. 52 | 53 | ### 17. What is the purpose of an idle timeout setting in Elastic Load Balancers? 54 | The idle timeout setting defines the maximum time an idle connection can remain open between the load balancer and a client. After this duration, the connection is closed. 55 | 56 | ### 18. Can you associate Elastic IP addresses with Elastic Load Balancers? 57 | No, Elastic Load Balancers do not have static IP addresses. They have DNS names that are used to route traffic to registered targets. 58 | 59 | ### 19. How can you configure health checks for targets in Elastic Load Balancers? 60 | You can configure health checks by defining a health check path, interval, timeout, and thresholds. ELB sends periodic requests to targets to verify their health. 61 | 62 | ### 20. Can you use Elastic Load Balancers to distribute traffic across regions? 63 | Elastic Load Balancers can distribute traffic only within the same region. For distributing traffic across regions, you can use AWS Global Accelerator. 64 | 65 | Remember that while these answers provide depth, it's important to personalize your responses based on your experience and understanding of Elastic Load Balancers and AWS load balancing concepts. -------------------------------------------------------------------------------- /interview-questions/iam.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS Identity and Access Management (IAM)? 2 | AWS IAM is a service that allows you to manage users, groups, and permissions for accessing AWS resources. It provides centralized control over authentication and authorization. 3 | 4 | ### 2. What are the key components of AWS IAM? 5 | Key components of AWS IAM include users, groups, roles, policies, permissions, and identity providers. 6 | 7 | ### 3. How does AWS IAM work? 8 | AWS IAM allows you to create users and groups, assign policies that define permissions, and use roles to delegate permissions to AWS services and resources. 9 | 10 | ### 4. What is the difference between authentication and authorization in AWS IAM? 11 | Authentication is the process of verifying the identity of users or entities, while authorization is the process of granting or denying access to resources based on policies and permissions. 12 | 13 | ### 5. How can you secure your AWS account using IAM? 14 | You can secure your AWS account by enforcing the principle of least privilege, creating strong password policies, enabling multi-factor authentication (MFA), and regularly reviewing permissions. 15 | 16 | ### 6. How do IAM users differ from IAM roles? 17 | IAM users are individuals or entities that have a fixed set of permissions associated with them. IAM roles are temporary credentials that can be assumed by users or AWS services to access resources. 18 | 19 | ### 7. What is an IAM policy? 20 | An IAM policy is a JSON document that defines permissions. It specifies what actions are allowed or denied on which AWS resources for whom (users, groups, or roles). 21 | 22 | ### 8. What is the AWS Management Console? 23 | The AWS Management Console is a web-based interface that allows you to interact with and manage AWS resources. IAM users can use the console to access resources based on their permissions. 24 | 25 | ### 9. How does IAM manage access keys? 
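As a sketch of the key lifecycle (the user name and old key ID are made up), keys are created, listed for audit, and deactivated during rotation:

```python
import boto3

iam = boto3.client("iam")
user = "ci-deploy-user"   # assumed IAM user name

# Create a new key pair; the secret is returned only once, here.
new_key = iam.create_access_key(UserName=user)["AccessKey"]
print(new_key["AccessKeyId"])

# Audit existing keys and their status.
for key in iam.list_access_keys(UserName=user)["AccessKeyMetadata"]:
    print(key["AccessKeyId"], key["Status"])

# Deactivate the old key once the new one is in use (rotation).
iam.update_access_key(UserName=user, AccessKeyId="AKIAOLDKEY0EXAMPLE", Status="Inactive")
```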
26 | IAM users can have access keys (access key ID and secret access key) associated with their accounts, which are used for programmatic access to AWS resources. 27 | 28 | ### 10. What is the purpose of IAM groups? 29 | IAM groups allow you to group users and apply policies to them collectively, simplifying permission management by granting the same set of permissions to multiple users. 30 | 31 | ### 11. What is the role of an IAM policy document? 32 | An IAM policy document defines the permissions and actions that are allowed or denied. It is written in JSON format and attached to users, groups, or roles. 33 | 34 | ### 12. How can you grant permissions to an IAM user? 35 | You can grant permissions to an IAM user by attaching policies to the user directly or by adding the user to groups with associated policies. 36 | 37 | ### 13. How can you delegate permissions to AWS services using IAM roles? 38 | IAM roles allow you to delegate permissions to AWS services like EC2 instances, Lambda functions, and more, without exposing long-term credentials. 39 | 40 | ### 14. What is cross-account access in AWS IAM? 41 | Cross-account access allows you to grant permissions to users or entities from one AWS account to access resources in another AWS account. 42 | 43 | ### 15. How does IAM support identity federation? 44 | IAM supports identity federation by allowing users to access AWS resources using temporary security credentials obtained from trusted identity providers (e.g., SAML, OpenID Connect). 45 | 46 | ### 16. What is the purpose of an IAM access advisor? 47 | IAM access advisors provide insights into the services that users accessed and the actions they performed. This helps in auditing and understanding resource usage. 48 | 49 | ### 17. How does IAM enforce the principle of least privilege? 50 | IAM enforces the principle of least privilege by allowing you to define specific permissions for users, groups, or roles, reducing the risk of unauthorized access. 51 | 52 | ### 18. What is the difference between IAM policies and resource-based policies? 53 | IAM policies are attached to identities (users, groups, roles), while resource-based policies are attached to AWS resources (e.g., S3 buckets, Lambda functions) to control access from different identities. 54 | 55 | ### 19. How can you implement multi-factor authentication (MFA) in IAM? 56 | You can enable MFA for IAM users to require an additional authentication factor (e.g., a code from a virtual MFA device) along with their password when signing in. 57 | 58 | ### 20. What is the IAM policy evaluation logic? 59 | IAM uses an explicit deny model, which means that if a user's permissions include an explicit deny statement, it overrides any allow statements in the policy. 60 | -------------------------------------------------------------------------------- /interview-questions/lambda-functions.md: -------------------------------------------------------------------------------- 1 | ### 1. What is AWS Lambda? 2 | AWS Lambda is a serverless compute service that lets you run code without provisioning or managing servers. It automatically scales and manages the infrastructure required to run your code in response to events. 3 | 4 | ### 2. How does AWS Lambda work? 5 | You can upload your code to Lambda and define event sources that trigger the execution of your code. Lambda automatically manages the execution environment, scales it as needed, and provides monitoring and logging. 6 | 7 | ### 3. What are the key benefits of using AWS Lambda? 
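Those benefits are easiest to see from how small a complete function can be; here is a minimal, illustrative handler for an S3 trigger (nothing here is provisioned or managed by you):

```python
# handler.py -- this is the entire deployment artifact; no servers to manage.
def lambda_handler(event, context):
    # For an S3 trigger, each record describes an object that was created.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
    return {"statusCode": 200}
```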
8 | The benefits of AWS Lambda include automatic scaling, reduced operational overhead, cost efficiency (as you pay only for the compute time used), and the ability to build event-driven architectures. 9 | 10 | ### 4. What types of events can trigger AWS Lambda functions? 11 | AWS Lambda functions can be triggered by various event sources, such as changes in Amazon S3 objects, updates to Amazon DynamoDB tables, HTTP requests through Amazon API Gateway, and more. 12 | 13 | ### 5. How is concurrency managed in AWS Lambda? 14 | Lambda automatically handles concurrency by scaling out instances of your function in response to incoming requests. You can set a concurrency limit to control how many concurrent executions are allowed. 15 | 16 | ### 6. What is the maximum execution duration for a single AWS Lambda invocation? 17 | The maximum execution duration for a single Lambda invocation is 15 minutes. 18 | 19 | ### 7. How do you pass data to and from AWS Lambda functions? 20 | You can pass data to Lambda functions through event objects, which contain information about the triggering event. You can also return data by using the return statement or creating a response object. 21 | 22 | ### 8. Can AWS Lambda functions communicate with external resources? 23 | Yes, Lambda functions can communicate with external resources such as databases, APIs, and other AWS services by using appropriate SDKs and APIs provided by AWS. 24 | 25 | ### 9. What are AWS Lambda layers? 26 | AWS Lambda layers are a way to manage and share code that is common across multiple functions. Layers can include libraries, custom runtimes, and other function dependencies. 27 | 28 | ### 10. How can you handle errors in AWS Lambda functions? 29 | You can handle errors by using try-catch blocks in your code. Lambda also provides CloudWatch Logs for monitoring, and you can set up error handling and retries for asynchronous invocations. 30 | 31 | ### 11. Can AWS Lambda functions access the internet? 32 | Yes, Lambda functions can access the internet through the Virtual Private Cloud (VPC) or through public endpoints if your function is not configured within a VPC. 33 | 34 | ### 12. What are the execution environments available for AWS Lambda functions? 35 | Lambda supports several runtimes, including Node.js, Python, Java, Go, Ruby, .NET Core, and custom runtimes using the Runtime API. 36 | 37 | ### 13. How can you configure environment variables for AWS Lambda functions? 38 | You can set environment variables for Lambda functions when creating or updating the function. These variables can be accessed within your code. 39 | 40 | ### 14. What is the difference between synchronous and asynchronous invocation of Lambda functions? 41 | Synchronous invocations wait for the function to complete and return a response, while asynchronous invocations return immediately, and the response is sent to a specified destination. 42 | 43 | ### 15. What is the AWS Lambda Event Source Mapping? 44 | Event Source Mapping allows you to connect event sources like Amazon DynamoDB streams or Amazon Kinesis streams to Lambda functions. This enables the function to process events as they occur. 45 | 46 | ### 16. How can you manage the permissions and execution roles for AWS Lambda functions? 47 | You can use AWS Identity and Access Management (IAM) roles to grant permissions to your Lambda functions. Execution roles define what AWS resources the function can access. 48 | 49 | ### 17. What is AWS Step Functions? 
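Before the definition below, a hedged sketch of what a state machine looks like: a small Amazon States Language (ASL) document registered via boto3 (the Lambda and role ARNs are placeholders):

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Two states: a Pass step that seeds the input, then a Lambda task.
definition = {
    "StartAt": "Prepare",
    "States": {
        "Prepare": {"Type": "Pass", "Result": {"ready": True}, "Next": "Process"},
        "Process": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:process",
            "End": True,
        },
    },
}

sfn.create_state_machine(
    name="demo-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-demo-role",
)
```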
50 | AWS Step Functions is a serverless orchestration service that lets you coordinate multiple AWS services into serverless workflows using visual workflows called state machines. 51 | 52 | ### 18. How can you automate the deployment of AWS Lambda functions? 53 | You can use AWS Serverless Application Model (SAM) templates, AWS CloudFormation, or CI/CD tools like AWS CodePipeline to automate the deployment of Lambda functions. 54 | 55 | ### 19. Can AWS Lambda functions connect to on-premises resources? 56 | Yes, Lambda functions can connect to on-premises resources by placing the function inside a VPC and using a VPN or Direct Connect connection to establish connectivity. 57 | 58 | ### 20. What is the Cold Start issue in AWS Lambda? 59 | The Cold Start issue occurs when a Lambda function is invoked for the first time or after it has been idle for a while. The function needs to be initialized, causing a slight delay in response time. -------------------------------------------------------------------------------- /interview-questions/rds.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon RDS? 2 | Amazon RDS is a managed relational database service that simplifies database setup, operation, and scaling. It supports various database engines like MySQL, PostgreSQL, Oracle, SQL Server, and Amazon Aurora. 3 | 4 | ### 2. How does Amazon RDS work? 5 | Amazon RDS automates common database management tasks such as provisioning, patching, backup, recovery, and scaling. It allows you to focus on your application without managing the underlying infrastructure. 6 | 7 | ### 3. What are the key features of Amazon RDS? 8 | Amazon RDS offers automated backups, automated software patching, high availability through Multi-AZ deployments, read replicas for scaling read operations, and the ability to create custom database snapshots. 9 | 10 | ### 4. What is Multi-AZ deployment in Amazon RDS? 11 | Multi-AZ deployment is a feature that provides high availability by automatically maintaining a standby replica in a different Availability Zone (AZ). If the primary database fails, the standby replica is promoted. 12 | 13 | ### 5. How can you improve read performance in Amazon RDS? 14 | You can improve read performance by creating read replicas. Read replicas replicate data from the primary database and can be used to distribute read traffic. 15 | 16 | ### 6. What is Amazon Aurora? 17 | Amazon Aurora is a MySQL and PostgreSQL-compatible relational database engine that provides high performance, availability, and durability. It's designed to be compatible with these engines while offering improved performance and features. 18 | 19 | ### 7. What is the purpose of the RDS option group? 20 | An RDS option group is a collection of database engine-specific settings that can be applied to your DB instance. It allows you to configure features and settings that are not enabled by default. 21 | 22 | ### 8. How can you encrypt data in Amazon RDS? 23 | You can encrypt data at rest and in transit in Amazon RDS. Data at rest can be encrypted using Amazon RDS encryption or Amazon Aurora encryption, while data in transit can be encrypted using SSL. 24 | 25 | ### 9. What is a DB parameter group in Amazon RDS? 26 | A DB parameter group is a collection of database engine configuration values that can be applied to one or more DB instances. It allows you to customize database settings. 27 | 28 | ### 10. How can you monitor Amazon RDS instances? 
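For instance (a sketch; the DB identifier is an assumption), the same CloudWatch metrics can be pulled programmatically to check recent CPU load on an instance:

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/RDS",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-db-instance"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                      # 5-minute datapoints
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```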
29 | Amazon RDS provides metrics and logs through Amazon CloudWatch. You can set up alarms based on these metrics to get notified of performance issues. 30 | 31 | ### 11. What is the difference between Amazon RDS and Amazon DynamoDB? 32 | Amazon RDS is a managed relational database service, while Amazon DynamoDB is a managed NoSQL database service. RDS supports SQL databases like MySQL and PostgreSQL, while DynamoDB is designed for fast and flexible NoSQL data storage. 33 | 34 | ### 12. How can you take backups of Amazon RDS databases? 35 | Amazon RDS provides automated backups. You can also create manual backups or snapshots using the AWS Management Console, AWS CLI, or APIs. 36 | 37 | ### 13. Can you change the DB instance type for an existing Amazon RDS instance? 38 | Yes, you can modify the DB instance type for an existing Amazon RDS instance using the AWS Management Console, AWS CLI, or API. 39 | 40 | ### 14. What is the purpose of the RDS Read Replica? 41 | An RDS Read Replica is a copy of a source DB instance that can be used to offload read traffic from the primary instance. It enhances read scalability and can be in a different region than the source. 42 | 43 | ### 15. How can you replicate data between Amazon RDS and on-premises databases? 44 | You can use Amazon Database Migration Service (DMS) to replicate data between Amazon RDS and on-premises databases. DMS supports various migration scenarios. 45 | 46 | ### 16. What is the maximum storage capacity for an Amazon RDS instance? 47 | The maximum storage capacity for an Amazon RDS instance depends on the database engine and instance type. It can range from a few gigabytes to several terabytes. 48 | 49 | ### 17. How can you restore an Amazon RDS instance from a snapshot? 50 | You can restore an Amazon RDS instance from a snapshot using the AWS Management Console, AWS CLI, or APIs. The restored instance will have the data from the snapshot. 51 | 52 | ### 18. What is the significance of the RDS DB Subnet Group? 53 | An RDS DB Subnet Group is used to specify the subnets where you want to place your DB instances in a VPC. It helps determine the network availability for your database. 54 | 55 | ### 19. How does Amazon RDS handle automatic backups? 56 | Amazon RDS automatically performs backups according to the backup retention period you set. Backups are stored in Amazon S3 and can be used for restoration. 57 | 58 | ### 20. Can you run custom scripts or install custom software on Amazon RDS instances? 59 | Amazon RDS is a managed service that abstracts the underlying infrastructure, so you can't directly access the operating system. However, you can use parameter groups and option groups to configure certain settings. -------------------------------------------------------------------------------- /interview-questions/route53.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon Route 53? 2 | Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service that helps route end-user requests to AWS resources or external endpoints. 3 | 4 | ### 2. What is DNS? 5 | DNS (Domain Name System) is a system that translates human-readable domain names into IP addresses, allowing computers to locate resources on the internet. 6 | 7 | ### 3. How does Amazon Route 53 work? 8 | Amazon Route 53 manages the DNS records for your domain, allowing you to associate domain names with resources such as EC2 instances, S3 buckets, and load balancers. 9 | 10 | ### 4. 
What are the types of routing policies in Amazon Route 53? 11 | Amazon Route 53 offers several routing policies, including Simple, Weighted, Latency, Failover, Geolocation, and Multi-Value. 12 | 13 | ### 5. What is the purpose of the Simple routing policy in Route 53? 14 | The Simple routing policy directs traffic to a single resource, such as an IP address or an Amazon S3 bucket, without any logic or decision-making. 15 | 16 | ### 6. How does the Weighted routing policy work in Route 53? 17 | The Weighted routing policy allows you to distribute traffic across multiple resources based on assigned weights. You can control the distribution of traffic based on proportions. 18 | 19 | ### 7. What is the Latency routing policy in Amazon Route 53? 20 | The Latency routing policy directs traffic to the AWS region with the lowest latency for a given user, improving the user experience by minimizing response times. 21 | 22 | ### 8. How does the Failover routing policy work? 23 | The Failover routing policy directs traffic to a primary resource and fails over to a secondary resource if the primary resource becomes unavailable. 24 | 25 | ### 9. What is the Geolocation routing policy? 26 | The Geolocation routing policy directs traffic based on the geographic location of the user, allowing you to route users to the nearest or most appropriate resource. 27 | 28 | ### 10. What is the Multi-Value routing policy? 29 | The Multi-Value routing policy allows you to associate multiple resources with a single DNS name and return multiple IP addresses in a random or weighted manner. 30 | 31 | ### 11. How can you route traffic to an AWS resource using Route 53? 32 | To route traffic to an AWS resource, you create DNS records, such as A records for IPv4 addresses and Alias records for AWS resources like ELB, S3, and CloudFront distributions. 33 | 34 | ### 12. Can Route 53 route traffic to non-AWS resources? 35 | Yes, Route 53 can route traffic to resources outside of AWS by using the simple routing policy to direct traffic to IP addresses or domain names. 36 | 37 | ### 13. How can you ensure high availability using Route 53? 38 | Route 53 provides health checks to monitor the health of resources and can automatically fail over to healthy resources in case of failures. 39 | 40 | ### 14. What are health checks in Amazon Route 53? 41 | Health checks in Route 53 monitor the health and availability of your resources by periodically sending requests and verifying the responses. 42 | 43 | ### 15. How can you configure a custom domain for an Amazon S3 bucket using Route 53? 44 | You can create an Alias record in Route 53 that points to the static website hosting endpoint of the S3 bucket, allowing you to use a custom domain for your S3 bucket. 45 | 46 | ### 16. What is a DNS alias record? 47 | An alias record is a Route 53-specific DNS record that allows you to route traffic directly to an AWS resource, such as an ELB, CloudFront distribution, or S3 bucket. 48 | 49 | ### 17. How can you migrate a domain to Amazon Route 53? 50 | To migrate a domain to Route 53, you update your domain's DNS settings to use Route 53's name servers and then recreate your DNS records within the Route 53 console. 51 | 52 | ### 18. How does Route 53 support domain registration? 53 | Route 53 allows you to register new domain names, manage existing domain names, and associate them with resources and services within your AWS account. 54 | 55 | ### 19. How can you use Route 53 to set up a global website? 
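A hedged boto3 sketch of one geolocation record (hosted zone ID, domain, and IP are placeholders); a second record with a default location would catch users outside Europe:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789ABCDEFGHIJ",     # placeholder hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "SetIdentifier": "europe",
                "GeoLocation": {"ContinentCode": "EU"},
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],  # EU endpoint
            },
        }],
    },
)
```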
56 | You can use the Geolocation routing policy to route users to different resources based on their geographic location, creating a global website with reduced latency. 57 | 58 | ### 20. What is Route 53 Resolver? 59 | Route 53 Resolver is a service that provides DNS resolution across Amazon VPCs and on-premises networks, enabling hybrid network configurations. -------------------------------------------------------------------------------- /interview-questions/s3.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon S3? 2 | Amazon Simple Storage Service (Amazon S3) is a scalable object storage service designed to store and retrieve any amount of data from anywhere on the web. It's commonly used to store files, backups, images, videos, and more. 3 | 4 | ### 2. What are the key features of Amazon S3? 5 | Amazon S3 offers features like data durability, high availability, security options, scalable storage, and the ability to store data in different storage classes based on access patterns. 6 | 7 | ### 3. What is an S3 bucket? 8 | An S3 bucket is a container for storing objects, which can be files, images, videos, and more. Each object in S3 is identified by a unique key within a bucket. 9 | 10 | ### 4. How can you control access to objects in S3? 11 | Access to S3 objects can be controlled using bucket policies, access control lists (ACLs), and IAM (Identity and Access Management) policies. You can define who can read, write, and delete objects. 12 | 13 | ### 5. What is the difference between S3 Standard, S3 Intelligent-Tiering, and S3 One Zone-IA storage classes? 14 | - S3 Standard: Offers high durability, availability, and performance. 15 | - S3 Intelligent-Tiering: Automatically moves objects between two access tiers based on changing access patterns. 16 | - S3 One Zone-IA: Stores objects in a single availability zone with lower storage costs, but without the multi-AZ resilience of S3 Standard. 17 | 18 | ### 6. How does S3 provide data durability? 19 | S3 provides 99.999999999% (11 9's) durability by automatically replicating objects across multiple facilities within a region. 20 | 21 | ### 7. What is Amazon S3 Glacier used for? 22 | Amazon S3 Glacier is a storage service designed for data archiving. It offers lower-cost storage with retrieval times ranging from minutes to hours. 23 | 24 | ### 8. How can you secure data in Amazon S3? 25 | You can secure data in Amazon S3 by using access control mechanisms, like bucket policies and IAM policies, and by enabling encryption using server-side encryption or client-side encryption. 26 | 27 | ### 9. What is S3 versioning? 28 | S3 versioning is a feature that allows you to preserve, retrieve, and restore every version of every object in a bucket. It helps protect against accidental deletion and overwrites. 29 | 30 | ### 10. What is a pre-signed URL in S3? 31 | A pre-signed URL is a URL that grants temporary access to an S3 object. It can be generated using your AWS credentials and shared with others to provide temporary access. 32 | 33 | ### 11. How can you optimize costs in Amazon S3? 34 | You can optimize costs by using storage classes that match your data access patterns, utilizing lifecycle policies to transition objects to less expensive storage tiers, and setting up cost allocation tags for billing visibility. 35 | 36 | ### 12. What is S3 Cross-Region Replication? 
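A hedged sketch of the setup (bucket names and the IAM role are placeholders; both buckets need versioning enabled, and newer rule schemas add fields such as `Filter` and `Priority`):

```python
import boto3

s3 = boto3.client("s3")

# Versioning must be enabled on the source (and destination) bucket first.
s3.put_bucket_versioning(
    Bucket="source-bucket-us-east-1",
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_replication(
    Bucket="source-bucket-us-east-1",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
        "Rules": [{
            "ID": "replicate-all",
            "Status": "Enabled",
            "Prefix": "",                                  # whole bucket
            "Destination": {"Bucket": "arn:aws:s3:::replica-bucket-eu-west-1"},
        }],
    },
)
```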
37 | S3 Cross-Region Replication is a feature that automatically replicates objects from one S3 bucket in one AWS region to another bucket in a different region. 38 | 39 | ### 13. How can you automate the movement of objects between different storage classes? 40 | You can use S3 Lifecycle policies to automate the transition of objects between storage classes based on predefined rules and time intervals. 41 | 42 | ### 14. What is the purpose of S3 event notifications? 43 | S3 event notifications allow you to trigger AWS Lambda functions or SQS queues when certain events, like object creation or deletion, occur in an S3 bucket. 44 | 45 | ### 15. What is the AWS Snowball device? 46 | The AWS Snowball is a physical data transport solution used for migrating large amounts of data into and out of AWS. It's ideal for scenarios where the network transfer speed is not sufficient. 47 | 48 | ### 16. What is Amazon S3 Select? 49 | Amazon S3 Select is a feature that allows you to retrieve specific data from an object using SQL-like queries, without the need to retrieve the entire object. 50 | 51 | ### 17. What is the difference between Amazon S3 and Amazon EBS? 52 | Amazon S3 is object storage used for storing files, while Amazon EBS (Elastic Block Store) is block storage used for attaching to EC2 instances as volumes. 53 | 54 | ### 18. How can you enable server access logging in Amazon S3? 55 | You can enable server access logging to track all requests made to your bucket. The logs are stored in a target bucket and can help analyze access patterns. 56 | 57 | ### 19. What is S3 Transfer Acceleration? 58 | S3 Transfer Acceleration is a feature that speeds up transferring files to and from Amazon S3 by utilizing Amazon CloudFront's globally distributed edge locations. 59 | 60 | ### 20. How can you replicate data between S3 buckets within the same region? 61 | You can use S3 Same-Region Replication (SRR). It is configured the same way as Cross-Region Replication, but the destination bucket is in the same AWS region as the source; both buckets must have versioning enabled. -------------------------------------------------------------------------------- /interview-questions/systems-manager.md: -------------------------------------------------------------------------------- 1 | Certainly! Here are 20 interview questions related to AWS Systems Manager, along with detailed answers in Markdown format: 2 | 3 | ## AWS Systems Manager Interview Questions 4 | 5 | ### 1. What is AWS Systems Manager? 6 | AWS Systems Manager is a service that provides centralized management for AWS resources, helping you automate tasks, manage configurations, and improve overall operational efficiency. 7 | 8 | ### 2. What are some key components of AWS Systems Manager? 9 | Key components of AWS Systems Manager include Run Command, State Manager, Automation, Parameter Store, Patch Manager, OpsCenter, and Distributor. 10 | 11 | ### 3. What is the purpose of AWS Systems Manager Parameter Store? 12 | AWS Systems Manager Parameter Store is a secure storage service that allows you to store and manage configuration data, such as passwords, database strings, and API keys. 13 | 14 | ### 4. How can you use Run Command in AWS Systems Manager? 15 | Run Command allows you to remotely manage instances by running commands without requiring direct access. It's useful for tasks like software installations or updates. 16 | 17 | ### 5. What is State Manager in AWS Systems Manager?
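As a concrete illustration before the answer, a hedged sketch that associates an AWS-managed document with tagged instances so it is re-applied on a schedule (the tag key and value are assumptions):

```python
import boto3

ssm = boto3.client("ssm")

ssm.create_association(
    Name="AWS-UpdateSSMAgent",                                       # AWS-managed document
    Targets=[{"Key": "tag:Environment", "Values": ["production"]}],  # assumed tag
    ScheduleExpression="rate(7 days)",                               # re-apply weekly
)
```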
18 | State Manager helps you define and maintain consistent configurations for your instances over time, ensuring they comply with your desired state. 19 | 20 | ### 6. How does Automation work in AWS Systems Manager? 21 | Automation enables you to create workflows for common maintenance and deployment tasks. It uses documents to define the steps required to achieve specific outcomes. 22 | 23 | ### 7. What is Patch Manager in AWS Systems Manager? 24 | Patch Manager helps you automate the process of patching instances with the latest security updates, allowing you to keep your instances up-to-date and secure. 25 | 26 | ### 8. How can you manage inventory using AWS Systems Manager? 27 | Systems Manager Inventory allows you to collect metadata about instances and applications, helping you track changes, perform audits, and maintain compliance. 28 | 29 | ### 9. What is the difference between Systems Manager Parameter Store and Secrets Manager? 30 | Parameter Store is designed for storing configuration data, while Secrets Manager is designed for securely storing and managing sensitive information like passwords and API keys. 31 | 32 | ### 10. How can you use AWS Systems Manager to automate instance configuration? 33 | You can use State Manager to define a desired state for your instances, ensuring that they have the necessary configurations and software. 34 | 35 | ### 11. What are AWS Systems Manager documents? 36 | Documents are pre-defined or custom scripts that define the steps for performing tasks using Systems Manager. They can be used with Automation, Run Command, and State Manager. 37 | 38 | ### 12. How can you schedule automated tasks with AWS Systems Manager? 39 | You can use Maintenance Windows in Systems Manager to define schedules for executing tasks across your fleet of instances. 40 | 41 | ### 13. What is the purpose of Distributor in AWS Systems Manager? 42 | Distributor is a feature that allows you to package and distribute software packages to your instances, making it easier to manage software deployments. 43 | 44 | ### 14. How can you use AWS Systems Manager to manage compliance? 45 | You can use Compliance Manager to assess and monitor the compliance of your instances against predefined or custom policies. 46 | 47 | ### 15. What is the OpsCenter feature in AWS Systems Manager? 48 | OpsCenter helps you manage and resolve operational issues by providing a central place to view, investigate, and take action on operational tasks and incidents. 49 | 50 | ### 16. How can you integrate AWS Systems Manager with other AWS services? 51 | AWS Systems Manager integrates with services like CloudWatch, Lambda, and Step Functions to enable more advanced automation and orchestration. 52 | 53 | ### 17. Can AWS Systems Manager be used with on-premises resources? 54 | Yes, AWS Systems Manager can be used to manage both AWS resources and on-premises resources by installing the necessary agent on your servers. 55 | 56 | ### 18. How does AWS Systems Manager help with troubleshooting? 57 | Systems Manager provides features like Run Command, Session Manager, and Automation to remotely access instances for troubleshooting and maintenance tasks. 58 | 59 | ### 19. What is the Session Manager feature in AWS Systems Manager? 60 | Session Manager allows you to start interactive sessions with your instances without requiring SSH or RDP access, enhancing security and control. 61 | 62 | ### 20. How can you secure data stored in AWS Systems Manager Parameter Store? 
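A minimal boto3 sketch (parameter name and value are made up): the value is stored as a `SecureString` encrypted with KMS, and only callers allowed by IAM (including `kms:Decrypt`) can read it back decrypted.

```python
import boto3

ssm = boto3.client("ssm")

# Store encrypted with the default aws/ssm key (or pass KeyId for a customer key).
ssm.put_parameter(
    Name="/myapp/db/password",
    Value="example-secret-value",
    Type="SecureString",
    Overwrite=True,
)

# Read it back decrypted -- requires ssm:GetParameter and kms:Decrypt permissions.
param = ssm.get_parameter(Name="/myapp/db/password", WithDecryption=True)
print(param["Parameter"]["Value"])
```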
63 | You can use IAM policies to control who has access to Parameter Store parameters and implement encryption at rest using KMS keys. -------------------------------------------------------------------------------- /interview-questions/vpc.md: -------------------------------------------------------------------------------- 1 | ### 1. What is Amazon Virtual Private Cloud (VPC)? 2 | Amazon VPC is a logically isolated section of the AWS Cloud where you can launch resources in a virtual network that you define. It allows you to control your network environment, including IP addresses, subnets, and security settings. 3 | 4 | ### 2. What are the key components of Amazon VPC? 5 | Key components of Amazon VPC include subnets, route tables, network access control lists (ACLs), security groups, and Virtual Private Gateways (VPGs). 6 | 7 | ### 3. How does Amazon VPC work? 8 | Amazon VPC enables you to create a private and secure network within AWS. You define IP ranges for your VPC, create subnets, and configure network security. 9 | 10 | ### 4. What are VPC subnets? 11 | VPC subnets are segments of the VPC's IP address range. They allow you to isolate resources and control access by creating public and private subnets. 12 | 13 | ### 5. How can you connect your on-premises network to Amazon VPC? 14 | You can establish a Virtual Private Network (VPN) connection or use AWS Direct Connect to connect your on-premises network to Amazon VPC. 15 | 16 | ### 6. What is a VPC peering connection? 17 | VPC peering allows you to connect two VPCs together, enabling resources in different VPCs to communicate as if they were on the same network. 18 | 19 | ### 7. What is a route table in Amazon VPC? 20 | A route table defines the rules for routing traffic within a VPC. It determines how traffic is directed between subnets and to external destinations. 21 | 22 | ### 8. How do security groups work in Amazon VPC? 23 | Security groups act as virtual firewalls for your instances, controlling inbound and outbound traffic. They can be associated with instances and control their network access. 24 | 25 | ### 9. What are network access control lists (ACLs) in Amazon VPC? 26 | Network ACLs are stateless filters that control inbound and outbound traffic at the subnet level. They provide an additional layer of security to control traffic flow. 27 | 28 | ### 10. How can you ensure private communication between instances in Amazon VPC? 29 | You can create private subnets and configure security groups to allow communication only between instances within the same subnet, enhancing network security. 30 | 31 | ### 11. What is the default VPC in Amazon Web Services? 32 | The default VPC is a pre-configured VPC that is created for your AWS account in each region. It simplifies instance launch but doesn't provide the same level of isolation as custom VPCs. 33 | 34 | ### 12. Can you peer VPCs in different regions? 35 | Yes, inter-Region VPC peering lets you peer VPCs in different AWS regions; traffic between peered VPCs stays on the AWS global backbone. AWS Transit Gateway peering is another option for connecting VPCs across regions. 36 | 37 | ### 13. How can you control public and private IP addresses in Amazon VPC? 38 | Amazon VPC allows you to allocate private IP addresses to instances automatically. Public IP addresses can be associated with instances launched in public subnets. 39 | 40 | ### 14. What is a VPN connection in Amazon VPC? 41 | A VPN connection allows you to securely connect your on-premises network to your Amazon VPC using encrypted tunnels over the public internet. 42 | 43 | ### 15.
What is an Internet Gateway (IGW) in Amazon VPC? 44 | An Internet Gateway enables instances in your VPC to access the internet and allows internet traffic to reach instances in your VPC. 45 | 46 | ### 16. How can you ensure high availability in Amazon VPC? 47 | You can design your VPC with subnets across multiple Availability Zones (AZs) to ensure that your resources remain available in the event of an AZ outage. 48 | 49 | ### 17. How does Amazon VPC provide isolation? 50 | Amazon VPC provides isolation by allowing you to define and manage your own virtual network environment, including subnets, route tables, and network ACLs. 51 | 52 | ### 18. Can you modify a VPC after creation? 53 | Yes, within limits. You can add secondary CIDR blocks, create or delete subnets, and change routing and security settings after creation, but the VPC's primary CIDR block cannot be changed. 54 | 55 | ### 19. What is a default route in Amazon VPC? 56 | A default route in a route table directs traffic to the Internet Gateway (IGW), allowing instances in public subnets to communicate with the internet. 57 | 58 | ### 20. What is the purpose of the Amazon VPC Endpoint? 59 | An Amazon VPC Endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services without needing an internet gateway or VPN connection. -------------------------------------------------------------------------------- /scripts/start_container.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # Pull the Docker image from Docker Hub 5 | docker pull abhishekf5/simple-python-flask-app 6 | 7 | # Run the Docker image as a container 8 | docker run -d -p 5000:5000 abhishekf5/simple-python-flask-app 9 | -------------------------------------------------------------------------------- /scripts/stop_container.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | set -e 3 | 4 | # Stop and remove the running container (if any) 5 | containerid=$(docker ps -q --filter ancestor=abhishekf5/simple-python-flask-app) 6 | if [ -n "$containerid" ]; then 7 | docker stop "$containerid" 8 | docker rm "$containerid" 9 | fi --------------------------------------------------------------------------------