├── TEMPLATE_IP_Range_Planner_Organizer.pdf
├── demo-s3-mrap-creation-testing.txt
├── demo-setting-up-mfa-otp.txt
├── demo-s3-performance-transfer-acceleration.txt
├── demo-creating-a-budget.txt
├── demo-ec2-create-iam-instance-role.txt
├── demo-ecs-create-container-task.txt
├── demo-ecs-create-docker-image-push-dockerhub.txt
├── ec2-ssm-parameter-store-create-parameters.txt
├── demo-creating-using-scp.txt
├── demo-s3-presigned-urls-create.txt
├── demo-ec2-creating-connecting-to-an-ec2-instance.txt.txt
├── demo-kms-create-encrypt-decrypt.txt
├── demo-s3-versioning-enable-delete-objects.txt
├── demo-cloudtrail-implement-trail.txt
├── demo-s3-crr-replication-rule-create.txt
├── demo-creating-aws-organization.txt
├── demo-s3-creating-static-site.txt
├── demo-s3-bucket-create-upload-delete.txt
├── demo-s3-sse-encryption-role-separation.txt
├── demo-access-key-creating-configuring.txt
├── demo-create-lambda-function-eventbridge-event.txt
├── demo-creating-aws-admin-account.txt
├── README.md
├── demo-vpc-create-multiple-subnets.txt
└── notes-tech-networking-fundamentals.txt

--------------------------------------------------------------------------------
/TEMPLATE_IP_Range_Planner_Organizer.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/tim-andes/aws-certified-solutions-architect-cantrill-notes/HEAD/TEMPLATE_IP_Range_Planner_Organizer.pdf
--------------------------------------------------------------------------------
/demo-s3-mrap-creation-testing.txt:
--------------------------------------------------------------------------------
Visit Adrian Cantrill's tutorial:

https://github.com/acantril/learn-cantrill-io-labs/tree/master/00-aws-simple-demos/aws-s3-multi-region-access-point
--------------------------------------------------------------------------------
/demo-setting-up-mfa-otp.txt:
--------------------------------------------------------------------------------
# Demo: Attach MFA Device
Goal: Activate Multi-Factor Authentication / OTP for a user using a device (phone with Google Authenticator on it)

IAM Account Dropdown > Security Credentials to IAM console > locate "Assign MFA Device", follow steps.
- Log out and test MFA login.

# REMEMBER: When setting up a new AWS account / identity, you need to add another entry within your OTP application.
--------------------------------------------------------------------------------
/demo-s3-performance-transfer-acceleration.txt:
--------------------------------------------------------------------------------
## DEMO S3 Performance
Enable Transfer Acceleration on an S3 bucket and review the AWS-provided tool to compare direct uploads vs accelerated transfers.

Create S3 bucket (no periods in name, DNS-naming compatible) > name "testac3464576" > Create Bucket > new bucket Properties tab > edit Transfer Acceleration, Enable > Save changes
- NOTE: When Transfer Acceleration is enabled, an accelerated endpoint address is provided. You need to use this endpoint to get the benefit of accelerated transfers.
- To compare upload speeds: http://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html
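The same setup works from the CLI - a minimal sketch, assuming the bucket name from above (the s3api accelerate commands and the s3-accelerate endpoint are standard; the uploaded file name is hypothetical):

    aws s3api put-bucket-accelerate-configuration \
        --bucket testac3464576 \
        --accelerate-configuration Status=Enabled
    # Confirm it took effect
    aws s3api get-bucket-accelerate-configuration --bucket testac3464576
    # Uploads only benefit when sent to the accelerated endpoint
    aws s3 cp bigfile.zip s3://testac3464576/ --endpoint-url https://s3-accelerate.amazonaws.com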
--------------------------------------------------------------------------------
/demo-creating-a-budget.txt:
--------------------------------------------------------------------------------
# Demo: Creating a Budget
Goal: Understand Free Tier and set up a budget.

AWS Free Tier: https://aws.amazon.com/free/
- Details allocations of free resources

## See Spend Details
IAM Account dropdown > Billing Dashboard > click "Bills" (for spend details)

## Set up Billing Notification Preferences
Billing Notification Preferences: Click "Billing Preferences" > check all boxes within "Invoice delivery preferences" and "Alert preferences" and Update

## Create Cost Budgets
Click "Budgets" > select Use a Template > select an appropriate option based on monthly spend budget (select Zero spend budget) > Budget Name "Monthly Zero Budget", enter Email Recipients for alerts ([EMAIL]+trainingawsgeneral@gmail.com) > click "Create Budget"
-- Budgets allow you to monitor spend and configure alerts when hitting spend targets
--------------------------------------------------------------------------------
/demo-ec2-create-iam-instance-role.txt:
--------------------------------------------------------------------------------
## DEMO - Using EC2 Instance Roles
Create an EC2 Instance Role, apply it to an EC2 instance, and learn how to interact with the credentials this generates within the EC2 instance metadata.

Steps:
1. 1-click deploy
2. EC2 > select instance > Connect > EC2 Instance Connect > test command `aws s3 ls` > it fails; credentials are needed
3. Create IAM Role: IAM > Roles > Create Role > Trusted Entity, AWS service: select `EC2` > Next, Add Permissions, search `s3`, select AmazonS3ReadOnlyAccess > Next, Role name "A4LInstanceRole" > Create Role
4. Attach Role to Instance: EC2 > right-click instance, Security: Modify IAM Role > choose Role A4LInstanceRole
5. You can now run `aws s3 ls` in EC2 Instance Connect, because you have applied the IAM Role that has the S3 read-only permission to the EC2 instance
6. Clean up: IAM: delete A4LInstanceRole > EC2: Security: Modify IAM Role, select NO IAM Role > CFN: delete demo stack
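To see the temporary credentials the role generates, query the instance metadata service from inside the instance - a sketch, assuming the role name from step 3 (the 169.254.169.254 endpoint and token header are standard IMDSv2):

    # IMDSv2: grab a session token first
    TOKEN=$(curl -s -X PUT http://169.254.169.254/latest/api/token \
        -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    # List role(s) attached to the instance, then dump the temporary credentials
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/iam/security-credentials/
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
        http://169.254.169.254/latest/meta-data/iam/security-credentials/A4LInstanceRole

The returned JSON contains AccessKeyId, SecretAccessKey, Token, and an Expiration; AWS rotates these automatically before they expire.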
--------------------------------------------------------------------------------
/demo-ecs-create-container-task.txt:
--------------------------------------------------------------------------------
## DEMO ECS - Deploying 'container of cats' using Fargate
Create a Fargate cluster, create a task and container definition, and deploy the world-renowned 'container of cats' application from Docker Hub into Fargate.

1. Create Cluster: ECS Console > Clusters > Create Cluster > Networking Only option > name "allthecats" (don't tick the New VPC box; leaving it unticked uses the Default VPC) > Create. If you get an ECS error, redo these steps
2. Create Task Definition (deploys the container): Task Definitions > New > Compatibility: Fargate > Next Step > definition name "containerofcats", no task role needed, operating system family: Linux, task size: 1GB memory, 0.5 vCPU > click "Add Container": name "containerofcatsweb", image "docker.io/acantril/containerofcats", Memory Limit Soft 1024, Port Mappings: 80 tcp > Add
3. Run it: Clusters > allthecats > Tasks tab > Launch type: Fargate, OS family: Linux, select VPC: Cluster VPC: default selection, subnet: select 2 at random > Create/Add
4. Cleanup: Stop Task > Task Definition: Actions: Deregister > Cluster: allthecats: Delete
--------------------------------------------------------------------------------
/demo-ecs-create-docker-image-push-dockerhub.txt:
--------------------------------------------------------------------------------
## DEMO - Creating 'container of cats' Docker Image
Create a Docker image containing the 'container of cats' application.
- Prereq: Create a DockerHub account

1. 1-Click Deploy (A. Cantrill lesson).
2. Nav to new EC2 instance > Connect > Session Manager
3. Commands https://learn-cantrill-labs.s3.amazonaws.com/awscoursedemos/0030-aws-associate-ec2docker/lesson_commands.txt
   - `sudo amazon-linux-extras install docker` - Install Docker
   - `sudo service docker start` - Start Docker
   - `docker ps` - Test Docker. Expect a permissions error
   - `sudo usermod -a -G docker ec2-user` - Add permissions
   - `exit` and reopen a Session Manager
   - `sudo su - ec2-user` - Log in as the EC2 user
   - `docker ps` now works
   - `cd container`, `ls -la`
   - `docker build -t containerofcats .` - Build the container image
   - `docker images --filter reference=containerofcats` - List the image just built
   - `docker run -t -i -p 80:80 containerofcats` - Run the cats image, mapping port 80 on the container to port 80 on the EC2 instance
   - Log in to Docker Hub and push:
     docker login --username=YOUR_USER
     docker images
     docker tag IMAGEID YOUR_USER/containerofcats
     docker push YOUR_USER/containerofcats:latest
--------------------------------------------------------------------------------
/ec2-ssm-parameter-store-create-parameters.txt:
--------------------------------------------------------------------------------
## DEMO - Parameter Store
Create some parameters in the Parameter Store and interact with them via the command line - using individual parameter operations and accessing via paths.

Steps:
1. Nav to Systems Manager in AWS Console > Parameter Store > Create parameter
2. Parameter 1 details: Name "/my-cat-app/dbstring" > value "db.allthecats.com:3306" > description "Connection string for cat app" > create
   - the forward slashes create a hierarchy in Parameter Store
3. Parameter 2: Name "/my-cat-app/dbuser" > value "bosscat" > create
4. Param 3 (secure): Name "/my-cat-app/dbpassword" > Type: SecureString > value: amazingsecretpassword1337 (encrypted)
5. Param 4: Name "/my-dog-app/dbstring" > value "db.ifwereallymusthavedogs.com:3306"
6. Param 5: Name "/rate-my-lizard/dbstring" > value "db.thisisprettyrandom.com:3306"
7. CloudShell (top menu) > try these commands:
   aws ssm get-parameters --names /rate-my-lizard/dbstring
   aws ssm get-parameters --names /my-dog-app/dbstring
   aws ssm get-parameters --names /my-cat-app/dbstring
   aws ssm get-parameters-by-path --path /my-cat-app/
   aws ssm get-parameters-by-path --path /my-cat-app/ --with-decryption
8. Clean up: Select all params > Delete
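Steps 1-6 can also be scripted - a sketch of the equivalent put-parameter calls (names and values from above; put-parameter is the standard create/update operation):

    aws ssm put-parameter --name /my-cat-app/dbstring --type String \
        --value db.allthecats.com:3306 --description "Connection string for cat app"
    aws ssm put-parameter --name /my-cat-app/dbuser --type String --value bosscat
    # SecureString encrypts the value, by default with the account's aws/ssm KMS key
    aws ssm put-parameter --name /my-cat-app/dbpassword --type SecureString \
        --value amazingsecretpassword1337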
--------------------------------------------------------------------------------
/demo-creating-using-scp.txt:
--------------------------------------------------------------------------------
## DEMO - Using Service Control Policies (SCP)
Update the structure within the organization, add hierarchy (Root, Dev OU, Prod OU) - and apply an SCP to the PRODUCTION account to test its capabilities
- Follows the same DENY-ALLOW-DENY evaluation pattern as IAM policies

1. Create OU: Be in Admin account > AWS Organizations > check Root container box, "Actions: Create New" > Org Unit Name "PROD" > Create OU.
2. Create OU: Repeat above for DEV
3. Move relevant accounts to new OUs: check Production account, dropdown "Move" > select respective OU (PROD/DEV) > Move AWS Account
4. Add SCP to Production Account: Switch Role: PROD > nav to S3 > Create Bucket > name (must be globally unique) "catpics3453451" > region: us-east-1 > upload attached photo (proving we have admin access)
5. Restrict with SCP: Switch back > AWS Orgz > Policies > click SCP, enable SCP > Create Policy > copy the .json text content from lesson file denys3.json (a sketch follows after the next demo file) > replace JSON with copied text > name "Allow All Except S3" > create policy
6. AWS Orgz > AWS Accounts > click PROD OU > tab "Policies" > attach the Allow All Except S3 policy
7. Detach FullAWSAccess: AWS Orgz > Account PROD > tab "Policies" > select "attached directly" FullAWSAccess > Detach > Detach Policy
8. We reverted everything back to normal (re-attached the Full Access policy to the PROD OU, emptied the S3 bucket, deleted the S3 bucket)
--------------------------------------------------------------------------------
/demo-s3-presigned-urls-create.txt:
--------------------------------------------------------------------------------
## DEMO S3 Creating and using PreSigned URLs
Create a bucket, upload an object, and generate a presigned URL allowing access for unauthenticated identities.

1. Create bucket > name "animals4lifemedia[randomnumber]" > Create bucket
2. Upload object > all5.jpg to new bucket
3. Generate PreSigned URL > CloudShell (terminal icon top right of AWS) > shell command "aws s3 presign s3://animals4lifemedia745675/all5.jpg --expires-in 180"
   - 180 is in seconds, 3 mins
4. Clean up: Empty/delete bucket

NOTE: Interesting Aspects...
1. Try "aws s3 presign s3://animals4lifemedia745675/all5.jpg --expires-in 604800" (the 7-day maximum)
2. Nav to IAM > Users > iamadmin > Permissions: Add inline policy, copy the JSON from the lesson (a sketch of such a deny-S3 policy follows below) > save
3. In CloudShell, try "aws s3 ls", you'll see Access Denied. This Explicit Deny overrules S3 permissions. Refresh the PreSigned URL and see that access is now denied, as the current permissions of iamadmin are restricted from S3
4. iamadmin is currently restricted from S3, but can still generate a PreSigned URL for it, even with no access.
   - You can also generate a PreSigned URL for a non-existent object
   - If you generate a PreSigned URL with an assumed Role, the URL will stop working when the temporary creds for the Role stop working
   - Can now create a PreSigned URL from the AWS Dashboard: S3 > bucket > object > Object Actions dropdown "Share with a presigned URL"
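Neither the lesson's denys3.json (SCP demo above) nor the inline policy from NOTE step 2 is reproduced in these notes; a minimal sketch of the deny-S3 idea they both use (as an SCP, the deny is paired with an Allow * statement like this):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "*",
          "Resource": "*"
        },
        {
          "Effect": "Deny",
          "Action": "s3:*",
          "Resource": "*"
        }
      ]
    }

As an inline IAM policy you would keep only the Deny statement; an explicit deny always wins over any allow.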
--------------------------------------------------------------------------------
/demo-ec2-creating-connecting-to-an-ec2-instance.txt.txt:
--------------------------------------------------------------------------------
### DEMO: Create an EC2 Instance
https://learn.cantrill.io/courses/1820301/lectures/41301621

1. Create SSH Key Pair: Navigate to EC2 Console (search bar) > left sidebar > Network & Security > Key Pairs > click "Create Key Pair"
   - Note: For creating a key pair, the private key file format choices are .pem and .ppk. For macOS/Linux, always use .pem; on modern Windows you can also choose .pem. For older Windows or the PuTTY terminal app, choose .ppk
2. Configure: Left sidebar "Instances" > click "Launch Instances" > select O/S (Amazon Linux) > select keypair login (A4L) > Network Settings: select Create Security Group > name/description "MyFirstInstanceSG" > leave inbound security group rules as defaults > Launch EC2 Instance

## DEMO: Connect to Terminal of an EC2 Instance
1. In EC2 Dashboard, right-click your EC2 instance, select "Connect" > SSH Client
2. Open your local Terminal / Command Prompt and change directories to wherever your SSH private key file is (likely Downloads; it's the .pem file created previously)
3. Copy the command at the bottom of the SSH Client tab: ssh -i "A4L.pem" ec2-user@ec2-54-89-175-122.compute-1.amazonaws.com, then verify the fingerprint with "yes"
   - Note: If using macOS/Linux and you get "Permission Denied": paste in terminal "chmod 400 A4L.pem"
   - More key file permissions help: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/connection-prereqs.html#connection-prereqs-private-key
--------------------------------------------------------------------------------
/demo-kms-create-encrypt-decrypt.txt:
--------------------------------------------------------------------------------
## DEMO - KMS - Encrypting the battleplans with KMS
Run through the practical steps of creating and configuring a KMS Key and an Alias, then use that Key and the CLI tools to encrypt and decrypt some data

1. Generate KMS Key: Log in to Mgmt/General Admin account > KMS > Create a key > Key Type: Symmetric > Next > create alias "catrobot" > Next > Define key admin permissions: check "iamadmin" (so the iamadmin user can administer this key) > Next > Define key usage permissions: select "iamadmin" > Next > Review > Finish
2. Enable Key Rotation: Access catrobot key > Key Rotation tab > check auto-rotate KMS key yearly, Save
3. Use Key / Create Battle Plan: access CloudShell (first icon on the right side of the main AWS menu, looks like a terminal icon) > create the plaintext battleplan:
   echo "find all the doggos, distract them with the yumz" > battleplans.txt
4. Encrypt message - the output will be the encrypted not_battleplans.enc. Enter the below lines into the shell:
   aws kms encrypt \
     --key-id alias/catrobot \
     --plaintext fileb://battleplans.txt \
     --output text \
     --query CiphertextBlob \
     | base64 --decode > not_battleplans.enc
5. The receiver needs to decrypt the ciphertext. Enter the below lines into the shell:
   aws kms decrypt \
     --ciphertext-blob fileb://not_battleplans.enc \
     --output text \
     --query Plaintext | base64 --decode > decryptedplans.txt
6. Clean Up: Select key > Key Actions: Schedule for deletion (waiting period 7-30 days, input 7) > Confirm > Schedule deletion
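Steps 1-2 have CLI equivalents too - a sketch (create-key, create-alias, and enable-key-rotation are the standard KMS operations; the alias name is from above):

    KEY_ID=$(aws kms create-key --description "catrobot demo key" \
        --query KeyMetadata.KeyId --output text)
    aws kms create-alias --alias-name alias/catrobot --target-key-id "$KEY_ID"
    aws kms enable-key-rotation --key-id "$KEY_ID"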
--------------------------------------------------------------------------------
/demo-s3-versioning-enable-delete-objects.txt:
--------------------------------------------------------------------------------
## DEMO S3 Versioning
This lesson looks at an 'Animal of the week' website, and how to use versioning to recover when objects are changed or deleted, accidentally or intentionally.

1. Create new S3 Bucket: Log in to Mgmt/General Admin account > S3 > Buckets > Create Bucket > name [unique] "acbucket23425" > uncheck Block Public Access > Enable Bucket Versioning > Create Bucket
2. Enable Static Hosting: New bucket > Properties tab > Enable Static Website Hosting (bottom of page) > index doc "index.html" > error doc "error.html" > Save changes
   - REMEMBER: Just enabling Static Website Hosting is insufficient; you also need to add a Bucket Policy
3. Add Policy: Access new bucket > Permissions tab > edit Bucket Policy > copy text from bucket_policy.json (from the demo .zip) > paste in the JSON, edit Resource to look like "Resource": "arn:aws:s3:::acbucket23425/*", except with your copied Bucket ARN instead
4. Upload files: acbucket > Upload > file "index.html" and folder "img" > click Upload
5. Versions: acbucket > Objects tab > toggle Show Versions > access winkie.jpg > Upload > Add Files > version 2/winkie.jpg > Upload > access the winkie.jpg object and see multiple versions if the "Show versions" toggle is enabled
6. Delete Marker: Untoggle Show Versions > select winkie.jpg > Delete > toggle on Show Versions > see the Delete Marker > select the Delete Marker and Delete (this undeletes the object)
7. Delete Latest Version to Perma-Delete: acbucket Objects > toggle on Show Versions > select the Latest Version of winkie.jpg > Delete
   - NOTE: Deleting a specific version of an object is permanent, and makes the next most recent version the new Latest / Current Version
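Versions and delete markers can also be inspected from the CLI - a sketch, assuming the bucket/object names above (the version id is a placeholder):

    aws s3api list-object-versions --bucket acbucket23425 --prefix winkie.jpg
    # Deleting WITHOUT a version id adds a delete marker; WITH one, it permanently deletes that version
    aws s3api delete-object --bucket acbucket23425 --key winkie.jpg --version-id VERSION_ID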
--------------------------------------------------------------------------------
/demo-cloudtrail-implement-trail.txt:
--------------------------------------------------------------------------------
## DEMO - Implementing an Organizational Trail (CloudTrail)
This CloudTrail will be configured for all regions and set to log global services events.
We will set the trail to log to an S3 bucket and then enhance it to inject data into CloudWatch Logs.

NOTE: You can create individual trails in accounts, but it's always more efficient to use an Organizational Trail.
NOTE: By default, when you create a Trail, it creates a Trail for all AWS Regions in your account

EXAM: S3 bucket names MUST be GLOBALLY UNIQUE

1. iamadmin login > CloudTrail > Trails > Create trail > Trail name "Animals4lifeOrg" > check "Enable for all accounts in my org" > uncheck Enable under "Log file SSE-KMS encryption" (use this in production, not in this practice) > Enable CloudWatch Logs > CW Logs Role name "CloudTrailRoleForCloudWatchLogs_Animals4Life" > Next > select Event types to log: check Management Events > Next > Review / Create trail
   - After creating, it can take some time for data to start appearing
2. Open the S3 link on the new Trail > move down through CloudTrail/ > view the json.gz (can decompress with ChatGPT) to verify the Trail is working
3. In a new tab, nav to CloudWatch > Log Groups > Log Streams, all in us-east-1
   - The Account ID is contained within the Log Stream file name.
   - CloudTrail Event History logs events for 90 days even if you have no Trail created. Trails allow you to customize what happens to the data

- In this demo, we created a Trail 1) to store data in S3 and 2) to put it into CloudWatch Logs

NOTE: S3 charges based on storage. If CloudTrail is enabled, the logs will accumulate, possibly past the Free Tier
- To avoid logging events and incurring charges: go to Trails, "Stop Logging"
--------------------------------------------------------------------------------
/demo-s3-crr-replication-rule-create.txt:
--------------------------------------------------------------------------------
## DEMO Cross-Region Replication of an S3 Static Website
Create 2 S3 buckets - one in N. Virginia, the other in N. California - and configure Cross-Region Replication (CRR) between the two.

1. iamadmin account, N. Virginia region selected
2. Nav to S3 > Create Source Bucket > Create Bucket > name of source "sourcebucketta[random number]" > region us-east-1 > Create Bucket
3. Enable Static Website on Source > Properties tab > Static website hosting: edit: Enable > type: static website > index doc "index.html" / error doc "index.html" > Save changes
4. Edit Source bucket Permissions for Public Access > Permissions tab > uncheck Block All Public Access > confirm > Save changes
5. To make the Source Bucket public, add a policy > Permissions tab > Bucket Policy "edit" > from the lesson docs, paste the JSON (a sketch of it follows this file), replacing the Resource ARN before the "/*" > Save
6. Create Destination Bucket & Set Permissions/Policy > Create Bucket > name of dest "destinationbucketta[random number]" > AWS Region: us-west-1 > uncheck Block All Public Access > Properties tab, Enable Static website hosting > Hosting type: static > index/error docs "index.html" > Save changes > Permissions tab, edit Bucket policy, paste JSON, update ARN, Save
7. Enable Cross-Region Replication (CRR): Source Bucket Management tab > Create replication rule > Enable Versioning > Replication rule name "staticwebsiteDR" > Status "Enabled" > Choose Rule Scope: "Apply to all objects in the bucket" > Destination: Browse S3, find destination bucket, enable versioning > IAM Role: dropdown "create new role" > Create replication rule > Replicate Existing Objects? No (as we have no pre-existing objects)
8. Clean Up: Empty/Delete Destination Bucket > Empty/Delete Source Bucket > IAM: locate the role starting with "s3crr_role[...]" and delete it
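The public-read bucket policy the static-website demos paste from the lesson .zip isn't reproduced in these notes; it generally looks like this (the bucket name is the example from step 2 - substitute your own ARN):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "PublicRead",
          "Effect": "Allow",
          "Principal": "*",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::sourcebucketta12345/*"
        }
      ]
    }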
--------------------------------------------------------------------------------
/demo-creating-aws-organization.txt:
--------------------------------------------------------------------------------
## DEMO - AWS Organizations

Steps:
1. The GENERAL account will become the MANAGEMENT / MASTER account for the organisation
2. We will invite the PRODUCTION account as a MEMBER account and create the DEVELOPMENT account as a MEMBER account.
3. Finally - we will create an OrganizationAccountAccessRole in the production account, and use this role to switch between accounts.

1. Nav to AWS Organizations > "Create an Organization"
2. In an Incognito window, log in to the Production account admin, copy the Account ID > back in the General AWS Management Account window > "Add an AWS account" > Invite Existing Account > paste in ID / add message > Send Invitation
3. In the Production Admin account > nav to AWS Organizations > Invitations > Accept Invitation
4. Manually adding a Role to an invited account: In the Prod account nav to IAM > Roles > Create Role > Type of Entity: AWS Account > get the account ID of the General AWS Org account > select Another AWS Account > paste Account ID > Add Permissions: AdministratorAccess > Next > name "OrganizationAccountAccessRole" > Create Role. In IAM > Roles > OrganizationAccountAccessRole > Trust Relationships, you'll see the General Account ID referenced as a Trusted Entity
5. Role Switch: From General to Production account. Copy Production ID > main dropdown "Switch Role" > click "Switch Role" > paste Prod ID > name of Role created "OrganizationAccountAccessRole" > Display Name "Prod" > select color Red > Create / Switch Role. You'll see Display Name "Prod" in the top-right menu dropdown. You can also SWITCH BACK

6. Create new Account within Org: AWS Orgz > Add Account > Create an AWS Account > account name (DEVELOPMENT, per the overview), IAM role name "OrganizationAccountAccessRole" > Create
7. After the new account is created, copy its account ID > dropdown Switch Role > switch to the DEV role
--------------------------------------------------------------------------------
/demo-s3-creating-static-site.txt:
--------------------------------------------------------------------------------
## DEMO Creating a Static Website with S3
1. Create S3 Bucket > bucket name "[custom-globally-unique]" > uncheck "Block All Public Access" and acknowledge > Create Bucket
2. Enable Static Website Hosting > access new bucket > Properties tab > scroll to bottom, Edit Static Website Hosting, Enable > Hosting Type: Static > Index Document "index.html" > Error Document "error.html" > in Properties, scroll down and copy the new static URL
3. Upload some objects to the bucket > Objects tab, "Upload" > upload index.html, error.html, and the "img" folder
4. Paste the copied URL into a browser
   - You'll get a 403 Forbidden error (Access Denied) - remember, S3 buckets are private by default. We need to add permissions for anonymous/public users to visit the site (there's no method for a browser visitor to provide creds to S3)
5. Give permissions to unauthenticated users to access the bucket > access S3 bucket > Permissions tab > Bucket Policy > Edit > grab the JSON from the .zip (same shape as the bucket policy sketch above) > paste it in, but replace the "Resource" value BEFORE the "/*" (mine now looks like "Resource":["arn:aws:s3:::top10catsever876.com/*"]) > Save
6. You can now visit your website
   Extra: If you registered a domain in R53, you can customize your URL. R53 > Hosted Zones > access your domain > Create Record > select Simple Routing > Record Name e.g. "top10.animals4life.io" > choose Endpoint "Alias to S3 website endpoint" > Region: us-east-1 > S3 endpoint, select your bucket > click "Define simple record" > Create Records. Now Cantrill can go to top10.animals4life.io. Remember: Domain name and bucket name must match exactly to do this
7. Clean Up: Empty bucket, delete bucket


Requirements for an S3 bucket to operate as a website:
- Disable Block Public Access settings
- Enable Static Web Hosting
- Set index/error documents
- Upload web files
- Add a bucket policy
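The hosting setup in step 2 can be done with a single CLI command as well - a sketch, assuming the example bucket name from step 5:

    aws s3 website s3://top10catsever876.com/ --index-document index.html --error-document error.html
    # Upload the site files in bulk from the current directory
    aws s3 sync . s3://top10catsever876.com/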
--------------------------------------------------------------------------------
/demo-s3-bucket-create-upload-delete.txt:
--------------------------------------------------------------------------------
## DEMO S3 Versioning
This lesson looks at an 'Animal of the week' website, and how to use versioning to recover when objects are changed or deleted, accidentally or intentionally.

1. Create new S3 Bucket: Log in to Mgmt/General Admin account > S3 > Buckets > Create Bucket > name [unique] "acbucket23425" > uncheck Block Public Access > Enable Bucket Versioning > Create Bucket
2. Enable Static Hosting: New bucket > Properties tab > Enable Static Website Hosting (bottom of page) > index doc "index.html" > error doc "error.html" > Save changes
   - REMEMBER: Just enabling Static Website Hosting is insufficient; you also need to add a Bucket Policy
3. Add Policy: Access new bucket > Permissions tab > edit Bucket Policy > copy text from bucket_policy.json (from the demo .zip) > paste in the JSON, edit Resource to look like "Resource": "arn:aws:s3:::acbucket23425/*", except with your copied Bucket ARN instead
4. Upload files: acbucket > Upload > file "index.html" and folder "img" > click Upload
5. Versions: acbucket > Objects tab > toggle Show Versions > access winkie.jpg > Upload > Add Files > version 2/winkie.jpg > Upload > access the winkie.jpg object and see multiple versions if the "Show versions" toggle is enabled
6. Delete Marker: Untoggle Show Versions > select winkie.jpg > Delete > toggle on Show Versions > see the Delete Marker > select the Delete Marker and Delete (this undeletes the object)
7. Delete Latest Version to Perma-Delete: acbucket Objects > toggle on Show Versions > select the Latest Version of winkie.jpg > Delete
   - NOTE: Deleting a specific version of an object is permanent, and makes the next most recent version the new Latest / Current Version
8. Clean Up: Empty acbucket > Delete acbucket

REMEMBER: Versioning Enabled can incur significantly higher costs than versioning turned off.
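Related to the cost warning: once enabled, versioning can never be fully turned off on a bucket, only suspended - a CLI sketch, assuming the bucket name from above:

    # Suspend versioning (existing versions are kept; new uploads stop creating versions)
    aws s3api put-bucket-versioning --bucket acbucket23425 \
        --versioning-configuration Status=Suspended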
--------------------------------------------------------------------------------
/demo-s3-sse-encryption-role-separation.txt:
--------------------------------------------------------------------------------
## Demo - SSE-S3 - Object Encryption and Role Separation
Create an S3 bucket, and upload 3 images to the bucket using different encryption methods

1. Create S3 Bucket > name "catpics[random number]" > Create Bucket
2. Create Key: Nav to KMS > Create Key > defaults > Alias "catpics" > Next > don't set any Key Administrators > Next > skip Key Usage Permissions > Finish
3. Upload SSE-S3 image: Back to S3 > catpics bucket > upload just sse-s3-dweez.jpg (must upload separately, as we are configuring different encryption per object) > expand the Properties accordion, Server-side encryption "Specify an encryption key" > Encryption settings "Override bucket settings" > Encryption key type "SSE-S3" > Upload
4. Upload SSE-KMS image w/ AWS managed default key: Upload sse-kms-ginny.jpg > Properties > SSE - specify an encryption key > Override bucket settings > key type: SSE-KMS > Choose from your AWS KMS keys: choose the one ending "aws/s3", an AWS managed default key > Upload
5. Replace Step 4 Object, SSE-KMS image w/ KMS Generated Key: Upload sse-kms-ginny.jpg > Properties > SSE - specify an encryption key > Override bucket settings > key type: SSE-KMS > Choose from your AWS KMS keys: alias "catpics" > Upload
6. Apply a Deny policy in IAM, preventing KMS use > IAM Dashboard > Users > iamadmin > Permissions tab > Add Policy: Inline Policy > JSON tab > delete template > paste in the JSON provided in the lesson > Review policy > name "denyKMS" > Create policy. Now you can open the sse-s3 image, but not the kms image. Then: IAM > remove denyKMS
7. Set Default Bucket Encryption: catpics bucket > Properties tab > Default Encryption "edit" > key type: SSE-KMS > AWS KMS Key: Choose, "catpics"
8. Upload the default Merlin jpg with no encryption settings: upload default-merlin.jpg without changing anything > Properties tab: Server-side encryption settings - see the defaults applied from Step 7

EXAM: If you need to fully manage the keys used as part of the S3 encryption process, you have to use SSE-KMS
--------------------------------------------------------------------------------
/demo-access-key-creating-configuring.txt:
--------------------------------------------------------------------------------
# DEMO Creating Access Keys and Setting Up AWS CLI v2 Tools

Once logged in with the Admin user:
IAM Dropdown > Security Credentials > scroll down to "Create Access Key", click > Command Line Interface (CLI) > check box at bottom > Next > set Description Tag > "Create Access Key"

Once an Access Key is created, you can use the Actions dropdown to Deactivate, Activate, and Delete. If you ever lose access to a key, you need to deactivate & delete it, then create a new one.

## Download AWS CLI v2
AWS CLI v2 (Windows) Installation - https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-windows.html

AWS CLI v2 (macOS) Installation - https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-mac.html

AWS CLI v2 (Linux) Installation - https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2-linux.html

## Configure CLI
Configure a set of credentials which the CLI will use to communicate with AWS. We will use the General IAMADMIN user for this one.

COMMAND: 'aws configure': configures the default profile for the CLI
COMMAND: 'aws configure --profile iamadmin-general' / 'aws configure --profile iamadmin-production': named profiles for the CLI

Upon entering the above command, you'll be prompted for:
- AWS Access Key ID
- AWS Secret Key
- Default Region Name: us-east-1
- Default output format (press Enter with a blank field for the default)

Test that this was successful with COMMAND 'aws s3 ls --profile [CLI profile name]'. A plain 'aws s3 ls' will error first, as we need to specify the named profile:
'aws s3 ls --profile iamadmin-general', which will currently return nothing as there are no S3 buckets.

## Configure for Production
COMMAND: 'aws configure --profile iamadmin-production'
COMMAND to test: 'aws s3 ls --profile iamadmin-production'

SECURITY REMINDER: Never share your SECRET KEY. If leaked, delete it, create a new set of keys, and re-configure the CLI

TIP: After you configure the CLI with credentials, you can delete the credential files (CSVs)
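For reference, 'aws configure --profile iamadmin-general' just writes to two plain-text files under ~/.aws/ - a sketch of what they end up looking like (key values are placeholders):

    # ~/.aws/credentials
    [iamadmin-general]
    aws_access_key_id = AKIAEXAMPLEKEYID
    aws_secret_access_key = exampleSecretKeyNotReal

    # ~/.aws/config
    [profile iamadmin-general]
    region = us-east-1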
--------------------------------------------------------------------------------
/demo-create-lambda-function-eventbridge-event.txt:
--------------------------------------------------------------------------------
## DEMO - Automated EC2 Control Using Lambda and Events
Gain experience using Lambda for some simple account management tasks.

1. 1-click deploy
2. Give Lambda Permissions: https://learn-cantrill-labs.s3.amazonaws.com/awscoursedemos/0024-aws-associate-lambda-eventdrivenlambda/lambdarole.json > IAM > Roles > Create Role > Entity Type, AWS Service, select Lambda > Next
3. Create IAM Role Policy: Click 'Create Policy' > JSON tab > paste the JSON provided > Next > Policy name "Lambdastartandstop" > Create Policy
4. Create IAM Role with Policy: IAM Role tab > select the Policy for the IAM Role > Role name "EC2StartStopLambdaRole" > Create Role
5. EC2 > copy both instance IDs to notepad/clipboard
6. Create Lambda Function: Lambda > Create function > select Author from Scratch > name "EC2Stop" > runtime "Python 3.9" > dropdown "Change default execution role", select Use Existing, select the IAM Role you created > Create Function > paste the code from file lambda_instance_stop.py into the Lambda code block > click Deploy
7. Create env variable for EC2Stop: Lambda function, Configuration tab > Environment Variables "Edit" > click "Add environment variable" > key: EC2_INSTANCES, value: "ec2_instance_ID_1,ec2_instance_ID_2"
8. Test Event: EC2Stop Lambda Function, Test tab > Event name "test" > Test. Observe the EC2 instances being stopped.
9. Create second function "EC2Start": Lambda, Functions "Create Function" > name "EC2Start" > existing IAM Role > Create function > add the code from lambda_instance_start.py > paste code, Deploy > Environment Variables "Edit" > click "Add environment variable" > key: EC2_INSTANCES, value: "ec2_instance_ID_1,ec2_instance_ID_2" > Save
10. Test Event: EC2Start Lambda Function, Test tab > Event name "test" > Test. Observe the EC2 instances being started.
11. Set up an event-driven Lambda Function: Lambda > new function name "EC2Protect" > runtime Python 3.9 > select the IAM role that we created > Create function > add the code from file "lambda_instance_protect.py", Deploy
12. Create an EventBridge Rule for Lambda to receive: EventBridge > new Rule name "EC2Protect" > desc "Start protected instance" > select "rule with an event pattern" > Event Source "AWS events or EventBridge partner events" > Event pattern "AWS Services", "EC2", event type: EC2 Instance State-change Notification, Specific States: Stopped > Specific Instance IDs, paste instance 1 ID > Next > Target 1 AWS Svc, "Lambda function", Function: EC2Protect > Create Rule
13. Clean up: Lambda, delete functions > EventBridge, delete Rules > IAM Policies, delete the Lambda policy > delete the Lambda IAM Role > CloudFormation, delete the Stack
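The console Test tab in steps 8/10 can be reproduced from the CLI - a sketch (function names from above; the instance IDs are placeholders):

    aws lambda invoke --function-name EC2Stop response.json && cat response.json
    # Under the hood, the functions just call the EC2 stop/start APIs, e.g.:
    aws ec2 stop-instances --instance-ids i-1111111111 i-2222222222
    aws ec2 start-instances --instance-ids i-1111111111 i-2222222222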
--------------------------------------------------------------------------------
/demo-creating-aws-admin-account.txt:
--------------------------------------------------------------------------------
# DEMO: Creating the GENERAL AWS Account: https://learn.cantrill.io/courses/1820301/lectures/41301459

1. Create the General AWS account (*MANAGEMENT). This account's root user will be what we log in with (root user = account specific).
2. Add root user MFA for security.
3. Create a Budget to protect against unintended costs.
4. Create an IAM user, IAMADMIN. Give permissions. Then we'll use this identity for the course.

## 1. Create General AWS account
- https://aws.amazon.com/resources/create-account/ - Root user email address (see TIP below), AWS account name
- Choose the free tier AWS account after providing all account setup info (unique email, CC, verification).
- Complete the prompted steps. The account will now be created.

## 2. Add root user MFA for security
IAM Dropdown > Security Credentials > Assign MFA Device > follow steps
- Recommend using the Google Authenticator app
Once the steps are complete, log out and test MFA login.

TIP: Using one email for multiple accounts with Gmail. AWS accounts should be viewed as disposable; create as many as you need. Create a new account for each course.
# TIP Example: email is catguy@gmail.com. You can use the + sign in the email address to create 'unique' email addresses. Ex: catguy+AWSAccount1@gmail.com, catguy+AWSAccount2@gmail.com, etc. This is called a Dynamic Alias.

## 3. Create a Budget to protect against unintended costs
AWS Free Tier: https://aws.amazon.com/free/
- Details allocations of free resources

Create Cost Budgets: click "Budgets" > select Use a Template > select an appropriate option based on monthly spend budget (select Zero spend budget) > Budget Name "Monthly Zero Budget", enter Email Recipients for alerts ([EMAIL]+trainingawsgeneral@gmail.com) > click "Create Budget"
-- Budgets allow you to monitor spend and configure alerts when hitting spend targets

## 4. Create an IAM user, IAMADMIN and give permissions
Search "IAM" > IAM Dashboard > Create user "iamadmin"
- Create an IAM user (2nd radio option; the 1st option is covered later), check "Provide user access to the AWS Management Console - optional"
- Select the second radio option, Create IAM User: Custom PW
- Give admin permissions: "Attach policies directly" > check "AdministratorAccess"
- Create user
- Test login by visiting the alias URL
- Confirm login via the profile dropdown, which will show the account name
- Secure the admin account with OTP > Security Credentials > Assign MFA
- Log out and test OTP

# TIP: To set an alias (make it globally unique)
Within the IAM Dashboard, find the right-side AWS Account info, find "Account Alias", click "Create".
- Must be a globally unique ID. I am using "ta-cantrill-training-aws-general"
-- Now the URL is https://ta-cantrill-training-aws-general.signin.aws.amazon.com/console
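The account alias can also be set from the CLI - a sketch, using the alias from above:

    aws iam create-account-alias --account-alias ta-cantrill-training-aws-general
    aws iam list-account-aliases   # verify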
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# aws-certified-solutions-architect-cantrill-notes
My personal notes from Adrian Cantrill's course for AWS Certified Solutions Architect - Associate (SAA-C03).

# Result: Passed on the first attempt.

140 hours of study, May 1 - August 10, Mon-Fri

# Tools:
- Cantrill's SAA-C03 course (1.5x speed, pausing to take notes)
- Tutorials Dojo SAA-C03 practice exam collection
- AWS docs
- Flash cards on phone while vacationing (Quizlet app free version, found collections made by other users)

# Exam: August 11
Studied roughly 3 hours a day, usually using the Pomodoro technique (25 min study, 5 min break, repeat). Basically spent 95% of the time just going through Cantrill's course and taking notes. I would go into Tutorials Dojo to reinforce the topic I learned with quizzes and reviews, and Googled often when I didn't understand concepts or how different services connected. For the last 5 days after I finished Cantrill's course, I focused on the Tutorials Dojo full practice exams, all timed. I would then review what I answered incorrectly and retake the same exam to get 95%+. I did each exam twice in a row and completed two new exams a day. During a vacation in my study period, I kept some flashcards found on Quizlet on my phone, just to keep the content close in mind without studying heavily during the break. I burned out a few times and took a 3-day weekend here and there, as well.

I had to reschedule the exam twice after realizing I had not allotted enough time to get through all the Cantrill material (I honestly ended up skipping some demos towards the end just to get through the content).

In terms of the actual exam, I found that the content and questions were overall pretty difficult and not exactly what I practiced on Tutorials Dojo. However, hammering out quizzes/exams over and over gave me the ability to break down the questions and logic out the answers, drawing from what I learned studying. I think it was very valuable that I reinforced Cantrill learnings frequently with quizzes and Google research related to the studied topic. I never had issues with the exam time running out; my issue was really just absorbing this massive amount of info and actually being able to understand each component and connect the dots.

Overall, the content and certification exam were difficult, but entirely doable on the first attempt if you use your resources and stay consistent. Good luck!
___
What's Next? It was recommended to me (by a friend who works professionally in AWS) that I complete the Cloud Resume Challenge, which has been extremely valuable so far. I am a self-taught developer with 2 years of professional web dev experience, and the CRC challenge is helping me bridge the gap from AWS study to job-readiness. I'll be able to share this project and its results with recruiters/hiring managers to display my abilities.

# Basic Account Setup
1. create account and root user
2. add alt contact info
3. billing notifications, create free-tier budget (skip MFA and security), "CRC Zero-Spend Budget"
4. create IAM admin
   - create alias for sign-in
5. sign in with crc-admin
6. create access keys
   - public
   - secret
7. configure CLI (command prompt): `aws configure --profile crc-admin`
--------------------------------------------------------------------------------
/demo-vpc-create-multiple-subnets.txt:
--------------------------------------------------------------------------------
## DEMO - Custom VPCs - Create VPC - Overview
- VPCs are a regionally isolated and regionally resilient service. A VPC operates from all AZs in that region.
- Nothing is allowed IN or OUT of a VPC without explicit permission; this provides an isolated blast radius: if you have a problem inside a VPC, the impact is limited to that VPC and/or anything connected to it
- Custom VPCs allow for simple or multi-tier, flexible configuration
- Custom VPCs also provide Hybrid Networking
- When you create a VPC, you can pick DEFAULT or DEDICATED tenancy, i.e. shared or dedicated hardware
-- If you pick DEFAULT tenancy, you can choose on a per-resource basis later on, when provisioning resources, whether they run on shared or dedicated hardware
-- If you pick DEDICATED tenancy at the VPC level, it's locked in. Any resources created inside the VPC have to be on dedicated hardware (cost premium compared to Default)
- By default, a VPC uses private/public IPv4 addresses. The private CIDR block is the main method of IP communication for the VPC, with public IPs used when you want to make resources public
- A VPC is allocated ONE mandatory private IPv4 CIDR block
-- The primary block has two main restrictions: min /28 (16 IPs), max /16 (65,536 IPs)
-- Optional to add secondary IPv4 blocks
- Optional: a VPC can use IPv6 by assigning a /56 IPv6 CIDR to the VPC (start applying IPv6 by default, as this is what's being adopted now)
-- IMPORTANT: You can't pick a block of IP ranges like with IPv4. Your range is either allocated by AWS, or the customer can use their OWN IPv6 range
-- IPv6 doesn't have the Public/Private concept; the range is all publicly routable by default

### DEMO - Custom VPCs - Create VPC - DNS
VPCs also have DNS, provided by R53
- DNS is 'Base IP + 2', so if the VPC is 10.0.0.0, then the DNS IP is 10.0.0.2
EXAM:
- Two critical options for how DNS functions in a VPC
-- 1. First is a setting called enableDnsHostnames - gives instances public DNS names
-- 2. The enableDnsSupport setting - enables DNS resolution in the VPC. Indicates whether DNS is enabled or disabled in the VPC. If enabled, instances in the VPC can use the DNS IP address

## DEMO pt.1 - Custom VPCs - Create/Configure VPC
Steps through the architecture and features of Custom VPCs, including the main issues which are raised in the exam

### STEPS pt.1
1. Create the VPC: VPC > Your VPCs > Create VPC > select VPC Only > name tag "a4l-vpc1" > IPv4 CIDR "10.16.0.0/16" > IPv6 CIDR block "Amazon-provided IPv6 CIDR block", default us-east-1 > Tenancy set to Default > Create VPC
2. VPC Settings: In VPC > Actions dropdown, "Edit VPC Settings" > DNS Settings: check both Enable DNS Resolution and Enable DNS Hostnames > Save (for any resources in this custom VPC, if they have public IP addresses, they'll also get public DNS names)
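STEPS pt.1 as CLI calls - a sketch (CIDR and name tag from above; the VPC ID is a placeholder; note that modify-vpc-attribute takes one attribute per call):

    aws ec2 create-vpc --cidr-block 10.16.0.0/16 --amazon-provided-ipv6-cidr-block \
        --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=a4l-vpc1}]'
    aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789 --enable-dns-support '{"Value":true}'
    aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789 --enable-dns-hostnames '{"Value":true}'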
## DEMO pt.2 - Custom VPCs - VPC Subnets
Subnets are what services run from inside VPCs, within a particular Availability Zone. They're how you add function, structure, and resilience to VPCs
NOTE: In AWS diagrams, the color blue is for private and green is for public (green = go = public)
- Subnets in a VPC start off private and take configuration to make public
- Subnets are AZ resilient. A subnet is created within one AZ, and that can never be changed. If the AZ fails, the subnet (and any contained services) fails
EXAM: A subnet can NEVER be in multiple AZs. 1 subnet is in 1 AZ. An AZ can have 0 or more subnets
- The default for a subnet is IPv4, and it is allocated an IPv4 CIDR; this CIDR is a subset of the VPC CIDR block (it has to be within the VPC allocated range)
EXAM: The CIDR that a subnet uses cannot overlap with any other subnets in the VPC
- A subnet can optionally be allocated an IPv6 CIDR (a /64 subset of the VPC's /56; a /56 has space for 256 /64 ranges, one per subnet). Note: The VPC must also be configured for IPv6
- By default, subnets in a VPC can communicate with other subnets in the same VPC

### DEMO pt.2 - Custom VPCs - VPC Subnets - Subnet IP addressing
- 5 IPs inside every VPC subnet are RESERVED
-- Example: Subnet is 10.16.16.0/20, a range of 10.16.16.0 -> 10.16.31.255
--- Unusable Address: Network Address (10.16.16.0) - NOTE: Not just AWS; NOTHING uses the first address on any IP network
--- Unusable Address: 'Network+1' Address (10.16.16.1) - VPC Router; the logical network device that moves data between subnets
--- Unusable Address: 'Network+2' Address (10.16.16.2) - Reserved VPC address, generally for DNS
--- Unusable Address: 'Network+3' Address (10.16.16.3) - Reserved for future use
--- Unusable Address: Network Broadcast Address (10.16.31.255) - LAST IP in the subnet
EXAM: If a subnet has 16 IPs... it actually only has 11 usable, because 5 are reserved: 3 for AWS (+1, +2, +3), 1 network address (first IP in range), 1 broadcast address (last IP in range)
- Dynamic Host Configuration Protocol (DHCP): a VPC has a configuration object applied to it called a DHCP Option Set. DHCP is how computing devices receive IP addresses automatically. One DHCP option set is applied to a VPC at a time, and it flows through to the subnets. You can create DHCP option sets, but you CANNOT edit them. To change settings, create a new DHCP option set and change the VPC allocation to the new one.
- On every subnet, you can define two IP allocation options: 1. auto-assign public IPv4 addresses 2. auto-assign IPv6 addresses

### STEPS pt.2 - Creating lots of subnets
3. Create Multiple Subnets with a VPC multi-tier structure: VPC > Subnets > Create subnet > select VPC (a4l-vpc1) > subnet name "sn-reserved-A" > IPv4 CIDR block "10.16.0.0/20" > IPv6 CIDR block, choose the one option, fill in a unique IPv6 value for the respective subnet (00 for the first one) > Add New Subnet > repeat for each subnet in the AZ (4 total) > verify details in each > Create Subnet. Repeat for AZ B, AZ C.
4. Auto-allocate IPv6 Addressing: select sn-app-A > Actions dropdown, "Edit Subnet Settings" > Auto-assign IP settings, check "Enable auto-assign IPv6 address" > Save > do this for all new subnets
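Steps 3-4 via the CLI - a sketch (the VPC/subnet IDs are placeholders; the IPv6 CIDR association is omitted here for brevity):

    aws ec2 create-subnet --vpc-id vpc-0123456789 --availability-zone us-east-1a \
        --cidr-block 10.16.0.0/20 \
        --tag-specifications 'ResourceType=subnet,Tags=[{Key=Name,Value=sn-reserved-A}]'
    # Enable auto-assign IPv6 on the subnet
    aws ec2 modify-subnet-attribute --subnet-id subnet-0123456789 --assign-ipv6-address-on-creation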
--------------------------------------------------------------------------------
/notes-tech-networking-fundamentals.txt:
--------------------------------------------------------------------------------
# Networking Start Pack

## OSI 7-Layer Model
The OSI (Open Systems Interconnection) model is a framework that describes the functions and interactions of computer systems in a network.

- Local Networking: Ethernet (start / end point of data moving across the internet)
- Routing: Moving data across multiple networks
- Segmenting, Ports, and Sessions
- Applications

1. Physical
2. Data Link
3. Network
4. Transport
5. Session
6. Presentation
7. Application
^ These 7 layers are the "Networking Stack". 1-3 are Media Layers, 4-7 are Host Layers

### Layer 1 - Physical
Imagine you have two laptops at home and you want to LAN game between the two. You have a physical connection/medium between the two laptops (network interface card, network cable).

Physical Medium: can be copper (electrical), fiber (light), or WiFi (RF).

A Layer 3 device has the capabilities of all layers under it (3, 2, and 1). A Layer 1 device only has the capabilities of Layer 1, as no layers are below it.

There are no individual device addresses at Layer 1. Anything received on any port is transmitted out on every other port (including errors and collisions). If multiple devices transmit on the same Layer 1 physical medium, a collision occurs and corrupts data. The more Layer 1 devices connected, the more likely a collision.

Layer 1 is "dumb": 1 broadcast and 1 collision domain, not very scalable. No access control. No uniquely identifiable devices, so no device-to-device communication.

### Layer 2 - Data Link (DL)
Data Link adds lots of intelligence to Layer 1, allowing for more effective communication. All higher layers rely on Layer 2, as it supports the transfer of data.

Rather than physical wavelengths or voltages, L2 uses "frames". Devices at L2 have a unique hardware (MAC) address. Frames can be addressed to a destination or to a broadcast.

A Frame is a container of sorts:
- First is the PREAMBLE (lets a device know this is the start of the frame)
- Destination and Source MAC addresses
- EtherType: which Layer 3 protocol is putting its data in the frame
- Payload: The data the frame carries from source to destination
- Frame Check Sequence (FCS): Confirms whether corruption has occurred

MAC Header = Source MAC address + Destination MAC address + EtherType

In order to have an active Layer 2 network, you need Layer 1 active and working. Layer 2 checks for carriers: if there is no carrier, the Frame is sent to Layer 1 for transmission. Carrier detected? Wait, as another device is transmitting.

CSMA: Carrier Sense Multiple Access. Detects a carrier to avoid collisions.

Encapsulation: When data is wrapped in a Frame. As data is passed down the OSI model, many different components encapsulate the data.

Collision Detection happens at Layer 2. If both devices check for a carrier which doesn't exist, and both L2's instruct transmission via Layer 1, a collision can occur. If a collision is detected, a jam signal is sent and a random backoff occurs. Backoff = time + random. It increases if another collision occurs.

Hub = Layer 1 device. Dumb, sends data to all ports.
Switch = Layer 2 device. Maintains a MAC address table to learn what's connected to each port. Intelligent; stores and forwards Frames appropriately, based on the MAC address table. Won't forward collisions. Each port becomes a separate collision domain.

- Identifiable devices
- Media Access Control (sharing)
- Collision Detection
- Unicast 1:1
- Broadcast 1:ALL
- Switches

### Layer 3 - Network
Requires 1 or more Layer 2 networks to function. Layer 3 gets data from one location to another. With streaming, data moves from the server hosting the video to the local device.

#### Why is Layer 3 needed?
Say we have two local area networks with geographic separation (east/west coast US). LAN1 and LAN2 are isolated Layer 2 local networks right now. You *could* use point-to-point physical links across the distance, but this is costly. Layer 2 protocols can also differ, so connecting two L2 networks can be challenging. We need something in common between the two L2's. To move data between two local networks... inter-networking ==> internet. IP addresses can be assigned as a way to connect separated L2 networks.

- Routers are the hardware that L3 uses to encapsulate data in an IP packet.

- IP allows you to connect to remote networks, crossing over any intermediary networks in between.

Packet: The data unit used in IP. Similar to Frames (L2). Every packet has a Source IP and Destination IP address.

- Two versions of IP are in use: version 4 and version 6.
-- Destination/Source IP addresses are larger in v6 to allow for more of them.

#### L3: IP Addressing (IPv4)
Structure of an IP address. This section focuses on v4.

IP Address
133 . 33 . 3 . 7
- Dotted decimal notation, each number from 0 - 255
- All IP addresses are formed of two different parts: 1) Network (133.33), 2) Host (3.7)
- You can see that two IP addresses are on the same network if the network part matches (both being 133.33)
- 4 sets of 8 bits, 32 bits total for an IP address. Each number in the IP address is an octet
- /16 prefix: first 16 bits are network, last 16 bits are host
- Static IP: assigned by humans. DHCP: machine-assigned IPs
- IP addresses need to be unique, especially on a local network
Subnet Masks: ID the host and network parts of IP addresses, so a device knows whether to send data on the same local network, or use a router to transport it across different networks. Configured on L3 along with IP addresses. Allows an IP device to know if an IP is on the same network or not. If not on the same network, data is sent to the Default Gateway (router)
- e.g. 255.255.0.0 is a subnet mask, and is the same as the /16 prefix (when broken down into binary)

#### L3: Route Tables & Routes
How a router decides where to send data.

Every router has at least 1 route table. This is how a router knows where to send data. Two fields: 1) destination field 2) Next Hop/Target. The larger the prefix, the more specific the route (/0, /16, /32). Routers prefer more specific routes.

/24 = first 24 bits for network, last 8 bits for host.

Default Route: 0.0.0.0/0 = matches if nothing else does.

IMPORTANT: When an ISP router is forwarding a packet to an AWS router, it's forwarding it at Layer 2, wrapped in a Frame. How do we determine the MAC address of the AWS router? Address Resolution Protocol

#### L3: Address Resolution Protocol
Used when a Layer 3 packet needs to be encapsulated in a Frame and sent to a MAC address. You don't know the MAC address, but you need to get it. This is where ARP comes in.
- ARP will give you the MAC address for a given IP address
- ARP broadcasts on Layer 2

#### L3: IP Routing
Lengthy example, not a very great visual. Recommend looking up another video on YouTube for an IP Routing explanation

### Layer 4 & 5 - Transport & Session
Layer 4 allows you to create Segments with a sequence number in order to maintain ordering (L3 can't control the order of packets)


NOTE: Course is no longer broken down by Layers


#### TCP - Transmission Control Protocol
- connection between client and server used to exchange data
- established between two devices using a 'random port' on the client and a 'known port' on the server
- the connection is reliable, provided via segments encapsulated in IP packets. Orders segments with their sequence numbers
- also has error checking and retransmission
- bidirectional connections
- creates a 'channel' to exchange data, but really it's just a collection of segments; there is no real channel
- Ephemeral Port: client source port (or High Port). Needs separate rules compared to the Well Known Port
- Well Known Port: server port. Needs separate rules compared to the Ephemeral Port

##### Flags Field in TCP
Contains actual flags which can be set to influence the connection.
FIN - used to close
ACK - used for acknowledgments
SYN - used to synchronize sequence numbers
1) "Hey, let's talk": Client sends a segment to the server with SYN set (the initial sequence number ISN, here 'cs', is set)
2) "Sure, let's talk": SYN-ACK - the server receives the communication from the client, receives cs, sets ACK to cs+1, and sends cs+1 along with its own sequence number ss
3) "Awesome, go!": Client sends a segment with ACK set to ss+1
4) Connection established, client can send data
140 | ##### Firewalls on Ephemeral Port side
141 | - Stateless Firewall: Doesn't understand the state of a connection. Two rules: 1) OUT: allow outbound segments (initiating traffic) 2) IN: allow the response (response traffic). In AWS, a Network Access Control List (ACL) is a stateless firewall.
142 | - Stateful Firewall: Sees OUT and IN as the same thing; if OUT is allowed, IN is auto-allowed (and vice versa). In AWS, this is how a Security Group works
143 | 
144 | ### Network Address Translation (NAT)
145 | We will cover the basics of NAT, including how it works, its different types, and the benefits and drawbacks of using NAT.
146 | - NAT is a process designed to address the growing shortage of IPv4 addresses
147 | - Translates Private IPv4 addresses to Public and back, as they cannot communicate otherwise. This is what gets Private IPs onto the internet
148 | -- Publicly routable addresses (must be unique)
149 | -- Private addresses (don't need to be unique, can be used in multiple places)
150 | - A home router is an example of a NAT device
151 | 
152 | #### Types of NAT (3)
153 | 1 - Static NAT: 1 private to 1 (fixed) public address (IGW, Internet Gateway). Gives a private address access to the internet in both directions
154 | 2 - Dynamic NAT: Pool of public IP addresses for private IPs to use. Generally used with a large number of private IP addresses but fewer Public Addresses than private. Private IPs must use the Public IP allocations at DIFFERENT times; they cannot overlap. External access may fail if there are no IPs left in the public pool
155 | 3 - Port Address Translation (PAT): Many private to 1 public (AWS NATGW). This is what your home router does with multiple devices. Uses ports to ID devices
156 | -- This is the method the AWS NAT Gateway (NATGW) uses
157 | -- Uses both IP addresses & Ports to allow multiple devices to use the same IP
158 | -- MANY:1, PrivateIP:PublicIP Architecture
159 | 
160 | - NAT only makes sense for IPv4. IPv6 has enough addresses as is (no need for NAT in IPv6)
161 | 
162 | ### IP Address Space & Subnetting (IPv4)
163 | Originally this was directly managed by IANA (Internet Assigned Numbers Authority)
164 | - Parts now delegated to regional authorities
165 | - There are about 4.3B IPv4 addresses
166 | 
167 | #### Address Space Classing
168 | Class A Address Space: 0.0.0.0 -> 127.255.255.255 (for huge business/early internet)
169 | Class B Address Space: 128.0.0.0 -> 191.255.255.255 (for larger businesses that didn't need Class A)
170 | Class C Address Space: 192.0.0.0 -> 223.255.255.255 (for smaller businesses not large enough for B or A)
171 | Class D & Class E: D for Multicast, E is Reserved
172 | 
173 | #### Private IP Addresses
174 | Defined by a standard: RFC1918
175 | 
176 | When possible, allocate non-overlapping ranges across your networks.
177 | 
178 | ##### IPv4 Address Space
179 | See Class A - E above. 4,294,967,296 total IPs in IPv4. Too few, which is why IPv6 was created (340 trillion trillion trillion addresses)
180 | 
181 | ##### IP Subnetting (IPv4)
182 | Subnetting takes a large network and breaks it into smaller networks.
183 | 
184 | CIDR (Classless Inter-Domain Routing) - Lets us take networks and break them down, determining the size of a network (ex. /16 prefix)
185 | /8 is the same as a class A network
186 | /16 is a smaller network within a Class A network
187 | - The larger the prefix value, the smaller the network
188 | 
189 | If you have a large network, you can subnet it. /16 => 2 * /17, splitting the network in 2. One /16 network is the same as 2 /17 networks.
190 | -- Now we have two /17 networks. We can split one of these into 2 * /18. Now we have 3 networks: /17, /18, and /18.
191 | --- To get to 4 networks, split the remaining /17 network into another two /18 networks, for four total
192 | -- /32 represents a single IP address
193 | - While unusual, it is possible to have odd-numbered split networks
194 | 
195 | The entire internet is a /0 network. This is why 0.0.0.0/0 matches the entire internet.
196 | 
197 | Eg. 10.16.0.0/16 = 10.16.0.0 => 10.16.255.255 -- Start to End of Network
198 | - If you create /17 subnets from this...
199 | -- 10.16.0.0/17 (1) = 10.16.0.0 => 10.16.127.255
200 | -- 10.16.128.0/17 (2) = 10.16.128.0 => 10.16.255.255
201 | - And so on. Think of it as splitting network ranges in half each time.
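The same split, sketched with Python's standard `ipaddress` module (using the example range above):

```python
import ipaddress

net = ipaddress.ip_network("10.16.0.0/16")
halves = list(net.subnets(prefixlen_diff=1))   # two /17s
print(halves)  # 10.16.0.0/17 and 10.16.128.0/17

# Split one /17 again: "the larger the prefix value, the smaller the network"
quarters = list(halves[1].subnets(prefixlen_diff=1))
print(quarters)  # 10.16.128.0/18 and 10.16.192.0/18

print(net.num_addresses, halves[0].num_addresses)  # 65536 32768
```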
202 | 
203 | ### DDoS Attacks - Distributed Denial of Service Attacks
204 | A Distributed Denial of Service (DDoS) attack is a type of cyber attack where multiple compromised devices, often in different locations, are used to flood a targeted website or network with traffic, causing the website or network to become unavailable to its users
205 | 
206 | #### VLAN - Virtual Local Area Networking
207 | Avoids having to build separate physical networks throughout an organization.
208 | 
209 | Frame Tagging (802.1Q & 802.1AD) - used for the VLAN ID or VID
210 | - 802.1Q allows multiple VLANs over the same L2 physical network
211 | - If two 802.1Q VLANs over different connected networks have the same VLAN number, 802.1AD adds new fields to the VLAN tag to make the same VLAN #'s unique
212 | - One Switch can host multiple Broadcast Domains
213 | - Trunk Port: Connection between 2 802.1Q-capable devices
214 | - Devices on different VLANs cannot communicate without a Layer 3 device (router)
215 | 
216 | VLANs allow you to create separate L2 network segments to allow for Isolation
217 | VLANs offer separate broadcast domains
218 | 802.1Q == VLANs
219 | 802.1AD (nested Q-in-Q VLANs)
220 | Q-in-Q is used in larger networks. .1Q for smaller networks.
221 | 
222 | ### Decimal to Binary Conversion (IP Addressing)
223 | In computer networking, IP (Internet Protocol) addresses are used to identify and communicate with devices on a network. IP addresses are typically represented in decimal notation, with four decimal numbers separated by periods (e.g. 192.168.1.1). However, computers process and transmit data in binary form, which is a sequence of 0s and 1s. To enable communication between devices, IP addresses must be converted from decimal to binary form, and vice versa. In this lesson, we will explore how to convert IP addresses from decimal to binary form and from binary to decimal form.
224 | ** for IPv4 Addressing
225 | 
226 | Eg. 133.33.33.7 -> Dotted decimal notation, which is what a human sees
227 | - A computer sees 10000101.00100001.00100001.00000111
228 | 
229 | #### Decimal to Binary
230 | Eg. 133.33.33.7 to binary.
231 | HINT: Use a binary table https://www.networkacademy.io/ccna/ip-subnetting/converting-ip-addresses-into-binary
232 | - 133, 33, 33, 7 are numbers each between 0 - 255. Tackle these numbers individually. Each number in an IP is 8 bits.
233 | -- 133. If this number is less than the Binary Position Value, write a 0. If equal or larger, subtract the binary position value from your decimal number and add a 1 in that binary column (133 - 128 = 5)
234 | -- At position two, the remaining value is 5. 5 is less than 64, so write a 0. The position values for positions 3-5 (32, 16, 8) are also larger than 5, so put 0's
235 | -- Position 6. Remaining value is 5, Binary Position Value is 4. 5 - 4 = 1. Put a 1 in position 6. The remainder is 1: position 7 (value 2) gets a 0, and position 8 (value 1) gets a 1, leaving 0
236 | 
237 | BINARY VALUE of 133: [ 1 ][ 0 ][ 0 ][ 0 ][ 0 ][ 1 ][ 0 ][ 1 ]
238 | BINARY VALUE of 33: [ 0 ][ 0 ][ 1 ][ 0 ][ 0 ][ 0 ][ 0 ][ 1 ]
239 | BINARY VALUE of 33: [ 0 ][ 0 ][ 1 ][ 0 ][ 0 ][ 0 ][ 0 ][ 1 ]
240 | BINARY VALUE of 7: [ 0 ][ 0 ][ 0 ][ 0 ][ 0 ][ 1 ][ 1 ][ 1 ]
241 | 
242 | #### Binary to Decimal
243 | Eg. 10000101.00100001.00100001.00000111
244 | - Break into 4 sections. Work left to right. Still using the conversion table
245 | 1. 10000101. Match each digit with the corresponding position on the table. [128][0][0][0][0][4][0][1] = 133
246 | 2. 00100001. [0][0][32][0][0][0][0][1] = 33
247 | 3. 00100001. [0][0][32][0][0][0][0][1] = 33
248 | 4. 00000111. [0][0][0][0][0][4][2][1] = 7
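A small sketch of both conversions in Python; the helper walks the position values exactly like the table method above, and the reverse direction uses the built-in `int(bits, 2)`:

```python
def octet_to_binary(n: int) -> str:
    # Walk the position values 128..1, exactly like the table method above
    bits = ""
    for position_value in (128, 64, 32, 16, 8, 4, 2, 1):
        if n >= position_value:
            bits += "1"
            n -= position_value
        else:
            bits += "0"
    return bits

address = "133.33.33.7"
print(".".join(octet_to_binary(int(octet)) for octet in address.split(".")))
# 10000101.00100001.00100001.00000111

# Binary back to decimal (the method in the section above): int(bits, 2)
binary = "10000101.00100001.00100001.00000111"
print(".".join(str(int(bits, 2)) for bits in binary.split(".")))
# 133.33.33.7
```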
249 | 
250 | #### SSL & TLS
251 | Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are two cryptographic protocols used to provide secure communication over the internet. SSL was developed by Netscape in the mid-1990s and TLS is its successor. These protocols are used to secure web traffic, email, instant messaging, and other types of internet traffic. SSL and TLS use a combination of symmetric and asymmetric encryption to encrypt data, ensuring that information transmitted over the internet is secure and cannot be intercepted by unauthorized parties.
252 | - TLS is the newer and more secure version of SSL
253 | 
254 | TLS ensures privacy; communications are encrypted. Asymmetric and then symmetric encryption: symmetric is the aim, and part of the TLS process is moving from asymmetric to symmetric encryption. TLS also verifies the identity of the server (or client and server), and protects against alteration.
255 | - Phases 1) Cipher Suites 2) Authentication 3) Key Exchange
256 | 1) Cipher Suites - Set of protocols agreed upon after the TCP connection. Client/server need to agree on a cipher suite to use
257 | 2) Authentication - Ensure the server certificate is authentic. A Certificate Authority issues these signed certificates
258 | 3) Key Exchange - Where we move from asymmetric to symmetric encryption (becomes faster). The client generates a pre-master key, encrypts it with the server's public key and sends it to the server. The server decrypts the pre-master key using its private key.
259 | 
260 | #### BGP (Border Gateway Protocol)
261 | Border Gateway Protocol (BGP) is a routing protocol used to exchange routing information between different networks on the internet. BGP is the protocol that enables the internet to function as a global network of interconnected networks. BGP is responsible for choosing the best path for network traffic to follow from one network to another, and for announcing that path to other routers on the internet
262 | - BGP as a system is made up of self-managing networks known as Autonomous Systems (AS). AS's are black boxes that abstract away the detail of the network; BGP just needs to know the network, as a whole, exists
263 | - Each AS is allocated a number called an ASN (Autonomous System Number). 16 bits in length. Allocated by IANA. Numbers 64,512 - 65,534 are private
264 | - Operates over TCP port 179; it's reliable
265 | - Not automatic, it's manually configured
266 | - BGP is a path-vector protocol. It exchanges the BEST PATH between peers (aka shortest path); this path is called the ASPATH. This path is then 'trusted' and trust can be shared (AS's share trusted routes)
267 | - iBGP and eBGP (internal BGP and external BGP). Either within an AS or between AS's
268 | - AS Path Prepending can be used to artificially make one path look longer than another
269 | 
270 | In summary, an AS will advertise all the shortest paths it knows to all its peers. As the AS prepends its own AS number onto the path, this creates a source-to-destination path which BGP routers can learn and propagate.
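A toy path-vector sketch (not a real BGP implementation; the peers and ASNs are made up, drawn from the private range mentioned above) showing why the shortest AS_PATH wins and how prepending makes a route less attractive:

```python
# Toy path-vector selection: prefer the shortest AS_PATH (made-up private ASNs)
advertised = {
    "via_peer_A": [64512, 64520, 64530],         # 3 hops
    "via_peer_B": [64513, 64540],                # 2 hops -> best path
    "via_peer_C": [64514, 64514, 64514, 64550],  # prepended: looks longer
}

best = min(advertised, key=lambda peer: len(advertised[peer]))
print("best path:", best, advertised[best])  # via_peer_B [64513, 64540]

# Before re-advertising to its peers, an AS prepends its own ASN to the path
my_asn = 64600
print("re-advertised path:", [my_asn] + advertised[best])
```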
271 | 
272 | #### Stateful VS Stateless Firewalls
273 | Firewalls are an essential component of network security that help protect against unauthorized access and attacks.
274 | There are two primary types of firewalls: stateful and stateless
275 | - Stateful: A stateful firewall is designed to monitor and track the state of network connections, keeping track of the status of individual network sessions to make more informed decisions about what network traffic to allow or deny
276 | - Stateless: A stateless firewall examines each individual network packet in isolation and makes decisions based on predetermined rules, without any awareness of the state of the network connection
277 | 
278 | Reminder: TCP (Transmission Control Protocol) transfers IP packets using error correction and ports. REQUEST and RESPONSE: the two components of a client-server connection.
279 | - A Request can be inbound or outbound. The Response needs to be the inverse.
280 | 
281 | OUTBOUND VS INBOUND connections are based on perspective: whether you're looking through the eyes of the client or the server.
282 | - Eg. User requesting cat photos from catigram.io... OB Request for the User, IB Request for the Server.
283 | 
284 | A Stateless firewall doesn't understand the state of connections. A Stateless Firewall needs a separate rule for each inbound and outbound response (we see it as a single Connection, but Stateless doesn't see that; the Request and Response each need a separate rule). The inverse rule is required for the response in Stateless.
285 | 
286 | A Stateful firewall is intelligent enough to identify the REQUEST and RESPONSE components of a connection as being RELATED. In this way, only the Request has to be ALLOWED or NOT, then the Response is automatically ALLOWED or NOT. This reduces admin overhead and the chance of mistakes.
287 | 
288 | #### JumboFrames & MTU
289 | What is a JumboFrame? The max payload of a standard Ethernet v2 frame is 1500 bytes. Anything bigger is a JumboFrame (generally up to a max of 9000 bytes).
290 | 
291 | Imagine 4 EC2 instances: A/B and C/D connected. A/B use standard frames, C/D use JumboFrames
292 | - Frame Payload and Frame Overhead (1500 bytes normal, 9000 bytes jumbo)
293 | - Data is split between frames. There's always space between frames (times when nothing is transmitted; downtime). With normal frames, you have more overhead from the increased frame count, and more wasted time on the medium between the increased number of frames. With JumboFrames, less combined Overhead and less downtime.
294 | - To avoid Fragmentation, JumboFrame sizes need to match. Every step along the path must also support JumboFrames.
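Rough arithmetic for the overhead claim, assuming ~18 bytes of Ethernet header + FCS per frame and ignoring the preamble and inter-frame gap:

```python
# Overhead comparison for moving 1 MiB at standard vs jumbo payload sizes,
# assuming ~18 bytes of Ethernet header + FCS per frame
DATA = 1024 * 1024
HEADER = 18

for payload in (1500, 9000):
    frames = -(-DATA // payload)   # ceiling division
    overhead = frames * HEADER
    print(f"payload {payload}: {frames} frames, {overhead} bytes of header overhead")

# payload 1500: 700 frames, 12600 bytes of header overhead
# payload 9000: 117 frames, 2106 bytes of header overhead
```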
295 | 
296 | ##### Areas of AWS that do and don't support JumboFrames
297 | DOES NOT:
298 | - Traffic outside a single VPC
299 | - Traffic over an inter-region VPC peering connection
300 | - Traffic over VPN connections
301 | - Traffic over an internet gateway
302 | 
303 | DOES:
304 | - Same-region peering
305 | - Direct Connect
306 | - Transit Gateway (up to 8500 bytes)
307 | 
308 | #### Application Firewalls (Layer 7 Firewalls)
309 | Layer 7 firewalls, also known as application-layer firewalls, are a type of firewall that provides advanced security features by inspecting and filtering data at the application layer of the OSI (Open Systems Interconnection) model. Traditional firewalls, such as packet filtering or stateful inspection firewalls, operate at the network and transport layers and are only capable of filtering traffic based on IP addresses, port numbers, and protocol types. In contrast, layer 7 firewalls have the ability to analyze the content of network traffic, including application protocols such as HTTP, FTP, and SMTP, and can make more granular decisions about which traffic should be allowed or blocked.
310 | - Normal Firewalls (Layer 3/4/5): See two flows of info: Request and Response. As this is Layer 3/4, the request/response are seen as separate. If you add session capability, the request/response are seen as one session. They can't see into the data; it's just an opaque payload, as the data is Layer 7 (HTTP).
311 | - Layer 7 Firewalls can see the data above Layer 3/4/5. They can identify abnormal requests. Keeps all Layer 3/4/5 elements, but can react to L7 elements
312 | 
313 | #### IPsec VPN Fundamentals
314 | IPsec, or Internet Protocol Security, is a suite of protocols used to secure Internet Protocol (IP) communications by authenticating and encrypting each IP packet. IPsec is widely used to secure Virtual Private Networks (VPNs), remote access connections, and other types of network traffic.
315 | 
316 | ##### Fibre Optic Cables
317 | - An alternative way to transmit data VS copper cables. Fiber optic cables transmit light over a glass/plastic medium (VS electricity over copper). Fiber is also resistant to electromagnetic interference and less prone to water ingress. Fiber is more consistent; better for higher speeds, larger distances, etc.
318 | - Physical construction is cable and connector.
319 | - Fiber diameter is expressed as X/Y, eg. 9/125. X is the diameter of the core, Y is the diameter of the cladding (both in microns). Light bounces off the inside of the core, which is why its diameter is important. Core/cladding are for the data transmission. The buffer is a boundary surrounding the core for protection
320 | - Single Mode and Multi-Mode Fiber
321 | -- Single Mode. Small core, 8-9 microns, yellow jacket usually. Little bounce, little distortion. Best for high speed over long distance. Laser optics
322 | -- Multi-mode. Bigger core, orange or aqua jacket. A bigger core means a wider range of light to use, and more bouncing. Different colors of light can be sent simultaneously. Creates more distortion over distance. Best for speed/cost effectiveness. LED optics
323 | - Fiber Optic transceivers. How to connect to fiber. These are what generate and send the light to/from the fiber. Data -> Light -> Data
324 | -- Transceivers are also Multi- or Single-mode. Optimised for a cable type; need the same kind at both ends
325 | 
326 | ##### Encryption 101
327 | Encryption is the process of converting data into a form that is unreadable to unauthorized users
328 | - Encryption Approaches: 2
329 | -- 1. Encryption at Rest. One party. Eg. Local laptop encrypting/decrypting data as it's written / read from disk. Used in cloud environments.
330 | -- 2. Encryption In Transit. Protecting data as it's transferred between two places. Multiple individuals/systems involved. Eg. Laptop to bank and back (applying an encryption wrapper).
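A minimal at-rest sketch using the third-party `cryptography` package (`pip install cryptography`); one party, one shared key that both encrypts and decrypts (symmetric, defined just below), with illustrative plaintext:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the single symmetric key
f = Fernet(key)

ciphertext = f.encrypt(b"laptop disk contents")  # plaintext + algorithm + key
print(ciphertext)                                # unreadable without the key
print(f.decrypt(ciphertext))                     # b'laptop disk contents'
```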
331 | 
332 | - Concepts
333 | -- Plain text (document, image, app), algorithm (plain text + encryption key = encrypted data), key (a 'password'), ciphertext created from all the previous parts. Decrypt: ciphertext + key = plain text.
334 | 
335 | - Symmetric
336 | The same key is used for both the encryption and decryption processes. Transferring this key securely is the problem here.
337 | 
338 | - Asymmetric
339 | Keys used in asymmetric encryption are themselves asymmetric
340 | -- Asym keys are formed of two parts: Public and Private keys. Only the private key can decrypt data. No issue with the public key being stolen, as it is made to be accessible. Asym is used where the two parties involved have never physically met before. Computationally more expensive than symmetric.
341 | 
342 | - Signing
343 | Uses asymmetric keys to verify identity. Encryption doesn't prove identity, which is why Signing exists. A message gets signed with the private key for verification.
344 | 
345 | - Steganography
346 | It's obvious when data is encrypted; you can't hide that. Steganography is a process which addresses this -- hiding something in something else. Eg. cat data hidden in a puppy image
347 | 
348 | #### Envelope Encryption
349 | Envelope encryption is a technique used to secure data by encrypting it with multiple layers of keys
350 | 
351 | KMS (Key Management Service) - Used to encrypt data up to 4KB in size (such as other keys). Key Encryption Key (KEK). DEKs (Data Encryption Keys) are created by KMS but not managed by KMS. The KEK can be Asym or Sym. The DEK is always symmetric.
352 | - Asym is flexible but slow. Sym is fast but difficult to move securely.
353 | 
354 | #### Hardware Security Modules (HSM)
355 | Hardware Security Modules (HSMs) are physical devices designed to provide a high level of security and cryptographic processing to protect sensitive data and keys
356 | - The HSM is isolated from the main infrastructure. For cryptographic operations, you send the operations to the isolated HSM system. Keys created/stored/authenticated never leave the HSM.
357 | - Tamper proof and hardened against physical or logical attacks
358 | - Role Separation (admins w/o full access)
359 | 
360 | #### Hash Functions & Hashing
361 | Hash functions are mathematical algorithms that transform input data into a fixed-length string of characters, called a hash or message digest. Hashing is the process of applying a hash function to data to produce a unique and irreversible representation of the original data. Hash functions are widely used in computer security and cryptography for data integrity and authentication, digital signatures, password storage, and more.
362 | 
363 | The main characteristics of a hash function are its one-way property, where it is easy to compute the hash value of the input data but computationally infeasible to reconstruct the original data from the hash value, and its collision resistance, where it is highly unlikely for two different inputs to produce the same hash value
364 | 
365 | #### How it works
366 | - Data + Hashing Function = Hash
367 | - Starts with a hash function (MD5, SHA2-256)
368 | - Data1 in -> Hash Function -> Hash1 out. Change 1 pixel in the data and re-run; this creates Hash2
369 | - Hashing is one way. Once you hash an image, you can't unhash it
370 | - Same data in, same hash value out. But again, a single different byte or pixel creates a new hash
371 | - The MD5 hashing algo is too weak to be used in real-world systems
372 | - Downloaded data can be verified using a hash (to verify the data is unaltered)
373 | - Has the hash itself been altered? Digital Signing addresses this
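A quick illustration with Python's standard `hashlib` (the data is just placeholder bytes):

```python
import hashlib

data = b"cat picture bytes"
print(hashlib.sha256(data).hexdigest())

# Change a single byte and the hash is completely different
tweaked = b"cat picture bytez"
print(hashlib.sha256(tweaked).hexdigest())

# Same data in -> same hash out, but there is no way back to the data
print(hashlib.sha256(data).hexdigest() == hashlib.sha256(data).hexdigest())  # True
```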
374 | 
375 | ##### Hashing Weakness
376 | - Collision. If we hash an image of a plane, then take another image and hash both... They should have different hash values. However, if both hashes match... Collision.
377 | - Don't use MD5
378 | 
379 | #### Digital Signatures
380 | Digital signatures are electronic signatures that are used to authenticate the integrity and authenticity of digital messages or documents
381 | - Can sign using a Private Key for verification
382 | - Two benefits when used with Hashing: verifies INTEGRITY of data and AUTHENTICITY of data (verify WHAT and WHO)
383 | 
384 | #### DNS - What does DNS do?
385 | The Domain Name System (DNS) is a core part of most applications and IT systems
386 | 
387 | ##### DNS 101: Functionality
388 | If you visit netflix.com, the computers communicating don't use the name "netflix.com". Instead, this name is linked to an IP. DNS is what connects names to IPs. Basically, DNS is a huge DB that converts names to IP addresses
389 | 
390 | TIP: You MUST understand DNS to work in networking / AWS
391 | 
392 | ##### DNS 101: Why do we need lots of DNS servers?
393 | Problems with 1 (or a few) DNS servers:
394 | - Obvious risk problems. Bad actors could attack the infrastructure
395 | - Scaling problem (almost everyone uses DNS, globally)
396 | -- This demands a hierarchical structure
397 | 
398 | ##### DNS 101: DNS Terms
399 | DNS Zone. A database containing records (URLs etc)
400 | ZoneFile. The "file" storing the zone on disk
401 | Name Server (NS). A DNS server which hosts 1 or more zones and stores 1 or more ZoneFiles
402 | Authoritative. Contains real/genuine records (the boss)
403 | Non-authoritative / Cached. Copies of records/zones stored elsewhere to speed things up
404 | 
405 | ##### DNS 101: Hierarchical Design of DNS
406 | - Starts with the DNS Root (the boss)
407 | - Root Zone - Contains high-level info on top level domains (TLD), but no details. The Root Zone points at name servers hosting TLD zones (which are run by registries like Verisign). A TLD stores high-level info only on the domains in that TLD. The TLD points at NameServers. These NameServers are authoritative for the domains they contain. Authoritative name servers point to the domain zones and zonefiles, which are subsequently authoritative. Each zone knows a bit about the zone nested in it.
408 | 
409 | ##### DNS 101: How DNS works
410 | Core functionality of DNS: You have a person/device/service and you need to get the IP address of a domain. DNS's job is to let you locate the specific zone that can provide an authoritative response for your request (like visiting netflix.com)
411 | 
412 | How a query works:
413 | 1 - Querying for netflix.com
414 | 2 - First, the DNS local cache and hosts file of the local machine are checked
415 | 3 - If local is unaware of the query, we move to the DNS Resolver (a type of DNS server within your router or internet provider). This also has a local cache to check. Results from here are non-authoritative
416 | 4 - Next, the Resolver queries the Root Zone for netflix.com. The Root Zone looks at its NameServer Records and points the Resolver at the .com TLD servers
417 | 5 - The Resolver can now query one of the .com TLD Name Servers. Details of the netflix.com NS records are returned to the Resolver
418 | 6 - Now, the Resolver queries the netflix.com DNS Name Server. This can return an authoritative result. The Resolver caches this result and returns it through to the client
419 | - End to end, this process is called 'walking the tree'
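From code, that whole walk hides behind a single resolver call; for example, with Python's standard `socket` module (the returned IP varies by location and time):

```python
import socket

# The OS / configured resolver performs the cache checks and, on a miss,
# the root -> .com TLD -> netflix.com walk described above
print(socket.gethostbyname("netflix.com"))  # prints one of netflix.com's IPs (varies)
```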
420 | 
421 | #### DNS 101: Registering a new domain
422 | The process requires a few key entities: the person registering, a domain registrar, a DNS hosting provider, the TLD registry (Verisign), and the .com TLD Zone (managed by Verisign)
423 | 1 - pay for the domain via a domain registrar
424 | 2 - If the registrar and hosting provider are the same company, create a zone and get an NS
425 | 3 - If the registrar and hosting provider are different, you'll be asked for NS zone info (which is configured separately)
426 | 4 - Register the domain, supplying the Name Server info to the TLD registry (Verisign)
427 | 5 - Verisign adds the Name Servers to the .com TLD zone, making the domain live
428 | 
429 | Registrar VS Hosting Provider:
430 | - A Registrar has one purpose: let you purchase domains
431 | - Hosting Providers operate DNS name servers which can host DNS zones, and allow you to manage the content of those zones.
432 | -- Some companies are either Registrars or HPs; some are both
433 | 
434 | #### DNS 101: DNSSEC
435 | DNSSEC strengthens authentication in DNS using digital signatures based on public key cryptography. With DNSSEC, it's not DNS queries and responses themselves that are cryptographically signed; rather, the DNS data itself is signed by the owner of the data
436 | - Two improvements over DNS: 1) origin authentication (is the data from the correct entity), 2) data integrity protection (has the data been modified)
437 | -- by creating a cryptographically verifiable DNS Chain of Trust
438 | -- DNSSEC is additive to DNS, not a replacement. A DNS-only device won't receive DNSSEC results
439 | 
440 | ##### How DNSSEC Works within a Zone
441 | How DNSSEC allows a resolver to validate a resource within a zone (data integrity)
442 | 
443 | TERM: Resource Record Set (RRSET). The icann.org zone has the following Resource Records: a CNAME, an A record, an AAAA record, and 4 MX records (mail exchange) -- 4 total resource record sets: CNAME, A, AAAA, MX. RRSETs are used within DNSSEC. DNSSEC looks at record sets, not individual records.
444 | - An RRSIG stores a digital signature for an RRSET, made with the Zone Signing Key (ZSK). This public/private key pair is separate from the zone. The ZSK is a DNSKEY record with flags value 256. RRSIGs validate RRSETs.
445 | -- Uses digital signing and hashing
446 | 
447 | TERM: Key Signing Key (KSK, DNSKEY flags value 257). Ensures that a Zone Signing Key is trusted. The private KSK creates an RRSIG for the DNSKEY (ZSK) records
448 | 
449 | #### DNS 101: DNSSEC Chain of Trust
450 | How the chain of TRUST is created between PARENT and CHILD zones within DNS
451 | 
452 | TERM: Delegation Signer (DS): Contains a HASH of the child domain's public KSK. Links the parent's trust to the child.
453 | - The Root Zone Private Key / Signing Keys are explicitly trusted
454 | - A DNS Resolver can walk through the zones until establishing trust (from the Root through to the bottom)
455 | 
456 | #### DNS 101: DNSSEC Root Signing Ceremony
457 | Controlling the keys of the internet. In the case of the Root Zone, there is no parent zone to provide trust -- we need a way to create this TRUST ANCHOR
458 | - Key to the internet: the private "." DNS Root Key Signing Key
459 | -- This key is explicitly trusted by everything
460 | - Ultimately, you're trying to validate the Root Zone's RRSIG over its DNSKEY records
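A conceptual sketch of the DS link described above (hypothetical key material, not real DNSSEC wire format): the parent publishes a digest of the child's public KSK, and a resolver trusts a received key only if it hashes to that digest:

```python
import hashlib

# Hypothetical public KSK for a child zone (real DNSSEC hashes DNSKEY wire format)
child_public_ksk = b"child-zone-public-ksk-material"

# Parent zone publishes a DS record: a digest of the child's public KSK
ds_record_digest = hashlib.sha256(child_public_ksk).hexdigest()

# Resolver receives a DNSKEY from the child and checks it against the trusted DS
received_key = b"child-zone-public-ksk-material"
trusted = hashlib.sha256(received_key).hexdigest() == ds_record_digest
print(trusted)  # True -> this KSK is the one the parent vouched for
```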
461 | 
462 | ### CONTAINERS & VIRTUALIZATION
463 | 
464 | #### Kubernetes (K8s)
465 | Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications
466 | 
467 | ##### K8s: Recovery Point Objective (RPO)
468 | RPO is the max amount of data (measured in time) that can be lost before the loss exceeds what the org can tolerate. Expressed in minutes or hours.
469 | - Successful backups are known as Recovery Points
470 | - Backups should occur at least as often as the RPO. Eg. If the RPO is 6 hours, backups should occur at least once every 6 hours
471 | - The lower the RPO, the more costly (more backups required)
472 | 
473 | ##### K8s: Recovery Time Objective (RTO)
474 | RTO is the maximum tolerable length of time that a system can be down after a failure or disaster occurs
475 | 
476 | End to End Recovery: Recovery time of a system begins at the moment of failure and only ends when the system is handed back to the business in a fully tested state
477 | 
478 | ### DATA FORMATS & CONFIG FORMATS
479 | 
480 | #### YAML 101
481 | YAML (short for "YAML Ain't Markup Language") is a lightweight, human-readable data serialization format that is often used for configuration files and data exchange between applications.
482 | - Unordered collection of key:value pairs
483 | - Indentation/spaces matter in YAML
484 | 
485 | YAML supports complex data structures, including lists, dictionaries, and nested structures, and it can be used with a wide range of programming languages and tools
486 | 
487 | Dictionary: Unordered data structure; a key:value pair collection. Think key:value pairs with nested ones inside
488 | 
489 | #### JSON 101
490 | JSON (short for "JavaScript Object Notation") is a lightweight data interchange format that is commonly used for data exchange between applications
491 | 
492 | In JSON, a dictionary is called an Object. A list is called an Array.
493 | 
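The same nested structure in both formats, using only Python's standard `json` module (the field names are made up; the YAML equivalent is shown as a comment to avoid assuming a YAML library):

```python
import json

# A dictionary (JSON "Object") containing a list (JSON "Array")
config = {"instance": {"name": "web-a", "ports": [80, 443]}}
print(json.dumps(config, indent=2))

# The equivalent YAML -- indentation carries the nesting:
#   instance:
#     name: web-a
#     ports:
#       - 80
#       - 443
```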
494 | ### CLOUD COMPUTING 101
495 | 
496 | #### What is cloud computing?
497 | Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. - National Institute of Standards and Technology (NIST) definition
498 | 
499 | To be cloud computing, a service must meet these 5 characteristics / criteria:
500 | 1 - On-demand Self-service. Provision capabilities as needed without requiring human interaction
501 | 2 - Broad Network Access. Capabilities are available over the network and accessed through standard mechanisms
502 | 3 - Resource Pooling. There is a sense of location independence: no control / knowledge over the exact location of resources. Resources are pooled to serve multiple consumers using a multi-tenant model
503 | 4 - Rapid Elasticity. Capabilities can be elastically provisioned and released to scale rapidly outward / inward with demand. To the consumer, the resources available for provisioning appear to be unlimited
504 | 5 - Measured Service. Resource usage can be monitored, controlled, reported, and billed
505 | 
506 | #### Public vs Private vs Multi vs Hybrid Cloud
507 | Public, private, hybrid, and multi-cloud are all viable options
508 | - Public. Cloud available to the public.
509 | - Private. A *real* cloud run on-premises by a business.
510 | - Hybrid. Private and public cloud cooperating together in a single environment (usually from the same vendor). Not to be confused with 'hybrid environments / networking' (which is public cloud plus legacy on-premises infrastructure)
511 | - Multi-cloud. Using multiple public cloud environments (eg. AWS + Azure). Stay away from 'single management window / single pane of glass' tools; they abstract features down to the lowest common feature set
512 | 
513 | #### Cloud Service Models (IAAS, PAAS, SAAS)
514 | Terms & Concepts:
515 | - Infrastructure Stack: Facilities, infrastructure, servers, virtualization, O/S, container, runtime, data, application
516 | - Unit of Consumption: What you use/pay for. This is what makes each service model below different. Eg. a virtual machine
517 | - Infrastructure as a Service (IaaS): Provider manages facilities, infrastructure, servers, virtualization. You manage the rest. Unit of consumption is the O/S or Virtual Machine
518 | -- AWS EC2 is IaaS
519 | - Platform as a Service (PaaS): Provider manages facilities, infrastructure, servers, virtualization, O/S, container, runtime. You consume the runtime environment.
520 | - Software as a Service (SaaS): Provider manages facilities, infrastructure, servers, virtualization, O/S, container, runtime, data. You consume the application.
521 | --------------------------------------------------------------------------------