.
├── README.md
├── aws-cost-audit
│   ├── README.md
│   └── scripts
│       ├── check_budgets.sh
│       ├── check_data_transfer_risks.sh
│       ├── check_forgotten_ebs.sh
│       ├── check_idle_ec2.sh
│       ├── check_idle_load_balancers.sh
│       ├── check_old_rds_snapshots.sh
│       ├── check_on_demand_instances.sh
│       ├── check_s3_lifecycle.sh
│       ├── check_untagged_resources.sh
│       ├── main.sh
│       └── utils.sh
├── img
│   └── aws_audit_output.png
└── linux-namespaces-cgroups
    ├── README.md
    └── cgroup.sh

/README.md:
--------------------------------------------------------------------------------
# bash-cookbook
![](https://i.imgur.com/1cl8FAf.png)

A collection of useful Bash scripts

## Table of contents:
- [Check if IP address is private or public](../../tree/check-ip-type)
- [Parse $PATH variable and output in readable format](../../tree/parse-path)
- [Get parameters of secure TLS connection for the given domain](../../tree/tls-info)
- [Bash script that splits a file into multiple parts based on a specified size in MB](../../tree/split-file)
- [Cleanup all Docker containers and images on your system](../../tree/docker-cleanup)
- [Set Git config settings for the repository](../../tree/git-account-switcher)
- [Completely remove the given package from the system](../../tree/remove-package)
- [Convert decimal number to binary and vice versa](../../tree/bdconvert)
- [Calculate network and broadcast addresses, subnet mask, and number of available hosts from an IP CIDR range](../../tree/cidrcalc)
- [Cleanup all pods in K8s cluster with 'Failed' or 'Unknown' statuses](../../tree/k8s-cleanup-pods)
- [Use cgroups to limit CPU usage to 50% for a given process](./linux-namespaces-cgroups/)
- [Audit AWS costs](./aws-cost-audit/)
--------------------------------------------------------------------------------
/aws-cost-audit/README.md:
--------------------------------------------------------------------------------
# AWS Cost Audit

The following scripts allow you to audit and review these aspects of AWS costs:
- **Untagged resources**: No tags = no visibility
- **Idle EC2s & oversized instances**: Avoid wasted budget; downsize or right-size using [AWS Compute Optimizer](https://aws.amazon.com/compute-optimizer/)
- **No budgets or alerts**: Avoid surprises during high traffic spikes. Check that budgets and alerts are in place
- **S3 buckets with no lifecycle policies**: Without auto-delete rules, logs can pile up for years in your S3 buckets. To avoid this, set expiration policies
- **Piling RDS snapshots**: Old snapshots = hidden costs. Keep only what you need for compliance or recovery
- **Forgotten EBS volumes**: Unattached EBS volumes are still billed - unless they're deleted, or snapshotted and archived
- **Data transfer charges**: Cross-AZ traffic or public IP usage can sneak up on you. Use VPC endpoints and same-AZ designs where possible.
- **Savings Plans / Reserved Instances**: Using On-Demand Instances for stable workloads can result in overpaying. Review your workloads and migrate them to Savings Plans.
- **Load balancers without traffic**: Check CloudWatch - if a load balancer gets no traffic, shut it down.

## Run AWS Audit

To run the audit script:
```bash
./main.sh
```

You should get a timestamped `audit_aws_*.log` output file and see the progress of each check in the terminal:

![](../img/aws_audit_output.png)

> ⚠️ **Important:**
> The scripts check only one account in one specific region.
> You will have to run them separately against other accounts and regions.
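To sweep several regions in one pass, you can wrap the launcher in a loop. A minimal sketch, assuming the scripts pick the region up from your default AWS CLI profile (the region list is just an example, and `aws configure set` rewrites your default profile, so restore it afterwards):

```bash
# Run the audit once per region by pointing the default profile at each one
for region in us-east-1 eu-west-1; do
    aws configure set region "$region"
    ./main.sh
done
```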
## Scripts

The `main.sh` script is the launching script. The `utils.sh` script provides the helpers that retrieve the AWS account ID and format log messages.

```bash
.
├── check_budgets.sh
├── check_data_transfer_risks.sh
├── check_forgotten_ebs.sh
├── check_idle_ec2.sh
├── check_idle_load_balancers.sh
├── check_old_rds_snapshots.sh
├── check_on_demand_instances.sh
├── check_s3_lifecycle.sh
├── check_untagged_resources.sh
├── main.sh
└── utils.sh
```

## AWS Budgets

The `check_budgets.sh` script queries AWS for a list of budget names. For each budget, it checks whether notifications are set up and logs an appropriate message. For more information, see [AWS Budgets](https://aws.amazon.com/aws-cost-management/aws-budgets/)

## Idle EC2 and Oversized Instances

The `check_idle_ec2.sh` script:
1. Queries all running EC2 instances.
2. For each instance, retrieves its type and calculates the average CPU utilization.
3. Flags instances as "idle" if their CPU usage is below 10%, or "active" otherwise.
4. Logs results using custom logging functions and suggests using AWS Compute Optimizer for further optimization.

## Check S3 Buckets Without Lifecycle Policies

For this script to function properly, first ensure that you have the following IAM permissions in your AWS account:
- `s3:ListAllMyBuckets`
- `s3:GetBucketLifecycleConfiguration`

The `check_s3_lifecycle.sh` script:
- Lists all S3 buckets in the account.
- For each bucket, checks if a lifecycle policy exists and logs the result.
- If a policy exists, displays details (ID, Prefix, Status) of each rule using `jq`.

## Check Old RDS Snapshots

For this script to function properly, first ensure that you have the following IAM permission in your AWS account:
- `rds:DescribeDBSnapshots`

The `check_old_rds_snapshots.sh` script checks for Amazon RDS automated and manual database snapshots older than 30 days:
- Defines a 30-day threshold for identifying old snapshots.
- Queries RDS snapshots older than the threshold, extracting their identifier, associated instance, creation time, and type.

## Check for Forgotten EBS Volumes

The `check_forgotten_ebs.sh` script checks for unattached (available) Amazon EBS volumes in a specified AWS region:
- Queries EBS volumes with a status of "available" (not attached to any EC2 instance), extracting their ID, size, creation time, and tags.

## Audit Data Transfer Risks

The `check_data_transfer_risks.sh` script checks for the following:
- EC2 instances with public IP addresses
- Unused Elastic IPs
- Subnets in different AZs
- S3 VPC endpoint
- DynamoDB VPC endpoint

To add additional services for VPC endpoints, such as RDS, modify the script. For more information, see [Access an AWS service using an interface VPC endpoint](https://docs.aws.amazon.com/vpc/latest/privatelink/create-interface-endpoint.html)

## On-Demand EC2 Instances

The `check_on_demand_instances.sh` script outputs the number of On-Demand EC2 instances in the account. To save more, review your workloads and consider using Reserved Instances or Savings Plans. For more information, see [EC2 pricing](https://aws.amazon.com/ec2/pricing/)
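As a quick complement to this check, you can see where your month-to-date spend actually goes with Cost Explorer. A sketch, assuming Cost Explorer is enabled for the account and your credentials allow `ce:GetCostAndUsage` (run it at least a day into the month, since `Start` must precede `End`):

```bash
# Month-to-date unblended cost, grouped by service
aws ce get-cost-and-usage \
    --time-period Start="$(date -u +%Y-%m-01)",End="$(date -u +%Y-%m-%d)" \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --group-by Type=DIMENSION,Key=SERVICE \
    --query 'ResultsByTime[0].Groups[*].[Keys[0],Metrics.UnblendedCost.Amount]' \
    --output text
```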
## Load Balancers Without Traffic

The `check_idle_load_balancers.sh` script detects load balancers without traffic:
- Lists all Application Load Balancers (ALB) and Network Load Balancers (NLB)
- Checks CloudWatch metrics (RequestCount for ALB, ActiveFlowCount or ProcessedBytes for NLB)
- Flags any with `0` average traffic over a recent period (e.g. the past 3 days)
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_budgets.sh:
--------------------------------------------------------------------------------
#!/bin/bash
######################################################################################################################
#     _ __      __ ___    ___          _            _
#    /_\\ \ / // __|  | _ ) _  _  __| | __ _  ___ | |_
#   / _ \\ \/\/ / \__ \  | _ \| || |/ _` |/ _` |/ -_)| _|
#  /_/ \_\\_/\_/  |___/  |___/ \_,_|\__,_|\__, |\___| \__|
#                                         |___/
# This script queries AWS for a list of budget names.
# For each budget, it checks whether notifications are set up and logs an appropriate message.
# To learn more, see https://maxat-akbanov.com/
######################################################################################################################

# Source (import) the utils.sh script from the current directory
# This contains helper functions like get_account_id, log_error, log_info, etc.
source ./utils.sh

# Retrieve AWS Account ID using a function from utils.sh and store it in ACCOUNT_ID
ACCOUNT_ID=$(get_account_id)

# Check if ACCOUNT_ID is empty (i.e., the command failed to retrieve an ID)
if [ -z "$ACCOUNT_ID" ]; then
    log_error "Failed to retrieve AWS Account ID. Is your AWS CLI configured?"
    exit 1
fi

log_info "Checking budgets for AWS Account: $ACCOUNT_ID"
echo "------------------------------------------"

# Retrieve a list of budget names from AWS using the AWS CLI
# The output is piped to 'jq' (a JSON processor) to extract just the BudgetName fields
budget_names=$(aws budgets describe-budgets \
    --account-id "$ACCOUNT_ID" \
    --output json | jq -r '.Budgets[].BudgetName')

# Check if no budgets were found (budget_names is empty)
if [ -z "$budget_names" ]; then
    log_warn "No budgets found for this account."
    exit 0
fi

# Loop through each budget name retrieved, using IFS (Internal Field Separator) to read lines
while IFS= read -r budget_name; do
    log_info "Budget: $budget_name"

    # Query AWS for notifications associated with the current budget
    notifications=$(aws budgets describe-notifications-for-budget \
        --account-id "$ACCOUNT_ID" \
        --budget-name "$budget_name" \
        --query 'Notifications' \
        --output text)

    # Check if no notifications were found for this budget
    if [ -z "$notifications" ]; then
        log_warn "  No alerts (notifications) configured!"
    else
        log_success "  Budget alerts are configured."
    fi

done <<< "$budget_names"
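# If a budget has no alerts, one can be attached from the CLI. A sketch with
# placeholder values (the budget name, threshold, and e-mail address below are
# examples, not taken from this repo):
#
#   aws budgets create-notification \
#     --account-id "$ACCOUNT_ID" \
#     --budget-name "monthly-budget" \
#     --notification NotificationType=ACTUAL,ComparisonOperator=GREATER_THAN,Threshold=80,ThresholdType=PERCENTAGE \
#     --subscribers SubscriptionType=EMAIL,Address=you@example.com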
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_data_transfer_risks.sh:
--------------------------------------------------------------------------------
#!/bin/bash
########################################################################
#  ___         _          _____                      __
# |   \ __ _ | |_  __ _  |_   _|_ _  __ _  _ _  ___ / _| ___  _ _
# | |) |/ _` || _|/ _` |   | | | '_|/ _` || ' \ (_-<| _|/ -_)| '_|
# |___/ \__,_| \__|\__,_|  |_| |_|  \__,_||_||_|/__/|_|  \___||_|
#
# To learn more, see https://maxat-akbanov.com/
########################################################################

source ./utils.sh

REGION=$(aws configure get region)

log_info "Auditing data transfer risks in $REGION"
echo "--------------------------------------------------"

# ✅ 1. Detect EC2 instances with public IPs
log_info "🔍 EC2 Instances with Public IPs:"
# Retrieve details of running EC2 instances
# The 'aws ec2 describe-instances' command queries instance information
# --filters limits to instances in the "running" state
# --query extracts InstanceId and PublicIpAddress for each instance
instances=$(aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[*].Instances[*].{ID:InstanceId,PublicIP:PublicIpAddress}' \
    --output json)

# Parse the instances JSON using jq to identify instances with non-null PublicIP
# For each instance with a public IP, print a warning with the instance ID and public IP
echo "$instances" | jq -r '.[][] | select(.PublicIP != null) | "⚠️ Instance: \(.ID) has Public IP: \(.PublicIP)"'

# ✅ 2. Detect allocated Elastic IPs (EIPs)
log_info "🔍 Elastic IP Addresses (EIPs):"
# Retrieve details of allocated Elastic IPs
eips=$(aws ec2 describe-addresses --query 'Addresses[*].{PublicIP:PublicIp,InstanceId:InstanceId}' --output json)

# Check if no Elastic IPs were found (eips is empty or an empty array "[]")
if [ -z "$eips" ] || [ "$eips" == "[]" ]; then
    log_success "✅ No Elastic IPs allocated."
else
    # Parse the EIPs JSON using jq
    # For each EIP:
    # - If InstanceId is null, print a warning indicating an unused EIP
    # - If InstanceId is present, print a success message indicating the EIP is attached to an instance
    echo "$eips" | jq -r '.[] |
        if .InstanceId == null then
            "⚠️ Unused Elastic IP: \(.PublicIP)"
        else
            "✅ Elastic IP \(.PublicIP) attached to instance: \(.InstanceId)"
        end'
fi

# ✅ 3. Detect subnets spread across Availability Zones (AZs)
log_info "🔍 Subnet-AZ Mapping (check same-AZ design):"
# Retrieve details of subnets
# The 'aws ec2 describe-subnets' command queries subnet information
# --query extracts SubnetId, AvailabilityZone, and the Name tag (if present)
# The output is piped to jq to format each subnet's details
# - Name is used if available; otherwise, SubnetId is used
# - Prints the subnet name (or ID) and its Availability Zone
aws ec2 describe-subnets \
    --query 'Subnets[*].{ID:SubnetId,AZ:AvailabilityZone,Name:Tags[?Key==`Name`]|[0].Value}' \
    --output json | jq -r '.[] | " ↳ Subnet: \(.Name // .ID), AZ: \(.AZ)"'
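# If the endpoint checks below report nothing, a gateway VPC endpoint can route
# S3 traffic privately instead of through NAT gateways or public IPs. A sketch
# with hypothetical IDs (vpc-0abc123 and rtb-0def456 are placeholders):
#
#   aws ec2 create-vpc-endpoint \
#     --vpc-id vpc-0abc123 \
#     --service-name "com.amazonaws.$REGION.s3" \
#     --route-table-ids rtb-0def456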
# ✅ 4. Detect S3 and DynamoDB VPC Endpoints
log_info "🔍 VPC Endpoints (S3 & DynamoDB):"
# Retrieve S3 VPC endpoint details
# The 'aws ec2 describe-vpc-endpoints' command queries VPC endpoint information
# --query filters for endpoints with a ServiceName containing 's3'
s3_vpce=$(aws ec2 describe-vpc-endpoints \
    --query "VpcEndpoints[?contains(ServiceName, 's3')].ServiceName" \
    --output text)

# Retrieve DynamoDB VPC endpoint details
ddb_vpce=$(aws ec2 describe-vpc-endpoints \
    --query "VpcEndpoints[?contains(ServiceName, 'dynamodb')].ServiceName" \
    --output text)

# Check if an S3 VPC endpoint was found
if [ -z "$s3_vpce" ]; then
    log_warn "No VPC endpoint for S3 detected"
else
    log_success "S3 VPC endpoint present: $s3_vpce"
fi

# Check if a DynamoDB VPC endpoint was found
if [ -z "$ddb_vpce" ]; then
    log_warn "No VPC endpoint for DynamoDB detected"
else
    log_success "DynamoDB VPC endpoint present: $ddb_vpce"
fi

log_success "Data transfer risk audit completed."
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_forgotten_ebs.sh:
--------------------------------------------------------------------------------
#!/bin/bash
###################################################################
#  ___  ___  ___     ___  _            _
# | __|| _ )/ __|   / __|| |_  ___  __ | |__
# | _| | _ \\__ \  | (__ | ' \ / -_)/ _|| / /
# |___||___/|___/   \___||_||_|\___|\__||_\_\
#
# To learn more, see https://maxat-akbanov.com/
###################################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

REGION=$(aws configure get region)

log_info "Checking for unattached (forgotten) EBS volumes in $REGION"
echo "------------------------------------------------------------"

# Retrieve details of EBS volumes that are unattached (status=available)
# The 'aws ec2 describe-volumes' command queries EBS volume information
# --filters Name=status,Values=available limits to volumes not attached to any EC2 instance
# --query uses a structured JSON format to extract specific fields:
# - ID: VolumeId (unique identifier of the volume)
# - Size: Size (volume size in GiB)
# - Created: CreateTime (creation timestamp of the volume)
# - Tags: Tags (volume tags, if any)
volumes=$(aws ec2 describe-volumes \
    --filters Name=status,Values=available \
    --query 'Volumes[*].{ID:VolumeId,Size:Size,Created:CreateTime,Tags:Tags}' \
    --output json)

# Check if no unattached volumes were found (volumes is empty or an empty array "[]")
if [ -z "$volumes" ] || [ "$volumes" == "[]" ]; then
    log_success "🧹 No unattached EBS volumes found."
    exit 0
fi

# Parse the volumes JSON and format the output using jq
# For each unattached volume, print a warning with details:
# - .ID: VolumeId (unique identifier of the volume)
# - .Size: Size (volume size in GiB)
# - .Created: CreateTime (creation timestamp of the volume)
# - .Tags: Tags (volume tags, or "None" if no tags are present, using // for null handling)
echo "$volumes" | jq -r '.[] |
    "⚠️ Unattached EBS Volume: \(.ID)\n   ↳ Size: \(.Size) GiB\n   ↳ Created: \(.Created)\n   ↳ Tags: \(.Tags // "None")\n"'
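# Typical cleanup for a volume you no longer need live but want to keep as a
# backup: snapshot it, then delete it. A sketch with a hypothetical volume ID:
#
#   aws ec2 create-snapshot --volume-id vol-0abc123 --description "archive before deletion"
#   aws ec2 delete-volume --volume-id vol-0abc123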
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_idle_ec2.sh:
--------------------------------------------------------------------------------
#!/bin/bash
####################################################
#  ___     _  _        ___  ___  ___
# |_ _| __| || | ___  | __|/ __||_  )
#  | | / _` || |/ -_) | _| | (__  / /
# |___|\__,_||_|\___| |___|\___| /___|
#
# To learn more, see https://maxat-akbanov.com/
####################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

REGION=$(aws configure get region)

log_info "Checking for idle or oversized EC2 instances in $REGION"
echo "------------------------------------------"

# Define threshold for CPU utilization (in percent) below which an instance is considered idle
CPU_THRESHOLD=10
# Define the time period (in days) to evaluate CPU utilization
DAYS=3

# Retrieve a list of running EC2 instance IDs
instance_ids=$(aws ec2 describe-instances \
    --filters "Name=instance-state-name,Values=running" \
    --query 'Reservations[*].Instances[*].InstanceId' \
    --output text)

# Check if no running instances were found (instance_ids is empty)
if [ -z "$instance_ids" ]; then
    log_warn "No running EC2 instances found."
    exit 0
fi
# Loop through each instance ID retrieved
for id in $instance_ids; do
    # Retrieve the instance type for the current instance
    instance_type=$(aws ec2 describe-instances \
        --instance-ids "$id" \
        --query 'Reservations[0].Instances[0].InstanceType' \
        --output text)

    # Retrieve the average CPU utilization for the instance over the specified period
    # The 'aws cloudwatch get-metric-statistics' command fetches CloudWatch metrics
    # --namespace AWS/EC2 specifies the EC2 metrics namespace
    # --metric-name CPUUtilization specifies the CPU usage metric
    # --dimensions filters metrics for the specific instance ID
    # --statistics Average computes the average value
    # --period 86400 sets the metric granularity to daily (86400 seconds = 1 day)
    # --start-time and --end-time define the time range (last $DAYS days to now)
    # --query extracts the Average values from the Datapoints
    # The output is piped to awk to calculate the overall average across all datapoints
    # If no datapoints exist, awk returns 0
    avg_cpu=$(aws cloudwatch get-metric-statistics \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value="$id" \
        --statistics Average \
        --period 86400 \
        --start-time "$(date -u -d "$DAYS days ago" +"%Y-%m-%dT%H:%M:%SZ")" \
        --end-time "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" \
        --query 'Datapoints[*].Average' --output text | awk '{ sum+=$1; count++ } END { if (count > 0) print sum/count; else print 0 }')

    # Check if the average CPU utilization is below the defined threshold
    # The comparison is done using bc (basic calculator) to handle floating-point numbers
    if (( $(echo "$avg_cpu < $CPU_THRESHOLD" | bc -l) )); then
        log_warn "Idle Instance: $id ($instance_type) — Avg CPU: ${avg_cpu}%"
    else
        log_success "Active Instance: $id ($instance_type) — Avg CPU: ${avg_cpu}%"
    fi
done

echo
log_info "👉 Tip: For detailed right-sizing recommendations, check AWS Compute Optimizer:"
log_info "https://console.aws.amazon.com/compute-optimizer/home"
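# Once you have confirmed an instance is genuinely idle (check its workload
# first!), stopping it halts the compute charge. A sketch with a hypothetical
# instance ID:
#
#   aws ec2 stop-instances --instance-ids i-0abc123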
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_idle_load_balancers.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#########################################################################
#  ___  ___   _     ___     _    _     ___   __ _  _  _     ___
# |_ _|| \  | |   | __|  /_\  | |   | _ ) / /| \| || |   | _ )
#  | | | |) || |__ | _|  / _ \ | |__ | _ \/ /  | .` || |__ | _ \
# |___||___/ |____||___| /_/ \_\|____||___//_/ |_|\_||____||___/
#
# To learn more, see https://maxat-akbanov.com/
#########################################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

REGION=$(aws configure get region)

# Define the time period (in days) to evaluate for load balancer activity
DAYS=3

log_info "Checking ALBs and NLBs for idle state (no traffic in past $DAYS days)"
echo "------------------------------------------------------------"

# Calculate the time range for checking metrics
# START: Convert the date from $DAYS ago to ISO8601 format (YYYY-MM-DDTHH:MM:SSZ) in UTC
START=$(date -u -d "$DAYS days ago" +"%Y-%m-%dT%H:%M:%SZ")
# END: Get the current date and time in ISO8601 format (YYYY-MM-DDTHH:MM:SSZ) in UTC
END=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

# ✅ 1. Check Application Load Balancers (ALBs)
log_info "🔍 Checking Application Load Balancers (ALB):"
# Retrieve ARNs (Amazon Resource Names) of all Application Load Balancers
# The 'aws elbv2 describe-load-balancers' command queries load balancer information
# --query filters for ALBs (Type='application') and extracts their ARNs
alb_arns=$(aws elbv2 describe-load-balancers --query 'LoadBalancers[?Type==`application`].LoadBalancerArn' --output text)

# Loop through each ALB ARN
for arn in $alb_arns; do
    # Extract the CloudWatch dimension value from the ARN
    # The LoadBalancer dimension expects the ARN suffix after "loadbalancer/",
    # e.g. "app/my-alb/50dc6c495c0c9188" - not just the final path segment
    lb_name="${arn#*loadbalancer/}"

    # Retrieve the total request count for the ALB over the past $DAYS days
    # The 'aws cloudwatch get-metric-statistics' command fetches CloudWatch metrics
    # --namespace AWS/ApplicationELB specifies the ALB metrics namespace
    # --metric-name RequestCount measures the number of requests handled by the ALB
    # --dimensions filters metrics for the specific load balancer
    # --statistics Sum computes the total sum of requests
    # --period 86400 sets the metric granularity to daily (86400 seconds = 1 day)
    # --start-time and --end-time define the time range (last $DAYS days to now)
    # --query extracts the Sum values from the Datapoints
    # The output is piped to awk to calculate the total sum across all datapoints
    count=$(aws cloudwatch get-metric-statistics \
        --namespace AWS/ApplicationELB \
        --metric-name RequestCount \
        --dimensions Name=LoadBalancer,Value="$lb_name" \
        --statistics Sum \
        --period 86400 \
        --start-time "$START" \
        --end-time "$END" \
        --query 'Datapoints[*].Sum' --output text | awk '{ sum+=$1 } END { print sum }')

    # Check if the request count is empty or less than 1 (indicating no traffic)
    # The comparison uses bc (basic calculator) to handle floating-point numbers
    if [ -z "$count" ] || (( $(echo "$count < 1" | bc -l) )); then
        # Log a warning if the ALB is idle (no requests in the past $DAYS days)
        log_warn "Idle ALB: $lb_name — RequestCount: 0"
    else
        # Log a success message if the ALB is active, including the total request count
        log_success "Active ALB: $lb_name — Requests in last $DAYS days: $count"
    fi
done
# ✅ 2. Check Network Load Balancers (NLBs)
log_info "🔍 Checking Network Load Balancers (NLB):"
# Retrieve ARNs of all Network Load Balancers
# The 'aws elbv2 describe-load-balancers' command queries load balancer information
# --query filters for NLBs (Type='network') and extracts their ARNs
# --output text formats the output as plain text
nlb_arns=$(aws elbv2 describe-load-balancers --query 'LoadBalancers[?Type==`network`].LoadBalancerArn' --output text)

# Loop through each NLB ARN
for arn in $nlb_arns; do
    # Extract the CloudWatch dimension value from the ARN
    # (for NLBs this looks like "net/my-nlb/50dc6c495c0c9188")
    lb_name="${arn#*loadbalancer/}"

    # Retrieve the total active flow count for the NLB over the past $DAYS days
    # The 'aws cloudwatch get-metric-statistics' command fetches CloudWatch metrics
    # --namespace AWS/NetworkELB specifies the NLB metrics namespace
    # --metric-name ActiveFlowCount measures the number of active TCP/UDP flows
    # --dimensions filters metrics for the specific load balancer
    # --statistics Sum computes the total sum of flows
    # --period 86400 sets the metric granularity to daily (86400 seconds = 1 day)
    # --start-time and --end-time define the time range (last $DAYS days to now)
    # --query extracts the Sum values from the Datapoints
    # The output is piped to awk to calculate the total sum across all datapoints
    count=$(aws cloudwatch get-metric-statistics \
        --namespace AWS/NetworkELB \
        --metric-name ActiveFlowCount \
        --dimensions Name=LoadBalancer,Value="$lb_name" \
        --statistics Sum \
        --period 86400 \
        --start-time "$START" \
        --end-time "$END" \
        --query 'Datapoints[*].Sum' --output text | awk '{ sum+=$1 } END { print sum }')

    # Check if the flow count is empty or less than 1 (indicating no traffic)
    # The comparison uses bc to handle floating-point numbers
    if [ -z "$count" ] || (( $(echo "$count < 1" | bc -l) )); then
        # Log a warning if the NLB is idle (no active flows in the past $DAYS days)
        log_warn "Idle NLB: $lb_name — ActiveFlowCount: 0"
    else
        # Log a success message if the NLB is active, including the total flow count
        log_success "Active NLB: $lb_name — Flows in last $DAYS days: $count"
    fi
done

log_success "Load balancer traffic audit completed."
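# After double-checking DNS records, listeners, and target groups, an idle load
# balancer can be removed to stop its hourly charge. A sketch with a
# hypothetical ARN:
#
#   aws elbv2 delete-load-balancer \
#     --load-balancer-arn "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/50dc6c495c0c9188"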
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_old_rds_snapshots.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#####################################################################
#  ___  ___   ___    ___                      _          _
# | _ \| \ / __|  / __| _ _  __ _  _ __  ___| |_  ___ | |_ ___
# |   /| |) |\__ \  \__ \| ' \ / _` || '_ \(_-<| ' \ / _ \| _|(_-<
# |_|_\|___/ |___/  |___/|_||_|\__,_|| .__//__/|_||_|\___/ \__|/__/
#                                    |_|
#
# To learn more, see https://maxat-akbanov.com/
#####################################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

REGION=$(aws configure get region)

# Define the threshold (in days) for identifying "old" RDS snapshots
THRESHOLD_DAYS=30

log_info "Checking for old RDS snapshots (older than $THRESHOLD_DAYS days) in $REGION"
echo "------------------------------------------------------------"

# Convert the threshold (30 days ago) to an ISO8601 formatted date
# The 'date' command with -u ensures UTC time, and -d calculates the date $THRESHOLD_DAYS ago
# The output is formatted as YYYY-MM-DDTHH:MM:SSZ (ISO8601)
cutoff_date=$(date -u -d "$THRESHOLD_DAYS days ago" +"%Y-%m-%dT%H:%M:%SZ")

# Retrieve details of RDS snapshots older than the cutoff date
# The 'aws rds describe-db-snapshots' command queries RDS snapshot information
# --query filters snapshots where SnapshotCreateTime is earlier than cutoff_date
# The query extracts DBSnapshotIdentifier, DBInstanceIdentifier, SnapshotCreateTime, and SnapshotType
snapshots=$(aws rds describe-db-snapshots \
    --query "DBSnapshots[?SnapshotCreateTime<'$cutoff_date'].[DBSnapshotIdentifier,DBInstanceIdentifier,SnapshotCreateTime,SnapshotType]" \
    --output json)

# Check if no snapshots were found (snapshots is empty or an empty array "[]")
if [ -z "$snapshots" ] || [ "$snapshots" == "[]" ]; then
    log_success "♻️ No RDS snapshots older than $THRESHOLD_DAYS days."
    exit 0
fi

# Parse the snapshots JSON and format the output using jq
# For each snapshot, print a warning with details:
# - .[0]: DBSnapshotIdentifier (snapshot name)
# - .[1]: DBInstanceIdentifier (associated RDS instance)
# - .[2]: SnapshotCreateTime (creation timestamp)
# - .[3]: SnapshotType (e.g., manual or automated)
echo "$snapshots" | jq -r '.[] |
    "⚠️ Snapshot: \(.[0])\n   Instance: \(.[1])\n   Created: \(.[2])\n   Type: \(.[3])\n"'
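# Old *manual* snapshots stay until you remove them yourself (automated ones
# follow the instance's retention window). A deletion sketch with a
# hypothetical snapshot identifier:
#
#   aws rds delete-db-snapshot --db-snapshot-identifier mydb-snap-2023-01-01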
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_on_demand_instances.sh:
--------------------------------------------------------------------------------
#!/bin/bash
#############################################################################
#   ___          ___                                _   ___  ___  ___
#  / _ \ _ _   | \  ___  _ __  __ _  _ _  __| |  | __|/ __||_  )
# | (_) || ' \  | |) |/ -_)| '  \ / _` || ' \ / _` |  | _|| (__  / /
#  \___/ |_||_| |___/ \___||_|_|_|\__,_||_||_|\__,_|  |___|\___|/___|
#
# To learn more, see https://maxat-akbanov.com/
#############################################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

REGION=$(aws configure get region)

log_info "Checking for On-Demand EC2 instances in $REGION"
echo "------------------------------------------------------------"

# Retrieve details of running EC2 instances
# The 'aws ec2 describe-instances' command queries instance information
# --filters limits to instances in the "running" state
# --query extracts InstanceId, InstanceType, and InstanceLifecycle for each instance
# - InstanceLifecycle indicates if the instance is On-Demand (null), Spot, or Scheduled
instances=$(aws ec2 describe-instances \
    --filters Name=instance-state-name,Values=running \
    --query 'Reservations[*].Instances[*].{ID:InstanceId,Type:InstanceType,Lifecycle:InstanceLifecycle}' \
    --output json)

# Parse the instances JSON using jq to identify On-Demand instances
# Select instances where Lifecycle is null (indicating On-Demand)
# For each On-Demand instance, print a warning with the instance ID and type, using a money bag emoji
echo "$instances" | jq -r '.[][] | select(.Lifecycle == null) | "💸 On-Demand Instance: \(.ID) (\(.Type))"'

# Count the number of On-Demand instances separately
# jq filters instances where Lifecycle is null, creates an array, and counts its length
count=$(echo "$instances" | jq '[.[][] | select(.Lifecycle == null)] | length')

# Check if no On-Demand instances were found (count is 0)
if [ "$count" -eq 0 ]; then
    log_success "No On-Demand instances detected."
else
    log_warn "Total On-Demand instances: $count"
    log_info "Consider using Reserved Instances or Savings Plans to save costs."
fi
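# Cost Explorer can suggest a Savings Plans commitment sized to your recent
# usage. A sketch (assumes the ce:GetSavingsPlansPurchaseRecommendation
# permission; the flag values below are one reasonable starting point, not a
# recipe):
#
#   aws ce get-savings-plans-purchase-recommendation \
#     --savings-plans-type COMPUTE_SP \
#     --term-in-years ONE_YEAR \
#     --payment-option NO_UPFRONT \
#     --lookback-period-in-days THIRTY_DAYS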
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_s3_lifecycle.sh:
--------------------------------------------------------------------------------
#!/bin/bash
########################################################################################
# / __||__ /  | |   (_)/ _| ___  __ _  _ __ | | ___  | _ \ ___ | |(_) __ (_) ___  ___
# \__ \ |_ \  | |__ | || _|/ -_)/ _|| || |/ _|| |/ -_) | _// _ \| || |/ _|| |/ -_)(_-<
# |___/|___/  |____||_||_| \___|\__| \_, |\__||_|\___| |_| \___/|_||_|\__||_|\___|/__/
#                                    |__/
# To learn more, see https://maxat-akbanov.com/
########################################################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

REGION=$(aws configure get region)

log_info "Checking S3 buckets for missing lifecycle policies in $REGION"
echo "------------------------------------------"

# Retrieve a list of all S3 bucket names in the account
buckets=$(aws s3api list-buckets --query 'Buckets[*].Name' --output text)

# Check if no buckets were found (buckets is empty)
if [ -z "$buckets" ]; then
    log_warn "No S3 buckets found in this account."
    exit 0
fi

# Loop through each bucket name retrieved
for bucket in $buckets; do
    # Attempt to retrieve the lifecycle configuration for the current bucket
    # The 'aws s3api get-bucket-lifecycle-configuration' command fetches lifecycle rules
    # 2>/dev/null redirects error messages (e.g., if no lifecycle policy exists) to /dev/null
    lifecycle=$(aws s3api get-bucket-lifecycle-configuration \
        --bucket "$bucket" \
        --query 'Rules' \
        --output json 2>/dev/null)

    # Check if no lifecycle policy was found or if the response is "null"
    if [ -z "$lifecycle" ] || [ "$lifecycle" == "null" ]; then
        log_warn "🗃️ Bucket without lifecycle policy: $bucket"
    else
        log_success "✅ Bucket with lifecycle policy: $bucket"
        # Parse the lifecycle rules using jq to extract and display details
        # For each rule, print the ID, Prefix (or "N/A" if not set), and Status
        echo "$lifecycle" | jq -r '.[] | " ↳ ID: \(.ID // "N/A"), Prefix: \(.Filter.Prefix // "N/A"), Status: \(.Status)"'
    fi
done

log_success "S3 lifecycle policy check completed."
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/check_untagged_resources.sh:
--------------------------------------------------------------------------------
#!/bin/bash
###############################################################################################
# | | | | _ _ | |_ __ _ __ _ __ _ ___ __| | | _ \ ___ ___ ___ _ _ _ _ __ ___ ___
# | |_| || ' \| _|/ _` |/ _` |/ _` |/ -_)/ _` | | // -_)(_-<
###############################################################################################

source ./utils.sh

ACCOUNT_ID=$(get_account_id)

# ✅ S3 Buckets
log_info "🔎 Checking S3 Buckets..."
buckets=$(aws s3api list-buckets --query 'Buckets[*].Name' --output text)
for bucket in $buckets; do
    # get-bucket-tagging fails when a bucket has no tags, so errors are discarded
    tags=$(aws s3api get-bucket-tagging --bucket "$bucket" --query 'TagSet' --output json 2>/dev/null)
    check_tags "$bucket" "$tags" "S3 Bucket"
done

# ✅ RDS Instances
log_info "🔎 Checking RDS Instances..."
rds_instances=$(aws rds describe-db-instances --query 'DBInstances[*].DBInstanceIdentifier' --output text)
for id in $rds_instances; do
    arn="arn:aws:rds:$(aws configure get region):$ACCOUNT_ID:db:$id"
    tags=$(aws rds list-tags-for-resource --resource-name "$arn" --query 'TagList' --output json)
    check_tags "$id" "$tags" "RDS Instance"
done
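# The remediation for anything flagged here is to add tags. An EC2 example with
# a hypothetical instance ID and tag scheme:
#
#   aws ec2 create-tags --resources i-0abc123 --tags Key=Owner,Value=platform-team Key=Project,Value=audit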
# ✅ Lambda Functions
log_info "🔎 Checking Lambda Functions..."
lambdas=$(aws lambda list-functions --query 'Functions[*].FunctionName' --output text)
for fn in $lambdas; do
    arn=$(aws lambda get-function --function-name "$fn" --query 'Configuration.FunctionArn' --output text)
    tags=$(aws lambda list-tags --resource "$arn" --query 'Tags' --output json)
    # Convert flat map to array of key-value pairs
    formatted=$(echo "$tags" | jq -r 'to_entries | map({Key: .key, Value: .value})')
    check_tags "$fn" "$formatted" "Lambda Function"
done

log_success "Untagged resource check completed."
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/main.sh:
--------------------------------------------------------------------------------
#!/bin/bash
########################################################################
#     _ __      __ ___     _             _  _  _
#    /_\\ \ / // __|   /_\  _  _  __| |(_)| |_
#   / _ \\ \/\/ / \__ \  / _ \| || |/ _` || || _|
#  /_/ \_\\_/\_/  |___/ /_/ \_\\_,_|\__,_||_| \__|
#
# To learn more, see https://maxat-akbanov.com/
########################################################################

# Optional: log to file
exec > >(tee "audit_aws_$(date +%Y%m%d_%H%M%S).log") 2>&1

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REGION=$(aws configure get region)

echo "🧾 AWS Cost Audit Started on $(date +'%d-%b-%Y %H:%M:%S')"
echo "📛 Account: $ACCOUNT_ID | 📍 Region: $REGION"
echo "=============================="

# Run individual checks
echo -e "\n--- 📊 Budget Alerts Check ---"
./check_budgets.sh

echo -e "\n--- 🏷️ Untagged Resources Check ---"
./check_untagged_resources.sh

echo -e "\n--- 💤 Idle EC2 Resources Check ---"
./check_idle_ec2.sh

echo -e "\n--- ♻️ S3 Lifecycle Policies Check ---"
./check_s3_lifecycle.sh

echo -e "\n--- 🗓️ Old RDS Snapshots Check ---"
./check_old_rds_snapshots.sh

echo -e "\n--- 🧹 Forgotten EBS Volumes Check ---"
./check_forgotten_ebs.sh

echo -e "\n--- 🌐 Data Transfer Risks Check ---"
./check_data_transfer_risks.sh

echo -e "\n--- 💸 On-Demand EC2 Instances Check ---"
./check_on_demand_instances.sh

echo -e "\n--- 🛑 Idle Load Balancers Check ---"
./check_idle_load_balancers.sh

echo -e "\n✅ AWS Audit Completed"
--------------------------------------------------------------------------------
/aws-cost-audit/scripts/utils.sh:
--------------------------------------------------------------------------------
#!/bin/bash
##################################################################
# Script for Common Shared Logic
##################################################################

get_account_id() {
    aws sts get-caller-identity --query Account --output text 2>/dev/null
}

log_info() {
    echo -e "ℹ️ $1"
}

log_warn() {
    echo -e "⚠️ $1"
}

log_success() {
    echo -e "✅ $1"
}

log_error() {
    echo -e "❌ $1"
}
--------------------------------------------------------------------------------
/img/aws_audit_output.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Brain2life/bash-cookbook/e9e8be87457acd15b03de5417599c2b55a9492f1/img/aws_audit_output.png
--------------------------------------------------------------------------------
/linux-namespaces-cgroups/README.md:
--------------------------------------------------------------------------------
# Demonstrating cgroups in action: Restricting CPU usage for a specific process

For more information, see the article: [Linux Namespaces and cgroups: Building Blocks of Modern Containerization](https://maxat-akbanov.com/linux-namespaces-and-cgroups-building-blocks-of-modern-containerization)
--------------------------------------------------------------------------------
/linux-namespaces-cgroups/cgroup.sh:
--------------------------------------------------------------------------------
#!/bin/bash

# Note: this script uses the cgroup v1 CPU controller and must run as root
# to create directories and write files under /sys/fs/cgroup

# Define variables
CGROUP_NAME="my_cgroup"
CGROUP_PATH="/sys/fs/cgroup/cpu/$CGROUP_NAME"
CPU_LIMIT=50000   # 50% CPU usage (quota in microseconds)
CPU_PERIOD=100000 # Period in microseconds (default is 100ms)

# Step 1: Create a new cgroup
echo "Creating cgroup at $CGROUP_PATH..."
mkdir -p "$CGROUP_PATH"

# Step 2: Set CPU usage limits
echo "Setting CPU limits..."
echo $CPU_LIMIT > "$CGROUP_PATH/cpu.cfs_quota_us"
echo $CPU_PERIOD > "$CGROUP_PATH/cpu.cfs_period_us"

# Step 3: Launch a process to test the CPU limit
echo "Starting a CPU-intensive process (infinite loop)..."
# Launch a background CPU-intensive process
bash -c "while :; do :; done" &
PROCESS_PID=$!

echo "Process started with PID $PROCESS_PID"

# Make sure the busy loop is killed when the script exits (e.g., on Ctrl+C)
trap 'kill $PROCESS_PID 2>/dev/null' EXIT

# Step 4: Add the process to the cgroup
echo "Adding process $PROCESS_PID to cgroup..."
echo $PROCESS_PID > "$CGROUP_PATH/cgroup.procs"

# Step 5: Monitor CPU usage for the process
echo "Monitoring CPU usage (press Ctrl+C to exit)..."
while true; do
    CPU_USAGE=$(ps -p $PROCESS_PID -o %cpu=)
    echo "CPU Usage of PID $PROCESS_PID: $CPU_USAGE%"
    sleep 2
done
--------------------------------------------------------------------------------
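`cgroup.sh` above targets the cgroup v1 CPU controller (`cpu.cfs_quota_us` / `cpu.cfs_period_us`). On distributions that mount only cgroup v2 (a single unified hierarchy, with no `/sys/fs/cgroup/cpu` directory), the same 50% cap is expressed through `cpu.max`. A minimal sketch, assuming root and that the `cpu` controller is enabled in `/sys/fs/cgroup/cgroup.subtree_control`:

```bash
#!/bin/bash
# cgroup v2 variant: cap a busy loop at 50% of one CPU core
CGROUP_PATH="/sys/fs/cgroup/my_cgroup"
mkdir -p "$CGROUP_PATH"

# "<quota> <period>" in microseconds: 50000 out of every 100000 (i.e. 50%)
echo "50000 100000" > "$CGROUP_PATH/cpu.max"

# Start the workload and move it into the cgroup
bash -c "while :; do :; done" &
echo $! > "$CGROUP_PATH/cgroup.procs"
```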