├── .dockerignore ├── .env ├── .gitignore ├── CHEAT-SHEET.md ├── LICENSE ├── README.md ├── ataka-cli ├── ataka ├── api │ ├── Dockerfile │ ├── __init__.py │ ├── dependencies.py │ ├── requirements.txt │ └── routers │ │ ├── exploit.py │ │ ├── exploit_history.py │ │ ├── flag.py │ │ ├── job.py │ │ └── targets.py ├── cli │ ├── Dockerfile │ ├── __main__.py │ └── requirements.txt ├── common │ ├── database │ │ ├── __init__.py │ │ ├── config.py │ │ └── models │ │ │ ├── __init__.py │ │ │ ├── exclusion.py │ │ │ ├── execution.py │ │ │ ├── exploit.py │ │ │ ├── exploit_history.py │ │ │ ├── flag.py │ │ │ ├── job.py │ │ │ └── target.py │ ├── delayed_start.sh │ ├── flag_status.py │ ├── job_execution_status.py │ ├── queue │ │ ├── __init__.py │ │ ├── flag.py │ │ ├── job.py │ │ ├── multiplexed_queue.py │ │ ├── output.py │ │ └── queue.py │ ├── requirements.txt │ └── wait-for-it.sh ├── ctfcode │ ├── Dockerfile │ ├── __main__.py │ ├── ctf.py │ ├── flags.py │ ├── requirements.txt │ └── target_job_generator.py ├── ctfconfig │ ├── enowars7.py │ ├── faustctf.py │ ├── iccdemo.py │ ├── old │ │ ├── cinsects.py │ │ ├── cwte.py │ │ ├── ecsc2022.py │ │ ├── ructf.py │ │ └── saarctf.py │ ├── ructf.py │ └── testctf.py ├── executor │ ├── Dockerfile │ ├── __main__.py │ ├── exploits.py │ ├── jobs.py │ ├── localdata.py │ └── requirements.txt └── player-cli │ ├── .gitignore │ ├── __main__.py │ ├── package_player_cli.sh │ ├── player_cli │ ├── __init__.py │ ├── ctfconfig_wrapper.py │ ├── exploit │ │ ├── __init__.py │ │ ├── execution.py │ │ ├── exploit.py │ │ ├── job.py │ │ └── target.py │ ├── flags.py │ ├── service.py │ └── util.py │ ├── requirements.txt │ └── templates │ ├── python │ ├── Dockerfile │ ├── exploit.py │ └── requirements.txt │ └── ubuntu │ ├── Dockerfile │ └── exploit.sh ├── data ├── exploits │ └── .keep ├── persist │ └── .keep └── shared │ └── exploits │ └── .keep ├── docker-compose.yml └── reload.sh /.dockerignore: -------------------------------------------------------------------------------- 1 | # ignore all 2 | ** 3 | 4 | # allow these files 5 | !/ataka/** 6 | 7 | # ignore everything within these directories 8 | venv 9 | **/__pycache__ 10 | **/node_modules 11 | -------------------------------------------------------------------------------- /.env: -------------------------------------------------------------------------------- 1 | 2 | # To make sure logging output works 3 | PYTHONUNBUFFERED=1 4 | 5 | # RabbitMQ 6 | RABBITMQ_USER=ataka 7 | RABBITMQ_PASSWORD=PWRBDAnMkq6CSQQiQEgrFYw99KFZMYrms 8 | 9 | # Postgres 10 | POSTGRES_USER=ataka 11 | POSTGRES_PASSWORD=BQHp7j45mkbt4ITaE7QvX3T18bNtxIliM 12 | 13 | # Persistent data store 14 | DATA_STORE=/home/ataka/ataka/data 15 | 16 | # The user that owns the data/ directory and has access to the docker socket 17 | USERID=1000:966 18 | 19 | CTF=testctf 20 | 21 | # Which CPUs to limit exploits to.
This is mainly to leave a couple of CPUs for other tools on the same machine 22 | EXPLOIT_CPUSET=4-15 23 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.pyc 2 | *.swp 3 | .idea/ 4 | *.iml 5 | src/**/__pycache__/ 6 | venv/ 7 | **/.vagrant/ 8 | data/ 9 | ataka-player-cli.pyz 10 | ataka/player-cli/player_cli/ctfconfig.py 11 | -------------------------------------------------------------------------------- /CHEAT-SHEET.md: -------------------------------------------------------------------------------- 1 | # Ataka Quick Command Cheat Sheet 2 | 3 | The most important commands are listed below: 4 | 5 | Submit a flag manually: 6 | `atk flag submit FLAG{N0WAYAAAAAAAAAAAAAAAAAAAAAAAAAB}` 7 | Or simply `atk flag submit ` 8 | 9 | Update config: 10 | `atk reload` 11 | 12 | Create exploit template: 13 | `atk exploit template [:version] ` 14 | 15 | List services / flag ids: 16 | `atk flag ids [--all-targets]` 17 | 18 | Run exploit locally for testing: 19 | `atk exploit runlocal ` 20 | 21 | Run exploit locally against everyone: 22 | `atk exploit runlocal --all-targets ` 23 | 24 | Create exploit on the server: 25 | `atk exploit create ` 26 | 27 | Upload exploit: 28 | `atk exploit upload ` 29 | 30 | Logs: 31 | `atk exploit logs -n ` -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Ataka 2 | 3 | Runs exploits, fast. 4 | 5 | > See the `CTF` variable in `.env` for CTF selection. See `ataka/ctfconfig/` for the CTF config code. This directory is mounted into the docker container, not copied, so edits can be applied via hot-reload. 6 | 7 | > Check out [the cheat sheet file](CHEAT-SHEET.md) for the most important commands as a quick TL;DR after installing. 8 | # Server 9 | 10 | 1. Edit the `.env` file to set: 11 | - **DATA_STORE**: **Absolute** path to a folder to store files related to player exploits. 12 | - **USERID**: The `user:group` id tuple to use for ataka. Note that write access to both the docker socket and the data directory has to be provided. You'll want to set this to the user id of the owner of the data directory and the group id of the `docker` group. 13 | - **CTF**: The name of the ctfconfig to use. 14 | 2. Edit the ctfconfig in `ataka/ctfconfig/` 15 | 3. Run `docker-compose up -d --build` 16 | 17 | > The ctfconfig is mounted into the containers. When editing the config while ataka is running, run `./ataka-cli reload` to hot-reload the ctfconfig. 18 | 19 | # Player-CLI 20 | 21 | The player-cli is a tool written in Python to help players interact with *ataka* and create, upload and manage exploits and targets. 22 | 23 | ## Setup 24 | 25 | The player-cli is a `.pyz` file (Python Zipped Executable). 26 | 27 | > This only needs to be done once. 28 | 29 | - Download the ataka-player-cli through a GET request to port 8000 of the api container. 30 | - Save it to a known location (`~/.local/bin/atk`). 31 | - Mark it as executable 32 | 33 | ## Reloading Player-CLI 34 | 35 | When the ctfconfig is modified and `./ataka-cli reload` is run, the local offline copy of the ctfconfig also needs to be reloaded. For that, run: 36 | 37 | ```bash 38 | $ atk reload 39 | Writing player-cli at 40 | ``` 41 | 42 | This overwrites the old player-cli with the new one. 43 | 44 | ## How to write an exploit 45 | 46 | An exploit can be any executable or script.
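For orientation, here is a minimal sketch of what such an exploit might look like; the service port, the URL path, and the use of `requests` are invented for illustration, and the two environment variables it reads are described below:

```python
#!/usr/bin/env python3
# Example exploit sketch; the target service and its endpoint are hypothetical.
import json
import os

import requests

target_ip = os.environ["TARGET_IP"]
# TARGET_EXTRA is a JSON string with extra target info; assumed here to be a list of flag IDs.
flag_ids = json.loads(os.environ["TARGET_EXTRA"])

for flag_id in flag_ids:
    r = requests.get(f"http://{target_ip}:1337/notes/{flag_id}", timeout=5)
    print(r.text)  # output may be dirty; flags are extracted by regex
```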
47 | 48 | It will receive two environment variables: 49 | - `TARGET_IP`: the IP to attack; 50 | - `TARGET_EXTRA`: a JSON string containing extra information on the target, such as flag IDs. 51 | 52 | Your exploit should print the captured flags. 53 | They will be matched by a regular expression, so the output doesn't have to be clean. 54 | 55 | ## Local exploits 56 | 57 | ### Testing a local exploit 58 | 59 | Get the name of your service: 60 | 61 | ``` 62 | $ atk flag ids 63 | [*] Flag IDs for service buffalo 64 | 10.99.0.2 => ["1234", "5678"] 65 | [*] Flag IDs for service swiss_keys 66 | 10.99.0.2 => ["1234", "5678"] 67 | [*] Flag IDs for service to_the_moon 68 | 10.99.0.2 => ["1234", "5678"] 69 | [*] Flag IDs for service kyc 70 | 10.99.0.2 => ["1234", "5678"] 71 | [*] Flag IDs for service gopher_coin 72 | 10.99.0.2 => ["1234", "5678"] 73 | [*] Flag IDs for service wall.eth 74 | 10.99.0.2 => ["1234", "5678"] 75 | [*] Flag IDs for service oly_consensus 76 | 10.99.0.2 => ["1234", "5678"] 77 | ``` 78 | 79 | Run the exploit: 80 | 81 | ``` 82 | $ atk exploit runlocal exploit.py SERVICE 83 | ``` 84 | 85 | Where: 86 | - `exploit.py` is your exploit (must be executable); 87 | - `SERVICE` is the target service name. 88 | 89 | This will test the exploit against the NOP team: `exploit runlocal` is meant for testing, not for actual attacks, which should be centralized to allow the captain to manage them. 90 | 91 | By default, all output will be shown; to limit this, use the `-l/--limit` flag. 92 | 93 | If you only want to run a fixed number of attack rounds, you can use `-c/--count` (e.g., `-c 1` is a one-shot attack). 94 | By default, the command attacks forever until manually terminated. 95 | 96 | Instead of an executable, you can also specify a directory containing a Dockerfile. 97 | The runner will try to execute the command from the Dockerfile (locally, outside Docker), with the specified directory as the working directory. 98 | This is useful in combination with `exploit download`, described in the next section. 99 | 100 | The local runner can run exploits against the real targets (`--all-targets`, or subsets via `-T/-N`). 101 | 102 | ## Centralized exploits 103 | 104 | ### Deploying a centralized exploit 105 | 106 | To run on the centralized attacker, exploits must be wrapped in a Docker container and uploaded to the server. 107 | 108 | The CLI provides templates for common containers. 109 | For example, to get a Python (latest version) container, use: 110 | 111 | ``` 112 | $ atk exploit template python DIR_NAME 113 | ``` 114 | 115 | Where `DIR_NAME` is the name of the directory that will be created. 116 | 117 | At the moment, we have templates for `python` (Python plus some common dependencies, such as pwntools; you can add more in `requirements.txt`) and `ubuntu` (Ubuntu with a bash exploit). 118 | You can also specify Docker tags for specific versions (e.g., `python:3.9-slim`, `ubuntu:18.04`, and so forth). 119 | 120 | Now you need to create an "exploit history", i.e., the collection that will contain all the versions of your exploit: 121 | 122 | ``` 123 | $ atk exploit create NAME SERVICE 124 | ``` 125 | 126 | Where `NAME` is the name of your exploit, and `SERVICE` is the target service (you can list them with `atk service ls`). 127 | 128 | Finally, upload your exploit directory: 129 | 130 | ``` 131 | $ atk exploit upload NAME AUTHOR DIR_NAME 132 | ``` 133 | 134 | Where `NAME` is the one you chose earlier, `AUTHOR` is your nickname, and `DIR_NAME` is the exploit directory.
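Putting the template, create and upload steps together, a first deployment might look like this (all names are illustrative):

```
$ atk exploit template python cool-pwn-dir
$ atk exploit create cool-pwn buffalo
$ atk exploit upload cool-pwn nickname cool-pwn-dir
```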
135 | 136 | This will take care of uploading the exploit. 137 | You can check it with `atk exploit ls`. 138 | 139 | Whenever you want to update your exploit to a new version, just issue the `exploit upload` command again. 140 | The attacker will assign progressive numbers to the versions, such as `NAME-1`, `NAME-2`, and so forth. 141 | 142 | Uploaded exploits can be downloaded by anyone with `atk exploit download EXPLOIT_ID OUTPUT_DIR`. 143 | 144 | 145 | ### (De)activating exploits 146 | 147 | You can use `atk exploit activate/deactivate` to activate/deactivate an exploit. 148 | The commands accept a history ID or an exploit ID. 149 | Generally, you should use them with history IDs, and use `exploit switch` (see next section) for switching between different versions. 150 | 151 | When `exploit activate` gets a history ID, it activates the most recent exploit version in the history. 152 | If an exploit is already active, it does nothing. 153 | 154 | When `exploit deactivate` gets a history ID, it deactivates all the exploits in the history. 155 | 156 | 157 | ### Switching exploit versions 158 | 159 | List the exploits to find the exploit ID: 160 | 161 | ``` 162 | $ atk exploit ls 163 | ... 164 | cool-pwn (buffalo) 165 | 2022-06-04 14:59:11 nickname cool-pwn-1 166 | 2022-06-04 18:15:29 nickname cool-pwn-2 167 | 2022-06-04 18:15:31 ACTIVE nickname cool-pwn-3 168 | ... 169 | ``` 170 | 171 | Now switch to the desired version: 172 | 173 | ``` 174 | $ atk exploit switch cool-pwn-1 175 | Deactivate cool-pwn-3 176 | Activate cool-pwn-1 177 | ``` 178 | 179 | We can confirm it worked: 180 | 181 | ``` 182 | $ atk exploit ls 183 | ... 184 | cool-pwn (buffalo) 185 | 2022-06-04 14:59:11 ACTIVE nickname cool-pwn-1 186 | 2022-06-04 18:15:29 nickname cool-pwn-2 187 | 2022-06-04 18:15:31 nickname cool-pwn-3 188 | ... 189 | ``` 190 | 191 | 192 | ### Checking exploit logs 193 | 194 | You can check logs (including stdout/stderr) for a centralized exploit using `atk exploit logs NAME`, where `NAME` is a history or exploit ID. 195 | If you pass an exploit ID, it will show logs for that specific version. 196 | If you pass a history ID, it will show logs for active exploits in the history. 197 | You can pass more than one ID and mix exploit and history IDs. 198 | 199 | By default, it will show logs from the current round. 200 | You can show logs from the last NUM rounds by passing `-n NUM`. 201 | 202 | 203 | ### Target management 204 | 205 | Ataka supports per-history target control. 206 | By default, all targets are enabled. 207 | 208 | To see which targets are enabled for a history, use `atk exploit target ls`. 209 | You can enable/disable targets for a history with `atk exploit target on/off` (they both support `--all` to mean "all known targets"). 210 | 211 | Enabling one or multiple targets also schedules a new job for immediate central execution for the specified target(s), even if the target is already enabled. 212 | You can use this to re-run a central execution if necessary. 213 | 214 | ## Manual flag submission 215 | 216 | You can manually submit flags: 217 | 218 | ``` 219 | $ atk flag submit 'FLAG{foo}' 'FLAG{bar}' ... 220 | ``` 221 | 222 | The flags can be dirty; they will be matched using the flag regex.
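For example, pasting an unfiltered chunk of exploit output works fine:

```
$ atk flag submit 'HTTP/1.1 200 OK ... FLAG{foo} ... trailing garbage'
```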
223 | 224 | If you don't specify flags, `flag submit` will read from stdin until EOF and then submit (i.e., **it is not streaming**): 225 | 226 | ``` 227 | $ echo 'dirtyFLAG{foo}dirtyFLAG{bar}dirty' | atk flag submit 228 | ``` 229 | 230 | ## Emergency mode 231 | 232 | Invoking the CLI as `atk -b/--bypass-tools ...` will bypass the centralized Ataka service and connect directly to the gameserver to get attack targets and flag IDs, and to submit flags. 233 | In this mode, only 234 | - `flag ids`, 235 | - `exploit runlocal` and 236 | - `flag submit` are guaranteed to work. 237 | 238 | A typical emergency scenario will involve running exploits locally with `exploit runlocal --all-targets` (and/or `-N/-T` if finer target control is required). 239 | -------------------------------------------------------------------------------- /ataka-cli: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | docker-compose exec cli python -m ataka.cli "$@" 4 | -------------------------------------------------------------------------------- /ataka/api/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.11-slim 2 | 3 | RUN pip install --no-cache-dir --upgrade pip 4 | 5 | WORKDIR / 6 | 7 | COPY ataka/common/requirements.txt /ataka/common/ 8 | RUN pip install --no-cache-dir -r /ataka/common/requirements.txt 9 | COPY ataka/common /ataka/common 10 | 11 | VOLUME /data/shared 12 | VOLUME /data/exploits 13 | 14 | COPY ataka/api/requirements.txt /ataka/api/ 15 | RUN pip install --no-cache-dir -r /ataka/api/requirements.txt 16 | COPY ataka/api /ataka/api 17 | 18 | CMD [ "bash", "/ataka/common/delayed_start.sh", "--", "python", "-m", "uvicorn", "--host", "0.0.0.0", "ataka.api:app"] 19 | -------------------------------------------------------------------------------- /ataka/api/__init__.py: -------------------------------------------------------------------------------- 1 | from fastapi import FastAPI, Depends, APIRouter 2 | from fastapi.responses import FileResponse 3 | 4 | from ataka.api.routers import targets, exploit_history, exploit, flag, job 5 | from ataka.api.dependencies import get_session, get_channel 6 | from ataka.common import queue, database 7 | 8 | app = FastAPI() 9 | 10 | api = APIRouter(prefix="/api") 11 | api.include_router(targets.router) 12 | api.include_router(exploit_history.router) 13 | api.include_router(exploit.router) 14 | api.include_router(flag.router) 15 | api.include_router(job.router) 16 | 17 | @app.on_event("startup") 18 | async def startup_event(): 19 | await queue.connect() 20 | await database.connect() 21 | 22 | 23 | @app.on_event("shutdown") 24 | async def shutdown_event(): 25 | await queue.disconnect() 26 | await database.disconnect() 27 | 28 | 29 | @app.get("/") 30 | async def get_playercli(): 31 | return FileResponse(path="/data/shared/ataka-player-cli.pyz", filename="ataka-player-cli.pyz") 32 | 33 | app.include_router(api) 34 | -------------------------------------------------------------------------------- /ataka/api/dependencies.py: -------------------------------------------------------------------------------- 1 | from ataka.common import queue, database 2 | 3 | 4 | async def get_session(): 5 | async with database.get_session() as session: 6 | yield session 7 | 8 | 9 | async def get_channel(): 10 | async with queue.get_channel() as channel: 11 | yield channel 12 | --------------------------------------------------------------------------------
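The `get_playercli` route above is what the README's setup step talks to; a one-liner to fetch and install the player-cli might look like this (the host name is an assumption):

```
curl -o ~/.local/bin/atk http://ataka-host:8000/ && chmod +x ~/.local/bin/atk
```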
/ataka/api/requirements.txt: -------------------------------------------------------------------------------- 1 | fastapi==0.100.0 2 | uvicorn==0.23.1 -------------------------------------------------------------------------------- /ataka/api/routers/exploit.py: -------------------------------------------------------------------------------- 1 | import base64 2 | import binascii 3 | import os 4 | import re 5 | import secrets 6 | from datetime import datetime 7 | 8 | from fastapi import APIRouter, Depends, HTTPException 9 | from pydantic import BaseModel 10 | from sqlalchemy import select 11 | from sqlalchemy.exc import IntegrityError, NoResultFound 12 | from sqlalchemy.ext.asyncio import AsyncSession 13 | from sqlalchemy.orm import selectinload 14 | 15 | from ataka.api.dependencies import get_session, get_channel 16 | from ataka.common.database.models import ExploitHistory, Exploit, Job, Execution 17 | 18 | router = APIRouter(prefix="/exploit", tags=['exploit']) 19 | 20 | @router.get("/") 21 | async def exploit_all(session: AsyncSession = Depends(get_session)): 22 | get_exploits = select(Exploit) 23 | exploits = (await session.execute(get_exploits)).scalars() 24 | return [x.to_dict() for x in exploits] 25 | 26 | 27 | class ExploitCreateRequest(BaseModel): 28 | history_id: str 29 | author: str 30 | context: str 31 | 32 | 33 | @router.post("/") 34 | async def exploit_create(req: ExploitCreateRequest, 35 | session: AsyncSession = Depends(get_session), 36 | channel=Depends(get_channel)): 37 | try: 38 | context = base64.b64decode(req.context) 39 | except binascii.Error: 40 | raise HTTPException(400, detail="Invalid Docker context encoding") 41 | 42 | get_history = select(ExploitHistory) \ 43 | .where(ExploitHistory.id == req.history_id) \ 44 | .options(selectinload(ExploitHistory.exploits), 45 | selectinload(ExploitHistory.exclusions)) 46 | try: 47 | history = (await session.execute(get_history)).scalar_one() 48 | except NoResultFound: 49 | raise HTTPException(404, detail="History does not exist") 50 | 51 | max_idx = 0 52 | prefix = req.history_id + '-' 53 | for exploit in history.exploits: 54 | if exploit.id.startswith(prefix): 55 | try: 56 | idx = int(exploit.id[len(prefix):]) 57 | except ValueError: 58 | continue 59 | max_idx = max(max_idx, idx) 60 | exploit_id = f'{prefix}{max_idx + 1}' 61 | docker_name = re.sub(r'[^0-9a-z\-]+', '', exploit_id.lower()) + "-" + secrets.token_hex(8) 62 | 63 | ctx_path = f"/data/exploits/{docker_name}" 64 | excl_opener = lambda path, flags: os.open(path, flags | os.O_EXCL) 65 | try: 66 | with open(ctx_path, "wb", opener=excl_opener) as f: 67 | f.write(context) 68 | except FileExistsError: 69 | raise HTTPException(500, detail="Exploit already exists (file)") 70 | 71 | exploit = Exploit(id=exploit_id, exploit_history_id=req.history_id, docker_name=docker_name, 72 | active=False, author=req.author) 73 | session.add(exploit) 74 | try: 75 | await session.commit() 76 | except IntegrityError: 77 | raise HTTPException(500, detail="Exploit already exists (db)") 78 | 79 | return exploit.to_dict() | {"history": history.to_dict()} 80 | 81 | 82 | class ExploitPatchRequest(BaseModel): 83 | active: bool 84 | 85 | 86 | @router.patch("/{exploit_id}") 87 | async def exploit_patch(exploit_id: str, req: ExploitPatchRequest, 88 | session: AsyncSession = Depends(get_session)): 89 | get_exploit = select(Exploit).where(Exploit.id == exploit_id) 90 | try: 91 | exploit = (await session.execute(get_exploit)).scalar_one() 92 | except NoResultFound: 93 | raise HTTPException(404,
detail="Exploit does not exists") 94 | 95 | exploit.active = req.active 96 | await session.commit() 97 | 98 | return {} 99 | 100 | 101 | @router.get("/{exploit_id}/jobs") 102 | async def exploit_jobs(exploit_id: str, limit: int = 10, after: int = 0, 103 | session: AsyncSession = Depends(get_session)): 104 | get_jobs = select(Job) \ 105 | .where((Job.exploit_id == exploit_id) & (Job.timestamp >= datetime.fromtimestamp(after))) \ 106 | .order_by(Job.timestamp.desc()) \ 107 | .limit(limit) \ 108 | .options( 109 | selectinload(Job.executions).selectinload(Execution.target)) 110 | jobs = (await session.execute(get_jobs)).scalars() 111 | 112 | return [ 113 | { 114 | "job": job.to_dict(), 115 | "executions": [x.to_dict() | {"target": x.target.to_dict()} for x in job.executions], 116 | } 117 | for job in jobs 118 | ] 119 | 120 | 121 | @router.get("/{exploit_id}/download") 122 | async def exploit_download(exploit_id: str, 123 | session: AsyncSession = Depends(get_session)): 124 | get_exploit = select(Exploit).where(Exploit.id == exploit_id) 125 | try: 126 | exploit = (await session.execute(get_exploit)).scalar_one() 127 | except NoResultFound: 128 | raise HTTPException(404, detail="Exploit does not exists") 129 | 130 | ctx_path = f"/data/exploits/{exploit.docker_name}" 131 | with open(ctx_path, "rb") as f: 132 | data = f.read() 133 | 134 | return {"data": base64.b64encode(data).decode()} 135 | -------------------------------------------------------------------------------- /ataka/api/routers/exploit_history.py: -------------------------------------------------------------------------------- 1 | from typing import Set 2 | 3 | from fastapi import APIRouter, Depends, HTTPException 4 | from pydantic import BaseModel 5 | from sqlalchemy import select 6 | from sqlalchemy.exc import IntegrityError, NoResultFound 7 | from sqlalchemy.ext.asyncio import AsyncSession 8 | from sqlalchemy.orm import selectinload, joinedload 9 | 10 | from ataka.api.dependencies import get_session 11 | from ataka.common.database.models import ExploitHistory, Exclusion 12 | 13 | router = APIRouter(prefix="/exploit_history", tags=['exploit_history']) 14 | 15 | 16 | @router.get("/") 17 | async def exploit_history_list(session: AsyncSession = Depends(get_session)): 18 | get_histories = select(ExploitHistory).options(joinedload(ExploitHistory.exploits)) 19 | 20 | histories = (await session.execute(get_histories)).unique().scalars() 21 | return [history.to_dict() | {"exploits": [exploit.to_dict() for exploit in history.exploits]} for history in 22 | histories] 23 | 24 | 25 | class ExploitHistoryCreateRequest(BaseModel): 26 | history_id: str 27 | service: str 28 | 29 | 30 | @router.post("/") 31 | async def exploit_history_create(req: ExploitHistoryCreateRequest, session: AsyncSession = Depends(get_session)): 32 | history = ExploitHistory(id=req.history_id, service=req.service) 33 | session.add(history) 34 | try: 35 | await session.commit() 36 | except IntegrityError: 37 | raise HTTPException(400, detail="History already exists") 38 | 39 | return {} 40 | 41 | 42 | @router.get("/{history_id}") 43 | async def exploit_history_get(history_id: str, session: AsyncSession = Depends(get_session)): 44 | get_history = select(ExploitHistory) \ 45 | .where(ExploitHistory.id == history_id) \ 46 | .options(joinedload(ExploitHistory.exploits)) 47 | try: 48 | history = (await session.execute(get_history)).unique().scalar_one() 49 | except NoResultFound: 50 | raise HTTPException(404, detail="History does not exists") 51 | 52 | return history.to_dict() | 
{"exploits": [exploit.to_dict() for exploit in history.exploits]} 53 | 54 | 55 | @router.get("/{history_id}/exclusions") 56 | async def exploit_history_get_exclusions( 57 | history_id: str, 58 | session: AsyncSession = Depends(get_session) 59 | ): 60 | get_history = select(ExploitHistory) \ 61 | .where(ExploitHistory.id == history_id) \ 62 | .options(selectinload(ExploitHistory.exclusions)) 63 | try: 64 | history = (await session.execute(get_history)).scalar_one() 65 | except NoResultFound: 66 | raise HTTPException(404, detail="History does not exists") 67 | 68 | return [x.target_ip for x in history.exclusions] 69 | 70 | 71 | class ExclusionsPutRequest(BaseModel): 72 | target_ips: Set[str] 73 | 74 | 75 | @router.put("/{history_id}/exclusions") 76 | async def exploit_history_put_exclusions( 77 | history_id: str, 78 | req: ExclusionsPutRequest, 79 | session: AsyncSession = Depends(get_session) 80 | ): 81 | get_history = select(ExploitHistory) \ 82 | .where(ExploitHistory.id == history_id) \ 83 | .options(selectinload(ExploitHistory.exclusions)) 84 | try: 85 | history = (await session.execute(get_history)).scalar_one() 86 | except NoResultFound: 87 | raise HTTPException(404, detail="History does not exists") 88 | 89 | cur_ips = set(x.target_ip for x in history.exclusions) 90 | 91 | # try: 92 | for ip in req.target_ips: 93 | if ip not in cur_ips: 94 | excl = Exclusion(exploit_history_id=history_id, target_ip=ip) 95 | session.add(excl) 96 | for excl in history.exclusions: 97 | if excl.target_ip not in req.target_ips: 98 | await session.delete(excl) 99 | 100 | await session.commit() 101 | 102 | return {} 103 | -------------------------------------------------------------------------------- /ataka/api/routers/flag.py: -------------------------------------------------------------------------------- 1 | from fastapi import APIRouter, Depends 2 | from pydantic import BaseModel 3 | from sqlalchemy import select 4 | from sqlalchemy.ext.asyncio import AsyncSession 5 | from sqlalchemy.orm import joinedload 6 | 7 | from ataka.api.dependencies import get_session, get_channel 8 | from ataka.common.database.models import Flag, Execution 9 | from ataka.common.job_execution_status import JobExecutionStatus 10 | from ataka.common.queue import OutputMessage, OutputQueue 11 | 12 | router = APIRouter(prefix="/flag", tags=['flag']) 13 | 14 | 15 | class FlagSubmission(BaseModel): 16 | flags: str 17 | 18 | 19 | @router.post("/submit") 20 | async def submit_flag(submission: FlagSubmission, session: AsyncSession = Depends(get_session), 21 | channel=Depends(get_channel)): 22 | execution = Execution(status=JobExecutionStatus.FINISHED, stdout=submission.flags, stderr='') 23 | session.add(execution) 24 | await session.commit() 25 | 26 | output_message = OutputMessage(execution_id=execution.id, stdout=True, output=submission.flags) 27 | output_queue = await OutputQueue.get(channel) 28 | await output_queue.send_message(output_message) 29 | 30 | return {"execution_id": execution.id} 31 | 32 | 33 | @router.get("/execution/{execution_id}") 34 | async def get_flags_by_execution(execution_id: int, session: AsyncSession = Depends(get_session)): 35 | get_flags = select(Flag).where(Flag.execution_id == execution_id).options(joinedload(Flag.execution) 36 | .joinedload(Execution.target)) 37 | flags = (await session.execute(get_flags)).unique().scalars() 38 | return [flag.to_dict() | 39 | ({} if flag.execution.target is None else {"target": flag.execution.target.to_dict()}) 40 | for flag in flags] 41 | 
-------------------------------------------------------------------------------- /ataka/api/routers/job.py: -------------------------------------------------------------------------------- 1 | import time 2 | from datetime import datetime 3 | 4 | from fastapi import APIRouter, HTTPException, Depends 5 | from pydantic import BaseModel 6 | from sqlalchemy import select, update 7 | from sqlalchemy.ext.asyncio import AsyncSession 8 | from sqlalchemy.orm import selectinload 9 | 10 | from ataka.api.dependencies import get_session, get_channel 11 | from ataka.common.database.models import Execution, Job 12 | from ataka.common.job_execution_status import JobExecutionStatus 13 | from ataka.common.queue import JobQueue, JobMessage, JobAction, OutputQueue, OutputMessage 14 | 15 | router = APIRouter(prefix="/job", tags=['job']) 16 | 17 | 18 | class NewJob(BaseModel): 19 | targets: list[int] 20 | exploit_id: str | None 21 | manual_id: str | None 22 | timeout: int 23 | 24 | 25 | @router.post("/") 26 | async def post_job(job: NewJob, session: AsyncSession = Depends(get_session), channel=Depends(get_channel)): 27 | if job.exploit_id is not None and job.manual_id is not None: 28 | raise HTTPException(400, detail="Can't supply both exploit_id and manual_id") 29 | elif job.exploit_id is None and job.manual_id is None: 30 | raise HTTPException(400, detail="Need to provide either exploit_id or manual_id") 31 | 32 | if len(job.targets) == 0: 33 | raise HTTPException(400, detail="Need to provide at least one target") 34 | 35 | new_job = Job(status=JobExecutionStatus.QUEUED if job.exploit_id is not None else JobExecutionStatus.RUNNING, 36 | exploit_id=job.exploit_id, manual_id=job.manual_id, 37 | timeout=datetime.fromtimestamp(time.time() + job.timeout)) 38 | 39 | session.add(new_job) 40 | executions = [Execution(job=new_job, target_id=target, status=JobExecutionStatus.RUNNING) for target in job.targets] 41 | session.add_all(executions) 42 | 43 | await session.commit() 44 | 45 | if job.exploit_id is not None: 46 | job_queue = await JobQueue.get(channel) 47 | await job_queue.send_message(JobMessage(action=JobAction.QUEUE, job_id=new_job.id)) 48 | 49 | return new_job.to_dict() | {"executions": [e.to_dict() for e in executions]} 50 | 51 | 52 | class ExecutionResult(BaseModel): 53 | stdout: str 54 | stderr: str 55 | status: JobExecutionStatus = JobExecutionStatus.CANCELLED 56 | 57 | @router.post("/execution/{execution_id}/finish") 58 | async def finish_execution(execution_id: int, execution: ExecutionResult, session: AsyncSession = Depends(get_session), 59 | channel=Depends(get_channel)): 60 | update_execution = update(Execution) \ 61 | .where(Execution.id == execution_id) \ 62 | .values(status=execution.status, 63 | stdout=execution.stdout, 64 | stderr=execution.stderr) 65 | await session.execute(update_execution) 66 | await session.commit() 67 | 68 | stdout_message = OutputMessage(execution_id=execution_id, stdout=True, output=execution.stdout) 69 | stderr_message = OutputMessage(execution_id=execution_id, stdout=False, output=execution.stderr) 70 | output_queue = await OutputQueue.get(channel) 71 | await output_queue.send_message(stdout_message) 72 | await output_queue.send_message(stderr_message) 73 | 74 | return {} 75 | 76 | 77 | @router.post("/{job_id}/finish") 78 | async def finish_job(job_id: int, status: JobExecutionStatus = JobExecutionStatus.FINISHED, session: AsyncSession = Depends(get_session)): 79 | 80 | update_job = update(Job).where(Job.id == job_id).values(status=status)
81 | await session.execute(update_job) 82 | await session.commit() 83 | return {} 84 | 85 | 86 | @router.get("/{job_id}") 87 | async def get_job(job_id: int, session: AsyncSession = Depends(get_session)): 88 | get_job = select(Job) \ 89 | .where(Job.id == job_id) \ 90 | .options(selectinload(Job.executions).selectinload(Execution.target)) 91 | job = (await session.execute(get_job)).scalar_one() 92 | 93 | return job.to_dict() | {"executions": [x.to_dict() | {"target": x.target.to_dict()} for x in job.executions]} 94 | -------------------------------------------------------------------------------- /ataka/api/routers/targets.py: -------------------------------------------------------------------------------- 1 | from fastapi import APIRouter, Depends 2 | from sqlalchemy import select, func 3 | from sqlalchemy.ext.asyncio import AsyncSession 4 | 5 | from ataka.api.dependencies import get_session 6 | from ataka.common.database.models import Target 7 | 8 | router = APIRouter(prefix="/targets", tags=['targets']) 9 | 10 | 11 | @router.get("/") 12 | async def all_targets(service: str = None, session: AsyncSession = Depends(get_session)): 13 | get_max_version = select(func.max(Target.version)) 14 | version = (await session.execute(get_max_version)).scalar_one() 15 | 16 | get_targets = select(Target).where(Target.version == version) 17 | if service is not None: 18 | get_targets = get_targets.where(Target.service == service) 19 | targets = (await session.execute(get_targets)).scalars() 20 | return [x.to_dict() for x in targets] 21 | -------------------------------------------------------------------------------- /ataka/cli/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.11-slim 2 | 3 | RUN pip install --no-cache-dir --upgrade pip 4 | 5 | WORKDIR / 6 | 7 | COPY ataka/common/requirements.txt /ataka/common/ 8 | RUN pip install --no-cache-dir -r /ataka/common/requirements.txt 9 | COPY ataka/common /ataka/common 10 | 11 | COPY ataka/cli/requirements.txt /ataka/cli/ 12 | RUN pip install --no-cache-dir -r /ataka/cli/requirements.txt 13 | COPY ataka/cli /ataka/cli 14 | 15 | CMD [ "sleep", "infinity" ] 16 | 17 | -------------------------------------------------------------------------------- /ataka/cli/__main__.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from functools import update_wrapper 3 | from typing import List 4 | 5 | import typer 6 | from rich import print, box 7 | from rich.table import Table 8 | from rich.live import Live 9 | from pamqp.commands import Basic 10 | from sqlalchemy import select 11 | from sqlalchemy.orm import joinedload 12 | 13 | from ataka.common import queue, database 14 | from ataka.common.database.models import Flag, FlagStatus, Execution, Job 15 | from ataka.common.queue import FlagQueue, FlagMessage, OutputQueue 16 | 17 | 18 | def coro(f): 19 | def wrapper(*args, **kwargs): 20 | return asyncio.run(f(*args, **kwargs)) 21 | 22 | return update_wrapper(wrapper, f) 23 | 24 | 25 | app = typer.Typer() 26 | 27 | 28 | @app.command() 29 | @coro 30 | async def log(): 31 | await queue.connect() 32 | async with database.get_session() as session: 33 | async with queue.get_channel() as channel: 34 | output_queue: OutputQueue = await OutputQueue.get(channel) 35 | async for message in output_queue.wait_for_messages(): 36 | load_execution = select(Execution) \ 37 | .where(Execution.id == message.execution_id) \ 38 | 
.options(joinedload(Execution.job).joinedload(Job.exploit), joinedload(Execution.target)) 39 | 40 | execution = (await session.execute(load_execution)).unique().scalar_one() 41 | timestamp = execution.timestamp.strftime("%Y-%m-%d %H:%M:%S") 42 | 43 | table = Table(box=box.ROUNDED) 44 | table.add_column("Timestamp") 45 | table.add_column("Service") 46 | table.add_column("Target") 47 | table.add_column("Exploit ID") 48 | table.add_column("") 49 | table.add_column("Log") 50 | 51 | is_manual = execution.job is None or execution.target is None 52 | error_tag = "[bold red]ERR[/bold red]" if not message.stdout else " " 53 | 54 | for line in message.output.strip().split("\n"): 55 | if is_manual: 56 | table.add_row(timestamp, 'MANUAL', '', '', '', line) 57 | else: 58 | if execution.job.exploit_id is None: 59 | table.add_row(timestamp, execution.target.service, execution.target.ip, f'LOCAL {execution.job.manual_id}', error_tag, line) 60 | else: 61 | table.add_row(timestamp, execution.target.service, execution.target.ip, f'{execution.job.exploit.id} ({execution.job.exploit.author})', error_tag, line) 62 | print(table) 63 | 64 | await queue.disconnect() 65 | 66 | 67 | @app.command() 68 | @coro 69 | async def submit_flag(flag: List[str]): 70 | await database.connect() 71 | async with database.get_session() as session: 72 | flag_objects = [Flag(flag=f, status=FlagStatus.QUEUED) for f in flag] 73 | session.add_all(flag_objects) 74 | await session.commit() 75 | 76 | messages = [FlagMessage(flag_id=f.id, flag=f.flag) for f in flag_objects] 77 | await database.disconnect() 78 | 79 | await queue.connect() 80 | async with queue.get_channel() as channel: 81 | flag_queue: FlagQueue = await FlagQueue.get(channel) 82 | for message in messages: 83 | result = await flag_queue.send_message(message) 84 | if isinstance(result, Basic.Ack): 85 | print(f"Submitted {message.flag}") 86 | else: 87 | print(result) 88 | await queue.disconnect() 89 | 90 | 91 | app() 92 | -------------------------------------------------------------------------------- /ataka/cli/requirements.txt: -------------------------------------------------------------------------------- 1 | typer[all] -------------------------------------------------------------------------------- /ataka/common/database/__init__.py: -------------------------------------------------------------------------------- 1 | from contextlib import asynccontextmanager 2 | import traceback 3 | 4 | from .config import engine, Base, async_session 5 | 6 | 7 | async def connect(): 8 | async with engine.begin() as conn: 9 | await conn.run_sync(Base.metadata.create_all) 10 | 11 | 12 | async def disconnect(): 13 | await engine.dispose() 14 | 15 | 16 | @asynccontextmanager 17 | async def get_session(): 18 | try: 19 | async with async_session() as session: 20 | yield session 21 | except Exception as e: 22 | traceback.print_exception(e) 23 | raise e 24 | -------------------------------------------------------------------------------- /ataka/common/database/config.py: -------------------------------------------------------------------------------- 1 | import os 2 | 3 | from sqlalchemy.ext.asyncio import create_async_engine, async_sessionmaker 4 | from sqlalchemy.orm import declarative_base 5 | 6 | user = os.environ['POSTGRES_USER'] 7 | password = os.environ['POSTGRES_PASSWORD'] 8 | 9 | engine = create_async_engine(f"postgresql+asyncpg://{user}:{password}@postgres/{user}") #, echo=True) 10 | async_session = async_sessionmaker(engine, expire_on_commit=False) 11 | Base = declarative_base() 12 | 
13 | 14 | class JsonBase: 15 | def to_dict(self): 16 | return {c.name: self.__dict__[c.name] if c.name in self.__dict__ else None for c in self.__table__.columns} 17 | 18 | @classmethod 19 | def from_dict(cls, dict): 20 | return cls(**dict) 21 | -------------------------------------------------------------------------------- /ataka/common/database/models/__init__.py: -------------------------------------------------------------------------------- 1 | from .exclusion import * 2 | from .execution import * 3 | from .exploit import * 4 | from .exploit_history import * 5 | from .flag import * 6 | from .job import * 7 | from .target import * 8 | -------------------------------------------------------------------------------- /ataka/common/database/models/exclusion.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, String, ForeignKey 2 | from sqlalchemy.orm import relationship 3 | 4 | from ..config import Base, JsonBase 5 | 6 | 7 | class Exclusion(Base, JsonBase): 8 | __tablename__ = "exclusions" 9 | 10 | exploit_history_id = Column(String, ForeignKey("exploit_histories.id"), 11 | primary_key=True) 12 | target_ip = Column(String, primary_key=True) 13 | 14 | exploit_history = relationship("ExploitHistory", back_populates="exclusions") 15 | -------------------------------------------------------------------------------- /ataka/common/database/models/execution.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, Integer, ForeignKey, UnicodeText, Enum, DateTime, func 2 | from sqlalchemy.orm import relationship 3 | 4 | from ataka.common.job_execution_status import JobExecutionStatus 5 | from ..config import Base, JsonBase 6 | 7 | 8 | class Execution(Base, JsonBase): 9 | __tablename__ = "executions" 10 | 11 | id = Column(Integer, primary_key=True) 12 | job_id = Column(Integer, ForeignKey("jobs.id"), index=True) 13 | target_id = Column(Integer, ForeignKey("targets.id")) 14 | status = Column(Enum(JobExecutionStatus)) 15 | stdout = Column(UnicodeText) 16 | stderr = Column(UnicodeText) 17 | timestamp = Column(DateTime(timezone=True), server_default=func.now()) 18 | 19 | job = relationship("Job", back_populates="executions") 20 | target = relationship("Target") 21 | flags = relationship("Flag", back_populates="execution") 22 | -------------------------------------------------------------------------------- /ataka/common/database/models/exploit.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, String, DateTime, func, ForeignKey, Boolean 2 | from sqlalchemy.orm import relationship 3 | 4 | from ..config import Base, JsonBase 5 | 6 | 7 | class Exploit(Base, JsonBase): 8 | __tablename__ = "exploits" 9 | 10 | id = Column(String, primary_key=True) 11 | exploit_history_id = Column(String, ForeignKey("exploit_histories.id")) 12 | docker_name = Column(String, unique=True, index=True) 13 | active = Column(Boolean) 14 | author = Column(String) 15 | timestamp = Column(DateTime(timezone=True), server_default=func.now()) 16 | 17 | exploit_history = relationship("ExploitHistory", back_populates="exploits") 18 | -------------------------------------------------------------------------------- /ataka/common/database/models/exploit_history.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, String 2 | from sqlalchemy.orm import relationship 3 | 4 | from 
..config import Base, JsonBase 5 | 6 | 7 | class ExploitHistory(Base, JsonBase): 8 | __tablename__ = "exploit_histories" 9 | 10 | id = Column(String, primary_key=True) 11 | service = Column(String) 12 | 13 | exploits = relationship("Exploit", back_populates="exploit_history") 14 | exclusions = relationship("Exclusion", back_populates="exploit_history") 15 | -------------------------------------------------------------------------------- /ataka/common/database/models/flag.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, Integer, String, DateTime, func, Enum, ForeignKey, Boolean 2 | from sqlalchemy.orm import relationship 3 | 4 | from ..config import Base, JsonBase 5 | 6 | from ataka.common.flag_status import FlagStatus 7 | 8 | class Flag(Base, JsonBase): 9 | __tablename__ = "flags" 10 | 11 | id = Column(Integer, primary_key=True) 12 | flag = Column(String, index=True) 13 | status = Column(Enum(FlagStatus)) 14 | timestamp = Column(DateTime(timezone=True), server_default=func.now()) 15 | 16 | execution_id = Column(Integer, ForeignKey("executions.id"), index=True) 17 | stdout = Column(Boolean) 18 | start = Column(Integer) 19 | end = Column(Integer) 20 | 21 | execution = relationship("Execution", back_populates="flags") 22 | -------------------------------------------------------------------------------- /ataka/common/database/models/job.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, Integer, DateTime, func, ForeignKey, Enum, String 2 | from sqlalchemy.orm import relationship 3 | 4 | from ataka.common.job_execution_status import JobExecutionStatus 5 | from ..config import Base, JsonBase 6 | 7 | 8 | class Job(Base, JsonBase): 9 | __tablename__ = "jobs" 10 | 11 | id = Column(Integer, primary_key=True) 12 | exploit_id = Column(String, ForeignKey("exploits.id"), index=True) 13 | manual_id = Column(String, index=True) 14 | status = Column(Enum(JobExecutionStatus), index=True) 15 | timeout = Column(DateTime(timezone=True)) 16 | timestamp = Column(DateTime(timezone=True), server_default=func.now()) 17 | 18 | exploit = relationship("Exploit") 19 | executions = relationship("Execution", back_populates="job") 20 | -------------------------------------------------------------------------------- /ataka/common/database/models/target.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import Column, Integer, String, DateTime, func, Index, Sequence 2 | 3 | from ..config import Base, JsonBase 4 | 5 | 6 | class Target(Base, JsonBase): 7 | __tablename__ = "targets" 8 | 9 | id = Column(Integer, primary_key=True) 10 | version = Column(Integer, index=True) 11 | ip = Column(String) 12 | service = Column(String, index=True) 13 | extra = Column(String) 14 | timestamp = Column(DateTime(timezone=True), server_default=func.now()) 15 | 16 | version_seq = Sequence("version_seq", metadata=Base.metadata) 17 | 18 | __table_args__ = (Index('service_version_index', "service", "version"), ) 19 | -------------------------------------------------------------------------------- /ataka/common/delayed_start.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Waiting for Postgres to start ..." 4 | bash ataka/common/wait-for-it.sh postgres:5432 -t 0 5 | echo "Postgres is running!" 6 | 7 | echo "" 8 | echo "Waiting for RabbitMQ to start ..." 
9 | bash ataka/common/wait-for-it.sh rabbitmq:5672 -t 0 10 | echo "RabbitMQ is running!" 11 | 12 | shift; # skip shell script and first -- 13 | echo "" 14 | echo "Starting now $@ ..." 15 | exec "$@" 16 | -------------------------------------------------------------------------------- /ataka/common/flag_status.py: -------------------------------------------------------------------------------- 1 | import enum 2 | 3 | 4 | class FlagStatus(str, enum.Enum): 5 | UNKNOWN = 'unknown' 6 | 7 | # Flag yielded points 8 | OK = 'ok' 9 | 10 | # Flag is in queue for submission 11 | QUEUED = 'queued' 12 | 13 | # Flag is currently being submitted 14 | PENDING = 'pending' 15 | 16 | # We already submitted this flag and the submission system tells us so 17 | DUPLICATE = 'duplicate' 18 | 19 | # We didn't submit this flag because it was a duplicate 20 | DUPLICATE_NOT_SUBMITTED = "duplicate_not_submitted" 21 | 22 | # Something is wrong with our submitter or the flag service 23 | ERROR = 'error' 24 | 25 | # The flag belongs to the NOP team 26 | NOP = 'NOP' 27 | 28 | # We tried to submit our own flag and the submission system let us know 29 | OWNFLAG = 'ownflag' 30 | 31 | # The flag is no longer active. This is used when flags expire 32 | INACTIVE = 'inactive' 33 | 34 | # The flag fits the format and could be sent to the submission system, but the 35 | # submission system told us it is invalid 36 | INVALID = 'invalid' 37 | 38 | DuplicatesDontResubmitFlagStatus = {FlagStatus.OK, FlagStatus.QUEUED, FlagStatus.PENDING, FlagStatus.DUPLICATE, 39 | FlagStatus.NOP, FlagStatus.OWNFLAG, FlagStatus.INACTIVE, FlagStatus.INVALID} -------------------------------------------------------------------------------- /ataka/common/job_execution_status.py: -------------------------------------------------------------------------------- 1 | import enum 2 | 3 | 4 | class JobExecutionStatus(str, enum.Enum): 5 | QUEUED = "queued" 6 | RUNNING = "running" 7 | FINISHED = "finished" 8 | FAILED = "failed" 9 | TIMEOUT = "timeout" 10 | CANCELLED = "cancelled" 11 | -------------------------------------------------------------------------------- /ataka/common/queue/__init__.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | from contextlib import asynccontextmanager 4 | 5 | import aio_pika 6 | from aio_pika import RobustConnection 7 | 8 | from .flag import * 9 | from .job import * 10 | from .output import * 11 | 12 | connection: RobustConnection = None 13 | 14 | 15 | async def connect(): 16 | global connection 17 | connection = await aio_pika.connect_robust(host="rabbitmq", login=os.environ["RABBITMQ_USER"], 18 | password=os.environ["RABBITMQ_PASSWORD"], 19 | loop=asyncio.get_running_loop()) 20 | 21 | 22 | async def disconnect(): 23 | global connection 24 | await connection.close() 25 | 26 | 27 | @asynccontextmanager 28 | async def get_channel(): 29 | async with await connection.channel() as channel: 30 | yield channel 31 | -------------------------------------------------------------------------------- /ataka/common/queue/flag.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | 3 | from .queue import WorkQueue, Message 4 | 5 | 6 | @dataclass 7 | class FlagMessage(Message): 8 | flag_id: int 9 | flag: str 10 | 11 | 12 | class FlagQueue(WorkQueue): 13 | queue_name = "flag" 14 | message_type = FlagMessage 15 | --------------------------------------------------------------------------------
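A minimal sketch of how these queue classes are used together (mirroring `ataka/cli/__main__.py`; the flag values are illustrative):

```python
import asyncio

from ataka.common import queue
from ataka.common.queue import FlagMessage, FlagQueue


async def main():
    await queue.connect()
    async with queue.get_channel() as channel:
        # WorkQueue publishes via the default exchange to the durable "flag"
        # queue (see queue.py below), where the submitter picks it up.
        flag_queue = await FlagQueue.get(channel)
        await flag_queue.send_message(FlagMessage(flag_id=1, flag="FLAG{example}"))
    await queue.disconnect()

asyncio.run(main())
```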
/ataka/common/queue/job.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | from enum import Enum 3 | 4 | from .queue import WorkQueue, Message 5 | 6 | 7 | class JobAction(str, Enum): 8 | QUEUE = "queue" 9 | CANCEL = "cancel" 10 | 11 | 12 | @dataclass 13 | class JobMessage(Message): 14 | action: JobAction 15 | job_id: int 16 | 17 | 18 | class JobQueue(WorkQueue): 19 | queue_name = "job" 20 | message_type = JobMessage 21 | -------------------------------------------------------------------------------- /ataka/common/queue/multiplexed_queue.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | from typing import Callable, Any 3 | 4 | import aiormq 5 | from aio_pika import Queue, IncomingMessage 6 | from aio_pika.queue import ConsumerTag, QueueIterator 7 | 8 | 9 | class MultiplexedQueue(Queue): 10 | def __init__(self, queue: Queue): 11 | self._queue = queue 12 | self._consuming = False 13 | self._consumers = 0 14 | self._callbacks = {} 15 | self._consumer_tag = None 16 | 17 | async def consume( 18 | self, 19 | callback: Callable[[IncomingMessage], Any], 20 | no_ack: bool = False, 21 | exclusive: bool = False, 22 | arguments: dict = None, 23 | consumer_tag=None, 24 | timeout=None, 25 | ) -> ConsumerTag: 26 | if self._consumer_tag is None: 27 | self._consumer_tag = consumer_tag 28 | 29 | self._consumers += 1 30 | 31 | if not self._consuming: 32 | self._consuming = True 33 | self._consumer_tag = await self._queue.consume(self.call_consumers, no_ack, exclusive, arguments, 34 | self._consumer_tag, timeout) 35 | 36 | tag = f"{self._consumer_tag}-{str(self._consumers)}" 37 | self._callbacks[tag] = callback 38 | return tag 39 | 40 | async def cancel( 41 | self, consumer_tag: ConsumerTag, timeout=None, nowait: bool = False 42 | ) -> aiormq.spec.Basic.CancelOk: 43 | self._consumers -= 1 44 | 45 | del self._callbacks[consumer_tag] 46 | return aiormq.spec.Basic.CancelOk(consumer_tag) 47 | 48 | async def call_consumers(self, message: IncomingMessage): 49 | await asyncio.gather(*[callback(message) for callback in self._callbacks.values()]) 50 | 51 | def __getattr__(self, item): 52 | return getattr(self._queue, item) 53 | 54 | def __aiter__(self): 55 | return self.iterator() 56 | 57 | def iterator(self, *args, **kwargs): 58 | return QueueIterator(self, *args, **kwargs) 59 | -------------------------------------------------------------------------------- /ataka/common/queue/output.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | from typing import Optional 3 | 4 | from .queue import Message, PubSubQueue 5 | 6 | 7 | @dataclass 8 | class OutputMessage(Message): 9 | execution_id: int 10 | stdout: bool 11 | output: str 12 | 13 | 14 | class OutputQueue(PubSubQueue): 15 | queue_name = "output" 16 | message_type = OutputMessage 17 | -------------------------------------------------------------------------------- /ataka/common/queue/queue.py: -------------------------------------------------------------------------------- 1 | import json 2 | from abc import ABC, abstractmethod 3 | from dataclasses import dataclass, asdict 4 | 5 | import aio_pika 6 | import asyncio 7 | 8 | from ataka.common.queue.multiplexed_queue import MultiplexedQueue 9 | 10 | 11 | @dataclass 12 | class Message: 13 | def to_bytes(self) -> bytes: 14 | return json.dumps(self.to_dict()).encode() 15 | 16 | def to_dict(self): 17 | return
asdict(self) 18 | 19 | @classmethod 20 | def from_bytes(cls, body: bytes): 21 | return cls(**json.loads(body.decode())) 22 | 23 | 24 | # TODO: add proper generic type hints once python 3.11 is an option 25 | class Queue(ABC): 26 | queue_name: str 27 | message_type: type[Message] 28 | 29 | _channel: aio_pika.Channel 30 | 31 | @classmethod 32 | async def get(cls, channel): 33 | self = cls() 34 | self._channel = channel 35 | return self 36 | 37 | @abstractmethod 38 | async def _get_exchange(self) -> aio_pika.Exchange: 39 | pass 40 | 41 | @abstractmethod 42 | async def _get_queue(self) -> aio_pika.Queue: 43 | pass 44 | 45 | # send a message 46 | async def send_message(self, message): 47 | exchange = await self._get_exchange() 48 | return await exchange.publish(aio_pika.Message(body=message.to_bytes()), 49 | routing_key=self.queue_name) 50 | 51 | # a generator that yields messages as they are received (endless loop) 52 | async def wait_for_messages(self, **kwargs): 53 | async with (await self._get_queue()).iterator(**kwargs) as queue_iter: 54 | async for message in queue_iter: 55 | async with message.process(ignore_processed=True): 56 | yield self.message_type.from_bytes(message.body) 57 | 58 | async def clear(self): 59 | queue = await self._get_queue() 60 | return await queue.purge() 61 | 62 | 63 | class PubSubQueue(Queue): 64 | _exchange: aio_pika.Exchange = None 65 | _queue: aio_pika.Queue = None 66 | 67 | async def _get_exchange(self) -> aio_pika.Exchange: 68 | if self._exchange is None: 69 | self._exchange = await self._channel.declare_exchange(self.queue_name, aio_pika.ExchangeType.FANOUT) 70 | return self._exchange 71 | 72 | async def _get_queue(self) -> aio_pika.Queue: 73 | if self._queue is None: 74 | self._queue = MultiplexedQueue(await self._channel.declare_queue(exclusive=True, auto_delete=True)) 75 | await self._queue.bind(await self._get_exchange()) 76 | return self._queue 77 | 78 | 79 | class WorkQueue(Queue): 80 | _queue: aio_pika.Queue = None 81 | 82 | async def _get_exchange(self) -> aio_pika.Exchange: 83 | return self._channel.default_exchange 84 | 85 | async def _get_queue(self) -> aio_pika.Queue: 86 | if self._queue is None: 87 | self._queue = await self._channel.declare_queue(name=self.queue_name, durable=True) 88 | return self._queue 89 | -------------------------------------------------------------------------------- /ataka/common/requirements.txt: -------------------------------------------------------------------------------- 1 | aio-pika==9.1.5 2 | asyncpg==0.28.0 3 | SQLAlchemy==2.0.19 -------------------------------------------------------------------------------- /ataka/common/wait-for-it.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | # Use this script to test if a given TCP host/port are available 3 | 4 | WAITFORIT_cmdname=${0##*/} 5 | 6 | echoerr() { if [[ $WAITFORIT_QUIET -ne 1 ]]; then echo "$@" 1>&2; fi } 7 | 8 | usage() 9 | { 10 | cat << USAGE >&2 11 | Usage: 12 | $WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args] 13 | -h HOST | --host=HOST Host or IP under test 14 | -p PORT | --port=PORT TCP port under test 15 | Alternatively, you specify the host and port as host:port 16 | -s | --strict Only execute subcommand if the test succeeds 17 | -q | --quiet Don't output any status messages 18 | -t TIMEOUT | --timeout=TIMEOUT 19 | Timeout in seconds, zero for no timeout 20 | -- COMMAND ARGS Execute command with args after the test finishes 21 | USAGE 22 | exit 1 23
| } 24 | 25 | wait_for() 26 | { 27 | if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then 28 | echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT" 29 | else 30 | echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout" 31 | fi 32 | WAITFORIT_start_ts=$(date +%s) 33 | while : 34 | do 35 | if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then 36 | nc -z $WAITFORIT_HOST $WAITFORIT_PORT 37 | WAITFORIT_result=$? 38 | else 39 | (echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1 40 | WAITFORIT_result=$? 41 | fi 42 | if [[ $WAITFORIT_result -eq 0 ]]; then 43 | WAITFORIT_end_ts=$(date +%s) 44 | echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds" 45 | break 46 | fi 47 | sleep 1 48 | done 49 | return $WAITFORIT_result 50 | } 51 | 52 | wait_for_wrapper() 53 | { 54 | # In order to support SIGINT during timeout: http://unix.stackexchange.com/a/57692 55 | if [[ $WAITFORIT_QUIET -eq 1 ]]; then 56 | timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --quiet --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT & 57 | else 58 | timeout $WAITFORIT_BUSYTIMEFLAG $WAITFORIT_TIMEOUT $0 --child --host=$WAITFORIT_HOST --port=$WAITFORIT_PORT --timeout=$WAITFORIT_TIMEOUT & 59 | fi 60 | WAITFORIT_PID=$! 61 | trap "kill -INT -$WAITFORIT_PID" INT 62 | wait $WAITFORIT_PID 63 | WAITFORIT_RESULT=$? 64 | if [[ $WAITFORIT_RESULT -ne 0 ]]; then 65 | echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT" 66 | fi 67 | return $WAITFORIT_RESULT 68 | } 69 | 70 | # process arguments 71 | while [[ $# -gt 0 ]] 72 | do 73 | case "$1" in 74 | *:* ) 75 | WAITFORIT_hostport=(${1//:/ }) 76 | WAITFORIT_HOST=${WAITFORIT_hostport[0]} 77 | WAITFORIT_PORT=${WAITFORIT_hostport[1]} 78 | shift 1 79 | ;; 80 | --child) 81 | WAITFORIT_CHILD=1 82 | shift 1 83 | ;; 84 | -q | --quiet) 85 | WAITFORIT_QUIET=1 86 | shift 1 87 | ;; 88 | -s | --strict) 89 | WAITFORIT_STRICT=1 90 | shift 1 91 | ;; 92 | -h) 93 | WAITFORIT_HOST="$2" 94 | if [[ $WAITFORIT_HOST == "" ]]; then break; fi 95 | shift 2 96 | ;; 97 | --host=*) 98 | WAITFORIT_HOST="${1#*=}" 99 | shift 1 100 | ;; 101 | -p) 102 | WAITFORIT_PORT="$2" 103 | if [[ $WAITFORIT_PORT == "" ]]; then break; fi 104 | shift 2 105 | ;; 106 | --port=*) 107 | WAITFORIT_PORT="${1#*=}" 108 | shift 1 109 | ;; 110 | -t) 111 | WAITFORIT_TIMEOUT="$2" 112 | if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi 113 | shift 2 114 | ;; 115 | --timeout=*) 116 | WAITFORIT_TIMEOUT="${1#*=}" 117 | shift 1 118 | ;; 119 | --) 120 | shift 121 | WAITFORIT_CLI=("$@") 122 | break 123 | ;; 124 | --help) 125 | usage 126 | ;; 127 | *) 128 | echoerr "Unknown argument: $1" 129 | usage 130 | ;; 131 | esac 132 | done 133 | 134 | if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then 135 | echoerr "Error: you need to provide a host and port to test." 136 | usage 137 | fi 138 | 139 | WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15} 140 | WAITFORIT_STRICT=${WAITFORIT_STRICT:-0} 141 | WAITFORIT_CHILD=${WAITFORIT_CHILD:-0} 142 | WAITFORIT_QUIET=${WAITFORIT_QUIET:-0} 143 | 144 | # Check to see if timeout is from busybox? 
145 | WAITFORIT_TIMEOUT_PATH=$(type -p timeout) 146 | WAITFORIT_TIMEOUT_PATH=$(realpath $WAITFORIT_TIMEOUT_PATH 2>/dev/null || readlink -f $WAITFORIT_TIMEOUT_PATH) 147 | 148 | WAITFORIT_BUSYTIMEFLAG="" 149 | if [[ $WAITFORIT_TIMEOUT_PATH =~ "busybox" ]]; then 150 | WAITFORIT_ISBUSY=1 151 | # Check if busybox timeout uses -t flag 152 | # (recent Alpine versions don't support -t anymore) 153 | if timeout &>/dev/stdout | grep -q -e '-t '; then 154 | WAITFORIT_BUSYTIMEFLAG="-t" 155 | fi 156 | else 157 | WAITFORIT_ISBUSY=0 158 | fi 159 | 160 | if [[ $WAITFORIT_CHILD -gt 0 ]]; then 161 | wait_for 162 | WAITFORIT_RESULT=$? 163 | exit $WAITFORIT_RESULT 164 | else 165 | if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then 166 | wait_for_wrapper 167 | WAITFORIT_RESULT=$? 168 | else 169 | wait_for 170 | WAITFORIT_RESULT=$? 171 | fi 172 | fi 173 | 174 | if [[ $WAITFORIT_CLI != "" ]]; then 175 | if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then 176 | echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess" 177 | exit $WAITFORIT_RESULT 178 | fi 179 | exec "${WAITFORIT_CLI[@]}" 180 | else 181 | exit $WAITFORIT_RESULT 182 | fi 183 | -------------------------------------------------------------------------------- /ataka/ctfcode/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.11-slim 2 | 3 | RUN pip install --no-cache-dir --upgrade pip 4 | 5 | WORKDIR / 6 | 7 | COPY ataka/common/requirements.txt /ataka/common/ 8 | RUN pip install --no-cache-dir -r /ataka/common/requirements.txt 9 | COPY ataka/common /ataka/common 10 | 11 | VOLUME /ataka/ctfconfig 12 | VOLUME /ataka/player-cli 13 | VOLUME /data/shared 14 | 15 | COPY ataka/ctfcode/requirements.txt /ataka/ctfcode/ 16 | RUN pip install --no-cache-dir -r /ataka/ctfcode/requirements.txt 17 | COPY ataka/ctfcode /ataka/ctfcode 18 | 19 | CMD [ "bash", "/ataka/common/delayed_start.sh", "--", "python", "-m", "ataka.ctfcode" ] 20 | -------------------------------------------------------------------------------- /ataka/ctfcode/__main__.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import os 3 | import signal 4 | 5 | from ataka.common import queue, database 6 | from .ctf import CTF 7 | from .flags import Flags 8 | from .target_job_generator import TargetJobGenerator 9 | 10 | 11 | async def main(): 12 | # initialize connections 13 | await queue.connect() 14 | await database.connect() 15 | 16 | # load ctf-specific code 17 | ctf = CTF(os.environ["CTF"]) 18 | flags = Flags(ctf) 19 | target_job_generator = TargetJobGenerator(ctf) 20 | 21 | loop = asyncio.get_event_loop() 22 | loop.add_signal_handler(signal.SIGUSR1, ctf.reload) 23 | 24 | flags_task = flags.poll_and_submit_flags() 25 | output_task = flags.poll_and_parse_output() 26 | target_job_task = target_job_generator.run_loop() 27 | 28 | await asyncio.gather(flags_task, output_task, target_job_task) 29 | 30 | 31 | asyncio.run(main()) 32 | -------------------------------------------------------------------------------- /ataka/ctfcode/ctf.py: -------------------------------------------------------------------------------- 1 | import time 2 | from importlib import import_module, reload 3 | import logging 4 | import traceback 5 | import functools 6 | from subprocess import Popen 7 | 8 | import exrex 9 | 10 | from ataka.common.flag_status import FlagStatus 11 | 12 | 13 | def catch(default=None): 14 | def decorator(func): 15 | @functools.wraps(func) 16 | def wrapper(*args, **kwargs): 17 
| try: 18 | return func(*args, **kwargs) 19 | except Exception as e: 20 | logging.error("Error occurred while accessing CTFCode") 21 | logging.error(traceback.format_exc()) 22 | return default 23 | 24 | return wrapper 25 | 26 | return decorator 27 | 28 | 29 | def expect(validator=lambda *args, **kwargs: True): 30 | def decorator(func): 31 | @functools.wraps(func) 32 | def wrapper(*args, **kwargs): 33 | result = func(*args, **kwargs) 34 | if not validator(result, *args, **kwargs): 35 | logging.error(f"CTF Config returned unexpected result for " 36 | f"{func.__name__}({', '.join([repr(x) for x in args] + [str(k) + '=' + repr(v) for k, v in kwargs.items()])})") 37 | logging.error(result) 38 | return result 39 | 40 | return wrapper 41 | 42 | return decorator 43 | 44 | 45 | # A wrapper that loads the specified ctf by name, and wraps the api with support 46 | # for hot-reload and provides a few convenience functions 47 | class CTF: 48 | def __init__(self, name: str): 49 | self._name = name 50 | self._module = import_module(f"ataka.ctfconfig.{name}") 51 | self._self_test() 52 | self.package_player_cli() 53 | 54 | def package_player_cli(self): 55 | logging.info("Packaging player-cli") 56 | Popen(['/ataka/player-cli/package_player_cli.sh']) 57 | 58 | @catch(default=None) 59 | def reload(self): 60 | self.package_player_cli() 61 | 62 | logging.info("Reloading ctf code") 63 | reload(self._module) 64 | self._self_test() 65 | 66 | @catch(default=[]) 67 | @expect(validator=lambda x, self: type(x) == list and all([type(s) == str for s in x])) 68 | def get_runlocal_targets(self): 69 | return self._module.RUNLOCAL_TARGETS 70 | 71 | @catch(default=set()) 72 | @expect(validator=lambda x, self: type(x) == set and all([type(s) == str for s in x])) 73 | def get_static_exclusions(self): 74 | return self._module.STATIC_EXCLUSIONS 75 | 76 | @catch(default=60) 77 | @expect(validator=lambda x, self: type(x) == int and 0 < x < 14400) 78 | def get_round_time(self): 79 | return self._module.ROUND_TIME 80 | 81 | @catch(default=(r".*", 0)) 82 | @expect(validator=lambda x, self: type(x) == tuple and len(x) == 2 and type(x[0]) == str and type(x[1]) == int and 83 | 0 <= x[1] < 100) 84 | def get_flag_regex(self): 85 | return self._module.FLAG_REGEX 86 | 87 | @catch(default=100) 88 | @expect(validator=lambda x, self: type(x) == int and 0 < x < 25000) 89 | def get_flag_batchsize(self): 90 | return self._module.FLAG_BATCHSIZE 91 | 92 | @catch(default=1) 93 | @expect(validator=lambda x, self: (type(x) == int or type(x) == float) and 0 < x < self.get_round_time()) 94 | def get_flag_ratelimit(self): 95 | return self._module.FLAG_RATELIMIT 96 | 97 | @catch(default=1577840400) 98 | @expect(validator=lambda x, self: type(x) == int and (abs(time.time() - x) // 60 // 60 // 24) < 30) 99 | def get_start_time(self): 100 | return self._module.START_TIME 101 | 102 | def get_cur_tick(self): 103 | running_time = time.time() - self.get_start_time() 104 | return running_time // self.get_round_time() 105 | 106 | def get_next_tick_start(self): 107 | return self.get_start_time() + self.get_round_time() * (self.get_cur_tick() + 1) 108 | 109 | @catch(default={}) 110 | @expect(validator=lambda x, self: type(x) == dict and all( 111 | [type(service) == str and type(item) == list and all( 112 | ['ip' in entry and 'extra' in entry and type(entry['ip']) == str and type(entry['extra']) == str 113 | for entry in item] 114 | ) for service, item in x.items()] 115 | )) 116 | def get_targets(self): 117 | return self._module.get_targets() 118 | 119 |
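    # (Hypothetical sketch, not part of the module: one more validated getter
    # would follow the exact same pattern. @catch maps any exception raised by
    # the hot-reloaded ctfconfig to `default`; @expect logs, but still returns,
    # values that fail the shape check. `FOO` and its bounds are invented.)
    #
    #   @catch(default=0)
    #   @expect(validator=lambda x, self: type(x) == int and x >= 0)
    #   def get_foo(self):
    #       return self._module.FOO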
@catch(default=[]) 120 | @expect(validator=lambda x, self, flags: type(x) == list and len(x) == len(flags) and all( 121 | [type(status) == FlagStatus for status in x] 122 | )) 123 | def submit_flags(self, flags): 124 | return self._module.submit_flags(flags) 125 | 126 | def _self_test(self): 127 | logging.info("=" * 40) 128 | logging.info(f"Running ctf config self test for config {self._name}") 129 | try: 130 | self.get_runlocal_targets() 131 | self.get_static_exclusions() 132 | self.get_round_time() 133 | 134 | regex, group = self.get_flag_regex() 135 | batchsize = self.get_flag_batchsize() 136 | 137 | self.get_flag_ratelimit() 138 | self.get_start_time() 139 | 140 | self.get_targets() 141 | 142 | fake_flag_count = min(batchsize, 10) 143 | logging.info(f"Submitting {fake_flag_count} fake flags...") 144 | fake_flags = [exrex.getone(regex) for _ in range(fake_flag_count)] 145 | status_list = self.submit_flags(fake_flags) 146 | for flag, status in zip(fake_flags, status_list): 147 | logging.info(f" {flag} -> {status}") 148 | logging.info("Test finished (if you only see flag submission results, everything is good)") 149 | except Exception as e: 150 | logging.error(f"Self-Test FAILED") 151 | logging.error(traceback.format_exc()) 152 | logging.info("=" * 40) 153 | -------------------------------------------------------------------------------- /ataka/ctfcode/flags.py: -------------------------------------------------------------------------------- 1 | import re 2 | import time 3 | from asyncio import TimeoutError, sleep 4 | 5 | import asyncio 6 | from sqlalchemy import update, select 7 | 8 | from ataka.common import database 9 | from ataka.common.database.models import Flag 10 | from ataka.common.flag_status import FlagStatus, DuplicatesDontResubmitFlagStatus 11 | from ataka.common.queue import FlagQueue, get_channel, FlagMessage, OutputQueue 12 | from .ctf import CTF 13 | 14 | 15 | class Flags: 16 | def __init__(self, ctf: CTF): 17 | self._ctf = ctf 18 | self._flag_cache = {} 19 | 20 | async def poll_and_submit_flags(self): 21 | async with get_channel() as channel: 22 | flag_queue = await FlagQueue.get(channel) 23 | last_submit = time.time() 24 | 25 | async with database.get_session() as session: 26 | while True: 27 | batchsize = self._ctf.get_flag_batchsize() 28 | ratelimit = self._ctf.get_flag_ratelimit() 29 | 30 | queue_init = select(Flag).where(Flag.status.in_({FlagStatus.PENDING, FlagStatus.ERROR})) 31 | init_list = list((await session.execute(queue_init)).scalars()) 32 | 33 | submit_list = [FlagMessage(flag.id, flag.flag) for flag in init_list if flag.status == FlagStatus.PENDING] 34 | resubmit_list = [FlagMessage(flag.id, flag.flag) for flag in init_list if flag.status == FlagStatus.ERROR] 35 | dupe_list = [] 36 | try: 37 | async for message in flag_queue.wait_for_messages(timeout=ratelimit): 38 | flag_id = message.flag_id 39 | flag = message.flag 40 | #print(f"Got flag {flag}, cache {'NOPE' if flag not in self._flag_cache else self._flag_cache[flag]}") 41 | 42 | check_duplicates = select(Flag) \ 43 | .where(Flag.id != flag_id) \ 44 | .where(Flag.flag == flag) \ 45 | .where(Flag.status.in_(DuplicatesDontResubmitFlagStatus)) \ 46 | .limit(1) 47 | duplicate = (await session.execute(check_duplicates)).scalars().first() 48 | if duplicate is not None: 49 | dupe_list.append(flag_id) 50 | self._flag_cache[flag] = FlagStatus.DUPLICATE_NOT_SUBMITTED 51 | else: 52 | submit_list.append(message) 53 | self._flag_cache[flag] = FlagStatus.PENDING 54 | 55 | if len(submit_list) >= batchsize: 56 | break 
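                        # (Descriptive note: this gather loop ends in one of two ways, with a
                        # full batch via the `break` above, or via the TimeoutError below once
                        # the queue has been idle for `ratelimit` seconds; either way control
                        # falls through to at most one submit_flags() round, and flags already
                        # known as duplicates never enter submit_list.)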
57 | except TimeoutError as e: 58 | pass 59 | 60 | if len(dupe_list) > 0: 61 | print(f"Dupe list of size {len(dupe_list)}") 62 | set_duplicates = update(Flag)\ 63 | .where(Flag.id.in_(dupe_list))\ 64 | .values(status=FlagStatus.DUPLICATE_NOT_SUBMITTED) 65 | await session.execute(set_duplicates) 66 | await session.commit() 67 | 68 | if len(submit_list) < batchsize and len(resubmit_list) > 0: 69 | resubmit_amount = min(batchsize-len(submit_list), len(resubmit_list)) 70 | print(f"Got leftover capacity, resubmitting {resubmit_amount} errored flags " 71 | f"({len(resubmit_list) - resubmit_amount} remaining)") 72 | 73 | submit_list += resubmit_list[:resubmit_amount] 74 | resubmit_list = resubmit_list[resubmit_amount:] 75 | 76 | if len(submit_list) > 0: 77 | set_pending = update(Flag) \ 78 | .where(Flag.id.in_([x.flag_id for x in submit_list])) \ 79 | .values(status=FlagStatus.PENDING) \ 80 | .returning(Flag) 81 | result = list((await session.execute(set_pending)).scalars()) 82 | await session.commit() 83 | 84 | diff = time.time() - last_submit 85 | print(f"Submitting {len(submit_list)} flags, {diff:.2f}s since last time" + 86 | (f" (sleeping {ratelimit-diff:.2f})" if diff < ratelimit else "")) 87 | if diff < ratelimit: 88 | await sleep(ratelimit-diff) 89 | last_submit = time.time() 90 | 91 | statuslist = self._ctf.submit_flags([flag.flag for flag in result]) 92 | print(f"Done submitting ({statuslist.count(FlagStatus.OK)} ok)") 93 | 94 | for flag, status in zip(result, statuslist): 95 | #print(flag.id, flag.flag, status) 96 | flag.status = status 97 | self._flag_cache[flag.flag] = status 98 | 99 | if status == FlagStatus.ERROR: 100 | resubmit_list.append(FlagMessage(flag.id, flag.flag)) 101 | 102 | await session.commit() 103 | else: 104 | print("No flags for now") 105 | 106 | async def poll_and_parse_output(self): 107 | async with get_channel() as channel: 108 | flag_queue = await FlagQueue.get(channel) 109 | output_queue = await OutputQueue.get(channel) 110 | async with database.get_session() as session: 111 | async for message in output_queue.wait_for_messages(): 112 | regex, group = self._ctf.get_flag_regex() 113 | submissions = [] 114 | duplicates = [] 115 | for match in re.finditer(regex, message.output): 116 | if match.start(group) == -1 or match.end(group) == -1: 117 | continue 118 | 119 | flag = match.group(group) 120 | flag_obj = Flag(flag=flag, status=FlagStatus.QUEUED, execution_id=message.execution_id, 121 | stdout=message.stdout, start=match.start(group), end=match.end(group)) 122 | if flag in self._flag_cache and self._flag_cache[flag] in DuplicatesDontResubmitFlagStatus: 123 | flag_obj.status = FlagStatus.DUPLICATE_NOT_SUBMITTED 124 | duplicates.append(flag_obj) 125 | else: 126 | submissions.append(flag_obj) 127 | self._flag_cache[flag] = flag_obj.status 128 | 129 | if len(submissions) + len(duplicates) == 0: 130 | continue 131 | 132 | session.add_all(submissions + duplicates) 133 | await session.commit() 134 | 135 | if len(submissions) > 0: 136 | await asyncio.gather(*[ 137 | flag_queue.send_message(FlagMessage(flag_id=f.id, flag=f.flag)) 138 | for f in submissions]) 139 | -------------------------------------------------------------------------------- /ataka/ctfcode/requirements.txt: -------------------------------------------------------------------------------- 1 | requests==2.31.0 2 | pwntools==4.11.0 3 | exrex==0.11.0 4 | -------------------------------------------------------------------------------- /ataka/ctfcode/target_job_generator.py: 
-------------------------------------------------------------------------------- 1 | import time 2 | from asyncio import sleep 3 | from datetime import datetime 4 | from functools import reduce 5 | 6 | from sqlalchemy.future import select 7 | from sqlalchemy.orm import selectinload 8 | 9 | from ataka.common import database 10 | from ataka.common.database.models import Job, Target, Execution, ExploitHistory 11 | from ataka.common.queue import JobQueue, get_channel, JobMessage, JobAction 12 | from ataka.common.job_execution_status import JobExecutionStatus 13 | from .ctf import CTF 14 | 15 | 16 | class TargetJobGenerator: 17 | def __init__(self, ctf: CTF): 18 | self._ctf = ctf 19 | 20 | async def run_loop(self): 21 | async with get_channel() as channel: 22 | job_queue = await JobQueue.get(channel) 23 | 24 | async with database.get_session() as session: 25 | # Get all queued jobs 26 | await job_queue.clear() 27 | get_future_jobs = select(Job) \ 28 | .where((Job.exploit_id != None) & 29 | Job.status.in_([JobExecutionStatus.QUEUED, JobExecutionStatus.RUNNING])) 30 | future_jobs = (await session.execute(get_future_jobs)).scalars() 31 | # cancel the jobs 32 | for job in future_jobs: 33 | await job_queue.send_message(JobMessage(action=JobAction.CANCEL, job_id=job.id)) 34 | 35 | while True: 36 | if (sleep_duration := self._ctf.get_start_time() - time.time()) > 0: 37 | print(f"CTF not started yet, sleeping for {int(sleep_duration)} seconds...") 38 | await sleep(min(self._ctf.get_round_time(), sleep_duration)) 39 | continue 40 | 41 | print("New tick") 42 | all_targets = self._ctf.get_targets() 43 | 44 | async with database.get_session() as session: 45 | next_version = await session.execute(Target.version_seq) 46 | 47 | # if we have an exploit, submit a job for this service 48 | get_exploits = select(ExploitHistory) \ 49 | .options( 50 | selectinload(ExploitHistory.exploits), 51 | selectinload(ExploitHistory.exclusions)) 52 | exploit_list = list((await session.execute(get_exploits)).scalars()) 53 | all_exploits = {exploit.service: [] for exploit in exploit_list} 54 | for history in exploit_list: 55 | all_exploits[history.service].append(history) 56 | 57 | job_list = [] 58 | for service, targets in all_targets.items(): 59 | target_objs = [Target(version=next_version, ip=t["ip"], service=service, extra=t["extra"]) 60 | for t in targets] 61 | session.add_all(target_objs) 62 | 63 | if service not in all_exploits: 64 | continue 65 | 66 | exploits_for_this_service = all_exploits[service] 67 | del all_exploits[service] 68 | 69 | for history in exploits_for_this_service: 70 | excluded_ips = set(x.target_ip for x in history.exclusions) 71 | excluded_ips.update(self._ctf.get_static_exclusions()) 72 | history_target_objs = [t for t in target_objs if t.ip not in excluded_ips] 73 | for exploit in [e for e in history.exploits if e.active]: 74 | job_obj = Job(status=JobExecutionStatus.QUEUED, 75 | timeout=datetime.fromtimestamp(self._ctf.get_next_tick_start()), 76 | exploit=exploit) 77 | session.add(job_obj) 78 | 79 | session.add_all([Execution(job=job_obj, target=t, 80 | status=JobExecutionStatus.QUEUED) for t in 81 | history_target_objs]) 82 | 83 | job_list += [job_obj] 84 | 85 | for service, histories in all_exploits.items(): 86 | for history in histories: 87 | print(f"WARNING: Got exploit history {history.id} for service {service} but no targets for this service.") 88 | 89 | await session.commit() 90 | 91 | for job in job_list: 92 | await job_queue.send_message(JobMessage(action=JobAction.QUEUE, job_id=job.id))
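                # (Worked example of the tick arithmetic from ctf.py, with invented numbers:
                #  for START_TIME=1000, ROUND_TIME=60 and time.time()=1130,
                #  get_cur_tick() = (1130 - 1000) // 60 = 2 and
                #  get_next_tick_start() = 1000 + 60 * (2 + 1) = 1180,
                #  so the sleep below waits 1180 - 1130 = 50 seconds.)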
93 | 94 | # sleep until next tick 95 | next_tick = self._ctf.get_next_tick_start() 96 | diff = next_tick - time.time() 97 | if diff > 0: 98 | await sleep(diff) 99 | -------------------------------------------------------------------------------- /ataka/ctfconfig/enowars7.py: -------------------------------------------------------------------------------- 1 | from pwn import * 2 | import json 3 | import requests 4 | 5 | try: 6 | from ataka.common.database.models import FlagStatus 7 | except ImportError as e: 8 | import enum 9 | 10 | class FlagStatus(str, enum.Enum): 11 | UNKNOWN = "unknown" 12 | 13 | # everything is fine 14 | OK = "ok" 15 | 16 | # Flag is queued for submission 17 | QUEUED = "queued" 18 | 19 | # Flag is currently being submitted 20 | PENDING = "pending" 21 | 22 | # We already submitted this flag and the submission system tells us so 23 | DUPLICATE = "duplicate" 24 | 25 | # something is wrong with our submitter 26 | ERROR = "error" 27 | 28 | # the service did not check the flag, but told us to fuck off 29 | RATELIMIT = "ratelimit" 30 | 31 | # something is wrong with the submission system 32 | EXCEPTION = "exception" 33 | 34 | # we tried to submit our own flag and the submission system lets us know 35 | OWNFLAG = "ownflag" 36 | 37 | # the flag is no longer active. This is used if flags are restricted to a 38 | # specific time frame 39 | INACTIVE = "inactive" 40 | 41 | # flag fits the format and could be sent to the submission system, but the 42 | # submission system told us it is invalid 43 | INVALID = "invalid" 44 | 45 | # This status code is used in case the scoring system requires the services to 46 | # be working. Flags that are rejected might be sent again! 47 | SERVICEBROKEN = "servicebroken" 48 | 49 | 50 | FLAG_SUBMIT_HOST = "10.0.13.37" 51 | FLAG_SUBMIT_PORT = 1337 52 | 53 | # Ataka Host Domain / IP 54 | ATAKA_HOST = "ataka.local" 55 | 56 | # Our own host 57 | OWN_HOST = "10.1.2.1" 58 | 59 | RUNLOCAL_TARGETS = ["10.1.1.1"] 60 | 61 | # Config for framework 62 | ROUND_TIME = 60 63 | 64 | # format: regex, group where group 0 means the whole regex 65 | FLAG_REGEX = r"ENO[A-Za-z0-9+\/=]{48}", 0 66 | 67 | 68 | FLAG_BATCHSIZE = 1000 69 | 70 | FLAG_RATELIMIT = 5 # Wait in seconds between each call of submit_flags() 71 | 72 | START_TIME = 1690027200 73 | 74 | # IPs that are always excluded from attacks.
75 | STATIC_EXCLUSIONS = set(["10.1.2.1"]) 76 | 77 | # End config 78 | 79 | 80 | def get_services() -> list: 81 | return [ 82 | "asocialnetwork", 83 | "bollwerk", 84 | "phreaking", 85 | "yvm", 86 | "granulizer", 87 | "oldschool", 88 | "steinsgate", 89 | ] 90 | 91 | 92 | def get_targets() -> dict: 93 | r = requests.get("https://7.enowars.com/scoreboard/attack.json") 94 | 95 | services = r.json()["services"] 96 | print("getting services") 97 | return { 98 | service: [ 99 | {"ip": ip, "extra": json.dumps(services[service][ip])} 100 | for ip in services[service].keys() 101 | ] 102 | if service in services 103 | else [{"ip": ip, "extra": "[]"} for ip in get_all_target_ips()] 104 | for service in get_services() 105 | } 106 | 107 | 108 | def get_all_target_ips() -> set: 109 | r = requests.get("https://7.enowars.com/api/data/ips") 110 | ips = r.text.split("\n") 111 | ips = [ip for ip in ips if ip != ""] 112 | return set(ips) 113 | 114 | 115 | def submit_flags(flags) -> list: 116 | # TODO for next time: exchange with long-living socket, possibly async API 117 | results = [] 118 | try: 119 | HEADER = b"Welcome to the EnoEngine's EnoFlagSink\xe2\x84\xa2!\nPlease submit one flag per line. Responses are NOT guaranteed to be in chronological order.\n\n" 120 | server = remote(FLAG_SUBMIT_HOST, FLAG_SUBMIT_PORT, timeout=2) 121 | server.recvuntil(HEADER, timeout=5) 122 | for flag in flags: 123 | server.sendline(flag.encode()) 124 | response = server.recvline(timeout=2) 125 | if b" INV" in response: 126 | results += [FlagStatus.INVALID] 127 | elif b" OLD" in response: 128 | results += [FlagStatus.INACTIVE] 129 | elif b" OK" in response: 130 | results += [FlagStatus.OK] 131 | elif b" OWN" in response: 132 | results += [FlagStatus.OWNFLAG] 133 | elif b" DUP" in response: 134 | results += [FlagStatus.DUPLICATE] 135 | else: 136 | results += [FlagStatus.ERROR] 137 | print(f"Invalid response: {response}") 138 | server.close() 139 | except Exception as e: 140 | print(f"Exception: {e}", flush=True) 141 | results += [FlagStatus.ERROR for _ in flags[len(results):]] 142 | 143 | return results 144 | 145 | 146 | if __name__ == "__main__": 147 | import pprint 148 | 149 | pp = pprint.PrettyPrinter(indent=4) 150 | pp.pprint(get_targets()) 151 | pp.pprint( 152 | submit_flags( 153 | [ 154 | "ENOBAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", 155 | "ENOBCAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA", 156 | "jou", 157 | ] 158 | ) 159 | ) 160 | -------------------------------------------------------------------------------- /ataka/ctfconfig/faustctf.py: -------------------------------------------------------------------------------- 1 | import json 2 | import requests 3 | 4 | from pwn import * 5 | 6 | from ataka.common.flag_status import FlagStatus 7 | 8 | ### EXPORTED CONFIG 9 | 10 | # Ataka Host Domain / IP 11 | ATAKA_HOST = "ataka.h4xx.eu" 12 | 13 | # Default targets for atk runlocal 14 | RUNLOCAL_TARGETS = [ 15 | # NOP Team 16 | "fd66:666:1::2", 17 | ] 18 | 19 | # IPs that are always excluded from attacks.
These can be included in runlocal with --ignore-exclusions 20 | # These still get targets with flag ids, they're just never (automatically) attacked 21 | STATIC_EXCLUSIONS = {"fd66:666:930::2"} 22 | 23 | ROUND_TIME = 180 24 | 25 | # format: regex, group where group 0 means the whole regex 26 | FLAG_REGEX = r"FAUST_[A-Za-z0-9/+]{32}", 0 27 | 28 | # Maximum list length for submit_flags() 29 | FLAG_BATCHSIZE = 500 30 | 31 | # Minimum wait in seconds between each call of submit_flags() 32 | FLAG_RATELIMIT = 3 33 | 34 | # When the CTF starts 35 | START_TIME = 1695474000 + 1 # Sun Jul 16 2023 09:00:00 GMT+0200 (Central European Summer Time) 36 | 37 | ### END EXPORTED CONFIG 38 | 39 | EDITION = 2023 40 | 41 | FLAGID_URL = f"https://{EDITION}.faustctf.net/competition/teams.json" 42 | 43 | 44 | def get_targets(): 45 | no_flagid_services = {"jokes", "office-supplies"} 46 | 47 | teams = requests.get(FLAGID_URL).json() 48 | 49 | team_ids = teams['teams'] 50 | flag_ids = teams['flag_ids'] 51 | 52 | ## A generic solution for just a single vulnbox: 53 | default_targets = {service: {str(i): [] for i in team_ids} for service in no_flagid_services} | {service: {} for service in flag_ids.keys()} 54 | 55 | targets = { 56 | service: [ 57 | { 58 | "ip": f"fd66:666:{i}::2", 59 | "extra": json.dumps(info), 60 | } 61 | for i, info in (default_targets[service] | service_info).items() 62 | ] 63 | for service, service_info in ({service: {} for service in no_flagid_services} | flag_ids).items() 64 | } 65 | 66 | return targets 67 | 68 | 69 | def submit_flags(flags): 70 | # TODO for next time: exchange with long-living socket, possibly async API 71 | results = [] 72 | try: 73 | HEADER = b"\nOne flag per line please!\n\n" 74 | server = remote("submission.faustctf.net", 666, timeout=2) 75 | server.recvuntil(HEADER, timeout=5) 76 | for flag in flags: 77 | server.sendline(flag.encode()) 78 | response = server.recvline(timeout=2) 79 | if b" INV" in response: 80 | results += [FlagStatus.INVALID] 81 | elif b' OLD' in response: 82 | results += [FlagStatus.INACTIVE] 83 | elif b' OK' in response: 84 | results += [FlagStatus.OK] 85 | elif b' OWN' in response: 86 | results += [FlagStatus.OWNFLAG] 87 | elif b' DUP' in response: 88 | results += [FlagStatus.DUPLICATE] 89 | else: 90 | results += [FlagStatus.ERROR] 91 | print(f"Invalid response: {response}") 92 | server.close() 93 | except Exception as e: 94 | print(f"Exception: {e}", flush=True) 95 | results += [FlagStatus.ERROR for _ in flags[len(results):]] 96 | 97 | return results 98 | -------------------------------------------------------------------------------- /ataka/ctfconfig/iccdemo.py: -------------------------------------------------------------------------------- 1 | import json 2 | import requests 3 | 4 | from ataka.common.flag_status import FlagStatus 5 | 6 | ### EXPORTED CONFIG 7 | 8 | # Ataka Host Domain / IP 9 | ATAKA_HOST = "ataka.h4xx.eu" 10 | 11 | # Default targets for atk runlocal 12 | RUNLOCAL_TARGETS = [ 13 | # NOP Team 14 | "10.60.0.1", 15 | "10.61.0.1", 16 | "10.62.0.1", 17 | "10.63.0.1", 18 | "10.60.1.1", 19 | "10.61.1.1", 20 | "10.62.1.1", 21 | "10.63.1.1", 22 | "10.60.7.1", 23 | "10.61.7.1", 24 | "10.62.7.1", 25 | "10.63.7.1", 26 | ] 27 | 28 | # IPs that are always excluded from attacks. 
These can be included in runlocal with --ignore-exclusions 29 | # These still get targets with flag ids, they're just never (automatically) attacked 30 | STATIC_EXCLUSIONS = {"10.60.5.1", "10.61.5.1", "10.62.5.1", "10.63.5.1"} 31 | 32 | ROUND_TIME = 120 33 | 34 | # format: regex, group where group 0 means the whole regex 35 | FLAG_REGEX = r"[A-Z0-9]{31}=", 0 36 | 37 | # Maximum list length for submit_flags() 38 | FLAG_BATCHSIZE = 2500 39 | 40 | # Minimum wait in seconds between each call of submit_flags() 41 | FLAG_RATELIMIT = 5 42 | 43 | # When the CTF starts 44 | START_TIME = 1689490800 + 1 # Sun Jul 16 2023 09:00:00 GMT+0200 (Central European Summer Time) 45 | 46 | ### END EXPORTED CONFIG 47 | 48 | 49 | TEAM_TOKEN = "45f8890e6c13d864527c1e869ca384d0" 50 | SUBMIT_URL = "http://10.10.0.1:8080/flags" 51 | FLAGID_URL = "http://10.10.0.1:8081/flagIds" 52 | 53 | 54 | def get_targets(): 55 | services = ["CyberUni_1", "CyberUni_2", "CyberUni_3", "CyberUni_4", 56 | "ClosedSea-1", "ClosedSea-2", "Trademark", "rpn"] 57 | 58 | ## TODO: fill 59 | default_targets = { 60 | "rpn": 61 | {f"10.60.{i}.1": [] for i in range(10)}, 62 | "CyberUni_1": 63 | {f"10.61.{i}.1": [] for i in range(10)}, 64 | "CyberUni_2": 65 | {f"10.61.{i}.1": [] for i in range(10)}, 66 | "Trademark": 67 | {f"10.62.{i}.1": [] for i in range(10)}, 68 | } 69 | ## A generic solution for just a single vulnbox: 70 | # default_targets = {service: {f"10.62.{i}.1": [] for i in range(10)} for service in services} 71 | 72 | flag_ids = requests.get(FLAGID_URL).json() 73 | 74 | targets = { 75 | service: [ 76 | { 77 | "ip": ip, 78 | "extra": json.dumps(ip_info), 79 | } 80 | for ip, ip_info in (default_targets[service] | service_info).items() 81 | ] 82 | for service, service_info in ({service: [] for service in services} | flag_ids).items() 83 | } 84 | 85 | return targets 86 | 87 | 88 | def submit_flags(flags): 89 | resp = requests.put( 90 | SUBMIT_URL, headers={"X-Team-Token": TEAM_TOKEN}, json=flags 91 | ).json() 92 | 93 | results = [] 94 | for flag_resp in resp: 95 | msg = flag_resp["msg"] 96 | if flag_resp["status"]: 97 | status = FlagStatus.OK 98 | elif "invalid flag" in msg: 99 | status = FlagStatus.INVALID 100 | elif "flag from nop team" in msg: 101 | status = FlagStatus.INACTIVE 102 | elif "flag is your own" in msg: 103 | status = FlagStatus.OWNFLAG 104 | elif "flag too old" in msg or "flag is too old" in msg: 105 | status = FlagStatus.INACTIVE 106 | elif "flag already claimed" in msg: 107 | status = FlagStatus.DUPLICATE 108 | else: 109 | status = FlagStatus.ERROR 110 | print(f"Got error while flagsubmission: {msg}") 111 | results.append(status) 112 | 113 | return results 114 | -------------------------------------------------------------------------------- /ataka/ctfconfig/old/cinsects.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | from bs4 import BeautifulSoup 4 | from ataka.common.database.models import FlagStatus 5 | 6 | import requests 7 | 8 | # Config for framework 9 | ROUND_TIME = 60 10 | 11 | # format: regex, group where group 0 means the whole regex 12 | FLAG_REGEX = r"FLG\w{30}", 0 13 | 14 | FLAG_BATCHSIZE = 100 15 | 16 | FLAG_RATELIMIT = 0.5 # Wait in seconds between each call of submit_flags() 17 | 18 | START_TIME = 1645280101 19 | 20 | # IPs that are always excluded from attacks. 
21 | STATIC_EXCLUSIONS = set([]) 22 | 23 | LOGIN_URL = "https://dashboard.ctf.cinsects.de/login/?next=/ctf/submit_flag/manual/" 24 | SUBMIT_URL = "https://dashboard.ctf.cinsects.de/ctf/submit_flag/?format=json" 25 | USERNAME = "Cyberwehr" 26 | PASSWORD = "password" 27 | 28 | 29 | 30 | 31 | # End config 32 | 33 | def get_services(): 34 | return ['securestorage', 'catclub', 'commercial-timetracker', 'hireme', 'deployment_apik', 'deployment_poly', 'hiddenservice8'] 35 | 36 | 37 | def get_targets(): 38 | targets = requests.get("https://dashboard.ctf.cinsects.de/ctf/targets/?format=json") 39 | obj = targets.json() 40 | 41 | targets = {service: [{"ip": obj[service][team][0], "extra": ""} for team in obj[service]] for service in obj} 42 | return targets 43 | 44 | logger = logging.getLogger() 45 | 46 | 47 | def submit_flags(flags): 48 | s = requests.Session() 49 | 50 | r = s.get(LOGIN_URL) 51 | 52 | bs = BeautifulSoup(r.content, 'lxml') 53 | csrf = bs.find_all("input", {"name": "csrfmiddlewaretoken"})[0]["value"] 54 | 55 | r = s.post(LOGIN_URL, data={"csrfmiddlewaretoken": csrf, "csrfmiddlewaretoken": csrf, "username": USERNAME, "password": PASSWORD, "next": "/ctf/submit_flag/manual/"}, headers={'Content-Type': 'application/x-www-form-urlencoded', "Referer": LOGIN_URL}) 56 | 57 | 58 | bs = BeautifulSoup(r.content, 'lxml') 59 | csrf = bs.find_all("input", {"name": "csrfmiddlewaretoken"})[0]["value"] 60 | 61 | r = s.post(SUBMIT_URL, json={"flags": flags}, headers={'Content-Type': 'application/json', "Referer": SUBMIT_URL, "X-Csrftoken": csrf}) 62 | print(r.json()) 63 | 64 | return [FlagStatus.OK for flag in flags] 65 | 66 | -------------------------------------------------------------------------------- /ataka/ctfconfig/old/cwte.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from pwn import * 3 | import requests 4 | import json 5 | 6 | # Ataka Host Domain / IP 7 | ATAKA_HOST = 'ataka.h4xx.eu' 8 | 9 | # Our own host 10 | OWN_HOST = '10.20.2.' 11 | 12 | RUNLOCAL_TARGETS = [ 13 | # NOP Team 14 | '10.20.1.4', 15 | '10.20.1.5', 16 | '10.20.1.6' 17 | ] 18 | 19 | # Config for framework 20 | ROUND_TIME = 120 21 | 22 | # format: regex, group where group 0 means the whole regex 23 | FLAG_REGEX = r"ICC_[a-zA-Z\/0-9+\-]{32}", 0 24 | #FLAG_REGEX = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", 0 25 | 26 | FLAG_BATCHSIZE = 1000 27 | 28 | FLAG_RATELIMIT = 0.5 # Wait in seconds between each call of submit_flags() 29 | 30 | # 2023-07-09 09:00:00 GMT 31 | START_TIME = 1688893200 32 | 33 | # IPs that are always excluded from attacks. 
34 | STATIC_EXCLUSIONS = set([OWN_HOST + x for x in ["4", "5", "6"]]) 35 | 36 | # End config 37 | 38 | 39 | def get_services(): 40 | return ["adorad", "navashield", "flagprescription"] 41 | 42 | 43 | def flag_ids(): 44 | url = "https://web.ad.teameurope.space/competition/teams.json" 45 | 46 | response = requests.get(url) 47 | 48 | data = None 49 | try: 50 | data = response.json()['flag_ids'] 51 | except: 52 | logger.error(f"Couldn't get flag_ids from {url}") 53 | 54 | return data 55 | 56 | 57 | 58 | def get_targets(): 59 | extra = flag_ids() 60 | #TODO: only show relevant flag_ids 61 | return {"adorad": [{"ip": f"10.20.{i}.4", "extra": json.dumps([extra['ADorAD - AD'][str(i)], extra['ADorAD - Workhorz'][str(i)]])} for i in range(1,26) if str(i) in extra['ADorAD - AD'] and str(i) in extra["ADorAD - Workhorz"]], 62 | "navashield": [{"ip": f"10.20.{i}.6", "extra": json.dumps([extra['Navashield - Server'][str(i)], extra['Navashield - Client'][str(i)]])} for i in range(1,26) if str(i) in extra['Navashield - Server'] and str(i) in extra["Navashield - Client"]], 63 | "flagprescription": [{"ip": f"10.20.{i}.6", "extra": json.dumps([extra['Flag Prescription Prescription'][str(i)], extra['Flag Prescription Appointments'][str(i)]])} for i in range(1,26) if str(i) in extra['Flag Prescription Prescription'] and str(i) in extra["Flag Prescription Appointments"]], 64 | } 65 | 66 | return {service: [{"ip": f"10.20.{i}.{service[1]}", "extra": json.dumps(extra['flag_ids'][service[0]][i]) if not extra is None else None} for i in range(1, 26)] for service in get_services()} 67 | 68 | 69 | def get_all_target_ips(): 70 | return set([f"10.20.{i}.{j}" for i in range(1, 26) for j in range(4,7)]) 71 | 72 | """ 73 | r = requests.get("http://10.10.10.10/api/client/attack_data") 74 | 75 | targets = [] 76 | services = r.json() 77 | 78 | ids = 1 79 | for (service, ts) in services.items(): 80 | for (target, hints) in ts.items(): 81 | if target == OURSERVER: 82 | continue 83 | ta = Target(ip=target, service_id=ids, service_name=service, custom={'extra': json.dumps(hints)}) 84 | targets.append(ta) 85 | ids += 1 86 | 87 | return targets 88 | """ 89 | 90 | logger = logging.getLogger() 91 | 92 | """ 93 | SUBMISSION_URL = "10.10.10.100" 94 | SUBMISSION_TOKEN = "30771485d3cb53a3" 95 | 96 | RESPONSES = { 97 | FlagStatus.INACTIVE: ['timeout', 'game not started', 'try again later', 'game over', 'is not up', 'no such flag'], 98 | FlagStatus.OK: ['accepted', 'congrat'], 99 | FlagStatus.ERROR: ['bad', 'wrong', 'expired', 'unknown', 'your own', 'too old', 'not in database', 100 | 'already submitted', 'invalid flag'], 101 | } 102 | """ 103 | 104 | HEADER=b"""Play With Team Europe CTF 2023 / Attack-Defense Flag Submission Server 105 | One flag per line please! 
106 | 107 | """ 108 | 109 | def submit_flags(flags): 110 | results = [] 111 | try: 112 | server = remote("10.20.151.1", 31111, timeout=2) 113 | server.recvuntil(HEADER, timeout=10) 114 | for flag in flags: 115 | server.sendline(flag.encode()) 116 | response = server.recvline(timeout=5) 117 | if b" INV" in response: 118 | results += [FlagStatus.INVALID] 119 | elif b' OLD' in response: 120 | results += [FlagStatus.INACTIVE] 121 | elif b' OK' in response: 122 | results += [FlagStatus.OK] 123 | elif b' OWN' in response: 124 | results += [FlagStatus.OWNFLAG] 125 | elif b' DUP' in response: 126 | results += [FlagStatus.DUPLICATE] 127 | else: 128 | results += [FlagStatus.ERROR] 129 | logger.error(f"Invalid response: {response}") 130 | except Exception as e: 131 | logger.error(f"Exception: {e}") 132 | results += [FlagStatus.ERROR for _ in flags[len(results)]] 133 | 134 | return results 135 | -------------------------------------------------------------------------------- /ataka/ctfconfig/old/ecsc2022.py: -------------------------------------------------------------------------------- 1 | import logging 2 | from pwn import * 3 | import json 4 | import requests 5 | 6 | try: 7 | from ataka.common.database.models import FlagStatus 8 | except ImportError as e: 9 | import enum 10 | class FlagStatus(str, enum.Enum): 11 | UNKNOWN = 'unknown' 12 | 13 | # everything is fine 14 | OK = 'ok' 15 | 16 | # Flag is currently being submitted 17 | QUEUED = 'queued' 18 | 19 | # Flag is currently being submitted 20 | PENDING = 'pending' 21 | 22 | # We already submitted this flag and the submission system tells us thats 23 | DUPLICATE = 'duplicate' 24 | 25 | # something is wrong with our submitter 26 | ERROR = 'error' 27 | 28 | # the service did not check the flag, but told us to fuck off 29 | RATELIMIT = 'ratelimit' 30 | 31 | # something is wrong with the submission system 32 | EXCEPTION = 'exception' 33 | 34 | # we tried to submit our own flag and the submission system lets us know 35 | OWNFLAG = 'ownflag' 36 | 37 | # the flag is not longer active. This is used if a flags are restricted to a 38 | # specific time frame 39 | INACTIVE = 'inactive' 40 | 41 | # flag fits the format and could be sent to the submission system, but the 42 | # submission system told us it is invalid 43 | INVALID = 'invalid' 44 | 45 | # This status code is used in case the scoring system requires the services to 46 | # be working. Flags that are rejected might be sent again! 47 | SERVICEBROKEN = 'servicebroken' 48 | 49 | # Ataka Host Domain / IP 50 | ATAKA_HOST = 'ataka.h4xx.eu' 51 | 52 | # Our own host 53 | OWN_HOST = '10.10.2.1' 54 | 55 | RUNLOCAL_TARGETS = ['10.10.1.1'] 56 | 57 | # Config for framework 58 | ROUND_TIME = 180 59 | 60 | # format: regex, group where group 0 means the whole regex 61 | FLAG_REGEX = r"ECSC_[A-Za-z0-9\\+/]{32}", 0 62 | #FLAG_REGEX = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", 0 63 | 64 | FLAG_BATCHSIZE = 1000 65 | 66 | FLAG_RATELIMIT = 5 # Wait in seconds between each call of submit_flags() 67 | 68 | START_TIME = 1663223401 # Thu Sep 15 2022 08:30:01 GMT+0200 (Central European Summer Time) 69 | 70 | # IPs that are always excluded from attacks. 
71 | STATIC_EXCLUSIONS = set(['10.10.2.1']) 72 | 73 | # End config 74 | 75 | 76 | def get_services(): 77 | # COPY FROM flagids NOT scoreboard 78 | return [i for sub in 79 | [[f"{i}_flagstore1", f"{i}_flagstore2"] for i in 80 | ["aquaeductus", "blinkygram", "cantina", "techbay", "winds-of-the-past", "dewaste"]] for i in sub] + ["hps"] 81 | 82 | 83 | def get_targets(): 84 | #extra = '["1234", "5678"]' 85 | r = requests.get("http://10.10.254.254/competition/teams.json") 86 | 87 | targets = [] 88 | services = r.json() 89 | 90 | flagids = services['flag_ids'] 91 | 92 | return {service: [{"ip": f"10.10.{i}.1", "extra": json.dumps(flagids[service][i])} for i in flagids[service].keys()] if service in flagids else [{"ip": ip, "extra": "[]"} for ip in get_all_target_ips()] for service in get_services()} 93 | 94 | 95 | def get_all_target_ips(): 96 | return set(f'10.10.{i}.1' for i in range(1, 34)) 97 | 98 | def submit_flags(flags): 99 | # TODO for next time: exchange with long-living socket, possibly async API 100 | results = [] 101 | try: 102 | HEADER = b"ECSC 2022 | Attack-Defense Flag Submission Server\nOne flag per line please!\n\n" 103 | server = remote("10.10.254.254", 31337, timeout=2) 104 | server.recvuntil(HEADER, timeout=5) 105 | for flag in flags: 106 | server.sendline(flag.encode()) 107 | response = server.recvline(timeout=2) 108 | if b" INV" in response: 109 | results += [FlagStatus.INVALID] 110 | elif b' OLD' in response: 111 | results += [FlagStatus.INACTIVE] 112 | elif b' OK' in response: 113 | results += [FlagStatus.OK] 114 | elif b' OWN' in response: 115 | results += [FlagStatus.OWNFLAG] 116 | elif b' DUP' in response: 117 | results += [FlagStatus.DUPLICATE] 118 | else: 119 | results += [FlagStatus.ERROR] 120 | print(f"Invalid response: {response}") 121 | except Exception as e: 122 | print(f"Exception: {e}", flush=True) 123 | results += [FlagStatus.ERROR for _ in flags[len(results)]] 124 | 125 | return results 126 | 127 | 128 | if __name__ == '__main__': 129 | import pprint 130 | pp = pprint.PrettyPrinter(indent=4) 131 | pp.pprint(get_targets()) 132 | pp.pprint(submit_flags([ 133 | 'ECSC_Q1RGLSRZ6/VTRVL7RVEtRB69jI+HvO4m', 134 | 'test_flag_2', 135 | ])) 136 | 137 | -------------------------------------------------------------------------------- /ataka/ctfconfig/old/ructf.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | try: 4 | from ataka.common.database.models import FlagStatus 5 | except ImportError as e: 6 | import enum 7 | class FlagStatus(str, enum.Enum): 8 | UNKNOWN = 'unknown' 9 | 10 | # everything is fine 11 | OK = 'ok' 12 | 13 | # Flag is currently being submitted 14 | QUEUED = 'queued' 15 | 16 | # Flag is currently being submitted 17 | PENDING = 'pending' 18 | 19 | # We already submitted this flag and the submission system tells us thats 20 | DUPLICATE = 'duplicate' 21 | 22 | # something is wrong with our submitter 23 | ERROR = 'error' 24 | 25 | # the service did not check the flag, but told us to fuck off 26 | RATELIMIT = 'ratelimit' 27 | 28 | # something is wrong with the submission system 29 | EXCEPTION = 'exception' 30 | 31 | # we tried to submit our own flag and the submission system lets us know 32 | OWNFLAG = 'ownflag' 33 | 34 | # the flag is not longer active. 
This is used if a flags are restricted to a 35 | # specific time frame 36 | INACTIVE = 'inactive' 37 | 38 | # flag fits the format and could be sent to the submission system, but the 39 | # submission system told us it is invalid 40 | INVALID = 'invalid' 41 | 42 | # This status code is used in case the scoring system requires the services to 43 | # be working. Flags that are rejected might be sent again! 44 | SERVICEBROKEN = 'servicebroken' 45 | 46 | import json 47 | import requests 48 | 49 | # Ataka Host Domain / IP 50 | ATAKA_HOST = 'ataka.h4xx.eu' 51 | 52 | # Our own host 53 | OWN_HOST = '10.60.9.3' 54 | 55 | RUNLOCAL_TARGETS = [f'10.60.{i}.3' for i in range(2,5)] 56 | 57 | # Config for framework 58 | ROUND_TIME = 60 59 | 60 | # format: regex, group where group 0 means the whole regex 61 | FLAG_REGEX = r"[A-Za-z0-9_]{31}=", 0 62 | 63 | FLAG_BATCHSIZE = 100 64 | 65 | FLAG_RATELIMIT = 0.5 # Wait in seconds between each call of submit_flags() 66 | 67 | START_TIME = 1682143201 #Sat Apr 22 2023 08:00:01 GMT+0200 (Central European Summer Time) 68 | 69 | # IPs that are always excluded from attacks. 70 | STATIC_EXCLUSIONS = set(['10.60.9.3']) 71 | 72 | # End config 73 | 74 | SERVICES_URL = 'https://monitor.ructf.org/services' 75 | FLAGID_URL = 'https://monitor.ructf.org/flag_ids?service=%s' 76 | SUBMIT_URL = 'https://monitor.ructf.org/flags' 77 | 78 | TEAM_TOKEN = 'CLOUD_9_c8a088e09071f040570a229e4a1d12b2' 79 | 80 | 81 | def get_services(): 82 | # COPY FROM flagids NOT scoreboard 83 | services = requests.get(SERVICES_URL).json() 84 | #print(services) 85 | return [str(service_id) + '_' + str(service_name) for service_id, service_name in services.items()] 86 | 87 | 88 | def get_targets(): 89 | services = get_services() 90 | 91 | try: 92 | targets = {} 93 | 94 | for service_name in services: 95 | service_id = service_name.split('_', 1)[0] 96 | dt = requests.get(FLAGID_URL % (service_id,), headers={"X-Team-Token": TEAM_TOKEN}).json() 97 | # print(dt) 98 | 99 | flag_ids = dt['flag_ids'] 100 | targets[service_name] = [ 101 | { 102 | 'ip': data['host'], 103 | 'extra': json.dumps(data['flag_ids']) 104 | } 105 | for team_id, data in flag_ids.items() 106 | ] 107 | #print(targets) 108 | return targets 109 | except Exception as e: 110 | print(f"Error while getting targets: {e}") 111 | return {service: [] for service in get_services()} 112 | 113 | def get_all_target_ips(): 114 | return set(f'10.60.{i}.3' for i in range(2, 37)) 115 | 116 | 117 | def submit_flags(flags): 118 | results = [] 119 | try: 120 | conn = requests.put(SUBMIT_URL, json=flags, headers={'X-Team-Token': TEAM_TOKEN}) 121 | resp = conn.json() 122 | for flag_answer in resp: 123 | msg = flag_answer["msg"] 124 | if 'Accepted' in msg: 125 | status = FlagStatus.OK 126 | elif 'invalid or own flag' in msg: 127 | status = FlagStatus.INVALID 128 | elif 'flag is too old' in resp: 129 | status = FlagStatus.INACTIVE 130 | elif 'already submitted' in msg: 131 | status = FlagStatus.DUPLICATE 132 | # elif 'NOP team' in resp: 133 | # status = FlagStatus.NOP 134 | #elif 'flag is your own' in msg: 135 | # status = FlagStatus.OWNFLAG 136 | # elif 'OFFLINE' in resp: 137 | # status = FlagStatus.OFFLINE 138 | else: 139 | status = FlagStatus.ERROR 140 | print(msg) 141 | results.append(status) 142 | 143 | except Exception as e: 144 | print(f"Error while submitting flags: {e}") 145 | results += [FlagStatus.ERROR]*(len(flags)-len(results)) 146 | 147 | return results 148 | 149 | 150 | logger = logging.getLogger() 151 | 152 | 153 | if __name__ == '__main__': 154 | 
import pprint 155 | pp = pprint.PrettyPrinter(indent=4) 156 | pp.pprint(get_targets()) 157 | pp.pprint(submit_flags([ 158 | 'test_flag_1', 159 | 'test_flag_2', 160 | ])) 161 | 162 | -------------------------------------------------------------------------------- /ataka/ctfconfig/old/saarctf.py: -------------------------------------------------------------------------------- 1 | import logging 2 | 3 | from ataka.common.flag_status import FlagStatus 4 | 5 | import json 6 | import requests 7 | import telnetlib 8 | import socket 9 | 10 | # Config for framework 11 | ROUND_TIME = 120 12 | 13 | # format: regex, group where group 0 means the whole regex 14 | FLAG_REGEX = r"SAAR\{[A-Za-z0-9-_]{32}\}", 0 15 | 16 | FLAG_BATCHSIZE = 500 17 | 18 | FLAG_RATELIMIT = 0.5 # Wait in seconds between each call of submit_flags() 19 | 20 | START_TIME = 1653055201 # Fri May 20 2022 4:00:01 PM GMT+02:00 (Central European Summer Time) 21 | 22 | # IPs that are always excluded from attacks. 23 | STATIC_EXCLUSIONS = set([]) 24 | 25 | SUBMIT_DOM = 'submission.ctf.saarland' 26 | SUBMIT_PORT = 31337 27 | FLAGID_URL = 'https://scoreboard.ctf.saarland/attack.json' 28 | 29 | 30 | def get_services(): 31 | return ["backd00r", "bytewarden", "saarbahn", "saarcloud", "saarloop", "saarsecvv"] 32 | 33 | 34 | def get_targets(): 35 | try: 36 | dt = requests.get(FLAGID_URL).json() 37 | #dt = json.loads('{"teams":[{"id":1,"name":"NOP","ip":"10.32.1.2"},{"id":2,"name":"saarsec","ip":"10.32.2.2"}],"flag_ids":{"service_1":{"10.32.1.2":{"15":["username1","username1.2"],"16":["username2","username2.2"]},"10.32.2.2":{"15":["username3","username3.2"],"16":["username4","username4.2"]}}}}') 38 | 39 | flag_ids = dt['flag_ids'] 40 | teams = dt['teams'] 41 | targets = { 42 | service: [ 43 | { 44 | 'ip': ip, 45 | 'extra': json.dumps([x for tick, flagids in ip_info.items() for x in flagids]), 46 | } 47 | for ip, ip_info in service_info.items() 48 | ] 49 | for service, service_info in flag_ids.items() 50 | } 51 | 52 | targets["saarcloud"] = [{"ip": team["ip"], "extra": ""} for team in teams if team["online"]] 53 | 54 | return targets 55 | except Exception as e: 56 | print(f"Error while getting targets: {e}") 57 | return {service: [] for service in get_services()} 58 | 59 | 60 | def submit_flags(flags): 61 | results = [] 62 | try: 63 | conn = telnetlib.Telnet(SUBMIT_DOM, SUBMIT_PORT, 2) 64 | 65 | for flag in flags: 66 | conn.write((flag + '\n').encode()) 67 | resp = conn.read_until(b"\n").decode() 68 | if resp == '[OK]\n': 69 | status = FlagStatus.OK 70 | elif 'format' in resp: 71 | status = FlagStatus.INVALID 72 | elif 'Invalid flag' in resp: 73 | status = FlagStatus.INVALID 74 | elif 'Expired' in resp: 75 | status = FlagStatus.INACTIVE 76 | elif 'Already submitted' in resp: 77 | status = FlagStatus.DUPLICATE 78 | elif 'NOP team' in resp: 79 | status = FlagStatus.NOP 80 | elif 'own flag' in resp: 81 | status = FlagStatus.OWNFLAG 82 | else: 83 | status = FlagStatus.ERROR 84 | results.append(status) 85 | 86 | conn.get_socket().shutdown(socket.SHUT_WR) 87 | conn.read_all() 88 | conn.close() 89 | except Exception as e: 90 | print(f"Error while submitting flags: {e}") 91 | results += [FlagStatus.ERROR]*(len(flags)-len(results)) 92 | 93 | return results 94 | 95 | 96 | logger = logging.getLogger() 97 | 98 | if __name__ == '__main__': 99 | import pprint 100 | pp = pprint.PrettyPrinter(indent=4) 101 | pp.pprint(get_targets()) 102 | pp.pprint(submit_flags([ 103 | 'test_flag_1', 104 | 'test_flag_2', 105 | ])) 106 | 
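Taken together, the configs above all implement the same module-level contract that `ataka/ctfcode/ctf.py` validates and hot-reloads. A minimal skeleton for a future event might look like the following sketch; every host, service name and value in it is an invented placeholder, and the bounds in the comments come from the `@expect` validators in ctf.py:

```python
import json

from ataka.common.flag_status import FlagStatus

# Ataka Host Domain / IP
ATAKA_HOST = "ataka.example.com"

# Default targets for atk runlocal
RUNLOCAL_TARGETS = ["10.0.0.2"]

# IPs that are always excluded from attacks
STATIC_EXCLUSIONS = {"10.0.0.1"}

ROUND_TIME = 60  # seconds per tick, validated as 0 < x < 14400

# format: regex, group where group 0 means the whole regex
FLAG_REGEX = r"FLAG\{[A-Za-z0-9]{32}\}", 0

FLAG_BATCHSIZE = 100  # maximum list length for submit_flags()
FLAG_RATELIMIT = 1    # minimum wait in seconds between submit_flags() calls

START_TIME = 1700000000  # unix timestamp, must be within ~30 days of now


def get_targets() -> dict:
    # one entry per service: {service: [{"ip": str, "extra": str}, ...]},
    # where "extra" usually carries JSON-encoded flag ids
    return {"myservice": [{"ip": "10.0.1.2", "extra": json.dumps(["flagid1"])}]}


def submit_flags(flags) -> list:
    # must return exactly one FlagStatus per input flag, in order
    return [FlagStatus.ERROR for _ in flags]
```

Anything beyond this (submission sockets, scoreboard scraping, team-token headers) is event-specific, as the files above and below demonstrate.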
-------------------------------------------------------------------------------- /ataka/ctfconfig/ructf.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | from ataka.common.flag_status import FlagStatus 4 | 5 | ### EXPORTED CONFIG 6 | 7 | # Ataka Host Domain / IP 8 | ATAKA_HOST = 'ataka.h4xx.eu' 9 | 10 | # Default targets for atk runlocal 11 | RUNLOCAL_TARGETS = ["10.60.1.3"] 12 | 13 | # IPs that are always excluded from attacks. 14 | STATIC_EXCLUSIONS = {'10.61.84.3'} 15 | 16 | ROUND_TIME = 60 17 | 18 | # format: regex, group where group 0 means the whole regex 19 | FLAG_REGEX = r"[A-Z0-9]{31}=", 0 20 | # FLAG_REGEX = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", 0 21 | 22 | FLAG_BATCHSIZE = 100 23 | 24 | FLAG_RATELIMIT = 1 # Wait in seconds between each call of submit_flags() 25 | 26 | # When the CTF starts 27 | START_TIME = 1699092000 28 | 29 | ### END EXPORTED CONFIG 30 | 31 | import requests 32 | 33 | SERVICES_URL = 'https://monitor.cloud.ructf.org/services' 34 | FLAGID_URL = 'https://monitor.cloud.ructf.org/flag_ids?service=%s' 35 | SUBMIT_URL = 'https://monitor.cloud.ructf.org/flags' 36 | 37 | TEAM_TOKEN = 'CLOUD_340_d86dc72998b6f974679d5c963a79a5cc' 38 | 39 | def get_targets(): 40 | # COPY FROM flagids NOT scoreboard 41 | services = requests.get(SERVICES_URL).json() 42 | #print(services) 43 | services = [str(service_id) + '_' + str(service_name) for service_id, service_name in services.items()] 44 | 45 | try: 46 | targets = {} 47 | 48 | for service_name in services: 49 | service_id = service_name.split('_', 1)[0] 50 | dt = requests.get(FLAGID_URL % (service_id,), headers={"X-Team-Token": TEAM_TOKEN}).json() 51 | # print(dt) 52 | 53 | flag_ids = dt['flag_ids'] 54 | targets[service_name] = [ 55 | { 56 | 'ip': data['host'], 57 | 'extra': json.dumps(data['flag_ids']) 58 | } 59 | for team_id, data in flag_ids.items() 60 | ] 61 | #print(targets) 62 | return targets 63 | except Exception as e: 64 | print(f"Error while getting targets: {e}") 65 | return {service: [] for service in services} 66 | 67 | 68 | services = ["buffalo", "gopher_coin", "kyc", "oly_consensus", "swiss_keys", "to_the_moon", "wall.eth"] 69 | 70 | default_targets = {service: {f"10.99.{i}.2": ["1234", "5678"] for i in range(10)} for service in services} 71 | 72 | # remote fetch here 73 | flag_ids = default_targets 74 | 75 | targets = { 76 | service: [ 77 | { 78 | "ip": ip, 79 | "extra": json.dumps(ip_info), 80 | } 81 | for ip, ip_info in (default_targets[service] | service_info).items() 82 | ] 83 | for service, service_info in ({service: [] for service in services} | flag_ids).items() 84 | } 85 | 86 | return targets 87 | 88 | def submit_flags(flags): 89 | results = [] 90 | try: 91 | conn = requests.put(SUBMIT_URL, json=flags, headers={'X-Team-Token': TEAM_TOKEN}) 92 | resp = conn.json() 93 | for flag_answer in resp: 94 | msg = flag_answer["msg"] 95 | if 'Accepted' in msg: 96 | status = FlagStatus.OK 97 | elif 'invalid or own flag' in msg: 98 | status = FlagStatus.INVALID 99 | elif 'flag is too old' in resp: 100 | status = FlagStatus.INACTIVE 101 | elif 'already submitted' in msg: 102 | status = FlagStatus.DUPLICATE 103 | # elif 'NOP team' in resp: 104 | # status = FlagStatus.NOP 105 | #elif 'flag is your own' in msg: 106 | # status = FlagStatus.OWNFLAG 107 | # elif 'OFFLINE' in resp: 108 | # status = FlagStatus.OFFLINE 109 | else: 110 | status = FlagStatus.ERROR 111 | print(msg) 112 | results.append(status) 113 | 114 | except Exception as e: 115 | print(f"Error while 
submitting flags: {e}") 116 | results += [FlagStatus.ERROR]*(len(flags)-len(results)) 117 | 118 | return results 119 | 120 | -------------------------------------------------------------------------------- /ataka/ctfconfig/testctf.py: -------------------------------------------------------------------------------- 1 | import json 2 | 3 | from ataka.common.flag_status import FlagStatus 4 | 5 | ### EXPORTED CONFIG 6 | 7 | # Ataka Host Domain / IP 8 | ATAKA_HOST = 'ataka.h4xx.eu' 9 | 10 | # Default targets for atk runlocal 11 | RUNLOCAL_TARGETS = ["10.99.0.2"] 12 | 13 | # IPs that are always excluded from attacks. 14 | STATIC_EXCLUSIONS = {'10.99.1.2'} 15 | 16 | ROUND_TIME = 10 17 | 18 | # format: regex, group where group 0 means the whole regex 19 | FLAG_REGEX = r"[A-Z0-9]{31}=", 0 20 | # FLAG_REGEX = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}", 0 21 | 22 | FLAG_BATCHSIZE = 100 23 | 24 | FLAG_RATELIMIT = 1 # Wait in seconds between each call of submit_flags() 25 | 26 | # When the CTF starts 27 | START_TIME = 1690227547 28 | 29 | 30 | ### END EXPORTED CONFIG 31 | 32 | def get_targets(): 33 | services = ["buffalo", "gopher_coin", "kyc", "oly_consensus", "swiss_keys", "to_the_moon", "wall.eth"] 34 | 35 | default_targets = {service: {f"10.99.{i}.2": ["1234", "5678"] for i in range(10)} for service in services} 36 | 37 | # remote fetch here 38 | flag_ids = default_targets 39 | 40 | targets = { 41 | service: [ 42 | { 43 | "ip": ip, 44 | "extra": json.dumps(ip_info), 45 | } 46 | for ip, ip_info in (default_targets[service] | service_info).items() 47 | ] 48 | for service, service_info in ({service: [] for service in services} | flag_ids).items() 49 | } 50 | 51 | return targets 52 | 53 | 54 | submitted_flags = set() 55 | 56 | 57 | def _randomness(): 58 | import random 59 | return \ 60 | random.choices([FlagStatus.OK, FlagStatus.INVALID, FlagStatus.INACTIVE, FlagStatus.OWNFLAG, FlagStatus.ERROR], 61 | weights=[0.5, 0.2, 0.2, 0.05, 0.1], k=1)[0] 62 | 63 | 64 | def submit_flags(flags): 65 | import time 66 | time.sleep(min(len(flags) / 1000, 2)) 67 | result = {flag: FlagStatus.DUPLICATE if flag in submitted_flags else _randomness() for flag in flags} 68 | submitted_flags.update([flag for flag, status in result.items() if status != FlagStatus.ERROR]) 69 | return [result[flag] for flag in flags] 70 | -------------------------------------------------------------------------------- /ataka/executor/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM python:3.11-slim 2 | 3 | RUN pip install --no-cache-dir --upgrade pip 4 | 5 | WORKDIR / 6 | 7 | COPY ataka/common/requirements.txt /ataka/common/ 8 | RUN pip install --no-cache-dir -r /ataka/common/requirements.txt 9 | COPY ataka/common /ataka/common 10 | 11 | VOLUME /data/shared 12 | VOLUME /data/exploits 13 | VOLUME /data/persist 14 | 15 | COPY ataka/executor/requirements.txt /ataka/executor/ 16 | RUN pip install --no-cache-dir -r /ataka/executor/requirements.txt 17 | COPY ataka/executor /ataka/executor 18 | 19 | CMD [ "bash", "/ataka/common/delayed_start.sh", "--", "python", "-m", "ataka.executor" ] 20 | -------------------------------------------------------------------------------- /ataka/executor/__main__.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | 3 | from aiodocker import Docker 4 | 5 | from ataka.common import queue, database 6 | from .exploits import Exploits 7 | from .jobs import Jobs 8 | 9 | 10 | async def main(): 11 | # initialize connections 12 | 
await queue.connect() 13 | await database.connect() 14 | 15 | docker = Docker() 16 | 17 | # load ctf-specific code 18 | exploits = Exploits(docker) 19 | jobs = Jobs(docker, exploits) 20 | 21 | poll_task = jobs.poll_and_run_jobs() 22 | 23 | await asyncio.gather(poll_task) 24 | 25 | await docker.close() 26 | 27 | 28 | asyncio.run(main()) -------------------------------------------------------------------------------- /ataka/executor/exploits.py: -------------------------------------------------------------------------------- 1 | from asyncio import sleep 2 | 3 | from aiodocker import DockerError 4 | 5 | from .localdata import * 6 | 7 | 8 | class BuildError(Exception): 9 | pass 10 | 11 | 12 | class Exploits: 13 | def __init__(self, docker): 14 | self._docker = docker 15 | self._exploits = {} 16 | 17 | async def ensure_exploit(self, exploit): 18 | if exploit.id not in self._exploits: 19 | self._exploits[exploit.id] = LocalExploit(id=exploit.id, 20 | service=exploit.exploit_history.service, 21 | author=exploit.author, 22 | docker_name=exploit.docker_name, 23 | status=LocalExploitStatus.BUILDING) 24 | await self.build_exploit(exploit.id) 25 | 26 | exploit = self._exploits[exploit.id] 27 | while exploit.status == LocalExploitStatus.BUILDING: 28 | await sleep(1) 29 | 30 | return exploit 31 | 32 | async def build_exploit(self, exploit_id): 33 | exploit = self._exploits[exploit_id] 34 | try: 35 | print(f"Building exploit {exploit_id} ({exploit.docker_name})...") 36 | tag = f"openattackdefensetools/ataka-exploit-{exploit.docker_name}" 37 | 38 | try: 39 | existing_ref = await self._docker.images.inspect(tag) 40 | exploit.build_output = "Notice: Re-Using existing image" 41 | exploit.status = LocalExploitStatus.FINISHED 42 | exploit.docker_id = existing_ref["Id"] 43 | exploit.docker_cmd = existing_ref["Config"]["Cmd"] 44 | print(f"Re-using image {exploit.docker_id}") 45 | return 46 | except DockerError: 47 | pass 48 | 49 | try: 50 | build_ref = self._docker.images.build(tag=tag, 51 | fileobj=open(f"/data/exploits/{exploit.docker_name}", "rb"), 52 | encoding="Dummy", 53 | stream=True) 54 | except FileNotFoundError as e: 55 | build_ref = [{"error": f"FileNotFoundError: {e}"}] 56 | 57 | async for line in build_ref: 58 | if "stream" in line: 59 | exploit.build_output += line["stream"] 60 | # else: 61 | # print(line) 62 | 63 | if "error" in line: 64 | exploit.build_output += f"ERROR: {line['error']}\n" 65 | exploit.status = LocalExploitStatus.ERROR 66 | 67 | if "aux" in line: 68 | exploit.docker_id = line["aux"]['ID'] 69 | 70 | if exploit.docker_id is not None: 71 | # figure out docker config 72 | image_info = await self._docker.images.inspect(exploit.docker_id) 73 | exploit.status = LocalExploitStatus.FINISHED 74 | exploit.docker_cmd = image_info["Config"]["Cmd"] 75 | 76 | except DockerError as e: 77 | exploit.status = LocalExploitStatus.ERROR 78 | exploit.build_output += f"DOCKER BUILD ERROR: {e.message}" 79 | -------------------------------------------------------------------------------- /ataka/executor/jobs.py: -------------------------------------------------------------------------------- 1 | import asyncio 2 | import math 3 | import os 4 | import time 5 | import traceback 6 | from datetime import datetime 7 | from typing import Optional 8 | 9 | from aiodocker import DockerError 10 | from sqlalchemy.future import select 11 | from sqlalchemy.orm import selectinload, joinedload 12 | 13 | from ataka.common import database 14 | from ataka.common.database.models import Job, Execution, Exploit 15 | from 
16 | from .localdata import *
17 | 
18 | 
19 | class BuildError(Exception):
20 |     pass
21 | 
22 | 
23 | class Jobs:
24 |     def __init__(self, docker, exploits):
25 |         self._docker = docker
26 |         self._exploits = exploits
27 |         self._jobs = {}
28 | 
29 |     async def poll_and_run_jobs(self):
30 |         async with get_channel() as channel:
31 |             job_queue = await JobQueue.get(channel)
32 | 
33 |             async for job_message in job_queue.wait_for_messages():
34 |                 match job_message.action:
35 |                     case JobAction.CANCEL:
36 |                         print(f"DEBUG: CURRENTLY RUNNING {len(self._jobs)}")
37 |                         result = [(task, job) for task, job in self._jobs.items() if job.id == job_message.job_id]
38 |                         if len(result) > 0:
39 |                             task, job = result[0]
40 |                             task.cancel()  # Task.cancel() is synchronous and only requests cancellation; it must not be awaited
41 |                     case JobAction.QUEUE:
42 |                         job_execution = JobExecution(self._docker, self._exploits, channel, job_message.job_id)
43 |                         task = asyncio.create_task(job_execution.run())
44 | 
45 |                         def on_done(finished_task):  # add_done_callback passes the task itself, which keys self._jobs
46 |                             del self._jobs[finished_task]
47 | 
48 |                         self._jobs[task] = job_execution
49 |                         task.add_done_callback(on_done)
50 | 
51 | 
52 | class JobExecution:
53 |     def __init__(self, docker, exploits, channel, job_id: int):
54 |         self.id = job_id
55 |         self._docker = docker
56 |         self._exploits = exploits
57 |         self._channel = channel
58 |         self._data_store = os.environ["DATA_STORE"]
59 | 
60 |     async def run(self):
61 |         job = await self.fetch_job_from_database()
62 |         if job is None:
63 |             return
64 | 
65 |         exploit = job.exploit
66 | 
67 |         persist_dir = f"/data/persist/{exploit.docker_name}"
68 |         host_persist_dir = f"{self._data_store}/persist/{exploit.docker_name}"
69 |         host_shared_dir = f"{self._data_store}/shared/exploits"
70 | 
71 |         try:
72 |             os.makedirs(persist_dir, exist_ok=True)
73 |             container_ref = await self._docker.containers.create_or_replace(
74 |                 name=f"ataka-exploit-{exploit.docker_name}",
75 |                 config={
76 |                     "Image": exploit.docker_id,
77 |                     "Cmd": ["sleep", str(math.floor(job.timeout - time.time()))],
78 |                     "AttachStdin": False,
79 |                     "AttachStdout": False,
80 |                     "AttachStderr": False,
81 |                     "Tty": False,
82 |                     "OpenStdin": False,
83 |                     "StopSignal": "SIGKILL",
84 |                     "HostConfig": {
85 |                         "Mounts": [
86 |                             {
87 |                                 "Type": "bind",
88 |                                 "Source": host_persist_dir,
89 |                                 "Target": "/persist",
90 |                             },
91 |                             {
92 |                                 "Type": "bind",
93 |                                 "Source": host_shared_dir,
94 |                                 "Target": "/shared",
95 |                             }
96 |                         ],
97 |                         "CapAdd": ["NET_RAW"],
98 |                         "NetworkMode": "container:ataka-exploit",
99 |                         "CpusetCpus": os.environ.get('EXPLOIT_CPUSET', ''),
100 |                     },
101 |                 },
102 |             )
103 | 
104 |             await container_ref.start()
105 |         except DockerError as exception:
106 |             print(f"Got docker error for exploit {exploit.id} (service {exploit.service}) by {exploit.author}")
107 |             print(traceback.format_exception(exception))
108 |             for e in job.executions:
109 |                 e.status = JobExecutionStatus.FAILED
110 |                 e.stderr = str(exception)
111 |             await self.submit_to_database(job.executions)
112 |             raise exception
113 | 
114 |         execute_tasks = [self.docker_execute(container_ref, e) for e in job.executions]
115 | 
116 |         print(f"Starting {len(execute_tasks)} tasks for exploit {exploit.id} (service {exploit.service}) by {exploit.author}")
117 | 
118 |         # Execute all the exploits
119 |         results = await asyncio.gather(*execute_tasks)
120 | 
121 |         # try:
122 |         #     os.rmdir(persist_dir)
123 |         # except (FileNotFoundError, OSError):
124 |         #     pass
125 | 
126 |         await self.submit_to_database(results)
127 |         # TODO: send to ctfconfig
128 | 
129 |     async def fetch_job_from_database(self) ->
Optional[LocalJob]: 130 | async with database.get_session() as session: 131 | get_job = select(Job).where(Job.id == self.id).options( 132 | joinedload(Job.exploit).joinedload(Exploit.exploit_history), joinedload(Job.executions).joinedload(Execution.target) 133 | ) 134 | job = (await session.execute(get_job)).unique().scalar_one() 135 | executions = job.executions 136 | 137 | time_left = job.timeout.timestamp() - time.time() 138 | if time_left < 0: 139 | job.status = JobExecutionStatus.TIMEOUT 140 | for e in executions: 141 | e.status = JobExecutionStatus.TIMEOUT 142 | e.stderr = "" 143 | await session.commit() 144 | return None 145 | 146 | local_exploit = await self._exploits.ensure_exploit(job.exploit) 147 | 148 | job.timeout = datetime.fromtimestamp(time.time() + time_left) 149 | if local_exploit.status is not LocalExploitStatus.FINISHED: 150 | print(f"Got build error for exploit {local_exploit.id} (service {local_exploit.service}) by {local_exploit.author}") 151 | print(f" {local_exploit.build_output}") 152 | job.status = JobExecutionStatus.FAILED 153 | for e in executions: 154 | e.status = JobExecutionStatus.FAILED 155 | e.stderr = local_exploit.build_output 156 | await session.commit() 157 | return None 158 | 159 | job.status = JobExecutionStatus.RUNNING 160 | local_executions = [] 161 | for e in executions: 162 | e.status = JobExecutionStatus.RUNNING 163 | local_executions += [ 164 | LocalExecution(e.id, local_exploit, LocalTarget(e.target.ip, e.target.extra), JobExecutionStatus.RUNNING)] 165 | 166 | await session.commit() 167 | 168 | # Convert data to local for usage without database 169 | return LocalJob(local_exploit, job.timeout.timestamp(), local_executions) 170 | 171 | async def submit_to_database(self, results: [LocalExecution]): 172 | local_executions = {e.database_id: e for e in results} 173 | status = JobExecutionStatus.FAILED if any([e.status == JobExecutionStatus.FAILED for e in results]) \ 174 | else JobExecutionStatus.CANCELLED if any([e.status == JobExecutionStatus.CANCELLED for e in results]) \ 175 | else JobExecutionStatus.FINISHED 176 | 177 | # submit results to database 178 | async with database.get_session() as session: 179 | get_job = select(Job).where(Job.id == self.id) 180 | job = (await session.execute(get_job)).scalar_one() 181 | job.status = status 182 | 183 | get_executions = select(Execution).where(Execution.job_id == self.id) \ 184 | .options(selectinload(Execution.target)) 185 | executions = (await session.execute(get_executions)).scalars() 186 | 187 | for execution in executions: 188 | local_execution = local_executions[execution.id] 189 | execution.status = local_execution.status 190 | execution.stdout = local_execution.stdout 191 | execution.stderr = local_execution.stderr 192 | 193 | await session.commit() 194 | 195 | async def docker_execute(self, container_ref, execution: LocalExecution) -> LocalExecution: 196 | async def exec_in_container_and_poll_output(): 197 | try: 198 | exec_ref = await container_ref.exec(cmd=execution.exploit.docker_cmd, workdir="/exploit", tty=False, 199 | environment={ 200 | "ATAKA_CENTRAL_EXECUTION": "TRUE", 201 | "TARGET_IP": execution.target.ip, 202 | "TARGET_EXTRA": execution.target.extra, 203 | "ATAKA_EXPLOIT_ID": execution.exploit.id, 204 | }) 205 | async with exec_ref.start(detach=False) as stream: 206 | while True: 207 | message = await stream.read_out() 208 | if message is None: 209 | break 210 | 211 | yield message[0], message[1].decode() 212 | except DockerError as e: 213 | print(f"DOCKER EXECUTION ERROR for 
{execution.exploit.id} (service {execution.exploit.service}) " \ 214 | f"by {execution.exploit.author} against target {execution.target.ip}\n" \ 215 | f"{e.message}") 216 | msg = f"DOCKER EXECUTION ERROR: {e.message}" 217 | execution.status = JobExecutionStatus.FAILED 218 | execution.stderr += msg 219 | yield 2, msg 220 | 221 | 222 | output_queue = await OutputQueue.get(self._channel) 223 | 224 | async for (stream, output) in exec_in_container_and_poll_output(): 225 | # collect output 226 | match stream: 227 | case 1: 228 | execution.stdout += output 229 | case 2: 230 | execution.stderr += output 231 | 232 | await output_queue.send_message(OutputMessage(execution.database_id, stream == 1, output)) 233 | if execution.status in [JobExecutionStatus.QUEUED, JobExecutionStatus.RUNNING]: 234 | execution.status = JobExecutionStatus.FINISHED 235 | return execution 236 | -------------------------------------------------------------------------------- /ataka/executor/localdata.py: -------------------------------------------------------------------------------- 1 | from dataclasses import dataclass 2 | from enum import Enum 3 | 4 | from ataka.common.job_execution_status import JobExecutionStatus 5 | 6 | 7 | class LocalExploitStatus(str, Enum): 8 | BUILDING = "building" 9 | FINISHED = "finished" 10 | ERROR = "error" 11 | 12 | 13 | @dataclass 14 | class LocalExploit: 15 | id: str 16 | service: str 17 | author: str 18 | docker_name: str 19 | status: LocalExploitStatus 20 | build_output: str = "" 21 | docker_id: str = None 22 | docker_cmd: [] = None 23 | 24 | 25 | @dataclass 26 | class LocalTarget: 27 | ip: str 28 | extra: str = "" 29 | 30 | 31 | @dataclass 32 | class LocalExecution: 33 | database_id: int 34 | exploit: LocalExploit 35 | target: LocalTarget 36 | status: JobExecutionStatus 37 | stdout: str = "" 38 | stderr: str = "" 39 | 40 | 41 | @dataclass 42 | class LocalJob: 43 | exploit: LocalExploit 44 | timeout: float 45 | executions: [LocalExecution] 46 | -------------------------------------------------------------------------------- /ataka/executor/requirements.txt: -------------------------------------------------------------------------------- 1 | aiodocker==0.21.0 -------------------------------------------------------------------------------- /ataka/player-cli/.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 
32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/#use-with-ide 110 | .pdm.toml 111 | 112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 113 | __pypackages__/ 114 | 115 | # Celery stuff 116 | celerybeat-schedule 117 | celerybeat.pid 118 | 119 | # SageMath parsed files 120 | *.sage.py 121 | 122 | # Environments 123 | .env 124 | .venv 125 | env/ 126 | venv/ 127 | ENV/ 128 | env.bak/ 129 | venv.bak/ 130 | 131 | # Spyder project settings 132 | .spyderproject 133 | .spyproject 134 | 135 | # Rope project settings 136 | .ropeproject 137 | 138 | # mkdocs documentation 139 | /site 140 | 141 | # mypy 142 | .mypy_cache/ 143 | .dmypy.json 144 | dmypy.json 145 | 146 | # Pyre type checker 147 | .pyre/ 148 | 149 | # pytype static type analyzer 150 | .pytype/ 151 | 152 | # Cython debug symbols 153 | cython_debug/ 154 | 155 | # PyCharm 156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 158 | # and can be added to the global gitignore or merged into this file. For a more nuclear 159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 
160 | #.idea/ 161 | -------------------------------------------------------------------------------- /ataka/player-cli/__main__.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | from player_cli import app 4 | 5 | if __name__ == '__main__': 6 | app() 7 | -------------------------------------------------------------------------------- /ataka/player-cli/package_player_cli.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -e 4 | set -u 5 | 6 | TMPFILE="$(mktemp -d)" 7 | trap "rm -rf '$TMPFILE'" 0 # EXIT 8 | trap "rm -rf '$TMPFILE'; exit 1" 2 # INT 9 | trap "rm -rf '$TMPFILE'; exit 1" 1 15 # HUP TERM 10 | 11 | cd "$TMPFILE" 12 | cp -r /ataka/player-cli . 13 | cp "/ataka/ctfconfig/$CTF.py" player-cli/player_cli/ctfconfig.py 14 | mkdir -p player-cli/ataka/common 15 | cp /ataka/common/flag_status.py player-cli/ataka/common/flag_status.py 16 | pip install -r player-cli/requirements.txt --target player-cli/ 17 | python -m zipapp -c --python "/usr/bin/env python3" --output /data/shared/ataka-player-cli.pyz player-cli/ 18 | echo 'Python player created' 19 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/__init__.py: -------------------------------------------------------------------------------- 1 | import sys 2 | 3 | import requests 4 | import typer 5 | from rich import print 6 | import player_cli.exploit 7 | import player_cli.flags 8 | import player_cli.service 9 | import player_cli.ctfconfig_wrapper 10 | 11 | state = { 12 | 'host': '', 13 | 'bypass_tools': False, 14 | 'debug': False 15 | } 16 | 17 | app = typer.Typer(no_args_is_help=True) 18 | 19 | app.add_typer(player_cli.exploit.app, 20 | name='exploit', help='Manage exploits.') 21 | app.add_typer(player_cli.flags.app, 22 | name='flag', help='Manage flags.') 23 | app.add_typer(player_cli.service.app, 24 | name='service', help='Show services (legacy).') 25 | 26 | 27 | @app.callback() 28 | def main( 29 | host: str = typer.Option(player_cli.ctfconfig_wrapper.ATAKA_HOST, '--host', '-h', 30 | help='Ataka web API host.'), 31 | bypass_tools: bool = typer.Option(False, '--bypass-tools', '-b', help= 32 | 'Interact directly with the gameserver instead of using our tools. ' 33 | 'Use only in emergencies!'), 34 | debug: bool = typer.Option(False, '--debug', '-d', help='Turn on debug logging') 35 | ): 36 | """ 37 | Player command-line interface to Ataka. 
38 | """ 39 | state['host'] = host 40 | state['bypass_tools'] = bypass_tools 41 | state['debug'] = debug 42 | 43 | 44 | @app.command('reload', help='Reload offline ctfconfig') 45 | def reload_config( 46 | host: str = typer.Option(None, '--host', '-h', 47 | help='Ataka web API host.'), 48 | ): 49 | if host is not None: 50 | state['host'] = host 51 | 52 | SANITY_CHECK_STR = b'#!/usr/bin/env python3\nPK' 53 | 54 | cli_path = sys.argv[0] 55 | resp = requests.get(f"http://{player_cli.state['host']}/") 56 | 57 | if resp.status_code != 200: 58 | print(f"{player_cli.state['host']} returned {resp.status_code}") 59 | return 60 | 61 | if not resp.content.startswith(SANITY_CHECK_STR): 62 | print(f"Invalid Response from {player_cli.state['host']}") 63 | return 64 | 65 | print(f"Writing player-cli at {cli_path}") 66 | with open(cli_path, 'wb') as f: 67 | f.write(resp.content) 68 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/ctfconfig_wrapper.py: -------------------------------------------------------------------------------- 1 | ''' 2 | This file is responsible for mocking some of the API endpoints we have in ataka 3 | And act as a fallback, in case our tools fail 4 | ''' 5 | from rich import print 6 | import re 7 | 8 | from .ctfconfig import * 9 | 10 | FLAG_FINDER = re.compile(FLAG_REGEX[0]) 11 | 12 | def _parse_and_submit_content(data: str): 13 | from player_cli.flags import FLAG_STATUS_COLOR 14 | 15 | flags = list(set(FLAG_FINDER.findall(data))) 16 | 17 | # TODO: no ratelimit, but batch runs 18 | submissions = submit_flags(flags) 19 | 20 | for flag, status in zip(flags, submissions): 21 | print(f"{flag} -> {FLAG_STATUS_COLOR[status](status)}") 22 | 23 | def request(method, endpoint, data=None): 24 | if endpoint == 'flag/submit': 25 | _parse_and_submit_content(data['flags']) 26 | return {"execution_id": 0} 27 | elif endpoint == 'targets': 28 | return [target | {"service": service, "id": i} for service, targets in get_targets().items() for i, target in zip(range(100000), targets)] 29 | elif endpoint == "job": 30 | # create fake job 31 | return {'executions': [{'target_id': x, 'id': 0, 'status': "running"} for x in data['targets']], 'id': 0} 32 | elif endpoint == "flag/execution/0": 33 | return [] 34 | elif endpoint == "job/execution/0/finish": 35 | # submit data 36 | _parse_and_submit_content(data['stdout']) 37 | _parse_and_submit_content(data['stderr']) 38 | return {} 39 | elif endpoint == "job/0/finish": 40 | # finish job 41 | pass 42 | else: 43 | assert False, f'Invalid request: {method} {endpoint} {data}' 44 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/exploit/__init__.py: -------------------------------------------------------------------------------- 1 | import functools 2 | import io 3 | import os 4 | import re 5 | import shutil 6 | import signal 7 | import sys 8 | import time 9 | import getpass 10 | from multiprocessing.pool import ThreadPool 11 | from zipfile import ZipFile 12 | 13 | from click.exceptions import Exit 14 | from rich import print 15 | from rich.markup import escape 16 | import typer 17 | import base64 18 | import tarfile 19 | 20 | from typing import List 21 | 22 | from rich.prompt import Confirm 23 | 24 | from player_cli.exploit import target 25 | from .execution import print_exploit_execution, EXECUTION_STATUS_IS_FINAL, EXECUTION_STATUS_COLOR 26 | from .exploit import get_all_histories, resolve_history, deactivate_history, activate_exploit, 
resolve_exploit, \ 27 | print_history, print_logs, ResolveStrategy 28 | from player_cli.util import request, WARN_STR, ERROR_STR, make_executable, parse_dockerfile_cmd, dt_to_local_str, \ 29 | dt_from_iso, highlight_flags, blueify, magentify, NOTICE_STR 30 | from player_cli.ctfconfig_wrapper import RUNLOCAL_TARGETS, ROUND_TIME 31 | from .job import run_local_job 32 | from .target import get_targets 33 | from ..flags import poll_and_show_flags 34 | 35 | app = typer.Typer(no_args_is_help=True) 36 | 37 | app.add_typer(target.app, name='target', help='Manage exploit targets.') 38 | 39 | 40 | @app.command('ls', help='List all exploits.') 41 | def exploit_ls( 42 | history_ids: List[str] = typer.Argument(None, help='History ID(s).') 43 | ): 44 | histories = resolve_history(history_ids) if history_ids is not None and len(history_ids) > 0 else get_all_histories() 45 | 46 | if len(histories) == 0: 47 | print("No exploits found") 48 | return 49 | 50 | for history in histories: 51 | print_history(history) 52 | 53 | 54 | @app.command('activate', help='Activate an exploit.', no_args_is_help=True) 55 | def exploit_activate( 56 | exploit_id: str = typer.Argument(..., help= 57 | 'Exploit ID or history ID. ' 58 | 'If an exploit ID is specified, activates that exploit. ' 59 | 'If a history ID is specified, activates the most recent exploit in ' 60 | 'the history if none is already activated.') 61 | ): 62 | exploit = resolve_exploit(exploit_id) 63 | return activate_exploit(exploit) 64 | 65 | 66 | @app.command('deactivate', help='Deactivate an exploit.', no_args_is_help=True) 67 | def exploit_deactivate( 68 | exploit_id: str = typer.Argument(..., help= 69 | 'Exploit ID or history ID. ' 70 | 'If an exploit ID is specified, deactivates that exploit. ' 71 | 'If a history ID is specified, deactivates all exploits in the history.') 72 | ): 73 | history = resolve_history(exploit_id) 74 | deactivate_history(history) 75 | 76 | 77 | @app.command('switch', help= 78 | 'Activate an exploit and deactivate all others in the history.', no_args_is_help=True) 79 | def exploit_switch( 80 | exploit_id: str = typer.Argument(..., help='Exploit ID.') 81 | ): 82 | exploit = resolve_exploit(exploit_id) 83 | history = exploit['history'] 84 | if history['id'] == exploit_id: 85 | print(f'{ERROR_STR}: you specified an exploit history id, not a specific exploit id.') 86 | print(f'{ERROR_STR}: Please select one of the following exploits to switch to.') 87 | print_history(history) 88 | raise typer.Exit(code=1) 89 | 90 | if exploit['active']: 91 | print(f'{WARN_STR}: exploit "{exploit_id}" is already active, doing nothing...') 92 | return 93 | 94 | deactivate_history(history) 95 | activate_exploit(exploit) 96 | 97 | @app.command('create', help='Create an exploit history.', no_args_is_help=True) 98 | def exploit_create( 99 | history_id: str = typer.Argument(..., help='History ID (the "friendly" exploit name).'), 100 | service: str = typer.Argument(..., help='The target service.') 101 | ): 102 | targets = get_targets(None) 103 | services = set([target['service'] for target in targets]) 104 | 105 | if service not in services: 106 | print(f'{ERROR_STR}: unknown service "{service}". 
Available services: {magentify(", ".join(services))}.') 107 | raise typer.Exit(code=1) 108 | 109 | request('POST', 'exploit_history', data={ 110 | 'history_id': history_id, 111 | 'service': service, 112 | }) 113 | 114 | print(f"Exploit history {history_id} created!") 115 | 116 | 117 | @app.command('logs', help='Show remote exploit logs.', no_args_is_help=True) 118 | def exploit_logs( 119 | cmd_ids: List[str] = typer.Argument(..., metavar='EXPLOIT_ID...', help= 120 | 'Exploit ID or history ID. ' 121 | 'If an exploit ID is specified, shows logs for that exploit. ' 122 | 'If a history ID is specified, shows logs for the active exploits in the history. ' 123 | 'You can specify multiple IDs and mix exploit and history IDs.'), 124 | limit: int = typer.Option(1, '-n', '--num', metavar='NUM', help= 125 | 'Show logs for the last NUM ticks.') 126 | ): 127 | exploits = resolve_exploit(cmd_ids, ResolveStrategy.ACTIVE) 128 | 129 | print_logs(exploits, limit) 130 | 131 | 132 | @app.command('upload', help='Upload an exploit.', no_args_is_help=True) 133 | def exploit_upload( 134 | history_id: str = typer.Argument(..., help='History ID (the "friendly" exploit name).'), 135 | author: str = typer.Argument(..., help='The author.'), 136 | context: str = typer.Argument(..., help= 137 | 'Path to a directory or tarball containing the Docker context. ' 138 | 'The Dockerfile must be top-level. ' 139 | 'Supported tarball compression formats: xz, bzip2, gzip, identity (no compression).'), 140 | y: bool = typer.Option(False, help='Switch to new exploit after uploading.'), 141 | ): 142 | if os.path.isdir(context): 143 | with io.BytesIO() as bio: 144 | with tarfile.open(fileobj=bio, mode='w:gz') as tar: 145 | tar.add(context, arcname='') 146 | bio.seek(0) 147 | context_data = bio.read() 148 | else: 149 | with open(context, 'rb') as f: 150 | context_data = f.read() 151 | 152 | exploit = request('POST', 'exploit', data={ 153 | 'history_id': history_id, 154 | 'author': author, 155 | 'context': base64.b64encode(context_data).decode(), 156 | }) 157 | 158 | exploit_id = exploit['id'] 159 | 160 | targets = get_targets(exploit['history']['service'], all_targets=False) 161 | 162 | job = request('POST', 'job', data={ 163 | 'targets': [target['id'] for target in targets], 164 | 'exploit_id': exploit_id, 165 | 'manual_id': None, 166 | 'timeout': ROUND_TIME, 167 | }) 168 | 169 | job_id = job['id'] 170 | print(f"{exploit_id}: Waiting for initial job {job_id} to finish..", end='') 171 | job_status = None 172 | 173 | for i in range(120): 174 | job = request('GET', f'job/{job_id}') 175 | executions = job['executions'] 176 | print(f".", end='') 177 | if job_status != job['status']: 178 | print(f" {EXECUTION_STATUS_COLOR[job['status']](job['status'])}", end='') 179 | if job['status'] == 'queued': 180 | print(f" (probably building right now)", end='') 181 | job_status = job['status'] 182 | 183 | if job['status'] in EXECUTION_STATUS_IS_FINAL: 184 | job['timestamp'] = dt_from_iso(job['timestamp']) 185 | print("\n") 186 | for e in executions: 187 | print_exploit_execution(job, e) 188 | 189 | poll_and_show_flags([e['id'] for e in executions], timeout=2) 190 | break 191 | 192 | time.sleep(1) 193 | 194 | if job_status == 'finished': 195 | if not y: 196 | msg = f"{NOTICE_STR}: Do you want to activate the newly uploaded exploit {exploit_id}?" 
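            # Look up the history so the prompt can also list the previously active version(s)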
197 |             history = resolve_history(exploit_id)
198 |             if len(previous_versions := [exploit['id'] for exploit in history['exploits'] if exploit['active']]) > 0:
199 |                 msg += f"\n{NOTICE_STR}: this would deactivate the previous version: {','.join(previous_versions)}"
200 |             should_switch = Confirm.ask(msg)
201 |         else:
202 |             history, should_switch = resolve_history(exploit_id), True  # "history" is also needed below when -y skips the prompt
203 | 
204 |         if should_switch:
205 |             if len([exploit['id'] for exploit in history['exploits'] if exploit['active']]) > 0:
206 |                 exploit_switch(exploit_id)
207 |             else:
208 |                 exploit_activate(exploit_id)
209 |         else:
210 |             print(f"{WARN_STR}: Did not activate exploit {exploit_id}.")
211 | 
212 |     elif job_status == 'failed':
213 |         print(f"{ERROR_STR}: Job execution failed, not switching exploit automatically.")
214 |         print(f"Please fix the issue and re-run the upload.")
215 |     else:
216 |         print(f"{WARN_STR}: Job did not finish within 120 seconds, not switching exploit over automatically.")
217 |         print(f"Check results manually with `{sys.argv[0]} exploit logs {exploit_id}`")
218 |         print(f"Afterwards use `{sys.argv[0]} exploit switch {exploit_id}` to switch central execution over")
219 | 
220 | 
221 | @app.command('download', help='Download an exploit.', no_args_is_help=True)
222 | def exploit_download(
223 |         exploit_id: str = typer.Argument(..., help='Exploit ID.'),
224 |         path: str = typer.Argument(..., help='Output directory (will be created).'),
225 |         overwrite: bool = typer.Option(False, '--overwrite', help=
226 |             'Proceed even if the destination directory already exists.'),
227 |         unsafe: bool = typer.Option(False, '--unsafe', help=
228 |             'DANGEROUS: do not perform safety checks before extracting the downloaded archive.')
229 | ):
230 |     resp = request('GET', f'exploit/{exploit_id}/download')
231 | 
232 |     data = base64.b64decode(resp['data'])
233 | 
234 |     with io.BytesIO(data) as bio:
235 |         with tarfile.open(fileobj=bio) as tar:
236 |             if not unsafe:
237 |                 for info in tar:
238 |                     if info.name.startswith('/'):
239 |                         print(f'{ERROR_STR}: unsafe archive: absolute path detected')
240 |                         raise typer.Exit(code=1)
241 |                     elif '/../' in f'/{info.name}/':
242 |                         print(f'{ERROR_STR}: unsafe archive: path traversal detected')
243 |                         raise typer.Exit(code=1)
244 |                     elif info.islnk() or info.issym():
245 |                         print(f'{ERROR_STR}: unsafe archive: link detected')
246 |                         raise typer.Exit(code=1)
247 | 
248 |             try:
249 |                 os.mkdir(path)
250 |             except FileExistsError:
251 |                 if overwrite:
252 |                     print(f'{WARN_STR}: directory "{path}" already exists (proceeding anyway)')
253 |                 else:
254 |                     print(f'{ERROR_STR}: directory "{path}" already exists (use --overwrite to proceed anyway)')
255 |                     raise typer.Exit(code=1)
256 | 
257 |             tar.extractall(path)
258 | 
259 | 
260 | self_as_zip_path = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
261 | 
262 | 
263 | @app.command("template", help="Generate an exploit stub from a template.", no_args_is_help=True)
264 | def exploit_template(
265 |     template: str = typer.Argument(
266 |         ...,
267 |         help="Template to use, one of: python, ubuntu. "
268 |              "Optionally, you can specify a Docker tag (e.g., python:3.9-slim).
" 269 | "If none is specified, the latest will be used.", 270 | ), 271 | path: str = typer.Argument(..., help="Destination directory."), 272 | overwrite: bool = typer.Option( 273 | False, 274 | "--overwrite", 275 | help="Proceed even if the destination directory already exists.", 276 | ), 277 | ): 278 | if template.count(":") > 1: 279 | raise typer.BadParameter("Can't give more than one tag", param_hint="TEMPLATE") 280 | 281 | with ZipFile(self_as_zip_path, "r") as archive: 282 | templates = [info.filename.split("/")[1] for info in archive.infolist() 283 | if info.is_dir() and info.filename.count("/") == 2 and info.filename.startswith("templates/")] 284 | template, tag = template.split(":") if ":" in template else (template, "latest") 285 | 286 | if template not in templates: 287 | raise typer.BadParameter(f'unknown template {template}', param_hint="TEMPLATE") 288 | 289 | for info in [x for x in archive.infolist() if x.filename.startswith(f"templates/{template}/")]: 290 | out_filename = os.path.join(path, info.filename[len(f"templates/{template}/"):]) 291 | if info.is_dir(): 292 | try: 293 | os.mkdir(out_filename) 294 | except FileExistsError: 295 | if overwrite: 296 | print( 297 | f'{WARN_STR}: directory "{out_filename}" already exists (proceeding anyway)' 298 | ) 299 | else: 300 | print( 301 | f'{ERROR_STR}: directory "{out_filename}" already exists (use --overwrite to proceed anyway)' 302 | ) 303 | raise typer.Exit(code=1) 304 | else: 305 | with open(out_filename, "wb") as outfile: 306 | with archive.open(info.filename) as infile: 307 | contents = infile.read() 308 | 309 | if info.filename == f"templates/{template}/Dockerfile": 310 | contents = re.sub(b"^FROM (.*):.*$", f"FROM \\1:{tag}".encode(), contents, 311 | flags=re.MULTILINE) 312 | 313 | outfile.write(contents) 314 | # if file is executable 315 | if (info.external_attr >> 16) & 0o111 > 0: 316 | make_executable(out_filename) 317 | 318 | print(f'Created exploit "{path}/" from template {template}:{tag}') 319 | 320 | 321 | @app.command('runlocal', help='Run an exploit locally.', no_args_is_help=True) 322 | def exploit_runlocal( 323 | path: str = typer.Argument(..., help= 324 | 'Path to exploit executable, or to a directory containing a Dockerfile. ' 325 | 'In the latter case, will try to run the container command locally. ' 326 | 'The working directory will be the one where the Dockerfile is located.'), 327 | service: str = typer.Argument(..., help='The target service.'), 328 | target_ips: List[str] = typer.Option(RUNLOCAL_TARGETS, '--target', '-T', help= 329 | 'Target to attack (you can specify this option multiple times).'), 330 | no_target_ips: List[str] = typer.Option([], '-N', '--no-target', help= 331 | 'Target to not attack (you can specify this option multiple times).'), 332 | all_targets: bool = typer.Option(False, '--all-targets', help= 333 | 'Attack all targets (overrides --target).'), 334 | ignore_exclusions: bool = typer.Option(False, help= 335 | 'Ignore static exclusions, i.e. excluding our own vulnbox.'), 336 | timeout: int = typer.Option(30, '--timeout', '-t', help= 337 | 'Timeout for a single exploit execution, in seconds.'), 338 | jobs: int = typer.Option(0, '--jobs', '-j', help= 339 | 'Number of parallel jobs (0 for CPU count).'), 340 | limit: int = typer.Option(-1, '--limit', '-l', help= 341 | 'Limit stdout printing to chars. 
' 342 | 'Set to -1 if you want to see the whole output.'), 343 | count: int = typer.Option(0, '-c', '--count', help= 344 | 'Number of attack rounds to perform (0 for infinite).') 345 | ): 346 | if os.path.isdir(path): 347 | try: 348 | with open(f'{path}/Dockerfile', 'r') as f: 349 | dockerfile = f.read() 350 | except FileNotFoundError: 351 | print(f'{ERROR_STR}: directory specified, but no Dockerfile found') 352 | raise typer.Exit(code=1) 353 | except IOError: 354 | print(f'{ERROR_STR}: error reading Dockerfile') 355 | raise typer.Exit(code=1) 356 | exe_args = parse_dockerfile_cmd(dockerfile) 357 | if exe_args is None: 358 | print(f'{ERROR_STR}: could not extract command from Dockerfile') 359 | raise typer.Exit(code=1) 360 | exe = shutil.which(exe_args[0]) 361 | if exe is None: 362 | print(f'{ERROR_STR}: could not find executable for "{exe_args[0]}"') 363 | raise typer.Exit(code=1) 364 | workdir = path 365 | else: 366 | if not os.access(path, os.X_OK): 367 | print(f'{ERROR_STR}: exploit file is not executable') 368 | raise typer.Exit(code=1) 369 | exe = path 370 | exe_args = [path] 371 | workdir = '.' 372 | 373 | if jobs == 0: 374 | jobs = None 375 | 376 | manual_id = f"{os.uname().nodename}-{getpass.getuser()}-{os.path.basename(os.path.realpath(workdir))}" 377 | 378 | targets = get_targets(None) 379 | services = set([target['service'] for target in targets]) 380 | 381 | if service not in services: 382 | print(f'{ERROR_STR}: unknown service "{service}". Available services: {magentify(", ".join(services))}.') 383 | raise typer.Exit(code=1) 384 | 385 | original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN) 386 | pool = ThreadPool(jobs) 387 | signal.signal(signal.SIGINT, original_sigint_handler) 388 | 389 | job_func = functools.partial(run_local_job, exe=exe, args=exe_args, workdir=workdir, timeout=timeout) 390 | 391 | num_rounds = 0 392 | while True: 393 | job = None 394 | try: 395 | print(f'\\[{manual_id}] Attack round started') 396 | 397 | start = time.time() 398 | 399 | targets = get_targets(None, all_targets=all_targets, target_ips=target_ips, 400 | no_target_ips=no_target_ips, ignore_exclusions=ignore_exclusions) 401 | services = set([target['service'] for target in targets]) 402 | targets = {target['id']: target for target in targets if target['service'] == service} 403 | if service not in services: 404 | print(f'{ERROR_STR}: No such service {service}, aborting...') 405 | raise typer.Exit(1) 406 | 407 | targets_summary = ', '.join(t['ip'] for t in targets.values()) 408 | print(f'\\[{manual_id}] Attacking {len(targets)} targets: {targets_summary}') 409 | 410 | job = request('POST', 'job', data={ 411 | 'targets': list(targets.keys()), 412 | 'exploit_id': None, 413 | 'manual_id': manual_id, 414 | 'timeout': ROUND_TIME, 415 | }) 416 | 417 | for execution in job['executions']: 418 | execution['target'] = targets[execution['target_id']] 419 | execution['finished'] = False 420 | 421 | for result in pool.imap_unordered(job_func, job['executions']): 422 | host = result['target']['ip'] 423 | extra = result['target']['extra'] 424 | msg = result['msg'] 425 | flags = result['flags'] 426 | print(f'\\[{manual_id}] Execution completed on target {host}: {msg}') 427 | print(f' Extra: {extra}') 428 | #print(f' Submitted {len(flags)} flags:') 429 | #for flag in flags: 430 | # dt = dt_to_local_str(dt_from_iso(flag["timestamp"])) 431 | # print(f' {dt} | ID {flag["id"]}: {flag["flag"]} ({flag["status"]})') 432 | 433 | if limit == -1: 434 | print(f' Execution output:') 435 | # Assumes author 
wants to see the stdout in a beautiful way, so no repr 436 | print(highlight_flags(escape(result['stdout']), blueify)) 437 | elif result["stdout"] != '': 438 | print(f' Execution output: {magentify(escape(repr(result["stdout"][:limit])))}') 439 | if result["stderr"] != '': 440 | print('\n'.join([f'[bold red]ERR[/bold red] {x}' for x in highlight_flags(escape(result['stderr']), blueify).split('\n')])) 441 | 442 | executions = job['executions'] 443 | request('POST', f'job/{job["id"]}/finish') 444 | job = None 445 | 446 | poll_and_show_flags([execution['id'] for execution in executions]) 447 | 448 | num_rounds += 1 449 | if count > 0 and num_rounds == count: 450 | break 451 | 452 | end = time.time() 453 | if end - start < ROUND_TIME: 454 | to_wait = ROUND_TIME - (end - start) 455 | print(f'\n\\[{manual_id}] Waiting {to_wait:.2f} seconds for next round') 456 | time.sleep(to_wait) 457 | except KeyboardInterrupt: 458 | print(f'\\[{manual_id}] Terminating...') 459 | pool.terminate() 460 | 461 | if job is not None: 462 | print(f"\\[{manual_id}] Cancelling job...") 463 | for execution in job['executions']: 464 | if not execution['finished']: 465 | request('POST', f'job/execution/{execution["id"]}/finish', data={ 466 | "stdout": '', 467 | "stderr": '', 468 | "status": 'cancelled' 469 | }) 470 | 471 | request('POST', f'job/{job["id"]}/finish', params={"status": 'cancelled'}) 472 | print(f"\\[{manual_id}] Done") 473 | return 474 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/exploit/execution.py: -------------------------------------------------------------------------------- 1 | from rich import print 2 | from rich.markup import escape 3 | 4 | from ..util import dt_to_local_str, highlight_flags, blueify, redify, magentify, greenify, yellowfy, ERROR_STR 5 | 6 | EXECUTION_STATUS_IS_FINAL = {'finished', 'failed', 'timeout', 'cancelled'} 7 | 8 | EXECUTION_STATUS_COLOR = { 9 | 'queued': magentify, 10 | 'running': blueify, 11 | 'finished': greenify, 12 | 'failed': redify, 13 | 'timeout': yellowfy, 14 | 'cancelled': redify, 15 | } 16 | 17 | 18 | def print_exploit_execution(job, execution): 19 | exe_desc = f'execution {execution["id"]} of job {job["id"]}' 20 | 21 | status = execution['status'] 22 | status_note = ' (not finished, check back later)' if status not in EXECUTION_STATUS_IS_FINAL else '' 23 | 24 | print(f'--- Begin {exe_desc} ---') 25 | print(f'Status : {EXECUTION_STATUS_COLOR[status](status)}{status_note}') 26 | print(f'Exploit : {job["exploit_id"]}') 27 | print(f'Target : {execution["target"]["ip"]} ({execution["target"]["service"]})') 28 | print(f'Timestamp : {dt_to_local_str(job["timestamp"])}') 29 | 30 | if execution['stdout']: 31 | for line in escape(execution['stdout']).splitlines(): 32 | print(f" {highlight_flags(line, blueify)}") 33 | print() 34 | 35 | if execution['stderr']: 36 | for line in escape(execution['stderr']).splitlines(): 37 | print(f"{ERROR_STR} {highlight_flags(line, blueify)}") 38 | 39 | print(f'--- End {exe_desc} ---') 40 | print() 41 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/exploit/exploit.py: -------------------------------------------------------------------------------- 1 | from enum import Enum 2 | import time 3 | 4 | import typer 5 | from rich import print 6 | 7 | from player_cli.ctfconfig_wrapper import START_TIME, ROUND_TIME 8 | 9 | from player_cli.util import request, dt_from_iso, ERROR_STR, WARN_STR, dt_to_local_str, greenify 10 | 
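# Terminology: an exploit *history* groups every uploaded version of one
# exploit. Command-line IDs may name either a whole history or a single
# exploit version, so the resolve_* helpers below accept both kinds of ID.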
11 | ACTIVE_STR = greenify('ACTIVE') 12 | 13 | 14 | def get_all_histories(): 15 | histories = request('GET', 'exploit_history/') 16 | 17 | for history in histories: 18 | for exploit in history['exploits']: 19 | exploit['timestamp'] = dt_from_iso(exploit['timestamp']) 20 | exploit['history'] = history 21 | history['exploits'].sort(key=lambda x: x['timestamp']) 22 | 23 | return histories 24 | 25 | 26 | class ResolveStrategy(Enum): 27 | LATEST = 0 28 | ACTIVE = 1 29 | 30 | 31 | def resolve_exploit(exploit_or_history_id: list[str] | str, 32 | resolve_strategy: ResolveStrategy = ResolveStrategy.LATEST) -> list[dict] | dict: 33 | histories = get_all_histories() 34 | 35 | def _resolve(exploit_id): 36 | for history in histories: 37 | if exploit_id == history['id']: 38 | suitable_exploits = history['exploits'] if resolve_strategy == ResolveStrategy.LATEST else \ 39 | [exploit for exploit in history['exploits'] if exploit['active']] 40 | 41 | if len(suitable_exploits) == 0: 42 | print(f'{ERROR_STR}: no' 43 | f'{" " if resolve_strategy == ResolveStrategy.LATEST else " active "}' 44 | f'exploits in history "{history["id"]}"') 45 | raise typer.Exit(code=1) 46 | 47 | return suitable_exploits[-1] 48 | 49 | for exploit in history['exploits']: 50 | if exploit['id'] == exploit_id: 51 | return exploit 52 | 53 | print(f'{ERROR_STR}: unknown exploit or history "{exploit_id}"') 54 | raise typer.Exit(code=1) 55 | 56 | if type(exploit_or_history_id) == str: 57 | return _resolve(exploit_or_history_id) 58 | else: 59 | # Deduplicate 60 | exploit_set = {exploit['id']: exploit for exploit in 61 | [_resolve(exploit_id) for exploit_id in exploit_or_history_id]} 62 | return list(exploit_set.values()) 63 | 64 | 65 | def resolve_history(exploit_or_history_id: list[str] | str): 66 | histories = get_all_histories() 67 | 68 | def _resolve(exploit_id): 69 | for history in histories: 70 | if exploit_id == history['id']: 71 | return history 72 | 73 | for exploit in history['exploits']: 74 | if exploit['id'] == exploit_id: 75 | return history 76 | 77 | print(f'{ERROR_STR}: unknown exploit or history "{exploit_id}"') 78 | raise typer.Exit(code=1) 79 | 80 | if type(exploit_or_history_id) == str: 81 | return _resolve(exploit_or_history_id) 82 | else: 83 | history_set = {history['id']: history for history in 84 | [_resolve(exploit_id) for exploit_id in exploit_or_history_id]} 85 | return list(history_set.values()) 86 | 87 | 88 | def activate_exploit(exploit): 89 | history = exploit['history'] 90 | if any(exploit['active'] for exploit in history['exploits']): 91 | print(f'{WARN_STR}: an exploit is already active, doing nothing') 92 | return 93 | 94 | print(f'Activate {exploit["id"]}') 95 | request('PATCH', f'exploit/{exploit["id"]}', data={ 96 | 'active': True 97 | }) 98 | 99 | exploit['active'] = True 100 | 101 | 102 | def deactivate_history(history): 103 | if not any(exploit['active'] for exploit in history['exploits']): 104 | print(f'{WARN_STR}: no exploit active, doing nothing') 105 | return 106 | 107 | for exploit in history['exploits']: 108 | if exploit['active']: 109 | print(f'Deactivate {exploit["id"]}') 110 | request('PATCH', f'exploit/{exploit["id"]}', data={ 111 | 'active': False 112 | }) 113 | exploit['active'] = False 114 | 115 | 116 | def print_history(history): 117 | from .target import print_exploit_targets 118 | 119 | print(f'{history["id"]} ({history["service"]})') 120 | 121 | has_active = False 122 | exploits = history['exploits'] 123 | if not exploits: 124 | print(' No uploaded exploits yet') 125 | else: 126 | for 
exploit in exploits: 127 | if exploit['active']: 128 | has_active = True 129 | 130 | active = ACTIVE_STR if exploit['active'] else '' 131 | ts = dt_to_local_str(exploit['timestamp']) 132 | print(f' {ts} {active:6} {exploit["author"]} {exploit["id"]}') 133 | 134 | if has_active: 135 | print('') 136 | print_exploit_targets(history, indent=4) 137 | 138 | print('') 139 | 140 | 141 | def print_logs(exploits: list[dict], limit=1): 142 | from .execution import print_exploit_execution 143 | from ..flags import poll_and_show_flags 144 | 145 | print(f'Showing logs for exploits: {", ".join([exploit["id"] for exploit in exploits])}') 146 | print() 147 | 148 | start_of_tick = START_TIME + ROUND_TIME * (((time.time() - START_TIME) // ROUND_TIME) + 1 - limit) 149 | 150 | job_items = [] 151 | for exploit in exploits: 152 | exploit_items = request('GET', f'exploit/{exploit["id"]}/jobs', params={ 153 | 'after': start_of_tick, 154 | }) 155 | for item in exploit_items: 156 | item['job']['timestamp'] = dt_from_iso(item['job']['timestamp']) 157 | job_items += exploit_items 158 | 159 | job_items.sort(key=lambda x: x['job']['timestamp']) 160 | 161 | execution_ids = [] 162 | 163 | for item in job_items: 164 | job = item['job'] 165 | for execution in item['executions']: 166 | execution_ids.append(execution['id']) 167 | print_exploit_execution(job, execution) 168 | 169 | poll_and_show_flags(execution_ids) 170 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/exploit/job.py: -------------------------------------------------------------------------------- 1 | import os 2 | import subprocess 3 | from rich import print 4 | 5 | from player_cli.util import greenify, yellowfy, redify, request 6 | 7 | SUCCESS_STR = greenify("success") 8 | TIMEOUT_STR = yellowfy("timeout") 9 | 10 | def run_local_job(execution, exe, args, workdir, timeout): 11 | target = execution['target'] 12 | msg = SUCCESS_STR 13 | 14 | try: 15 | env = os.environ.copy() 16 | env['TARGET_IP'] = target['ip'] 17 | env['TARGET_EXTRA'] = target['extra'] 18 | 19 | proc = subprocess.run(args, 20 | executable=os.path.abspath(exe), 21 | stdin=subprocess.DEVNULL, 22 | stdout=subprocess.PIPE, 23 | stderr=subprocess.PIPE, 24 | timeout=timeout, 25 | env=env, 26 | cwd=os.path.abspath(workdir)) 27 | if proc.returncode != 0: 28 | msg = f'{redify("failed")}, exit status {proc.returncode}' 29 | stdout = proc.stdout 30 | stderr = proc.stderr 31 | except subprocess.TimeoutExpired as e: 32 | msg = TIMEOUT_STR 33 | stdout = e.stdout 34 | stderr = e.stderr 35 | 36 | if stdout: 37 | stdout = stdout.decode(encoding="utf-8", errors="ignore") 38 | else: 39 | stdout = '' 40 | 41 | if stderr: 42 | stderr = stderr.decode(encoding="utf-8", errors="ignore") 43 | else: 44 | stderr = '' 45 | 46 | flags = [] 47 | try: 48 | flags = request('POST', f'job/execution/{execution["id"]}/finish', data={ 49 | 'stdout': stdout, 50 | 'stderr': stderr, 51 | }) 52 | execution['finished'] = True 53 | except Exception as e: 54 | msg = f'{yellowfy("unknown")}: Output submit API returned error' 55 | 56 | return { 57 | 'execution': execution, 58 | 'target': target, 59 | 'msg': msg, 60 | 'stdout': stdout, 61 | 'stderr': stderr, 62 | 'flags': flags, 63 | } 64 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/exploit/target.py: -------------------------------------------------------------------------------- 1 | import typer 2 | from rich import print 3 | 4 | from typing import List 5 | 
from player_cli.ctfconfig_wrapper import STATIC_EXCLUSIONS, RUNLOCAL_TARGETS, ROUND_TIME 6 | from .exploit import resolve_history, resolve_exploit 7 | from player_cli.util import request, greenify, redify, WARN_STR, ERROR_STR 8 | 9 | app = typer.Typer(no_args_is_help=True) 10 | 11 | 12 | def print_exploit_targets(history, indent=0): 13 | indent = ' ' * indent 14 | 15 | exclude_ips = set(request('GET', f'exploit_history/{history["id"]}/exclusions')) 16 | 17 | columns = typer.get_terminal_size()[0] // 24 18 | 19 | i = 0 20 | targets = get_targets(history['service']) 21 | target_set = set([x['ip'] for x in targets]) | exclude_ips 22 | 23 | # try some sorting 24 | try: 25 | # based on ipv4 26 | target_set = sorted(target_set, key=lambda x: [int(a) for a in x.split(".")]) 27 | except ValueError: 28 | try: 29 | # based on ipv6 30 | target_set = sorted(target_set, key=lambda x: [0 if a == '' else int(a, base=16) for a in x.split(":")]) 31 | except ValueError: 32 | pass 33 | 34 | for ip in target_set: 35 | if ip in STATIC_EXCLUSIONS: 36 | continue 37 | status = redify('OFF') if ip in exclude_ips else greenify('ON ') 38 | if i % columns == 0: 39 | print(indent, end='') 40 | print(f'{status} {ip.ljust(20)}', end='\n' if (i % columns) == (columns - 1) else '') 41 | i += 1 42 | 43 | if i % columns > 0: 44 | print() 45 | 46 | 47 | def get_targets(service, all_targets=True, target_ips=RUNLOCAL_TARGETS, no_target_ips=None, ignore_exclusions=False): 48 | if no_target_ips is None: 49 | no_target_ips = set() 50 | 51 | no_target_ips = set(no_target_ips) 52 | target_ips = set(target_ips) 53 | if not ignore_exclusions: 54 | no_target_ips |= STATIC_EXCLUSIONS 55 | 56 | targets = request('GET', 'targets', params={} if service is None else {'service': service}) 57 | 58 | if not all_targets: 59 | targets = [t for t in targets if t['ip'] in target_ips] 60 | return [t for t in targets if t['ip'] not in no_target_ips] 61 | 62 | 63 | @app.command('ls', help='List exploit targets.', no_args_is_help=True) 64 | def exploit_target_ls( 65 | history_id: str = typer.Argument(..., help='History ID.') 66 | ): 67 | history = resolve_history(history_id) 68 | 69 | print_exploit_targets(history) 70 | 71 | 72 | def _exploit_target_on_off(history_id: str, target_ips: List[str], all_flag: bool, force: bool, on: bool): 73 | target_ips = set(target_ips) 74 | 75 | if all_flag and len(target_ips) > 0: 76 | print(f'{ERROR_STR}: you specified both --all and target IP(s), this is probably a mistake') 77 | raise typer.Exit(code=1) 78 | if not all_flag and len(target_ips) == 0: 79 | print(f'{ERROR_STR}: no target IP(s) specified (did you want --all?)') 80 | raise typer.Exit(code=1) 81 | 82 | history = resolve_history(history_id) 83 | 84 | targets = get_targets(history['service']) 85 | target_set = set([x['ip'] for x in targets]) 86 | 87 | current_exclude_ips = set(request('GET', f'exploit_history/{history_id}/exclusions')) 88 | 89 | for ip in target_ips: 90 | if ip in STATIC_EXCLUSIONS: 91 | print(f'{WARN_STR}: ignoring target {ip} (static exclusion)') 92 | elif ip not in target_set: 93 | if force: 94 | print(f'{WARN_STR}: unknown target {ip} (proceeding anyway)') 95 | else: 96 | print(f'{ERROR_STR}: unknown target {ip} (use --force to exclude anyway)') 97 | raise typer.Exit(code=1) 98 | 99 | if on: 100 | if all_flag: 101 | new_exclude_ips = set() 102 | else: 103 | new_exclude_ips = current_exclude_ips - (target_ips - STATIC_EXCLUSIONS) 104 | else: 105 | if all_flag: 106 | target_ips = target_set 107 | 108 | new_exclude_ips = 
current_exclude_ips | (target_ips - STATIC_EXCLUSIONS) 109 | 110 | if new_exclude_ips != current_exclude_ips: 111 | request('PUT', f'exploit_history/{history_id}/exclusions', data={ 112 | 'target_ips': list(new_exclude_ips), 113 | }) 114 | 115 | print_exploit_targets(history) 116 | 117 | 118 | @app.command('on', help='Turn on exploit targets.', no_args_is_help=True) 119 | def exploit_target_on( 120 | history_id: str = typer.Argument(..., help='History ID.'), 121 | target_ips: List[str] = typer.Argument(None, help='Target IP(s).'), 122 | all_flag: bool = typer.Option(False, '--all', help='Turn on all targets.') 123 | ): 124 | target_ips = target_ips if target_ips is not None else [] 125 | _exploit_target_on_off(history_id, target_ips, all_flag, True, True) 126 | 127 | exploit = resolve_exploit(history_id) 128 | 129 | targets = get_targets(exploit['history']['service'], target_ips=target_ips, all_targets=all_flag) 130 | 131 | job = request('POST', 'job', data={ 132 | 'targets': [target['id'] for target in targets], 133 | 'exploit_id': exploit['id'], 134 | 'manual_id': None, 135 | 'timeout': ROUND_TIME, 136 | }) 137 | print(f"Created job {job['id']} for targets {', '.join([target['ip'] for target in targets])}") 138 | 139 | @app.command('off', help='Turn off exploit targets.', no_args_is_help=True) 140 | def exploit_target_off( 141 | history_id: str = typer.Argument(..., help='History ID.'), 142 | target_ips: List[str] = typer.Argument(None, help='Target IP(s).'), 143 | all_flag: bool = typer.Option(False, '--all', help='Turn on all targets.'), 144 | force: bool = typer.Option(False, '--force', help= 145 | 'Allow exclusion of unknown targets.') 146 | ): 147 | target_ips = target_ips if target_ips is not None else [] 148 | _exploit_target_on_off(history_id, target_ips, all_flag, force, False) 149 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/flags.py: -------------------------------------------------------------------------------- 1 | import sys 2 | import time 3 | from typing import List, Optional 4 | from rich.live import Live 5 | from rich.table import Table 6 | from rich import print, box 7 | 8 | import typer 9 | 10 | from player_cli.ctfconfig_wrapper import RUNLOCAL_TARGETS 11 | from player_cli.exploit import get_targets 12 | from player_cli.util import request, ERROR_STR, magentify, greenify, blueify, redify, yellowfy 13 | 14 | app = typer.Typer(no_args_is_help=True) 15 | 16 | FLAG_STATUS_IS_FINAL = {'ok', 'duplicate', 'duplicate_not_submitted', 'nop', 'ownflag', 'inactive', 'invalid'} 17 | 18 | FLAG_STATUS_COLOR = { 19 | 'ok': greenify, 20 | 'queued': blueify, 21 | 'pending': blueify, 22 | 'duplicate': lambda x: x, 23 | 'duplicate_not_submitted': lambda x: x, 24 | 'unknown': redify, 25 | 'error': redify, 26 | 'nop': yellowfy, 27 | 'ownflag': yellowfy, 28 | 'inactive': lambda x: x, 29 | 'invalid': lambda x: x, 30 | } 31 | 32 | def generate_summary(flags) -> Table: 33 | table = Table(box=box.ROUNDED) 34 | 35 | categories = sorted(set([flag['status'] for flag in flags])) 36 | summary = {category: len([flag for flag in flags if flag['status'] == category]) for category 37 | in categories} 38 | 39 | for category in summary.keys(): 40 | table.add_column(FLAG_STATUS_COLOR[category](category)) 41 | 42 | table.add_row(*[str(count) for count in summary.values()]) 43 | return table 44 | 45 | def generate_flag_status_table(flags) -> Table: 46 | has_targets = any(['target' in flag for flag in flags]) 47 | 48 | table = 
Table(box=box.ROUNDED) 49 | # Print details 50 | table.add_column("ID") 51 | table.add_column("FLAG (duplicates hidden)") 52 | if has_targets: 53 | table.add_column("TARGET") 54 | table.add_column("STATUS") 55 | 56 | for flag in sorted(flags, key=lambda x: x['id']): 57 | # filter dupes 58 | if flag['status'] != 'duplicate_not_submitted': 59 | status_line = ' -> '.join([FLAG_STATUS_COLOR[s](s) for s in flag['status_list']]) 60 | if has_targets: 61 | table.add_row(str(flag['id']), flag['flag'], flag['target']['ip'], status_line) 62 | else: 63 | table.add_row(str(flag['id']), flag['flag'], status_line) 64 | 65 | return table 66 | 67 | 68 | def poll_and_show_flags(executions: int | list[int], force_detail=False, timeout=10, pollrate=0.5): 69 | if type(executions) == int: 70 | executions = [executions] 71 | 72 | flags = [flag for execution_id in executions for flag in request('GET', f'flag/execution/{execution_id}')] 73 | if len(flags) == 0: 74 | print("No flags found.") 75 | return 76 | 77 | flag_count = len(flags) 78 | intro = f'Submitted {flag_count} flags:' 79 | print(intro) 80 | 81 | old_flags = {flag['id']: flag | {"status_list": [flag['status']]} for flag in flags} 82 | 83 | finished_flags = {flag['id']: flag for flag in flags if flag['status'] in FLAG_STATUS_IS_FINAL} 84 | 85 | if len([x for x in old_flags.values() if x['status'] != 'duplicate_not_submitted']) > 20 and not force_detail: 86 | show_detail = False 87 | table_generator = generate_summary 88 | else: 89 | show_detail = True 90 | table_generator = generate_flag_status_table 91 | 92 | with Live(table_generator(old_flags.values()), auto_refresh=False) as live: 93 | for i in range(int(timeout / pollrate)): 94 | if len(flags) == len(finished_flags): 95 | break 96 | 97 | time.sleep(pollrate) 98 | 99 | flags = [flag for execution_id in executions for flag in 100 | request('GET', f'flag/execution/{execution_id}')] 101 | 102 | for new_flag in flags: 103 | if new_flag['id'] in old_flags: 104 | old_flag = old_flags[new_flag['id']] 105 | 106 | new_flag['status_list'] = old_flag['status_list'] 107 | if old_flag['status_list'][-1] != new_flag['status']: 108 | new_flag['status_list'] += [new_flag['status']] 109 | else: 110 | new_flag['status_list'] = [new_flag['status']] 111 | 112 | if new_flag['status'] in FLAG_STATUS_IS_FINAL: 113 | finished_flags[new_flag['id']] = new_flag 114 | 115 | live.update(table_generator(flags), refresh=True) 116 | old_flags = {flag['id']: flag for flag in flags} 117 | 118 | if show_detail: 119 | print(generate_summary(flags)) 120 | 121 | 122 | @app.command('submit', help='Submit flags.') 123 | def flag_submit( 124 | flags: List[str] = typer.Argument(None, help= 125 | 'Flags to submit. ' 126 | 'They will be extracted with the flag regex, so they can be dirty. ' 127 | 'If no flags are specified, stdin will be read until EOF and submitted.') 128 | ): 129 | if flags: 130 | data = '\n'.join(flags) 131 | else: 132 | data = sys.stdin.read() 133 | 134 | response = request('POST', 'flag/submit', data={ 135 | 'flags': data, 136 | }) 137 | 138 | print("Flags submitted. 
Polling for results, feel free to abort with CTRL-C") 139 | 140 | time.sleep(0.5) 141 | poll_and_show_flags(response['execution_id'], force_detail=True) 142 | 143 | 144 | @app.command("ids", help="Print flagids") 145 | def flag_ids( 146 | service: Optional[str] = typer.Argument(default=None, help='Service ID'), 147 | target_ips: List[str] = typer.Option(RUNLOCAL_TARGETS, '--target', '-T', help= 148 | 'Which Targets to print flag ids (you can specify this option multiple times).'), 149 | no_target_ips: List[str] = typer.Option([], '-N', '--no-target', help= 150 | 'Which Targets to exclude (you can specify this option multiple times).'), 151 | all_targets: bool = typer.Option(False, '--all-targets', help= 152 | 'All targets (overrides --target).'), 153 | ignore_exclusions: bool = typer.Option(True, help= 154 | 'Ignore static exclusions, i.e. excluding our own vulnbox.'), 155 | ): 156 | targets = get_targets(service=None, all_targets=all_targets, target_ips=target_ips, no_target_ips=no_target_ips, 157 | ignore_exclusions=ignore_exclusions) 158 | 159 | services = set([target['service'] for target in targets]) 160 | targets_by_service = {service: [target for target in targets if target['service'] == service] for service in 161 | services} 162 | 163 | if service is not None: 164 | if service not in services: 165 | print(f'{ERROR_STR}: unknown service "{service}". Available services: {magentify(", ".join(services))}.') 166 | raise typer.Exit(code=1) 167 | 168 | targets_by_service = {service: targets_by_service[service]} 169 | 170 | for service, targets in targets_by_service.items(): 171 | targets_summary = ', '.join(t['ip'] for t in targets) 172 | print(f'[*] Flag IDs for service {greenify(service)}') 173 | for t in targets: 174 | print(f' {blueify(t["ip"])} => {t["extra"]}') 175 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/service.py: -------------------------------------------------------------------------------- 1 | from rich import print 2 | 3 | import typer 4 | 5 | from player_cli.exploit.target import get_targets 6 | from player_cli.util import magentify 7 | 8 | app = typer.Typer(no_args_is_help=True) 9 | 10 | @app.command('ls', help='List services (legacy)') 11 | def service_ls(): 12 | targets = get_targets(None) 13 | services = set([target['service'] for target in targets]) 14 | 15 | print(f'Available services:\n' + magentify("\n".join(services))) 16 | -------------------------------------------------------------------------------- /ataka/player-cli/player_cli/util.py: -------------------------------------------------------------------------------- 1 | import os 2 | import re 3 | import time 4 | from datetime import datetime 5 | 6 | import player_cli 7 | import requests 8 | import typer 9 | from requests import JSONDecodeError 10 | from rich import print 11 | from rich.text import Text 12 | from rich.markup import escape 13 | 14 | 15 | CHECK_FOR_CMD = re.compile(r'CMD\s*\[\s*(.+)\s*\]') 16 | 17 | 18 | def colorfy(msg, color): 19 | return f'[bold {color}]{msg}[/bold {color}]' 20 | 21 | 22 | def magentify(msg): 23 | return colorfy(msg, typer.colors.MAGENTA) 24 | 25 | 26 | def blueify(msg): 27 | return colorfy(msg, typer.colors.BLUE) 28 | 29 | 30 | def greenify(msg): 31 | return colorfy(msg, typer.colors.GREEN) 32 | 33 | 34 | def redify(msg): 35 | return colorfy(msg, typer.colors.RED) 36 | 37 | 38 | def yellowfy(msg): 39 | return colorfy(msg, typer.colors.YELLOW) 40 | 41 | 42 | DEBUG_STR = colorfy('DEBUG', 'bright_yellow') 43 
50 | def request(method, endpoint, data=None, params=None):
51 |     if player_cli.state['bypass_tools']:
52 |         if player_cli.state['debug']:
53 |             print(f"{DEBUG_STR}: {yellowfy('BYPASS')} " + escape(f"{method} {endpoint}{'' if params is None else f' with params {params}'}"))
54 |             if data is not None:
55 |                 print(f"{DEBUG_STR}: {yellowfy('BYPASS')} " + escape(str(data)))
56 |             print(f"{DEBUG_STR}: ")
57 | 
58 |         result = player_cli.ctfconfig_wrapper.request(method, endpoint, data=data)
59 |         if player_cli.state['debug']:
60 |             print(f"{DEBUG_STR}: {yellowfy('BYPASS')} " + escape(str(result)))
61 |         return result
62 | 
63 |     url = f'http://{player_cli.state["host"]}/api/{endpoint}'
64 | 
65 |     if player_cli.state['debug']:
66 |         print(f"{DEBUG_STR}: " + escape(f"{method} {url}{'' if params is None else f' with params {params}'}"))
67 |         if data is not None:
68 |             print(f"{DEBUG_STR}: " + escape(str(data)))
69 |         print(f"{DEBUG_STR}: ")
70 | 
71 |     func = {
72 |         'GET': requests.get,
73 |         'PUT': requests.put,
74 |         'POST': requests.post,
75 |         'PATCH': requests.patch,
76 |     }[method]
77 |     response = func(url, json=data, params=params)
78 |     if player_cli.state['debug']:
79 |         print(f"{DEBUG_STR}: " + escape(f"{response.status_code} {response.reason}"))
80 |         print(f"{DEBUG_STR}: " + escape(str(response.json())))
81 | 
82 |     if response.status_code != 200:
83 |         print(f"{ERROR_STR}: " + escape(f"{method} {endpoint} returned status code {response.status_code} {response.reason}"))
84 |         try:
85 |             print(f"{ERROR_STR}: " + escape(str(response.json())))
86 |         except JSONDecodeError:
87 |             print(f"{ERROR_STR}: " + escape(response.text))
88 |         raise typer.Exit(code=1)
89 |     return response.json()
90 | 
91 | 
92 | def dt_from_iso(s):
93 |     return datetime.strptime(s, '%Y-%m-%dT%H:%M:%S.%f%z')
94 | 
95 | 
96 | def dt_to_local_str(dt):
97 |     # This can break in rare cases, but it's mostly fine.
98 |     # It avoids depending on dateutil.
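99 |     # The trick: measure this machine's local-vs-UTC offset at this particular
100 |     # timestamp (so DST is usually handled) and shift the datetime by it.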
101 |     epoch = time.mktime(dt.timetuple())
102 |     offset = datetime.fromtimestamp(epoch) - datetime.utcfromtimestamp(epoch)
103 |     local_dt = dt + offset
104 |     return local_dt.strftime('%Y-%m-%d %H:%M:%S')
105 | 
106 | 
107 | def make_executable(path):
108 |     mode = os.stat(path).st_mode
109 |     # copy R bits to X, e.g. 0o644 -> 0o755
110 |     mode |= (mode & 0o444) >> 2
111 |     os.chmod(path, mode)
112 | 
113 | 
114 | def highlight_flags(s, func):
115 |     repl = lambda m: func(m.group(0))
116 |     return player_cli.ctfconfig_wrapper.FLAG_FINDER.sub(repl, s)
117 | 
118 | 
119 | def parse_dockerfile_cmd(content: str) -> list[str] | None:
120 |     """Extracts the CMD block from a Dockerfile and parses it.
121 | 
122 |     Args:
123 |         content (str): The content of the Dockerfile
124 | 
125 |     Returns:
126 |         list[str] | None: Either a list containing the program and
127 |             its arguments, in a special use case an empty list
128 |             (see examples), or None
129 | 
130 |     Usage examples:
131 |         >>> parse_dockerfile_cmd('CMD [ "prog"]')
132 |         ['prog']
133 |         >>> parse_dockerfile_cmd("CMD [ 'prog','arg1']")
134 |         ['prog', 'arg1']
135 |         >>> parse_dockerfile_cmd('CMD [ "prog", \\'arg1\\']')
136 |         ['prog', 'arg1']
137 |         >>> parse_dockerfile_cmd("CMD [ \\"prog\\", 'arg1']")
138 |         ['prog', 'arg1']
139 |         >>> parse_dockerfile_cmd('CMD []') is None
140 |         True
141 |         >>> parse_dockerfile_cmd('CMD [ ]')  # In this case, an empty list is returned
142 |         []
143 |     """
144 |     matches = CHECK_FOR_CMD.findall(content)
145 |     if matches:
146 |         ret_arguments = []
147 |         for argument in matches[-1].split(","):
148 |             argument = argument.strip()
149 |             # If the length is zero, don't add an empty string
150 |             if len(argument) == 0:
151 |                 continue
152 | 
153 |             ret_arguments.append(
154 |                 argument[1:-1]  # strip the surrounding quotes
155 |             )
156 | 
157 |         return ret_arguments
158 |     return None
159 | 
--------------------------------------------------------------------------------
/ataka/player-cli/requirements.txt:
--------------------------------------------------------------------------------
1 | typer[all]
2 | requests
--------------------------------------------------------------------------------
/ataka/player-cli/templates/python/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM python:latest
2 | 
3 | RUN pip install --no-cache-dir --upgrade pip
4 | 
5 | RUN pip install --no-cache-dir pwntools cryptography beautifulsoup4 requests
6 | 
7 | WORKDIR /exploit
8 | 
9 | COPY requirements.txt .
10 | RUN pip install --no-cache-dir -r requirements.txt
11 | 
12 | COPY . .
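13 | # Keep the CMD below in exec form ("CMD [ ... ]"): player-cli's
14 | # util.parse_dockerfile_cmd only recognises that bracketed syntax.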
15 | 
16 | CMD [ "python3", "exploit.py" ]
17 | 
--------------------------------------------------------------------------------
/ataka/player-cli/templates/python/exploit.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python3
2 | 
3 | import os
4 | import json
5 | import time
6 | import random
7 | import string
8 | 
9 | # Work around rate limits
10 | # time.sleep(random.uniform(0, 10))
11 | # Remember to sleep at least 1s between subsequent TCP connections within this exploit
12 | 
13 | ### HELPERS ###
14 | def gen_random(n):
15 |     return ''.join(random.choice(string.ascii_letters) for _ in range(n))
16 | 
17 | useragent = random.choice([
18 |     "python-requests/2.28.0",
19 |     "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:98.0) Gecko/20100101 Firefox/98.0",
20 |     "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.88 Safari/537.36",
21 |     "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36",
22 |     "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.198 Safari/537.36",
23 |     "Mozilla/5.0 (X11; Linux x86_64; rv:100.0) Gecko/20100101 Firefox/100.0",
24 |     "Mozilla/5.0 (X11; Linux x86_64; rv:91.0) Gecko/20100101 Firefox/91.0",
25 |     "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36",
26 |     "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.0.0 Safari/537.36",
27 |     "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:100.0) Gecko/20100101 Firefox/100.0",
28 | ])
29 | 
30 | # Pick what you need; some examples are given here
31 | headers = {'User-Agent': useragent,
32 |            'Accept-Language': 'en-US,en;q=0.5',
33 |            'Content-Type': 'application/x-www-form-urlencoded',
34 |            'Upgrade-Insecure-Requests': '1'}
35 | 
36 | ### END HELPERS ###
37 | 
38 | HOST = os.getenv('TARGET_IP')
39 | EXTRA = json.loads(os.getenv('TARGET_EXTRA', '[]'))
40 | 
41 | print('ECSC_AAAAAAAAABBBBBBBBCCCCCCCDDDDDDDD')
42 | 
--------------------------------------------------------------------------------
/ataka/player-cli/templates/python/requirements.txt:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OpenAttackDefenseTools/ataka/8e345a2c9ba981887a7fed21be71489e96d5fc69/ataka/player-cli/templates/python/requirements.txt
--------------------------------------------------------------------------------
/ataka/player-cli/templates/ubuntu/Dockerfile:
--------------------------------------------------------------------------------
1 | FROM ubuntu:latest
2 | 
3 | # Update and install in the same layer so the package index is never stale
4 | RUN apt-get update && apt-get install -y curl
5 | 
6 | WORKDIR /exploit
7 | 
8 | COPY . .
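9 | # ataka passes the target to the exploit via the TARGET_IP and TARGET_EXTRA
10 | # environment variables (see exploit.sh and the python template).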
11 | 
12 | CMD [ "bash", "exploit.sh" ]
13 | 
--------------------------------------------------------------------------------
/ataka/player-cli/templates/ubuntu/exploit.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | curl "$TARGET_IP"
4 | 
--------------------------------------------------------------------------------
/data/exploits/.keep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OpenAttackDefenseTools/ataka/8e345a2c9ba981887a7fed21be71489e96d5fc69/data/exploits/.keep
--------------------------------------------------------------------------------
/data/persist/.keep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OpenAttackDefenseTools/ataka/8e345a2c9ba981887a7fed21be71489e96d5fc69/data/persist/.keep
--------------------------------------------------------------------------------
/data/shared/exploits/.keep:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/OpenAttackDefenseTools/ataka/8e345a2c9ba981887a7fed21be71489e96d5fc69/data/shared/exploits/.keep
--------------------------------------------------------------------------------
/docker-compose.yml:
--------------------------------------------------------------------------------
1 | version: '3'
2 | 
3 | #
4 | # Have a look at the .env file -> used for configuration
5 | #
6 | 
7 | services:
8 |   rabbitmq:
9 |     image: rabbitmq:3.9-management-alpine
10 |     restart: always
11 |     init: true
12 |     expose:
13 |       - "15672"
14 |       - "5672"
15 |     hostname: ataka-rabbitmq
16 |     networks:
17 |       - ataka
18 |     environment:
19 |       RABBITMQ_DEFAULT_USER: ${RABBITMQ_USER}
20 |       RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASSWORD}
21 |     volumes:
22 |       - rabbitmq-data:/var/lib/rabbitmq:rw
23 | 
24 |   postgres:
25 |     image: postgres:14-alpine
26 |     restart: always
27 |     init: true
28 |     hostname: ataka-postgres
29 |     ports:
30 |       - "5432:5432"
31 |     networks:
32 |       - ataka
33 |     environment:
34 |       POSTGRES_USER: ${POSTGRES_USER}
35 |       POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
36 |     volumes:
37 |       - postgres-data:/var/lib/postgresql/data:rw
38 | 
39 |   adminer:
40 |     image: adminer:latest
41 |     restart: always
42 |     ports:
43 |       - "8080:8080"
44 |     hostname: ataka-adminer
45 |     networks:
46 |       - ataka
47 |     environment:
48 |       ADMINER_DEFAULT_SERVER: postgres
49 | 
50 |   api:
51 |     image: openattackdefensetools/ataka-api
52 |     restart: always
53 |     init: true
54 |     user: $USERID
55 |     build:
56 |       context: ./
57 |       dockerfile: ataka/api/Dockerfile
58 |     volumes:
59 |       - ${DATA_STORE}/shared:/data/shared:rw,z
60 |       - ${DATA_STORE}/exploits:/data/exploits:rw,z
61 |     ports:
62 |       - "8000:8000"
63 |     security_opt:
64 |       - label:disable
65 |     depends_on:
66 |       - postgres
67 |       - rabbitmq
68 |     hostname: ataka-api
69 |     networks:
70 |       - ataka
71 |     env_file:
72 |       - .env
73 | 
74 |   executor:
75 |     image: openattackdefensetools/ataka-executor
76 |     restart: always
77 |     init: true
78 |     user: $USERID
79 |     build:
80 |       context: ./
81 |       dockerfile: ataka/executor/Dockerfile
82 |     volumes:
83 |       - /var/run/docker.sock:/run/docker.sock:rw
84 |       - ${DATA_STORE}/shared:/data/shared:rw,z
85 |       - ${DATA_STORE}/exploits:/data/exploits:rw,z
86 |       - ${DATA_STORE}/persist:/data/persist:rw,z
87 |     security_opt:
88 |       - label:disable
89 |     depends_on:
90 |       - postgres
91 |       - rabbitmq
92 |     hostname: ataka-executor
93 |     networks:
94 |       - ataka
95 |     env_file:
96 |       - .env
97 | 
98 |   exploit:
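99 |     # Idle placeholder container on the exploit network; presumably used as
100 |     # an anchor for the exploit containers started by the executor.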
101 |     image: alpine:latest
102 |     command: "sleep infinity"
103 |     restart: always
104 |     init: true
105 |     user: $USERID
106 |     container_name: ataka-exploit
107 |     networks:
108 |       - ataka-exploits
109 | 
110 |   ctfcode:
111 |     image: openattackdefensetools/ataka-ctfcode
112 |     restart: always
113 |     init: true
114 |     user: $USERID
115 |     build:
116 |       context: ./
117 |       dockerfile: ataka/ctfcode/Dockerfile
118 |     volumes:
119 |       - ./ataka/ctfconfig:/ataka/ctfconfig:ro
120 |       - ./ataka/player-cli:/ataka/player-cli:ro
121 |       - ${DATA_STORE}/shared:/data/shared:rw,z
122 |     security_opt:
123 |       - label:disable
124 |     depends_on:
125 |       - postgres
126 |       - rabbitmq
127 |     hostname: ataka-ctfcode
128 |     networks:
129 |       - ataka
130 |     env_file:
131 |       - .env
132 | 
133 |   cli:
134 |     image: openattackdefensetools/ataka-cli
135 |     restart: always
136 |     init: true
137 |     user: $USERID
138 |     build:
139 |       context: ./
140 |       dockerfile: ataka/cli/Dockerfile
141 |     depends_on:
142 |       - postgres
143 |       - rabbitmq
144 |     hostname: ataka-cli
145 |     networks:
146 |       - ataka
147 |     env_file:
148 |       - .env
149 | 
150 | volumes:
151 |   postgres-data:
152 |   rabbitmq-data:
153 | 
154 | 
155 | networks:
156 |   ataka:
157 |     driver: bridge
158 |     driver_opts:
159 |       com.docker.network.bridge.name: br-ataka
160 |     enable_ipv6: true
161 |     ipam:
162 |       config:
163 |         - subnet: 2001:0DB8::/112
164 | 
165 |   ataka-exploits:
166 |     driver: bridge
167 |     driver_opts:
168 |       com.docker.network.bridge.name: br-exploits
169 |     enable_ipv6: true
170 |     ipam:
171 |       config:
172 |         - subnet: 2001:0DB9::/112
173 | 
--------------------------------------------------------------------------------
/reload.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | 
3 | # Trigger a hot-reload of the ctfconfig: the ctfcode container reacts to SIGUSR1
4 | docker compose kill -s SIGUSR1 ctfcode
5 | 
--------------------------------------------------------------------------------