├── .env ├── README.md ├── cleanup.sh ├── deployments ├── cloud.yml ├── cloud_sawmill.yml ├── halos.yml ├── images │ ├── crowdcontrol │ │ └── Dockerfile │ ├── proxy │ │ ├── Dockerfile │ │ ├── README.md │ │ ├── nginx.conf │ │ ├── passwd │ │ └── servers │ │ │ ├── chainsaw.conf │ │ │ ├── crowdcontrol.conf │ │ │ ├── library.conf │ │ │ ├── pwnboard.conf │ │ │ ├── sawmill.conf │ │ │ └── spin.conf │ ├── pwnboard │ │ ├── README.md │ │ └── board.json │ ├── sangheili │ │ ├── Dockerfile │ │ └── config.yml │ └── theark │ │ └── config.yml └── internal.yml ├── docs ├── cloud.md └── internal.md └── topology.json /.env: -------------------------------------------------------------------------------- 1 | SAWMILL_HOST=logs.pwnboard.win 2 | SAWMILL_PORT=5000 3 | PWNBOARD_URL=http://pwnboard.win 4 | PWNBOARD_PORT=80 5 | 6 | # Internal deployment 7 | ARK_HOST=192.168.7.224 8 | ARK_PORT=8080 9 | 10 | # Generic passwords for all the services 11 | REDTEAM_USER=redteam 12 | REDTEAM_PASS=breakthings -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # RedTeamDeploy 2 | Deploy redteam infrastructure using Docker Compose. RTD focuses on allowing infrastructure to be rapidly set up and deployed. 3 | RTD has multiple [deployment options](#deployment-options). Each deployment is a separate docker-compose file that is intended to be run on its own server. 4 | 5 | 6 | ## Notes and Warnings 7 | __Modification__ 8 | These services are meant to work together as a group. If you want to deploy a single service from the list, it is best to use that service's Dockerfile directly rather than reusing only parts of these deployments. 9 | 10 | __Security__ 11 | These services are meant for Red/Blue security competitions, where they will run for, at most, a few days. They are hacked together, prone to crashing, and may contain security issues and not-best-practice deployments. __THEY ARE NOT MEANT FOR PRODUCTION OF ANY KIND__ 12 | 13 | __Scale__ 14 | These services are, for the most part, very small. For the cloud deployment, all of the services can reasonably be deployed on a box with 4GB of RAM and a few GB of storage. You COULD run them on separate hosts, but for the most part that should not be necessary. 15 | 16 | __Improvements__ 17 | There are other tools which could help with our deployments. Investigate these further to determine usability: 18 | - https://github.com/jwilder/nginx-proxy 19 | - https://github.com/khast3x/Redcloud 20 | 21 | ## Deployment Options 22 | 23 | ### Cloud Deployment 24 | These servers are meant to be external upstream services and aggregation points. They are deployed publicly with domain names. Each one is served on port 80, reverse proxied behind an NGINX container. See [Cloud Deployment](docs/cloud.md) for full deployment information. 25 | 26 | - [Crowd Control](https://github.com/degenerat3/crowdcontrol) - Command and Control Server 27 | - [Chainsaw](https://github.com/degenerat3/chainsaw) - Victim information collector and forwarder 28 | - [The Library](https://github.com/RITRedteam/TheLibrary) - Redteam CDN and link generator 29 | - [Pwnboard](https://github.com/micahjmartin/pwnboard) - Beacon and Access tracking 30 | 31 | - [Sawmill](https://github.com/RITRedteam/Sawmill) _(OPTIONAL)_ - Redteam Logging server 32 | 33 | 34 | ### Internal Deployment 35 | Deploys internal services that require private IP addresses.
Most often deployed day-of on a local machine. See [Internal Deployment](docs/internal.md) for full deployment information. 36 | 37 | - [Sangheili](https://github.com/ritredteam/sangheili) - Proxying service 38 | - [The Ark](https://github.com/RITRedteam/TheArk) - Internal IP address management 39 | - Multiple [Halos](https://github.com/ritredteam/TheArkHalo) - Reverse proxying services for each C2 as needed 40 | 41 | 42 | ### Running and Setup 43 | Running a deployment option should be as simple as starting the corresponding docker-compose file. But first, make sure you have completed the following steps to get your competition going: 44 | - [ ] Update the [pwnboard topology](deployments/images/pwnboard/README.md) 45 | - [ ] Set up the correct domains in [`images/proxy/nginx.conf`](deployments/images/proxy/nginx.conf). Each service needs a unique domain name. 46 | - [ ] Forward all your domains to the correct IP address 47 | - [ ] Change the passwords for each of the relevant services: 48 | - [Sawmill](deployments/images/proxy/README.md) 49 | - [The Library](.env) 50 | - [The Ark](.env) 51 | - [ ] Update the Sawmill domain in the [env](.env) (or leave it blank) 52 | 53 | Once everything is completed, you may run the following commands to get it up and running: 54 | ``` 55 | docker-compose -f deployments/cloud.yml build 56 | docker-compose -f deployments/cloud.yml up -d 57 | ``` 58 | 59 | 60 | 61 | ### TODO 62 | * Add Ark password and token values 63 | * Test internal deployment 64 | * Get Sangheili to point to the Ark properly 65 | * Develop a testing script for all the services as a healthcheck 66 | 67 | 68 | -------------------------------------------------------------------------------- /cleanup.sh: -------------------------------------------------------------------------------- 1 | # Remove duplicate IPs from a host 2 | ip a | grep "inet 192.168" | grep "ark" | awk '{print $2}' > /tmp/ips 3 | while read p; do ip a del $p dev ens33; done < /tmp/ips -------------------------------------------------------------------------------- /deployments/images/crowdcontrol/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:latest 2 | RUN apk add --update python3 tzdata git 3 | 4 | # Set the timezone 5 | ENV TZ=America/New_York 6 | RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone 7 | 8 | RUN git clone https://github.com/degenerat3/crowdcontrol /opt/cc 9 | 10 | WORKDIR /opt/cc/server 11 | RUN pip3 install -r requirements.txt 12 | 13 | # Set up the data directories for the app 14 | RUN mkdir -p /tmp/cc/hosts 15 | RUN mkdir -p /tmp/cc/windows/hosts 16 | RUN touch /tmp/cc/calls.log /tmp/cc/tasks.log /tmp/cc/windows/calls.log /tmp/cc/windows/tasks.log 17 | 18 | # Run the app 19 | CMD ["python3", "app.py"] 20 | -------------------------------------------------------------------------------- /deployments/images/proxy/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM nginx:alpine 2 | 3 | # Copy the base config 4 | COPY nginx.conf /etc/nginx/nginx.conf 5 | RUN rm /etc/nginx/conf.d/default.conf -------------------------------------------------------------------------------- /deployments/images/proxy/README.md: -------------------------------------------------------------------------------- 1 | # Change password 2 | 3 | TODO: Show how to generate a passwd file -------------------------------------------------------------------------------- /deployments/images/proxy/nginx.conf: -------------------------------------------------------------------------------- 1 | worker_processes 5; ## Default: 1 2 | #pid logs/nginx.pid; 3 | 4 | events { 5 | worker_connections 4096; 6 | } 7 | 8 | http { 9 | #sendfile on; 10 | #tcp_nopush on; 11 | 12 | # Default server block 13 | server { 14 | listen 80 default_server; 15 | server_name _; 16 | return 502; 17 | } 18 | 19 | include /etc/nginx/conf.d/*.conf;
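    # NOTE (assumption, not stated elsewhere in this repo): the per-service vhosts in images/proxy/servers/ are presumably copied or mounted into /etc/nginx/conf.d/ by the cloud compose file so the include above picks them up.
    # The passwd file referenced by sawmill.conf (/etc/nginx/passwd) uses apr1 hashes; a replacement entry can be generated with, e.g., `openssl passwd -apr1 'newpassword'` and written as "redteam:<hash>" in images/proxy/passwd.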
20 | } 21 | -------------------------------------------------------------------------------- /deployments/images/proxy/passwd: -------------------------------------------------------------------------------- 1 | redteam:$apr1$8ywBCRDS$KltRSHV27uToaQSt/JMEW/ 2 | -------------------------------------------------------------------------------- /deployments/images/proxy/servers/chainsaw.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name reach.c2the.world chainsaw.c2the.world; 4 | location / { 5 | proxy_pass http://chainsaw:5000; 6 | proxy_set_header Host $host; 7 | } 8 | } -------------------------------------------------------------------------------- /deployments/images/proxy/servers/crowdcontrol.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name cc.c2the.world; 4 | location / { 5 | proxy_pass http://crowdcontrol:5000; 6 | proxy_set_header Host $host; 7 | } 8 | } -------------------------------------------------------------------------------- /deployments/images/proxy/servers/library.conf: -------------------------------------------------------------------------------- 1 | server { 2 | client_max_body_size 0; # Allows uploads of any size to the library 3 | listen 80; 4 | server_name cdn.c2the.world; 5 | location / { 6 | proxy_pass http://library:5000; 7 | proxy_set_header Host $host; 8 | } 9 | } -------------------------------------------------------------------------------- /deployments/images/proxy/servers/pwnboard.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name pwnboard.win; 4 | location / { 5 | proxy_pass http://pwnboard:5000; 6 | proxy_set_header Host $host; 7 | } 8 | } -------------------------------------------------------------------------------- /deployments/images/proxy/servers/sawmill.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name logs.pwnboard.win; 4 | 5 | # Default credentials: 6 | # redteam:letredin 7 | auth_basic "Super secure login"; 8 | auth_basic_user_file /etc/nginx/passwd; 9 | 10 | location / { 11 | proxy_pass http://kibana:5601; 12 | proxy_set_header Host $host; 13 | } 14 | } 15 | -------------------------------------------------------------------------------- /deployments/images/proxy/servers/spin.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name spin.pwnboard.win; 4 | location / { 5 | proxy_pass http://spin; 6 | proxy_set_header Host $host; 7 | } 8 | } -------------------------------------------------------------------------------- /deployments/images/pwnboard/README.md: -------------------------------------------------------------------------------- 1 | # Pwnboard Setup 2 | 3 | ## Board Setup 4 | The pwnboard requires a topology file specifying which IP addresses are in the competition. 5 | The topology is a generic format used by all the different tools for consistency. If a topology for the competition does not exist yet, you may generate one or modify an existing one using [Topology-Generator](https://github.com/RITRedteam/Topology-Generator). 6 | 7 | 8 | Once you have a base topology file, a "board" file needs to be generated for the pwnboard.
The board file allows for modification of a single IP address and is much more of a verbose configuration file than the generic topology file. The script to convert a topo file into a board file can be downloaded [here](https://github.com/micahjmartin/pwnboard/blob/master/scripts/gen_config.py) 9 | 10 | ``` 11 | curl https://raw.githubusercontent.com/micahjmartin/pwnboard/master/scripts/gen_config.py > gen_config.py 12 | ``` 13 | 14 | Now convert the topology file using the following command 15 | ``` 16 | python3 gen_config.py topology.json deployments/images/pwnboard/board.json 17 | ``` 18 | 19 | Now when the docker image is built, it will have the correct configuration. 20 | 21 | ## Notes about the pwnboard 22 | 23 | Make sure that `SAWMILL_HOST` is properly set in the [env file](../../../.env) or PWNBOARD will lag 24 | when trying to send SYSLOGS to it. If no instance of Sawmill is running, leave it blank. 25 | -------------------------------------------------------------------------------- /deployments/images/pwnboard/board.json: -------------------------------------------------------------------------------- 1 | { 2 | "teams": [ 3 | 1, 4 | 2, 5 | 3, 6 | 4, 7 | 5, 8 | 6, 9 | 7, 10 | 8, 11 | 9, 12 | 10, 13 | 11, 14 | 12, 15 | 13, 16 | 14, 17 | 15 18 | ], 19 | "board": [ 20 | { 21 | "name": "Linux A", 22 | "hosts": [ 23 | { 24 | "ip": "10.1.1.10" 25 | }, 26 | { 27 | "ip": "10.2.1.10" 28 | }, 29 | { 30 | "ip": "10.3.1.10" 31 | }, 32 | { 33 | "ip": "10.4.1.10" 34 | }, 35 | { 36 | "ip": "10.5.1.10" 37 | }, 38 | { 39 | "ip": "10.6.1.10" 40 | }, 41 | { 42 | "ip": "10.7.1.10" 43 | }, 44 | { 45 | "ip": "10.8.1.10" 46 | }, 47 | { 48 | "ip": "10.9.1.10" 49 | }, 50 | { 51 | "ip": "10.10.1.10" 52 | }, 53 | { 54 | "ip": "10.11.1.10" 55 | }, 56 | { 57 | "ip": "10.12.1.10" 58 | }, 59 | { 60 | "ip": "10.13.1.10" 61 | }, 62 | { 63 | "ip": "10.14.1.10" 64 | }, 65 | { 66 | "ip": "10.15.1.10" 67 | } 68 | ] 69 | }, 70 | { 71 | "name": "Linux B", 72 | "hosts": [ 73 | { 74 | "ip": "10.1.1.20" 75 | }, 76 | { 77 | "ip": "10.2.1.20" 78 | }, 79 | { 80 | "ip": "10.3.1.20" 81 | }, 82 | { 83 | "ip": "10.4.1.20" 84 | }, 85 | { 86 | "ip": "10.5.1.20" 87 | }, 88 | { 89 | "ip": "10.6.1.20" 90 | }, 91 | { 92 | "ip": "10.7.1.20" 93 | }, 94 | { 95 | "ip": "10.8.1.20" 96 | }, 97 | { 98 | "ip": "10.9.1.20" 99 | }, 100 | { 101 | "ip": "10.10.1.20" 102 | }, 103 | { 104 | "ip": "10.11.1.20" 105 | }, 106 | { 107 | "ip": "10.12.1.20" 108 | }, 109 | { 110 | "ip": "10.13.1.20" 111 | }, 112 | { 113 | "ip": "10.14.1.20" 114 | }, 115 | { 116 | "ip": "10.15.1.20" 117 | } 118 | ] 119 | }, 120 | { 121 | "name": "Linux C", 122 | "hosts": [ 123 | { 124 | "ip": "10.1.1.30" 125 | }, 126 | { 127 | "ip": "10.2.1.30" 128 | }, 129 | { 130 | "ip": "10.3.1.30" 131 | }, 132 | { 133 | "ip": "10.4.1.30" 134 | }, 135 | { 136 | "ip": "10.5.1.30" 137 | }, 138 | { 139 | "ip": "10.6.1.30" 140 | }, 141 | { 142 | "ip": "10.7.1.30" 143 | }, 144 | { 145 | "ip": "10.8.1.30" 146 | }, 147 | { 148 | "ip": "10.9.1.30" 149 | }, 150 | { 151 | "ip": "10.10.1.30" 152 | }, 153 | { 154 | "ip": "10.11.1.30" 155 | }, 156 | { 157 | "ip": "10.12.1.30" 158 | }, 159 | { 160 | "ip": "10.13.1.30" 161 | }, 162 | { 163 | "ip": "10.14.1.30" 164 | }, 165 | { 166 | "ip": "10.15.1.30" 167 | } 168 | ] 169 | }, 170 | { 171 | "name": "Windows A", 172 | "hosts": [ 173 | { 174 | "ip": "10.1.1.40" 175 | }, 176 | { 177 | "ip": "10.2.1.40" 178 | }, 179 | { 180 | "ip": "10.3.1.40" 181 | }, 182 | { 183 | "ip": "10.4.1.40" 184 | }, 185 | { 186 | "ip": "10.5.1.40" 187 | }, 188 | { 189 | "ip": 
"10.6.1.40" 190 | }, 191 | { 192 | "ip": "10.7.1.40" 193 | }, 194 | { 195 | "ip": "10.8.1.40" 196 | }, 197 | { 198 | "ip": "10.9.1.40" 199 | }, 200 | { 201 | "ip": "10.10.1.40" 202 | }, 203 | { 204 | "ip": "10.11.1.40" 205 | }, 206 | { 207 | "ip": "10.12.1.40" 208 | }, 209 | { 210 | "ip": "10.13.1.40" 211 | }, 212 | { 213 | "ip": "10.14.1.40" 214 | }, 215 | { 216 | "ip": "10.15.1.40" 217 | } 218 | ] 219 | }, 220 | { 221 | "name": "Windows B", 222 | "hosts": [ 223 | { 224 | "ip": "10.1.1.50" 225 | }, 226 | { 227 | "ip": "10.2.1.50" 228 | }, 229 | { 230 | "ip": "10.3.1.50" 231 | }, 232 | { 233 | "ip": "10.4.1.50" 234 | }, 235 | { 236 | "ip": "10.5.1.50" 237 | }, 238 | { 239 | "ip": "10.6.1.50" 240 | }, 241 | { 242 | "ip": "10.7.1.50" 243 | }, 244 | { 245 | "ip": "10.8.1.50" 246 | }, 247 | { 248 | "ip": "10.9.1.50" 249 | }, 250 | { 251 | "ip": "10.10.1.50" 252 | }, 253 | { 254 | "ip": "10.11.1.50" 255 | }, 256 | { 257 | "ip": "10.12.1.50" 258 | }, 259 | { 260 | "ip": "10.13.1.50" 261 | }, 262 | { 263 | "ip": "10.14.1.50" 264 | }, 265 | { 266 | "ip": "10.15.1.50" 267 | } 268 | ] 269 | }, 270 | { 271 | "name": "AD", 272 | "hosts": [ 273 | { 274 | "ip": "10.1.1.60" 275 | }, 276 | { 277 | "ip": "10.2.1.60" 278 | }, 279 | { 280 | "ip": "10.3.1.60" 281 | }, 282 | { 283 | "ip": "10.4.1.60" 284 | }, 285 | { 286 | "ip": "10.5.1.60" 287 | }, 288 | { 289 | "ip": "10.6.1.60" 290 | }, 291 | { 292 | "ip": "10.7.1.60" 293 | }, 294 | { 295 | "ip": "10.8.1.60" 296 | }, 297 | { 298 | "ip": "10.9.1.60" 299 | }, 300 | { 301 | "ip": "10.10.1.60" 302 | }, 303 | { 304 | "ip": "10.11.1.60" 305 | }, 306 | { 307 | "ip": "10.12.1.60" 308 | }, 309 | { 310 | "ip": "10.13.1.60" 311 | }, 312 | { 313 | "ip": "10.14.1.60" 314 | }, 315 | { 316 | "ip": "10.15.1.60" 317 | } 318 | ] 319 | }, 320 | { 321 | "name": "pfsense", 322 | "hosts": [ 323 | { 324 | "ip": "10.1.2.1" 325 | }, 326 | { 327 | "ip": "10.2.2.1" 328 | }, 329 | { 330 | "ip": "10.3.2.1" 331 | }, 332 | { 333 | "ip": "10.4.2.1" 334 | }, 335 | { 336 | "ip": "10.5.2.1" 337 | }, 338 | { 339 | "ip": "10.6.2.1" 340 | }, 341 | { 342 | "ip": "10.7.2.1" 343 | }, 344 | { 345 | "ip": "10.8.2.1" 346 | }, 347 | { 348 | "ip": "10.9.2.1" 349 | }, 350 | { 351 | "ip": "10.10.2.1" 352 | }, 353 | { 354 | "ip": "10.11.2.1" 355 | }, 356 | { 357 | "ip": "10.12.2.1" 358 | }, 359 | { 360 | "ip": "10.13.2.1" 361 | }, 362 | { 363 | "ip": "10.14.2.1" 364 | }, 365 | { 366 | "ip": "10.15.2.1" 367 | } 368 | ] 369 | }, 370 | { 371 | "name": "Web", 372 | "hosts": [ 373 | { 374 | "ip": "10.1.2.2" 375 | }, 376 | { 377 | "ip": "10.2.2.2" 378 | }, 379 | { 380 | "ip": "10.3.2.2" 381 | }, 382 | { 383 | "ip": "10.4.2.2" 384 | }, 385 | { 386 | "ip": "10.5.2.2" 387 | }, 388 | { 389 | "ip": "10.6.2.2" 390 | }, 391 | { 392 | "ip": "10.7.2.2" 393 | }, 394 | { 395 | "ip": "10.8.2.2" 396 | }, 397 | { 398 | "ip": "10.9.2.2" 399 | }, 400 | { 401 | "ip": "10.10.2.2" 402 | }, 403 | { 404 | "ip": "10.11.2.2" 405 | }, 406 | { 407 | "ip": "10.12.2.2" 408 | }, 409 | { 410 | "ip": "10.13.2.2" 411 | }, 412 | { 413 | "ip": "10.14.2.2" 414 | }, 415 | { 416 | "ip": "10.15.2.2" 417 | } 418 | ] 419 | }, 420 | { 421 | "name": "DB", 422 | "hosts": [ 423 | { 424 | "ip": "10.1.2.3" 425 | }, 426 | { 427 | "ip": "10.2.2.3" 428 | }, 429 | { 430 | "ip": "10.3.2.3" 431 | }, 432 | { 433 | "ip": "10.4.2.3" 434 | }, 435 | { 436 | "ip": "10.5.2.3" 437 | }, 438 | { 439 | "ip": "10.6.2.3" 440 | }, 441 | { 442 | "ip": "10.7.2.3" 443 | }, 444 | { 445 | "ip": "10.8.2.3" 446 | }, 447 | { 448 | "ip": "10.9.2.3" 449 | }, 450 | { 451 | "ip": 
"10.10.2.3" 452 | }, 453 | { 454 | "ip": "10.11.2.3" 455 | }, 456 | { 457 | "ip": "10.12.2.3" 458 | }, 459 | { 460 | "ip": "10.13.2.3" 461 | }, 462 | { 463 | "ip": "10.14.2.3" 464 | }, 465 | { 466 | "ip": "10.15.2.3" 467 | } 468 | ] 469 | }, 470 | { 471 | "name": "FTP", 472 | "hosts": [ 473 | { 474 | "ip": "10.1.2.4" 475 | }, 476 | { 477 | "ip": "10.2.2.4" 478 | }, 479 | { 480 | "ip": "10.3.2.4" 481 | }, 482 | { 483 | "ip": "10.4.2.4" 484 | }, 485 | { 486 | "ip": "10.5.2.4" 487 | }, 488 | { 489 | "ip": "10.6.2.4" 490 | }, 491 | { 492 | "ip": "10.7.2.4" 493 | }, 494 | { 495 | "ip": "10.8.2.4" 496 | }, 497 | { 498 | "ip": "10.9.2.4" 499 | }, 500 | { 501 | "ip": "10.10.2.4" 502 | }, 503 | { 504 | "ip": "10.11.2.4" 505 | }, 506 | { 507 | "ip": "10.12.2.4" 508 | }, 509 | { 510 | "ip": "10.13.2.4" 511 | }, 512 | { 513 | "ip": "10.14.2.4" 514 | }, 515 | { 516 | "ip": "10.15.2.4" 517 | } 518 | ] 519 | }, 520 | { 521 | "name": "Gitlab", 522 | "hosts": [ 523 | { 524 | "ip": "10.1.2.5" 525 | }, 526 | { 527 | "ip": "10.2.2.5" 528 | }, 529 | { 530 | "ip": "10.3.2.5" 531 | }, 532 | { 533 | "ip": "10.4.2.5" 534 | }, 535 | { 536 | "ip": "10.5.2.5" 537 | }, 538 | { 539 | "ip": "10.6.2.5" 540 | }, 541 | { 542 | "ip": "10.7.2.5" 543 | }, 544 | { 545 | "ip": "10.8.2.5" 546 | }, 547 | { 548 | "ip": "10.9.2.5" 549 | }, 550 | { 551 | "ip": "10.10.2.5" 552 | }, 553 | { 554 | "ip": "10.11.2.5" 555 | }, 556 | { 557 | "ip": "10.12.2.5" 558 | }, 559 | { 560 | "ip": "10.13.2.5" 561 | }, 562 | { 563 | "ip": "10.14.2.5" 564 | }, 565 | { 566 | "ip": "10.15.2.5" 567 | } 568 | ] 569 | } 570 | ] 571 | } -------------------------------------------------------------------------------- /deployments/images/sangheili/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM alpine:latest 2 | RUN apk add --update python3 tzdata git 3 | 4 | # Set the timezone 5 | ENV TZ=America/New_York 6 | RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone 7 | 8 | RUN git clone --branch dev https://github.com/RITRedteam/Sangheili /opt/sangheili 9 | WORKDIR /opt/sangheili 10 | RUN pip3 install -r requirements.txt 11 | RUN pip3 install requests 12 | 13 | # Run the app 14 | CMD ["python3", "sangheili.py"] 15 | -------------------------------------------------------------------------------- /deployments/images/sangheili/config.yml: -------------------------------------------------------------------------------- 1 | # Settings for Sangheili 2 | server: 0.0.0.0 3 | port: 1080 4 | 5 | address_server: http://0.0.0.0:5000 # Overridden in .env 6 | reserve_addresses: true 7 | -------------------------------------------------------------------------------- /deployments/images/theark/config.yml: -------------------------------------------------------------------------------- 1 | # Config file where you can blacklist certain IPs/networks 2 | # or specify certain networks 3 | 4 | # LAST UPDATED FOR UB 5 | 6 | valid: 7 | # - default # This will pull the network and netmask from the default gateway 8 | - 192.168.5.0/24 9 | - 192.168.6.0/24 10 | 11 | #invalid: 12 | # - 10.4.0.0/16 # Dont use this network -------------------------------------------------------------------------------- /deployments/internal.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | services: 3 | theark: 4 | build: https://github.com/ritredteam/theark.git 5 | network_mode: 'host' 6 | privileged: true 7 | environment: 8 | - FLASK_HOST=${ARK_HOST} 9 | - 
FLASK_DEBUG=false 10 | - FLASK_PORT=${ARK_PORT} 11 | - ARK_USERNAME=${REDTEAM_USER} 12 | - ARK_PASSWORD=${REDTEAM_PASS} 13 | - ARK_DATABASE=./db/theark.sqlite 14 | - ARK_CONFIG=./db/config.yml 15 | volumes: 16 | - ./images/theark:/opt/theark/db/:rw 17 | healthcheck: 18 | test: ["CMD", "curl", "-f", "http://${ARK_HOST}:${ARK_PORT}"] 19 | interval: '10s' 20 | timeout: '10s' 21 | retries: 5 22 | sangheili: 23 | build: 'images/sangheili/' 24 | network_mode: 'host' 25 | privileged: true 26 | environment: 27 | - address_server=http://${ARK_HOST}:${ARK_PORT} 28 | - reserve_addresses=True 29 | - address_count=15 30 | - ARK_USERNAME=${REDTEAM_USER} 31 | - ARK_PASSWORD=${REDTEAM_PASS} 32 | restart: on-failure 33 | depends_on: 34 | - theark -------------------------------------------------------------------------------- /docs/cloud.md: -------------------------------------------------------------------------------- 1 | # Cloud Deployment 2 | 3 | 4 | __Sawmill Deployment__ 5 | Because Sawmill uses its own docker-compose stack, the easiest way -------------------------------------------------------------------------------- /docs/internal.md: -------------------------------------------------------------------------------- 1 | # Internal Deployment -------------------------------------------------------------------------------- /topology.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "Irsec2019", 3 | "date": "4/20/19", 4 | "teams": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], 5 | "networks": [ 6 | { 7 | "name": "internal", 8 | "ip": "10.2.x", 9 | "hosts": [ 10 | { 11 | "name": "troy", 12 | "os": "Windows Server 2012 R2", 13 | "ip": "1", 14 | "services": "ad/dns" 15 | }, 16 | { 17 | "name": "gabriella", 18 | "os": "Fedora 29", 19 | "ip": "2", 20 | "services": "mail" 21 | }, 22 | { 23 | "name": "chad", 24 | "os": "Windows Server 2019", 25 | "ip": "3", 26 | "services": "mssql" 27 | }, 28 | { 29 | "name": "jason", 30 | "os": "Windows 10", 31 | "ip": "DHCP", 32 | "services": "client" 33 | }, 34 | { 35 | "name": "zeke", 36 | "os": "Windows 10", 37 | "ip": "DHCP", 38 | "services": "client" 39 | }, 40 | { 41 | "name": "martha", 42 | "os": "Solaris 11", 43 | "ip": "DHCP", 44 | "services": "client" 45 | } 46 | ] 47 | }, 48 | { 49 | "name": "cloud", 50 | "ip": "10.3.x", 51 | "hosts": [ 52 | { 53 | "name": "sharpay", 54 | "os": "Ubuntu 16.04.6", 55 | "ip": "1", 56 | "services": "docker" 57 | }, 58 | { 59 | "name": "kelsi", 60 | "os": "FreeBSD 12.0", 61 | "ip": "2", 62 | "services": "flask/web" 63 | }, 64 | { 65 | "name": "taylor", 66 | "os": "Centos 7.6.1910", 67 | "ip": "3", 68 | "services": "bind/dns" 69 | }, 70 | { 71 | "name": "darbus", 72 | "os": "Ubuntu 16.04.6", 73 | "ip": "4", 74 | "services": "apache/web" 75 | } 76 | ] 77 | } 78 | ] 79 | } --------------------------------------------------------------------------------