├── .gitignore ├── LICENSE ├── README.md ├── cleanup.sh ├── copy_result_replays.py ├── downloadlinuxpackage.sh ├── make_matches.py ├── packer.json ├── read_replay.py ├── repocache.py ├── requirements.txt ├── rungame.py ├── template_container ├── Dockerfile └── startup.py └── terraform ├── .gitignore ├── README.md └── modules ├── app ├── main.tf └── variables.tf └── vpc ├── README.md ├── main.tf ├── outputs.tf ├── private_subnets.tf ├── public_subnets.tf └── variables.tf /.gitignore: -------------------------------------------------------------------------------- 1 | # Containers 2 | containers/ 3 | 4 | # Directories 5 | repocache/ 6 | results/ 7 | replays/ 8 | 9 | # Starcaft II binary 10 | StarCraftII/ 11 | /SC2*.zip 12 | 13 | # Byte-compiled / optimized / DLL files 14 | __pycache__/ 15 | *.py[cod] 16 | *$py.class 17 | 18 | # C extensions 19 | *.so 20 | 21 | # Distribution / packaging 22 | .Python 23 | env/ 24 | build/ 25 | develop-eggs/ 26 | dist/ 27 | downloads/ 28 | eggs/ 29 | .eggs/ 30 | lib/ 31 | lib64/ 32 | parts/ 33 | sdist/ 34 | var/ 35 | wheels/ 36 | *.egg-info/ 37 | .installed.cfg 38 | *.egg 39 | 40 | # PyInstaller 41 | # Usually these files are written by a python script from a template 42 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 43 | *.manifest 44 | *.spec 45 | 46 | # Installer logs 47 | pip-log.txt 48 | pip-delete-this-directory.txt 49 | 50 | # Unit test / coverage reports 51 | htmlcov/ 52 | .tox/ 53 | .coverage 54 | .coverage.* 55 | .cache 56 | nosetests.xml 57 | coverage.xml 58 | *.cover 59 | .hypothesis/ 60 | 61 | # Translations 62 | *.mo 63 | *.pot 64 | 65 | # Django stuff: 66 | *.log 67 | local_settings.py 68 | 69 | # Flask stuff: 70 | instance/ 71 | .webassets-cache 72 | 73 | # Scrapy stuff: 74 | .scrapy 75 | 76 | # Sphinx documentation 77 | docs/_build/ 78 | 79 | # PyBuilder 80 | target/ 81 | 82 | # Jupyter Notebook 83 | .ipynb_checkpoints 84 | 85 | # pyenv 86 | .python-version 87 | 88 | # celery beat schedule file 89 | celerybeat-schedule 90 | 91 | # SageMath parsed files 92 | *.sage.py 93 | 94 | # dotenv 95 | .env 96 | 97 | # virtualenv 98 | .venv 99 | venv/ 100 | ENV/ 101 | 102 | # Spyder project settings 103 | .spyderproject 104 | .spyproject 105 | 106 | # Rope project settings 107 | .ropeproject 108 | 109 | # mkdocs documentation 110 | /site 111 | 112 | # mypy 113 | .mypy_cache/ 114 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Hannes Karppila 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ## SC2 Bot Match Runner
2 |
3 | Tournament runner scripts for running Starcraft II Bots in different Git repos
4 | against each other. The repos are expected to contain a fork of the
5 | [`python-sc2-bot-template`](https://github.com/Dentosal/python-sc2-bot-template). Runs matches in parallel
6 | using docker containers.
7 |
8 | ### Setup
9 |
10 | ```
11 | pip3 install -r requirements.txt
12 | ```
13 |
14 | ### Running a match
15 |
16 | Run the runner script like this:
17 |
18 | ```
19 | python3 rungame.py --step-time-limit 2.0 --game-time-limit 1200 <map_name> <repo1> <repo2>
20 | ```
21 |
22 | For example:
23 |
24 | ```
25 | export REPO1="https://github.com/dentosal/python-sc2-bot-template.git"
26 | export REPO2="https://github.com/dentosal/python-sc2-bot.git"
27 | python3 rungame.py "Abyssal Reef LE" $REPO1 $REPO2
28 | ```
29 |
30 | You can run multiple pairs at once too! The matches will be run in parallel in separate docker containers.
31 |
32 | Results will be stored under the `results/` directory.
33 |
34 | Bot logs will be produced into files in this directory in realtime.
35 |
36 | Each bot client will create its replay file into this directory after the match is over.
37 | This means that for each match there will be two essentially identical replay files. The runner verifies that
38 | the replays contain the same result.
39 |
40 | After all matches have finished, a `results.json` file will be created. It contains the results of all matches (the format is shown at the end of this README).
41 |
42 | ### Debugging
43 |
44 | If running gets stuck, you may want to inspect the containers:
45 |
46 | ```
47 | docker ps
48 | docker logs <container-id>
49 | ```
50 |
51 | To clean unused docker images and containers:
52 |
53 | ```
54 | ./cleanup.sh
55 | ```
56 |
57 | You can also delete cached repositories, in case there's an issue with pulling automatically:
58 |
59 | ```
60 | rm -fr repocache
61 | ```
62 |
63 | Cleaning up created containers:
64 |
65 | ```
66 | rm -fr containers
67 | ```
68 |
69 | ### Generating a match list from a list of repo URLs
70 |
71 | ```
72 | python3 make_matches.py --type round-robin $REPO1 $REPO2 $REPO3
73 | ```
74 |
75 | ### Viewing match replays
76 |
77 | The runner stores match results under `results/` directories. You can easily copy all the match replays
78 | into your StarCraft II installation directory using:
79 |
80 | ```
81 | python3 copy_result_replays.py
82 | ```
83 |
84 | You can find your account id and server id as directory names under `~/Library/Application Support/Blizzard/StarCraft II/Accounts`.
85 |
86 | Now you can open the replays using the StarCraft II UI: they can be found in Replays.
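### Result format

Each entry in `results.json` is written by `rungame.py` and contains `record_ok` (whether a replay with a result was recovered for the match), `winner` (the zero-based index of the winning repository in `repositories`, or `null` for a tie or a missing record) and `repositories` (the two repo URLs of the match). A single-match run could look roughly like this (the repository URLs simply reuse the example above; actual values depend on the run):

```
[
    {
        "record_ok": true,
        "winner": 0,
        "repositories": [
            "https://github.com/dentosal/python-sc2-bot-template.git",
            "https://github.com/dentosal/python-sc2-bot.git"
        ]
    }
]
```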
87 | -------------------------------------------------------------------------------- /cleanup.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | docker ps | grep sc2_ | awk '{print $1}' | xargs docker kill 3 | 4 | docker ps -aq --no-trunc | xargs docker rm 5 | 6 | docker image ls|grep ""|awk '{print $3}'|xargs docker image rm 7 | -------------------------------------------------------------------------------- /copy_result_replays.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import platform 3 | assert platform.system() == "Darwin", "Currently only supported on macOS" 4 | import os 5 | import json 6 | import shutil 7 | from pathlib import Path 8 | 9 | import repocache 10 | 11 | def resolve_dir(args): 12 | accounts_dir = (Path().home() / "Library/Application Support/Blizzard/StarCraft II/Accounts").resolve(strict=True) 13 | 14 | if args.account_id: 15 | account_id = args.account_id 16 | else: 17 | accounts = list(accounts_dir.iterdir()) 18 | if len(accounts) == 0: 19 | print("No accounts found") 20 | exit(2) 21 | elif len(accounts) > 1: 22 | print("Please specify account_id") 23 | exit(2) 24 | else: 25 | account_id = accounts[0] 26 | 27 | servers_dir = (accounts_dir / account_id).resolve(strict=True) 28 | 29 | if args.server_id: 30 | server_id = args.server_id 31 | else: 32 | servers = [n for n in servers_dir.iterdir() if n.suffix != ".txt" and n.name != "Hotkeys"] 33 | if len(servers) == 0: 34 | print(f"No servers found for account {account_id}") 35 | exit(2) 36 | elif len(servers) > 1: 37 | print("Please specify server_id") 38 | exit(2) 39 | else: 40 | server_id = servers[0] 41 | 42 | server_dir = (servers_dir / server_id).resolve(strict=True) 43 | return (server_dir / "Replays" / "Multiplayer").resolve(strict=True) 44 | 45 | def copy_replays(args, target_dir): 46 | source_dir = Path("results") 47 | 48 | rc = repocache.RepoCache() 49 | 50 | for timestamp_dir in source_dir.iterdir(): 51 | if args.timestamp and timestamp_dir.name != args.timestamp[0]: 52 | continue 53 | 54 | with open(timestamp_dir / "results.json") as f: 55 | results = json.load(f) 56 | 57 | for i_match, result in enumerate(results): 58 | if not result["record_ok"]: 59 | print(f"WARNING: Missing record for match{i_match} in {timestamp_dir}") 60 | print("=> Not copying") 61 | continue 62 | 63 | names = [] 64 | for repo_url in result["repositories"]: 65 | with open(rc.get_cached(repo_url) / "botinfo.json") as f: 66 | names.append(json.load(f)["name"]) 67 | 68 | target_path = target_dir / f"{timestamp_dir.name}_{'_vs_'.join(names)}.SC2Replay" 69 | replay_file = timestamp_dir / f"{i_match}_0.SC2Replay" 70 | if os.path.exists(replay_file): 71 | shutil.copy2(replay_file, target_path) 72 | else: 73 | replay_file = timestamp_dir / f"{i_match}_1.SC2Replay" 74 | shutil.copy2(replay_file, target_path) 75 | 76 | 77 | 78 | # WIP: 79 | # if args.use_bot_names: 80 | # import re 81 | # 82 | # with open(target_path, "rb") as f: 83 | # replay = f.read() 84 | # 85 | # for i, foo_name in enumerate(re.findall(br"foo\d{5}", replay)): 86 | # assert foo_name in replay 87 | # print(foo_name, bytes(names[i], "utf-8")[:8]) 88 | # replay = replay.replace(foo_name, bytes(names[i], "utf-8")[:8]) 89 | # 90 | # with open(target_path, "wb") as f: 91 | # f.write(replay) 92 | # 93 | # print(target_path) 94 | 95 | 96 | def main(args): 97 | target_dir = resolve_dir(args) 98 | copy_replays(args, target_dir) 99 | 100 | if __name__ == 
'__main__': 101 | parser = argparse.ArgumentParser(description="Move and rename SC2 replay files") 102 | parser.add_argument("account_id", nargs="?", help="Account id") 103 | parser.add_argument("server_id", nargs="?", help="Server id") 104 | parser.add_argument("--timestamp", nargs=1, default=None, help="specify run timestamp. Defaults to all timestamps.") 105 | # parser.add_argument("--use-bot-names", action="store_true", help="Use correct bot names in the replay") 106 | args = parser.parse_args() 107 | 108 | main(args) 109 | -------------------------------------------------------------------------------- /downloadlinuxpackage.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -e 3 | ARCHIVE_NAME=SC2.4.0.2.zip 4 | ARCHIVE_URL="http://blzdistsc2-a.akamaihd.net/Linux/$ARCHIVE_NAME" 5 | if [ -d StarCraftII ]; then 6 | echo "StarCraftII Linux binaries already present" 7 | else 8 | if [ ! -e "$ARCHIVE_NAME" ]; then 9 | echo "Downloading StarCraft II Linux binaries" 10 | curl -O $ARCHIVE_URL 11 | fi 12 | unzip -P iagreetotheeula $ARCHIVE_NAME 13 | # Workaround for case sensitivity issue where SC2 API returns 14 | # lowercase name 15 | if [ ! -e "StarCraftII/maps" ]; then 16 | ln -r -s StarCraftII/Maps StarCraftII/maps 17 | fi 18 | rm $ARCHIVE_NAME 19 | fi 20 | 21 | MAPS_NAME=Ladder2017Season3 22 | MAPS_FILE=Ladder2017Season3_Updated.zip 23 | MAPS_URL="http://blzdistsc2-a.akamaihd.net/MapPacks/$MAPS_FILE" 24 | 25 | if [ -d StarCraftII/Maps/$MAPS_NAME ]; then 26 | echo "Maps already loaded in StarCraftII/Maps/$MAPS_NAME" 27 | else 28 | if [ ! -e "$MAPS_FILE" ]; then 29 | echo "Downloading maps from $MAPS_URL" 30 | curl -O $MAPS_URL 31 | fi 32 | unzip -P iagreetotheeula $MAPS_FILE -d StarCraftII/Maps 33 | rm $MAPS_FILE 34 | fi 35 | -------------------------------------------------------------------------------- /make_matches.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | 3 | parser = argparse.ArgumentParser(description="Generate set of player pairs") 4 | parser.add_argument("--deterministic", action="store_true", help="Keep the order of games and participants") 5 | parser.add_argument("--type", 6 | action="store", 7 | choices=["pairs", "round-robin"], 8 | default="pairs", 9 | help="Special testing value" 10 | ) 11 | parser.add_argument("participants", type=str, nargs="+") 12 | args = parser.parse_args() 13 | 14 | if not args.deterministic: 15 | import random 16 | random.shuffle(args.participants) 17 | 18 | if args.type == "pairs": 19 | if len(args.participants) % 2 != 0: 20 | print("Even number of participants required.") 21 | exit(2) 22 | 23 | matches = [ 24 | [args.participants[i], args.participants[i+1]] 25 | for i in range(0, len(args.participants), 2) 26 | ] 27 | 28 | elif args.type == "round-robin": 29 | matches = [] 30 | for i1, p1 in enumerate(args.participants): 31 | for i2, p2 in enumerate(args.participants): 32 | if i1 < i2: 33 | matches.append([p1, p2]) 34 | 35 | if not args.deterministic: 36 | random.shuffle(matches) 37 | 38 | for pair in matches: 39 | print(" ".join(f"{p:>{2+max(len(q) for q in args.participants)}}" for p in pair)) 40 | -------------------------------------------------------------------------------- /packer.json: -------------------------------------------------------------------------------- 1 | { 2 | "variables": { 3 | "aws_access_key": "", 4 | "aws_secret_key": "" 5 | }, 6 | "builders": [{ 7 | "type": "amazon-ebs", 8 | "access_key": 
"{{user `aws_access_key`}}", 9 | "secret_key": "{{user `aws_secret_key`}}", 10 | "region": "eu-west-1", 11 | "source_ami_filter": { 12 | "filters": { 13 | "virtualization-type": "hvm", 14 | "name": "ubuntu/images/*ubuntu-xenial-16.04-amd64-server-*", 15 | "root-device-type": "ebs" 16 | }, 17 | "owners": ["099720109477"], 18 | "most_recent": true 19 | }, 20 | "instance_type": "t2.micro", 21 | "ssh_username": "ubuntu", 22 | "ami_name": "starcraft-runner-{{timestamp}}" 23 | }], 24 | "provisioners": [ 25 | { 26 | "type": "shell", 27 | "inline": [ 28 | "sleep 30", 29 | "sudo add-apt-repository -y ppa:jonathonf/python-3.6", 30 | "sudo apt-get update", 31 | "sudo apt-get install -y python3.6 python3-pip unzip python-apt", 32 | "sudo apt-get install -y libltdl7", 33 | "curl -O https://download.docker.com/linux/ubuntu/dists/xenial/pool/stable/amd64/docker-ce_17.12.0~ce-0~ubuntu_amd64.deb", 34 | "sudo dpkg -i docker-ce_17.12.0~ce-0~ubuntu_amd64.deb", 35 | "pip3 install --upgrade pip", 36 | "cd /home/ubuntu/", 37 | "git clone https://github.com/Dentosal/sc2-bot-match-runner.git", 38 | "cd sc2-bot-match-runner", 39 | "mkdir -p StarCraftII", 40 | "sudo python3.6 -m pip install --upgrade pip", 41 | "sudo apt-get install -y python3.6-gdbm", 42 | "sudo usermod -a -G docker ubuntu" 43 | ] 44 | }, 45 | { 46 | "type": "shell-local", 47 | "command": "./downloadlinuxpackage.sh" 48 | }, 49 | { 50 | "type": "file", 51 | "source": "./StarCraftII", 52 | "destination": "/home/ubuntu/sc2-bot-match-runner/StarCraftII" 53 | } 54 | ] 55 | } 56 | -------------------------------------------------------------------------------- /read_replay.py: -------------------------------------------------------------------------------- 1 | import json 2 | from mpyq import MPQArchive 3 | 4 | def winners(filepath): 5 | archive = MPQArchive(str(filepath)) 6 | files = archive.extract() 7 | data = json.loads(files[b"replay.gamemetadata.json"]) 8 | result_by_playerid = {p["PlayerID"]: p["Result"] for p in data["Players"]} 9 | return {playerid: result=="Win" for playerid, result in result_by_playerid.items()} 10 | -------------------------------------------------------------------------------- /repocache.py: -------------------------------------------------------------------------------- 1 | import string 2 | 3 | from pathlib import Path 4 | import subprocess as sp 5 | 6 | class RepoCache(object): 7 | PATH = Path("repocache/") 8 | def __init__(self): 9 | if not self.PATH.exists(): 10 | self.PATH.mkdir() 11 | assert self.PATH.is_dir() 12 | 13 | @staticmethod 14 | def repo_name(url): 15 | if url.endswith("/"): 16 | url = url[:-1] 17 | 18 | if url.endswith(".git"): 19 | url = url[:-4] 20 | 21 | owner, name = url.replace("+", "").rsplit("/")[-2:] 22 | while "__" in owner: 23 | owner = owner.replace("__", "_") 24 | return f"{owner}__{name}" 25 | 26 | def latest_hash(self, url): 27 | return sp.check_output(["git", "rev-parse", "HEAD"], cwd=self.PATH).strip().decode("utf-8") 28 | 29 | def _clone(self, name, url): 30 | try: 31 | sp.run( 32 | ["git", "clone", url, name], 33 | cwd=self.PATH, 34 | check=True, stdout=sp.DEVNULL 35 | ) 36 | except sp.CalledProcessError as err: 37 | raise Exception("Error cloning repository {0}: {1}".format(name, err)) 38 | 39 | def _pull(self, name): 40 | try: 41 | sp.run( 42 | ["git", "pull"], 43 | cwd=(self.PATH / name), 44 | check=True, stdout=sp.DEVNULL 45 | ) 46 | except sp.CalledProcessError as err: 47 | raise Exception("Error pulling from repository {0}: {1}".format(name, err)) 48 | 49 | def _download(self, name, 
url, pull=True): 50 | if (self.PATH / name).exists(): 51 | if pull: 52 | self._pull(name) 53 | else: 54 | self._clone(name, url) 55 | 56 | def get(self, url, pull=True): 57 | name = RepoCache.repo_name(url) 58 | self._download(name, url, pull) 59 | return self.PATH / name 60 | 61 | def get_cached(self, url): 62 | return (self.PATH / RepoCache.repo_name(url)).resolve(strict=True) 63 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | sc2 2 | mpyq 3 | requests==2.18 4 | -------------------------------------------------------------------------------- /rungame.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | # for usage run: ./rungame.py --help 3 | 4 | PROCESS_POLL_INTERVAL = 3 # seconds 5 | 6 | from pathlib import Path 7 | import random 8 | import time 9 | import json 10 | import argparse 11 | import subprocess as sp 12 | import shutil 13 | 14 | from repocache import RepoCache 15 | import read_replay 16 | 17 | class ImageName: 18 | @staticmethod 19 | def make(repocache, repos): 20 | return "sc2_" + ".".join([f"{repocache.repo_name(repo)}__{repocache.latest_hash(repo)}" for repo in repos]).lower() 21 | 22 | @staticmethod 23 | def parse(name): 24 | assert name.startswith("sc2_") 25 | name = name[4:] 26 | return [namedtuple(**zip(("repo", "commit_hash"), repo.rsplit("__", 1))) for repo in name.split(".")] 27 | 28 | def copy_contents(from_directory: Path, to_directory: Path): 29 | for path in from_directory.iterdir(): 30 | fn = shutil.copy if path.is_file() else shutil.copytree 31 | fn(path, to_directory) 32 | 33 | def prepend_all(prefix, container): 34 | return [r for item in container for r in [prefix, item]] 35 | 36 | def create_empty_dir(path): 37 | p = Path(path) 38 | if p.exists(): 39 | shutil.rmtree(p) 40 | p.mkdir(parents=True) 41 | return p 42 | 43 | def fetch_repositories(matches, containers_dir, repocache, noupdate): 44 | """Clone repos and create match folders""" 45 | 46 | for i_match, repos in enumerate(matches): 47 | container = containers_dir / f"match{i_match}" 48 | for i, repo in enumerate(repos): 49 | repo_path = repocache.get(repo, pull=(not noupdate)) 50 | shutil.copytree(repo_path, container / f"repo{i}") 51 | 52 | def collect_bot_info(matches, containers_dir): 53 | botinfo_by_match = [] 54 | for i_match, repos in enumerate(matches): 55 | botinfo_by_match.append([]) 56 | container = containers_dir / f"match{i_match}" 57 | for i, repo in enumerate(repos): 58 | botinfo_file = container / f"repo{i}" / "botinfo.json" 59 | 60 | if not botinfo_file.exists(): 61 | print(f"File botinfo.json is missing for repo{i}") 62 | exit(3) 63 | 64 | with open(botinfo_file) as f: 65 | botinfo = json.load(f) 66 | 67 | REQUIRED_KEYS = {"race": str, "name": str} 68 | for k, t in REQUIRED_KEYS.items(): 69 | if k not in botinfo or not isinstance(botinfo[k], t): 70 | print(f"Invalid botinfo.json for repo{i}:") 71 | print(f"Key '{k}' missing, or type is not {t !r}") 72 | exit(3) 73 | 74 | botinfo_by_match[-1].append(botinfo) 75 | return botinfo_by_match 76 | 77 | def main(): 78 | TIME_START_MS = int(time.time()*1000) # milliseconds 79 | 80 | parser = argparse.ArgumentParser(description="Automatically run sc2 matches and collect results.") 81 | parser.add_argument("--noupdate", action="store_true", help="do not update cached repositories") 82 | time_group = parser.add_mutually_exclusive_group() 83 | 
time_group.add_argument("--realtime", action="store_true", help="run in realtime mode") 84 | time_group.add_argument("--step-time-limit", nargs=1, default=None, help="step time limit in realtime seconds") 85 | parser.add_argument("--game-time-limit", nargs=1, default=None, help="game time limit in game seconds") 86 | parser.add_argument("map_name", type=str, help="map name") 87 | parser.add_argument("repo", type=str, nargs="+", help="a list of repositories") 88 | args = parser.parse_args() 89 | 90 | if len(args.repo) % 2 != 0: 91 | exit(f"There must be even number of repositories ({len(args.repo)} is odd).") 92 | 93 | for repo in args.repo: 94 | if not repo.startswith("https://"): 95 | print(f"Please use https url to repo, and not {repo}") 96 | exit(2) 97 | 98 | sp.run(["./downloadlinuxpackage.sh"], check=True) 99 | 100 | sc2_linux_dir = Path("StarCraftII").resolve(strict=True) 101 | if (not (sc2_linux_dir / "Maps").exists()) or list((sc2_linux_dir / "Maps").iterdir()) == []: 102 | print("Error: Linux SC2 directory doesn't contain any maps") 103 | exit(2) 104 | 105 | containers_dir = create_empty_dir("containers") 106 | result_dir = create_empty_dir(Path("results") / str(TIME_START_MS)) 107 | 108 | start_all = time.time() 109 | start = start_all 110 | 111 | repocache = RepoCache() 112 | 113 | matches = [ 114 | [args.repo[i], args.repo[i+1]] 115 | for i in range(0, len(args.repo), 2) 116 | ] 117 | 118 | print("Fetching repositiories...") 119 | start = time.time() 120 | fetch_repositories(matches, containers_dir, repocache, noupdate=args.noupdate) 121 | print(f"Ok ({time.time() - start:.2f}s)") 122 | 123 | 124 | print("Collecting bot info...") 125 | start = time.time() 126 | botinfo_by_match = collect_bot_info(matches, containers_dir) 127 | races_by_match = [[b["race"] for b in info] for info in botinfo_by_match] 128 | print(f"Ok ({time.time() - start:.2f}s)") 129 | 130 | print("Starting games...") 131 | start = time.time() 132 | docker_images = sp.check_output(["docker", "image", "ls", "--format", "{{.Repository}}"]).decode("utf-8").split("\n") 133 | for i_match, repos in enumerate(matches): 134 | container = containers_dir / f"match{i_match}" 135 | 136 | copy_contents(Path("template_container"), container) 137 | 138 | # To debug, copy python-sc2 into the container 139 | # shutil.copytree(Path("python-sc2").resolve(strict=True), container / "python-sc2") 140 | 141 | image_name = ImageName.make(repocache, repos) 142 | process_name = f"sc2_match{i_match}" 143 | 144 | 145 | sp.run(["docker", "rm", process_name], cwd=container, check=False) 146 | 147 | if image_name not in docker_images: # is this image is already built? 
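            # The image name is derived from the repo names and commit hashes (ImageName.make),
            # so an already-built image is reused instead of rebuilding it here.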
148 | sp.run(["docker", "build", "-t", image_name, "."], cwd=container, check=True) 149 | 150 | env = { 151 | "sc2_match_id": str(i_match), 152 | "sc2_map_name": args.map_name, 153 | "sc2_races": ",".join(races_by_match[i_match]), 154 | } 155 | 156 | if args.step_time_limit is not None: 157 | env["sc2_step_time_limit"] = str(float(args.step_time_limit[0])) 158 | 159 | if args.game_time_limit is not None: 160 | env["sc2_game_time_limit"] = str(float(args.game_time_limit[0])) 161 | 162 | # TODO: args.realtime 163 | 164 | sp.run([ 165 | "docker", "run", "-d", 166 | *prepend_all("--env", [f"{k}={v}" for k,v in env.items()]), 167 | "--mount", ",".join(map("=".join, { 168 | "type": "bind", 169 | "source": str(sc2_linux_dir), 170 | "destination": "/StarCraftII", 171 | "readonly": "true", 172 | "consistency": "cached" 173 | }.items())), 174 | "--mount", ",".join(map("=".join, { 175 | "type": "bind", 176 | "source": str(result_dir.resolve(strict=True)), 177 | "destination": "/replays", 178 | "readonly": "false", 179 | "consistency": "consistent" 180 | }.items())), 181 | "--name", process_name, 182 | image_name 183 | ], cwd=container, check=True) 184 | 185 | print(f"Ok ({time.time() - start:.2f}s)") 186 | 187 | print("Running game...") 188 | start = time.time() 189 | while True: 190 | docker_process_ids = sp.check_output([ 191 | "docker", "ps", "-q", 192 | "--filter", f"volume={sc2_linux_dir}" 193 | ]).split() 194 | 195 | if len(docker_process_ids) == 0: 196 | break 197 | 198 | time.sleep(PROCESS_POLL_INTERVAL) 199 | 200 | print(f"Ok ({time.time() - start:.2f}s)") 201 | 202 | print("Collecting results...") 203 | start = time.time() 204 | winners = [] 205 | record_ok = [] 206 | for i_match, repos in enumerate(matches): 207 | winner_info = None 208 | 209 | for i, repo in enumerate(repos): 210 | try: 211 | replay_winners = read_replay.winners(result_dir / f"{i_match}_{i}.SC2Replay") 212 | except FileNotFoundError: 213 | print(f"Process match{i_match}:repo{i} didn't record a replay") 214 | continue 215 | 216 | if winner_info is None: 217 | winner_info = replay_winners 218 | elif winner_info != replay_winners: 219 | print(f"Conflicting winner information (match{i_match}:repo{i})") 220 | print(f"({replay_winners !r})") 221 | print(f"({winner_info !r})") 222 | 223 | 224 | if winner_info is None: 225 | print(f"No replays were recorded by either client (match{i_match})") 226 | record_ok.append(False) 227 | winners.append(None) 228 | continue 229 | 230 | # TODO: Assumes player_id == (repo_index + 1) 231 | # Might be possible to at least try to verify this assumption 232 | for player_id, victory in winner_info.items(): 233 | assert player_id >= 1 234 | if victory: 235 | winners.append(player_id - 1) 236 | break 237 | else: # Tie 238 | winners.append(None) 239 | record_ok.append(True) 240 | 241 | result_data = [ 242 | { 243 | "record_ok": record_ok[i_match], 244 | "winner": winners[i_match], 245 | "repositories": matches[i_match] 246 | } 247 | for i_match in range(len(matches)) 248 | ] 249 | 250 | with open(result_dir / "results.json", "w") as f: 251 | json.dump(result_data, f) 252 | 253 | print(f"Ok ({time.time() - start:.2f}s)") 254 | 255 | print(f"Completed (total {time.time() - start_all:.2f}s)") 256 | 257 | 258 | if __name__ == "__main__": 259 | main() 260 | -------------------------------------------------------------------------------- /template_container/Dockerfile: -------------------------------------------------------------------------------- 1 | # Use an official Python runtime as a parent image 2 | 
FROM ubuntu:17.10 3 | 4 | # Set the working directory to /app 5 | WORKDIR /app 6 | 7 | 8 | # Install needed packages 9 | RUN apt update -y 10 | RUN apt-get install -y python3-pip strace 11 | RUN pip3 install --trusted-host pypi.python.org sc2 12 | 13 | # Alterantive for debugging python-sc2 issues 14 | # RUN pip3 install -e python-sc2 15 | 16 | # Create users 17 | RUN useradd -ms /bin/bash user0 \ 18 | && useradd -ms /bin/bash user1 19 | 20 | RUN groupadd -g 1500 sc2 21 | 22 | RUN gpasswd -a user0 sc2 23 | RUN gpasswd -a user1 sc2 24 | 25 | RUN ln -s /StarCraftII /home/user0/StarCraftII \ 26 | && ln -s /StarCraftII /home/user1/StarCraftII 27 | 28 | # Copy the current directory contents into the container at /app 29 | ADD . /app 30 | 31 | RUN chown -R user0 /app/repo0 \ 32 | && chown -R user1 /app/repo1 \ 33 | && chmod -R 700 /app/repo0 \ 34 | && chmod -R 700 /app/repo1 \ 35 | && ln -s /app/repo0 /home/user0/repo \ 36 | && ln -s /app/repo1 /home/user1/repo 37 | 38 | # Run startup.py when the container launches 39 | ENTRYPOINT python3 startup.py 40 | -------------------------------------------------------------------------------- /template_container/startup.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | from pathlib import Path 4 | import shlex 5 | 6 | import sc2 7 | 8 | 9 | portconfig = sc2.portconfig.Portconfig() 10 | gameid = os.environ["sc2_match_id"] 11 | 12 | # Ensure SC2 gid and write permission 13 | os.chown("/replays", -1, 1500) 14 | os.chmod("/replays", 0o775) 15 | 16 | 17 | commands = [ 18 | [ 19 | "cd" # home directory 20 | ], [ 21 | "cd", "repo" 22 | ], [ 23 | "python3", "start_bot.py", 24 | os.environ["sc2_map_name"], 25 | os.environ["sc2_races"], 26 | portconfig.as_json, 27 | ] 28 | ] 29 | 30 | if "sc2_step_time_limit" in os.environ: 31 | commands[-1] += ["--step-time-limit", os.environ["sc2_step_time_limit"]] 32 | if "sc2_game_time_limit" in os.environ: 33 | commands[-1] += ["--game-time-limit", os.environ["sc2_game_time_limit"]] 34 | 35 | if os.fork() == 0: 36 | commands[-1] += ["--master"] 37 | commands[-1] += ["--log-path", f"/replays/{gameid}_0.log"] 38 | commands[-1] += ["--replay-path", f"/replays/{gameid}_0.SC2Replay"] 39 | os.execlp("runuser", "-l", "user0", "-c", " && ".join(" ".join(shlex.quote(c) for c in cmd) for cmd in commands)) 40 | else: 41 | # HACK: Delay the joining client so the host has time to start up 42 | import time; time.sleep(5) 43 | 44 | commands[-1] += ["--log-path", f"/replays/{gameid}_1.log"] 45 | commands[-1] += ["--replay-path", f"/replays/{gameid}_1.SC2Replay"] 46 | os.execlp("runuser", "-l", "user1", "-c", " && ".join(" ".join(shlex.quote(c) for c in cmd) for cmd in commands)) 47 | -------------------------------------------------------------------------------- /terraform/.gitignore: -------------------------------------------------------------------------------- 1 | .terraform/ 2 | -------------------------------------------------------------------------------- /terraform/README.md: -------------------------------------------------------------------------------- 1 | # Reaktor Starcraft II Runner Terraform setup 2 | -------------------------------------------------------------------------------- /terraform/modules/app/main.tf: -------------------------------------------------------------------------------- 1 | resource "aws_security_group" "app_lb" { 2 | name_prefix = "${var.app_name}" 3 | description = "${var.app_name} load balancer network rules" 4 | vpc_id = 
"${var.vpc_id}" 5 | 6 | ingress { 7 | from_port = 80 8 | to_port = 80 9 | protocol = "tcp" 10 | cidr_blocks = ["0.0.0.0/0"] 11 | } 12 | 13 | ingress { 14 | from_port = 443 15 | to_port = 443 16 | protocol = "tcp" 17 | cidr_blocks = ["0.0.0.0/0"] 18 | } 19 | 20 | egress { 21 | from_port = 0 22 | to_port = 0 23 | protocol = "-1" 24 | cidr_blocks = ["0.0.0.0/0"] 25 | } 26 | } 27 | 28 | resource "aws_security_group" "app" { 29 | name_prefix = "${var.app_name}" 30 | description = "${var.app_name} application network rules" 31 | vpc_id = "${var.vpc_id}" 32 | 33 | ingress { 34 | from_port = 8080 35 | to_port = 8080 36 | protocol = "tcp" 37 | security_groups = ["${aws_security_group.app_lb.id}"] 38 | } 39 | 40 | ingress { 41 | from_port = 22 42 | to_port = 22 43 | protocol = "tcp" 44 | cidr_blocks = ["0.0.0.0/0"] 45 | } 46 | 47 | egress { 48 | from_port = 0 49 | to_port = 0 50 | protocol = "-1" 51 | cidr_blocks = ["0.0.0.0/0"] 52 | } 53 | } 54 | 55 | resource "aws_instance" "runner" { 56 | ami = "${var.image_id}" 57 | instance_type = "${var.instance_type}" 58 | key_name = "${var.ssh_key_name}" 59 | vpc_security_group_ids = ["${aws_security_group.app.id}"] 60 | subnet_id = "${element(var.lb_subnets, 0)}" 61 | associate_public_ip_address = true 62 | root_block_device { 63 | volume_size = "16" 64 | } 65 | tags { 66 | Name = "${var.app_name}" 67 | Environment = "${var.environment}" 68 | } 69 | } 70 | 71 | resource "aws_lb" "app" { 72 | internal = false 73 | load_balancer_type = "application" 74 | security_groups = ["${aws_security_group.app_lb.id}"] 75 | subnets = ["${var.lb_subnets}"] 76 | 77 | tags { 78 | Name = "${var.app_name}-lb" 79 | Project = "${var.project_name}" 80 | Environment = "${var.environment}" 81 | } 82 | } 83 | 84 | resource "aws_lb_target_group" "app" { 85 | port = "${var.app_port}" 86 | protocol = "${var.app_protocol}" 87 | vpc_id = "${var.vpc_id}" 88 | } 89 | 90 | resource "aws_lb_listener" "app" { 91 | load_balancer_arn = "${aws_lb.app.arn}" 92 | port = "80" 93 | protocol = "HTTP" 94 | 95 | default_action { 96 | target_group_arn = "${aws_lb_target_group.app.arn}" 97 | type = "forward" 98 | } 99 | } 100 | -------------------------------------------------------------------------------- /terraform/modules/app/variables.tf: -------------------------------------------------------------------------------- 1 | variable "app_name" { 2 | type = "string" 3 | } 4 | 5 | variable "project_name" { 6 | type = "string" 7 | } 8 | 9 | variable "environment" { 10 | type = "string" 11 | } 12 | 13 | variable "desired_capacity" {} 14 | 15 | variable "image_id" { 16 | type = "string" 17 | } 18 | 19 | variable "instance_type" { 20 | type = "string" 21 | } 22 | 23 | variable "ssh_key_name" { 24 | type = "string" 25 | } 26 | 27 | variable "vpc_id" { 28 | type = "string" 29 | } 30 | 31 | variable "app_subnets" { 32 | type = "list" 33 | } 34 | 35 | variable "lb_subnets" { 36 | type = "list" 37 | } 38 | 39 | variable "app_port" {} 40 | 41 | variable "app_protocol" { 42 | type = "string" 43 | } 44 | -------------------------------------------------------------------------------- /terraform/modules/vpc/README.md: -------------------------------------------------------------------------------- 1 | # VPC Module 2 | 3 | This module creates all the networking resources. 
4 | 5 | ## Resources 6 | 7 | - 1 VPC 8 | - 3 public subnets (in different availability zones) 9 | - 3 private subnets (eu-west-1a, b and c) 10 | - 1 Internet Gateway for the whole VPC (for outside internet access) 11 | - 1 NAT Gateway in a public subnet 12 | - 1 Elastic IP for the NAT Gateway 13 | - 1 route table for private subnets 14 | - 1 route table for public subnets -------------------------------------------------------------------------------- /terraform/modules/vpc/main.tf: -------------------------------------------------------------------------------- 1 | resource "aws_vpc" "main" { 2 | cidr_block = "${var.vpc_cidr_block}" 3 | 4 | tags { 5 | Name = "${var.project_name}-vpc" 6 | Project = "${var.project_name}" 7 | } 8 | } 9 | 10 | resource "aws_internet_gateway" "main" { 11 | vpc_id = "${aws_vpc.main.id}" 12 | 13 | tags { 14 | Name = "${var.project_name}-internet-gateway" 15 | } 16 | } 17 | 18 | data "aws_availability_zones" "available" { 19 | state = "available" 20 | } 21 | -------------------------------------------------------------------------------- /terraform/modules/vpc/outputs.tf: -------------------------------------------------------------------------------- 1 | output "private_subnet_ids" { 2 | value = ["${aws_subnet.private.*.id}"] 3 | } 4 | 5 | output "public_subnet_ids" { 6 | value = ["${aws_subnet.public.*.id}"] 7 | } 8 | 9 | output "subnet_availability_zones" { 10 | value = ["${aws_subnet.private.*.availability_zone}"] 11 | } 12 | 13 | output "vpc_id" { 14 | value = "${aws_vpc.main.id}" 15 | } 16 | -------------------------------------------------------------------------------- /terraform/modules/vpc/private_subnets.tf: -------------------------------------------------------------------------------- 1 | resource "aws_subnet" "private" { 2 | count = "${length(var.private_subnet_cidr_blocks)}" 3 | vpc_id = "${aws_vpc.main.id}" 4 | cidr_block = "${var.private_subnet_cidr_blocks[count.index]}" 5 | availability_zone = "${data.aws_availability_zones.available.names[count.index]}" 6 | 7 | tags { 8 | Name = "${var.project_name}-private-subnet-${count.index + 1}" 9 | Project = "${var.project_name}" 10 | Environment = "${var.environment}" 11 | } 12 | } 13 | 14 | resource "aws_route_table" "private" { 15 | count = "${length(var.private_subnet_cidr_blocks)}" 16 | vpc_id = "${aws_vpc.main.id}" 17 | 18 | tags { 19 | Name = "${var.project_name}-private-route-table-${count.index + 1}" 20 | Project = "${var.project_name}" 21 | Environment = "${var.environment}" 22 | } 23 | } 24 | 25 | resource "aws_route" "private_to_nat" { 26 | count = "${length(var.private_subnet_cidr_blocks)}" 27 | route_table_id = "${element(aws_route_table.private.*.id, count.index)}" 28 | 29 | destination_cidr_block = "0.0.0.0/0" 30 | nat_gateway_id = "${element(aws_nat_gateway.main.*.id, count.index)}" 31 | } 32 | 33 | resource "aws_route_table_association" "private" { 34 | count = "${length(var.private_subnet_cidr_blocks)}" 35 | route_table_id = "${element(aws_route_table.private.*.id, count.index)}" 36 | subnet_id = "${element(aws_subnet.private.*.id, count.index)}" 37 | } 38 | -------------------------------------------------------------------------------- /terraform/modules/vpc/public_subnets.tf: -------------------------------------------------------------------------------- 1 | resource "aws_subnet" "public" { 2 | count = "${length(var.public_subnet_cidr_blocks)}" 3 | vpc_id = "${aws_vpc.main.id}" 4 | cidr_block = "${var.public_subnet_cidr_blocks[count.index]}" 5 | availability_zone = 
"${data.aws_availability_zones.available.names[count.index]}" 6 | 7 | tags { 8 | Name = "${var.project_name}-public-subnet-${count.index + 1}" 9 | Project = "${var.project_name}" 10 | Environment = "${var.environment}" 11 | } 12 | } 13 | 14 | resource "aws_route_table" "public" { 15 | vpc_id = "${aws_vpc.main.id}" 16 | 17 | route { 18 | cidr_block = "0.0.0.0/0" 19 | gateway_id = "${aws_internet_gateway.main.id}" 20 | } 21 | 22 | tags { 23 | Name = "${var.project_name}-public-route-table" 24 | Project = "${var.project_name}" 25 | Environment = "${var.environment}" 26 | } 27 | } 28 | 29 | resource "aws_route_table_association" "public" { 30 | count = "${length(var.public_subnet_cidr_blocks)}" 31 | route_table_id = "${aws_route_table.public.id}" 32 | subnet_id = "${element(aws_subnet.public.*.id, count.index)}" 33 | } 34 | 35 | resource "aws_eip" "nat_gateway_ip" { 36 | count = "${length(var.public_subnet_cidr_blocks)}" 37 | vpc = true 38 | } 39 | 40 | resource "aws_nat_gateway" "main" { 41 | count = "${length(var.public_subnet_cidr_blocks)}" 42 | subnet_id = "${element(aws_subnet.public.*.id, count.index)}" 43 | allocation_id = "${element(aws_eip.nat_gateway_ip.*.id, count.index)}" 44 | 45 | tags { 46 | Name = "${var.project_name}-nat-gateway-${count.index + 1}" 47 | Project = "${var.project_name}" 48 | Environment = "${var.environment}" 49 | } 50 | } 51 | -------------------------------------------------------------------------------- /terraform/modules/vpc/variables.tf: -------------------------------------------------------------------------------- 1 | variable "project_name" { 2 | type = "string" 3 | } 4 | 5 | variable "environment" { 6 | type = "string" 7 | } 8 | 9 | variable "vpc_cidr_block" { 10 | type = "string" 11 | } 12 | 13 | variable "private_subnet_cidr_blocks" { 14 | type = "list" 15 | } 16 | 17 | variable "public_subnet_cidr_blocks" { 18 | type = "list" 19 | } 20 | --------------------------------------------------------------------------------