├── .gitignore ├── unitfiles ├── 1 │ ├── paz-service-directory.service │ ├── paz-scheduler.service │ ├── paz-orchestrator.service │ ├── paz-scheduler-announce.service │ ├── paz-service-directory-announce.service │ └── paz-orchestrator-announce.service ├── 2 │ ├── paz-web.service │ └── paz-web-announce.service └── test.sh ├── test ├── .gitignore ├── tear-down.sh ├── upyet.sh └── integration.sh ├── docs └── images │ └── Screen Shot 2014-11-22 at 16.39.07.png ├── scripts ├── test-demo-api.sh ├── start-runlevel.sh ├── reinstall-units-vagrant.sh ├── install-vagrant.sh └── helpers.sh ├── CONTRIBUTING.md ├── LICENCE.md ├── bare_metal └── user-data ├── vagrant └── user-data ├── digitalocean └── user-data └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | .DS_Store 2 | *.swp 3 | *.swo 4 | paz-vagrant 5 | -------------------------------------------------------------------------------- /unitfiles/test.sh: -------------------------------------------------------------------------------- 1 | [[ $1 ]] || { echo "Missing numeric runlevel argument (ie. 1 or 2)"; exit 1; } 2 | -------------------------------------------------------------------------------- /test/.gitignore: -------------------------------------------------------------------------------- 1 | unitfiles 2 | scripts 3 | coreos-vagrant 4 | paz-vagrant 5 | config.rb 6 | .install-temp 7 | -------------------------------------------------------------------------------- /docs/images/Screen Shot 2014-11-22 at 16.39.07.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/paz-sh/paz/HEAD/docs/images/Screen Shot 2014-11-22 at 16.39.07.png -------------------------------------------------------------------------------- /scripts/test-demo-api.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | while true; do 3 | curl -o /dev/null -s -i --write-out '%{http_code}\n' http://demo-api.lukeb0nd.com 4 | sleep 1; 5 | done 6 | -------------------------------------------------------------------------------- /test/tear-down.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | checkCWD() { 4 | DIR=$(basename `pwd`) 5 | [ "$DIR" == "test" ] || { echo "You must run this script from the test directory"; exit 1; } 6 | } 7 | 8 | # import helper scripts 9 | . ../scripts/helpers.sh 10 | 11 | checkCWD 12 | 13 | # XXX check if Vagrant is installed 14 | # XXX check version of CoreOS in local Vagrant 15 | destroyOldVagrantCluster 16 | 17 | echo Done. 18 | -------------------------------------------------------------------------------- /scripts/start-runlevel.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | [[ $1 ]] || { echo "Missing numeric runlevel argument (ie. 
1 or 2)"; exit 1; } 3 | [[ $ETCD_ENDPOINT ]] || ETCD_ENDPOINT=172.17.9.101:2379 4 | 5 | etcdctl --peers=$ETCD_ENDPOINT ls > /dev/null 2>&1 || (echo "etcd unreachable at $ETCD_ENDPOINT"; exit 1) 6 | 7 | echo Starting paz runlevel $1 units 8 | fleetctl -strict-host-key-checking=false start unitfiles/$1/* 9 | echo Successfully started all runlevel $1 paz units on the cluster with Fleet 10 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing 2 | 3 | ## Submitting a Pull Request 4 | 5 | Please ensure that your pull request meets the following criteria: 6 | 7 | * Existing tests pass 8 | * New tests are added to cover the issue or feature address by your pull request 9 | * Code style conforms to that of the rest of the project (ie. es-lint reports no errors) 10 | 11 | Other requirements will be added in the future (e.g. code coverage, specific forking/branching rules, etc.), so do check back here. 12 | -------------------------------------------------------------------------------- /LICENCE.md: -------------------------------------------------------------------------------- 1 | # Licence 2 | 3 | Copyright 2015 YLD Ltd. 4 | 5 | Licensed under the Apache License, Version 2.0 (the "License"); 6 | you may not use this file except in compliance with the License. 7 | You may obtain a copy of the License at 8 | 9 | http://www.apache.org/licenses/LICENSE-2.0 10 | 11 | Unless required by applicable law or agreed to in writing, software 12 | distributed under the License is distributed on an "AS IS" BASIS, 13 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 14 | See the License for the specific language governing permissions and 15 | limitations under the License. -------------------------------------------------------------------------------- /unitfiles/1/paz-service-directory.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=A catalog of services with versioned config for running as Docker containers with systemd 3 | After=docker.service 4 | Requires=docker.service 5 | After=etcd2.service 6 | 7 | [Service] 8 | User=core 9 | Restart=always 10 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-service-directory:latest 11 | ExecStartPre=-/bin/sh -c "docker inspect paz-service-directory >/dev/null 2>&1 && docker rm -f paz-service-directory || true" 12 | ExecStart=/usr/bin/docker run -P --name paz-service-directory quay.io/yldio/paz-service-directory 13 | ExecStop=/usr/bin/docker rm -f paz-service-directory 14 | TimeoutStartSec=60m 15 | -------------------------------------------------------------------------------- /unitfiles/2/paz-web.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Web front-end for the paz platform. 
3 | After=docker.service 4 | Requires=docker.service 5 | 6 | [Service] 7 | User=core 8 | Restart=always 9 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-web:latest 10 | ExecStartPre=-/bin/sh -c "docker inspect paz-web >/dev/null 2>&1 && docker rm -f paz-web || true" 11 | ExecStart=/bin/bash -c " \ 12 | domain=$(etcdctl get /paz/config/domain); \ 13 | /usr/bin/docker run -P \ 14 | -e PAZ_ORCHESTRATOR_URL=paz-orchestrator.$domain \ 15 | -e PAZ_ORCHESTRATOR_SOCKET=paz-orchestrator-socket.$domain \ 16 | -e PAZ_SCHEDULER_URL=paz-scheduler.$domain \ 17 | --name paz-web \ 18 | quay.io/yldio/paz-web" 19 | TimeoutStartSec=60m 20 | -------------------------------------------------------------------------------- /unitfiles/1/paz-scheduler.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=paz-scheduler: Takes apps from your paz service directory and runs them on a CoreOS cluster using fleet 3 | After=docker.service 4 | Requires=docker.service 5 | After=fleet.service 6 | 7 | [Service] 8 | User=core 9 | EnvironmentFile=/etc/paz-environment 10 | Restart=always 11 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-scheduler:latest 12 | ExecStartPre=-/bin/bash -c "docker inspect paz-scheduler >/dev/null 2>&1 && docker rm -f paz-scheduler || true" 13 | ExecStart=/bin/bash -c " \ 14 | host_ip=$(fleetctl list-machines --no-legend | head -1 | awk '{ print $2 }'); \ 15 | /usr/bin/docker run -P \ 16 | -e PAZ_SCHEDULER_SVCDIR_URL=paz-service-directory.paz \ 17 | -e PAZ_SCHEDULER_SSH_HOST=$host_ip \ 18 | -e PAZ_SCHEDULER_GEN_KEY=true \ 19 | -e PAZ_SCHEDULER_ETCD_ENDPOINT=$host_ip:4001 \ 20 | -e PAZ_SCHEDULER_CORS=$PAZ_SCHEDULER_CORS \ 21 | --name paz-scheduler \ 22 | quay.io/yldio/paz-scheduler" 23 | ExecStop=/usr/bin/docker rm -f paz-scheduler 24 | TimeoutStartSec=60m 25 | -------------------------------------------------------------------------------- /test/upyet.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | export FLEETCTL_TUNNEL=127.0.0.1:2222 3 | echo Waiting for services to be activated... 4 | UNIT_COUNT=8 5 | ACTIVE_COUNT=0 6 | DOT_COUNTER=1 7 | until [ "$ACTIVE_COUNT" == "$UNIT_COUNT" ]; do 8 | ACTIVATING_COUNT=$(fleetctl -strict-host-key-checking=false list-units 2>/dev/null | grep "\.service" | awk '{print $3}' | grep -c "activating") 9 | ACTIVE_COUNT=$(fleetctl -strict-host-key-checking=false list-units 2>/dev/null | grep "\.service" | awk '{print $3}' | grep -cx "active") 10 | FAILED_COUNT=$(fleetctl -strict-host-key-checking=false list-units 2>/dev/null | grep "\.service" | awk '{print $3}' | grep -c "failed") 11 | echo -n $'\r'Activating: $ACTIVATING_COUNT \| Active: $ACTIVE_COUNT \| Failed: $FAILED_COUNT 12 | for (( c=1; c<=$DOT_COUNTER; c++ )); do echo -n .; done 13 | for (( c=3; c>$DOT_COUNTER; c-- )); do echo -n " "; done 14 | ((DOT_COUNTER++)) 15 | if [ "$DOT_COUNTER" -gt 3 ]; then 16 | DOT_COUNTER=1 17 | fi 18 | if [ "$FAILED_COUNT" -gt 0 ]; then 19 | tput bel 20 | echo 21 | echo Failed unit detected 22 | exit 1 23 | fi 24 | done 25 | 26 | echo 27 | echo All units successfully activated! 
28 | -------------------------------------------------------------------------------- /scripts/reinstall-units-vagrant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Re-installing Paz Units onto Vagrant" 4 | 5 | checkScriptsDirExists() { 6 | [ -d "scripts" ] || { echo "You must run this script from the root directory of the repository"; exit 1; } 7 | } 8 | 9 | checkScriptsDirExists 10 | 11 | # import helper scripts 12 | . ./scripts/helpers.sh 13 | 14 | checkDependencies 15 | 16 | checkForVagrantCluster 17 | 18 | configureSSHAgent 19 | 20 | ETCDCTL_CMD="etcdctl --peers=172.17.9.101:2379" 21 | export FLEETCTL_ENDPOINT=http://172.17.9.101:2379 22 | printDebug ETCDCTL_CMD=${ETCDCTL_CMD} 23 | printDebug FLEETCTL_ENDPOINT=${FLEETCTL_ENDPOINT} 24 | 25 | # destroy all paz units, then re-launch all except paz-web & wait 26 | destroyExistingUnits 27 | launchAndWaitForUnits 1 6 28 | waitForCoreServicesAnnounce 29 | 30 | # launch paz-web 31 | launchAndWaitForUnits 2 8 32 | 33 | # XXX need to test if paz-web can talk to orchestrator 34 | 35 | echo 36 | echo You will need to add the following entries to your /etc/hosts: 37 | echo 172.17.9.101 paz-web.paz 38 | echo 172.17.9.101 paz-scheduler.paz 39 | echo 172.17.9.101 paz-orchestrator.paz 40 | echo 172.17.9.101 paz-orchestrator-socket.paz 41 | 42 | echo 43 | echo Paz installation successful 44 | -------------------------------------------------------------------------------- /unitfiles/2/paz-web-announce.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=paz-web announce 3 | BindsTo=paz-web.service 4 | After=docker.service 5 | Requires=docker.service 6 | After=etcd2.service 7 | 8 | [Service] 9 | User=core 10 | EnvironmentFile=/etc/environment 11 | Restart=always 12 | ExecStartPre=/bin/sh -c " \ 13 | until docker inspect \ 14 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-web >/dev/null 2>&1; \ 15 | do sleep 2; \ 16 | done; \ 17 | port=$(docker inspect \ 18 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-web); \ 19 | echo Waiting for $port/tcp...; \ 20 | until netstat -lnt | grep :$port >/dev/null; \ 21 | do sleep 1; \ 22 | done" 23 | ExecStart=/bin/sh -c " \ 24 | port=$(docker inspect \ 25 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-web); \ 26 | echo Connected to $COREOS_PRIVATE_IPV4:$port/tcp, publishing to etcd...; \ 27 | while netstat -lnt | grep :$port >/dev/null; \ 28 | do etcdctl set /paz/services/paz-web $COREOS_PRIVATE_IPV4:$port --ttl 60 >/dev/null; \ 29 | sleep 45; \ 30 | done" 31 | ExecStop=/usr/bin/etcdctl rm --recursive /paz/services/paz-web 32 | TimeoutStartSec=60m 33 | 34 | [X-Fleet] 35 | X-ConditionMachineOf=paz-web.service 36 | -------------------------------------------------------------------------------- /scripts/install-vagrant.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Installing Paz on Vagrant" 4 | 5 | checkScriptsDirExists() { 6 | [ -d "scripts" ] || { echo "You must run this script from the root directory of the repository"; exit 1; } 7 | } 8 | 9 | checkScriptsDirExists 10 | 11 | # import helper scripts 12 | . 
./scripts/helpers.sh 13 | 14 | checkDependencies 15 | 16 | # XXX check if Vagrant is installed 17 | # XXX check version of CoreOS in local Vagrant 18 | 19 | destroyOldVagrantCluster 20 | 21 | set -e 22 | createNewVagrantCluster vagrant/user-data 23 | 24 | configureSSHAgent 25 | 26 | ETCDCTL_CMD="etcdctl --peers=172.17.9.101:2379" 27 | export FLEETCTL_ENDPOINT=http://172.17.9.101:2379 28 | printDebug ETCDCTL_CMD=${ETCDCTL_CMD} 29 | printDebug FLEETCTL_ENDPOINT=${FLEETCTL_ENDPOINT} 30 | 31 | # launch all base paz units except paz-web, and wait until announced 32 | set +e 33 | launchAndWaitForUnits 1 6 34 | waitForCoreServicesAnnounce 35 | 36 | # launch paz-web 37 | launchAndWaitForUnits 2 8 38 | 39 | # XXX need to test if paz-web can talk to orchestrator 40 | 41 | echo 42 | echo You will need to add the following entries to your /etc/hosts: 43 | echo 172.17.9.101 paz-web.paz 44 | echo 172.17.9.101 paz-scheduler.paz 45 | echo 172.17.9.101 paz-orchestrator.paz 46 | echo 172.17.9.101 paz-orchestrator-socket.paz 47 | 48 | echo 49 | echo Paz installation successful 50 | -------------------------------------------------------------------------------- /unitfiles/1/paz-orchestrator.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=paz-orchestrator: Main API for all paz services and monitor of services in etcd. 3 | After=docker.service 4 | Requires=docker.service 5 | After=etcd2.service 6 | Requires=etcd2.service 7 | 8 | [Service] 9 | User=core 10 | EnvironmentFile=/etc/paz-environment 11 | Restart=always 12 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-orchestrator:latest 13 | ExecStartPre=/bin/bash -c "etcdctl mkdir /paz/config >/dev/null 2>&1; etcdctl set /paz/config/domain $PAZ_DOMAIN" 14 | ExecStartPre=-/bin/bash -c " \ 15 | docker inspect paz-orchestrator >/dev/null 2>&1 \ 16 | && docker rm -f paz-orchestrator || true" 17 | ExecStart=/bin/bash -c " \ 18 | host_ip=$(fleetctl list-machines --no-legend | head -1 | awk '{ print $2 }'); \ 19 | /usr/bin/docker run -P -e PAZ_ORCHESTRATOR_SVCDIR_URL=paz-service-directory.paz \ 20 | -e PAZ_ORCHESTRATOR_SCHEDULER_URL=paz-scheduler.paz \ 21 | -e PAZ_ORCHESTRATOR_ETCD_ENDPOINT=$host_ip:4001 \ 22 | -e PAZ_ORCHESTRATOR_DNS_EMAIL=$PAZ_DNSIMPLE_EMAIL \ 23 | -e PAZ_ORCHESTRATOR_DNS_APIKEY=$PAZ_DNSIMPLE_APIKEY \ 24 | -e PAZ_ORCHESTRATOR_DNS_DOMAIN=$PAZ_DOMAIN \ 25 | -e PAZ_ORCHESTRATOR_DNS_DISABLED=$PAZ_ORCHESTRATOR_DNS_DISABLED \ 26 | -e PAZ_ORCHESTRATOR_CORS=$PAZ_ORCHESTRATOR_CORS \ 27 | --name paz-orchestrator quay.io/yldio/paz-orchestrator" 28 | ExecStop=/usr/bin/docker rm -f paz-orchestrator 29 | TimeoutStartSec=60m 30 | -------------------------------------------------------------------------------- /unitfiles/1/paz-scheduler-announce.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=paz-scheduler announce 3 | BindsTo=paz-scheduler.service 4 | After=docker.service 5 | Requires=docker.service 6 | After=etcd2.service 7 | Requires=etcd2.service 8 | 9 | [Service] 10 | User=core 11 | EnvironmentFile=/etc/environment 12 | Restart=always 13 | ExecStartPre=/bin/sh -c " \ 14 | until \ 15 | docker inspect \ 16 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-scheduler > /dev/null 2>&1; \ 17 | do sleep 2; \ 18 | done; \ 19 | port=$(docker inspect \ 20 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-scheduler); \ 21 | echo Waiting 
for $port/tcp...; \ 22 | until netstat -lnt | grep :$port >/dev/null; \ 23 | do sleep 1; \ 24 | done" 25 | ExecStart=/bin/sh -c " \ 26 | port=$(docker inspect \ 27 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-scheduler); \ 28 | echo Connected to $COREOS_PRIVATE_IPV4:$port/tcp, publishing to etcd...; \ 29 | while netstat -lnt | grep :$port >/dev/null; \ 30 | do etcdctl set /paz/services/paz-scheduler $COREOS_PRIVATE_IPV4:$port --ttl 60 >/dev/null; \ 31 | sleep 45; \ 32 | done" 33 | ExecStop=/usr/bin/etcdctl rm --recursive /paz/services/paz-scheduler 34 | TimeoutStartSec=60m 35 | 36 | [X-Fleet] 37 | X-ConditionMachineOf=paz-scheduler.service 38 | -------------------------------------------------------------------------------- /unitfiles/1/paz-service-directory-announce.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=paz-service-directory announce 3 | BindsTo=paz-service-directory.service 4 | After=docker.service 5 | Requires=docker.service 6 | After=etcd2.service 7 | Requires=etcd2.service 8 | 9 | [Service] 10 | User=core 11 | EnvironmentFile=/etc/environment 12 | Restart=always 13 | ExecStartPre=/bin/sh -c " \ 14 | until docker inspect \ 15 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-service-directory >/dev/null 2>&1; \ 16 | do sleep 2; \ 17 | done; \ 18 | port=$(docker inspect \ 19 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-service-directory); \ 20 | echo Waiting for $port/tcp...; \ 21 | until netstat -lnt | grep :$port >/dev/null; \ 22 | do sleep 1; \ 23 | done" 24 | ExecStart=/bin/sh -c " \ 25 | port=$(docker inspect \ 26 | -f '{{range $i, $e := .NetworkSettings.Ports }}{{$p := index $e 0}}{{$p.HostPort}}{{end}}' paz-service-directory); \ 27 | echo Connected to $COREOS_PRIVATE_IPV4:$port/tcp, publishing to etcd...; \ 28 | while netstat -lnt | grep :$port >/dev/null; \ 29 | do etcdctl set /paz/services/paz-service-directory $COREOS_PRIVATE_IPV4:$port --ttl 60 >/dev/null; \ 30 | sleep 45; \ 31 | done" 32 | ExecStop=/usr/bin/etcdctl rm --recursive /paz/services/paz-service-directory 33 | TimeoutStartSec=60m 34 | 35 | [X-Fleet] 36 | X-ConditionMachineOf=paz-service-directory.service 37 | -------------------------------------------------------------------------------- /unitfiles/1/paz-orchestrator-announce.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=paz-orchestrator announce 3 | BindsTo=paz-orchestrator.service 4 | After=docker.service 5 | Requires=docker.service 6 | After=etcd2.service 7 | Requires=etcd2.service 8 | After=fleet.service 9 | 10 | [Service] 11 | User=core 12 | EnvironmentFile=/etc/environment 13 | Restart=always 14 | ExecStartPre=/bin/sh -c " \ 15 | until \ 16 | docker inspect -f '{{.NetworkSettings.Ports}}' paz-orchestrator > /dev/null 2>&1; \ 17 | do sleep 2; \ 18 | done; \ 19 | port=$(docker inspect -f '{{ index .NetworkSettings.Ports \"9000/tcp\"}}' paz-orchestrator \ 20 | | sed 's/.*Port://' \ 21 | | sed 's/].+*//'); \ 22 | echo Waiting for $port/tcp...; \ 23 | until netstat -lnt | grep :$port >/dev/null; \ 24 | do sleep 1; \ 25 | done" 26 | ExecStart=/bin/sh -c \ 27 | "portRest=$(docker inspect -f '{{ index .NetworkSettings.Ports \"9000/tcp\"}}' paz-orchestrator \ 28 | | sed 's/.*Port://' \ 29 | | sed 's/].+*//'); \ 30 | portSocket=$(docker inspect -f '{{ index 
.NetworkSettings.Ports \"1337/tcp\"}}' paz-orchestrator \ 31 | | sed 's/.*Port://' \ 32 | | sed 's/].+*//'); \ 33 | echo Connected to $COREOS_PRIVATE_IPV4:$portRest/tcp and $COREOS_PRIVATE_IPV4:$portSocket, publishing to etcd...; \ 34 | while netstat -lnt | grep :$portRest >/dev/null; \ 35 | do etcdctl set /paz/services/paz-orchestrator $COREOS_PRIVATE_IPV4:$portRest --ttl 60 >/dev/null \ 36 | && etcdctl set /paz/services/paz-orchestrator-socket $COREOS_PRIVATE_IPV4:$portSocket >/dev/null; \ 37 | sleep 45; \ 38 | done" 39 | ExecStop=/bin/sh -c "/usr/bin/etcdctl rm --recursive /paz/services/paz-orchestrator && /usr/bin/etcdctl rm --recursive /paz/services/paz-orchestrator-socket" 40 | TimeoutStartSec=60m 41 | 42 | [X-Fleet] 43 | X-ConditionMachineOf=paz-orchestrator.service 44 | -------------------------------------------------------------------------------- /test/integration.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Starting Paz integration test script" 4 | 5 | declare -r DIR=$(cd "$(dirname "$0")" && pwd) 6 | cd "${DIR}" 7 | 8 | copyDependencies() { 9 | mkdir unitfiles 10 | cp -R ../unitfiles/* unitfiles 11 | mkdir scripts 12 | cp ../scripts/start-runlevel.sh scripts 13 | } 14 | 15 | # import helper scripts 16 | . ../scripts/helpers.sh 17 | 18 | checkDependencies 19 | 20 | # XXX check version of CoreOS in local Vagrant 21 | destroyOldVagrantCluster 22 | [ $(basename `pwd`) == "test" ] || { cd test; } 23 | rm -rf scripts unitfiles 2>/dev/null 24 | 25 | set -e 26 | 27 | createNewVagrantCluster ../vagrant/user-data 28 | 29 | copyDependencies 30 | configureSSHAgent 31 | 32 | ETCDCTL_CMD="etcdctl --peers=172.17.9.101:2379" 33 | export FLEETCTL_TUNNEL=127.0.0.1:2222 34 | 35 | set +e 36 | 37 | # launch all base paz units except paz-web, and wait until announced 38 | launchAndWaitForUnits 1 6 39 | waitForCoreServicesAnnounce 40 | 41 | # launch paz-web 42 | launchAndWaitForUnits 2 8 43 | 44 | # XXX need to test if paz-web can talk to orchestrator 45 | 46 | echo 47 | echo You will need to add the following entries to your /etc/hosts: 48 | echo 172.17.9.101 paz-web.paz 49 | echo 172.17.9.101 paz-scheduler.paz 50 | echo 172.17.9.101 paz-orchestrator.paz 51 | echo 172.17.9.101 paz-orchestrator-socket.paz 52 | 53 | echo 54 | echo Adding service to directory 55 | # XXX if it fails (e.g. 503) then it doesn't realise 56 | SVCDOC='{"name":"demo-api","description":"Very simple HTTP Hello World server","dockerRepository":"quay.io/lukebond/demo-api","numInstances":3,"publicFacing":false}' 57 | ORCHESTRATOR_URL=$($ETCDCTL_CMD get /paz/services/paz-orchestrator) 58 | until curl -sf -XPOST -H "Content-Type: application/json" -d "$SVCDOC" $ORCHESTRATOR_URL/services; do 59 | sleep 2 60 | done 61 | 62 | echo 63 | echo Deploying new service with the /hooks/deploy endpoint 64 | # XXX if it fails (e.g. 
400) then it doesn't realise 65 | SCHEDULER_URL=$($ETCDCTL_CMD get /paz/services/paz-scheduler) 66 | DEPLOY_DOC="{\"serviceName\":\"demo-api\",\"dockerRepository\":\"lukebond/demo-api\",\"pushedAt\":`date +%s`}" 67 | until curl -sf -XPOST -H "Content-Type: application/json" -d "$DEPLOY_DOC" $SCHEDULER_URL/hooks/deploy; do 68 | sleep 2 69 | done 70 | 71 | echo 72 | echo Waiting for service to announce itself 73 | until $ETCDCTL_CMD get /paz/services/demo-api/1/1 >/dev/null 2>&1; do 74 | FAILED_COUNT=$(fleetctl -strict-host-key-checking=false list-units 2>/dev/null | grep "\.service" | awk '{print $3}' | grep -c "failed") 75 | if [ "$FAILED_COUNT" -gt 0 ]; then 76 | tput bel 77 | echo Failed unit detected 78 | exit 1 79 | fi 80 | sleep 1 81 | done 82 | echo "Test service \"demo-api\" is up" 83 | 84 | echo 85 | echo You will need to add the following entries to your /etc/hosts: 86 | echo 172.17.9.101 demo-api.paz 87 | -------------------------------------------------------------------------------- /scripts/helpers.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | printDebug() { 4 | if [ -n "$DEBUG" ]; then echo DEBUG: $*; fi 5 | } 6 | 7 | # XXX check version of fleetctl and etcdctl- should be recent and should match what will be in vagrant 8 | checkDependencies() { 9 | command -v vagrant >/dev/null 2>&1 || { echo >&2 "Please install vagrant. Aborting"; exit 1; } 10 | command -v etcdctl >/dev/null 2>&1 || { echo >&2 "Please install etcdctl. Aborting."; exit 1; } 11 | command -v fleetctl >/dev/null 2>&1 || { echo >&2 "Please install fleetctl. Aborting."; exit 1; } 12 | } 13 | 14 | checkForVagrantCluster() { 15 | [ -d "paz-vagrant" ] || { echo >&2 "No extant Vagrant cluster"; exit 1; } 16 | } 17 | 18 | destroyOldVagrantCluster() { 19 | echo 20 | echo "Checking for existing Vagrant cluster" 21 | if [ -d "paz-vagrant" ]; then 22 | echo "Deleting existing Vagrant cluster" 23 | cd paz-vagrant 24 | vagrant destroy -f 25 | cd .. 26 | fi 27 | rm -rf paz-vagrant 2>/dev/null 28 | } 29 | 30 | destroyExistingUnits() { 31 | echo 32 | echo "Destroying existing units" 33 | fleetctl -strict-host-key-checking=false destroy \ 34 | paz-orchestrator paz-orchestrator-announce \ 35 | paz-service-directory paz-service-directory-announce \ 36 | paz-scheduler paz-scheduler-announce \ 37 | paz-web paz-web-announce 2>/dev/null 38 | } 39 | 40 | createNewVagrantCluster() { 41 | echo 42 | echo "Creating a new Vagrant cluster" 43 | git clone https://github.com/paz-sh/paz-vagrant.git 44 | cp $1 paz-vagrant 45 | cd paz-vagrant 46 | DISCOVERY_TOKEN=`curl -s https://discovery.etcd.io/new` && perl -i -p -e "s@discovery: https://discovery.etcd.io/\w+@discovery: $DISCOVERY_TOKEN@g" user-data 47 | printDebug Using discovery token ${DISCOVERY_TOKEN} 48 | cp config.rb.sample config.rb 49 | vagrant box update 50 | vagrant up 51 | echo Waiting for Vagrant cluster to be ready... 52 | until $ETCDCTL_CMD ls >/dev/null 2>&1; do sleep 1; done 53 | cd .. 54 | echo Paz Vagrant cluster is up 55 | sleep 5 56 | } 57 | 58 | configureSSHAgent() { 59 | echo 60 | echo "Configuring SSH" 61 | if [ -z "$SSH_AUTH_SOCK" ]; then 62 | eval $(ssh-agent) 63 | fi 64 | ssh-add ~/.vagrant.d/insecure_private_key 65 | } 66 | 67 | launchAndWaitForUnits() { 68 | echo 69 | PAZ_RUNLEVEL=$1 70 | ./scripts/start-runlevel.sh ${PAZ_RUNLEVEL} || { 71 | STATUS=$?; 72 | echo "Failed to start at run level ${PAZ_RUNLEVEL}. 
Exit code ${STATUS}"; 73 | exit ${STATUS}; 74 | } 75 | 76 | echo Waiting for runlevel $PAZ_RUNLEVEL services to be activated... 77 | UNIT_COUNT=$2 78 | ACTIVE_COUNT=0 79 | DOT_COUNTER=1 80 | until [ "$ACTIVE_COUNT" == "$UNIT_COUNT" ]; do 81 | local UNIT_STATUS=$(fleetctl -strict-host-key-checking=false list-units 2>/dev/null) 82 | ACTIVATING_COUNT=$(echo "${UNIT_STATUS}" | grep "\.service" | awk '{print $3}' | grep -c "activating") 83 | ACTIVE_COUNT=$(echo "${UNIT_STATUS}" | grep "\.service" | awk '{print $3}' | grep -cx "active") 84 | FAILED_COUNT=$(echo "${UNIT_STATUS}" | grep "\.service" | awk '{print $3}' | grep -c "failed") 85 | echo -n $'\r'Activating: $ACTIVATING_COUNT \| Active: $ACTIVE_COUNT \| Failed: $FAILED_COUNT 86 | for (( c=1; c<=$DOT_COUNTER; c++ )); do echo -n .; done 87 | for (( c=3; c>$DOT_COUNTER; c-- )); do echo -n " "; done 88 | ((DOT_COUNTER++)) 89 | if [ "$DOT_COUNTER" -gt 3 ]; then 90 | DOT_COUNTER=1 91 | fi 92 | if [ "$FAILED_COUNT" -gt 0 ]; then 93 | tput bel 94 | echo 95 | echo Failed unit detected 96 | exit 1 97 | fi 98 | sleep 0.5 99 | done 100 | echo 101 | echo All runlevel $PAZ_RUNLEVEL units successfully activated! 102 | } 103 | 104 | # wait for orchestrator, service directory and scheduler announce entries to be written to etcd 105 | waitForCoreServicesAnnounce() { 106 | echo 107 | echo "Waiting for orchestrator, scheduler and service directory to be announced" 108 | until $ETCDCTL_CMD get /paz/services/paz-orchestrator >/dev/null 2>&1; do 109 | sleep 1 110 | done 111 | until $ETCDCTL_CMD get /paz/services/paz-scheduler >/dev/null 2>&1; do 112 | sleep 1 113 | done 114 | until $ETCDCTL_CMD get /paz/services/paz-service-directory >/dev/null 2>&1; do 115 | sleep 1 116 | done 117 | } 118 | -------------------------------------------------------------------------------- /bare_metal/user-data: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | # assumptions: 3 | # local private IP for this machine is 10.0.1.23 4 | # gateway IP is 10.0.1.1 5 | coreos: 6 | units: 7 | - name: etcd2.service 8 | command: start 9 | - name: fleet.service 10 | command: start 11 | - name: docker.service 12 | drop-ins: 13 | - name: 50-docker-dns.conf 14 | content: | 15 | [Service] 16 | Environment='DOCKER_OPTS=--restart=false -D --dns=10.0.1.23 --dns=10.0.1.1' 17 | - name: cadvisor.service 18 | runtime: true 19 | command: start 20 | content: | 21 | [Unit] 22 | Description=Analyzes resource usage and performance characteristics of running containers. 
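# cAdvisor's web UI and metrics API are exposed on host port 8080 via --publish=8080:8080 below.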
23 | After=docker.service 24 | Requires=docker.service 25 | 26 | [Service] 27 | Restart=always 28 | ExecStartPre=/usr/bin/docker pull google/cadvisor:latest 29 | ExecStartPre=-/bin/bash -c " \ 30 | docker inspect cadvisor >/dev/null 2>&1 \ 31 | && docker rm -f cadvisor || true" 32 | ExecStart=/usr/bin/docker run --volume=/var/run:/var/run:rw --volume=/sys/fs/cgroup/:/sys/fs/cgroup:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --name=cadvisor google/cadvisor:latest 33 | ExecStop=/usr/bin/docker rm -f cadvisor 34 | - name: paz-dnsmasq.service 35 | runtime: true 36 | command: start 37 | content: | 38 | [Unit] 39 | Description=*.paz traffic will go to the private_ipv4 addr 40 | After=docker.service 41 | Requires=docker.service 42 | 43 | After=etcd2.service 44 | Requires=etcd2.service 45 | After=fleet.service 46 | Requires=fleet.service 47 | 48 | [Service] 49 | Restart=always 50 | ExecStartPre=/usr/bin/docker pull tomgco/dnsmasq-catch:latest 51 | ExecStartPre=-/bin/bash -c " \ 52 | docker inspect paz-dnsmasq >/dev/null 2>&1 \ 53 | && docker rm -f paz-dnsmasq || true" 54 | ExecStart=/usr/bin/docker run -p 10.0.1.23:53:53/udp --privileged --name=paz-dnsmasq tomgco/dnsmasq-catch paz 10.0.1.23 55 | ExecStop=/usr/bin/docker rm -f paz-dnsmasq 56 | - name: paz-haproxy.service 57 | runtime: true 58 | command: start 59 | content: | 60 | [Unit] 61 | Description=paz HAProxy instance that enables service discovery. 62 | After=docker.service 63 | Requires=docker.service 64 | 65 | After=etcd2.service 66 | Requires=etcd2.service 67 | After=fleet.service 68 | Requires=fleet.service 69 | 70 | [Service] 71 | User=core 72 | Restart=always 73 | RestartSec=5s 74 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-haproxy:latest 75 | ExecStartPre=-/bin/bash -c " \ 76 | docker inspect paz-haproxy >/dev/null 2>&1 \ 77 | && docker rm -f paz-haproxy || true" 78 | ExecStart=/usr/bin/docker run -p 80:80 -p 1936:1936 -e ETCD=10.0.1.23:2379 --name paz-haproxy quay.io/yldio/paz-haproxy 79 | ExecStop=/usr/bin/docker rm -f paz-haproxy 80 | TimeoutStartSec=20m 81 | - name: paz-pubkey-watcher.service 82 | runtime: true 83 | command: start 84 | content: | 85 | [Unit] 86 | Description=Watch etcd for scheduler public key changes and update authorized_hosts. 
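# Delegates to /home/core/bin/paz-pubkey-watcher.sh (written under write_files below), which installs the scheduler's SSH public key for the core user via update-ssh-keys.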
87 | 88 | After=etcd2.service 89 | Requires=etcd2.service 90 | After=fleet.service 91 | Requires=fleet.service 92 | 93 | [Service] 94 | User=core 95 | Restart=always 96 | ExecStartPre=/home/core/bin/paz-pubkey-watcher.sh once 97 | ExecStart=/home/core/bin/paz-pubkey-watcher.sh 98 | etcd: 99 | name: hostname 100 | addr: 10.0.1.23:4001 101 | peer-addr: 10.0.1.23:7001 102 | discovery: https://discovery.etcd.io/0fb290c2f54d9338f0abefe73745f8cd 103 | fleet: 104 | public-ip: 10.0.1.23 105 | etcd_request_timeout: 5 106 | write_files: 107 | - path: /etc/paz-environment 108 | permissions: 0644 109 | content: | 110 | PAZ_PLATFORM=bare_metal 111 | PAZ_DOMAIN=burntsheep.com 112 | PAZ_ORCHESTRATOR_DNS_DISABLED=true 113 | PAZ_ORCHESTRATOR_CORS=true 114 | - path: /home/core/bin/paz-pubkey-watcher.sh 115 | owner: core 116 | permissions: 0754 117 | content: | 118 | #!/bin/bash 119 | set -e 120 | if [[ "$1" == "once" ]]; then 121 | FN=`mktemp /tmp/paz-pubkey.XXXX` 122 | until etcdctl get /paz/config/scheduler/_pubkey 2>/dev/null > $FN.tmp; do sleep 2; done && base64 -d < $FN.tmp > $FN && /usr/bin/update-ssh-keys -u core -a paz-scheduler $FN 123 | rm $FN $FN.tmp 124 | else 125 | while :; do 126 | FN=`mktemp /tmp/paz-pubkey.XXXX` 127 | etcdctl watch /paz/config/scheduler/_pubkey | base64 -d > $FN && /usr/bin/update-ssh-keys -u core -a paz-scheduler $FN; 128 | rm $FN 129 | done; 130 | fi 131 | - path: /etc/environment 132 | content: | 133 | COREOS_PUBLIC_IPV4=10.0.1.23 134 | COREOS_PRIVATE_IPV4=10.0.1.23 135 | -------------------------------------------------------------------------------- /vagrant/user-data: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | 3 | coreos: 4 | # Remove this as we want to reboot to update to latest version, maybe not for vagrant as of yet, as we should push this to CI. 5 | update: 6 | reboot-strategy: off 7 | etcd2: 8 | # generate a new token for each unique cluster from https://discovery.etcd.io/new 9 | # discovery: https://discovery.etcd.io/0fb290c2f54d9338f0abefe73745f8cd 10 | # multi-region and multi-cloud deployments need to use $public_ipv4 11 | advertise-client-urls: http://$public_ipv4:2379 12 | # initial-advertise-peer-urls: http://$private_ipv4:2380 13 | # listen on both the official ports and the legacy ports 14 | # legacy ports can be omitted if your application doesn't depend on them 15 | listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001 16 | # listen-peer-urls: http://$private_ipv4:2380,http://$private_ipv4:7001 17 | fleet: 18 | public-ip: $public_ipv4 19 | units: 20 | - name: etcd2.service 21 | command: start 22 | - name: fleet.service 23 | command: start 24 | - name: docker.service 25 | drop-ins: 26 | - name: 50-docker-dns.conf 27 | content: | 28 | [Service] 29 | Environment='DOCKER_OPTS=--restart=false -D --dns=$private_ipv4 --dns=8.8.8.8' 30 | - name: cadvisor.service 31 | runtime: true 32 | command: start 33 | content: | 34 | [Unit] 35 | Description=Analyzes resource usage and performance characteristics of running containers. 
36 | After=docker.service 37 | Requires=docker.service 38 | 39 | [Service] 40 | Restart=always 41 | ExecStartPre=/usr/bin/docker pull google/cadvisor:latest 42 | ExecStartPre=-/bin/bash -c " \ 43 | docker inspect cadvisor >/dev/null 2>&1 \ 44 | && docker rm -f cadvisor || true" 45 | ExecStart=/usr/bin/docker run --volume=/var/run:/var/run:rw --volume=/sys/fs/cgroup/:/sys/fs/cgroup:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --name=cadvisor google/cadvisor:latest 46 | ExecStop=/usr/bin/docker rm -f cadvisor 47 | - name: paz-dnsmasq.service 48 | runtime: true 49 | command: start 50 | content: | 51 | [Unit] 52 | Description=*.paz traffic will go to the private_ipv4 addr 53 | After=docker.service 54 | Requires=docker.service 55 | 56 | After=etcd2.service 57 | Requires=etcd2.service 58 | After=fleet.service 59 | Requires=fleet.service 60 | 61 | [Service] 62 | Restart=always 63 | ExecStartPre=/usr/bin/docker pull tomgco/dnsmasq-catch:latest 64 | ExecStartPre=-/bin/bash -c " \ 65 | docker inspect paz-dnsmasq >/dev/null 2>&1 \ 66 | && docker rm -f paz-dnsmasq || true" 67 | ExecStart=/usr/bin/docker run -p $private_ipv4:53:53/udp --privileged --name=paz-dnsmasq tomgco/dnsmasq-catch paz $private_ipv4 68 | ExecStop=/usr/bin/docker rm -f paz-dnsmasq 69 | - name: paz-haproxy.service 70 | runtime: true 71 | command: start 72 | content: | 73 | [Unit] 74 | Description=paz HAProxy instance that enables service discovery. 75 | After=docker.service 76 | Requires=docker.service 77 | 78 | After=etcd2.service 79 | Requires=etcd2.service 80 | After=fleet.service 81 | Requires=fleet.service 82 | 83 | [Service] 84 | User=core 85 | Restart=always 86 | RestartSec=5s 87 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-haproxy:latest 88 | ExecStartPre=-/bin/bash -c " \ 89 | docker inspect paz-haproxy >/dev/null 2>&1 \ 90 | && docker rm -f paz-haproxy || true" 91 | ExecStart=/usr/bin/docker run -p 80:80 -p 1936:1936 -e ETCD=$private_ipv4:2379 --name paz-haproxy quay.io/yldio/paz-haproxy 92 | ExecStop=/usr/bin/docker rm -f paz-haproxy 93 | TimeoutStartSec=20m 94 | - name: paz-pubkey-watcher.service 95 | runtime: true 96 | command: start 97 | content: | 98 | [Unit] 99 | Description=Watch etcd2 for scheduler public key changes and update authorized_hosts. 
100 | 101 | After=etcd2.service 102 | Requires=etcd2.service 103 | After=fleet.service 104 | Requires=fleet.service 105 | 106 | [Service] 107 | User=core 108 | Restart=always 109 | ExecStartPre=/home/core/bin/paz-pubkey-watcher.sh once 110 | ExecStart=/home/core/bin/paz-pubkey-watcher.sh 111 | 112 | write_files: 113 | - path: /etc/paz-environment 114 | permissions: 0644 115 | content: | 116 | PAZ_PLATFORM=vagrant 117 | PAZ_DOMAIN=paz 118 | PAZ_ORCHESTRATOR_DNS_DISABLED=true 119 | PAZ_ORCHESTRATOR_CORS=true 120 | - path: /home/core/bin/paz-pubkey-watcher.sh 121 | owner: core 122 | permissions: 0754 123 | content: | 124 | #!/bin/bash 125 | set -e 126 | if [[ "$1" == "once" ]]; then 127 | FN=`mktemp /tmp/paz-pubkey.XXXX` 128 | until etcdctl get /paz/config/scheduler/_pubkey 2>/dev/null > $FN.tmp; do sleep 2; done && base64 -d < $FN.tmp > $FN && /usr/bin/update-ssh-keys -u core -a paz-scheduler $FN 129 | rm $FN $FN.tmp 130 | else 131 | while :; do 132 | FN=`mktemp /tmp/paz-pubkey.XXXX` 133 | etcdctl watch /paz/config/scheduler/_pubkey | base64 -d > $FN && /usr/bin/update-ssh-keys -u core -a paz-scheduler $FN; 134 | rm $FN 135 | done; 136 | fi 137 | -------------------------------------------------------------------------------- /digitalocean/user-data: -------------------------------------------------------------------------------- 1 | #cloud-config 2 | 3 | coreos: 4 | update: 5 | reboot-strategy: off 6 | etcd: 7 | #generate a new token for each unique cluster from https://discovery.etcd.io/new 8 | discovery: https://discovery.etcd.io/0fb290c2f54d9338f0abefe73745f8cd 9 | addr: $public_ipv4:4001 10 | peer-addr: $public_ipv4:7001 11 | fleet: 12 | public-ip: $public_ipv4 13 | units: 14 | - name: etcd2.service 15 | command: start 16 | - name: fleet.service 17 | command: start 18 | - name: docker.service 19 | drop-ins: 20 | - name: 50-docker-dns.conf 21 | content: | 22 | [Service] 23 | Environment='DOCKER_OPTS=--restart=false -D --dns=$private_ipv4 --dns=8.8.8.8' 24 | - name: create-swap.service 25 | command: start 26 | runtime: true 27 | content: | 28 | [Unit] 29 | Description=Create swap file 30 | Before=swap.service 31 | 32 | [Service] 33 | Type=oneshot 34 | Environment="SWAPFILE=/2GiB.swap" 35 | ExecStart=/usr/bin/touch ${SWAPFILE} 36 | ExecStart=/usr/bin/chattr +C ${SWAPFILE} 37 | ExecStart=/usr/bin/fallocate -l 2048m ${SWAPFILE} 38 | ExecStart=/usr/bin/chmod 600 ${SWAPFILE} 39 | ExecStart=/usr/sbin/mkswap ${SWAPFILE} 40 | 41 | [Install] 42 | WantedBy=multi-user.target 43 | - name: swap.service 44 | command: start 45 | content: | 46 | [Unit] 47 | Description=Turn on swap 48 | 49 | [Service] 50 | Type=oneshot 51 | Environment="SWAPFILE=/2GiB.swap" 52 | RemainAfterExit=true 53 | ExecStartPre=/usr/sbin/losetup -f ${SWAPFILE} 54 | ExecStart=/usr/bin/sh -c "/sbin/swapon $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)" 55 | ExecStop=/usr/bin/sh -c "/sbin/swapoff $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)" 56 | ExecStopPost=/usr/bin/sh -c "/usr/sbin/losetup -d $(/usr/sbin/losetup -j ${SWAPFILE} | /usr/bin/cut -d : -f 1)" 57 | 58 | [Install] 59 | WantedBy=multi-user.target 60 | - name: cadvisor.service 61 | runtime: true 62 | command: start 63 | content: | 64 | [Unit] 65 | Description=Analyzes resource usage and performance characteristics of running containers. 
66 | After=docker.service 67 | Requires=docker.service 68 | 69 | [Service] 70 | Restart=always 71 | ExecStartPre=/usr/bin/docker pull google/cadvisor:latest 72 | ExecStartPre=-/bin/bash -c " \ 73 | docker inspect cadvisor >/dev/null 2>&1 \ 74 | && docker rm -f cadvisor || true" 75 | ExecStart=/usr/bin/docker run --volume=/var/run:/var/run:rw --volume=/sys/fs/cgroup/:/sys/fs/cgroup:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --name=cadvisor google/cadvisor:latest 76 | ExecStop=/usr/bin/docker rm -f cadvisor 77 | - name: paz-dnsmasq.service 78 | runtime: true 79 | command: start 80 | content: | 81 | [Unit] 82 | Description=*.paz traffic will go to the private_ipv4 addr 83 | After=docker.service 84 | Requires=docker.service 85 | 86 | After=etcd2.service 87 | Requires=etcd2.service 88 | After=fleet.service 89 | Requires=fleet.service 90 | 91 | [Service] 92 | Restart=always 93 | ExecStartPre=/usr/bin/docker pull tomgco/dnsmasq-catch:latest 94 | ExecStartPre=-/bin/bash -c " \ 95 | docker inspect paz-dnsmasq >/dev/null 2>&1 \ 96 | && docker rm -f paz-dnsmasq || true" 97 | ExecStart=/usr/bin/docker run -p $private_ipv4:53:53/udp --privileged --name=paz-dnsmasq tomgco/dnsmasq-catch paz $private_ipv4 98 | ExecStop=/usr/bin/docker rm -f paz-dnsmasq 99 | - name: paz-haproxy.service 100 | runtime: true 101 | command: start 102 | content: | 103 | [Unit] 104 | Description=paz HAProxy instance that enables service discovery. 105 | After=docker.service 106 | Requires=docker.service 107 | 108 | After=etcd2.service 109 | Requires=etcd2.service 110 | After=fleet.service 111 | Requires=fleet.service 112 | 113 | [Service] 114 | User=core 115 | Restart=always 116 | RestartSec=5s 117 | ExecStartPre=/usr/bin/docker pull quay.io/yldio/paz-haproxy:latest 118 | ExecStartPre=-/bin/bash -c " \ 119 | docker inspect paz-haproxy >/dev/null 2>&1 \ 120 | && docker rm -f paz-haproxy || true" 121 | ExecStart=/usr/bin/docker run -p 80:80 -p 1936:1936 -e ETCD=$private_ipv4:4001 --name paz-haproxy quay.io/yldio/paz-haproxy 122 | ExecStop=/usr/bin/docker rm -f paz-haproxy 123 | TimeoutStartSec=20m 124 | - name: paz-pubkey-watcher.service 125 | runtime: true 126 | command: start 127 | content: | 128 | [Unit] 129 | Description=Watch etcd for scheduler public key changes and update authorized_hosts. 
130 | 131 | After=etcd2.service 132 | Requires=etcd2.service 133 | After=fleet.service 134 | Requires=fleet.service 135 | 136 | [Service] 137 | User=core 138 | Restart=always 139 | ExecStartPre=/home/core/bin/paz-pubkey-watcher.sh once 140 | ExecStart=/home/core/bin/paz-pubkey-watcher.sh 141 | 142 | write_files: 143 | - path: /etc/paz-environment 144 | permissions: 0644 145 | content: | 146 | PAZ_PLATFORM=digitalocean 147 | PAZ_DOMAIN= 148 | PAZ_DNSIMPLE_APIKEY= 149 | PAZ_DNSIMPLE_EMAIL= 150 | - path: /home/core/bin/paz-pubkey-watcher.sh 151 | owner: core 152 | permissions: 0754 153 | content: | 154 | #!/bin/bash 155 | set -e 156 | if [[ "$1" == "once" ]]; then 157 | FN=`mktemp /tmp/paz-pubkey.XXXX` 158 | until etcdctl get /paz/config/scheduler/_pubkey 2>/dev/null > $FN.tmp; do sleep 2; done && base64 -d < $FN.tmp > $FN && /usr/bin/update-ssh-keys -u core -a paz-scheduler $FN 159 | rm $FN $FN.tmp 160 | else 161 | while :; do 162 | FN=`mktemp /tmp/paz-pubkey.XXXX` 163 | etcdctl watch /paz/config/scheduler/_pubkey | base64 -d > $FN && /usr/bin/update-ssh-keys -u core -a paz-scheduler $FN; 164 | rm $FN 165 | done; 166 | fi 167 | - path: /etc/sysctl.d/swap.conf 168 | permissions: 0644 169 | owner: root 170 | content: | 171 | vm.swappiness=10 172 | vm.vfs_cache_pressure=50 173 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![Gitter chat](https://badges.gitter.im/paz-sh/paz.png)](https://gitter.im/paz-sh/paz) 2 | 3 | Paz 4 | === 5 | _Continuous deployment production environments, built on Docker, CoreOS, etcd and fleet._ 6 | 7 | **THIS PROJECT IS INACTIVE** 8 | 9 | Paz is an in-house service platform with a PaaS-like workflow. 10 | 11 | Paz's documentation can be found [here](http://paz.readme.io). 12 | 13 | ![Screenshot](https://raw.githubusercontent.com/yldio/paz/206283f9f2b0c21bc4abf3a1f3926bd5e0f0a962/docs/images/Screen%20Shot%202014-11-22%20at%2016.39.07.png) 14 | 15 | ## What is Paz? 16 | 17 | Paz is... 18 | * Like your own private PaaS that you can host anywhere 19 | * Free 20 | * Open-source 21 | * Simple 22 | * A web front-end to CoreOS' Fleet with a PaaS-like workflow 23 | * Like a clustered/multi-host Dokku 24 | * Alpha software 25 | * Written in Node.js 26 | 27 | Paz is not... 28 | * A hosted service 29 | * A complete, enterprise-ready orchestration solution 30 | 31 | ## Features 32 | * Beautiful web UI 33 | * Run anywhere (Vagrant, public cloud or bare metal) 34 | * No special code required in your services 35 | - i.e. it will run any containerised application unmodified 36 | * Built for Continuous Deployment 37 | * Zero-downtime deployments 38 | * Service discovery 39 | * Same workflow from dev to production 40 | * Easy environments 41 | 42 | ## Components 43 | * Web front-end - A beautiful UI for configuring and monitoring your services. 44 | * Service directory - A catalog of your services and their configuration. 45 | * Scheduler - Deploys services onto the platform. 46 | * Orchestrator - REST API used by the web front-end; presents a unified subset of functionality from Scheduler, Service Directory, Fleet and Etcd. 47 | * Centralised monitoring and logging. 48 | 49 | ### Service Directory 50 | This is a database of all your services and their configuration (e.g. environment variables, data volumes, port mappings and the number of instances to launch). 
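As a concrete example, this is the service document that `test/integration.sh` registers for its demo service (reformatted for readability here; the full set of supported fields is defined by the service directory itself):

```
{
  "name": "demo-api",
  "description": "Very simple HTTP Hello World server",
  "dockerRepository": "quay.io/lukebond/demo-api",
  "numInstances": 3,
  "publicFacing": false
}
```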
Ultimately this information will be reduced to a set of systemd unit files (by the scheduler) to be submitted to Fleet for running on the cluster. 51 | The service directory is a Node.js API backed by a LevelDB database. 52 | 53 | ### Scheduler 54 | This service receives HTTP POST commands to deploy services that are defined in the service directory. Using the service data from the directory it will render unit files and run them on the CoreOS cluster using Fleet. A history of deployments and associated config is also available from the scheduler. 55 | 56 | For each service the scheduler will deploy a container for the service and an announce sidekick container. 57 | 58 | The scheduler is a Node.js API backed by a LevelDB database and uses Fleet to launch services. 59 | 60 | ### Orchestrator 61 | This is a service that ties all of the other services together, providing a single access point for the front-end to interface with. It also offers a web socket endpoint for realtime updates to the web front-end. 62 | 63 | The orchestrator is a Node.js API server that communicates with Etcd, Fleet, the scheduler and service directory. 64 | 65 | ### Web Front-End 66 | A beautiful and easy-to-use web UI for managing your services and observing the health of your cluster. Built in Ember.js. 67 | 68 | ### HAProxy 69 | Paz uses Confd to dynamically configure HAProxy based on service availability information declared in Etcd. HAProxy is configured to route external and internal requests to the correct host for the desired service. 70 | 71 | ### Monitoring and Logging 72 | Currently cAdvisor is used for monitoring, and there is not yet any centralised logging. Monitoring and logging are high-priority features on the roadmap. 73 | 74 | ## Installation 75 | 76 | Paz's Docker repositories are hosted at Quay.io, but they are public so you don't need any credentials. 77 | 78 | You will need to install `fleetctl` and `etcdctl`. On OS X you can install both with brew: 79 | ``` 80 | $ brew install etcdctl fleetctl 81 | ``` 82 | 83 | ### Vagrant 84 | 85 | Clone this repository and run the following from its root directory: 86 | 87 | ``` 88 | $ ./scripts/install-vagrant.sh 89 | ``` 90 | 91 | This will bring up a three-node CoreOS Vagrant cluster and install Paz on it. Note that it may take 10 minutes or more to complete. 92 | 93 | For extra debug output, run with the `DEBUG=1` environment variable set. 94 | 95 | If you already have a Vagrant cluster running and want to reinstall the units, use: 96 | 97 | ``` 98 | $ ./scripts/reinstall-units-vagrant.sh 99 | ``` 100 | 101 | To interact with the units in the cluster via Fleet, just specify the URL to Etcd on one of your hosts as a parameter to Fleet, e.g.: 102 | 103 | ``` 104 | $ fleetctl -strict-host-key-checking=false -endpoint=http://172.17.9.101:4001 list-units 105 | ``` 106 | 107 | You can also SSH into one of the VMs and run `fleetctl` from there: 108 | 109 | ``` 110 | $ cd paz-vagrant 111 | $ vagrant ssh core-01 112 | ``` 113 | 114 | ...however bear in mind that Fleet needs to SSH into the other VMs in order to perform operations that involve calling down to systemd (e.g. `journal`), and for this you need to have SSH'd into the VM running the unit in question. For this reason you may find it simpler (albeit more verbose) to run `fleetctl` from outside the CoreOS VMs. 115 | 116 | ### DigitalOcean 117 | 118 | Paz has been tested on DigitalOcean but there isn't currently an install script for it. 
119 | 120 | In short, you need to create your own cluster and then install the Paz units on it. 121 | 122 | The first step is to spin up a CoreOS cluster on DigitalOcean with Paz's cloud-config userdata, and then we'll install Paz on it. 123 | 124 | 1. Click the "Create Droplet" button in the DigitalOcean console. 125 | 2. Give your droplet a name and choose your droplet size and region. 126 | 3. Tick "Private Networking" and "Enable User Data". 127 | 4. Paste the contents of the `digitalocean/user-data` file in the `yldio/paz` repository into the userdata text area. 128 | 5. Go to `https://discovery.etcd.io/new` and copy the URL that it prints in the browser, pasting it into the userdata text area in place of the one that is already there. 129 | 6. In the `write_files` section, in the section for writing the `/etc/paz-environment` file, edit the `PAZ_DOMAIN`, `PAZ_DNSIMPLE_APIKEY` and `PAZ_DNSIMPLE_EMAIL` fields, putting in your DNSimple-managed domain name, DNSimple API key and DNSimple account's email address, respectively. 130 | - e.g. "lukeb0nd.com", "ABcdE1fGHi2jk3LmnOP" and "me@blah.com" 131 | 7. Before submitting, copy this userdata to a text file or editor, because we'll need to use it again unchanged. 132 | 8. Select the CoreOS version you want to install (e.g. latest stable or beta should be fine). 133 | 9. Add the SSH keys that will be added to the box (under the `core` user). 134 | 10. Click "Create Droplet". 135 | 11. Repeat for the number of nodes you want in the cluster (e.g. 3), using the exact same userdata file (i.e. don't generate a new discovery token etc.). 136 | 12. Once all droplets have booted (test by trying to SSH into each one, run `docker ps` and observe that `paz-dnsmasq`, `cadvisor` and `paz-haproxy` are all running on each box), you may proceed. 137 | 13. Install Paz: 138 | ``` 139 | $ ssh-add ~/.ssh/id_rsa 140 | $ FLEETCTL_TUNNEL=<IP> fleetctl -strict-host-key-checking=false start unitfiles/1/* 141 | ``` 142 | ...where `<IP>` is an IP address of any node in your cluster. 143 | You can wait for all units to be active/running like so: 144 | ``` 145 | $ FLEETCTL_TUNNEL=<IP> watch -n 5 fleetctl -strict-host-key-checking=false list-units 146 | ``` 147 | Once they're up you can install the final services: 148 | ``` 149 | $ FLEETCTL_TUNNEL=<IP> fleetctl -strict-host-key-checking=false start unitfiles/2/* 150 | ``` 151 | ### Bare Metal 152 | 153 | Paz works fine on a bare-metal install, but there is no install script available for it yet. 154 | 155 | You need to create your cluster, then add the contents of bare_metal/user-data to your cloud config, and finally submit the unit files. 156 | 157 | 1. Create your cluster. 158 | 2. Paste the contents of bare_metal/user-data into your cloud-config file. Be sure to alter the networking information to match your setup. 159 | 3. Go to `https://discovery.etcd.io/new` and copy the URL that it prints in the browser, pasting it into your cloud-config file in place of the one that is already there. 160 | 4. Install Paz: 161 | ``` 162 | $ ssh-add ~/.ssh/id_rsa 163 | $ FLEETCTL_TUNNEL=<IP> fleetctl -strict-host-key-checking=false start unitfiles/1/* 164 | ``` 165 | ...where `<IP>` is an IP address of any node in your cluster. 
166 | You can wait for all units to be active/running like so: 167 | ``` 168 | $ FLEETCTL_TUNNEL=<IP> watch -n 5 fleetctl -strict-host-key-checking=false list-units 169 | ``` 170 | Once they're up you can install the final services: 171 | ``` 172 | $ FLEETCTL_TUNNEL=<IP> fleetctl -strict-host-key-checking=false start unitfiles/2/* 173 | ``` 174 | ## Tests 175 | 176 | There is an integration test that brings up a CoreOS Vagrant cluster, installs Paz and then runs a contrived service on it and verifies that it works: 177 | 178 | ``` 179 | $ cd test 180 | $ ./integration.sh 181 | ``` 182 | 183 | Each paz repository (service directory, orchestrator, scheduler) has tests that run on http://paz-ci.yld.io:8080 (in StriderCD), triggered by a GitHub webhook. 184 | 185 | ## Paz Repositories 186 | 187 | The various components of Paz are spread across several repositories: 188 | * [Orchestrator](https://github.com/yldio/paz-orchestrator) 189 | * [Service Directory](https://github.com/yldio/paz-service-directory) 190 | * [Scheduler](https://github.com/yldio/paz-scheduler) 191 | * [Web](https://github.com/yldio/paz-web) 192 | * [HAProxy](https://github.com/yldio/paz-haproxy) 193 | --------------------------------------------------------------------------------
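For a concrete picture of the deployment flow described above, here is a minimal manual sketch of what `test/integration.sh` automates. It assumes a running cluster with the runlevel 1 and 2 units up and announced, with `<IP>` standing in for any node's address; the endpoints and documents are the same ones the integration test uses:

```
#!/bin/bash
# Resolve the orchestrator and scheduler endpoints from their etcd announce keys.
ORCHESTRATOR_URL=$(etcdctl --peers=<IP>:2379 get /paz/services/paz-orchestrator)
SCHEDULER_URL=$(etcdctl --peers=<IP>:2379 get /paz/services/paz-scheduler)

# Register the service in the service directory (via the orchestrator).
curl -sf -XPOST -H "Content-Type: application/json" \
  -d '{"name":"demo-api","description":"Very simple HTTP Hello World server","dockerRepository":"quay.io/lukebond/demo-api","numInstances":3,"publicFacing":false}' \
  "$ORCHESTRATOR_URL/services"

# Trigger a deployment through the scheduler's webhook endpoint.
curl -sf -XPOST -H "Content-Type: application/json" \
  -d "{\"serviceName\":\"demo-api\",\"dockerRepository\":\"lukebond/demo-api\",\"pushedAt\":$(date +%s)}" \
  "$SCHEDULER_URL/hooks/deploy"

# Each instance announces itself in etcd once it is up.
etcdctl --peers=<IP>:2379 get /paz/services/demo-api/1/1
```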