├── .github
│   └── workflows
│       └── autoblack.yml
├── .gitignore
├── README.md
├── ciclic
├── maintenance
│   ├── chat_notify.sh
│   ├── compress_logs.sh
│   ├── finish_install.sh
│   └── self_upgrade.sh
├── models.py
├── requirements-frozen.txt
├── requirements.txt
├── results
│   ├── logs
│   │   └── .gitkeep
│   └── summary
│       └── empty.png
├── run.py
├── schedule.py
├── static
│   ├── css
│   │   ├── bulma-0-7-1.min.css
│   │   └── style.css
│   └── js
│       ├── app.js
│       ├── fontawesome-all-5-1-0.js
│       ├── jquery-3.3.1.min.js
│       ├── reconnecting-websocket.min.js
│       ├── vue-2-5-17.min.js
│       └── vue-timeago.min.js
└── templates
    ├── app.html
    ├── apps.html
    ├── base.html
    ├── index.html
    ├── job.html
    └── menu.html

/.github/workflows/autoblack.yml:
--------------------------------------------------------------------------------
 1 | name: Check / auto apply Black
 2 | on:
 3 |   push:
 4 |     branches:
 5 |       - main
 6 | 
 7 | jobs:
 8 |   black:
 9 |     name: Check / auto apply black
10 |     runs-on: ubuntu-latest
11 |     steps:
12 |       - uses: actions/checkout@v4
13 |       - name: Check files using the black formatter
14 |         uses: psf/black@stable
15 |         id: black
16 |         with:
17 |           options: "."
18 |         continue-on-error: true
19 |       - shell: pwsh
20 |         id: check_files_changed
21 |         run: |
22 |           # Diff HEAD with the previous commit
23 |           $diff = git diff
24 |           $HasDiff = $diff.Length -gt 0
25 |           Write-Host "::set-output name=files_changed::$HasDiff"
26 |       - name: Create Pull Request
27 |         if: steps.check_files_changed.outputs.files_changed == 'true'
28 |         uses: peter-evans/create-pull-request@v6
29 |         with:
30 |           token: ${{ secrets.GITHUB_TOKEN }}
31 |           title: "Format Python code with Black"
32 |           commit-message: ":art: Format Python code with Black"
33 |           body: |
34 |             This pull request uses the [psf/black](https://github.com/psf/black) formatter.
35 |           base: ${{ github.head_ref }} # Creates pull request onto pull request or commit branch
36 |           branch: actions/black
37 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
 1 | ve3
 2 | db.sqlite
 3 | __pycache__
 4 | .pythonz
 5 | results/*
 6 | .admin_token
 7 | package_check
 8 | config.py
 9 | venv/
10 | .lxd/
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
 1 | 
 2 | # Installation
 3 | 
 4 | It's recommended to use: https://github.com/YunoHost-Apps/yunorunner_ynh
 5 | 
 6 | But you can also install everything manually:
 7 | 
 8 | ```bash
 9 | cd /var/www/
10 | git clone https://github.com/YunoHost/yunorunner
11 | cd yunorunner
12 | python3 -m venv venv
13 | venv/bin/pip install -r requirements.txt
14 | ```
15 | 
16 | In either case, you probably want to run the `finish_install.sh` script in `maintenance/`.
17 | 
18 | The configuration happens in `config.py` and typically looks like:
19 | 
20 | ```
21 | BASE_URL = "https://ci-apps-foobar.yunohost.org/ci"
22 | PORT = 34567
23 | PATH_TO_ANALYZER = "/var/www/yunorunner/analyze_yunohost_app.sh"
24 | MONITOR_APPS_LIST = True
25 | MONITOR_GIT = True
26 | MONITOR_ONLY_GOOD_QUALITY_APPS = True
27 | MONTHLY_JOBS = True
28 | WORKER_COUNT = 1
29 | YNH_BRANCH = "unstable"
30 | DIST = "bullseye"
31 | STORAGE_PATH = "/data/yunorunner_data"
32 | ```
33 | 
34 | ### CLI tool
35 | 
36 | The CLI tool is called "ciclic" because my (Bram ;)) humour is legendary.
37 | 
38 | Basic tool signature:
39 | 
40 | ```
41 | $ ve3/bin/python ciclic
42 | usage: ciclic [-h] {add,list,delete,stop,restart} ...
43 | ```
44 | 
45 | This tool works by sending an HTTP request to the CI instance of your choice.
46 | By default the domain is "http://localhost:4242"; you can modify it for EVERY
47 | command by adding "-d https://some.other.domain.com/path/".
48 | 
49 | ##### Usage
50 | 
51 | Adding a new job:
52 | 
53 | ```
54 | $ ve3/bin/python ciclic add "some_app (Official)" "https://github.com/yunohost-apps/some_app_ynh" -d "https://ci-stable.yunohost.org/ci/"
55 | ```
56 | 
57 | Listing jobs:
58 | 
59 | ```
60 | $ ve3/bin/python ciclic list
61 | 31 - mumbleserver_ynh 15 [done]
62 | 30 - mumbleserver_ynh 14 [done]
63 | 29 - mumbleserver_ynh 13 [done]
64 | 28 - mumbleserver_ynh 12 [done]
65 | 27 - mumbleserver_ynh 11 [done]
66 | 26 - mumbleserver_ynh 10 [done]
67 | 25 - mumbleserver_ynh 9 [done]
68 | 24 - mumbleserver_ynh 8 [done]
69 | ...
70 | ```
71 | 
72 | **IMPORTANT**: the first number is the ID of the job; it is used in ALL the other commands.
73 | 
74 | Deleting a job:
75 | 
76 | ```
77 | $ # $job_id is an id from "ciclic list" or the one in the URL of a job
78 | $ ve3/bin/python ciclic delete $job_id
79 | ```
80 | 
81 | Stopping a job:
82 | 
83 | ```
84 | $ # $job_id is an id from "ciclic list" or the one in the URL of a job
85 | $ ve3/bin/python ciclic stop $job_id
86 | ```
87 | 
88 | Restarting a job:
89 | 
90 | ```
91 | $ # $job_id is an id from "ciclic list" or the one in the URL of a job
92 | $ ve3/bin/python ciclic restart $job_id
93 | ```
94 | 
95 | Note that delete/restart will stop the job first to free the worker.
96 | 
97 | You will see a resulting log on the server for each action.
98 | 
99 | Listing apps:
100 | 
101 | ```
102 | $ ve3/bin/python ciclic app-list
103 | ```
104 | 
105 | # Deployment
106 | 
107 | You need to put this program behind an nginx reverse proxy AND add the magic
108 | lines that allow websocket connections (plain proxying doesn't forward the
109 | connection upgrade), all the way through every proxy in the chain (if you
110 | deploy behind bearnaise's incus or something similar).
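
A minimal sketch of what those magic lines look like for a single nginx sitting
in front of yunorunner, assuming the `PORT = 34567` and `/ci` path from the
sample `config.py` above (names and paths are illustrative, adapt them to your
setup):

```nginx
location /ci/ {
    proxy_pass http://127.0.0.1:34567/;

    # The "magic lines": forward the websocket connection upgrade.
    # Without them the /index-ws, /job-ws and /apps-ws endpoints won't work.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```

If there are several proxies in the chain, the websocket directives have to be
repeated at every hop.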
111 | 112 | # Licence 113 | 114 | AGPL v3+ 115 | 116 | Copyright YunoHost 2018 (you can find the authors in the commits) 117 | -------------------------------------------------------------------------------- /ciclic: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | 3 | import os 4 | import sys 5 | import argh 6 | import requests 7 | 8 | from argh.decorators import named 9 | 10 | try: 11 | from config import * 12 | except ImportError: 13 | PORT="4242" 14 | 15 | DOMAIN = "localhost:" + str(PORT) 16 | 17 | 18 | def request_api(path, domain, verb, data={}, check_return_code=True): 19 | assert verb in ("get", "post", "put", "delete") 20 | 21 | https = False 22 | if domain.split(":")[0] not in ("localhost", "127.0.0.1", "0.0.0.0"): 23 | https = True 24 | 25 | response = getattr(requests, verb)( 26 | "http%s://%s/api/%s" % ("s" if https else "", domain, path), 27 | headers={"X-Token": open(".admin_token", "r").read().strip()}, 28 | json=data, 29 | ) 30 | 31 | if response.status_code == 403: 32 | print(f"Error: access refused because '{response.json()['status']}'") 33 | sys.exit(1) 34 | 35 | if check_return_code: 36 | # TODO: real error message 37 | assert response.status_code == 200, response.content 38 | 39 | return response 40 | 41 | 42 | def shell(): 43 | import ipdb; ipdb.set_trace() 44 | 45 | 46 | 47 | def add(name, url_or_path, domain=DOMAIN): 48 | request_api( 49 | path="job", 50 | verb="post", 51 | domain=domain, 52 | data={ 53 | "name": name, 54 | "url_or_path": url_or_path, 55 | }, 56 | ) 57 | 58 | 59 | @named("list") 60 | def list_(all=False, domain=DOMAIN): 61 | response = request_api( 62 | path="job", 63 | verb="get", 64 | domain=domain, 65 | data={ 66 | "all": all, 67 | }, 68 | ) 69 | 70 | for i in response.json(): 71 | print(f"{i['id']:4d} - {i['name']} [{i['state']}]") 72 | 73 | 74 | def app_list(all=False, domain=DOMAIN): 75 | response = request_api( 76 | path="app", 77 | verb="get", 78 | domain=domain, 79 | ) 80 | 81 | for i in response.json(): 82 | print(f"{i['name']} - {i['url']}") 83 | 84 | 85 | def delete(job_id, domain=DOMAIN): 86 | response = request_api( 87 | path=f"job/{job_id}", 88 | verb="delete", 89 | domain=domain, 90 | check_return_code=False 91 | ) 92 | 93 | if response.status_code == 404: 94 | print(f"Error: no job with the id '{job_id}'") 95 | sys.exit(1) 96 | 97 | assert response.status_code == 200, response.content 98 | 99 | 100 | def stop(job_id, domain=DOMAIN): 101 | response = request_api( 102 | path=f"job/{job_id}/stop", 103 | verb="post", 104 | domain=domain, 105 | check_return_code=False 106 | ) 107 | 108 | if response.status_code == 404: 109 | print(f"Error: no job with the id '{job_id}'") 110 | sys.exit(1) 111 | 112 | assert response.status_code == 200, response.content 113 | 114 | 115 | def restart(job_id, domain=DOMAIN): 116 | response = request_api( 117 | path=f"job/{job_id}/restart", 118 | verb="post", 119 | domain=domain, 120 | check_return_code=False 121 | ) 122 | 123 | if response.status_code == 404: 124 | print(f"Error: no job with the id '{job_id}'") 125 | sys.exit(1) 126 | 127 | assert response.status_code == 200, response.content 128 | 129 | 130 | if __name__ == '__main__': 131 | argh.dispatch_commands([add, list_, delete, stop, restart, app_list, shell]) 132 | -------------------------------------------------------------------------------- /maintenance/chat_notify.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 4 | # 
To install before using:
 5 | #
 6 | # MCHOME="/opt/matrix-commander"
 7 | # MCARGS="-c $MCHOME/credentials.json --store $MCHOME/store"
 8 | # mkdir -p "$MCHOME/venv"
 9 | # python3 -m venv "$MCHOME/venv"
10 | # source "$MCHOME/venv/bin/activate"
11 | # pip3 install matrix-commander
12 | # chmod 700 "$MCHOME"
13 | # matrix-commander $MCARGS --login password   # NB: here 'password' is literally the authentication method, you will be prompted for the actual password
14 | # matrix-commander $MCARGS --room-join '#yunohost-apps:matrix.org'
15 | #
16 | # groupadd matrixcommander
17 | # usermod -a -G matrixcommander yunorunner
18 | # chgrp -R matrixcommander $MCHOME
19 | # chmod -R 770 $MCHOME
20 | #
21 | 
22 | MCHOME="/opt/matrix-commander/"
23 | MCARGS="-c $MCHOME/credentials.json --store $MCHOME/store"
24 | timeout 10 "$MCHOME/venv/bin/matrix-commander" $MCARGS -m "$@" --room 'yunohost-apps' --markdown
--------------------------------------------------------------------------------
/maintenance/compress_logs.sh:
--------------------------------------------------------------------------------
 1 | #!/usr/bin/env bash
 2 | 
 3 | SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
 4 | 
 5 | LOGS_DIR=$(dirname "$SCRIPT_DIR")/results/logs
 6 | 
 7 | pushd "$LOGS_DIR" || exit 1
 8 | 
 9 | # Compress log files older than one week
10 | find . -mtime +7 -name '*.log' -print0 \
11 |     | xargs -0 -n 1 -P 4 gzip
--------------------------------------------------------------------------------
/maintenance/finish_install.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | 
 3 | if (( $# < 1 ))
 4 | then
 5 |     cat << EOF
 6 | Usage: ./finish_install.sh [auto|manual] [cluster]
 7 | 
 8 | 1st argument is the CI type (scheduling strategy):
 9 |     - auto means jobs will automatically be scheduled by yunorunner from apps.json etc.
10 |     - manual means jobs will be scheduled manually (e.g. via webhooks or yunorunner ciclic)
11 | 
12 | 2nd argument builds the first node of an incus cluster:
13 |     - an lxd cluster will be created with the current server
14 |     - some.domain.tld will be the cluster hostname, and SecretAdminPasswurzd! the trust password to join the cluster
15 | 
16 | EOF
17 |     exit 1
18 | fi
19 | 
20 | YUNORUNNER_HOME="/var/www/yunorunner"
21 | if [ $(pwd) != "$YUNORUNNER_HOME" ]
22 | then
23 |     echo "This script should be run from $YUNORUNNER_HOME"
24 |     exit 1
25 | fi
26 | 
27 | ci_type=$1
28 | lxd_cluster=$2
29 | 
30 | # User which executes the CI software.
31 | ci_user=yunorunner
32 | 
33 | echo_bold () {
34 |     echo -e "\e[1m$1\e[0m"
35 | }
36 | 
37 | # -----------------------------------------------------------------
38 | 
39 | 
40 | function tweak_yunohost() {
41 | 
42 |     #echo_bold "> Setting up Yunohost..."
43 |     #local DIST="bullseye"
44 |     #local INSTALL_SCRIPT="https://install.yunohost.org/$DIST"
45 |     #curl $INSTALL_SCRIPT | bash -s -- -a
46 | 
47 |     #echo_bold "> Running yunohost postinstall"
48 |     #yunohost tools postinstall --domain $domain --password $yuno_pwd
49 | 
50 |     # What is it used for :| ...
51 | #echo_bold "> Create Yunohost CI user" 52 | #local ynh_ci_user=ynhci 53 | #yunohost user create --firstname "$ynh_ci_user" --domain "$domain" --lastname "$ynh_ci_user" "$ynh_ci_user" --password $yuno_pwd 54 | 55 | # Idk why this is needed but wokay I guess >_> 56 | echo -e "\n127.0.0.1 $domain #CI_APP" >> /etc/hosts 57 | 58 | echo_bold "> Disabling unecessary services to save up RAM" 59 | for SERVICE in mysql php7.4-fpm metronome rspamd dovecot postfix redis-server postsrsd yunohost-api avahi-daemon 60 | do 61 | systemctl stop $SERVICE 62 | systemctl disable $SERVICE --quiet 63 | done 64 | } 65 | 66 | function tweak_yunorunner() { 67 | echo_bold "> Tweaking YunoRunner..." 68 | 69 | 70 | #if ! yunohost app list --output-as json --quiet | jq -e '.apps[] | select(.id == "yunorunner")' >/dev/null 71 | #then 72 | # yunohost app install --force https://github.com/YunoHost-Apps/yunorunner_ynh -a "domain=$domain&path=/$ci_path" 73 | #fi 74 | domain=$(yunohost app setting yunorunner domain) 75 | ci_path=$(yunohost app setting yunorunner path) 76 | port=$(yunohost app setting yunorunner port) 77 | 78 | # Stop YunoRunner 79 | # to be started manually by the admin after the CI_package_check install 80 | # finishes 81 | systemctl stop $ci_user 82 | 83 | # Remove the original database, in order to rebuilt it with the new config. 84 | rm -f $YUNORUNNER_HOME/db.sqlite 85 | 86 | cat >$YUNORUNNER_HOME/config.py <>$YUNORUNNER_HOME/config.py <>$YUNORUNNER_HOME/config.py </dev/null 122 | then 123 | yunohost app install --force https://github.com/YunoHost-Apps/lxd_ynh 124 | fi 125 | 126 | mkdir .lxd 127 | pushd .lxd 128 | 129 | echo_bold "> Configuring lxd..." 130 | 131 | if [ "$lxd_cluster" == "cluster" ] 132 | then 133 | local free_space=$(df --output=avail / | sed 1d) 134 | local btrfs_size=$(( $free_space * 90 / 100 / 1024 / 1024 )) 135 | local lxc_network=$((1 + $RANDOM % 254)) 136 | 137 | yunohost firewall allow TCP 8443 138 | cat >./preseed.conf < Configuring the CI..." 205 | 206 | # Cron tasks 207 | cat >> "/etc/cron.d/yunorunner" << EOF 208 | # self-upgrade every night 209 | 0 3 * * * root "$YUNORUNNER_HOME/maintenance/self_upgrade.sh" >> "$YUNORUNNER_HOME/maintenance/self_upgrade.log" 2>&1 210 | EOF 211 | } 212 | 213 | # ========================= 214 | # Main stuff 215 | # ========================= 216 | 217 | #install_dependencies 218 | 219 | [ -e /usr/bin/yunohost ] || { echo "YunoHost is not installed"; exit; } 220 | [ -e /etc/yunohost/apps/yunorunner ] || { echo "Yunorunner is not installed on YunoHost"; exit; } 221 | 222 | tweak_yunohost 223 | tweak_yunorunner 224 | git clone https://github.com/YunoHost/package_check "./package_check" 225 | setup_lxd 226 | add_cron_jobs 227 | 228 | # Add permission to the user for the entire yunorunner home because it'll be the one running the tests (as a non-root user) 229 | chown -R $ci_user $YUNORUNNER_HOME 230 | 231 | echo "Done!" 232 | echo " " 233 | echo "N.B. 
: If you want to enable Matrix notification, you should look at "
234 | echo "the instructions inside maintenance/chat_notify.sh to deploy matrix-commander"
235 | echo ""
236 | echo "You may also want to tweak the 'config' file to run tests with a different branch / arch"
237 | echo ""
238 | echo "When you're ready to start the CI, run: systemctl restart $ci_user"
239 | 
--------------------------------------------------------------------------------
/maintenance/self_upgrade.sh:
--------------------------------------------------------------------------------
 1 | #!/bin/bash
 2 | 
 3 | # This script is designed to be used in a cron file
 4 | 
 5 | #=================================================
 6 | # Grab the script directory
 7 | #=================================================
 8 | 
 9 | if [ "${0:0:1}" == "/" ]; then script_dir="$(dirname "$0")"; else script_dir="$(echo $PWD/$(dirname "$0" | cut -d '.' -f2) | sed 's@/$@@')"; fi
10 | 
11 | cd $script_dir/..
12 | 
13 | # We only self-upgrade if we're in a git repo on the main or master branch
14 | # (which should correspond to production contexts)
15 | [[ -d ".git" ]] || exit
16 | 
17 | [[ $(git rev-parse --abbrev-ref HEAD) == "master" ]] \
18 |     || [[ $(git rev-parse --abbrev-ref HEAD) == "main" ]] \
19 |     || exit
20 | 
21 | git fetch origin --quiet
22 | 
23 | # If already up to date, don't do anything else
24 | [[ $(git rev-parse HEAD) == $(git rev-parse origin/main) ]] && exit
25 | 
26 | git reset --hard origin/main --quiet
27 | 
--------------------------------------------------------------------------------
/models.py:
--------------------------------------------------------------------------------
 1 | import peewee
 2 | 
 3 | db = peewee.SqliteDatabase("db.sqlite")
 4 | 
 5 | 
 6 | class Repo(peewee.Model):
 7 |     name = peewee.CharField()  # TODO make this uniq/index
 8 |     url = peewee.CharField()
 9 |     revision = peewee.CharField(null=True)
10 | 
11 |     state = peewee.CharField(
12 |         choices=(
13 |             ("working", "Working"),
14 |             ("other_than_working", "Other than working"),
15 |         ),
16 |         default="other_than_working",
17 |     )
18 | 
19 |     random_job_day = peewee.IntegerField(null=True)
20 | 
21 |     class Meta:
22 |         database = db
23 | 
24 | 
25 | class Job(peewee.Model):
26 |     name = peewee.CharField()
27 |     url_or_path = peewee.CharField()
28 | 
29 |     state = peewee.CharField(
30 |         choices=(
31 |             ("scheduled", "Scheduled"),
32 |             ("running", "Running"),
33 |             ("done", "Done"),
34 |             ("failure", "Failure"),
35 |             ("error", "Error"),
36 |             ("canceled", "Canceled"),
37 |         ),
38 |         default="scheduled",
39 |     )
40 | 
41 |     log = peewee.TextField(default="")
42 | 
43 |     created_time = peewee.DateTimeField(
44 |         constraints=[peewee.SQL("DEFAULT (datetime('now'))")]
45 |     )
46 |     started_time = peewee.DateTimeField(null=True)
47 |     end_time = peewee.DateTimeField(null=True)
48 | 
49 |     class Meta:
50 |         database = db
51 | 
52 | 
53 | class Worker(peewee.Model):
54 |     state = peewee.CharField(
55 |         choices=(
56 |             ("available", "Available"),
57 |             ("busy", "Busy"),
58 |         )
59 |     )
60 | 
61 |     class Meta:
62 |         database = db
63 | 
64 | 
65 | # peewee is a bit stupid and will crash if the table already exists
66 | for i in [Repo, Job, Worker]:
67 |     try:
68 |         i.create_table()
69 |     except Exception:
70 |         pass
71 | 
--------------------------------------------------------------------------------
/requirements-frozen.txt:
--------------------------------------------------------------------------------
 1 | aiofiles==0.7.0
 2 | aiohttp==3.10.11
 3 | aiosignal==1.3.1
 4 | argh==0.26.2
 5 | async-timeout==4.0.3
 6 | attrs==18.2.0
 7 | certifi==2023.7.22
 8 |
chardet==3.0.4 9 | charset-normalizer==2.0.6 10 | frozenlist==1.4.1 11 | httptools==0.1.0 12 | idna==2.8 13 | idna-ssl==1.1.0 14 | Jinja2==3.1.6 15 | MarkupSafe==2.0.1 16 | multidict==5.1.0 17 | peewee==3.14.4 18 | requests==2.31.0 19 | sanic==21.12.2 20 | sanic-jinja2==0.10.0 21 | sanic-routing==0.7.1 22 | typing-extensions==3.10.0.0 23 | ujson==5.4.0 24 | urllib3==1.26.18 25 | uvloop==0.14.0 26 | websockets==10.0 27 | yarl==1.3.0 28 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | sanic 2 | aiohttp 3 | aiofiles 4 | peewee 5 | sanic-jinja2 6 | websockets 7 | 8 | # cli 9 | argh 10 | requests -------------------------------------------------------------------------------- /results/logs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/YunoHost/yunorunner/39f899170fcd3978a84555d6abea91e3997f18f7/results/logs/.gitkeep -------------------------------------------------------------------------------- /results/summary/empty.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/YunoHost/yunorunner/39f899170fcd3978a84555d6abea91e3997f18f7/results/summary/empty.png -------------------------------------------------------------------------------- /run.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python3 2 | 3 | 4 | import os 5 | import sys 6 | import argh 7 | import random 8 | import logging 9 | import asyncio 10 | import traceback 11 | import itertools 12 | import tracemalloc 13 | import string 14 | import shutil 15 | 16 | import hmac 17 | import hashlib 18 | 19 | from datetime import datetime, date 20 | from collections import defaultdict 21 | from functools import wraps 22 | from concurrent.futures._base import CancelledError 23 | from asyncio import Task 24 | 25 | import json 26 | import aiohttp 27 | import aiofiles 28 | 29 | from websockets.exceptions import ConnectionClosed 30 | from websockets import WebSocketCommonProtocol 31 | 32 | from sanic import Sanic, response 33 | from sanic.exceptions import NotFound, WebsocketClosed 34 | from sanic.log import LOGGING_CONFIG_DEFAULTS 35 | 36 | from jinja2 import FileSystemLoader 37 | from sanic_jinja2 import SanicJinja2 38 | 39 | from peewee import fn 40 | from playhouse.shortcuts import model_to_dict 41 | 42 | from models import Repo, Job, db, Worker 43 | from schedule import always_relaunch, once_per_day 44 | 45 | # This is used by ciclic 46 | admin_token = "".join(random.choices(string.ascii_lowercase + string.digits, k=32)) 47 | open(".admin_token", "w").write(admin_token) 48 | 49 | try: 50 | asyncio_all_tasks = asyncio.all_tasks 51 | except AttributeError as e: 52 | asyncio_all_tasks = asyncio.Task.all_tasks 53 | 54 | LOGGING_CONFIG_DEFAULTS["loggers"] = { 55 | "task": { 56 | "level": "INFO", 57 | "handlers": ["task_console"], 58 | }, 59 | "api": { 60 | "level": "INFO", 61 | "handlers": ["api_console"], 62 | }, 63 | } 64 | 65 | LOGGING_CONFIG_DEFAULTS["handlers"] = { 66 | "api_console": { 67 | "class": "logging.StreamHandler", 68 | "formatter": "api", 69 | "stream": sys.stdout, 70 | }, 71 | "task_console": { 72 | "class": "logging.StreamHandler", 73 | "formatter": "background", 74 | "stream": sys.stdout, 75 | }, 76 | } 77 | 78 | LOGGING_CONFIG_DEFAULTS["formatters"] = { 79 | "background": { 80 | "format": 
"%(asctime)s [%(process)d] [BACKGROUND] [%(funcName)s] %(message)s", 81 | "datefmt": "[%Y-%m-%d %H:%M:%S %z]", 82 | "class": "logging.Formatter", 83 | }, 84 | "api": { 85 | "format": "%(asctime)s [%(process)d] [API] [%(funcName)s] %(message)s", 86 | "datefmt": "[%Y-%m-%d %H:%M:%S %z]", 87 | "class": "logging.Formatter", 88 | }, 89 | } 90 | 91 | 92 | def datetime_to_epoch_json_converter(o): 93 | if isinstance(o, datetime): 94 | return o.strftime("%s") 95 | 96 | 97 | # define a custom json dumps to convert datetime 98 | def my_json_dumps(o): 99 | return json.dumps(o, default=datetime_to_epoch_json_converter) 100 | 101 | 102 | task_logger = logging.getLogger("task") 103 | api_logger = logging.getLogger("api") 104 | 105 | app = Sanic(__name__, dumps=my_json_dumps) 106 | app.static("/static", "./static/") 107 | 108 | yunorunner_dir = os.path.abspath(os.path.dirname(__file__)) 109 | loader = FileSystemLoader(yunorunner_dir + "/templates", encoding="utf8") 110 | jinja = SanicJinja2(app, loader=loader) 111 | 112 | # to avoid conflict with vue.js 113 | jinja.env.block_start_string = "<%" 114 | jinja.env.block_end_string = "%>" 115 | jinja.env.variable_start_string = "<{" 116 | jinja.env.variable_end_string = "}>" 117 | jinja.env.comment_start_string = "<#" 118 | jinja.env.comment_end_string = "#>" 119 | 120 | APPS_LIST = "https://app.yunohost.org/default/v3/apps.json" 121 | 122 | subscriptions = defaultdict(list) 123 | 124 | # this will have the form: 125 | # jobs_in_memory_state = { 126 | # some_job_id: {"worker": some_worker_id, "task": some_aio_task}, 127 | # } 128 | jobs_in_memory_state = {} 129 | 130 | 131 | async def wait_closed(self): 132 | """ 133 | Wait until the connection is closed. 134 | 135 | This is identical to :attr:`closed`, except it can be awaited. 136 | 137 | This can make it easier to handle connection termination, regardless 138 | of its cause, in tasks that interact with the WebSocket connection. 
139 | 140 | """ 141 | await asyncio.shield(self.connection_lost_waiter) 142 | 143 | 144 | # this is a backport of websockets 7.0 which sanic doesn't support yet 145 | WebSocketCommonProtocol.wait_closed = wait_closed 146 | 147 | 148 | def reset_pending_jobs(): 149 | Job.update(state="scheduled", log="").where(Job.state == "running").execute() 150 | 151 | 152 | def reset_busy_workers(): 153 | # XXX when we'll have distant workers that might break those 154 | Worker.update(state="available").execute() 155 | 156 | 157 | def merge_jobs_on_startup(): 158 | task_logger.info(f"looks for jobs to merge on startup") 159 | 160 | query = Job.select().where(Job.state == "scheduled").order_by(Job.name, -Job.id) 161 | 162 | name_to_jobs = defaultdict(list) 163 | 164 | for job in query: 165 | name_to_jobs[job.name].append(job) 166 | 167 | for jobs in name_to_jobs.values(): 168 | # keep oldest job 169 | 170 | if jobs[:-1]: 171 | task_logger.info(f"Merging {jobs[0].name} jobs...") 172 | 173 | for to_delete in jobs[:-1]: 174 | to_delete.delete_instance() 175 | 176 | task_logger.info(f"* delete {to_delete.name} [{to_delete.id}]") 177 | 178 | 179 | def set_random_day_for_monthy_job(): 180 | for repo in Repo.select().where((Repo.random_job_day == None)): 181 | repo.random_job_day = random.randint(1, 28) 182 | task_logger.info( 183 | f"set random day for monthly job of repo '{repo.name}' at '{repo.random_job_day}'" 184 | ) 185 | repo.save() 186 | 187 | 188 | async def create_job(app_id, repo_url, job_comment=""): 189 | job_name = app_id 190 | if job_comment: 191 | job_name += f" ({job_comment})" 192 | 193 | # avoid scheduling twice 194 | if Job.select().where(Job.name == job_name, Job.state == "scheduled").count() > 0: 195 | task_logger.info( 196 | f"a job for '{job_name} is already scheduled, not adding another one" 197 | ) 198 | return 199 | 200 | job = Job.create( 201 | name=job_name, 202 | url_or_path=repo_url, 203 | state="scheduled", 204 | ) 205 | 206 | await broadcast( 207 | { 208 | "action": "new_job", 209 | "data": model_to_dict(job), 210 | }, 211 | "jobs", 212 | ) 213 | 214 | return job 215 | 216 | 217 | @always_relaunch(sleep=60 * 5) 218 | async def monitor_apps_lists(monitor_git=False, monitor_only_good_quality_apps=False): 219 | "parse apps lists every hour or so to detect new apps" 220 | 221 | # only support github for now :( 222 | async def get_main_commit_sha(url): 223 | command = await asyncio.create_subprocess_shell( 224 | f"git ls-remote {url} main master", 225 | stdout=asyncio.subprocess.PIPE, 226 | stderr=asyncio.subprocess.PIPE, 227 | ) 228 | data = await command.stdout.read() 229 | commit_sha = data.decode().strip().replace("\t", " ").split(" ")[0] 230 | return commit_sha 231 | 232 | async with aiohttp.ClientSession() as session: 233 | task_logger.info(f"Downloading applist...") 234 | async with session.get(APPS_LIST) as resp: 235 | data = await resp.json() 236 | data = data["apps"] 237 | 238 | repos = {x.name: x for x in Repo.select()} 239 | 240 | for app_id, app_data in data.items(): 241 | commit_sha = await get_main_commit_sha(app_data["git"]["url"]) 242 | 243 | if app_data["state"] != "working": 244 | task_logger.debug(f"skip {app_id} because state is {app_data['state']}") 245 | continue 246 | 247 | if monitor_only_good_quality_apps: 248 | if app_data.get("level") in [None, "?"] or app_data["level"] <= 4: 249 | task_logger.debug(f"skip {app_id} because app is not good quality") 250 | continue 251 | 252 | # already know, look to see if there is new commits 253 | if app_id in repos: 254 
| repo = repos[app_id] 255 | 256 | # but first check if the URL has changed 257 | if repo.url != app_data["git"]["url"]: 258 | task_logger.info( 259 | f"Application {app_id} has changed of url from {repo.url} to {app_data['git']['url']}" 260 | ) 261 | 262 | repo.url = app_data["git"]["url"] 263 | repo.save() 264 | 265 | await broadcast( 266 | { 267 | "action": "update_app", 268 | "data": model_to_dict(repo), 269 | }, 270 | "apps", 271 | ) 272 | 273 | # change the url of all jobs that used to have this URL I 274 | # guess :/ 275 | # this isn't perfect because that could overwrite added by 276 | # hand jobs but well... 277 | for job in Job.select().where( 278 | Job.url_or_path == repo.url, Job.state == "scheduled" 279 | ): 280 | job.url_or_path = repo.url 281 | job.save() 282 | 283 | task_logger.info( 284 | f"Updating job {job.name} #{job.id} for {app_id} to {repo.url} since the app has changed of url" 285 | ) 286 | 287 | await broadcast( 288 | { 289 | "action": "update_job", 290 | "data": model_to_dict(job), 291 | }, 292 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 293 | ) 294 | 295 | # we don't want to do anything else 296 | if not monitor_git: 297 | continue 298 | 299 | repo_is_updated = False 300 | if repo.revision != commit_sha: 301 | task_logger.info( 302 | f"Application {app_id} has new commits on github " 303 | f"({repo.revision} → {commit_sha}), schedule new job" 304 | ) 305 | repo.revision = commit_sha 306 | repo.save() 307 | repo_is_updated = True 308 | 309 | await create_job(app_id, repo.url) 310 | 311 | repo_state = ( 312 | "working" if app_data["state"] == "working" else "other_than_working" 313 | ) 314 | 315 | if repo.state != repo_state: 316 | repo.state = repo_state 317 | repo.save() 318 | repo_is_updated = True 319 | 320 | if repo.random_job_day is None: 321 | repo.random_job_day = random.randint(1, 28) 322 | repo.save() 323 | repo_is_updated = True 324 | 325 | if repo_is_updated: 326 | await broadcast( 327 | { 328 | "action": "update_app", 329 | "data": model_to_dict(repo), 330 | }, 331 | "apps", 332 | ) 333 | 334 | # new app 335 | elif app_id not in repos: 336 | task_logger.info( 337 | f"New application detected: {app_id} " 338 | + (", scheduling a new job" if monitor_git else "") 339 | ) 340 | repo = Repo.create( 341 | name=app_id, 342 | url=app_data["git"]["url"], 343 | revision=commit_sha, 344 | state=( 345 | "working" 346 | if app_data["state"] == "working" 347 | else "other_than_working" 348 | ), 349 | random_job_day=random.randint(1, 28), 350 | ) 351 | 352 | await broadcast( 353 | { 354 | "action": "new_app", 355 | "data": model_to_dict(repo), 356 | }, 357 | "apps", 358 | ) 359 | 360 | if monitor_git: 361 | await create_job(app_id, repo.url) 362 | 363 | await asyncio.sleep(1) 364 | 365 | # delete apps removed from the list 366 | unseen_repos = set(repos.keys()) - set(data.keys()) 367 | 368 | for repo_name in unseen_repos: 369 | repo = repos[repo_name] 370 | 371 | # delete scheduled jobs first 372 | task_logger.info( 373 | f"Application {repo_name} has been removed from the app list, start by removing its scheduled job if there are any..." 374 | ) 375 | for job in Job.select().where( 376 | Job.url_or_path == repo.url, Job.state == "scheduled" 377 | ): 378 | await api_stop_job(None, job.id) # not sure this is going to work 379 | job_id = job.id 380 | 381 | task_logger.info( 382 | f"Delete scheduled job {job.name} #{job.id} for application {repo_name} because the application is being deleted." 
383 | ) 384 | 385 | data = model_to_dict(job) 386 | job.delete_instance() 387 | 388 | await broadcast( 389 | { 390 | "action": "delete_job", 391 | "data": data, 392 | }, 393 | ["jobs", f"job-{job_id}", f"app-jobs-{job.url_or_path}"], 394 | ) 395 | 396 | task_logger.info( 397 | f"Delete application {repo_name} because it has been removed from the apps list." 398 | ) 399 | data = model_to_dict(repo) 400 | repo.delete_instance() 401 | 402 | await broadcast( 403 | { 404 | "action": "delete_app", 405 | "data": data, 406 | }, 407 | "apps", 408 | ) 409 | 410 | 411 | @once_per_day 412 | async def launch_monthly_job(): 413 | today = date.today().day 414 | 415 | for repo in Repo.select().where(Repo.random_job_day == today): 416 | task_logger.info( 417 | f"Launch monthly job for {repo.name} on day {today} of the month " 418 | ) 419 | await create_job(repo.name, repo.url) 420 | 421 | 422 | async def ensure_workers_count(): 423 | if Worker.select().count() < app.config.WORKER_COUNT: 424 | for _ in range(app.config.WORKER_COUNT - Worker.select().count()): 425 | Worker.create(state="available") 426 | elif Worker.select().count() > app.config.WORKER_COUNT: 427 | workers_to_remove = Worker.select().count() - app.config.WORKER_COUNT 428 | workers = Worker.select().where(Worker.state == "available") 429 | for worker in workers: 430 | if workers_to_remove == 0: 431 | break 432 | worker.delete_instance() 433 | workers_to_remove -= 1 434 | 435 | jobs_to_stop = workers_to_remove 436 | for job_id in jobs_in_memory_state: 437 | if jobs_to_stop == 0: 438 | break 439 | await stop_job(job_id) 440 | jobs_to_stop -= 1 441 | job = Job.select().where(Job.id == job_id)[0] 442 | job.state = "scheduled" 443 | job.log = "" 444 | job.save() 445 | 446 | workers = Worker.select().where(Worker.state == "available") 447 | for worker in workers: 448 | if workers_to_remove == 0: 449 | break 450 | worker.delete_instance() 451 | workers_to_remove -= 1 452 | 453 | 454 | @always_relaunch(sleep=3) 455 | async def jobs_dispatcher(): 456 | await ensure_workers_count() 457 | 458 | workers = Worker.select().where(Worker.state == "available") 459 | 460 | # no available workers, wait 461 | if workers.count() == 0: 462 | return 463 | 464 | with db.atomic("IMMEDIATE"): 465 | jobs = Job.select().where(Job.state == "scheduled") 466 | 467 | # no jobs to process, wait 468 | if jobs.count() == 0: 469 | await asyncio.sleep(3) 470 | return 471 | 472 | for i in range(min(workers.count(), jobs.count())): 473 | job = jobs[i] 474 | worker = workers[i] 475 | 476 | job.state = "running" 477 | job.started_time = datetime.now() 478 | job.end_time = None 479 | job.save() 480 | 481 | worker.state = "busy" 482 | worker.save() 483 | 484 | jobs_in_memory_state[job.id] = { 485 | "worker": worker.id, 486 | "task": asyncio.ensure_future(run_job(worker, job)), 487 | } 488 | 489 | 490 | async def cleanup_old_package_check_if_lock_exists(worker, job, ignore_error=False): 491 | 492 | await asyncio.sleep(1) 493 | 494 | if not os.path.exists( 495 | app.config.PACKAGE_CHECK_LOCK_PER_WORKER.format(worker_id=worker.id) 496 | ): 497 | return 498 | 499 | job.log += f"Lock for worker {worker.id} still exist ... trying to cleanup the old package check still running ...\n" 500 | job.save() 501 | await broadcast( 502 | { 503 | "action": "update_job", 504 | "id": job.id, 505 | "data": model_to_dict(job), 506 | }, 507 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 508 | ) 509 | 510 | task_logger.info( 511 | f"Lock for worker {worker.id} still exist ... 
trying to cleanup old check process ..." 512 | ) 513 | 514 | cwd = os.path.split(app.config.PACKAGE_CHECK_PATH)[0] 515 | env = { 516 | "IN_YUNORUNNER": "1", 517 | "WORKER_ID": str(worker.id), 518 | "ARCH": app.config.ARCH, 519 | "DIST": app.config.DIST, 520 | "YNH_BRANCH": app.config.YNH_BRANCH, 521 | "YNHDEV_BACKEND": os.environ.get("YNHDEV_BACKEND", ""), 522 | "PATH": os.environ["PATH"] 523 | + ":/usr/local/bin", # This is because lxc/lxd is in /usr/local/bin 524 | } 525 | 526 | cmd = f"script -qefc '{app.config.PACKAGE_CHECK_PATH} --force-stop 2>&1'" 527 | try: 528 | command = await asyncio.create_subprocess_shell( 529 | cmd, 530 | cwd=cwd, 531 | env=env, 532 | stdout=asyncio.subprocess.PIPE, 533 | stderr=asyncio.subprocess.PIPE, 534 | ) 535 | while not command.stdout.at_eof(): 536 | data = await command.stdout.readline() 537 | await asyncio.sleep(1) 538 | except Exception: 539 | traceback.print_exc() 540 | task_logger.exception(f"ERROR in job '{job.name} #{job.id}'") 541 | 542 | job.log += "\n" 543 | job.log += "Exception:\n" 544 | job.log += traceback.format_exc() 545 | 546 | if not ignore_error: 547 | job.state = "error" 548 | 549 | return False 550 | except (CancelledError, asyncio.exceptions.CancelledError): 551 | command.terminate() 552 | 553 | if not ignore_error: 554 | job.log += "\nFailed to kill old check process?" 555 | job.state = "canceled" 556 | 557 | task_logger.info(f"Job '{job.name} #{job.id}' has been canceled") 558 | 559 | return False 560 | else: 561 | job.log += "Cleaning done\n" 562 | return True 563 | finally: 564 | job.save() 565 | await broadcast( 566 | { 567 | "action": "update_job", 568 | "id": job.id, 569 | "data": model_to_dict(job), 570 | }, 571 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 572 | ) 573 | 574 | # Dirty hack to kill ~zombi processes adopted by init doing funky stuff -_- 575 | os.system( 576 | "for PID in $(ps -ef --forest | grep 'lxc exec' | grep ' 1 ' | awk '{print $2}'); do kill -9 $PID; done" 577 | ) 578 | os.system( 579 | "for PID in $(ps -ef --forest | grep 'incus exec' | grep ' 1 ' | awk '{print $2}'); do kill -9 $PID; done" 580 | ) 581 | os.system( 582 | "for PID in $(ps -ef --forest | grep 'script -qefc' | grep ' 1 ' | awk '{print $2}' ); do kill $PID; done" 583 | ) 584 | 585 | 586 | async def run_job(worker, job): 587 | 588 | async def update_github_commit_status( 589 | app_url, job_url, commit_sha, state, level=None 590 | ): 591 | 592 | token = app.config.GITHUB_COMMIT_STATUS_TOKEN 593 | if token is None: 594 | return 595 | 596 | if state == "canceled": 597 | state = "error" 598 | if state == "done": 599 | state = "success" 600 | 601 | org = app_url.lower().strip("/").replace("https://", "").split("/")[1] 602 | repo = app_url.lower().strip("/").replace("https://", "").split("/")[2] 603 | ci_name = app.config.BASE_URL.lower().replace("https://", "").split(".")[0] 604 | message = f"{ci_name}: " 605 | if level: 606 | message += f"level {level}" 607 | else: 608 | message += state 609 | 610 | api_url = f"https://api.github.com/repos/{org}/{repo}/statuses/{commit_sha}" 611 | 612 | async with aiohttp.ClientSession( 613 | headers={ 614 | "Authorization": f"Bearer {token}", 615 | "Accept": "application/vnd.github+json", 616 | "X-GitHub-Api-Version": "2022-11-28", 617 | } 618 | ) as session: 619 | async with session.post( 620 | api_url, 621 | data=my_json_dumps( 622 | { 623 | "state": state, 624 | "target_url": job_url, 625 | "description": f"{ci_name}: level {level}", 626 | "context": ci_name, 627 | } 628 | ), 629 | ) as resp: 
630 | respjson = await resp.json() 631 | if "url" in respjson: 632 | api_logger.info( 633 | f"Updated commit status for {org}/{repo}/{commit_sha}" 634 | ) 635 | else: 636 | api_logger.error( 637 | f"Failed to update commit status for {org}/{repo}/{commit_sha}" 638 | ) 639 | api_logger.error(respjson) 640 | 641 | await broadcast( 642 | { 643 | "action": "update_job", 644 | "data": model_to_dict(job), 645 | }, 646 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 647 | ) 648 | 649 | await asyncio.sleep(5) 650 | 651 | cleanup_ret = await cleanup_old_package_check_if_lock_exists(worker, job) 652 | if cleanup_ret is False: 653 | return 654 | 655 | job_app = job.name.split()[0] 656 | 657 | task_logger.info(f"Starting job '{job.name}' #{job.id}...") 658 | 659 | cwd = os.path.split(app.config.PACKAGE_CHECK_PATH)[0] 660 | env = { 661 | "IN_YUNORUNNER": "1", 662 | "WORKER_ID": str(worker.id), 663 | "ARCH": app.config.ARCH, 664 | "DIST": app.config.DIST, 665 | "YNH_BRANCH": app.config.YNH_BRANCH, 666 | "YNHDEV_BACKEND": os.environ.get("YNHDEV_BACKEND", ""), 667 | "PATH": os.environ["PATH"] 668 | + ":/usr/local/bin", # This is because lxc/lxd is in /usr/local/bin 669 | } 670 | 671 | if hasattr(app.config, "STORAGE_PATH"): 672 | env["YNH_PACKAGE_CHECK_STORAGE_DIR"] = app.config.STORAGE_PATH 673 | 674 | begin = datetime.now() 675 | begin_human = begin.strftime("%d/%m/%Y - %H:%M:%S") 676 | msg = ( 677 | begin_human 678 | + f" - Starting test for {job.name} on arch {app.config.ARCH}, distrib {app.config.DIST}, with YunoHost {app.config.YNH_BRANCH}" 679 | ) 680 | job.log += "=" * len(msg) + "\n" 681 | job.log += msg + "\n" 682 | job.log += "=" * len(msg) + "\n" 683 | job.save() 684 | await broadcast( 685 | { 686 | "action": "update_job", 687 | "id": job.id, 688 | "data": model_to_dict(job), 689 | }, 690 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 691 | ) 692 | 693 | result_json = app.config.PACKAGE_CHECK_RESULT_JSON_PER_WORKER.format( 694 | worker_id=worker.id 695 | ) 696 | full_log = app.config.PACKAGE_CHECK_FULL_LOG_PER_WORKER.format(worker_id=worker.id) 697 | summary_png = app.config.PACKAGE_CHECK_SUMMARY_PNG_PER_WORKER.format( 698 | worker_id=worker.id 699 | ) 700 | 701 | if os.path.exists(result_json): 702 | os.remove(result_json) 703 | if os.path.exists(full_log): 704 | os.remove(full_log) 705 | if os.path.exists(summary_png): 706 | os.remove(summary_png) 707 | 708 | cmd = f"nice --adjustment=10 script -qefc '/bin/bash {app.config.PACKAGE_CHECK_PATH} {job.url_or_path} 2>&1'" 709 | task_logger.info(f"Launching command: {cmd}") 710 | 711 | try: 712 | command = await asyncio.create_subprocess_shell( 713 | cmd, 714 | cwd=cwd, 715 | env=env, 716 | # default limit is not enough in some situations 717 | limit=(2**16) ** 10, 718 | stdout=asyncio.subprocess.PIPE, 719 | stderr=asyncio.subprocess.PIPE, 720 | ) 721 | 722 | while not command.stdout.at_eof(): 723 | 724 | try: 725 | data = await asyncio.wait_for(command.stdout.readline(), 60) 726 | except asyncio.TimeoutError: 727 | if (datetime.now() - begin).total_seconds() > app.config.TIMEOUT: 728 | raise Exception(f"Job timed out ({app.config.TIMEOUT / 60} min.)") 729 | else: 730 | try: 731 | job.log += data.decode("utf-8", "replace") 732 | except UnicodeDecodeError as e: 733 | job.log += "Uhoh ?! UnicodeDecodeError in yunorunner !?" 
734 | job.log += str(e) 735 | 736 | job.save() 737 | 738 | await broadcast( 739 | { 740 | "action": "update_job", 741 | "id": job.id, 742 | "data": model_to_dict(job), 743 | }, 744 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 745 | ) 746 | 747 | except (CancelledError, asyncio.exceptions.CancelledError): 748 | command.terminate() 749 | job.log += "\n" 750 | job.state = "canceled" 751 | 752 | task_logger.info(f"Job '{job.name} #{job.id}' has been canceled") 753 | except Exception: 754 | traceback.print_exc() 755 | task_logger.exception(f"ERROR in job '{job.name} #{job.id}'") 756 | 757 | job.log += "\n" 758 | job.log += "Job error on:\n" 759 | job.log += traceback.format_exc() 760 | 761 | job.state = "error" 762 | else: 763 | task_logger.info(f"Finished job '{job.name}'") 764 | 765 | if command.returncode == 124: 766 | job.log += f"\nJob timed out ({app.config.TIMEOUT / 60} min.)\n" 767 | job.state = "error" 768 | else: 769 | if command.returncode != 0 or not os.path.exists(result_json): 770 | job.log += f"\nJob failed ? Return code is {command.returncode} / Or maybe the json result doesnt exist...\n" 771 | job.state = "error" 772 | else: 773 | job.log += f"\nPackage check completed\n" 774 | results = json.load(open(result_json)) 775 | level = results["level"] 776 | job.state = "done" if level > 4 else "failure" 777 | 778 | job.log += f"\nThe full log is available at {app.config.BASE_URL}/logs/{job.id}.log\n" 779 | 780 | shutil.copy(full_log, yunorunner_dir + f"/results/logs/{job.id}.log") 781 | shutil.copy( 782 | result_json, 783 | yunorunner_dir 784 | + f"/results/logs/{job_app}_{app.config.ARCH}_{app.config.YNH_BRANCH}_results.json", 785 | ) 786 | shutil.copy( 787 | summary_png, yunorunner_dir + f"/results/summary/{job.id}.png" 788 | ) 789 | 790 | finally: 791 | job.end_time = datetime.now() 792 | job_url = app.config.BASE_URL + "/job/" + str(job.id) 793 | 794 | now = datetime.now().strftime("%d/%m/%Y - %H:%M:%S") 795 | msg = now + f" - Finished job for {job.name} ({job.state})" 796 | job.log += "=" * len(msg) + "\n" 797 | job.log += msg + "\n" 798 | job.log += "=" * len(msg) + "\n" 799 | 800 | job.save() 801 | await broadcast( 802 | { 803 | "action": "update_job", 804 | "id": job.id, 805 | "data": model_to_dict(job), 806 | }, 807 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 808 | ) 809 | 810 | if "ci-apps.yunohost.org" in app.config.BASE_URL: 811 | try: 812 | async with aiohttp.ClientSession() as session: 813 | async with session.get(APPS_LIST) as resp: 814 | data = await resp.json() 815 | data = data["apps"] 816 | public_level = data.get(job_app, {}).get("level") 817 | 818 | job_id_with_url = f"[#{job.id}]({job_url})" 819 | if job.state == "error": 820 | msg = f"Job {job_id_with_url} for {job_app} failed miserably :(" 821 | elif level == 0: 822 | msg = f"App {job_app} failed all tests in job {job_id_with_url} :(" 823 | elif public_level is None: 824 | msg = f"App {job_app} rises from level (unknown) to {level} in job {job_id_with_url} !" 825 | elif level > public_level: 826 | msg = f"App {job_app} rises from level {public_level} to {level} in job {job_id_with_url} !" 
827 | elif level < public_level: 828 | msg = f"App {job_app} goes down from level {public_level} to {level} in job {job_id_with_url}" 829 | elif level < 6: 830 | msg = ( 831 | f"App {job_app} stays at level {level} in job {job_id_with_url}" 832 | ) 833 | else: 834 | # Dont notify anything, reduce CI flood on app chatroom if app is already level 6+ 835 | msg = "" 836 | 837 | if msg: 838 | cmd = f"{yunorunner_dir}/maintenance/chat_notify.sh '{msg}'" 839 | try: 840 | command = await asyncio.create_subprocess_shell(cmd) 841 | while not command.stdout.at_eof(): 842 | await asyncio.sleep(1) 843 | except: 844 | pass 845 | except: 846 | traceback.print_exc() 847 | task_logger.exception(f"ERROR in job '{job.name} #{job.id}'") 848 | 849 | job.log += "\n" 850 | job.log += "Exception:\n" 851 | job.log += traceback.format_exc() 852 | 853 | try: 854 | if os.path.exists(result_json): 855 | results = json.load(open(result_json)) 856 | level = results["level"] 857 | commit = results["commit"] 858 | await update_github_commit_status( 859 | job.url_or_path, job_url, commit, job.state, level 860 | ) 861 | except Exception as e: 862 | task_logger.error( 863 | f"Failed to push commit status for '{job.name}' #{job.id}... : {e}" 864 | ) 865 | 866 | # if job.state != "canceled": 867 | # await cleanup_old_package_check_if_lock_exists(worker, job, ignore_error=True) 868 | 869 | # remove ourself from the state 870 | del jobs_in_memory_state[job.id] 871 | 872 | worker.state = "available" 873 | worker.save() 874 | 875 | await broadcast( 876 | { 877 | "action": "update_job", 878 | "id": job.id, 879 | "data": model_to_dict(job), 880 | }, 881 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 882 | ) 883 | 884 | 885 | async def broadcast(message, channels): 886 | if not isinstance(channels, (list, tuple)): 887 | channels = [channels] 888 | 889 | for channel in channels: 890 | ws_list = subscriptions[channel] 891 | dead_ws = [] 892 | 893 | for ws in ws_list: 894 | try: 895 | await ws.send(my_json_dumps(message)) 896 | except (ConnectionClosed, WebsocketClosed): 897 | dead_ws.append(ws) 898 | except asyncio.exceptions.CancelledError as err: 899 | api_logger.info(f"broadcast ws.send() received cancellederror {err}") 900 | 901 | for to_remove in dead_ws: 902 | try: 903 | ws_list.remove(to_remove) 904 | except ValueError: 905 | pass 906 | 907 | 908 | def subscribe(ws, channel): 909 | subscriptions[channel].append(ws) 910 | 911 | 912 | def unsubscribe_all(ws): 913 | for channel in subscriptions: 914 | if ws in subscriptions[channel]: 915 | if ws in subscriptions[channel]: 916 | print(f"\033[1;36mUnsubscribe ws {ws} from {channel}\033[0m") 917 | subscriptions[channel].remove(ws) 918 | 919 | 920 | def clean_websocket(function): 921 | @wraps(function) 922 | async def _wrap(request, websocket, *args, **kwargs): 923 | try: 924 | to_return = await function(request, websocket, *args, **kwargs) 925 | return to_return 926 | except Exception: 927 | print(function.__name__) 928 | unsubscribe_all(websocket) 929 | raise 930 | 931 | return _wrap 932 | 933 | 934 | def chunks(l, n): 935 | """Yield successive n-sized chunks from l.""" 936 | chunk = [] 937 | a = 0 938 | 939 | for i in l: 940 | if a < n: 941 | a += 1 942 | chunk.append(i) 943 | else: 944 | yield chunk 945 | chunk = [] 946 | a = 0 947 | 948 | yield chunk 949 | 950 | 951 | @app.websocket("/index-ws") 952 | @clean_websocket 953 | async def ws_index(request, websocket): 954 | subscribe(websocket, "jobs") 955 | 956 | # avoid fetch "log" field from the db to reduce memory 
usage 957 | selected_fields = ( 958 | Job.id, 959 | Job.name, 960 | Job.url_or_path, 961 | Job.state, 962 | Job.created_time, 963 | Job.started_time, 964 | Job.end_time, 965 | ) 966 | 967 | JobAlias = Job.alias() 968 | subquery = ( 969 | JobAlias.select(*selected_fields) 970 | .where(JobAlias.state << ("done", "failure", "canceled", "error")) 971 | .group_by(JobAlias.url_or_path) 972 | .select(fn.Max(JobAlias.id).alias("max_id")) 973 | ) 974 | 975 | latest_done_jobs = ( 976 | Job.select(*selected_fields) 977 | .join(subquery, on=(Job.id == subquery.c.max_id)) 978 | .order_by(-Job.id) 979 | .limit(500) 980 | ) 981 | 982 | subquery = ( 983 | JobAlias.select(*selected_fields) 984 | .where(JobAlias.state == "scheduled") 985 | .group_by(JobAlias.url_or_path) 986 | .select(fn.Min(JobAlias.id).alias("min_id")) 987 | ) 988 | 989 | next_scheduled_jobs = ( 990 | Job.select(*selected_fields) 991 | .join(subquery, on=(Job.id == subquery.c.min_id)) 992 | .order_by(-Job.id) 993 | ) 994 | 995 | # chunks initial data by batch of 30 to avoid killing firefox 996 | data = chunks( 997 | itertools.chain( 998 | map(model_to_dict, next_scheduled_jobs.iterator()), 999 | map(model_to_dict, Job.select().where(Job.state == "running").iterator()), 1000 | map(model_to_dict, latest_done_jobs.iterator()), 1001 | ), 1002 | 30, 1003 | ) 1004 | 1005 | first_chunck = next(data) 1006 | 1007 | await websocket.send( 1008 | my_json_dumps( 1009 | { 1010 | "action": "init_jobs", 1011 | "data": first_chunck, # send first chunk 1012 | } 1013 | ) 1014 | ) 1015 | 1016 | for chunk in data: 1017 | await websocket.send( 1018 | my_json_dumps( 1019 | { 1020 | "action": "init_jobs_stream", 1021 | "data": chunk, 1022 | } 1023 | ) 1024 | ) 1025 | 1026 | await websocket.wait_for_connection_lost() 1027 | 1028 | 1029 | @app.websocket("/job-ws/") 1030 | @clean_websocket 1031 | async def ws_job(request, websocket, job_id): 1032 | job = Job.select().where(Job.id == job_id) 1033 | 1034 | if job.count() == 0: 1035 | raise NotFound() 1036 | 1037 | job = job[0] 1038 | 1039 | subscribe(websocket, f"job-{job.id}") 1040 | 1041 | await websocket.send( 1042 | my_json_dumps( 1043 | { 1044 | "action": "init_job", 1045 | "data": model_to_dict(job), 1046 | } 1047 | ) 1048 | ) 1049 | 1050 | await websocket.wait_for_connection_lost() 1051 | 1052 | 1053 | @app.websocket("/apps-ws") 1054 | @clean_websocket 1055 | async def ws_apps(request, websocket): 1056 | subscribe(websocket, "jobs") 1057 | subscribe(websocket, "apps") 1058 | 1059 | # I need to do this because peewee strangely fuck up on join and remove the 1060 | # subquery fields which breaks everything 1061 | repos = Repo.raw( 1062 | """ 1063 | SELECT 1064 | "id", 1065 | "name", 1066 | "url", 1067 | "revision", 1068 | "state", 1069 | "random_job_day", 1070 | "job_id", 1071 | "job_name", 1072 | "job_state", 1073 | "created_time", 1074 | "started_time", 1075 | "end_time" 1076 | FROM 1077 | "repo" AS "t1" 1078 | INNER JOIN ( 1079 | SELECT 1080 | "t1"."id" as "job_id", 1081 | "t1"."name" as "job_name", 1082 | "t1"."url_or_path", 1083 | "t1"."state" as "job_state", 1084 | "t1"."created_time", 1085 | "t1"."started_time", 1086 | "t1"."end_time" 1087 | FROM 1088 | "job" AS "t1" 1089 | INNER JOIN ( 1090 | SELECT 1091 | Max("t2"."id") AS "max_id" 1092 | FROM 1093 | "job" AS "t2" 1094 | GROUP BY 1095 | "t2"."url_or_path" 1096 | ) 1097 | AS 1098 | "t3" 1099 | ON 1100 | ("t1"."id" = "t3"."max_id") 1101 | ) AS 1102 | "t5" 1103 | ON 1104 | ("t5"."url_or_path" = "t1"."url") 1105 | ORDER BY 1106 | "name" 1107 | """ 1108 | ) 
1109 | 1110 | repos = [ 1111 | { 1112 | "id": x.id, 1113 | "name": x.name, 1114 | "url": x.url, 1115 | "revision": x.revision, 1116 | "state": x.state, 1117 | "random_job_day": x.random_job_day, 1118 | "job_id": x.job_id, 1119 | "job_name": x.job_name, 1120 | "job_state": x.job_state, 1121 | "created_time": ( 1122 | datetime.strptime(x.created_time.split(".")[0], "%Y-%m-%d %H:%M:%S") 1123 | if x.created_time 1124 | else None 1125 | ), 1126 | "started_time": ( 1127 | datetime.strptime(x.started_time.split(".")[0], "%Y-%m-%d %H:%M:%S") 1128 | if x.started_time 1129 | else None 1130 | ), 1131 | "end_time": ( 1132 | datetime.strptime(x.end_time.split(".")[0], "%Y-%m-%d %H:%M:%S") 1133 | if x.end_time 1134 | else None 1135 | ), 1136 | } 1137 | for x in repos 1138 | ] 1139 | 1140 | # add apps without jobs 1141 | selected_repos = {x["id"] for x in repos} 1142 | for repo in Repo.select().where(Repo.id.not_in(selected_repos)): 1143 | repos.append( 1144 | { 1145 | "id": repo.id, 1146 | "name": repo.name, 1147 | "url": repo.url, 1148 | "revision": repo.revision, 1149 | "state": repo.state, 1150 | "random_job_day": repo.random_job_day, 1151 | "job_id": None, 1152 | "job_name": None, 1153 | "job_state": None, 1154 | "created_time": None, 1155 | "started_time": None, 1156 | "end_time": None, 1157 | } 1158 | ) 1159 | 1160 | repos = sorted(repos, key=lambda x: x["name"]) 1161 | 1162 | await websocket.send( 1163 | my_json_dumps( 1164 | { 1165 | "action": "init_apps", 1166 | "data": repos, 1167 | } 1168 | ) 1169 | ) 1170 | 1171 | await websocket.wait_for_connection_lost() 1172 | 1173 | 1174 | @app.websocket("/app-ws/") 1175 | @clean_websocket 1176 | async def ws_app(request, websocket, app_name): 1177 | # XXX I don't check if the app exists because this websocket is supposed to 1178 | # be only loaded from the app page which does this job already 1179 | _app = Repo.select().where(Repo.name == app_name)[0] 1180 | 1181 | subscribe(websocket, f"app-jobs-{_app.url}") 1182 | 1183 | job = list( 1184 | Job.select() 1185 | .where(Job.url_or_path == _app.url) 1186 | .order_by(-Job.id) 1187 | .limit(10) 1188 | .dicts() 1189 | ) 1190 | await websocket.send( 1191 | my_json_dumps( 1192 | { 1193 | "action": "init_jobs", 1194 | "data": job, 1195 | } 1196 | ) 1197 | ) 1198 | 1199 | await websocket.wait_for_connection_lost() 1200 | 1201 | 1202 | def require_token(): 1203 | def decorator(f): 1204 | @wraps(f) 1205 | async def decorated_function(request, *args, **kwargs): 1206 | # run some method that checks the request 1207 | # for the client's authorization status 1208 | if "X-Token" not in request.headers: 1209 | return response.json( 1210 | { 1211 | "status": "you need to provide a token " 1212 | "to access the API, please " 1213 | "refer to the README" 1214 | }, 1215 | 403, 1216 | ) 1217 | 1218 | token = request.headers["X-Token"].strip() 1219 | 1220 | if not hmac.compare_digest(token, admin_token): 1221 | api_logger.warning( 1222 | "someone tried to access the API using an invalid admin token" 1223 | ) 1224 | return response.json({"status": "invalid token"}, 403) 1225 | 1226 | result = await f(request, *args, **kwargs) 1227 | return result 1228 | 1229 | return decorated_function 1230 | 1231 | return decorator 1232 | 1233 | 1234 | @app.route("/api/job", methods=["POST"]) 1235 | @require_token() 1236 | async def api_new_job(request): 1237 | job = Job.create( 1238 | name=request.json["name"], 1239 | url_or_path=request.json["url_or_path"], 1240 | created_time=datetime.now(), 1241 | ) 1242 | 1243 | api_logger.info(f"Request 
to add new job '{job.name}' [{job.id}]") 1244 | 1245 | await broadcast( 1246 | { 1247 | "action": "new_job", 1248 | "data": model_to_dict(job), 1249 | }, 1250 | ["jobs", f"app-jobs-{job.url_or_path}"], 1251 | ) 1252 | 1253 | return response.text("ok") 1254 | 1255 | 1256 | @app.route("/api/job", methods=["GET"]) 1257 | @require_token() 1258 | async def api_list_job(request): 1259 | query = Job.select() 1260 | 1261 | if not all: 1262 | query.where(Job.state in ("scheduled", "running")) 1263 | 1264 | return response.json([model_to_dict(x) for x in query.order_by(-Job.id)]) 1265 | 1266 | 1267 | @app.route("/api/app", methods=["GET"]) 1268 | @require_token() 1269 | async def api_list_app(request): 1270 | query = Repo.select() 1271 | 1272 | return response.json([model_to_dict(x) for x in query.order_by(Repo.name)]) 1273 | 1274 | 1275 | @app.route("/api/job/", methods=["DELETE"]) 1276 | @require_token() 1277 | async def api_delete_job(request, job_id): 1278 | api_logger.info(f"Request to restart job {job_id}") 1279 | # need to stop a job before deleting it 1280 | await api_stop_job(request, job_id) 1281 | 1282 | # no need to check if job exist, api_stop_job will do it for us 1283 | job = Job.select().where(Job.id == job_id)[0] 1284 | 1285 | api_logger.info(f"Request to delete job '{job.name}' [{job.id}]") 1286 | 1287 | data = model_to_dict(job) 1288 | job.delete_instance() 1289 | 1290 | await broadcast( 1291 | { 1292 | "action": "delete_job", 1293 | "data": data, 1294 | }, 1295 | ["jobs", f"job-{job_id}", f"app-jobs-{job.url_or_path}"], 1296 | ) 1297 | 1298 | return response.text("ok") 1299 | 1300 | 1301 | async def stop_job(job_id): 1302 | job = Job.select().where(Job.id == job_id) 1303 | 1304 | if job.count() == 0: 1305 | raise NotFound(f"Error: no job with the id '{job_id}'") 1306 | 1307 | job = job[0] 1308 | 1309 | api_logger.info(f"Request to stop job '{job.name}' [{job.id}]") 1310 | 1311 | if job.state == "scheduled": 1312 | api_logger.info(f"Cancel scheduled job '{job.name}' [job.id] " f"on request") 1313 | job.state = "canceled" 1314 | job.save() 1315 | 1316 | await broadcast( 1317 | { 1318 | "action": "update_job", 1319 | "data": model_to_dict(job), 1320 | }, 1321 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 1322 | ) 1323 | 1324 | return response.text("ok") 1325 | 1326 | if job.state == "running": 1327 | api_logger.info(f"Cancel running job '{job.name}' [job.id] on request") 1328 | 1329 | job.state = "canceled" 1330 | job.end_time = datetime.now() 1331 | job.save() 1332 | 1333 | jobs_in_memory_state[job.id]["task"].cancel() 1334 | 1335 | worker = Worker.select().where( 1336 | Worker.id == jobs_in_memory_state[job.id]["worker"] 1337 | )[0] 1338 | 1339 | worker.state = "available" 1340 | worker.save() 1341 | 1342 | await broadcast( 1343 | { 1344 | "action": "update_job", 1345 | "data": model_to_dict(job), 1346 | }, 1347 | ["jobs", f"job-{job.id}", f"app-jobs-{job.url_or_path}"], 1348 | ) 1349 | 1350 | return response.text("ok") 1351 | 1352 | if job.state in ("done", "canceled", "failure", "error"): 1353 | api_logger.info( 1354 | f"Request to cancel job '{job.name}' " 1355 | f"[job.id] but job is already in '{job.state}' state, " 1356 | f"do nothing" 1357 | ) 1358 | # nothing to do, task is already done 1359 | return response.text("ok") 1360 | 1361 | raise Exception(f"Tryed to cancel a job with an unknown state: " f"{job.state}") 1362 | 1363 | 1364 | @app.route("/api/job//stop", methods=["POST"]) 1365 | async def api_stop_job(request, job_id): 1366 | # TODO auth or some kind 
1367 |     return await stop_job(job_id)
1368 | 
1369 | 
1370 | @app.route("/api/job/<job_id:int>/restart", methods=["POST"])
1371 | async def api_restart_job(request, job_id):
1372 |     api_logger.info(f"Request to restart job {job_id}")
1373 |     # Calling a route (e.g. api_stop_job) doesn't work anymore
1374 |     await stop_job(job_id)
1375 | 
1376 |     # no need to check if the job exists, stop_job will do it for us
1377 |     job = Job.select().where(Job.id == job_id)[0]
1378 |     job.state = "scheduled"
1379 |     job.log = ""
1380 |     job.save()
1381 | 
1382 |     await broadcast(
1383 |         {
1384 |             "action": "update_job",
1385 |             "data": model_to_dict(job),
1386 |         },
1387 |         ["jobs", f"job-{job_id}", f"app-jobs-{job.url_or_path}"],
1388 |     )
1389 | 
1390 |     return response.text("ok")
1391 | 
1392 | 
1393 | @app.route("/api/results", methods=["GET"])
1394 | async def api_results(request):
1395 | 
1396 |     import re
1397 | 
1398 |     repos = Repo.select().order_by(Repo.name)
1399 | 
1400 |     all_results = {}
1401 | 
1402 |     for repo in repos:
1403 | 
1404 |         latest_result_path = (
1405 |             yunorunner_dir
1406 |             + f"/results/logs/{repo.name}_{app.config.ARCH}_{app.config.YNH_BRANCH}_results.json"
1407 |         )
1408 |         if not os.path.exists(latest_result_path):
1409 |             continue
1410 |         all_results[repo.name] = json.load(open(latest_result_path))
1411 | 
1412 |         continue
1413 |         # Keeping the old (unreachable) code below for reference
1414 | 
1415 |         jobs = (
1416 |             Job.select()
1417 |             .where(Job.url_or_path == repo.url, Job.state in ["success", "failure"])
1418 |             .order_by(Job.end_time)
1419 |         )
1420 |         if jobs.count() == 0:
1421 |             continue
1422 |         else:
1423 |             job = jobs[-1]
1424 | 
1425 |         l = re.findall(r"Global level for this application: (\d)", job.log[-2000:])
1426 | 
1427 |         if not l:
1428 |             continue
1429 | 
1430 |         all_results[repo.name] = {
1431 |             "app": repo.name,
1432 |             "timestamp": int(job.end_time.timestamp()),
1433 |             "level": int(l[0]),
1434 |         }
1435 | 
1436 |     return response.json(all_results)
1437 | 
1438 | 
1439 | # Meant to interface with https://shields.io/endpoint
1440 | @app.route("/api/job/<job_id:int>/badge", methods=["GET"])
1441 | async def api_badge_job(request, job_id):
1442 | 
1443 |     job = Job.select().where(Job.id == job_id)
1444 | 
1445 |     if job.count() == 0:
1446 |         raise NotFound(f"Error: no job with the id '{job_id}'")
1447 | 
1448 |     job = job[0]
1449 | 
1450 |     state_to_color = {
1451 |         "scheduled": "lightgrey",
1452 |         "running": "blue",
1453 |         "done": "brightgreen",
1454 |         "failure": "red",
1455 |         "error": "red",
1456 |         "canceled": "yellow",
1457 |     }
1458 | 
1459 |     return response.json(
1460 |         {
1461 |             "schemaVersion": 1,
1462 |             "label": "tests",
1463 |             "message": job.state,
1464 |             "color": state_to_color[job.state],
1465 |         }
1466 |     )
1467 | 
1468 | 
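# For illustration (hedged: the domain and job id below are made up), the badge
# endpoint above can be wrapped by shields.io the same way shield_badge_url is
# built further down, e.g. in a README:
#
#     [![Tests](https://img.shields.io/endpoint?url=https://ci.example.org/ci/api/job/1234/badge)](https://ci.example.org/ci/job/1234)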
1469 | @app.route("/job/<job_id:int>")
1470 | @jinja.template("job.html")
1471 | async def html_job(request, job_id):
1472 |     job = Job.select().where(Job.id == job_id)
1473 | 
1474 |     if job.count() == 0:
1475 |         raise NotFound()
1476 | 
1477 |     job = job[0]
1478 | 
1479 |     application = Repo.select().where(Repo.url == job.url_or_path)
1480 |     application = application[0] if application else None
1481 | 
1482 |     job_url = app.config.BASE_URL + app.url_for("html_job", job_id=job.id)
1483 |     badge_url = app.config.BASE_URL + app.url_for("api_badge_job", job_id=job.id)
1484 |     shield_badge_url = f"https://img.shields.io/endpoint?url={badge_url}"
1485 |     summary_url = app.config.BASE_URL + "/summary/" + str(job.id) + ".png"
1486 | 
1487 |     return {
1488 |         "job": job,
1489 |         "app": application,
1490 |         "job_url": job_url,
1491 |         "badge_url": badge_url,
1492 |         "shield_badge_url": shield_badge_url,
1493 |         "summary_url": summary_url,
1494 |         "relative_path_to_root": "../",
1495 |         "path": request.path,
1496 |     }
1497 | 
1498 | 
1499 | @app.route(
1500 |     "/apps/", strict_slashes=True
1501 | )  # To avoid reaching the route "/apps/<app_name>/" with an empty app name
1502 | @jinja.template("apps.html")
1503 | async def html_apps(request):
1504 |     return {"relative_path_to_root": "../", "path": request.path}
1505 | 
1506 | 
1507 | @app.route("/apps/<app_name>/")
1508 | @jinja.template("app.html")
1509 | async def html_app(request, app_name):
1510 |     _app = Repo.select().where(Repo.name == app_name)
1511 | 
1512 |     if _app.count() == 0:
1513 |         raise NotFound()
1514 | 
1515 |     return {"app": _app[0], "relative_path_to_root": "../../", "path": request.path}
1516 | 
1517 | 
1518 | @app.route("/apps/<app_name>/latestjob")
1519 | async def html_app_latestjob(request, app_name):
1520 |     _app = Repo.select().where(Repo.name == app_name)
1521 | 
1522 |     if _app.count() == 0:
1523 |         raise NotFound()
1524 | 
1525 |     jobs = Job.select(fn.MAX(Job.id)).where(
1526 |         Job.url_or_path == _app[0].url, Job.state != "scheduled"
1527 |     )
1528 | 
1529 |     if jobs.count() == 0:
1530 |         jobs = Job.select(fn.MAX(Job.id)).where(Job.url_or_path == _app[0].url)
1531 | 
1532 |     if jobs.count() == 0:
1533 |         raise NotFound()
1534 | 
1535 |     job_url = app.config.BASE_URL + app.url_for("html_job", job_id=jobs[0].id)
1536 | 
1537 |     return response.redirect(job_url)
1538 | 
1539 | 
1540 | @app.route("/")
1541 | @jinja.template("index.html")
1542 | async def html_index(request):
1543 |     return {"relative_path_to_root": "", "path": request.path}
1544 | 
1545 | 
1546 | # @always_relaunch(sleep=10)
1547 | # async def number_of_tasks():
1548 | #     print("Number of tasks: %s" % len(asyncio_all_tasks()))
1549 | #
1550 | #
1551 | # @app.route('/monitor')
1552 | # async def monitor(request):
1553 | #     snapshot = tracemalloc.take_snapshot()
1554 | #     top_stats = snapshot.statistics('lineno')
1555 | #
1556 | #     tasks = asyncio_all_tasks()
1557 | #
1558 | #     return response.json({
1559 | #         "top_20_trace": [str(x) for x in top_stats[:20]],
1560 | #         "tasks": {
1561 | #             "number": len(tasks),
1562 | #             "array": [show_coro(t) for t in tasks],
1563 | #         }
1564 | #     })
1565 | 
1566 | 
1567 | @app.route("/github", methods=["GET"])
1568 | async def github_get(request):
1569 |     return response.text(
1570 |         "You aren't supposed to visit this page with a browser, it's meant for webhook pushes instead."
1571 |     )
1572 | 
1573 | 
1574 | @app.route("/github", methods=["POST"])
1575 | async def github(request):
1576 | 
1577 |     # Abort directly if no secret is configured
1578 |     # (which also means this feature is only enabled when the webhook secret is defined)
1579 |     if app.config.GITHUB_WEBHOOK_SECRET is None:
1580 |         api_logger.info(
1581 |             "Received a webhook but the GITHUB_WEBHOOK_SECRET or GITHUB_BOT_TOKEN setting is missing... ignoring"
ignoring" 1582 | ) 1583 | return response.json({"error": "GitHub hooks not configured"}, 403) 1584 | 1585 | # Only SHA1 is supported 1586 | header_signature = request.headers.get("X-Hub-Signature") 1587 | if header_signature is None: 1588 | api_logger.info("Received a webhook but there's no header X-Hub-Signature") 1589 | return response.json({"error": "No X-Hub-Signature"}, 403) 1590 | 1591 | sha_name, signature = header_signature.split("=") 1592 | if sha_name != "sha1": 1593 | api_logger.info( 1594 | "Received a webhook but signing algo isn't sha1, it's '%s'" % sha_name 1595 | ) 1596 | return response.json({"error": "Signing algorightm is not sha1 ?!"}, 501) 1597 | 1598 | # HMAC requires the key to be bytes, but data is string 1599 | mac = hmac.new( 1600 | app.config.GITHUB_WEBHOOK_SECRET.encode(), 1601 | msg=request.body, 1602 | digestmod=hashlib.sha1, 1603 | ) 1604 | 1605 | if not hmac.compare_digest(str(mac.hexdigest()), str(signature)): 1606 | api_logger.info( 1607 | "Received a webhook but signature authentication failed (is the secret properly configured?)" 1608 | ) 1609 | return response.json({"error": "Bad signature ?!"}, 403) 1610 | 1611 | hook_type = request.headers.get("X-Github-Event") 1612 | hook_infos = request.json 1613 | 1614 | # We expect issue comments (issue = also PR in github stuff...) 1615 | # - *New* comments 1616 | # - On issue/PRs which are still open 1617 | if hook_type == "issue_comment": 1618 | if ( 1619 | hook_infos["action"] != "created" 1620 | or hook_infos["issue"]["state"] != "open" 1621 | or "pull_request" not in hook_infos["issue"] 1622 | ): 1623 | # Nothing to do but success anyway (204 = No content) 1624 | api_logger.debug( 1625 | "Received an issue_comment webhook but doesn't qualify for starting a job." 1626 | ) 1627 | return response.empty(status=204) 1628 | 1629 | # Check the comment contains proper keyword trigger 1630 | body = hook_infos["comment"]["body"].strip()[:100].lower() 1631 | if not any(trigger.lower() in body for trigger in app.config.WEBHOOK_TRIGGERS): 1632 | # Nothing to do but success anyway (204 = No content) 1633 | api_logger.debug( 1634 | "Received an issue_comment webhook but doesn't contain any keyword." 1635 | ) 1636 | return response.empty(status=204) 1637 | 1638 | # We only accept this from people which are member of the org 1639 | # https://docs.github.com/en/rest/reference/orgs#check-organization-membership-for-a-user 1640 | # We need a token an we can't rely on "author_association" because sometimes, users are members in Private, 1641 | # which is not represented in the original webhook 1642 | async def is_user_in_organization(user): 1643 | async with aiohttp.ClientSession( 1644 | headers={ 1645 | "Authorization": f"token {app.config.GITHUB_COMMIT_STATUS_TOKEN}", 1646 | "Accept": "application/vnd.github.v3+json", 1647 | } 1648 | ) as session: 1649 | resp = await session.get( 1650 | f"https://api.github.com/orgs/YunoHost-Apps/members/{user}", 1651 | ) 1652 | return resp.status == 204 1653 | 1654 | github_username = hook_infos["comment"]["user"]["login"] 1655 | if not await is_user_in_organization(github_username): 1656 | # Unauthorized 1657 | api_logger.warning( 1658 | f"User {github_username} is not authorized to run webhooks!" 
1611 |     hook_type = request.headers.get("X-Github-Event")
1612 |     hook_infos = request.json
1613 | 
1614 |     # We expect issue comments (in GitHub land, "issue" also covers PRs...)
1615 |     # - *new* comments
1616 |     # - on issues/PRs which are still open
1617 |     if hook_type == "issue_comment":
1618 |         if (
1619 |             hook_infos["action"] != "created"
1620 |             or hook_infos["issue"]["state"] != "open"
1621 |             or "pull_request" not in hook_infos["issue"]
1622 |         ):
1623 |             # Nothing to do, but succeed anyway (204 = No content)
1624 |             api_logger.debug(
1625 |                 "Received an issue_comment webhook but it doesn't qualify for starting a job."
1626 |             )
1627 |             return response.empty(status=204)
1628 | 
1629 |         # Check that the comment contains a proper keyword trigger
1630 |         body = hook_infos["comment"]["body"].strip()[:100].lower()
1631 |         if not any(trigger.lower() in body for trigger in app.config.WEBHOOK_TRIGGERS):
1632 |             # Nothing to do, but succeed anyway (204 = No content)
1633 |             api_logger.debug(
1634 |                 "Received an issue_comment webhook but it doesn't contain any trigger keyword."
1635 |             )
1636 |             return response.empty(status=204)
1637 | 
1638 |         # We only accept this from people who are members of the org
1639 |         # https://docs.github.com/en/rest/reference/orgs#check-organization-membership-for-a-user
1640 |         # We need a token, and we can't rely on "author_association" because sometimes
1641 |         # users are only members in private, which is not reflected in the webhook payload
1642 |         async def is_user_in_organization(user):
1643 |             async with aiohttp.ClientSession(
1644 |                 headers={
1645 |                     "Authorization": f"token {app.config.GITHUB_COMMIT_STATUS_TOKEN}",
1646 |                     "Accept": "application/vnd.github.v3+json",
1647 |                 }
1648 |             ) as session:
1649 |                 resp = await session.get(
1650 |                     f"https://api.github.com/orgs/YunoHost-Apps/members/{user}",
1651 |                 )
1652 |                 return resp.status == 204
1653 | 
1654 |         github_username = hook_infos["comment"]["user"]["login"]
1655 |         if not await is_user_in_organization(github_username):
1656 |             # Unauthorized
1657 |             api_logger.warning(
1658 |                 f"User {github_username} is not authorized to run webhooks!"
1659 |             )
1660 |             return response.json({"error": "Unauthorized"}, 403)
1661 |         # Fetch the PR infos (yeah, they ain't in the initial infos we get @_@)
1662 |         pr_infos_url = hook_infos["issue"]["pull_request"]["url"]
1663 | 
1664 |     elif hook_type == "pull_request":
1665 |         if hook_infos["action"] != "opened":
1666 |             # Nothing to do, but succeed anyway (204 = No content)
1667 |             api_logger.debug(
1668 |                 "Received a pull_request webhook but it doesn't qualify for starting a job."
1669 |             )
1670 |             return response.empty(status=204)
1671 | 
1672 |         # We only accept PRs created by the github-actions or yunohost bots
1673 |         if hook_infos["pull_request"]["user"]["login"] not in [
1674 |             "github-actions[bot]",
1675 |             "yunohost-bot",
1676 |         ] or not hook_infos["pull_request"]["head"]["ref"].startswith(
1677 |             "ci-auto-update-"
1678 |         ):
1679 |             # Unauthorized
1680 |             api_logger.debug(
1681 |                 "Received a pull_request webhook but from an unknown GitHub user."
1682 |             )
1683 |             return response.empty(status=204)
1684 |         if not app.config.ANSWER_TO_AUTO_UPDATER:
1685 |             # Ignored on purpose
1686 |             api_logger.info(
1687 |                 "Received a pull_request webhook but configured to ignore the auto-updater."
1688 |             )
1689 |             return response.empty(status=204)
1690 |         # Fetch the PR infos (yeah, they ain't in the initial infos we get @_@)
1691 |         pr_infos_url = hook_infos["pull_request"]["url"]
1692 | 
1693 |     else:
1694 |         # Nothing to do, but succeed anyway (204 = No content)
1695 |         return response.empty(status=204)
1696 | 
1697 |     async with aiohttp.ClientSession() as session:
1698 |         async with session.get(pr_infos_url) as resp:
1699 |             pr_infos = await resp.json()
1700 | 
1701 |     branch_name = pr_infos["head"]["ref"]
1702 |     repo = pr_infos["head"]["repo"]["html_url"]
1703 |     url_to_test = f"{repo}/tree/{branch_name}"
1704 |     app_id = pr_infos["base"]["repo"]["name"]
1705 |     if app_id.endswith("_ynh"):
1706 |         app_id = app_id[: -len("_ynh")]
1707 | 
1708 |     pr_id = str(pr_infos["number"])
1709 | 
1710 |     # Create the job for the corresponding app (with the branch url)
1711 | 
1712 |     api_logger.info("Scheduling a new job from comment on a PR")
1713 |     job = await create_job(
1714 |         app_id, url_to_test, job_comment=f"PR #{pr_id}, {branch_name}"
1715 |     )
1716 | 
1717 |     if not job:
1718 |         api_logger.warning("Corresponding job already scheduled!")
1719 |         return response.empty(status=204)
1720 | 
1721 |     # Answer with a comment containing a link + badge for the job
1722 | 
1723 |     async def comment(body):
1724 |         if hook_type == "issue_comment":
1725 |             comments_url = hook_infos["issue"]["comments_url"]
1726 |         else:
1727 |             comments_url = hook_infos["pull_request"]["comments_url"]
1728 | 
1729 |         async with aiohttp.ClientSession(
1730 |             headers={"Authorization": f"token {app.config.GITHUB_COMMIT_STATUS_TOKEN}"}
1731 |         ) as session:
1732 |             async with session.post(
1733 |                 comments_url, data=my_json_dumps({"body": body})
1734 |             ) as resp:
1735 |                 respjson = await resp.json()
1736 |                 api_logger.info("Added comment %s" % respjson["html_url"])
1737 | 
1738 |     catchphrase = random.choice(app.config.WEBHOOK_CATCHPHRASES)
1739 |     # Dirty hack with BASE_URL passed via the config because we can't use request.url_for on Sanic < 20.x
1740 |     job_url = app.config.BASE_URL + app.url_for("html_job", job_id=job.id)
1741 |     badge_url = app.config.BASE_URL + app.url_for("api_badge_job", job_id=job.id)
1742 |     shield_badge_url = f"https://img.shields.io/endpoint?url={badge_url}"
1743 |     summary_url = app.config.BASE_URL + f"/summary/{job.id}.png"
1744 | 
1745 |     body = f"{catchphrase}\n[![Test 
Badge]({shield_badge_url})]({job_url})\n[![]({summary_url})]({job_url})" 1746 | api_logger.info(body) 1747 | await comment(body) 1748 | 1749 | return response.text("ok") 1750 | 1751 | 1752 | # def show_coro(c): 1753 | # data = { 1754 | # 'txt': str(c), 1755 | # 'type': str(type(c)), 1756 | # 'done': c.done(), 1757 | # 'cancelled': False, 1758 | # 'stack': None, 1759 | # 'exception': None, 1760 | # } 1761 | # if not c.done(): 1762 | # data['stack'] = [format_frame(x) for x in c.get_stack()] 1763 | # else: 1764 | # if c.cancelled(): 1765 | # data['cancelled'] = True 1766 | # else: 1767 | # data['exception'] = str(c.exception()) 1768 | # 1769 | # return data 1770 | 1771 | 1772 | # def format_frame(f): 1773 | # keys = ['f_code', 'f_lineno'] 1774 | # return dict([(k, str(getattr(f, k))) for k in keys]) 1775 | 1776 | 1777 | @app.listener("before_server_start") 1778 | async def listener_before_server_start(*args, **kwargs): 1779 | task_logger.info("before_server_start") 1780 | reset_pending_jobs() 1781 | reset_busy_workers() 1782 | merge_jobs_on_startup() 1783 | 1784 | set_random_day_for_monthy_job() 1785 | 1786 | 1787 | @app.listener("after_server_start") 1788 | async def listener_after_server_start(*args, **kwargs): 1789 | task_logger.info("after_server_start") 1790 | 1791 | 1792 | @app.listener("before_server_stop") 1793 | async def listener_before_server_stop(*args, **kwargs): 1794 | task_logger.info("before_server_stop") 1795 | 1796 | 1797 | @app.listener("after_server_stop") 1798 | async def listener_after_server_stop(*args, **kwargs): 1799 | task_logger.info("after_server_stop") 1800 | for job_id in jobs_in_memory_state: 1801 | await stop_job(job_id) 1802 | job = Job.select().where(Job.id == job_id)[0] 1803 | job.state = "scheduled" 1804 | job.log = "" 1805 | job.save() 1806 | 1807 | 1808 | def set_config(config="./config.py"): 1809 | 1810 | default_config = { 1811 | "BASE_URL": "", 1812 | "PORT": 4242, 1813 | "TIMEOUT": 10800, 1814 | "DEBUG": False, 1815 | "MONITOR_APPS_LIST": False, 1816 | "MONITOR_GIT": False, 1817 | "MONITOR_ONLY_GOOD_QUALITY_APPS": False, 1818 | "MONTHLY_JOBS": False, 1819 | "ANSWER_TO_AUTO_UPDATER": True, 1820 | "WORKER_COUNT": 1, 1821 | "ARCH": "amd64", 1822 | "DIST": "bullseye", 1823 | "YNH_BRANCH": "stable", 1824 | "PACKAGE_CHECK_DIR": yunorunner_dir + "/package_check/", 1825 | "WEBHOOK_TRIGGERS": [ 1826 | "!testme", 1827 | "!gogogadgetoci", 1828 | "By the power of systemd, I invoke The Great App CI to test this Pull Request!", 1829 | ], 1830 | "WEBHOOK_CATCHPHRASES": [ 1831 | "Alrighty!", 1832 | "Fingers crossed!", 1833 | "May the CI gods be with you!", 1834 | ":carousel_horse:", 1835 | ":rocket:", 1836 | ":sunflower:", 1837 | "Meow :cat2:", 1838 | ":v:", 1839 | ":stuck_out_tongue_winking_eye:", 1840 | ], 1841 | "GITHUB_COMMIT_STATUS_TOKEN": None, 1842 | "GITHUB_WEBHOOK_SECRET": None, 1843 | } 1844 | 1845 | app.config.update_config(default_config) 1846 | app.config.update_config(config) 1847 | 1848 | app.config.PACKAGE_CHECK_PATH = app.config.PACKAGE_CHECK_DIR + "package_check.sh" 1849 | app.config.PACKAGE_CHECK_LOCK_PER_WORKER = ( 1850 | app.config.PACKAGE_CHECK_DIR + "pcheck-{worker_id}.lock" 1851 | ) 1852 | app.config.PACKAGE_CHECK_FULL_LOG_PER_WORKER = ( 1853 | app.config.PACKAGE_CHECK_DIR + "full_log_{worker_id}.log" 1854 | ) 1855 | app.config.PACKAGE_CHECK_RESULT_JSON_PER_WORKER = ( 1856 | app.config.PACKAGE_CHECK_DIR + "results_{worker_id}.json" 1857 | ) 1858 | app.config.PACKAGE_CHECK_SUMMARY_PNG_PER_WORKER = ( 1859 | app.config.PACKAGE_CHECK_DIR + 
"summary_{worker_id}.png" 1860 | ) 1861 | 1862 | if not os.path.exists(app.config.PACKAGE_CHECK_PATH): 1863 | print( 1864 | f"Error: analyzer script doesn't exist at '{app.config.PACKAGE_CHECK_PATH}'. Please fix the configuration in {config}" 1865 | ) 1866 | sys.exit(1) 1867 | 1868 | 1869 | if __name__ == "__main__": 1870 | set_config() 1871 | app.run("localhost", port=app.config.PORT, debug=app.config.DEBUG) 1872 | 1873 | 1874 | if __name__ == "__mp_main__": 1875 | # Worker thread 1876 | set_config() 1877 | 1878 | if app.config.MONITOR_APPS_LIST: 1879 | app.add_task( 1880 | monitor_apps_lists( 1881 | monitor_git=app.config.MONITOR_GIT, 1882 | monitor_only_good_quality_apps=app.config.MONITOR_ONLY_GOOD_QUALITY_APPS, 1883 | ) 1884 | ) 1885 | 1886 | if app.config.MONTHLY_JOBS: 1887 | app.add_task(launch_monthly_job()) 1888 | 1889 | # app.add_task(number_of_tasks()) 1890 | 1891 | app.add_task(jobs_dispatcher()) 1892 | -------------------------------------------------------------------------------- /schedule.py: -------------------------------------------------------------------------------- 1 | import traceback 2 | import asyncio 3 | 4 | from functools import wraps 5 | from datetime import datetime, timedelta 6 | 7 | 8 | def always_relaunch(sleep): 9 | def decorator(function): 10 | @wraps(function) 11 | async def wrap(*args, **kwargs): 12 | while True: 13 | try: 14 | await function(*args, **kwargs) 15 | except KeyboardInterrupt: 16 | return 17 | except Exception as e: 18 | traceback.print_exc() 19 | print( 20 | f"Error: exception in function '{function.__name__}', relaunch in {sleep} seconds" 21 | ) 22 | finally: 23 | await asyncio.sleep(sleep) 24 | 25 | return wrap 26 | 27 | return decorator 28 | 29 | 30 | def once_per_day(function): 31 | async def decorator(*args, **kwargs): 32 | while True: 33 | try: 34 | await function(*args, **kwargs) 35 | except KeyboardInterrupt: 36 | return 37 | except Exception as e: 38 | import traceback 39 | 40 | traceback.print_exc() 41 | print( 42 | f"Error: exception in function '{function.__name__}', relaunch in tomorrow at one am" 43 | ) 44 | finally: 45 | # launch tomorrow at 1 am 46 | now = datetime.now() 47 | tomorrow = now + timedelta(days=1) 48 | tomorrow = tomorrow.replace(hour=1, minute=0, second=0) 49 | seconds_until_next_run = (tomorrow - now).seconds 50 | 51 | # XXX if relaunched twice the same day that will duplicate the jobs 52 | await asyncio.sleep(seconds_until_next_run) 53 | 54 | return decorator 55 | -------------------------------------------------------------------------------- /static/css/style.css: -------------------------------------------------------------------------------- 1 | .consoleOutput { 2 | max-height: 90vh; 3 | background-color: #222; 4 | color: #eee; 5 | padding-top: 1.25rem; 6 | padding-right: 1.5rem; 7 | padding-bottom: 1.25rem; 8 | padding-left: 1.5rem; 9 | white-space: pre-wrap; 10 | } 11 | 12 | .deleted * { 13 | font-style: italic; 14 | color: darkgrey !important;; 15 | } 16 | 17 | .menu-title { 18 | font-weight: bold; 19 | font-family: sans; 20 | } 21 | 22 | .navbar { 23 | border-bottom: 0.5px solid #ddd; 24 | } 25 | 26 | .doneJob { 27 | background-color: #BCFFBC80 !important;; 28 | } 29 | 30 | .failureJob { 31 | background-color: #FCAAAA80 !important;; 32 | } 33 | 34 | .canceledJob { 35 | background-color: #FAFCAA80 !important;; 36 | } 37 | 38 | .runningJob { 39 | background-color: #D9EDF7 !important;; 40 | } 41 | 42 | .errorJob { 43 | background-color: #cccccc !important; 44 | } 45 | 46 | h1 small { 47 | font-size: 
--------------------------------------------------------------------------------
/static/css/style.css:
--------------------------------------------------------------------------------
1 | .consoleOutput {
2 |     max-height: 90vh;
3 |     background-color: #222;
4 |     color: #eee;
5 |     padding-top: 1.25rem;
6 |     padding-right: 1.5rem;
7 |     padding-bottom: 1.25rem;
8 |     padding-left: 1.5rem;
9 |     white-space: pre-wrap;
10 | }
11 | 
12 | .deleted * {
13 |     font-style: italic;
14 |     color: darkgrey !important;
15 | }
16 | 
17 | .menu-title {
18 |     font-weight: bold;
19 |     font-family: sans-serif;
20 | }
21 | 
22 | .navbar {
23 |     border-bottom: 0.5px solid #ddd;
24 | }
25 | 
26 | .doneJob {
27 |     background-color: #BCFFBC80 !important;
28 | }
29 | 
30 | .failureJob {
31 |     background-color: #FCAAAA80 !important;
32 | }
33 | 
34 | .canceledJob {
35 |     background-color: #FAFCAA80 !important;
36 | }
37 | 
38 | .runningJob {
39 |     background-color: #D9EDF7 !important;
40 | }
41 | 
42 | .errorJob {
43 |     background-color: #cccccc !important;
44 | }
45 | 
46 | h1 small {
47 |     font-size: large;
48 |     color: #888;
49 | }
50 | 
51 | .randomJobDay {
52 |     text-align: right;
53 | }
54 | 
--------------------------------------------------------------------------------
/static/js/app.js:
--------------------------------------------------------------------------------
1 | function websocketPrefix() {
2 |     if (window.location.protocol == "https:") {
3 |         return "wss"
4 |     } else {
5 |         return "ws"
6 |     }
7 | }
8 | 
9 | function websocketRelativePath(path) {
10 |     return window.location.pathname.substring(0, window.location.pathname.length - path.length)
11 | }
12 | 
13 | /* ansi_up.js
14 |  * author : Dru Nelson
15 |  * license : MIT
16 |  * http://github.com/drudru/ansi_up
17 |  */
18 | (function (root, factory) {
19 |     if (typeof define === 'function' && define.amd) {
20 |         // AMD. Register as an anonymous module.
21 |         define(['exports'], factory);
22 |     } else if (typeof exports === 'object' && typeof exports.nodeName !== 'string') {
23 |         // CommonJS
24 |         factory(exports);
25 |     } else {
26 |         // Browser globals
27 |         var exp = {};
28 |         factory(exp);
29 |         root.AnsiUp = exp.default;
30 |     }
31 | }(this, function (exports) {
32 | "use strict";
33 | function rgx(tmplObj) {
34 |     var subst = [];
35 |     for (var _i = 1; _i < arguments.length; _i++) {
36 |         subst[_i - 1] = arguments[_i];
37 |     }
38 |     var regexText = tmplObj.raw[0];
39 |     var wsrgx = /^\s+|\s+\n|\s+#[\s\S]+?\n/gm;
40 |     var txt2 = regexText.replace(wsrgx, '');
41 |     return new RegExp(txt2, 'm');
42 | }
43 | var AnsiUp = (function () {
44 |     function AnsiUp() {
45 |         this.VERSION = "3.0.0";
46 |         this.ansi_colors = [
47 |             [
48 |                 { rgb: [0, 0, 0], class_name: "ansi-black" },
49 |                 { rgb: [187, 0, 0], class_name: "ansi-red" },
50 |                 { rgb: [0, 187, 0], class_name: "ansi-green" },
51 |                 { rgb: [187, 187, 0], class_name: "ansi-yellow" },
52 |                 { rgb: [0, 0, 187], class_name: "ansi-blue" },
53 |                 { rgb: [187, 0, 187], class_name: "ansi-magenta" },
54 |                 { rgb: [0, 187, 187], class_name: "ansi-cyan" },
55 |                 { rgb: [255, 255, 255], class_name: "ansi-white" }
56 |             ],
57 |             [
58 |                 { rgb: [85, 85, 85], class_name: "ansi-bright-black" },
59 |                 { rgb: [255, 85, 85], class_name: "ansi-bright-red" },
60 |                 { rgb: [0, 255, 0], class_name: "ansi-bright-green" },
61 |                 { rgb: [255, 255, 85], class_name: "ansi-bright-yellow" },
62 |                 { rgb: [85, 85, 255], class_name: "ansi-bright-blue" },
63 |                 { rgb: [255, 85, 255], class_name: "ansi-bright-magenta" },
64 |                 { rgb: [85, 255, 255], class_name: "ansi-bright-cyan" },
65 |                 { rgb: [255, 255, 255], class_name: "ansi-bright-white" }
66 |             ]
67 |         ];
68 |         this.htmlFormatter = {
69 |             transform: function (fragment, instance) {
70 |                 var txt = fragment.text;
71 |                 if (txt.length === 0)
72 |                     return txt;
73 |                 if (instance._escape_for_html)
74 |                     txt = instance.old_escape_for_html(txt);
75 |                 if (!fragment.bold && fragment.fg === null && fragment.bg === null)
76 |                     return txt;
77 |                 var styles = [];
78 |                 var classes = [];
79 |                 var fg = fragment.fg;
80 |                 var bg = fragment.bg;
81 |                 if (fragment.bold)
82 |                     styles.push('font-weight:bold');
83 |                 if (!instance._use_classes) {
84 |                     if (fg)
85 |                         styles.push("color:rgb(" + fg.rgb.join(',') + ")");
86 |                     if (bg)
87 |                         styles.push("background-color:rgb(" + bg.rgb + ")");
88 |                 }
89 |                 else {
90 |                     if (fg) {
91 |                         if (fg.class_name !== 'truecolor') {
92 |                             classes.push(fg.class_name + "-fg");
93 |                         }
94 |                         else {
95 |                             styles.push("color:rgb(" + fg.rgb.join(',') + ")");
96 |                         }
97 |                     }
98 |                     if (bg) {
99 |                         if (bg.class_name !== 'truecolor') {
100 |                             classes.push(bg.class_name + "-bg");
101 |                         }
102 |                         else {
styles.push("background-color:rgb(" + bg.rgb.join(',') + ")"); 104 | } 105 | } 106 | } 107 | var class_string = ''; 108 | var style_string = ''; 109 | if (classes.length) 110 | class_string = " class=\"" + classes.join(' ') + "\""; 111 | if (styles.length) 112 | style_string = " style=\"" + styles.join(';') + "\""; 113 | return "" + txt + ""; 114 | }, 115 | compose: function (segments, instance) { 116 | return segments.join(""); 117 | } 118 | }; 119 | this.textFormatter = { 120 | transform: function (fragment, instance) { 121 | return fragment.text; 122 | }, 123 | compose: function (segments, instance) { 124 | return segments.join(""); 125 | } 126 | }; 127 | this.setup_256_palette(); 128 | this._use_classes = false; 129 | this._escape_for_html = true; 130 | this.bold = false; 131 | this.fg = this.bg = null; 132 | this._buffer = ''; 133 | } 134 | Object.defineProperty(AnsiUp.prototype, "use_classes", { 135 | get: function () { 136 | return this._use_classes; 137 | }, 138 | set: function (arg) { 139 | this._use_classes = arg; 140 | }, 141 | enumerable: true, 142 | configurable: true 143 | }); 144 | Object.defineProperty(AnsiUp.prototype, "escape_for_html", { 145 | get: function () { 146 | return this._escape_for_html; 147 | }, 148 | set: function (arg) { 149 | this._escape_for_html = arg; 150 | }, 151 | enumerable: true, 152 | configurable: true 153 | }); 154 | AnsiUp.prototype.setup_256_palette = function () { 155 | var _this = this; 156 | this.palette_256 = []; 157 | this.ansi_colors.forEach(function (palette) { 158 | palette.forEach(function (rec) { 159 | _this.palette_256.push(rec); 160 | }); 161 | }); 162 | var levels = [0, 95, 135, 175, 215, 255]; 163 | for (var r = 0; r < 6; ++r) { 164 | for (var g = 0; g < 6; ++g) { 165 | for (var b = 0; b < 6; ++b) { 166 | var col = { rgb: [levels[r], levels[g], levels[b]], class_name: 'truecolor' }; 167 | this.palette_256.push(col); 168 | } 169 | } 170 | } 171 | var grey_level = 8; 172 | for (var i = 0; i < 24; ++i, grey_level += 10) { 173 | var gry = { rgb: [grey_level, grey_level, grey_level], class_name: 'truecolor' }; 174 | this.palette_256.push(gry); 175 | } 176 | }; 177 | AnsiUp.prototype.old_escape_for_html = function (txt) { 178 | return txt.replace(/[&<>]/gm, function (str) { 179 | if (str === "&") 180 | return "&"; 181 | if (str === "<") 182 | return "<"; 183 | if (str === ">") 184 | return ">"; 185 | }); 186 | }; 187 | AnsiUp.prototype.old_linkify = function (txt) { 188 | return txt.replace(/(https?:\/\/[^\s]+)/gm, function (str) { 189 | return "" + str + ""; 190 | }); 191 | }; 192 | AnsiUp.prototype.detect_incomplete_ansi = function (txt) { 193 | return !(/.*?[\x40-\x7e]/.test(txt)); 194 | }; 195 | AnsiUp.prototype.detect_incomplete_link = function (txt) { 196 | var found = false; 197 | for (var i = txt.length - 1; i > 0; i--) { 198 | if (/\s|\x1B/.test(txt[i])) { 199 | found = true; 200 | break; 201 | } 202 | } 203 | if (!found) { 204 | if (/(https?:\/\/[^\s]+)/.test(txt)) 205 | return 0; 206 | else 207 | return -1; 208 | } 209 | var prefix = txt.substr(i + 1, 4); 210 | if (prefix.length === 0) 211 | return -1; 212 | if ("http".indexOf(prefix) === 0) 213 | return (i + 1); 214 | }; 215 | AnsiUp.prototype.ansi_to = function (txt, formatter) { 216 | var pkt = this._buffer + txt; 217 | this._buffer = ''; 218 | var raw_text_pkts = pkt.split(/\x1B\[/); 219 | if (raw_text_pkts.length === 1) 220 | raw_text_pkts.push(''); 221 | this.handle_incomplete_sequences(raw_text_pkts); 222 | var first_chunk = this.with_state(raw_text_pkts.shift()); 223 | 
var blocks = new Array(raw_text_pkts.length); 224 | for (var i = 0, len = raw_text_pkts.length; i < len; ++i) { 225 | blocks[i] = (formatter.transform(this.process_ansi(raw_text_pkts[i]), this)); 226 | } 227 | if (first_chunk.text.length > 0) 228 | blocks.unshift(formatter.transform(first_chunk, this)); 229 | return formatter.compose(blocks, this); 230 | }; 231 | AnsiUp.prototype.ansi_to_html = function (txt) { 232 | return this.ansi_to(txt, this.htmlFormatter); 233 | }; 234 | AnsiUp.prototype.ansi_to_text = function (txt) { 235 | return this.ansi_to(txt, this.textFormatter); 236 | }; 237 | AnsiUp.prototype.with_state = function (text) { 238 | return { bold: this.bold, fg: this.fg, bg: this.bg, text: text }; 239 | }; 240 | AnsiUp.prototype.handle_incomplete_sequences = function (chunks) { 241 | var last_chunk = chunks[chunks.length - 1]; 242 | if ((last_chunk.length > 0) && this.detect_incomplete_ansi(last_chunk)) { 243 | this._buffer = "\x1B[" + last_chunk; 244 | chunks.pop(); 245 | chunks.push(''); 246 | } 247 | else { 248 | if (last_chunk.slice(-1) === "\x1B") { 249 | this._buffer = "\x1B"; 250 | console.log("raw", chunks); 251 | chunks.pop(); 252 | chunks.push(last_chunk.substr(0, last_chunk.length - 1)); 253 | console.log(chunks); 254 | console.log(last_chunk); 255 | } 256 | if (chunks.length === 2 && 257 | chunks[1] === "" && 258 | chunks[0].slice(-1) === "\x1B") { 259 | this._buffer = "\x1B"; 260 | last_chunk = chunks.shift(); 261 | chunks.unshift(last_chunk.substr(0, last_chunk.length - 1)); 262 | } 263 | } 264 | }; 265 | AnsiUp.prototype.process_ansi = function (block) { 266 | if (!this._sgr_regex) { 267 | this._sgr_regex = (_a = ["\n ^ # beginning of line\n ([!<-?]?) # a private-mode char (!, <, =, >, ?)\n ([d;]*) # any digits or semicolons\n ([ -/]? # an intermediate modifier\n [@-~]) # the command\n ([sS]*) # any text following this CSI sequence\n "], _a.raw = ["\n ^ # beginning of line\n ([!\\x3c-\\x3f]?) # a private-mode char (!, <, =, >, ?)\n ([\\d;]*) # any digits or semicolons\n ([\\x20-\\x2f]? 
# an intermediate modifier\n [\\x40-\\x7e]) # the command\n ([\\s\\S]*) # any text following this CSI sequence\n "], rgx(_a)); 268 | } 269 | var matches = block.match(this._sgr_regex); 270 | if (!matches) { 271 | return this.with_state(block); 272 | } 273 | var orig_txt = matches[4]; 274 | if (matches[1] !== '' || matches[3] !== 'm') { 275 | return this.with_state(orig_txt); 276 | } 277 | var sgr_cmds = matches[2].split(';'); 278 | while (sgr_cmds.length > 0) { 279 | var sgr_cmd_str = sgr_cmds.shift(); 280 | var num = parseInt(sgr_cmd_str, 10); 281 | if (isNaN(num) || num === 0) { 282 | this.fg = this.bg = null; 283 | this.bold = false; 284 | } 285 | else if (num === 1) { 286 | this.bold = true; 287 | } 288 | else if (num === 22) { 289 | this.bold = false; 290 | } 291 | else if (num === 39) { 292 | this.fg = null; 293 | } 294 | else if (num === 49) { 295 | this.bg = null; 296 | } 297 | else if ((num >= 30) && (num < 38)) { 298 | this.fg = this.ansi_colors[0][(num - 30)]; 299 | } 300 | else if ((num >= 40) && (num < 48)) { 301 | this.bg = this.ansi_colors[0][(num - 40)]; 302 | } 303 | else if ((num >= 90) && (num < 98)) { 304 | this.fg = this.ansi_colors[1][(num - 90)]; 305 | } 306 | else if ((num >= 100) && (num < 108)) { 307 | this.bg = this.ansi_colors[1][(num - 100)]; 308 | } 309 | else if (num === 38 || num === 48) { 310 | if (sgr_cmds.length > 0) { 311 | var is_foreground = (num === 38); 312 | var mode_cmd = sgr_cmds.shift(); 313 | if (mode_cmd === '5' && sgr_cmds.length > 0) { 314 | var palette_index = parseInt(sgr_cmds.shift(), 10); 315 | if (palette_index >= 0 && palette_index <= 255) { 316 | if (is_foreground) 317 | this.fg = this.palette_256[palette_index]; 318 | else 319 | this.bg = this.palette_256[palette_index]; 320 | } 321 | } 322 | if (mode_cmd === '2' && sgr_cmds.length > 2) { 323 | var r = parseInt(sgr_cmds.shift(), 10); 324 | var g = parseInt(sgr_cmds.shift(), 10); 325 | var b = parseInt(sgr_cmds.shift(), 10); 326 | if ((r >= 0 && r <= 255) && (g >= 0 && g <= 255) && (b >= 0 && b <= 255)) { 327 | var c = { rgb: [r, g, b], class_name: 'truecolor' }; 328 | if (is_foreground) 329 | this.fg = c; 330 | else 331 | this.bg = c; 332 | } 333 | } 334 | } 335 | } 336 | } 337 | return this.with_state(orig_txt); 338 | var _a; 339 | }; 340 | return AnsiUp; 341 | }()); 342 | // sourceMappingURL=ansi_up.js.map 343 | Object.defineProperty(exports, "__esModule", { value: true }); 344 | exports.default = AnsiUp; 345 | })); 346 | 347 | function notify(message) { 348 | // Let's check if the browser supports notifications 349 | if (!("Notification" in window)) { 350 | return; 351 | } 352 | 353 | // Let's check whether notification permissions have already been granted 354 | else if (Notification.permission === "granted") { 355 | // If it's okay let's create a notification 356 | var notification = new Notification( 357 | "Yunohost Apps CI", 358 | { 359 | icon: "https://yunohost.org/_images/yunohost_package.png", 360 | body: message 361 | } 362 | ); 363 | } 364 | 365 | // Otherwise, we need to ask the user for permission 366 | else if (Notification.permission !== "denied") { 367 | Notification.requestPermission().then(function (permission) { 368 | // If the user accepts, let's create a notification 369 | if (permission === "granted") { 370 | var notification = new Notification( 371 | "Yunohost Apps CI", 372 | { 373 | icon: "https://yunohost.org/_images/yunohost_package.png", 374 | body: message 375 | } 376 | ); 377 | } 378 | }); 379 | } 380 | 381 | // At last, if the user has denied 
notifications, and you 382 | // want to be respectful there is no need to bother them any more. 383 | } 384 | -------------------------------------------------------------------------------- /static/js/reconnecting-websocket.min.js: -------------------------------------------------------------------------------- 1 | !function(a,b){"function"==typeof define&&define.amd?define([],b):"undefined"!=typeof module&&module.exports?module.exports=b():a.ReconnectingWebSocket=b()}(this,function(){function a(b,c,d){function l(a,b){var c=document.createEvent("CustomEvent");return c.initCustomEvent(a,!1,!1,b),c}var e={debug:!1,automaticOpen:!0,reconnectInterval:1e3,maxReconnectInterval:3e4,reconnectDecay:1.5,timeoutInterval:2e3};d||(d={});for(var f in e)this[f]="undefined"!=typeof d[f]?d[f]:e[f];this.url=b,this.reconnectAttempts=0,this.readyState=WebSocket.CONNECTING,this.protocol=null;var h,g=this,i=!1,j=!1,k=document.createElement("div");k.addEventListener("open",function(a){g.onopen(a)}),k.addEventListener("close",function(a){g.onclose(a)}),k.addEventListener("connecting",function(a){g.onconnecting(a)}),k.addEventListener("message",function(a){g.onmessage(a)}),k.addEventListener("error",function(a){g.onerror(a)}),this.addEventListener=k.addEventListener.bind(k),this.removeEventListener=k.removeEventListener.bind(k),this.dispatchEvent=k.dispatchEvent.bind(k),this.open=function(b){h=new WebSocket(g.url,c||[]),b||k.dispatchEvent(l("connecting")),(g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","attempt-connect",g.url);var d=h,e=setTimeout(function(){(g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","connection-timeout",g.url),j=!0,d.close(),j=!1},g.timeoutInterval);h.onopen=function(){clearTimeout(e),(g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","onopen",g.url),g.protocol=h.protocol,g.readyState=WebSocket.OPEN,g.reconnectAttempts=0;var d=l("open");d.isReconnect=b,b=!1,k.dispatchEvent(d)},h.onclose=function(c){if(clearTimeout(e),h=null,i)g.readyState=WebSocket.CLOSED,k.dispatchEvent(l("close"));else{g.readyState=WebSocket.CONNECTING;var d=l("connecting");d.code=c.code,d.reason=c.reason,d.wasClean=c.wasClean,k.dispatchEvent(d),b||j||((g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","onclose",g.url),k.dispatchEvent(l("close")));var e=g.reconnectInterval*Math.pow(g.reconnectDecay,g.reconnectAttempts);setTimeout(function(){g.reconnectAttempts++,g.open(!0)},e>g.maxReconnectInterval?g.maxReconnectInterval:e)}},h.onmessage=function(b){(g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","onmessage",g.url,b.data);var c=l("message");c.data=b.data,k.dispatchEvent(c)},h.onerror=function(b){(g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","onerror",g.url,b),k.dispatchEvent(l("error"))}},1==this.automaticOpen&&this.open(!1),this.send=function(b){if(h)return(g.debug||a.debugAll)&&console.debug("ReconnectingWebSocket","send",g.url,b),h.send(b);throw"INVALID_STATE_ERR : Pausing to reconnect websocket"},this.close=function(a,b){"undefined"==typeof a&&(a=1e3),i=!0,h&&h.close(a,b)},this.refresh=function(){h&&h.close()}}return a.prototype.onopen=function(){},a.prototype.onclose=function(){},a.prototype.onconnecting=function(){},a.prototype.onmessage=function(){},a.prototype.onerror=function(){},a.debugAll=!1,a.CONNECTING=WebSocket.CONNECTING,a.OPEN=WebSocket.OPEN,a.CLOSING=WebSocket.CLOSING,a.CLOSED=WebSocket.CLOSED,a}); 2 | -------------------------------------------------------------------------------- /static/js/vue-2-5-17.min.js: 
-------------------------------------------------------------------------------- 1 | /*! 2 | * Vue.js v2.5.17 3 | * (c) 2014-2018 Evan You 4 | * Released under the MIT License. 5 | */ 6 | !function(e,t){"object"==typeof exports&&"undefined"!=typeof module?module.exports=t():"function"==typeof define&&define.amd?define(t):e.Vue=t()}(this,function(){"use strict";var y=Object.freeze({});function M(e){return null==e}function D(e){return null!=e}function S(e){return!0===e}function T(e){return"string"==typeof e||"number"==typeof e||"symbol"==typeof e||"boolean"==typeof e}function P(e){return null!==e&&"object"==typeof e}var r=Object.prototype.toString;function l(e){return"[object Object]"===r.call(e)}function i(e){var t=parseFloat(String(e));return 0<=t&&Math.floor(t)===t&&isFinite(e)}function t(e){return null==e?"":"object"==typeof e?JSON.stringify(e,null,2):String(e)}function F(e){var t=parseFloat(e);return isNaN(t)?e:t}function s(e,t){for(var n=Object.create(null),r=e.split(","),i=0;ie.id;)n--;bt.splice(n+1,0,e)}else bt.push(e);Ct||(Ct=!0,Ze(At))}}(this)},St.prototype.run=function(){if(this.active){var e=this.get();if(e!==this.value||P(e)||this.deep){var t=this.value;if(this.value=e,this.user)try{this.cb.call(this.vm,e,t)}catch(e){Fe(e,this.vm,'callback for watcher "'+this.expression+'"')}else this.cb.call(this.vm,e,t)}}},St.prototype.evaluate=function(){this.value=this.get(),this.dirty=!1},St.prototype.depend=function(){for(var e=this.deps.length;e--;)this.deps[e].depend()},St.prototype.teardown=function(){if(this.active){this.vm._isBeingDestroyed||f(this.vm._watchers,this);for(var e=this.deps.length;e--;)this.deps[e].removeSub(this);this.active=!1}};var Tt={enumerable:!0,configurable:!0,get:$,set:$};function Et(e,t,n){Tt.get=function(){return this[t][n]},Tt.set=function(e){this[t][n]=e},Object.defineProperty(e,n,Tt)}function jt(e){e._watchers=[];var t=e.$options;t.props&&function(n,r){var i=n.$options.propsData||{},o=n._props={},a=n.$options._propKeys=[];n.$parent&&ge(!1);var e=function(e){a.push(e);var t=Ie(e,r,i,n);Ce(o,e,t),e in n||Et(n,"_props",e)};for(var t in r)e(t);ge(!0)}(e,t.props),t.methods&&function(e,t){e.$options.props;for(var n in t)e[n]=null==t[n]?$:v(t[n],e)}(e,t.methods),t.data?function(e){var t=e.$options.data;l(t=e._data="function"==typeof t?function(e,t){se();try{return e.call(t,t)}catch(e){return Fe(e,t,"data()"),{}}finally{ce()}}(t,e):t||{})||(t={});var n=Object.keys(t),r=e.$options.props,i=(e.$options.methods,n.length);for(;i--;){var o=n[i];r&&p(r,o)||(void 0,36!==(a=(o+"").charCodeAt(0))&&95!==a&&Et(e,"_data",o))}var a;we(t,!0)}(e):we(e._data={},!0),t.computed&&function(e,t){var n=e._computedWatchers=Object.create(null),r=Y();for(var i in t){var o=t[i],a="function"==typeof o?o:o.get;r||(n[i]=new St(e,a||$,$,Nt)),i in e||Lt(e,i,o)}}(e,t.computed),t.watch&&t.watch!==G&&function(e,t){for(var n in t){var r=t[n];if(Array.isArray(r))for(var i=0;iparseInt(this.max)&&bn(a,s[0],s,this._vnode)),t.data.keepAlive=!0}return t||e&&e[0]}}};$n=hn,Cn={get:function(){return j}},Object.defineProperty($n,"config",Cn),$n.util={warn:re,extend:m,mergeOptions:Ne,defineReactive:Ce},$n.set=xe,$n.delete=ke,$n.nextTick=Ze,$n.options=Object.create(null),k.forEach(function(e){$n.options[e+"s"]=Object.create(null)}),m(($n.options._base=$n).options.components,kn),$n.use=function(e){var t=this._installedPlugins||(this._installedPlugins=[]);if(-1=a&&l()};setTimeout(function(){c\/=]+)(?:\s*(=)\s*(?:"([^"]*)"+|'([^']*)'+|([^\s"'=<>`]+)))?/,oo="[a-zA-Z_][\\w\\-\\.]*",ao="((?:"+oo+"\\:)?"+oo+")",so=new 
RegExp("^<"+ao),co=/^\s*(\/?)>/,lo=new RegExp("^<\\/"+ao+"[^>]*>"),uo=/^]+>/i,fo=/^",""":'"',"&":"&"," ":"\n"," ":"\t"},go=/&(?:lt|gt|quot|amp);/g,_o=/&(?:lt|gt|quot|amp|#10|#9);/g,bo=s("pre,textarea",!0),$o=function(e,t){return e&&bo(e)&&"\n"===t[0]};var wo,Co,xo,ko,Ao,Oo,So,To,Eo=/^@|^v-on:/,jo=/^v-|^@|^:/,No=/([^]*?)\s+(?:in|of)\s+([^]*)/,Lo=/,([^,\}\]]*)(?:,([^,\}\]]*))?$/,Io=/^\(|\)$/g,Mo=/:(.*)$/,Do=/^:|^v-bind:/,Po=/\.[^.]+/g,Fo=e(eo);function Ro(e,t,n){return{type:1,tag:e,attrsList:t,attrsMap:function(e){for(var t={},n=0,r=e.length;n]*>)","i")),n=i.replace(t,function(e,t,n){return r=n.length,ho(o)||"noscript"===o||(t=t.replace(//g,"$1").replace(//g,"$1")),$o(o,t)&&(t=t.slice(1)),d.chars&&d.chars(t),""});a+=i.length-n.length,i=n,A(o,a-r,a)}else{var s=i.indexOf("<");if(0===s){if(fo.test(i)){var c=i.indexOf("--\x3e");if(0<=c){d.shouldKeepComment&&d.comment(i.substring(4,c)),C(c+3);continue}}if(po.test(i)){var l=i.indexOf("]>");if(0<=l){C(l+2);continue}}var u=i.match(uo);if(u){C(u[0].length);continue}var f=i.match(lo);if(f){var p=a;C(f[0].length),A(f[1],p,a);continue}var _=x();if(_){k(_),$o(v,i)&&C(1);continue}}var b=void 0,$=void 0,w=void 0;if(0<=s){for($=i.slice(s);!(lo.test($)||so.test($)||fo.test($)||po.test($)||(w=$.indexOf("<",1))<0);)s+=w,$=i.slice(s);b=i.substring(0,s),C(s)}s<0&&(b=i,i=""),d.chars&&b&&d.chars(b)}if(i===e){d.chars&&d.chars(i);break}}function C(e){a+=e,i=i.substring(e)}function x(){var e=i.match(so);if(e){var t,n,r={tagName:e[1],attrs:[],start:a};for(C(e[0].length);!(t=i.match(co))&&(n=i.match(io));)C(n[0].length),r.attrs.push(n);if(t)return r.unarySlash=t[1],C(t[0].length),r.end=a,r}}function k(e){var t=e.tagName,n=e.unarySlash;m&&("p"===v&&ro(t)&&A(v),g(t)&&v===t&&A(t));for(var r,i,o,a=y(t)||!!n,s=e.attrs.length,c=new Array(s),l=0;l-1"+("true"===d?":("+l+")":":_q("+l+","+d+")")),Ar(c,"change","var $$a="+l+",$$el=$event.target,$$c=$$el.checked?("+d+"):("+v+");if(Array.isArray($$a)){var $$v="+(f?"_n("+p+")":p)+",$$i=_i($$a,$$v);if($$el.checked){$$i<0&&("+Er(l,"$$a.concat([$$v])")+")}else{$$i>-1&&("+Er(l,"$$a.slice(0,$$i).concat($$a.slice($$i+1))")+")}}else{"+Er(l,"$$c")+"}",null,!0);else if("input"===$&&"radio"===w)r=e,i=_,a=(o=b)&&o.number,s=Or(r,"value")||"null",Cr(r,"checked","_q("+i+","+(s=a?"_n("+s+")":s)+")"),Ar(r,"change",Er(i,s),null,!0);else if("input"===$||"textarea"===$)!function(e,t,n){var r=e.attrsMap.type,i=n||{},o=i.lazy,a=i.number,s=i.trim,c=!o&&"range"!==r,l=o?"change":"range"===r?Pr:"input",u="$event.target.value";s&&(u="$event.target.value.trim()"),a&&(u="_n("+u+")");var f=Er(t,u);c&&(f="if($event.target.composing)return;"+f),Cr(e,"value","("+t+")"),Ar(e,l,f,null,!0),(s||a)&&Ar(e,"blur","$forceUpdate()")}(e,_,b);else if(!j.isReservedTag($))return Tr(e,_,b),!1;return!0},text:function(e,t){t.value&&Cr(e,"textContent","_s("+t.value+")")},html:function(e,t){t.value&&Cr(e,"innerHTML","_s("+t.value+")")}},isPreTag:function(e){return"pre"===e},isUnaryTag:to,mustUseProp:Sn,canBeLeftOpenTag:no,isReservedTag:Un,getTagNamespace:Vn,staticKeys:(Go=Wo,Go.reduce(function(e,t){return e.concat(t.staticKeys||[])},[]).join(","))},Qo=e(function(e){return s("type,tag,attrsList,attrsMap,plain,parent,children,attrs"+(e?","+e:""))});function ea(e,t){e&&(Zo=Qo(t.staticKeys||""),Xo=t.isReservedTag||O,function 
e(t){t.static=function(e){if(2===e.type)return!1;if(3===e.type)return!0;return!(!e.pre&&(e.hasBindings||e.if||e.for||c(e.tag)||!Xo(e.tag)||function(e){for(;e.parent;){if("template"!==(e=e.parent).tag)return!1;if(e.for)return!0}return!1}(e)||!Object.keys(e).every(Zo)))}(t);if(1===t.type){if(!Xo(t.tag)&&"slot"!==t.tag&&null==t.attrsMap["inline-template"])return;for(var n=0,r=t.children.length;n|^function\s*\(/,na=/^[A-Za-z_$][\w$]*(?:\.[A-Za-z_$][\w$]*|\['[^']*?']|\["[^"]*?"]|\[\d+]|\[[A-Za-z_$][\w$]*])*$/,ra={esc:27,tab:9,enter:13,space:32,up:38,left:37,right:39,down:40,delete:[8,46]},ia={esc:"Escape",tab:"Tab",enter:"Enter",space:" ",up:["Up","ArrowUp"],left:["Left","ArrowLeft"],right:["Right","ArrowRight"],down:["Down","ArrowDown"],delete:["Backspace","Delete"]},oa=function(e){return"if("+e+")return null;"},aa={stop:"$event.stopPropagation();",prevent:"$event.preventDefault();",self:oa("$event.target !== $event.currentTarget"),ctrl:oa("!$event.ctrlKey"),shift:oa("!$event.shiftKey"),alt:oa("!$event.altKey"),meta:oa("!$event.metaKey"),left:oa("'button' in $event && $event.button !== 0"),middle:oa("'button' in $event && $event.button !== 1"),right:oa("'button' in $event && $event.button !== 2")};function sa(e,t,n){var r=t?"nativeOn:{":"on:{";for(var i in e)r+='"'+i+'":'+ca(i,e[i])+",";return r.slice(0,-1)+"}"}function ca(t,e){if(!e)return"function(){}";if(Array.isArray(e))return"["+e.map(function(e){return ca(t,e)}).join(",")+"]";var n=na.test(e.value),r=ta.test(e.value);if(e.modifiers){var i="",o="",a=[];for(var s in e.modifiers)if(aa[s])o+=aa[s],ra[s]&&a.push(s);else if("exact"===s){var c=e.modifiers;o+=oa(["ctrl","shift","alt","meta"].filter(function(e){return!c[e]}).map(function(e){return"$event."+e+"Key"}).join("||"))}else a.push(s);return a.length&&(i+="if(!('button' in $event)&&"+a.map(la).join("&&")+")return null;"),o&&(i+=o),"function($event){"+i+(n?"return "+e.value+"($event)":r?"return ("+e.value+")($event)":e.value)+"}"}return n||r?e.value:"function($event){"+e.value+"}"}function la(e){var t=parseInt(e,10);if(t)return"$event.keyCode!=="+t;var n=ra[e],r=ia[e];return"_k($event.keyCode,"+JSON.stringify(e)+","+JSON.stringify(n)+",$event.key,"+JSON.stringify(r)+")"}var ua={on:function(e,t){e.wrapListeners=function(e){return"_g("+e+","+t.value+")"}},bind:function(t,n){t.wrapData=function(e){return"_b("+e+",'"+t.tag+"',"+n.value+","+(n.modifiers&&n.modifiers.prop?"true":"false")+(n.modifiers&&n.modifiers.sync?",true":"")+")"}},cloak:$},fa=function(e){this.options=e,this.warn=e.warn||$r,this.transforms=wr(e.modules,"transformCode"),this.dataGenFns=wr(e.modules,"genData"),this.directives=m(m({},ua),e.directives);var t=e.isReservedTag||O;this.maybeComponent=function(e){return!t(e.tag)},this.onceId=0,this.staticRenderFns=[]};function pa(e,t){var n=new fa(t);return{render:"with(this){return "+(e?da(e,n):'_c("div")')+"}",staticRenderFns:n.staticRenderFns}}function da(e,t){if(e.staticRoot&&!e.staticProcessed)return va(e,t);if(e.once&&!e.onceProcessed)return ha(e,t);if(e.for&&!e.forProcessed)return f=t,v=(u=e).for,h=u.alias,m=u.iterator1?","+u.iterator1:"",y=u.iterator2?","+u.iterator2:"",u.forProcessed=!0,(d||"_l")+"(("+v+"),function("+h+m+y+"){return "+(p||da)(u,f)+"})";if(e.if&&!e.ifProcessed)return ma(e,t);if("template"!==e.tag||e.slotTarget){if("slot"===e.tag)return function(e,t){var n=e.slotName||'"default"',r=_a(e,t),i="_t("+n+(r?","+r:""),o=e.attrs&&"{"+e.attrs.map(function(e){return 
g(e.name)+":"+e.value}).join(",")+"}",a=e.attrsMap["v-bind"];!o&&!a||r||(i+=",null");o&&(i+=","+o);a&&(i+=(o?"":",null")+","+a);return i+")"}(e,t);var n;if(e.component)a=e.component,c=t,l=(s=e).inlineTemplate?null:_a(s,c,!0),n="_c("+a+","+ya(s,c)+(l?","+l:"")+")";else{var r=e.plain?void 0:ya(e,t),i=e.inlineTemplate?null:_a(e,t,!0);n="_c('"+e.tag+"'"+(r?","+r:"")+(i?","+i:"")+")"}for(var o=0;o':'
',0r?-1:n0?Math.floor(n):Math.ceil(n)};var X=function(e,t){var n=b(e),r=b(t);return 12*(n.getFullYear()-r.getFullYear())+(n.getMonth()-r.getMonth())};var w=function(e,t){var n=b(e).getTime(),r=b(t).getTime();return nr?1:0};var F=function(e,t){var n=b(e),r=b(t),a=w(n,r),o=Math.abs(X(n,r));return n.setMonth(n.getMonth()-a*o),a*(o-(w(n,r)===-a))};var $=["M","MM","Q","D","DD","DDD","DDDD","d","E","W","WW","YY","YYYY","GG","GGGG","H","HH","h","hh","m","mm","s","ss","S","SS","SSS","Z","ZZ","X","x"];var W=function(e){var t=[];for(var n in e)e.hasOwnProperty(n)&&t.push(n);var r=$.concat(t).sort().reverse();return new RegExp("(\\[[^\\[]*\\])|(\\\\)?("+r.join("|")+"|.)","g")};var O=function(){var e=["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],t=["January","February","March","April","May","June","July","August","September","October","November","December"],n=["Su","Mo","Tu","We","Th","Fr","Sa"],r=["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],a=["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"],o=["AM","PM"],u=["am","pm"],i=["a.m.","p.m."],s={MMM:function(t){return e[t.getMonth()]},MMMM:function(e){return t[e.getMonth()]},dd:function(e){return n[e.getDay()]},ddd:function(e){return r[e.getDay()]},dddd:function(e){return a[e.getDay()]},A:function(e){return e.getHours()/12>=1?o[1]:o[0]},a:function(e){return e.getHours()/12>=1?u[1]:u[0]},aa:function(e){return e.getHours()/12>=1?i[1]:i[0]}};return["M","D","DDD","d","Q","W"].forEach(function(e){s[e+"o"]=function(t,n){return function(e){var t=e%100;if(t>20||t<10)switch(t%10){case 1:return e+"st";case 2:return e+"nd";case 3:return e+"rd"}return e+"th"}(n[e](t))}}),{formatters:s,formattingTokensRegExp:W(s)}},z={distanceInWords:function(){var e={lessThanXSeconds:{one:"less than a second",other:"less than {{count}} seconds"},xSeconds:{one:"1 second",other:"{{count}} seconds"},halfAMinute:"half a minute",lessThanXMinutes:{one:"less than a minute",other:"less than {{count}} minutes"},xMinutes:{one:"1 minute",other:"{{count}} minutes"},aboutXHours:{one:"about 1 hour",other:"about {{count}} hours"},xHours:{one:"1 hour",other:"{{count}} hours"},xDays:{one:"1 day",other:"{{count}} days"},aboutXMonths:{one:"about 1 month",other:"about {{count}} months"},xMonths:{one:"1 month",other:"{{count}} months"},aboutXYears:{one:"about 1 year",other:"about {{count}} years"},xYears:{one:"1 year",other:"{{count}} years"},overXYears:{one:"over 1 year",other:"over {{count}} years"},almostXYears:{one:"almost 1 year",other:"almost {{count}} years"}};return{localize:function(t,n,r){var a;return r=r||{},a="string"==typeof e[t]?e[t]:1===n?e[t].one:e[t].other.replace("{{count}}",n),r.addSuffix?r.comparison>0?"in "+a:a+" ago":a}}}(),format:O()},H=1440,A=2520,C=43200,G=86400;var J=function(e,t,n){var r=n||{},a=I(e,t),o=r.locale,u=z.distanceInWords.localize;o&&o.distanceInWords&&o.distanceInWords.localize&&(u=o.distanceInWords.localize);var i,s,c={addSuffix:Boolean(r.addSuffix),comparison:a};a>0?(i=b(e),s=b(t)):(i=b(t),s=b(e));var d,f=U(s,i),l=s.getTimezoneOffset()-i.getTimezoneOffset(),h=Math.round(f/60)-l;if(h<2)return r.includeSeconds?f<5?u("lessThanXSeconds",5,c):f<10?u("lessThanXSeconds",10,c):f<20?u("lessThanXSeconds",20,c):f<40?u("halfAMinute",null,c):u(f<60?"lessThanXMinutes":"xMinutes",1,c):0===h?u("lessThanXMinutes",1,c):u("xMinutes",h,c);if(h<45)return u("xMinutes",h,c);if(h<90)return u("aboutXHours",1,c);if(h0&&void 
0!==arguments[0]?arguments[0]:{},t=e.locales||{};return{name:e.name||"Timeago",props:{datetime:{required:!0},title:{type:[String,Boolean]},locale:{type:String},autoUpdate:{type:[Number,Boolean]},converter:{type:Function},converterOptions:{type:Object}},data:function(){return{timeago:this.getTimeago()}},mounted:function(){this.startUpdater()},beforeDestroy:function(){this.stopUpdater()},render:function(e){return e("time",{attrs:{datetime:new Date(this.datetime),title:"string"==typeof this.title?this.title:!1===this.title?null:this.timeago}},[this.timeago])},methods:{getTimeago:function(n){return(this.converter||e.converter||j)(n||this.datetime,t[this.locale||e.locale],this.converterOptions||{})},convert:function(e){this.timeago=this.getTimeago(e)},startUpdater:function(){var e=this;if(this.autoUpdate){var t=!0===this.autoUpdate?60:this.autoUpdate;this.updater=setInterval(function(){e.convert()},1e3*t)}},stopUpdater:function(){this.updater&&(clearInterval(this.updater),this.updater=null)}},watch:{autoUpdate:function(e){this.stopUpdater(),e&&this.startUpdater()},datetime:function(){this.convert()},locale:function(){this.convert()},converter:function(){this.convert()},converterOptions:{handler:function(){this.convert()},deep:!0}}}},N=function(e,t){var n=E(t);e.component(n.name,n)},B=j;e.createTimeago=E,e.install=N,e.converter=B,e.default=N,Object.defineProperty(e,"__esModule",{value:!0})}); 2 | //# sourceMappingURL=vue-timeago.min.js.map 3 | -------------------------------------------------------------------------------- /templates/app.html: -------------------------------------------------------------------------------- 1 | <% extends "base.html" %> 2 | 3 | <% block content %> 4 |
5 |
6 |

Jobs for app <{ app.name }>

7 |
8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 | 21 | 22 | 23 |
AppStateCreated timeStarted timeEnd time
#{{job.id}}#{{job.id}} (deleted){{job.state}}{{timestampToDate(job.created_time)}}{{timestampToDate(job.started_time)}}{{timestampToDate(job.end_time)}}
24 |
25 |
26 |
27 | <% endblock %> 28 | 29 | <% block javascript %> 30 | 77 | <% endblock %> 78 | -------------------------------------------------------------------------------- /templates/apps.html: -------------------------------------------------------------------------------- 1 | <% extends "base.html" %> 2 | 3 | <% block content %> 4 |
5 |
6 |

Apps

7 |
8 | 37 |
Loading...
38 |
39 |
40 |
41 | <% endblock %> 42 | 43 | <% block javascript %> 44 | 154 | <% endblock %> 155 | -------------------------------------------------------------------------------- /templates/base.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | <% block title %>YunoRunner for CI<% endblock %> 5 | 6 | 7 | 8 | 10 | 12 | 14 | 16 | 17 | 18 | 19 | 20 | 21 | <% include "menu.html" %> 22 | <% block content %><% endblock %> 23 | <% block javascript %><% endblock %> 24 | 25 | 26 | -------------------------------------------------------------------------------- /templates/index.html: -------------------------------------------------------------------------------- 1 | <% extends "base.html" %> 2 | 3 | <% block content %> 4 |
5 |
6 |

Jobs display the last done job of each app and the next scheduled job

7 |
8 | 31 | 34 |
35 |
36 |
37 | <% endblock %> 38 | 39 | <% block javascript %> 40 | 98 | <% endblock %> 99 | -------------------------------------------------------------------------------- /templates/job.html: -------------------------------------------------------------------------------- 1 | <% extends "base.html" %> 2 | 3 | <% block content %> 4 |
5 |
6 |

7 | (DELETED) 8 | Job #{{job.id}} 9 | <% if app %>{{ job.name }} 10 | <% else %>{{ job.name }} 11 | <% endif %> 12 | 13 |

14 | 15 |
16 | 17 | 18 |
19 | 20 | 21 | 22 | 23 | 24 | 25 | 26 |
State{{job.state}}
Created time{{timestampToDate(job.created_time)}}
Started time{{timestampToDate(job.started_time)}}
End time{{timestampToDate(job.end_time)}}
Badge<{job.state}>
27 | 28 |

Execution log:

29 |

 30 |         
31 | 32 | 33 |
34 |
35 |
36 | <% endblock %> 37 | 38 | <% block javascript %> 39 | 99 | 114 | <% endblock %> 115 | -------------------------------------------------------------------------------- /templates/menu.html: -------------------------------------------------------------------------------- 1 | 26 | --------------------------------------------------------------------------------