├── .github └── workflows │ └── s3.yml ├── .gitignore ├── README.md ├── docs ├── Docker │ └── index.rst ├── Gitlab-ci │ ├── Gitlab - Kubernetes integration.rst │ ├── Gitlab runner.rst │ └── index.rst ├── Kubernetes │ ├── Different solutions.rst │ ├── Minikube.rst │ └── index.rst ├── Makefile ├── _static │ └── .empty ├── _templates │ └── footer.html ├── conf.py ├── git │ ├── github-merge-bot.rst │ ├── github-review-bot.rst │ ├── github-telegram-notifications.rst │ ├── github.rst │ └── index.rst ├── ifttt │ ├── github.rst │ └── index.rst ├── index.rst ├── lint.rst └── remote-dev │ ├── aws │ └── index.rst │ ├── edit-remote-files-locally.rst │ ├── gpg-forwarding.rst │ ├── index.rst │ ├── lxd │ ├── lxd.rst │ └── nginx.conf │ ├── run-local-files-remotely.rst │ ├── ssh-forwarding.rst │ └── x2go.rst ├── requirements.txt └── tools ├── ec2-dev-bot └── lambda_function.py ├── github-ifttt └── lambda_function.py ├── github-merge-bot └── lambda_function.py ├── github-review-bot ├── lambda_function.py └── text_tree.py └── porting-bot ├── README.md ├── ec2 ├── ec2-deploy.py ├── ec2-run.py └── ec2-script.sh ├── lambda-function.py └── scripts ├── README.md ├── clone_fork.py ├── fork.py ├── merge.py ├── pull-request.py └── review.py /.github/workflows/s3.yml: -------------------------------------------------------------------------------- 1 | name: Update Docs 2 | 3 | on: [push] 4 | 5 | jobs: 6 | build: 7 | 8 | runs-on: ubuntu-latest 9 | 10 | steps: 11 | - uses: actions/checkout@v2 12 | - name: Set up Python 3.7 13 | uses: actions/setup-python@v1 14 | with: 15 | python-version: 3.7 16 | - name: Install dependencies 17 | run: | 18 | # python -m pip install --upgrade pip 19 | pip install sphinx sphinx-autobuild 20 | pip install sphinx_rtd_theme 21 | - name: Rebuild 22 | run: cd docs && make html 23 | - uses: jakejarvis/s3-sync-action@v0.5.1 24 | name: Upload 25 | with: 26 | args: --follow-symlinks 27 | env: 28 | AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }} 29 | AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} 30 | AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} 31 | SOURCE_DIR: 'docs/_build/html' 32 | DEST_DIR: 'itpp.dev/ops/' 33 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | tools/merge-bot/Github-bot-deploy-info.json 2 | tools/merge-bot/github-bot-key.pem 3 | _build -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![License: CC BY-NC-SA 4.0](https://licensebuttons.net/l/by-nc-sa/4.0/80x15.png)](https://creativecommons.org/licenses/by-nc-sa/4.0/) 2 | 3 | Source of https://itpp.dev/ops/ docs 4 | 5 | # How to contribute to docs 6 | 7 | ## Initialization 8 | 9 | * Fork this repo 10 | * Clone to your machine 11 | * Install dependencies: 12 | 13 | sudo pip install sphinx sphinx-autobuild 14 | sudo pip install sphinx_rtd_theme 15 | 16 | ## Contribution 17 | 18 | * Edit files in the repo. 
Check the documentation:
19 | 
20 |     * http://www.sphinx-doc.org/en/stable/rest.html
21 |     * http://www.sphinx-doc.org/en/stable/domains.html
22 |     * http://www.sphinx-doc.org/en/stable/markup/index.html
23 |     * [images.md](images.md)
24 | 
25 | * Try it out:
26 | 
27 |       cd /path/to/odoo-devops/docs
28 |       make html
29 | 
30 |       # (check warnings and errors in the compilation logs and fix them if needed)
31 | 
32 |       # open the result
33 |       google-chrome _build/html/index.html
34 | 
35 | * Make commits, push, create a Pull Request
36 | 
--------------------------------------------------------------------------------
/docs/Docker/index.rst:
--------------------------------------------------------------------------------
1 | Docker
2 | =======================================
3 | 
4 | .. toctree::
5 |    :maxdepth: 2
6 |    :caption: Contents:
7 | 
--------------------------------------------------------------------------------
/docs/Gitlab-ci/Gitlab - Kubernetes integration.rst:
--------------------------------------------------------------------------------
 1 | Gitlab - Kubernetes integration
 2 | =======================================
 3 | 
 4 | You can easily connect an existing Kubernetes cluster to your GitLab project. With a connected cluster you can use Review Apps, deploy your applications and run your pipelines.
 5 | 
 6 | Adding an existing Kubernetes cluster
 7 | -------------------------------------
 8 | 
 9 | In order to add your existing Kubernetes cluster to your project:
10 | 
11 | * Navigate to your project's Operations > Kubernetes page.
12 | 
13 | * Click on Add Kubernetes cluster.
14 | 
15 | * Click on Add an existing Kubernetes cluster and fill in the details:
16 | 
17 |   * Kubernetes cluster name (required) - The name you wish to give the cluster.
18 |   * Environment scope (required) - The environment associated with this cluster. You can leave it as "*".
19 |   * API URL (required) - The URL that GitLab uses to access the Kubernetes API. You can access it locally with kubectl proxy, but you need to make it accessible externally. In the end you should have something like "https://kubernetes.example.com".
20 |   * CA certificate (optional) - If the API is using a self-signed TLS certificate, you'll also need to include the ca.crt contents here.
21 |   * Token - GitLab authenticates against Kubernetes using service tokens, which are scoped to a particular namespace. If you don't have a service token yet, you can follow the Kubernetes documentation to create one. You can also view or create service tokens in the Kubernetes dashboard (under Config > Secrets). The account that issues the service token must have admin privileges on the cluster.
22 |   * Project namespace (optional) - You don't have to fill it in; if you leave it blank, GitLab will create one for you.
23 | 
24 | * Click on Create Kubernetes cluster.
25 | 
26 | After a couple of minutes, your cluster will be ready to go.
27 | 
28 | If you are using a Minikube cluster or just have the Kubernetes Dashboard, you can get the CA certificate and token from the Dashboard: choose the default namespace and click on Secrets. There should be a default token containing both the CA certificate and a token.
29 | 
30 | Installing applications
31 | -----------------------
32 | 
33 | GitLab provides a one-click install for some applications, which will be added directly to your connected Kubernetes cluster.
34 | 
35 | To one-click install applications:
36 | 
37 | * Navigate to your project's Operations > Kubernetes page.
38 | 
39 | * Click on your connected cluster.
40 | 
41 | * Click the install button beside the application you need.
42 | 
43 | You need to install Helm Tiller before you install any other application.
--------------------------------------------------------------------------------
/docs/Gitlab-ci/Gitlab runner.rst:
--------------------------------------------------------------------------------
  1 | GitLab Runner
  2 | =======================================
  3 | 
  4 | There are different ways to install GitLab Runner on your Kubernetes cluster.
  5 | 
  6 | One-click install
  7 | -----------------
  8 | 
  9 | If your Kubernetes cluster is connected to your GitLab project, you can just:
 10 | 
 11 | * Navigate to your project's Operations > Kubernetes page.
 12 | 
 13 | * Click on your connected cluster.
 14 | 
 15 | * Install Helm Tiller by clicking the install button beside it.
 16 | 
 17 | * Install GitLab Runner by clicking the install button beside it.
 18 | 
 19 | Deploy GitLab Runner manually
 20 | -----------------------------
 21 | 
 22 | If you want to configure everything yourself, you can deploy the runner manually.
 23 | 
 24 | First you need to create a namespace for your future deployment:
 25 | ::
 26 | 
 27 |     kubectl create namespace gitlab-runner-ns
 28 | 
 29 | To check your current namespaces:
 30 | ::
 31 | 
 32 |     kubectl get namespaces
 33 | 
 34 | Now set the created namespace as the default one:
 35 | ::
 36 | 
 37 |     kubectl config set-context $(kubectl config current-context) --namespace=gitlab-runner-ns
 38 | 
 39 | For the deployment we will need to create deployment.yaml, config-map.yaml and secret.yaml files.
 40 | 
 41 | Start with config-map.yaml:
 42 | ::
 43 | 
 44 |     apiVersion: v1
 45 |     kind: ConfigMap
 46 |     metadata:
 47 |       name: gitlab-runner-cm
 48 |       namespace: gitlab-runner-ns
 49 |     data:
 50 |       config.toml: |
 51 |         concurrent = 10
 52 |         check_interval = 30
 53 | 
 54 |       entrypoint: |
 55 |         #!/bin/bash
 56 | 
 57 |         set -xe
 58 | 
 59 |         cp /scripts/config.toml /etc/gitlab-runner/
 60 | 
 61 |         # Register the runner
 62 |         /entrypoint register --non-interactive \
 63 |           --url $GITLAB_URL \
 64 |           --executor kubernetes
 65 | 
 66 |         # Start the runner
 67 |         /entrypoint run --user=gitlab-runner \
 68 |           --working-directory=/home/gitlab-runner
 69 | 
 70 | And create the config map with:
 71 | ::
 72 | 
 73 |     kubectl create -f config-map.yaml
 74 | 
 75 | To avoid showing your token in clear text in your deployment file, we need to create secret.yaml with the token as a base64 string:
 76 | ::
 77 | 
 78 |     echo -n "your_token" | base64
 79 | 
 80 | ::
 81 | 
 82 |     apiVersion: v1
 83 |     kind: Secret
 84 |     metadata:
 85 |       name: gitlab-runner-secret
 86 |       namespace: gitlab-runner-ns
 87 |     type: Opaque
 88 |     data:
 89 |       runner-registration-token: 
 90 | 
 91 | Now create the secret with:
 92 | ::
 93 | 
 94 |     kubectl create --validate -f secret.yaml
 95 | 
 96 | And finally the deployment.yaml file:
 97 | ::
 98 | 
 99 |     apiVersion: extensions/v1beta1
100 |     kind: Deployment
101 |     metadata:
102 |       name: gitlab-runner
103 |       namespace: gitlab-runner-ns
104 |     spec:
105 |       replicas: 1
106 |       selector:
107 |         matchLabels:
108 |           name: gitlab-runner
109 |       template:
110 |         metadata:
111 |           labels:
112 |             name: gitlab-runner
113 |         spec:
114 |           containers:
115 |           - name: gitlab-runner
116 |             image: gitlab/gitlab-runner:alpine-v9.3.0
117 |             command: ["/bin/bash", "/scripts/entrypoint"]
118 |             env:
119 |             - name: GITLAB_URL
120 |               value: "https://gitlab.com/"
121 |             - name: REGISTRATION_TOKEN
122 |               valueFrom:
123 |                 secretKeyRef:
124 |                   name: gitlab-runner-secret
125 |                   key: runner-registration-token
126 |             imagePullPolicy: Always
127 |             volumeMounts:
128 |             - name: config
129 |               mountPath: /scripts
130 |             - name: cacerts
131 |               mountPath: /etc/gitlab-runner/certs
132 |               readOnly: true
133 |           restartPolicy: Always
134 |           volumes:
135 |           - name: config
136 |             configMap:
137 |               name: gitlab-runner-cm
138 |           - name: cacerts
139 |             hostPath:
140 |               path: /var/mozilla
141 | 
142 | For creating runners GitLab needs a ClusterRoleBinding with the cluster-admin role, so before deploying we create one:
143 | ::
144 | 
145 |     kubectl create clusterrolebinding gitlab-cluster-admin --clusterrole=cluster-admin --group=system:serviceaccounts --namespace=gitlab-runner-ns
146 | 
147 | And now create the deployment:
148 | ::
149 | 
150 |     kubectl create --validate -f deployment.yaml
--------------------------------------------------------------------------------
/docs/Gitlab-ci/index.rst:
--------------------------------------------------------------------------------
1 | GitLab CI/CD
2 | =======================================
3 | 
4 | .. toctree::
5 |    :maxdepth: 2
6 |    :caption: Contents:
7 | 
8 |    Gitlab - Kubernetes integration
9 |    Gitlab runner
--------------------------------------------------------------------------------
/docs/Kubernetes/Different solutions.rst:
--------------------------------------------------------------------------------
 1 | Kubernetes solutions
 2 | =======================================
 3 | 
 4 | There are many ways to run your Kubernetes cluster on different platforms, from a single-node Minikube with a completely automated setup on your own laptop to a managed cluster on Google Compute Engine.
 5 | 
 6 | In this documentation we will cover the installation of a Minikube cluster on your server or personal computer, to give you an idea of how to quickly configure a minimal working cluster on one machine.
 7 | 
 8 | You can find other platforms and solutions in the `official kubernetes documentation `_
 9 | 
10 | It should make no difference where and how you set up your cluster, so you can pick any of the solutions presented instead.
--------------------------------------------------------------------------------
/docs/Kubernetes/Minikube.rst:
--------------------------------------------------------------------------------
 1 | Minikube
 2 | =======================================
 3 | 
 4 | Minikube is the easiest way to run a single-node Kubernetes cluster locally. Setup is completely automated, so it is just a matter of installing and starting the cluster.
 5 | 
 6 | Installing Minikube
 7 | -------------------
 8 | 
 9 | In order to install Minikube you need to:
10 | 
11 | * Enable Intel Virtualization Technology or AMD virtualization in your computer’s BIOS
12 | * Install `VirtualBox `_ or alternatively install another hypervisor: VMware Fusion, HyperKit, KVM or Hyper-V, depending on your OS
13 | * Install `kubectl `_ according to the instructions
14 | * Install the latest `Minikube `_
15 | 
16 | Starting Minikube
17 | -----------------
18 | To start the cluster you can just run:
19 | ::
20 | 
21 |     minikube start
22 | 
23 | Depending on the hypervisor you want to use, you can specify it with the --vm-driver option and choose the amount of memory you want Minikube to use:
24 | ::
25 | 
26 |     minikube start --memory 4096 --vm-driver virtualbox
27 | 
28 | Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. In this case you need to have Docker installed.
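For example, a minimal sketch (the none driver typically needs to run as root and relies on the local Docker installation):
::

    sudo minikube start --vm-driver=none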
29 | 
30 | Interact with your cluster
31 | --------------------------
32 | 
33 | Now you can access your cluster with kubectl proxy:
34 | 
35 | ::
36 | 
37 |     kubectl proxy --port=8001 &
38 | 
39 | And you can query the API with curl or any browser:
40 | 
41 | ::
42 | 
43 |     curl http://localhost:8001/api/
44 | 
45 | Dashboard
46 | ---------
47 | 
48 | Minikube automatically includes the Kubernetes Dashboard, a web-based UI for Kubernetes clusters. It allows you to monitor and manage applications on your cluster.
49 | 
50 | To access the dashboard you can just type in the console:
51 | ::
52 |     minikube dashboard
53 | 
54 | And it will open in your default browser.
55 | 
56 | Or, to get the URL, you can run:
57 | ::
58 |     minikube dashboard --url
59 | 
60 | Stopping Minikube
61 | -----------------
62 | 
63 | To stop your cluster just run:
64 | 
65 | ::
66 | 
67 |     minikube stop
68 | 
69 | 
70 | 
--------------------------------------------------------------------------------
/docs/Kubernetes/index.rst:
--------------------------------------------------------------------------------
1 | Kubernetes
2 | =======================================
3 | 
4 | .. toctree::
5 |    :maxdepth: 2
6 |    :caption: Contents:
7 | 
8 |    Different solutions
9 |    Minikube
--------------------------------------------------------------------------------
/docs/Makefile:
--------------------------------------------------------------------------------
 1 | # Minimal makefile for Sphinx documentation
 2 | #
 3 | 
 4 | # You can set these variables from the command line.
 5 | SPHINXOPTS    =
 6 | SPHINXBUILD   = sphinx-build
 7 | SPHINXPROJ    = OdooDevOps
 8 | SOURCEDIR     = .
 9 | BUILDDIR      = _build
10 | 
11 | # Put it first so that "make" without argument is like "make help".
12 | help:
13 | 	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14 | 
15 | .PHONY: help Makefile
16 | 
17 | # Catch-all target: route all unknown targets to Sphinx using the new
18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
19 | %: Makefile
20 | 	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
--------------------------------------------------------------------------------
/docs/_static/.empty:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/itpp-labs/odoo-devops-docs/f7f8dbf040bf4b775dcad6ebd6dc6f4b144ca943/docs/_static/.empty
--------------------------------------------------------------------------------
/docs/_templates/footer.html:
--------------------------------------------------------------------------------
1 | {% extends '!footer.html' %}
2 | {%- block extrafooter %}
3 | License: CC BY-NC-SA 4.0
4 | {%- endblock %}
--------------------------------------------------------------------------------
/docs/conf.py:
--------------------------------------------------------------------------------
 1 | # -*- coding: utf-8 -*-
 2 | #
 3 | # Configuration file for the Sphinx documentation builder.
 4 | #
 5 | # This file does only contain a selection of the most common options. For a
 6 | # full list see the documentation:
 7 | # http://www.sphinx-doc.org/en/master/config
 8 | 
 9 | # -- Path setup --------------------------------------------------------------
10 | 
11 | # If extensions (or modules to document with autodoc) are in another directory,
12 | # add these directories to sys.path here. If the directory is relative to the
13 | # documentation root, use os.path.abspath to make it absolute, like shown here.
14 | # 15 | # import os 16 | # import sys 17 | # sys.path.insert(0, os.path.abspath('.')) 18 | 19 | 20 | # -- Project information ----------------------------------------------------- 21 | 22 | project = u'Odoo DevOps' 23 | copyright = u'2020, IT-Projects Labs; 2018-2019, IT-Projects LLC' 24 | author = u'IT-Projects Labs' 25 | 26 | # The short X.Y version 27 | version = u'' 28 | # The full version, including alpha/beta/rc tags 29 | release = u'' 30 | 31 | 32 | # -- General configuration --------------------------------------------------- 33 | 34 | # If your documentation needs a minimal Sphinx version, state it here. 35 | # 36 | # needs_sphinx = '1.0' 37 | 38 | # Add any Sphinx extension module names here, as strings. They can be 39 | # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom 40 | # ones. 41 | extensions = [ 42 | ] 43 | 44 | # Add any paths that contain templates here, relative to this directory. 45 | templates_path = ['_templates'] 46 | 47 | # The suffix(es) of source filenames. 48 | # You can specify multiple suffix as a list of string: 49 | # 50 | # source_suffix = ['.rst', '.md'] 51 | source_suffix = '.rst' 52 | 53 | # The master toctree document. 54 | master_doc = 'index' 55 | 56 | # The language for content autogenerated by Sphinx. Refer to documentation 57 | # for a list of supported languages. 58 | # 59 | # This is also used if you do content translation via gettext catalogs. 60 | # Usually you set "language" from the command line for these cases. 61 | language = None 62 | 63 | # List of patterns, relative to source directory, that match files and 64 | # directories to ignore when looking for source files. 65 | # This pattern also affects html_static_path and html_extra_path . 66 | exclude_patterns = [u'_build', 'Thumbs.db', '.DS_Store'] 67 | 68 | # The name of the Pygments (syntax highlighting) style to use. 69 | pygments_style = 'sphinx' 70 | 71 | 72 | # -- Options for HTML output ------------------------------------------------- 73 | 74 | html_theme = 'alabaster' 75 | 76 | # Theme options are theme-specific and customize the look and feel of a theme 77 | # further. For a list of options available for each theme, see the 78 | # documentation. 79 | # 80 | html_theme_options = { 81 | 'github_banner': True, 82 | 'show_powered_by': False, 83 | 'github_user': 'itpp-labs', 84 | 'github_repo': 'odoo-devops-docs', 85 | } 86 | 87 | html_show_sphinx = False 88 | 89 | # Theme options are theme-specific and customize the look and feel of a theme 90 | # further. For a list of options available for each theme, see the 91 | # documentation. 92 | # 93 | # html_theme_options = {} 94 | 95 | # Add any paths that contain custom static files (such as style sheets) here, 96 | # relative to this directory. They are copied after the builtin static files, 97 | # so a file named "default.css" will overwrite the builtin "default.css". 98 | html_static_path = ['_static'] 99 | 100 | # Custom sidebar templates, must be a dictionary that maps document names 101 | # to template names. 102 | # 103 | # The default sidebars (for documents that don't match any pattern) are 104 | # defined by theme itself. Builtin themes are using these templates by 105 | # default: ``['localtoc.html', 'relations.html', 'sourcelink.html', 106 | # 'searchbox.html']``. 107 | # 108 | # html_sidebars = {} 109 | 110 | 111 | # -- Options for HTMLHelp output --------------------------------------------- 112 | 113 | # Output file base name for HTML help builder. 
114 | htmlhelp_basename = 'OdooDevOpsdoc' 115 | 116 | 117 | # -- Options for LaTeX output ------------------------------------------------ 118 | 119 | latex_elements = { 120 | # The paper size ('letterpaper' or 'a4paper'). 121 | # 122 | # 'papersize': 'letterpaper', 123 | 124 | # The font size ('10pt', '11pt' or '12pt'). 125 | # 126 | # 'pointsize': '10pt', 127 | 128 | # Additional stuff for the LaTeX preamble. 129 | # 130 | # 'preamble': '', 131 | 132 | # Latex figure (float) alignment 133 | # 134 | # 'figure_align': 'htbp', 135 | } 136 | 137 | # Grouping the document tree into LaTeX files. List of tuples 138 | # (source start file, target name, title, 139 | # author, documentclass [howto, manual, or own class]). 140 | latex_documents = [ 141 | (master_doc, 'OdooDevOps.tex', u'Odoo DevOps Documentation', 142 | u'IT-Projects LLC', 'manual'), 143 | ] 144 | 145 | 146 | # -- Options for manual page output ------------------------------------------ 147 | 148 | # One entry per manual page. List of tuples 149 | # (source start file, name, description, authors, manual section). 150 | man_pages = [ 151 | (master_doc, 'odoodevops', u'Odoo DevOps Documentation', 152 | [author], 1) 153 | ] 154 | 155 | 156 | # -- Options for Texinfo output ---------------------------------------------- 157 | 158 | # Grouping the document tree into Texinfo files. List of tuples 159 | # (source start file, target name, title, author, 160 | # dir menu entry, description, category) 161 | texinfo_documents = [ 162 | (master_doc, 'OdooDevOps', u'Odoo DevOps Documentation', 163 | author, 'OdooDevOps', 'One line description of project.', 164 | 'Miscellaneous'), 165 | ] 166 | -------------------------------------------------------------------------------- /docs/git/github-merge-bot.rst: -------------------------------------------------------------------------------- 1 | ====================== 2 | Merge bot for GitHub 3 | ====================== 4 | 5 | The script gives the right to a certain circle of people to merge branches in the repository by sending the certain comment in the pull request. 6 | 7 | Prepare IFTTT's hooks 8 | --------------------- 9 | 10 | * Log in / Sign up at https://ifttt.com/ 11 | * Click on ``Documentation`` button here: https://ifttt.com/maker_webhooks 12 | * Replace ``{event}`` with event name, for example ``travis-not-finished-pr``, ``travis-success-pr`` and ``travis-failed-pr``. Save the links you got. 13 | 14 | Create AWS Lambda function 15 | -------------------------- 16 | 17 | `Create lambda function `__ with following settings: 18 | 19 | * Runtime 20 | 21 | Use ``Python 3.6`` 22 | 23 | * Environment variables 24 | 25 | * ``GITHUB_TOKEN`` -- generate one in https://github.com/settings/tokens . Select scope ``repo``. 26 | * ``USERNAMES`` -- use comma-separated list of Github's usernames without @. 27 | * ``LOG_LEVEL`` -- optional. Set to DEBUG to get detailed logs in AWS CloudWatch. 28 | * ``MSG_RQST_MERGE`` -- message-request for merge. Default: ``I approve to merge it now`` 29 | * ``IFTTT_HOOK_RED_PR``, ``IFTTT_HOOK_GREEN_PR``, ``IFTTT_HOOK_NOT_FINISHED_PR`` -- use IFTTT's hooks 30 | 31 | * Trigger 32 | 33 | Use ``API Gateway``. Once you configure it and save, you will see ``API endpoint`` under Api Gateway **details** section. Use option ``Open`` 34 | 35 | Now register the URL as webhook at github: https://developer.github.com/webhooks/creating/. 
Use the following webhook settings:
37 | 
38 | * **Payload URL** -- the URL
39 | * **Content Type**: application/json
40 | * **Which events would you like to trigger this webhook?** -- *Let me select individual events* and then select ``[x] Issue comments``
41 | 
42 | * Function Code
43 | 
44 |   * Copy-paste this code: https://raw.githubusercontent.com/itpp-labs/odoo-devops-docs/master/tools/github-merge-bot/lambda_function.py
45 | 
46 | * Basic settings
47 | 
48 |   * Increase the function run time (``Timeout``) to 15 sec (the default is 3 sec)
49 | 
50 | Create IFTTT applets
51 | --------------------
52 | 
53 | * **If** -- Service *Webhooks*.
54 | 
55 |   Use ``{event}`` from ``Prepare IFTTT's hooks`` of this instruction. For example: ``Event Name`` = ``travis-not-finished-pr``, ``Event Name`` = ``travis-failed-pr``.
56 | 
57 | * **Then** -- whatever you like. For actions with text ingredients, use the following for failed, successful and not-finished checks:
58 | 
59 |   * ``Value1`` -- Author of the merge
60 |   * ``Value2`` -- Author of the pull-request
61 |   * ``Value3`` -- Link to the pull-request
62 | 
63 | Logs
64 | ----
65 | 
66 | * AWS CloudWatch: https://console.aws.amazon.com/cloudwatch . Choose the ``Logs`` tab
67 | * IFTTT logs: https://ifttt.com/activity
68 | 
69 | 
--------------------------------------------------------------------------------
/docs/git/github-review-bot.rst:
--------------------------------------------------------------------------------
 1 | ======================
 2 | Review bot for GitHub
 3 | ======================
 4 | 
 5 | This GitHub bot posts a review of pull requests with Odoo modules: the list of updated files (installable and non-installable) and the new features to test (according to the doc/changelog.rst file)
 6 | 
 7 | Create AWS Lambda function
 8 | --------------------------
 9 | 
10 | `Create lambda function `__ with the following settings:
11 | 
12 | * Runtime
13 | 
14 |   Use ``Python 3.6``
15 | 
16 | * Environment variables
17 | 
18 |   * ``GITHUB_TOKEN`` -- generate one in https://github.com/settings/tokens . Select scope ``repo``.
19 |   * ``LOG_LEVEL`` -- optional. Set to DEBUG to get detailed logs in AWS CloudWatch.
20 | 
21 | * Trigger
22 | 
23 |   Use ``API Gateway``. Once you configure it and save, you will see ``API endpoint`` under the Api Gateway **details** section. Use option ``Open``
24 | 
25 | Now register the URL as a webhook at github: https://developer.github.com/webhooks/creating/.
26 | Use the following webhook settings:
27 | 
28 | * **Payload URL** -- the URL
29 | * **Content Type**: application/json
30 | * **Which events would you like to trigger this webhook?** -- *Let me select individual events* and then select ``[x] Pull request``
31 | 
32 | * Function Code
33 | 
34 |   * Use these commands:
35 | 
36 |     .. code-block:: console
37 | 
38 |         mkdir /tmp/github-review-bot
39 |         cd /tmp/github-review-bot
40 | 
41 |         pip3 install pyGithub -t .
42 |         wget https://gitlab.com/itpp/odoo-devops/raw/master/tools/github-review-bot/lambda_function.py
43 |         wget https://gitlab.com/itpp/odoo-devops/raw/master/tools/github-review-bot/text_tree.py
44 |         zip -r /tmp/github-review-bot.zip *
45 | 
46 |   * Then set **Code Entry type** to ``Upload a .zip file`` and select the created zip file
47 | * Basic settings
48 | 
49 |   * Increase the function run time (``Timeout``) to 50 sec (the default is 3 sec)
50 | 
51 | Logs
52 | ----
53 | 
54 | * AWS CloudWatch: https://console.aws.amazon.com/cloudwatch . Choose the ``Logs`` tab
55 | 
56 | Roadmap
57 | -------
58 | 
59 | * TODO: Deleted files should be listed with tag ``[DELETED]``
60 | * TODO: Renamed files should be listed with tag ``[RENAMED from path/to/original-file]`` (for new files) and ``[RENAMED]`` (for the original place of the file)
61 | * TODO: New modules (e.g. the root ``__init__.py`` didn't exist) should be marked with tag ``[NEW]``, e.g. ``├─ [NEW] pos_debt_notebook/``
62 | * TODO: Ported modules (the ``installable`` attribute is changed from False to True) should be marked with tag ``[PORT]``, e.g. ``├─ [PORT] pos_debt_notebook/``
63 | * Updating a review doesn't work without write access to the repo: the github API returns 404. See https://gitlab.com/itpp/odoo-devops/issues/3
--------------------------------------------------------------------------------
/docs/git/github-telegram-notifications.rst:
--------------------------------------------------------------------------------
 1 | =================================
 2 |  Notifications to Telegram Group
 3 | =================================
 4 | 
 5 | In this example we make a bot that will send notifications to a telegram group on
 6 | new issues. You can slightly change the script to use other types of events.
 7 | 
 8 | Telegram Bot
 9 | ============
10 | 
11 | * In the telegram client open `BotFather `__
12 | * Send the /newbot command to create a new bot
13 | * Follow the instructions to set the bot name and get the bot token
14 | * Keep your token secure and store it safely; it can be used by anyone to control your bot
15 | 
16 | Telegram Group
17 | ==============
18 | 
19 | Add the created bot to the group where it will send notifications
20 | 
21 | You will need the Group ID. To get one, temporarily add the `Get My ID `__ bot to the group.
22 | 
23 | Secrets
24 | =======
25 | 
26 | Add the following `secrets `__
27 | 
28 | * ``TELEGRAM_TOKEN`` -- bot token
29 | * ``TELEGRAM_CHAT_ID`` -- Group ID. Normally, it's a negative integer
30 | 
31 | Github Actions
32 | ==============
33 | 
34 | Create a ``.github/workflows/main.yml`` file (you can also use the ``[Set up a workflow yourself]`` button on the ``Actions`` tab of the repository page)
35 | 
36 | .. code-block:: yaml
37 | 
38 |     name: Telegram Notifications
39 | 
40 |     on:
41 |       issues:
42 |         types: [opened, reopened, deleted, closed]
43 | 
44 |     jobs:
45 |       notify:
46 | 
47 |         runs-on: ubuntu-latest
48 | 
49 |         steps:
50 |         - name: Send notifications to Telegram
51 |           run: curl -s -X POST https://api.telegram.org/bot${{ secrets.TELEGRAM_TOKEN }}/sendMessage -d chat_id=${{ secrets.TELEGRAM_CHAT_ID }} -d text="${MESSAGE}" >> /dev/null
52 |           env:
53 |             MESSAGE: "Issue ${{ github.event.action }}: \n${{ github.event.issue.html_url }}"
54 | 
55 | Try it out
56 | ==========
57 | 
58 | * Create a new issue
59 | * RESULT: the bot sends a notification
--------------------------------------------------------------------------------
/docs/git/github.rst:
--------------------------------------------------------------------------------
 1 | ==================================
 2 |  Creating Pull Requests in batch
 3 | ==================================
 4 | 
 5 | Prerequisites
 6 | =============
 7 | 
 8 | * Add an SSH key to your GitHub account. See: https://help.github.com/en/articles/adding-a-new-ssh-key-to-your-github-account
 9 | * Install hub. See: https://github.com/github/hub#installation
10 | 
11 | Script
12 | ======
13 | 
14 | Make a script ``make-prs.sh`` with the following content
15 | 
16 | .. code-block:: bash
17 | 
18 |     #!/bin/bash
19 | 
20 |     # ORGANIZATION GITHUB URL
21 |     ORG=itpp-labs
22 |     UPSTREAM_URL_GIT=https://github.com/$ORG
23 | 
24 |     # DEVELOPER INFO
25 |     USERNAME=yelizariev
26 | 
27 |     # WHERE TO CLONE
28 |     DIRECTORY_CLONE=$(pwd)
29 | 
30 |     # DESCRIPTION OF THE UPDATES
31 |     MSG=":shield: travis.yml notifications webhook travis"
32 |     BRANCH_SUFFIX=travis-notifications
33 | 
34 |     REPOS=(
35 |         misc-addons
36 |         pos-addons
37 |         access-addons
38 |         mail-addons
39 |         website-addons
40 |         sync-addons
41 |     )
42 |     BRANCHES=(
43 |         10.0
44 |         11.0
45 |         12.0
46 |     )
47 | 
48 |     for REPO in "${REPOS[@]}"; do
49 |         if [ ! -d $DIRECTORY_CLONE/$REPO ]
50 |         then
51 |             git clone $UPSTREAM_URL_GIT/$REPO.git $DIRECTORY_CLONE/$REPO
52 |             cd $DIRECTORY_CLONE/$REPO
53 |             git remote rename origin upstream
54 |             git remote add origin git@github.com:$USERNAME/$REPO.git
55 |         fi
56 |         cd $DIRECTORY_CLONE/$REPO
57 |         for BRANCH in "${BRANCHES[@]}"; do
58 |             git fetch upstream $BRANCH
59 |             git checkout -b $BRANCH-$BRANCH_SUFFIX upstream/$BRANCH
60 | 
61 |             # CHECK THAT UPDATES ARE NOT DONE YET
62 |             if grep -qx '    on_failure: change' .travis.yml
63 |             then
64 |                 echo "Files are already updated in $REPO#$BRANCH"
65 |             else
66 |                 # MAKE THE UPDATE
67 |                 { echo '  webhooks:'; echo '    on_failure: change'; echo '    urls:'; echo '      - "https://ci.it-projects.info/travis/on_failure/change"';} >> ./.travis.yml
68 |             fi
69 |             git commit -a -m "$MSG"
70 |             git push origin $BRANCH-$BRANCH_SUFFIX
71 |             hub pull-request -b $ORG:$BRANCH -m "$MSG"
72 |         done
73 |     done
74 | 
75 | Update the script according to your needs
76 | 
77 | Run it with ``bash make-prs.sh``
78 | 
--------------------------------------------------------------------------------
/docs/git/index.rst:
--------------------------------------------------------------------------------
 1 | ==========
 2 |  Github
 3 | ==========
 4 | 
 5 | .. toctree::
 6 |    :maxdepth: 2
 7 |    :caption: Contents:
 8 | 
 9 |    github
10 |    github-merge-bot
11 |    github-review-bot
12 |    github-telegram-notifications
--------------------------------------------------------------------------------
/docs/ifttt/github.rst:
--------------------------------------------------------------------------------
 1 | ===============================
 2 |  GitHub Integration with IFTTT
 3 | ===============================
 4 | 
 5 | Trigger Travis Success / Failure
 6 | ================================
 7 | 
 8 | Prepare IFTTT's hooks
 9 | ---------------------
10 | 
11 | * Log in / Sign up at https://ifttt.com/
12 | * Click on the ``Documentation`` button here: https://ifttt.com/maker_webhooks
13 | * Replace ``{event}`` with an event name, for example ``travis-success-pr``. Do the same for the other events, for example ``travis-failed-pr`` and ``travis-failed-branch``. Save the links you got.
14 | 
15 | Create AWS Lambda function
16 | --------------------------
17 | 
18 | `Create lambda function `__ with the following settings:
19 | 
20 | * Runtime
21 | 
22 |   Use ``Python 2.7``
23 | 
24 | * Environment variables
25 | 
26 |   * ``GITHUB_TOKEN`` -- generate one in https://github.com/settings/tokens . No scopes are needed for public repositories.
27 |   * ``IFTTT_HOOK_GREEN_PR``, ``IFTTT_HOOK_RED_PR``, ``IFTTT_HOOK_RED_BRANCH`` -- use IFTTT's hooks.
28 |   * ``IGNORE_BRANCHES`` -- optional. Comma-separated list of branches for which notifications are skipped.
29 |   * ``LOG_LEVEL`` -- optional. Set to ``DEBUG`` to get detailed logs in AWS CloudWatch.
30 | 
31 | * Trigger
32 | 
33 |   Use ``API Gateway``. Once you configure it and save, you will see ``API endpoint`` under the Api Gateway **details** section. Use option ``Open``
34 | 
35 | Now register the URL as a webhook at github: https://developer.github.com/webhooks/creating/.
36 | Use the following webhook settings:
37 | 
38 | * **Payload URL** -- the URL
39 | * **Content Type**: application/json
40 | * **Which events would you like to trigger this webhook?** -- *Let me select individual events* and then select ``[x] Check runs``
41 | 
42 | * Function Code
43 | 
44 |   * Copy-paste this code: https://gitlab.com/itpp/odoo-devops/raw/master/tools/github-ifttt/lambda_function.py
45 | 
46 | Create IFTTT applets
47 | --------------------
48 | 
49 | * **If** -- Service *Webhooks*
50 | 
51 |   Use ``{event}`` from ``Prepare IFTTT's hooks`` of this instruction. For example: ``Event Name`` = ``travis-success-pr``, ``Event Name`` = ``travis-failed-pr`` and ``Event Name`` = ``travis-failed-branch``
52 | 
53 | * **Then** -- whatever you like. For actions with text ingredients use the following:
54 | 
55 |   * ``Value1`` -- Author of the pull-request
56 |   * ``Value2`` -- Link to the pull-request
57 |   * ``Value3`` -- Link to the travis check
58 | 
59 |   and for checks of a stable branch:
60 | 
61 |   * ``Value1`` -- Name of the branch
62 |   * ``Value2`` -- Name of the repo
63 |   * ``Value3`` -- Link to the travis check
64 | 
65 | Travis settings
66 | ---------------
67 | 
68 | * Update ``.travis.yml`` to get a notification in lambda when the travis check is finished. You can configure it to either always notify on failure or to notify only when the previous check was successful. Check the Travis Documentation for details: https://docs.travis-ci.com/user/notifications/#configuring-webhook-notifications
69 | 
70 | * For example:
71 | 
72 |   .. code-block:: yaml
73 | 
74 |       notifications:
75 |         webhooks:
76 |           on_failure: change
77 |           urls:
78 |             - "https://9ltrkrik2l.execute-api.eu-central-1.amazonaws.com/default/TriggerTravis/"
79 | 
80 | Logs
81 | ----
82 | 
83 | * AWS CloudWatch: https://console.aws.amazon.com/cloudwatch . Choose the ``Logs`` tab
84 | * IFTTT logs: https://ifttt.com/activity
85 | 
86 | 
--------------------------------------------------------------------------------
/docs/ifttt/index.rst:
--------------------------------------------------------------------------------
 1 | ========
 2 |  IFTTT
 3 | ========
 4 | 
 5 | .. toctree::
 6 |    :maxdepth: 2
 7 |    :caption: Contents:
 8 | 
 9 |    github
--------------------------------------------------------------------------------
/docs/index.rst:
--------------------------------------------------------------------------------
 1 | Odoo DevOps
 2 | ===========
 3 | 
 4 | .. toctree::
 5 |    :maxdepth: 2
 6 |    :caption: Contents:
 7 | 
 8 |    Docker/index
 9 |    Kubernetes/index
10 |    Gitlab-ci/index
11 |    git/index
12 |    ifttt/index
13 |    lint
14 |    remote-dev/index
--------------------------------------------------------------------------------
/docs/lint.rst:
--------------------------------------------------------------------------------
 1 | =============
 2 |  Lint Checks
 3 | =============
 4 | 
 5 | Preparation
 6 | ===========
 7 | 
 8 | Execute once per computer
 9 | 
10 | .. code-block:: sh
11 | 
12 |     cd
13 |     git clone https://github.com/it-projects-llc/maintainer-quality-tools.git
14 |     cd maintainer-quality-tools/travis
15 |     LINT_CHECK="1" sudo -E bash -x travis_install_nightly 8.0
16 | 
17 |     echo "export PATH=\$PATH:$(pwd)/" >> ~/.bashrc
18 |     source ~/.bashrc
19 | 
20 | Running checks
21 | ==============
22 | 
23 | 
24 | .. code-block:: sh
25 | 
26 |     cd YOUR-PATH/TO/REPOSITORY
27 |     LINT_CHECK="1" TRAVIS_BUILD_DIR="." VERSION="12.0" travis_run_tests 12.0
--------------------------------------------------------------------------------
/docs/remote-dev/aws/index.rst:
--------------------------------------------------------------------------------
  1 | =====================================
  2 |  Remote development on EC2 instances
  3 | =====================================
  4 | 
  5 | An EC2 instance with EBS is a good option for remote development, because you can
  6 | stop the instance and avoid paying for CPU and RAM while you are not working on it.
  7 | 
  8 | To simplify starting and stopping the instance, you can deploy a telegram bot as
  9 | described below.
 10 | 
 11 | 
 12 | .. contents::
 13 |    :local:
 14 | 
 15 | Roadmap
 16 | =======
 17 | 
 18 | * Schedule is not supported yet
 19 | * Domains are not supported yet
 20 | 
 21 | Control commands
 22 | ================
 23 | 
 24 | 
 25 | * /up -- turn on the instance or extend the time to shutdown
 26 | 
 27 |   In response it sends:
 28 | 
 29 |   * IP
 30 |   * Hostname
 31 |   * Time to automatic shutdown
 32 |   * Text: *To shutdown earlier send /shutdown or
 33 |     schedule message "Shutdown"*
 34 | 
 35 | * /status -- get instance info
 36 | * /shutdown -- turn the instance off after confirmation
 37 | * Shutdown -- turn the instance off without confirmation
 38 | 
 39 | 
 40 | Settings
 41 | ========
 42 | 
 43 | When creating the AWS Lambda function, you need to set the following environment variables:
 44 | 
 45 | * TELEGRAM_TOKEN=
 46 | * DOMAIN="USERCODE.example.com"
 47 | * DOMAIN_NO_SSL="USERCODE.nossl.example.com"
 48 | * USER_123_INSTANCE=**, USER_123_CODE=*some-user-code*
 49 | 
 50 |   * 123 is a telegram user ID. You can get one via `Get My ID bot `__
 51 |   * *Instance ID* looks like ``i-07e6...`` and can be found in the Description tab of an existing instance
 52 | * AUTO_SHUTDOWN_SOFT=**
 53 | * AUTO_SHUTDOWN_HARD=**
 54 | 
 55 | 
 56 | * LOG_LEVEL= -- ``DEBUG``, ``INFO``, etc.
 57 | 
 58 | Bot source
 59 | ==========
 60 | 
 61 | See https://github.com/itpp-labs/odoo-devops-docs/blob/master/tools/ec2-dev-bot/lambda_function.py
 62 | 
 63 | Deployment
 64 | ==========
 65 | 
 66 | Create a bot
 67 | ------------
 68 | 
 69 | https://telegram.me/botfather -- follow the instructions to set the bot name and get the bot token
 70 | 
 71 | Create EC2 instance
 72 | -------------------
 73 | 
 74 | You need an instance per user. We recommend using a burstable instance, e.g. `EC2
 75 | T3 instances `__.
 76 | 
 77 | Prepare zip file
 78 | ----------------
 79 | 
 80 | To make a `deployment package `_ execute the following commands::
 81 | 
 82 |     mkdir /tmp/bot
 83 |     cd /tmp/bot
 84 | 
 85 |     pip3 install python-telegram-bot pynamodb --system -t .
 86 |     wget https://raw.githubusercontent.com/itpp-labs/odoo-devops-docs/master/tools/ec2-dev-bot/lambda_function.py -O lambda_function.py
 87 |     # delete built-in or unused dependencies
 88 |     rm -rf botocore* tornado* docutils*
 89 |     zip -r /tmp/bot.zip *
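Optionally, sanity-check the archive before uploading: ``lambda_function.py`` must sit at the archive root so that the default ``lambda_function.lambda_handler`` handler can be resolved (a quick check, assuming ``unzip`` is available)::

    unzip -l /tmp/bot.zip | grep lambda_function.py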
 90 | 
 91 | Create Lambda function
 92 | ----------------------
 93 | 
 94 | * Navigate to https://console.aws.amazon.com/lambda/home
 95 | * Click *Create function*
 96 | * Configure the function as described below
 97 | 
 98 | Runtime
 99 | ~~~~~~~
100 | 
101 | In *AWS: Lambda service*
102 | 
103 | Use ``Python 3.8``
104 | 
105 | Permissions (Role)
106 | ~~~~~~~~~~~~~~~~~~
107 | 
108 | In *AWS: IAM service: Policies*
109 | 
110 | * Create a policy of actions for DynamoDB:
111 | 
112 |   * *Service* -- ``DynamoDB``
113 |   * *Action* -- ``All DynamoDB actions``
114 |   * *Resources* -- ``All Resources``
115 | 
116 | * Create a policy of actions for EC2:
117 | 
118 |   * *Service* -- ``EC2``
119 |   * *Action* -- ``All EC2 actions``
120 |   * *Resources* -- ``All Resources``
121 | 
122 | In *AWS: IAM service: Roles*
123 | 
124 | * Open the role attached to the lambda function
125 | * Attach the created policies
126 | 
127 | Function code
128 | ~~~~~~~~~~~~~
129 | 
130 | * ``Code entry type``: *Upload a .zip file*
131 | * Upload ``bot.zip``
132 | 
133 | Timeout
134 | ~~~~~~~
135 | 
136 | In *AWS: Lambda service*
137 | 
138 | Execution time depends on the telegram server and on instance start/stop times, so set a limit of at least 35 seconds. For your information, checking the instance status happens every 15 seconds, so it's a good idea to set the limit to a multiple of 15 plus a few seconds.
139 | 
140 | Trigger
141 | ~~~~~~~
142 | 
143 | In *AWS: Lambda service*
144 | 
145 | * **API Gateway**. Once you configure it and save, you will see ``Invoke URL`` under the Api Gateway **details** section
146 | * **CloudWatch Events**. Create a new rule for reminders, for example set
147 | 
148 |   * *Rule name* -- ``ec2-dev-bot-cron``
149 |   * *Schedule expression* -- ``rate(1 hour)``
150 | 
151 | Register webhook at telegram
152 | ----------------------------
153 | 
154 | .. code-block:: sh
155 | 
156 |     AWS_API_GATEWAY=XXX
157 |     TELEGRAM_TOKEN=XXX
158 |     curl -XPOST https://api.telegram.org/bot$TELEGRAM_TOKEN/setWebhook --data "url=$AWS_API_GATEWAY" --data "allowed_updates=['message','callback_query']"
--------------------------------------------------------------------------------
/docs/remote-dev/edit-remote-files-locally.rst:
--------------------------------------------------------------------------------
1 | ===================================
2 |  How to edit server files locally
3 | ===================================
4 | 
5 | .. code-block:: sh
6 | 
7 |     sshfs -p 22 -o idmap=user,nonempty USERNAME@REMOTE-SERVER:/path/to/REMOTE/folder /path/to/LOCAL/folder
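To unmount the folder when you are done (a standard FUSE command; the path is the local mount point from the example above):

.. code-block:: sh

    fusermount -u /path/to/LOCAL/folder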
--------------------------------------------------------------------------------
/docs/remote-dev/gpg-forwarding.rst:
--------------------------------------------------------------------------------
 1 | WIP
 2 | 
 3 | ======================
 4 |  GPG Agent Forwarding
 5 | ======================
 6 | 
 7 | To sign your commits on a remote server you can forward the gpg agent via ssh.
 8 | 
 9 | * Execute on the local and the remote machine::
10 | 
11 |     gpgconf --list-dir agent-extra-socket
12 | 
13 |   It will give something like ``/run/user/1000/gnupg/S.gpg-agent.extra``.
14 | 
15 | * Now you need to modify ``/etc/ssh/sshd_config`` on the remote server and set the following setting::
16 | 
17 |     StreamLocalBindUnlink yes
18 | 
19 |   Save the file and restart the ssh server, e.g.::
20 | 
21 |     sudo service ssh restart
22 | 
23 | * Then, when connecting to the server, add the following argument::
24 | 
25 |     ssh user@domain.example -R /run/user/1000/gnupg/S.gpg-agent.extra:/run/user/1000/gnupg/S.gpg-agent.extra
26 | 
27 |   You can also configure it in the ``~/.ssh/config`` file::
28 | 
29 |     Host gpgtunnel
30 |         HostName domain.example
31 |         RemoteForward
32 | 
33 | 
34 | References:
35 |  * https://wiki.gnupg.org/AgentForwarding
36 |  * https://superuser.com/questions/161973/how-can-i-forward-a-gpg-key-via-ssh-agent
--------------------------------------------------------------------------------
/docs/remote-dev/index.rst:
--------------------------------------------------------------------------------
 1 | ====================
 2 |  Remote Development
 3 | ====================
 4 | 
 5 | This section contains instructions to set up a remote development environment, that is, the developer runs Odoo and probably other tools on a remote server rather than on their own machine. Advantages of this approach are:
 6 | 
 7 | * an easy way to provide big computing capacity
 8 | * the same environment from any device
 9 | * an easy way to demonstrate work
10 | 
11 | 
12 | Usage
13 | =====
14 | 
15 | .. toctree::
16 |    :maxdepth: 1
17 | 
18 |    ssh-forwarding
19 |    gpg-forwarding
20 |    run-local-files-remotely
21 |    edit-remote-files-locally
22 |    x2go
23 | 
24 | Containers administration
25 | =========================
26 | 
27 | .. toctree::
28 | 
29 |    lxd/lxd
30 |    aws/index
--------------------------------------------------------------------------------
/docs/remote-dev/lxd/lxd.rst:
--------------------------------------------------------------------------------
1 | ================
2 |  LXD Containers
3 | ================
4 | 
5 | ..
code-block:: sh 6 | 7 | # For understanding LXC see https://wiki.debian.org/LXC 8 | 9 | # Based on: 10 | # lxd + docker: https://stgraber.org/2016/04/13/lxd-2-0-docker-in-lxd-712/ 11 | # lxd network (static ip): https://stgraber.org/2016/10/27/network-management-with-lxd-2-3/ 12 | LXD_NETWORK="dev-network2" 13 | 14 | # install lxd 2.3+ 15 | apt-get install software-properties-common iptables-persistent 16 | 17 | add-apt-repository ppa:ubuntu-lxc/lxd-stable 18 | apt-get update 19 | apt-get dist-upgrade 20 | apt-get install lxd 21 | 22 | # init lxd 23 | lxd init 24 | 25 | # init network 26 | lxc network create ${LXD_NETWORK} 27 | lxc network show ${LXD_NETWORK} # check ipv4.address field 28 | 29 | 30 | ############################ 31 | # Per each Developer 32 | GITHUB_USERNAME="yelizariev" 33 | CONTAINER="${GITHUB_USERNAME}" 34 | SERVER_DOMAIN="${GITHUB_USERNAME}.dev.it-projects.info" 35 | NGINX_CONF="dev-${GITHUB_USERNAME}.conf" 36 | LOCAL_IP="10.37.82.100" # use one from network subnet 37 | PORT="10100" # unique per each developer 38 | 39 | # https://discuss.linuxcontainers.org/t/docker-cannot-write-to-devices-allow/998/3 40 | read -r -d '' RAW_LXC <> /root/.ssh/authorized_keys" && \ 87 | # access for noroot 88 | lxc exec ${CONTAINER} -- bash -c "echo $PASS > /root/noroot-password" && \ 89 | lxc exec ${CONTAINER} -- bash -c "echo noroot:$PASS | chpasswd " && \ 90 | lxc exec ${CONTAINER} -- sudo -u "noroot" bash -c "mkdir -p /home/noroot/.ssh" && \ 91 | lxc exec ${CONTAINER} -- sudo -u "noroot" bash -c "curl --silent https://github.com/${GITHUB_USERNAME}.keys >> /home/noroot/.ssh/authorized_keys" && \ 92 | lxc exec ${CONTAINER} -- sudo -u "noroot" sed -i "s/01;32m/01;93m/" /home/noroot/.bashrc && \ 93 | # Manage Docker as a non-root user https://docs.docker.com/install/linux/linux-postinstall/ 94 | lxc exec ${CONTAINER} -- usermod -aG docker noroot && \ 95 | lxc exec ${CONTAINER} -- usermod -aG sudo noroot && \ 96 | lxc exec ${CONTAINER} -- locale-gen --purge en_US.UTF-8 && \ 97 | lxc exec ${CONTAINER} -- bash -c "echo -e 'LANG=\"en_US.UTF-8\"\nLANGUAGE=\"en_US:en\"\n' > /etc/default/locale" 98 | 99 | lxc config device add ${CONTAINER} sharedcachenoroot disk path=/home/noroot/.cache source=/var/lxc/share/cache && \ 100 | lxc stop ${CONTAINER} && \ 101 | lxc start ${CONTAINER} 102 | 103 | ## nginx on host machine 104 | cd /tmp/ 105 | curl -s https://gitlab.com/itpp/odoo-devops/raw/master/docs/remote-dev/lxd/nginx.conf > nginx.conf 106 | sed -i "s/NGINX_SERVER_DOMAIN/.${SERVER_DOMAIN}/g" nginx.conf 107 | sed -i "s/SERVER_HOST/${LOCAL_IP}/g" nginx.conf 108 | cp nginx.conf /etc/nginx/sites-available/${NGINX_CONF} 109 | ln -s /etc/nginx/sites-available/${NGINX_CONF} /etc/nginx/sites-enabled/${NGINX_CONF} 110 | # then restart nginx in a usual way 111 | 112 | ################### 113 | # Control commands 114 | 115 | # delete container 116 | lxc delete CONTAINER-NAME 117 | 118 | # see iptables rules 119 | iptables -L -t nat 120 | 121 | # delete nat rule 122 | iptables -t nat -D PREROUTING POSITION_NUMBER 123 | -------------------------------------------------------------------------------- /docs/remote-dev/lxd/nginx.conf: -------------------------------------------------------------------------------- 1 | server { 2 | listen 80; 3 | server_name NGINX_SERVER_DOMAIN; 4 | 5 | location / { 6 | proxy_pass http://SERVER_HOST:80; 7 | } 8 | 9 | charset utf-8; 10 | 11 | ## increase proxy buffer to handle some OpenERP web requests 12 | proxy_buffers 16 64k; 13 | proxy_buffer_size 128k; 14 | 15 | ## set 
headers
16 |     proxy_set_header Host $host;
17 |     proxy_set_header X-Real-IP $remote_addr;
18 |     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
19 |     proxy_set_header X-Forwarded-Proto $scheme;
20 | 
21 |     proxy_read_timeout 600s;
22 |     client_max_body_size 200m;
23 | 
24 |     # general proxy settings
25 |     proxy_next_upstream error timeout invalid_header http_500 http_502 http_503;
26 | 
27 |     # by default, do not forward anything
28 |     proxy_redirect off;
29 |     proxy_buffering off;
30 | }
31 | 
--------------------------------------------------------------------------------
/docs/remote-dev/run-local-files-remotely.rst:
--------------------------------------------------------------------------------
 1 | ======================================
 2 |  How to mount local files on a server
 3 | ======================================
 4 | 
 5 | 
 6 | sshfs
 7 | =====
 8 | 
 9 | On your local machine:
10 | 
11 | .. code-block:: sh
12 | 
13 |     # Step 1. Install an ssh server on your local machine
14 |     # TODO
15 |     # Step 2. Configure ssh keys on your local machine
16 |     cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
17 |     # Step 3. Connect to your server
18 |     ssh USERNAME@SERVER -p 22 -A -R 2222:localhost:22
19 | 
20 | 
21 | On your remote server:
22 | 
23 | .. code-block:: sh
24 | 
25 |     # Step 4. Mount your directory on the remote server
26 |     # about allow_other check this: https://github.com/moby/moby/issues/27026#issuecomment-253579983
27 |     sshfs -p 2222 -o idmap=user,nonempty,allow_other \
28 |         LOCALUSERNAME@127.0.0.1:/PATH/TO/LOCAL/FOLDER /PATH/TO/REMOTE/FOLDER
29 | 
30 |     # to unmount:
31 |     fusermount -u /PATH/TO/REMOTE/FOLDER
32 | 
33 | References
34 | ==========
35 | 
36 | * https://superuser.com/questions/616182/how-to-mount-local-directory-to-remote-like-sshfs
--------------------------------------------------------------------------------
/docs/remote-dev/ssh-forwarding.rst:
--------------------------------------------------------------------------------
 1 | ======================
 2 |  SSH agent forwarding
 3 | ======================
 4 | 
 5 | 
 6 | To push commits or get access to private repositories you can use either login-password authentication or ssh keys. In the latter case you may face a problem doing this on a remote server, because your private ssh key is not installed there. The good news is that you don't need to install it there: you can "forward" your ssh keys. Just add ``-A`` to your ssh command or add the following lines to your ssh config (``~/.ssh/config``) on your (local) computer::
 7 | 
 8 |     Host your.dev.server.example.com
 9 |        ForwardAgent yes
10 | 
11 | Then connect to your server and test with::
12 | 
13 |     ssh -T git@github.com
14 | 
15 | For more information see: https://developer.github.com/guides/using-ssh-agent-forwarding/
16 | 
17 | Putty users (Windows)
18 | =====================
19 | 
20 | * install the Pageant SSH agent (pageant.exe): https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html
21 | * add your keys to Pageant
22 | * enable ssh agent forwarding in the PuTTY settings
--------------------------------------------------------------------------------
/docs/remote-dev/x2go.rst:
--------------------------------------------------------------------------------
 1 | =========================
 2 |  Remote desktop via X2GO
 3 | =========================
 4 | 
 5 | `x2go `__ allows you to run a browser (or any other X-server application) on a remote server
 6 | 
 7 | Deploying X2GO server
 8 | =====================
 9 | 
10 | * Connect to your server
11 | * `install x2go server `_ :
12 | 
13 | 
14 | ..
code-block:: sh 15 | 16 | sudo add-apt-repository ppa:x2go/stable && \ 17 | sudo apt-get update && \ 18 | sudo apt-get install -y x2goserver x2goserver-xsession 19 | 20 | * install desktop environment you prefer, e.g. LXDE: 21 | 22 | .. code-block:: sh 23 | 24 | sudo apt-get install lubuntu-desktop 25 | # choose lightdm 26 | 27 | * Install browser `Pale Moon `_ 28 | 29 | .. code-block:: sh 30 | 31 | # http://linux.palemoon.org 32 | sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/stevenpusser/xUbuntu_18.04/ /' > /etc/apt/sources.list.d/home:stevenpusser.list" 33 | wget -nv https://download.opensuse.org/repositories/home:stevenpusser/xUbuntu_18.04/Release.key -O Release.key 34 | sudo apt-key add - < Release.key 35 | sudo apt-get update 36 | sudo apt-get install palemoon 37 | 38 | X2GO Client 39 | =========== 40 | 41 | * install ``x2goclient`` 42 | 43 | Ubuntu: 44 | 45 | .. code-block:: sh 46 | 47 | sudo add-apt-repository ppa:x2go/stable && \ 48 | sudo apt-get update && \ 49 | sudo apt-get install x2goclient 50 | 51 | References: 52 | 53 | * https://www.howtoforge.com/tutorial/x2go-server-ubuntu-14-04/ 54 | * http://wiki.x2go.org/doku.php/doc:installation:x2goclient 55 | 56 | * Run client: 57 | 58 | .. code-block:: sh 59 | 60 | x2goclient 61 | 62 | 63 | * create a new session with the settings below and connect to it (we assume that you have user named "noroot" with ssh keys configured): 64 | 65 | :: 66 | 67 | Host : YOUHOST 68 | Port : 22 69 | Session type: LXDE 70 | [x] Try auto Login 71 | Input / Output: Use Whole Display 72 | Username: noroot 73 | 74 | Firefox usage 75 | ============= 76 | 77 | You may need to disable ``xrender`` settings in ``about:config`` and `Disable hardware acceleration in Firefox `__. For more information see: 78 | 79 | * https://lists.x2go.org/pipermail/x2go-user/2016-August/003914.html 80 | * https://bugzilla.mozilla.org/show_bug.cgi?id=1263222 81 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | sphinx 2 | sphinx_rtd_theme 3 | -------------------------------------------------------------------------------- /tools/ec2-dev-bot/lambda_function.py: -------------------------------------------------------------------------------- 1 | # Copyright 2020 Ivan Yelizariev 2 | # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl). 
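# This module is the AWS Lambda handler behind the EC2 dev-instance Telegram
# bot described in docs/remote-dev/aws/index.rst: it reacts both to Telegram
# webhook updates and to scheduled CloudWatch events (see lambda_handler).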
3 | import json 4 | import logging 5 | import os 6 | import re 7 | import boto3 8 | from datetime import datetime 9 | # https://github.com/python-telegram-bot/python-telegram-bot 10 | from telegram import Update, Bot, ReplyKeyboardMarkup, ReplyKeyboardRemove 11 | 12 | 13 | logger = logging.getLogger() 14 | LOG_LEVEL = os.getenv("LOG_LEVEL") 15 | DEBUG = LOG_LEVEL == "DEBUG" 16 | if LOG_LEVEL: 17 | level = getattr(logging, LOG_LEVEL) 18 | logging.basicConfig(format='%(name)s [%(levelname)s]: %(message)s', level=level) 19 | 20 | bot = Bot(token=os.getenv('TELEGRAM_TOKEN')) 21 | ec2 = boto3.resource('ec2') 22 | 23 | 24 | def lambda_handler(event, context): 25 | # read event 26 | logger.debug("Event: \n%s", json.dumps(event)) 27 | 28 | telegram_payload = None 29 | cloudwatch_time = None 30 | if event.get("source") == "aws.events": 31 | cloudwatch_time = event.get('time') 32 | else: 33 | telegram_payload = json.loads(event.get("body", '{}')) 34 | logger.debug("Telegram event: \n%s", telegram_payload) 35 | 36 | # handle event 37 | try: 38 | if telegram_payload: 39 | handle_telegram(telegram_payload) 40 | elif cloudwatch_time: 41 | handle_cron(cloudwatch_time) 42 | except: 43 | logger.error("Error on handling event", exc_info=True) 44 | 45 | # return ok to telegram server 46 | return {"statusCode": 200, "headers": {}, "body": ""} 47 | 48 | def handle_telegram(telegram_payload): 49 | update = Update.de_json(telegram_payload, bot) 50 | message = update.message 51 | if not message: 52 | return 53 | 54 | if message.text == "/start": 55 | bot.sendMessage(message.chat.id, "This is a private bot to start/stop AWS EC2 instances. Check out the documentation:\nhttps://itpp.dev/ops/remote-dev/aws/index.html") 56 | return 57 | 58 | # check that we know the user 59 | user_id = message.from_user.id 60 | instance_id = os.getenv("USER_%s_INSTANCE" % user_id) 61 | user_code = os.getenv("USER_%s_CODE" % user_id) 62 | if not (instance_id and user_code): 63 | bot.sendMessage(message.chat.id, "Access denied!") 64 | return 65 | 66 | instance = ec2.Instance(instance_id) 67 | # do what the user asks 68 | if message.text == "/up": 69 | start_instance(message, instance, user_code) 70 | if message.text == "/status": 71 | send_status(message, instance, user_code) 72 | elif message.text == "/shutdown": 73 | confirm_buttons = ReplyKeyboardMarkup([["Shutdown", "Cancel"]]) 74 | bot.sendMessage(message.chat.id, "Are you sure?", reply_markup=confirm_buttons) 75 | elif str(message.text).lower() == "shutdown": 76 | stop_instance(message, instance) 77 | elif str(message.text).lower() == "cancel": 78 | bot.sendMessage(message.chat.id, "Canceled", reply_markup=ReplyKeyboardRemove()) 79 | 80 | def handle_cron(cloudwatch_time): 81 | dt = datetime.strptime(cloudwatch_time, TIME_FORMAT) 82 | unixtime = (dt - datetime(1970, 1, 1)).total_seconds() 83 | # TODO 84 | 85 | def start_instance(message, instance, user_code): 86 | bot.sendMessage(message.chat.id, "Instance is starting...") 87 | response = instance.start() 88 | if DEBUG: 89 | bot.sendMessage(message.chat.id, "Response from AWS: %s" % json.dumps(response)) 90 | instance.wait_until_running() 91 | send_status(message, instance, user_code) 92 | 93 | def stop_instance(message, instance): 94 | bot.sendMessage(message.chat.id, "Instance is stopping...", reply_markup=ReplyKeyboardRemove()) 95 | response = instance.stop() 96 | if DEBUG: 97 | bot.sendMessage(message.chat.id, "Response from AWS: %s" % json.dumps(response)) 98 | instance.wait_until_stopped() 99 | send_status(message, instance) 
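# Helper shared by the /up, /status and /shutdown flows: it reports the current
# instance state and, when the instance is running, reminds the user how to
# shut it down.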
100 | 101 | def send_status(message, instance, user_code=None): 102 | msg = ["Status: %s" % instance.state['Name']] 103 | if instance.public_dns_name: 104 | msg.append("Public DNS: %s " % instance.public_dns_name) 105 | 106 | # For byte codes meaning see the docs 107 | # https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ec2.html#EC2.Instance.state 108 | state_code = instance.state['Code'] 109 | if state_code & 255 == 16: 110 | # running 111 | msg.append("") 112 | msg.append("To stop instance click /shutdown or schedule message \"Shutdown\"") 113 | 114 | bot.sendMessage(message.chat.id, '\n'.join(msg)) 115 | -------------------------------------------------------------------------------- /tools/github-ifttt/lambda_function.py: -------------------------------------------------------------------------------- 1 | # Copyright 2019 Ivan Yelizariev 2 | # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl). 3 | import json 4 | import logging 5 | import os 6 | import re 7 | from botocore.vendored.requests.packages import urllib3 8 | 9 | LOG_LEVEL = os.environ.get('LOG_LEVEL') 10 | GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN') 11 | IFTTT_HOOK_RED_PR = os.environ.get('IFTTT_HOOK_RED_PR') 12 | IFTTT_HOOK_RED_BRANCH = os.environ.get('IFTTT_HOOK_RED_BRANCH') 13 | IFTTT_HOOK_GREEN_PR = os.environ.get('IFTTT_HOOK_GREEN_PR') 14 | IGNORE_BRANCHES = os.environ.get('IGNORE_BRANCHES','').split(',') 15 | 16 | logger = logging.getLogger() 17 | if LOG_LEVEL: 18 | logger.setLevel(getattr(logging, LOG_LEVEL)) 19 | 20 | 21 | def lambda_handler(event, context): 22 | logger.debug("Event: \n%s", json.dumps(event)) 23 | payload = json.loads(event["body"]) 24 | logger.debug("Payload: \n%s", json.dumps(payload)) 25 | result = handle_payload(payload) 26 | 27 | if not result: 28 | logger.info("Nothing to do") 29 | 30 | # TODO implement 31 | return { 32 | 'statusCode': 200, 33 | 'body': json.dumps('Done!' 
if result else "Thanks, but nothing to do here")
34 |     }
35 | 
36 | 
37 | def handle_payload(payload):
38 |     # payload: https://developer.github.com/v3/activity/events/types/#webhook-payload-example
39 |     # check_run: https://developer.github.com/v3/checks/runs/#parameters
40 |     check_run = payload.get('check_run')
41 |     if not check_run:
42 |         return
43 |     check_run_head_branch = check_run.get('check_suite').get('head_branch')
44 | 
45 |     conclusion = check_run.get('conclusion')
46 |     logger.debug('conclusion: %s', conclusion)
47 |     if not conclusion:
48 |         # not finished yet
49 |         return
50 | 
51 |     if conclusion in ['neutral', 'cancelled']:
52 |         # ok
53 |         return
54 | 
55 |     # TODO make a stronger check for Travis
56 |     if check_run['name'] == "Travis CI - Pull Request":
57 |         return handle_payload_pr(payload, check_run, conclusion)
58 |     elif check_run['name'] == "Travis CI - Branch" and check_run_head_branch not in IGNORE_BRANCHES:
59 |         return handle_payload_branch(payload, check_run, conclusion)
60 |     else:
61 |         logger.debug('Unknown check name: %s', check_run['name'])
62 |         return
63 | 
64 | 
65 | def handle_payload_pr(payload, check_run, conclusion):
66 |     output_text = check_run['output']['text']
67 |     pull = re.search("/pull(/[0-9]+)", output_text).group(1)
68 |     pulls_url = payload['repository']['pulls_url']
69 |     # pull_info: https://developer.github.com/v3/pulls/#get-a-single-pull-request
70 |     pull_info = get_pull_info(pulls_url, pull)
71 |     login = pull_info['user']['login']
72 |     check_run_html_url = check_run.get('html_url')
73 |     pr_html_url = pull_info.get('html_url')
74 | 
75 |     if conclusion == 'success':
76 |         notify_ifttt(
77 |             IFTTT_HOOK_GREEN_PR,
78 |             value1=login,
79 |             value2=pr_html_url,
80 |             value3=check_run_html_url
81 |         )
82 |     else:
83 |         # failed
84 |         notify_ifttt(
85 |             IFTTT_HOOK_RED_PR,
86 |             value1=login,
87 |             value2=pr_html_url,
88 |             value3=check_run_html_url
89 |         )
90 |     return True
91 | 
92 | 
93 | def handle_payload_branch(payload, check_run, conclusion):
94 |     login = payload.get('sender')['login']
95 |     check_run_html_url = check_run.get('html_url')
96 |     search_repo_result = re.search(r'\/.*\/(.*)\/runs', check_run_html_url)
97 |     repo = search_repo_result.group(1)
98 |     if repo == 'addons-dev':
99 |         return
100 |     check_run_head_branch = check_run.get('check_suite').get('head_branch')
101 |     check_run_details_url = check_run.get('details_url')
102 | 
103 |     if conclusion in ('failed', 'failure'):
104 |         notify_ifttt(
105 |             IFTTT_HOOK_RED_BRANCH,
106 |             value1=check_run_head_branch,
107 |             value2=repo,
108 |             value3=check_run_details_url,
109 |         )
110 |     return True
111 | 
112 | 
113 | def notify_ifttt(hook, **data):
114 |     logger.debug("notify_ifttt: %s", data)
115 |     http = urllib3.PoolManager()
116 |     res = http.request(
117 |         'POST', hook,
118 |         body=json.dumps(data),
119 |         headers={
120 |             'Content-Type': 'application/json',
121 |             'User-Agent': 'aws lambda handler',
122 |         })
123 |     return res
124 | 
125 | 
126 | def get_pull_info(pulls_url, pull):
127 |     url = pulls_url.replace('{/number}', pull)
128 | 
129 |     http = urllib3.PoolManager()
130 |     res = http.request('GET', url, headers={
131 |         'User-Agent': 'aws lambda handler',
132 |         'Authorization': 'token %s' % GITHUB_TOKEN,
133 |     })
134 |     res = json.loads(res.data)
135 |     logger.debug("Pull info via %s: \n%s", url, json.dumps(res))
136 |     return res
137 | 
--------------------------------------------------------------------------------
/tools/github-merge-bot/lambda_function.py:
--------------------------------------------------------------------------------
1 | #
Copyright 2019 Ivan Yelizariev 2 | # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl). 3 | import json 4 | import logging 5 | import os 6 | import re 7 | from botocore.vendored import requests 8 | from botocore.vendored.requests.packages import urllib3 9 | 10 | 11 | RED_STATUSES = ['failure', 'neutral', 'cancelled', 'timed_out', 'action_required', 'error'] 12 | NOT_FINISHED_STATUSES = ['queued', 'in_progress', 'pending'] 13 | GREEN = 'green' 14 | RED = 'red' 15 | NOT_FINISHED = 'not_finished' 16 | LOG_LEVEL = os.environ.get('LOG_LEVEL') 17 | GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN') 18 | USERNAMES = os.environ.get('USERNAMES') 19 | MSG_RQST_MERGE = os.environ.get('MSG_RQST_MERGE', 'I approve to merge it now') 20 | IFTTT_HOOK_RED_PR = os.environ.get('IFTTT_HOOK_RED_PR') 21 | IFTTT_HOOK_GREEN_PR = os.environ.get('IFTTT_HOOK_GREEN_PR') 22 | IFTTT_HOOK_NOT_FINISHED_PR = os.environ.get('IFTTT_HOOK_NOT_FINISHED_PR') 23 | LINK_TO_READ_DOCS = '> sent by [:construction_worker_man: Merge Bot](https://itpp.dev/ops/git/github-merge-bot.html)' 24 | 25 | RESPONSE_200 = { 26 | "statusCode": 200, 27 | "headers": {}, 28 | "body": "" 29 | } 30 | 31 | logger = logging.getLogger() 32 | if LOG_LEVEL: 33 | logger.setLevel(getattr(logging, LOG_LEVEL)) 34 | 35 | 36 | def lambda_handler(event, context): 37 | logger.debug("Event: \n%s", json.dumps(event)) 38 | payload = json.loads(event["body"]) 39 | logger.debug("Payload: \n%s", json.dumps(payload)) 40 | comment = payload.get('comment') 41 | if not comment: 42 | return RESPONSE_200 43 | comment = comment['body'].strip() 44 | if comment == MSG_RQST_MERGE: 45 | owner = payload.get('repository')['owner']['login'] 46 | repo = payload.get('repository')['name'] 47 | pull_request = payload.get('issue')['html_url'] 48 | pull = re.search("/pull(/[0-9]+)", pull_request).group(1) 49 | pull_number = re.search("/pull/([0-9]+)", pull_request).group(1) 50 | pulls_url = payload['repository']['pulls_url'] 51 | # pull_info: https://developer.github.com/v3/pulls/#get-a-single-pull-request 52 | pull_info = get_pull_info(pulls_url, pull) 53 | pull_request_state = pull_info['state'] 54 | 55 | if pull_request_state in ('closed', 'merged'): 56 | logger.debug('State of pull request: %s ', pull_request_state) 57 | return RESPONSE_200 58 | 59 | username = payload.get('comment')['user']['login'] 60 | headers = { 61 | 'Authorization': 'token %s' % GITHUB_TOKEN, 62 | 'Content-Type': 'application/json', 63 | 'User-Agent': 'aws lambda handler' 64 | } 65 | if username in USERNAMES.split(","): 66 | sha_head = pull_info['head']['sha'] 67 | owner_base = pull_info['base']['user']['login'] 68 | repo_head = pull_info['head']['repo']['name'] 69 | status_state = [get_status_pr(owner_base, repo_head, sha_head).get('state')] 70 | logger.debug('Status of state: %s ', status_state) 71 | check_runs = get_status_check_run(owner_base, repo_head, sha_head).get('check_runs') 72 | # Merge a pull request (Merge Button): https://developer.github.com/v3/pulls/ 73 | merge = make_merge_pr(owner, repo, pull_number, headers) 74 | if merge == 200: 75 | # Comments: https://developer.github.com/v3/issues/comments/ 76 | approve_comment = 'Approved by @%s' % username 77 | make_issue_comment(owner, repo, pull_number, headers, approve_comment) 78 | res = status_result(check_runs, status_state) 79 | ifttt_handler(res, pull_info, username) 80 | elif merge == 404: 81 | approve_comment = 'Sorry @%s, I don\'t have access rights to push to this repository' % username 82 | make_issue_comment(owner, repo, pull_number, 
headers, approve_comment) 83 | else: 84 | approve_comment = '@%s. Merge is not successful. See logs' % username 85 | make_issue_comment(owner, repo, pull_number, headers, approve_comment) 86 | else: 87 | approve_comment = 'Sorry @%s, but you don\'t have access to merge it' % username 88 | make_issue_comment(owner, repo, pull_number, headers, approve_comment) 89 | else: 90 | logger.debug('Comment: %s ', comment) 91 | return RESPONSE_200 92 | 93 | 94 | def get_status_check_run(owner_base, repo_head, sha_head): 95 | # GET /repos/:owner/:repo/commits/:ref/check-runs 96 | url = 'https://api.github.com/repos/%s/%s/commits/%s/check-runs' % (owner_base, repo_head, sha_head) 97 | http = urllib3.PoolManager() 98 | res = http.request('GET', url, headers={ 99 | # 'Content-Type': 'application/vnd.github.v3.raw+json', 100 | 'User-Agent': 'aws lambda handler', 101 | 'Accept': 'application/vnd.github.antiope-preview+json', 102 | 'Authorization': 'token %s' % GITHUB_TOKEN, 103 | }) 104 | res = json.loads(res.data) 105 | logger.debug("Status of Check runs: \n%s", json.dumps(res)) 106 | return res 107 | 108 | 109 | def get_status_pr(owner_base, repo_head, sha_head): 110 | # GET /repos/:owner/:repo/commits/:ref/status 111 | url = 'https://api.github.com/repos/%s/%s/commits/%s/status' % (owner_base, repo_head, sha_head) 112 | http = urllib3.PoolManager() 113 | res = http.request('GET', url, headers={ 114 | # 'Content-Type': 'application/vnd.github.v3.raw+json', 115 | 'User-Agent': 'aws lambda handler', 116 | 'Accept': 'application/vnd.github.antiope-preview+json', 117 | 'Authorization': 'token %s' % GITHUB_TOKEN, 118 | }) 119 | res = json.loads(res.data) 120 | logger.debug("Status pull request: \n%s", json.dumps(res)) 121 | return res 122 | 123 | 124 | def status_result(check_runs, status_state): 125 | # get list of statuses check run. May be queued, in_progress or completed. And 126 | # get list of conclusions check run. 
May be success, failure, neutral, cancelled, timed_out, or action_required if status is completed 127 | statuses_check_run = [] 128 | conclusions_check_run = [] 129 | for check_run in check_runs: 130 | statuses_check_run.append(check_run.get('status')) 131 | conclusions_check_run.append(check_run.get('conclusion')) 132 | logger.debug('List of statuses check run: %s ', statuses_check_run) 133 | logger.debug('List of conclusions check run: %s ', conclusions_check_run) 134 | states = statuses_check_run + conclusions_check_run + status_state 135 | logger.debug('States: %s ', states) 136 | if any(elem in states for elem in RED_STATUSES): 137 | return RED 138 | elif any(elem in states for elem in NOT_FINISHED_STATUSES): 139 | return NOT_FINISHED 140 | else: 141 | return GREEN 142 | 143 | 144 | def ifttt_handler(res, pull_info, username): 145 | pr_html_url = pull_info.get('html_url') 146 | author_pr = pull_info['head']['user']['login'] 147 | values = {'value1': username, 148 | 'value2': author_pr, 149 | 'value3': pr_html_url} 150 | if res == RED: 151 | notify_ifttt(IFTTT_HOOK_RED_PR, **values) 152 | return 153 | elif res == GREEN: 154 | # successful 155 | notify_ifttt(IFTTT_HOOK_GREEN_PR, **values) 156 | return 157 | else: 158 | # not finished yet 159 | notify_ifttt(IFTTT_HOOK_NOT_FINISHED_PR, **values) 160 | return 161 | 162 | 163 | def notify_ifttt(hook, **data): 164 | logger.debug("notify_ifttt: %s", data) 165 | http = urllib3.PoolManager() 166 | res = http.request( 167 | 'POST', hook, 168 | body=json.dumps(data), 169 | headers={ 170 | 'Content-Type': 'application/json', 171 | 'User-Agent': 'aws lambda handler', 172 | }) 173 | return res 174 | 175 | 176 | def get_pull_info(pulls_url, pull): 177 | url = pulls_url.replace('{/number}', pull) 178 | http = urllib3.PoolManager() 179 | res = http.request('GET', url, headers={ 180 | 'User-Agent': 'aws lambda handler', 181 | 'Authorization': 'token %s' % GITHUB_TOKEN, 182 | }) 183 | res = json.loads(res.data) 184 | logger.debug("Pull info via %s: \n%s", url, json.dumps(res)) 185 | return res 186 | 187 | 188 | def make_merge_pr(owner, repo, pull_number, headers): 189 | # PUT /repos/:owner/:repo/pulls/:pull_number/merge 190 | url = 'https://api.github.com/repos/%s/%s/pulls/%s/merge' % (owner, repo, pull_number) 191 | data = {"commit_message": "commit is created by :construction_worker_man: Merge Bot: https://odoo-devops.readthedocs.io/en/latest/git/github-merge-bot.html"} 192 | response = requests.request("PUT", url, headers=headers, json=data) 193 | if response.status_code == 200: 194 | logger.debug('Pull Request %s successfully merged', pull_number) 195 | return response.status_code 196 | else: 197 | logger.debug('Response: "%s"', response.content) 198 | return response.status_code 199 | 200 | 201 | def make_issue_comment(owner, repo, pull_number, headers, approve_comment=None): 202 | # POST /repos/:owner/:repo/issues/:issue_number/comments 203 | url = 'https://api.github.com/repos/%s/%s/issues/%s/comments' % (owner, repo, pull_number) 204 | approve_comment += '\n\n{}'.format(LINK_TO_READ_DOCS) 205 | body = {'body': approve_comment} 206 | comment = json.dumps(body) 207 | response = requests.request("POST", url, data=comment, headers=headers) 208 | if response.status_code == 201: 209 | logger.debug('Successfully created Comment "%s"', comment) 210 | else: 211 | logger.debug('Could not create Comment "%s"', comment) 212 | logger.debug('Response: "%s"', response.content) 213 | 
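# For local experimentation (not part of the original file), the handler above can be
# invoked with a hand-built event that mimics GitHub's issue_comment webhook. The
# repository, user and issue values below are placeholders, and GITHUB_TOKEN, USERNAMES
# and the IFTTT_HOOK_* variables must be set in the environment first:
#
#     fake_event = {"body": json.dumps({
#         "comment": {"body": "I approve to merge it now", "user": {"login": "some-maintainer"}},
#         "repository": {"owner": {"login": "some-org"}, "name": "some-repo",
#                        "pulls_url": "https://api.github.com/repos/some-org/some-repo/pulls{/number}"},
#         "issue": {"html_url": "https://github.com/some-org/some-repo/pull/1"},
#     })}
#     lambda_handler(fake_event, None)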
-------------------------------------------------------------------------------- /tools/github-review-bot/lambda_function.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/python 2 | import ast 3 | import os 4 | import json 5 | import logging 6 | import re 7 | from botocore.vendored import requests 8 | from github import Github 9 | from text_tree import draw_tree, parser 10 | from botocore.vendored.requests.packages import urllib3 11 | 12 | LOG_LEVEL = os.environ.get('LOG_LEVEL') 13 | GITHUB_TOKEN = os.environ.get('GITHUB_TOKEN') 14 | LINK_TO_READ_DOCS = '> sent by [:v: Odoo Review Bot](https://odoo-devops.readthedocs.io/en/latest/git/github-review-bot.html)' 15 | 16 | RESPONSE_200 = { 17 | "statusCode": 200, 18 | "headers": {}, 19 | "body": "" 20 | } 21 | 22 | logger = logging.getLogger() 23 | if LOG_LEVEL: 24 | logger.setLevel(getattr(logging, LOG_LEVEL)) 25 | 26 | 27 | def lambda_handler(event, context): 28 | logger.debug("Event: \n%s", json.dumps(event)) 29 | payload = json.loads(event["body"]) 30 | logger.debug("Payload: \n%s", json.dumps(payload)) 31 | pull_request = payload.get('pull_request')['html_url'] 32 | pull_number = re.search("/pull/([0-9]+)", pull_request).group(1) 33 | full_name = payload['repository']['full_name'] 34 | full_name_head_repo = payload['pull_request']['head']['repo']['full_name'] 35 | branch_head_repo = payload['pull_request']['head']['sha'] 36 | state_pr = payload.get('pull_request')['state'] 37 | if state_pr != 'closed': 38 | main(GITHUB_TOKEN, full_name, pull_number, full_name_head_repo, branch_head_repo) 39 | return RESPONSE_200 40 | 41 | 42 | def main(GITHUB_TOKEN, full_name, pull_number, full_name_head_repo, branch_head_repo): 43 | if GITHUB_TOKEN: 44 | github = Github(GITHUB_TOKEN) 45 | else: 46 | print('Please specify github login/password or token') 47 | exit() 48 | 49 | repo = github.get_repo(full_name) 50 | pr = repo.get_pull(int(pull_number)) 51 | review_comments = [] 52 | paths_inst_mod = [] 53 | paths_non_inst_mod = [] 54 | paths_other = [] 55 | list_update_files_pr = pr.get_files() 56 | paths_to_update_files = [pr_file.filename for pr_file in list_update_files_pr] 57 | pr_modules = set([path_to_update_file.split('/')[0] for path_to_update_file in paths_to_update_files if 58 | '/' in path_to_update_file]) 59 | # create dictionary of check from manifest or openerp file for installable or not installable modules 60 | modules_check = {} 61 | for pr_module in pr_modules: 62 | # Get content manifest files in module 63 | # Look: https://developer.github.com/v3/repos/contents/#get-contents 64 | link_to_manifest = get_link_to_manifest(GITHUB_TOKEN, full_name_head_repo, branch_head_repo, pr_module) 65 | logger.debug("Link to manifest: %s", link_to_manifest) 66 | if link_to_manifest is None: 67 | continue 68 | html = requests.get(link_to_manifest) 69 | html = html.text 70 | installable = ast.literal_eval(html).get('installable', True) 71 | modules_check[pr_module] = installable 72 | logger.debug("Dict of check for installable or not installable modules: \n%s", modules_check) 73 | 74 | for path_to_file in paths_to_update_files: 75 | name_module = path_to_file.split('/')[0] 76 | if modules_check != {}: 77 | # check each updated file in accordance with the manifest 78 | if name_module in modules_check: 79 | if modules_check.get(name_module): 80 | paths_inst_mod.append(path_to_file) 81 | else: 82 | paths_non_inst_mod.append(path_to_file) 83 | else: 84 | paths_other.append(path_to_file) 85 | # if updated file 
is not in the module, then we send it to the tree with the installable modules 86 | if '/' not in path_to_file: 87 | paths_other.append(path_to_file) 88 | logger.debug("Paths of update files in installable modules: \n%s", paths_inst_mod) 89 | logger.debug("Paths of update files in non-installable modules: \n%s", paths_non_inst_mod) 90 | logger.debug("Paths of other updated files: \n%s", paths_other) 91 | 92 | tree_inst = None 93 | tree_non_inst = None 94 | tree_other = None 95 | if paths_inst_mod != []: 96 | tree_inst = create_tree(paths_inst_mod) 97 | if paths_non_inst_mod != []: 98 | tree_non_inst = create_tree(paths_non_inst_mod) 99 | if paths_other != []: 100 | tree_other = create_tree(paths_other) 101 | installable_modules = set([module.split('/')[0] for module in paths_inst_mod if '/' in module]) 102 | non_installable_modules = set([module.split('/')[0] for module in paths_non_inst_mod]) 103 | 104 | for pr_file in list_update_files_pr: 105 | path_to_pr_file = pr_file.filename 106 | if 'changelog.rst' in path_to_pr_file and path_to_pr_file.split('/')[0] in installable_modules: 107 | comment_line = 0 108 | change_started = False 109 | for line in pr_file.patch.split('\n')[1:]: 110 | if change_started: 111 | if not line.startswith('+'): 112 | break 113 | else: 114 | if line.startswith('+'): 115 | change_started = True 116 | comment_line += 1 117 | review_comments.append({'path': path_to_pr_file, 118 | 'position': comment_line, 119 | 'body': 'Has to be tested'}) 120 | blank_block = "```\n```" 121 | quantity_inst_mod = len(installable_modules) 122 | quantity_non_inst_mod = len(non_installable_modules) 123 | review_body = "{}\n\n" \ 124 | "{}\n" \ 125 | "{}\n\n" \ 126 | "{}\n" \ 127 | "{}\n\n".format( 128 | '%s' % (tree_other or ''), 129 | '%s' % 'Installable modules remain unchanged.' if len( 130 | installable_modules) == 0 else '**{} installable** module{} updated:'.format( 131 | quantity_inst_mod, ' is' if quantity_inst_mod == 1 else 's are'), 132 | '%s' % (tree_inst if tree_inst else blank_block), 133 | '%s' % 'Not installable modules remain unchanged.' 
if len(
134 |             non_installable_modules) == 0 else '**{} not installable** module{} updated:'.format(
135 |             quantity_non_inst_mod, ' is' if quantity_non_inst_mod == 1 else 's are'),
136 |         '%s' % (tree_non_inst if tree_non_inst else blank_block))
137 | 
138 |     reviews = pr.get_reviews()
139 |     id_review = None
140 |     for review in reviews:
141 |         body_review = review.body
142 |         # If this link is not found, then this is not the same pull request review
143 |         if LINK_TO_READ_DOCS not in body_review:
144 |             continue
145 |         id_review = review.id
146 |         break
147 |     if not review_comments:
148 |         review_body += 'No new features in *doc/changelog.rst* files of installable modules\n\n'
149 |     review_body += '%s' % LINK_TO_READ_DOCS
150 |     if id_review:
151 |         # Update a pull request review
152 |         # Look: https://developer.github.com/v3/pulls/reviews/#update-a-pull-request-review
153 |         update_review(GITHUB_TOKEN, full_name, pull_number, id_review, review_body)
154 |     else:
155 |         # Create a pull request review
156 |         # Look: https://pygithub.readthedocs.io/en/latest/github_objects/PullRequest.html#github.PullRequest.PullRequest.create_review
157 |         pr_commits = pr.get_commits()
158 |         pr.create_review(commit=pr_commits[pr_commits.totalCount - 1],
159 |                          body=review_body,
160 |                          event='COMMENT', comments=review_comments)
161 | 
162 | 
163 | def get_link_to_manifest(GITHUB_TOKEN, full_name_head_repo, branch_head_repo, pr_module):
164 |     logger.debug("Full name head repo: %s", full_name_head_repo)
165 |     logger.debug("Branch head repo: %s", branch_head_repo)
166 |     # GET /repos/:owner/:repo/contents/:path
167 |     url = 'https://api.github.com/repos/%s/contents/%s?ref=%s#' % (full_name_head_repo, pr_module, branch_head_repo)
168 |     http = urllib3.PoolManager()
169 |     res = http.request('GET', url, headers={
170 |         'Accept': 'application/vnd.github.v3.raw',
171 |         'User-Agent': 'https://gitlab.com/itpp/odoo-devops/blob/master/docs/git/github-review-bot.rst',
172 |         'Authorization': 'token %s' % GITHUB_TOKEN})
173 |     list_files = json.loads(res.data)
174 |     logger.debug("list_files: \n%s", list_files)
175 |     for file in list_files:
176 |         if type(file) is not dict:
177 |             continue
178 |         name_file = file.get('name')
179 |         if name_file == '__manifest__.py' or name_file == '__openerp__.py':
180 |             link_to_manifest = file.get('download_url')
181 |             return link_to_manifest
182 | 
183 | 
184 | def create_tree(paths):
185 |     text = path_to_text(paths)
186 |     # https://stackoverflow.com/questions/32151776/visualize-tree-in-bash-like-the-output-of-unix-tree
187 |     tree = draw_tree(parser(text))
188 |     return tree
189 | 
190 | 
191 | def update_review(GITHUB_TOKEN, full_name, pull_number, id_review, review_body):
192 |     # PUT /repos/:owner/:repo/pulls/:pull_number/reviews/:review_id
193 |     url = 'https://api.github.com/repos/%s/pulls/%s/reviews/%s' % (full_name, pull_number, id_review)
194 |     http = urllib3.PoolManager()
195 |     body = {'body': review_body}
196 |     res = http.request('PUT', url, headers={
197 |         'Content-Type': 'application/vnd.github.v3.raw+json',
198 |         'User-Agent': 'https://gitlab.com/itpp/odoo-devops/blob/master/docs/git/github-review-bot.rst',
199 |         'Authorization': 'token %s' % GITHUB_TOKEN,
200 |     }, body=json.dumps(body))
201 |     res = json.loads(res.data)
202 |     logger.debug("Update review pull request: \n%s", json.dumps(res))
203 |     return res
204 | 
205 | 
206 | def paths_to_dict(paths):
207 |     dct_dir = {}
208 |     for item in paths:
209 |         p = dct_dir
210 |         for x in item.split('/'):
211 |             p = p.setdefault(x, {})
212 |     return dct_dir
213 | 
214 | 
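# For illustration (not part of the original file): paths_to_dict above turns the flat
# file paths reported by GitHub into a nested dict, which dict_to_text (defined below)
# flattens into the "parent: children" lines that text_tree.parser() expects:
#
#     paths_to_dict(['my_module/models/model.py', 'my_module/__manifest__.py'])
#     # -> {'my_module': {'models': {'model.py': {}}, '__manifest__.py': {}}}
#     # ... and dict_to_text of that gives:
#     # "my_module: models __manifest__.py\nmodels: model.py\nmodel.py: \n__manifest__.py: \n"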
215 | def path_to_text(paths):
216 |     dct_dir = paths_to_dict(paths)
217 |     text = dict_to_text(dct_dir)
218 |     return text
219 | 
220 | 
221 | def dict_to_text(dct_dir):
222 |     """Converts dict to specially formatted string"""
223 |     text = ""
224 |     for key, d in dct_dir.items():
225 |         text += key + ': ' + ' '.join(d.keys()) + '\n'
226 |         text += dict_to_text(d)
227 |     return text
228 | 
--------------------------------------------------------------------------------
/tools/github-review-bot/text_tree.py:
--------------------------------------------------------------------------------
1 | from functools import reduce
2 | 
3 | branch = '├'
4 | pipe = '|'
5 | end = '└'
6 | dash = '─'
7 | 
8 | 
9 | class Tree(object):
10 |     def __init__(self, tag):
11 |         self.tag = tag
12 | 
13 | 
14 | class Node(Tree):
15 |     def __init__(self, tag, *nodes):
16 |         super(Node, self).__init__(tag)
17 |         self.nodes = list(nodes)
18 | 
19 | 
20 | class Leaf(Tree):
21 |     pass
22 | 
23 | 
24 | def _draw_tree(tree, level, last=False, sup=[]):
25 |     dir_tree = ''
26 |     def update(left, i):
27 |         if i < len(left):
28 |             left[i] = ' '
29 |         return left
30 | 
31 |     dir_tree += (''.join(reduce(update, sup, ['{} '.format(pipe)] * level)) \
32 |                  + (end if last else branch) + '{} '.format(dash) \
33 |                  + (str(tree.tag) + '/' if isinstance(tree, Node) else str(tree.tag)) + '\n')
34 |     if isinstance(tree, Node):
35 |         level += 1
36 |         for node in tree.nodes[:-1]:
37 |             dir_tree += _draw_tree(node, level, sup=sup)
38 |         dir_tree += _draw_tree(tree.nodes[-1], level, True, [level] + sup)
39 |     return dir_tree
40 | 
41 | 
42 | def draw_tree(trees):
43 |     dir_tree = '```\n'
44 |     for tree in trees[:-1]:
45 |         dir_tree += _draw_tree(tree, 0)
46 |     dir_tree += _draw_tree(trees[-1], 0, True, [0])
47 |     dir_tree += '```'
48 |     return dir_tree
49 | 
50 | class Track(object):
51 |     def __init__(self, parent, idx):
52 |         self.parent, self.idx = parent, idx
53 | 
54 | 
55 | def parser(text):
56 |     trees = []
57 |     tracks = {}
58 |     for line in text.splitlines():
59 |         line = line.strip()
60 |         key, value = map(lambda s: s.strip(), line.split(':', 1))
61 |         nodes = value.split()
62 |         if len(nodes):
63 |             parent = Node(key)
64 |             for i, node in enumerate(nodes):
65 |                 tracks[node] = Track(parent, i)
66 |                 parent.nodes.append(Leaf(node))
67 |             curnode = parent
68 |             if curnode.tag in tracks:
69 |                 t = tracks[curnode.tag]
70 |                 t.parent.nodes[t.idx] = curnode
71 |             else:
72 |                 trees.append(curnode)
73 |         else:
74 |             curnode = Leaf(key)
75 |             if curnode.tag in tracks:
76 |                 # well, how do you want to handle it?
77 |                 pass  # ignore
78 |             else:
79 |                 trees.append(curnode)
80 |     return trees
--------------------------------------------------------------------------------
/tools/porting-bot/README.md:
--------------------------------------------------------------------------------
1 | # Porting bot
2 | 
3 | This is a bot for transferring changes between different Odoo versions and writing reviews. It is based on Amazon Web Services (EC2, Lambda, SQS) and Python scripts.
4 | 
5 | ## Requirements
6 | 
7 | To deploy the bot you will need:
8 | 
9 | * A GitHub repository with your Odoo modules, like [this one](https://github.com/it-projects-llc/pos-addons);
10 | 
11 | * A GitHub account from which pull requests and reviews will be made;
12 | 
13 | * An account in [Amazon Web Services](https://aws.amazon.com).
14 | 
15 | ## Deployment and setup
16 | 
17 | In order to deploy the bot and set it up for your repository you will need to:
18 | 
19 | * Go to your [AWS Management Console](https://console.aws.amazon.com);
20 | 
21 | * In the AWS Management Console, click on your login in the top right corner and then click on "My Security Credentials";
22 | 
23 | * On the My Security Credentials page, click on "Access keys (access key ID and secret access key)";
24 | 
25 | * Now click on "Create New Access Key" and download the key;
26 | 
27 | * Follow [this](https://docs.aws.amazon.com/en_us/cli/latest/userguide/cli-chap-install.html) instruction to install the AWS CLI;
28 | 
29 | * Follow [this](https://docs.aws.amazon.com/en_us/cli/latest/userguide/cli-chap-configure.html) instruction to configure the AWS CLI. Use the Access Key and Secret Access Key which you downloaded from the AWS Management Console;
30 | 
31 | * Clone the odoo-devops repository:
32 | 
33 |       $ git clone git@gitlab.com:itpp/odoo-devops.git
34 | 
35 | * Install the Boto3 package with pip or pip3:
36 | 
37 |       $ pip install boto3
38 | 
39 | * Log in to your GitHub account (the one from which pull requests and reviews will be made) and go to the [personal access tokens page](https://github.com/settings/tokens);
40 | 
41 | * Click on the "Generate new token" button and select "repo" in scopes. Then click on "Generate token" and save your generated token;
42 | 
43 | * Set the local environment variable GITHUB_TOKEN_FOR_BOT to the value of your GitHub token:
44 | 
45 |       $ export GITHUB_TOKEN_FOR_BOT=
46 | 
47 | * Run the ec2-deploy.py script (you can use python3 instead):
48 | 
49 |       $ python ./odoo-devops/tools/porting-bot/ec2/ec2-deploy.py
50 | 
51 | -------
52 | 
53 | * Go to the [AWS Lambda page](https://console.aws.amazon.com/lambda/home) and click on github-bot-lambda;
54 | 
55 | * Now click on the "API Gateway" button on the left panel to create an API Gateway for your Lambda function;
56 | 
57 | * In the panel that appears below, pick "Create a new API" and "Open" (you can also choose another security mechanism, but it will require additional setup) and click "Add";
58 | 
59 | * Click on the "Save" button in the top right corner and copy your API endpoint from the panel below;
60 | -------
61 | 
62 | * Log in to your GitHub account (the one with the repository for which you want to use the bot) and go to the repository page;
63 | 
64 | * Click on "Settings" and then on the "Webhooks" button;
65 | 
66 | * Click on "Add webhook" and enter your API endpoint in the Payload URL field. Choose "application/json" in the "Content type" field;
67 | 
68 | * In the field "Which events would you like to trigger this webhook?" press "Let me select individual events.", then choose "Pull requests" and press "Add webhook". A quick way to verify the endpoint is wired up is shown in the snippet after the Usage section below.
69 | 
70 | ## Usage
71 | 
72 | When the bot is deployed and set up for one or more repositories, it will make reviews with changes for testing and a list of all changed modules in all new pull requests.
73 | 
74 | Transfer of changes between branches will soon be implemented and documented.
75 | 
76 | More information can be obtained in the scripts/README.md file, where the scripts which the bot uses are described.
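To check that the API Gateway endpoint and webhook are wired up correctly, you can post a minimal `pull_request` payload to the endpoint yourself. This is only a sketch: the endpoint URL is a placeholder, and the payload contains just the fields that lambda-function.py forwards to SQS and that ec2-run.py checks (`action`, `number`, `repository`, `pull_request`):

    import json
    import requests

    # Placeholder: replace with the API endpoint copied from the API Gateway panel
    API_ENDPOINT = 'https://<your-api-id>.execute-api.<region>.amazonaws.com/default/github-bot-lambda'

    payload = {
        'action': 'closed',
        'number': 1,
        'repository': {'full_name': 'you/your-repo', 'name': 'your-repo'},
        'pull_request': {
            'merged': True,
            'base': {'ref': '10.0'},
            'title': 'Test PR',
        },
    }
    # GitHub delivers the payload as the request body with a JSON content type
    resp = requests.post(API_ENDPOINT, data=json.dumps(payload),
                         headers={'Content-Type': 'application/json'})
    print(resp.status_code)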
77 | 
78 | ## Removal
79 | 
80 | If you want to remove the bot from your AWS, simply run the deploy script with the argument "--remove_bot":
81 | 
82 |       $ python ./odoo-devops/tools/porting-bot/ec2/ec2-deploy.py --remove_bot
--------------------------------------------------------------------------------
/tools/porting-bot/ec2/ec2-deploy.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import time
4 | from zipfile import ZipFile
5 | import argparse
6 | import boto3
7 | 
8 | 
9 | def deploy_bot(github_token, deployment_info, info_filename):
10 |     path_to_info_filename = '/'.join(os.path.realpath(__file__).split('/')[:-2]) + '/' + info_filename
11 | 
12 |     queue_name = deployment_info['queue_name']
13 |     key_name = deployment_info['key_name']
14 |     role_name_ec2 = deployment_info['role_name_ec2']
15 |     role_name_lambda = deployment_info['role_name_lambda']
16 |     lambda_name = deployment_info['lambda_name']
17 |     instance_profile_name = deployment_info['instance_profile_name']
18 |     git_author = deployment_info['git_author']
19 |     hook_exists = deployment_info['hook_exists']
20 |     hook_created = deployment_info['hook_created']
21 | 
22 |     print('Starting deployment process.')
23 |     user_data = open('/'.join(os.path.realpath(__file__).split('/')[:-1]) + '/ec2-script.sh').read()
24 |     # GIT_AUTHOR is assumed to look like 'username email' (see the --git_author help below)
25 |     user_data += '\nsudo git config --global user.name {}'.format(git_author.split()[0])
26 |     user_data += '\nsudo git config --global user.email {}'.format(git_author.split()[-1])
27 | 
28 |     role_policies_for_ec2 = ['arn:aws:iam::aws:policy/AmazonSQSFullAccess',
29 |                              'arn:aws:iam::aws:policy/AmazonEC2FullAccess',
30 |                              'arn:aws:iam::aws:policy/AWSLambdaExecute',
31 |                              'arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM',
32 |                              'arn:aws:iam::aws:policy/AmazonSSMFullAccess']
33 |     deployment_info['role_policies_for_ec2'] = role_policies_for_ec2
34 | 
35 |     role_policies_for_lambda = ['arn:aws:iam::aws:policy/AmazonSQSFullAccess',
36 |                                 'arn:aws:iam::aws:policy/AmazonEC2FullAccess']
37 |     deployment_info['role_policies_for_lambda'] = role_policies_for_lambda
38 | 
39 |     if hook_exists == '':
40 |         hook_exists = 'none'
41 | 
42 |     if hook_created == '':
43 |         hook_created = 'none'
44 | 
45 |     ssm_parameters = {
46 |         'QUEUE_NAME': queue_name,
47 |         'SHUTDOWN_TIME': '60',
48 |         'GITHUB_TOKEN_FOR_BOT': github_token,
49 |         'GIT_AUTHOR': git_author,
50 |         'WEBHOOK_WHEN_PORTING_PR_EXISTS': hook_exists,
51 |         'WEBHOOK_WHEN_PORTING_PR_CREATED': hook_created
52 |     }
53 | 
54 |     deployment_info['ssm_parameters'] = ssm_parameters
55 | 
56 |     sqs_response = create_sqs(queue_name)
57 |     print('SQS queue {} created'.format(queue_name))
58 |     deployment_info['sqs_queue_url'] = sqs_response['QueueUrl']
59 | 
60 |     create_key_pair_for_ec2(key_name)
61 |     print('Key pair for EC2 {} created'.format(key_name))
62 | 
63 |     create_ssm_parameters(ssm_parameters)
64 |     for name in ssm_parameters:
65 |         print('SSM parameter {} created'.format(name))
66 | 
67 |     create_role(role_name_ec2, 'ec2.amazonaws.com', role_policies_for_ec2)
68 |     print('IAM role {} created'.format(role_name_ec2))
69 | 
70 |     iam_response = create_instance_profile(instance_profile_name, role_name_ec2)
71 |     instance_profile_arn = iam_response['InstanceProfile']['Arn']
72 |     print('Instance profile {} created'.format(instance_profile_name))
73 | 
74 |     ec2_response = create_ec2_instance(instance_profile_name, instance_profile_arn, key_name, user_data)
75 |     instance_id = ec2_response['Instances'][0]['InstanceId']
76 |     print('EC2 instance (id: {})
created'.format(instance_id))
77 |     deployment_info['ec2_instance_id'] = instance_id
78 | 
79 |     iam_response = create_role(role_name_lambda, 'lambda.amazonaws.com', role_policies_for_lambda)
80 |     role_arn = iam_response['Role']['Arn']
81 |     print('IAM role {} created'.format(role_name_lambda))
82 | 
83 |     time.sleep(10)
84 | 
85 |     create_lambda_function(role_arn, lambda_name, instance_id, queue_name)
86 |     print('Lambda function {} created'.format(lambda_name))
87 | 
88 |     with open(path_to_info_filename, 'w') as fp:
89 |         json.dump(deployment_info, fp, indent=4)
90 |     print('Json file {} with deployment info is written'.format(info_filename))
91 | 
92 |     print('Deployment process succeeded.')
93 | 
94 | 
95 | def remove_bot(info_filename):
96 |     path_to_info_filename = '/'.join(os.path.realpath(__file__).split('/')[:-2]) + '/' + info_filename
97 | 
98 |     deployment_info = read_deploy_info(path_to_info_filename)
99 |     queue_name = deployment_info['queue_name']
100 |     key_name = deployment_info['key_name']
101 |     path_to_key = '/'.join(os.path.realpath(__file__).split('/')[:-2]) + '/' + key_name + '.pem'
102 | 
103 |     role_name_ec2 = deployment_info['role_name_ec2']
104 |     role_name_lambda = deployment_info['role_name_lambda']
105 |     lambda_name = deployment_info['lambda_name']
106 |     ec2_instance_id = deployment_info['ec2_instance_id']
107 |     ssm_parameters = list(deployment_info['ssm_parameters'].keys())
108 |     sqs_queue_url = deployment_info['sqs_queue_url']
109 |     role_policies_for_ec2 = deployment_info['role_policies_for_ec2']
110 |     role_policies_for_lambda = deployment_info['role_policies_for_lambda']
111 |     instance_profile_name = deployment_info['instance_profile_name']
112 | 
113 |     sqs_client = boto3.client('sqs')
114 |     ec2_client = boto3.client('ec2')
115 |     lambda_client = boto3.client('lambda')
116 |     ssm_client = boto3.client('ssm')
117 | 
118 |     print('Starting removal process.')
119 | 
120 |     sqs_client.delete_queue(QueueUrl=sqs_queue_url)
121 |     print('SQS queue {} removed'.format(queue_name))
122 | 
123 |     ec2_client.delete_key_pair(KeyName=key_name)
124 |     print('Key pair for EC2 {} removed'.format(key_name))
125 | 
126 |     ssm_client.delete_parameters(Names=ssm_parameters)
127 |     for name in ssm_parameters:
128 |         print('SSM parameter {} removed'.format(name))
129 | 
130 |     delete_instance_profile(instance_profile_name, role_name_ec2)
131 |     print('Instance profile {} removed'.format(instance_profile_name))
132 | 
133 |     delete_role(role_name_ec2, role_policies_for_ec2)
134 |     print('IAM role {} removed'.format(role_name_ec2))
135 | 
136 |     ec2_client.terminate_instances(InstanceIds=[ec2_instance_id])
137 |     print('EC2 instance (id: {}) terminated'.format(ec2_instance_id))
138 | 
139 |     delete_role(role_name_lambda, role_policies_for_lambda)
140 |     print('IAM role {} removed'.format(role_name_lambda))
141 | 
142 |     lambda_client.delete_function(FunctionName=lambda_name)
143 |     print('Lambda function {} removed'.format(lambda_name))
144 | 
145 |     os.remove(path_to_key)
146 |     print('Key file {}.pem deleted'.format(key_name))
147 | 
148 |     os.remove(path_to_info_filename)
149 |     print('Json file {} with deployment info is deleted'.format(info_filename))
150 | 
151 |     print('Removal process succeeded.')
152 | 
153 | 
154 | def read_deploy_info(info_filename):
155 |     with open(info_filename, 'r') as fp:
156 |         deployment_info = json.load(fp)
157 |     return deployment_info
158 | 
159 | 
160 | def create_ec2_instance(instance_profile_name, instance_profile_arn, key_name, user_data):
161 |     ec2_client = boto3.client('ec2')
162 | 
163 |     response = 
ec2_client.run_instances( 164 | BlockDeviceMappings=[ 165 | { 166 | 'DeviceName': '/dev/xvda', 167 | 'Ebs': { 168 | 169 | 'DeleteOnTermination': True, 170 | 'VolumeSize': 8, 171 | 'VolumeType': 'gp2' 172 | }, 173 | }, 174 | ], 175 | KeyName=key_name, 176 | UserData=user_data, 177 | ImageId='ami-0cd3dfa4e37921605', 178 | InstanceType='t2.micro', 179 | MaxCount=1, 180 | MinCount=1, 181 | Monitoring={ 182 | 'Enabled': False 183 | }, 184 | SecurityGroupIds=[ 185 | 'sg-ad82ddc1', 186 | ] 187 | ) 188 | time.sleep(30) 189 | ec2_client.associate_iam_instance_profile( 190 | IamInstanceProfile={ 191 | 'Arn': instance_profile_arn, 192 | 'Name': instance_profile_name 193 | }, 194 | InstanceId=response['Instances'][0]['InstanceId'] 195 | ) 196 | 197 | return response 198 | 199 | 200 | def create_ssm_parameters(ssm_parameters): 201 | ssm_client = boto3.client('ssm') 202 | for name in ssm_parameters: 203 | ssm_client.put_parameter( 204 | Name=name, 205 | Value=ssm_parameters[name], 206 | Type='SecureString', 207 | Overwrite=True 208 | ) 209 | 210 | 211 | def create_key_pair_for_ec2(key_name): 212 | ec2_client = boto3.client('ec2') 213 | response = ec2_client.create_key_pair(KeyName=key_name) 214 | 215 | path_to_key = '{}/{}.pem'.format('/'.join(os.path.realpath(__file__).split('/')[:-2]), key_name) 216 | 217 | with open(path_to_key, 'w') as key: 218 | key.write(response['KeyMaterial']) 219 | os.chmod(path_to_key, 0o400) 220 | 221 | return response 222 | 223 | 224 | def create_lambda_function(function_role, function_name, ec2_instance_id, queue_name): 225 | lambda_client = boto3.client('lambda') 226 | 227 | path_to_lambda = '/'.join(os.path.realpath(__file__).split('/')[:-2]) + '/lambda-function.py' 228 | zipf = ZipFile('lambda.zip', 'w') 229 | zipf.write(path_to_lambda, os.path.basename(path_to_lambda)) 230 | zipf.close() 231 | 232 | with open('./lambda.zip', 'rb') as lambda_zip: 233 | lambda_code = lambda_zip.read() 234 | 235 | response = lambda_client.create_function( 236 | FunctionName=function_name, 237 | Runtime='python3.6', 238 | Role=function_role, 239 | Handler='lambda-function.handler', 240 | Code={'ZipFile': lambda_code}, 241 | Environment={ 242 | 'Variables': { 243 | 'INSTANCE_ID': ec2_instance_id, 244 | 'QUEUE_NAME': queue_name 245 | } 246 | } 247 | ) 248 | # TODO: connect api gateway to lambda 249 | '''lambda_client.add_permission( 250 | FunctionName=function_name, 251 | StatementId=function_name + "-ID", 252 | Action="lambda:InvokeFunction", 253 | Principal="apigateway.amazonaws.com", 254 | SourceArn="arn:aws:execute-api:" + self.region + ":" + self.getAccountId() + ":" + apiId + "/*/" + httpMethod + "/" + httpPath, 255 | # SourceAccount='string', 256 | # Qualifier='string' 257 | 258 | )''' 259 | os.remove('lambda.zip') 260 | return response 261 | 262 | 263 | def create_role(role_name, service, role_policies): 264 | iam_client = boto3.client('iam') 265 | 266 | assume_role_policy_document = json.dumps({ 267 | "Version": "2012-10-17", 268 | "Statement": [ 269 | { 270 | "Effect": "Allow", 271 | "Principal": { 272 | "Service": service 273 | }, 274 | "Action": "sts:AssumeRole" 275 | } 276 | ] 277 | }) 278 | 279 | response = iam_client.create_role( 280 | Path='/service-role/', 281 | RoleName=role_name, 282 | AssumeRolePolicyDocument=assume_role_policy_document 283 | ) 284 | for policy in role_policies: 285 | iam_client.attach_role_policy( 286 | RoleName=response['Role']['RoleName'], 287 | PolicyArn=policy 288 | ) 289 | return response 290 | 291 | 292 | def create_instance_profile(profile_name, 
role_name):
293 |     iam_client = boto3.client('iam')
294 |     iam = boto3.resource('iam')
295 | 
296 |     response = iam_client.create_instance_profile(
297 |         InstanceProfileName=profile_name,
298 |         Path='/'
299 |     )
300 | 
301 |     instance_profile = iam.InstanceProfile(profile_name)
302 | 
303 |     instance_profile.add_role(
304 |         RoleName=role_name
305 |     )
306 |     return response
307 | 
308 | 
309 | def create_api_gateway(function_name):
310 |     apigateway_client = boto3.client('apigateway')
311 |     # TODO: create api gateway
312 | 
313 | 
314 | def create_sqs(queue_name):
315 |     sqs_client = boto3.client('sqs')
316 |     response = sqs_client.create_queue(
317 |         QueueName=queue_name
318 |     )
319 |     return response
320 | 
321 | 
322 | def delete_role(role_name, role_policies):
323 |     iam_client = boto3.client('iam')
324 |     for policy_arn in role_policies:
325 |         iam_client.detach_role_policy(
326 |             RoleName=role_name,
327 |             PolicyArn=policy_arn
328 |         )
329 |     iam_client.delete_role(RoleName=role_name)
330 | 
331 | 
332 | def delete_instance_profile(profile_name, role_name):
333 |     iam = boto3.resource('iam')
334 | 
335 |     instance_profile = iam.InstanceProfile(profile_name)
336 | 
337 |     instance_profile.remove_role(
338 |         RoleName=role_name
339 |     )
340 |     instance_profile.delete()
341 | 
342 | 
343 | def main():
344 |     parser = argparse.ArgumentParser()
345 |     parser.add_argument(
346 |         "--github_token",
347 |         help="Token from a GitHub account. If the token is not specified, it"
348 |              " will be taken from the GITHUB_TOKEN_FOR_BOT environment variable.",
349 |         default=os.getenv("GITHUB_TOKEN_FOR_BOT"))
350 |     parser.add_argument(
351 |         "--git_author",
352 |         help="Author info to use in commits. If not specified, it"
353 |              " will be taken from the GIT_AUTHOR environment variable.",
354 |         default=os.getenv("GIT_AUTHOR"))
355 |     parser.add_argument(
356 |         "--key_name",
357 |         help="Name of a key in the EC2 key pair to be created. Default value is \"github-bot-key\".",
358 |         default="github-bot-key")
359 |     parser.add_argument(
360 |         "--queue_name",
361 |         help="Name of a queue to be created in SQS. Default value is \"github-bot-queue\".",
362 |         default="github-bot-queue")
363 |     parser.add_argument(
364 |         "--lambda_name",
365 |         help="Name of a Lambda function to be created. Default value is \"github-bot-lambda\".",
366 |         default="github-bot-lambda")
367 |     parser.add_argument(
368 |         "--role_name_lambda",
369 |         help="Name of a role to be created for Lambda. Default value is \"github-bot-lambda-role\".",
370 |         default="github-bot-lambda-role")
371 |     parser.add_argument(
372 |         "--role_name_ec2",
373 |         help="Name of a role to be created for EC2. Default value is \"github-bot-ec2-role\".",
374 |         default="github-bot-ec2-role")
375 |     parser.add_argument(
376 |         "--instance_profile_name",
377 |         help="Name of an instance profile to be created for EC2. Default value is \"github-instance-profile-name\".",
378 |         default="github-instance-profile-name")
379 |     parser.add_argument(
380 |         "--webhook_when_porting_pr_exists",
381 |         help="URL of a webhook to call when a porting PR already exists. Empty by default.",
382 |         default='')
383 |     parser.add_argument(
384 |         "--webhook_when_porting_pr_created",
385 |         help="URL of a webhook to call when a porting PR is created. Empty by default.",
386 |         default='')
387 |     parser.add_argument(
388 |         "--info_filename",
389 |         help="Name of the json file with deployment information to be created."
390 | " Default value is \"Github-bot-deploy-info.json\".", 391 | default="Github-bot-deploy-info.json") 392 | parser.add_argument( 393 | '--remove_bot', 394 | help="Option for removing already deployed bot from AWS.", dest='remove_bot', 395 | action='store_true') 396 | 397 | args = parser.parse_args() 398 | if args.github_token is None: 399 | print('You need to specify github token in local variable GITHUB_TOKEN_FOR_BOT or with --github_token argument') 400 | else: 401 | if not args.remove_bot: 402 | deployment_info = {'key_name': args.key_name, 403 | 'queue_name': args.queue_name, 404 | 'lambda_name': args.lambda_name, 405 | 'role_name_lambda': args.role_name_lambda, 406 | 'role_name_ec2': args.role_name_ec2, 407 | 'instance_profile_name': args.instance_profile_name, 408 | 'git_author': args.git_author, 409 | 'hook_exists': args.webhook_when_porting_pr_exists, 410 | 'hook_created': args.webhook_when_porting_pr_created} 411 | 412 | deploy_bot(args.github_token, deployment_info, args.info_filename) 413 | else: 414 | remove_bot(args.info_filename) 415 | 416 | 417 | if __name__ == "__main__": 418 | main() 419 | -------------------------------------------------------------------------------- /tools/porting-bot/ec2/ec2-run.py: -------------------------------------------------------------------------------- 1 | """Script for ec2 instance to run.""" 2 | 3 | import json 4 | import boto3 5 | from subprocess import Popen, call, check_output 6 | import requests 7 | import datetime 8 | import os 9 | import io 10 | 11 | 12 | def write_in_log(log_message): 13 | """ 14 | Writes log messages to log file. 15 | 16 | :param log_message: 17 | Text of the message. 18 | """ 19 | 20 | now = datetime.datetime.now() 21 | if not os.path.isdir('/home/ec2-user/logs-github-bot/'): 22 | os.mkdir('/home/ec2-user/logs-github-bot/') 23 | with open('/home/ec2-user/logs-github-bot/{}.txt'.format(now.strftime('%Y-%m-%d')), 'a') as logfile: 24 | logfile.write('{} {}\n'.format(now.strftime('%Y-%m-%d %H:%M:%S'), log_message)) 25 | 26 | print(log_message) 27 | 28 | 29 | def write_message(message): 30 | """ 31 | Writes message from queue to logs. 32 | 33 | :param message: 34 | Text of the message. 35 | """ 36 | 37 | now = datetime.datetime.now() 38 | message_num = 1 39 | if not os.path.isdir('/home/ec2-user/logs-github-bot/messages'): 40 | os.mkdir('/home/ec2-user/logs-github-bot/messages') 41 | while os.path.isfile('/home/ec2-user/logs-github-bot/messages/{}-{}.txt'.format(now.strftime('%Y-%m-%d'), 42 | message_num)): 43 | message_num += 1 44 | with io.open('/home/ec2-user/logs-github-bot/messages/{}-{}.txt'.format(now.strftime('%Y-%m-%d'), message_num), 45 | 'w', encoding="utf-8") as file: 46 | file.write(unicode(message)) 47 | 48 | 49 | def update_repository(path): 50 | """ 51 | Updates repo in specified path. 52 | 53 | :param path: 54 | Path of folder where repo is located. 55 | 56 | """ 57 | 58 | call(['git', '-C', path, 'fetch', '--all']) 59 | call(['git', '-C', path, 'reset', '--hard', 'origin']) 60 | 61 | 62 | def update_bot(): 63 | """ 64 | Updates bot itself. 65 | """ 66 | 67 | call(['sudo', 'git', '-C', 'odoo-devops', 'fetch', '--all']) 68 | call(['sudo', 'git', '-C', 'odoo-devops', 'reset', '--hard', 'origin']) 69 | 70 | 71 | def process_message(msg_body, required_fields, github_token, git_author=None, 72 | hook_exists=None, hook_created=None): 73 | """ 74 | Processes message. 75 | 76 | :param msg_body: 77 | Message to process in dictionary format. 
78 |     :param required_fields:
79 |         Fields which must be in the message body to process it.
80 |     :param github_token:
81 |         Token from a GitHub account.
82 |     :param git_author:
83 |         Author info to use in commits.
84 |     :return:
85 |         If the message is processed correctly, returns True.
86 |     """
87 | 
88 |     successful = False
89 | 
90 |     if all(fld in msg_body for fld in required_fields):
91 |         full_repo_name = msg_body['repository']['full_name']
92 |         repo_name = msg_body['repository']['name']
93 | 
94 |         repo_path = '/home/ec2-user/repositories/{}'.format(repo_name)
95 |         action = msg_body['action']
96 |         merged = msg_body['pull_request']['merged']
97 |         base_branch = msg_body['pull_request']['base']['ref']
98 | 
99 |         if action == 'closed' and merged and base_branch in ['10.0', '11.0']:
100 |             next_branch = str(int(base_branch.split('.')[0]) + 1) + '.0'
101 | 
102 |             if next_branch in ['11.0', '12.0']:
103 | 
104 |                 write_in_log('forking repo: {}'.format(full_repo_name))
105 | 
106 |                 Popen(['python', '/home/ec2-user/odoo-devops/tools/porting-bot/scripts/fork.py',
107 |                        full_repo_name, '--github_token', github_token]).wait()
108 |                 write_in_log('fork complete')
109 | 
110 |                 if os.path.isdir(repo_path):
111 |                     write_in_log('updating repo in {}'.format(repo_path))
112 |                     update_repository(repo_path)
113 |                     write_in_log('update complete')
114 | 
115 |                 else:
116 |                     write_in_log('cloning fork repo in {}'.format(repo_path))
117 |                     Popen(['python', '/home/ec2-user/odoo-devops/tools/porting-bot/scripts/clone_fork.py',
118 |                            repo_name, repo_path, '--github_token', github_token]).wait()
119 |                     write_in_log('clone complete')
120 | 
121 |                 write_in_log('merging repo: {}'.format(full_repo_name))
122 |                 os.chdir(repo_path)
123 | 
124 |                 Popen(['python', '/home/ec2-user/odoo-devops/tools/porting-bot/scripts/merge.py',
125 |                        base_branch, next_branch, '--auto_resolve', '--auto_push',
126 |                        '--author', git_author]).wait()
127 |                 write_in_log('merge in branch {} complete'.format(next_branch))
128 | 
129 |                 merge_branch = check_output(['git', 'rev-parse', '--abbrev-ref', 'HEAD']).decode()[:-1]
130 |                 fork_user =\
131 |                     check_output(['git', 'remote', 'get-url', 'origin']).decode().split('/')[-2]
132 | 
133 |                 write_in_log('making pull-request in {} {} from {} {}'.format(full_repo_name, next_branch,
134 |                                                                               fork_user, merge_branch))
135 | 
136 |                 pr_call_params = ['python', '/home/ec2-user/odoo-devops/tools/porting-bot/scripts/pull-request.py',
137 |                                   full_repo_name, next_branch, fork_user, merge_branch,
138 |                                   '--github_token', github_token,
139 |                                   '--original_pr_title', '"{}"'.format(msg_body['pull_request']['title'])]
140 | 
141 |                 if hook_exists is not None:
142 |                     pr_call_params.extend(['--webhook_when_porting_pr_exists', hook_exists])
143 |                 if hook_created is not None:
144 |                     pr_call_params.extend(['--webhook_when_porting_pr_created', hook_created])
145 | 
146 |                 write_in_log(' '.join(pr_call_params))
147 |                 Popen(pr_call_params).wait()
148 | 
149 |                 write_in_log('pull-request complete')
150 |                 successful = True
151 | 
152 |             else:
153 |                 write_in_log('merge in branch "{}" is not supported'.format(next_branch))
154 | 
155 |         else:
156 |             write_in_log('action is {}, pull request not merged'.format(action))
157 | 
158 | 
159 |     else:
160 |         absent_fields = ''
161 |         for field in required_fields:
162 |             if field not in msg_body:
163 |                 absent_fields += '{}, '.format(field)
164 |         absent_fields = absent_fields[:-2]
165 |         write_in_log('wrong message format. Fields {} not found'.format(absent_fields))
166 | 
167 |     return successful
168 | 
169 | 
170 | def main():
171 |     write_in_log('ec2-run script is running')
172 | 
173 |     write_in_log('updating bot...')
174 |     update_bot()
175 | 
176 |     region_name = requests.get('http://169.254.169.254/latest/meta-data/placement/availability-zone').text[:-1]
177 |     ssm_client = boto3.client('ssm', region_name=region_name)
178 | 
179 |     queue_name = ssm_client.get_parameter(Name='QUEUE_NAME', WithDecryption=True)['Parameter']['Value']
180 |     shutdown_time = ssm_client.get_parameter(Name='SHUTDOWN_TIME', WithDecryption=True)['Parameter']['Value']
181 |     github_token = ssm_client.get_parameter(Name='GITHUB_TOKEN_FOR_BOT', WithDecryption=True)['Parameter']['Value']
182 |     git_author = ssm_client.get_parameter(Name='GIT_AUTHOR', WithDecryption=True)['Parameter']['Value']
183 |     hook_exists = ssm_client.get_parameter(Name='WEBHOOK_WHEN_PORTING_PR_EXISTS', WithDecryption=True)['Parameter']['Value']
184 |     hook_created = ssm_client.get_parameter(Name='WEBHOOK_WHEN_PORTING_PR_CREATED', WithDecryption=True)['Parameter']['Value']
185 | 
186 |     if hook_exists == 'none':
187 |         hook_exists = None
188 | 
189 |     if hook_created == 'none':
190 |         hook_created = None
191 | 
192 |     sqs = boto3.resource('sqs', region_name=region_name)
193 |     queue = sqs.get_queue_by_name(QueueName=queue_name)
194 | 
195 |     write_in_log('Region name: {}; Queue name: {}; Shutdown time: {}'.format(region_name, queue_name, shutdown_time))
196 | 
197 |     messages = []
198 |     response = queue.receive_messages(MaxNumberOfMessages=10)
199 |     while len(response) > 0:
200 |         messages.extend(response)
201 |         response = queue.receive_messages(MaxNumberOfMessages=10)
202 | 
203 |     write_in_log('{} messages received from SQS'.format(len(messages)))
204 | 
205 |     for message in messages:
206 |         try:
207 |             msg_body = json.loads(message.body)
208 |             required_fields = ['action', 'number', 'repository']
209 | 
210 |             successful = process_message(msg_body, required_fields,
211 |                                          github_token, git_author=git_author,
212 |                                          hook_exists=hook_exists,
213 |                                          hook_created=hook_created)
214 |             if successful:
215 |                 write_message(message.body)
216 | 
217 |             queue.delete_messages(Entries=[{
218 |                 'Id': message.message_id,
219 |                 'ReceiptHandle': message.receipt_handle
220 |             }])
221 |         except ValueError:
222 |             write_in_log('message cannot be processed')
223 | 
224 |     if len(messages) == 0:
225 | 
226 |         write_in_log('shutdown is already scheduled')
227 |     else:
228 | 
229 |         Popen(['sudo', 'shutdown', '-c'])
230 |         write_in_log('shutdown is initiated in {} minutes'.format(shutdown_time))
231 | 
232 |         Popen(['sudo', 'shutdown', '-h', '+{}'.format(shutdown_time)])
233 | 
234 | 
235 | if __name__ == "__main__":
236 |     main()
--------------------------------------------------------------------------------
/tools/porting-bot/ec2/ec2-script.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | sudo yum install git -y
3 | pip install PyGithub
4 | pip install boto3
5 | git clone https://gitlab.com/Rusllan/odoo-devops.git /home/ec2-user/odoo-devops
6 | sudo chmod +x /home/ec2-user/odoo-devops/tools/porting-bot/ec2/ec2-run.py
7 | sudo chmod +x /home/ec2-user/odoo-devops/tools/porting-bot/scripts/merge.py
8 | sudo chmod +x /home/ec2-user/odoo-devops/tools/porting-bot/scripts/fork.py
9 | sudo chmod +x /home/ec2-user/odoo-devops/tools/porting-bot/scripts/clone_fork.py
10 | sudo chmod +x /home/ec2-user/odoo-devops/tools/porting-bot/scripts/pull-request.py
11 | sudo echo "*/5 * * * * sudo /usr/bin/python /home/ec2-user/odoo-devops/tools/porting-bot/ec2/ec2-run.py" >> mycron
12 | sudo echo "@reboot sudo /usr/bin/python /home/ec2-user/odoo-devops/tools/porting-bot/ec2/ec2-run.py" >> mycron
13 | sudo crontab mycron
14 | sudo python /home/ec2-user/odoo-devops/tools/porting-bot/ec2/ec2-run.py
--------------------------------------------------------------------------------
/tools/porting-bot/lambda-function.py:
--------------------------------------------------------------------------------
1 | import os
2 | from botocore.vendored import requests
3 | import boto3
4 | 
5 | 
6 | def handler(event, context):
7 |     make_review(event)
8 |     return {
9 |         'statusCode': 200
10 |     }
11 | 
12 | 
13 | def get_file(url):
14 |     return requests.get(url).text
15 | 
16 | 
17 | def make_review(event):
18 |     instance_id = os.getenv("INSTANCE_ID")
19 |     queue_name = os.getenv("QUEUE_NAME")
20 |     sqs = boto3.resource('sqs')
21 |     queue = sqs.get_queue_by_name(QueueName=queue_name)
22 |     queue.send_message(MessageBody=event['body'])
23 | 
24 |     ec2 = boto3.resource('ec2')
25 | 
26 |     instance = ec2.Instance(instance_id)
27 |     while instance.state['Name'] not in ['running', 'pending']:
28 |         print('Instance {} is {}'.format(instance_id, instance.state['Name']))
29 |         if instance.state['Name'] == 'stopped':
30 |             instance.start()
31 |         elif instance.state['Name'] == 'stopping':
32 |             instance.wait_until_stopped()
33 |         instance = ec2.Instance(instance_id)
34 | 
35 |     print('Instance {} is {}'.format(instance_id, instance.state['Name']))
--------------------------------------------------------------------------------
/tools/porting-bot/scripts/README.md:
--------------------------------------------------------------------------------
1 | # Github bot Scripts
2 | 
3 | These are the scripts which the GitHub bot uses, but they can also be used separately to do the same things manually.
4 | 
5 | To use any script you will need to:
6 | 
7 | * Download the scripts:
8 | 
9 |       $ git clone git@gitlab.com:itpp/odoo-devops.git
10 | 
11 | * Give the script you need the required permissions:
12 | 
13 |       sudo chmod +x odoo-devops/tools/merge-bot/