├── .gitignore ├── LICENSE ├── README.md ├── charts ├── bitcoind │ ├── .helmignore │ ├── Chart.yaml │ ├── README.md │ ├── templates │ │ ├── NOTES.txt │ │ ├── _bitcoin.conf │ │ ├── _check_node_health.sh │ │ ├── _helpers.tpl │ │ ├── configmap.yaml │ │ ├── ilb.yaml │ │ ├── lb-p2p.yaml │ │ ├── lb.yaml │ │ ├── secret.yaml │ │ ├── service.yaml │ │ └── statefulset.yaml │ └── values.yaml ├── parity │ ├── .helmignore │ ├── Chart.yaml │ ├── check_node_health.sh │ ├── templates │ │ ├── NOTES.txt │ │ ├── _helpers.tpl │ │ ├── _parity.toml │ │ ├── configmap.yaml │ │ ├── ilb.yaml │ │ ├── ingress.yaml │ │ ├── lb-p2p-discovery.yaml │ │ ├── lb-p2p.yaml │ │ ├── lb.yaml │ │ ├── service.yaml │ │ └── statefulset.yaml │ └── values.yaml └── theta │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ ├── _config.yaml │ ├── _helpers.tpl │ ├── configmap.yaml │ ├── ilb.yaml │ ├── lb.yaml │ ├── secret.yaml │ ├── service.yaml │ ├── serviceaccount.yaml │ └── statefulset.yaml │ └── values.yaml ├── cloudbuild.md ├── cloudbuild.yaml ├── example-values-bitcoind.yaml ├── example-values-parity.yaml ├── gke.md ├── helm-rbac.yaml ├── helm.md ├── ops.md ├── pv-r.yaml ├── pv.yaml ├── resources.md ├── sc-ssd-regional.yaml ├── sc-ssd.yaml └── sc-standard-regional.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | values-bitcoind.yaml 2 | values-parity.yaml 3 | .idea 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2019 Blockchain ETL 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit 
persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # blockchain-kubernetes 2 | Kubernetes manifests for running cryptocurrency nodes. 3 | 4 | Here is a quick HOWTO for deploying nodes into a GKE environment.
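Before starting, you can sanity-check that the tools listed under Requirements are on your PATH. A minimal sketch (the tool list mirrors the Requirements section; adjust to your setup):

```shell
# Report any required CLI tools that are not installed (prints nothing if all are present)
for tool in git gcloud kubectl helm; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```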
5 | 6 | ### Requirements 7 | * linux/macos terminal 8 | * git 9 | * gcloud 10 | * kubectl (version from gcloud is ok) 11 | * helm 12 | * follow "Before you begin" part of [GCP manual](https://cloud.google.com/kubernetes-engine/docs/how-to/iam) 13 | 14 | ### Deploy 15 | * [Create k8s GKE two-zone cluster](gke.md), use at least [n1-highmem-4 instances](https://cloud.google.com/compute/docs/machine-types#n1_machine_types) 16 | * [Install](helm.md) [Helm](https://helm.sh) 17 | * Allocate 4 regional IP addresses, using the same region as your GKE cluster 18 | ```bash 19 | export PROJECT_ID=$(gcloud config get-value project) 20 | export REGION=us-central1 21 | 22 | gcloud compute addresses create dev-btc-0 --region $REGION --project=$PROJECT_ID 23 | gcloud compute addresses create dev-eth-0 --region $REGION --project=$PROJECT_ID 24 | gcloud compute addresses create dev-btc-1 --region $REGION --project=$PROJECT_ID 25 | gcloud compute addresses create dev-eth-1 --region $REGION --project=$PROJECT_ID 26 | 27 | gcloud compute addresses list --project=$PROJECT_ID 28 | ``` 29 | * Adjust zones in the [regional storage classes](https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd) `sc-ssd-regional.yaml` and `sc-standard-regional.yaml`, using the same zones as your GKE cluster. 30 | * Create storage classes, replacing *K8S_CONTEXT* with the real value.
31 | ```bash 32 | export K8S_CONTEXT=baas0 33 | kubectl --context $K8S_CONTEXT create -f sc-ssd.yaml 34 | kubectl --context $K8S_CONTEXT create -f sc-ssd-regional.yaml 35 | kubectl --context $K8S_CONTEXT create -f sc-standard-regional.yaml 36 | ``` 37 | * Copy `example-values-parity.yaml` and `example-values-bitcoind.yaml` to `values-parity.yaml` and `values-bitcoind.yaml` 38 | ```bash 39 | cp example-values-parity.yaml values-parity.yaml 40 | cp example-values-bitcoind.yaml values-bitcoind.yaml 41 | ``` 42 | * Adjust `values-parity.yaml` and `values-bitcoind.yaml`, paying attention to [resource requests and limits](resources.md), IP addresses, volume size, and RPC credentials. Replace `198.51.100.0` and `203.0.113.0` with the real IPs of the allocated addresses. 43 | ```bash 44 | export EDITOR=vi 45 | $EDITOR values-bitcoind.yaml 46 | $EDITOR values-parity.yaml 47 | ``` 48 | * Deploy cryptonodes 49 | ```bash 50 | helm --kube-context $K8S_CONTEXT install charts/parity/ --namespace dev-eth-0 --name dev-eth-0 --values values-parity.yaml 51 | helm --kube-context $K8S_CONTEXT install charts/bitcoind/ --namespace dev-btc-0 --name dev-btc-0 --values values-bitcoind.yaml 52 | 53 | ``` 54 | * Use `kubectl describe` to check/troubleshoot, for example: 55 | ```bash 56 | kubectl --context $K8S_CONTEXT --namespace dev-eth-0 describe statefulset dev-eth-0-parity 57 | kubectl --context $K8S_CONTEXT --namespace dev-eth-0 describe pod dev-eth-0-parity-0 58 | ``` 59 | Please check the [separate file](ops.md) for more troubleshooting details. 60 | 61 | **TIP**: when you need an archive parity node to sync up faster, get tons of RAM and preload the synced blockchain into the OS cache.
In my case, 640GB of RAM and a blockchain preload from inside the container via `find | xargs cat > /dev/null` or [vmtouch](https://github.com/hoytech/vmtouch/) gave a 3-5x speedup, from 0.5-2 blocks/sec (100-200 tx/sec) to 7-10 blocks/sec (700-1000 tx/sec), with sustained blockchain writes near 150MB/s, for just $1/hour with preemptible nodes. 62 | 63 | ### Charts repository 64 | You can use [Cloud Build](https://cloud.google.com/cloud-build/) to [update a chart repository](cloudbuild.md) with these charts. 65 | -------------------------------------------------------------------------------- /charts/bitcoind/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | cloudbuild.yaml 23 | -------------------------------------------------------------------------------- /charts/bitcoind/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | appVersion: 0.18.0 3 | description: Bitcoin is an innovative payment network and a new kind of money.
4 | engine: gotpl 5 | home: https://bitcoin.org/ 6 | icon: https://bitcoin.org/img/icons/logotop.png 7 | keywords: 8 | - bitcoind 9 | - cryptocurrency 10 | - blockchain 11 | maintainers: 12 | - email: daniel@dysnix.com 13 | name: daniel-yavorovich 14 | - email: av@dysnix.com 15 | name: voron 16 | name: bitcoind 17 | sources: 18 | - https://github.com/kubernetes/charts 19 | - https://github.com/blockchain-etl/docker-bitcoind 20 | - https://github.com/kylemanna/docker-bitcoind 21 | version: 0.3.58 22 | -------------------------------------------------------------------------------- /charts/bitcoind/README.md: -------------------------------------------------------------------------------- 1 | # Bitcoind 2 | 3 | [Bitcoin](https://bitcoin.org/) uses peer-to-peer technology to operate with no central authority or banks; 4 | managing transactions and the issuing of bitcoins is carried out collectively by the network. 5 | 6 | ## Introduction 7 | 8 | This chart bootstraps a single node Bitcoin deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. 9 | Docker image was taken from [Bitcoind for Docker](https://github.com/kylemanna/docker-bitcoind) - many thanks! 10 | 11 | ## Prerequisites 12 | 13 | - Kubernetes 1.8+ 14 | - PV provisioner support in the underlying infrastructure 15 | 16 | ## Installing the Chart 17 | 18 | To install the chart with the release name `my-release`: 19 | 20 | ```bash 21 | $ helm install --name my-release stable/bitcoind 22 | ``` 23 | 24 | The command deploys bitcoind on the Kubernetes cluster in the default configuration. 25 | The [configuration](#configuration) section lists the parameters that can be configured during installation. 
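Instead of editing the chart's values.yaml directly, you can keep a small override file containing only the parameters you change. A hedged sketch (the values below are placeholders drawn from the parameters in the Configuration section; pass the file with `-f`):

```yaml
# my-values.yaml -- hypothetical override file, passed via `helm install -f my-values.yaml`
image:
  tag: "0.18.0"
persistence:
  size: "600Gi"
configurationFile:
  rpcuser: "rpcuser"
  rpcpassword: "change-me"
  custom: |-
    txindex=1
```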
26 | 27 | > **Tip**: List all releases using `helm list` 28 | 29 | ## Uninstalling the Chart 30 | 31 | To uninstall/delete the `my-release` deployment: 32 | 33 | ```bash 34 | $ helm delete my-release 35 | ``` 36 | 37 | The command removes all the Kubernetes components associated with the chart and deletes the release. 38 | 39 | ## Configuration 40 | 41 | The following table lists the configurable parameters of the bitcoind chart and their default values. 42 | 43 | Parameter | Description | Default 44 | ------------------------------- | ------------------------------------------------- | ---------------------------------------------------------- 45 | `image.repository` | Image source repository name | `arilot/docker-bitcoind` 46 | `image.tag` | `bitcoind` release tag. | `0.17.1` 47 | `image.pullPolicy` | Image pull policy | `IfNotPresent` 48 | `service.rpcPort` | RPC port | `8332` 49 | `service.p2pPort` | P2P port | `8333` 50 | `service.testnetPort` | Testnet port | `18332` 51 | `service.testnetP2pPort` | Testnet p2p ports | `18333` 52 | `service.selector` | Node selector | `tx-broadcast-svc` 53 | `persistence.enabled` | Create a volume to store data | `true` 54 | `persistence.accessMode` | ReadWriteOnce or ReadOnly | `ReadWriteOnce` 55 | `persistence.size` | Size of persistent volume claim | `300Gi` 56 | `resources` | CPU/Memory resource requests/limits | `{}` 57 | `configurationFile` | Config file ConfigMap entry | 58 | `terminationGracePeriodSeconds` | Wait time before forcefully terminating container | `30` 59 | 60 | For more information about Bitcoin configuration please see [Bitcoin.conf_Configuration_File](https://en.bitcoin.it/wiki/Running_Bitcoin#Bitcoin.conf_Configuration_File). 61 | 62 | Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. 
For example, 63 | 64 | ```bash 65 | $ helm install --name my-release -f values.yaml stable/bitcoind 66 | ``` 67 | 68 | > **Tip**: You can use the default [values.yaml](values.yaml) 69 | 70 | ## Persistence 71 | 72 | The bitcoind image stores the bitcoind node data (blockchain and wallet) and configuration at the `/bitcoin` path of the container. 73 | 74 | By default, a PersistentVolumeClaim is created and mounted into that directory. In order to disable this functionality, 75 | you can change values.yaml to disable persistence and use an emptyDir instead. 76 | 77 | > *"An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node. When a Pod is removed from a node for any reason, the data in the emptyDir is deleted forever."* 78 | 79 | !!! WARNING !!! 80 | 81 | Please do NOT use emptyDir for a production cluster! Your wallets will be lost on container restart! 82 | 83 | ## Customize bitcoind configuration file 84 | 85 | ```yaml 86 | configurationFile: 87 | rpcuser: "rpcuser" 88 | rpcpassword: "rpcpassword" 89 | externalLBp2pIP: 198.51.100.5 90 | custom: |- 91 | txindex=1 92 | ``` 93 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | bitcoind RPC can be accessed via port {{ .Values.service.rpcPort }} on the following DNS name from within your cluster: 2 | {{ template "bitcoind.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local 3 | 4 | To connect to bitcoind RPC: 5 | 6 | 1. Forward the port for the node: 7 | 8 | $ kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ template "bitcoind.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{ .items[0].metadata.name }") {{ .Values.service.rpcPort }} 9 | 10 | 2.
Test the connection with the user and password provided in the configuration file: 11 | 12 | $ curl --user {{ .Values.configurationFile.rpcuser | default "rpcuser" | quote }}:{{ .Values.configurationFile.rpcpassword | default "rpcpassword" | quote }} -k http://127.0.0.1:{{ .Values.service.rpcPort }} --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getblockchaininfo", "params": [] }' -H 'content-type: text/plain;' 13 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/_bitcoin.conf: -------------------------------------------------------------------------------- 1 | {{- if .Values.configurationFile -}} 2 | server=1 3 | printtoconsole=1 4 | rpcuser={{ .Values.configurationFile.rpcuser | default "rpcuser" }} 5 | rpcpassword={{ .Values.configurationFile.rpcpassword | default "rpcpassword" }} 6 | rpcbind={{ .Values.configurationFile.rpcbind | default "0.0.0.0" }} 7 | rpcallowip={{ .Values.configurationFile.rpcallowip | default "::/0" }} 8 | {{ if .Values.externalLBp2p -}} 9 | externalip={{ .Values.configurationFile.externalLBp2pIP }} 10 | {{ end -}} 11 | {{ if .Values.configurationFile.custom -}} 12 | {{ .Values.configurationFile.custom }} 13 | {{ end -}} 14 | {{ end -}} 15 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/_check_node_health.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | set -ex # -e exits on error 3 | 4 | usage() { echo "Usage: $0 <datadir> <max_lag_in_seconds> <last_synced_block_file>" 1>&2; exit 1; } 5 | 6 | datadir="$1" 7 | max_lag_in_seconds="$2" 8 | last_synced_block_file="$3" 9 | 10 | if [ -z "${datadir}" ] || [ -z "${max_lag_in_seconds}" ] || [ -z "${last_synced_block_file}" ]; then 11 | usage 12 | fi 13 | 14 | set +e 15 | 16 | # it may take up to 5-10 minutes during sync 17 | block_number=$({{ .Values.bitcoind.cli_binary }} -datadir=${datadir} getblockcount) 18 | 19 | ret=$?
20 | # https://en.bitcoin.it/wiki/Original_Bitcoin_client/API_calls_list#Error_Codes 21 | if [[ "$ret" -eq "28" ]];then 22 | echo Loading block index... 23 | exit 0 24 | fi 25 | 26 | set -e 27 | 28 | number_re='^[0-9]+$' 29 | if [ -z "${block_number}" ] || [[ ! ${block_number} =~ $number_re ]]; then 30 | echo "Block number returned by the node is empty or not a number" 31 | exit 1 32 | fi 33 | 34 | # handling special case with blockchain re-index 35 | if [[ "$block_number" -eq "0" ]];then 36 | echo "Reindexing ..." 37 | exit 0 38 | fi 39 | 40 | if [ ! -f ${last_synced_block_file} ]; then 41 | old_block_number=""; 42 | else 43 | old_block_number=$(cat ${last_synced_block_file}); 44 | fi; 45 | 46 | if [ "${block_number}" != "${old_block_number}" ]; then 47 | mkdir -p $(dirname "${last_synced_block_file}") 48 | echo ${block_number} > ${last_synced_block_file} 49 | fi 50 | 51 | file_age=$(($(date +%s) - $(date -r ${last_synced_block_file} +%s))); 52 | max_age=${max_lag_in_seconds}; 53 | echo "${last_synced_block_file} age is $file_age seconds. Max healthy age is $max_age seconds"; 54 | if [ ${file_age} -lt ${max_age} ]; then exit 0; else exit 1; fi 55 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "bitcoind.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 
13 | */}} 14 | {{- define "bitcoind.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "bitcoind.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "bitcoind.labels" -}} 38 | app.kubernetes.io/name: {{ include "bitcoind.name" . }} 39 | helm.sh/chart: {{ include "bitcoind.chart" . }} 40 | app.kubernetes.io/instance: {{ .Release.Name }} 41 | {{- if .Chart.AppVersion }} 42 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 43 | {{- end }} 44 | app.kubernetes.io/managed-by: {{ .Release.Service }} 45 | {{- end -}} 46 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Release.Name }}-scripts" 5 | data: 6 | check_node_health.sh: | 7 | {{- include (print $.Template.BasePath "/_check_node_health.sh") . 
| nindent 4 }} 8 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/ilb.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.internalLB }} 2 | ## use this to aggregate internal pods behind 3 | ## a single load balanced IP 4 | apiVersion: v1 5 | kind: Service 6 | metadata: 7 | name: {{ .Release.Name }}-ilb 8 | labels: 9 | chain: {{ .Values.bitcoind.chain }} 10 | {{ include "bitcoind.labels" . | indent 4 }} 11 | annotations: 12 | cloud.google.com/load-balancer-type: "Internal" 13 | cloud.google.com/network-tier: "PREMIUM" 14 | spec: 15 | type: LoadBalancer 16 | {{ if .Values.internalLBIP }} 17 | loadBalancerIP: {{ .Values.internalLBIP }} 18 | {{ end }} 19 | ports: 20 | - name: {{ .Values.service.rpcPortName }} 21 | port: {{ .Values.service.rpcPort }} 22 | targetPort: {{ .Values.service.rpcPortName }} 23 | - name: {{ .Values.service.p2pPortName }} 24 | port: {{ .Values.service.p2pPort }} 25 | targetPort: {{ .Values.service.p2pPortName }} 26 | selector: 27 | app.kubernetes.io/name: {{ include "bitcoind.name" . }} 28 | app.kubernetes.io/instance: {{ .Release.Name }} 29 | {{ end }} 30 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/lb-p2p.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.externalLBp2p }} 2 | ## use this if you want to expose blockchain p2p (not RPC) to public 3 | apiVersion: v1 4 | kind: Service 5 | metadata: 6 | name: {{ .Release.Name }}-lb-p2p 7 | labels: 8 | chain: {{ .Values.bitcoind.chain }} 9 | {{ include "bitcoind.labels" . 
| indent 4 }} 10 | spec: 11 | type: LoadBalancer 12 | {{ if .Values.configurationFile.externalLBp2pIP -}} 13 | loadBalancerIP: {{ .Values.configurationFile.externalLBp2pIP }} 14 | {{ end -}} 15 | ports: 16 | - name: {{ .Values.service.p2pPortName }} 17 | port: {{ .Values.service.p2pPort }} 18 | targetPort: {{ .Values.service.p2pPortName }} 19 | selector: 20 | app.kubernetes.io/name: {{ include "bitcoind.name" . }} 21 | app.kubernetes.io/instance: {{ .Release.Name }} 22 | {{ end }} 23 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/lb.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.externalLB }} 2 | ## only use this if you want to expose 3 | ## json services to a public ip 4 | apiVersion: v1 5 | kind: Service 6 | metadata: 7 | name: {{ .Release.Name }}-lb 8 | labels: 9 | chain: {{ .Values.bitcoind.chain }} 10 | {{ include "bitcoind.labels" . | indent 4 }} 11 | spec: 12 | type: LoadBalancer 13 | {{ if .Values.externalLBIP }} 14 | loadBalancerIP: {{ .Values.externalLBIP }} 15 | {{ end }} 16 | {{- if .Values.externalLBSourceRanges }} 17 | loadBalancerSourceRanges: 18 | {{- range $val := .Values.externalLBSourceRanges }} 19 | - {{ $val -}} 20 | {{ end }} 21 | {{ end }} 22 | ports: 23 | - name: {{ .Values.service.rpcPortName }} 24 | port: {{ .Values.service.rpcPort }} 25 | targetPort: {{ .Values.service.rpcPortName }} 26 | - name: {{ .Values.service.p2pPortName }} 27 | port: {{ .Values.service.p2pPort }} 28 | targetPort: {{ .Values.service.p2pPortName }} 29 | selector: 30 | app.kubernetes.io/name: {{ include "bitcoind.name" . 
}} 31 | app.kubernetes.io/instance: {{ .Release.Name }} 32 | {{ end }} 33 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/secret.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: "{{ .Release.Name }}-config" 5 | labels: 6 | {{ include "bitcoind.labels" . | indent 4 }} 7 | type: Opaque 8 | data: 9 | {{ .Values.bitcoind.configurationFileName }}: |- 10 | {{- include (print $.Template.BasePath "/_bitcoin.conf") . | b64enc | nindent 4 }} 11 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ .Release.Name }}-service 5 | labels: 6 | chain: {{ .Values.bitcoind.chain }} 7 | {{ include "bitcoind.labels" . | indent 4 }} 8 | spec: 9 | ports: 10 | - name: {{ .Values.service.rpcPortName }} 11 | port: {{ .Values.service.rpcPort }} 12 | targetPort: {{ .Values.service.rpcPortName }} 13 | - name: {{ .Values.service.p2pPortName }} 14 | port: {{ .Values.service.p2pPort }} 15 | targetPort: {{ .Values.service.p2pPortName }} 16 | selector: 17 | app.kubernetes.io/name: {{ include "bitcoind.name" . }} 18 | app.kubernetes.io/instance: {{ .Release.Name }} 19 | -------------------------------------------------------------------------------- /charts/bitcoind/templates/statefulset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: {{ include "bitcoind.fullname" . }} 5 | labels: 6 | {{ include "bitcoind.labels" . 
| indent 4 }} 7 | spec: 8 | serviceName: "{{ .Release.Name }}-service" 9 | replicas: {{ .Values.replicaCount }} # by default is 1 10 | selector: 11 | matchLabels: 12 | app.kubernetes.io/name: {{ include "bitcoind.name" . }} 13 | app.kubernetes.io/instance: {{ .Release.Name }} 14 | template: 15 | metadata: 16 | labels: 17 | app.kubernetes.io/name: {{ include "bitcoind.name" . }} 18 | app.kubernetes.io/instance: {{ .Release.Name }} 19 | annotations: 20 | checksum/configmap.yaml: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} 21 | checksum/secret.yaml: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} 22 | spec: 23 | {{- with .Values.securityContext }} 24 | securityContext: 25 | {{- toYaml . | nindent 8 }} 26 | {{- end }} 27 | {{- with .Values.nodeSelector }} 28 | nodeSelector: 29 | {{- toYaml . | nindent 8 }} 30 | {{- end }} 31 | {{- with .Values.affinity }} 32 | affinity: 33 | {{- toYaml . | nindent 8 }} 34 | {{- end }} 35 | {{- with .Values.tolerations }} 36 | tolerations: 37 | {{- toYaml . | nindent 8 }} 38 | {{- end }} 39 | {{- with .Values.imagePullSecrets }} 40 | imagePullSecrets: 41 | {{- toYaml . 
| nindent 8 }} 42 | {{- end }} 43 | terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} 44 | containers: 45 | - name: {{ .Chart.Name }} 46 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 47 | imagePullPolicy: {{ .Values.image.pullPolicy }} 48 | args: ["-datadir={{ .Values.bitcoind.base_path }}"] 49 | workingDir: "{{ .Values.bitcoind.base_path }}" 50 | resources: 51 | {{- toYaml .Values.resources | nindent 10 }} 52 | ports: 53 | - containerPort: {{ .Values.service.rpcPort }} 54 | name: "{{ .Values.service.rpcPortName }}" 55 | protocol: "TCP" 56 | - containerPort: {{ .Values.service.p2pPort }} 57 | name: "{{ .Values.service.p2pPortName }}" 58 | protocol: "TCP" 59 | volumeMounts: 60 | - name: bitcoind-pvc 61 | mountPath: {{ .Values.bitcoind.base_path }} 62 | - name: scripts 63 | mountPath: /scripts 64 | livenessProbe: 65 | exec: 66 | command: 67 | - /bin/bash 68 | - /scripts/check_node_health.sh 69 | - "{{ .Values.bitcoind.base_path }}" 70 | - "{{ .Values.bitcoind.maxHealthyAge }}" 71 | - last_synced_block.txt 72 | initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} 73 | periodSeconds: {{ .Values.livenessProbe.periodSeconds }} 74 | timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} 75 | successThreshold: {{ .Values.livenessProbe.successThreshold }} 76 | failureThreshold: {{ .Values.livenessProbe.failureThreshold }} 77 | lifecycle: 78 | preStop: 79 | exec: 80 | # we don't need to poll some resources, as we stop PID 1, just sleep after stop command to tell k8s - "shutdown is in process" 81 | command: 82 | - /bin/sh 83 | - -c 84 | - "{{ .Values.bitcoind.cli_binary }} -datadir={{ .Values.bitcoind.base_path }} stop; sleep {{ .Values.terminationGracePeriodSeconds }}" 85 | initContainers: 86 | {{- if .Values.configurationFile }} 87 | # we keep this useless copy to be able to customize config at runtime, such as update rpc creds from other sources 88 | - name: copy-bitcoind-config 89 | image: busybox 90 | 
command: ['sh', '-c', 'cp /config/{{ .Values.bitcoind.configurationFileName }} {{ .Values.bitcoind.base_path }}/{{ .Values.bitcoind.configurationFileName }}'] 91 | volumeMounts: 92 | - name: bitcoind-config 93 | mountPath: /config 94 | - name: bitcoind-pvc 95 | mountPath: {{ .Values.bitcoind.base_path }} 96 | {{- end }} 97 | {{- if .Values.zcash_fetch_params }} 98 | - name: zcash-fetch-params 99 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 100 | imagePullPolicy: {{ .Values.image.pullPolicy }} 101 | command: ['zcash-fetch-params'] 102 | volumeMounts: 103 | - name: bitcoind-pvc 104 | mountPath: "/home/zcash" 105 | {{- end }} 106 | volumes: 107 | {{- if .Values.configurationFile }} 108 | - name: bitcoind-config 109 | secret: 110 | secretName: "{{ .Release.Name }}-config" 111 | {{- end }} 112 | - name: scripts 113 | configMap: 114 | name: "{{ .Release.Name }}-scripts" 115 | volumeClaimTemplates: 116 | - metadata: 117 | name: bitcoind-pvc 118 | spec: 119 | accessModes: 120 | - {{.Values.persistence.accessMode }} 121 | {{- if .Values.persistence.storageClass }} 122 | {{- if (eq "-" .Values.persistence.storageClass) }} 123 | storageClassName: "" 124 | {{- else }} 125 | storageClassName: "{{ .Values.persistence.storageClass }}" 126 | {{- end }} 127 | {{- end }} 128 | resources: 129 | requests: 130 | storage: {{ .Values.persistence.size }} 131 | volumeMode: Filesystem 132 | -------------------------------------------------------------------------------- /charts/bitcoind/values.yaml: -------------------------------------------------------------------------------- 1 | # Default values for bitcoind. 2 | # This is a YAML-formatted file. 3 | # Declare variables to be passed into your templates. 
4 | terminationGracePeriodSeconds: 30 5 | image: 6 | repository: blockchainetl/bitcoind 7 | tag: 0.19.1 8 | pullPolicy: IfNotPresent 9 | 10 | service: 11 | rpcPortName: jsonrpc 12 | rpcPort: 8332 13 | p2pPort: 8333 14 | p2pPortName: p2p 15 | 16 | externalLB: false 17 | externalLBIP: "" 18 | externalLBSourceRanges: {} 19 | # - 203.0.113.2/32 20 | # - 203.0.113.3/32 21 | 22 | externalLBp2p: false 23 | internalLB: false 24 | internalLBIP: "" 25 | 26 | persistence: 27 | enabled: true 28 | # storageClass: "default" 29 | accessMode: ReadWriteOnce 30 | size: "500Gi" 31 | 32 | ## Configure resource requests and limits 33 | ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ 34 | ## 35 | resources: 36 | requests: 37 | cpu: "2800m" 38 | memory: "2000Mi" 39 | limits: 40 | cpu: "3000m" 41 | memory: "3000Mi" 42 | 43 | securityContext: 44 | runAsUser: 1000 45 | runAsGroup: 1000 46 | fsGroup: 1000 47 | 48 | affinity: 49 | podAntiAffinity: 50 | preferredDuringSchedulingIgnoredDuringExecution: 51 | - weight: 100 52 | podAffinityTerm: 53 | labelSelector: 54 | matchLabels: 55 | app.kubernetes.io/name: "bitcoind" 56 | topologyKey: failure-domain.beta.kubernetes.io/zone 57 | 58 | bitcoind: 59 | base_path: "/data" 60 | # you may need to override this with clones such as litecoin 61 | configurationFileName: "bitcoin.conf" 62 | cli_binary: "bitcoin-cli" 63 | chain: "btc" 64 | # how many seconds should liveness check wait for a new block. 
Increase with BCH 65 | maxHealthyAge: 3600 66 | 67 | # Custom bitcoind configuration file used to override default bitcoind settings 68 | configurationFile: 69 | rpcuser: "rpcuser" 70 | rpcpassword: "rpcpassword" 71 | rpcbind: "0.0.0.0" 72 | rpcallowip: "::/0" 73 | externalLBp2pIP: "198.51.100.1" 74 | custom: |- 75 | txindex=1 76 | disablewallet=1 77 | 78 | zcash_fetch_params: false 79 | 80 | livenessProbe: 81 | initialDelaySeconds: 600 82 | periodSeconds: 600 83 | timeoutSeconds: 500 84 | successThreshold: 1 85 | failureThreshold: 2 86 | -------------------------------------------------------------------------------- /charts/parity/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | .vscode/ 23 | cloudbuild.yaml 24 | 25 | -------------------------------------------------------------------------------- /charts/parity/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | appVersion: "2.7" 3 | description: Parity chart for Kubernetes 4 | name: parity 5 | version: 0.1.49 6 | -------------------------------------------------------------------------------- /charts/parity/check_node_health.sh: -------------------------------------------------------------------------------- 1 | set -ex # -e exits on error 2 | 3 | usage() { echo "Usage: $0 <rpc_endpoint> <max_lag_in_seconds> <last_synced_block_file>" 1>&2; exit 1; } 4 | 5 | rpc_endpoint="$1" 6 | max_lag_in_seconds="$2" 7 | last_synced_block_file="$3" 8 | 9 | if [ -z "${rpc_endpoint}" ] || [ -z
"${last_synced_block_file}" ]; then 10 | usage 11 | fi 12 | 13 | block_number_request='{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' 14 | block_number_response=$(curl -H 'Content-Type: application/json' -X POST --data "$block_number_request" ${rpc_endpoint}) 15 | 16 | # -r: print raw output, -n: don't read any input, --argjson data: create variable data with passed json as value 17 | block_number_hex=$(jq -r -n --argjson data "${block_number_response}" '$data.result') 18 | 19 | if [ -z "${block_number_hex}" ] || [ "${block_number_hex}" == "null" ]; then 20 | echo "Block number returned by the node is empty or null" 21 | exit 1 22 | fi 23 | 24 | if [ ! -f ${last_synced_block_file} ]; then 25 | old_block_number_hex=""; 26 | else 27 | old_block_number_hex=$(cat ${last_synced_block_file}); 28 | fi; 29 | 30 | if [ "${block_number_hex}" != "${old_block_number_hex}" ]; then 31 | mkdir -p $(dirname "${last_synced_block_file}") 32 | echo ${block_number_hex} > ${last_synced_block_file} 33 | fi 34 | 35 | file_age=$(($(date +%s) - $(date -r ${last_synced_block_file} +%s))); 36 | max_age=${max_lag_in_seconds}; 37 | echo "${last_synced_block_file} age is $file_age seconds. Max healthy age is $max_age seconds"; 38 | if [ ${file_age} -lt ${max_age} ]; then exit 0; else exit 1; fi -------------------------------------------------------------------------------- /charts/parity/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | parity RPC can be accessed via port {{ .Values.service.rpcPort }} on the following DNS name from within your cluster: 2 | {{ .Release.Name }}-service.{{ .Release.Namespace }}.svc.cluster.local 3 | 4 | To connect to parity RPC: 5 | 6 | 1. Forward the port for the node: 7 | 8 | $ kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "parity.name" . 
}},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{ .items[0].metadata.name }") {{ .Values.service.rpcPort }} 9 | 10 | 2. Test the connection: 11 | 12 | $ curl -k http://127.0.0.1:{{ .Values.service.rpcPort }} --data-binary '{"method":"parity_versionInfo","params":[],"id":1,"jsonrpc":"2.0"}' -H 'Content-Type: application/json' 13 | -------------------------------------------------------------------------------- /charts/parity/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "parity.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If the release name contains the chart name it will be used as the full name. 13 | */}} 14 | {{- define "parity.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "parity.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "parity.labels" -}} 38 | app.kubernetes.io/name: {{ include "parity.name" . }} 39 | helm.sh/chart: {{ include "parity.chart" .
}} 40 | app.kubernetes.io/instance: {{ .Release.Name }} 41 | {{- if .Chart.AppVersion }} 42 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 43 | {{- end }} 44 | app.kubernetes.io/managed-by: {{ .Release.Service }} 45 | {{- end -}} 46 | -------------------------------------------------------------------------------- /charts/parity/templates/_parity.toml: -------------------------------------------------------------------------------- 1 | [parity] 2 | # Parity continuously syncs the chain 3 | mode = "active" 4 | # Stable 5 | release_track = "stable" 6 | 7 | # https://wiki.parity.io/Chain-specification#chain-presets-available 8 | # mainnet, kovan, classic, ... 9 | chain = "{{ .Values.parity.chain }}" 10 | 11 | # Blockchain and settings will be stored in {{ .Values.parity.base_path }}. 12 | base_path = "{{ .Values.parity.base_path }}" 13 | 14 | [network] 15 | # Warp sync (downloading the latest state snapshot first) is disabled; the node syncs the full chain. 16 | warp = false 17 | # Parity will try to maintain connection to at least {{ .Values.parity.network.min_peers }} peers. 18 | min_peers = {{ .Values.parity.network.min_peers }} 19 | # Parity will maintain at most {{ .Values.parity.network.max_peers }} peers. 20 | max_peers = {{ .Values.parity.network.max_peers }} 21 | {{ if .Values.parity.network.bootnodes }} 22 | # Override the chain's default bootnodes. Should be a list of enode URLs. 23 | bootnodes = {{ .Values.parity.network.bootnodes | toJson }} 24 | {{ end }} 25 | [rpc] 26 | # JSON-RPC will listen for connections on all interfaces. 27 | interface = "all" 28 | # Only selected APIs will be exposed over this interface. 29 | apis = ["eth", "pubsub", "net", "parity", "private", "parity_pubsub", "traces", "rpc", "shh", "shh_pubsub", "web3"] 30 | # Threads for handling incoming connections for HTTP JSON-RPC server. 31 | server_threads = 6 32 | # Turn on additional processing threads for JSON-RPC servers (all transports).
Setting this to a non-zero value allows parallel execution of cpu-heavy queries. 33 | # removed in 2.7 34 | #processing_threads = 7 35 | 36 | [websockets] 37 | # UI won't work and the WebSockets server will not be available. 38 | disable = true 39 | 40 | [footprint] 41 | # Compute and Store tracing data. (Enables trace_* APIs). 42 | tracing = "on" 43 | # Database compaction type. TYPE may be one of: ssd - suitable for SSDs and fast HDDs; hdd - suitable for slow HDDs; auto - determine automatically. (default: auto) 44 | db_compaction = "{{ .Values.parity.footprint.db_compaction }}" 45 | # Keep all state trie data. No pruning. 46 | pruning = "{{ .Values.parity.footprint.pruning }}" 47 | # Will keep up to {{ .Values.parity.footprint.pruning_history }} old state entries. 48 | pruning_history = {{ .Values.parity.footprint.pruning_history }} 49 | # Will keep up to {{ .Values.parity.footprint.pruning_memory }} MB old state entries. 50 | pruning_memory = {{ .Values.parity.footprint.pruning_memory }} 51 | # Number of threads will vary depending on the workload. Not guaranteed to be faster. 52 | scale_verifiers = true 53 | 54 | # Will keep up to {{ .Values.parity.footprint.cache_size_db }}MB data in Database cache. 55 | cache_size_db = {{ .Values.parity.footprint.cache_size_db }} 56 | # Will keep up to {{ .Values.parity.footprint.cache_size_blocks }}MB data in Blockchain cache. 57 | cache_size_blocks = {{ .Values.parity.footprint.cache_size_blocks }} 58 | # Will keep up to {{ .Values.parity.footprint.cache_size_queue }}MB of blocks in block import queue. 59 | cache_size_queue = {{ .Values.parity.footprint.cache_size_queue }} 60 | # Will keep up to {{ .Values.parity.footprint.cache_size_state }}MB data in State cache. 61 | cache_size_state = {{ .Values.parity.footprint.cache_size_state }} 62 | # If defined, will never use more than {{ .Values.parity.footprint.cache_size }}MB for all caches. (Overrides other cache settings).
63 | cache_size = {{ .Values.parity.footprint.cache_size }} 64 | 65 | [snapshots] 66 | disable_periodic = true 67 | 68 | [misc] 69 | # Logging pattern (`=`, e.g. `own_tx=trace`). 70 | logging = "{{ .Values.parity.misc.logging }}" 71 | log_file = "{{ .Values.parity.base_path }}/parity.log" 72 | color = true 73 | -------------------------------------------------------------------------------- /charts/parity/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Release.Name }}-config" 5 | data: 6 | parity.toml: |- 7 | {{- include (print $.Template.BasePath "/_parity.toml") . | nindent 4 }} 8 | --- 9 | apiVersion: v1 10 | kind: ConfigMap 11 | metadata: 12 | name: "{{ .Release.Name }}-scripts" 13 | data: 14 | {{- (.Files.Glob "check_node_health.sh").AsConfig | nindent 2 }} 15 | --- 16 | -------------------------------------------------------------------------------- /charts/parity/templates/ilb.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.internalLB }} 2 | ## use this to aggregate internal pods behind 3 | ## a single load balanced IP 4 | apiVersion: v1 5 | kind: Service 6 | metadata: 7 | name: {{ .Release.Name }}-ilb 8 | labels: 9 | chain: eth 10 | {{ include "parity.labels" . | indent 4 }} 11 | annotations: 12 | cloud.google.com/load-balancer-type: "Internal" 13 | cloud.google.com/network-tier: "PREMIUM" 14 | spec: 15 | type: LoadBalancer 16 | {{ if .Values.internalLBIP }} 17 | loadBalancerIP: {{ .Values.internalLBIP }} 18 | {{ end }} 19 | ports: 20 | - name: {{ .Values.service.rpcPortName }} 21 | port: {{ .Values.service.rpcPort }} 22 | targetPort: {{ .Values.service.rpcPortName }} 23 | - name: {{ .Values.service.wsPortName }} 24 | port: {{ .Values.service.wsPort }} 25 | targetPort: {{ .Values.service.wsPortName }} 26 | selector: 27 | app.kubernetes.io/name: {{ include "parity.name" . 
}} 28 | app.kubernetes.io/instance: {{ .Release.Name }} 29 | {{ end }} 30 | -------------------------------------------------------------------------------- /charts/parity/templates/ingress.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.ingress.enabled -}} 2 | {{- $fullName := include "parity.fullname" . -}} 3 | apiVersion: extensions/v1beta1 4 | kind: Ingress 5 | metadata: 6 | name: {{ $fullName }} 7 | labels: 8 | {{ include "parity.labels" . | indent 4 }} 9 | {{- with .Values.ingress.annotations }} 10 | annotations: 11 | {{- toYaml . | nindent 4 }} 12 | {{- end }} 13 | spec: 14 | {{- if .Values.ingress.tls }} 15 | tls: 16 | {{- range .Values.ingress.tls }} 17 | - hosts: 18 | {{- range .hosts }} 19 | - {{ . | quote }} 20 | {{- end }} 21 | secretName: {{ .secretName }} 22 | {{- end }} 23 | {{- end }} 24 | rules: 25 | {{- range .Values.ingress.hosts }} 26 | - host: {{ .host | quote }} 27 | http: 28 | paths: 29 | {{- range .paths }} 30 | - path: {{ . }} 31 | backend: 32 | serviceName: {{ $fullName }} 33 | servicePort: http 34 | {{- end }} 35 | {{- end }} 36 | {{- end }} 37 | -------------------------------------------------------------------------------- /charts/parity/templates/lb-p2p-discovery.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.externalLBp2pDiscovery }} 2 | ## use this if you want to expose blockchain p2p (not RPC) to public 3 | apiVersion: v1 4 | kind: Service 5 | metadata: 6 | name: {{ .Release.Name }}-lb-p2p-discovery 7 | labels: 8 | chain: eth 9 | {{ include "parity.labels" . 
| indent 4 }} 10 | spec: 11 | type: LoadBalancer 12 | {{ if .Values.externalLBp2pDiscoveryIP }} 13 | loadBalancerIP: {{ .Values.externalLBp2pDiscoveryIP }} 14 | {{ end }} 15 | ports: 16 | - name: {{ .Values.service.p2pPortName1 }} 17 | port: {{ .Values.service.p2pPort1 }} 18 | targetPort: {{ .Values.service.p2pPortName1 }} 19 | protocol: {{ .Values.service.p2pPortProtocol1 }} 20 | selector: 21 | app.kubernetes.io/name: {{ include "parity.name" . }} 22 | app.kubernetes.io/instance: {{ .Release.Name }} 23 | {{ end }} 24 | -------------------------------------------------------------------------------- /charts/parity/templates/lb-p2p.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.externalLBp2p }} 2 | ## use this if you want to expose blockchain p2p (not RPC) to public 3 | apiVersion: v1 4 | kind: Service 5 | metadata: 6 | name: {{ .Release.Name }}-lb-p2p 7 | labels: 8 | chain: eth 9 | {{ include "parity.labels" . | indent 4 }} 10 | spec: 11 | type: LoadBalancer 12 | {{ if .Values.externalLBp2pIP }} 13 | loadBalancerIP: {{ .Values.externalLBp2pIP }} 14 | {{ end }} 15 | ports: 16 | - name: {{ .Values.service.p2pPortName0 }} 17 | port: {{ .Values.service.p2pPort0 }} 18 | targetPort: {{ .Values.service.p2pPortName0 }} 19 | protocol: {{ .Values.service.p2pPortProtocol0 }} 20 | selector: 21 | app.kubernetes.io/name: {{ include "parity.name" . }} 22 | app.kubernetes.io/instance: {{ .Release.Name }} 23 | {{ end }} 24 | -------------------------------------------------------------------------------- /charts/parity/templates/lb.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.externalLB }} 2 | ## only use this if you want to expose 3 | ## json services to a public ip 4 | apiVersion: v1 5 | kind: Service 6 | metadata: 7 | name: {{ .Release.Name }}-lb 8 | labels: 9 | chain: eth 10 | {{ include "parity.labels" . 
| indent 4 }} 11 | spec: 12 | type: LoadBalancer 13 | {{ if .Values.externalLBIP }} 14 | loadBalancerIP: {{ .Values.externalLBIP }} 15 | {{ end }} 16 | {{- if .Values.externalLBSourceRanges }} 17 | loadBalancerSourceRanges: 18 | {{- range $val := .Values.externalLBSourceRanges }} 19 | - {{ $val -}} 20 | {{ end }} 21 | {{ end }} 22 | ports: 23 | - name: {{ .Values.service.rpcPortName }} 24 | port: {{ .Values.service.rpcPort }} 25 | targetPort: {{ .Values.service.rpcPortName }} 26 | - name: {{ .Values.service.wsPortName }} 27 | port: {{ .Values.service.wsPort }} 28 | targetPort: {{ .Values.service.wsPortName }} 29 | selector: 30 | app.kubernetes.io/name: {{ include "parity.name" . }} 31 | app.kubernetes.io/instance: {{ .Release.Name }} 32 | {{ end }} 33 | -------------------------------------------------------------------------------- /charts/parity/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ .Release.Name }}-service 5 | labels: 6 | chain: eth 7 | {{ include "parity.labels" . | indent 4 }} 8 | spec: 9 | type: {{ .Values.service.type }} 10 | ports: 11 | {{- toYaml .Values.service.ports | nindent 4 }} 12 | selector: 13 | app.kubernetes.io/name: {{ include "parity.name" . }} 14 | app.kubernetes.io/instance: {{ .Release.Name }} 15 | -------------------------------------------------------------------------------- /charts/parity/templates/statefulset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: {{ include "parity.fullname" . }} 5 | labels: 6 | {{ include "parity.labels" . | indent 4 }} 7 | spec: 8 | serviceName: "{{ .Release.Name }}-service" 9 | replicas: {{ .Values.replicaCount }} # by default is 1 10 | selector: 11 | matchLabels: 12 | app.kubernetes.io/name: {{ include "parity.name" . 
}} 13 | app.kubernetes.io/instance: {{ .Release.Name }} 14 | parity/chain: {{ .Values.parity.chain }} 15 | template: 16 | metadata: 17 | labels: 18 | app.kubernetes.io/name: {{ include "parity.name" . }} 19 | app.kubernetes.io/instance: {{ .Release.Name }} 20 | parity/chain: {{ .Values.parity.chain }} 21 | annotations: 22 | checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} 23 | spec: 24 | {{- with .Values.securityContext }} 25 | securityContext: 26 | {{- toYaml . | nindent 8 }} 27 | {{- end }} 28 | {{- with .Values.nodeSelector }} 29 | nodeSelector: 30 | {{- toYaml . | nindent 8 }} 31 | {{- end }} 32 | {{- with .Values.affinity }} 33 | affinity: 34 | {{- toYaml . | nindent 8 }} 35 | {{- end }} 36 | {{- with .Values.tolerations }} 37 | tolerations: 38 | {{- toYaml . | nindent 8 }} 39 | {{- end }} 40 | {{- with .Values.imagePullSecrets }} 41 | imagePullSecrets: 42 | {{- toYaml . | nindent 8 }} 43 | {{- end }} 44 | terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} 45 | containers: 46 | - name: {{ .Chart.Name }} 47 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 48 | imagePullPolicy: {{ .Values.image.pullPolicy }} 49 | {{- if .Values.parityCmdOverride }} 50 | command: [ "{{ .Values.parityCmd }}" ] 51 | {{- end }} 52 | {{- if and .Values.externalLBp2p .Values.externalLBp2pIP }} 53 | args: ["--config=/config/parity.toml","--nat","extip:{{- .Values.externalLBp2pIP -}}"] 54 | {{- else }} 55 | args: ["--config=/config/parity.toml"] 56 | {{- end }} 57 | workingDir: "{{ .Values.parity.base_path }}" 58 | resources: 59 | {{- toYaml .Values.resources | nindent 10 }} 60 | ports: 61 | {{- range $val := .Values.service.ports }} 62 | - containerPort: {{ $val.port }} 63 | name: "{{ $val.name }}" 64 | protocol: {{ $val.protocol | default "TCP" }} 65 | {{- end }} 66 | volumeMounts: 67 | - name: parity-config 68 | mountPath: /config 69 | - name: scripts 70 | mountPath: /scripts 71 | - name: parity-pvc 72 
| mountPath: /data 73 | livenessProbe: 74 | exec: 75 | command: 76 | - /bin/bash 77 | - /scripts/check_node_health.sh 78 | - http://127.0.0.1:{{ .Values.service.rpcPort }} 79 | - "300" 80 | - last_synced_block.txt 81 | initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} 82 | periodSeconds: {{ .Values.livenessProbe.periodSeconds }} 83 | timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} 84 | successThreshold: {{ .Values.livenessProbe.successThreshold }} 85 | failureThreshold: {{ .Values.livenessProbe.failureThreshold }} 86 | volumes: 87 | - name: parity-config 88 | configMap: 89 | name: "{{ .Release.Name }}-config" 90 | - name: scripts 91 | configMap: 92 | name: "{{ .Release.Name }}-scripts" 93 | volumeClaimTemplates: 94 | - metadata: 95 | name: parity-pvc 96 | spec: 97 | accessModes: 98 | - {{.Values.persistence.accessMode }} 99 | {{- if .Values.persistence.storageClass }} 100 | {{- if (eq "-" .Values.persistence.storageClass) }} 101 | storageClassName: "" 102 | {{- else }} 103 | storageClassName: "{{ .Values.persistence.storageClass }}" 104 | {{- end }} 105 | {{- end }} 106 | resources: 107 | requests: 108 | storage: {{ .Values.persistence.size }} 109 | volumeMode: Filesystem 110 | -------------------------------------------------------------------------------- /charts/parity/values.yaml: -------------------------------------------------------------------------------- 1 | # Default values for parity. 2 | # This is a YAML-formatted file. 3 | # Declare variables to be passed into your templates. 
4 | 5 | replicaCount: 1 6 | 7 | terminationGracePeriodSeconds: 180 8 | 9 | image: 10 | repository: parity/parity 11 | tag: v2.5.13-stable 12 | pullPolicy: IfNotPresent 13 | 14 | imagePullSecrets: [] 15 | nameOverride: "" 16 | fullnameOverride: "" 17 | 18 | # Don't use spaces or special chars 19 | parity: 20 | chain: "mainnet" 21 | base_path: "/data" 22 | footprint: 23 | cache_size_db: 4096 24 | cache_size_blocks: 200 25 | cache_size_queue: 200 26 | cache_size_state: 250 27 | cache_size: 6144 28 | db_compaction: "ssd" 29 | pruning: "archive" 30 | pruning_history: 64 31 | pruning_memory: 32 32 | network: 33 | min_peers: 250 34 | max_peers: 300 35 | # f.e. ["enode://...@203.0.113.1:30303","enode://...@203.0.113.2:30304", "enode://...@203.0.113.3:30304"] 36 | bootnodes: [] 37 | misc: 38 | logging: info 39 | 40 | livenessProbe: 41 | initialDelaySeconds: 300 42 | periodSeconds: 300 43 | timeoutSeconds: 10 44 | successThreshold: 1 45 | failureThreshold: 2 46 | 47 | service: 48 | type: ClusterIP 49 | rpcPortName: &rpcPortName jsonrpc 50 | rpcPort: &rpcPort 8545 51 | wsPort: &wsPort 8546 52 | wsPortName: &wsPortName web-socket 53 | p2pPort0: &p2pPort0 30303 54 | p2pPortName0: &p2pPortName0 p2p 55 | p2pPortProtocol0: &p2pPortProtocol0 TCP 56 | p2pPort1: &p2pPort1 30303 57 | p2pPortName1: &p2pPortName1 p2p-discovery 58 | p2pPortProtocol1: &p2pPortProtocol1 UDP 59 | ports: 60 | - port: *rpcPort 61 | name: *rpcPortName 62 | - port: *wsPort 63 | name: *wsPortName 64 | - port: *p2pPort0 65 | name: *p2pPortName0 66 | protocol: *p2pPortProtocol0 67 | - port: *p2pPort1 68 | name: *p2pPortName1 69 | protocol: *p2pPortProtocol1 70 | 71 | parityCmdOverride: false 72 | parityCmd: "" 73 | 74 | externalLB: false 75 | externalLBIP: "" 76 | externalLBSourceRanges: {} 77 | # - 198.51.100.1/32 78 | # - 198.51.100.2/32 79 | 80 | externalLBp2p: false 81 | externalLBp2pIP: 203.0.113.0 82 | 83 | externalLBp2pDiscovery: false 84 | externalLBp2pDiscoveryIP: 203.0.113.0 85 | 86 | internalLB: false 
87 | internalLBIP: "" 88 | 89 | persistence: 90 | enabled: true 91 | # storageClass: "standard" 92 | accessMode: ReadWriteOnce 93 | size: "100Gi" 94 | 95 | 96 | ingress: 97 | enabled: false 98 | annotations: {} 99 | # kubernetes.io/ingress.class: nginx 100 | # kubernetes.io/tls-acme: "true" 101 | hosts: 102 | - host: chart-example.local 103 | paths: [] 104 | 105 | tls: [] 106 | # - secretName: chart-example-tls 107 | # hosts: 108 | # - chart-example.local 109 | 110 | resources: 111 | requests: 112 | cpu: "2800m" 113 | memory: "10000Mi" 114 | limits: 115 | cpu: "3000m" 116 | memory: "12000Mi" 117 | 118 | securityContext: 119 | runAsUser: 1000 120 | runAsGroup: 1000 121 | fsGroup: 1000 122 | 123 | 124 | nodeSelector: {} 125 | 126 | tolerations: [] 127 | 128 | affinity: 129 | podAntiAffinity: 130 | preferredDuringSchedulingIgnoredDuringExecution: 131 | - weight: 100 132 | podAffinityTerm: 133 | labelSelector: 134 | matchLabels: 135 | app.kubernetes.io/name: "parity" 136 | parity/chain: "mainnet" 137 | topologyKey: failure-domain.beta.kubernetes.io/zone 138 | 139 | 140 | -------------------------------------------------------------------------------- /charts/theta/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | .vscode/ 23 | -------------------------------------------------------------------------------- /charts/theta/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | appVersion: "1.2.0" 3 | name: theta 4 | description: Theta cryptonode 5 | engine: gotpl 6 | home: https://www.thetatoken.org 7 | keywords: 8 | - theta 9 | - cryptocurrency 10 | - blockchain 11 | maintainers: 12 | - email: av@dysnix.com 13 | name: voron 14 | sources: 15 | - https://github.com/kubernetes/charts 16 | - https://github.com/blockchain-etl/docker-theta 17 | version: 0.1.12 18 | -------------------------------------------------------------------------------- /charts/theta/templates/_config.yaml: -------------------------------------------------------------------------------- 1 | # Theta configuration 2 | # https://github.com/thetatoken/theta-protocol-ledger/blob/master/common/config.go 3 | p2p: 4 | port: {{ .Values.configurationFile.p2p.port | default "50001" }} 5 | seeds: {{ .Values.configurationFile.p2p.seeds | default "" }} 6 | opt: 0 7 | seedPeerOnlyOutbound: "true" 8 | minNumPeers: 5 9 | maxNumPeers: 10 10 | 11 | rpc: 12 | enabled: true 13 | address: {{ .Values.configurationFile.rpc.address | default "127.0.0.1" }} 14 | port: {{ .Values.configurationFile.rpc.port | default "16888" }} 15 | 16 | storage: 17 | # true by default; when set to true, the node performs state pruning, which can effectively reduce disk space consumption 18 | {{- if .Values.configurationFile.storage.statePruningEnabled }} 19 | statePruningEnabled: true 20 | {{ else }} 21 | statePruningEnabled: false 22 | {{- end }} 23 | # the pruning interval (in terms of blocks), which controls how frequently the pruning
procedure is activated 24 | statePruningInterval: 16 25 | # the number of blocks prior to the latest finalized block whose corresponding state trees need to be retained 26 | statePruningRetainedBlocks: 512 27 | log: 28 | levels: "*:info" 29 | 30 | sync: 31 | messageQueueSize: 512 32 | 33 | consensus: 34 | messageQueueSize: 512 35 | -------------------------------------------------------------------------------- /charts/theta/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "theta.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If the release name contains the chart name it will be used as the full name. 13 | */}} 14 | {{- define "theta.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "theta.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Common labels 36 | */}} 37 | {{- define "theta.labels" -}} 38 | app.kubernetes.io/name: {{ include "theta.name" . }} 39 | helm.sh/chart: {{ include "theta.chart" .
}} 40 | app.kubernetes.io/instance: {{ .Release.Name }} 41 | {{- if .Chart.AppVersion }} 42 | app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} 43 | {{- end }} 44 | app.kubernetes.io/managed-by: {{ .Release.Service }} 45 | {{- end -}} 46 | 47 | {{/* 48 | Create the name of the service account to use 49 | */}} 50 | {{- define "theta.serviceAccountName" -}} 51 | {{- if .Values.serviceAccount.create -}} 52 | {{ default (include "theta.fullname" .) .Values.serviceAccount.name }} 53 | {{- else -}} 54 | {{ default "default" .Values.serviceAccount.name }} 55 | {{- end -}} 56 | {{- end -}} 57 | -------------------------------------------------------------------------------- /charts/theta/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: "{{ .Release.Name }}-config" 5 | labels: 6 | {{ include "theta.labels" . | indent 4 }} 7 | data: 8 | config.yaml: | 9 | {{- include (print $.Template.BasePath "/_config.yaml") . | nindent 4 }} 10 | -------------------------------------------------------------------------------- /charts/theta/templates/ilb.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.internalLB }} 2 | ## use this to aggregate internal pods behind 3 | ## a single load balanced IP 4 | apiVersion: v1 5 | kind: Service 6 | metadata: 7 | name: {{ .Release.Name }}-ilb 8 | labels: 9 | {{ include "theta.labels" . 
| indent 4 }} 10 | annotations: 11 | cloud.google.com/load-balancer-type: "Internal" 12 | cloud.google.com/network-tier: "PREMIUM" 13 | spec: 14 | type: LoadBalancer 15 | {{ if .Values.internalLBIP }} 16 | loadBalancerIP: {{ .Values.internalLBIP }} 17 | {{ end }} 18 | ports: 19 | - name: {{ .Values.service.rpcPortName }} 20 | port: {{ .Values.service.rpcPort }} 21 | targetPort: {{ .Values.service.rpcPortName }} 22 | - name: {{ .Values.service.p2pPortName }} 23 | port: {{ .Values.service.p2pPort }} 24 | targetPort: {{ .Values.service.p2pPortName }} 25 | selector: 26 | app.kubernetes.io/name: {{ include "theta.name" . }} 27 | app.kubernetes.io/instance: {{ .Release.Name }} 28 | {{ end }} 29 | -------------------------------------------------------------------------------- /charts/theta/templates/lb.yaml: -------------------------------------------------------------------------------- 1 | {{ if .Values.externalLB }} 2 | ## only use this if you want to expose 3 | ## json services to a public ip 4 | apiVersion: v1 5 | kind: Service 6 | metadata: 7 | name: {{ .Release.Name }}-lb 8 | labels: 9 | chain: {{ .Values.theta.chain }} 10 | {{ include "theta.labels" . | indent 4 }} 11 | spec: 12 | type: LoadBalancer 13 | {{ if .Values.externalLBIP }} 14 | loadBalancerIP: {{ .Values.externalLBIP }} 15 | {{ end }} 16 | {{- if .Values.externalLBSourceRanges }} 17 | loadBalancerSourceRanges: 18 | {{- range $val := .Values.externalLBSourceRanges }} 19 | - {{ $val -}} 20 | {{ end }} 21 | {{ end }} 22 | ports: 23 | - name: {{ .Values.service.rpcPortName }} 24 | port: {{ .Values.service.rpcPort }} 25 | targetPort: {{ .Values.service.rpcPortName }} 26 | - name: {{ .Values.service.p2pPortName }} 27 | port: {{ .Values.service.p2pPort }} 28 | targetPort: {{ .Values.service.p2pPortName }} 29 | selector: 30 | app.kubernetes.io/name: {{ include "theta.name" . 
}} 31 | app.kubernetes.io/instance: {{ .Release.Name }} 32 | {{ end }} 33 | -------------------------------------------------------------------------------- /charts/theta/templates/secret.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Secret 3 | metadata: 4 | name: "{{ .Release.Name }}-secret" 5 | labels: 6 | {{ include "theta.labels" . | indent 4 }} 7 | type: Opaque 8 | data: 9 | node_passwd: {{ .Values.theta.node_passwd | b64enc }} 10 | -------------------------------------------------------------------------------- /charts/theta/templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ .Release.Name }}-service 5 | labels: 6 | {{ include "theta.labels" . | indent 4 }} 7 | spec: 8 | ports: 9 | - name: {{ .Values.service.rpcPortName }} 10 | port: {{ .Values.service.rpcPort }} 11 | targetPort: {{ .Values.service.rpcPortName }} 12 | - name: {{ .Values.service.p2pPortName }} 13 | port: {{ .Values.service.p2pPort }} 14 | targetPort: {{ .Values.service.p2pPortName }} 15 | selector: 16 | app.kubernetes.io/name: {{ include "theta.name" . }} 17 | app.kubernetes.io/instance: {{ .Release.Name }} 18 | -------------------------------------------------------------------------------- /charts/theta/templates/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.serviceAccount.create -}} 2 | apiVersion: v1 3 | kind: ServiceAccount 4 | metadata: 5 | name: {{ template "theta.serviceAccountName" . }} 6 | labels: 7 | {{ include "theta.labels" . 
| indent 4 }} 8 | {{- end -}} 9 | -------------------------------------------------------------------------------- /charts/theta/templates/statefulset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: {{ include "theta.fullname" . }} 5 | labels: 6 | {{ include "theta.labels" . | indent 4 }} 7 | spec: 8 | serviceName: "{{ .Release.Name }}-service" 9 | replicas: {{ .Values.replicaCount }} # by default is 1 10 | selector: 11 | matchLabels: 12 | app.kubernetes.io/name: {{ include "theta.name" . }} 13 | app.kubernetes.io/instance: {{ .Release.Name }} 14 | template: 15 | metadata: 16 | labels: 17 | app.kubernetes.io/name: {{ include "theta.name" . }} 18 | app.kubernetes.io/instance: {{ .Release.Name }} 19 | annotations: 20 | checksum/configmap.yaml: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} 21 | checksum/secret.yaml: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} 22 | spec: 23 | {{- with .Values.securityContext }} 24 | securityContext: 25 | {{- toYaml . | nindent 8 }} 26 | {{- end }} 27 | {{- with .Values.nodeSelector }} 28 | nodeSelector: 29 | {{- toYaml . | nindent 8 }} 30 | {{- end }} 31 | {{- with .Values.affinity }} 32 | affinity: 33 | {{- toYaml . | nindent 8 }} 34 | {{- end }} 35 | {{- with .Values.tolerations }} 36 | tolerations: 37 | {{- toYaml . | nindent 8 }} 38 | {{- end }} 39 | {{- with .Values.imagePullSecrets }} 40 | imagePullSecrets: 41 | {{- toYaml . | nindent 8 }} 42 | {{- end }} 43 | terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} 44 | serviceAccountName: {{ template "theta.serviceAccountName" . 
}} 45 | securityContext: 46 | {{- toYaml .Values.podSecurityContext | nindent 8 }} 47 | containers: 48 | - name: {{ .Chart.Name }} 49 | securityContext: 50 | {{- toYaml .Values.securityContext | nindent 10 }} 51 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 52 | imagePullPolicy: {{ .Values.image.pullPolicy }} 53 | args: ["start", "--config", "{{ .Values.theta.base_path }}/validator/","--password","$(node_passwd)"] 54 | envFrom: 55 | - secretRef: 56 | name: "{{ .Release.Name }}-secret" 57 | workingDir: "{{ .Values.theta.base_path }}" 58 | resources: 59 | {{- toYaml .Values.resources | nindent 10 }} 60 | ports: 61 | - containerPort: {{ .Values.service.rpcPort }} 62 | name: "{{ .Values.service.rpcPortName }}" 63 | protocol: "TCP" 64 | - containerPort: {{ .Values.service.p2pPort }} 65 | name: "{{ .Values.service.p2pPortName }}" 66 | protocol: "TCP" 67 | volumeMounts: 68 | - name: theta-pvc 69 | mountPath: {{ .Values.theta.base_path }} 70 | livenessProbe: 71 | exec: 72 | command: 73 | - /bin/bash 74 | - /scripts/check_node_health.sh 75 | - "{{ .Values.theta.base_path }}" 76 | - "{{ .Values.theta.maxHealthyAge }}" 77 | - last_synced_block.txt 78 | initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} 79 | periodSeconds: {{ .Values.livenessProbe.periodSeconds }} 80 | timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} 81 | successThreshold: {{ .Values.livenessProbe.successThreshold }} 82 | failureThreshold: {{ .Values.livenessProbe.failureThreshold }} 83 | lifecycle: 84 | preStop: 85 | exec: 86 | # no need to poll for shutdown progress: we send SIGINT to PID 1, then sleep so k8s sees the shutdown as still in progress 87 | command: 88 | - /bin/sh 89 | - -c 90 | - "kill -s INT 1; sleep {{ .Values.terminationGracePeriodSeconds }}" 91 | initContainers: 92 | # copy integration 93 | - name: copy-integration 94 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 95 | imagePullPolicy: {{ .Values.image.pullPolicy }} 96 | 
command: ['sh', '-c', 'cp -a /opt/theta/integration/mainnet/validator {{ .Values.theta.base_path }}'] 97 | volumeMounts: 98 | - name: theta-pvc 99 | mountPath: {{ .Values.theta.base_path }} 100 | 101 | {{- if .Values.configurationFile }} 102 | # copy config 103 | - name: copy-theta-config 104 | image: busybox 105 | command: ['sh', '-c', 'cp /config/{{ .Values.theta.configurationFileName }} {{ .Values.theta.base_path }}/validator/{{ .Values.theta.configurationFileName }}'] 106 | volumeMounts: 107 | - name: theta-config 108 | mountPath: /config 109 | - name: theta-pvc 110 | mountPath: {{ .Values.theta.base_path }} 111 | {{- end }} 112 | volumes: 113 | {{- if .Values.configurationFile }} 114 | - name: theta-config 115 | configMap: 116 | name: "{{ .Release.Name }}-config" 117 | {{- end }} 118 | volumeClaimTemplates: 119 | - metadata: 120 | name: theta-pvc 121 | spec: 122 | accessModes: 123 | - {{.Values.persistence.accessMode }} 124 | {{- if .Values.persistence.storageClass }} 125 | {{- if (eq "-" .Values.persistence.storageClass) }} 126 | storageClassName: "" 127 | {{- else }} 128 | storageClassName: "{{ .Values.persistence.storageClass }}" 129 | {{- end }} 130 | {{- end }} 131 | resources: 132 | requests: 133 | storage: {{ .Values.persistence.size }} 134 | volumeMode: Filesystem 135 | -------------------------------------------------------------------------------- /charts/theta/values.yaml: -------------------------------------------------------------------------------- 1 | # Default values for theta. 2 | # This is a YAML-formatted file. 3 | # Declare variables to be passed into your templates. 
4 | terminationGracePeriodSeconds: 30 5 | 6 | theta: 7 | configurationFileName: config.yaml 8 | base_path: /theta/mainnet 9 | node_passwd: changemeASAP 10 | 11 | configurationFile: 12 | p2p: 13 | port: "50001" 14 | seeds: "18.217.234.19:21000,3.16.9.73:21000,18.223.85.230:21000,18.216.45.28:21000,18.191.140.202:21000" 15 | rpc: 16 | port: "16888" 17 | address: "127.0.0.1" 18 | storage: 19 | # we need all the data 20 | statePruningEnabled: false 21 | 22 | replicaCount: 1 23 | 24 | image: 25 | repository: blockchainetl/theta 26 | tag: 1.2.0 27 | pullPolicy: IfNotPresent 28 | 29 | service: 30 | rpcPortName: rpc 31 | rpcPort: 16888 32 | p2pPort: 50001 33 | p2pPortName: p2p 34 | 35 | externalLB: false 36 | externalLBIP: "" 37 | externalLBSourceRanges: [] 38 | # - 203.0.113.2/32 39 | # - 203.0.113.3/32 40 | 41 | #externalLBp2p: false 42 | internalLB: false 43 | internalLBIP: "" 44 | 45 | persistence: 46 | enabled: true 47 | # storageClass: "default" 48 | accessMode: ReadWriteOnce 49 | size: "500Gi" 50 | 51 | ## Configure resource requests and limits 52 | ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ 53 | ## 54 | resources: 55 | requests: 56 | cpu: "500m" 57 | memory: "1000Mi" 58 | limits: 59 | cpu: "3000m" 60 | memory: "2000Mi" 61 | 62 | imagePullSecrets: [] 63 | nameOverride: "" 64 | fullnameOverride: "" 65 | 66 | serviceAccount: 67 | # Specifies whether a service account should be created 68 | create: true 69 | # The name of the service account to use.
70 | # If not set and create is true, a name is generated using the fullname template 71 | name: 72 | 73 | podSecurityContext: 74 | runAsNonRoot: true 75 | runAsUser: 1000 76 | runAsGroup: 1000 77 | fsGroup: 1000 78 | 79 | securityContext: 80 | capabilities: 81 | drop: 82 | - ALL 83 | readOnlyRootFilesystem: true 84 | allowPrivilegeEscalation: false 85 | 86 | #podSecurityContext: {} 87 | # fsGroup: 2000 88 | 89 | #securityContext: {} 90 | # capabilities: 91 | # drop: 92 | # - ALL 93 | # readOnlyRootFilesystem: true 94 | # runAsNonRoot: true 95 | # runAsUser: 1000 96 | 97 | livenessProbe: 98 | # effectively disable check 99 | initialDelaySeconds: "1000000000" 100 | periodSeconds: 600 101 | timeoutSeconds: 500 102 | successThreshold: 1 103 | failureThreshold: 2 104 | 105 | ingress: 106 | enabled: false 107 | annotations: {} 108 | # kubernetes.io/ingress.class: nginx 109 | # kubernetes.io/tls-acme: "true" 110 | hosts: 111 | - host: chart-example.local 112 | paths: [] 113 | 114 | tls: [] 115 | # - secretName: chart-example-tls 116 | # hosts: 117 | # - chart-example.local 118 | 119 | #resources: {} 120 | # We usually recommend not to specify default resources and to leave this as a conscious 121 | # choice for the user. This also increases chances charts run on environments with little 122 | # resources, such as Minikube. If you do want to specify resources, uncomment the following 123 | # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
124 | # limits: 125 | # cpu: 100m 126 | # memory: 128Mi 127 | # requests: 128 | # cpu: 100m 129 | # memory: 128Mi 130 | 131 | nodeSelector: {} 132 | 133 | tolerations: [] 134 | 135 | affinity: {} 136 | -------------------------------------------------------------------------------- /cloudbuild.md: -------------------------------------------------------------------------------- 1 | We use GCP [Cloud Build](https://cloud.google.com/cloud-build/docs/) to package the charts in this repo and push the result to a GCS bucket. 2 | Check the [How-to guides](https://cloud.google.com/cloud-build/docs/how-to) for common Cloud Build use cases. 3 | In this manual we explain our Cloud Build configuration step by step. 4 | ### Why do we need a build? 5 | [Helm](https://helm.sh) can deploy releases into [Kubernetes](https://k8s.io) from the file system or from repositories. 6 | A common practice is to add an external/remote chart repository to helm and use that repo for deploys and updates. 7 | Thus we build our charts once and they become usable by everyone from the chart repository, and we have a standard way to deliver updates - 8 | just push a new chart version into the chart repo. Cloud Build, like other CI/CD systems, offloads this part from developers to machines. 9 | ### cloudbuild.yaml review 10 | We use a single Cloud Build manifest with [substitutions](https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values) to package 2 charts - bitcoind and parity. 11 | The structure of our manifest is the following: 12 | 1. `steps`. Build steps; packaging and pushing happen here. Files are passed between steps 13 | 2. `substitutions`. Variables with default values that can be overridden during manual or automatically triggered builds 14 | 3. `options`. Various build options 15 | 4. `artifacts`.
Artifacts [are used](https://cloud.google.com/cloud-build/docs/configuring-builds/store-images-artifacts#storing_artifacts_in) 16 | to upload build results such as binaries, archives, text files etc. to permanent storage or a repository 17 | 18 | Let's dive in: 19 | #### steps 20 | We use an in-project helm cloud builder image just to speed up builds thanks to the smaller image. It's the image from the 21 | [GCP helm cloud builder](https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/helm), but `gcloud-slim`-based. 22 | You may use [this manual](https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/helm#building-this-builder) 23 | to build your own helm cloud builder image. 24 | 25 | Here is a description of the steps: 26 | 1. `helm package` 27 | 1. install the [helm-gcs plugin](https://github.com/hayorov/helm-gcs) to support GCS buckets as chart repository storage 28 | 1. configure helm to use the chart repository from substituted environment variables 29 | 1. package the chart into an archive 30 | 1. `add version plugin`. Install the [helm-local-chart-version](https://github.com/mbenabda/helm-local-chart-version) plugin 31 | 1. `save version to artifact`. Get the chart version and save it to a file 32 | 1. `helm push`. Push the packaged chart to the chart repository and update the repository metadata 33 | #### substitutions 34 | * `_REGISTRY` base registry domain, use `eu.gcr.io` to save bandwidth with EU deployments 35 | * `_ENV` environment we build the chart for. Used with artifacts 36 | * `_CHART_NAME` bitcoind or parity 37 | * `_HELM_REPO_NAME` local name of the chart repository 38 | * `_HELM_REPO_URL` chart repository URL we push the packaged chart to 39 | * `_ARTIFACT_URL` GCS bucket path Cloud Build uploads artifacts to on success 40 | * `_ARTIFACT_FILENAME` file name to store the latest built chart version 41 | #### options 42 | * `env` var `SKIP_CLUSTER_CONFIG` is required by the helm cloud builder to skip kubectl context configuration. It's added to every step.
43 | Thus you don't need a working GKE cluster in the project where you run builds from this cloudbuild manifest. 44 | #### artifacts 45 | * `location` - path where Cloud Build uploads artifacts on success. We use `_ENV` as part of the path to store artifacts for each environment separately 46 | * `paths` - list of files/directories to upload. We store only a single file with the chart version 47 | ### Manual Cloud Build usage 48 | 1. Please meet the requirements from [this readme](README.md) 49 | 1. Activate the Cloud Build API on the GCP side: 50 | ```bash 51 | export GCP_PROJECT_ID=$(gcloud config get-value project) 52 | gcloud services enable cloudbuild.googleapis.com --project=${GCP_PROJECT_ID} 53 | ``` 54 | 1. Create GCS buckets to store the chart repository and artifacts. We use a single bucket in this manual to store both. 55 | ```bash 56 | export HELM_REPO_BUCKET=${GCP_PROJECT_ID}-helm-repo 57 | gsutil mb -p ${GCP_PROJECT_ID} -c standard gs://${HELM_REPO_BUCKET} 58 | export HELM_REPO_URL=gs://${HELM_REPO_BUCKET}/charts/ 59 | export ARTIFACT_URL=gs://${HELM_REPO_BUCKET}/versions/ 60 | ``` 61 | 1. [Build](https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/helm#building-this-builder) the 62 | GCP [helm cloud builder](https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/helm) to have this builder image in your project 63 | 1. Install the [helm-gcs plugin](https://github.com/hayorov/helm-gcs) 64 | ```bash 65 | # 0.2.2 is the last version with helm 2 support 66 | helm plugin install https://github.com/hayorov/helm-gcs --version 0.2.2 67 | ``` 68 | 1. Init the chart repository in the GCS bucket. We have to do this once for every new chart repository 69 | ```bash 70 | helm gcs init ${HELM_REPO_URL} 71 | ``` 72 | 1. Clone this git repo and change directory to the cloned repo root 73 | 1. Run Cloud Build. We build the `parity` chart in this example: 74 | ```bash 75 | gcloud builds submit --config=cloudbuild.yaml .
--project=${GCP_PROJECT_ID} --substitutions=_HELM_REPO_URL=${HELM_REPO_URL},_ARTIFACT_URL=${ARTIFACT_URL},_CHART_NAME=parity,_ARTIFACT_FILENAME=parity-chart-version 76 | ``` 77 | Check the console output for `REMOTE BUILD OUTPUT`; it may take some time to finish. You may hit errors during the build, here are some examples: 78 | * "not a valid chart repository" 79 | ```bash 80 | Step #0 - "helm package": Error: Looks like "gs://..." is not a valid chart repository or cannot be reached: plugin "scripts/pull.sh" exited with error 81 | Finished Step #0 - "helm package" 82 | ERROR 83 | ``` 84 | It looks like the init of the GCS chart repo failed or wasn't performed on the path specified in the error text. Retry the chart repo init and check that the repo paths match. 85 | * "chart already indexed" 86 | ```bash 87 | Step #3 - "helm push": chart parity-0.1.38 already indexed. Use --force to still upload the chart 88 | Step #3 - "helm push": Error: plugin "gcs" exited with error 89 | Finished Step #3 - "helm push" 90 | ERROR 91 | ERROR: build step 3 "gcr.io/.../helm" failed: exit status 1 92 | ``` 93 | It means this chart version already exists in the repo. Increasing the chart version at the `version:` line of the `Chart.yaml` file is the recommended way to solve this. 94 | Then rerun `gcloud builds ...`. 95 | 96 | You may get further support [here](https://cloud.google.com/cloud-build/docs/getting-support) 97 | ### Cloud Build trigger configuration 98 | Usually you need to trigger Cloud Build on every push to your Github repo. Take note - Cloud Build is a paid service, 99 | check the [pricing](https://cloud.google.com/cloud-build/pricing) before proceeding. 100 | 101 | Use the official [docs](https://cloud.google.com/cloud-build/docs/running-builds/create-manage-triggers) to configure triggers, 102 | [Github](https://cloud.google.com/cloud-build/docs/create-github-app-triggers) triggers especially.
We just emphasize the key points: 103 | * create one trigger per resulting chart 104 | * use `Included files filter (glob)`, for example `charts/bitcoind/**` for the bitcoind chart, to trigger the corresponding build only when related files are changed. 105 | * specify `Substitution variables`, at least 106 | 1. `_HELM_REPO_URL` 107 | 1. `_ARTIFACT_URL` 108 | 1. `_CHART_NAME` 109 | 1. `_ARTIFACT_FILENAME` 110 | ### Teardown / cleanup 111 | You may need to clean things up after you've stopped using Cloud Build. Here is a checklist: 112 | * helm cloud builder images in the container registry inside your project 113 | * chart repo and artifacts in GCS bucket(s) 114 | * "_cloudbuild" GCS bucket storing code for manual Cloud Build submits 115 | * Cloud Build trigger(s) and/or connected repositories 116 | -------------------------------------------------------------------------------- /cloudbuild.yaml: -------------------------------------------------------------------------------- 1 | steps: 2 | - name: '${_REGISTRY}/$PROJECT_ID/helm' 3 | args: 4 | - package 5 | - charts/$_CHART_NAME 6 | id: 'helm package' 7 | env: 8 | - GCS_PLUGIN_VERSION=0.2.2 9 | - HELMFILE_VERSION=v0.85.3 10 | - HELM_REPO_NAME=$_HELM_REPO_NAME 11 | - HELM_REPO_URL=$_HELM_REPO_URL 12 | 13 | - name: '${_REGISTRY}/$PROJECT_ID/helm' 14 | args: 15 | - plugin 16 | - install 17 | - https://github.com/mbenabda/helm-local-chart-version 18 | - --version 19 | - v0.0.6 20 | id: 'add version plugin' 21 | 22 | - name: '${_REGISTRY}/$PROJECT_ID/helm' 23 | entrypoint: 'bash' 24 | args: 25 | - -c 26 | - helm local-chart-version get -c charts/$_CHART_NAME > $_ARTIFACT_FILENAME 27 | id: 'save version to artifact' 28 | 29 | - name: '${_REGISTRY}/$PROJECT_ID/helm' 30 | entrypoint: 'bash' 31 | args: 32 | - -c 33 | - helm gcs push ${_CHART_NAME}-$(cat $_ARTIFACT_FILENAME).tgz $_HELM_REPO_NAME 34 | id: 'helm push' 35 | 36 | # disable caching on repo index, https://github.com/helm/helm/issues/2453#issuecomment-301904742 37 | - name:
gcr.io/cloud-builders/gsutil 38 | args: 39 | - setmeta 40 | - -h 41 | - "Cache-Control:private, max-age=0, no-transform" 42 | - ${_HELM_REPO_URL}/index.yaml 43 | 44 | substitutions: 45 | _REGISTRY: gcr.io 46 | _ENV: dev 47 | _CHART_NAME: bitcoind 48 | _HELM_REPO_NAME: blockchain-k8s 49 | _HELM_REPO_URL: gs://blockchain-k8s/charts 50 | _ARTIFACT_URL: gs://artifacts-/versions/ 51 | _ARTIFACT_FILENAME: bitcoind-chart-version 52 | options: 53 | env: 54 | - SKIP_CLUSTER_CONFIG=true 55 | 56 | artifacts: 57 | objects: 58 | location: ${_ARTIFACT_URL}/${_ENV}/ 59 | paths: 60 | - $_ARTIFACT_FILENAME 61 | 62 | -------------------------------------------------------------------------------- /example-values-bitcoind.yaml: -------------------------------------------------------------------------------- 1 | configurationFile: 2 | rpcuser: "rpcuser" 3 | rpcpassword: "rpcpassword" 4 | custom: |- 5 | txindex=1 6 | 7 | persistence: 8 | enabled: true 9 | storageClass: "standard-regional-us-central1-bc" 10 | size: "500Gi" 11 | -------------------------------------------------------------------------------- /example-values-parity.yaml: -------------------------------------------------------------------------------- 1 | externalLBp2p: true 2 | externalLBp2pIP: 203.0.113.0 3 | externalLBp2pDiscovery: true 4 | externalLBp2pDiscoveryIP: 203.0.113.0 5 | 6 | persistence: 7 | enabled: true 8 | storageClass: "ssd-regional-us-central1-bc" 9 | size: "1500Gi" 10 | 11 | parity: 12 | network: 13 | chain: "mainnet" 14 | # ETH mainnet bootnodes only 15 | # pick up bootnodes from https://github.com/ethereum/go-ethereum/blob/master/params/bootnodes.go 16 | bootnodes: 17 | - "enode://d860a01f9722d78051619d1e2351aba3f43f943f6f00718d1b9baa4101932a1f5011f16bb2b1bb35db20d6fe28fa0bf09636d26a87d31de9ec6203eeedb1f666@18.138.108.67:30303" 18 | - "enode://22a8232c3abc76a16ae9d6c3b164f98775fe226f0917b0ca871128a74a8e9630b458460865bab457221f1d448dd9791d24c4e5d88786180ac185df813a68d4de@3.209.45.79:30303" 19 | - 
"enode://ca6de62fce278f96aea6ec5a2daadb877e51651247cb96ee310a318def462913b653963c155a0ef6c7d50048bba6e6cea881130857413d9f50a621546b590758@34.255.23.113:30303" 20 | - "enode://279944d8dcd428dffaa7436f25ca0ca43ae19e7bcf94a8fb7d1641651f92d121e972ac2e8f381414b80cc8e5555811c2ec6e1a99bb009b3f53c4c69923e11bd8@35.158.244.151:30303" 21 | - "enode://8499da03c47d637b20eee24eec3c356c9a2e6148d6fe25ca195c7949ab8ec2c03e3556126b0d7ed644675e78c4318b08691b7b57de10e5f0d40d05b09238fa0a@52.187.207.27:30303" 22 | - "enode://103858bdb88756c71f15e9b5e09b56dc1be52f0a5021d46301dbbfb7e130029cc9d0d6f73f693bc29b665770fff7da4d34f3c6379fe12721b5d7a0bcb5ca1fc1@191.234.162.198:30303" 23 | - "enode://715171f50508aba88aecd1250af392a45a330af91d7b90701c436b618c86aaa1589c9184561907bebbb56439b8f8787bc01f49a7c77276c58c1b09822d75e8e8@52.231.165.108:30303" 24 | - "enode://5d6d7cd20d6da4bb83a1d28cadb5d409b64edf314c0335df658c1a54e32c7c4a7ab7823d57c39b6a757556e68ff1df17c748b698544a55cb488b52479a92b60f@104.42.217.25:30303" 25 | - "enode://979b7fa28feeb35a4741660a16076f1943202cb72b6af70d327f053e248bab9ba81760f39d0701ef1d8f89cc1fbd2cacba0710a12cd5314d5e0c9021aa3637f9@5.1.83.226:30303" 26 | 27 | 28 | -------------------------------------------------------------------------------- /gke.md: -------------------------------------------------------------------------------- 1 | Here are commands to create GKE cluster 2 | ```bash 3 | gcloud config set compute/region us-central1 4 | gcloud config set compute/zone us-central1-b 5 | 6 | 7 | gcloud services enable compute.googleapis.com 8 | gcloud services enable container.googleapis.com 9 | 10 | 11 | export PROJECT_ID=$(gcloud config get-value project) 12 | export SA_NAME=baas-gke-nodes 13 | export CLUSTER_NAME=baas0 14 | 15 | export MASTER_ZONE=us-central1-b 16 | export NODE_LOCATIONS="us-central1-c,us-central1-b" 17 | export K8S_CONTEXT=baas0 18 | 19 | gcloud iam service-accounts create $SA_NAME \ 20 | --display-name="baas gke nodes" 21 | 22 | gcloud projects 
add-iam-policy-binding $PROJECT_ID \ 23 | --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ 24 | --role roles/logging.logWriter 25 | 26 | gcloud projects add-iam-policy-binding ${PROJECT_ID} \ 27 | --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ 28 | --role roles/monitoring.metricWriter 29 | 30 | gcloud projects add-iam-policy-binding ${PROJECT_ID} \ 31 | --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \ 32 | --role roles/monitoring.viewer 33 | 34 | gcloud container clusters create $CLUSTER_NAME --create-subnetwork name=${CLUSTER_NAME}-0 --num-nodes 1 --enable-autoscaling --max-nodes=1 --min-nodes=1 --machine-type=n1-highmem-4 --preemptible --cluster-version latest --enable-network-policy --enable-autorepair --enable-ip-alias --enable-master-authorized-networks --master-authorized-networks 198.51.100.3/32,203.0.113.7/32 --no-enable-basic-auth --zone=$MASTER_ZONE --node-locations="$NODE_LOCATIONS" --service-account="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" --project=$PROJECT_ID 35 | 36 | gcloud projects add-iam-policy-binding $PROJECT_ID --member=user:$(gcloud config get-value core/account) --role=roles/container.admin 37 | gcloud container clusters get-credentials $CLUSTER_NAME --project=$PROJECT_ID 38 | 39 | kubectl config rename-context gke_${PROJECT_ID}_${MASTER_ZONE}_${CLUSTER_NAME} $K8S_CONTEXT 40 | 41 | kubectl --context $K8S_CONTEXT create clusterrolebinding cluster-admin-binding --clusterrole cluster-admin --user $(gcloud config get-value core/account) 42 | 43 | ``` 44 | -------------------------------------------------------------------------------- /helm-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: tiller 5 | namespace: kube-system 6 | --- 7 | kind: ClusterRoleBinding 8 | apiVersion: rbac.authorization.k8s.io/v1 9 | metadata: 10 | name: 
tiller-clusterrolebinding 11 | subjects: 12 | - kind: ServiceAccount 13 | name: tiller 14 | namespace: kube-system 15 | roleRef: 16 | kind: ClusterRole 17 | name: cluster-admin 18 | apiGroup: "rbac.authorization.k8s.io" 19 | -------------------------------------------------------------------------------- /helm.md: -------------------------------------------------------------------------------- 1 | Install the helm agent (tiller) into the cluster and adjust RBAC 2 | ```bash 3 | helm init 4 | kubectl create -f helm-rbac.yaml 5 | kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}' 6 | ``` 7 | -------------------------------------------------------------------------------- /ops.md: -------------------------------------------------------------------------------- 1 | Here is a short HOWTO to maintain and troubleshoot GKE and cryptonodes. 2 | 3 | You may also use the [official doc](https://cloud.google.com/kubernetes-engine/docs/troubleshooting) to troubleshoot a GKE cluster. 4 | 5 | We assume the kubectl default context is configured correctly to connect to the required cluster.
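You can verify that assumption before running any of the commands below; `baas0` is the context name created in [gke.md](gke.md), so substitute your own if it differs:

```shell
# show the context kubectl will use for the commands below
kubectl config current-context
# list all configured contexts and switch if needed
kubectl config get-contexts
kubectl config use-context baas0
```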
6 | 7 | ### Diagnosis and troubleshooting 8 | Locate the required pods 9 | ```bash 10 | # get the namespaces list 11 | kubectl get ns 12 | # get the pods list in namespace dev-eth-0 13 | kubectl -n dev-eth-0 get pod 14 | ``` 15 | Check pod logs (`-f` to follow) 16 | ```bash 17 | kubectl -n dev-eth-0 logs -f dev-eth0-parity-0 --tail=10 18 | ``` 19 | Check pod info to troubleshoot startup problems, liveness check problems, etc: 20 | ```bash 21 | kubectl -n dev-eth-0 describe pod dev-eth0-parity-0 22 | ``` 23 | Restart a pod in case it hangs 24 | ```bash 25 | kubectl -n dev-eth-0 delete pod dev-eth0-parity-0 26 | ``` 27 | Check the allocated disk size 28 | ```bash 29 | kubectl -n dev-eth-0 get pvc 30 | ``` 31 | Open a shell in the container to check/troubleshoot from inside 32 | ```bash 33 | kubectl -n dev-eth-0 exec -it dev-eth0-parity-0 bash 34 | ``` 35 | 36 | Get blockchain-specific logs 37 | * parity 38 | ```bash 39 | kubectl -n dev-eth-0 exec -it dev-eth0-parity-0 bash 40 | tail -f parity.log 41 | ``` 42 | * bitcoind 43 | ```bash 44 | kubectl -n dev-btc-0 exec -it dev-btc0-bitcoind-0 bash 45 | tail -f debug.log 46 | ``` 47 | 48 | Get the current block count 49 | * parity, look after the `Syncing` word 50 | ```bash 51 | kubectl -n dev-eth-0 logs --tail=10 dev-eth0-parity-0 52 | ``` 53 | 54 | * bitcoind 55 | ```bash 56 | kubectl -n dev-btc-0 exec -it dev-btc0-bitcoind-0 bash 57 | bitcoin-cli -datadir=/data getblockcount 58 | ``` 59 | 60 | Get the peers count 61 | * parity, look before the `peers` word 62 | ```bash 63 | kubectl -n dev-eth-0 logs --tail=10 dev-eth0-parity-0 64 | ``` 65 | * bitcoind, it should be 8+ connections 66 | ```bash 67 | kubectl -n dev-btc-0 exec -it dev-btc0-bitcoind-0 bash 68 | bitcoin-cli -datadir=/data -getinfo|grep connections 69 | ``` 70 | ### Upgrade cryptonode version 71 | Let's assume you need to upgrade parity from `v2.5.8-stable` to `v2.5.10-stable`.
Here is what you need to do: 72 | * update the `values-parity.yaml` you used before to deploy parity, or create a new `values-parity.yaml` file with the following content 73 | ```yaml 74 | image: 75 | repository: parity/parity 76 | tag: v2.5.10-stable 77 | ``` 78 | * upgrade the parity helm release in the cluster; we use a release named `dev-eth0` in the example below 79 | ```bash 80 | cd blockchain-kubernetes 81 | helm upgrade dev-eth0 charts/parity/ --reuse-values --force --atomic --values values-parity.yaml 82 | ``` 83 | 84 | ### Snapshot disk with blockchain 85 | Let's assume you need to snapshot the disk from pod `dev-eth0-parity-0` in namespace `dev-eth-0`. First we need to find which disk is actually used by this pod 86 | ```bash 87 | kubectl -n dev-eth-0 describe pod dev-eth0-parity-0 88 | ``` 89 | Check the output for Volumes; in this case it is 90 | ```yaml 91 | Volumes: 92 | parity-pvc: 93 | Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) 94 | ClaimName: parity-pvc-dev-eth0-parity-0 95 | ReadOnly: false 96 | ``` 97 | We get `ClaimName: parity-pvc-dev-eth0-parity-0`; now we need to find the corresponding `PersistentVolume`: 98 | ```bash 99 | kubectl -n dev-eth-0 describe pvc parity-pvc-dev-eth0-parity-0 100 | ``` 101 | Check the output for Volume; in this case it is 102 | ```yaml 103 | Volume: pvc-d0846f83-df05-11e9-8a31-42010a8001be 104 | ``` 105 | And now we need to get the disk name from the PV 106 | ```bash 107 | kubectl describe pv pvc-d0846f83-df05-11e9-8a31-42010a8001be 108 | ``` 109 | Check the output for `Source` 110 | ```yaml 111 | Source: 112 | Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine) 113 | PDName: gke-baas0-fff79c5e-dyn-pvc-d0846f83-df05-11e9-8a31-42010a8001be 114 | ``` 115 | We get `gke-baas0-fff79c5e-dyn-pvc-d0846f83-df05-11e9-8a31-42010a8001be`; that's the name of the disk we need to snapshot.
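The PVC → PV → disk lookup chain can also be collapsed into a single jsonpath query; this is a sketch assuming the same example namespace and PVC name as above:

```shell
# resolve PVC -> PV -> GCE persistent disk name in one shot
NS=dev-eth-0
PVC=parity-pvc-dev-eth0-parity-0
# the PVC's spec.volumeName is the bound PV
PV=$(kubectl -n "$NS" get pvc "$PVC" -o jsonpath='{.spec.volumeName}')
# the PV's gcePersistentDisk source names the actual GCE disk
kubectl get pv "$PV" -o jsonpath='{.spec.gcePersistentDisk.pdName}'
```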
116 | You may use the [official doc](https://cloud.google.com/compute/docs/disks/create-snapshots) to create a snapshot; here is a quick example command to do so 117 | ```bash 118 | gcloud compute disks snapshot gke-baas0-fff79c5e-dyn-pvc-d0846f83-df05-11e9-8a31-42010a8001be --snapshot-names=dev-eth0-parity 119 | ``` 120 | It may be better to stop the blockchain node [to get a consistent snapshot with high probability](https://cloud.google.com/compute/docs/disks/snapshot-best-practices). Here is how you can do it, for example, with an eth node: 121 | ```bash 122 | kubectl -n dev-eth-0 scale statefulset dev-eth0-parity --replicas=0 123 | ``` 124 | Wait 1 minute and then create the snapshot. Use the following command to start the node again: 125 | ```bash 126 | kubectl -n dev-eth-0 scale statefulset dev-eth0-parity --replicas=1 127 | ``` 128 | You may need to convert your snapshot to an image, for example to share it 129 | ```bash 130 | gcloud compute images create parity-2019-10-16 --source-snapshot=dev-eth0-parity 131 | ``` 132 | ### Provision cryptonode with pre-existing image 133 | When someone has shared a pre-synced cryptonode disk image with you, you can create a new disk from this image and use it with your cryptonode; here is how.
134 | * (optional) copy the disk image to your project 135 | ```bash 136 | gcloud compute images create parity-2-5-5-2019-10-16 --source-image=parity-2-5-5-2019-10-16 --source-image-project= 137 | ``` 138 | Now we have two options - a single zone disk or a regional disk; choose one 139 | #### Single zone disk 140 | * just create an SSD disk from the image; pay attention to the zone, it must be the same as your GKE cluster's 141 | ```bash 142 | gcloud compute disks create parity-0 --type pd-ssd --zone us-central1-b --image=parity-2-5-5-2019-10-16 --image-project= 143 | ``` 144 | #### Regional disk 145 | Because `Creating a regional disk from a source image is not supported yet.`, we need to perform this task in intermediate steps 146 | * create a single zone standard disk from the image 147 | ```bash 148 | gcloud compute disks create parity-tmp --type pd-standard --zone us-central1-b --image=parity-2-5-5-2019-10-16 --image-project= 149 | ``` 150 | * create a snapshot from the disk we just created. We can then create a regional disk from the snapshot in the corresponding region 151 | ```bash 152 | gcloud compute disks snapshot parity-tmp --snapshot-names=parity-2-5-5-2019-10-16 --storage-location=us-central1 --zone us-central1-b 153 | ``` 154 | * remove the standard disk, we don't need it anymore 155 | ```bash 156 | gcloud compute disks delete parity-tmp --zone us-central1-b 157 | ``` 158 | * create a regional SSD disk from the snapshot; use the same `replica-zones` as your GKE cluster 159 | ```bash 160 | gcloud compute disks create parity-0 --type pd-ssd --region us-central1 --replica-zones=us-central1-c,us-central1-b --source-snapshot=parity-2-5-5-2019-10-16 161 | ``` 162 | #### Attaching disk to cryptonode pod 163 | Now we need to create a [PersistentVolume in Kubernetes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) (PV) and use this PV with the [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) (PVC) we already have.
164 | * adjust [pv.yaml](pv.yaml) (1 zone disk) or [pv-r.yaml](pv-r.yaml) (regional disk) with your disk name, zones etc. In this manual we assume you already have the required storage classes from the cryptonode deployment. 165 | * create the PV via one of the following commands: 166 | ```bash 167 | kubectl create -f pv.yaml 168 | # or 169 | kubectl create -f pv-r.yaml 170 | ``` 171 | * shut down the cryptonode: 172 | ```bash 173 | kubectl -n prod-eth-0 scale statefulset prod-eth0-parity --replicas=0 174 | ``` 175 | give it some time to shut down; you can monitor it with `kubectl -n prod-eth-0 get pod -w` 176 | 177 | * replace the existing PVC with a copy pointing at the new disk name `parity-0`; I use the `parity-pvc-prod-eth1-parity-0` PVC in the `prod-eth-1` namespace in the example below: 178 | ```bash 179 | # backup just in case 180 | kubectl -n prod-eth-1 get pvc parity-pvc-prod-eth1-parity-0 -o yaml > parity-pvc-prod-eth1-parity-0.yaml 181 | kubectl -n prod-eth-1 get pvc parity-pvc-prod-eth1-parity-0 -o json|jq '.spec.volumeName="parity-0"'| kubectl -n prod-eth-1 replace --force -f - 182 | ``` 183 | * start the cryptonode up and check the logs 184 | ```bash 185 | kubectl -n prod-eth-0 scale statefulset prod-eth0-parity --replicas=1 186 | kubectl -n prod-eth-0 get pod -w 187 | kubectl -n prod-eth-0 logs -f prod-eth0-parity-0 188 | ``` 189 | -------------------------------------------------------------------------------- /pv-r.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | annotations: 5 | kubernetes.io/createdby: gce-pd-dynamic-provisioner 6 | pv.kubernetes.io/bound-by-controller: "yes" 7 | pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd 8 | finalizers: 9 | - kubernetes.io/pv-protection 10 | labels: 11 | failure-domain.beta.kubernetes.io/region: us-central1 12 | failure-domain.beta.kubernetes.io/zone: us-central1-c__us-central1-b 13 | name: parity-0 14 | spec: 15 | accessModes: 16 | -
ReadWriteOnce 17 | capacity: 18 | storage: 5000Gi 19 | gcePersistentDisk: 20 | fsType: ext4 21 | pdName: parity-0 22 | nodeAffinity: 23 | required: 24 | nodeSelectorTerms: 25 | - matchExpressions: 26 | - key: failure-domain.beta.kubernetes.io/zone 27 | operator: In 28 | values: 29 | - us-central1-c 30 | - us-central1-b 31 | - key: failure-domain.beta.kubernetes.io/region 32 | operator: In 33 | values: 34 | - us-central1 35 | persistentVolumeReclaimPolicy: Delete 36 | storageClassName: ssd-regional-us-central1-bc 37 | volumeMode: Filesystem 38 | -------------------------------------------------------------------------------- /pv.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | annotations: 5 | kubernetes.io/createdby: gce-pd-dynamic-provisioner 6 | pv.kubernetes.io/bound-by-controller: "yes" 7 | pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd 8 | finalizers: 9 | - kubernetes.io/pv-protection 10 | labels: 11 | failure-domain.beta.kubernetes.io/region: us-central1 12 | failure-domain.beta.kubernetes.io/zone: us-central1-c 13 | name: parity-0 14 | spec: 15 | accessModes: 16 | - ReadWriteOnce 17 | capacity: 18 | storage: 5000Gi 19 | gcePersistentDisk: 20 | fsType: ext4 21 | pdName: parity-1 22 | nodeAffinity: 23 | required: 24 | nodeSelectorTerms: 25 | - matchExpressions: 26 | - key: failure-domain.beta.kubernetes.io/zone 27 | operator: In 28 | values: 29 | - us-central1-c 30 | - key: failure-domain.beta.kubernetes.io/region 31 | operator: In 32 | values: 33 | - us-central1 34 | persistentVolumeReclaimPolicy: Delete 35 | storageClassName: ssd 36 | volumeMode: Filesystem 37 | -------------------------------------------------------------------------------- /resources.md: -------------------------------------------------------------------------------- 1 | Cryptonodes require resources to perform initial sync and to stay in sync. 

We'll focus on these resources:
* CPU
* Memory
* Disk size
* Disk IOPS and latency

We run full archive nodes with traces/txindex wherever possible, so the figures below are the maximum cryptonode requirements. A full (but not archive) node requires fewer resources.
Data is current as of the end of 2019.
Let's start with [parity](https://www.parity.io/ethereum/).

## Parity
Parity 2.5/2.6 loves memory. And it absolutely loves low disk latency, which cloud disks, including SSD, don't deliver well.
Simply increasing IOPS may not help: disk access is more or less single-threaded during sync, so it may be limited by IO latency rather than IOPS.
A local NVMe disk will do its job for chains like ETC, but its size is usually not enough for ETH mainnet.
Here is a hack to speed up the initial sync: get an instance with tons of RAM and preload the synced blockchain into the OS cache.
My case was 640GB of RAM and a blockchain preload from inside the container via `find | xargs cat > /dev/null` or [vmtouch](https://github.com/hoytech/vmtouch/):
a 3-5x speedup from 0.5-2 blocks/sec (100-200 tx/sec) to 7-10 blocks/sec (700-1000 tx/sec) and sustained blockchain writes near 150MB/s, for just $1/hour with preemptible nodes.
Get a pre-synced snapshot when you can :)

### Initial sync

| Chain | CPU req/lim | Memory req/lim | Disk size | Disk IOPS | Disk latency|
|-------|-------------|----------------|-----------|-----------|-------------|
|ETH mainnet|2/4|20G/30G|4TB SSD|1000+|as low as you can get|
|ETC|2/4|20G/30G|600GB SSD|1000+|as low as you can get|
|Kovan|2/4|20G/30G|500GB SSD|1000+|as low as you can get|

### Keep chain synced
You may use fewer resources to keep a chain synced, except on ETH mainnet, which requires even more resources than during the initial sync.

| Chain | CPU req/lim | Memory req/lim | Disk size | Disk IOPS | Disk latency|
|-------|-------------|----------------|-----------|-----------|-------------|
|ETH mainnet|2/4|20G/30G|4TB SSD|2000+|as low as you can get|
|ETC|0.3/1|15G/20G|600GB SSD|100+|low|
|Kovan|2/4|10G/15G|500GB SSD|100+|low|

## Bitcoind-like nodes
All the bitcoind-like cryptonodes have similar requirements.

### Initial sync
It's better to use SSD for BTC and BCH during the initial sync or a reindex.

| Chain | CPU req/lim | Memory req/lim | Disk size | Disk IOPS | Disk latency|
|-------|-------------|----------------|-----------|-----------|-------------|
| BTC|1/2|2G/3G|400GB SSD|500+|low|
| BCH|1/2|2G/3G|250GB SSD|500+|low|
| DASH|1/2|2G/3G|30GB HDD|50+|medium|
| DOGE|1/2|2G/3G|50GB HDD|50+|medium|
| LTC|1/2|2G/3G|40GB HDD|50+|medium|
| ZCASH|1/2|2G/3G|40GB HDD|50+|medium|

### Keep chain synced
All these nodes require ~0.01 CPU to keep the chain in sync. You'll need more CPU to start up, warm up, serve RPC requests etc.

| Chain | CPU req/lim | Memory req/lim | Disk size | Disk IOPS | Disk latency|
|-------|-------------|----------------|-----------|-----------|-------------|
| BTC|0.1/1|2G/3G|400GB HDD|30+|medium|
| BCH|0.1/1|0.5G/1G|250GB HDD|30+|medium|
| DASH|0.1/1|1G/2G|30GB HDD|30+|medium|
| DOGE|0.1/1|2G/3G|50GB HDD|30+|medium|
| LTC|0.1/1|1G/2G|40GB HDD|30+|medium|
| ZCASH|0.1/1|2G/3G|40GB HDD|30+|medium|

## Theta
You may use a [pre-synced snapshot](https://mainnet-data.thetatoken.org/snapshot) from Theta to bootstrap your node. But here are the resource requirements to sync from scratch.
### Initial sync
| Chain | CPU req/lim | Memory req/lim | Disk size | Disk IOPS | Disk latency|
|-------|-------------|----------------|-----------|-----------|-------------|
|mainnet|1.5/2.2|1.5G/2G|350GB SSD|1200+|low|

### Keep chain synced
TBD

| Chain | CPU req/lim | Memory req/lim | Disk size | Disk IOPS | Disk latency|
|-------|-------------|----------------|-----------|-----------|-------------|
|mainnet|0|0|0|0|-|
--------------------------------------------------------------------------------
/sc-ssd-regional.yaml:
--------------------------------------------------------------------------------
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-regional-us-central1-bc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
  zones: us-central1-b, us-central1-c
--------------------------------------------------------------------------------
/sc-ssd.yaml:
--------------------------------------------------------------------------------
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
--------------------------------------------------------------------------------
/sc-standard-regional.yaml:
--------------------------------------------------------------------------------
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-regional-us-central1-bc
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: regional-pd
  zones: us-central1-b, us-central1-c
--------------------------------------------------------------------------------
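
The storage class and PV manifests in this repository can be applied with plain `kubectl`. A minimal sketch, assuming the YAML files sit in the repository root and your `kubectl` context points at the target GKE cluster (file names are the ones above; no other names are invented):

```shell
# Create the storage classes (cluster-wide, one-time step)
kubectl apply -f sc-ssd.yaml
kubectl apply -f sc-ssd-regional.yaml
kubectl apply -f sc-standard-regional.yaml

# Create a PV backed by a pre-existing GCE persistent disk
# (single-zone variant; use pv-r.yaml for a regional disk)
kubectl create -f pv.yaml

# Verify: the PV stays "Available" until a PVC binds it
kubectl get storageclass
kubectl get pv parity-0
```

Note that the PVs above use `persistentVolumeReclaimPolicy: Delete`, so deleting a bound PVC will also delete the underlying GCE disk; switch to `Retain` if the disk must outlive the claim.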