├── Chart.yaml
├── charts
│   ├── zookeeper
│   │   ├── OWNERS
│   │   ├── .helmignore
│   │   ├── templates
│   │   │   ├── poddisruptionbudget.yaml
│   │   │   ├── NOTES.txt
│   │   │   ├── config-jmx-exporter.yaml
│   │   │   ├── service-headless.yaml
│   │   │   ├── service.yaml
│   │   │   ├── _helpers.tpl
│   │   │   ├── servicemonitors.yaml
│   │   │   ├── job-chroots.yaml
│   │   │   ├── config-script.yaml
│   │   │   └── statefulset.yaml
│   │   ├── Chart.yaml
│   │   ├── README.md
│   │   └── values.yaml
│   ├── cassandra
│   │   ├── OWNERS
│   │   ├── sample
│   │   │   └── create-storage-gce.yaml
│   │   ├── .helmignore
│   │   ├── templates
│   │   │   ├── configmap.yaml
│   │   │   ├── pdb.yaml
│   │   │   ├── servicemonitor.yaml
│   │   │   ├── service.yaml
│   │   │   ├── _helpers.tpl
│   │   │   ├── NOTES.txt
│   │   │   └── statefulset.yaml
│   │   ├── Chart.yaml
│   │   ├── values.yaml
│   │   └── README.md
│   └── solr
│       ├── requirements.yaml
│       ├── requirements.lock
│       ├── Chart.yaml
│       ├── templates
│       │   ├── poddisruptionbudget.yaml
│       │   ├── service-headless.yaml
│       │   ├── service.yaml
│       │   ├── NOTES.txt
│       │   ├── solr-xml-configmap.yaml
│       │   ├── exporter-deployment.yaml
│       │   ├── _helpers.tpl
│       │   └── statefulset.yaml
│       ├── values.yaml
│       └── README.md
├── apache-maven.sh
├── requirements.yaml
├── .helmignore
├── templates
│   ├── _helpers.tpl
│   ├── service.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── deployment.yaml
│   └── configmap.yaml
├── README.md
├── Dockerfile
└── values.yaml
-------------------------------------------------------------------------------- /Chart.yaml: --------------------------------------------------------------------------------
1 | apiVersion: v1
2 | description: A Helm chart for deploying Apache Atlas on Kubernetes
3 | name: atlas
4 | version: 0.1.0
-------------------------------------------------------------------------------- /charts/zookeeper/OWNERS: --------------------------------------------------------------------------------
1 | approvers:
2 | - lachie83
3 | - kow3ns
4 | reviewers:
5 | - lachie83
6 | - kow3ns
7 | 
-------------------------------------------------------------------------------- /charts/cassandra/OWNERS: --------------------------------------------------------------------------------
1 | approvers:
2 | - KongZ
3 | - maver1ck
4 | - maorfr
5 | reviewers:
6 | - KongZ
7 | - maver1ck
8 | - maorfr
9 | 
-------------------------------------------------------------------------------- /charts/solr/requirements.yaml: --------------------------------------------------------------------------------
1 | ---
2 | 
3 | dependencies:
4 |   - name: zookeeper
5 |     version: 1.2.2
6 |     repository: "https://kubernetes-charts-incubator.storage.googleapis.com/"
7 | 
-------------------------------------------------------------------------------- /apache-maven.sh: --------------------------------------------------------------------------------
1 | export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
2 | export M2_HOME=/usr/local/apache-maven
3 | export MAVEN_HOME=/usr/local/apache-maven
4 | export PATH=${M2_HOME}/bin:${PATH}
5 | 
-------------------------------------------------------------------------------- /charts/cassandra/sample/create-storage-gce.yaml: --------------------------------------------------------------------------------
1 | kind: StorageClass
2 | apiVersion: storage.k8s.io/v1
3 | metadata:
4 |   name: generic
5 | provisioner: kubernetes.io/gce-pd
6 | parameters:
7 |   type: pd-ssd
8 | 
-------------------------------------------------------------------------------- /requirements.yaml: --------------------------------------------------------------------------------
1 | dependencies:
2 |   - name: zookeeper
3 |     version: "2.1.3"
4 |     repository: "file://../zookeeper"
5 |   - name: cassandra
6 |     version: "0.13.3"
7 |     repository: "file://../cassandra"
8 |   - name: solr
9 |     version: "1.3.3"
10 |     repository: "file://../solr"
11 | 
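With local `file://` dependencies like these, the subcharts must be packaged into the parent's `charts/` directory before the chart can be installed. This repo already ships them there, so the step below is only needed after editing a subchart; a minimal sketch, run from the chart root:

```sh
# Re-package the file:// dependencies into charts/ and sanity-check the chart
helm dependency update .
helm lint .
```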
-------------------------------------------------------------------------------- /charts/solr/requirements.lock: -------------------------------------------------------------------------------- 1 | dependencies: 2 | - name: zookeeper 3 | repository: https://kubernetes-charts-incubator.storage.googleapis.com/ 4 | version: 1.2.2 5 | digest: sha256:535c0850e71490a52df2686fc1f0b3c737535388df646e1eefe1f0a76999283e 6 | generated: 2019-02-05T15:07:14.273428Z 7 | -------------------------------------------------------------------------------- /charts/cassandra/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | # Common backup files 9 | *.swp 10 | *.bak 11 | *.tmp 12 | *~ 13 | # Various IDEs 14 | .project 15 | .idea/ 16 | *.tmproj 17 | OWNERS 18 | -------------------------------------------------------------------------------- /.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /charts/zookeeper/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line. 4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /charts/solr/Chart.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | apiVersion: "v1" 4 | name: "solr" 5 | version: "1.3.3" 6 | appVersion: "7.7.2" 7 | description: "A helm chart to install Apache Solr: http://lucene.apache.org/solr/" 8 | keywords: 9 | - "solr" 10 | home: "http://lucene.apache.org/solr/" 11 | sources: 12 | - "https://gitbox.apache.org/repos/asf?p=lucene-solr.git" 13 | maintainers: 14 | - name: "ian-thebridge-lucidworks" 15 | email: "ian.thebridge@lucidworks.com" 16 | -------------------------------------------------------------------------------- /charts/cassandra/templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.configOverrides }} 2 | kind: ConfigMap 3 | apiVersion: v1 4 | metadata: 5 | name: {{ template "cassandra.name" . }} 6 | namespace: {{ .Release.Namespace }} 7 | labels: 8 | app: {{ template "cassandra.name" . 
}} 9 | chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} 10 | release: {{ .Release.Name }} 11 | heritage: {{ .Release.Service }} 12 | data: 13 | {{ toYaml .Values.configOverrides | indent 2 }} 14 | {{- end }} 15 | -------------------------------------------------------------------------------- /charts/solr/templates/poddisruptionbudget.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: "policy/v1beta1" 3 | kind: "PodDisruptionBudget" 4 | metadata: 5 | name: "{{ include "solr.fullname" . }}" 6 | labels: 7 | {{ include "solr.common.labels" . | indent 4 }} 8 | spec: 9 | selector: 10 | matchLabels: 11 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 12 | app.kubernetes.io/instance: "{{ .Release.Name }}" 13 | app.kubernetes.io/component: "server" 14 | {{ toYaml .Values.podDisruptionBudget | indent 2 }} 15 | -------------------------------------------------------------------------------- /charts/solr/templates/service-headless.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | apiVersion: "v1" 4 | kind: "Service" 5 | metadata: 6 | name: "{{ include "solr.headless-service-name" . }}" 7 | labels: 8 | {{ include "solr.common.labels" . | indent 4 }} 9 | spec: 10 | clusterIP: "None" 11 | ports: 12 | - port: {{ .Values.port }} 13 | name: "solr-headless" 14 | selector: 15 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 16 | app.kubernetes.io/instance: "{{ .Release.Name }}" 17 | app.kubernetes.io/component: "server" 18 | -------------------------------------------------------------------------------- /charts/cassandra/templates/pdb.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.podDisruptionBudget -}} 2 | apiVersion: policy/v1beta1 3 | kind: PodDisruptionBudget 4 | metadata: 5 | labels: 6 | app: {{ template "cassandra.name" . }} 7 | chart: {{ .Chart.Name }}-{{ .Chart.Version }} 8 | heritage: {{ .Release.Service }} 9 | release: {{ .Release.Name }} 10 | name: {{ template "cassandra.fullname" . }} 11 | spec: 12 | selector: 13 | matchLabels: 14 | app: {{ template "cassandra.name" . }} 15 | release: {{ .Release.Name }} 16 | {{ toYaml .Values.podDisruptionBudget | indent 2 }} 17 | {{- end -}} 18 | -------------------------------------------------------------------------------- /charts/solr/templates/service.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | apiVersion: "v1" 4 | kind: "Service" 5 | metadata: 6 | name: "{{ include "solr.service-name" . }}" 7 | labels: 8 | {{ include "solr.common.labels" . | indent 4 }} 9 | annotations: 10 | {{ toYaml .Values.service.annotations | indent 4}} 11 | spec: 12 | type: "{{ .Values.service.type }}" 13 | ports: 14 | - port: {{ .Values.port }} 15 | name: "solr-client" 16 | selector: 17 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 18 | app.kubernetes.io/instance: "{{ .Release.Name }}" 19 | app.kubernetes.io/component: "server" 20 | -------------------------------------------------------------------------------- /templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | */}} 13 | {{- define "fullname" -}} 14 | {{- $name := default .Chart.Name .Values.nameOverride -}} 15 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 16 | {{- end -}} 17 | -------------------------------------------------------------------------------- /charts/zookeeper/templates/poddisruptionbudget.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: policy/v1beta1 2 | kind: PodDisruptionBudget 3 | metadata: 4 | name: {{ template "zookeeper.fullname" . }} 5 | labels: 6 | app: {{ template "zookeeper.name" . }} 7 | chart: {{ template "zookeeper.chart" . }} 8 | release: {{ .Release.Name }} 9 | heritage: {{ .Release.Service }} 10 | component: server 11 | spec: 12 | selector: 13 | matchLabels: 14 | app: {{ template "zookeeper.name" . }} 15 | release: {{ .Release.Name }} 16 | component: server 17 | {{ toYaml .Values.podDisruptionBudget | indent 2 }} 18 | -------------------------------------------------------------------------------- /templates/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: {{ template "fullname" . }} 5 | labels: 6 | app: {{ template "name" . }} 7 | chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} 8 | release: {{ .Release.Name }} 9 | heritage: {{ .Release.Service }} 10 | spec: 11 | type: {{ .Values.service.type }} 12 | ports: 13 | - port: {{ .Values.service.externalPort }} 14 | targetPort: {{ .Values.service.internalPort }} 15 | protocol: TCP 16 | name: {{ .Values.service.name }} 17 | selector: 18 | app: {{ template "name" . }} 19 | release: {{ .Release.Name }} 20 | -------------------------------------------------------------------------------- /charts/zookeeper/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | Thank you for installing ZooKeeper on your Kubernetes cluster. More information 2 | about ZooKeeper can be found at https://zookeeper.apache.org/doc/current/ 3 | 4 | Your connection string should look like: 5 | {{ template "zookeeper.fullname" . }}-0.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},{{ template "zookeeper.fullname" . }}-1.{{ template "zookeeper.fullname" . }}-headless:{{ .Values.service.ports.client.port }},... 6 | 7 | You can also use the client service {{ template "zookeeper.fullname" . }}:{{ .Values.service.ports.client.port }} to connect to an available ZooKeeper server. 8 | -------------------------------------------------------------------------------- /charts/zookeeper/Chart.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | name: zookeeper 3 | home: https://zookeeper.apache.org/ 4 | version: 2.1.3 5 | appVersion: 3.5.5 6 | kubeVersion: "^1.10.0-0" 7 | description: Centralized service for maintaining configuration information, naming, 8 | providing distributed synchronization, and providing group services. 
9 | icon: https://zookeeper.apache.org/images/zookeeper_small.gif
10 | sources:
11 |   - https://github.com/apache/zookeeper
12 |   - https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
13 | maintainers:
14 |   - name: lachie83
15 |     email: lachlan.evenson@microsoft.com
16 |   - name: kow3ns
17 |     email: owensk@google.com
18 | 
-------------------------------------------------------------------------------- /charts/solr/templates/NOTES.txt: --------------------------------------------------------------------------------
1 | Your Solr cluster has now been installed, and can be accessed in the following ways:
2 | 
3 | * Internally, within the kubernetes cluster, on:
4 | 
5 |     {{ template "solr.service-name" . }}.{{ .Release.Namespace }}:{{ .Values.port }}
6 | 
7 | * Externally, from outside the kubernetes cluster:
8 | 
9 |     export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "solr.name" . }},component=server,release={{ .Release.Name }}" -o jsonpath="{ .items[0].metadata.name }")
10 |     echo "Visit http://127.0.0.1:{{ .Values.port }} to access Solr"
11 |     kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.port }}:{{ .Values.port }}
12 | 
-------------------------------------------------------------------------------- /charts/cassandra/Chart.yaml: --------------------------------------------------------------------------------
1 | apiVersion: v1
2 | name: cassandra
3 | version: 0.13.3
4 | appVersion: 3.11.3
5 | description: Apache Cassandra is a free and open-source distributed database management
6 |   system designed to handle large amounts of data across many commodity servers, providing
7 |   high availability with no single point of failure.
8 | icon: https://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Cassandra_logo.svg/330px-Cassandra_logo.svg.png
9 | keywords:
10 |   - cassandra
11 |   - database
12 |   - nosql
13 | home: http://cassandra.apache.org
14 | maintainers:
15 |   - name: KongZ
16 |     email: goonohc@gmail.com
17 |   - name: maorfr
18 |     email: maor.friedman@redhat.com
19 | engine: gotpl
20 | 
-------------------------------------------------------------------------------- /charts/cassandra/templates/servicemonitor.yaml: --------------------------------------------------------------------------------
1 | {{- if and .Values.exporter.enabled .Values.exporter.servicemonitor }}
2 | apiVersion: monitoring.coreos.com/v1
3 | kind: ServiceMonitor
4 | metadata:
5 |   name: {{ template "cassandra.fullname" . }}
6 |   labels:
7 |     app: {{ template "cassandra.name" . }}
8 |     chart: {{ template "cassandra.chart" . }}
9 |     release: {{ .Release.Name }}
10 |     heritage: {{ .Release.Service }}
11 | spec:
12 |   jobLabel: {{ template "cassandra.name" . }}
13 |   endpoints:
14 |     - port: metrics
15 |       interval: 10s
16 |   selector:
17 |     matchLabels:
18 |       app: {{ template "cassandra.name" . }}
19 |   namespaceSelector:
20 |     any: true
21 | {{- end }}
22 | 
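The ServiceMonitor above only renders when both exporter flags are on. A minimal sketch of the corresponding override from the parent chart (the key names are taken from the template's `if` condition; a running Prometheus Operator in the cluster is an assumption):

```yaml
cassandra:
  exporter:
    enabled: true         # also exposes the metrics port on the service
    servicemonitor: true  # render the ServiceMonitor for Prometheus Operator
```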
-------------------------------------------------------------------------------- /charts/zookeeper/templates/config-jmx-exporter.yaml: --------------------------------------------------------------------------------
1 | {{- if .Values.exporters.jmx.enabled }}
2 | apiVersion: v1
3 | kind: ConfigMap
4 | metadata:
5 |   name: {{ .Release.Name }}-jmx-exporter
6 |   labels:
7 |     app: {{ template "zookeeper.name" . }}
8 |     chart: {{ template "zookeeper.chart" . }}
9 |     release: {{ .Release.Name }}
10 |     heritage: {{ .Release.Service }}
11 | data:
12 |   config.yml: |-
13 |     hostPort: 127.0.0.1:{{ .Values.env.JMXPORT }}
14 |     lowercaseOutputName: {{ .Values.exporters.jmx.config.lowercaseOutputName }}
15 |     rules:
16 | {{ .Values.exporters.jmx.config.rules | toYaml | indent 6 }}
17 |     ssl: false
18 |     startDelaySeconds: {{ .Values.exporters.jmx.config.startDelaySeconds }}
19 | {{- end }}
20 | 
-------------------------------------------------------------------------------- /charts/zookeeper/templates/service-headless.yaml: --------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: {{ template "zookeeper.headless" . }}
5 |   labels:
6 |     app: {{ template "zookeeper.name" . }}
7 |     chart: {{ template "zookeeper.chart" . }}
8 |     release: {{ .Release.Name }}
9 |     heritage: {{ .Release.Service }}
10 |   {{- if .Values.headless.annotations }}
11 |   annotations:
12 | {{ .Values.headless.annotations | toYaml | trimSuffix "\n" | indent 4 }}
13 |   {{- end }}
14 | spec:
15 |   clusterIP: None
16 |   ports:
17 |   {{- range $key, $port := .Values.ports }}
18 |     - name: {{ $key }}
19 |       port: {{ $port.containerPort }}
20 |       targetPort: {{ $key }}
21 |       protocol: {{ $port.protocol }}
22 |   {{- end }}
23 |   selector:
24 |     app: {{ template "zookeeper.name" . }}
25 |     release: {{ .Release.Name }}
26 | 
-------------------------------------------------------------------------------- /templates/ingress.yaml: --------------------------------------------------------------------------------
1 | {{- if .Values.ingress.enabled -}}
2 | {{- $serviceName := include "fullname" . -}}
3 | {{- $servicePort := .Values.service.externalPort -}}
4 | apiVersion: extensions/v1beta1
5 | kind: Ingress
6 | metadata:
7 |   name: {{ template "fullname" . }}
8 |   labels:
9 |     app: {{ template "name" . }}
10 |     chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
11 |     release: {{ .Release.Name }}
12 |     heritage: {{ .Release.Service }}
13 |   annotations:
14 |     {{- range $key, $value := .Values.ingress.annotations }}
15 |     {{ $key }}: {{ $value | quote }}
16 |     {{- end }}
17 | spec:
18 |   rules:
19 |     {{- range $host := .Values.ingress.hosts }}
20 |     - host: {{ $host }}
21 |       http:
22 |         paths:
23 |           - path: /
24 |             backend:
25 |               serviceName: {{ $serviceName }}
26 |               servicePort: {{ $servicePort }}
27 |     {{- end -}}
28 |   {{- if .Values.ingress.tls }}
29 |   tls:
30 | {{ toYaml .Values.ingress.tls | indent 4 }}
31 |   {{- end -}}
32 | {{- end -}}
33 | 
-------------------------------------------------------------------------------- /README.md: --------------------------------------------------------------------------------
1 | 
2 | # atlas-helm-chart
3 | 
4 | This is a Kubernetes Helm chart that deploys an Apache Atlas cluster on Azure Kubernetes Service. Atlas uses Solr for indexing and Cassandra as its backend storage.
5 | 
6 | # Prerequisites
7 | 
8 | This guide assumes that you have a running Kubernetes cluster in Azure Kubernetes Service; the chart works in other environments as well. Also make sure Helm is installed on the cluster (the commands below use Helm 2 syntax, e.g. `helm install --name`).
9 | 
10 | # Helm Chart Deployment
11 | 
12 | This Helm chart is derived from another GitHub repo: https://github.com/xmavrck/atlas-helm-chart
13 | 
14 | Let us go through the folder structure of the Helm chart. The charts folder contains the embedded solr, cassandra and zookeeper charts, which the parent atlas chart deploys as dependencies. Their settings can be overridden from the parent chart, as sketched below.
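15 | A minimal sketch of such overrides in the parent chart's values.yaml; the key names come from the respective subcharts' own values files (not all shown in this dump), so treat them as assumptions to verify there:
16 | 
17 | ```yaml
18 | cassandra:
19 |   config:
20 |     cluster_size: 3   # run a 3-node Cassandra ring
21 | zookeeper:
22 |   replicaCount: 3     # a 3-member ZooKeeper ensemble
23 | solr:
24 |   replicaCount: 2     # two Solr server pods
25 | ```
26 | 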
27 | To deploy the helm chart, clone the repo and run the command below:
28 | 
29 | ```sh
30 | helm install --name <release-name> atlas-helm-chart
31 | 
32 | # for example:
33 | helm install --name atlas atlas-helm-chart
34 | ```
35 | 
36 | This will run the solr, atlas, cassandra and zookeeper pods with the following versions:
37 | 
38 | * solr: 7.7.2
39 | * atlas: 2.1.1
40 | * cassandra: 3.11.3
41 | * zookeeper: 3.5.5
42 | 
43 | The Helm charts for Cassandra, Zookeeper and Solr are taken from the upstream Helm chart repositories.
44 | 
-------------------------------------------------------------------------------- /Dockerfile: --------------------------------------------------------------------------------
1 | FROM openjdk:8-jdk-alpine
2 | 
3 | # Install required packages for installation
4 | RUN apk add --no-cache \
5 |     bash \
6 |     su-exec \
7 |     python \
8 |     git \
9 |     maven
10 | 
11 | # Download Maven into /usr/local and unpack it.
12 | # Note: ADD from a URL does not auto-extract, and a `cd` in one RUN does not
13 | # persist into the next, so the extraction happens in a single RUN step.
14 | ADD https://www-eu.apache.org/dist/maven/maven-3/3.6.3/binaries/apache-maven-3.6.3-bin.tar.gz /usr/local/
15 | RUN cd /usr/local \
16 |     && tar -xzf apache-maven-3.6.3-bin.tar.gz \
17 |     && rm apache-maven-3.6.3-bin.tar.gz \
18 |     && ln -s /usr/local/apache-maven-3.6.3 /usr/local/apache-maven
19 | 
20 | # The profile script covers login shells; `RUN source ...` would not persist
21 | # across layers, so the build-time environment is set via ENV instead.
22 | COPY apache-maven.sh /etc/profile.d/
23 | ENV M2_HOME=/usr/local/apache-maven \
24 |     MAVEN_HOME=/usr/local/apache-maven \
25 |     MAVEN_OPTS="-Xms2g -Xmx2g" \
26 |     PATH=/usr/local/apache-maven/bin:$PATH
27 | 
28 | # The Atlas sources are cloned for reference; the prebuilt server tarball
29 | # added below is what actually ships in the image.
30 | RUN git clone https://git-wip-us.apache.org/repos/asf/atlas.git atlas \
31 |     && cd atlas \
32 |     && git checkout branch-2.0
33 | 
34 | ADD https://github.com/manjitsin/apache-atlas-setup/releases/download/2.0/apache-atlas-2.1.0-SNAPSHOT-server.tar.gz /
35 | 
36 | # Unarchive
37 | RUN set -x \
38 |     && cd / \
39 |     && tar -xvzf apache-atlas-2.1.0-SNAPSHOT-server.tar.gz \
40 |     && rm apache-atlas-2.1.0-SNAPSHOT-server.tar.gz
41 | 
42 | WORKDIR /apache-atlas-2.1.0-SNAPSHOT
43 | 
44 | EXPOSE 21000
45 | 
46 | ENV PATH=$PATH:/apache-atlas-2.1.0-SNAPSHOT/bin
47 | 
48 | CMD ["/bin/bash", "-c", "/apache-atlas-2.1.0-SNAPSHOT/bin/atlas_start.py; tail -fF /apache-atlas-2.1.0-SNAPSHOT/logs/application.log"]
49 | 
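To rebuild the image locally, a minimal sketch (the tag is arbitrary; since Atlas expects Cassandra, Solr and ZooKeeper to be reachable, running the container on its own is mainly a smoke test of the image):

```sh
docker build -t atlasimage:local .
docker run --rm -p 21000:21000 atlasimage:local
```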
-------------------------------------------------------------------------------- /charts/cassandra/templates/service.yaml: --------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: {{ template "cassandra.fullname" . }}
5 |   labels:
6 |     app: {{ template "cassandra.name" . }}
7 |     chart: {{ template "cassandra.chart" . }}
8 |     release: {{ .Release.Name }}
9 |     heritage: {{ .Release.Service }}
10 | spec:
11 |   clusterIP: None
12 |   type: {{ .Values.service.type }}
13 |   ports:
14 |   {{- if .Values.exporter.enabled }}
15 |     - name: metrics
16 |       port: 5556
17 |       targetPort: {{ .Values.exporter.port }}
18 |   {{- end }}
19 |     - name: intra
20 |       port: 7000
21 |       targetPort: 7000
22 |     - name: tls
23 |       port: 7001
24 |       targetPort: 7001
25 |     - name: jmx
26 |       port: 7199
27 |       targetPort: 7199
28 |     - name: cql
29 |       port: {{ default 9042 .Values.config.ports.cql }}
30 |       targetPort: {{ default 9042 .Values.config.ports.cql }}
31 |     - name: thrift
32 |       port: {{ default 9160 .Values.config.ports.thrift }}
33 |       targetPort: {{ default 9160 .Values.config.ports.thrift }}
34 |   {{- if .Values.config.ports.agent }}
35 |     - name: agent
36 |       port: {{ .Values.config.ports.agent }}
37 |       targetPort: {{ .Values.config.ports.agent }}
38 |   {{- end }}
39 |   selector:
40 |     app: {{ template "cassandra.name" . }}
41 |     release: {{ .Release.Name }}
42 | 
-------------------------------------------------------------------------------- /charts/zookeeper/templates/service.yaml: --------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 |   name: {{ template "zookeeper.fullname" . }}
5 |   labels:
6 |     app: {{ template "zookeeper.name" . }}
7 |     chart: {{ template "zookeeper.chart" . }}
8 |     release: {{ .Release.Name }}
9 |     heritage: {{ .Release.Service }}
10 |   {{- if .Values.service.annotations }}
11 |   annotations:
12 | {{- with .Values.service.annotations }}
13 | {{ toYaml . | indent 4 }}
14 | {{- end }}
15 |   {{- end }}
16 | spec:
17 |   type: {{ .Values.service.type }}
18 |   ports:
19 |   {{- range $key, $value := .Values.service.ports }}
20 |     - name: {{ $key }}
21 | {{ toYaml $value | indent 6 }}
22 |   {{- end }}
23 |   {{- if .Values.exporters.jmx.enabled }}
24 |   {{- range $key, $port := .Values.exporters.jmx.ports }}
25 |     - name: {{ $key }}
26 |       port: {{ $port.containerPort }}
27 |       targetPort: {{ $key }}
28 |       protocol: {{ $port.protocol }}
29 |   {{- end }}
30 |   {{- end}}
31 |   {{- if .Values.exporters.zookeeper.enabled }}
32 |   {{- range $key, $port := .Values.exporters.zookeeper.ports }}
33 |     - name: {{ $key }}
34 |       port: {{ $port.containerPort }}
35 |       targetPort: {{ $key }}
36 |       protocol: {{ $port.protocol }}
37 |   {{- end }}
38 |   {{- end}}
39 |   selector:
40 |     app: {{ template "zookeeper.name" . }}
41 |     release: {{ .Release.Name }}
42 | 
-------------------------------------------------------------------------------- /charts/solr/templates/solr-xml-configmap.yaml: --------------------------------------------------------------------------------
1 | ---
2 | 
3 | apiVersion: "v1"
4 | kind: "ConfigMap"
5 | metadata:
6 |   name: "{{ include "solr.configmap-name" . }}"
7 |   labels:
8 | {{ include "solr.common.labels" . | indent 4 }}
9 | data:
10 |   solr.xml: |
11 |     <?xml version="1.0" encoding="UTF-8" ?>
12 |     <solr>
13 |       <solrcloud>
14 |         <str name="host">${host:}</str>
15 |         <int name="hostPort">${jetty.port:8983}</int>
16 |         <str name="hostContext">${hostContext:solr}</str>
17 |         <bool name="genericCoreNodeNames">${genericCoreNodeNames:true}</bool>
18 |         <int name="zkClientTimeout">${zkClientTimeout:30000}</int>
19 |         <int name="distribUpdateSoTimeout">${distribUpdateSoTimeout:600000}</int>
20 |         <int name="distribUpdateConnTimeout">${distribUpdateConnTimeout:60000}</int>
21 |         <str name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}</str>
22 |         <str name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}</str>
23 |       </solrcloud>
24 |       <shardHandlerFactory name="shardHandlerFactory"
25 |         class="HttpShardHandlerFactory">
26 |         <int name="socketTimeout">${socketTimeout:600000}</int>
27 |         <int name="connTimeout">${connTimeout:60000}</int>
28 |       </shardHandlerFactory>
29 |     </solr>
30 | 
-------------------------------------------------------------------------------- /values.yaml: --------------------------------------------------------------------------------
1 | # Default values for atlas.
2 | # This is a YAML-formatted file.
3 | # Declare variables to be passed into your templates.
4 | replicaCount: 1
5 | image:
6 |   repository: manjitsing664/atlasimage
7 |   tag: 'latest'
8 |   pullPolicy: IfNotPresent
9 | config_parameter:
10 |   cassandra_clustername: cassandra
11 |   cassandra_storage_port: 9042
12 | service:
13 |   name: atlas
14 |   type: LoadBalancer
15 |   externalPort: 21000
16 |   internalPort: 21000
17 | ingress:
18 |   enabled: false
19 |   # Used to create an Ingress record.
20 |   hosts:
21 |     - chart-example.local
22 |   annotations:
23 |     # kubernetes.io/ingress.class: nginx
24 |     # kubernetes.io/tls-acme: "true"
25 |   tls:
26 |     # Secrets must be manually created in the namespace.
27 |     # - secretName: chart-example-tls
28 |     #   hosts:
29 |     #     - chart-example.local
30 | resources: {}
31 | # We usually recommend not to specify default resources and to leave this as a conscious
32 | # choice for the user. This also increases chances charts run on environments with little
33 | # resources, such as Minikube. If you do want to specify resources, uncomment the following
34 | # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
35 | #  limits:
36 | #    cpu: 100m
37 | #    memory: 128Mi
38 | #  requests:
39 | #    cpu: 100m
40 | #    memory: 128Mi
41 | 
42 | kafka:
43 |   url: azure_event_hub_url
44 |   connectionString: eventhubconnectionstring
45 | 
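The `kafka` values above feed the atlas-application.properties ConfigMap further down in this dump. A minimal sketch of supplying real values at install time (the placeholders are yours to fill in):

```sh
helm install --name atlas . \
  --set kafka.url="<event-hubs-bootstrap-servers>" \
  --set kafka.connectionString="<event-hubs-connection-string>"
```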
-------------------------------------------------------------------------------- /templates/NOTES.txt: --------------------------------------------------------------------------------
1 | 1. Get the application URL by running these commands:
2 | {{- if .Values.ingress.enabled }}
3 | {{- range .Values.ingress.hosts }}
4 |   http://{{ . }}
5 | {{- end }}
6 | {{- else if contains "NodePort" .Values.service.type }}
7 |   export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "fullname" . }})
8 |   export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
9 |   echo http://$NODE_IP:$NODE_PORT
10 | {{- else if contains "LoadBalancer" .Values.service.type }}
11 |   NOTE: It may take a few minutes for the LoadBalancer IP to be available.
12 |   You can watch its status by running 'kubectl get svc -w {{ template "fullname" . }}'
13 |   export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
14 |   echo http://$SERVICE_IP:{{ .Values.service.externalPort }}
15 | {{- else if contains "ClusterIP" .Values.service.type }}
16 |   export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
17 |   kubectl port-forward $POD_NAME 8080:{{ .Values.service.externalPort }}
18 |   echo "Visit http://127.0.0.1:8080 to use your application"
19 |   echo "Default username/password is admin/admin"
20 | {{- end }}
21 | 
-------------------------------------------------------------------------------- /charts/cassandra/templates/_helpers.tpl: --------------------------------------------------------------------------------
1 | {{/* vim: set filetype=mustache: */}}
2 | {{/*
3 | Expand the name of the chart.
4 | */}}
5 | {{- define "cassandra.name" -}}
6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
7 | {{- end -}}
8 | 
9 | {{/*
10 | Create a default fully qualified app name.
11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
12 | If release name contains chart name it will be used as a full name.
13 | */}}
14 | {{- define "cassandra.fullname" -}}
15 | {{- if .Values.fullnameOverride -}}
16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
17 | {{- else -}}
18 | {{- $name := default .Chart.Name .Values.nameOverride -}}
19 | {{- if contains $name .Release.Name -}}
20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}}
21 | {{- else -}}
22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
23 | {{- end -}}
24 | {{- end -}}
25 | {{- end -}}
26 | 
27 | {{/*
28 | Create chart name and version as used by the chart label.
29 | */}} 30 | {{- define "cassandra.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Create the name of the service account to use 36 | */}} 37 | {{- define "cassandra.serviceAccountName" -}} 38 | {{- if .Values.serviceAccount.create -}} 39 | {{ default (include "cassandra.fullname" .) .Values.serviceAccount.name }} 40 | {{- else -}} 41 | {{ default "default" .Values.serviceAccount.name }} 42 | {{- end -}} 43 | {{- end -}} 44 | -------------------------------------------------------------------------------- /charts/zookeeper/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "zookeeper.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "zookeeper.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Create chart name and version as used by the chart label. 29 | */}} 30 | {{- define "zookeeper.chart" -}} 31 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | The name of the zookeeper headless service. 36 | */}} 37 | {{- define "zookeeper.headless" -}} 38 | {{- printf "%s-headless" (include "zookeeper.fullname" .) | trunc 63 | trimSuffix "-" -}} 39 | {{- end -}} 40 | 41 | {{/* 42 | The name of the zookeeper chroots job. 43 | */}} 44 | {{- define "zookeeper.chroots" -}} 45 | {{- printf "%s-chroots" (include "zookeeper.fullname" .) | trunc 63 | trimSuffix "-" -}} 46 | {{- end -}} 47 | -------------------------------------------------------------------------------- /charts/zookeeper/templates/servicemonitors.yaml: -------------------------------------------------------------------------------- 1 | {{- if and .Values.exporters.jmx.enabled .Values.prometheus.serviceMonitor.enabled }} 2 | apiVersion: monitoring.coreos.com/v1 3 | kind: ServiceMonitor 4 | metadata: 5 | name: {{ include "zookeeper.fullname" . }} 6 | {{- if .Values.prometheus.serviceMonitor.namespace }} 7 | namespace: {{ .Values.prometheus.serviceMonitor.namespace }} 8 | {{- end }} 9 | labels: 10 | {{ toYaml .Values.prometheus.serviceMonitor.selector | indent 4 }} 11 | spec: 12 | endpoints: 13 | {{- range $key, $port := .Values.exporters.jmx.ports }} 14 | - port: {{ $key }} 15 | path: {{ $.Values.exporters.jmx.path }} 16 | interval: {{ $.Values.exporters.jmx.serviceMonitor.interval }} 17 | scrapeTimeout: {{ $.Values.exporters.jmx.serviceMonitor.scrapeTimeout }} 18 | scheme: {{ $.Values.exporters.jmx.serviceMonitor.scheme }} 19 | {{- end }} 20 | selector: 21 | matchLabels: 22 | app: {{ include "zookeeper.name" . 
}} 23 | release: {{ .Release.Name }} 24 | namespaceSelector: 25 | matchNames: 26 | - {{ .Release.Namespace }} 27 | {{- end }} 28 | --- 29 | 30 | {{- if and .Values.exporters.zookeeper.enabled .Values.prometheus.serviceMonitor.enabled }} 31 | apiVersion: monitoring.coreos.com/v1 32 | kind: ServiceMonitor 33 | metadata: 34 | name: {{ include "zookeeper.fullname" . }}-exporter 35 | {{- if .Values.prometheus.serviceMonitor.namespace }} 36 | namespace: {{ .Values.prometheus.serviceMonitor.namespace }} 37 | {{- end }} 38 | labels: 39 | {{ toYaml .Values.prometheus.serviceMonitor.selector | indent 4 }} 40 | spec: 41 | endpoints: 42 | {{- range $key, $port := .Values.exporters.zookeeper.ports }} 43 | - port: {{ $key }} 44 | path: {{ $.Values.exporters.zookeeper.path }} 45 | interval: {{ $.Values.exporters.zookeeper.serviceMonitor.interval }} 46 | scrapeTimeout: {{ $.Values.exporters.zookeeper.serviceMonitor.scrapeTimeout }} 47 | scheme: {{ $.Values.exporters.zookeeper.serviceMonitor.scheme }} 48 | {{- end }} 49 | selector: 50 | matchLabels: 51 | app: {{ include "zookeeper.name" . }} 52 | release: {{ .Release.Name }} 53 | namespaceSelector: 54 | matchNames: 55 | - {{ .Release.Namespace }} 56 | {{- end }} -------------------------------------------------------------------------------- /charts/zookeeper/templates/job-chroots.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.jobs.chroots.enabled }} 2 | {{- $root := . }} 3 | {{- $job := .Values.jobs.chroots }} 4 | apiVersion: batch/v1 5 | kind: Job 6 | metadata: 7 | name: {{ template "zookeeper.chroots" . }} 8 | annotations: 9 | "helm.sh/hook": post-install,post-upgrade 10 | "helm.sh/hook-weight": "-5" 11 | "helm.sh/hook-delete-policy": hook-succeeded 12 | labels: 13 | app: {{ template "zookeeper.name" . }} 14 | chart: {{ template "zookeeper.chart" . }} 15 | release: {{ .Release.Name }} 16 | heritage: {{ .Release.Service }} 17 | component: jobs 18 | job: chroots 19 | spec: 20 | activeDeadlineSeconds: {{ $job.activeDeadlineSeconds }} 21 | backoffLimit: {{ $job.backoffLimit }} 22 | completions: {{ $job.completions }} 23 | parallelism: {{ $job.parallelism }} 24 | template: 25 | metadata: 26 | labels: 27 | app: {{ template "zookeeper.name" . }} 28 | release: {{ .Release.Name }} 29 | component: jobs 30 | job: chroots 31 | spec: 32 | restartPolicy: {{ $job.restartPolicy }} 33 | {{- if .Values.priorityClassName }} 34 | priorityClassName: "{{ .Values.priorityClassName }}" 35 | {{- end }} 36 | containers: 37 | - name: main 38 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 39 | imagePullPolicy: {{ .Values.image.pullPolicy }} 40 | command: 41 | - /bin/bash 42 | - -o 43 | - pipefail 44 | - -euc 45 | {{- $port := .Values.service.ports.client.port }} 46 | - > 47 | sleep 15; 48 | export SERVER={{ template "zookeeper.fullname" $root }}:{{ $port }}; 49 | {{- range $job.config.create }} 50 | echo '==> {{ . }}'; 51 | echo '====> Create chroot if does not exist.'; 52 | zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid' 53 | || zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} create {{ . }} ""; 54 | echo '====> Confirm chroot exists.'; 55 | zkCli.sh -server {{ template "zookeeper.fullname" $root }}:{{ $port }} get {{ . }} 2>&1 >/dev/null | grep 'cZxid'; 56 | echo '====> Chroot exists.'; 57 | {{- end }} 58 | env: 59 | {{- range $key, $value := $job.env }} 60 | - name: {{ $key | upper | replace "." 
"_" }} 61 | value: {{ $value | quote }} 62 | {{- end }} 63 | resources: 64 | {{ toYaml $job.resources | indent 12 }} 65 | {{- end -}} 66 | -------------------------------------------------------------------------------- /templates/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: {{ template "fullname" . }} 5 | labels: 6 | app: {{ template "name" . }} 7 | chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} 8 | release: {{ .Release.Name }} 9 | heritage: {{ .Release.Service }} 10 | spec: 11 | replicas: {{ .Values.replicaCount }} 12 | selector: 13 | matchLabels: 14 | app: {{ template "name" . }} 15 | release: {{ .Release.Name }} 16 | template: 17 | metadata: 18 | labels: 19 | app: {{ template "name" . }} 20 | release: {{ .Release.Name }} 21 | spec: 22 | initContainers: 23 | - name: {{ .Chart.Name }}-init 24 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 25 | imagePullPolicy: {{ .Values.image.pullPolicy }} 26 | command: [ 27 | "/bin/bash", 28 | "-c", 29 | "apk update; 30 | apk add curl zip; 31 | zip config.zip /apache-atlas-2.1.0-SNAPSHOT/conf/solr/*; 32 | curl -X POST --header 'Content-Type:text/xml' -d @config.zip 'http://{{ .Release.Name }}-solr-headless:8983/solr/admin/configs?action=CREATE&name=vertex_index'; 33 | curl -X POST --header 'Content-Type:text/xml' -d @config.zip 'http://{{ .Release.Name }}-solr-headless:8983/solr/admin/configs?action=CREATE&name=edge_index'; 34 | curl -X POST --header 'Content-Type:text/xml' -d @config.zip 'http://{{ .Release.Name }}-solr-headless:8983/solr/admin/configs?action=CREATE&name=fulltext_index';" 35 | ] 36 | env: 37 | - name: ZK_CLIENT_TIMEOUT 38 | value: "{{ .Values.zkClientTimeout }}" 39 | containers: 40 | - name: {{ .Chart.Name }} 41 | command: [ 42 | "/bin/bash", 43 | "-c", 44 | "/apache-atlas-2.1.0-SNAPSHOT/bin/atlas_start.py; 45 | tail -f /apache-atlas-2.1.0-SNAPSHOT/logs/*.log;" 46 | ] 47 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 48 | imagePullPolicy: {{ .Values.image.pullPolicy }} 49 | ports: 50 | - containerPort: {{ .Values.service.internalPort }} 51 | resources: 52 | {{ toYaml .Values.resources | indent 12 }} 53 | {{- if .Values.nodeSelector }} 54 | nodeSelector: 55 | {{ toYaml .Values.nodeSelector | indent 8 }} 56 | {{- end }} 57 | volumeMounts: 58 | - name: atlas-config 59 | mountPath: /apache-atlas-2.1.0-SNAPSHOT/conf/atlas-application.properties 60 | subPath: atlas-application.properties 61 | volumes: 62 | - name: atlas-config 63 | configMap: 64 | name: atlas-config 65 | 66 | -------------------------------------------------------------------------------- /charts/cassandra/templates/NOTES.txt: -------------------------------------------------------------------------------- 1 | Cassandra CQL can be accessed via port {{ .Values.config.ports.cql }} on the following DNS name from within your cluster: 2 | Cassandra Thrift can be accessed via port {{ .Values.config.ports.thrift }} on the following DNS name from within your cluster: 3 | 4 | If you want to connect to the remote instance with your local Cassandra CQL cli. To forward the API port to localhost:9042 run the following: 5 | - kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l app={{ template "cassandra.name" . 
-------------------------------------------------------------------------------- /charts/cassandra/templates/NOTES.txt: --------------------------------------------------------------------------------
1 | Cassandra CQL can be accessed via port {{ .Values.config.ports.cql }}, and Cassandra Thrift via port {{ .Values.config.ports.thrift }}, on the following DNS name from within your cluster:
2 | {{ template "cassandra.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local
3 | 
4 | If you want to connect to the remote instance with your local Cassandra CQL CLI, forward the API port to localhost:9042 by running the following:
5 | - kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l app={{ template "cassandra.name" . }},release={{ .Release.Name }} -o jsonpath='{ .items[0].metadata.name }') 9042:{{ .Values.config.ports.cql }}
6 | 
7 | If you want to connect to Cassandra over CQL, run the following:
8 | {{- if contains "NodePort" .Values.service.type }}
9 | - export CQL_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "cassandra.fullname" . }})
10 | - export CQL_HOST=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
11 | - cqlsh $CQL_HOST $CQL_PORT
12 | 
13 | {{- else if contains "LoadBalancer" .Values.service.type }}
14 | NOTE: It may take a few minutes for the LoadBalancer IP to be available.
15 | Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "cassandra.fullname" . }}'
16 | - export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "cassandra.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
17 | - echo cqlsh $SERVICE_IP
18 | {{- else if contains "ClusterIP" .Values.service.type }}
19 | - kubectl port-forward --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "cassandra.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") 9042:{{ .Values.config.ports.cql }}
20 |   echo cqlsh 127.0.0.1 9042
21 | {{- end }}
22 | 
23 | You can also see the cluster status by running the following:
24 | - kubectl exec -it --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l app={{ template "cassandra.name" . }},release={{ .Release.Name }} -o jsonpath='{.items[0].metadata.name}') nodetool status
25 | 
26 | To tail the logs for the Cassandra pod, run the following:
27 | - kubectl logs -f --namespace {{ .Release.Namespace }} $(kubectl get pods --namespace {{ .Release.Namespace }} -l app={{ template "cassandra.name" . }},release={{ .Release.Name }} -o jsonpath='{ .items[0].metadata.name }')
28 | 
29 | {{- if not .Values.persistence.enabled }}
30 | 
31 | Note that the cluster is running with node-local storage instead of PersistentVolumes. In order to prevent data loss,
32 | pods will be decommissioned upon termination. Decommissioning may take some time, so you might also want to adjust the
33 | pod termination grace period, which is currently set to {{ .Values.podSettings.terminationGracePeriodSeconds }} seconds.
34 | 35 | {{- end}} 36 | -------------------------------------------------------------------------------- /templates/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: atlas-config 5 | data: 6 | atlas-application.properties: | 7 | atlas.graph.storage.backend=cql 8 | atlas.graph.storage.hostname={{ .Release.Name }}-cassandra 9 | atlas.graph.storage.cassandra.keyspace=JanusGraph 10 | atlas.graph.storage.clustername={{ .Values.config_parameter.cassandra_clustername }} 11 | atlas.graph.storage.port={{ .Values.config_parameter.cassandra_storage_port }} 12 | 13 | atlas.EntityAuditRepository.impl=org.apache.atlas.repository.audit.CassandraBasedAuditRepository 14 | atlas.EntityAuditRepository.keyspace=atlas_audit 15 | atlas.EntityAuditRepository.replicationFactor=1 16 | 17 | atlas.graph.index.search.backend=solr 18 | atlas.graph.index.search.solr.mode=cloud 19 | atlas.graph.index.search.solr.zookeeper-url={{ .Release.Name }}-zookeeper:{{ .Values.zookeeper.ports.client.containerPort }} 20 | atlas.graph.index.search.solr.zookeeper-connect-timeout=60000 21 | atlas.graph.index.search.solr.zookeeper-session-timeout=60000 22 | atlas.graph.index.search.solr.wait-searcher=true 23 | 24 | atlas.graph.index.search.max-result-set-size=150 25 | 26 | atlas.notification.embedded=false 27 | atlas.kafka.data=${sys:atlas.home}/data/kafka 28 | atlas.kafka.zookeeper.connect={{ .Release.Name }}-zookeeper:2181 29 | atlas.kafka.bootstrap.servers={{ .Values.kafka.url }} 30 | atlas.kafka.sasl.mechanism=PLAIN 31 | atlas.kafka.security.protocol=SASL_SSL 32 | atlas.kafka.zookeeper.session.timeout.ms=400 33 | atlas.kafka.zookeeper.connection.timeout.ms=200 34 | atlas.kafka.zookeeper.sync.time.ms=20 35 | atlas.kafka.auto.commit.interval.ms=1000 36 | atlas.kafka.hook.group.id=atlas 37 | 38 | atlas.kafka.enable.auto.commit=false 39 | atlas.kafka.auto.offset.reset=earliest 40 | atlas.kafka.session.timeout.ms=30000 41 | atlas.kafka.offsets.topic.replication.factor=1 42 | atlas.kafka.poll.timeout.ms=1000 43 | atlas.kafka.request.timeout.ms=60000 44 | 45 | atlas.notification.create.topics=true 46 | atlas.notification.replicas=1 47 | atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES 48 | atlas.notification.log.failed.messages=true 49 | atlas.notification.consumer.retry.interval=500 50 | atlas.notification.hook.retry.interval=1000 51 | 52 | atlas.enableTLS=false 53 | 54 | atlas.authentication.method.kerberos=false 55 | atlas.authentication.method.file=true 56 | 57 | atlas.authentication.method.ldap.type=none 58 | 59 | atlas.authentication.method.file.filename=${sys:atlas.home}/conf/users-credentials.properties 60 | 61 | 62 | atlas.rest.address=http://localhost:21000 63 | 64 | atlas.audit.hbase.tablename=apache_atlas_entity_audit 65 | atlas.audit.zookeeper.session.timeout.ms=1000 66 | atlas.audit.hbase.zookeeper.quorum=atlas-zookeeper:2181 67 | 68 | atlas.server.ha.enabled=false 69 | atlas.authorizer.impl=simple 70 | atlas.authorizer.simple.authz.policy.file=atlas-simple-authz-policy.json 71 | atlas.rest-csrf.enabled=true 72 | atlas.rest-csrf.browser-useragents-regex=^Mozilla.*,^Opera.*,^Chrome.* 73 | atlas.rest-csrf.methods-to-ignore=GET,OPTIONS,HEAD,TRACE 74 | atlas.rest-csrf.custom-header=X-XSRF-HEADER 75 | 76 | atlas.metric.query.cache.ttlInSecs=900 77 | 78 | ######### Gremlin Search Configuration ######### 79 | 80 | #Set to false to disable gremlin search. 
81 | atlas.search.gremlin.enable=false 82 | 83 | 84 | 85 | 86 | atlas.jaas.KafkaClient.loginModuleName=org.apache.kafka.common.security.plain.PlainLoginModule 87 | atlas.jaas.KafkaClient.loginModuleControlFlag=required 88 | atlas.jaas.KafkaClient.option.username=$ConnectionString 89 | atlas.jaas.KafkaClient.option.password=Endpoint={{ .Values.kafka.connectionString }} 90 | atlas.jaas.KafkaClient.option.mechanism=PLAIN 91 | atlas.jaas.KafkaClient.option.protocol=SASL_SSL 92 | -------------------------------------------------------------------------------- /charts/solr/values.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | ## Global Docker image parameters 3 | ## Please, note that this will override the image parameters, including dependencies, configured to use the global value 4 | ## Current available global Docker image parameters: imagePullSecrets 5 | ## 6 | # global: 7 | # imagePullSecrets: 8 | # - myRegistryKeySecretName 9 | 10 | # Which port should solr listen on 11 | port: 8983 12 | 13 | # Number of solr instances to run 14 | replicaCount: 1 15 | 16 | # Settings for solr java memory 17 | javaMem: "-Xms2g -Xmx3g" 18 | 19 | # Set the limits and requests on solr pod resources 20 | resources: {} 21 | 22 | # Extra environment variables - allows yaml definitions 23 | extraEnvVars: [] 24 | 25 | # Sets the termination Grace period for the solr pods 26 | # This can take a while for shards to elect new leaders 27 | terminationGracePeriodSeconds: 180 28 | 29 | # Solr image settings 30 | image: 31 | repository: solr 32 | tag: 7.7.2 33 | pullPolicy: IfNotPresent 34 | ## Optionally specify an array of imagePullSecrets. 35 | ## Secrets must be manually created in the namespace. 36 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 37 | ## 38 | # pullSecrets: 39 | # - myRegistryKeySecretName 40 | 41 | 42 | # Solr pod liveness 43 | livenessProbe: 44 | initialDelaySeconds: 45 45 | periodSeconds: 10 46 | 47 | # Solr pod readiness 48 | readinessProbe: 49 | initialDelaySeconds: 15 50 | periodSeconds: 5 51 | 52 | # Annotations to apply to the solr pods 53 | podAnnotations: {} 54 | 55 | # Affinity group rules or the solr pods 56 | affinity: {} 57 | 58 | # Update Strategy for solr pods 59 | updateStrategy: 60 | type: "RollingUpdate" 61 | 62 | # The log level of the Solr instances 63 | logLevel: "INFO" 64 | 65 | # Solr pod disruption budget 66 | podDisruptionBudget: 67 | maxUnavailable: 1 68 | 69 | ## Use an alternate scheduler, e.g. "stork". 70 | ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ 71 | ## 72 | # schedulerName: 73 | 74 | # Configuration for the solr PVC 75 | volumeClaimTemplates: 76 | storageClassName: "" 77 | storageSize: "20Gi" 78 | accessModes: 79 | - "ReadWriteOnce" 80 | 81 | # Configuration for solr TLS handling, see README.md for more instructions 82 | tls: 83 | enabled: false 84 | wantClientAuth: "false" 85 | needClientAuth: "false" 86 | keystorePassword: "changeit" 87 | importKubernetesCA: "false" 88 | checkPeerName: "false" 89 | caSecret: 90 | name: "" 91 | bundlePath: "" 92 | certSecret: 93 | name: "" 94 | keyPath: "tls.key" 95 | certPath: "tls.crt" 96 | 97 | # Configuration for the solr service 98 | service: 99 | type: ClusterIP 100 | annotations: {} 101 | 102 | # Configuration for the solr prometheus exporter 103 | exporter: 104 | image: {} 105 | ## Optionally specify an array of imagePullSecrets. 
106 | ## Secrets must be manually created in the namespace. 107 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 108 | ## 109 | # pullSecrets: 110 | # - myRegistryKeySecretName 111 | enabled: false # Deploy the exporter 112 | configFile: "/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml" # The config file to point the exporter to 113 | updateStrategy: {} 114 | podAnnotations: {} # Annotations to apply to the exporter pod 115 | resources: {} # Resource limits for the exporter 116 | port: 9983 # The port to run the exporter on 117 | threads: 7 # The number of threads the exporter uses to query solr 118 | livenessProbe: # Liveness configuration for exporter pod 119 | initialDelaySeconds: 20 120 | periodSeconds: 10 121 | readinessProbe: # Readiness configuration for exporter pod 122 | initialDelaySeconds: 15 123 | periodSeconds: 5 124 | service: 125 | type: "ClusterIP" 126 | annotations: {} 127 | -------------------------------------------------------------------------------- /charts/solr/templates/exporter-deployment.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.exporter.enabled }} 2 | --- 3 | 4 | apiVersion: "v1" 5 | kind: "Service" 6 | metadata: 7 | name: "{{ include "solr.exporter-name" . }}" 8 | labels: 9 | {{ include "solr.common.labels" . | indent 4 }} 10 | app.kubernetes.io/component: "exporter" 11 | annotations: 12 | {{ toYaml .Values.exporter.service.annotations | indent 4}} 13 | spec: 14 | type: "{{ .Values.exporter.service.type }}" 15 | ports: 16 | - port: {{ .Values.exporter.port }} 17 | name: "solr-client" 18 | selector: 19 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 20 | app.kubernetes.io/instance: "{{ .Release.Name }}" 21 | app.kubernetes.io/component: "exporter" 22 | 23 | 24 | --- 25 | 26 | apiVersion: apps/v1 27 | kind: Deployment 28 | metadata: 29 | name: {{ include "solr.exporter-name" . }} 30 | labels: 31 | {{ include "solr.common.labels" . | indent 4 }} 32 | app.kubernetes.io/component: "exporter" 33 | spec: 34 | selector: 35 | matchLabels: 36 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 37 | app.kubernetes.io/instance: "{{ .Release.Name }}" 38 | app.kubernetes.io/component: "exporter" 39 | replicas: 1 40 | strategy: 41 | {{ toYaml .Values.exporter.updateStrategy | indent 4}} 42 | template: 43 | metadata: 44 | labels: 45 | {{ include "solr.common.labels" . | indent 8 }} 46 | app.kubernetes.io/component: "exporter" 47 | annotations: 48 | {{ toYaml .Values.exporter.podAnnotations | indent 8 }} 49 | spec: 50 | {{- include "solr.imagePullSecrets" . | indent 6 }} 51 | affinity: 52 | {{ tpl (toYaml .Values.affinity) . | indent 8 }} 53 | containers: 54 | - name: exporter 55 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 56 | imagePullPolicy: {{ .Values.image.pullPolicy }} 57 | resources: 58 | {{ toYaml .Values.exporter.resources | indent 12 }} 59 | ports: 60 | - containerPort: {{ .Values.port }} 61 | name: solr-client 62 | command: 63 | - "/opt/solr/contrib/prometheus-exporter/bin/solr-exporter" 64 | - "-p" 65 | - "{{ .Values.exporter.port }}" 66 | - "-z" 67 | - "{{ include "solr.zookeeper-service-name" . 
}}:2181" 68 | - "-n" 69 | - "{{ .Values.exporter.threads }}" 70 | - "-f" 71 | - "{{ .Values.exporter.configFile }}" 72 | livenessProbe: 73 | initialDelaySeconds: {{ .Values.exporter.livenessProbe.initialDelaySeconds }} 74 | periodSeconds: {{ .Values.exporter.livenessProbe.periodSeconds }} 75 | httpGet: 76 | path: "/metrics" 77 | port: {{ .Values.exporter.port }} 78 | readinessProbe: 79 | initialDelaySeconds: {{ .Values.exporter.readinessProbe.initialDelaySeconds }} 80 | periodSeconds: {{ .Values.exporter.readinessProbe.periodSeconds }} 81 | httpGet: 82 | path: "/metrics" 83 | port: {{ .Values.exporter.port }} 84 | initContainers: 85 | - name: solr-init 86 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 87 | imagePullPolicy: {{ .Values.image.pullPolicy }} 88 | command: 89 | - 'sh' 90 | - '-c' 91 | - | 92 | {{- if .Values.tls.enabled }} 93 | PROTOCOL="https://" 94 | {{ else }} 95 | PROTOCOL="http://" 96 | {{- end }} 97 | COUNTER=0; 98 | while [ $COUNTER -lt 30 ]; do 99 | curl -k -s --connect-timeout 10 "${PROTOCOL}{{ include "solr.service-name" . }}:{{ .Values.port }}/solr/admin/info/system" && exit 0 100 | sleep 2 101 | done; 102 | echo "Did NOT see a Running Solr instance after 60 secs!"; 103 | exit 1; 104 | {{ end }} 105 | -------------------------------------------------------------------------------- /charts/solr/templates/_helpers.tpl: -------------------------------------------------------------------------------- 1 | {{/* vim: set filetype=mustache: */}} 2 | {{/* 3 | Expand the name of the chart. 4 | */}} 5 | {{- define "solr.name" -}} 6 | {{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} 7 | {{- end -}} 8 | 9 | {{/* 10 | Create a default fully qualified app name. 11 | We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 12 | If release name contains chart name it will be used as a full name. 13 | */}} 14 | {{- define "solr.fullname" -}} 15 | {{- if .Values.fullnameOverride -}} 16 | {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} 17 | {{- else -}} 18 | {{- $name := default .Chart.Name .Values.nameOverride -}} 19 | {{- if contains $name .Release.Name -}} 20 | {{- .Release.Name | trunc 63 | trimSuffix "-" -}} 21 | {{- else -}} 22 | {{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} 23 | {{- end -}} 24 | {{- end -}} 25 | {{- end -}} 26 | 27 | {{/* 28 | Define the name of the headless service for solr 29 | */}} 30 | {{- define "solr.headless-service-name" -}} 31 | {{- printf "%s-%s" (include "solr.fullname" .) "headless" | trunc 63 | trimSuffix "-" -}} 32 | {{- end -}} 33 | 34 | {{/* 35 | Define the name of the client service for solr 36 | */}} 37 | {{- define "solr.service-name" -}} 38 | {{- printf "%s-%s" (include "solr.fullname" .) "svc" | trunc 63 | trimSuffix "-" -}} 39 | {{- end -}} 40 | 41 | {{/* 42 | Define the name of the solr exporter 43 | */}} 44 | {{- define "solr.exporter-name" -}} 45 | {{- printf "%s-%s" (include "solr.fullname" .) "exporter" | trunc 63 | trimSuffix "-" -}} 46 | {{- end -}} 47 | 48 | {{/* 49 | The name of the zookeeper service 50 | */}} 51 | {{- define "solr.zookeeper-name" -}} 52 | {{- printf "%s-%s" .Release.Name "zookeeper" | trunc 63 | trimSuffix "-" -}} 53 | {{- end -}} 54 | 55 | {{/* 56 | The name of the zookeeper headless service 57 | */}} 58 | {{- define "solr.zookeeper-service-name" -}} 59 | {{ printf "%s-%s" (include "solr.zookeeper-name" .) 
"headless" | trunc 63 | trimSuffix "-" }} 60 | {{- end -}} 61 | 62 | {{/* 63 | Create chart name and version as used by the chart label. 64 | */}} 65 | {{- define "solr.chart" -}} 66 | {{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} 67 | {{- end -}} 68 | 69 | {{/* 70 | Define the name of the solr PVC 71 | */}} 72 | {{- define "solr.pvc-name" -}} 73 | {{ printf "%s-%s" (include "solr.fullname" .) "pvc" | trunc 63 | trimSuffix "-" }} 74 | {{- end -}} 75 | 76 | {{/* 77 | Define the name of the solr.xml configmap 78 | */}} 79 | {{- define "solr.configmap-name" -}} 80 | {{- printf "%s-%s" (include "solr.fullname" .) "config-map" | trunc 63 | trimSuffix "-" -}} 81 | {{- end -}} 82 | 83 | {{/* 84 | Define the labels that should be applied to all resources in the chart 85 | */}} 86 | {{- define "solr.common.labels" -}} 87 | app.kubernetes.io/name: {{ include "solr.name" . }} 88 | app.kubernetes.io/instance: {{ .Release.Name }} 89 | app.kubernetes.io/managed-by: {{ .Release.Service }} 90 | helm.sh/chart: {{ include "solr.chart" . }} 91 | {{- end -}} 92 | 93 | {{/* 94 | Return the proper Docker Image Registry Secret Names 95 | */}} 96 | {{- define "solr.imagePullSecrets" -}} 97 | {{/* 98 | Helm 2.11 supports the assignment of a value to a variable defined in a different scope, 99 | but Helm 2.9 and 2.10 does not support it, so we need to implement this if-else logic. 100 | Also, we can not use a single if because lazy evaluation is not an option 101 | */}} 102 | {{- if .Values.global }} 103 | {{- if .Values.global.imagePullSecrets }} 104 | imagePullSecrets: 105 | {{- range .Values.global.imagePullSecrets }} 106 | - name: {{ . }} 107 | {{- end }} 108 | {{- else if or .Values.image.pullSecrets .Values.exporter.image.pullSecrets }} 109 | imagePullSecrets: 110 | {{- range .Values.image.pullSecrets }} 111 | - name: {{ . }} 112 | {{- end }} 113 | {{- range .Values.exporter.image.pullSecrets }} 114 | - name: {{ . }} 115 | {{- end }} 116 | {{- end -}} 117 | {{- else if or .Values.image.pullSecrets .Values.exporter.image.pullSecrets }} 118 | imagePullSecrets: 119 | {{- range .Values.image.pullSecrets }} 120 | - name: {{ . }} 121 | {{- end }} 122 | {{- range .Values.exporter.image.pullSecrets }} 123 | - name: {{ . }} 124 | {{- end }} 125 | {{- end -}} 126 | {{- end -}} -------------------------------------------------------------------------------- /charts/zookeeper/templates/config-script.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: ConfigMap 4 | metadata: 5 | name: {{ template "zookeeper.fullname" . }} 6 | labels: 7 | app: {{ template "zookeeper.name" . }} 8 | chart: {{ template "zookeeper.chart" . 
}} 9 | release: {{ .Release.Name }} 10 | heritage: {{ .Release.Service }} 11 | component: server 12 | data: 13 | ok: | 14 | #!/bin/sh 15 | zkServer.sh status 16 | 17 | ready: | 18 | #!/bin/sh 19 | echo ruok | nc 127.0.0.1 ${1:-2181} 20 | 21 | run: | 22 | #!/bin/bash 23 | 24 | set -a 25 | ROOT=$(echo /apache-zookeeper-*) 26 | 27 | ZK_USER=${ZK_USER:-"zookeeper"} 28 | ZK_LOG_LEVEL=${ZK_LOG_LEVEL:-"INFO"} 29 | ZK_DATA_DIR=${ZK_DATA_DIR:-"/data"} 30 | ZK_DATA_LOG_DIR=${ZK_DATA_LOG_DIR:-"/data/log"} 31 | ZK_CONF_DIR=${ZK_CONF_DIR:-"/conf"} 32 | ZK_CLIENT_PORT=${ZK_CLIENT_PORT:-2181} 33 | ZK_SERVER_PORT=${ZK_SERVER_PORT:-2888} 34 | ZK_ELECTION_PORT=${ZK_ELECTION_PORT:-3888} 35 | ZK_TICK_TIME=${ZK_TICK_TIME:-2000} 36 | ZK_INIT_LIMIT=${ZK_INIT_LIMIT:-10} 37 | ZK_SYNC_LIMIT=${ZK_SYNC_LIMIT:-5} 38 | ZK_HEAP_SIZE=${ZK_HEAP_SIZE:-2G} 39 | ZK_MAX_CLIENT_CNXNS=${ZK_MAX_CLIENT_CNXNS:-60} 40 | ZK_MIN_SESSION_TIMEOUT=${ZK_MIN_SESSION_TIMEOUT:- $((ZK_TICK_TIME*2))} 41 | ZK_MAX_SESSION_TIMEOUT=${ZK_MAX_SESSION_TIMEOUT:- $((ZK_TICK_TIME*20))} 42 | ZK_SNAP_RETAIN_COUNT=${ZK_SNAP_RETAIN_COUNT:-3} 43 | ZK_PURGE_INTERVAL=${ZK_PURGE_INTERVAL:-0} 44 | ID_FILE="$ZK_DATA_DIR/myid" 45 | ZK_CONFIG_FILE="$ZK_CONF_DIR/zoo.cfg" 46 | LOG4J_PROPERTIES="$ZK_CONF_DIR/log4j.properties" 47 | HOST=$(hostname) 48 | DOMAIN=`hostname -d` 49 | JVMFLAGS="-Xmx$ZK_HEAP_SIZE -Xms$ZK_HEAP_SIZE" 50 | 51 | APPJAR=$(echo $ROOT/*jar) 52 | CLASSPATH="${ROOT}/lib/*:${APPJAR}:${ZK_CONF_DIR}:" 53 | 54 | if [[ $HOST =~ (.*)-([0-9]+)$ ]]; then 55 | NAME=${BASH_REMATCH[1]} 56 | ORD=${BASH_REMATCH[2]} 57 | MY_ID=$((ORD+1)) 58 | else 59 | echo "Failed to extract ordinal from hostname $HOST" 60 | exit 1 61 | fi 62 | 63 | mkdir -p $ZK_DATA_DIR 64 | mkdir -p $ZK_DATA_LOG_DIR 65 | echo $MY_ID >> $ID_FILE 66 | 67 | echo "clientPort=$ZK_CLIENT_PORT" >> $ZK_CONFIG_FILE 68 | echo "dataDir=$ZK_DATA_DIR" >> $ZK_CONFIG_FILE 69 | echo "dataLogDir=$ZK_DATA_LOG_DIR" >> $ZK_CONFIG_FILE 70 | echo "tickTime=$ZK_TICK_TIME" >> $ZK_CONFIG_FILE 71 | echo "initLimit=$ZK_INIT_LIMIT" >> $ZK_CONFIG_FILE 72 | echo "syncLimit=$ZK_SYNC_LIMIT" >> $ZK_CONFIG_FILE 73 | echo "maxClientCnxns=$ZK_MAX_CLIENT_CNXNS" >> $ZK_CONFIG_FILE 74 | echo "minSessionTimeout=$ZK_MIN_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE 75 | echo "maxSessionTimeout=$ZK_MAX_SESSION_TIMEOUT" >> $ZK_CONFIG_FILE 76 | echo "autopurge.snapRetainCount=$ZK_SNAP_RETAIN_COUNT" >> $ZK_CONFIG_FILE 77 | echo "autopurge.purgeInterval=$ZK_PURGE_INTERVAL" >> $ZK_CONFIG_FILE 78 | echo "4lw.commands.whitelist=*" >> $ZK_CONFIG_FILE 79 | 80 | for (( i=1; i<=$ZK_REPLICAS; i++ )) 81 | do 82 | echo "server.$i=$NAME-$((i-1)).$DOMAIN:$ZK_SERVER_PORT:$ZK_ELECTION_PORT" >> $ZK_CONFIG_FILE 83 | done 84 | 85 | rm -f $LOG4J_PROPERTIES 86 | 87 | echo "zookeeper.root.logger=$ZK_LOG_LEVEL, CONSOLE" >> $LOG4J_PROPERTIES 88 | echo "zookeeper.console.threshold=$ZK_LOG_LEVEL" >> $LOG4J_PROPERTIES 89 | echo "zookeeper.log.threshold=$ZK_LOG_LEVEL" >> $LOG4J_PROPERTIES 90 | echo "zookeeper.log.dir=$ZK_DATA_LOG_DIR" >> $LOG4J_PROPERTIES 91 | echo "zookeeper.log.file=zookeeper.log" >> $LOG4J_PROPERTIES 92 | echo "zookeeper.log.maxfilesize=256MB" >> $LOG4J_PROPERTIES 93 | echo "zookeeper.log.maxbackupindex=10" >> $LOG4J_PROPERTIES 94 | echo "zookeeper.tracelog.dir=$ZK_DATA_LOG_DIR" >> $LOG4J_PROPERTIES 95 | echo "zookeeper.tracelog.file=zookeeper_trace.log" >> $LOG4J_PROPERTIES 96 | echo "log4j.rootLogger=\${zookeeper.root.logger}" >> $LOG4J_PROPERTIES 97 | echo "log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender" >> $LOG4J_PROPERTIES 98 | echo 
"log4j.appender.CONSOLE.Threshold=\${zookeeper.console.threshold}" >> $LOG4J_PROPERTIES 99 | echo "log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout" >> $LOG4J_PROPERTIES 100 | echo "log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n" >> $LOG4J_PROPERTIES 101 | 102 | if [ -n "$JMXDISABLE" ] 103 | then 104 | MAIN=org.apache.zookeeper.server.quorum.QuorumPeerMain 105 | else 106 | MAIN="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=$JMXPORT -Dcom.sun.management.jmxremote.authenticate=$JMXAUTH -Dcom.sun.management.jmxremote.ssl=$JMXSSL -Dzookeeper.jmx.log4j.disable=$JMXLOG4J org.apache.zookeeper.server.quorum.QuorumPeerMain" 107 | fi 108 | 109 | set -x 110 | exec java -cp "$CLASSPATH" $JVMFLAGS $MAIN $ZK_CONFIG_FILE 111 | -------------------------------------------------------------------------------- /charts/solr/templates/statefulset.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | 3 | apiVersion: apps/v1 4 | kind: StatefulSet 5 | metadata: 6 | name: {{ include "solr.fullname" . }} 7 | labels: 8 | {{ include "solr.common.labels" . | indent 4 }} 9 | app.kubernetes.io/component: server 10 | spec: 11 | selector: 12 | matchLabels: 13 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 14 | app.kubernetes.io/instance: "{{ .Release.Name }}" 15 | app.kubernetes.io/component: "server" 16 | serviceName: {{ include "solr.headless-service-name" . }} 17 | replicas: {{ .Values.replicaCount }} 18 | updateStrategy: 19 | {{ toYaml .Values.updateStrategy | indent 4}} 20 | template: 21 | metadata: 22 | labels: 23 | app.kubernetes.io/name: "{{ include "solr.name" . }}" 24 | app.kubernetes.io/instance: "{{ .Release.Name }}" 25 | app.kubernetes.io/component: "server" 26 | annotations: 27 | {{ toYaml .Values.podAnnotations | indent 8 }} 28 | spec: 29 | {{- include "solr.imagePullSecrets" . | indent 6 }} 30 | {{- if .Values.schedulerName }} 31 | schedulerName: "{{ .Values.schedulerName }}" 32 | {{- end }} 33 | securityContext: 34 | fsGroup: 8983 35 | runAsUser: 8983 36 | affinity: 37 | {{ tpl (toYaml .Values.affinity) . | indent 8 }} 38 | terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} 39 | volumes: 40 | {{- if .Values.tls.enabled }} 41 | - name: keystore-volume 42 | emptyDir: {} 43 | - name: "tls-secret" 44 | secret: 45 | secretName: {{ .Values.tls.certSecret.name }} 46 | {{- if not (eq .Values.tls.caSecret.name "") }} 47 | - name: "tls-ca" 48 | secret: 49 | secretName: {{ .Values.tls.caSecret.name }} 50 | {{- end }} 51 | {{- end }} 52 | - name: solr-xml 53 | configMap: 54 | name: {{ include "solr.configmap-name" . }} 55 | items: 56 | - key: solr.xml 57 | path: solr.xml 58 | initContainers: 59 | - name: check-zk 60 | image: busybox:latest 61 | command: 62 | - 'sh' 63 | - '-c' 64 | - | 65 | COUNTER=0; 66 | while [ $COUNTER -lt 60 ]; do 67 | addr=$(nslookup -type=a {{ include "solr.zookeeper-service-name" . }} | grep "Address:" | awk 'NR>1 {print $2}') 68 | if [ ! 
-z "$addr" ]; then 69 | while read -r line; do 70 | echo $line; 71 | mode=$(echo srvr | nc $line 2181 | grep "Mode"); 72 | echo $mode; 73 | if [ "$mode" = "Mode: leader" ] || [ "$mode" = "Mode: standalone" ]; then 74 | echo "Found a leader!"; 75 | exit 0; 76 | fi; 77 | done < v1beta1/PodDisruptionBudget 45 | NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE 46 | zookeeper N/A 1 1 2m 47 | 48 | ==> v1/Service 49 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 50 | zookeeper-headless ClusterIP None 2181/TCP,3888/TCP,2888/TCP 2m 51 | zookeeper ClusterIP 10.98.179.165 2181/TCP 2m 52 | 53 | ==> v1beta1/StatefulSet 54 | NAME DESIRED CURRENT AGE 55 | zookeeper 3 3 2m 56 | 57 | ==> monitoring.coreos.com/v1/ServiceMonitor 58 | NAME AGE 59 | zookeeper 2m 60 | zookeeper-exporter 2m 61 | ``` 62 | 63 | 1. `statefulsets/zookeeper` is the StatefulSet created by the chart. 64 | 1. `po/zookeeper-<0|1|2>` are the Pods created by the StatefulSet. Each Pod has a single container running a ZooKeeper server. 65 | 1. `svc/zookeeper-headless` is the Headless Service used to control the network domain of the ZooKeeper ensemble. 66 | 1. `svc/zookeeper` is a Service that can be used by clients to connect to an available ZooKeeper server. 67 | 1. `servicemonitor/zookeeper` is a Prometheus ServiceMonitor which scrapes the jmx-exporter metrics endpoint 68 | 1. `servicemonitor/zookeeper-exporter` is a Prometheus ServiceMonitor which scrapes the zookeeper-exporter metrics endpoint 69 | 70 | ## Configuration 71 | You can specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. 72 | 73 | Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example, 74 | 75 | ```console 76 | $ helm install --name my-release -f values.yaml incubator/zookeeper 77 | ``` 78 | 79 | ## Default Values 80 | 81 | - You can find all user-configurable settings, their defaults and commentary about them in [values.yaml](values.yaml). 82 | 83 | ## Deep Dive 84 | 85 | ## Image Details 86 | The image used for this chart is based on Alpine 3.9.0. 87 | 88 | ## JVM Details 89 | The Java Virtual Machine used for this chart is the OpenJDK JVM 8u192 JRE (headless). 90 | 91 | ## ZooKeeper Details 92 | The chart defaults to ZooKeeper 3.5 (latest released version). 93 | 94 | ## Failover 95 | You can test failover by killing the leader. 
Insert a key:
96 | ```console
97 | $ kubectl exec zookeeper-0 -- bin/zkCli.sh create /foo bar;
98 | $ kubectl exec zookeeper-2 -- bin/zkCli.sh get /foo;
99 | ```
100 |
101 | Watch existing members:
102 | ```console
103 | $ kubectl run --attach bbox --image=busybox --restart=Never -- sh -c 'while true; do for i in 0 1 2; do echo zk-${i} $(echo stats | nc zookeeper-${i}.zookeeper-headless:2181 | grep Mode); sleep 1; done; done';
104 |
105 | zk-2 Mode: follower
106 | zk-0 Mode: follower
107 | zk-1 Mode: leader
108 | zk-2 Mode: follower
109 | ```
110 |
111 | Delete Pods and wait for the StatefulSet controller to bring them back up:
112 | ```console
113 | $ kubectl delete po -l app=zookeeper
114 | $ kubectl get po --watch-only
115 | NAME         READY  STATUS             RESTARTS  AGE
116 | zookeeper-0  0/1    Running            0         35s
117 | zookeeper-0  1/1    Running            0         50s
118 | zookeeper-1  0/1    Pending            0         0s
119 | zookeeper-1  0/1    Pending            0         0s
120 | zookeeper-1  0/1    ContainerCreating  0         0s
121 | zookeeper-1  0/1    Running            0         19s
122 | zookeeper-1  1/1    Running            0         40s
123 | zookeeper-2  0/1    Pending            0         0s
124 | zookeeper-2  0/1    Pending            0         0s
125 | zookeeper-2  0/1    ContainerCreating  0         0s
126 | zookeeper-2  0/1    Running            0         19s
127 | zookeeper-2  1/1    Running            0         41s
128 | ```
129 |
130 | Check the previously inserted key:
131 | ```console
132 | $ kubectl exec zookeeper-1 -- bin/zkCli.sh get /foo
133 | sessionid = 0x354887858e80035, negotiated timeout = 30000
134 |
135 | WATCHER::
136 |
137 | WatchedEvent state:SyncConnected type:None path:null
138 | bar
139 | ```
140 |
141 | ## Scaling
142 | ZooKeeper cannot be safely scaled in versions prior to 3.5.x.
143 |
144 | ## Limitations
145 | * Only supports storage options that have backends for persistent volume claims.
146 |
--------------------------------------------------------------------------------
/charts/cassandra/values.yaml:
--------------------------------------------------------------------------------
1 | ## Cassandra image version
2 | ## ref: https://hub.docker.com/r/library/cassandra/
3 | image:
4 |   repo: cassandra
5 |   tag: 3.11.3
6 |   pullPolicy: IfNotPresent
7 |   ## Specify ImagePullSecrets for Pods
8 |   ## ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
9 |   # pullSecrets: myregistrykey
10 |
11 | ## Specify a service type
12 | ## ref: http://kubernetes.io/docs/user-guide/services/
13 | service:
14 |   type: ClusterIP
15 |
16 | ## Use an alternate scheduler, e.g. "stork".
17 | ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
18 | ##
19 | # schedulerName:
20 |
21 | ## Persist data to a persistent volume
22 | persistence:
23 |   enabled: true
24 |   ## cassandra data Persistent Volume Storage Class
25 |   ## If defined, storageClassName: <storageClass>
26 |   ## If set to "-", storageClassName: "", which disables dynamic provisioning
27 |   ## If undefined (the default) or set to null, no storageClassName spec is
28 |   ##   set, choosing the default provisioner.
(gp2 on AWS, standard on 29 | ## GKE, AWS & OpenStack) 30 | ## 31 | # storageClass: "-" 32 | accessMode: ReadWriteOnce 33 | size: 10Gi 34 | 35 | ## Configure resource requests and limits 36 | ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ 37 | ## Minimum memory for development is 4GB and 2 CPU cores 38 | ## Minimum memory for production is 8GB and 4 CPU cores 39 | ## ref: http://docs.datastax.com/en/archived/cassandra/2.0/cassandra/architecture/architecturePlanningHardware_c.html 40 | resources: {} 41 | # requests: 42 | # memory: 4Gi 43 | # cpu: 2 44 | # limits: 45 | # memory: 4Gi 46 | # cpu: 2 47 | 48 | ## Change cassandra configuration parameters below: 49 | ## ref: http://docs.datastax.com/en/cassandra/3.0/cassandra/configuration/configCassandra_yaml.html 50 | ## Recommended max heap size is 1/2 of system memory 51 | ## Recommended heap new size is 1/4 of max heap size 52 | ## ref: http://docs.datastax.com/en/cassandra/3.0/cassandra/operations/opsTuneJVM.html 53 | config: 54 | cluster_domain: cluster.local 55 | cluster_name: cassandra 56 | cluster_size: 1 57 | seed_size: 2 58 | num_tokens: 256 59 | # If you want Cassandra to use this datacenter and rack name, 60 | # you need to set endpoint_snitch to GossipingPropertyFileSnitch. 61 | # Otherwise, these values are ignored and datacenter1 and rack1 62 | # are used. 63 | dc_name: DC1 64 | rack_name: RAC1 65 | endpoint_snitch: SimpleSnitch 66 | max_heap_size: 2048M 67 | heap_new_size: 512M 68 | start_rpc: false 69 | ports: 70 | cql: 9042 71 | thrift: 9160 72 | # If a JVM Agent is in place 73 | # agent: 61621 74 | 75 | ## Cassandra config files overrides 76 | configOverrides: {} 77 | 78 | ## Cassandra docker command overrides 79 | commandOverrides: [] 80 | 81 | ## Cassandra docker args overrides 82 | argsOverrides: [] 83 | 84 | ## Custom env variables. 85 | ## ref: https://hub.docker.com/_/cassandra/ 86 | env: {} 87 | 88 | ## Liveness and Readiness probe values. 89 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/ 90 | livenessProbe: 91 | initialDelaySeconds: 90 92 | periodSeconds: 30 93 | timeoutSeconds: 5 94 | successThreshold: 1 95 | failureThreshold: 3 96 | readinessProbe: 97 | initialDelaySeconds: 90 98 | periodSeconds: 30 99 | timeoutSeconds: 5 100 | successThreshold: 1 101 | failureThreshold: 3 102 | address: "${POD_IP}" 103 | 104 | ## Configure node selector. Edit code below for adding selector to pods 105 | ## ref: https://kubernetes.io/docs/user-guide/node-selection/ 106 | # selector: 107 | # nodeSelector: 108 | # cloud.google.com/gke-nodepool: pool-db 109 | 110 | ## Additional pod annotations 111 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ 112 | podAnnotations: {} 113 | 114 | ## Additional pod labels 115 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 116 | podLabels: {} 117 | 118 | ## Additional pod-level settings 119 | podSettings: 120 | # Change this to give pods more time to properly leave the cluster when not using persistent storage. 
121 |   terminationGracePeriodSeconds: 30
122 |
123 | ## Pod disruption budget
124 | podDisruptionBudget: {}
125 |   # maxUnavailable: 1
126 |   # minAvailable: 2
127 |
128 | podManagementPolicy: OrderedReady
129 | updateStrategy:
130 |   type: OnDelete
131 |
132 | ## Pod Security Context
133 | securityContext:
134 |   enabled: false
135 |   fsGroup: 999
136 |   runAsUser: 999
137 |
138 | ## Affinity for pod assignment
139 | ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
140 | affinity: {}
141 |
142 | ## Node tolerations for pod assignment
143 | ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
144 | tolerations: []
145 |
146 | rbac:
147 |   # Specifies whether RBAC resources should be created
148 |   create: true
149 |
150 | serviceAccount:
151 |   # Specifies whether a ServiceAccount should be created
152 |   create: true
153 |   # The name of the ServiceAccount to use.
154 |   # If not set and create is true, a name is generated using the fullname template
155 |   # name:
156 |
157 | # Use host network for Cassandra pods
158 | # You must pass seed list into config.seeds property if set to true
159 | hostNetwork: false
160 |
161 | ## Backup cronjob configuration
162 | ## Ref: https://github.com/maorfr/cain
163 | backup:
164 |   enabled: false
165 |
166 |   # Schedule to run jobs. Must be in cron time format
167 |   # Ref: https://crontab.guru/
168 |   schedule:
169 |     - keyspace: keyspace1
170 |       cron: "0 7 * * *"
171 |     - keyspace: keyspace2
172 |       cron: "30 7 * * *"
173 |
174 |   annotations:
175 |     # Example for authorization to AWS S3 using kube2iam
176 |     # Can also be done using environment variables
177 |     iam.amazonaws.com/role: cain
178 |
179 |   image:
180 |     repository: maorfr/cain
181 |     tag: 0.6.0
182 |
183 |   # Additional arguments for cain
184 |   # Ref: https://github.com/maorfr/cain#usage
185 |   extraArgs: []
186 |
187 |   # Add additional environment variables
188 |
189 |   resources:
190 |     requests:
191 |       memory: 1Gi
192 |       cpu: 1
193 |     limits:
194 |       memory: 1Gi
195 |       cpu: 1
196 |
197 |   # Name of the secret containing the credentials of the service account used by GOOGLE_APPLICATION_CREDENTIALS, as a credentials.json file
198 |   # google:
199 |   #   serviceAccountSecret:
200 |
201 |   # Destination to store the backup artifacts
202 |   # Supported cloud storage services: AWS S3, Minio S3, Azure Blob Storage, Google Cloud Storage
203 |   # Additional support can be added. Visit this repository for details
204 |   # Ref: https://github.com/maorfr/skbn
205 |   destination: s3://bucket/cassandra
206 |
207 | ## Cassandra exporter configuration
208 | ## ref: https://github.com/criteo/cassandra_exporter
209 | exporter:
210 |   # If the exporter is enabled, this will create a ServiceMonitor by default as well
211 |   servicemonitor: true
212 |   enabled: false
213 |   image:
214 |     repo: criteord/cassandra_exporter
215 |     tag: 2.0.2
216 |   port: 5556
217 |   jvmOpts: ""
218 |   resources: {}
219 |   # limits:
220 |   #   cpu: 1
221 |   #   memory: 1Gi
222 |   # requests:
223 |   #   cpu: 1
224 |   #   memory: 1Gi
225 |
--------------------------------------------------------------------------------
/charts/cassandra/templates/statefulset.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: StatefulSet
3 | metadata:
4 |   name: {{ template "cassandra.fullname" . }}
5 |   labels:
6 |     app: {{ template "cassandra.name" . }}
7 |     chart: {{ template "cassandra.chart" .
}} 8 | release: {{ .Release.Name }} 9 | heritage: {{ .Release.Service }} 10 | spec: 11 | selector: 12 | matchLabels: 13 | app: {{ template "cassandra.name" . }} 14 | release: {{ .Release.Name }} 15 | serviceName: {{ template "cassandra.fullname" . }} 16 | replicas: {{ .Values.config.cluster_size }} 17 | podManagementPolicy: {{ .Values.podManagementPolicy }} 18 | updateStrategy: 19 | type: {{ .Values.updateStrategy.type }} 20 | template: 21 | metadata: 22 | labels: 23 | app: {{ template "cassandra.name" . }} 24 | release: {{ .Release.Name }} 25 | {{- if .Values.podLabels }} 26 | {{ toYaml .Values.podLabels | indent 8 }} 27 | {{- end }} 28 | {{- if .Values.podAnnotations }} 29 | annotations: 30 | {{ toYaml .Values.podAnnotations | indent 8 }} 31 | {{- end }} 32 | spec: 33 | {{- if .Values.schedulerName }} 34 | schedulerName: "{{ .Values.schedulerName }}" 35 | {{- end }} 36 | hostNetwork: {{ .Values.hostNetwork }} 37 | {{- if .Values.selector }} 38 | {{ toYaml .Values.selector | indent 6 }} 39 | {{- end }} 40 | {{- if .Values.securityContext.enabled }} 41 | securityContext: 42 | fsGroup: {{ .Values.securityContext.fsGroup }} 43 | runAsUser: {{ .Values.securityContext.runAsUser }} 44 | {{- end }} 45 | {{- if .Values.affinity }} 46 | affinity: 47 | {{ toYaml .Values.affinity | indent 8 }} 48 | {{- end }} 49 | {{- if .Values.tolerations }} 50 | tolerations: 51 | {{ toYaml .Values.tolerations | indent 8 }} 52 | {{- end }} 53 | containers: 54 | {{- if .Values.exporter.enabled }} 55 | - name: cassandra-exporter 56 | image: "{{ .Values.exporter.image.repo }}:{{ .Values.exporter.image.tag }}" 57 | resources: 58 | {{ toYaml .Values.exporter.resources | indent 10 }} 59 | env: 60 | - name: CASSANDRA_EXPORTER_CONFIG_listenPort 61 | value: {{ .Values.exporter.port | quote }} 62 | - name: JVM_OPTS 63 | value: {{ .Values.exporter.jvmOpts | quote }} 64 | ports: 65 | - name: metrics 66 | containerPort: {{ .Values.exporter.port }} 67 | protocol: TCP 68 | - name: jmx 69 | containerPort: 5555 70 | livenessProbe: 71 | tcpSocket: 72 | port: {{ .Values.exporter.port }} 73 | readinessProbe: 74 | httpGet: 75 | path: /metrics 76 | port: {{ .Values.exporter.port }} 77 | initialDelaySeconds: 20 78 | timeoutSeconds: 45 79 | {{- end }} 80 | - name: {{ template "cassandra.fullname" . }} 81 | image: "{{ .Values.image.repo }}:{{ .Values.image.tag }}" 82 | imagePullPolicy: {{ .Values.image.pullPolicy | quote }} 83 | {{- if .Values.commandOverrides }} 84 | command: {{ .Values.commandOverrides }} 85 | {{- end }} 86 | {{- if .Values.argsOverrides }} 87 | args: {{ .Values.argsOverrides }} 88 | {{- end }} 89 | resources: 90 | {{ toYaml .Values.resources | indent 10 }} 91 | env: 92 | {{- $seed_size := default 1 .Values.config.seed_size | int -}} 93 | {{- $global := . 
}} 94 | - name: CASSANDRA_SEEDS 95 | {{- if .Values.hostNetwork }} 96 | value: {{ required "You must fill \".Values.config.seeds\" with list of Cassandra seeds when hostNetwork is set to true" .Values.config.seeds | quote }} 97 | {{- else }} 98 | value: "{{- range $i, $e := until $seed_size }}{{ template "cassandra.fullname" $global }}-{{ $i }}.{{ template "cassandra.fullname" $global }}.{{ $global.Release.Namespace }}.svc.{{ $global.Values.config.cluster_domain }}{{- if (lt ( add1 $i ) $seed_size ) }},{{- end }}{{- end }}" 99 | {{- end }} 100 | - name: MAX_HEAP_SIZE 101 | value: {{ default "8192M" .Values.config.max_heap_size | quote }} 102 | - name: HEAP_NEWSIZE 103 | value: {{ default "200M" .Values.config.heap_new_size | quote }} 104 | - name: CASSANDRA_ENDPOINT_SNITCH 105 | value: {{ default "SimpleSnitch" .Values.config.endpoint_snitch | quote }} 106 | - name: CASSANDRA_CLUSTER_NAME 107 | value: {{ default "Cassandra" .Values.config.cluster_name | quote }} 108 | - name: CASSANDRA_DC 109 | value: {{ default "DC1" .Values.config.dc_name | quote }} 110 | - name: CASSANDRA_RACK 111 | value: {{ default "RAC1" .Values.config.rack_name | quote }} 112 | - name: CASSANDRA_START_RPC 113 | value: {{ default "false" .Values.config.start_rpc | quote }} 114 | - name: POD_IP 115 | valueFrom: 116 | fieldRef: 117 | fieldPath: status.podIP 118 | {{- range $key, $value := .Values.env }} 119 | - name: {{ $key | quote }} 120 | value: {{ $value | quote }} 121 | {{- end }} 122 | livenessProbe: 123 | exec: 124 | command: [ "/bin/sh", "-c", "nodetool status" ] 125 | initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} 126 | periodSeconds: {{ .Values.livenessProbe.periodSeconds }} 127 | timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} 128 | successThreshold: {{ .Values.livenessProbe.successThreshold }} 129 | failureThreshold: {{ .Values.livenessProbe.failureThreshold }} 130 | readinessProbe: 131 | exec: 132 | command: [ "/bin/sh", "-c", "nodetool status | grep -E \"^UN\\s+{{ .Values.readinessProbe.address }}\"" ] 133 | initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} 134 | periodSeconds: {{ .Values.readinessProbe.periodSeconds }} 135 | timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} 136 | successThreshold: {{ .Values.readinessProbe.successThreshold }} 137 | failureThreshold: {{ .Values.readinessProbe.failureThreshold }} 138 | ports: 139 | - name: intra 140 | containerPort: 7000 141 | - name: tls 142 | containerPort: 7001 143 | - name: jmx 144 | containerPort: 7199 145 | - name: cql 146 | containerPort: {{ default 9042 .Values.config.ports.cql }} 147 | - name: thrift 148 | containerPort: {{ default 9160 .Values.config.ports.thrift }} 149 | {{- if .Values.config.ports.agent }} 150 | - name: agent 151 | containerPort: {{ .Values.config.ports.agent }} 152 | {{- end }} 153 | volumeMounts: 154 | - name: data 155 | mountPath: /var/lib/cassandra 156 | {{- range $key, $value := .Values.configOverrides }} 157 | - name: cassandra-config-{{ $key | replace "." 
"-" }} 158 | mountPath: /etc/cassandra/{{ $key }} 159 | subPath: {{ $key }} 160 | {{- end }} 161 | {{- if not .Values.persistence.enabled }} 162 | lifecycle: 163 | preStop: 164 | exec: 165 | command: ["/bin/sh", "-c", "exec nodetool decommission"] 166 | {{- end }} 167 | terminationGracePeriodSeconds: {{ default 30 .Values.podSettings.terminationGracePeriodSeconds }} 168 | {{- if .Values.image.pullSecrets }} 169 | imagePullSecrets: 170 | - name: {{ .Values.image.pullSecrets }} 171 | {{- end }} 172 | {{- if or .Values.configOverrides (not .Values.persistence.enabled) }} 173 | volumes: 174 | {{- end }} 175 | {{- range $key, $value := .Values.configOverrides }} 176 | - configMap: 177 | name: cassandra 178 | name: cassandra-config-{{ $key | replace "." "-" }} 179 | {{- end }} 180 | {{- if not .Values.persistence.enabled }} 181 | - name: data 182 | emptyDir: {} 183 | {{- else }} 184 | volumeClaimTemplates: 185 | - metadata: 186 | name: data 187 | labels: 188 | app: {{ template "cassandra.name" . }} 189 | release: {{ .Release.Name }} 190 | spec: 191 | accessModes: 192 | - {{ .Values.persistence.accessMode | quote }} 193 | resources: 194 | requests: 195 | storage: {{ .Values.persistence.size | quote }} 196 | {{- if .Values.persistence.storageClass }} 197 | {{- if (eq "-" .Values.persistence.storageClass) }} 198 | storageClassName: "" 199 | {{- else }} 200 | storageClassName: "{{ .Values.persistence.storageClass }}" 201 | {{- end }} 202 | {{- end }} 203 | {{- end }} 204 | -------------------------------------------------------------------------------- /charts/zookeeper/templates/statefulset.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: StatefulSet 3 | metadata: 4 | name: {{ template "zookeeper.fullname" . }} 5 | labels: 6 | app: {{ template "zookeeper.name" . }} 7 | chart: {{ template "zookeeper.chart" . }} 8 | release: {{ .Release.Name }} 9 | heritage: {{ .Release.Service }} 10 | component: server 11 | spec: 12 | serviceName: {{ template "zookeeper.headless" . }} 13 | replicas: {{ .Values.replicaCount }} 14 | selector: 15 | matchLabels: 16 | app: {{ template "zookeeper.name" . }} 17 | release: {{ .Release.Name }} 18 | component: server 19 | updateStrategy: 20 | {{ toYaml .Values.updateStrategy | indent 4 }} 21 | template: 22 | metadata: 23 | labels: 24 | app: {{ template "zookeeper.name" . }} 25 | release: {{ .Release.Name }} 26 | component: server 27 | {{- if .Values.podLabels }} 28 | ## Custom pod labels 29 | {{- range $key, $value := .Values.podLabels }} 30 | {{ $key }}: {{ $value | quote }} 31 | {{- end }} 32 | {{- end }} 33 | {{- if .Values.podAnnotations }} 34 | annotations: 35 | ## Custom pod annotations 36 | {{- range $key, $value := .Values.podAnnotations }} 37 | {{ $key }}: {{ $value | quote }} 38 | {{- end }} 39 | {{- end }} 40 | spec: 41 | terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} 42 | {{- if .Values.schedulerName }} 43 | schedulerName: "{{ .Values.schedulerName }}" 44 | {{- end }} 45 | securityContext: 46 | {{ toYaml .Values.securityContext | indent 8 }} 47 | {{- if .Values.priorityClassName }} 48 | priorityClassName: "{{ .Values.priorityClassName }}" 49 | {{- end }} 50 | containers: 51 | 52 | - name: zookeeper 53 | image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" 54 | imagePullPolicy: {{ .Values.image.pullPolicy }} 55 | {{- with .Values.command }} 56 | command: {{ range . }} 57 | - {{ . 
| quote }} 58 | {{- end }} 59 | {{- end }} 60 | ports: 61 | {{- range $key, $port := .Values.ports }} 62 | - name: {{ $key }} 63 | {{ toYaml $port | indent 14 }} 64 | {{- end }} 65 | livenessProbe: 66 | exec: 67 | command: 68 | - sh 69 | - /config-scripts/ok 70 | initialDelaySeconds: 20 71 | periodSeconds: 30 72 | timeoutSeconds: 5 73 | failureThreshold: 2 74 | successThreshold: 1 75 | readinessProbe: 76 | exec: 77 | command: 78 | - sh 79 | - /config-scripts/ready 80 | initialDelaySeconds: 20 81 | periodSeconds: 30 82 | timeoutSeconds: 5 83 | failureThreshold: 2 84 | successThreshold: 1 85 | env: 86 | - name: ZK_REPLICAS 87 | value: {{ .Values.replicaCount | quote }} 88 | {{- range $key, $value := .Values.env }} 89 | - name: {{ $key | upper | replace "." "_" }} 90 | value: {{ $value | quote }} 91 | {{- end }} 92 | {{- range $secret := .Values.secrets }} 93 | {{- range $key := $secret.keys }} 94 | - name: {{ (print $secret.name "_" $key) | upper }} 95 | valueFrom: 96 | secretKeyRef: 97 | name: {{ $secret.name }} 98 | key: {{ $key }} 99 | {{- end }} 100 | {{- end }} 101 | resources: 102 | {{ toYaml .Values.resources | indent 12 }} 103 | volumeMounts: 104 | - name: data 105 | mountPath: /data 106 | {{- range $secret := .Values.secrets }} 107 | {{- if $secret.mountPath }} 108 | {{- range $key := $secret.keys }} 109 | - name: {{ $.Release.Name }}-{{ $secret.name }} 110 | mountPath: {{ $secret.mountPath }}/{{ $key }} 111 | subPath: {{ $key }} 112 | readOnly: true 113 | {{- end }} 114 | {{- end }} 115 | {{- end }} 116 | - name: config 117 | mountPath: /config-scripts 118 | 119 | 120 | {{- if .Values.exporters.jmx.enabled }} 121 | - name: jmx-exporter 122 | image: "{{ .Values.exporters.jmx.image.repository }}:{{ .Values.exporters.jmx.image.tag }}" 123 | imagePullPolicy: {{ .Values.exporters.jmx.image.pullPolicy }} 124 | ports: 125 | {{- range $key, $port := .Values.exporters.jmx.ports }} 126 | - name: {{ $key }} 127 | {{ toYaml $port | indent 14 }} 128 | {{- end }} 129 | livenessProbe: 130 | {{ toYaml .Values.exporters.jmx.livenessProbe | indent 12 }} 131 | readinessProbe: 132 | {{ toYaml .Values.exporters.jmx.readinessProbe | indent 12 }} 133 | env: 134 | - name: SERVICE_PORT 135 | value: {{ .Values.exporters.jmx.ports.jmxxp.containerPort | quote }} 136 | {{- with .Values.exporters.jmx.env }} 137 | {{- range $key, $value := . }} 138 | - name: {{ $key | upper | replace "." 
"_" }} 139 | value: {{ $value | quote }} 140 | {{- end }} 141 | {{- end }} 142 | resources: 143 | {{ toYaml .Values.exporters.jmx.resources | indent 12 }} 144 | volumeMounts: 145 | - name: config-jmx-exporter 146 | mountPath: /opt/jmx_exporter/config.yml 147 | subPath: config.yml 148 | {{- end }} 149 | 150 | {{- if .Values.exporters.zookeeper.enabled }} 151 | - name: zookeeper-exporter 152 | image: "{{ .Values.exporters.zookeeper.image.repository }}:{{ .Values.exporters.zookeeper.image.tag }}" 153 | imagePullPolicy: {{ .Values.exporters.zookeeper.image.pullPolicy }} 154 | args: 155 | - -bind-addr=:{{ .Values.exporters.zookeeper.ports.zookeeperxp.containerPort }} 156 | - -metrics-path={{ .Values.exporters.zookeeper.path }} 157 | - -zookeeper=localhost:{{ .Values.ports.client.containerPort }} 158 | - -log-level={{ .Values.exporters.zookeeper.config.logLevel }} 159 | - -reset-on-scrape={{ .Values.exporters.zookeeper.config.resetOnScrape }} 160 | ports: 161 | {{- range $key, $port := .Values.exporters.zookeeper.ports }} 162 | - name: {{ $key }} 163 | {{ toYaml $port | indent 14 }} 164 | {{- end }} 165 | livenessProbe: 166 | {{ toYaml .Values.exporters.zookeeper.livenessProbe | indent 12 }} 167 | readinessProbe: 168 | {{ toYaml .Values.exporters.zookeeper.readinessProbe | indent 12 }} 169 | env: 170 | {{- range $key, $value := .Values.exporters.zookeeper.env }} 171 | - name: {{ $key | upper | replace "." "_" }} 172 | value: {{ $value | quote }} 173 | {{- end }} 174 | resources: 175 | {{ toYaml .Values.exporters.zookeeper.resources | indent 12 }} 176 | {{- end }} 177 | 178 | {{- with .Values.nodeSelector }} 179 | nodeSelector: 180 | {{ toYaml . | indent 8 }} 181 | {{- end }} 182 | {{- with .Values.affinity }} 183 | affinity: 184 | {{ toYaml . | indent 8 }} 185 | {{- end }} 186 | {{- with .Values.tolerations }} 187 | tolerations: 188 | {{ toYaml . | indent 8 }} 189 | {{- end }} 190 | volumes: 191 | - name: config 192 | configMap: 193 | name: {{ template "zookeeper.fullname" . }} 194 | defaultMode: 0555 195 | {{- range .Values.secrets }} 196 | - name: {{ $.Release.Name }}-{{ .name }} 197 | secret: 198 | secretName: {{ .name }} 199 | {{- end }} 200 | {{- if .Values.exporters.jmx.enabled }} 201 | - name: config-jmx-exporter 202 | configMap: 203 | name: {{ .Release.Name }}-jmx-exporter 204 | {{- end }} 205 | {{- if not .Values.persistence.enabled }} 206 | - name: data 207 | emptyDir: {} 208 | {{- end }} 209 | {{- if .Values.persistence.enabled }} 210 | volumeClaimTemplates: 211 | - metadata: 212 | name: data 213 | spec: 214 | accessModes: 215 | - {{ .Values.persistence.accessMode | quote }} 216 | resources: 217 | requests: 218 | storage: {{ .Values.persistence.size | quote }} 219 | {{- if .Values.persistence.storageClass }} 220 | {{- if (eq "-" .Values.persistence.storageClass) }} 221 | storageClassName: "" 222 | {{- else }} 223 | storageClassName: "{{ .Values.persistence.storageClass }}" 224 | {{- end }} 225 | {{- end }} 226 | {{- end }} 227 | -------------------------------------------------------------------------------- /charts/solr/README.md: -------------------------------------------------------------------------------- 1 | # Solr Helm Chart 2 | 3 | This helm chart installs a Solr cluster and its required Zookeeper cluster into a running 4 | kubernetes cluster. 
5 | 6 | The chart installs the Solr docker image from: https://hub.docker.com/_/solr/ 7 | 8 | ## Dependencies 9 | 10 | - The zookeeper incubator helm chart 11 | - Tested on kubernetes 1.10+ 12 | 13 | ## Installation 14 | 15 | To install the Solr helm chart run: 16 | 17 | ```txt 18 | helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator 19 | helm install --name solr incubator/solr 20 | ``` 21 | 22 | ## Configuration Options 23 | 24 | The following table shows the configuration options for the Solr helm chart: 25 | 26 | | Parameter | Description | Default Value | 27 | | --------------------------------------------- | ------------------------------------- | --------------------------------------------------------------------- | 28 | | `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | 29 | | `port` | The port that Solr will listen on | `8983` | 30 | | `replicaCount` | The number of replicas in the Solr statefulset | `3` | 31 | | `javaMem` | JVM memory settings to pass to Solr | `-Xms2g -Xmx3g` | 32 | | `resources` | Resource limits and requests to set on the solr pods | `{}` | 33 | | `extraEnvVars` | Additional environment variables to set on the solr pods (in yaml syntax) | `[]` | 34 | | `terminationGracePeriodSeconds` | The termination grace period of the Solr pods | `180`| 35 | | `image.repository` | The repository to pull the docker image from| `solr` | 36 | | `image.tag` | The tag on the repository to pull | `7.7.2` | 37 | | `image.pullPolicy` | Solr pod pullPolicy | `IfNotPresent` | 38 | | `image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | 39 | | `livenessProbe.initialDelaySeconds` | Initial Delay for Solr pod liveness probe | `20` | 40 | | `livenessProbe.periodSeconds` | Poll rate for liveness probe | `10` | 41 | | `readinessProbe.initialDelaySeconds` | Initial Delay for Solr pod readiness probe | `15` | 42 | | `readinessProbe.periodSeconds` | Poll rate for readiness probe | `5` | 43 | | `podAnnotations` | Annotations to be applied to the solr pods | `{}` | 44 | | `affinity` | Affinity policy to be applied to the Solr pods | `{}` | 45 | | `updateStrategy` | The update strategy of the solr pods | `{}` | 46 | | `logLevel` | The log level of the solr pods | `INFO` | 47 | | `podDisruptionBudget` | The pod disruption budget for the Solr statefulset | `{"maxUnavailable": 1}` | 48 | | `schedulerName` | The name of the k8s scheduler (other than default) | ` nil` | 49 | | `volumeClaimTemplates.storageClassName` | The name of the storage class for the Solr PVC | `` | 50 | | `volumeClaimTemplates.storageSize` | The size of the PVC | `20Gi` | 51 | | `volumeClaimTemplates.accessModes` | The access mode of the PVC| `[ "ReadWriteOnce" ]` | 52 | | `tls.enabled` | Whether to enable TLS, requires `tls.certSecret.name` to be set to a secret containing cert details, see README for details | `false` | 53 | | `tls.wantClientAuth` | Whether Solr wants client authentication | `false` | 54 | | `tls.needClientAuth` | Whether Solr requires client authentication | `false` | 55 | | `tls.keystorePassword` | Password for the tls java keystore | `changeit` | 56 | | `tls.importKubernetesCA` | Whether to import the kubernetes CA into the Solr truststore | `false` | 57 | | `tls.checkPeerName` | Whether Solr checks the name in the TLS certs | `false` | 58 | | `tls.caSecret.name` | The name of the Kubernetes secret containing the ca 
bundle to import into the truststore | `` |
59 | | `tls.caSecret.bundlePath` | The key in the Kubernetes secret that contains the CA bundle | `` |
60 | | `tls.certSecret.name` | The name of the Kubernetes secret that contains the TLS certificate and private key | `` |
61 | | `tls.certSecret.keyPath` | The key in the Kubernetes secret that contains the private key | `tls.key` |
62 | | `tls.certSecret.certPath` | The key in the Kubernetes secret that contains the TLS certificate | `tls.crt` |
63 | | `service.type` | The type of service for the solr client service | `ClusterIP` |
64 | | `service.annotations` | Annotations to apply to the solr client service | `{}` |
65 | | `exporter.enabled` | Whether to enable the Solr Prometheus exporter | `false` |
66 | | `exporter.image.pullSecrets` | Specify docker-registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) |
67 | | `exporter.configFile` | The path in the docker image that the exporter loads the config from | `/opt/solr/contrib/prometheus-exporter/conf/solr-exporter-config.xml` |
68 | | `exporter.updateStrategy` | Update strategy for the exporter deployment | `{}` |
69 | | `exporter.podAnnotations` | Annotations to set on the exporter pods | `{}` |
70 | | `exporter.resources` | Resource limits to set on the exporter pods | `{}` |
71 | | `exporter.port` | The port that the exporter runs on | `9983` |
72 | | `exporter.threads` | The number of query threads that the exporter runs | `7` |
73 | | `exporter.livenessProbe.initialDelaySeconds` | Initial Delay for the exporter pod liveness | `20` |
74 | | `exporter.livenessProbe.periodSeconds` | Poll rate for liveness probe | `10` |
75 | | `exporter.readinessProbe.initialDelaySeconds` | Initial Delay for the exporter pod readiness | `15` |
76 | | `exporter.readinessProbe.periodSeconds` | Poll rate for readiness probe | `5` |
77 | | `exporter.service.type` | The type of the exporter service | `ClusterIP` |
78 | | `exporter.service.annotations` | Annotations to apply to the exporter service | `{}` |
79 |
80 | ## Starting the service with command-line overrides
81 |
82 | ```sh
83 | helm install --name solr \
84 |   --set image.tag=7.7.2,javaMem="-Xms1g -Xmx1g",logLevel=INFO,replicaCount=2,livenessProbe.initialDelaySeconds=420,exporter.readinessProbe.periodSeconds=30 incubator/solr
85 | ```
86 |
87 | ## TLS Configuration
88 |
89 | Solr can be configured to use TLS to encrypt the traffic between solr nodes. To set this up with a certificate signed by the Kubernetes CA:
90 |
91 | Generate an SSL certificate signing request for the installation:
92 |
93 | `cfssl genkey ssl_config.json | cfssljson -bare server`
94 |
95 | base64-encode the CSR and apply it into Kubernetes as a CertificateSigningRequest:
96 |
97 | ```sh
98 | export MY_CSR_NAME="solr-certificate"
99 | cat <<EOF | kubectl apply -f -
100 | apiVersion: certificates.k8s.io/v1beta1
101 | kind: CertificateSigningRequest
102 | metadata:
103 |   name: ${MY_CSR_NAME}
104 | spec:
105 |   groups:
106 |   - system:authenticated
107 |   request: $(cat server.csr | base64 | tr -d '\n')
108 |   usages:
109 |   - digital signature
110 |   - key encipherment
111 |   - server auth
112 | EOF
113 | ```
114 |
115 | Approve the certificate signing request:
116 |
117 | `kubectl certificate approve ${MY_CSR_NAME}`
118 |
119 | Once the request has been approved, retrieve the signed certificate:
120 |
121 | `kubectl get csr ${MY_CSR_NAME} -o jsonpath='{.status.certificate}' | base64 --decode > server-cert.pem`
122 |
123 | We store the certificate and private key in a Kubernetes secret:
124 |
125 | `kubectl create secret tls solr-certificate --cert server-cert.pem --key server-key.pem`
126 |
127 | Now the secret can be used in the solr installation:
128 |
129 | `helm install . --set tls.enabled=true,tls.certSecret.name=solr-certificate,tls.importKubernetesCA=true`
130 |
131 | ## Minikube Notes
132 |
133 | - Out of the box, the chart starts Solr and ZooKeeper with 2G Java heaps each, so give the Minikube VM extra memory:
134 | - minikube start --vm-driver=hyperkit --memory 4096
135 | - minikube start --vm-driver=virtualbox --memory 4096
136 |
--------------------------------------------------------------------------------
/charts/zookeeper/values.yaml:
--------------------------------------------------------------------------------
1 | ## As weighted quorums are not supported, it is imperative that an odd number of replicas
2 | ## be chosen. Moreover, the number of replicas should be either 1, 3, 5, or 7.
3 | ##
4 | ## ref: https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper#stateful-set
5 | replicaCount: 1 # Desired quantity of ZooKeeper pods. This should always be 1, 3, 5, or 7.
6 |
7 | podDisruptionBudget:
8 |   maxUnavailable: 1 # Limits how many ZooKeeper pods may be unavailable due to voluntary disruptions.
9 |
10 | terminationGracePeriodSeconds: 1800 # Duration in seconds a ZooKeeper pod needs to terminate gracefully.
11 |
12 | updateStrategy:
13 |   type: RollingUpdate
14 |
15 | ## refs:
16 | ## - https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
17 | ## - https://github.com/kubernetes/contrib/blob/master/statefulsets/zookeeper/Makefile#L1
18 | image:
19 |   repository: zookeeper # Container image repository for zookeeper container.
20 |   tag: 3.5.5 # Container image tag for zookeeper container.
21 |   pullPolicy: IfNotPresent # Image pull criteria for zookeeper container.
22 |
23 | service:
24 |   type: ClusterIP # Exposes zookeeper on a cluster-internal IP.
25 |   annotations: {} # Arbitrary non-identifying metadata for zookeeper service.
26 |     ## AWS example for use with LoadBalancer service type.
27 |     # external-dns.alpha.kubernetes.io/hostname: zookeeper.cluster.local
28 |     # service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
29 |     # service.beta.kubernetes.io/aws-load-balancer-internal: "true"
30 |   ports:
31 |     client:
32 |       port: 2181 # Service port number for client port.
33 |       targetPort: client # Service target port for client port.
34 |       protocol: TCP # Service port protocol for client port.
35 |
36 | ## Headless service.
37 | ##
38 | headless:
39 |   annotations: {}
40 |
41 | ports:
42 |   client:
43 |     containerPort: 2181 # Port number for zookeeper container client port.
44 |     protocol: TCP # Protocol for zookeeper container client port.
45 |   election:
46 |     containerPort: 3888 # Port number for zookeeper container election port.
47 |     protocol: TCP # Protocol for zookeeper container election port.
48 |   server:
49 |     containerPort: 2888 # Port number for zookeeper container server port.
50 |     protocol: TCP # Protocol for zookeeper container server port.
51 |
52 | resources: {} # Optionally specify how much CPU and memory (RAM) each zookeeper container needs.
53 |   # We usually recommend not to specify default resources and to leave this as a conscious
54 |   # choice for the user. This also increases chances charts run on environments with little
55 |   # resources, such as Minikube. If you do want to specify resources, uncomment the following
56 |   # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
57 |   # limits:
58 |   #   cpu: 100m
59 |   #   memory: 128Mi
60 |   # requests:
61 |   #   cpu: 100m
62 |   #   memory: 128Mi
63 |
64 | priorityClassName: ""
65 |
66 | nodeSelector: {} # Node label-values required to run zookeeper pods.
67 |
68 | tolerations: [] # Node taint overrides for zookeeper pods.
69 |
70 | affinity: {} # Criteria by which pod label-values influence scheduling for zookeeper pods.
71 |   # podAntiAffinity:
72 |   #   requiredDuringSchedulingIgnoredDuringExecution:
73 |   #     - topologyKey: "kubernetes.io/hostname"
74 |   #       labelSelector:
75 |   #         matchLabels:
76 |   #           release: zookeeper
77 |
78 | podAnnotations: {} # Arbitrary non-identifying metadata for zookeeper pods.
79 |   # prometheus.io/scrape: "true"
80 |   # prometheus.io/path: "/metrics"
81 |   # prometheus.io/port: "9141"
82 |
83 | podLabels: {} # Key/value pairs that are attached to zookeeper pods.
84 |   # team: "developers"
85 |   # service: "zookeeper"
86 |
87 | securityContext:
88 |   fsGroup: 1000
89 |   runAsUser: 1000
90 |
91 | ## Useful, if you want to use an alternate image.
92 | command:
93 |   - /bin/bash
94 |   - -xec
95 |   - /config-scripts/run
96 |
97 | ## Useful if using any custom authorizer.
98 | ## Pass any secrets to the zookeeper pods. Each secret will be passed as an
99 | ## environment variable by default. The secret can also be mounted to a
100 | ## specific path (in addition to environment variable) if required. Environment
101 | ## variable names are generated as: `<secretName>_<secretKey>` (all upper case)
102 | # secrets:
103 | #   - name: myKafkaSecret
104 | #     keys:
105 | #       - username
106 | #       - password
107 | #     # mountPath: /opt/kafka/secret
108 | #   - name: myZkSecret
109 | #     keys:
110 | #       - user
111 | #       - pass
112 | #     mountPath: /opt/zookeeper/secret
113 |
114 | persistence:
115 |   enabled: true
116 |   ## zookeeper data Persistent Volume Storage Class
117 |   ## If defined, storageClassName: <storageClass>
118 |   ## If set to "-", storageClassName: "", which disables dynamic provisioning
119 |   ## If undefined (the default) or set to null, no storageClassName spec is
120 |   ##   set, choosing the default provisioner. (gp2 on AWS, standard on
121 |   ##   GKE, AWS & OpenStack)
122 |   ##
123 |   # storageClass: "-"
124 |   accessMode: ReadWriteOnce
125 |   size: 5Gi
126 |
127 | ## Exporters query apps for metrics and make those metrics available for
128 | ## Prometheus to scrape.
129 | exporters:
130 |
131 |   jmx:
132 |     enabled: false
133 |     image:
134 |       repository: sscaling/jmx-prometheus-exporter
135 |       tag: 0.3.0
136 |       pullPolicy: IfNotPresent
137 |     config:
138 |       lowercaseOutputName: false
139 |       ## ref: https://github.com/prometheus/jmx_exporter/blob/master/example_configs/zookeeper.yaml
140 |       rules:
141 |         - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+)><>(\\w+)"
142 |           name: "zookeeper_$2"
143 |         - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+)><>(\\w+)"
144 |           name: "zookeeper_$3"
145 |           labels:
146 |             replicaId: "$2"
147 |         - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+)><>(\\w+)"
148 |           name: "zookeeper_$4"
149 |           labels:
150 |             replicaId: "$2"
151 |             memberType: "$3"
152 |         - pattern: "org.apache.ZooKeeperService<name0=ReplicatedServer_id(\\d+), name1=replica.(\\d+), name2=(\\w+), name3=(\\w+)><>(\\w+)"
153 |           name: "zookeeper_$4_$5"
154 |           labels:
155 |             replicaId: "$2"
156 |             memberType: "$3"
157 |       startDelaySeconds: 30
158 |     env: {}
159 |     resources: {}
160 |     path: /metrics
161 |     ports:
162 |       jmxxp:
163 |         containerPort: 9404
164 |         protocol: TCP
165 |     livenessProbe:
166 |       httpGet:
167 |         path: /metrics
168 |         port: jmxxp
169 |       initialDelaySeconds: 30
170 |       periodSeconds: 15
171 |       timeoutSeconds: 60
172 |       failureThreshold: 8
173 |       successThreshold: 1
174 |     readinessProbe:
175 |       httpGet:
176 |         path: /metrics
177 |         port: jmxxp
178 |       initialDelaySeconds: 30
179 |       periodSeconds: 15
180 |       timeoutSeconds: 60
181 |       failureThreshold: 8
182 |       successThreshold: 1
183 |     serviceMonitor:
184 |       interval: 30s
185 |       scrapeTimeout: 30s
186 |       scheme: http
187 |
188 |   zookeeper:
189 |     ## refs:
190 |     ## - https://github.com/carlpett/zookeeper_exporter
191 |     ## - https://hub.docker.com/r/josdotso/zookeeper-exporter/
192 |     ## - https://www.datadoghq.com/blog/monitoring-kafka-performance-metrics/#zookeeper-metrics
193 |     enabled: false
194 |     image:
195 |       repository: josdotso/zookeeper-exporter
196 |       tag: v1.1.2
197 |       pullPolicy: IfNotPresent
198 |     config:
199 |       logLevel: info
200 |       resetOnScrape: "true"
201 |     env: {}
202 |     resources: {}
203 |     path: /metrics
204 |     ports:
205 |       zookeeperxp:
206 |         containerPort: 9141
207 |         protocol: TCP
208 |     livenessProbe:
209 |       httpGet:
210 |         path: /metrics
211 |         port: zookeeperxp
212 |       initialDelaySeconds: 30
213 |       periodSeconds: 15
214 |       timeoutSeconds: 60
215 |       failureThreshold: 8
216 |       successThreshold: 1
217 |     readinessProbe:
218 |       httpGet:
219 |         path: /metrics
220 |         port: zookeeperxp
221 |       initialDelaySeconds: 30
222 |       periodSeconds: 15
223 |       timeoutSeconds: 60
224 |       failureThreshold: 8
225 |       successThreshold: 1
226 |     serviceMonitor:
227 |       interval: 30s
228 |       scrapeTimeout: 30s
229 |       scheme: http
230 |
231 | ## ServiceMonitor configuration in case you are using Prometheus Operator
232 | prometheus:
233 |   serviceMonitor:
234 |     ## If true a ServiceMonitor for each enabled exporter will be installed
235 |     enabled: false
236 |     ## The namespace where the ServiceMonitor(s) will be installed
237 |     # namespace: monitoring
238 |     ## The selector the Prometheus instance is searching for
239 |     ## [Default Prometheus Operator selector](https://github.com/helm/charts/blob/f5a751f174263971fafd21eee4e35416d6612a3d/stable/prometheus-operator/templates/prometheus/prometheus.yaml#L74)
240 |     selector: {}
241 |
242 | ## Use an alternate scheduler, e.g. "stork".
243 | ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
244 | ##
245 | # schedulerName:
246 |
247 | ## ref: https://github.com/kubernetes/contrib/tree/master/statefulsets/zookeeper
248 | env:
249 |
250 |   ## Options related to JMX exporter.
251 |   ## ref: https://github.com/apache/zookeeper/blob/master/bin/zkServer.sh#L36
252 |   JMXAUTH: "false"
253 |   JMXDISABLE: "false"
254 |   JMXPORT: 1099
255 |   JMXSSL: "false"
256 |
257 |   ## The port on which the server will accept client requests.
258 |   ZOO_PORT: 2181
259 |
260 |   ## The number of Ticks that an ensemble member is allowed to take to perform leader
261 |   ## election.
262 |   ZOO_INIT_LIMIT: 5
263 |
264 |   ZOO_TICK_TIME: 2000
265 |
266 |   ## The maximum number of concurrent client connections that
267 |   ## a server in the ensemble will accept.
268 |   ZOO_MAX_CLIENT_CNXNS: 60
269 |
270 |   ## The number of Ticks by which a follower may lag behind the ensemble's leader.
271 |   ZK_SYNC_LIMIT: 10
272 |
273 |   ## The number of wall clock ms that corresponds to a Tick for the ensemble's
274 |   ## internal time.
275 |   ZK_TICK_TIME: 2000
276 |
277 |   ZOO_AUTOPURGE_PURGEINTERVAL: 0
278 |   ZOO_AUTOPURGE_SNAPRETAINCOUNT: 3
279 |   ZOO_STANDALONE_ENABLED: false
280 |
281 | jobs:
282 |   ## ref: http://zookeeper.apache.org/doc/r3.4.10/zookeeperProgrammers.html#ch_zkSessions
283 |   chroots:
284 |     enabled: false
285 |     activeDeadlineSeconds: 300
286 |     backoffLimit: 5
287 |     completions: 1
288 |     config:
289 |       create: []
290 |       # - /kafka
291 |       # - /ureplicator
292 |     env: []
293 |     parallelism: 1
294 |     resources: {}
295 |     restartPolicy: Never
296 |
--------------------------------------------------------------------------------
/charts/cassandra/README.md:
--------------------------------------------------------------------------------
1 | # Cassandra
2 | A Cassandra Chart for Kubernetes
3 |
4 | ## Install Chart
5 | To install the Cassandra Chart into your Kubernetes cluster (this Chart requires a persistent volume by default, so you may need to create a storage class before installing the chart; see the [Persist data](#persist_data) section):
6 |
7 | ```bash
8 | helm install --namespace "cassandra" -n "cassandra" incubator/cassandra
9 | ```
10 |
11 | After installation succeeds, you can get the status of the Chart:
12 |
13 | ```bash
14 | helm status "cassandra"
15 | ```
16 |
17 | If you want to delete your Chart, use this command:
18 | ```bash
19 | helm delete --purge "cassandra"
20 | ```
21 |
22 | ## Upgrading
23 |
24 | To upgrade your Cassandra release, simply run:
25 |
26 | ```bash
27 | helm upgrade "cassandra" incubator/cassandra
28 | ```
29 |
30 | ### 0.12.0
31 |
32 | This version fixes https://github.com/helm/charts/issues/7803 by removing mutable labels in `spec.VolumeClaimTemplate.metadata.labels` so that the chart is upgradable.
33 |
34 | Until this version, in order to upgrade you had to delete the Cassandra StatefulSet before upgrading:
35 | ```bash
36 | $ kubectl delete statefulset --cascade=false my-cassandra-release
37 | ```
38 |
39 |
40 | ## Persist data
41 | You need to create a `StorageClass` before you are able to persist data in a persistent volume.
42 | To create a `StorageClass` on Google Cloud, run the following:
43 |
44 | ```bash
45 | kubectl create -f sample/create-storage-gce.yaml
46 | ```
47 |
48 | And set the following values in `values.yaml`:
49 |
50 | ```yaml
51 | persistence:
52 |   enabled: true
53 | ```
54 |
55 | If you want to create a `StorageClass` on another platform, please see the documentation here: [https://kubernetes.io/docs/user-guide/persistent-volumes/](https://kubernetes.io/docs/user-guide/persistent-volumes/)
56 |
57 | When running a cluster without persistence, the termination of a pod will first initiate a decommissioning of that pod.
58 | Depending on the amount of data stored inside the cluster, this may take a while. In order to complete a graceful
59 | termination, pods need to be given more time for it. Set the following values in `values.yaml`:
60 |
61 | ```yaml
62 | podSettings:
63 |   terminationGracePeriodSeconds: 1800
64 | ```
65 |
66 | ## Install Chart with specific cluster size
67 | By default, this Chart will create a Cassandra cluster with 3 nodes. If you want to change the cluster size during installation, you can use the `--set config.cluster_size={value}` argument, or edit `values.yaml`.
68 |
69 | For example, to set the cluster size to 5:
70 |
71 | ```bash
72 | helm install --namespace "cassandra" -n "cassandra" --set config.cluster_size=5 incubator/cassandra/
73 | ```
74 |
75 | ## Install Chart with specific resource size
76 | By default, this Chart will create a Cassandra cluster with 2 vCPUs and 4Gi of memory, which is suitable for a development environment.
77 | If you want to use this Chart for production, I recommend updating the CPU to 4 vCPUs and the memory to 16Gi. Also increase `max_heap_size` and `heap_new_size` accordingly.
78 | To update the settings, edit `values.yaml`.
79 |
80 | ## Install Chart with specific node
81 | Sometimes you may need to deploy Cassandra to specific nodes to allocate resources. You can use a node selector by setting `nodes.enabled=true` in `values.yaml`.
82 | For example, suppose you have 6 VMs in your node pools and want to deploy Cassandra to nodes labeled `cloud.google.com/gke-nodepool: pool-db`.
83 |
84 | Set the following values in `values.yaml`:
85 |
86 | ```yaml
87 | nodes:
88 |   enabled: true
89 |   selector:
90 |     nodeSelector:
91 |       cloud.google.com/gke-nodepool: pool-db
92 | ```
93 |
94 | ## Configuration
95 |
96 | The following table lists the configurable parameters of the Cassandra chart and their default values.
97 |
98 | | Parameter | Description | Default |
99 | | ----------------------- | --------------------------------------------- | ---------------------------------------------------------- |
100 | | `image.repo` | `cassandra` image repository | `cassandra` |
101 | | `image.tag` | `cassandra` image tag | `3.11.3` |
102 | | `image.pullPolicy` | Image pull policy | `Always` if `imageTag` is `latest`, else `IfNotPresent` |
103 | | `image.pullSecrets` | Image pull secrets | `nil` |
104 | | `config.cluster_domain` | The name of the cluster domain. | `cluster.local` |
105 | | `config.cluster_name` | The name of the cluster. | `cassandra` |
106 | | `config.cluster_size` | The number of nodes in the cluster. | `3` |
107 | | `config.seed_size` | The number of seed nodes used to bootstrap new clients joining the cluster. | `2` |
108 | | `config.seeds` | The comma-separated list of seed nodes. | Automatically generated according to `.Release.Name` and `config.seed_size` (e.g. `cassandra-0.cassandra.<namespace>.svc.cluster.local,...`) |
## Install Chart with specific node
Sometimes you may need to deploy Cassandra to specific nodes in order to allocate resources. You can use a node selector by setting `nodes.enabled=true` in `values.yaml`.
For example, suppose you have 6 VMs in your node pools and you want to deploy Cassandra to the nodes labeled `cloud.google.com/gke-nodepool: pool-db`.

Set the following values in `values.yaml`:

```yaml
nodes:
  enabled: true
  selector:
    nodeSelector:
      cloud.google.com/gke-nodepool: pool-db
```

## Configuration

The following table lists the configurable parameters of the Cassandra chart and their default values.

| Parameter | Description | Default |
| ----------------------- | --------------------------------------------- | ---------------------------------------------------------- |
| `image.repo` | `cassandra` image repository | `cassandra` |
| `image.tag` | `cassandra` image tag | `3.11.3` |
| `image.pullPolicy` | Image pull policy | `Always` if `image.tag` is `latest`, else `IfNotPresent` |
| `image.pullSecrets` | Image pull secrets | `nil` |
| `config.cluster_domain` | The name of the cluster domain | `cluster.local` |
| `config.cluster_name` | The name of the cluster | `cassandra` |
| `config.cluster_size` | The number of nodes in the cluster | `3` |
| `config.seed_size` | The number of seed nodes used to bootstrap new clients joining the cluster | `2` |
| `config.seeds` | The comma-separated list of seed nodes | Automatically generated from `.Release.Name` and `config.seed_size` |
| `config.num_tokens` | Number of tokens (vnodes) assigned to each node | `256` |
| `config.dc_name` | Datacenter name | `DC1` |
| `config.rack_name` | Rack name | `RAC1` |
| `config.endpoint_snitch` | Snitch used to determine cluster topology | `SimpleSnitch` |
| `config.max_heap_size` | JVM maximum heap size | `2048M` |
| `config.heap_new_size` | JVM new-generation heap size | `512M` |
| `config.ports.cql` | CQL native transport port | `9042` |
| `config.ports.thrift` | Thrift transport port | `9160` |
| `config.ports.agent` | The port of the JVM agent (if any) | `nil` |
| `config.start_rpc` | Whether to start the Thrift RPC server | `false` |
| `configOverrides` | Overrides config files in the `/etc/cassandra` dir | `{}` |
| `commandOverrides` | Overrides the default docker command | `[]` |
| `argsOverrides` | Overrides the default docker args | `[]` |
| `env` | Custom env variables | `{}` |
| `schedulerName` | Name of k8s scheduler (other than the default) | `nil` |
| `persistence.enabled` | Use a PVC to persist data | `true` |
| `persistence.storageClass` | Storage class of backing PVC | `nil` (uses alpha storage class annotation) |
| `persistence.accessMode` | Use volume as ReadOnly or ReadWrite | `ReadWriteOnce` |
| `persistence.size` | Size of data volume | `10Gi` |
| `resources` | CPU/Memory resource requests/limits | Memory: `4Gi`, CPU: `2` |
| `service.type` | k8s service type exposing ports, e.g. `NodePort` | `ClusterIP` |
| `podManagementPolicy` | podManagementPolicy of the StatefulSet | `OrderedReady` |
| `podDisruptionBudget` | Pod disruption budget | `{}` |
| `podAnnotations` | Pod annotations for the StatefulSet | `{}` |
| `updateStrategy.type` | UpdateStrategy of the StatefulSet | `OnDelete` |
| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | `90` |
| `livenessProbe.periodSeconds` | How often to perform the probe | `30` |
| `livenessProbe.timeoutSeconds` | When the probe times out | `5` |
| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | `90` |
| `readinessProbe.periodSeconds` | How often to perform the probe | `30` |
| `readinessProbe.timeoutSeconds` | When the probe times out | `5` |
| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | `1` |
| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded | `3` |
| `readinessProbe.address` | Address used to check that the node has joined the cluster and is ready | `${POD_IP}` |
| `rbac.create` | Specifies whether RBAC resources should be created | `true` |
| `serviceAccount.create` | Specifies whether a ServiceAccount should be created | `true` |
| `serviceAccount.name` | The name of the ServiceAccount to use | |
| `backup.enabled` | Enable backup on chart installation | `false` |
| `backup.schedule` | Keyspaces to back up, each with a cron time | |
| `backup.annotations` | Backup pod annotations | iam.amazonaws.com/role: `cain` |
| `backup.image.repository` | Backup image repository | `maorfr/cain` |
| `backup.image.tag` | Backup image tag | `0.6.0` |
| `backup.extraArgs` | Additional arguments for cain | `[]` |
| `backup.env` | Backup environment variables | AWS_REGION: `us-east-1` |
| `backup.resources` | Backup CPU/Memory resource requests/limits | Memory: `1Gi`, CPU: `1` |
| `backup.destination` | Destination to store backup artifacts | `s3://bucket/cassandra` |
| `backup.google.serviceAccountSecret` | Secret containing credentials if GCS is used as destination | |
| `exporter.enabled` | Enable Cassandra exporter | `false` |
| `exporter.servicemonitor` | Enable ServiceMonitor for exporter | `true` |
| `exporter.image.repo` | Exporter image repository | `criteord/cassandra_exporter` |
| `exporter.image.tag` | Exporter image tag | `2.0.2` |
| `exporter.port` | Exporter port | `5556` |
| `exporter.jvmOpts` | Exporter additional JVM options | |
| `exporter.resources` | Exporter CPU/Memory resource requests/limits | `{}` |
| `affinity` | Kubernetes node affinity | `{}` |
| `tolerations` | Kubernetes node tolerations | `[]` |
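
To see how several of these parameters combine, the hypothetical overrides file below (every value is only an example) could be passed to `helm install` or `helm upgrade` via `-f`:

```yaml
# my-values.yaml -- illustrative overrides only; all values are examples
config:
  cluster_size: 5
  endpoint_snitch: GossipingPropertyFileSnitch  # assumption: rack/DC-aware snitch
persistence:
  size: 100Gi
exporter:
  enabled: true
tolerations:
  - key: dedicated        # hypothetical taint on a database node pool
    operator: Equal
    value: cassandra
    effect: NoSchedule
```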

## Scale cassandra
When you want to change the cluster size of your Cassandra cluster, you can use the `helm upgrade` command:

```bash
helm upgrade --set config.cluster_size=5 cassandra incubator/cassandra
```

## Get cassandra status
You can get the status of your Cassandra cluster by running:

```bash
kubectl exec -it --namespace cassandra $(kubectl get pods --namespace cassandra -l app=cassandra-cassandra -o jsonpath='{.items[0].metadata.name}') nodetool status
```

Output:
```bash
Datacenter: asia-east1
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load        Tokens  Owns (effective)  Host ID                               Rack
UN  10.8.1.11  108.45 KiB  256     66.1%             410cc9da-8993-4dc2-9026-1dd381874c54  a
UN  10.8.4.12  84.08 KiB   256     68.7%             96e159e1-ef94-406e-a0be-e58fbd32a830  c
UN  10.8.3.6   103.07 KiB  256     65.2%             1a42b953-8728-4139-b070-b855b8fff326  b
```

## Benchmark
You can use the [cassandra-stress](https://docs.datastax.com/en/cassandra/3.0/cassandra/tools/toolsCStress.html) tool to run a benchmark on the cluster with the following command:

```bash
kubectl exec -it --namespace cassandra $(kubectl get pods --namespace cassandra -l app=cassandra-cassandra -o jsonpath='{.items[0].metadata.name}') cassandra-stress
```

Example `cassandra-stress` arguments:
- Run mixed reads and writes with a read:write ratio of 9:1
- Operate on a total of 1 million keys with a uniform distribution
- Use the QUORUM consistency level for reads and writes
- Use 50 client threads
- Generate the results as a graph
- Use NetworkTopologyStrategy with a replication factor of 2

```bash
cassandra-stress mixed ratio\(write=1,read=9\) n=1000000 cl=QUORUM -pop dist=UNIFORM\(1..1000000\) -mode native cql3 -rate threads=50 -log file=~/mixed_autorate_r9w1_1M.log -graph file=test2.html title=test revision=test2 -schema "replication(strategy=NetworkTopologyStrategy, factor=2)"
```
--------------------------------------------------------------------------------