├── Chart.yaml
├── README.md
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── ingress.yaml
│   ├── jobs.yaml
│   └── svc.yaml
└── values.yaml

/Chart.yaml:
--------------------------------------------------------------------------------
name: kafka-manager
description: Deploy kafka-manager
version: 0.1.1

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# DEPRECATION NOTICE

I do not maintain this chart and it is likely not to work. I suggest using https://github.com/confluentinc/cp-helm-charts.

# Kafka manager helm chart

This chart lets you install [Kafka manager](https://github.com/yahoo/kafka-manager)
on a Kubernetes cluster using the [kafka-manager-docker](https://github.com/sheepkiller/kafka-manager-docker)
image.

## Configuration

### Kafka clusters

This section lets you configure which Kafka clusters the manager will
monitor. You can skip this step and configure clusters through the web UI instead, but
you will then need to configure a Zookeeper backend (see below).

```yaml
kafka:
  clusters:
    - # You can embed any configuration you need; variables have the same
      # names as the web UI "Add Cluster" form variables.
      name: "default"
      # MANDATORY VALUE
      # URI at which the Kafka Zookeeper instances can be contacted.
      zkHosts: "kafka-zookeeper:2181"
      kafkaVersion: "0.9.1"
```

### Zookeeper backend

Kafka manager needs a `Zookeeper` cluster to work; this chart gives you two options
for its configuration:

1. Use the Kafka internal `Zookeeper` cluster (the default if you have configured your clusters).
2. Use an external cluster (`values.yaml` entries below).

```yaml
kafkaManager:
  useKafkaZookeeper: false
  zkHosts: "some-zookeeper:2181"
```

## Deployment

1. Clone the repository
2. Configure the application by tweaking `values.yaml`
3. Run `helm install .` (see the example below)

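If you prefer not to edit `values.yaml`, you can also override values on the
command line. For example (Helm 2 syntax; the release name and the Zookeeper
address below are only placeholders):

```
# Install from the chart directory, pointing kafka-manager at an external Zookeeper.
helm install . --name kafka-manager \
  --set kafkaManager.useKafkaZookeeper=false \
  --set kafkaManager.zkHosts="some-zookeeper:2181"
```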

Note that the helm installation/upgrade can be slow because the cluster configuration
must be done at runtime; we therefore use a helm `post-install` hook to send
"add cluster" requests from an `alpine-curl` image. Helm won't exit before the
`post-install` job has succeeded.

If you run into an `Error: timed out waiting for the condition`, it probably means that the `kafka-manager` deployment isn't healthy,
which prevents the hook from terminating.

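In that case, the following commands may help narrow the problem down (they
assume the default chart naming; replace `<release>` with your Helm release name):

```
kubectl get pods -l "app=kafka-manager,release=<release>"   # is the manager pod Ready?
kubectl describe deployment <release>-kafka-manager         # probe failures, image pull errors, ...
kubectl logs job/<release>-kafka-manager-bootstrap          # output of the post-install curl job (kept on failure)
```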

### Accessing the application

Once the chart is deployed, it prints some information on how to access your application:

```
Access to the application using the ingress :
http://kafka-manager.local

Access to the application using service :
export POD_NAME=$(kubectl get pods --namespace default -l "app=kafka-manager,release=torpid-yak" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:9000
```

--------------------------------------------------------------------------------
/templates/NOTES.txt:
--------------------------------------------------------------------------------
{{- if .Values.ingress.enabled }}
Access to the application using the ingress :
{{- range .Values.ingress.hosts }}
http://{{ . }}
{{- end }}
{{- end }}

Access to the application using service :
{{- if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "kafka-manager.servicename" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
      You can watch the status of it by running 'kubectl get svc -w {{ template "kafka-manager.servicename" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "kafka-manager.servicename" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo http://$SERVICE_IP:{{ .Values.service.externalPort }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "kafka-manager.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:{{ .Values.service.internalPort }}
{{- end }}

--------------------------------------------------------------------------------
/templates/_helpers.tpl:
--------------------------------------------------------------------------------
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "kafka-manager.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "kafka-manager.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Resolve the name of the service based on the user configuration.
Will either yield a fixed name (.Values.service.fixed.enabled) or
one parametrized with the release.
*/}}

{{- define "kafka-manager.servicename" -}}
{{- if .Values.service.fixed.enabled -}}
{{- required "A valid service name is required when fixed mode is enabled" .Values.service.fixed.name -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}

{{/*
Handle the Zookeeper hosts configuration. It can either be the zkHosts of the
first configured Kafka cluster or a URI to a distant cluster.
*/}}

{{- define "kafka-manager.zkHosts" -}}
{{- if not .Values.kafka.clusters -}}
{{- required "A valid .Values.kafkaManager.zkHosts is required when kafka clusters are not configured" .Values.kafkaManager.zkHosts -}}
{{- else -}}
{{- if .Values.kafkaManager.useKafkaZookeeper -}}
{{- required "Need first cluster zkHosts to be set when using kafka zookeeper as kafka-manager backend" ((index .Values.kafka.clusters 0).zkHosts) -}}
{{- else -}}
{{- required ".Values.kafkaManager.zkHosts must be configured if not connecting to kafka zookeeper instances" .Values.kafkaManager.zkHosts -}}
{{- end -}}
{{- end -}}
{{- end -}}

{{/*
Generate one curl command per configured cluster to feed the kafka-manager API.
*/}}
{{- define "kafka-manager.bootstrapShellCommand" -}}
{{- range $cluster := .Values.kafka.clusters -}}
{{- printf "curl http://%s/clusters -X POST " (include "kafka-manager.servicename" $) -}}
{{- range $k, $v := $cluster -}}
{{- printf "-d %s=%s " $k $v -}}
{{- end -}}
{{- printf "|| exit 1;" -}}
{{- end -}}
{{- end -}}
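{{/*
Illustrative only: with the README's example cluster and a release named
"RELEASE" (a placeholder), the command above would render to something like:
  curl http://RELEASE-kafka-manager/clusters -X POST -d kafkaVersion=0.9.1 -d name=default -d zkHosts=kafka-zookeeper:2181 || exit 1;
*/}}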

--------------------------------------------------------------------------------
/templates/deployment.yaml:
--------------------------------------------------------------------------------
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "kafka-manager.fullname" . }}
  labels:
    app: {{ template "kafka-manager.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "kafka-manager.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: "ZK_HOSTS"
              value: {{ include "kafka-manager.zkHosts" . | quote }}
          ports:
            - containerPort: {{ .Values.service.internalPort }}
          readinessProbe:
            httpGet:
              path: /
              port: {{ .Values.service.internalPort }}
          livenessProbe:
            httpGet:
              path: /
              port: {{ .Values.service.internalPort }}
          resources:
{{ toYaml .Values.resources | indent 12 }}
    {{- if .Values.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.nodeSelector | indent 8 }}
    {{- end }}

--------------------------------------------------------------------------------
/templates/ingress.yaml:
--------------------------------------------------------------------------------
{{- if .Values.ingress.enabled -}}
{{- $servicePort := .Values.service.externalPort -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "kafka-manager.fullname" . }}
  labels:
    app: {{ template "kafka-manager.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
spec:
  rules:
    {{- range $host := .Values.ingress.hosts }}
    - host: {{ $host }}
      http:
        paths:
          - path: /
            backend:
              serviceName: {{ template "kafka-manager.servicename" $ }}
              servicePort: {{ $servicePort }}
    {{- end -}}
{{- end -}}

--------------------------------------------------------------------------------
/templates/jobs.yaml:
--------------------------------------------------------------------------------
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "kafka-manager.fullname" . }}-bootstrap
  labels:
    app: {{ template "kafka-manager.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  annotations:
    "helm.sh/hook": post-install,post-upgrade
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    metadata:
      name: {{ template "kafka-manager.fullname" . }}-bootstrap
      labels:
        app: {{ template "kafka-manager.name" . }}
        chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
        release: {{ .Release.Name }}
        heritage: {{ .Release.Service }}
    spec:
      containers:
        - name: bootstrap-kafka-manager
          image: byrnedo/alpine-curl
          command:
            - "sh"
            - "-c"
            - {{ include "kafka-manager.bootstrapShellCommand" . | quote }}
      restartPolicy: Never
  backoffLimit: 20

--------------------------------------------------------------------------------
/templates/svc.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: {{ template "kafka-manager.servicename" . }}
  labels:
    app: {{ template "kafka-manager.name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.externalPort }}
      targetPort: {{ .Values.service.internalPort }}
      protocol: TCP
      name: kafka-manager
  selector:
    app: {{ template "kafka-manager.name" . }}
    release: {{ .Release.Name }}

--------------------------------------------------------------------------------
/values.yaml:
--------------------------------------------------------------------------------
replicaCount: 1
image:
  repository: sheepkiller/kafka-manager
  tag: alpine
  pullPolicy: IfNotPresent
service:
  fixed:
    # If this flag is toggled, the name of the kafka-manager Service
    # will be set to the name provided below.
    # /!\ You won't be able to deploy multiple releases in the same namespace,
    # as Helm will complain that a service with the same name already exists.
    enabled: false
    name: kafka-manager
  type: ClusterIP
  externalPort: 80
  internalPort: 9000
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - kafka-manager.local
  annotations: {}
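    # Example annotation; adjust for your ingress controller:
    # kubernetes.io/ingress.class: nginx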

# Managed kafka clusters.
kafka:
  # List of clusters you want kafka-manager to ingest.
  # You can also configure them at runtime with the web UI / the API.
  # See this GitHub issue for more details:
  # https://github.com/yahoo/kafka-manager/issues/244

  # Note: at least one cluster has to be configured here for kafka-manager
  # to use the Kafka Zookeeper instances as its backend (see below).
  clusters:
  # - # You can embed any configuration you need; variables have the same
  #   # names as the web UI "Add Cluster" form variables.
  #   name: "default"
  #   # MANDATORY VALUE
  #   # URI at which the Kafka Zookeeper instances can be contacted.
  #   zkHosts: "kafka-zookeeper:2181"
  #   kafkaVersion: "0.9.1"

# Kafka-manager zookeeper configuration.
# Internal to kafka-manager.
kafkaManager:
  # By default, we use the same zookeeper as the first kafka cluster, but if
  # you want to use another one, just set useKafkaZookeeper to false and
  # change the zkHosts variable to point to your zk instances.
  useKafkaZookeeper: true
  zkHosts: ""

resources: {}
  # If you do want to specify resources for kafka-manager, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

--------------------------------------------------------------------------------