├── .circleci └── config.yml ├── CHANGELOG.md ├── README.md ├── docs ├── index.md └── kibana.png ├── helm ├── g8s-efk-chart │ ├── .helmignore │ ├── Chart.yaml │ ├── templates │ │ ├── curator │ │ │ └── cronjob.yaml │ │ ├── elasticsearch │ │ │ ├── configmap.yaml │ │ │ ├── deployment.yaml │ │ │ ├── persistent-volume-claim.yaml │ │ │ ├── psp.yaml │ │ │ ├── rbac.yaml │ │ │ ├── service.yaml │ │ │ └── serviceaccount.yaml │ │ ├── fluentbit │ │ │ ├── configmap.yaml │ │ │ ├── daemonset.yaml │ │ │ ├── psp.yaml │ │ │ ├── rbac.yaml │ │ │ ├── service.yaml │ │ │ └── serviceaccount.yaml │ │ └── kibana │ │ │ ├── certs-secret.yaml │ │ │ ├── configmap.yaml │ │ │ ├── deployment.yaml │ │ │ ├── ingress.yaml │ │ │ ├── nginx-configmap.yaml │ │ │ ├── nginx-secret.yaml │ │ │ └── service.yaml │ └── values.yaml └── kubernetes-elastic-stack-elastic-logging │ ├── Chart.yaml │ ├── charts │ ├── elasticsearch-0.2.1.tgz │ ├── elasticsearch-curator-2.0.3.tgz │ ├── elasticsearch-exporter-0.1.0.tgz │ ├── fluentd-elasticsearch-0.2.2.tgz │ ├── keycloak-gatekeeper-1.1.1-3.tgz │ └── kibana-0.2.0.tgz │ ├── requirements.lock │ ├── requirements.yaml │ └── values.yaml ├── manifests-all.yaml ├── manifests ├── elasticsearch │ ├── 1-rbac-sa-psp.yaml │ ├── configmap.yaml │ ├── deployment.yaml │ ├── ingress.yaml │ ├── persistentvolumeclaim.yaml │ └── service.yaml ├── fluentd │ ├── 1-rbac-sa-psp.yaml │ ├── configmap.yaml │ └── daemonset.yaml └── kibana │ ├── configmap.yaml │ ├── deployment.yaml │ ├── ingress.yaml │ └── service.yaml └── test ├── .gitignore ├── README.md ├── aggregate-results.sh ├── sharness.sh └── simple.t /.circleci/config.yml: -------------------------------------------------------------------------------- 1 | version: 2.1 2 | orbs: 3 | architect: giantswarm/architect@0.1.2 4 | 5 | workflows: 6 | package-and-push-chart-on-tag: 7 | jobs: 8 | - architect/push-to-app-catalog: 9 | name: "package and push elastic-logging chart" 10 | app_catalog: "giantswarm-incubator-catalog" 11 | app_catalog_test: 
"giantswarm-incubator-test-catalog" 12 | chart: "kubernetes-elastic-stack-elastic-logging" 13 | # Trigger job on git tag. 14 | filters: 15 | tags: 16 | only: /^v.*/ 17 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | # v0.2.0 2 | Upgraded all Elastic components to 7.4.0. [Release Notes](https://www.elastic.co/guide/en/elasticsearch/reference/7.4/release-notes-7.4.0.html) 3 | 4 | Added curator with 3 days of history retention 5 | 6 | The following namespaces are omitted from logging: 7 | - default 8 | - giantswarm 9 | - kube-node-lease 10 | - kube-public 11 | - kube-system 12 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | [![CircleCI](https://circleci.com/gh/giantswarm/kubernetes-elastic-stack.svg?style=shield)](https://circleci.com/gh/giantswarm/kubernetes-elastic-stack) 2 | 3 | # Logging with Elastic in Kubernetes 4 | 5 | See [docs](docs/index.md) for full recipe content. 6 | 7 | 8 | This setup is similar to the [`Full Stack Example`](https://github.com/elastic/examples/tree/master/Miscellaneous/docker/full_stack_example), but adapted to run on a Kubernetes cluster. 9 | 10 | There is no access control for the Kibana web interface. If you want to run this in public, you need to secure your setup. The provided manifests here are for demonstration purposes only. 11 | 12 | 13 | # Local Setup 14 | 15 | ## Start a local Kubernetes using minikube 16 | 17 | > If some web pages don't show up immediately, wait a bit and reload. The Kubernetes Dashboard also needs reloading to update its view.
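The wait-and-reload advice in the note above can also be checked from the command line instead of the dashboard. Below is a minimal sketch; the `not_ready_count` helper is our own illustration and not part of this repository — it only assumes the standard column layout of `kubectl get pods --all-namespaces` output.

```shell
# Hypothetical helper (not part of this repo): count pods that are not yet
# Running or Completed, given `kubectl get pods --all-namespaces` output
# on stdin. In that listing the 4th column is STATUS; the first line is
# the header row and is skipped.
not_ready_count() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { c++ } END { print c + 0 }'
}

# Against a live cluster you would pipe the real listing:
#   kubectl get pods --all-namespaces | not_ready_count
# Demonstrated here with canned output:
printf '%s\n' \
  'NAMESPACE   NAME              READY   STATUS    RESTARTS   AGE' \
  'logging     kibana-1          1/1     Running   0          1m' \
  'logging     elasticsearch-0   0/1     Pending   0          1m' | not_ready_count   # prints 1
```

When the count reaches zero, all pods are up and the web interfaces should respond.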
18 | 19 | ```bash 20 | minikube start --memory 4096 21 | 22 | minikube dashboard 23 | # maybe wait a bit and retry 24 | kubectl get --all-namespaces services,pods 25 | ``` 26 | 27 | ## Logging with Elasticsearch and Fluentd 28 | 29 | ```bash 30 | kubectl apply \ 31 | --filename https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/master/manifests-all.yaml 32 | 33 | minikube service kibana 34 | ``` 35 | 36 | For the index pattern in Kibana, choose `fluentd-*`, then switch to the "Discover" view. 37 | Every log line from containers running within the Kubernetes cluster is enriched with metadata such as `namespace_name`, `labels`, and so on. This makes it easy to group and filter down to specific parts. 38 | 39 | 40 | ## Tear down all logging components 41 | 42 | ```bash 43 | kubectl delete \ 44 | --filename https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/master/manifests-all.yaml 45 | ``` 46 | 47 | Alternatively, the same components can be deleted by their label with 48 | `--selector stack=logging` instead of `--filename`. 49 | 50 | To delete the whole local Kubernetes cluster, use this: 51 | 52 | ```bash 53 | minikube delete 54 | ``` 55 | -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | +++ 2 | title = "Logging with the Elastic Stack" 3 | description = "The Elastic stack, also known as the ELK stack, has become a widespread tool for aggregating logs. This recipe helps you to set it up in Kubernetes." 4 | date = "2017-10-30" 5 | type = "page" 6 | weight = 50 7 | tags = ["recipe"] 8 | +++ 9 | 10 | # Logging with the Elastic Stack 11 | 12 | The Elastic stack, most prominently known as the ELK stack, is in this recipe the combination of Fluentd, Elasticsearch, and Kibana. This stack helps you get all logs from your containers into a single searchable data store without having to worry about logs disappearing together with the containers.
With Kibana you get a nice analytics and visualization platform on top. 13 | 14 | ![Kibana](kibana.png) 15 | 16 | ## Deploying Elasticsearch, Fluentd, and Kibana 17 | 18 | First we create a namespace and deploy our manifests to it. 19 | 20 | ```bash 21 | kubectl apply \ 22 | --filename https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/master/manifests-all.yaml 23 | ``` 24 | 25 | ## Configuring Kibana 26 | 27 | Now we need to open up Kibana. As we have no authentication set up in this recipe (you can check out [Shield](https://www.elastic.co/products/x-pack/security) for that), we access Kibana through a port forward: 28 | 29 | ```nohighlight 30 | $ POD=$(kubectl get pods --selector component=kibana \ 31 | -o template --template '{{range .items}}{{.metadata.name}} {{.status.phase}}{{"\n"}}{{end}}' \ 32 | | grep Running | head -1 | cut -f1 -d' ') 33 | $ kubectl port-forward --namespace logging $POD 5601:5601 34 | ``` 35 | 36 | Now you can open your browser at `http://localhost:5601/app/kibana/` and access the Kibana frontend. 37 | 38 | Then set `fluentd-*` as the index pattern. 39 | 40 | All set! You can now use Kibana to access your logs, including filtering them by pod names and namespaces. 41 | -------------------------------------------------------------------------------- /docs/kibana.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/giantswarm/kubernetes-elastic-stack/3b7ff795ebedf11b58116f50207b504e5b1911e0/docs/kibana.png -------------------------------------------------------------------------------- /helm/g8s-efk-chart/.helmignore: -------------------------------------------------------------------------------- 1 | # Patterns to ignore when building packages. 2 | # This supports shell glob matching, relative path matching, and 3 | # negation (prefixed with !). Only one pattern per line.
4 | .DS_Store 5 | # Common VCS dirs 6 | .git/ 7 | .gitignore 8 | .bzr/ 9 | .bzrignore 10 | .hg/ 11 | .hgignore 12 | .svn/ 13 | # Common backup files 14 | *.swp 15 | *.bak 16 | *.tmp 17 | *~ 18 | # Various IDEs 19 | .project 20 | .idea/ 21 | *.tmproj 22 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/Chart.yaml: -------------------------------------------------------------------------------- 1 | name: efk 2 | version: 1.0.0 3 | appVersion: 6.1.1 4 | description: Elasticsearch, Fluentbit, and Kibana stack ready to be your logging system. 5 | icon: https://static-www.elastic.co/assets/blteb1c97719574938d/logo-elastic-elasticsearch-lt.svg 6 | sources: 7 | - https://www.elastic.co/products/elasticsearch 8 | - https://github.com/jetstack/elasticsearch-pet 9 | - https://github.com/GoogleCloudPlatform/elasticsearch-docker 10 | - https://github.com/clockworksoul/helm-elasticsearch 11 | - https://github.com/pires/kubernetes-elasticsearch-cluster 12 | maintainers: 13 | - name: giantswarm 14 | email: info@giantswarm.io 15 | engine: gotpl 16 | tillerVersion: ">=2.8.0" -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/curator/cronjob.yaml: -------------------------------------------------------------------------------- 1 | # batch/v1beta1 instead of batch/v2alpha1: alpha APIs are disabled by default 2 | apiVersion: batch/v1beta1 3 | kind: CronJob 4 | metadata: 5 | namespace: "{{ .Values.namespace }}" 6 | name: curator 7 | spec: 8 | schedule: "{{ .Values.curator.cron }}" 9 | successfulJobsHistoryLimit: 2 10 | failedJobsHistoryLimit: 2 11 | jobTemplate: 12 | spec: 13 | template: 14 | metadata: 15 | name: curator 16 | labels: 17 | app: curator 18 | spec: 19 | containers: 20 | - name: curator 21 | image: quay.io/giantswarm/curator:latest 22 | imagePullPolicy: Always 23 | env: 24 | - name: ELASTICSEARCH_HOST 25 | value: elasticsearch:9200 26 | - name: RETENTION_DAYS 27 | value: "{{ .Values.curator.retention }}" 28 | - name: INDEX_NAME_PREFIX 29 |
value: "{{ .Values.logsPrefix }}-" 29 | - name: INDEX_NAME_TIMEFORMAT 30 | value: "%Y.%m.%d" 31 | resources: 32 | limits: 33 | cpu: 50m 34 | memory: 50Mi 35 | requests: 36 | cpu: 50m 37 | memory: 50Mi 38 | restartPolicy: OnFailure 39 | # retry for a maximum of 10 minutes 40 | activeDeadlineSeconds: 600 41 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: elasticsearch 5 | namespace: "{{ .Values.namespace }}" 6 | labels: 7 | app: elasticsearch 8 | data: 9 | elasticsearch.yml: | 10 | cluster.name: {{ .Values.clusterName }} 11 | node.name: "es_node" 12 | path.data: /usr/share/elasticsearch/data 13 | http: 14 | host: 0.0.0.0 15 | port: 9200 16 | bootstrap.memory_lock: true 17 | transport.host: 127.0.0.1 18 | discovery: 19 | zen: 20 | minimum_master_nodes: 1 21 | logger.org.elasticsearch.transport: debug 22 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: Deployment 3 | metadata: 4 | name: elasticsearch 5 | namespace: "{{ .Values.namespace }}" 6 | labels: 7 | app: elasticsearch 8 | spec: 9 | replicas: 1 10 | revisionHistoryLimit: 3 11 | strategy: 12 | type: Recreate 13 | template: 14 | metadata: 15 | annotations: 16 | releasetime: {{ $.Release.Time }} 17 | labels: 18 | app: elasticsearch 19 | spec: 20 | affinity: 21 | nodeAffinity: 22 | requiredDuringSchedulingIgnoredDuringExecution: 23 | nodeSelectorTerms: 24 | - matchExpressions: 25 | - key: role 26 | operator: NotIn 27 | values: 28 | - master 29 | {{- if .Values.elasticsearch.nodeSelector }} 30 | nodeSelector: 31 | {{ toYaml .Values.elasticsearch.nodeSelector | indent 8 }} 32
| {{- end }} 33 | {{- if .Values.elasticsearch.tolerations }} 34 | tolerations: 35 | {{ toYaml .Values.elasticsearch.tolerations | indent 8 }} 36 | {{- end }} 37 | initContainers: 38 | - name: set-vm-max-map-count 39 | image: quay.io/giantswarm/busybox:1.28.3 40 | imagePullPolicy: IfNotPresent 41 | command: ['sysctl', '-w', 'vm.max_map_count=262144'] 42 | securityContext: 43 | privileged: true 44 | {{- if .Values.elasticsearch.persistence.enabled }} 45 | - name: volume-mount-hack 46 | image: quay.io/giantswarm/busybox:1.28.3 47 | imagePullPolicy: IfNotPresent 48 | command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"] 49 | volumeMounts: 50 | - name: elasticsearch-data 51 | mountPath: /usr/share/elasticsearch/data 52 | {{- end }} 53 | serviceAccountName: elasticsearch 54 | containers: 55 | - name: elasticsearch 56 | image: "{{ .Values.elasticsearch.image.repository }}:{{ .Values.elasticsearch.image.tag }}" 57 | imagePullPolicy: {{ .Values.elasticsearch.image.pullPolicy | quote }} 58 | env: 59 | - name: ES_JAVA_OPTS 60 | value: "-Djava.net.preferIPv4Stack=true -Xms4g -Xmx4g" 61 | ports: 62 | - containerPort: 9200 63 | livenessProbe: 64 | httpGet: 65 | path: /_cluster/health?local=true 66 | port: 9200 67 | initialDelaySeconds: 60 68 | readinessProbe: 69 | httpGet: 70 | path: /_cluster/health?local=true 71 | port: 9200 72 | initialDelaySeconds: 30 73 | resources: 74 | {{ toYaml .Values.elasticsearch.resources | indent 12 }} 75 | volumeMounts: 76 | - name: config 77 | mountPath: /usr/share/elasticsearch/elasticsearch.yml 78 | subPath: elasticsearch.yml 79 | - name: elasticsearch-data 80 | mountPath: /usr/share/elasticsearch/data 81 | restartPolicy: Always 82 | volumes: 83 | - name: config 84 | configMap: 85 | name: elasticsearch 86 | - name: elasticsearch-data 87 | {{- if .Values.elasticsearch.persistence.enabled }} 88 | persistentVolumeClaim: 89 | claimName: {{ .Values.elasticsearch.persistence.pvcName | quote }} 90 | {{- else }} 91 | emptyDir: {} 92 
| {{- end }} 93 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/persistent-volume-claim.yaml: -------------------------------------------------------------------------------- 1 | {{- if .Values.elasticsearch.persistence.enabled }} 2 | kind: PersistentVolumeClaim 3 | apiVersion: v1 4 | metadata: 5 | labels: 6 | app: elasticsearch 7 | name: {{ .Values.elasticsearch.persistence.pvcName }} 8 | namespace: "{{ .Values.namespace }}" 9 | annotations: 10 | "helm.sh/resource-policy": keep 11 | spec: 12 | accessModes: 13 | - ReadWriteOnce 14 | resources: 15 | requests: 16 | storage: {{ .Values.elasticsearch.persistence.size }} 17 | {{- end }} 18 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/psp.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: extensions/v1beta1 2 | kind: PodSecurityPolicy 3 | metadata: 4 | name: elasticsearch-psp 5 | spec: 6 | privileged: true 7 | fsGroup: 8 | rule: RunAsAny 9 | runAsUser: 10 | rule: RunAsAny 11 | seLinux: 12 | rule: RunAsAny 13 | supplementalGroups: 14 | rule: RunAsAny 15 | volumes: 16 | - 'secret' 17 | - 'configMap' 18 | - 'hostPath' 19 | - 'persistentVolumeClaim' 20 | - 'emptyDir' 21 | hostNetwork: false 22 | hostIPC: false 23 | hostPID: false 24 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/rbac.yaml: -------------------------------------------------------------------------------- 1 | kind: ClusterRole 2 | apiVersion: rbac.authorization.k8s.io/v1beta1 3 | metadata: 4 | name: elasticsearch 5 | rules: 6 | - apiGroups: 7 | - "" 8 | resources: 9 | - "services" 10 | - "namespaces" 11 | - "endpoints" 12 | verbs: 13 | - "get" 14 | --- 15 | apiVersion: rbac.authorization.k8s.io/v1beta1 16 | kind: ClusterRoleBinding 17 | metadata: 18 | name: elasticsearch 19 
| subjects: 20 | - kind: ServiceAccount 21 | name: elasticsearch 22 | namespace: "{{ .Values.namespace }}" 23 | roleRef: 24 | kind: ClusterRole 25 | name: elasticsearch 26 | apiGroup: rbac.authorization.k8s.io 27 | --- 28 | apiVersion: rbac.authorization.k8s.io/v1beta1 29 | kind: ClusterRole 30 | metadata: 31 | name: elasticsearch-psp 32 | rules: 33 | - apiGroups: 34 | - extensions 35 | resources: 36 | - podsecuritypolicies 37 | verbs: 38 | - use 39 | resourceNames: 40 | - elasticsearch-psp 41 | --- 42 | apiVersion: rbac.authorization.k8s.io/v1beta1 43 | kind: ClusterRoleBinding 44 | metadata: 45 | name: elasticsearch-psp 46 | subjects: 47 | - kind: ServiceAccount 48 | name: elasticsearch 49 | namespace: "{{ .Values.namespace }}" 50 | roleRef: 51 | kind: ClusterRole 52 | name: elasticsearch-psp 53 | apiGroup: rbac.authorization.k8s.io 54 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: elasticsearch 5 | namespace: "{{ .Values.namespace }}" 6 | labels: 7 | app: elasticsearch 8 | spec: 9 | ports: 10 | - name: nginx 11 | port: 8000 12 | targetPort: 8000 13 | - name: elasticsearch 14 | port: 9200 15 | targetPort: 9200 16 | selector: 17 | app: elasticsearch 18 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/elasticsearch/serviceaccount.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ServiceAccount 3 | metadata: 4 | name: elasticsearch 5 | namespace: "{{ .Values.namespace }}" 6 | -------------------------------------------------------------------------------- /helm/g8s-efk-chart/templates/fluentbit/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: 
v1 2 | kind: ConfigMap 3 | metadata: 4 | name: fluentbit-config 5 | namespace: "{{ .Values.namespace }}" 6 | labels: 7 | app: fluentbit 8 | data: 9 | # Configuration files: server, input, filters and output 10 | # ====================================================== 11 | fluentbit.conf: | 12 | [SERVICE] 13 | Flush 5 14 | Log_Level info 15 | Daemon off 16 | Parsers_File parsers.conf 17 | HTTP_Server On 18 | HTTP_Listen 0.0.0.0 19 | HTTP_Port 2020 20 | 21 | [INPUT] 22 | Name tail 23 | Tag kube.* 24 | Path /var/log/containers/*.log 25 | Parser docker 26 | DB /var/log/flb_kube.db 27 | Buffer_Max_Size 128k 28 | Mem_Buf_Limit 10MB 29 | Skip_Long_Lines On 30 | Refresh_Interval 10 31 | 32 | [FILTER] 33 | # Remove garbage log entries from fluent-bit (https://github.com/fluent/fluent-bit/issues/429) 34 | Name grep 35 | Match * 36 | Exclude log \"took\"\"errors\"\"took\"\"errors\" 37 | 38 | [FILTER] 39 | Name kubernetes 40 | Match kube.* 41 | Kube_URL https://kubernetes.default.svc:443 42 | Merge_Log Off 43 | K8S-Logging.Parser On 44 | 45 | [OUTPUT] 46 | Name es 47 | Match * 48 | Host ${FLUENT_ELASTICSEARCH_HOST} 49 | Port ${FLUENT_ELASTICSEARCH_PORT} 50 | Logstash_Format On 51 | Logstash_Prefix ${FLUENT_ELASTICSEARCH_PREFIX} 52 | Retry_Limit False 53 | 54 | parsers.conf: | 55 | [PARSER] 56 | Name json-test 57 | Format json 58 | Time_Key time 59 | Time_Format %d/%b/%Y:%H:%M:%S %z 60 | 61 | [PARSER] 62 | Name docker 63 | Format json 64 | Time_Key time 65 | Time_Format %Y-%m-%dT%H:%M:%S.%L 66 | Time_Keep On 67 | # Command | Decoder | Field | Optional Action 68 | # =============|==================|================= 69 | Decode_Field_As escaped log 70 | 71 | [PARSER] 72 | Name syslog 73 | Format regex 74 | Regex ^\<(?[0-9]+)\>(?