├── .gitignore ├── storage ├── minio │ ├── MINIO_ACCESS_KEY │ ├── MINIO_SECRET_KEY │ ├── AWS_ACCESS_KEY_ID │ ├── AWS_SECRET_ACCESS_KEY │ ├── images │ │ ├── minio-1.png │ │ ├── minio-2.png │ │ ├── minio-3.png │ │ ├── external-bucket.png │ │ ├── internal-bucket.png │ │ └── external-bucket-file.png │ ├── artifacts │ │ ├── pvc.yaml │ │ ├── minio-svc.yaml │ │ ├── nodeport-svc.yaml │ │ ├── osm-pod.yaml │ │ └── deployment.yaml │ ├── ca.crt │ ├── server.crt │ ├── ca.key │ ├── private.key │ ├── server.key │ ├── public.crt │ └── README.md └── nfs │ ├── artifacts │ ├── nfs-pvc.yaml │ ├── nfs-service.yaml │ ├── nfs-pv.yaml │ ├── nfs-pod-pvc.yaml │ ├── nfs-direct.yaml │ ├── shared-pod-1.yaml │ ├── shared-pod-2.yaml │ └── nfs-server.yaml │ └── README.md ├── monitoring ├── prometheus │ ├── builtin │ │ ├── images │ │ │ └── prom-targets.png │ │ ├── artifacts │ │ │ ├── rbac.yaml │ │ │ ├── deployment.yaml │ │ │ ├── sample-workloads.yaml │ │ │ └── configmap.yaml │ │ └── README.md │ └── operator │ │ ├── artifacts │ │ ├── prometheus.yaml │ │ └── prometheus-rbac.yaml │ │ └── README.md └── grafana │ ├── artifacts │ └── grafana.yaml │ └── README.md └── README.md /.gitignore: -------------------------------------------------------------------------------- 1 | .vscode/* 2 | .idea/* -------------------------------------------------------------------------------- /storage/minio/MINIO_ACCESS_KEY: -------------------------------------------------------------------------------- 1 | my-access-key -------------------------------------------------------------------------------- /storage/minio/MINIO_SECRET_KEY: -------------------------------------------------------------------------------- 1 | my-secret-key -------------------------------------------------------------------------------- /storage/minio/AWS_ACCESS_KEY_ID: -------------------------------------------------------------------------------- 1 | my-access-key -------------------------------------------------------------------------------- /storage/minio/AWS_SECRET_ACCESS_KEY: -------------------------------------------------------------------------------- 1 | my-secret-key -------------------------------------------------------------------------------- /storage/minio/images/minio-1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/storage/minio/images/minio-1.png -------------------------------------------------------------------------------- /storage/minio/images/minio-2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/storage/minio/images/minio-2.png -------------------------------------------------------------------------------- /storage/minio/images/minio-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/storage/minio/images/minio-3.png -------------------------------------------------------------------------------- /storage/minio/images/external-bucket.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/storage/minio/images/external-bucket.png -------------------------------------------------------------------------------- /storage/minio/images/internal-bucket.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/storage/minio/images/internal-bucket.png -------------------------------------------------------------------------------- /storage/minio/images/external-bucket-file.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/storage/minio/images/external-bucket-file.png -------------------------------------------------------------------------------- /monitoring/prometheus/builtin/images/prom-targets.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/appscode/third-party-tools/HEAD/monitoring/prometheus/builtin/images/prom-targets.png -------------------------------------------------------------------------------- /storage/minio/artifacts/pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: minio-pvc 5 | namespace: storage 6 | spec: 7 | storageClassName: standard 8 | accessModes: 9 | - ReadWriteOnce 10 | resources: 11 | requests: 12 | storage: 5Gi 13 | -------------------------------------------------------------------------------- /storage/minio/artifacts/minio-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: minio 5 | namespace: storage 6 | spec: 7 | ports: 8 | - name: https 9 | port: 443 10 | targetPort: https 11 | protocol: TCP 12 | selector: 13 | app: minio # must match with the label used in minio deployment 14 | -------------------------------------------------------------------------------- /storage/nfs/artifacts/nfs-pvc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolumeClaim 3 | metadata: 4 | name: nfs-pvc 5 | namespace: demo 6 | spec: 7 | accessModes: 8 | - ReadWriteMany 9 | storageClassName: "" 10 | resources: 11 | requests: 12 | storage: 1Gi 13 | selector: 14 | matchLabels: 15 | app: nfs-data 16 | -------------------------------------------------------------------------------- /storage/nfs/artifacts/nfs-service.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: nfs-service 5 | namespace: storage 6 | spec: 7 | ports: 8 | - name: nfs 9 | port: 2049 10 | - name: mountd 11 | port: 20048 12 | - name: rpcbind 13 | port: 111 14 | selector: 15 | app: nfs-server # must match with the label of NFS pod 16 | -------------------------------------------------------------------------------- /storage/minio/artifacts/nodeport-svc.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Service 3 | metadata: 4 | name: minio-nodeport-svc 5 | namespace: storage 6 | spec: 7 | type: NodePort 8 | ports: 9 | - name: https 10 | port: 443 11 | targetPort: https 12 | protocol: TCP 13 | selector: 14 | app: minio # must match with the label used in minio deployment 15 | -------------------------------------------------------------------------------- /storage/nfs/artifacts/nfs-pv.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: PersistentVolume 3 | metadata: 4 | name: nfs-pv 5 | labels: 6 | app: nfs-data 7 | spec: 8 | capacity: 9 | storage: 1Gi 10 | accessModes: 11 | - 
ReadWriteMany 12 | nfs: 13 | server: "nfs-service.storage.svc.cluster.local" 14 | path: "/pvc" # "pvc" folder must exist in "/exports" directory of NFS server 15 | -------------------------------------------------------------------------------- /storage/nfs/artifacts/nfs-pod-pvc.yaml: -------------------------------------------------------------------------------- 1 | kind: Pod 2 | apiVersion: v1 3 | metadata: 4 | name: nfs-pod-pvc 5 | namespace: demo 6 | spec: 7 | containers: 8 | - name: busybox 9 | image: busybox 10 | command: 11 | - sleep 12 | - "3600" 13 | volumeMounts: 14 | - name: data 15 | mountPath: /demo/data 16 | volumes: 17 | - name: data 18 | persistentVolumeClaim: 19 | claimName: nfs-pvc 20 | -------------------------------------------------------------------------------- /monitoring/grafana/artifacts/grafana.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: grafana 5 | namespace: monitoring 6 | labels: 7 | app: grafana 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: grafana 13 | template: 14 | metadata: 15 | labels: 16 | app: grafana 17 | spec: 18 | containers: 19 | - name: grafana 20 | image: grafana/grafana:5.3.1 21 | -------------------------------------------------------------------------------- /monitoring/prometheus/operator/artifacts/prometheus.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: monitoring.coreos.com/v1 2 | kind: Prometheus 3 | metadata: 4 | name: prometheus 5 | labels: 6 | prometheus: prometheus 7 | spec: 8 | replicas: 1 9 | serviceAccountName: prometheus 10 | serviceMonitorSelector: 11 | matchLabels: 12 | k8s-app: prometheus 13 | serviceMonitorNamespaceSelector: 14 | matchLabels: 15 | prometheus: prometheus 16 | resources: 17 | requests: 18 | memory: 400Mi -------------------------------------------------------------------------------- /storage/nfs/artifacts/nfs-direct.yaml: -------------------------------------------------------------------------------- 1 | kind: Pod 2 | apiVersion: v1 3 | metadata: 4 | name: nfs-direct 5 | namespace: demo 6 | spec: 7 | containers: 8 | - name: busybox 9 | image: busybox 10 | command: 11 | - sleep 12 | - "3600" 13 | volumeMounts: 14 | - name: data 15 | mountPath: /demo/data 16 | volumes: 17 | - name: data 18 | nfs: 19 | server: "nfs-service.storage.svc.cluster.local" 20 | path: "/nfs-direct" # "nfs-direct" folder must exist inside "/exports" directory of NFS server 21 | -------------------------------------------------------------------------------- /storage/nfs/artifacts/shared-pod-1.yaml: -------------------------------------------------------------------------------- 1 | kind: Pod 2 | apiVersion: v1 3 | metadata: 4 | name: shared-pod-1 5 | namespace: demo 6 | spec: 7 | containers: 8 | - name: busybox 9 | image: busybox 10 | command: 11 | - sleep 12 | - "3600" 13 | volumeMounts: 14 | - name: data 15 | mountPath: /demo/data 16 | volumes: 17 | - name: data 18 | nfs: 19 | server: "nfs-service.storage.svc.cluster.local" 20 | path: "/shared" # "shared" folder must exist inside "/exports" directory of NFS server 21 | -------------------------------------------------------------------------------- /storage/nfs/artifacts/shared-pod-2.yaml: -------------------------------------------------------------------------------- 1 | kind: Pod 2 | apiVersion: v1 3 | metadata: 4 | name: shared-pod-2 5 | namespace: demo 6 | spec: 7 | containers: 8 | - name: 
busybox 9 | image: busybox 10 | command: 11 | - sleep 12 | - "3600" 13 | volumeMounts: 14 | - name: data 15 | mountPath: /demo/data 16 | volumes: 17 | - name: data 18 | nfs: 19 | server: "nfs-service.storage.svc.cluster.local" 20 | path: "/shared" # "shared" folder must exist inside "/exports" directory of NFS server 21 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Third Party Tools 2 | 3 | This repository is intended to hold the common third-party tools we use across all our AppsCode repositories. 4 | 5 | ## Contents 6 | 7 | - Monitoring 8 | - [Configure Prometheus Server to Monitor Kubernetes Resources](/monitoring/prometheus/builtin/README.md) 9 | - [Deploy Prometheus Operator](/monitoring/prometheus/operator/README.md) 10 | - [Deploy Grafana Dashboard](/monitoring/grafana/README.md) 11 | - Storage 12 | - [Deploy TLS Secured Minio Server in Kubernetes](/storage/minio/README.md) 13 | - [Deploying NFS Server in Kubernetes](/storage/nfs/README.md) 14 | -------------------------------------------------------------------------------- /monitoring/prometheus/operator/artifacts/prometheus-rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: prometheus 5 | rules: 6 | - apiGroups: [""] 7 | resources: 8 | - nodes 9 | - nodes/metrics 10 | - services 11 | - endpoints 12 | - pods 13 | verbs: ["get", "list", "watch"] 14 | - apiGroups: [""] 15 | resources: 16 | - configmaps 17 | verbs: ["get"] 18 | - nonResourceURLs: ["/metrics"] 19 | verbs: ["get"] 20 | --- 21 | apiVersion: v1 22 | kind: ServiceAccount 23 | metadata: 24 | name: prometheus 25 | namespace: default 26 | --- 27 | apiVersion: rbac.authorization.k8s.io/v1 28 | kind: ClusterRoleBinding 29 | metadata: 30 | name: prometheus 31 | roleRef: 32 | apiGroup: rbac.authorization.k8s.io 33 | kind: ClusterRole 34 | name: prometheus 35 | subjects: 36 | - kind: ServiceAccount 37 | name: prometheus 38 | namespace: default -------------------------------------------------------------------------------- /storage/nfs/artifacts/nfs-server.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: nfs-server 5 | namespace: storage 6 | spec: 7 | selector: 8 | matchLabels: 9 | app: nfs-server 10 | template: 11 | metadata: 12 | labels: 13 | app: nfs-server 14 | spec: 15 | containers: 16 | - name: nfs-server 17 | image: k8s.gcr.io/volume-nfs:0.8 18 | ports: 19 | - name: nfs 20 | containerPort: 2049 21 | - name: mountd 22 | containerPort: 20048 23 | - name: rpcbind 24 | containerPort: 111 25 | securityContext: 26 | privileged: true 27 | volumeMounts: 28 | - name: storage 29 | mountPath: /exports 30 | volumes: 31 | - name: storage 32 | hostPath: 33 | path: /data/nfs # store all data in "/data/nfs" directory of the node where it is running 34 | type: DirectoryOrCreate # if the directory does not exist then create it 35 | -------------------------------------------------------------------------------- /monitoring/prometheus/builtin/artifacts/rbac.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: rbac.authorization.k8s.io/v1 2 | kind: ClusterRole 3 | metadata: 4 | name: prometheus 5 | labels: 6 | app: prometheus-demo 7 | rules: 8 | - apiGroups: [""] 9 |
resources: 10 | - nodes 11 | - nodes/proxy 12 | - services 13 | - endpoints 14 | - pods 15 | verbs: ["get", "list", "watch"] 16 | - apiGroups: 17 | - extensions 18 | resources: 19 | - ingresses 20 | verbs: ["get", "list", "watch"] 21 | - nonResourceURLs: ["/metrics"] 22 | verbs: ["get"] 23 | --- 24 | apiVersion: v1 25 | kind: ServiceAccount 26 | metadata: 27 | name: prometheus 28 | namespace: monitoring 29 | labels: 30 | app: prometheus-demo 31 | --- 32 | apiVersion: rbac.authorization.k8s.io/v1 33 | kind: ClusterRoleBinding 34 | metadata: 35 | name: prometheus 36 | labels: 37 | app: prometheus-demo 38 | roleRef: 39 | apiGroup: rbac.authorization.k8s.io 40 | kind: ClusterRole 41 | name: prometheus 42 | subjects: 43 | - kind: ServiceAccount 44 | name: prometheus 45 | namespace: monitoring 46 | -------------------------------------------------------------------------------- /storage/minio/ca.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIICuDCCAaCgAwIBAgIBADANBgkqhkiG9w0BAQsFADANMQswCQYDVQQDEwJjYTAe 3 | Fw0xODExMjkxMzI1MzhaFw0yODExMjYxMzI1MzhaMA0xCzAJBgNVBAMTAmNhMIIB 4 | IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0bMzhOXHtIGuNjiTKo3M1IXE 5 | Acf3LNPXpTyBKQDxfifEO/91ROYMwcEchsXZYhqXcRJ3+v+Vmhp6shCwZYNzpDxS 6 | tAvo5PAv5fioMgv8lP9oYrAqHOwJzaSiSeNoTJkf+0lOHEhKiTFe22o6e+pYFvFI 7 | aejONjqUwd+BcCwMp/cb+PPVjAn+ewxtCsqsCr2PFawD7FASJ1z8qWorHOS2ZBTs 8 | bRoqGLU4sXj+P5PBz4r9CPOgA7zyonb8w4G+qmS5upkvxMekyjyHdxzwCJtPPljr 9 | u7iXQvyqc5DBP6PWH1HXtr5sLxwsatguoNoAvfm6oDmtj5AJrEdmqqkj3spRPQID 10 | AQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG 11 | 9w0BAQsFAAOCAQEAAZoVLfi3DTr4V0oTZ3I/M9IMDfommB+Fuf2miCrogUnpxDEs 12 | vO3mYIRHRPKPEuqGYGNVPuJ/956OboPvVhL3uT2MecljD6EBrTg25OVO1ZHhkAgj 13 | UJeT5mRig3pk+asGYVEirvapoagxXs2EQKtZvLBm28Wi3nK/LAWsV/RzLNE5Jfec 14 | 1mPx7AZWLyPYgfSdsGowMSGHQ3GFLMF4ctp00KxvxhC0aoHpKqtSsIZl58t7ki0P 15 | J4e4t6A7H3rng8QSi7Cr1nhwrMOC11gC8yWf6HT/iVgkZWLJadKATBq2sNYIxpt6 16 | reTkEm5TaxP3WDpDY597FEmENy+a4H4T56okPA== 17 | -----END CERTIFICATE----- 18 | -------------------------------------------------------------------------------- /monitoring/prometheus/builtin/artifacts/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: prometheus 5 | namespace: monitoring 6 | labels: 7 | app: prometheus-demo 8 | spec: 9 | replicas: 1 10 | selector: 11 | matchLabels: 12 | app: prometheus 13 | template: 14 | metadata: 15 | labels: 16 | app: prometheus 17 | spec: 18 | serviceAccountName: prometheus 19 | containers: 20 | - name: prometheus 21 | image: prom/prometheus:v2.20.1 22 | args: 23 | - "--config.file=/etc/prometheus/prometheus.yml" 24 | - "--storage.tsdb.path=/prometheus/" 25 | ports: 26 | - containerPort: 9090 27 | volumeMounts: 28 | - name: prometheus-config 29 | mountPath: /etc/prometheus/ 30 | - name: prometheus-storage 31 | mountPath: /prometheus/ 32 | volumes: 33 | - name: prometheus-config 34 | configMap: 35 | defaultMode: 420 36 | name: prometheus-config 37 | - name: prometheus-storage 38 | emptyDir: {} 39 | -------------------------------------------------------------------------------- /monitoring/prometheus/builtin/artifacts/sample-workloads.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: Pod 3 | metadata: 4 | name: pod-monitoring-demo 5 | namespace: demo 6 | labels: 7 | app: prometheus-demo 8 | annotations: 9 | 
prometheus.io/scrape: "true" 10 | prometheus.io/port: "9091" 11 | prometheus.io/path: "/metrics" 12 | spec: 13 | containers: 14 | - name: pushgateway 15 | image: prom/pushgateway 16 | --- 17 | 18 | apiVersion: v1 19 | kind: Pod 20 | metadata: 21 | name: service-endpoint-monitoring-demo 22 | namespace: demo 23 | labels: 24 | app: prometheus-demo 25 | pod: prom-pushgateway 26 | spec: 27 | containers: 28 | - name: pushgateway 29 | image: prom/pushgateway 30 | --- 31 | 32 | kind: Service 33 | apiVersion: v1 34 | metadata: 35 | name: pushgateway-service 36 | namespace: demo 37 | labels: 38 | app: prometheus-demo 39 | annotations: 40 | prometheus.io/scrape: "true" 41 | prometheus.io/port: "9091" 42 | prometheus.io/path: "/metrics" 43 | spec: 44 | selector: 45 | pod: prom-pushgateway 46 | ports: 47 | - name: metrics 48 | port: 9091 49 | targetPort: 9091 50 | -------------------------------------------------------------------------------- /storage/minio/server.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIC8DCCAdigAwIBAgIIJY+FqktA+rcwDQYJKoZIhvcNAQELBQAwDTELMAkGA1UE 3 | AxMCY2EwHhcNMTgxMTI5MTMyNTM4WhcNMTkxMTI5MTM0NzM4WjARMQ8wDQYDVQQD 4 | EwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDMK+9c9jJ8 5 | m0gzSyJT3yS6qezHmjW9luClO7zdnHQFIH+ySemS3dUQ9GA2i4QAg5F56GFBrO3o 6 | oHs66H384d91SLePRyypSbRhnPjFDi3wbyeV4gSvt/DUPBRPDmppjA2xvmEB+5a9 7 | /ST+1x/JOYo9YFCnB6h/dTlUX3o1sgPE+5Z5VZu4IyApXM3GTtU/X0gDWF7eyT1i 8 | qq3NBo1Eo2gYHVRXV61LQ4UiTq7aZBseDZmwn0KZymA/OLBWDy3yxWf33e0vvD32 9 | wiYk0OchqNb6rL40FHBBYlGJ2vZ6JmuBBwcmheBOyyNjSx4HJSNOP/kF2Bn/AuCB 10 | fwNeWDQLomr1AgMBAAGjUDBOMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr 11 | BgEFBQcDATAnBgNVHREEIDAeggZzZXJ2ZXKCDm1pbmlvLmRlbW8uc3ZjhwTAqGNk 12 | MA0GCSqGSIb3DQEBCwUAA4IBAQAQqrpN9UIZv/cJlptDpONJAUYQgdRPM4gPIMVc 13 | W7vIuraxfrHFkUjm67jvQVQnGIc3jH0O1Z5F9CLJSCEBY7RtLpySKG732XtHE0lE 14 | 5EDhE9HSdol5Lwq/BbH+syFhnfrHx+7yOXCtAAo2d0um5wZZAw2AWqnYN6XusLto 15 | DHuUKdLlAdM7vcohftYHhDUavsG5y+CCT4LAzX8R2ppA+9W5A0czGJfkP2TFlHLV 16 | 0lktx8q3nEMW0flVueTqwIPTXkXT3r2SFrZcvGRMVkFC3Kv2x3KO1c9kVcEUS7t8 17 | 1CiJW5YSQlPeJwsitBysfwBRuvU8EOQHk9Xknx0Jxnf9/TFk 18 | -----END CERTIFICATE----- 19 | -------------------------------------------------------------------------------- /storage/minio/artifacts/osm-pod.yaml: -------------------------------------------------------------------------------- 1 | kind: Pod 2 | apiVersion: v1 3 | metadata: 4 | name: osm-pod 5 | namespace: demo 6 | spec: 7 | restartPolicy: Never 8 | containers: 9 | - name: osm 10 | image: appscodeci/osm 11 | env: 12 | - name: PROVIDER 13 | value: s3 14 | - name: AWS_ENDPOINT 15 | value: https://minio.storage.svc 16 | - name: AWS_ACCESS_KEY_ID 17 | valueFrom: 18 | secretKeyRef: 19 | name: minio-client-secret 20 | key: AWS_ACCESS_KEY_ID 21 | - name: AWS_SECRET_ACCESS_KEY 22 | valueFrom: 23 | secretKeyRef: 24 | name: minio-client-secret 25 | key: AWS_SECRET_ACCESS_KEY 26 | - name: CA_CERT_FILE 27 | value: /etc/minio/certs/ca.crt # root ca has been mounted here 28 | args: 29 | - "mc internal-bucket" # create a bucket named "internal-bucket" 30 | volumeMounts: # mount root ca in /etc/minio/certs directory 31 | - name: credentials 32 | mountPath: /etc/minio/certs 33 | volumes: 34 | - name: credentials 35 | secret: 36 | secretName: minio-client-secret 37 | items: 38 | - key: ca.crt 39 | path: ca.crt 40 | -------------------------------------------------------------------------------- /storage/minio/ca.key: 
-------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpgIBAAKCAQEA0bMzhOXHtIGuNjiTKo3M1IXEAcf3LNPXpTyBKQDxfifEO/91 3 | ROYMwcEchsXZYhqXcRJ3+v+Vmhp6shCwZYNzpDxStAvo5PAv5fioMgv8lP9oYrAq 4 | HOwJzaSiSeNoTJkf+0lOHEhKiTFe22o6e+pYFvFIaejONjqUwd+BcCwMp/cb+PPV 5 | jAn+ewxtCsqsCr2PFawD7FASJ1z8qWorHOS2ZBTsbRoqGLU4sXj+P5PBz4r9CPOg 6 | A7zyonb8w4G+qmS5upkvxMekyjyHdxzwCJtPPljru7iXQvyqc5DBP6PWH1HXtr5s 7 | LxwsatguoNoAvfm6oDmtj5AJrEdmqqkj3spRPQIDAQABAoIBAQCa8xl8d/WrEa/S 8 | 7NcBuKnD19vPnRytiRNtS1n9HG9VUrkTxF24vWxrtvAHPia08QU6TfVOCJFYv3wu 9 | G1rch9dpYhGSbMJ4eGpMOgK+iFDpIBjX42ga2ucbhy1L/7dP8k3Jdo87IsfAvDRl 10 | WQdCDRVuTne9moLVW1AUOb0BT+tCKGRVwSXV0mNGHItUGMWsZsQamnYTBe21jrYi 11 | /7oHDuoZ94x3aO9l3X1ExBccmJTvpSww1mgIEzmNXCm9+gvBb9Y9UKap7KaAieNw 12 | DQrvO28yJQuh0Tw1B9wPwkffYZwaHJuaDn6UaRjxeedzF2Wgcondzq8krEC19rJq 13 | kF8+v5dZAoGBANRx+2S3BmEyRruYWVbY4bHiT8qZP41sYki2DaCjWdqDgXcTh1TD 14 | zQFo3ZlzFh/jYXo7sUxZ8PrzGwpNEkOlvdUZTTPsZKMFOlxn58dTO4E2m8XOcWmX 15 | U99XAJcLwt1GD0ceL3Jkfr0XTE4iRwuepS0B3NrfPHc6paPc0jxzHt8jAoGBAPyx 16 | Iz3jFdt4lWOLA3CiuLTr5T6Hboc6cXKyVtud+2PW1xyV/WntuN1kXTR6MYVqvVHs 17 | bLVtfkUNzBZbToQPiJUdxBzGzsi7wVM8ipp7Ngzoujsx4WziVMPsj4hNenXqivgX 18 | AlJt50zS/KGcztEs8n0PScFSLbbIFBfzdBSqmEQfAoGBALW2KMkko5hPYKDk1sWq 19 | DKISaR1ppypYIlj/Hvjfv+NfyEUJtx+RurAR+jlebvYnjyD2Hdiota5wchiFg7HI 20 | +m5jjd1zvUCTIDAZz+52Ctei1eqDgg5HGb5WtHJ95NdPLZIvB3ZY7u7eFq5eM1aF 21 | A9NTXIz5lMaGq1dVcZ2y+hzxAoGBALyBgKLYVyPkrr0VpTlPiq8dE2U0LxYeWSeR 22 | Nw6aqkDusoaWtfdh6fjuuEE/rtWyrQ0CbI5j4kCtbER5VPdbhy6GiBhXj0dcGXp4 23 | vYVEySuUKemi6mIJ7eZDAUhTVDnHAGjW8VqAtn4vH1uI2RheiX8V+pWHMqcaVzMO 24 | 4NfR88lNAoGBAID5oT/MhulgIMZ7/I1oMAIki1ChzyaW2vvYxJEhB0AYeMwkluuG 25 | FCY5MR3ev4LGailx9G1W4SoPdQy8he+UqGz7GM5GxcHdYbPB5lSFScGh0tYFF4g4 26 | eahupoDlsq4mTB64gMrf4Fp+Iu80ZlT8cm9Zyl+v85crVj2eSeIDQ0fp 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /storage/minio/private.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpAIBAAKCAQEAzCvvXPYyfJtIM0siU98kuqnsx5o1vZbgpTu83Zx0BSB/sknp 3 | kt3VEPRgNouEAIOReehhQazt6KB7Ouh9/OHfdUi3j0csqUm0YZz4xQ4t8G8nleIE 4 | r7fw1DwUTw5qaYwNsb5hAfuWvf0k/tcfyTmKPWBQpweof3U5VF96NbIDxPuWeVWb 5 | uCMgKVzNxk7VP19IA1he3sk9YqqtzQaNRKNoGB1UV1etS0OFIk6u2mQbHg2ZsJ9C 6 | mcpgPziwVg8t8sVn993tL7w99sImJNDnIajW+qy+NBRwQWJRidr2eiZrgQcHJoXg 7 | TssjY0seByUjTj/5BdgZ/wLggX8DXlg0C6Jq9QIDAQABAoIBAQCypCZ1UjzuZfeQ 8 | WccZV38NjCxOoREwZ1j7ef9Qb9nbuonAd4dVJ5+LjCa60uuWf4fEAJ1IF4S6K+Bm 9 | tJG3t/IK7qsdRAtBu+mGFxBbaoKrgrZCIFY1YV3odQDYAyb8XryErqy2TWmhpmK+ 10 | T3/SUvQvq5wl6T929hxJRJjrbmx55o3TYDoEHGQ4cK5t+vDXkR1SUqf2AcWWjc1F 11 | bQ6HExqw2wdIfJrbiRDaa44iZ8cteuwAP60564UVjSByWGzBqGQS9fCqV0nYXbbK 12 | thNLFdBO9Q6CN3XR/U2yhs0TK4jubziGO1GnZ62wlEsXuJReBbfLYVFRNOFga9m/ 13 | zs+sY2BBAoGBAM9oLZn8kCe+4VWwPghM6QsWBWSnvLLvDRHdi/KAAESUMpkXhsXy 14 | EMiiRmpJrcyOvvWWur0OpBqEmStWBDXbKC/L0HN+MHqVkYkVuPU/lOl2AS70zQed 15 | n1L2Tpm0Sbqc6MMJUkjI/b2ksH9BTbyuom6Ew4pASM0S/MCR7JWAKibpAoGBAPwB 16 | tYX6FVkzifKmZ2FaEFsAhXXFG9QxnbeQm6sdxqQZrqzXGT1QqZrU4/zAHa/1mJVa 17 | tXgCWMUIP/i4iVWs7T3Ds2J1lrWM48sMu4ENdVXwJmM9bkjx0mg2+K4yDPXHDdUo 18 | Q96n5VJr2eaR5Qak7X22kXt98mogFLrEvTIm2HQtAoGAC22HDbP/0WDQE6OZV2W9 19 | dXHqLCid2hIX20MkweDRovWzcAH+2AtFZ3ihfpu+qsW2udtrQJ1850UlF2Eu7DS+ 20 | GxwUyThLvYVeNnpu7XxqXQ62c/rjDSdfLvgJTqjDYzfgD1cFJKOGb5uSagCUIvBQ 21 | XNyN1aFDIaGJMacYrQgZynkCgYEAo4YAYiV7ANzeoKO15YfpoQNflqIGgtSHQPwG 22 | 
5yx1HzrDC8ivygezZpLKNdH78ZfuIMwxgOQU8hV+XUhxZTTG5RM+LZ+b4cbAcZub 23 | eAxhnRgt8KuGCrNQEuvIxlAX9Mvrf+uWzr4noin1xRXahUs0CCUVlgqN6KtUiDTt 24 | h8OJJSkCgYAUmAYVzNWM1rKu7Oiiwjck1gPysbKZUMWwpd3hSYzWlEV5ELimy14M 25 | CmWFIWiK3Ykl4xcaRn4FtqCzf31CHj18xXhMiDgq8D/zfHc17Pk8Fq6OMpIrV92/ 26 | yTUM0fpwRRFsx0pxdU4WPpB/R10sgMYHHqcRzPvMMl2twDiHSdFoqw== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /storage/minio/server.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEpAIBAAKCAQEAzCvvXPYyfJtIM0siU98kuqnsx5o1vZbgpTu83Zx0BSB/sknp 3 | kt3VEPRgNouEAIOReehhQazt6KB7Ouh9/OHfdUi3j0csqUm0YZz4xQ4t8G8nleIE 4 | r7fw1DwUTw5qaYwNsb5hAfuWvf0k/tcfyTmKPWBQpweof3U5VF96NbIDxPuWeVWb 5 | uCMgKVzNxk7VP19IA1he3sk9YqqtzQaNRKNoGB1UV1etS0OFIk6u2mQbHg2ZsJ9C 6 | mcpgPziwVg8t8sVn993tL7w99sImJNDnIajW+qy+NBRwQWJRidr2eiZrgQcHJoXg 7 | TssjY0seByUjTj/5BdgZ/wLggX8DXlg0C6Jq9QIDAQABAoIBAQCypCZ1UjzuZfeQ 8 | WccZV38NjCxOoREwZ1j7ef9Qb9nbuonAd4dVJ5+LjCa60uuWf4fEAJ1IF4S6K+Bm 9 | tJG3t/IK7qsdRAtBu+mGFxBbaoKrgrZCIFY1YV3odQDYAyb8XryErqy2TWmhpmK+ 10 | T3/SUvQvq5wl6T929hxJRJjrbmx55o3TYDoEHGQ4cK5t+vDXkR1SUqf2AcWWjc1F 11 | bQ6HExqw2wdIfJrbiRDaa44iZ8cteuwAP60564UVjSByWGzBqGQS9fCqV0nYXbbK 12 | thNLFdBO9Q6CN3XR/U2yhs0TK4jubziGO1GnZ62wlEsXuJReBbfLYVFRNOFga9m/ 13 | zs+sY2BBAoGBAM9oLZn8kCe+4VWwPghM6QsWBWSnvLLvDRHdi/KAAESUMpkXhsXy 14 | EMiiRmpJrcyOvvWWur0OpBqEmStWBDXbKC/L0HN+MHqVkYkVuPU/lOl2AS70zQed 15 | n1L2Tpm0Sbqc6MMJUkjI/b2ksH9BTbyuom6Ew4pASM0S/MCR7JWAKibpAoGBAPwB 16 | tYX6FVkzifKmZ2FaEFsAhXXFG9QxnbeQm6sdxqQZrqzXGT1QqZrU4/zAHa/1mJVa 17 | tXgCWMUIP/i4iVWs7T3Ds2J1lrWM48sMu4ENdVXwJmM9bkjx0mg2+K4yDPXHDdUo 18 | Q96n5VJr2eaR5Qak7X22kXt98mogFLrEvTIm2HQtAoGAC22HDbP/0WDQE6OZV2W9 19 | dXHqLCid2hIX20MkweDRovWzcAH+2AtFZ3ihfpu+qsW2udtrQJ1850UlF2Eu7DS+ 20 | GxwUyThLvYVeNnpu7XxqXQ62c/rjDSdfLvgJTqjDYzfgD1cFJKOGb5uSagCUIvBQ 21 | XNyN1aFDIaGJMacYrQgZynkCgYEAo4YAYiV7ANzeoKO15YfpoQNflqIGgtSHQPwG 22 | 5yx1HzrDC8ivygezZpLKNdH78ZfuIMwxgOQU8hV+XUhxZTTG5RM+LZ+b4cbAcZub 23 | eAxhnRgt8KuGCrNQEuvIxlAX9Mvrf+uWzr4noin1xRXahUs0CCUVlgqN6KtUiDTt 24 | h8OJJSkCgYAUmAYVzNWM1rKu7Oiiwjck1gPysbKZUMWwpd3hSYzWlEV5ELimy14M 25 | CmWFIWiK3Ykl4xcaRn4FtqCzf31CHj18xXhMiDgq8D/zfHc17Pk8Fq6OMpIrV92/ 26 | yTUM0fpwRRFsx0pxdU4WPpB/R10sgMYHHqcRzPvMMl2twDiHSdFoqw== 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /storage/minio/artifacts/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: minio-deployment 5 | namespace: storage 6 | labels: 7 | app: minio 8 | spec: 9 | selector: 10 | matchLabels: 11 | app: minio 12 | template: 13 | metadata: 14 | labels: 15 | app: minio 16 | spec: 17 | containers: 18 | - name: minio 19 | image: minio/minio 20 | args: 21 | - server 22 | - --address 23 | - ":443" 24 | - /storage 25 | env: 26 | # credentials to access minio server. 
use from secret "minio-server-secret" 27 | - name: MINIO_ACCESS_KEY 28 | valueFrom: 29 | secretKeyRef: 30 | name: minio-server-secret 31 | key: MINIO_ACCESS_KEY 32 | - name: MINIO_SECRET_KEY 33 | valueFrom: 34 | secretKeyRef: 35 | name: minio-server-secret 36 | key: MINIO_SECRET_KEY 37 | ports: 38 | - name: https 39 | containerPort: 443 40 | volumeMounts: 41 | - name: storage # mount the "storage" volume into the pod 42 | mountPath: "/storage" 43 | - name: minio-certs # mount the certificates in "/root/.minio/certs" directory 44 | mountPath: "/root/.minio/certs" 45 | volumes: 46 | - name: storage # use "minio-pvc" to store data 47 | persistentVolumeClaim: 48 | claimName: minio-pvc 49 | - name: minio-certs # use secret "minio-server-secret" as volume to mount the certificates 50 | secret: 51 | secretName: minio-server-secret 52 | items: 53 | - key: public.crt 54 | path: public.crt 55 | - key: private.key 56 | path: private.key 57 | - key: public.crt 58 | path: CAs/public.crt # mark self-signed certificate as trusted 59 | -------------------------------------------------------------------------------- /monitoring/grafana/README.md: -------------------------------------------------------------------------------- 1 | # Use Grafana Dashboard 2 | 3 | [Grafana](https://grafana.com) provides an elegant graphical user interface to visualize data. You can easily create beautiful dashboards with a meaningful representation of your Prometheus metrics in Grafana. 4 | 5 | To keep Grafana resources isolated, we will use a separate namespace `monitoring`. 6 | 7 | ```console 8 | $ kubectl create ns monitoring 9 | namespace/monitoring created 10 | ``` 11 | 12 | ## Deploy Grafana 13 | 14 | Below is the YAML for deploying Grafana using a Deployment. 15 | 16 | ```yaml 17 | apiVersion: apps/v1 18 | kind: Deployment 19 | metadata: 20 | name: grafana 21 | namespace: monitoring 22 | labels: 23 | app: grafana 24 | spec: 25 | replicas: 1 26 | selector: 27 | matchLabels: 28 | app: grafana 29 | template: 30 | metadata: 31 | labels: 32 | app: grafana 33 | spec: 34 | containers: 35 | - name: grafana 36 | image: grafana/grafana:5.3.1 37 | ``` 38 | 39 | Let's create the deployment we have shown above, 40 | 41 | ```console 42 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/grafana/artifacts/grafana.yaml 43 | deployment.apps/grafana created 44 | ``` 45 | 46 | Wait for the Grafana pod to go into the running state, 47 | 48 | ```console 49 | $ kubectl get pod -n monitoring -l=app=grafana 50 | NAME READY STATUS RESTARTS AGE 51 | grafana-7f594dc9c6-xwkf2 1/1 Running 0 3m22s 52 | ``` 53 | 54 | Grafana is running on port `3000`. We will forward this port to access the Grafana UI. Run the following command in a separate terminal, 55 | 56 | ```console 57 | $ kubectl port-forward -n monitoring grafana-7f594dc9c6-xwkf2 3000 58 | Forwarding from 127.0.0.1:3000 -> 3000 59 | Forwarding from [::1]:3000 -> 3000 60 | ``` 61 | 62 | Now, we can access the Grafana UI at `localhost:3000`. Use `username: admin` and `password: admin` to log in to the UI.
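If you prefer not to keep a port-forward session open, Grafana can also be exposed through a Kubernetes Service. The manifest below is only a minimal sketch and is not part of this repository's artifacts; the Service name and the `nodePort` value are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000        # Grafana listens on port 3000
    targetPort: 3000
    nodePort: 30300   # illustrative; any free port in the 30000-32767 range works
  selector:
    app: grafana      # must match the label used in the grafana deployment
```

With such a Service in place, Grafana would be reachable at `http://<node-ip>:30300`, and Prometheus can then be added as a data source from the Grafana UI using its in-cluster service address.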
63 | 64 | ## Cleanup 65 | 66 | To cleanup the Kubernetes resources created by this tutorial, run: 67 | 68 | ```console 69 | kubectl delete -n monitoring deployment grafana 70 | kubectl delete ns monitoring 71 | ``` 72 | -------------------------------------------------------------------------------- /storage/minio/public.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIC8DCCAdigAwIBAgIIJY+FqktA+rcwDQYJKoZIhvcNAQELBQAwDTELMAkGA1UE 3 | AxMCY2EwHhcNMTgxMTI5MTMyNTM4WhcNMTkxMTI5MTM0NzM4WjARMQ8wDQYDVQQD 4 | EwZzZXJ2ZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDMK+9c9jJ8 5 | m0gzSyJT3yS6qezHmjW9luClO7zdnHQFIH+ySemS3dUQ9GA2i4QAg5F56GFBrO3o 6 | oHs66H384d91SLePRyypSbRhnPjFDi3wbyeV4gSvt/DUPBRPDmppjA2xvmEB+5a9 7 | /ST+1x/JOYo9YFCnB6h/dTlUX3o1sgPE+5Z5VZu4IyApXM3GTtU/X0gDWF7eyT1i 8 | qq3NBo1Eo2gYHVRXV61LQ4UiTq7aZBseDZmwn0KZymA/OLBWDy3yxWf33e0vvD32 9 | wiYk0OchqNb6rL40FHBBYlGJ2vZ6JmuBBwcmheBOyyNjSx4HJSNOP/kF2Bn/AuCB 10 | fwNeWDQLomr1AgMBAAGjUDBOMA4GA1UdDwEB/wQEAwIFoDATBgNVHSUEDDAKBggr 11 | BgEFBQcDATAnBgNVHREEIDAeggZzZXJ2ZXKCDm1pbmlvLmRlbW8uc3ZjhwTAqGNk 12 | MA0GCSqGSIb3DQEBCwUAA4IBAQAQqrpN9UIZv/cJlptDpONJAUYQgdRPM4gPIMVc 13 | W7vIuraxfrHFkUjm67jvQVQnGIc3jH0O1Z5F9CLJSCEBY7RtLpySKG732XtHE0lE 14 | 5EDhE9HSdol5Lwq/BbH+syFhnfrHx+7yOXCtAAo2d0um5wZZAw2AWqnYN6XusLto 15 | DHuUKdLlAdM7vcohftYHhDUavsG5y+CCT4LAzX8R2ppA+9W5A0czGJfkP2TFlHLV 16 | 0lktx8q3nEMW0flVueTqwIPTXkXT3r2SFrZcvGRMVkFC3Kv2x3KO1c9kVcEUS7t8 17 | 1CiJW5YSQlPeJwsitBysfwBRuvU8EOQHk9Xknx0Jxnf9/TFk 18 | -----END CERTIFICATE----- 19 | -----BEGIN CERTIFICATE----- 20 | MIICuDCCAaCgAwIBAgIBADANBgkqhkiG9w0BAQsFADANMQswCQYDVQQDEwJjYTAe 21 | Fw0xODExMjkxMzI1MzhaFw0yODExMjYxMzI1MzhaMA0xCzAJBgNVBAMTAmNhMIIB 22 | IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA0bMzhOXHtIGuNjiTKo3M1IXE 23 | Acf3LNPXpTyBKQDxfifEO/91ROYMwcEchsXZYhqXcRJ3+v+Vmhp6shCwZYNzpDxS 24 | tAvo5PAv5fioMgv8lP9oYrAqHOwJzaSiSeNoTJkf+0lOHEhKiTFe22o6e+pYFvFI 25 | aejONjqUwd+BcCwMp/cb+PPVjAn+ewxtCsqsCr2PFawD7FASJ1z8qWorHOS2ZBTs 26 | bRoqGLU4sXj+P5PBz4r9CPOgA7zyonb8w4G+qmS5upkvxMekyjyHdxzwCJtPPljr 27 | u7iXQvyqc5DBP6PWH1HXtr5sLxwsatguoNoAvfm6oDmtj5AJrEdmqqkj3spRPQID 28 | AQABoyMwITAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zANBgkqhkiG 29 | 9w0BAQsFAAOCAQEAAZoVLfi3DTr4V0oTZ3I/M9IMDfommB+Fuf2miCrogUnpxDEs 30 | vO3mYIRHRPKPEuqGYGNVPuJ/956OboPvVhL3uT2MecljD6EBrTg25OVO1ZHhkAgj 31 | UJeT5mRig3pk+asGYVEirvapoagxXs2EQKtZvLBm28Wi3nK/LAWsV/RzLNE5Jfec 32 | 1mPx7AZWLyPYgfSdsGowMSGHQ3GFLMF4ctp00KxvxhC0aoHpKqtSsIZl58t7ki0P 33 | J4e4t6A7H3rng8QSi7Cr1nhwrMOC11gC8yWf6HT/iVgkZWLJadKATBq2sNYIxpt6 34 | reTkEm5TaxP3WDpDY597FEmENy+a4H4T56okPA== 35 | -----END CERTIFICATE----- 36 | -------------------------------------------------------------------------------- /monitoring/prometheus/builtin/artifacts/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: prometheus-config 5 | labels: 6 | app: prometheus-demo 7 | namespace: monitoring 8 | data: 9 | prometheus.yml: |- 10 | global: 11 | scrape_interval: 30s 12 | scrape_timeout: 10s 13 | scrape_configs: 14 | #------------- configuration to collect pods metrics ------------------- 15 | - job_name: 'kubernetes-pods' 16 | honor_labels: true 17 | kubernetes_sd_configs: 18 | - role: pod 19 | relabel_configs: 20 | # select only those pods that has "prometheus.io/scrape: true" annotation 21 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] 22 | action: keep 23 | regex: true 24 | # set 
metrics_path (default is /metrics) to the metrics path specified in "prometheus.io/path: " annotation. 25 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] 26 | action: replace 27 | target_label: __metrics_path__ 28 | regex: (.+) 29 | # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. 30 | - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] 31 | action: replace 32 | regex: ([^:]+)(?::\d+)?;(\d+) 33 | replacement: $1:$2 34 | target_label: __address__ 35 | - action: labelmap 36 | regex: __meta_kubernetes_pod_label_(.+) 37 | - source_labels: [__meta_kubernetes_namespace] 38 | action: replace 39 | target_label: kubernetes_namespace 40 | - source_labels: [__meta_kubernetes_pod_name] 41 | action: replace 42 | target_label: kubernetes_pod_name 43 | 44 | #-------------- configuration to collect metrics from service endpoints ----------------------- 45 | - job_name: 'kubernetes-service-endpoints' 46 | honor_labels: true 47 | kubernetes_sd_configs: 48 | - role: endpoints 49 | relabel_configs: 50 | # select only those endpoints whose service has "prometheus.io/scrape: true" annotation 51 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] 52 | action: keep 53 | regex: true 54 | # set the metrics_path to the path specified in "prometheus.io/path: " annotation. 55 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] 56 | action: replace 57 | target_label: __metrics_path__ 58 | regex: (.+) 59 | # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. 60 | - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] 61 | action: replace 62 | target_label: __address__ 63 | regex: ([^:]+)(?::\d+)?;(\d+) 64 | replacement: $1:$2 65 | - action: labelmap 66 | regex: __meta_kubernetes_service_label_(.+) 67 | - source_labels: [__meta_kubernetes_namespace] 68 | action: replace 69 | target_label: kubernetes_namespace 70 | - source_labels: [__meta_kubernetes_service_name] 71 | action: replace 72 | target_label: kubernetes_name 73 | 74 | #---------------- configuration to collect metrics from kubernetes apiserver ------------------------- 75 | - job_name: 'kubernetes-apiservers' 76 | honor_labels: true 77 | kubernetes_sd_configs: 78 | - role: endpoints 79 | # kubernetes apiserver serve metrics on a TLS secure endpoints. 
so, we have to use "https" scheme 80 | scheme: https 81 | # we have to provide certificate to establish tls secure connection 82 | tls_config: 83 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 84 | # bearer_token_file is required for authorizating prometheus server to kubernetes apiserver 85 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token 86 | 87 | relabel_configs: 88 | - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] 89 | action: keep 90 | regex: default;kubernetes;https 91 | 92 | #--------------- configuration to collect metrics from nodes ----------------------- 93 | - job_name: 'kubernetes-nodes' 94 | honor_labels: true 95 | scheme: https 96 | tls_config: 97 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 98 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token 99 | 100 | kubernetes_sd_configs: 101 | - role: node 102 | relabel_configs: 103 | - action: labelmap 104 | regex: __meta_kubernetes_node_label_(.+) 105 | - target_label: __address__ 106 | replacement: kubernetes.default.svc:443 107 | - source_labels: [__meta_kubernetes_node_name] 108 | regex: (.+) 109 | target_label: __metrics_path__ 110 | replacement: /api/v1/nodes/${1}/proxy/metrics 111 | -------------------------------------------------------------------------------- /monitoring/prometheus/operator/README.md: -------------------------------------------------------------------------------- 1 | # Deploy Prometheus Operator 2 | 3 | [Prometheus operator](https://github.com/prometheus-operator/prometheus-operator) provides simple and Kubernetes native ways to deploy and configure the Prometheus server. This tutorial will show you how to deploy CoreOS prometheus-operator. You can also follow the official docs to deploy Prometheus operator from [here](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/user-guides/getting-started.md). 4 | 5 | ## Deploy Prometheus Operator 6 | 7 | To follow this getting started you will need a Kubernetes cluster you have access to. This [example](https://github.com/prometheus-operator/prometheus-operator/blob/master/bundle.yaml) describes a Prometheus Operator Deployment, and its required ClusterRole, ClusterRoleBinding, Service Account, and Custom Resource Definitions. 8 | 9 | Now we are going to deploy the above example manifest for the prometheus operator for release `release-0.41`. 
Let's deploy the above manifest using the following command, 10 | 11 | ```console 12 | $ kubectl apply -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml 13 | customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created 14 | customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created 15 | customresourcedefinition.apiextensions.k8s.io/probes.monitoring.coreos.com created 16 | customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created 17 | customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created 18 | customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created 19 | customresourcedefinition.apiextensions.k8s.io/thanosrulers.monitoring.coreos.com created 20 | clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created 21 | clusterrole.rbac.authorization.k8s.io/prometheus-operator created 22 | deployment.apps/prometheus-operator created 23 | serviceaccount/prometheus-operator created 24 | service/prometheus-operator created 25 | ``` 26 | 27 | You can see above that all the resources of the prometheus operator and required stuff are created in the `default` namespace. we assumed that our cluster is RBAC enabled cluster. 28 | 29 | Wait for Prometheus operator pod to be ready, 30 | 31 | ```console 32 | $ kubectl get pod -n default | grep "prometheus-operator" 33 | prometheus-operator-7589597769-gp46z 1/1 Running 0 6m13s 34 | ``` 35 | 36 | ## Deploy Prometheus Server 37 | 38 | To deploy the Prometheus server, we have to create [Prometheus](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#prometheus) custom resource. Prometheus custom resource defines a desired Prometheus server setup. It specifies which [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/master/Documentation/design.md#servicemonitor)'s should be covered by this Prometheus instance. ServiceMonitor custom resource defines a set of services that should be monitored dynamically. 39 | 40 | Prometheus operator watches for `Prometheus` custom resource. Once a `Prometheus` custom resource is created, prometheus operator generates respective configuration (`prometheus.yaml` file) and creates a StatefulSet to run the desired Prometheus server. 41 | 42 | #### Create RBAC 43 | 44 | We assumed that our cluster is RBAC enabled. Below is the YAML of RBAC resources for Prometheus custom resource that we are going to create, 45 | 46 |
47 | RBAC resources for Prometheus custom resource 48 | 49 | ```yaml 50 | apiVersion: rbac.authorization.k8s.io/v1 51 | kind: ClusterRole 52 | metadata: 53 | name: prometheus 54 | rules: 55 | - apiGroups: [""] 56 | resources: 57 | - nodes 58 | - nodes/metrics 59 | - services 60 | - endpoints 61 | - pods 62 | verbs: ["get", "list", "watch"] 63 | - apiGroups: [""] 64 | resources: 65 | - configmaps 66 | verbs: ["get"] 67 | - nonResourceURLs: ["/metrics"] 68 | verbs: ["get"] 69 | --- 70 | apiVersion: v1 71 | kind: ServiceAccount 72 | metadata: 73 | name: prometheus 74 | namespace: default 75 | --- 76 | apiVersion: rbac.authorization.k8s.io/v1 77 | kind: ClusterRoleBinding 78 | metadata: 79 | name: prometheus 80 | roleRef: 81 | apiGroup: rbac.authorization.k8s.io 82 | kind: ClusterRole 83 | name: prometheus 84 | subjects: 85 | - kind: ServiceAccount 86 | name: prometheus 87 | namespace: default 88 | ``` 89 |
90 |
91 | 92 | Let's create the above RBAC resources for the Prometheus custom resource. 93 | 94 | ```console 95 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus-rbac.yaml 96 | clusterrole.rbac.authorization.k8s.io/prometheus created 97 | serviceaccount/prometheus created 98 | clusterrolebinding.rbac.authorization.k8s.io/prometheus created 99 | ``` 100 | 101 | #### Create Prometheus CR 102 | 103 | Below is the YAML of the `Prometheus` custom resource that we are going to create for this tutorial, 104 | 105 | ```yaml 106 | apiVersion: monitoring.coreos.com/v1 107 | kind: Prometheus 108 | metadata: 109 | name: prometheus 110 | labels: 111 | prometheus: prometheus 112 | spec: 113 | replicas: 1 114 | serviceAccountName: prometheus 115 | serviceMonitorSelector: 116 | matchLabels: 117 | k8s-app: prometheus 118 | serviceMonitorNamespaceSelector: 119 | matchLabels: 120 | prometheus: prometheus 121 | resources: 122 | requests: 123 | memory: 400Mi 124 | ``` 125 | 126 | This Prometheus custom resource will select all `ServiceMonitor`s that meet the following conditions: 127 | 128 | - The `ServiceMonitor` must have the `k8s-app: prometheus` label. 129 | - The `ServiceMonitor` must be created in a namespace that has the `prometheus: prometheus` label. 130 | 131 | Let's create the `Prometheus` custom resource we have shown above, 132 | 133 | ```console 134 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus.yaml 135 | prometheus.monitoring.coreos.com/prometheus created 136 | ``` 137 | 138 | Now, wait for a few seconds. The Prometheus operator will create a StatefulSet. Let's check that the StatefulSet has been created, 139 | 140 | ```console 141 | $ kubectl get statefulset -n default -l prometheus=prometheus 142 | NAME READY AGE 143 | prometheus-prometheus 1/1 3m5s 144 | ``` 145 | 146 | Check that the StatefulSet's pod is running, 147 | 148 | ```console 149 | $ kubectl get pod -n default -l prometheus=prometheus 150 | NAME READY STATUS RESTARTS AGE 151 | prometheus-prometheus-0 3/3 Running 1 3m40s 152 | ``` 153 | 154 | Prometheus server is running on port `9090`. Now, we are ready to access the Prometheus dashboard. We could use a `NodePort` type Service to access the Prometheus server, but in this tutorial we will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Prometheus dashboard. Run the following command in a separate terminal, 155 | 156 | ```console 157 | $ kubectl port-forward -n default prometheus-prometheus-0 9090 158 | Forwarding from 127.0.0.1:9090 -> 9090 159 | Forwarding from [::1]:9090 -> 9090 160 | ``` 161 | 162 | Now, you can access the Prometheus dashboard at `localhost:9090`.
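For reference, below is a minimal sketch of a `ServiceMonitor` that this Prometheus instance would pick up. It is not part of this repository's artifacts; the resource name, namespace, and port name are illustrative, but the labels follow the selectors defined in the `Prometheus` custom resource above.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app
  namespace: monitored-ns   # this namespace must carry the "prometheus: prometheus" label
  labels:
    k8s-app: prometheus     # matches spec.serviceMonitorSelector of the Prometheus CR
spec:
  selector:
    matchLabels:
      app: example-app      # selects the Service(s) that expose the metrics endpoint
  endpoints:
  - port: metrics           # name of the port in the target Service
    interval: 30s
```

The namespace itself would need the matching label, for example: `kubectl label namespace monitored-ns prometheus=prometheus`.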
163 | 164 | ## Cleanup 165 | 166 | To clean up the Kubernetes resources created by this tutorial, run: 167 | 168 | ```console 169 | # cleanup Prometheus resources 170 | kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus.yaml 171 | kubectl delete -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/operator/artifacts/prometheus-rbac.yaml 172 | 173 | # cleanup Prometheus operator resources 174 | kubectl delete -f https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/release-0.41/bundle.yaml 175 | ``` 176 | -------------------------------------------------------------------------------- /storage/nfs/README.md: -------------------------------------------------------------------------------- 1 | # Deploying NFS Server in Kubernetes 2 | 3 | NFS (Network File System) volumes can be [mounted](https://kubernetes.io/docs/concepts/storage/volumes/#nfs) as a `PersistentVolume` in Kubernetes pods. You can also pre-populate an `nfs` volume. An `nfs` volume can be shared between pods. It is particularly helpful when you need some files to be writable by multiple pods. 4 | 5 | This tutorial will show you how to deploy an NFS server in Kubernetes. It also shows you how to use an `nfs` volume in a pod. 6 | 7 | ## Before You Begin 8 | 9 | At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [Minikube](https://github.com/kubernetes/minikube). 10 | 11 | To keep NFS resources isolated, we will use a separate namespace called `storage` throughout this tutorial. We will also use another namespace called `demo` to deploy sample workloads. 12 | 13 | ```console 14 | $ kubectl create ns storage 15 | namespace/storage created 16 | 17 | $ kubectl create ns demo 18 | namespace/demo created 19 | ``` 20 | 21 | **Add the Kubernetes cluster's DNS to the host's `resolved.conf`:** 22 | 23 | There is an [issue](https://github.com/kubernetes/minikube/issues/2218) that prevents accessing an NFS server through its Service DNS name. However, accessing it through the IP address works fine. If you face this issue, you have to add the IP address of the `kube-dns` Service to your host's `/etc/systemd/resolved.conf` and restart `systemd-networkd` and `systemd-resolved`. 24 | 25 | We are using Minikube for this tutorial. The steps below show how to do this in Minikube. 26 | 27 | 1. Get the IP address of the `kube-dns` Service. 28 | ```console 29 | $ kubectl get service -n kube-system kube-dns 30 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 31 | kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 173m 32 | ``` 33 | Look at the `CLUSTER-IP` field. Here, `10.96.0.10` is the IP address of the `kube-dns` Service. 34 | 35 | 2. Add the IP address to the `/etc/systemd/resolved.conf` file of Minikube and restart the networking services. 36 | ```console 37 | # Login to minikube 38 | $ minikube ssh 39 | # Run commands as root 40 | $ su 41 | # Add the IP address to the `/etc/systemd/resolved.conf` file 42 | $ echo "DNS=10.96.0.10" >> /etc/systemd/resolved.conf 43 | # Restart the networking services 44 | $ systemctl daemon-reload 45 | $ systemctl restart systemd-networkd 46 | $ systemctl restart systemd-resolved 47 | ``` 48 | 49 | Now, we will be able to access the NFS server using the DNS name of a Service, i.e. `{service name}.{namespace}.svc.cluster.local`.
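Before moving on, you can optionally verify that the DNS change took effect. Assuming the `nslookup` applet is available inside the Minikube VM (it ships with BusyBox), a cluster DNS name should now resolve from the node itself:

```console
# from inside the Minikube VM (minikube ssh)
$ nslookup kubernetes.default.svc.cluster.local
```

If the name resolves to the cluster IP of the `kubernetes` Service, NFS mounts that refer to a Service DNS name should work as well.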
50 | 51 | ## Deploy NFS Server 52 | 53 | We will deploy NFS server using a [Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/). We will configure our NFS server to store data in [hostPath](https://kubernetes.io/docs/concepts/storage/volumes/#hostpath). You can also use any cloud volume such as [awsElasticBlockStore](https://kubernetes.io/docs/concepts/storage/volumes/#awselasticblockstore), [azureDisk](https://kubernetes.io/docs/concepts/storage/volumes/#azuredisk), [gcePersistentDisk](https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk) etc as a persistent store for NFS server. 54 | 55 | Then, we will create a [Service](https://kubernetes.io/docs/concepts/services-networking/service/) for this NFS server so that pods can consume `nfs` volume using this Service. 56 | 57 | **Create Deployment :** 58 | 59 | Below the YAML for the Deployment we are using to deploy NFS server. 60 | 61 | ```yaml 62 | apiVersion: apps/v1 63 | kind: Deployment 64 | metadata: 65 | name: nfs-server 66 | namespace: storage 67 | spec: 68 | selector: 69 | matchLabels: 70 | app: nfs-server 71 | template: 72 | metadata: 73 | labels: 74 | app: nfs-server 75 | spec: 76 | containers: 77 | - name: nfs-server 78 | image: k8s.gcr.io/volume-nfs:0.8 79 | ports: 80 | - name: nfs 81 | containerPort: 2049 82 | - name: mountd 83 | containerPort: 20048 84 | - name: rpcbind 85 | containerPort: 111 86 | securityContext: 87 | privileged: true 88 | volumeMounts: 89 | - name: storage 90 | mountPath: /exports 91 | volumes: 92 | - name: storage 93 | hostPath: 94 | path: /data/nfs # store all data in "/data/nfs" directory of the node where it is running 95 | type: DirectoryOrCreate # if the directory does not exist then create it 96 | ``` 97 | 98 | Here, we have mounted `/data/nfs` directory of the host as `storage` volume in `/exports` directory of NFS pod. All the data stored in this NFS server will be located at `/data/nfs` directory of the host node where it is running. 99 | 100 | Let's create the deployment we have shown above, 101 | 102 | ```console 103 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/nfs-server.yaml 104 | deployment.apps/nfs-server created 105 | ``` 106 | 107 | **Create Service :** 108 | 109 | As we have deployed the NFS server using a Deployment, the server IP address can change in case of pod restart. So, we need a stable DNS/IP address so that our apps can consume the volume using it. So, we are going to create a `Service` for NFS server pods. 110 | 111 | Below is the YAML for the `Service` we are going to create for our NFS server. 112 | 113 | ```yaml 114 | apiVersion: v1 115 | kind: Service 116 | metadata: 117 | name: nfs-service 118 | namespace: storage 119 | spec: 120 | ports: 121 | - name: nfs 122 | port: 2049 123 | - name: mountd 124 | port: 20048 125 | - name: rpcbind 126 | port: 111 127 | selector: 128 | app: nfs-server # must match with the label of NFS pod 129 | ``` 130 | 131 | Let's create the service we have shown above, 132 | 133 | ```console 134 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/nfs-service.yaml 135 | service/nfs-service created 136 | ``` 137 | 138 | Now, we can access the NFS server using `nfs-service.storage.svc.cluster.local` domain name. 
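Before mounting anything, it is a good idea to confirm that the Service has actually picked up the NFS pod. A quick check (output omitted here) is:

```console
# the ENDPOINTS column should list the NFS pod's IP for ports 2049, 20048 and 111
$ kubectl get endpoints -n storage nfs-service
```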
139 | 140 | >If you want to access the NFS server from outside of the cluster, you have to create [NodePort](https://kubernetes.io/docs/concepts/services-networking/service/#nodeport) or [LoadBalancer](https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer) type Service. 141 | 142 | ## Use NFS Volume 143 | 144 | This section will show you how to use `nfs` volume in a pod. Here, we will demonstrate how we can use the `nfs` volume directly in a pod or through a [Persistent Volume Claim (PVC)](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims). We will also show how `nfs` volume can be used to share files between multiple pods. 145 | 146 | ### Use NFS volume directly in Pod 147 | 148 | Let's create a simple pod that directly mount a directory from the NFS server as volume. 149 | 150 | Below is the YAML for a sample pod that mount `/exports/nfs-direct` directory of the NFS server as volume, 151 | 152 | ```yaml 153 | kind: Pod 154 | apiVersion: v1 155 | metadata: 156 | name: nfs-direct 157 | namespace: demo 158 | spec: 159 | containers: 160 | - name: busybox 161 | image: busybox 162 | command: 163 | - sleep 164 | - "3600" 165 | volumeMounts: 166 | - name: data 167 | mountPath: /demo/data 168 | volumes: 169 | - name: data 170 | nfs: 171 | server: "nfs-service.storage.svc.cluster.local" 172 | path: "/nfs-direct" # "nfs-direct" folder must exist inside "/exports" directory of NFS server 173 | ``` 174 | 175 | Here, we have mounted `/exports/nfs-direct` directory of NFS server into `/demo/data` directory. Now, if we write anything in `/demo/data` directory of this pod, it will be written on `/exports/nfs-direct` directory of the NFS server. 176 | 177 | >Note that `path: "/nfs-direct"` is relative to `/exports` directory of NFS server. 178 | 179 | At first, let's create `nfs-direct` folder inside `/exports` directory of NFS server. 180 | 181 | ```console 182 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv mkdir /exports/nfs-direct 183 | ``` 184 | 185 | Verify that directory has been created successfully, 186 | 187 | ```console 188 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv ls /exports 189 | index.html 190 | nfs-direct 191 | ``` 192 | 193 | Now, let's create the pod we have shown above, 194 | 195 | ```console 196 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/nfs-direct.yaml 197 | pod/nfs-direct created 198 | ``` 199 | 200 | When the pod is ready, let's create a sample file inside `/demo/data` directory. This file will be written on `/exports/nfs-direct` directory of NFS server. 201 | 202 | ```console 203 | $ kubectl exec -n demo nfs-direct touch /demo/data/demo.txt 204 | ``` 205 | 206 | Verify that the file has been stored in the NFS server, 207 | 208 | ```console 209 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv ls /exports/nfs-direct 210 | demo.txt 211 | ``` 212 | 213 | ### Use NFS volume through PVC 214 | 215 | You can also use the NFS volume through a `PersistentVolumeClaim`. You have to create a `PersistentVolume` that will hold the information about NFS server. Then, you have to create a `PersistentVolumeClaim` that will be bounded with the `PersistentVolume`. Finally, you can mount the PVC into a pod. 216 | 217 | **Create PersistentVolume :** 218 | 219 | Below the YAML for `PersistentVolume` that provision volume from `/exports/pvc` directory of the NFS server. 
220 | 221 | ```yaml 222 | apiVersion: v1 223 | kind: PersistentVolume 224 | metadata: 225 | name: nfs-pv 226 | labels: 227 | app: nfs-data 228 | spec: 229 | capacity: 230 | storage: 1Gi 231 | accessModes: 232 | - ReadWriteMany 233 | nfs: 234 | server: "nfs-service.storage.svc.cluster.local" 235 | path: "/pvc" # "pvc" folder must exist in "/exports" directory of NFS server 236 | ``` 237 | 238 | >Note that `path: "/pvc"` is relative to `/exports` directory of NFS server. 239 | 240 | At first, let's create `pvc` folder inside `/exports` directory of the NFS server. 241 | 242 | ```console 243 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv mkdir /exports/pvc 244 | ``` 245 | 246 | Verify that the directory has been created successfully, 247 | 248 | ```console 249 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv ls /exports 250 | index.html 251 | nfs-direct 252 | pvc 253 | ``` 254 | 255 | Now, let's create the `PersistentVolume` we have shown above, 256 | 257 | ```console 258 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/nfs-pv.yaml 259 | persistentvolume/nfs-pv created 260 | ``` 261 | 262 | **Create a PersistentVolumeClaim:** 263 | 264 | Now, we have to create a `PersistentVolumeClaim`. This PVC will be bounded with the `PersistentVolume` we have created above. 265 | 266 | Below is the YAML for the `PersistentVolumeClaim` that we are going to create, 267 | 268 | ```yaml 269 | apiVersion: v1 270 | kind: PersistentVolumeClaim 271 | metadata: 272 | name: nfs-pvc 273 | namespace: demo 274 | spec: 275 | accessModes: 276 | - ReadWriteMany 277 | storageClassName: "" 278 | resources: 279 | requests: 280 | storage: 1Gi 281 | selector: 282 | matchLabels: 283 | app: nfs-data 284 | ``` 285 | 286 | Let's create the PVC we have shown above, 287 | 288 | ```console 289 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/nfs-pvc.yaml 290 | persistentvolumeclaim/nfs-pvc created 291 | ``` 292 | 293 | Verify that the `PersistentVolumeClaim` has been bounded with the `PersistentVolume`. 294 | 295 | ```console 296 | $ kubectl get pvc -n demo nfs-pvc 297 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 298 | nfs-pvc Bound nfs-pv 1Gi RWX 10s 299 | ``` 300 | 301 | **Create Pod :** 302 | 303 | Finally, we can deploy the pod that will use the NFS volume through a PVC. 304 | 305 | Below is the YAML for sample pod that we are going to create for this purpose, 306 | 307 | ```yaml 308 | kind: Pod 309 | apiVersion: v1 310 | metadata: 311 | name: nfs-pod-pvc 312 | namespace: demo 313 | spec: 314 | containers: 315 | - name: busybox 316 | image: busybox 317 | command: 318 | - sleep 319 | - "3600" 320 | volumeMounts: 321 | - name: data 322 | mountPath: /demo/data 323 | volumes: 324 | - name: data 325 | persistentVolumeClaim: 326 | claimName: nfs-pvc 327 | ``` 328 | 329 | Here, we have mounted PVC `nfs-pvc` into `/demo/data` directory. Now, if we write anything in `/demo/data` directory of this pod, it will be written on `/exports/pvc` directory of the NFS server. 330 | 331 | Let's create the pod we have shown above, 332 | 333 | ```console 334 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/nfs-pod-pvc.yaml 335 | pod/nfs-pod-pvc created 336 | ``` 337 | 338 | When the pod is ready, let's create a sample file inside `/demo/data` directory. This file will be written on `/exports/pvc` directory of NFS server. 
339 | 340 | ```console 341 | $ kubectl exec -n demo nfs-pod-pvc touch /demo/data/demo.txt 342 | ``` 343 | 344 | Verify that the file has been stored in the NFS server, 345 | 346 | ```console 347 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv ls /exports/pvc 348 | demo.txt 349 | ``` 350 | 351 | ### Share file between multiple pods using NFS volume 352 | 353 | Sometimes we need to share some common files (i.e. configuration file) between multiple pods. We can easily achieve it using a shared NFS volume. This section will show you how to we can share a common directory between multiple pods. 354 | 355 | **Create a Shared Directory :** 356 | 357 | At first, let's create the directory that we want to share in `/exports` directory of NFS server. 358 | 359 | ```console 360 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv mkdir /exports/shared 361 | ``` 362 | 363 | Verify that the directory has been created successfully, 364 | 365 | ```console 366 | $ kubectl exec -n storage nfs-server-f9c6cbc7f-n85lv ls /exports 367 | index.html 368 | nfs-direct 369 | pvc 370 | shared 371 | ``` 372 | 373 | Now, we will mount this `shared` directory as `nfs` volume into the pods who need to share files with others. 374 | 375 | **Create Pods :** 376 | 377 | Below YAML show a pod that mount `shared` directory of the NFS server as volume. 378 | 379 | ```yaml 380 | kind: Pod 381 | apiVersion: v1 382 | metadata: 383 | name: shared-pod-1 384 | namespace: demo 385 | spec: 386 | containers: 387 | - name: busybox 388 | image: busybox 389 | command: 390 | - sleep 391 | - "3600" 392 | volumeMounts: 393 | - name: data 394 | mountPath: /demo/data 395 | volumes: 396 | - name: data 397 | nfs: 398 | server: "nfs-service.storage.svc.cluster.local" 399 | path: "/shared" # "shared" folder must exist inside "/exports" directory of NFS server 400 | ``` 401 | 402 | Here, we have mounted `/exports/shared` directory of NFS server into `/demo/data` directory. Now, if we write anything in `/demo/data` directory of this pod, it will be written on `/exports/shared` directory of the NFS server. 403 | 404 | >Note that `path: "/shared"` is relative to `/exports` directory of NFS server. 405 | 406 | Now, let's create two pod with this configuration. 407 | 408 | ```console 409 | # create shared-pod-1 410 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/shared-pod-1.yaml 411 | pod/shared-pod-1 created 412 | 413 | # create shared-pod-2 414 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/nfs/artifacts/shared-pod-2.yaml 415 | pod/shared-pod-2 created 416 | ``` 417 | 418 | Now, if we write any file in `/demo/data` directory of any pod, it will be instantly visible from other pod. 419 | 420 | **Verify File Sharing :** 421 | 422 | Let's verify that file change in `/demo/data` directory one pod is reflected in another pod. 
423 | 424 | ```console 425 | # create a file "file-1.txt" in "shared-pod-1" 426 | $ kubectl exec -n demo shared-pod-1 touch /demo/data/file-1.txt 427 | 428 | # check if "file-1.txt" is available in "shared-pod-2" 429 | $ kubectl exec -n demo shared-pod-2 ls /demo/data 430 | file-1.txt 431 | 432 | # create a file "file-2.txt" in "shared-pod-2" 433 | $ kubectl exec -n demo shared-pod-2 touch /demo/data/file-2.txt 434 | 435 | # check if "file-2.txt" is available in "shared-pod-1" 436 | $ kubectl exec -n demo shared-pod-1 ls /demo/data 437 | file-1.txt 438 | file-2.txt 439 | ``` 440 | 441 | So, we can see from the above that any change in the `/demo/data` directory of one pod is instantly reflected in the other pod. 442 | 443 | ## Cleanup 444 | 445 | To clean up the Kubernetes resources created by this tutorial, run the following commands, 446 | 447 | ```console 448 | kubectl delete -n demo pod/shared-pod-1 449 | kubectl delete -n demo pod/shared-pod-2 450 | 451 | kubectl delete -n demo pod/nfs-pod-pvc 452 | kubectl delete -n demo pvc/nfs-pvc 453 | kubectl delete pv/nfs-pv 454 | 455 | kubectl delete -n demo pod/nfs-direct 456 | 457 | kubectl delete -n storage svc/nfs-service 458 | kubectl delete -n storage deployment/nfs-server 459 | 460 | kubectl delete ns storage 461 | kubectl delete ns demo 462 | ``` 463 | -------------------------------------------------------------------------------- /storage/minio/README.md: -------------------------------------------------------------------------------- 1 | # Deploy TLS Secured Minio Server in Kubernetes 2 | 3 | [Minio](https://minio.io/) is an open source object storage server compatible with the [Amazon S3](https://aws.amazon.com/s3/) cloud storage service. You can deploy a Minio server in a Docker container, locally, in a Kubernetes cluster, on Microsoft Azure, GCP, etc. 4 | 5 | This tutorial will show you how to deploy a TLS secured Minio server in Kubernetes. It will also show you how to access this TLS secured Minio server from both inside and outside of the Kubernetes cluster. 6 | 7 | >You will find the official guides for using the Minio server [here](https://docs.minio.io/). 8 | 9 | ## Before You Begin 10 | 11 | At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [Minikube](https://github.com/kubernetes/minikube). 12 | 13 | To keep Minio resources isolated, we will use a separate namespace called `storage` throughout this tutorial. We will also use another separate namespace called `demo` to deploy sample workloads. 14 | 15 | ```console 16 | $ kubectl create ns storage 17 | namespace/storage created 18 | 19 | $ kubectl create ns demo 20 | namespace/demo created 21 | ``` 22 | 23 | ## Generate self-signed Certificate 24 | 25 | TLS is crucial for securing your production services over the web. Usually, a certificate issued by a trusted third party known as a [Certificate Authority](https://en.wikipedia.org/wiki/Certificate_authority) is used for TLS secured applications. However, we can also use a self-signed certificate. In this tutorial, we will use a self-signed certificate to secure a Minio server. 26 | 27 | We will use a tool called [onessl](https://github.com/kubepack/onessl) developed by [AppsCode](https://appscode.com/) to generate the self-signed certificate. `onessl` makes generating a self-signed certificate very easy, a matter of just two or three commands. 
If you already don't have `onessl` installed, please install it first from [here](https://github.com/kubepack/onessl/releases). 28 | 29 | **Generate Root CA :** 30 | 31 | At first, let's generate root certificate, 32 | 33 | ```console 34 | $ onessl create ca-cert 35 | ``` 36 | 37 | This will create two files `ca.crt` and `ca.key` in your working directory. This root certificate will be used to create server certificates. 38 | 39 | **Generate Server Certificate :** 40 | 41 | Now, we will generate server certificate using the root certificate. Now, we have to provide the `domain` or `ip address` for which this certificate will be valid. 42 | 43 | We want to access Minio server both from inside and outside the cluster. In order to access Minio from inside the cluster, we will use a service named `minio` in `storage` namespace. So, our domain will be `minio.storage.svc`. To access Minio from outside of cluster through `NodePort`, we will require Cluster's IP address. As we are using minikube, it is `192.168.99.100` (run `minikube ip` to confirm). We will create a certificate that is valid for both `minio.storage.svc` domain and `198.168.99.100` ip address. 44 | 45 | Let's create server certificates, 46 | 47 | ```console 48 | $ onessl create server-cert --domains minio.storage.svc --ips 192.168.99.100 49 | ``` 50 | 51 | This will generate two files `server.crt` and `server.key`. 52 | 53 | >Generated certificate will have key size of 2048 bytes and valid for 1 years. 54 | 55 | **Prepare Certificates for Minio Server :** 56 | 57 | Minio server will start TLS secure service if it find `public.crt` and `private.key` files in `/root/.minio/certs/` directory of the container. The `public.crt` file is concatenation of `server.crt` and `ca.crt` where `private.key` file is only the `server.key` file. 58 | 59 | Let's generate `public.crt` and `private.key` file, 60 | 61 | ```console 62 | $ cat {server.crt,ca.crt} > public.crt 63 | $ cat server.key > private.key 64 | ``` 65 | 66 | Be careful about the order of `server.crt` and `ca.crt`. The order will be `server's certificate > intermediate certificates > CA's root certificate`. The intermediate certificates are required if the server certificate is created using a certificate which is not the root certificate but signed by the root certificate. [onessl](https://github.com/kubepack/onessl) use root certificate by default to generate server certificate if no certificate path is specified by `--cert-dir` flag. Hence, the intermediate certificates are not used here. 67 | 68 | We will create a Kubernetes secret with this `public.crt` and `private.key` files and mount the secret to `/root/.minio/certs/` directory of minio container. 69 | 70 | > Minio server will not trust a self-signed certificate by default. We can mark the self-signed certificate as a trusted certificate by adding `public.crt` file in `/root/.minio/certs/CAs` directory. 71 | 72 | ## Deploy Minio Server 73 | 74 | Now, we are ready to deploy TLS secured Minio server. At first, we will create a Secret with credentials and certificates. Then, we will create a PVC for Minio to store data. Finally, we will deploy Minio server using a Deployment. 
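Before creating the secret, you can optionally double-check that the generated server certificate carries the subject alternative names we asked for. A quick sketch, assuming `openssl` is available on your machine:

```console
# the output should include DNS:minio.storage.svc and IP Address:192.168.99.100
$ openssl x509 -in server.crt -noout -text | grep -A 1 "Subject Alternative Name"
```
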
75 | 76 | **Create Secret :** 77 | 78 | Now, let's create a secret `minio-server-secret` with credentials `MINIO_ACCESS_KEY`, `MINIO_SECRET_KEY` and certificates `public.crt`, `private.key` files, 79 | 80 | ```console 81 | $ echo -n '' > MINIO_ACCESS_KEY 82 | $ echo -n '' > MINIO_SECRET_KEY 83 | 84 | $ kubectl create secret generic -n storage minio-server-secret \ 85 | --from-file=./MINIO_ACCESS_KEY \ 86 | --from-file=./MINIO_SECRET_KEY \ 87 | --from-file=./public.crt \ 88 | --from-file=./private.key 89 | secret/minio-server-secret created 90 | ``` 91 | 92 | Now, verify that the credentials and certificate data are present in the secret, 93 | 94 | ```console 95 | $ kubectl get secret -n storage minio-server-secret -o yaml 96 | ``` 97 | 98 | ```yaml 99 | apiVersion: v1 100 | data: 101 | MINIO_ACCESS_KEY: bXktYWNjZXNzLWtleQ== 102 | MINIO_SECRET_KEY: bXktc2VjcmV0LWtleQ== 103 | private.key: 104 | public.crt: 105 | kind: Secret 106 | metadata: 107 | creationTimestamp: 2018-11-30T05:17:54Z 108 | name: minio-server-secret 109 | namespace: storage 110 | resourceVersion: "7057" 111 | selfLink: /api/v1/namespaces/storage/secrets/minio-server-secret 112 | uid: 4a4c0365-f45f-11e8-ae3b-0800279630e8 113 | type: Opaque 114 | ``` 115 | 116 | **Create Persistent Volume Claim :** 117 | 118 | Minio server needs a Persistent Volume to store data. Let's create a [PersistentVolumeClaim](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims) to request Persistent Volume from the cluster. 119 | 120 | ```console 121 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/minio/artifacts/pvc.yaml 122 | persistentvolumeclaim/minio-pvc created 123 | ``` 124 | 125 | YAML for PersistentVolumeClaim, 126 | 127 | ```yaml 128 | apiVersion: v1 129 | kind: PersistentVolumeClaim 130 | metadata: 131 | name: minio-pvc 132 | namespace: storage 133 | spec: 134 | storageClassName: standard 135 | accessModes: 136 | - ReadWriteOnce 137 | resources: 138 | requests: 139 | storage: 5Gi 140 | ``` 141 | 142 | Verify that the cluster has provisioned the claimed volume 143 | 144 | ```console 145 | $ kubectl get pvc -n storage minio-pvc 146 | NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE 147 | minio-pvc Bound pvc-3842dfb9-f460-11e8-ae3b-0800279630e8 5Gi RWO standard 53s 148 | ``` 149 | 150 | **Create Deployment :** 151 | 152 | Now, let's create deployment for Minio server, 153 | 154 | ```console 155 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/minio/artifacts/deployment.yaml 156 | deployment.apps/minio-deployment created 157 | ``` 158 | 159 | Below the YAML for `minio-deployment` that we have created above, 160 | 161 | ```yaml 162 | apiVersion: apps/v1 163 | kind: Deployment 164 | metadata: 165 | name: minio-deployment 166 | namespace: storage 167 | labels: 168 | app: minio 169 | spec: 170 | selector: 171 | matchLabels: 172 | app: minio 173 | template: 174 | metadata: 175 | labels: 176 | app: minio 177 | spec: 178 | containers: 179 | - name: minio 180 | image: minio/minio 181 | args: 182 | - server 183 | - --address 184 | - ":443" 185 | - /storage 186 | env: 187 | # credentials to access minio server. 
use from secret "minio-server-secret" 188 | - name: MINIO_ACCESS_KEY 189 | valueFrom: 190 | secretKeyRef: 191 | name: minio-server-secret 192 | key: MINIO_ACCESS_KEY 193 | - name: MINIO_SECRET_KEY 194 | valueFrom: 195 | secretKeyRef: 196 | name: minio-server-secret 197 | key: MINIO_SECRET_KEY 198 | ports: 199 | - name: https 200 | containerPort: 443 201 | volumeMounts: 202 | - name: storage # mount the "storage" volume into the pod 203 | mountPath: "/storage" 204 | - name: minio-certs # mount the certificates in "/root/.minio/certs" directory 205 | mountPath: "/root/.minio/certs" 206 | volumes: 207 | - name: storage # use "minio-pvc" to store data 208 | persistentVolumeClaim: 209 | claimName: minio-pvc 210 | - name: minio-certs # use secret "minio-server-secret" as volume to mount the certificates 211 | secret: 212 | secretName: minio-server-secret 213 | items: 214 | - key: public.crt 215 | path: public.crt 216 | - key: private.key 217 | path: private.key 218 | - key: public.crt 219 | path: CAs/public.crt # mark self signed certificate as trusted 220 | ``` 221 | 222 | **Minio Web UI :** 223 | 224 | The Minio server is running on port `:443`. We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access the Minio Web UI. 225 | 226 | At first, let's check if the Minio pod is in `Running` state. 227 | 228 | ```console 229 | $ kubectl get pod -n storage -l=app=minio 230 | NAME READY STATUS RESTARTS AGE 231 | minio-deployment-7d4c847d9d-8trcr 1/1 Running 0 13m 232 | ``` 233 | 234 | Now, run the following command in a separate terminal to forward the `:443` port of the `minio-deployment-7d4c847d9d-8trcr` pod, 235 | 236 | ```console 237 | $ kubectl port-forward -n storage minio-deployment-7d4c847d9d-8trcr :443 238 | Forwarding from 127.0.0.1:37817 -> 443 239 | Forwarding from [::1]:37817 -> 443 240 | ``` 241 | 242 | Our host port `37817` has been forwarded to the `:443` port of the pod. Now, we can access the dashboard at `https://localhost:37817`. Open the URL in your browser to access the Minio Web UI. 243 | 244 | As we are using a self-signed certificate, the browser will not trust it. If you are using the Google Chrome browser, you will be greeted with the following message, 245 | 246 |
[Image: Browser warning "Your connection is not private"] 247 | 248 |
249 | 250 | Click on `ADVANCED`, marked by a red rectangle in the image above. Then click on `Proceed to localhost (unsafe)`, as marked in the image below. 251 | 252 |
[Image: "Proceed to localhost (unsafe)" link on the browser warning page] 253 | 254 |
255 | 256 | Then, you will be taken to the Minio login UI. Log in with your `MINIO_ACCESS_KEY` and `MINIO_SECRET_KEY`. If you succeed, you will see the UI below, 257 | 258 |
[Image: Minio Web UI] 259 | 260 |
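Optionally, you can also verify from the terminal that the forwarded endpoint serves the certificate we generated and that it validates against our root CA. A sketch with `openssl s_client` (replace `37817` with the local port printed by `kubectl port-forward`):

```console
# a successful run should end with "Verify return code: 0 (ok)"
$ openssl s_client -connect localhost:37817 -servername minio.storage.svc -CAfile ca.crt < /dev/null
```
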
261 | 262 | ## Accessing TLS Secured Minio Server 263 | 264 | This section will show you how to access the TLS secured Minio server we have deployed above from both inside and outside of the Kubernetes cluster. We will use a tool called [osm](https://github.com/appscode/osm) developed by [AppsCode](https://appscode.com/) that gives a simple and easy way to interact with various cloud storage services. 265 | 266 | ### Accessing from Outside of Cluster 267 | 268 | Although we have already accessed the Minio Web UI from a browser that runs outside of the cluster, it did not use a TLS secured connection. In this section, we will show how an application can access the Minio server over a TLS secured connection. 269 | 270 | Here, we will use the [osm](https://github.com/appscode/osm) command-line binary to interact with the Minio server. If you haven't installed `osm` already, please install it first. 271 | 272 | **Create a NodePort type Service :** 273 | 274 | We need a `NodePort` type service so that we can access the Minio server from outside of the cluster. Let's create a `NodePort` type Service first, 275 | 276 | ```console 277 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/minio/artifacts/nodeport-svc.yaml 278 | service/minio-nodeport-svc created 279 | ``` 280 | 281 | Here is the YAML for the Service we have created above, 282 | 283 | ```yaml 284 | apiVersion: v1 285 | kind: Service 286 | metadata: 287 | name: minio-nodeport-svc 288 | namespace: storage 289 | spec: 290 | type: NodePort 291 | ports: 292 | - name: https 293 | port: 443 294 | targetPort: https 295 | protocol: TCP 296 | selector: 297 | app: minio # must match with the label used in minio deployment 298 | ``` 299 | 300 | We need to know the `NodePort` allocated for this service. 301 | 302 | ```console 303 | $ kubectl get service -n storage minio-nodeport-svc 304 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 305 | minio-nodeport-svc NodePort 10.108.252.121 443:32733/TCP 6m53s 306 | ``` 307 | 308 | Notice the `PORT(S)` field. Here, `32733` is the allocated `NodePort` for this service. Now, we can connect to the Minio server using the `https://:32733` URL. As we are using minikube for this tutorial, the Node IP is `192.168.99.100`. We have already used this IP address while generating the self-signed certificate to make it valid for this IP. 309 | 310 | **Connect with Minio Server :** 311 | 312 | Now, let's use `osm` to create a bucket and upload some files to the Minio server. 313 | 314 | At first, create an `osm` configuration for the Minio server. An `osm` configuration holds the connection information of the cloud bucket, so you don't have to provide it again every time you run an operation. 315 | 316 | For an `s3` compatible Minio server, we have to provide the following connection information while creating the `osm` configuration. 317 | 318 | - `--provider` tells `osm` that it is s3 or s3 compatible cloud storage. 319 | - `--s3.access_key_id` is used to provide your `MINIO_ACCESS_KEY`. 320 | - `--s3.secret_key` is used to provide your `MINIO_SECRET_KEY`. 321 | - `--s3.endpoint` is used to specify the endpoint where your Minio server is running. 322 | - `--s3.cacert_file` is used to provide the root certificate for the TLS secured endpoint. 323 | 324 | Let's create an `osm` configuration named `minio` for our Minio server. 
325 | 326 | ```console 327 | $ osm config set-context minio --provider=s3 \ 328 | --s3.access_key_id=my-access-key \ 329 | --s3.secret_key=my-secret-key \ 330 | --s3.endpoint=https://192.168.99.100:32733 \ 331 | --s3.cacert_file=./ca.crt 332 | ``` 333 | 334 | Check that `osm` has set this newly created configuration as its current context, 335 | 336 | ```console 337 | $ osm config current-context 338 | minio 339 | ``` 340 | 341 | Let's create a bucket named `external-bucket` in our Minio server, 342 | 343 | ```console 344 | # Here, mc = make container 345 | $ osm mc external-bucket 346 | Successfully created container external-bucket 347 | ``` 348 | 349 | Check that the bucket has been created successfully, 350 | 351 | ```console 352 | # Here, lc = list container 353 | $ osm lc 354 | external-bucket 355 | Found 1 container in 356 | ``` 357 | 358 | You can also check in the Minio Web UI to see if the bucket has been created. 359 | 360 |
[Image: Minio Web UI showing external-bucket] 361 | 362 |
363 | 364 | Let's upload a file to `external-bucket`, 365 | 366 | ```console 367 | $ osm push -c external-bucket ./deployment.yaml deployment.yaml 368 | Successfully pushed item deployment.yaml 369 | ``` 370 | 371 | List all files in `external-bucket`, 372 | 373 | ```console 374 | $ osm ls external-bucket 375 | deployment.yaml 376 | Found 1 item in container external-bucket 377 | ``` 378 | 379 | You can also browse the Web UI to see if the file is present in `external-bucket`, 380 | 381 |
[Image: Minio Web UI showing files in external-bucket] 382 | 383 |
384 | 385 | **Try Without Certificates :** 386 | 387 | Now, let's try to connect to the Minio server without the certificate. Let's create another `osm` configuration that doesn't provide the certificate. This time, we will not pass the `--s3.cacert_file=./ca.crt` flag while creating the `osm` configuration. 388 | 389 | ```console 390 | $ osm config set-context minio-not-ca --provider=s3 --s3.access_key_id=my-access-key --s3.secret_key=my-secret-key --s3.endpoint=192.168.99.100:32733 391 | # check if current context is `minio-not-ca` 392 | $ osm config current-context 393 | minio-not-ca 394 | ``` 395 | 396 | Now, let's try to list files from `external-bucket`, 397 | 398 | ```console 399 | $ osm ls external-bucket 400 | Container, getting the bucket location: RequestError: send request failed 401 | caused by: Get https://192.168.99.100:32733/external-bucket?location=: x509: certificate signed by unknown authority 402 | Container, getting the bucket location: RequestError: send request failed 403 | caused by: Get https://192.168.99.100:32733/external-bucket?location=: x509: certificate signed by unknown authority 404 | ``` 405 | 406 | So, we can see that the request fails with a certificate verification error if we don't provide the root certificate. 407 | 408 | ### Accessing from Inside Cluster 409 | 410 | Now, we will show how to connect to the Minio server from inside the Kubernetes cluster. This time, we will use a `ClusterIP` type service to access the Minio server from a pod running inside the cluster. 411 | 412 | **Create ClusterIP type Service :** 413 | 414 | We have used the `minio.storage.svc` domain while generating the self-signed certificates. So, our certificate is valid for a Service named `minio` in the `storage` namespace. Let's create the service first, 415 | 416 | ```console 417 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/minio/artifacts/minio-svc.yaml 418 | service/minio created 419 | ``` 420 | 421 | Below is the YAML for the service we have created above, 422 | 423 | ```yaml 424 | apiVersion: v1 425 | kind: Service 426 | metadata: 427 | name: minio 428 | namespace: storage 429 | spec: 430 | ports: 431 | - name: https 432 | port: 443 433 | targetPort: https 434 | protocol: TCP 435 | selector: 436 | app: minio # must match with the label used in minio deployment 437 | ``` 438 | 439 | **Create Secret :** 440 | 441 | We will create a Secret with credentials to access the Minio server and the root certificate so that our client pod can access the Minio server over TLS. 442 | 443 | Let's create the client secret, 444 | 445 | ```console 446 | $ echo -n '' > AWS_ACCESS_KEY_ID 447 | $ echo -n '' > AWS_SECRET_ACCESS_KEY 448 | 449 | $ kubectl create secret generic -n demo minio-client-secret \ 450 | --from-file=./AWS_ACCESS_KEY_ID \ 451 | --from-file=./AWS_SECRET_ACCESS_KEY \ 452 | --from-file=./ca.crt 453 | secret/minio-client-secret created 454 | ``` 455 | 456 | **Create Pod :** 457 | 458 | Now, let's create a simple pod that will create a bucket named `internal-bucket` in the Minio server. This time, we will use the [appscodeci/osm](https://hub.docker.com/r/appscodeci/osm/) docker image that is built from the same `osm` binary we have used earlier. 459 | 460 | Below is the YAML for a simple `osm-pod` that will just create a bucket named `internal-bucket` in the Minio server and then go to the `Completed` state. 
461 | 462 | ```yaml 463 | kind: Pod 464 | apiVersion: v1 465 | metadata: 466 | name: osm-pod 467 | namespace: demo 468 | spec: 469 | restartPolicy: Never 470 | containers: 471 | - name: osm 472 | image: appscodeci/osm 473 | env: 474 | - name: PROVIDER 475 | value: s3 476 | - name: AWS_ENDPOINT 477 | value: https://minio.storage.svc 478 | - name: AWS_ACCESS_KEY_ID 479 | valueFrom: 480 | secretKeyRef: 481 | name: minio-client-secret 482 | key: AWS_ACCESS_KEY_ID 483 | - name: AWS_SECRET_ACCESS_KEY 484 | valueFrom: 485 | secretKeyRef: 486 | name: minio-client-secret 487 | key: AWS_SECRET_ACCESS_KEY 488 | - name: CA_CERT_FILE 489 | value: /etc/minio/certs/ca.crt # root ca has been mounted here 490 | args: 491 | - "mc internal-bucket" # create a bucket named "internal-bucket" 492 | volumeMounts: # mount root ca in /etc/minio/certs directory 493 | - name: credentials 494 | mountPath: /etc/minio/certs 495 | volumes: 496 | - name: credentials 497 | secret: 498 | secretName: minio-client-secret 499 | items: 500 | - key: ca.crt 501 | path: ca.crt 502 | ``` 503 | 504 | Let's create the above pod, 505 | 506 | ```console 507 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/storage/minio/artifacts/osm-pod.yaml 508 | pod/osm-pod created 509 | ``` 510 | 511 | Now, wait for the pod to go in `Running` state. Once, it is in `Running` state, it will create a bucket in the Minio server. 512 | 513 | You can check the pod's log to see if the bucket was created successfully. 514 | 515 | ```console 516 | $ kubectl logs -n demo osm-pod -f 517 | Configuring osm context for s3 storage 518 | osm config set-context s3 --provider=s3 --s3.access_key_id=my-access-key --s3.secret_key=my-secret-key --s3.endpoint=https://minio.storage.svc --s3.cacert_file=/etc/minio/certs/ca.crt 519 | Successfully configured 520 | 521 | Running main command..... 522 | osm mc internal-bucket 523 | Successfully created container internal-bucket 524 | ``` 525 | 526 | You can also check Minio Web UI to ensure that the bucket is showing there. 527 | 528 |
[Image: Minio Web UI showing internal-bucket] 529 | 530 |
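You can also confirm from the command line that the pod finished its job and reached the `Completed` state (the `AGE` value below is just illustrative):

```console
$ kubectl get pod -n demo osm-pod
NAME      READY   STATUS      RESTARTS   AGE
osm-pod   0/1     Completed   0          2m
```
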
531 | 532 | ## Cleanup 533 | 534 | To cleanup the Kubernetes resources created by this tutorial run following commands, 535 | 536 | ```console 537 | kubectl delete -n storage secret/minio-server-secret 538 | kubectl delete -n storage deployment/minio-deployment 539 | kubectl delete -n storage persistentvolumeclaim/minio-pvc 540 | kubectl delete -n storage service/minio-nodeport-svc 541 | kubectl delete -n storage service/minio 542 | kubectl delete -n storage secret/minio-client-secret 543 | kubectl delete -n demo pod/osm-pod 544 | 545 | kubectl delete ns storage 546 | kubectl delete ns demo 547 | ``` 548 | -------------------------------------------------------------------------------- /monitoring/prometheus/builtin/README.md: -------------------------------------------------------------------------------- 1 | # Configure Prometheus Server to Monitor Kubernetes Resources 2 | 3 | Prometheus has native support for monitoring Kubernetes resources. This tutorial will show you how to configure and deploy a Prometheus server in Kubernetes to collect metrics from various Kubernetes resources. 4 | 5 | ## Before You Begin 6 | 7 | At first, you need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using [Minikube](https://github.com/kubernetes/minikube). 8 | 9 | To keep Prometheus resources isolated, we will use a separate namespace `monitoring` to deploy Prometheus server. We will deploy sample workload on another separate namespace called `demo`. 10 | 11 | ```console 12 | $ kubectl create ns monitoring 13 | namespace/monitoring created 14 | 15 | $ kubectl create ns demo 16 | namespace/demo created 17 | ``` 18 | 19 | ## Configure Prometheus 20 | 21 | Prometheus is configured through a configuration file `prometheus.yaml`. This configuration file describe how Prometheus server should collect metrics from different resources. 22 | 23 | In this tutorial, we are going to configure Prometheus to collect metrics from [Pod](https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/), [Service Endpoints](https://kubernetes.io/docs/concepts/services-networking/service/), [Kubernetes API Server](https://kubernetes.io/docs/concepts/overview/components/#kube-apiserver) and [Nodes](https://kubernetes.io/docs/concepts/architecture/nodes/). 24 | 25 | A typical configuration file should look like this, 26 | 27 | ```yaml 28 | global: 29 | # specifies configuration such as scrape_interval, evaluation_interval etc 30 | # that are valid for all configuration context 31 | 32 | scrape_configs: 33 | # specifies the configuration about where and how to collect the metrics. 34 | 35 | rule_files: 36 | # Rule files specifies a list of globs. Rules and alerts are read from 37 | # all matching files. 38 | 39 | alerting: 40 | # Alerting specifies settings related to the Alertmanager. 41 | 42 | remote_write: 43 | # Settings related to the remote write feature. 44 | 45 | remote_read: 46 | # Settings related to the remote read feature. 47 | ``` 48 | 49 | For this tutorial, we will only configure `global` and `scrape_config` parts. To know about other configuration parts, please check the Prometheus official configuration guide from [here](https://prometheus.io/docs/prometheus/latest/configuration/configuration). 50 | 51 | ### `global` configuration 52 | 53 | `global` configuration part specifies configuration that are valid to all other configuration context. 
If you specify same configuration in local context, it will overwrite the global one. We are going to use following `global` configuration. 54 | 55 | ```yaml 56 | global: 57 | scrape_interval: 30s 58 | scrape_timeout: 10s 59 | ``` 60 | 61 | Here, `scrape_interval: 30s` indicates that Prometheus server should scrape metrics with 30 seconds interval. `scrape_timeout: 10s` indicates how long until a scrape request times out. 62 | 63 | ### `scrape_config` configuration 64 | 65 | `scrape_config` section specifies the targets of metric collection and how to collect it. It is actually an array of configuration called `job`. Each `job` specify the configuration to collect metrics from a specific resource or specific type of resources. Here, we are going to configure four different jobs `kubernetes-pod`, `kubernetes-service-endpoints`, `kubernetes-apiservers` and `kubernetes-nodes` to collect metrics from Pod, Service Endpoints, Kubernetes API Server and Nodes respectively. 66 | 67 | #### `kubernetes-pod` 68 | 69 | Here, we are going to configure Prometheus to collect metrics from Kubernetes Pods that have following three annotation, 70 | 71 | ```yaml 72 | prometheus.io/scrape: true 73 | prometheus.io/path: 74 | prometheus.io/port: 75 | ``` 76 | 77 | Here, `prometheus.io/scrape: true` annotation indicate that Prometheus should scrape metrics from this pod. `prometheus.io/port: ` and `prometheus.io/path: ` specifies the port and path where the pod is serving metrics. 78 | 79 | Below is the yaml for a sample pod that exports Prometheus metrics at `/metrics` path of `9091` port. 80 | 81 | ```yaml 82 | apiVersion: v1 83 | kind: Pod 84 | metadata: 85 | name: pod-monitoring-demo 86 | namespace: demo 87 | labels: 88 | app: prometheus-demo 89 | annotations: 90 | prometheus.io/scrape: "true" 91 | prometheus.io/port: "9091" 92 | prometheus.io/path: "/metrics" 93 | spec: 94 | containers: 95 | - name: pushgateway 96 | image: prom/pushgateway 97 | ``` 98 | 99 | Now, if we want to scrape metrics from above pod, we should configure a job under `scrape_config` as below, 100 | 101 | ```yaml 102 | - job_name: 'kubernetes-pods' 103 | honor_labels: true 104 | kubernetes_sd_configs: 105 | - role: pod 106 | relabel_configs: 107 | # select only those pods that has "prometheus.io/scrape: true" annotation 108 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] 109 | action: keep 110 | regex: true 111 | # set metrics_path (default is /metrics) to the metrics path specified in "prometheus.io/path: " annotation. 112 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] 113 | action: replace 114 | target_label: __metrics_path__ 115 | regex: (.+) 116 | # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. 117 | - source_labels: [__address__ __meta_kubernetes_pod_annotation_prometheus_io_port] 118 | action: replace 119 | regex: ([^:]+)(?::\d+)?;(\d+) 120 | replacement: $1:$2 121 | target_label: __address__ 122 | - action: labelmap 123 | regex: __meta_kubernetes_pod_label_(.+) 124 | - source_labels: [__meta_kubernetes_namespace] 125 | action: replace 126 | target_label: kubernetes_namespace 127 | - source_labels: [__meta_kubernetes_pod_name] 128 | action: replace 129 | target_label: kubernetes_pod_name 130 | ``` 131 | 132 | Prometheus itself add some labels on collected metrics. If the label is already present in the metric, it cause conflict. In this case, Prometheus rename existing label and add `exported_` prefix to it. 
Then it add its own label with original name. Here, `honor_labels: true` tells Prometheus to respect existing label in case of any conflict. So, Prometheus will not add its own label then. 133 | 134 | `kubernetes_sd_configs` tells Prometheus that we want to collect metrics form Kubernetes resource and the resource is `pod` in this case. 135 | 136 | Here, `relabel_config` is used to dynamically configure the target. Prometheus select all pods as possible targets. Here, we are keeping only those pods that has `prometheus.io/scrape: "true"` annotation and dynamically configuring metrics path, port etc. for each pod. 137 | 138 | #### `kubernetes-service-endpoints` 139 | 140 | Now, we are going to configure Prometheus to collect metrics from the endpoints of a Service. In this case, we will apply respective annotations in the Service instead of pod that we have done in earlier section. 141 | 142 | We are going to collect metrics from below Pod, 143 | 144 | ```yaml 145 | apiVersion: v1 146 | kind: Pod 147 | metadata: 148 | name: service-endpoint-monitoring-demo 149 | namespace: demo 150 | labels: 151 | app: prometheus-demo 152 | pod: prom-pushgateway 153 | spec: 154 | containers: 155 | - name: pushgateway 156 | image: prom/pushgateway 157 | ``` 158 | 159 | We are going to use below Service to collect metrics from that Pod, 160 | 161 | ```yaml 162 | kind: Service 163 | apiVersion: v1 164 | metadata: 165 | name: pushgateway-service 166 | namespace: demo 167 | labels: 168 | app: prometheus-demo 169 | annotations: 170 | prometheus.io/scrape: "true" 171 | prometheus.io/port: "9091" 172 | prometheus.io/path: "/metrics" 173 | spec: 174 | selector: 175 | pod: prom-pushgateway 176 | ports: 177 | - name: metrics 178 | port: 9091 179 | targetPort: 9091 180 | ``` 181 | 182 | Look at the annotations of this service. This time, we have applied annotations with metrics information in the Service instead of the Pod. 183 | 184 | Now, we have to configure a job under `scrape_config` as below to collect metrics using this Service. 185 | 186 | ```yaml 187 | - job_name: 'kubernetes-service-endpoints' 188 | honor_labels: true 189 | kubernetes_sd_configs: 190 | - role: endpoints 191 | relabel_configs: 192 | # select only those endpoints whose service has "prometheus.io/scrape: true" annotation 193 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] 194 | action: keep 195 | regex: true 196 | # set the metrics_path to the path specified in "prometheus.io/path: " annotation. 197 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] 198 | action: replace 199 | target_label: __metrics_path__ 200 | regex: (.+) 201 | # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. 202 | - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] 203 | action: replace 204 | target_label: __address__ 205 | regex: ([^:]+)(?::\d+)?;(\d+) 206 | replacement: $1:$2 207 | - action: labelmap 208 | regex: __meta_kubernetes_service_label_(.+) 209 | - source_labels: [__meta_kubernetes_namespace] 210 | action: replace 211 | target_label: kubernetes_namespace 212 | - source_labels: [__meta_kubernetes_service_name] 213 | action: replace 214 | target_label: kubernetes_name 215 | ``` 216 | 217 | Here, `role: endpoints` under `kubernetes_sd_configs` field tells Prometheus that the targeted resources are endpoints of a service. 
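To see what this discovery role will actually pick up for our sample Service, you can inspect its Endpoints object with `kubectl`; the pod IP shown here is just an example:

```console
$ kubectl get endpoints -n demo pushgateway-service
NAME                  ENDPOINTS         AGE
pushgateway-service   172.17.0.7:9091   1m
```
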
218 | 219 | #### `kubernetes-apiservers` 220 | 221 | The Kubernetes API Server exposes metrics on a TLS secured endpoint. So, the Prometheus server has to provide a certificate to collect these metrics. 222 | 223 | We have to configure a job under `scrape_config` as below to collect metrics from the API Server. 224 | 225 | ```yaml 226 | - job_name: 'kubernetes-apiservers' 227 | honor_labels: true 228 | kubernetes_sd_configs: 229 | - role: endpoints 230 | # the kubernetes apiserver serves metrics on a TLS secured endpoint. so, we have to use the "https" scheme 231 | scheme: https 232 | # we have to provide a certificate to establish a tls secured connection 233 | tls_config: 234 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 235 | # bearer_token_file is required for authorizing the prometheus server to the kubernetes apiserver 236 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token 237 | 238 | relabel_configs: 239 | - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] 240 | action: keep 241 | regex: default;kubernetes;https 242 | ``` 243 | 244 | Look at the `tls_config` field. We are specifying the root certificate path in the `ca_file` field. Kubernetes automatically mounts a secret with the respective certificate and [service account token](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#service-account-tokens) in the `/var/run/secrets/kubernetes.io/serviceaccount/` directory of the Prometheus pod. 245 | 246 | We need to authorize the Prometheus server to the Kubernetes API Server in order to collect the metrics. So, we are providing the service account token through the `bearer_token_file` field. 247 | 248 | > You can also collect metrics of a [Kubernetes Extension API Server](https://kubernetes.io/docs/tasks/access-kubernetes-api/setup-extension-api-server/) with a similar configuration. However, you have to mount a secret with the certificate of the Extension API Server to your Prometheus Deployment, and you have to point to that certificate with the `ca_file` field. You will also need to add the following to `relabel_config`. 249 | 250 | ```yaml 251 | - target_label: __address__ 252 | replacement: :443 253 | ``` 254 | 255 | #### `kubernetes-nodes` 256 | 257 | We can use the Kubernetes API Server to collect node metrics. The scraping will be proxied through the API server. This enables Prometheus to collect node metrics without directly connecting to the node. This is particularly helpful when you are running Prometheus outside of the cluster or the nodes are not directly accessible to the Prometheus server. 258 | 259 | The YAML below shows a job under `scrape_config` to collect node metrics. 260 | 261 | ```yaml 262 | - job_name: 'kubernetes-nodes' 263 | honor_labels: true 264 | scheme: https 265 | tls_config: 266 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 267 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token 268 | 269 | kubernetes_sd_configs: 270 | - role: node 271 | relabel_configs: 272 | - action: labelmap 273 | regex: __meta_kubernetes_node_label_(.+) 274 | - target_label: __address__ 275 | replacement: kubernetes.default.svc:443 276 | - source_labels: [__meta_kubernetes_node_name] 277 | regex: (.+) 278 | target_label: __metrics_path__ 279 | replacement: /api/v1/nodes/${1}/proxy/metrics 280 | ``` 281 | 282 | Here, the `replacement: /api/v1/nodes/${1}/proxy/metrics` line is responsible for proxying the scrape to the respective node. 
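To get an idea of what Prometheus will receive through this proxy path, you can fetch the same URL with `kubectl`. A sketch, assuming `minikube` is the node name (run `kubectl get nodes` to find yours):

```console
# dumps the node's kubelet metrics in Prometheus text format
$ kubectl get --raw "/api/v1/nodes/minikube/proxy/metrics" | head
```
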
283 | 284 | ### Rounding up the Configuration 285 | 286 | Finally, our final Prometheus configuration file (`prometheus.yaml`) to collect metrics from these four sources should look like this, 287 | 288 | ```yaml 289 | global: 290 | scrape_interval: 30s 291 | scrape_timeout: 10s 292 | scrape_configs: 293 | #------------- configuration to collect pods metrics ------------------- 294 | - job_name: 'kubernetes-pods' 295 | honor_labels: true 296 | kubernetes_sd_configs: 297 | - role: pod 298 | relabel_configs: 299 | # select only those pods that has "prometheus.io/scrape: true" annotation 300 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] 301 | action: keep 302 | regex: true 303 | # set metrics_path (default is /metrics) to the metrics path specified in "prometheus.io/path: " annotation. 304 | - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] 305 | action: replace 306 | target_label: __metrics_path__ 307 | regex: (.+) 308 | # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. 309 | - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port] 310 | action: replace 311 | regex: ([^:]+)(?::\d+)?;(\d+) 312 | replacement: $1:$2 313 | target_label: __address__ 314 | - action: labelmap 315 | regex: __meta_kubernetes_pod_label_(.+) 316 | - source_labels: [__meta_kubernetes_namespace] 317 | action: replace 318 | target_label: kubernetes_namespace 319 | - source_labels: [__meta_kubernetes_pod_name] 320 | action: replace 321 | target_label: kubernetes_pod_name 322 | 323 | #-------------- configuration to collect metrics from service endpoints ----------------------- 324 | - job_name: 'kubernetes-service-endpoints' 325 | honor_labels: true 326 | kubernetes_sd_configs: 327 | - role: endpoints 328 | relabel_configs: 329 | # select only those endpoints whose service has "prometheus.io/scrape: true" annotation 330 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] 331 | action: keep 332 | regex: true 333 | # set the metrics_path to the path specified in "prometheus.io/path: " annotation. 334 | - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] 335 | action: replace 336 | target_label: __metrics_path__ 337 | regex: (.+) 338 | # set the scrapping port to the port specified in "prometheus.io/port: " annotation and set address accordingly. 339 | - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] 340 | action: replace 341 | target_label: __address__ 342 | regex: ([^:]+)(?::\d+)?;(\d+) 343 | replacement: $1:$2 344 | - action: labelmap 345 | regex: __meta_kubernetes_service_label_(.+) 346 | - source_labels: [__meta_kubernetes_namespace] 347 | action: replace 348 | target_label: kubernetes_namespace 349 | - source_labels: [__meta_kubernetes_service_name] 350 | action: replace 351 | target_label: kubernetes_name 352 | 353 | #---------------- configuration to collect metrics from kubernetes apiserver ------------------------- 354 | - job_name: 'kubernetes-apiservers' 355 | honor_labels: true 356 | kubernetes_sd_configs: 357 | - role: endpoints 358 | # kubernetes apiserver serve metrics on a TLS secure endpoints. 
so, we have to use "https" scheme 359 | scheme: https 360 | # we have to provide certificate to establish tls secure connection 361 | tls_config: 362 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 363 | # bearer_token_file is required for authorizating prometheus server to kubernetes apiserver 364 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token 365 | 366 | relabel_configs: 367 | - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] 368 | action: keep 369 | regex: default;kubernetes;https 370 | 371 | #--------------- configuration to collect metrics from nodes ----------------------- 372 | - job_name: 'kubernetes-nodes' 373 | honor_labels: true 374 | scheme: https 375 | tls_config: 376 | ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt 377 | bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token 378 | 379 | kubernetes_sd_configs: 380 | - role: node 381 | relabel_configs: 382 | - action: labelmap 383 | regex: __meta_kubernetes_node_label_(.+) 384 | - target_label: __address__ 385 | replacement: kubernetes.default.svc:443 386 | - source_labels: [__meta_kubernetes_node_name] 387 | regex: (.+) 388 | target_label: __metrics_path__ 389 | replacement: /api/v1/nodes/${1}/proxy/metrics 390 | ``` 391 | 392 | Now, we can use this configuration file to deploy our Prometheus server. 393 | 394 | ## Deploy Prometheus Server 395 | 396 | As we have configured `prometheus.yaml` to collect metrics from the targets. We are ready to deploy Prometheus Deployment. 397 | 398 | **Create sample workload:** 399 | 400 | At first, let's create the sample pods and service we have shown earlier so that we can verify our configured scrapping job for `kubernetes-pod` and `kubernetes-service-endpoints` are working. 401 | 402 | ```console 403 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/builtin/artifacts/sample-workloads.yaml 404 | pod/pod-monitoring-demo created 405 | pod/service-endpoint-monitoring-demo created 406 | service/pushgateway-service created 407 | ``` 408 | 409 | >YAML for sample workloads can be found [here](/monitoring/prometheus/builtin/artifacts/sample-workloads.yaml). 410 | 411 | **Create ConfigMap:** 412 | 413 | Now, we have to create a ConfigMap with the configuration (`prometheus.yaml`) file. We will mount this into Prometheus Deployment. 414 | 415 | ```console 416 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/builtin/artifacts/configmap.yaml 417 | configmap/prometheus-config created 418 | ``` 419 | 420 | >YAML for the ConfigMap can be found [here](/monitoring/prometheus/builtin/artifacts/configmap.yaml). 421 | 422 | **Create RBAC resources:** 423 | 424 | If you are using a RBAC enabled cluster, you have to give necessary permissions to Prometheus server. Let's create the necessary RBAC resources, 425 | 426 | ```console 427 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/builtin/artifacts/rbac.yaml 428 | clusterrole.rbac.authorization.k8s.io/prometheus created 429 | serviceaccount/prometheus created 430 | clusterrolebinding.rbac.authorization.k8s.io/prometheus created 431 | ``` 432 | 433 | >YAML for RBAC resources can be found [here](/monitoring/prometheus/builtin/artifacts/rbac.yaml). 434 | 435 | **Create Deployment:** 436 | 437 | Finally, let's deploy the Prometheus server. 
438 | 439 | ```console 440 | $ kubectl apply -f https://raw.githubusercontent.com/appscode/third-party-tools/master/monitoring/prometheus/builtin/artifacts/deployment.yaml 441 | deployment.apps/prometheus created 442 | ```` 443 | 444 | Below the YAML for Prometheus Deployment that we have deployed above, 445 | 446 | ```yaml 447 | apiVersion: apps/v1 448 | kind: Deployment 449 | metadata: 450 | name: prometheus 451 | namespace: monitoring 452 | labels: 453 | app: prometheus-demo 454 | spec: 455 | replicas: 1 456 | selector: 457 | matchLabels: 458 | app: prometheus 459 | template: 460 | metadata: 461 | labels: 462 | app: prometheus 463 | spec: 464 | serviceAccountName: prometheus 465 | containers: 466 | - name: prometheus 467 | image: prom/prometheus:v2.20.1 468 | args: 469 | - "--config.file=/etc/prometheus/prometheus.yml" 470 | - "--storage.tsdb.path=/prometheus/" 471 | ports: 472 | - containerPort: 9090 473 | volumeMounts: 474 | - name: prometheus-config 475 | mountPath: /etc/prometheus/ 476 | - name: prometheus-storage 477 | mountPath: /prometheus/ 478 | volumes: 479 | - name: prometheus-config 480 | configMap: 481 | defaultMode: 420 482 | name: prometheus-config 483 | - name: prometheus-storage 484 | emptyDir: {} 485 | ``` 486 | 487 | >Use a persistent volume instead of `emptyDir` for `prometheus-storage` volume if you don't want to lose collected metrics on Prometheus pod restart. 488 | 489 | **Verify Metrics:** 490 | 491 | Prometheus server is running on port `9090`. We will use [port forwarding](https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/) to access Prometheus dashboard. 492 | 493 | At first, let's check if the Prometheus pod is in `Running` state. 494 | 495 | ```console 496 | $ kubectl get pod -n monitoring -l=app=prometheus 497 | NAME READY STATUS RESTARTS AGE 498 | prometheus-8568c86d86-vpzx5 1/1 Running 0 102s 499 | ``` 500 | 501 | Now, run following command on a separate terminal to forward `9090` port of `prometheus-8568c86d86-vpzx5` pod, 502 | 503 | ```console 504 | $ kubectl port-forward -n monitoring prometheus-8568c86d86-vpzx5 9090 505 | Forwarding from 127.0.0.1:9090 -> 9090 506 | Forwarding from [::1]:9090 -> 9090 507 | ``` 508 | 509 | Now, we can access the dashboard at `localhost:9090`. Open [http://localhost:9090](http://localhost:9090) in your browser. You should see the configured jobs as target and they are in `UP` state which means Prometheus is able collect metrics from them. 510 | 511 |
[Image: Prometheus targets page] 512 | 513 |
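While the port-forward is still running, you can also query the collected metrics directly through the HTTP API. For example, the following sketch asks for the `up` series of the targets in the `demo` namespace; each returned series should have the value `1`:

```console
$ curl -sG http://localhost:9090/api/v1/query --data-urlencode 'query=up{kubernetes_namespace="demo"}'
```
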
514 | 515 | ## Cleanup 516 | 517 | To cleanup the Kubernetes resources created by this tutorial, run: 518 | 519 | ```console 520 | # delete prometheus resources 521 | $ kubectl delete all -n demo -l=app=prometheus-demo 522 | pod "pod-monitoring-demo" deleted 523 | pod "service-endpoint-monitoring-demo" deleted 524 | service "pushgateway-service" deleted 525 | 526 | $ kubectl delete all -n monitoring -l=app=prometheus-demo 527 | deployment.apps "prometheus" deleted 528 | 529 | # delete rbac stuff 530 | $ kubectl delete clusterrole -l=app=prometheus-demo 531 | clusterrole.rbac.authorization.k8s.io "prometheus" deleted 532 | 533 | $ kubectl delete clusterrolebinding -l=app=prometheus-demo 534 | clusterrolebinding.rbac.authorization.k8s.io "prometheus" deleted 535 | 536 | $ kubectl delete serviceaccount -n monitoring -l=app=prometheus-demo 537 | serviceaccount "prometheus" deleted 538 | 539 | # delete namespace 540 | $ kubectl delete ns monitoring 541 | namespace "monitoring" deleted 542 | 543 | $ kubectl delete ns demo 544 | namespace "demo" deleted 545 | ``` 546 | --------------------------------------------------------------------------------