├── .gitignore
├── README.md
├── images
│   ├── arch
│   │   └── thanos-receive-multi-tenancy.png
│   └── screenshots
│       └── demo_test_result.png
├── local-cluster
│   ├── create-cluster.sh
│   ├── kind-calico-cluster-1.yaml
│   └── nginx-ingress-controller.yaml
└── manifests
    ├── nginx-proxy-a.yaml
    ├── nginx-proxy-b.yaml
    ├── prometheus-tenant-a.yaml
    ├── prometheus-tenant-b.yaml
    ├── thanos-query.yaml
    ├── thanos-receive-controller.yaml
    ├── thanos-receive-default.yaml
    ├── thanos-receive-hashring-0.yaml
    ├── thanos-receive-service.yaml
    ├── thanos-receiver-hashring-configmap-base.yaml
    └── thanos-store-shard-0.yaml
/.gitignore:
--------------------------------------------------------------------------------
1 | *.out
2 | *.tgz
3 | *.tar.gz
4 | *.tar
5 | *.terraform/
6 | temp*
7 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Introduction
2 |
3 | Hey there! If you are reading this blog post, then I guess you are already aware of [Prometheus](https://prometheus.io/) and how it helps us monitor distributed systems like [Kubernetes](https://kubernetes.io/). And if you are familiar with [Prometheus](https://prometheus.io/), then chances are that you have come across the name Thanos. [Thanos](https://thanos.io/) is a popular open-source project that helps enterprises achieve a highly available (HA) [Prometheus](https://prometheus.io/) setup with long-term storage capabilities.
4 | One of the common challenges of distributed monitoring is implementing multi-tenancy. [Thanos Receiver](https://thanos.io/v0.14/components/receive/) is a Thanos component designed to address this challenge. [Receiver](https://thanos.io/v0.14/components/receive/) was part of Thanos for a long time, but it was experimental. Recently, [Thanos](https://thanos.io) went GA with the [Receiver](https://thanos.io/v0.14/components/receive/) component.
5 |
6 | # Motivation
7 | We tried this component with one of our clients, and it worked well. However, due to a lack of documentation, the setup wasn't as smooth as we would have liked. The purpose of this blog post is to lay out a simple guide for those looking to create a multi-tenant monitoring setup using Prometheus and Thanos Receiver. In this blog post we will use [Thanos Receiver](https://thanos.io/v0.14/components/receive) to achieve a simple multi-tenant monitoring setup where Prometheus can be a nearly stateless component on the tenant side.
8 |
9 | # A few words on Thanos Receiver
10 | [Receiver](https://thanos.io/v0.14/components/receive/) is a Thanos component that can accept [remote write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) requests from any Prometheus instance and store the data in its local TSDB; optionally, it can upload those TSDB blocks to an [object storage](https://thanos.io/v0.14/thanos/storage.md/) like S3 or GCS at regular intervals. Receiver does this by implementing the [Prometheus Remote Write API](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write). It builds on top of the existing Prometheus TSDB and retains its usefulness while extending its functionality with long-term storage, horizontal scalability, and downsampling. It exposes the StoreAPI so that Thanos Queriers can query received metrics in real time.
11 | ## Multi-tenancy
12 | Thanos Receiver supports multi-tenancy. It accepts Prometheus remote write requests and writes them into a local instance of Prometheus TSDB. The value of the `THANOS-TENANT` HTTP header of the incoming request determines the ID of the tenant Prometheus. To prevent data leaking at the database level, each tenant has an individual TSDB instance, meaning a single Thanos receiver may manage multiple TSDB instances. Once the data is successfully committed to the tenant's TSDB, the request returns successfully. Thanos Receiver also supports multi-tenancy by exposing tenant labels, which work similarly to Prometheus [external labels](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#configuration-file).
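For illustration, a hard tenant's remote write request carries the tenant header roughly like this (the host and port below are hypothetical; the payload is a snappy-compressed protobuf, as defined by the Prometheus remote write protocol):
```
POST /api/v1/receive HTTP/1.1
Host: thanos-receive.example.local:19291
Content-Encoding: snappy
Content-Type: application/x-protobuf
THANOS-TENANT: tenant-a

<snappy-compressed protobuf payload>
```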
13 | ## Hashring config file
14 | If we want features like load-balancing and data replication, we can run multiple instances of Thanos receiver as a part of a single hashring. The receiver instances within the same hashring become aware of their peers through a hashring config file. Following is an example of a hashring config file.
15 | ```json
16 | [
17 | {
18 | "hashring": "tenant-a",
19 | "endpoints": ["tenant-a-1.metrics.local:19291/api/v1/receive", "tenant-a-2.metrics.local:19291/api/v1/receive"],
20 | "tenants": ["tenant-a"]
21 | },
22 | {
23 | "hashring": "tenants-b-c",
24 | "endpoints": ["tenant-b-c-1.metrics.local:19291/api/v1/receive", "tenant-b-c-2.metrics.local:19291/api/v1/receive"],
25 | "tenants": ["tenant-b", "tenant-c"]
26 | },
27 | {
28 | "hashring": "soft-tenants",
29 | "endpoints": ["http://soft-tenants-1.metrics.local:19291/api/v1/receive"]
30 | }
31 | ]
32 | ```
33 | - Soft tenancy - If a hashring specifies no explicit tenants, then any tenant is considered a valid match; this allows a cluster to provide soft tenancy. Requests whose tenant ID explicitly matches no other hashring automatically land in this soft-tenancy hashring. All incoming remote write requests that don't set the tenant header in the HTTP request fall under soft tenancy, and the default tenant ID (configurable through the flag `--receive.default-tenant-id`) is attached to their metrics.
34 | - Hard tenancy - Hard tenants must set the tenant header in every HTTP remote write request. Hard tenants in the Thanos receiver are configured in a hashring config file, and changes to this configuration must be orchestrated by a configuration management tool. When a remote write request is received by a Thanos receiver, it goes through the list of configured hard tenants. Each hard tenant has a set of receiver endpoints belonging to it.
35 | **P.S.: A remote write request can initially be received by any receiver instance; however, it will only be dispatched to the receiver endpoints that correspond to that hard tenant.**
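The tenant-matching behaviour described above can be sketched in Python. This is a simplified illustration, not the actual Thanos implementation (Thanos additionally consistent-hashes each time series across a hashring's endpoints for replication); the endpoint names are made up:

```python
# Simplified sketch: match a tenant ID against hashring configs.
# A hashring that lists the tenant wins; a hashring with no
# "tenants" key acts as the soft-tenancy fallback.

def find_hashring(tenant, hashrings):
    """Return the name of the hashring handling this tenant."""
    soft = None
    for ring in hashrings:
        tenants = ring.get("tenants", [])
        if tenant in tenants:
            return ring["hashring"]          # explicit (hard) match
        if not tenants and soft is None:
            soft = ring["hashring"]          # remember the soft fallback
    return soft

hashrings = [
    {"hashring": "tenant-a", "tenants": ["tenant-a"], "endpoints": ["a-1:19291"]},
    {"hashring": "soft-tenants", "endpoints": ["s-1:19291"]},
]

print(find_hashring("tenant-a", hashrings))  # tenant-a
print(find_hashring("unknown", hashrings))   # soft-tenants
```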
36 |
37 | # Architecture
38 | In this blog post, we are trying to implement the following architecture.
39 |
40 | 
41 | A simple multi-tenancy model with thanos receive
42 |
43 |
44 | Brief overview on the above architecture:
45 | - We have 3 Prometheuses running in namespaces: `sre`, `tenant-a` and `tenant-b` respectively.
46 | - The Prometheus in the `sre` namespace is demonstrated as a soft tenant, so it does not set any additional HTTP headers on its remote write requests.
47 | - The Prometheuses in `tenant-a` and `tenant-b` are demonstrated as hard tenants. The NGINX servers in those respective namespaces are used to set the tenant header for each tenant's Prometheus.
48 | - From a security point of view, we are only exposing the Thanos receive statefulset responsible for the soft tenant (the sre Prometheus).
49 | - For both Thanos receiver statefulsets (soft and hard) we are setting a [replication factor=2](https://github.com/dsayan154/thanos-receiver-demo/blob/fbaf6e4cfdf96c0840b71029ed2d51ca1c8ca94e/manifests/thanos-receive-hashring-0.yaml#L35). This ensures that the incoming data gets replicated between two receiver pods.
50 | - The remote write request which is received by the [soft tenant receiver](https://github.com/dsayan154/thanos-receiver-demo/blob/master/manifests/thanos-receive-default.yaml) instance is forwarded to the [hard tenant thanos receiver](https://github.com/dsayan154/thanos-receiver-demo/blob/master/manifests/thanos-receive-hashring-0.yaml). This routing is based on the hashring config.
51 |
52 | The above architecture obviously misses a few features that one would expect from a multi-tenant architecture, e.g. tenant isolation, authentication, etc. This blog post focuses only on how we can use the [Thanos Receiver](https://thanos.io/v0.14/components/receive) to store time series from multiple Prometheus instances to achieve multi-tenancy. The idea behind this setup is also to show how we can make the Prometheus on the tenant side nearly stateless while maintaining data resiliency.
53 | > We will improve this architecture in the upcoming posts.
54 | We will be building this demo on Thanos v0.14.
55 | # Prerequisites
56 | - [KIND](https://kind.sigs.k8s.io/docs/user/quick-start/) or a managed cluster/minikube
57 | - `kubectl`
58 | - `helm`
59 | - `jq`(optional)
60 |
61 | # Cluster setup
62 |
63 | Clone the repo:
64 | ```
65 | git clone https://github.com/dsayan154/thanos-receiver-demo.git
66 | ```
67 |
68 | ## Setup a local [KIND](https://kind.sigs.k8s.io/docs/user/quick-start/) cluster
69 | 1. `cd local-cluster/`
70 | 2. Create the cluster with calico, ingress and extra-port mappings: `./create-cluster.sh cluster-1 kind-calico-cluster-1.yaml`
71 | 3. Deploy the nginx ingress controller: `kubectl apply -f nginx-ingress-controller.yaml`
72 | 4. `cd -`
73 |
74 | ## Install minio as object storage
75 | 1. `kubectl create ns minio`
76 | 2. `helm repo add bitnami https://charts.bitnami.com/bitnami`
77 | 3. `helm upgrade --install --namespace minio my-minio bitnami/minio --set ingress.enabled=true --set accessKey.password=minio --set secretKey.password=minio123 --debug`
78 | 4. Add the following line to */etc/hosts*: `127.0.0.1 minio.local`
5 | Log in to http://minio.local/ with credentials `minio:minio123`.
80 | 6. Create a bucket with name **thanos**
81 |
82 | ## Install Thanos Components
83 | ### Create shared components
84 | ```
85 | kubectl create ns thanos
86 |
## Create a file _thanos-s3.yaml_ containing the minio object storage config:
88 | cat << EOF > thanos-s3.yaml
89 | type: S3
90 | config:
91 | bucket: "thanos"
92 | endpoint: "my-minio.minio.svc.cluster.local:9000"
93 | access_key: "minio"
94 | secret_key: "minio123"
95 | insecure: true
96 | EOF
97 |
98 | ## Create secret from the file created above to be used with the thanos components e.g store, receiver
99 | kubectl -n thanos create secret generic thanos-objectstorage --from-file=thanos-s3.yaml
100 | kubectl -n thanos label secrets thanos-objectstorage part-of=thanos
101 |
102 | ## go to manifests directory
103 | cd manifests/
104 | ```
105 |
106 | ### Install Thanos Receive Controller
107 | - Deploy a thanos-receiver-controller to auto-update the hashring configmap when the thanos receiver statefulset scales:
108 | ```
109 | kubectl apply -f thanos-receiver-hashring-configmap-base.yaml
110 | kubectl apply -f thanos-receive-controller.yaml
111 | ```
112 |
The deployment above generates a new configmap `thanos-receive-generated` and keeps it updated with the list of endpoints of any statefulset carrying the label `controller.receive.thanos.io/hashring=hashring-0` and/or `controller.receive.thanos.io/hashring=default`. The thanos receiver pods load the `thanos-receive-generated` configmap.
>NOTE: The __default__ and __hashring-0__ hashrings are responsible for soft tenancy and hard tenancy, respectively.
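For illustration, the generated hashring config could look something like the following (the pod DNS names and port below are hypothetical; the real content depends on the statefulset names and scale):
```json
[
  {
    "hashring": "default",
    "endpoints": [
      "thanos-receive-default-0.thanos-receive-default.thanos.svc.cluster.local:10901"
    ]
  },
  {
    "hashring": "hashring-0",
    "tenants": ["tenant-a", "tenant-b"],
    "endpoints": [
      "thanos-receive-hashring-0-0.thanos-receive-hashring-0.thanos.svc.cluster.local:10901",
      "thanos-receive-hashring-0-1.thanos-receive-hashring-0.thanos.svc.cluster.local:10901"
    ]
  }
]
```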
115 |
116 | ### Install Thanos Receiver
117 | 1. Create the thanos-receiver statefulsets and headless services for soft and hard tenants.
118 | > We are not using persistent volumes just for this demo.
119 | ```
120 | kubectl apply -f thanos-receive-default.yaml
121 | kubectl apply -f thanos-receive-hashring-0.yaml
122 | ```
> The receiver pods are configured to store 15d of data, with a replication factor of 2.
124 | 2. Create a service in front of the thanos receive statefulset for the soft tenants.
125 | ```
126 | kubectl apply -f thanos-receive-service.yaml
127 | ```
128 | > The pods of **thanos-receive-default** statefulset would load-balance the incoming requests to other receiver pods based on the hashring config maintained by the thanos receiver controller.
129 | ### Install Thanos Store
1. Create the thanos store statefulset.
131 | ```
132 | kubectl apply -f thanos-store-shard-0.yaml
133 | ```
> We have configured it such that the thanos query only fans out to the store for data older than 2w. Data from the last 15d is served by the receiver pods. P.S.: The 1d overlap between the two time windows is intentional, for data resiliency.
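These time windows are typically carved out with flags like the following sketch (flag values here are inferred from the description above; the exact values live in the linked manifests):
```yaml
# thanos receive: retain the most recent 15d in the local TSDB
- --tsdb.retention=15d
# thanos store: only serve blocks older than 2 weeks from object storage
- --max-time=-2w
```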
135 | ### Install Thanos Query
1. Create a thanos query deployment and expose it through a service and an ingress.
137 | ```
138 | kubectl apply -f thanos-query.yaml
139 | ```
140 | > We configure the thanos query to connect to receiver(s) and store(s) for fanning out queries.
141 |
142 | ## Install Prometheus(es)
### Create shared resources
144 | ```
145 | kubectl create ns sre
146 | kubectl create ns tenant-a
147 | kubectl create ns tenant-b
148 | ```
### Install Prometheus Operator and Prometheus
We install the [prometheus-operator](https://github.com/prometheus-operator/prometheus-operator) and a default prometheus to monitor the cluster.
151 | ```
152 | helm upgrade --namespace sre --debug --install cluster-monitor stable/prometheus-operator \
153 | --set prometheus.ingress.enabled=true \
154 | --set prometheus.ingress.hosts[0]="cluster.prometheus.local" \
155 | --set prometheus.prometheusSpec.remoteWrite[0].url="http://thanos-receive.thanos.svc.cluster.local:19291/api/v1/receive" \
156 | --set alertmanager.ingress.enabled=true \
157 | --set alertmanager.ingress.hosts[0]="cluster.alertmanager.local" \
158 | --set grafana.ingress.enabled=true --set grafana.ingress.hosts[0]="grafana.local"
159 | ```
### Install Prometheus and ServiceMonitor for tenant-a
161 | In _tenant-a_ namespace:
- Deploy an nginx proxy to forward the requests from prometheus to the _thanos-receive_ service in the _thanos_ namespace. It also sets the tenant header of the outgoing requests.
163 | ```
164 | kubectl apply -f nginx-proxy-a.yaml
165 | ```
166 | - Create a [prometheus](https://coreos.com/operators/prometheus/docs/latest/api.html#prometheus) and a [servicemonitor](https://coreos.com/operators/prometheus/docs/latest/api.html#servicemonitor) to monitor itself
167 | ```
168 | kubectl apply -f prometheus-tenant-a.yaml
169 | ```
170 |
### Install Prometheus and ServiceMonitor for tenant-b
172 | In _tenant-b_ namespace:
- Deploy an nginx proxy to forward the requests from prometheus to the _thanos-receive_ service in the _thanos_ namespace. It also sets the tenant header of the outgoing requests.
174 | ```
175 | kubectl apply -f nginx-proxy-b.yaml
176 | ```
177 | - Create a [prometheus](https://coreos.com/operators/prometheus/docs/latest/api.html#prometheus) and a [servicemonitor](https://coreos.com/operators/prometheus/docs/latest/api.html#servicemonitor) to monitor itself
178 | ```
179 | kubectl apply -f prometheus-tenant-b.yaml
180 | ```
181 |
182 | ### Add some extra localhost aliases
183 | Add the following lines to `/etc/hosts` :
184 | ```
185 | 127.0.0.1 minio.local
186 | 127.0.0.1 query.local
187 | 127.0.0.1 cluster.prometheus.local
188 | 127.0.0.1 tenant-a.prometheus.local
189 | 127.0.0.1 tenant-b.prometheus.local
190 | ```
191 | The above would allow you to locally access the [**minio**](http://minio.local), [**thanos query**](http://query.local), [**cluster monitoring prometheus**](http://cluster.prometheus.local), [**tenant-a's prometheus**](http://tenant-a.prometheus.local), [**tenant-b's prometheus**](http://tenant-b.prometheus.local). We are also exposing [Alertmanager](https://prometheus.io/docs/alerting/latest/overview/) and [Grafana](https://prometheus.io/docs/visualization/grafana/), but we don't require those in this demo.
192 |
193 | ### Test the setup
Access the thanos query UI at http://query.local/graph and execute the query `count (up) by (tenant_id)`. We should see the following output:
195 |
196 | 
197 | Query Output
198 |
Alternatively, if you have `jq` installed, you can run the following command:
200 |
201 | ```
$ curl -sG http://query.local/api/v1/query --data-urlencode 'query=count(up) by (tenant_id)' | jq -r '.data.result[]|"\(.metric) \(.value[1])"'
203 | {"tenant_id":"a"} 1
204 | {"tenant_id":"b"} 1
205 | {"tenant_id":"cluster"} 17
206 | ```
Either of the above outputs shows that the *cluster*, *a* and *b* Prometheus tenants have 17, 1 and 1 scrape targets up and running, respectively. All this data is being stored in the thanos receiver in real time by Prometheus' [remote write queue](https://prometheus.io/docs/practices/remote_write/#remote-write-characteristics). This model creates an opportunity for the tenant-side Prometheus to be nearly stateless yet maintain data resiliency.
208 |
--------------------------------------------------------------------------------
/images/arch/thanos-receive-multi-tenancy.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dsayan154/thanos-receiver-demo/e26082a313179a629c64cf56e92c491698dc3740/images/arch/thanos-receive-multi-tenancy.png
--------------------------------------------------------------------------------
/images/screenshots/demo_test_result.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/dsayan154/thanos-receiver-demo/e26082a313179a629c64cf56e92c491698dc3740/images/screenshots/demo_test_result.png
--------------------------------------------------------------------------------
/local-cluster/create-cluster.sh:
--------------------------------------------------------------------------------
1 | #!/bin/sh
2 | help() {
3 | echo "Usage: create-cluster <cluster-name> <kind-config-file>"
4 | }
5 | if [ "$#" -lt 2 ]
6 | then
7 | help
8 | exit 1
9 | fi
10 | clusterName=$1
11 | configFilePath=$2
12 | kind create cluster --name ${clusterName} --config ${configFilePath}
13 | kubectl get pods -n kube-system
14 | kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
15 | kubectl -n kube-system set env daemonset/calico-node FELIX_IGNORELOOSERPF=true
16 | kubectl -n kube-system get pods | grep calico-node
17 |
--------------------------------------------------------------------------------
/local-cluster/kind-calico-cluster-1.yaml:
--------------------------------------------------------------------------------
1 | kind: Cluster
2 | apiVersion: kind.x-k8s.io/v1alpha4
3 | networking:
4 | disableDefaultCNI: true
5 | podSubnet: 192.168.0.0/16
6 | nodes:
7 | - role: control-plane
8 | image: kindest/node:v1.17.5
9 | kubeadmConfigPatches:
10 | - |
11 | kind: InitConfiguration
12 | nodeRegistration:
13 | kubeletExtraArgs:
14 | node-labels: "ingress-ready=true"
15 | extraPortMappings:
16 | - containerPort: 80
17 | hostPort: 80
18 | protocol: TCP
19 | - containerPort: 443
20 | hostPort: 443
21 | protocol: TCP
22 |
--------------------------------------------------------------------------------
/local-cluster/nginx-ingress-controller.yaml:
--------------------------------------------------------------------------------
1 |
2 | apiVersion: v1
3 | kind: Namespace
4 | metadata:
5 | name: ingress-nginx
6 | labels:
7 | app.kubernetes.io/name: ingress-nginx
8 | app.kubernetes.io/instance: ingress-nginx
9 |
10 | ---
11 | # Source: ingress-nginx/templates/controller-serviceaccount.yaml
12 | apiVersion: v1
13 | kind: ServiceAccount
14 | metadata:
15 | labels:
16 | helm.sh/chart: ingress-nginx-2.11.1
17 | app.kubernetes.io/name: ingress-nginx
18 | app.kubernetes.io/instance: ingress-nginx
19 | app.kubernetes.io/version: 0.34.1
20 | app.kubernetes.io/managed-by: Helm
21 | app.kubernetes.io/component: controller
22 | name: ingress-nginx
23 | namespace: ingress-nginx
24 | ---
25 | # Source: ingress-nginx/templates/controller-configmap.yaml
26 | apiVersion: v1
27 | kind: ConfigMap
28 | metadata:
29 | labels:
30 | helm.sh/chart: ingress-nginx-2.11.1
31 | app.kubernetes.io/name: ingress-nginx
32 | app.kubernetes.io/instance: ingress-nginx
33 | app.kubernetes.io/version: 0.34.1
34 | app.kubernetes.io/managed-by: Helm
35 | app.kubernetes.io/component: controller
36 | name: ingress-nginx-controller
37 | namespace: ingress-nginx
38 | data:
39 | ---
40 | # Source: ingress-nginx/templates/clusterrole.yaml
41 | apiVersion: rbac.authorization.k8s.io/v1
42 | kind: ClusterRole
43 | metadata:
44 | labels:
45 | helm.sh/chart: ingress-nginx-2.11.1
46 | app.kubernetes.io/name: ingress-nginx
47 | app.kubernetes.io/instance: ingress-nginx
48 | app.kubernetes.io/version: 0.34.1
49 | app.kubernetes.io/managed-by: Helm
50 | name: ingress-nginx
51 | rules:
52 | - apiGroups:
53 | - ''
54 | resources:
55 | - configmaps
56 | - endpoints
57 | - nodes
58 | - pods
59 | - secrets
60 | verbs:
61 | - list
62 | - watch
63 | - apiGroups:
64 | - ''
65 | resources:
66 | - nodes
67 | verbs:
68 | - get
69 | - apiGroups:
70 | - ''
71 | resources:
72 | - services
73 | verbs:
74 | - get
75 | - list
76 | - update
77 | - watch
78 | - apiGroups:
79 | - extensions
80 | - networking.k8s.io # k8s 1.14+
81 | resources:
82 | - ingresses
83 | verbs:
84 | - get
85 | - list
86 | - watch
87 | - apiGroups:
88 | - ''
89 | resources:
90 | - events
91 | verbs:
92 | - create
93 | - patch
94 | - apiGroups:
95 | - extensions
96 | - networking.k8s.io # k8s 1.14+
97 | resources:
98 | - ingresses/status
99 | verbs:
100 | - update
101 | - apiGroups:
102 | - networking.k8s.io # k8s 1.14+
103 | resources:
104 | - ingressclasses
105 | verbs:
106 | - get
107 | - list
108 | - watch
109 | ---
110 | # Source: ingress-nginx/templates/clusterrolebinding.yaml
111 | apiVersion: rbac.authorization.k8s.io/v1
112 | kind: ClusterRoleBinding
113 | metadata:
114 | labels:
115 | helm.sh/chart: ingress-nginx-2.11.1
116 | app.kubernetes.io/name: ingress-nginx
117 | app.kubernetes.io/instance: ingress-nginx
118 | app.kubernetes.io/version: 0.34.1
119 | app.kubernetes.io/managed-by: Helm
120 | name: ingress-nginx
121 | roleRef:
122 | apiGroup: rbac.authorization.k8s.io
123 | kind: ClusterRole
124 | name: ingress-nginx
125 | subjects:
126 | - kind: ServiceAccount
127 | name: ingress-nginx
128 | namespace: ingress-nginx
129 | ---
130 | # Source: ingress-nginx/templates/controller-role.yaml
131 | apiVersion: rbac.authorization.k8s.io/v1
132 | kind: Role
133 | metadata:
134 | labels:
135 | helm.sh/chart: ingress-nginx-2.11.1
136 | app.kubernetes.io/name: ingress-nginx
137 | app.kubernetes.io/instance: ingress-nginx
138 | app.kubernetes.io/version: 0.34.1
139 | app.kubernetes.io/managed-by: Helm
140 | app.kubernetes.io/component: controller
141 | name: ingress-nginx
142 | namespace: ingress-nginx
143 | rules:
144 | - apiGroups:
145 | - ''
146 | resources:
147 | - namespaces
148 | verbs:
149 | - get
150 | - apiGroups:
151 | - ''
152 | resources:
153 | - configmaps
154 | - pods
155 | - secrets
156 | - endpoints
157 | verbs:
158 | - get
159 | - list
160 | - watch
161 | - apiGroups:
162 | - ''
163 | resources:
164 | - services
165 | verbs:
166 | - get
167 | - list
168 | - update
169 | - watch
170 | - apiGroups:
171 | - extensions
172 | - networking.k8s.io # k8s 1.14+
173 | resources:
174 | - ingresses
175 | verbs:
176 | - get
177 | - list
178 | - watch
179 | - apiGroups:
180 | - extensions
181 | - networking.k8s.io # k8s 1.14+
182 | resources:
183 | - ingresses/status
184 | verbs:
185 | - update
186 | - apiGroups:
187 | - networking.k8s.io # k8s 1.14+
188 | resources:
189 | - ingressclasses
190 | verbs:
191 | - get
192 | - list
193 | - watch
194 | - apiGroups:
195 | - ''
196 | resources:
197 | - configmaps
198 | resourceNames:
199 | - ingress-controller-leader-nginx
200 | verbs:
201 | - get
202 | - update
203 | - apiGroups:
204 | - ''
205 | resources:
206 | - configmaps
207 | verbs:
208 | - create
209 | - apiGroups:
210 | - ''
211 | resources:
212 | - endpoints
213 | verbs:
214 | - create
215 | - get
216 | - update
217 | - apiGroups:
218 | - ''
219 | resources:
220 | - events
221 | verbs:
222 | - create
223 | - patch
224 | ---
225 | # Source: ingress-nginx/templates/controller-rolebinding.yaml
226 | apiVersion: rbac.authorization.k8s.io/v1
227 | kind: RoleBinding
228 | metadata:
229 | labels:
230 | helm.sh/chart: ingress-nginx-2.11.1
231 | app.kubernetes.io/name: ingress-nginx
232 | app.kubernetes.io/instance: ingress-nginx
233 | app.kubernetes.io/version: 0.34.1
234 | app.kubernetes.io/managed-by: Helm
235 | app.kubernetes.io/component: controller
236 | name: ingress-nginx
237 | namespace: ingress-nginx
238 | roleRef:
239 | apiGroup: rbac.authorization.k8s.io
240 | kind: Role
241 | name: ingress-nginx
242 | subjects:
243 | - kind: ServiceAccount
244 | name: ingress-nginx
245 | namespace: ingress-nginx
246 | ---
247 | # Source: ingress-nginx/templates/controller-service-webhook.yaml
248 | apiVersion: v1
249 | kind: Service
250 | metadata:
251 | labels:
252 | helm.sh/chart: ingress-nginx-2.11.1
253 | app.kubernetes.io/name: ingress-nginx
254 | app.kubernetes.io/instance: ingress-nginx
255 | app.kubernetes.io/version: 0.34.1
256 | app.kubernetes.io/managed-by: Helm
257 | app.kubernetes.io/component: controller
258 | name: ingress-nginx-controller-admission
259 | namespace: ingress-nginx
260 | spec:
261 | type: ClusterIP
262 | ports:
263 | - name: https-webhook
264 | port: 443
265 | targetPort: webhook
266 | selector:
267 | app.kubernetes.io/name: ingress-nginx
268 | app.kubernetes.io/instance: ingress-nginx
269 | app.kubernetes.io/component: controller
270 | ---
271 | # Source: ingress-nginx/templates/controller-service.yaml
272 | apiVersion: v1
273 | kind: Service
274 | metadata:
275 | labels:
276 | helm.sh/chart: ingress-nginx-2.11.1
277 | app.kubernetes.io/name: ingress-nginx
278 | app.kubernetes.io/instance: ingress-nginx
279 | app.kubernetes.io/version: 0.34.1
280 | app.kubernetes.io/managed-by: Helm
281 | app.kubernetes.io/component: controller
282 | name: ingress-nginx-controller
283 | namespace: ingress-nginx
284 | spec:
285 | type: NodePort
286 | ports:
287 | - name: http
288 | port: 80
289 | protocol: TCP
290 | targetPort: http
291 | - name: https
292 | port: 443
293 | protocol: TCP
294 | targetPort: https
295 | selector:
296 | app.kubernetes.io/name: ingress-nginx
297 | app.kubernetes.io/instance: ingress-nginx
298 | app.kubernetes.io/component: controller
299 | ---
300 | # Source: ingress-nginx/templates/controller-deployment.yaml
301 | apiVersion: apps/v1
302 | kind: Deployment
303 | metadata:
304 | labels:
305 | helm.sh/chart: ingress-nginx-2.11.1
306 | app.kubernetes.io/name: ingress-nginx
307 | app.kubernetes.io/instance: ingress-nginx
308 | app.kubernetes.io/version: 0.34.1
309 | app.kubernetes.io/managed-by: Helm
310 | app.kubernetes.io/component: controller
311 | name: ingress-nginx-controller
312 | namespace: ingress-nginx
313 | spec:
314 | selector:
315 | matchLabels:
316 | app.kubernetes.io/name: ingress-nginx
317 | app.kubernetes.io/instance: ingress-nginx
318 | app.kubernetes.io/component: controller
319 | revisionHistoryLimit: 10
320 | strategy:
321 | rollingUpdate:
322 | maxUnavailable: 1
323 | type: RollingUpdate
324 | minReadySeconds: 0
325 | template:
326 | metadata:
327 | labels:
328 | app.kubernetes.io/name: ingress-nginx
329 | app.kubernetes.io/instance: ingress-nginx
330 | app.kubernetes.io/component: controller
331 | spec:
332 | dnsPolicy: ClusterFirst
333 | containers:
334 | - name: controller
335 | image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20
336 | imagePullPolicy: IfNotPresent
337 | lifecycle:
338 | preStop:
339 | exec:
340 | command:
341 | - /wait-shutdown
342 | args:
343 | - /nginx-ingress-controller
344 | - --election-id=ingress-controller-leader
345 | - --ingress-class=nginx
346 | - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
347 | - --validating-webhook=:8443
348 | - --validating-webhook-certificate=/usr/local/certificates/cert
349 | - --validating-webhook-key=/usr/local/certificates/key
350 | - --publish-status-address=localhost
351 | securityContext:
352 | capabilities:
353 | drop:
354 | - ALL
355 | add:
356 | - NET_BIND_SERVICE
357 | runAsUser: 101
358 | allowPrivilegeEscalation: true
359 | env:
360 | - name: POD_NAME
361 | valueFrom:
362 | fieldRef:
363 | fieldPath: metadata.name
364 | - name: POD_NAMESPACE
365 | valueFrom:
366 | fieldRef:
367 | fieldPath: metadata.namespace
368 | livenessProbe:
369 | httpGet:
370 | path: /healthz
371 | port: 10254
372 | scheme: HTTP
373 | initialDelaySeconds: 10
374 | periodSeconds: 10
375 | timeoutSeconds: 1
376 | successThreshold: 1
377 | failureThreshold: 5
378 | readinessProbe:
379 | httpGet:
380 | path: /healthz
381 | port: 10254
382 | scheme: HTTP
383 | initialDelaySeconds: 10
384 | periodSeconds: 10
385 | timeoutSeconds: 1
386 | successThreshold: 1
387 | failureThreshold: 3
388 | ports:
389 | - name: http
390 | containerPort: 80
391 | protocol: TCP
392 | hostPort: 80
393 | - name: https
394 | containerPort: 443
395 | protocol: TCP
396 | hostPort: 443
397 | - name: webhook
398 | containerPort: 8443
399 | protocol: TCP
400 | volumeMounts:
401 | - name: webhook-cert
402 | mountPath: /usr/local/certificates/
403 | readOnly: true
404 | resources:
405 | requests:
406 | cpu: 100m
407 | memory: 90Mi
408 | nodeSelector:
409 | ingress-ready: 'true'
410 | tolerations:
411 | - effect: NoSchedule
412 | key: node-role.kubernetes.io/master
413 | operator: Equal
414 | serviceAccountName: ingress-nginx
415 | terminationGracePeriodSeconds: 0
416 | volumes:
417 | - name: webhook-cert
418 | secret:
419 | secretName: ingress-nginx-admission
420 | ---
421 | # Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
422 | # before changing this value, check the required kubernetes version
423 | # https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
424 | apiVersion: admissionregistration.k8s.io/v1beta1
425 | kind: ValidatingWebhookConfiguration
426 | metadata:
427 | labels:
428 | helm.sh/chart: ingress-nginx-2.11.1
429 | app.kubernetes.io/name: ingress-nginx
430 | app.kubernetes.io/instance: ingress-nginx
431 | app.kubernetes.io/version: 0.34.1
432 | app.kubernetes.io/managed-by: Helm
433 | app.kubernetes.io/component: admission-webhook
434 | name: ingress-nginx-admission
435 | webhooks:
436 | - name: validate.nginx.ingress.kubernetes.io
437 | rules:
438 | - apiGroups:
439 | - extensions
440 | - networking.k8s.io
441 | apiVersions:
442 | - v1beta1
443 | operations:
444 | - CREATE
445 | - UPDATE
446 | resources:
447 | - ingresses
448 | failurePolicy: Fail
449 | sideEffects: None
450 | admissionReviewVersions:
451 | - v1
452 | - v1beta1
453 | clientConfig:
454 | service:
455 | namespace: ingress-nginx
456 | name: ingress-nginx-controller-admission
457 | path: /extensions/v1beta1/ingresses
458 | ---
459 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
460 | apiVersion: v1
461 | kind: ServiceAccount
462 | metadata:
463 | name: ingress-nginx-admission
464 | annotations:
465 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
466 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
467 | labels:
468 | helm.sh/chart: ingress-nginx-2.11.1
469 | app.kubernetes.io/name: ingress-nginx
470 | app.kubernetes.io/instance: ingress-nginx
471 | app.kubernetes.io/version: 0.34.1
472 | app.kubernetes.io/managed-by: Helm
473 | app.kubernetes.io/component: admission-webhook
474 | namespace: ingress-nginx
475 | ---
476 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
477 | apiVersion: rbac.authorization.k8s.io/v1
478 | kind: ClusterRole
479 | metadata:
480 | name: ingress-nginx-admission
481 | annotations:
482 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
483 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
484 | labels:
485 | helm.sh/chart: ingress-nginx-2.11.1
486 | app.kubernetes.io/name: ingress-nginx
487 | app.kubernetes.io/instance: ingress-nginx
488 | app.kubernetes.io/version: 0.34.1
489 | app.kubernetes.io/managed-by: Helm
490 | app.kubernetes.io/component: admission-webhook
491 | rules:
492 | - apiGroups:
493 | - admissionregistration.k8s.io
494 | resources:
495 | - validatingwebhookconfigurations
496 | verbs:
497 | - get
498 | - update
499 | ---
500 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
501 | apiVersion: rbac.authorization.k8s.io/v1
502 | kind: ClusterRoleBinding
503 | metadata:
504 | name: ingress-nginx-admission
505 | annotations:
506 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
507 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
508 | labels:
509 | helm.sh/chart: ingress-nginx-2.11.1
510 | app.kubernetes.io/name: ingress-nginx
511 | app.kubernetes.io/instance: ingress-nginx
512 | app.kubernetes.io/version: 0.34.1
513 | app.kubernetes.io/managed-by: Helm
514 | app.kubernetes.io/component: admission-webhook
515 | roleRef:
516 | apiGroup: rbac.authorization.k8s.io
517 | kind: ClusterRole
518 | name: ingress-nginx-admission
519 | subjects:
520 | - kind: ServiceAccount
521 | name: ingress-nginx-admission
522 | namespace: ingress-nginx
523 | ---
524 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
525 | apiVersion: rbac.authorization.k8s.io/v1
526 | kind: Role
527 | metadata:
528 | name: ingress-nginx-admission
529 | annotations:
530 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
531 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
532 | labels:
533 | helm.sh/chart: ingress-nginx-2.11.1
534 | app.kubernetes.io/name: ingress-nginx
535 | app.kubernetes.io/instance: ingress-nginx
536 | app.kubernetes.io/version: 0.34.1
537 | app.kubernetes.io/managed-by: Helm
538 | app.kubernetes.io/component: admission-webhook
539 | namespace: ingress-nginx
540 | rules:
541 | - apiGroups:
542 | - ''
543 | resources:
544 | - secrets
545 | verbs:
546 | - get
547 | - create
548 | ---
549 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
550 | apiVersion: rbac.authorization.k8s.io/v1
551 | kind: RoleBinding
552 | metadata:
553 | name: ingress-nginx-admission
554 | annotations:
555 | helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
556 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
557 | labels:
558 | helm.sh/chart: ingress-nginx-2.11.1
559 | app.kubernetes.io/name: ingress-nginx
560 | app.kubernetes.io/instance: ingress-nginx
561 | app.kubernetes.io/version: 0.34.1
562 | app.kubernetes.io/managed-by: Helm
563 | app.kubernetes.io/component: admission-webhook
564 | namespace: ingress-nginx
565 | roleRef:
566 | apiGroup: rbac.authorization.k8s.io
567 | kind: Role
568 | name: ingress-nginx-admission
569 | subjects:
570 | - kind: ServiceAccount
571 | name: ingress-nginx-admission
572 | namespace: ingress-nginx
573 | ---
574 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
575 | apiVersion: batch/v1
576 | kind: Job
577 | metadata:
578 | name: ingress-nginx-admission-create
579 | annotations:
580 | helm.sh/hook: pre-install,pre-upgrade
581 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
582 | labels:
583 | helm.sh/chart: ingress-nginx-2.11.1
584 | app.kubernetes.io/name: ingress-nginx
585 | app.kubernetes.io/instance: ingress-nginx
586 | app.kubernetes.io/version: 0.34.1
587 | app.kubernetes.io/managed-by: Helm
588 | app.kubernetes.io/component: admission-webhook
589 | namespace: ingress-nginx
590 | spec:
591 | template:
592 | metadata:
593 | name: ingress-nginx-admission-create
594 | labels:
595 | helm.sh/chart: ingress-nginx-2.11.1
596 | app.kubernetes.io/name: ingress-nginx
597 | app.kubernetes.io/instance: ingress-nginx
598 | app.kubernetes.io/version: 0.34.1
599 | app.kubernetes.io/managed-by: Helm
600 | app.kubernetes.io/component: admission-webhook
601 | spec:
602 | containers:
603 | - name: create
604 | image: docker.io/jettech/kube-webhook-certgen:v1.2.2
605 | imagePullPolicy: IfNotPresent
606 | args:
607 | - create
608 | - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
609 | - --namespace=$(POD_NAMESPACE)
610 | - --secret-name=ingress-nginx-admission
611 | env:
612 | - name: POD_NAMESPACE
613 | valueFrom:
614 | fieldRef:
615 | fieldPath: metadata.namespace
616 | restartPolicy: OnFailure
617 | serviceAccountName: ingress-nginx-admission
618 | securityContext:
619 | runAsNonRoot: true
620 | runAsUser: 2000
621 | ---
622 | # Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
623 | apiVersion: batch/v1
624 | kind: Job
625 | metadata:
626 | name: ingress-nginx-admission-patch
627 | annotations:
628 | helm.sh/hook: post-install,post-upgrade
629 | helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
630 | labels:
631 | helm.sh/chart: ingress-nginx-2.11.1
632 | app.kubernetes.io/name: ingress-nginx
633 | app.kubernetes.io/instance: ingress-nginx
634 | app.kubernetes.io/version: 0.34.1
635 | app.kubernetes.io/managed-by: Helm
636 | app.kubernetes.io/component: admission-webhook
637 | namespace: ingress-nginx
638 | spec:
639 | template:
640 | metadata:
641 | name: ingress-nginx-admission-patch
642 | labels:
643 | helm.sh/chart: ingress-nginx-2.11.1
644 | app.kubernetes.io/name: ingress-nginx
645 | app.kubernetes.io/instance: ingress-nginx
646 | app.kubernetes.io/version: 0.34.1
647 | app.kubernetes.io/managed-by: Helm
648 | app.kubernetes.io/component: admission-webhook
649 | spec:
650 | containers:
651 | - name: patch
652 | image: docker.io/jettech/kube-webhook-certgen:v1.2.2
653 | imagePullPolicy: IfNotPresent
654 | args:
655 | - patch
656 | - --webhook-name=ingress-nginx-admission
657 | - --namespace=$(POD_NAMESPACE)
658 | - --patch-mutating=false
659 | - --secret-name=ingress-nginx-admission
660 | - --patch-failure-policy=Fail
661 | env:
662 | - name: POD_NAMESPACE
663 | valueFrom:
664 | fieldRef:
665 | fieldPath: metadata.namespace
666 | restartPolicy: OnFailure
667 | serviceAccountName: ingress-nginx-admission
668 | securityContext:
669 | runAsNonRoot: true
670 | runAsUser: 2000
671 |
--------------------------------------------------------------------------------
/manifests/nginx-proxy-a.yaml:
--------------------------------------------------------------------------------
1 | kind: ConfigMap
2 | apiVersion: v1
3 | metadata:
4 | labels:
5 | app: nginx-proxy
6 | tenant: a
7 | name: nginx-proxy-a
8 | namespace: tenant-a
9 | data:
10 | nginx.conf: |-
11 | worker_processes 5; ## Default: 1
12 | error_log /dev/stderr;
13 | pid /tmp/nginx.pid;
14 | worker_rlimit_nofile 8192;
15 |
16 | events {
17 | worker_connections 4096; ## Default: 1024
18 | }
19 |
20 | http {
21 | default_type application/octet-stream;
22 | log_format main '$remote_addr - $remote_user [$time_local] $status '
23 | '"$request" $body_bytes_sent "$http_referer" '
24 | '"$http_user_agent" "$http_x_forwarded_for"';
25 | access_log /dev/stderr main;
26 | sendfile on;
27 | tcp_nopush on;
28 | resolver kube-dns.kube-system.svc.cluster.local;
29 |
30 | server { # reverse-proxy and tenant header setting
31 | listen 80;
32 | proxy_set_header THANOS-TENANT a;
33 |
34 | location / {
35 | proxy_pass http://thanos-receive.thanos.svc.cluster.local:19291$request_uri;
36 | }
37 | }
38 | }
39 | ---
40 | apiVersion: apps/v1
41 | kind: Deployment
42 | metadata:
43 | labels:
44 | app: nginx-proxy
45 | tenant: a
46 | name: nginx-proxy-a
47 | namespace: tenant-a
48 | spec:
49 | replicas: 1
50 | selector:
51 | matchLabels:
52 | app: nginx-proxy
53 | tenant: a
54 | part-of: thanos
55 | template:
56 | metadata:
57 | labels:
58 | app: nginx-proxy
59 | tenant: a
60 | part-of: thanos
61 | spec:
62 | containers:
63 | - name: nginx
64 | image: nginx
65 | imagePullPolicy: IfNotPresent
66 | ports:
67 | - name: http
68 | containerPort: 80
69 | volumeMounts:
70 | - name: config-volume
71 | mountPath: /etc/nginx
72 | volumes:
73 | - name: config-volume
74 | configMap:
75 | name: nginx-proxy-a
76 | ---
77 | apiVersion: v1
78 | kind: Service
79 | metadata:
80 | labels:
81 | app: nginx-proxy
82 | tenant: a
83 | servicemonitor: default-servicemonitor
84 | monitor: "false"
85 | name: nginx-proxy-a
86 | namespace: tenant-a
87 | spec:
88 | ports:
89 | - name: web
90 | port: 80
91 | targetPort: http
92 | selector:
93 | app: nginx-proxy
94 | tenant: a
95 |
--------------------------------------------------------------------------------
/manifests/nginx-proxy-b.yaml:
--------------------------------------------------------------------------------
1 | kind: ConfigMap
2 | apiVersion: v1
3 | metadata:
4 | labels:
5 | app: nginx-proxy
6 | tenant: b
7 | name: nginx-proxy-b
8 | namespace: tenant-b
9 | data:
10 | nginx.conf: |-
11 | worker_processes 5; ## Default: 1
12 | error_log /dev/stderr;
13 | pid /tmp/nginx.pid;
14 | worker_rlimit_nofile 8192;
15 |
16 | events {
17 | worker_connections 4096; ## Default: 1024
18 | }
19 |
20 | http {
21 | default_type application/octet-stream;
22 | log_format main '$remote_addr - $remote_user [$time_local] $status '
23 | '"$request" $body_bytes_sent "$http_referer" '
24 | '"$http_user_agent" "$http_x_forwarded_for"';
25 | access_log /dev/stderr main;
26 | sendfile on;
27 | tcp_nopush on;
28 | resolver kube-dns.kube-system.svc.cluster.local;
29 |
30 | server { # reverse-proxy and tenant header setting
31 | listen 80;
32 | proxy_set_header THANOS-TENANT b;
33 |
34 | location / {
35 | proxy_pass http://thanos-receive.thanos.svc.cluster.local:19291$request_uri;
36 | }
37 | }
38 | }
39 | ---
40 | apiVersion: apps/v1
41 | kind: Deployment
42 | metadata:
43 | labels:
44 | app: nginx-proxy
45 | tenant: b
46 | name: nginx-proxy-b
47 | namespace: tenant-b
48 | spec:
49 | replicas: 1
50 | selector:
51 | matchLabels:
52 | app: nginx-proxy
53 | tenant: b
54 | part-of: thanos
55 | template:
56 | metadata:
57 | labels:
58 | app: nginx-proxy
59 | tenant: b
60 | part-of: thanos
61 | spec:
62 | containers:
63 | - name: nginx
64 | image: nginx
65 | imagePullPolicy: IfNotPresent
66 | ports:
67 | - name: http
68 | containerPort: 80
69 | volumeMounts:
70 | - name: config-volume
71 | mountPath: /etc/nginx
72 | volumes:
73 | - name: config-volume
74 | configMap:
75 | name: nginx-proxy-b
76 | ---
77 | apiVersion: v1
78 | kind: Service
79 | metadata:
80 | labels:
81 | app: nginx-proxy
82 | tenant: b
83 | servicemonitor: default-servicemonitor
84 | monitor: "false"
85 | name: nginx-proxy-b
86 | namespace: tenant-b
87 | spec:
88 | ports:
89 | - name: web
90 | port: 80
91 | targetPort: http
92 | selector:
93 | app: nginx-proxy
94 | tenant: b
95 |
--------------------------------------------------------------------------------
/manifests/prometheus-tenant-a.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | labels:
5 | app: prometheus-operator-prometheus
6 | tenant: a
7 | name: tenant-a-prometheus
8 | namespace: tenant-a
9 | ---
10 | apiVersion: rbac.authorization.k8s.io/v1
11 | kind: Role
12 | metadata:
13 | creationTimestamp: null
14 | name: prometheus-tenants
15 | namespace: tenant-a
16 | rules:
17 | - apiGroups:
18 | - ""
19 | resources:
20 | - '*'
21 | verbs:
22 | - list
23 | - watch
24 | ---
 25 | apiVersion: rbac.authorization.k8s.io/v1
26 | kind: RoleBinding
27 | metadata:
28 | creationTimestamp: null
29 | name: prometheus-tenant-a
30 | namespace: tenant-a
31 | roleRef:
32 | apiGroup: rbac.authorization.k8s.io
33 | kind: Role
34 | name: prometheus-tenants
35 | subjects:
36 | - kind: ServiceAccount
37 | name: tenant-a-prometheus
38 | namespace: tenant-a
39 | ---
40 | apiVersion: monitoring.coreos.com/v1
41 | kind: Prometheus
42 | metadata:
43 | labels:
44 | app: prometheus-operator-prometheus
45 | tenant: a
46 | name: tenant-a-prometheus
47 | namespace: tenant-a
48 | spec:
49 | # alerting:
50 | # alertmanagers:
51 | # - apiVersion: v2
52 | # name: cluster-monitor-prometheus-alertmanager
53 | # namespace: monitoring
54 | # pathPrefix: /
55 | # port: web
56 | baseImage: quay.io/prometheus/prometheus
57 | enableAdminAPI: false
58 | externalUrl: http://tenant-a.prometheus.local/
59 | listenLocal: false
60 | logFormat: logfmt
61 | logLevel: info
62 | portName: web
63 | remoteWrite:
64 | - url: http://nginx-proxy-a:80/api/v1/receive
65 | replicas: 1
66 | retention: 10d
67 | routePrefix: /
68 | securityContext:
69 | fsGroup: 2000
70 | runAsGroup: 2000
71 | runAsNonRoot: true
72 | runAsUser: 1000
73 | serviceMonitorSelector:
74 | matchLabels:
75 | app: prometheus-operator-prometheus
76 | tenant: a
77 | serviceAccountName: tenant-a-prometheus
78 | version: v2.18.1
79 | ---
80 | apiVersion: extensions/v1beta1
81 | kind: Ingress
82 | metadata:
83 | labels:
84 | app: prometheus-operator-prometheus
85 | tenant: a
86 | name: tenant-a-prometheus
87 | namespace: tenant-a
88 | spec:
89 | rules:
90 | - host: tenant-a.prometheus.local
91 | http:
92 | paths:
93 | - backend:
94 | serviceName: tenant-a-prometheus
95 | servicePort: 9090
96 | path: /
97 | ---
98 | apiVersion: v1
99 | kind: Service
100 | metadata:
101 | labels:
102 | app: prometheus-operator-prometheus
103 | tenant: a
104 | servicemonitor: default-servicemonitor
105 | monitor: "true"
106 | name: tenant-a-prometheus
107 | namespace: tenant-a
108 | spec:
109 | ports:
110 | - name: web
111 | port: 9090
112 | protocol: TCP
113 | targetPort: web
114 | selector:
115 | app: prometheus
116 | prometheus: tenant-a-prometheus
117 | sessionAffinity: None
118 | type: ClusterIP
119 | ---
120 | apiVersion: monitoring.coreos.com/v1
121 | kind: ServiceMonitor
122 | metadata:
123 | labels:
124 | app: prometheus-operator-prometheus
125 | tenant: a
126 | name: default-servicemonitor
127 | namespace: tenant-a
128 | spec:
129 | endpoints:
130 | - port: web
131 | path: /metrics
132 | namespaceSelector:
133 | matchNames:
134 | - tenant-a
135 | selector:
136 | matchLabels:
137 | servicemonitor: default-servicemonitor
138 | monitor: "true"
--------------------------------------------------------------------------------
/manifests/prometheus-tenant-b.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | labels:
5 | app: prometheus-operator-prometheus
6 | tenant: b
7 | name: tenant-b-prometheus
8 | namespace: tenant-b
9 | ---
10 | apiVersion: rbac.authorization.k8s.io/v1
11 | kind: Role
12 | metadata:
13 | creationTimestamp: null
14 | name: prometheus-tenants
15 | namespace: tenant-b
16 | rules:
17 | - apiGroups:
18 | - ""
19 | resources:
20 | - '*'
21 | verbs:
22 | - list
23 | - watch
24 | ---
 25 | apiVersion: rbac.authorization.k8s.io/v1
26 | kind: RoleBinding
27 | metadata:
28 | creationTimestamp: null
29 | name: prometheus-tenant-b
30 | namespace: tenant-b
31 | roleRef:
32 | apiGroup: rbac.authorization.k8s.io
33 | kind: Role
34 | name: prometheus-tenants
35 | subjects:
36 | - kind: ServiceAccount
37 | name: tenant-b-prometheus
38 | namespace: tenant-b
39 | ---
40 | apiVersion: monitoring.coreos.com/v1
41 | kind: Prometheus
42 | metadata:
43 | labels:
44 | app: prometheus-operator-prometheus
45 | tenant: b
46 | name: tenant-b-prometheus
47 | namespace: tenant-b
48 | spec:
49 | # alerting:
50 | # alertmanagers:
51 | # - apiVersion: v2
52 | # name: cluster-monitor-prometheus-alertmanager
53 | # namespace: monitoring
54 | # pathPrefix: /
55 | # port: web
56 | baseImage: quay.io/prometheus/prometheus
57 | enableAdminAPI: false
58 | externalUrl: http://tenant-b.prometheus.local/
59 | listenLocal: false
60 | logFormat: logfmt
61 | logLevel: info
62 | remoteWrite:
63 | - url: http://nginx-proxy-b:80/api/v1/receive
64 | replicas: 1
65 | retention: 10d
66 | routePrefix: /
67 | securityContext:
68 | fsGroup: 2000
69 | runAsGroup: 2000
70 | runAsNonRoot: true
71 | runAsUser: 1000
72 | serviceMonitorSelector:
73 | matchLabels:
74 | app: prometheus-operator-prometheus
75 | tenant: b
76 | serviceAccountName: tenant-b-prometheus
77 | version: v2.18.1
78 | ---
79 | apiVersion: extensions/v1beta1
80 | kind: Ingress
81 | metadata:
82 | labels:
83 | app: prometheus-operator-prometheus
84 | tenant: b
85 | name: tenant-b-prometheus
86 | namespace: tenant-b
87 | spec:
88 | rules:
89 | - host: tenant-b.prometheus.local
90 | http:
91 | paths:
92 | - backend:
93 | serviceName: tenant-b-prometheus
94 | servicePort: 9090
95 | path: /
96 | ---
97 | apiVersion: v1
98 | kind: Service
99 | metadata:
100 | labels:
101 | app: prometheus-operator-prometheus
102 | tenant: b
103 | servicemonitor: default-servicemonitor
104 | monitor: "true"
105 | name: tenant-b-prometheus
106 | namespace: tenant-b
107 | spec:
108 | ports:
109 | - name: web
110 | port: 9090
111 | protocol: TCP
112 | targetPort: web
113 | selector:
114 | app: prometheus
115 | prometheus: tenant-b-prometheus
116 | sessionAffinity: None
117 | type: ClusterIP
118 | ---
119 | apiVersion: monitoring.coreos.com/v1
120 | kind: ServiceMonitor
121 | metadata:
122 | labels:
123 | app: prometheus-operator-prometheus
124 | tenant: b
125 | name: default-servicemonitor
126 | namespace: tenant-b
127 | spec:
128 | endpoints:
129 | - port: web
130 | path: /metrics
131 | namespaceSelector:
132 | matchNames:
133 | - tenant-b
134 | selector:
135 | matchLabels:
136 | servicemonitor: default-servicemonitor
137 | monitor: "true"
--------------------------------------------------------------------------------
/manifests/thanos-query.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: Deployment
3 | metadata:
4 | labels:
5 | app: thanos-query
6 | part-of: thanos
7 | name: thanos-query
8 | namespace: thanos
9 | spec:
10 | replicas: 1
11 | selector:
12 | matchLabels:
13 | app: thanos-query
14 | part-of: thanos
15 | template:
16 | metadata:
17 | labels:
18 | app: thanos-query
19 | part-of: thanos
20 | spec:
21 | affinity:
22 | podAntiAffinity:
23 | preferredDuringSchedulingIgnoredDuringExecution:
24 | - podAffinityTerm:
25 | labelSelector:
26 | matchExpressions:
 27 |               - key: app
 28 |                 operator: In
 29 |                 values:
 30 |                 - thanos-query
 31 |             namespaces:
 32 |             - thanos
33 | topologyKey: kubernetes.io/hostname
34 | weight: 100
35 | containers:
36 | - args:
37 | - query
38 | - --log.level=info
39 | - --grpc-address=0.0.0.0:10901
40 | - --http-address=0.0.0.0:9090
41 | - --query.replica-label=prometheus_replica
42 | - --query.replica-label=receive_replica
43 | - --store=dnssrv+_grpc._tcp.thanos-store-shard-0.thanos.svc.cluster.local
44 | - --store=dnssrv+_grpc._tcp.thanos-receive-default.thanos.svc.cluster.local
45 | - --store=dnssrv+_grpc._tcp.thanos-receive-hashring-0.thanos.svc.cluster.local
46 | - --query.timeout=15m
47 | image: quay.io/thanos/thanos:v0.14.0
48 | livenessProbe:
49 | failureThreshold: 4
50 | httpGet:
51 | path: /-/healthy
52 | port: 9090
53 | scheme: HTTP
54 | periodSeconds: 30
55 | name: thanos-query
56 | ports:
57 | - containerPort: 10901
58 | name: grpc
59 | - containerPort: 9090
60 | name: http
61 | readinessProbe:
62 | failureThreshold: 20
63 | httpGet:
64 | path: /-/ready
65 | port: 9090
66 | scheme: HTTP
67 | periodSeconds: 5
68 | terminationMessagePolicy: FallbackToLogsOnError
69 | terminationGracePeriodSeconds: 120
70 | ---
71 | apiVersion: v1
72 | kind: Service
73 | metadata:
74 | labels:
75 | app: thanos-query
76 | part-of: thanos
77 | name: thanos-query
78 | namespace: thanos
79 | spec:
80 | ports:
81 | - name: grpc
82 | port: 10901
83 | targetPort: grpc
84 | - name: http
85 | port: 9090
86 | targetPort: http
87 | selector:
88 | app: thanos-query
89 | part-of: thanos
90 | ---
91 | apiVersion: extensions/v1beta1
92 | kind: Ingress
93 | metadata:
94 | labels:
95 | app: thanos-query
96 | part-of: thanos
97 | name: thanos-query
98 | namespace: thanos
99 | spec:
100 | rules:
101 | - host: query.local
102 | http:
103 | paths:
104 | - backend:
105 | serviceName: thanos-query
106 | servicePort: http
107 | path: /
108 |
--------------------------------------------------------------------------------
/manifests/thanos-receive-controller.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ServiceAccount
3 | metadata:
4 | name: thanos-receive-controller
5 | namespace: thanos
6 | labels:
7 | part-of: thanos
8 | app: thanos-receive-controller
9 | ---
10 | apiVersion: rbac.authorization.k8s.io/v1
11 | kind: Role
12 | metadata:
13 | name: thanos-receive-controller
14 | namespace: thanos
15 | labels:
16 | part-of: thanos
17 | app: thanos-receive-controller
18 | rules:
19 | - apiGroups:
20 | - ""
21 | resources:
22 | - configmaps
23 | verbs:
24 | - list
25 | - watch
26 | - get
27 | - create
28 | - update
29 | - apiGroups:
30 | - apps
31 | resources:
32 | - statefulsets
33 | verbs:
34 | - list
35 | - watch
36 | - get
37 | ---
38 | apiVersion: rbac.authorization.k8s.io/v1
39 | kind: RoleBinding
40 | metadata:
41 | name: thanos-receive-controller
42 | namespace: thanos
43 | labels:
44 | part-of: thanos
45 | app: thanos-receive-controller
46 | roleRef:
47 | apiGroup: rbac.authorization.k8s.io
48 | kind: Role
49 | name: thanos-receive-controller
50 | subjects:
51 | - kind: ServiceAccount
52 | name: thanos-receive-controller
53 | namespace: thanos
54 | ---
55 | apiVersion: apps/v1
56 | kind: Deployment
57 | metadata:
58 | labels:
59 | app: thanos-receive-controller
60 | part-of: thanos
61 | name: thanos-receive-controller
62 | namespace: thanos
63 | spec:
64 | replicas: 1
65 | selector:
66 | matchLabels:
67 | app: thanos-receive-controller
68 | part-of: thanos
69 | template:
70 | metadata:
71 | labels:
72 | app: thanos-receive-controller
73 | part-of: thanos
74 | spec:
75 | containers:
76 | - args:
77 | - --configmap-name=thanos-receive-base
78 | - --configmap-generated-name=thanos-receive-generated
79 | - --file-name=hashrings.json
80 | - --namespace=$(NAMESPACE)
81 | env:
82 | - name: NAMESPACE
83 | valueFrom:
84 | fieldRef:
85 | fieldPath: metadata.namespace
86 | image: quay.io/observatorium/thanos-receive-controller:latest
87 | name: thanos-receive-controller
88 | ports:
89 | - containerPort: 8080
90 | name: http
 91 |       serviceAccountName: thanos-receive-controller
92 |
--------------------------------------------------------------------------------
/manifests/thanos-receive-default.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: StatefulSet
3 | metadata:
4 | labels:
5 | app: thanos-receive
6 | tenant: default-tenant
7 | controller.receive.thanos.io: thanos-receive-controller
8 | controller.receive.thanos.io/hashring: default
9 | part-of: thanos
10 | name: thanos-receive-default
11 | namespace: thanos
12 | spec:
13 | replicas: 4
14 | selector:
15 | matchLabels:
16 | app: thanos-receive
17 | tenant: default-tenant
18 | controller.receive.thanos.io: thanos-receive-controller
19 | controller.receive.thanos.io/hashring: default
20 | part-of: thanos
21 | serviceName: thanos-receive-default
22 | template:
23 | metadata:
24 | labels:
25 | app: thanos-receive
26 | tenant: default-tenant
27 | controller.receive.thanos.io: thanos-receive-controller
28 | controller.receive.thanos.io/hashring: default
29 | part-of: thanos
30 | spec:
31 | affinity: {}
32 | containers:
33 | - args:
34 | - receive
35 | - --receive.replication-factor=2
36 | - --objstore.config=$(OBJSTORE_CONFIG)
37 | - --tsdb.path=/var/thanos/receive
38 | - --label=receive_replica="$(NAME)"
39 | - --receive.local-endpoint=$(NAME).thanos-receive-default.$(NAMESPACE).svc.cluster.local:10901
40 | - --tsdb.retention=15d
41 | - --receive.default-tenant-id=cluster
42 | - --receive.hashrings-file=/var/lib/thanos-receive/hashrings.json
43 | env:
44 | - name: NAME
45 | valueFrom:
46 | fieldRef:
47 | fieldPath: metadata.name
48 | - name: NAMESPACE
49 | valueFrom:
50 | fieldRef:
51 | fieldPath: metadata.namespace
52 | - name: OBJSTORE_CONFIG
53 | valueFrom:
54 | secretKeyRef:
55 | key: thanos-s3.yaml
56 | name: thanos-objectstorage
57 | image: quay.io/thanos/thanos:v0.14.0
58 | livenessProbe:
59 | failureThreshold: 8
60 | httpGet:
61 | path: /-/healthy
62 | port: 10902
63 | scheme: HTTP
64 | periodSeconds: 30
65 | name: thanos-receive
66 | ports:
67 | - containerPort: 10901
68 | name: grpc
69 | - containerPort: 10902
70 | name: http
71 | - containerPort: 19291
72 | name: remote-write
73 | readinessProbe:
74 | failureThreshold: 20
75 | httpGet:
76 | path: /-/ready
77 | port: 10902
78 | scheme: HTTP
79 | periodSeconds: 5
80 | terminationMessagePolicy: FallbackToLogsOnError
81 | volumeMounts:
82 | - mountPath: /var/thanos/receive
83 | name: data
84 | readOnly: false
85 | - mountPath: /var/lib/thanos-receive
86 | name: hashring-config
87 | terminationGracePeriodSeconds: 900
88 | volumes:
89 | - emptyDir: {}
90 | name: data
91 | - configMap:
92 | name: thanos-receive-generated
93 | name: hashring-config
94 | ---
95 | apiVersion: v1
96 | kind: Service
97 | metadata:
98 | labels:
99 | app: thanos-receive
100 | tenant: default-tenant
101 | controller.receive.thanos.io/hashring: default
102 | part-of: thanos
103 | name: thanos-receive-default
104 | namespace: thanos
105 | spec:
106 | clusterIP: None
107 | ports:
108 | - name: grpc
109 | port: 10901
110 | targetPort: 10901
111 | - name: http
112 | port: 10902
113 | targetPort: 10902
114 | - name: remote-write
115 | port: 19291
116 | targetPort: 19291
117 | selector:
118 | app: thanos-receive
119 | tenant: default-tenant
120 | controller.receive.thanos.io: thanos-receive-controller
121 | controller.receive.thanos.io/hashring: default
122 | part-of: thanos
123 |
--------------------------------------------------------------------------------
/manifests/thanos-receive-hashring-0.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: StatefulSet
3 | metadata:
4 | labels:
5 | app: thanos-receive
6 | tenant: ab
7 | controller.receive.thanos.io: thanos-receive-controller
8 | controller.receive.thanos.io/hashring: hashring-0
9 | part-of: thanos
10 | name: thanos-receive-hashring-0
11 | namespace: thanos
12 | spec:
13 | replicas: 4
14 | selector:
15 | matchLabels:
16 | app: thanos-receive
17 | tenant: ab
18 | controller.receive.thanos.io: thanos-receive-controller
19 | controller.receive.thanos.io/hashring: hashring-0
20 | part-of: thanos
21 | serviceName: thanos-receive-hashring-0
22 | template:
23 | metadata:
24 | labels:
25 | app: thanos-receive
26 | tenant: ab
27 | controller.receive.thanos.io: thanos-receive-controller
28 | controller.receive.thanos.io/hashring: hashring-0
29 | part-of: thanos
30 | spec:
31 | affinity: {}
32 | containers:
33 | - args:
34 | - receive
35 | - --receive.replication-factor=2
36 | - --objstore.config=$(OBJSTORE_CONFIG)
37 | - --tsdb.path=/var/thanos/receive
38 | - --label=receive_replica="$(NAME)"
39 | - --receive.local-endpoint=$(NAME).thanos-receive-hashring-0.$(NAMESPACE).svc.cluster.local:10901
40 | - --receive.default-tenant-id=ab
41 | - --tsdb.retention=15d
42 | - --receive.hashrings-file=/var/lib/thanos-receive/hashrings.json
43 | env:
44 | - name: NAME
45 | valueFrom:
46 | fieldRef:
47 | fieldPath: metadata.name
48 | - name: NAMESPACE
49 | valueFrom:
50 | fieldRef:
51 | fieldPath: metadata.namespace
52 | - name: OBJSTORE_CONFIG
53 | valueFrom:
54 | secretKeyRef:
55 | key: thanos-s3.yaml
56 | name: thanos-objectstorage
57 | image: quay.io/thanos/thanos:v0.14.0
58 | livenessProbe:
59 | failureThreshold: 8
60 | httpGet:
61 | path: /-/healthy
62 | port: 10902
63 | scheme: HTTP
64 | periodSeconds: 30
65 | name: thanos-receive
66 | ports:
67 | - containerPort: 10901
68 | name: grpc
69 | - containerPort: 10902
70 | name: http
71 | - containerPort: 19291
72 | name: remote-write
73 | readinessProbe:
74 | failureThreshold: 20
75 | httpGet:
76 | path: /-/ready
77 | port: 10902
78 | scheme: HTTP
79 | periodSeconds: 5
80 | terminationMessagePolicy: FallbackToLogsOnError
81 | volumeMounts:
82 | - mountPath: /var/thanos/receive
83 | name: data
84 | readOnly: false
85 | - mountPath: /var/lib/thanos-receive
86 | name: hashring-config
87 | terminationGracePeriodSeconds: 900
88 | volumes:
89 | - emptyDir: {}
90 | name: data
91 | - configMap:
92 | name: thanos-receive-generated
93 | name: hashring-config
94 | ---
95 | apiVersion: v1
96 | kind: Service
97 | metadata:
98 | labels:
99 | app: thanos-receive
100 | tenant: ab
101 | controller.receive.thanos.io/hashring: hashring-0
102 | part-of: thanos
103 | name: thanos-receive-hashring-0
104 | namespace: thanos
105 | spec:
106 | clusterIP: None
107 | ports:
108 | - name: grpc
109 | port: 10901
110 | targetPort: 10901
111 | - name: http
112 | port: 10902
113 | targetPort: 10902
114 | - name: remote-write
115 | port: 19291
116 | targetPort: 19291
117 | selector:
118 | app: thanos-receive
119 | tenant: ab
120 | controller.receive.thanos.io: thanos-receive-controller
121 | controller.receive.thanos.io/hashring: hashring-0
122 | part-of: thanos
--------------------------------------------------------------------------------
/manifests/thanos-receive-service.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: Service
3 | metadata:
4 | labels:
5 | app: thanos-receive
6 | tenant: default-tenant
7 | controller.receive.thanos.io/hashring: default
8 | part-of: thanos
9 | name: thanos-receive
10 | namespace: thanos
11 | spec:
12 | ports:
13 | - name: grpc
14 | port: 10901
15 | targetPort: 10901
16 | - name: http
17 | port: 10902
18 | targetPort: 10902
19 | - name: remote-write
20 | port: 19291
21 | targetPort: 19291
22 | selector:
23 | app: thanos-receive
24 | tenant: default-tenant
25 | controller.receive.thanos.io: thanos-receive-controller
26 | controller.receive.thanos.io/hashring: default
27 | part-of: thanos
--------------------------------------------------------------------------------
/manifests/thanos-receiver-hashring-configmap-base.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: v1
2 | kind: ConfigMap
3 | metadata:
4 | name: thanos-receive-base
5 | namespace: thanos
6 | labels:
7 | part-of: thanos
8 | app: thanos-receive-controller
9 | data:
10 | hashrings.json: |
11 | [
12 | {
13 | "hashring": "default"
14 | },
15 | {
16 | "hashring": "hashring-0",
 17 |         "tenants": ["a", "b"]
18 | }
19 | ]
20 |
--------------------------------------------------------------------------------
/manifests/thanos-store-shard-0.yaml:
--------------------------------------------------------------------------------
1 | apiVersion: apps/v1
2 | kind: StatefulSet
3 | metadata:
4 | labels:
5 | app: thanos-store
6 | shard: shard-0
7 | tenant: a
8 | part-of: thanos
9 | name: thanos-store-shard-0
10 | namespace: thanos
11 | spec:
12 | replicas: 1
13 | selector:
14 | matchLabels:
15 | app: thanos-store
16 | shard: shard-0
17 | tenant: a
18 | part-of: thanos
19 | serviceName: thanos-store-shard-0
20 | template:
21 | metadata:
22 | labels:
23 | app: thanos-store
24 | shard: shard-0
25 | tenant: a
26 | part-of: thanos
27 | spec:
28 | containers:
29 | - args:
30 | - store
31 | - --data-dir=/var/thanos/store
32 | - --objstore.config=$(OBJSTORE_CONFIG)
33 | - --max-time=-2w
34 | env:
35 | - name: OBJSTORE_CONFIG
36 | valueFrom:
37 | secretKeyRef:
38 | key: thanos-s3.yaml
39 | name: thanos-objectstorage
40 | image: quay.io/thanos/thanos:v0.14.0
41 | livenessProbe:
42 | failureThreshold: 8
43 | httpGet:
44 | path: /-/healthy
45 | port: 10902
46 | scheme: HTTP
47 | periodSeconds: 30
48 | name: thanos-store
49 | ports:
50 | - containerPort: 10901
51 | name: grpc
52 | - containerPort: 10902
53 | name: http
54 | readinessProbe:
55 | failureThreshold: 20
56 | httpGet:
57 | path: /-/ready
58 | port: 10902
59 | scheme: HTTP
60 | periodSeconds: 5
61 | terminationMessagePolicy: FallbackToLogsOnError
62 | volumeMounts:
63 | - mountPath: /var/thanos/store
64 | name: data
65 | readOnly: false
66 | terminationGracePeriodSeconds: 120
67 | volumes:
68 | - emptyDir: {}
69 | name: data
70 | ---
71 | apiVersion: v1
72 | kind: Service
73 | metadata:
74 | labels:
75 | app: thanos-store
76 | shard: shard-0
77 | tenant: a
78 | part-of: thanos
79 | name: thanos-store-shard-0
80 | namespace: thanos
81 | spec:
82 | clusterIP: None
83 | ports:
84 | - name: grpc
85 | port: 10901
86 | targetPort: 10901
87 | - name: http
88 | port: 10902
89 | targetPort: 10902
90 | selector:
91 | app: thanos-store
92 | shard: shard-0
93 | tenant: a
94 | part-of: thanos
95 |
--------------------------------------------------------------------------------