├── .gitignore ├── README.md ├── assets ├── factory-login.gif ├── kerberos-factory.png └── kerberosio-enterprise.png ├── config ├── global.json ├── license.json └── template.json └── kubernetes ├── README.md ├── assets └── kerberosfactory-deployments.svg ├── kerberos-factory ├── assets │ ├── env.js │ └── style.css ├── clusterrole.yaml ├── deployment.yaml └── ingress.yaml ├── metallb └── configmap.yaml └── mongodb ├── mongodb.config.yaml └── values.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | deployment-develop.yaml 2 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Kerberos Factory 2 | 3 | Kerberos Factory takes [the Kerberos Agent](https://github.com/kerberos-io/agent) to another level. [Kerberos Agents](https://github.com/kerberos-io/agent) are deployed where and how you want: as a binary, a Docker container, or inside a Kubernetes cluster. The latter is where Kerberos Factory comes into the picture. Kerberos Factory is a UI, built for non-technical users, that allows you to deploy and configure [Kerberos Agents](https://github.com/kerberos-io/agent) in your Kubernetes cluster more easily. It bypasses the complexity of creating Kubernetes resources by providing a simple UI to connect a Kerberos Agent to your camera and configure it for your use case. 4 | 5 | Kerberos Factory is built for the management of [Kerberos Agents](https://github.com/kerberos-io/agent) in a Kubernetes cluster. If Kubernetes is out of scope for your deployment, and you plan to use a manual or single-node container deployment (e.g. Docker), it's recommended to use the default `docker compose` or `docker` CLI. 6 | 7 | ## :thinking: Prerequisites 8 | 9 | * A Kubernetes cluster configured with one or more nodes. 10 | 11 | ## :books: Overview 12 | 13 | ### Installation 14 | 1. 
[Kubernetes](#kubernetes) 15 | 16 | ### Introductions 17 | 1. [Kerberos Factory](#kerberos-factory-1) 18 | 2. [Mission](#mission) 19 | 20 | ## Installation 21 | 22 | As previously mentioned, running Kerberos Factory requires a Kubernetes cluster. If you plan to use solely `docker` or `docker compose`, then Kerberos Factory is out of scope. 23 | 24 | ### Kubernetes 25 | 26 | Kerberos Factory allows you to deploy Kerberos Agents in your cluster. Kubernetes will automatically load balance your Kerberos Agents across your nodes, sparing you the hassle of scaling out your video landscape yourself. 27 | 28 | > Follow the `Kubernetes` tutorial [by navigating to the kubernetes sub folder in this repository](kubernetes/). 29 | 30 | ## Introductions 31 | 32 | A brief introduction to Kerberos Factory follows below. For a complete overview, [visit the documentation page](https://doc.kerberos.io), where you can learn about all the ins and outs of the Kerberos.io ecosystem. 33 | 34 | ### Kerberos Factory 35 | 36 | Kerberos Factory is a user interface which consumes and interacts with the Kubernetes API. It schedules [Kerberos Agents](https://github.com/kerberos-io/agent) as Kubernetes resources, more specifically Kubernetes deployments. For every camera stream, a Kerberos Agent is created as a Kubernetes deployment. 37 | 38 | ![Kerberos Factory ui](assets/factory-login.gif) 39 | 40 | Through a web interface, a non-technical administrator can configure and add more [Kerberos Agents](https://github.com/kerberos-io/agent) to the cluster. The administrator can interact with each Kerberos Agent through one or more configuration screens, to tune and optimize it for their specific use case. 41 | 42 | #### ONVIF 43 | 44 | Kerberos Factory allows you to scan the local network and create Kerberos Agents for every discovered camera. 
Once discovered, Kerberos Factory will create a Kubernetes deployment for every Kerberos Agent. 45 | 46 | #### Global settings 47 | 48 | Instead of tuning each Kerberos Agent individually, Kerberos Factory allows you to define global settings which are inherited by all your Kerberos Agents. This feature helps you scale out and control your video landscape more easily. 49 | 50 | [![Kerberos Factory](./assets/kerberos-factory.png)](https://kerberos.io/) 51 | 52 | ### Mission 53 | 54 | Kerberos Factory belongs to the Kerberos Enterprise Suite. The goal of this suite is to support enterprises in building a scalable video surveillance infrastructure that is open enough to support all business processes and use cases. The Kerberos Enterprise Suite does all the heavy lifting in terms of scaling the processing and storage of your surveillance cameras. On top of that, it provides integration and extensibility to build your own applications, using Swagger APIs and real-time messaging such as Kafka. 55 | 56 | [![Kerberos Enterprise Suite](./assets/kerberosio-enterprise.png)](https://kerberos.io/) -------------------------------------------------------------------------------- /assets/factory-login.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerberos-io/factory/0f40791490ad42ec3c7a229842d865a78ff0074e/assets/factory-login.gif -------------------------------------------------------------------------------- /assets/kerberos-factory.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/kerberos-io/factory/0f40791490ad42ec3c7a229842d865a78ff0074e/assets/kerberos-factory.png -------------------------------------------------------------------------------- /assets/kerberosio-enterprise.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/kerberos-io/factory/0f40791490ad42ec3c7a229842d865a78ff0074e/assets/kerberosio-enterprise.png -------------------------------------------------------------------------------- /config/global.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "global", 3 | "timezone": "Europe/Brussels", 4 | "capture": { 5 | "id": "", 6 | "ipcamera": { 7 | "rtsp": "", 8 | "fps": "" 9 | }, 10 | "continuous": "false", 11 | "postrecording": 0, 12 | "prerecording": 0, 13 | "maxlengthrecording": 0 14 | }, 15 | "cloud": "kstorage", 16 | "s3": { 17 | "proxy": "false", 18 | "proxyuri": "http://proxy.kerberos.io" 19 | }, 20 | "kstorage": { 21 | "uri": "https://storage.kerberos.io" 22 | }, 23 | "mqtturi": "tcp://mqtt.kerberos.io:1883", 24 | "heartbeaturi": "https://cloud.kerberos.io/api/v1/health" 25 | } 26 | -------------------------------------------------------------------------------- /config/license.json: -------------------------------------------------------------------------------- 1 | { 2 | "key": "" 3 | } 4 | -------------------------------------------------------------------------------- /config/template.json: -------------------------------------------------------------------------------- 1 | { 2 | "type": "template", 3 | "key": "", 4 | "name": "", 5 | "capture": { 6 | "id": "ipcamera", 7 | "ipcamera": { 8 | "rtsp": "", 9 | "fps": "25" 10 | }, 11 | "continuous": "false", 12 | "postrecording": 10, 13 | "prerecording": 5, 14 | "maxlengthrecording": 20 15 | }, 16 | "timetable": [ 17 | { 18 | "start1": 0, 19 | "end1": 43199, 20 | "start2": 43200, 21 | "end2": 86400 22 | }, 23 | { 24 | "start1": 0, 25 | "end1": 43199, 26 | "start2": 43200, 27 | "end2": 86400 28 | }, 29 | { 30 | "start1": 0, 31 | "end1": 43199, 32 | "start2": 43200, 33 | "end2": 86400 34 | }, 35 | { 36 | "start1": 0, 37 | "end1": 43199, 38 | "start2": 43200, 39 | "end2": 86400 40 | }, 41 | { 42 | "start1": 0, 43 | "end1": 43199, 44 | 
"start2": 43200, 45 | "end2": 86400 46 | }, 47 | { 48 | "start1": 0, 49 | "end1": 43199, 50 | "start2": 43200, 51 | "end2": 86400 52 | }, 53 | { 54 | "start1": 0, 55 | "end1": 43199, 56 | "start2": 43200, 57 | "end2": 86400 58 | } 59 | ], 60 | "region": { 61 | "rectangle": { 62 | "x1": 0, 63 | "y1": 0, 64 | "x2": 800, 65 | "y2": 640 66 | }, 67 | "polygon": [] 68 | } 69 | } 70 | -------------------------------------------------------------------------------- /kubernetes/README.md: -------------------------------------------------------------------------------- 1 | # Kerberos Factory on Kubernetes 2 | 3 | As described in the `README.md` of this repository, Kerberos Factory runs on Kubernetes to leverage its resilience and auto-scaling. It allows Kerberos Agents to be deployed in a Kubernetes cluster without the management overhead and complexity of creating Kubernetes resource files. 4 | 5 | ## Managed Kubernetes vs Self-hosted Kubernetes 6 | 7 | Just like with `docker`, you run your Kubernetes cluster where you want: `edge` or `cloud`, private or public. Depending on where and how you host (e.g. a managed Kubernetes cluster vs self-hosted), you'll have more or fewer responsibilities and control. Where and how is entirely up to you and your company's preferences. 8 | 9 | This installation guide differs slightly depending on whether you are self-hosting or leveraging a managed Kubernetes service from a cloud provider. With a self-hosted installation you'll be required to install specific Kubernetes resources yourself, such as persistent volumes, storage and a load balancer. 10 | 11 | ![Kerberos Factory deployments](assets/kerberosfactory-deployments.svg) 12 | 13 | ### A. Self-hosted Kubernetes 14 | 15 | 1. [Prerequisites](#prerequisites-1) 16 | 2. [Container Engine](#container-engine) 17 | 3. [Kubernetes](#kubernetes) 18 | 4. [Untaint all nodes](#untaint-all-nodes) 19 | 5. [Calico](#calico) 20 | 6. [Introduction](#introduction-1) 21 | 7. [Kerberos Factory](#kerberos-factory-1) 22 | 8. 
[MetalLB](#metallb) 23 | 9. [OpenEBS](#openebs) 24 | 10. [Proceed with managed Kubernetes](#proceed-with-managed-kubernetes) 25 | 26 | ### B. Managed Kubernetes 27 | 28 | 1. [Prerequisites](#prerequisites) 29 | 2. [Introduction](#introduction) 30 | 3. [Kerberos Factory](#kerberos-factory) 31 | 4. [Namespace](#namespace) 32 | 5. [Helm](#helm) 33 | 6. [Traefik](#traefik) 34 | 7. [Ingress-Nginx (alternative for Traefik)](#ingress-nginx-alternative-for-traefik) 35 | 8. [MongoDB](#mongodb) 36 | 9. [Config Map](#config-map) 37 | 10. [Deployment](#deployment) 38 | 11. [Test out configuration](#test-out-configuration) 39 | 12. [Access the system](#access-the-system) 40 | 41 | ## A. Self-hosted Kubernetes 42 | 43 | To simplify the installation we will start with the most common setup, where we install Kerberos Factory on a self-hosted Kubernetes cluster. 44 | 45 | In most cases you will need to host your Kerberos Factory (and Kerberos Agents) very close to your cameras, to improve latency and efficiency. However, it's perfectly possible to use a managed Kubernetes cluster and connect to remote cameras through a secured connection (VPN). 46 | 47 | The good news is that the installation of a self-hosted Kubernetes cluster follows the same steps as a managed Kubernetes installation, with a few extra resources on top. Let's start with setting up our self-hosted Kubernetes machine; if you already have a cluster set up you can [skip the Kubernetes installation](#calico), and if you are running managed Kubernetes you can jump to [B. Managed Kubernetes](#b-managed-kubernetes-1) directly. 48 | 49 | ### Prerequisites 50 | 51 | We'll assume you have a blank Ubuntu 20.04 / 22.04 LTS machine (or multiple machines/nodes) in your possession. We'll start with updating the Ubuntu operating system. 
52 | 53 | apt-get update -y && apt-get upgrade -y 54 | 55 | ### Container Engine 56 | 57 | export OS_VERSION_ID=xUbuntu_$(cat /etc/os-release | grep VERSION_ID | awk -F"=" '{print $2}' | tr -d '"') 58 | export CRIO_VERSION=1.25 59 | 60 | Add the repositories: 61 | 62 | echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS_VERSION_ID/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list 63 | echo "deb http://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$CRIO_VERSION/$OS_VERSION_ID/ /"|sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION.list 64 | curl -L https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$CRIO_VERSION/$OS_VERSION_ID/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add - 65 | curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS_VERSION_ID/Release.key | sudo apt-key --keyring /etc/apt/trusted.gpg.d/libcontainers.gpg add - 66 | 67 | Update the package index and install CRI-O: 68 | 69 | apt-get update 70 | apt-get install cri-o cri-o-runc cri-tools -y 71 | 72 | Enable and start CRI-O: 73 | 74 | systemctl daemon-reload 75 | systemctl enable crio --now 76 | 77 | ### Kubernetes 78 | 79 | With the container engine installed, go ahead and install the different Kubernetes services and tools. 80 | 81 | apt update -y 82 | apt-get install -y apt-transport-https ca-certificates curl 83 | curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - 84 | echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list 85 | 86 | apt-get update 87 | apt-get install -y kubelet kubeadm kubectl 88 | 89 | Hold these packages to prevent unintentional updates: 90 | 91 | apt-mark hold kubelet kubeadm kubectl 92 | 93 | Make sure you disable swap; this is required by Kubernetes. 
94 | 95 | swapoff -a 96 | 97 | To make this permanent across reboots: 98 | 99 | sudo sed -i.bak '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab 100 | 101 | **_Special note_**: If you already had Kubernetes installed, make sure you are running the latest version and/or have properly cleaned up the previous installation. 102 | 103 | kubeadm reset 104 | rm -rf $HOME/.kube 105 | 106 | Initialize a new Kubernetes cluster using the following command. This will use the default CIDR. If you want to use another CIDR, specify the following argument: `--pod-network-cidr=10.244.0.0/16`. 107 | 108 | kubeadm init 109 | 110 | Once successful, you should see the following output. Note the `kubeadm join` command with the discovery token, which you need to connect additional nodes to your cluster. 111 | 112 | Your Kubernetes control-plane has initialized successfully! 113 | 114 | To start using your cluster, you need to run the following as a regular user: 115 | 116 | mkdir -p $HOME/.kube 117 | sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 118 | sudo chown $(id -u):$(id -g) $HOME/.kube/config 119 | 120 | You should now deploy a pod network to the cluster. 121 | Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: 122 | https://kubernetes.io/docs/concepts/cluster-administration/addons/ 123 | 124 | Then you can join any number of worker nodes by running the following on each as root: 125 | 126 | kubeadm join 192.168.1.103:6443 --token ej7ckt.uof7o2iplqf0r2up \ 127 | --discovery-token-ca-cert-hash sha256:9cbcc00d34be2dbd605174802d9e52fbcdd617324c237bf58767b369fa586209 128 | 129 | Now that we have a Kubernetes cluster, we need to make it available in our `kubeconfig`. This will allow us to query our Kubernetes cluster with the `kubectl` command. 
130 | 131 | mkdir -p $HOME/.kube 132 | cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 133 | chown $(id -u):$(id -g) $HOME/.kube/config 134 | 135 | ### Untaint all nodes 136 | 137 | By default, and in this example, we only have one node: the master node. In a production scenario we would have additional worker nodes. By default, master nodes are marked as `tainted`, which means they cannot run workloads. To allow master nodes to run workloads, we need to untaint them. If we didn't do this, our pods would never be scheduled, as we do not have worker nodes at this moment. 138 | 139 | kubectl taint nodes --all node-role.kubernetes.io/control-plane- 140 | 141 | ### Calico 142 | 143 | Calico is an open source networking and network security solution for containers, virtual machines, and native host-based workloads (https://www.projectcalico.org/). We will use it as the network layer in our Kubernetes cluster. You could use others like Flannel as well, but we prefer Calico. 144 | 145 | curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/calico.yaml -O 146 | kubectl apply -f calico.yaml 147 | 148 | ### Introduction 149 | 150 | As you might have read in the `B. Managed Kubernetes` section, Kerberos Factory requires some initial components to be installed. 151 | 152 | - Helm 153 | - MongoDB 154 | - Traefik (or alternatively Nginx ingress) 155 | 156 | However, for a self-hosted cluster we'll need the following components on top: 157 | 158 | - MetalLB 159 | - OpenEBS 160 | 161 | For simplicity we'll start with the installation of `MetalLB` and `OpenEBS`. Afterwards we'll move on to the `B. Managed Kubernetes` section, to install the remaining components. 162 | 163 | ### Kerberos Factory 164 | 165 | We'll start by cloning the configurations from our [GitHub repo](https://github.com/kerberos-io/factory). This repo contains all the relevant configuration files required. 
166 | 167 | git clone https://github.com/kerberos-io/factory 168 | 169 | Make sure to change directory to the `kubernetes` folder. 170 | 171 | cd factory/kubernetes 172 | 173 | ### MetalLB 174 | 175 | In a self-hosted scenario, we do not have fancy load balancers and public IPs to benefit from "automatically". To overcome this, solutions such as MetalLB, a bare-metal load balancer, have been developed (https://metallb.universe.tf/installation/). MetalLB will dedicate an internal IP address, or IP range, which will be assigned to one or more load balancers. Using this dedicated IP address, you can reach your services or ingress. 176 | 177 | kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.1/manifests/namespace.yaml 178 | kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.1/manifests/metallb.yaml 179 | kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)" 180 | 181 | After installing the different MetalLB components, we need to modify a `configmap.yaml` file, which you can find at `./metallb/configmap.yaml`. This file defines which internal IPs MetalLB may hand out to `LoadBalancer` services. 182 | 183 | apiVersion: v1 184 | kind: ConfigMap 185 | metadata: 186 | namespace: metallb-system 187 | name: config 188 | data: 189 | config: | 190 | address-pools: 191 | - name: default 192 | protocol: layer2 193 | addresses: 194 | --> - 192.168.1.200-192.168.1.210 195 | 196 | You can change the IP range above to match your needs. MetalLB will use this range to assign IP addresses to your load balancers. Once ready, apply the configuration map. 197 | 198 | kubectl apply -f ./metallb/configmap.yaml 199 | 200 | Once installed, every `LoadBalancer` service created in your Kubernetes cluster will receive a unique IP address from the range configured in the `configmap.yaml`. 
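To illustrate, here is a minimal sketch of a `Service` of type `LoadBalancer` that would receive an address from such a pool; the `echo-lb` name, selector and ports are hypothetical and only serve as an example.

```yaml
# Hypothetical example: any Service of type LoadBalancer gets an address
# from the MetalLB pool (e.g. 192.168.1.200-192.168.1.210).
apiVersion: v1
kind: Service
metadata:
  name: echo-lb            # hypothetical name, for illustration only
spec:
  type: LoadBalancer       # triggers MetalLB address assignment
  selector:
    app: echo              # hypothetical selector for your pods
  ports:
    - port: 80             # port exposed on the assigned IP
      targetPort: 8080     # port your pods listen on
```

After applying it, `kubectl get svc echo-lb` should show an `EXTERNAL-IP` from the configured range.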
201 | 202 | ### OpenEBS 203 | 204 | Some of the services we'll leverage, such as MongoDB or Minio, require storage to persist their data safely. In a managed Kubernetes cluster, the relevant cloud provider will allocate storage automatically for you; as you might expect, this is not the case for a self-hosted cluster. 205 | 206 | Therefore we will need to prepare some storage or persistent volumes. To simplify this we can leverage the OpenEBS storage solution, which can automatically provision PVs (persistent volumes) for us. 207 | 208 | Let us start with installing the OpenEBS operator. Please note that you might need to change the mount folder. Download the `openebs-operator.yaml`. 209 | 210 | wget https://openebs.github.io/charts/openebs-operator.yaml 211 | 212 | Scroll to the bottom, until you hit the `StorageClass` section. Modify the `BasePath` value to the destination (external mount) you prefer. 213 | 214 | # Specify the location (directory) where 215 | # PV(volume) data will be saved. 216 | # A sub-directory with pv-name will be 217 | # created. When the volume is deleted, 218 | # the PV sub-directory will be deleted. 219 | # Default value is /var/openebs/local 220 | - name: BasePath 221 | value: "/var/openebs/local/" 222 | 223 | Once you are happy with the `BasePath`, go ahead and apply the operator. 224 | 225 | kubectl apply -f openebs-operator.yaml 226 | 227 | Once applied, it will start installing several resources in the `openebs` namespace. When all resources are created successfully, we can launch the `helm install` for MongoDB. 228 | 229 | ### Proceed with managed Kubernetes 230 | 231 | Now that you're done installing the self-hosted prerequisites, you can proceed with the [B. Managed Kubernetes](#b-managed-kubernetes) section. This will install all the remaining resources. 232 | 233 | ## B. Managed Kubernetes 234 | 235 | Nowadays there are many cloud providers, such as, but not limited to, Azure, Google Cloud, and AWS. 
Each cloud provider has built a managed service on top of Kubernetes which takes over the heavy lifting of managing a cluster yourself. It makes specific resources, such as built-in load balancers, storage, and more, available for your needs. The cloud provider manages all the complex things for you in the back-end and implements features such as load balancing, data replication, etc. 236 | 237 | ### Prerequisites 238 | 239 | For this installation we assume you have chosen a cloud provider that provides you with managed Kubernetes. We'll assume you have the relevant `kubeconfig` configuration to connect to the API server of your Kubernetes installation. Make sure you can run the following commands against your cluster. 240 | 241 | kubectl get nodes 242 | 243 | or 244 | 245 | kubectl get pods --all-namespaces 246 | 247 | ### Introduction 248 | 249 | Kerberos Factory requires some initial components to be installed. If you run Kerberos Factory in the same cluster where you have a Kerberos Vault installed, there is not much to do, and you can skip most of the following paragraphs. 250 | 251 | If you plan to run Kerberos Factory in a different cluster (which is perfectly possible), you will need to make sure you complete this initial setup. To be more specific, you will need the following components running: 252 | 253 | - Helm 254 | - MongoDB 255 | - Traefik (or alternatively Nginx ingress) 256 | 257 | We'll assume you are starting an installation from scratch and therefore still need to install and configure the previously mentioned components. 258 | 259 | ### Kerberos Factory 260 | 261 | We'll start by cloning the configurations from our [GitHub repo](https://github.com/kerberos-io/factory). This repo contains all the relevant configuration files required. 262 | 263 | git clone https://github.com/kerberos-io/factory 264 | 265 | Make sure to change directory to the `kubernetes` folder. 
266 | 267 | cd factory/kubernetes 268 | 269 | ### Namespace 270 | 271 | A best practice is to isolate tools and/or applications in a namespace; this groups the relevant (micro)services. Therefore we'll create a namespace `kerberos-factory`. 272 | 273 | kubectl create namespace kerberos-factory 274 | 275 | This namespace will later be used to deploy the relevant services for Kerberos Factory. 276 | 277 | ### Helm 278 | 279 | Next we will install a couple of dependencies which are required for Kerberos Factory. [**Helm**](https://helm.sh/) is a package manager for Kubernetes; it helps you set up services more easily (this could be an MQTT broker, a database, etc.). 280 | Instead of writing YAML files for every service we need, we use so-called charts (packages) that you can reuse and configure with the appropriate settings. 281 | 282 | Use one of the preferred OS package managers to install the Helm client: 283 | 284 | brew install helm 285 | 286 | choco install kubernetes-helm 287 | 288 | scoop install helm 289 | 290 | gofish install helm 291 | 292 | ### MongoDB 293 | 294 | Kerberos Factory persists the configurations of your Kerberos Agents in a MongoDB database. As before, we use `helm` to install MongoDB in our Kubernetes cluster. 295 | 296 | Have a look at the `./mongodb/values.yaml` file; you will find plenty of configuration options for the MongoDB Helm chart. To change the username and password of the MongoDB instance, go ahead and [find the attribute where](https://github.com/kerberos-io/factory/blob/master/kubernetes/mongodb/values.yaml#L148) you can change the root password. 297 | 298 | helm repo add bitnami https://charts.bitnami.com/bitnami 299 | kubectl create namespace mongodb 300 | 301 | Note: If you are installing a self-hosted Kubernetes cluster, we recommend using `openebs`. Therefore make sure to uncomment the `global`.`storageClass` attribute, and make sure it's using `openebs-hostpath` instead. 
302 | 303 | helm install mongodb -n mongodb bitnami/mongodb --values ./mongodb/values.yaml 304 | 305 | Once installed successfully, we should verify that the password has been set correctly. Print out the password using `echo $MONGODB_ROOT_PASSWORD` and confirm it is what you've specified in the `values.yaml` file. 306 | 307 | export MONGODB_ROOT_PASSWORD=$(kubectl get secret -n mongodb mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode) 308 | echo $MONGODB_ROOT_PASSWORD 309 | 310 | ### Config Map 311 | 312 | Kerberos Factory requires a configuration to connect to the MongoDB instance. To handle this, a `configmap` is created in the `./mongodb/mongodb.config.yaml` file. 313 | 314 | Modify the MongoDB credentials in the `./mongodb/mongodb.config.yaml`, and make sure they match the credentials of your MongoDB instance, as described above. 315 | 316 | - name: MONGODB_USERNAME 317 | value: "root" 318 | - name: MONGODB_PASSWORD 319 | --> value: "yourmongodbpassword" 320 | 321 | Create the config map. 322 | 323 | kubectl apply -f ./mongodb/mongodb.config.yaml -n kerberos-factory 324 | 325 | ### Deployment 326 | 327 | To install the Kerberos Factory web app inside your cluster, simply execute the `kubectl` command below. This will create the deployment with the necessary configuration and expose it on an internal/external IP address, thanks to our `LoadBalancer` (MetalLB or cloud provider). 328 | 329 | kubectl apply -f ./kerberos-factory/deployment.yaml -n kerberos-factory 330 | 331 | Kerberos Factory will create Kerberos Agents, and thus Kubernetes deployments, on our behalf. Therefore we'll need to apply a `ClusterRole` and `ClusterRoleBinding`, so the Kerberos Factory web app is able to create deployments through the Kubernetes Golang SDK. 332 | 333 | kubectl apply -f ./kerberos-factory/clusterrole.yaml -n kerberos-factory 334 | 335 | Verify that Kerberos Factory got assigned an internal IP address. 
336 | 337 | kubectl get svc -n kerberos-factory 338 | 339 | You should see the service `factory-lb` being created, together with an IP address assigned from the MetalLB pool or cloud provider. 340 | 341 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 342 | --> factory-lb LoadBalancer 10.107.209.226 192.168.1.81 80:30636/TCP 24m 343 | 344 | ### (Optional) Ingress 345 | 346 | By default, the Kerberos Factory deployment creates a `LoadBalancer` service. This means that Kerberos Factory will be granted an internal/external IP from which you can reach the Kerberos Factory UI. However, if you wish to use the `Ingress` functionality by assigning a readable DNS name, you'll need to modify a few things. 347 | 348 | First make sure to install either Traefik or Ingress-Nginx, following the sections below. Once you have chosen an ingress, open the `./kerberos-factory/ingress.yaml` configuration file. At the bottom of the file you will find an endpoint, similar to the `Ingress` excerpt below. Update the hostname to your own preferred domain, and add it to your DNS server or `/etc/hosts` file (pointing to the same IP address as the Traefik/Ingress-Nginx IP address). 349 | 350 | spec: 351 | rules: 352 | --> - host: factory.domain.com 353 | http: 354 | paths: 355 | - path: / 356 | backend: 357 | service: 358 | name: factory 359 | 360 | If you are using Ingress Nginx, do not forget to comment out the `Traefik` annotation and uncomment the `Ingress Nginx` ones. 361 | 362 | apiVersion: networking.k8s.io/v1 363 | kind: Ingress 364 | metadata: 365 | name: factory 366 | annotations: 367 | #kubernetes.io/ingress.class: traefik 368 | kubernetes.io/ingress.class: nginx 369 | kubernetes.io/tls-acme: "true" 370 | nginx.ingress.kubernetes.io/ssl-redirect: "true" 371 | cert-manager.io/cluster-issuer: "letsencrypt-prod" 372 | 373 | Once done, apply the `Ingress` file. 
374 | 375 | kubectl apply -f ./kerberos-factory/ingress.yaml -n kerberos-factory 376 | 377 | #### (Option 1) Traefik 378 | 379 | [**Traefik**](https://containo.us/traefik/) is a reverse proxy and load balancer which allows you to expose your deployments more easily. Kerberos uses Traefik to expose its APIs. 380 | 381 | Add the Helm repository and install Traefik. 382 | 383 | kubectl create namespace traefik 384 | helm repo add traefik https://helm.traefik.io/traefik 385 | helm install traefik traefik/traefik -n traefik 386 | 387 | After installation, you should have an IP attached to the Traefik service. Look for it by executing the `get svc` command; you will see the IP address in the `EXTERNAL-IP` column. 388 | 389 | kubectl get svc 390 | 391 | NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE 392 | kubernetes ClusterIP 10.0.0.1 443/TCP 36h 393 | --> traefik LoadBalancer 10.0.27.93 40.114.168.96 443:31623/TCP,80:31804/TCP 35h 394 | traefik-dashboard NodePort 10.0.252.6 80:31146/TCP 35h 395 | 396 | Go to your DNS provider and link the domain you've configured in the first step (`traefik.domain.com`) to the IP address in the `EXTERNAL-IP` column. When browsing to `traefik.domain.com`, you should see the Traefik dashboard showing up. 397 | 398 | #### (Option 2) Ingress-Nginx (alternative for Traefik) 399 | 400 | If you don't like `Traefik` and prefer `Ingress Nginx`, that works as well. 401 | 402 | helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx 403 | helm repo update 404 | kubectl create namespace ingress-nginx 405 | helm install ingress-nginx -n ingress-nginx ingress-nginx/ingress-nginx 406 | 407 | ### Test out configuration 408 | 409 | If everything worked out as expected, you should now have the following services in your cluster across different namespaces: 410 | 411 | - MongoDB 412 | - Traefik or Nginx 413 | - Kerberos Factory 414 | 415 | It should look like this. 
416 | 417 | $ kubectl get pods -n kerberos-factory 418 | NAME READY STATUS RESTARTS AGE 419 | kerberos-factory-6f5c877d7c-hf77p 1/1 Running 0 2d11h 420 | 421 | $ kubectl get pods -n mongodb 422 | NAME READY STATUS RESTARTS AGE 423 | mongodb-758d5c5ddd-qsfq9 1/1 Running 0 5m31s 424 | 425 | $ kubectl get pods -n traefik 426 | NAME READY STATUS RESTARTS AGE 427 | traefik-7d566ccc47-mwslb 1/1 Running 0 4d12h 428 | 429 | ### Access the system 430 | 431 | Once everything is configured correctly, you should be able to access the Kerberos Factory application. By navigating to the internal/external IP address (`LoadBalancer`) or domain (`Ingress`) with your browser, you will see the Kerberos Factory login page showing up. 432 | 433 | ![Once Kerberos Factory is successfully installed, it will show you the login page.](../assets/factory-login.gif) 434 | -------------------------------------------------------------------------------- /kubernetes/kerberos-factory/assets/env.js: -------------------------------------------------------------------------------- 1 | (function(window) { 2 | window["env"] = window["env"] || {}; 3 | // Environment variables 4 | window["env"]["environment"] = "whitelabel"; 5 | window["env"]["pageTitle"] = "Your application"; 6 | })(this); 7 | -------------------------------------------------------------------------------- /kubernetes/kerberos-factory/assets/style.css: -------------------------------------------------------------------------------- 1 | @charset "utf-8"; 2 | 3 | .body { 4 | background: white; 5 | } -------------------------------------------------------------------------------- /kubernetes/kerberos-factory/clusterrole.yaml: -------------------------------------------------------------------------------- 1 | kind: ClusterRole 2 | apiVersion: rbac.authorization.k8s.io/v1 3 | metadata: 4 | name: pods-list 5 | rules: 6 | - apiGroups: ["", "apps"] 7 | resources: ["pods", "pods/log", "deployments", "services", "services/proxy", "endpoints", 
"nodes"] 8 | verbs: ["get", "list", "create", "update", "delete", "watch"] 9 | --- 10 | kind: ClusterRoleBinding 11 | apiVersion: rbac.authorization.k8s.io/v1 12 | metadata: 13 | name: pods-list 14 | subjects: 15 | - kind: ServiceAccount 16 | name: default 17 | namespace: kerberos-factory 18 | roleRef: 19 | kind: ClusterRole 20 | name: pods-list 21 | apiGroup: rbac.authorization.k8s.io 22 | -------------------------------------------------------------------------------- /kubernetes/kerberos-factory/deployment.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: apps/v1 2 | kind: Deployment 3 | metadata: 4 | name: factory 5 | spec: 6 | replicas: 1 7 | selector: 8 | matchLabels: 9 | app: factory 10 | minReadySeconds: 10 11 | strategy: 12 | type: RollingUpdate 13 | rollingUpdate: 14 | maxUnavailable: 1 15 | maxSurge: 1 16 | template: 17 | metadata: 18 | labels: 19 | app: factory 20 | spec: 21 | containers: 22 | - name: factory 23 | image: "uugai/factory:v1.0.9" # or you can use "uugai/factory:latest" 24 | #imagePullPolicy: Always 25 | resources: 26 | requests: 27 | memory: 256Mi 28 | cpu: 500m 29 | limits: 30 | memory: 256Mi 31 | cpu: 500m 32 | ports: 33 | - containerPort: 80 34 | envFrom: 35 | - configMapRef: 36 | name: mongodb 37 | 38 | # Injecting the ca-certificates inside the container. 39 | #volumeMounts: 40 | #- name: rootcerts 41 | # mountPath: /etc/ssl/certs/ca-certificates.crt 42 | # subPath: ca-certificates.crt 43 | #- name: custom-layout 44 | # mountPath: /home/factory/www/assets/custom 45 | 46 | env: 47 | - name: GIN_MODE 48 | value: release 49 | - name: KERBEROS_LOGIN_USERNAME 50 | value: "root" 51 | - name: KERBEROS_LOGIN_PASSWORD 52 | value: "kerberos" 53 | 54 | - name: KERBEROS_AGENT_IMAGE 55 | value: "kerberos/agent:latest" 56 | - name: KERBEROS_AGENT_MEMORY_LIMIT 57 | value: "256Mi" 58 | 59 | # Do not touch this, unless you know what you are doing. 
60 | - name: NAMESPACE 61 | value: "kerberos-factory" 62 | - name: FACTORY_ENVIRONMENT 63 | value: "kubernetes" 64 | - name: K8S_PROXY 65 | value: http://localhost:80 66 | 67 | # Additional certificates can be injected into the Kerberos Agents through the creation of a ConfigMap. 68 | # A certificate "ca-certificates.crt" is expected in the ConfigMap and will be added to 69 | # the Kerberos Agent in the following directory: /etc/ssl/certs/ 70 | #- name: CERTIFICATES_CONFIGMAP 71 | # value: "rootcerts" 72 | 73 | #volumes: 74 | #- name: rootcerts 75 | # configMap: 76 | # name: rootcerts 77 | #- name: custom-layout 78 | # persistentVolumeClaim: 79 | # claimName: custom-layout-claim 80 | --- 81 | apiVersion: v1 82 | kind: Service 83 | metadata: 84 | name: factory-lb 85 | labels: 86 | app: factory 87 | spec: 88 | type: LoadBalancer 89 | ports: 90 | - port: 80 91 | targetPort: 80 92 | name: frontend 93 | protocol: TCP 94 | selector: 95 | app: factory 96 | -------------------------------------------------------------------------------- /kubernetes/kerberos-factory/ingress.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | apiVersion: v1 3 | kind: Service 4 | metadata: 5 | name: factory 6 | labels: 7 | app: factory 8 | spec: 9 | ports: 10 | - port: 80 11 | targetPort: 80 12 | name: frontend 13 | protocol: TCP 14 | selector: 15 | app: factory 16 | --- 17 | apiVersion: networking.k8s.io/v1 18 | kind: Ingress 19 | metadata: 20 | name: factory 21 | annotations: 22 | kubernetes.io/ingress.class: traefik 23 | #kubernetes.io/ingress.class: nginx 24 | #kubernetes.io/tls-acme: "true" 25 | #nginx.ingress.kubernetes.io/ssl-redirect: "true" 26 | #cert-manager.io/cluster-issuer: "letsencrypt-prod" 27 | spec: 28 | #tls: 29 | #- hosts: 30 | #- factory.domain.com 31 | #secretName: factory-tls 32 | rules: 33 | - host: factory.domain.com 34 | http: 35 | paths: 36 | - path: / 37 | pathType: Prefix 38 | backend: 39 | service: 40 | name: factory 41 
| port: 42 | number: 80 -------------------------------------------------------------------------------- /kubernetes/metallb/configmap.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: config 5 | namespace: metallb-system 6 | data: 7 | config: | 8 | address-pools: 9 | - name: default 10 | protocol: layer2 11 | addresses: 12 | - 192.168.1.200-192.168.1.210 13 | -------------------------------------------------------------------------------- /kubernetes/mongodb/mongodb.config.yaml: -------------------------------------------------------------------------------- 1 | apiVersion: v1 2 | kind: ConfigMap 3 | metadata: 4 | name: mongodb 5 | data: 6 | # This is the MongoDB database where configurations will be stored; you may use a different name if you want. 7 | MONGODB_DATABASE_FACTORY: "KerberosFactory" 8 | 9 | # MongoDB URI (for example for a SaaS service like MongoDB Atlas) 10 | # If the URI is set, the properties below are not used (host, adminDatabase, username, password) 11 | #MONGODB_URI: "mongodb+srv://xx:xx@kerberos-hub.xxx.mongodb.net/?retryWrites=true&w=majority&appName=xxx" 12 | 13 | # If you do not wish to use the URI, you can specify the individual values. 
14 | MONGODB_HOST: "mongodb.mongodb" 15 | MONGODB_DATABASE_CREDENTIALS: "admin" 16 | MONGODB_USERNAME: "root" 17 | MONGODB_PASSWORD: "yourmongodbpassword" 18 | -------------------------------------------------------------------------------- /kubernetes/mongodb/values.yaml: -------------------------------------------------------------------------------- 1 | ## @section Global parameters 2 | ## Global Docker image parameters 3 | ## Please, note that this will override the image parameters, including dependencies, configured to use the global value 4 | ## Current available global Docker image parameters: imageRegistry, imagePullSecrets and storageClass 5 | ## 6 | 7 | ## @param global.imageRegistry Global Docker image registry 8 | ## @param global.imagePullSecrets Global Docker registry secret names as an array 9 | ## @param global.storageClass Global StorageClass for Persistent Volume(s) 10 | ## @param global.namespaceOverride Override the namespace for resource deployed by the chart, but can itself be overridden by the local namespaceOverride 11 | ## 12 | global: 13 | imageRegistry: "" 14 | ## E.g. 15 | ## imagePullSecrets: 16 | ## - myRegistryKeySecretName 17 | ## 18 | imagePullSecrets: [] 19 | storageClass: "" # "openebs-hostpath" # or other.. 
20 | namespaceOverride: "" 21 | 22 | ## @section Common parameters 23 | ## 24 | 25 | ## @param nameOverride String to partially override mongodb.fullname template (will maintain the release name) 26 | ## 27 | nameOverride: "" 28 | ## @param fullnameOverride String to fully override mongodb.fullname template 29 | ## 30 | fullnameOverride: "" 31 | ## @param namespaceOverride String to fully override common.names.namespace 32 | ## 33 | namespaceOverride: "" 34 | ## @param kubeVersion Force target Kubernetes version (using Helm capabilities if not set) 35 | ## 36 | kubeVersion: "" 37 | ## @param clusterDomain Default Kubernetes cluster domain 38 | ## 39 | clusterDomain: cluster.local 40 | ## @param extraDeploy Array of extra objects to deploy with the release 41 | ## extraDeploy: 42 | ## This needs to be uncommented and added to 'extraDeploy' in order to use the replicaset 'mongo-labeler' sidecar 43 | ## for dynamically discovering the mongodb primary pod 44 | ## suggestion is to use a hard-coded and predictable TCP port for the primary mongodb pod (here is 30001, choose your own) 45 | ## - apiVersion: v1 46 | ## kind: Service 47 | ## metadata: 48 | ## name: mongodb-primary 49 | ## namespace: the-mongodb-namespace 50 | ## labels: 51 | ## app.kubernetes.io/component: mongodb 52 | ## app.kubernetes.io/instance: mongodb 53 | ## app.kubernetes.io/managed-by: Helm 54 | ## app.kubernetes.io/name: mongodb 55 | ## spec: 56 | ## type: NodePort 57 | ## externalTrafficPolicy: Cluster 58 | ## ports: 59 | ## - name: mongodb 60 | ## port: 30001 61 | ## nodePort: 30001 62 | ## protocol: TCP 63 | ## targetPort: mongodb 64 | ## selector: 65 | ## app.kubernetes.io/component: mongodb 66 | ## app.kubernetes.io/instance: mongodb 67 | ## app.kubernetes.io/name: mongodb 68 | ## primary: "true" 69 | ## 70 | extraDeploy: [] 71 | ## @param commonLabels Add labels to all the deployed resources (sub-charts are not considered). 
Evaluated as a template 72 | ## 73 | commonLabels: {} 74 | ## @param commonAnnotations Common annotations to add to all Mongo resources (sub-charts are not considered). Evaluated as a template 75 | ## 76 | commonAnnotations: {} 77 | 78 | ## Enable diagnostic mode in the deployment 79 | ## 80 | diagnosticMode: 81 | ## @param diagnosticMode.enabled Enable diagnostic mode (all probes will be disabled and the command will be overridden) 82 | ## 83 | enabled: false 84 | ## @param diagnosticMode.command Command to override all containers in the deployment 85 | ## 86 | command: 87 | - sleep 88 | ## @param diagnosticMode.args Args to override all containers in the deployment 89 | ## 90 | args: 91 | - infinity 92 | 93 | ## @section MongoDB(®) parameters 94 | ## 95 | 96 | ## Bitnami MongoDB(®) image 97 | ## ref: https://hub.docker.com/r/bitnami/mongodb/tags/ 98 | ## @param image.registry MongoDB(®) image registry 99 | ## @param image.repository MongoDB(®) image repository 100 | ## @param image.tag MongoDB(®) image tag (immutable tags are recommended) 101 | ## @param image.pullPolicy MongoDB(®) image pull policy 102 | ## @param image.pullSecrets Specify docker-registry secret names as an array 103 | ## @param image.debug Set to true if you would like to see extra information on logs 104 | ## 105 | image: 106 | registry: docker.io 107 | repository: bitnami/mongodb 108 | tag: 4.4.15-debian-10-r8 109 | ## Specify an imagePullPolicy 110 | ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images 111 | ## 112 | pullPolicy: IfNotPresent 113 | ## Optionally specify an array of imagePullSecrets. 114 | ## Secrets must be manually created in the namespace. 
115 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 116 | ## e.g: 117 | ## pullSecrets: 118 | ## - myRegistryKeySecretName 119 | ## 120 | pullSecrets: [] 121 | ## Set to true if you would like to see extra information on logs 122 | ## 123 | debug: false 124 | 125 | ## @param schedulerName Name of the scheduler (other than default) to dispatch pods 126 | ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ 127 | ## 128 | schedulerName: "" 129 | ## @param architecture MongoDB(®) architecture (`standalone` or `replicaset`) 130 | ## 131 | architecture: standalone 132 | ## @param useStatefulSet Set to true to use a StatefulSet instead of a Deployment (only when `architecture=standalone`) 133 | ## 134 | useStatefulSet: false 135 | ## MongoDB(®) Authentication parameters 136 | ## 137 | auth: 138 | ## @param auth.enabled Enable authentication 139 | ## ref: https://docs.mongodb.com/manual/tutorial/enable-authentication/ 140 | ## 141 | enabled: true 142 | ## @param auth.rootUser MongoDB(®) root user 143 | ## 144 | rootUser: root 145 | ## @param auth.rootPassword MongoDB(®) root password 146 | ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#setting-the-root-password-on-first-run 147 | ## 148 | rootPassword: "yourmongodbpassword" 149 | ## MongoDB(®) custom users and databases 150 | ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#creating-users-and-databases-on-first-run 151 | ## @param auth.usernames List of custom users to be created during the initialization 152 | ## @param auth.passwords List of passwords for the custom users set at `auth.usernames` 153 | ## @param auth.databases List of custom databases to be created during the initialization 154 | ## 155 | usernames: [] 156 | passwords: [] 157 | databases: [] 158 | ## @param auth.username DEPRECATED: use `auth.usernames` instead 159 | ## @param auth.password DEPRECATED: use 
`auth.passwords` instead 160 | ## @param auth.database DEPRECATED: use `auth.databases` instead 161 | username: "" 162 | password: "" 163 | database: "" 164 | ## @param auth.replicaSetKey Key used for authentication in the replicaset (only when `architecture=replicaset`) 165 | ## 166 | replicaSetKey: "" 167 | ## @param auth.existingSecret Existing secret with MongoDB(®) credentials (keys: `mongodb-passwords`, `mongodb-root-password`, `mongodb-metrics-password`, `mongodb-replica-set-key`) 168 | ## NOTE: When it's set the previous parameters are ignored. 169 | ## 170 | existingSecret: "" 171 | tls: 172 | ## @param tls.enabled Enable MongoDB(®) TLS support between nodes in the cluster as well as between mongo clients and nodes 173 | ## 174 | enabled: false 175 | ## @param tls.autoGenerated Generate a custom CA and self-signed certificates 176 | ## 177 | autoGenerated: true 178 | ## @param tls.existingSecret Existing secret with TLS certificates (keys: `mongodb-ca-cert`, `mongodb-ca-key`, `client-pem`) 179 | ## NOTE: When it's set it will disable certificate creation 180 | ## 181 | existingSecret: "" 182 | ## Add Custom CA certificate 183 | ## @param tls.caCert Custom CA certificate (base64 encoded) 184 | ## @param tls.caKey CA certificate private key (base64 encoded) 185 | ## 186 | caCert: "" 187 | caKey: "" 188 | ## Bitnami Nginx image 189 | ## @param tls.image.registry Init container TLS certs setup image registry 190 | ## @param tls.image.repository Init container TLS certs setup image repository 191 | ## @param tls.image.tag Init container TLS certs setup image tag (immutable tags are recommended) 192 | ## @param tls.image.pullPolicy Init container TLS certs setup image pull policy 193 | ## @param tls.image.pullSecrets Init container TLS certs specify docker-registry secret names as an array 194 | ## @param tls.extraDnsNames Add extra dns names to the CA, can solve x509 auth issue for pod clients 195 | ## 196 | image: 197 | registry: docker.io 198 | repository: 
bitnami/nginx 199 | tag: 1.21.6-debian-11-r13 200 | pullPolicy: IfNotPresent 201 | ## Optionally specify an array of imagePullSecrets. 202 | ## Secrets must be manually created in the namespace. 203 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 204 | ## e.g: 205 | ## pullSecrets: 206 | ## - myRegistryKeySecretName 207 | ## 208 | pullSecrets: [] 209 | 210 | ## e.g: 211 | ## extraDnsNames 212 | ## "DNS.6": "$my_host" 213 | ## "DNS.7": "$test" 214 | ## 215 | extraDnsNames: [] 216 | ## @param tls.mode Allows to set the tls mode which should be used when tls is enabled (options: `allowTLS`, `preferTLS`, `requireTLS`) 217 | ## 218 | mode: requireTLS 219 | ## Init Container resource requests and limits 220 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 221 | ## We usually recommend not to specify default resources and to leave this as a conscious 222 | ## choice for the user. This also increases chances charts run on environments with little 223 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 224 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
225 | ## @param tls.resources.limits Init container generate-tls-certs resource limits 226 | ## @param tls.resources.requests Init container generate-tls-certs resource requests 227 | ## 228 | resources: 229 | ## Example: 230 | ## limits: 231 | ## cpu: 100m 232 | ## memory: 128Mi 233 | ## 234 | limits: {} 235 | ## Examples: 236 | ## requests: 237 | ## cpu: 100m 238 | ## memory: 128Mi 239 | ## 240 | requests: {} 241 | ## @param hostAliases Add deployment host aliases 242 | ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ 243 | ## 244 | hostAliases: [] 245 | ## @param replicaSetName Name of the replica set (only when `architecture=replicaset`) 246 | ## Ignored when mongodb.architecture=standalone 247 | ## 248 | replicaSetName: rs0 249 | ## @param replicaSetHostnames Enable DNS hostnames in the replicaset config (only when `architecture=replicaset`) 250 | ## Ignored when mongodb.architecture=standalone 251 | ## Ignored when externalAccess.enabled=true 252 | ## 253 | replicaSetHostnames: true 254 | ## @param enableIPv6 Switch to enable/disable IPv6 on MongoDB(®) 255 | ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-ipv6 256 | ## 257 | enableIPv6: false 258 | ## @param directoryPerDB Switch to enable/disable DirectoryPerDB on MongoDB(®) 259 | ## ref: https://github.com/bitnami/bitnami-docker-mongodb/blob/master/README.md#enabling/disabling-directoryperdb 260 | ## 261 | directoryPerDB: false 262 | ## MongoDB(®) System Log configuration 263 | ## ref: https://github.com/bitnami/bitnami-docker-mongodb#configuring-system-log-verbosity-level 264 | ## @param systemLogVerbosity MongoDB(®) system log verbosity level 265 | ## @param disableSystemLog Switch to enable/disable MongoDB(®) system log 266 | ## 267 | systemLogVerbosity: 0 268 | disableSystemLog: false 269 | ## @param disableJavascript Switch to enable/disable MongoDB(®) server-side JavaScript execution 270 | ## 
ref: https://docs.mongodb.com/manual/core/server-side-javascript/ 271 | ## 272 | disableJavascript: false 273 | ## @param enableJournal Switch to enable/disable MongoDB(®) Journaling 274 | ## ref: https://docs.mongodb.com/manual/reference/configuration-options/#mongodb-setting-storage.journal.enabled 275 | ## 276 | enableJournal: true 277 | ## @param configuration MongoDB(®) configuration file to be used for Primary and Secondary nodes 278 | ## For documentation of all options, see: http://docs.mongodb.org/manual/reference/configuration-options/ 279 | ## Example: 280 | ## configuration: |- 281 | ## # where and how to store data. 282 | ## storage: 283 | ## dbPath: /bitnami/mongodb/data/db 284 | ## journal: 285 | ## enabled: true 286 | ## directoryPerDB: false 287 | ## # where to write logging data 288 | ## systemLog: 289 | ## destination: file 290 | ## quiet: false 291 | ## logAppend: true 292 | ## logRotate: reopen 293 | ## path: /opt/bitnami/mongodb/logs/mongodb.log 294 | ## verbosity: 0 295 | ## # network interfaces 296 | ## net: 297 | ## port: 27017 298 | ## unixDomainSocket: 299 | ## enabled: true 300 | ## pathPrefix: /opt/bitnami/mongodb/tmp 301 | ## ipv6: false 302 | ## bindIpAll: true 303 | ## # replica set options 304 | ## #replication: 305 | ## #replSetName: replicaset 306 | ## #enableMajorityReadConcern: true 307 | ## # process management options 308 | ## processManagement: 309 | ## fork: false 310 | ## pidFilePath: /opt/bitnami/mongodb/tmp/mongodb.pid 311 | ## # set parameter options 312 | ## setParameter: 313 | ## enableLocalhostAuthBypass: true 314 | ## # security options 315 | ## security: 316 | ## authorization: disabled 317 | ## #keyFile: /opt/bitnami/mongodb/conf/keyfile 318 | ## 319 | configuration: "" 320 | ## @section replicaSetConfigurationSettings settings applied during runtime (not via configuration file) 321 | ## If enabled, these are applied by a script which is called within setup.sh 322 | ## for documentation see 
https://docs.mongodb.com/manual/reference/replica-configuration/#replica-set-configuration-fields 323 | ## @param replicaSetConfigurationSettings.enabled Switch to enable/disable configuring MongoDB(®) run-time rs.conf settings 324 | ## @param replicaSetConfigurationSettings.configuration run-time rs.conf settings 325 | ## 326 | replicaSetConfigurationSettings: 327 | enabled: false 328 | configuration: {} 329 | ## chainingAllowed : false 330 | ## heartbeatTimeoutSecs : 10 331 | ## heartbeatIntervalMillis : 2000 332 | ## electionTimeoutMillis : 10000 333 | ## catchUpTimeoutMillis : 30000 334 | ## @param existingConfigmap Name of existing ConfigMap with MongoDB(®) configuration for Primary and Secondary nodes 335 | ## NOTE: When it's set the arbiter.configuration parameter is ignored 336 | ## 337 | existingConfigmap: "" 338 | ## @param initdbScripts Dictionary of initdb scripts 339 | ## Specify dictionary of scripts to be run at first boot 340 | ## Example: 341 | ## initdbScripts: 342 | ## my_init_script.sh: | 343 | ## #!/bin/bash 344 | ## echo "Do something." 345 | ## 346 | initdbScripts: {} 347 | ## @param initdbScriptsConfigMap Existing ConfigMap with custom initdb scripts 348 | ## 349 | initdbScriptsConfigMap: "" 350 | ## Command and args for running the container (set to default if not set). 
Use array form 351 | ## @param command Override default container command (useful when using custom images) 352 | ## @param args Override default container args (useful when using custom images) 353 | ## 354 | command: [] 355 | args: [] 356 | ## @param extraFlags MongoDB(®) additional command line flags 357 | ## Example: 358 | ## extraFlags: 359 | ## - "--wiredTigerCacheSizeGB=2" 360 | ## 361 | extraFlags: [] 362 | ## @param extraEnvVars Extra environment variables to add to MongoDB(®) pods 363 | ## E.g: 364 | ## extraEnvVars: 365 | ## - name: FOO 366 | ## value: BAR 367 | ## 368 | extraEnvVars: [] 369 | ## @param extraEnvVarsCM Name of existing ConfigMap containing extra env vars 370 | ## 371 | extraEnvVarsCM: "" 372 | ## @param extraEnvVarsSecret Name of existing Secret containing extra env vars (in case of sensitive data) 373 | ## 374 | extraEnvVarsSecret: "" 375 | 376 | ## @section MongoDB(®) statefulset parameters 377 | ## 378 | 379 | ## @param annotations Additional annotations to be added to the MongoDB(®) statefulset. Evaluated as a template 380 | ## 381 | annotations: {} 382 | ## @param labels Labels to be added to the MongoDB(®) statefulset. Evaluated as a template 383 | ## 384 | labels: {} 385 | ## @param replicaCount Number of MongoDB(®) nodes (only when `architecture=replicaset`) 386 | ## Ignored when mongodb.architecture=standalone 387 | ## 388 | replicaCount: 2 389 | ## @param updateStrategy.type Strategy to use to replace existing MongoDB(®) pods. When architecture=standalone and useStatefulSet=false, 390 | ## this parameter will be applied on a deployment object. 
In other case it will be applied on a statefulset object 391 | ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies 392 | ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy 393 | ## Example: 394 | ## updateStrategy: 395 | ## type: RollingUpdate 396 | ## rollingUpdate: 397 | ## maxSurge: 25% 398 | ## maxUnavailable: 25% 399 | ## 400 | updateStrategy: 401 | type: RollingUpdate 402 | ## @param podManagementPolicy Pod management policy for MongoDB(®) 403 | ## Should be initialized one by one when building the replicaset for the first time 404 | ## 405 | podManagementPolicy: OrderedReady 406 | ## @param podAffinityPreset MongoDB(®) Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 407 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity 408 | ## 409 | podAffinityPreset: "" 410 | ## @param podAntiAffinityPreset MongoDB(®) Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 411 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity 412 | ## 413 | podAntiAffinityPreset: soft 414 | ## Node affinity preset 415 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity 416 | ## 417 | nodeAffinityPreset: 418 | ## @param nodeAffinityPreset.type MongoDB(®) Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 419 | ## 420 | type: "" 421 | ## @param nodeAffinityPreset.key MongoDB(®) Node label key to match Ignored if `affinity` is set. 422 | ## E.g. 423 | ## key: "kubernetes.io/e2e-az-name" 424 | ## 425 | key: "" 426 | ## @param nodeAffinityPreset.values MongoDB(®) Node label values to match. Ignored if `affinity` is set. 427 | ## E.g. 
428 | ## values: 429 | ## - e2e-az1 430 | ## - e2e-az2 431 | ## 432 | values: [] 433 | ## @param affinity MongoDB(®) Affinity for pod assignment 434 | ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity 435 | ## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set 436 | ## 437 | affinity: {} 438 | ## @param nodeSelector MongoDB(®) Node labels for pod assignment 439 | ## ref: https://kubernetes.io/docs/user-guide/node-selection/ 440 | ## 441 | nodeSelector: {} 442 | ## @param tolerations MongoDB(®) Tolerations for pod assignment 443 | ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ 444 | ## 445 | tolerations: [] 446 | ## @param topologySpreadConstraints MongoDB(®) Spread Constraints for Pods 447 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ 448 | ## 449 | topologySpreadConstraints: [] 450 | ## @param lifecycleHooks LifecycleHook for the MongoDB(®) container(s) to automate configuration before or after startup 451 | ## 452 | lifecycleHooks: {} 453 | ## @param terminationGracePeriodSeconds MongoDB(®) Termination Grace Period 454 | ## 455 | terminationGracePeriodSeconds: "" 456 | ## @param podLabels MongoDB(®) pod labels 457 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 458 | ## 459 | podLabels: {} 460 | ## @param podAnnotations MongoDB(®) Pod annotations 461 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ 462 | ## 463 | podAnnotations: {} 464 | ## @param priorityClassName Name of the existing priority class to be used by MongoDB(®) pod(s) 465 | ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ 466 | ## 467 | priorityClassName: "" 468 | ## @param runtimeClassName Name of the runtime class to be used by MongoDB(®) pod(s) 469 | ## ref: 
https://kubernetes.io/docs/concepts/containers/runtime-class/ 470 | ## 471 | runtimeClassName: "" 472 | ## MongoDB(®) pods' Security Context. 473 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod 474 | ## @param podSecurityContext.enabled Enable MongoDB(®) pod(s)' Security Context 475 | ## @param podSecurityContext.fsGroup Group ID for the volumes of the MongoDB(®) pod(s) 476 | ## @param podSecurityContext.sysctls sysctl settings of the MongoDB(®) pod(s)' 477 | ## 478 | podSecurityContext: 479 | enabled: true 480 | fsGroup: 1001 481 | ## sysctl settings 482 | ## Example: 483 | ## sysctls: 484 | ## - name: net.core.somaxconn 485 | ## value: "10000" 486 | ## 487 | sysctls: [] 488 | ## MongoDB(®) containers' Security Context (main and metrics container). 489 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container 490 | ## @param containerSecurityContext.enabled Enable MongoDB(®) container(s)' Security Context 491 | ## @param containerSecurityContext.runAsUser User ID for the MongoDB(®) container 492 | ## @param containerSecurityContext.runAsNonRoot Set MongoDB(®) container's Security Context runAsNonRoot 493 | ## 494 | containerSecurityContext: 495 | enabled: true 496 | runAsUser: 1001 497 | runAsNonRoot: true 498 | ## MongoDB(®) containers' resource requests and limits. 499 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 500 | ## We usually recommend not to specify default resources and to leave this as a conscious 501 | ## choice for the user. This also increases chances charts run on environments with little 502 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 503 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
504 | ## @param resources.limits The resources limits for MongoDB(®) containers 505 | ## @param resources.requests The requested resources for MongoDB(®) containers 506 | ## 507 | resources: 508 | ## Example: 509 | ## limits: 510 | ## cpu: 100m 511 | ## memory: 128Mi 512 | ## 513 | limits: {} 514 | ## Examples: 515 | ## requests: 516 | ## cpu: 100m 517 | ## memory: 128Mi 518 | ## 519 | requests: {} 520 | ## @param containerPorts.mongodb MongoDB(®) container port 521 | containerPorts: 522 | mongodb: 27017 523 | ## MongoDB(®) pods' liveness probe. Evaluated as a template. 524 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 525 | ## @param livenessProbe.enabled Enable livenessProbe 526 | ## @param livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 527 | ## @param livenessProbe.periodSeconds Period seconds for livenessProbe 528 | ## @param livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 529 | ## @param livenessProbe.failureThreshold Failure threshold for livenessProbe 530 | ## @param livenessProbe.successThreshold Success threshold for livenessProbe 531 | ## 532 | livenessProbe: 533 | enabled: true 534 | initialDelaySeconds: 30 535 | periodSeconds: 20 536 | timeoutSeconds: 10 537 | failureThreshold: 6 538 | successThreshold: 1 539 | ## MongoDB(®) pods' readiness probe. Evaluated as a template. 
540 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 541 | ## @param readinessProbe.enabled Enable readinessProbe 542 | ## @param readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 543 | ## @param readinessProbe.periodSeconds Period seconds for readinessProbe 544 | ## @param readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 545 | ## @param readinessProbe.failureThreshold Failure threshold for readinessProbe 546 | ## @param readinessProbe.successThreshold Success threshold for readinessProbe 547 | ## 548 | readinessProbe: 549 | enabled: true 550 | initialDelaySeconds: 5 551 | periodSeconds: 10 552 | timeoutSeconds: 5 553 | failureThreshold: 6 554 | successThreshold: 1 555 | ## Slow starting containers can be protected through startup probes 556 | ## Startup probes are available in Kubernetes version 1.16 and above 557 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes 558 | ## @param startupProbe.enabled Enable startupProbe 559 | ## @param startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 560 | ## @param startupProbe.periodSeconds Period seconds for startupProbe 561 | ## @param startupProbe.timeoutSeconds Timeout seconds for startupProbe 562 | ## @param startupProbe.failureThreshold Failure threshold for startupProbe 563 | ## @param startupProbe.successThreshold Success threshold for startupProbe 564 | ## 565 | startupProbe: 566 | enabled: false 567 | initialDelaySeconds: 5 568 | periodSeconds: 20 569 | timeoutSeconds: 10 570 | successThreshold: 1 571 | failureThreshold: 30 572 | ## @param customLivenessProbe Override default liveness probe for MongoDB(®) containers 573 | ## Ignored when livenessProbe.enabled=true 574 | ## 575 | customLivenessProbe: {} 576 | ## @param customReadinessProbe Override default readiness probe for MongoDB(®) containers 577 | ## Ignored when 
readinessProbe.enabled=true 578 | ## 579 | customReadinessProbe: {} 580 | ## @param customStartupProbe Override default startup probe for MongoDB(®) containers 581 | ## Ignored when startupProbe.enabled=true 582 | ## 583 | customStartupProbe: {} 584 | ## @param initContainers Add additional init containers for the hidden node pod(s) 585 | ## Example: 586 | ## initContainers: 587 | ## - name: your-image-name 588 | ## image: your-image 589 | ## imagePullPolicy: Always 590 | ## ports: 591 | ## - name: portname 592 | ## containerPort: 1234 593 | ## 594 | initContainers: [] 595 | ## @param sidecars Add additional sidecar containers for the MongoDB(®) pod(s) 596 | ## Example: 597 | ## sidecars: 598 | ## - name: your-image-name 599 | ## image: your-image 600 | ## imagePullPolicy: Always 601 | ## ports: 602 | ## - name: portname 603 | ## containerPort: 1234 604 | ## This is an optional 'mongo-labeler' sidecar container that tracks replica-set for the primary mongodb pod 605 | ## and labels it dynamically with ' primary: "true" ' in order for an extra-deployed service to always expose 606 | ## and attach to the primary pod, this needs to be uncommented along with the suggested 'extraDeploy' example 607 | ## and the suggested rbac example for the pod to be allowed adding labels to mongo replica pods 608 | ## search 'mongo-labeler' through this file to find the sections that needs to be uncommented to make it work 609 | ## 610 | ## - name: mongo-labeler 611 | ## image: korenlev/k8s-mongo-labeler-sidecar 612 | ## imagePullPolicy: Always 613 | ## env: 614 | ## - name: LABEL_SELECTOR 615 | ## value: "app.kubernetes.io/component=mongodb,app.kubernetes.io/instance=mongodb,app.kubernetes.io/name=mongodb" 616 | ## - name: NAMESPACE 617 | ## value: "the-mongodb-namespace" 618 | ## - name: DEBUG 619 | ## value: "true" 620 | ## 621 | sidecars: [] 622 | ## @param extraVolumeMounts Optionally specify extra list of additional volumeMounts for the MongoDB(®) container(s) 623 | ## Examples: 
624 | ## extraVolumeMounts: 625 | ## - name: extras 626 | ## mountPath: /usr/share/extras 627 | ## readOnly: true 628 | ## 629 | extraVolumeMounts: [] 630 | ## @param extraVolumes Optionally specify extra list of additional volumes to the MongoDB(®) statefulset 631 | ## extraVolumes: 632 | ## - name: extras 633 | ## emptyDir: {} 634 | ## 635 | extraVolumes: [] 636 | ## MongoDB(®) Pod Disruption Budget configuration 637 | ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ 638 | ## 639 | pdb: 640 | ## @param pdb.create Enable/disable a Pod Disruption Budget creation for MongoDB(®) pod(s) 641 | ## 642 | create: false 643 | ## @param pdb.minAvailable Minimum number/percentage of MongoDB(®) pods that must still be available after the eviction 644 | ## 645 | minAvailable: 1 646 | ## @param pdb.maxUnavailable Maximum number/percentage of MongoDB(®) pods that may be made unavailable after the eviction 647 | ## 648 | maxUnavailable: "" 649 | 650 | ## @section Traffic exposure parameters 651 | ## 652 | 653 | ## Service parameters 654 | ## 655 | service: 656 | ## @param service.nameOverride MongoDB(®) service name 657 | ## 658 | nameOverride: "" 659 | ## @param service.type Kubernetes Service type (only for standalone architecture) 660 | ## 661 | type: ClusterIP 662 | ## @param service.portName MongoDB(®) service port name (only for standalone architecture) 663 | ## 664 | portName: mongodb 665 | ## @param service.ports.mongodb MongoDB(®) service port. 
666 | ## 667 | ports: 668 |   mongodb: 27017 669 | ## @param service.nodePorts.mongodb Port to bind to for NodePort and LoadBalancer service types (only for standalone architecture) 670 | ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport 671 | ## 672 | nodePorts: 673 |   mongodb: "" 674 | ## @param service.clusterIP MongoDB(®) service cluster IP (only for standalone architecture) 675 | ## e.g: 676 | ## clusterIP: None 677 | ## 678 | clusterIP: "" 679 | ## @param service.externalIPs Specify the externalIPs value for the ClusterIP service type (only for standalone architecture) 680 | ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips 681 | ## 682 | externalIPs: [] 683 | ## @param service.loadBalancerIP loadBalancerIP for MongoDB(®) Service (only for standalone architecture) 684 | ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer 685 | ## 686 | loadBalancerIP: "" 687 | ## @param service.loadBalancerSourceRanges Address(es) that are allowed when service is LoadBalancer (only for standalone architecture) 688 | ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service 689 | ## 690 | loadBalancerSourceRanges: [] 691 | ## @param service.extraPorts Extra ports to expose (normally used with the `sidecar` value) 692 | ## 693 | extraPorts: [] 694 | ## @param service.annotations Provide any additional annotations that may be required 695 | ## 696 | annotations: {} 697 | ## @param service.externalTrafficPolicy service external traffic policy (only for standalone architecture) 698 | ## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip 699 | ## 700 | externalTrafficPolicy: Local 701 | ## @param service.sessionAffinity Control where client requests go, to the same pod or round-robin 702 | ## Values: ClientIP or None 703 | ##
ref: https://kubernetes.io/docs/user-guide/services/ 704 | ## 705 | sessionAffinity: None 706 | ## @param service.sessionAffinityConfig Additional settings for the sessionAffinity 707 | ## sessionAffinityConfig: 708 | ##   clientIP: 709 | ##     timeoutSeconds: 300 710 | ## 711 | sessionAffinityConfig: {} 712 | ## External Access to MongoDB(®) nodes configuration 713 | ## 714 | externalAccess: 715 | ## @param externalAccess.enabled Enable Kubernetes external cluster access to MongoDB(®) nodes (only for replicaset architecture) 716 | ## 717 | enabled: false 718 | ## External IPs auto-discovery configuration 719 | ## An init container is used to auto-detect LB IPs or node ports by querying the K8s API 720 | ## Note: RBAC might be required 721 | ## 722 | autoDiscovery: 723 | ## @param externalAccess.autoDiscovery.enabled Enable using an init container to auto-detect external IPs by querying the K8s API 724 | ## 725 | enabled: false 726 | ## Bitnami Kubectl image 727 | ## ref: https://hub.docker.com/r/bitnami/kubectl/tags/ 728 | ## @param externalAccess.autoDiscovery.image.registry Init container auto-discovery image registry 729 | ## @param externalAccess.autoDiscovery.image.repository Init container auto-discovery image repository 730 | ## @param externalAccess.autoDiscovery.image.tag Init container auto-discovery image tag (immutable tags are recommended) 731 | ## @param externalAccess.autoDiscovery.image.pullPolicy Init container auto-discovery image pull policy 732 | ## @param externalAccess.autoDiscovery.image.pullSecrets Init container auto-discovery image pull secrets 733 | ## 734 | image: 735 |   registry: docker.io 736 |   repository: bitnami/kubectl 737 |   tag: 1.24.2-debian-11-r5 738 | ## Specify an imagePullPolicy 739 | ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' 740 | ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images 741 | ## 742 | pullPolicy: IfNotPresent 743 | ## Optionally specify an array of imagePullSecrets
(secrets must be manually created in the namespace) 744 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 745 | ## Example: 746 | ## pullSecrets: 747 | ## - myRegistryKeySecretName 748 | ## 749 | pullSecrets: [] 750 | ## Init Container resource requests and limits 751 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 752 | ## We usually recommend not to specify default resources and to leave this as a conscious 753 | ## choice for the user. This also increases chances charts run on environments with little 754 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 755 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 756 | ## @param externalAccess.autoDiscovery.resources.limits Init container auto-discovery resource limits 757 | ## @param externalAccess.autoDiscovery.resources.requests Init container auto-discovery resource requests 758 | ## 759 | resources: 760 | ## Example: 761 | ## limits: 762 | ## cpu: 100m 763 | ## memory: 128Mi 764 | ## 765 | limits: {} 766 | ## Examples: 767 | ## requests: 768 | ## cpu: 100m 769 | ## memory: 128Mi 770 | ## 771 | requests: {} 772 | ## Parameters to configure K8s service(s) used to externally access MongoDB(®) 773 | ## A new service per broker will be created 774 | ## 775 | service: 776 | ## @param externalAccess.service.type Kubernetes Service type for external access. 
Allowed values: NodePort, LoadBalancer or ClusterIP 777 | ## 778 | type: LoadBalancer 779 | ## @param externalAccess.service.portName MongoDB(®) port name used for external access when service type is LoadBalancer 780 | ## 781 | portName: "mongodb" 782 | ## @param externalAccess.service.ports.mongodb MongoDB(®) port used for external access when service type is LoadBalancer 783 | ## 784 | ports: 785 | mongodb: 27017 786 | ## @param externalAccess.service.loadBalancerIPs Array of load balancer IPs for MongoDB(®) nodes 787 | ## Example: 788 | ## loadBalancerIPs: 789 | ## - X.X.X.X 790 | ## - Y.Y.Y.Y 791 | ## 792 | loadBalancerIPs: [] 793 | ## @param externalAccess.service.loadBalancerSourceRanges Address(es) that are allowed when service is LoadBalancer 794 | ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service 795 | ## Example: 796 | ## loadBalancerSourceRanges: 797 | ## - 10.10.10.0/24 798 | ## 799 | loadBalancerSourceRanges: [] 800 | ## @param externalAccess.service.externalTrafficPolicy MongoDB(®) service external traffic policy 801 | ## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip 802 | ## 803 | externalTrafficPolicy: Local 804 | ## @param externalAccess.service.nodePorts Array of node ports used to configure MongoDB(®) advertised hostname when service type is NodePort 805 | ## Example: 806 | ## nodePorts: 807 | ## - 30001 808 | ## - 30002 809 | ## 810 | nodePorts: [] 811 | ## @param externalAccess.service.domain Domain or external IP used to configure MongoDB(®) advertised hostname when service type is NodePort 812 | ## If not specified, the container will try to get the kubernetes node external IP 813 | ## e.g: 814 | ## domain: mydomain.com 815 | ## 816 | domain: "" 817 | ## @param externalAccess.service.extraPorts Extra ports to expose (normally used with the `sidecar` value) 818 | ## 819 
| extraPorts: [] 820 | ## @param externalAccess.service.annotations Service annotations for external access 821 | ## 822 | annotations: {} 823 | ## @param externalAccess.service.sessionAffinity Control where client requests go, to the same pod or round-robin 824 | ## Values: ClientIP or None 825 | ## ref: https://kubernetes.io/docs/user-guide/services/ 826 | ## 827 | sessionAffinity: None 828 | ## @param externalAccess.service.sessionAffinityConfig Additional settings for the sessionAffinity 829 | ## sessionAffinityConfig: 830 | ## clientIP: 831 | ## timeoutSeconds: 300 832 | ## 833 | sessionAffinityConfig: {} 834 | ## External Access to MongoDB(®) Hidden nodes configuration 835 | ## 836 | hidden: 837 | ## @param externalAccess.hidden.enabled Enable Kubernetes external cluster access to MongoDB(®) hidden nodes 838 | ## 839 | enabled: false 840 | ## Parameters to configure K8s service(s) used to externally access MongoDB(®) 841 | ## A new service per broker will be created 842 | ## 843 | service: 844 | ## @param externalAccess.hidden.service.type Kubernetes Service type for external access. 
Allowed values: NodePort or LoadBalancer 845 | ## 846 | type: LoadBalancer 847 | ## @param externalAccess.hidden.service.portName MongoDB(®) port name used for external access when service type is LoadBalancer 848 | ## 849 | portName: "mongodb" 850 | ## @param externalAccess.hidden.service.ports.mongodb MongoDB(®) port used for external access when service type is LoadBalancer 851 | ## 852 | ports: 853 | mongodb: 27017 854 | ## @param externalAccess.hidden.service.loadBalancerIPs Array of load balancer IPs for MongoDB(®) nodes 855 | ## Example: 856 | ## loadBalancerIPs: 857 | ## - X.X.X.X 858 | ## - Y.Y.Y.Y 859 | ## 860 | loadBalancerIPs: [] 861 | ## @param externalAccess.hidden.service.loadBalancerSourceRanges Address(es) that are allowed when service is LoadBalancer 862 | ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service 863 | ## Example: 864 | ## loadBalancerSourceRanges: 865 | ## - 10.10.10.0/24 866 | ## 867 | loadBalancerSourceRanges: [] 868 | ## @param externalAccess.hidden.service.externalTrafficPolicy MongoDB(®) service external traffic policy 869 | ## ref https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip 870 | ## 871 | externalTrafficPolicy: Local 872 | ## @param externalAccess.hidden.service.nodePorts Array of node ports used to configure MongoDB(®) advertised hostname when service type is NodePort. 
Length must be the same as replicaCount 873 | ## Example: 874 | ## nodePorts: 875 | ## - 30001 876 | ## - 30002 877 | ## 878 | nodePorts: [] 879 | ## @param externalAccess.hidden.service.domain Domain or external IP used to configure MongoDB(®) advertised hostname when service type is NodePort 880 | ## If not specified, the container will try to get the kubernetes node external IP 881 | ## e.g: 882 | ## domain: mydomain.com 883 | ## 884 | domain: "" 885 | ## @param externalAccess.hidden.service.extraPorts Extra ports to expose (normally used with the `sidecar` value) 886 | ## 887 | extraPorts: [] 888 | ## @param externalAccess.hidden.service.annotations Service annotations for external access 889 | ## 890 | annotations: {} 891 | ## @param externalAccess.hidden.service.sessionAffinity Control where client requests go, to the same pod or round-robin 892 | ## Values: ClientIP or None 893 | ## ref: https://kubernetes.io/docs/user-guide/services/ 894 | ## 895 | sessionAffinity: None 896 | ## @param externalAccess.hidden.service.sessionAffinityConfig Additional settings for the sessionAffinity 897 | ## sessionAffinityConfig: 898 | ## clientIP: 899 | ## timeoutSeconds: 300 900 | ## 901 | sessionAffinityConfig: {} 902 | 903 | ## @section Persistence parameters 904 | ## 905 | 906 | ## Enable persistence using Persistent Volume Claims 907 | ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ 908 | ## 909 | persistence: 910 | ## @param persistence.enabled Enable MongoDB(®) data persistence using PVC 911 | ## 912 | enabled: true 913 | ## @param persistence.medium Provide a medium for `emptyDir` volumes. 
914 | ## Requires persistence.enabled: false 915 | ## 916 | medium: "" 917 | ## @param persistence.existingClaim Provide an existing `PersistentVolumeClaim` (only when `architecture=standalone`) 918 | ## Requires persistence.enabled: true 919 | ## If defined, PVC must be created manually before volume will be bound 920 | ## Ignored when mongodb.architecture=replicaset 921 | ## 922 | existingClaim: "" 923 | ## @param persistence.resourcePolicy Set it to "keep" to avoid removing PVCs during a helm delete operation. Leaving it empty will delete the PVCs after the chart is deleted 924 | resourcePolicy: "" 925 | ## @param persistence.storageClass PVC Storage Class for MongoDB(®) data volume 926 | ## If defined, storageClassName: <storageClass> 927 | ## If set to "-", storageClassName: "", which disables dynamic provisioning 928 | ## If undefined (the default) or set to null, no storageClassName spec is 929 | ## set, choosing the default provisioner. 930 | ## 931 | storageClass: "" 932 | ## @param persistence.accessModes PV Access Mode 933 | ## 934 | accessModes: 935 |   - ReadWriteOnce 936 | ## @param persistence.size PVC Storage Request for MongoDB(®) data volume 937 | ## 938 | size: 8Gi 939 | ## @param persistence.annotations PVC annotations 940 | ## 941 | annotations: {} 942 | ## @param persistence.mountPath Path to mount the volume at 943 | ## Useful when using different MongoDB(®) images. 944 | ## 945 | mountPath: /bitnami/mongodb 946 | ## @param persistence.subPath Subdirectory of the volume to mount at 947 | ## Useful in dev environments and one PV for multiple services. 948 | ## 949 | subPath: "" 950 | ## Fine tuning for volumeClaimTemplates 951 | ## 952 | volumeClaimTemplates: 953 | ## @param persistence.volumeClaimTemplates.selector A label query over volumes to consider for binding (e.g. when using local volumes) 954 | ## A label query over volumes to consider for binding (e.g.
when using local volumes) 955 | ## See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#labelselector-v1-meta for more details 956 | ## 957 | selector: {} 958 | ## @param persistence.volumeClaimTemplates.requests Custom PVC requests attributes 959 | ## Sometimes cloud providers use additional requests attributes to provision a custom storage instance 960 | ## See https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_dynamic_statefulset 961 | ## 962 | requests: {} 963 | ## @param persistence.volumeClaimTemplates.dataSource Add dataSource to the VolumeClaimTemplate 964 | ## 965 | dataSource: {} 966 | 967 | ## @section RBAC parameters 968 | ## 969 | 970 | ## ServiceAccount 971 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ 972 | ## 973 | serviceAccount: 974 | ## @param serviceAccount.create Enable creation of ServiceAccount for MongoDB(®) pods 975 | ## 976 | create: true 977 | ## @param serviceAccount.name Name of the created serviceAccount 978 | ## If not set and create is true, a name is generated using the mongodb.fullname template 979 | ## 980 | name: "" 981 | ## @param serviceAccount.annotations Additional Service Account annotations 982 | ## 983 | annotations: {} 984 | ## @param serviceAccount.automountServiceAccountToken Allows auto mount of ServiceAccountToken on the serviceAccount created 985 | ## Can be set to false if pods using this serviceAccount do not need to use the K8s API 986 | ## 987 | automountServiceAccountToken: true 988 | ## Role Based Access 989 | ## ref: https://kubernetes.io/docs/admin/authorization/rbac/ 990 | ## 991 | rbac: 992 | ## @param rbac.create Whether to create & use RBAC resources or not 993 | ## binding MongoDB(®) ServiceAccount to a role 994 | ## that allows MongoDB(®) pods to query the K8s API 995 | ## This needs to be set to 'true' to enable primary mongodb pod discovery via the mongo-labeler sidecar 996 | ## 997 | create: false 998 | ## @param rbac.rules
Custom rules to create following the role specification 999 | ## The example below needs to be uncommented to use the 'mongo-labeler' sidecar for dynamic discovery of the primary mongodb pod: 1000 | ## rules: 1001 | ## - apiGroups: 1002 | ## - "" 1003 | ## resources: 1004 | ## - pods 1005 | ## verbs: 1006 | ## - get 1007 | ## - list 1008 | ## - watch 1009 | ## - update 1010 | ## 1011 | rules: [] 1012 | ## PodSecurityPolicy configuration 1013 | ## Be sure to also set rbac.create to true, otherwise Role and RoleBinding won't be created. 1014 | ## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ 1015 | ## 1016 | podSecurityPolicy: 1017 | ## @param podSecurityPolicy.create Whether to create a PodSecurityPolicy. WARNING: PodSecurityPolicy is deprecated in Kubernetes v1.21 or later, unavailable in v1.25 or later 1018 | ## 1019 | create: false 1020 | ## @param podSecurityPolicy.allowPrivilegeEscalation Enable privilege escalation 1021 | ## Either use predefined policy with some adjustments or use `podSecurityPolicy.spec` 1022 | ## 1023 | allowPrivilegeEscalation: false 1024 | ## @param podSecurityPolicy.privileged Allow privileged 1025 | ## 1026 | privileged: false 1027 | ## @param podSecurityPolicy.spec Specify the full spec to use for Pod Security Policy 1028 | ## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ 1029 | ## Defining a spec ignores the above values. 
1030 | ## 1031 | spec: {} 1032 | ## Example: 1033 | ##   allowPrivilegeEscalation: false 1034 | ##   fsGroup: 1035 | ##     rule: 'MustRunAs' 1036 | ##     ranges: 1037 | ##       - min: 1001 1038 | ##         max: 1001 1039 | ##   hostIPC: false 1040 | ##   hostNetwork: false 1041 | ##   hostPID: false 1042 | ##   privileged: false 1043 | ##   readOnlyRootFilesystem: false 1044 | ##   requiredDropCapabilities: 1045 | ##     - ALL 1046 | ##   runAsUser: 1047 | ##     rule: 'MustRunAs' 1048 | ##     ranges: 1049 | ##       - min: 1001 1050 | ##         max: 1001 1051 | ##   seLinux: 1052 | ##     rule: 'RunAsAny' 1053 | ##   supplementalGroups: 1054 | ##     rule: 'MustRunAs' 1055 | ##     ranges: 1056 | ##       - min: 1001 1057 | ##         max: 1001 1058 | ##   volumes: 1059 | ##     - 'configMap' 1060 | ##     - 'secret' 1061 | ##     - 'emptyDir' 1062 | ##     - 'persistentVolumeClaim' 1063 | ## 1064 | 1065 | ## @section Volume Permissions parameters 1066 | ## 1067 | ## Init Container parameters 1068 | ## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component 1069 | ## values from the securityContext section of the component 1070 | ## 1071 | volumePermissions: 1072 | ## @param volumePermissions.enabled Enable init container that changes the owner and group of the persistent volume(s) mountpoint to `runAsUser:fsGroup` 1073 | ## 1074 | enabled: false 1075 | ## @param volumePermissions.image.registry Init container volume-permissions image registry 1076 | ## @param volumePermissions.image.repository Init container volume-permissions image repository 1077 | ## @param volumePermissions.image.tag Init container volume-permissions image tag (immutable tags are recommended) 1078 | ## @param volumePermissions.image.pullPolicy Init container volume-permissions image pull policy 1079 | ## @param volumePermissions.image.pullSecrets Specify docker-registry secret names as an array 1080 | ## 1081 | image: 1082 |   registry: docker.io 1083 |   repository: bitnami/bitnami-shell 1084 |   tag: 11-debian-11-r13 1085 | ## Specify an
imagePullPolicy 1086 | ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' 1087 | ## ref: https://kubernetes.io/docs/user-guide/images/#pre-pulling-images 1088 | ## 1089 | pullPolicy: IfNotPresent 1090 | ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace) 1091 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 1092 | ## Example: 1093 | ## pullSecrets: 1094 | ## - myRegistryKeySecretName 1095 | ## 1096 | pullSecrets: [] 1097 | ## Init Container resource requests and limits 1098 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 1099 | ## We usually recommend not to specify default resources and to leave this as a conscious 1100 | ## choice for the user. This also increases chances charts run on environments with little 1101 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 1102 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
1103 | ## @param volumePermissions.resources.limits Init container volume-permissions resource limits 1104 | ## @param volumePermissions.resources.requests Init container volume-permissions resource requests 1105 | ## 1106 | resources: 1107 | ## Example: 1108 | ## limits: 1109 | ##    cpu: 100m 1110 | ##    memory: 128Mi 1111 | ## 1112 | limits: {} 1113 | ## Examples: 1114 | ## requests: 1115 | ##    cpu: 100m 1116 | ##    memory: 128Mi 1117 | ## 1118 | requests: {} 1119 | ## Init container Security Context 1120 | ## Note: the chown of the data folder is done to containerSecurityContext.runAsUser 1121 | ## and not the below volumePermissions.securityContext.runAsUser 1122 | ## When runAsUser is set to the special value "auto", the init container will try to chown the 1123 | ## data folder to an auto-determined user and group, using the commands: `id -u`:`id -G | cut -d" " -f2` 1124 | ## "auto" is especially useful for OpenShift, which has SCCs with dynamic user IDs (and 0 is not allowed). 1125 | ## You may want to use volumePermissions.securityContext.runAsUser="auto" in combination with 1126 | ## podSecurityContext.enabled=false,containerSecurityContext.enabled=false and shmVolume.chmod.enabled=false 1127 | ## @param volumePermissions.securityContext.runAsUser User ID for the volumePermissions container 1128 | ## 1129 | securityContext: 1130 |   runAsUser: 0 1131 | 1132 | ## @section Arbiter parameters 1133 | ## 1134 | 1135 | arbiter: 1136 | ## @param arbiter.enabled Enable deploying the arbiter 1137 | ##   https://docs.mongodb.com/manual/tutorial/add-replica-set-arbiter/ 1138 | ## 1139 | enabled: true 1140 | ## @param arbiter.hostAliases Add deployment host aliases 1141 | ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ 1142 | ## 1143 | hostAliases: [] 1144 | ## @param arbiter.configuration Arbiter configuration file to be used 1145 | ## http://docs.mongodb.org/manual/reference/configuration-options/ 1146 | ## 1147 | configuration: "" 1148
| ## @param arbiter.existingConfigmap Name of existing ConfigMap with Arbiter configuration 1149 | ## NOTE: When it's set, the arbiter.configuration parameter is ignored 1150 | ## 1151 | existingConfigmap: "" 1152 | ## Command and args for running the container (set to default if not set). Use array form 1153 | ## @param arbiter.command Override default container command (useful when using custom images) 1154 | ## @param arbiter.args Override default container args (useful when using custom images) 1155 | ## 1156 | command: [] 1157 | args: [] 1158 | ## @param arbiter.extraFlags Arbiter additional command line flags 1159 | ## Example: 1160 | ## extraFlags: 1161 | ##  - "--wiredTigerCacheSizeGB=2" 1162 | ## 1163 | extraFlags: [] 1164 | ## @param arbiter.extraEnvVars Extra environment variables to add to Arbiter pods 1165 | ## E.g: 1166 | ## extraEnvVars: 1167 | ##  - name: FOO 1168 | ##    value: BAR 1169 | ## 1170 | extraEnvVars: [] 1171 | ## @param arbiter.extraEnvVarsCM Name of existing ConfigMap containing extra env vars 1172 | ## 1173 | extraEnvVarsCM: "" 1174 | ## @param arbiter.extraEnvVarsSecret Name of existing Secret containing extra env vars (in case of sensitive data) 1175 | ## 1176 | extraEnvVarsSecret: "" 1177 | ## @param arbiter.annotations Additional annotations to be added to the Arbiter statefulset 1178 | ## 1179 | annotations: {} 1180 | ## @param arbiter.labels Additional labels to be added to the Arbiter statefulset 1181 | ## 1182 | labels: {} 1183 | ## @param arbiter.topologySpreadConstraints MongoDB(®) Spread Constraints for arbiter Pods 1184 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ 1185 | ## 1186 | topologySpreadConstraints: [] 1187 | ## @param arbiter.lifecycleHooks LifecycleHook for the Arbiter container to automate configuration before or after startup 1188 | ## 1189 | lifecycleHooks: {} 1190 | ## @param arbiter.terminationGracePeriodSeconds Arbiter Termination Grace Period 1191 | ## 1192
terminationGracePeriodSeconds: "" 1193 | ## @param arbiter.updateStrategy.type Strategy that will be employed to update Pods in the StatefulSet 1194 | ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies 1195 | ## updateStrategy: 1196 | ##  type: RollingUpdate 1197 | ##  rollingUpdate: 1198 | ##    maxSurge: 25% 1199 | ##    maxUnavailable: 25% 1200 | ## 1201 | updateStrategy: 1202 |   type: RollingUpdate 1203 | ## @param arbiter.podManagementPolicy Pod management policy for MongoDB(®) 1204 | ## Should be initialized one by one when building the replicaset for the first time 1205 | ## 1206 | podManagementPolicy: OrderedReady 1207 | ## @param arbiter.schedulerName Name of the scheduler (other than default) to dispatch pods 1208 | ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ 1209 | ## 1210 | schedulerName: "" 1211 | ## @param arbiter.podAffinityPreset Arbiter Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 1212 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity 1213 | ## 1214 | podAffinityPreset: "" 1215 | ## @param arbiter.podAntiAffinityPreset Arbiter Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 1216 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity 1217 | ## 1218 | podAntiAffinityPreset: soft 1219 | ## Node affinity preset 1220 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity 1221 | ## 1222 | nodeAffinityPreset: 1223 | ## @param arbiter.nodeAffinityPreset.type Arbiter Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 1224 | ## 1225 | type: "" 1226 | ## @param arbiter.nodeAffinityPreset.key Arbiter Node label key to match. Ignored if `affinity` is set. 1227 | ## E.g.
1228 | ## key: "kubernetes.io/e2e-az-name" 1229 | ## 1230 | key: "" 1231 | ## @param arbiter.nodeAffinityPreset.values Arbiter Node label values to match. Ignored if `affinity` is set. 1232 | ## E.g. 1233 | ## values: 1234 | ## - e2e-az1 1235 | ## - e2e-az2 1236 | ## 1237 | values: [] 1238 | ## @param arbiter.affinity Arbiter Affinity for pod assignment 1239 | ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity 1240 | ## Note: arbiter.podAffinityPreset, arbiter.podAntiAffinityPreset, and arbiter.nodeAffinityPreset will be ignored when it's set 1241 | ## 1242 | affinity: {} 1243 | ## @param arbiter.nodeSelector Arbiter Node labels for pod assignment 1244 | ## ref: https://kubernetes.io/docs/user-guide/node-selection/ 1245 | ## 1246 | nodeSelector: {} 1247 | ## @param arbiter.tolerations Arbiter Tolerations for pod assignment 1248 | ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ 1249 | ## 1250 | tolerations: [] 1251 | ## @param arbiter.podLabels Arbiter pod labels 1252 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 1253 | ## 1254 | podLabels: {} 1255 | ## @param arbiter.podAnnotations Arbiter Pod annotations 1256 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ 1257 | ## 1258 | podAnnotations: {} 1259 | ## @param arbiter.priorityClassName Name of the existing priority class to be used by Arbiter pod(s) 1260 | ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ 1261 | ## 1262 | priorityClassName: "" 1263 | ## @param arbiter.runtimeClassName Name of the runtime class to be used by Arbiter pod(s) 1264 | ## ref: https://kubernetes.io/docs/concepts/containers/runtime-class/ 1265 | ## 1266 | runtimeClassName: "" 1267 | ## MongoDB(®) Arbiter pods' Security Context. 
1268 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod 1269 | ## @param arbiter.podSecurityContext.enabled Enable Arbiter pod(s)' Security Context 1270 | ## @param arbiter.podSecurityContext.fsGroup Group ID for the volumes of the Arbiter pod(s) 1271 | ## @param arbiter.podSecurityContext.sysctls sysctl settings of the Arbiter pod(s)' 1272 | ## 1273 | podSecurityContext: 1274 | enabled: true 1275 | fsGroup: 1001 1276 | ## sysctl settings 1277 | ## Example: 1278 | ## sysctls: 1279 | ## - name: net.core.somaxconn 1280 | ## value: "10000" 1281 | ## 1282 | sysctls: [] 1283 | ## MongoDB(®) Arbiter containers' Security Context (only main container). 1284 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container 1285 | ## @param arbiter.containerSecurityContext.enabled Enable Arbiter container(s)' Security Context 1286 | ## @param arbiter.containerSecurityContext.runAsUser User ID for the Arbiter container 1287 | ## @param arbiter.containerSecurityContext.runAsNonRoot Set Arbiter containers' Security Context runAsNonRoot 1288 | ## 1289 | containerSecurityContext: 1290 | enabled: true 1291 | runAsUser: 1001 1292 | runAsNonRoot: true 1293 | ## MongoDB(®) Arbiter containers' resource requests and limits. 1294 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 1295 | ## We usually recommend not to specify default resources and to leave this as a conscious 1296 | ## choice for the user. This also increases chances charts run on environments with little 1297 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 1298 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
1299 | ## @param arbiter.resources.limits The resources limits for Arbiter containers 1300 | ## @param arbiter.resources.requests The requested resources for Arbiter containers 1301 | ## 1302 | resources: 1303 | ## Example: 1304 | ## limits: 1305 | ## cpu: 100m 1306 | ## memory: 128Mi 1307 | ## 1308 | limits: {} 1309 | ## Examples: 1310 | ## requests: 1311 | ## cpu: 100m 1312 | ## memory: 128Mi 1313 | ## 1314 | requests: {} 1315 | ## @param arbiter.containerPorts.mongodb MongoDB(®) arbiter container port 1316 | ## 1317 | containerPorts: 1318 | mongodb: 27017 1319 | ## MongoDB(®) Arbiter pods' liveness probe. Evaluated as a template. 1320 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 1321 | ## @param arbiter.livenessProbe.enabled Enable livenessProbe 1322 | ## @param arbiter.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 1323 | ## @param arbiter.livenessProbe.periodSeconds Period seconds for livenessProbe 1324 | ## @param arbiter.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 1325 | ## @param arbiter.livenessProbe.failureThreshold Failure threshold for livenessProbe 1326 | ## @param arbiter.livenessProbe.successThreshold Success threshold for livenessProbe 1327 | ## 1328 | livenessProbe: 1329 | enabled: true 1330 | initialDelaySeconds: 30 1331 | periodSeconds: 20 1332 | timeoutSeconds: 10 1333 | failureThreshold: 6 1334 | successThreshold: 1 1335 | ## MongoDB(®) Arbiter pods' readiness probe. Evaluated as a template. 
1336 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 1337 | ## @param arbiter.readinessProbe.enabled Enable readinessProbe 1338 | ## @param arbiter.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 1339 | ## @param arbiter.readinessProbe.periodSeconds Period seconds for readinessProbe 1340 | ## @param arbiter.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 1341 | ## @param arbiter.readinessProbe.failureThreshold Failure threshold for readinessProbe 1342 | ## @param arbiter.readinessProbe.successThreshold Success threshold for readinessProbe 1343 | ## 1344 | readinessProbe: 1345 | enabled: true 1346 | initialDelaySeconds: 5 1347 | periodSeconds: 20 1348 | timeoutSeconds: 10 1349 | failureThreshold: 6 1350 | successThreshold: 1 1351 | ## MongoDB(®) Arbiter pods' startup probe. Evaluated as a template. 1352 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 1353 | ## @param arbiter.startupProbe.enabled Enable startupProbe 1354 | ## @param arbiter.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 1355 | ## @param arbiter.startupProbe.periodSeconds Period seconds for startupProbe 1356 | ## @param arbiter.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1357 | ## @param arbiter.startupProbe.failureThreshold Failure threshold for startupProbe 1358 | ## @param arbiter.startupProbe.successThreshold Success threshold for startupProbe 1359 | ## 1360 | startupProbe: 1361 | enabled: false 1362 | initialDelaySeconds: 5 1363 | periodSeconds: 10 1364 | timeoutSeconds: 5 1365 | successThreshold: 1 1366 | failureThreshold: 30 1367 | ## @param arbiter.customLivenessProbe Override default liveness probe for Arbiter containers 1368 | ## Ignored when arbiter.livenessProbe.enabled=true 1369 | ## 1370 | customLivenessProbe: {} 1371 | ## @param arbiter.customReadinessProbe Override default readiness probe for Arbiter containers 
1372 | ## Ignored when arbiter.readinessProbe.enabled=true 1373 | ## 1374 | customReadinessProbe: {} 1375 | ## @param arbiter.customStartupProbe Override default startup probe for Arbiter containers 1376 | ## Ignored when arbiter.startupProbe.enabled=true 1377 | ## 1378 | customStartupProbe: {} 1379 | ## @param arbiter.initContainers Add additional init containers for the Arbiter pod(s) 1380 | ## Example: 1381 | ## initContainers: 1382 | ## - name: your-image-name 1383 | ## image: your-image 1384 | ## imagePullPolicy: Always 1385 | ## ports: 1386 | ## - name: portname 1387 | ## containerPort: 1234 1388 | ## 1389 | initContainers: [] 1390 | ## @param arbiter.sidecars Add additional sidecar containers for the Arbiter pod(s) 1391 | ## Example: 1392 | ## sidecars: 1393 | ## - name: your-image-name 1394 | ## image: your-image 1395 | ## imagePullPolicy: Always 1396 | ## ports: 1397 | ## - name: portname 1398 | ## containerPort: 1234 1399 | ## 1400 | sidecars: [] 1401 | ## @param arbiter.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the Arbiter container(s) 1402 | ## Examples: 1403 | ## extraVolumeMounts: 1404 | ## - name: extras 1405 | ## mountPath: /usr/share/extras 1406 | ## readOnly: true 1407 | ## 1408 | extraVolumeMounts: [] 1409 | ## @param arbiter.extraVolumes Optionally specify extra list of additional volumes to the Arbiter statefulset 1410 | ## extraVolumes: 1411 | ## - name: extras 1412 | ## emptyDir: {} 1413 | ## 1414 | extraVolumes: [] 1415 | ## MongoDB(®) Arbiter Pod Disruption Budget configuration 1416 | ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ 1417 | ## 1418 | pdb: 1419 | ## @param arbiter.pdb.create Enable/disable a Pod Disruption Budget creation for Arbiter pod(s) 1420 | ## 1421 | create: false 1422 | ## @param arbiter.pdb.minAvailable Minimum number/percentage of Arbiter pods that should remain scheduled 1423 | ## 1424 | minAvailable: 1 1425 | ## @param arbiter.pdb.maxUnavailable Maximum 
number/percentage of Arbiter pods that may be made unavailable 1426 | ## 1427 | maxUnavailable: "" 1428 | ## MongoDB(®) Arbiter service parameters 1429 | ## 1430 | service: 1431 | ## @param arbiter.service.nameOverride The arbiter service name 1432 | ## 1433 | nameOverride: "" 1434 | ## @param arbiter.service.ports.mongodb MongoDB(®) service port 1435 | ## 1436 | ports: 1437 | mongodb: 27017 1438 | ## @param arbiter.service.extraPorts Extra ports to expose (normally used with the `sidecar` value) 1439 | ## 1440 | extraPorts: [] 1441 | ## @param arbiter.service.annotations Provide any additional annotations that may be required 1442 | ## 1443 | annotations: {} 1444 | 1445 | ## @section Hidden Node parameters 1446 | ## 1447 | 1448 | hidden: 1449 | ## @param hidden.enabled Enable deploying the hidden nodes 1450 | ## https://docs.mongodb.com/manual/tutorial/configure-a-hidden-replica-set-member/ 1451 | ## 1452 | enabled: false 1453 | ## @param hidden.hostAliases Add deployment host aliases 1454 | ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ 1455 | ## 1456 | hostAliases: [] 1457 | ## @param hidden.configuration Hidden node configuration file to be used 1458 | ## http://docs.mongodb.org/manual/reference/configuration-options/ 1459 | ## 1460 | configuration: "" 1461 | ## @param hidden.existingConfigmap Name of existing ConfigMap with Hidden node configuration 1462 | ## NOTE: When it's set the hidden.configuration parameter is ignored 1463 | ## 1464 | existingConfigmap: "" 1465 | ## Command and args for running the container (set to default if not set). 
Use array form 1466 | ## @param hidden.command Override default container command (useful when using custom images) 1467 | ## @param hidden.args Override default container args (useful when using custom images) 1468 | ## 1469 | command: [] 1470 | args: [] 1471 | ## @param hidden.extraFlags Hidden node additional command line flags 1472 | ## Example: 1473 | ## extraFlags: 1474 | ## - "--wiredTigerCacheSizeGB=2" 1475 | ## 1476 | extraFlags: [] 1477 | ## @param hidden.extraEnvVars Extra environment variables to add to Hidden node pods 1478 | ## E.g: 1479 | ## extraEnvVars: 1480 | ## - name: FOO 1481 | ## value: BAR 1482 | ## 1483 | extraEnvVars: [] 1484 | ## @param hidden.extraEnvVarsCM Name of existing ConfigMap containing extra env vars 1485 | ## 1486 | extraEnvVarsCM: "" 1487 | ## @param hidden.extraEnvVarsSecret Name of existing Secret containing extra env vars (in case of sensitive data) 1488 | ## 1489 | extraEnvVarsSecret: "" 1490 | ## @param hidden.annotations Additional annotations to be added to the hidden node statefulset 1491 | ## 1492 | annotations: {} 1493 | ## @param hidden.labels Additional labels to be added to the hidden node statefulset 1494 | ## 1495 | labels: {} 1496 | ## @param hidden.topologySpreadConstraints MongoDB(®) Spread Constraints for hidden Pods 1497 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ 1498 | ## 1499 | topologySpreadConstraints: [] 1500 | ## @param hidden.lifecycleHooks LifecycleHook for the Hidden container to automate configuration before or after startup 1501 | ## 1502 | lifecycleHooks: {} 1503 | ## @param hidden.replicaCount Number of hidden nodes (only when `architecture=replicaset`) 1504 | ## Ignored when architecture=standalone 1505 | ## 1506 | replicaCount: 1 1507 | ## @param hidden.terminationGracePeriodSeconds Hidden node Termination Grace Period 1508 | ## 1509 | terminationGracePeriodSeconds: "" 1510 | ## @param hidden.updateStrategy.type Strategy that will be employed to update
Pods in the StatefulSet 1511 | ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies 1512 | ## updateStrategy: 1513 | ## type: RollingUpdate 1514 | ## rollingUpdate: 1515 | ## maxSurge: 25% 1516 | ## maxUnavailable: 25% 1517 | ## 1518 | updateStrategy: 1519 | type: RollingUpdate 1520 | ## @param hidden.podManagementPolicy Pod management policy for hidden node 1521 | ## 1522 | podManagementPolicy: OrderedReady 1523 | ## @param hidden.schedulerName Name of the scheduler (other than default) to dispatch pods 1524 | ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ 1525 | ## 1526 | schedulerName: "" 1527 | ## @param hidden.podAffinityPreset Hidden node Pod affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 1528 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity 1529 | ## 1530 | podAffinityPreset: "" 1531 | ## @param hidden.podAntiAffinityPreset Hidden node Pod anti-affinity preset. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 1532 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity 1533 | ## 1534 | podAntiAffinityPreset: soft 1535 | ## Node affinity preset 1536 | ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity 1537 | ## Allowed values: soft, hard 1538 | ## 1539 | nodeAffinityPreset: 1540 | ## @param hidden.nodeAffinityPreset.type Hidden Node affinity preset type. Ignored if `affinity` is set. Allowed values: `soft` or `hard` 1541 | ## 1542 | type: "" 1543 | ## @param hidden.nodeAffinityPreset.key Hidden Node label key to match. Ignored if `affinity` is set. 1544 | ## E.g. 1545 | ## key: "kubernetes.io/e2e-az-name" 1546 | ## 1547 | key: "" 1548 | ## @param hidden.nodeAffinityPreset.values Hidden Node label values to match. Ignored if `affinity` is set.
1549 | ## E.g. 1550 | ## values: 1551 | ## - e2e-az1 1552 | ## - e2e-az2 1553 | ## 1554 | values: [] 1555 | ## @param hidden.affinity Hidden node Affinity for pod assignment 1556 | ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity 1557 | ## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set 1558 | ## 1559 | affinity: {} 1560 | ## @param hidden.nodeSelector Hidden node Node labels for pod assignment 1561 | ## ref: https://kubernetes.io/docs/user-guide/node-selection/ 1562 | ## 1563 | nodeSelector: {} 1564 | ## @param hidden.tolerations Hidden node Tolerations for pod assignment 1565 | ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ 1566 | ## 1567 | tolerations: [] 1568 | ## @param hidden.podLabels Hidden node pod labels 1569 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ 1570 | ## 1571 | podLabels: {} 1572 | ## @param hidden.podAnnotations Hidden node Pod annotations 1573 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ 1574 | ## 1575 | podAnnotations: {} 1576 | ## @param hidden.priorityClassName Name of the existing priority class to be used by hidden node pod(s) 1577 | ## ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ 1578 | ## 1579 | priorityClassName: "" 1580 | ## @param hidden.runtimeClassName Name of the runtime class to be used by hidden node pod(s) 1581 | ## ref: https://kubernetes.io/docs/concepts/containers/runtime-class/ 1582 | ## 1583 | runtimeClassName: "" 1584 | ## MongoDB(®) Hidden pods' Security Context. 
1585 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod 1586 | ## @param hidden.podSecurityContext.enabled Enable Hidden pod(s)' Security Context 1587 | ## @param hidden.podSecurityContext.fsGroup Group ID for the volumes of the Hidden pod(s) 1588 | ## @param hidden.podSecurityContext.sysctls sysctl settings of the Hidden pod(s)' 1589 | ## 1590 | podSecurityContext: 1591 | enabled: true 1592 | fsGroup: 1001 1593 | ## sysctl settings 1594 | ## Example: 1595 | ## sysctls: 1596 | ## - name: net.core.somaxconn 1597 | ## value: "10000" 1598 | ## 1599 | sysctls: [] 1600 | ## MongoDB(®) Hidden containers' Security Context (only main container). 1601 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container 1602 | ## @param hidden.containerSecurityContext.enabled Enable Hidden container(s)' Security Context 1603 | ## @param hidden.containerSecurityContext.runAsUser User ID for the Hidden container 1604 | ## @param hidden.containerSecurityContext.runAsNonRoot Set Hidden containers' Security Context runAsNonRoot 1605 | ## 1606 | containerSecurityContext: 1607 | enabled: true 1608 | runAsUser: 1001 1609 | runAsNonRoot: true 1610 | ## MongoDB(®) Hidden containers' resource requests and limits. 1611 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 1612 | ## We usually recommend not to specify default resources and to leave this as a conscious 1613 | ## choice for the user. This also increases chances charts run on environments with little 1614 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 1615 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
1616 | ## @param hidden.resources.limits The resources limits for hidden node containers 1617 | ## @param hidden.resources.requests The requested resources for hidden node containers 1618 | ## 1619 | resources: 1620 | ## Example: 1621 | ## limits: 1622 | ## cpu: 100m 1623 | ## memory: 128Mi 1624 | ## 1625 | limits: {} 1626 | ## Examples: 1627 | ## requests: 1628 | ## cpu: 100m 1629 | ## memory: 128Mi 1630 | ## 1631 | requests: {} 1632 | ## @param hidden.containerPorts.mongodb MongoDB(®) hidden container port 1633 | containerPorts: 1634 | mongodb: 27017 1635 | ## MongoDB(®) Hidden pods' liveness probe. Evaluated as a template. 1636 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 1637 | ## @param hidden.livenessProbe.enabled Enable livenessProbe 1638 | ## @param hidden.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 1639 | ## @param hidden.livenessProbe.periodSeconds Period seconds for livenessProbe 1640 | ## @param hidden.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 1641 | ## @param hidden.livenessProbe.failureThreshold Failure threshold for livenessProbe 1642 | ## @param hidden.livenessProbe.successThreshold Success threshold for livenessProbe 1643 | ## 1644 | livenessProbe: 1645 | enabled: true 1646 | initialDelaySeconds: 30 1647 | periodSeconds: 20 1648 | timeoutSeconds: 10 1649 | failureThreshold: 6 1650 | successThreshold: 1 1651 | ## MongoDB(®) Hidden pods' readiness probe. Evaluated as a template. 
1652 | ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes 1653 | ## @param hidden.readinessProbe.enabled Enable readinessProbe 1654 | ## @param hidden.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 1655 | ## @param hidden.readinessProbe.periodSeconds Period seconds for readinessProbe 1656 | ## @param hidden.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 1657 | ## @param hidden.readinessProbe.failureThreshold Failure threshold for readinessProbe 1658 | ## @param hidden.readinessProbe.successThreshold Success threshold for readinessProbe 1659 | ## 1660 | readinessProbe: 1661 | enabled: true 1662 | initialDelaySeconds: 5 1663 | periodSeconds: 20 1664 | timeoutSeconds: 10 1665 | failureThreshold: 6 1666 | successThreshold: 1 1667 | ## Slow starting containers can be protected through startup probes 1668 | ## Startup probes are available in Kubernetes version 1.16 and above 1669 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes 1670 | ## @param hidden.startupProbe.enabled Enable startupProbe 1671 | ## @param hidden.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 1672 | ## @param hidden.startupProbe.periodSeconds Period seconds for startupProbe 1673 | ## @param hidden.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1674 | ## @param hidden.startupProbe.failureThreshold Failure threshold for startupProbe 1675 | ## @param hidden.startupProbe.successThreshold Success threshold for startupProbe 1676 | ## 1677 | startupProbe: 1678 | enabled: false 1679 | initialDelaySeconds: 5 1680 | periodSeconds: 10 1681 | timeoutSeconds: 5 1682 | successThreshold: 1 1683 | failureThreshold: 30 1684 | ## @param hidden.customLivenessProbe Override default liveness probe for hidden node containers 1685 | ## Ignored when hidden.livenessProbe.enabled=true 1686 | ## 1687 | 
customLivenessProbe: {} 1688 | ## @param hidden.customReadinessProbe Override default readiness probe for hidden node containers 1689 | ## Ignored when hidden.readinessProbe.enabled=true 1690 | ## 1691 | customReadinessProbe: {} 1692 | ## @param hidden.customStartupProbe Override default startup probe for MongoDB(®) containers 1693 | ## Ignored when hidden.startupProbe.enabled=true 1694 | ## 1695 | customStartupProbe: {} 1696 | ## @param hidden.initContainers Add init containers to the MongoDB(®) Hidden pods. 1697 | ## Example: 1698 | ## initContainers: 1699 | ## - name: your-image-name 1700 | ## image: your-image 1701 | ## imagePullPolicy: Always 1702 | ## ports: 1703 | ## - name: portname 1704 | ## containerPort: 1234 1705 | ## 1706 | initContainers: [] 1707 | ## @param hidden.sidecars Add additional sidecar containers for the hidden node pod(s) 1708 | ## Example: 1709 | ## sidecars: 1710 | ## - name: your-image-name 1711 | ## image: your-image 1712 | ## imagePullPolicy: Always 1713 | ## ports: 1714 | ## - name: portname 1715 | ## containerPort: 1234 1716 | ## 1717 | sidecars: [] 1718 | ## @param hidden.extraVolumeMounts Optionally specify extra list of additional volumeMounts for the hidden node container(s) 1719 | ## Examples: 1720 | ## extraVolumeMounts: 1721 | ## - name: extras 1722 | ## mountPath: /usr/share/extras 1723 | ## readOnly: true 1724 | ## 1725 | extraVolumeMounts: [] 1726 | ## @param hidden.extraVolumes Optionally specify extra list of additional volumes to the hidden node statefulset 1727 | ## extraVolumes: 1728 | ## - name: extras 1729 | ## emptyDir: {} 1730 | ## 1731 | extraVolumes: [] 1732 | ## MongoDB(®) Hidden Pod Disruption Budget configuration 1733 | ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ 1734 | ## 1735 | pdb: 1736 | ## @param hidden.pdb.create Enable/disable a Pod Disruption Budget creation for hidden node pod(s) 1737 | ## 1738 | create: false 1739 | ## @param hidden.pdb.minAvailable Minimum 
number/percentage of hidden node pods that should remain scheduled 1740 | ## 1741 | minAvailable: 1 1742 | ## @param hidden.pdb.maxUnavailable Maximum number/percentage of hidden node pods that may be made unavailable 1743 | ## 1744 | maxUnavailable: "" 1745 | ## Enable persistence using Persistent Volume Claims 1746 | ## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ 1747 | ## 1748 | persistence: 1749 | ## @param hidden.persistence.enabled Enable hidden node data persistence using PVC 1750 | ## 1751 | enabled: true 1752 | ## @param hidden.persistence.medium Provide a medium for `emptyDir` volumes. 1753 | ## Requires hidden.persistence.enabled: false 1754 | ## 1755 | medium: "" 1756 | ## @param hidden.persistence.storageClass PVC Storage Class for hidden node data volume 1757 | ## If defined, storageClassName: 1758 | ## If set to "-", storageClassName: "", which disables dynamic provisioning 1759 | ## If undefined (the default) or set to null, no storageClassName spec is 1760 | ## set, choosing the default provisioner. 1761 | ## 1762 | storageClass: "" 1763 | ## @param hidden.persistence.accessModes PV Access Mode 1764 | ## 1765 | accessModes: 1766 | - ReadWriteOnce 1767 | ## @param hidden.persistence.size PVC Storage Request for hidden node data volume 1768 | ## 1769 | size: 8Gi 1770 | ## @param hidden.persistence.annotations PVC annotations 1771 | ## 1772 | annotations: {} 1773 | ## @param hidden.persistence.mountPath The path the volume will be mounted at, useful when using different MongoDB(®) images. 1774 | ## 1775 | mountPath: /bitnami/mongodb 1776 | ## @param hidden.persistence.subPath The subdirectory of the volume to mount to, useful in dev environments 1777 | ## and one PV for multiple services. 1778 | ## 1779 | subPath: "" 1780 | ## Fine tuning for volumeClaimTemplates 1781 | ## 1782 | volumeClaimTemplates: 1783 | ## @param hidden.persistence.volumeClaimTemplates.selector A label query over volumes to consider for binding (e.g. 
when using local volumes) 1784 | ## See https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#labelselector-v1-meta for more details 1785 | ## 1786 | selector: {} 1787 | ## @param hidden.persistence.volumeClaimTemplates.requests Custom PVC requests attributes 1788 | ## Sometimes cloud providers use additional requests attributes to provision custom storage instances 1789 | ## See https://cloud.ibm.com/docs/containers?topic=containers-file_storage#file_dynamic_statefulset 1790 | ## 1791 | requests: {} 1792 | ## @param hidden.persistence.volumeClaimTemplates.dataSource Set volumeClaimTemplate dataSource 1793 | ## 1794 | dataSource: {} 1795 | service: 1796 | ## @param hidden.service.portName MongoDB(®) service port name 1797 | ## 1798 | portName: "mongodb" 1799 | ## @param hidden.service.ports.mongodb MongoDB(®) service port 1800 | ## 1801 | ports: 1802 | mongodb: 27017 1803 | ## @param hidden.service.extraPorts Extra ports to expose (normally used with the `sidecar` value) 1804 | ## 1805 | extraPorts: [] 1806 | ## @param hidden.service.annotations Provide any additional annotations that may be required 1807 | ## 1808 | annotations: {} 1809 | 1810 | ## @section Metrics parameters 1811 | ## 1812 | 1813 | metrics: 1814 | ## @param metrics.enabled Enable using a sidecar Prometheus exporter 1815 | ## 1816 | enabled: false 1817 | ## Bitnami MongoDB(®) Prometheus Exporter image 1818 | ## ref: https://hub.docker.com/r/bitnami/mongodb-exporter/tags/ 1819 | ## @param metrics.image.registry MongoDB(®) Prometheus exporter image registry 1820 | ## @param metrics.image.repository MongoDB(®) Prometheus exporter image repository 1821 | ## @param metrics.image.tag MongoDB(®) Prometheus exporter image tag (immutable tags are recommended) 1822 | ## @param metrics.image.pullPolicy MongoDB(®) Prometheus exporter image pull policy 1823 | ## @param metrics.image.pullSecrets Specify docker-registry secret names as an array 1824 | ## 1825 | image: 1826 | registry: docker.io 1827
| repository: bitnami/mongodb-exporter 1828 | tag: 0.33.0-debian-11-r0 1829 | pullPolicy: IfNotPresent 1830 | ## Optionally specify an array of imagePullSecrets. 1831 | ## Secrets must be manually created in the namespace. 1832 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ 1833 | ## e.g: 1834 | ## pullSecrets: 1835 | ## - myRegistryKeySecretName 1836 | ## 1837 | pullSecrets: [] 1838 | 1839 | ## @param metrics.username String with username for the metrics exporter 1840 | ## If undefined the root user will be used for the metrics exporter 1841 | username: "" 1842 | ## @param metrics.password String with password for the metrics exporter 1843 | ## If undefined but metrics.username is defined, a random password will be generated 1844 | password: "" 1845 | ## @param metrics.extraFlags String with extra flags to the metrics exporter 1846 | ## ref: https://github.com/percona/mongodb_exporter/blob/master/mongodb_exporter.go 1847 | ## 1848 | extraFlags: "" 1849 | ## Command and args for running the container (set to default if not set). Use array form 1850 | ## @param metrics.command Override default container command (useful when using custom images) 1851 | ## @param metrics.args Override default container args (useful when using custom images) 1852 | ## 1853 | command: [] 1854 | args: [] 1855 | ## Metrics exporter container resource requests and limits 1856 | ## ref: https://kubernetes.io/docs/user-guide/compute-resources/ 1857 | ## We usually recommend not to specify default resources and to leave this as a conscious 1858 | ## choice for the user. This also increases chances charts run on environments with little 1859 | ## resources, such as Minikube. If you do want to specify resources, uncomment the following 1860 | ## lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
1861 | ## @param metrics.resources.limits The resources limits for Prometheus exporter containers 1862 | ## @param metrics.resources.requests The requested resources for Prometheus exporter containers 1863 | ## 1864 | resources: 1865 | ## Example: 1866 | ## limits: 1867 | ## cpu: 100m 1868 | ## memory: 128Mi 1869 | ## 1870 | limits: {} 1871 | ## Examples: 1872 | ## requests: 1873 | ## cpu: 100m 1874 | ## memory: 128Mi 1875 | ## 1876 | requests: {} 1877 | ## @param metrics.containerPort Port of the Prometheus metrics container 1878 | ## 1879 | containerPort: 9216 1880 | ## Prometheus Exporter service configuration 1881 | ## 1882 | service: 1883 | ## @param metrics.service.annotations [object] Annotations for Prometheus Exporter pods. Evaluated as a template. 1884 | ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ 1885 | ## 1886 | annotations: 1887 | prometheus.io/scrape: "true" 1888 | prometheus.io/port: "{{ .Values.metrics.service.ports.metrics }}" 1889 | prometheus.io/path: "/metrics" 1890 | ## @param metrics.service.type Type of the Prometheus metrics service 1891 | ## 1892 | type: ClusterIP 1893 | ## @param metrics.service.ports.metrics Port of the Prometheus metrics service 1894 | ## 1895 | ports: 1896 | metrics: 9216 1897 | ## @param metrics.service.extraPorts Extra ports to expose (normally used with the `sidecar` value) 1898 | ## 1899 | extraPorts: [] 1900 | ## Metrics exporter liveness probe 1901 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) 1902 | ## @param metrics.livenessProbe.enabled Enable livenessProbe 1903 | ## @param metrics.livenessProbe.initialDelaySeconds Initial delay seconds for livenessProbe 1904 | ## @param metrics.livenessProbe.periodSeconds Period seconds for livenessProbe 1905 | ## @param metrics.livenessProbe.timeoutSeconds Timeout seconds for livenessProbe 1906 | ## @param metrics.livenessProbe.failureThreshold Failure 
threshold for livenessProbe 1907 | ## @param metrics.livenessProbe.successThreshold Success threshold for livenessProbe 1908 | ## 1909 | livenessProbe: 1910 | enabled: true 1911 | initialDelaySeconds: 15 1912 | periodSeconds: 5 1913 | timeoutSeconds: 5 1914 | failureThreshold: 3 1915 | successThreshold: 1 1916 | ## Metrics exporter readiness probe 1917 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) 1918 | ## @param metrics.readinessProbe.enabled Enable readinessProbe 1919 | ## @param metrics.readinessProbe.initialDelaySeconds Initial delay seconds for readinessProbe 1920 | ## @param metrics.readinessProbe.periodSeconds Period seconds for readinessProbe 1921 | ## @param metrics.readinessProbe.timeoutSeconds Timeout seconds for readinessProbe 1922 | ## @param metrics.readinessProbe.failureThreshold Failure threshold for readinessProbe 1923 | ## @param metrics.readinessProbe.successThreshold Success threshold for readinessProbe 1924 | ## 1925 | readinessProbe: 1926 | enabled: true 1927 | initialDelaySeconds: 5 1928 | periodSeconds: 5 1929 | timeoutSeconds: 1 1930 | failureThreshold: 3 1931 | successThreshold: 1 1932 | ## Slow starting containers can be protected through startup probes 1933 | ## Startup probes are available in Kubernetes version 1.16 and above 1934 | ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes 1935 | ## @param metrics.startupProbe.enabled Enable startupProbe 1936 | ## @param metrics.startupProbe.initialDelaySeconds Initial delay seconds for startupProbe 1937 | ## @param metrics.startupProbe.periodSeconds Period seconds for startupProbe 1938 | ## @param metrics.startupProbe.timeoutSeconds Timeout seconds for startupProbe 1939 | ## @param metrics.startupProbe.failureThreshold Failure threshold for startupProbe 1940 | ## @param metrics.startupProbe.successThreshold Success threshold for 
startupProbe 1941 | ## 1942 | startupProbe: 1943 | enabled: false 1944 | initialDelaySeconds: 5 1945 | periodSeconds: 10 1946 | timeoutSeconds: 5 1947 | successThreshold: 1 1948 | failureThreshold: 30 1949 | ## @param metrics.customLivenessProbe Override default liveness probe for MongoDB(®) containers 1950 | ## Ignored when livenessProbe.enabled=true 1951 | ## 1952 | customLivenessProbe: {} 1953 | ## @param metrics.customReadinessProbe Override default readiness probe for MongoDB(®) containers 1954 | ## Ignored when readinessProbe.enabled=true 1955 | ## 1956 | customReadinessProbe: {} 1957 | ## @param metrics.customStartupProbe Override default startup probe for MongoDB(®) containers 1958 | ## Ignored when startupProbe.enabled=true 1959 | ## 1960 | customStartupProbe: {} 1961 | ## Prometheus Service Monitor 1962 | ## ref: https://github.com/coreos/prometheus-operator 1963 | ## https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md 1964 | ## 1965 | serviceMonitor: 1966 | ## @param metrics.serviceMonitor.enabled Create ServiceMonitor Resource for scraping metrics using Prometheus Operator 1967 | ## 1968 | enabled: false 1969 | ## @param metrics.serviceMonitor.namespace Namespace which Prometheus is running in 1970 | ## 1971 | namespace: "" 1972 | ## @param metrics.serviceMonitor.interval Interval at which metrics should be scraped 1973 | ## 1974 | interval: 30s 1975 | ## @param metrics.serviceMonitor.scrapeTimeout Specify the timeout after which the scrape is ended 1976 | ## e.g: 1977 | ## scrapeTimeout: 30s 1978 | ## 1979 | scrapeTimeout: "" 1980 | ## @param metrics.serviceMonitor.relabelings RelabelConfigs to apply to samples before scraping. 1981 | ## 1982 | relabelings: [] 1983 | ## @param metrics.serviceMonitor.metricRelabelings MetricsRelabelConfigs to apply to samples before ingestion. 
1984 | ## 1985 | metricRelabelings: [] 1986 | ## @param metrics.serviceMonitor.labels Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with 1987 | ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec 1988 | ## 1989 | labels: {} 1990 | ## @param metrics.serviceMonitor.selector Prometheus instance selector labels 1991 | ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration 1992 | ## 1993 | selector: {} 1994 | ## @param metrics.serviceMonitor.honorLabels Specify honorLabels parameter to add the scrape endpoint 1995 | ## 1996 | honorLabels: false 1997 | ## @param metrics.serviceMonitor.jobLabel The name of the label on the target service to use as the job name in prometheus. 1998 | ## 1999 | jobLabel: "" 2000 | ## Custom PrometheusRule to be defined 2001 | ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions 2002 | ## 2003 | prometheusRule: 2004 | ## @param metrics.prometheusRule.enabled Set this to true to create prometheusRules for Prometheus operator 2005 | ## 2006 | enabled: false 2007 | ## @param metrics.prometheusRule.additionalLabels Additional labels that can be used so prometheusRules will be discovered by Prometheus 2008 | ## 2009 | additionalLabels: {} 2010 | ## @param metrics.prometheusRule.namespace Namespace where prometheusRules resource should be created 2011 | ## 2012 | namespace: "" 2013 | ## @param metrics.prometheusRule.rules Rules to be created, check values for an example 2014 | ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#rulegroup 2015 | ## https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ 2016 | ## 2017 | ## This is an example of a rule, you should add the below code block under the "rules" param, removing the brackets 2018 | ## rules: 2019 | ## - alert: HighRequestLatency 2020 | ## expr: 
job:request_latency_seconds:mean5m{job="myjob"} > 0.5 2021 | ## for: 10m 2022 | ## labels: 2023 | ## severity: page 2024 | ## annotations: 2025 | ## summary: High request latency 2026 | ## 2027 | rules: [] 2028 | --------------------------------------------------------------------------------