├── .gitignore
├── .helmignore
├── Chart.yaml
├── LICENSE
├── README.md
├── image
│   ├── Dockerfile
│   └── bootstrap.sh
├── templates
│   ├── _common.tpl
│   ├── configmap.yaml
│   ├── deployment.yaml
│   ├── pvc.yaml
│   ├── rbac.yaml
│   └── services.yaml
└── values.yaml

/.gitignore:
--------------------------------------------------------------------------------
.history
.DS_Store
templates/secrets.yaml
--------------------------------------------------------------------------------
/.helmignore:
--------------------------------------------------------------------------------
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
# OWNERS file for Kubernetes
OWNERS
--------------------------------------------------------------------------------
/Chart.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
description: Airflow installation
name: airflow
version: v0.0.5
icon: https://airflow.apache.org/_images/pin_large.png
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
MIT License

Copyright (c) 2018 Minh Mai

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
# airflow-helm

[Airflow](https://airflow.incubator.apache.org/) is a workflow management system built by Airbnb. Airflow is used to run and monitor daily tasks, and it scales easily when workloads become large. This is a Helm chart for Kubernetes deployment, greatly inspired by the work of [Stibbons](https://github.com/Stibbons/kube-airflow/tree/helm_chart/airflow) and [Mumoshu](https://github.com/mumoshu/kube-airflow). This project was built out of integrating the previous two Airflow Kubernetes deployments and tailoring them to my needs.
Subtle differences include:
* separate charts for the Nginx Ingress and RabbitMQ
* a probe for the Scheduler, since there are bugs with broken pipes
* using [Invoke](http://www.pyinvoke.org/) for task execution
* an optional Postgres deployment for the Airflow metadata DB


## Installation

Before beginning, please make sure you install [pip](https://pip.pypa.io/en/stable/installing/) and the necessary libraries in `requirements.txt`. After installing pip, just run:

```
pip install -r requirements.txt
```

Rather than a `Makefile`, this project uses [Invoke](http://www.pyinvoke.org/) to run commands; you can list the available Invoke commands with `invoke -l`.


## Configuration

### Adding DAGs
Workflows are abstracted as DAGs in Airflow. After creating your DAGs, you can integrate them in two ways:
* building the DAGs directly into the image
* using [Invoke](http://www.pyinvoke.org/) to copy local DAGs into the pods

For the first method, I have left a template `Dockerfile`; all you have to do is put your DAGs in the `dags/` directory before building your image, then adjust line 2 of `airflow-helm/charts/airflow/values.yaml` to point at the correct image repository. The drawback of this method is that you will have to rebuild your image every time a DAG changes, so it is best suited for development and testing. For the second method, you move your DAGs into `dags/` and run:

```
invoke copy-dags --all
```

Note that if you decide not to put your DAGs in a `dags/` folder, you can specify which folder to copy by running:

```
invoke copy-dags --path your/path/to/dags
```

You can also copy to only certain pods (by default, DAGs are sent to the worker, scheduler, and webserver pods). Note that you don't necessarily need to type the full pod name, just enough for a regex match (e.g. `sche` for the scheduler):

```
invoke copy-dags --pod pod_regex_here
```

A third viable option is `git-sync`, which is explained thoroughly [here](https://github.com/Stibbons/kube-airflow).


### Additional Python libraries
If your DAGs require additional libraries, feel free to add them to `airflow-helm/charts/airflow/artifacts/requirements.txt` and they will be installed on every pod start-up.

Scaling can be done by adjusting line 47 of `airflow-helm/charts/airflow/artifacts/airflow.cfg`; this `dag_concurrency` setting dictates how many tasks the Scheduler will allow the Celery workers to execute at once. I found this to be more helpful than increasing the number of worker pod replicas (that is still an option if you choose to scale that way).
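
For illustration only, the setting lives in the `[core]` section of `airflow.cfg` and might look like the snippet below (the value shown is just an example, not the chart's default):

```
[core]
# Maximum number of task instances allowed to run concurrently within a single DAG.
# Raising this is often enough before adding more worker replicas.
dag_concurrency = 32
```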

## Deployment

After configuration, all you need to do is run:

```
invoke install all
```

To delete the Helm charts, run:

```
invoke delete all
```

You have the option of deleting a Helm chart and reinstalling it to allow changes to take effect:

```
invoke reinstall all
```

Charts can also be deployed, deleted, or reinstalled separately by replacing "all", for example:

```
invoke install rabbitmq
invoke reinstall airflow
invoke delete nginx-ingress
```


## To Dos

* Consider using [Horizontal Pod Auto Scalers](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/) for worker pods instead of StatefulSets
* A more elegant way of probing the Scheduler
* Allow real-time syncing of the DAGs directly using NFS

This chart is currently in testing and QA, so there will be more improvements down the road.

Feel free to fork, make PRs, or file issues!

--------------------------------------------------------------------------------
/image/Dockerfile:
--------------------------------------------------------------------------------
FROM ubuntu:16.04

ENV SLUGIFY_USES_TEXT_UNIDECODE=yes

# install deps
RUN apt-get update -y && apt-get install -y \
    wget \
    python-dev \
    python-pip \
    libczmq-dev \
    libcurlpp-dev \
    curl \
    libssl-dev \
    git \
    inetutils-telnet \
    bind9utils \
    zip \
    unzip \
    && apt-get clean

RUN pip install --upgrade pip

# Since we install vanilla Airflow, we also want to have support for Postgres and Kubernetes
RUN pip install -U setuptools && \
    pip install kubernetes && \
    pip install cryptography && \
    pip install psycopg2-binary==2.7.4

# install airflow
RUN pip install apache-airflow[kubernetes,postgres]

COPY bootstrap.sh /bootstrap.sh
RUN chmod +x /bootstrap.sh

ENTRYPOINT ["/bootstrap.sh"]
--------------------------------------------------------------------------------
/image/bootstrap.sh:
--------------------------------------------------------------------------------
#!/usr/bin/env bash
# Entrypoint: start the requested Airflow component.

if [ "$1" = "webserver" ]
then
    exec airflow webserver
fi

if [ "$1" = "scheduler" ]
then
    exec airflow scheduler
fi
--------------------------------------------------------------------------------
/templates/_common.tpl:
--------------------------------------------------------------------------------
{{- define "common_deployment" }}
        env:
        - name: AIRFLOW_KUBE_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-secrets
              key: sql_alchemy_conn
        volumeMounts:
        - name: {{ .Release.Name }}-configmap
          mountPath: /root/airflow/airflow.cfg
          subPath: airflow.cfg
        - name: airflow-dags
          mountPath: /root/airflow/dags
        - name: airflow-logs
          mountPath: /root/airflow/logs
{{- end -}}
--------------------------------------------------------------------------------
/templates/configmap.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
data:
  airflow.cfg: |
{{ .Values.configFile | indent 4 }}
--------------------------------------------------------------------------------
/templates/deployment.yaml:
--------------------------------------------------------------------------------
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: {{ .Release.Name }}
    spec:
      initContainers:
      - name: "init"
        image: "{{ .Values.global.image }}:{{ .Values.global.imageTag }}"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: {{ .Release.Name }}-configmap
          mountPath: /root/airflow/airflow.cfg
          subPath: airflow.cfg
        - name: airflow-dags
          mountPath: /root/airflow/dags
        - name: test-volume
          mountPath: /root/test_volume
        env:
        - name: SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              name: {{ .Release.Name }}-secrets
              key: sql_alchemy_conn
        command:
        - "bash"
        args:
        - "-cx"
        - "/tmp/airflow-test-env-init.sh"
      containers:
      - name: {{ .Values.app.name }}
        image: "{{ .Values.global.image }}:{{ .Values.global.imageTag }}"
        imagePullPolicy: {{ .Values.global.pullPolicy }}
        ports:
        - name: {{ .Values.app.name }}
          containerPort: {{ .Values.app.servicePort }}
        args: ["webserver"]
{{- include "common_deployment" . }}
{{ if .Values.probe.enabled }}
        readinessProbe:
          initialDelaySeconds: {{ .Values.probe.readiness.delaySeconds }}
          timeoutSeconds: {{ .Values.probe.readiness.timeoutSeconds }}
          periodSeconds: {{ .Values.probe.readiness.periodSeconds }}
          httpGet:
            path: {{ .Values.probe.readiness.http.path }}
            port: {{ .Values.probe.readiness.http.port }}
        livenessProbe:
          initialDelaySeconds: {{ .Values.probe.liveness.delaySeconds }}
          timeoutSeconds: {{ .Values.probe.liveness.timeoutSeconds }}
          failureThreshold: {{ .Values.probe.liveness.failureThreshold }}
          httpGet:
            path: {{ .Values.probe.liveness.http.path }}
            port: {{ .Values.probe.liveness.http.port }}
{{- end }}
      - name: scheduler
        image: "{{ .Values.global.image }}:{{ .Values.global.imageTag }}"
        imagePullPolicy: {{ .Values.global.pullPolicy }}
        args: ["scheduler"]
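        # env vars and volume mounts shared with the webserver container are injected
        # below from the "common_deployment" helper in templates/_common.tpl.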
{{- include "common_deployment" . }}
      volumes:
      - name: airflow-dags
        persistentVolumeClaim:
          claimName: airflow-dags
      - name: test-volume
        persistentVolumeClaim:
          claimName: test-volume
      - name: airflow-logs
        persistentVolumeClaim:
          claimName: airflow-logs
      - name: {{ .Release.Name }}-configmap
        configMap:
          name: {{ .Release.Name }}-configmap
--------------------------------------------------------------------------------
/templates/pvc.yaml:
--------------------------------------------------------------------------------
{{- if .Values.persistence.enabled }}
kind: PersistentVolume
apiVersion: v1
metadata:
  name: {{ .Values.persistence.dags.name }}
spec:
  accessModes:
    - {{ .Values.persistence.dags.accessMode }}
  capacity:
    storage: {{ .Values.persistence.dags.size }}
  hostPath:
    path: {{ .Values.persistence.dags.path }}

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.persistence.dags.name }}
spec:
  accessModes:
    - {{ .Values.persistence.dags.accessMode }}
  resources:
    requests:
      storage: {{ .Values.persistence.dags.size }}

---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: {{ .Values.persistence.logs.name }}
spec:
  accessModes:
    - {{ .Values.persistence.logs.accessMode }}
  capacity:
    storage: {{ .Values.persistence.logs.size }}
  hostPath:
    path: {{ .Values.persistence.logs.path }}

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.persistence.logs.name }}
spec:
  accessModes:
    - {{ .Values.persistence.logs.accessMode }}
  resources:
    requests:
      storage: {{ .Values.persistence.logs.size }}

{{- end }}
--------------------------------------------------------------------------------
/templates/rbac.yaml:
--------------------------------------------------------------------------------
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-rbac
subjects:
- kind: ServiceAccount
  # ServiceAccount to bind
  name: {{ default "default" .Values.rbac.namespace }}
  # Namespace of the ServiceAccount
  namespace: {{ default "default" .Values.rbac.namespace }}
roleRef:
  kind: {{ default "ClusterRole" .Values.rbac.role.kind }}
  name: {{ default "cluster-admin" .Values.rbac.role.name }}
  apiGroup: {{ default "rbac.authorization.k8s.io" .Values.rbac.role.apiGroup }}
--------------------------------------------------------------------------------
/templates/services.yaml:
--------------------------------------------------------------------------------
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
  - port: {{ .Values.service.port }}
    nodePort: {{ .Values.service.nodePort }}
  selector:
    name: {{ .Release.Name }}
--------------------------------------------------------------------------------
/values.yaml:
--------------------------------------------------------------------------------
global:
  image: minh5/airflow
  imageTag: 0.1.0
  pullPolicy: Always
  restartPolicy: Always

rbac:
  namespace: default
  role:
    kind: ClusterRole
    name: cluster-admin
    apiGroup: rbac.authorization.k8s.io

app:
  name: webserver
  replicas: 1
  servicePort: 8080
  containerPort: 8080
  urlPath: /airflow

persistence:
  enabled: true
  logs:
    name: airflow-logs
    accessMode: ReadWriteMany
    size: 2Gi
    path: /root/airflow/logs
  dags:
    name: airflow-dags
    accessMode: ReadWriteOnce
    size: 2Gi
    path: /root/airflow/dags

probe:
  enabled: false
  readiness:
    delaySeconds: 5
    timeoutSeconds: 5
    periodSeconds: 5
    http:
      path: /login
      port: 8080
  liveness:
    delaySeconds: 5
    timeoutSeconds: 5
    failureThreshold: 5
    http:
      path: /login
      port: 8080

service:
  type: NodePort
  port: 8080
  nodePort: 30809

initFile: |-
  airflow initdb
  bash /tmp/config/config.sh
  bash /usr/local/bin/airflow-init $@

configFile: |-
  [core]
  airflow_home = /root/airflow
  dags_folder = /root/airflow/dags
  base_log_folder = /root/airflow/logs
  logging_level = INFO
  executor = KubernetesExecutor
  parallelism = 32
  load_examples = False
  plugins_folder = /root/airflow/plugins
  sql_alchemy_conn = $SQL_ALCHEMY_CONN
  [scheduler]
  dag_dir_list_interval = 300
  child_process_log_directory = /root/airflow/logs/scheduler
  # Task instances listen for external kill signal (when you clear tasks
  # from the CLI or the UI), this defines the frequency at which they should
  # listen (in seconds).
  job_heartbeat_sec = 5
  max_threads = 2
  # The scheduler constantly tries to trigger new tasks (look at the
  # scheduler section in the docs for more information). This defines
  # how often the scheduler should run (in seconds).
  scheduler_heartbeat_sec = 5
  # after how much time should the scheduler terminate in seconds
  # -1 indicates to run continuously (see also num_runs)
  run_duration = -1
  # after how much time new DAGs should be picked up from the filesystem
  min_file_process_interval = 0
  statsd_on = False
  statsd_host = localhost
  statsd_port = 8125
  statsd_prefix = airflow
  print_stats_interval = 30
  scheduler_zombie_task_threshold = 300
  max_tis_per_query = 0
  authenticate = False
  # Turn off scheduler catchup by setting this to False.
  # Default behavior is unchanged and
  # Command Line Backfills still work, but the scheduler
  # will not do scheduler catchup if this is False,
  # however it can be set on a per DAG basis in the
  # DAG definition (catchup)
  catchup_by_default = True
  [webserver]
  # The base url of your website as airflow cannot guess what domain or
  # cname you are using. This is used in automated emails that
  # airflow sends to point links to the right web server
  base_url = http://localhost:8080
  # The ip specified when starting the web server
  web_server_host = 0.0.0.0
  # The port on which to run the web server
  web_server_port = 8080
  # Paths to the SSL certificate and key for the web server. When both are
  # provided SSL will be enabled. This does not change the web server port.
  web_server_ssl_cert =
  web_server_ssl_key =
  # Number of seconds the webserver waits before killing gunicorn master that doesn't respond
  web_server_master_timeout = 120
  # Number of seconds the gunicorn webserver waits before timing out on a worker
  web_server_worker_timeout = 120
  # Number of workers to refresh at a time. When set to 0, worker refresh is
  # disabled. When nonzero, airflow periodically refreshes webserver workers by
  # bringing up new ones and killing old ones.
  worker_refresh_batch_size = 1
  # Number of seconds to wait before refreshing a batch of workers.
  worker_refresh_interval = 30
  # Secret key used to run your flask app
  secret_key = temporary_key
  # Number of workers to run the Gunicorn web server
  workers = 4
  # The worker class gunicorn should use. Choices include
  # sync (default), eventlet, gevent
  worker_class = sync
  # Log files for the gunicorn webserver. '-' means log to stderr.
  access_logfile = -
  error_logfile = -
  # Expose the configuration file in the web server
  expose_config = False
  # Set to true to turn on authentication:
  # https://airflow.incubator.apache.org/security.html#web-authentication
  authenticate = False
  # Filter the list of dags by owner name (requires authentication to be enabled)
  filter_by_owner = False
  # Filtering mode. Choices include user (default) and ldapgroup.
  # Ldap group filtering requires using the ldap backend
  #
  # Note that the ldap server needs the "memberOf" overlay to be set up
  # in order to use the ldapgroup mode.
  owner_mode = user
  # Default DAG view. Valid values are:
  # tree, graph, duration, gantt, landing_times
  dag_default_view = tree
  # Default DAG orientation. Valid values are:
  # LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
  dag_orientation = LR
  # Puts the webserver in demonstration mode; blurs the names of Operators for
  # privacy.
  demo_mode = False
  # The amount of time (in secs) webserver will wait for initial handshake
  # while fetching logs from other worker machine
  log_fetch_timeout_sec = 5
  # By default, the webserver shows paused DAGs. Flip this to hide paused
  # DAGs by default
  hide_paused_dags_by_default = False
  # Consistent page size across all listing views in the UI
  page_size = 100
  # Use FAB-based webserver with RBAC feature
  rbac = True
  [smtp]
  # If you want airflow to send emails on retries, failure, and you want to use
  # the airflow.utils.email.send_email_smtp function, you have to configure an
  # smtp server here
  smtp_host = localhost
  smtp_starttls = True
  smtp_ssl = False
  # Uncomment and set the user/pass settings if you want to use SMTP AUTH
  # smtp_user = airflow
  # smtp_password = airflow
  smtp_port = 25
  smtp_mail_from = airflow@example.com
  [kubernetes]
  airflow_configmap = airflow-configmap
  worker_container_repository = airflow
  worker_container_tag = latest
  worker_container_image_pull_policy = IfNotPresent
  worker_dags_folder = /tmp/dags
  delete_worker_pods = True
  git_repo = https://github.com/apache/incubator-airflow.git
  git_branch = master
  git_subpath = airflow/example_dags/
  git_user =
  git_password =
  dags_volume_claim = airflow-dags
  dags_volume_subpath =
  logs_volume_claim = airflow-logs
  logs_volume_subpath =
  in_cluster = True
  namespace = default
  gcp_service_account_keys =
  # For cloning DAGs from git repositories into volumes: https://github.com/kubernetes/git-sync
  git_sync_container_repository = gcr.io/google-containers/git-sync-amd64
  git_sync_container_tag = v2.0.5
  git_sync_init_container_name = git-sync-clone
  [kubernetes_node_selectors]
  # The Key-value pairs to be given to worker pods.
  # The worker pods will be scheduled to the nodes of the specified key-value pairs.
  # Should be supplied in the format: key = value
  [kubernetes_secrets]
  SQL_ALCHEMY_CONN = airflow-secrets=sql_alchemy_conn
  [cli]
  api_client = airflow.api.client.json_client
  endpoint_url = http://localhost:8080
  [api]
  auth_backend = airflow.api.auth.backend.default
  [github_enterprise]
  api_rev = v3
  [admin]
  # UI to hide sensitive variable fields when set to True
  hide_sensitive_variable_fields = True
--------------------------------------------------------------------------------