├── .gitignore ├── .golangci.yml ├── .promu.yml ├── .yamllint ├── CODEOWNERS ├── LICENSE ├── Makefile ├── Makefile.common ├── README.md ├── cmd └── ibm-db2-exporter │ └── main.go ├── collector ├── collector.go ├── collector_test.go ├── config.go ├── config_test.go ├── query.go └── testdata │ ├── all_metrics.prom │ └── query_failure.prom ├── errcheck_excludes.txt ├── go.mod ├── go.sum ├── mixin ├── .lint ├── Makefile ├── README.md ├── alerts │ └── alerts.libsonnet ├── config.libsonnet ├── dashboards │ ├── dashboards.libsonnet │ └── ibm-db2-overview.libsonnet ├── jsonnetfile.json ├── jsonnetfile.lock.json ├── mixin.libsonnet └── vendor │ ├── github.com │ └── grafana │ │ └── grafonnet-lib │ │ └── grafonnet │ │ ├── alert_condition.libsonnet │ │ ├── alertlist.libsonnet │ │ ├── annotation.libsonnet │ │ ├── bar_gauge_panel.libsonnet │ │ ├── cloudmonitoring.libsonnet │ │ ├── cloudwatch.libsonnet │ │ ├── dashboard.libsonnet │ │ ├── dashlist.libsonnet │ │ ├── elasticsearch.libsonnet │ │ ├── gauge_panel.libsonnet │ │ ├── grafana.libsonnet │ │ ├── graph_panel.libsonnet │ │ ├── graphite.libsonnet │ │ ├── heatmap_panel.libsonnet │ │ ├── influxdb.libsonnet │ │ ├── link.libsonnet │ │ ├── log_panel.libsonnet │ │ ├── loki.libsonnet │ │ ├── pie_chart_panel.libsonnet │ │ ├── pluginlist.libsonnet │ │ ├── prometheus.libsonnet │ │ ├── row.libsonnet │ │ ├── singlestat.libsonnet │ │ ├── sql.libsonnet │ │ ├── stat_panel.libsonnet │ │ ├── table_panel.libsonnet │ │ ├── template.libsonnet │ │ ├── text.libsonnet │ │ ├── timepicker.libsonnet │ │ └── transformation.libsonnet │ └── grafonnet └── windows_setup.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Binary from `make exporter` in ./Makefile 2 | bin 3 | 4 | # Binaries for programs and plugins 5 | *.exe 6 | *.exe~ 7 | *.dll 8 | *.so 9 | *.dylib 10 | 11 | # local data 12 | /local 13 | 14 | # Test binary, built with `go test -c` 15 | *.test 16 | 17 | # Build artifact from `make build` 18 | /ibm-db2-exporter 19 | 20 | # build artifacts from `promu crossbuild` 21 | .build 22 | 23 | # build artifacts for mixin 24 | /mixin/*_out 25 | mixin/prometheus_alerts.yaml 26 | 27 | # Output of the go coverage tool, specifically when used with LiteIDE 28 | *.out 29 | 30 | # Dependency directories (remove the comment below to include it) 31 | # vendor/ 32 | -------------------------------------------------------------------------------- /.golangci.yml: -------------------------------------------------------------------------------- 1 | --- 2 | run: 3 | deadline: 5m 4 | 5 | output: 6 | sort-results: true 7 | 8 | linters: 9 | enable: 10 | - depguard 11 | - gofumpt 12 | - goimports 13 | - revive 14 | - misspell 15 | 16 | linters-settings: 17 | errcheck: 18 | exclude: errcheck_excludes.txt 19 | goimports: 20 | local-prefixes: github.com/prometheus/prometheus 21 | gofumpt: 22 | extra-rules: true 23 | -------------------------------------------------------------------------------- /.promu.yml: -------------------------------------------------------------------------------- 1 | --- 2 | go: 3 | version: 1.20 4 | repository: 5 | path: github.com/grafana/imb-db2-prometheus-exporter 6 | build: 7 | binaries: 8 | - name: ibm-db2-exporter 9 | path: ./cmd/ibm-db2-exporter 10 | flags: -a 11 | ldflags: | 12 | -X github.com/prometheus/common/version.Version={{.Version}} 13 | -X github.com/prometheus/common/version.Revision={{.Revision}} 14 | -X github.com/prometheus/common/version.Branch={{.Branch}} 15 | -X 
github.com/prometheus/common/version.BuildUser={{user}}@{{host}} 16 | -X github.com/prometheus/common/version.BuildDate={{date "20060102-15:04:05"}} 17 | tarball: 18 | files: 19 | - LICENSE 20 | -------------------------------------------------------------------------------- /.yamllint: -------------------------------------------------------------------------------- 1 | --- 2 | extends: default 3 | 4 | rules: 5 | braces: 6 | max-spaces-inside: 1 7 | level: error 8 | brackets: 9 | max-spaces-inside: 1 10 | level: error 11 | commas: disable 12 | comments: disable 13 | comments-indentation: disable 14 | document-start: disable 15 | indentation: 16 | spaces: consistent 17 | indent-sequences: consistent 18 | line-length: disable 19 | -------------------------------------------------------------------------------- /CODEOWNERS: -------------------------------------------------------------------------------- 1 | # https://help.github.com/articles/about-codeowners/ 2 | # https://git-scm.com/docs/gitignore#_pattern_format 3 | 4 | # Wildcard permissions should be loaded first 5 | * @grafana/cloud-integrations 6 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. 
For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 
202 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | all:: vet common-all 2 | 3 | .PHONY: exporter 4 | exporter: 5 | go build -o ./bin/ibm_db2_exporter ./cmd/ibm-db2-exporter/main.go 6 | 7 | include Makefile.common 8 | -------------------------------------------------------------------------------- /Makefile.common: -------------------------------------------------------------------------------- 1 | # Copyright 2018 The Prometheus Authors 2 | # Licensed under the Apache License, Version 2.0 (the "License"); 3 | # you may not use this file except in compliance with the License. 4 | # You may obtain a copy of the License at 5 | # 6 | # http://www.apache.org/licenses/LICENSE-2.0 7 | # 8 | # Unless required by applicable law or agreed to in writing, software 9 | # distributed under the License is distributed on an "AS IS" BASIS, 10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11 | # See the License for the specific language governing permissions and 12 | # limitations under the License. 13 | 14 | 15 | # A common Makefile that includes rules to be reused in different prometheus projects. 16 | # !!! Open PRs only against the prometheus/prometheus/Makefile.common repository! 17 | 18 | # Example usage : 19 | # Create the main Makefile in the root project directory. 20 | # include Makefile.common 21 | # customTarget: 22 | # @echo ">> Running customTarget" 23 | # 24 | 25 | # Ensure GOBIN is not set during build so that promu is installed to the correct path 26 | unexport GOBIN 27 | 28 | GO ?= go 29 | GOFMT ?= $(GO)fmt 30 | FIRST_GOPATH := $(firstword $(subst :, ,$(shell $(GO) env GOPATH))) 31 | GOOPTS ?= 32 | GOHOSTOS ?= $(shell $(GO) env GOHOSTOS) 33 | GOHOSTARCH ?= $(shell $(GO) env GOHOSTARCH) 34 | 35 | GO_VERSION ?= $(shell $(GO) version) 36 | GO_VERSION_NUMBER ?= $(word 3, $(GO_VERSION)) 37 | PRE_GO_111 ?= $(shell echo $(GO_VERSION_NUMBER) | grep -E 'go1\.(10|[0-9])\.') 38 | 39 | PROMU := $(FIRST_GOPATH)/bin/promu 40 | pkgs = ./... 41 | 42 | ifeq (arm, $(GOHOSTARCH)) 43 | GOHOSTARM ?= $(shell GOARM= $(GO) env GOARM) 44 | GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH)v$(GOHOSTARM) 45 | else 46 | GO_BUILD_PLATFORM ?= $(GOHOSTOS)-$(GOHOSTARCH) 47 | endif 48 | 49 | GOTEST := $(GO) test 50 | GOTEST_DIR := 51 | ifneq ($(CIRCLE_JOB),) 52 | ifneq ($(shell which gotestsum),) 53 | GOTEST_DIR := test-results 54 | GOTEST := gotestsum --junitfile $(GOTEST_DIR)/unit-tests.xml -- 55 | endif 56 | endif 57 | 58 | PROMU_VERSION ?= 0.13.0 59 | PROMU_URL := https://github.com/prometheus/promu/releases/download/v$(PROMU_VERSION)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM).tar.gz 60 | 61 | SKIP_GOLANGCI_LINT := 62 | GOLANGCI_LINT := 63 | GOLANGCI_LINT_OPTS ?= 64 | GOLANGCI_LINT_VERSION ?= v1.49.0 65 | # golangci-lint only supports linux, darwin and windows platforms on i386/amd64. 66 | # windows isn't included here because of the path separator being different. 67 | ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux darwin)) 68 | ifeq ($(GOHOSTARCH),$(filter $(GOHOSTARCH),amd64 i386)) 69 | # If we're in CI and there is an Actions file, that means the linter 70 | # is being run in Actions, so we don't need to run it here. 
71 | ifneq (,$(SKIP_GOLANGCI_LINT)) 72 | GOLANGCI_LINT := 73 | else ifeq (,$(CIRCLE_JOB)) 74 | GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint 75 | else ifeq (,$(wildcard .github/workflows/golangci-lint.yml)) 76 | GOLANGCI_LINT := $(FIRST_GOPATH)/bin/golangci-lint 77 | endif 78 | endif 79 | endif 80 | 81 | PREFIX ?= $(shell pwd) 82 | BIN_DIR ?= $(shell pwd) 83 | DOCKER_IMAGE_TAG ?= $(subst /,-,$(shell git rev-parse --abbrev-ref HEAD)) 84 | DOCKERFILE_PATH ?= ./Dockerfile 85 | DOCKERBUILD_CONTEXT ?= ./ 86 | DOCKER_REPO ?= prom 87 | 88 | DOCKER_ARCHS ?= amd64 89 | 90 | BUILD_DOCKER_ARCHS = $(addprefix common-docker-,$(DOCKER_ARCHS)) 91 | PUBLISH_DOCKER_ARCHS = $(addprefix common-docker-publish-,$(DOCKER_ARCHS)) 92 | TAG_DOCKER_ARCHS = $(addprefix common-docker-tag-latest-,$(DOCKER_ARCHS)) 93 | 94 | ifeq ($(GOHOSTARCH),amd64) 95 | ifeq ($(GOHOSTOS),$(filter $(GOHOSTOS),linux freebsd darwin windows)) 96 | # Only supported on amd64 97 | test-flags := -race 98 | endif 99 | endif 100 | 101 | # This rule is used to forward a target like "build" to "common-build". This 102 | # allows a new "build" target to be defined in a Makefile which includes this 103 | # one and override "common-build" without override warnings. 104 | %: common-% ; 105 | 106 | .PHONY: common-all 107 | common-all: precheck style check_license lint yamllint unused build test 108 | 109 | .PHONY: common-style 110 | common-style: 111 | @echo ">> checking code style" 112 | @fmtRes=$$($(GOFMT) -d $$(find . -path ./vendor -prune -o -name '*.go' -print)); \ 113 | if [ -n "$${fmtRes}" ]; then \ 114 | echo "gofmt checking failed!"; echo "$${fmtRes}"; echo; \ 115 | echo "Please ensure you are using $$($(GO) version) for formatting code."; \ 116 | exit 1; \ 117 | fi 118 | 119 | .PHONY: common-check_license 120 | common-check_license: 121 | @echo ">> checking license header" 122 | @licRes=$$(for file in $$(find . -type f -iname '*.go' ! -path './vendor/*') ; do \ 123 | awk 'NR<=3' $$file | grep -Eq "(Copyright|generated|GENERATED)" || echo $$file; \ 124 | done); \ 125 | if [ -n "$${licRes}" ]; then \ 126 | echo "license header checking failed:"; echo "$${licRes}"; \ 127 | exit 1; \ 128 | fi 129 | 130 | .PHONY: common-deps 131 | common-deps: 132 | @echo ">> getting dependencies" 133 | $(GO) mod download 134 | 135 | .PHONY: update-go-deps 136 | update-go-deps: 137 | @echo ">> updating Go dependencies" 138 | @for m in $$($(GO) list -mod=readonly -m -f '{{ if and (not .Indirect) (not .Main)}}{{.Path}}{{end}}' all); do \ 139 | $(GO) get -d $$m; \ 140 | done 141 | $(GO) mod tidy 142 | 143 | .PHONY: common-test-short 144 | common-test-short: $(GOTEST_DIR) 145 | @echo ">> running short tests" 146 | $(GOTEST) -short $(GOOPTS) $(pkgs) 147 | 148 | .PHONY: common-test 149 | common-test: $(GOTEST_DIR) 150 | @echo ">> running all tests" 151 | $(GOTEST) $(test-flags) $(GOOPTS) $(pkgs) 152 | 153 | $(GOTEST_DIR): 154 | @mkdir -p $@ 155 | 156 | .PHONY: common-format 157 | common-format: 158 | @echo ">> formatting code" 159 | $(GO) fmt $(pkgs) 160 | 161 | .PHONY: common-vet 162 | common-vet: 163 | @echo ">> vetting code" 164 | $(GO) vet $(GOOPTS) $(pkgs) 165 | 166 | .PHONY: common-lint 167 | common-lint: $(GOLANGCI_LINT) 168 | ifdef GOLANGCI_LINT 169 | @echo ">> running golangci-lint" 170 | # 'go list' needs to be executed before staticcheck to prepopulate the modules cache. 171 | # Otherwise staticcheck might fail randomly for some reason not yet explained. 172 | $(GO) list -e -compiled -test=true -export=false -deps=true -find=false -tags= -- ./... 
> /dev/null 173 | $(GOLANGCI_LINT) run $(GOLANGCI_LINT_OPTS) $(pkgs) 174 | endif 175 | 176 | .PHONY: common-yamllint 177 | common-yamllint: 178 | @echo ">> running yamllint on all YAML files in the repository" 179 | ifeq (, $(shell which yamllint)) 180 | @echo "yamllint not installed so skipping" 181 | else 182 | yamllint . 183 | endif 184 | 185 | # For backward-compatibility. 186 | .PHONY: common-staticcheck 187 | common-staticcheck: lint 188 | 189 | .PHONY: common-unused 190 | common-unused: 191 | @echo ">> running check for unused/missing packages in go.mod" 192 | $(GO) mod tidy 193 | @git diff --exit-code -- go.sum go.mod 194 | 195 | .PHONY: common-build 196 | common-build: promu 197 | @echo ">> building binaries" 198 | $(PROMU) build --prefix $(PREFIX) $(PROMU_BINARIES) 199 | 200 | .PHONY: common-tarball 201 | common-tarball: promu 202 | @echo ">> building release tarball" 203 | $(PROMU) tarball --prefix $(PREFIX) $(BIN_DIR) 204 | 205 | .PHONY: common-docker $(BUILD_DOCKER_ARCHS) 206 | common-docker: $(BUILD_DOCKER_ARCHS) 207 | $(BUILD_DOCKER_ARCHS): common-docker-%: 208 | docker build -t "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" \ 209 | -f $(DOCKERFILE_PATH) \ 210 | --build-arg ARCH="$*" \ 211 | --build-arg OS="linux" \ 212 | $(DOCKERBUILD_CONTEXT) 213 | 214 | .PHONY: common-docker-publish $(PUBLISH_DOCKER_ARCHS) 215 | common-docker-publish: $(PUBLISH_DOCKER_ARCHS) 216 | $(PUBLISH_DOCKER_ARCHS): common-docker-publish-%: 217 | docker push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" 218 | 219 | DOCKER_MAJOR_VERSION_TAG = $(firstword $(subst ., ,$(shell cat VERSION))) 220 | .PHONY: common-docker-tag-latest $(TAG_DOCKER_ARCHS) 221 | common-docker-tag-latest: $(TAG_DOCKER_ARCHS) 222 | $(TAG_DOCKER_ARCHS): common-docker-tag-latest-%: 223 | docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:latest" 224 | docker tag "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:$(DOCKER_IMAGE_TAG)" "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$*:v$(DOCKER_MAJOR_VERSION_TAG)" 225 | 226 | .PHONY: common-docker-manifest 227 | common-docker-manifest: 228 | DOCKER_CLI_EXPERIMENTAL=enabled docker manifest create -a "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" $(foreach ARCH,$(DOCKER_ARCHS),$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME)-linux-$(ARCH):$(DOCKER_IMAGE_TAG)) 229 | DOCKER_CLI_EXPERIMENTAL=enabled docker manifest push "$(DOCKER_REPO)/$(DOCKER_IMAGE_NAME):$(DOCKER_IMAGE_TAG)" 230 | 231 | .PHONY: promu 232 | promu: $(PROMU) 233 | 234 | $(PROMU): 235 | $(eval PROMU_TMP := $(shell mktemp -d)) 236 | curl -s -L $(PROMU_URL) | tar -xvzf - -C $(PROMU_TMP) 237 | mkdir -p $(FIRST_GOPATH)/bin 238 | cp $(PROMU_TMP)/promu-$(PROMU_VERSION).$(GO_BUILD_PLATFORM)/promu $(FIRST_GOPATH)/bin/promu 239 | rm -r $(PROMU_TMP) 240 | 241 | .PHONY: proto 242 | proto: 243 | @echo ">> generating code from proto files" 244 | @./scripts/genproto.sh 245 | 246 | ifdef GOLANGCI_LINT 247 | $(GOLANGCI_LINT): 248 | mkdir -p $(FIRST_GOPATH)/bin 249 | curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/$(GOLANGCI_LINT_VERSION)/install.sh \ 250 | | sed -e '/install -d/d' \ 251 | | sh -s -- -b $(FIRST_GOPATH)/bin $(GOLANGCI_LINT_VERSION) 252 | endif 253 | 254 | .PHONY: precheck 255 | precheck:: 256 | 257 | define PRECHECK_COMMAND_template = 258 | precheck:: $(1)_precheck 259 | 260 | PRECHECK_COMMAND_$(1) ?= $(1) $$(strip $$(PRECHECK_OPTIONS_$(1))) 261 | .PHONY: $(1)_precheck 262 | $(1)_precheck: 263 | @if ! 
$$(PRECHECK_COMMAND_$(1)) 1>/dev/null 2>&1; then \ 264 | echo "Execution of '$$(PRECHECK_COMMAND_$(1))' command failed. Is $(1) installed?"; \ 265 | exit 1; \ 266 | fi 267 | endef 268 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # **ibm-db2-prometheus-exporter** 2 | 3 | Exports [IBM DB2](https://www.ibm.com/products/db2/database) metrics via HTTP for Prometheus consumption. 4 | 5 | **Note:** This exporter is not compatible with ARM64 architectures due to restrictions with the driver. 6 | 7 | # Prerequisites 8 | 9 | The [go_ibm_db driver](https://github.com/ibmdb/go_ibm_db) requires C library files to be installed in order to connect to the database. A minimal setup can be achieved by using the [clidriver](https://github.com/ibmdb/go_ibm_db/blob/master/installer/setup.go). 10 | 11 | In order for DB2 to correctly report metric values, the database being monitored must be "explicitly activated". Doing so ensures that certain metric values are correctly incremented and not periodically reset. However, it does result in a performance impact on the environment DB2 is running in. The size of this impact depends on the system, but activation yields the most accurate data reported by DB2 and, subsequently, by this exporter. To explicitly activate a database, connect to DB2, run the command `activate database <database name>`, and disconnect. DB2 will then correctly increment and store metrics. 12 | 13 | ``` 14 | db2 15 | activate database sample 16 | quit 17 | ``` 18 | 19 | To deactivate a database, connect to DB2, run the command `deactivate database <database name>`, and disconnect. Doing so will reset the metrics reported by DB2. The database must be reactivated in order for metrics to be properly incremented, stored, and reported by DB2. This also applies whenever DB2 is shut down. 20 | 21 | **Note:** Whether or not the database is activated only affects DB2's ability to report metrics; it does not affect DB2's behavior as a database. 22 | 23 | ## Driver installation (optional) 24 | 25 | ``` 26 | go install github.com/ibmdb/go_ibm_db/installer@latest 27 | ``` 28 | 29 | Make sure to have the clidriver set up: 30 | 31 | ``` 32 | cd go/pkg/mod/github.com/ibmdb/go_ibm_db\@latest/installer && go run setup.go 33 | ``` 34 | 35 | ## Required environment variables 36 | 37 | Set the following environment variables before running the exporter in order for the driver to work: 38 | 39 | ``` 40 | LD_LIBRARY_PATH=go/pkg/mod/github.com/ibmdb/clidriver/lib 41 | CGO_LDFLAGS=-L/usr/local/go/pkg/mod/github.com/ibmdb/tmp/clidriver/lib 42 | CGO_CFLAGS=-I/usr/local/go/pkg/mod/github.com/ibmdb/clidriver/include 43 | ``` 44 | 45 | # Configuration 46 | 47 | You can build a binary of the exporter by running `make exporter` in this directory. 48 | 49 | **Note:** This exporter only connects to a single database. To monitor multiple databases, each one will need its own exporter. 50 | 51 | ## Command line flags 52 | 53 | The exporter may be configured through its command line flags (run with -h to see options): 54 | 55 | ``` 56 | -h, --[no-]help Show context-sensitive help (also try --help-long and --help-man). 57 | --[no-]web.systemd-socket Use systemd socket activation listeners instead of port listeners (Linux only). 58 | --web.listen-address=:9953 ... 59 | Addresses on which to expose metrics and web interface. Repeatable for multiple 60 | addresses.
61 | --web.config.file="" [EXPERIMENTAL] Path to configuration file 62 | that can enable TLS or authentication. See: 63 | https://github.com/prometheus/exporter-toolkit/blob/master/docs/web-configuration.md 64 | --web.telemetry-path="/metrics" 65 | Path under which to expose metrics. ($IBM_DB2_EXPORTER_WEB_TELEMETRY_PATH) 66 | --dsn=DSN The connection string (data source name) to use to connect to the database when 67 | querying metrics. ($IBM_DB2_EXPORTER_DSN) 68 | --db=DB The database to connect to when querying metrics. ($IBM_DB2_EXPORTER_DB) 69 | --[no-]version Show application version. 70 | --log.level=info Only log messages with the given severity or above. One of: [debug, info, warn, 71 | error] 72 | --log.format=logfmt Output format of log messages. One of: [logfmt, json] 73 | ``` 74 | 75 | Example usage: 76 | 77 | ``` 78 | ./ibm_db2_exporter --db="database" --dsn="DATABASE=database;HOSTNAME=localhost;PORT=50000;UID=user;PWD=password;" 79 | ``` 80 | 81 | **Note:** 82 | 83 | - `--dsn` and `--db` are required flags unless they are set as environment variables. 84 | - This exporter does not verify DSN strings. If you have trouble connecting, make sure the DSN is configured correctly. 85 | 86 | ## Environment variables 87 | 88 | You can also set the DSN and DB as environment variables and then run the exporter: 89 | 90 | ``` 91 | IBM_DB2_EXPORTER_DSN="DATABASE=database;HOSTNAME=localhost;PORT=50000;UID=user;PWD=password;" 92 | IBM_DB2_EXPORTER_DB="database" 93 | 94 | ./ibm_db2_exporter 95 | ``` 96 | 97 | # Troubleshooting 98 | 99 | If you get this error message: 100 | 101 | `ping is failing: verify DSN is correct and DB2 is running properly` 102 | 103 | It means the exporter is unable to connect to DB2 but cannot determine the underlying cause. To fix this error, ensure the following: 104 | 105 | - Verify that the DSN being used by the exporter is correct for the instance/database of DB2 being monitored. 106 | - Verify that DB2 is running and all of its communication protocols are activated. 107 | 108 | After making any necessary changes, restart the exporter. 109 | 110 | **Tip:** To verify whether the port used by DB2 is cleared and ready for a restart, try the following command: 111 | 112 | ``` 113 | netstat -ane | grep "<port>" 114 | ``` 115 | 116 | This command outputs the state of the port given in place of `<port>`. This is the port used in the DSN; the default port for DB2 is `50000`. 117 | -------------------------------------------------------------------------------- /cmd/ibm-db2-exporter/main.go: -------------------------------------------------------------------------------- 1 | // Copyright 2023 Grafana Labs 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License.
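//
// ibm-db2-exporter wires the collector into a Prometheus HTTP endpoint: it
// parses the flags documented in the README, validates the collector
// configuration, registers the collector with the Prometheus client library,
// and serves metrics (listen address ":9953" and path "/metrics" by default).
//
// A typical invocation (values are illustrative) looks like:
//
//	./ibm_db2_exporter --db="sample" --dsn="DATABASE=sample;HOSTNAME=localhost;PORT=50000;UID=user;PWD=password;"
//
// A minimal, illustrative Prometheus scrape configuration for the default
// listen address could be:
//
//	scrape_configs:
//	  - job_name: ibm_db2 # job name is an assumption, not required by the exporter
//	    static_configs:
//	      - targets: ["localhost:9953"]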
14 | 15 | //go:build !arm64 16 | 17 | package main 18 | 19 | import ( 20 | "fmt" 21 | "net/http" 22 | "os" 23 | 24 | "github.com/alecthomas/kingpin/v2" 25 | "github.com/go-kit/log" 26 | "github.com/go-kit/log/level" 27 | "github.com/grafana/ibm-db2-prometheus-exporter/collector" 28 | "github.com/prometheus/client_golang/prometheus" 29 | "github.com/prometheus/client_golang/prometheus/promhttp" 30 | "github.com/prometheus/common/promlog" 31 | "github.com/prometheus/common/promlog/flag" 32 | "github.com/prometheus/common/version" 33 | "github.com/prometheus/exporter-toolkit/web" 34 | webflag "github.com/prometheus/exporter-toolkit/web/kingpinflag" 35 | ) 36 | 37 | var ( 38 | webConfig = webflag.AddFlags(kingpin.CommandLine, ":9953") 39 | metricPath = kingpin.Flag("web.telemetry-path", "Path under which to expose metrics.").Default("/metrics").Envar("IBM_DB2_EXPORTER_WEB_TELEMETRY_PATH").String() 40 | dsn = kingpin.Flag("dsn", "The connection string (data source name) to use to connect to the database when querying metrics.").Envar("IBM_DB2_EXPORTER_DSN").Required().String() 41 | db = kingpin.Flag("db", "The database to connect to when querying metrics.").Envar("IBM_DB2_EXPORTER_DB").Required().String() 42 | ) 43 | 44 | const ( 45 | // The name of the exporter. 46 | exporterName = "ibm_db2_exporter" 47 | landingPageHtml = ` 48 | 49 | IBM DB2 exporter 50 | 51 |

<h1>IBM DB2 exporter</h1>
52 | <p><a href="%s">Metrics</a></p>

53 | 54 | ` 55 | ) 56 | 57 | func main() { 58 | kingpin.Version(version.Print(exporterName)) 59 | 60 | promlogConfig := &promlog.Config{} 61 | 62 | flag.AddFlags(kingpin.CommandLine, promlogConfig) 63 | kingpin.HelpFlag.Short('h') 64 | kingpin.Parse() 65 | 66 | logger := promlog.New(promlogConfig) 67 | 68 | // Construct the collector, using the flags for configuration 69 | c := &collector.Config{ 70 | DSN: *dsn, 71 | DatabaseName: *db, 72 | } 73 | 74 | if err := c.Validate(); err != nil { 75 | level.Error(logger).Log("msg", "Configuration is invalid.", "err", err) 76 | os.Exit(1) 77 | } 78 | 79 | col := collector.NewCollector(logger, c) 80 | 81 | // Register collector with prometheus client library 82 | prometheus.MustRegister(version.NewCollector(exporterName)) 83 | prometheus.MustRegister(col) 84 | 85 | serveMetrics(logger) 86 | } 87 | 88 | func serveMetrics(logger log.Logger) { 89 | landingPage := []byte(fmt.Sprintf(landingPageHtml, *metricPath)) 90 | 91 | http.Handle(*metricPath, promhttp.Handler()) 92 | http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) { 93 | w.Header().Set("Content-Type", "text/html; charset=UTF-8") // nolint: errcheck 94 | w.Write(landingPage) // nolint: errcheck 95 | }) 96 | 97 | srv := &http.Server{} 98 | if err := web.ListenAndServe(srv, webConfig, logger); err != nil { 99 | level.Error(logger).Log("msg", "Error running HTTP server", "err", err) 100 | os.Exit(1) 101 | } 102 | } 103 | -------------------------------------------------------------------------------- /collector/collector.go: -------------------------------------------------------------------------------- 1 | // Copyright Grafana Labs 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
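//
// The Collector below implements prometheus.Collector. On each scrape, Collect
// opens a connection to DB2 through the go_ibm_db driver (ensureConnection),
// runs one query per metric area (database, application, lock, row,
// tablespace, log, and bufferpool metrics), converts each result row into
// constant metrics, and finally emits an ibm_db2_up gauge: 1 when every query
// succeeded, 0 when the connection or any query failed.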
14 | 15 | //go:build !arm64 16 | 17 | package collector 18 | 19 | import ( 20 | "database/sql" 21 | "fmt" 22 | "strconv" 23 | "sync" 24 | 25 | "github.com/go-kit/kit/log" 26 | "github.com/go-kit/kit/log/level" 27 | "github.com/prometheus/client_golang/prometheus" 28 | 29 | _ "github.com/ibmdb/go_ibm_db" // IBM DB2 db driver 30 | ) 31 | 32 | const ( 33 | namespace = "ibm_db2" 34 | 35 | labelBufferpoolName = "bufferpool_name" 36 | labelDatabaseName = "database_name" 37 | labelLockState = "lock_state" 38 | labelLogMember = "log_member" 39 | labelLogOperationType = "log_operation_type" 40 | labelLogUsageType = "log_usage_type" 41 | labelRowState = "row_state" 42 | labelTablespaceName = "tablespace_name" 43 | labelTablespaceType = "tablespace_type" 44 | ) 45 | 46 | type Collector struct { 47 | config *Config 48 | logger log.Logger 49 | dbName string 50 | db *sql.DB 51 | 52 | mu sync.Mutex 53 | pingFail bool 54 | 55 | applicationActive *prometheus.Desc 56 | applicationExecuting *prometheus.Desc 57 | connectionsTop *prometheus.Desc 58 | deadlockCount *prometheus.Desc 59 | lockUsage *prometheus.Desc 60 | lockWaitTime *prometheus.Desc 61 | lockTimeoutCount *prometheus.Desc 62 | bufferpoolHitRatio *prometheus.Desc 63 | rowCount *prometheus.Desc 64 | tablespaceUsage *prometheus.Desc 65 | logUsage *prometheus.Desc 66 | logOperations *prometheus.Desc 67 | dbUp *prometheus.Desc 68 | } 69 | 70 | // NewCollector creates a new collector from the given config 71 | func NewCollector(logger log.Logger, cfg *Config) *Collector { 72 | return &Collector{ 73 | config: cfg, 74 | logger: logger, 75 | dbName: cfg.DatabaseName, 76 | pingFail: false, 77 | applicationActive: prometheus.NewDesc( 78 | prometheus.BuildFQName(namespace, "application", "active"), 79 | "The number of applications that are currently connected to the database.", 80 | []string{labelDatabaseName}, 81 | nil, 82 | ), 83 | applicationExecuting: prometheus.NewDesc( 84 | prometheus.BuildFQName(namespace, "application", "executing"), 85 | "The number of applications for which the database manager is currently processing a request.", 86 | []string{labelDatabaseName}, 87 | nil, 88 | ), 89 | connectionsTop: prometheus.NewDesc( 90 | prometheus.BuildFQName(namespace, "connections", "top_total"), 91 | "The maximum number of concurrent connections to the database since the database was activated.", 92 | []string{labelDatabaseName}, 93 | nil, 94 | ), 95 | deadlockCount: prometheus.NewDesc( 96 | prometheus.BuildFQName(namespace, "deadlock", "total"), 97 | "The total number of deadlocks that have occurred.", 98 | []string{labelDatabaseName}, 99 | nil, 100 | ), 101 | lockUsage: prometheus.NewDesc( 102 | prometheus.BuildFQName(namespace, "lock", "usage"), 103 | "The number of agents waiting on a lock.", 104 | []string{labelDatabaseName, labelLockState}, 105 | nil, 106 | ), 107 | lockWaitTime: prometheus.NewDesc( 108 | prometheus.BuildFQName(namespace, "lock", "wait_time"), 109 | "The average wait time for a lock.", 110 | []string{labelDatabaseName}, 111 | nil, 112 | ), 113 | lockTimeoutCount: prometheus.NewDesc( 114 | prometheus.BuildFQName(namespace, "lock", "timeout_total"), 115 | "The number of timeouts that a request to lock an object occurred instead of being granted.", 116 | []string{labelDatabaseName}, 117 | nil, 118 | ), 119 | bufferpoolHitRatio: prometheus.NewDesc( 120 | prometheus.BuildFQName(namespace, "bufferpool", "hit_ratio"), 121 | "The percentage of time that the database manager did not need to load a page from disk to service a page request.", 
122 | []string{labelDatabaseName, labelBufferpoolName}, 123 | nil, 124 | ), 125 | rowCount: prometheus.NewDesc( 126 | prometheus.BuildFQName(namespace, "row", "total"), 127 | "The total number of rows inserted, updated, read or deleted.", 128 | []string{labelDatabaseName, labelRowState}, 129 | nil, 130 | ), 131 | tablespaceUsage: prometheus.NewDesc( 132 | prometheus.BuildFQName(namespace, "tablespace", "usage"), 133 | "The size and usage of table space in bytes.", 134 | []string{labelDatabaseName, labelTablespaceName, labelTablespaceType}, 135 | nil, 136 | ), 137 | logUsage: prometheus.NewDesc( 138 | prometheus.BuildFQName(namespace, "log", "usage"), 139 | "The disk blocks of active logs space in the database that is not being used by uncommitted transactions. Each block correlates to 4 KiB blocks of storage.", 140 | []string{labelDatabaseName, labelLogMember, labelLogUsageType}, 141 | nil, 142 | ), 143 | logOperations: prometheus.NewDesc( 144 | prometheus.BuildFQName(namespace, "log", "operations_total"), 145 | "The number of log pages read and written to by the logger.", 146 | []string{labelDatabaseName, labelLogMember, labelLogOperationType}, 147 | nil, 148 | ), 149 | dbUp: prometheus.NewDesc( 150 | prometheus.BuildFQName(namespace, "", "up"), 151 | "Metric indicating the status of the exporter collection. 1 indicates that the connection to IBM DB2 was successful, and all available metrics were collected. A 0 indicates that the exporter failed to collect metrics or to connect to IBM DB2.", 152 | []string{labelDatabaseName}, 153 | nil, 154 | ), 155 | } 156 | } 157 | 158 | // Describe emits all metric descriptions of the collector down the given channel 159 | // Implements prometheus.Collector 160 | func (c *Collector) Describe(descs chan<- *prometheus.Desc) { 161 | descs <- c.applicationActive 162 | descs <- c.applicationExecuting 163 | descs <- c.bufferpoolHitRatio 164 | descs <- c.connectionsTop 165 | descs <- c.deadlockCount 166 | descs <- c.lockTimeoutCount 167 | descs <- c.lockUsage 168 | descs <- c.lockWaitTime 169 | descs <- c.logOperations 170 | descs <- c.logUsage 171 | descs <- c.rowCount 172 | descs <- c.tablespaceUsage 173 | descs <- c.dbUp 174 | } 175 | 176 | // Collect collects all metrics for this collector and emits them down the provided channel 177 | // Implements prometheus.Collector 178 | func (c *Collector) Collect(metrics chan<- prometheus.Metric) { 179 | level.Debug(c.logger).Log("msg", "Starting to collect metrics.") 180 | 181 | var up float64 = 1 182 | if err := c.ensureConnection(); err != nil { 183 | level.Error(c.logger).Log("msg", "Failed to connect to DB2.", "err", err) 184 | metrics <- prometheus.MustNewConstMetric(c.dbUp, prometheus.GaugeValue, 0, c.dbName) 185 | return 186 | } 187 | defer c.closeConnections() 188 | 189 | if err := c.collectDatabaseMetrics(metrics); err != nil { 190 | level.Error(c.logger).Log("msg", "Failed to collect general database metrics.", "err", err) 191 | up = 0 192 | } 193 | 194 | if err := c.collectApplicationMetrics(metrics); err != nil { 195 | level.Error(c.logger).Log("msg", "Failed to collect application metrics.", "err", err) 196 | up = 0 197 | } 198 | 199 | if err := c.collectLockMetrics(metrics); err != nil { 200 | level.Error(c.logger).Log("msg", "Failed to collect lock metrics.", "err", err) 201 | up = 0 202 | } 203 | 204 | if err := c.collectRowMetrics(metrics); err != nil { 205 | level.Error(c.logger).Log("msg", "Failed to collect row operation metrics.", "err", err) 206 | up = 0 207 | } 208 | 209 | if err := 
c.collectTablespaceStorageMetrics(metrics); err != nil { 210 | level.Error(c.logger).Log("msg", "Failed to collect tablespace storage metrics.", "err", err) 211 | up = 0 212 | } 213 | 214 | if err := c.collectLogsMetrics(metrics); err != nil { 215 | level.Error(c.logger).Log("msg", "Failed to collect log metrics.", "err", err) 216 | up = 0 217 | } 218 | 219 | if err := c.collectBufferpoolMetrics(metrics); err != nil { 220 | level.Error(c.logger).Log("msg", "Failed to collect bufferpool metrics.", "err", err) 221 | up = 0 222 | } 223 | 224 | metrics <- prometheus.MustNewConstMetric(c.dbUp, prometheus.GaugeValue, up, c.dbName) 225 | } 226 | 227 | func (c *Collector) ensureConnection() error { 228 | c.mu.Lock() 229 | defer c.mu.Unlock() 230 | 231 | if c.db != nil { 232 | // this check is done so unit tests can work 233 | // placed after mutex so threads do not jump over this func 234 | // can occur when live if Ping takes too long to return 235 | return nil 236 | } 237 | 238 | if c.pingFail { 239 | // ping has failed, continually return err 240 | return fmt.Errorf("ping is failing: verify DSN is correct and DB2 is running properly") 241 | } 242 | 243 | db, err := sql.Open("go_ibm_db", c.config.DSN) 244 | if err != nil { 245 | return err 246 | } 247 | 248 | if err = db.Ping(); err != nil { 249 | // ping failed, set flag to true 250 | c.pingFail = true 251 | return err 252 | } 253 | 254 | c.db = db 255 | return nil 256 | } 257 | 258 | func (c *Collector) closeConnections() { 259 | if err := c.db.Close(); err != nil { 260 | level.Error(c.logger).Log("msg", "failing to close connection", "err", err) 261 | } 262 | c.db = nil 263 | } 264 | 265 | func (c *Collector) collectDatabaseMetrics(metrics chan<- prometheus.Metric) error { 266 | rows, err := c.db.Query(databaseTableMetricsQuery) 267 | if err != nil { 268 | return fmt.Errorf("failed to query metrics: %w", err) 269 | } 270 | defer rows.Close() 271 | 272 | for rows.Next() { 273 | var deadlock_count, connections_top float64 274 | if err := rows.Scan(&connections_top, &deadlock_count); err != nil { 275 | return fmt.Errorf("failed to scan row: %w", err) 276 | } 277 | 278 | metrics <- prometheus.MustNewConstMetric(c.connectionsTop, prometheus.CounterValue, connections_top, c.dbName) 279 | metrics <- prometheus.MustNewConstMetric(c.deadlockCount, prometheus.CounterValue, deadlock_count, c.dbName) 280 | } 281 | 282 | return rows.Err() 283 | } 284 | 285 | func (c *Collector) collectApplicationMetrics(metrics chan<- prometheus.Metric) error { 286 | rows, err := c.db.Query(applicationMetricsQuery) 287 | if err != nil { 288 | return fmt.Errorf("failed to query metrics: %w", err) 289 | } 290 | defer rows.Close() 291 | 292 | for rows.Next() { 293 | var application_active, application_executing float64 294 | if err := rows.Scan(&application_active, &application_executing); err != nil { 295 | return fmt.Errorf("failed to scan row: %w", err) 296 | } 297 | 298 | metrics <- prometheus.MustNewConstMetric(c.applicationActive, prometheus.GaugeValue, application_active, c.dbName) 299 | metrics <- prometheus.MustNewConstMetric(c.applicationExecuting, prometheus.GaugeValue, application_executing, c.dbName) 300 | } 301 | 302 | return rows.Err() 303 | } 304 | 305 | func (c *Collector) collectLockMetrics(metrics chan<- prometheus.Metric) error { 306 | rows, err := c.db.Query(lockMetricsQuery) 307 | if err != nil { 308 | return fmt.Errorf("failed to query metrics: %w", err) 309 | } 310 | defer rows.Close() 311 | 312 | for rows.Next() { 313 | var lock_waiting, lock_active, 
lock_wait_time, lock_timeout_count float64 314 | if err := rows.Scan(&lock_waiting, &lock_active, &lock_wait_time, &lock_timeout_count); err != nil { 315 | return fmt.Errorf("failed to scan row: %w", err) 316 | } 317 | 318 | metrics <- prometheus.MustNewConstMetric(c.lockUsage, prometheus.GaugeValue, lock_waiting, c.dbName, "waiting") 319 | metrics <- prometheus.MustNewConstMetric(c.lockUsage, prometheus.GaugeValue, lock_active, c.dbName, "active") 320 | metrics <- prometheus.MustNewConstMetric(c.lockWaitTime, prometheus.GaugeValue, lock_wait_time, c.dbName) 321 | metrics <- prometheus.MustNewConstMetric(c.lockTimeoutCount, prometheus.CounterValue, lock_timeout_count, c.dbName) 322 | } 323 | 324 | return rows.Err() 325 | } 326 | 327 | func (c *Collector) collectRowMetrics(metrics chan<- prometheus.Metric) error { 328 | rows, err := c.db.Query(rowMetricsQuery) 329 | if err != nil { 330 | return fmt.Errorf("failed to query metrics: %w", err) 331 | } 332 | defer rows.Close() 333 | 334 | for rows.Next() { 335 | var deleted, inserted, updated, read float64 336 | if err := rows.Scan(&deleted, &inserted, &updated, &read); err != nil { 337 | return fmt.Errorf("failed to scan row: %w", err) 338 | } 339 | 340 | metrics <- prometheus.MustNewConstMetric(c.rowCount, prometheus.CounterValue, deleted, c.dbName, "deleted") 341 | metrics <- prometheus.MustNewConstMetric(c.rowCount, prometheus.CounterValue, inserted, c.dbName, "inserted") 342 | metrics <- prometheus.MustNewConstMetric(c.rowCount, prometheus.CounterValue, updated, c.dbName, "updated") 343 | metrics <- prometheus.MustNewConstMetric(c.rowCount, prometheus.CounterValue, read, c.dbName, "read") 344 | } 345 | 346 | return rows.Err() 347 | } 348 | 349 | func (c *Collector) collectTablespaceStorageMetrics(metrics chan<- prometheus.Metric) error { 350 | rows, err := c.db.Query(tablespaceStorageMetricsQuery) 351 | if err != nil { 352 | return fmt.Errorf("failed to query metrics: %w", err) 353 | } 354 | defer rows.Close() 355 | 356 | for rows.Next() { 357 | var tablespace_name string 358 | var total, free, used float64 359 | if err := rows.Scan(&tablespace_name, &total, &free, &used); err != nil { 360 | return fmt.Errorf("failed to query metrics: %w", err) 361 | } 362 | 363 | metrics <- prometheus.MustNewConstMetric(c.tablespaceUsage, prometheus.GaugeValue, total, c.dbName, tablespace_name, "total") 364 | metrics <- prometheus.MustNewConstMetric(c.tablespaceUsage, prometheus.GaugeValue, free, c.dbName, tablespace_name, "free") 365 | metrics <- prometheus.MustNewConstMetric(c.tablespaceUsage, prometheus.GaugeValue, used, c.dbName, tablespace_name, "used") 366 | } 367 | 368 | return rows.Err() 369 | } 370 | 371 | func (c *Collector) collectLogsMetrics(metrics chan<- prometheus.Metric) error { 372 | rows, err := c.db.Query(logsMetricsQuery) 373 | if err != nil { 374 | return fmt.Errorf("failed to query metrics: %w", err) 375 | } 376 | defer rows.Close() 377 | 378 | for rows.Next() { 379 | var iMember int 380 | var available, used, reads, writes float64 381 | if err := rows.Scan(&iMember, &available, &used, &reads, &writes); err != nil { 382 | return fmt.Errorf("failed to query metrics: %w", err) 383 | } 384 | member := strconv.Itoa(iMember) 385 | 386 | metrics <- prometheus.MustNewConstMetric(c.logUsage, prometheus.GaugeValue, available, c.dbName, member, "available") 387 | metrics <- prometheus.MustNewConstMetric(c.logUsage, prometheus.GaugeValue, used, c.dbName, member, "used") 388 | metrics <- prometheus.MustNewConstMetric(c.logOperations, 
prometheus.CounterValue, reads, c.dbName, member, "read") 389 | metrics <- prometheus.MustNewConstMetric(c.logOperations, prometheus.CounterValue, writes, c.dbName, member, "write") 390 | } 391 | 392 | return rows.Err() 393 | } 394 | 395 | func (c *Collector) collectBufferpoolMetrics(metrics chan<- prometheus.Metric) error { 396 | rows, err := c.db.Query(bufferpoolMetricsQuery) 397 | if err != nil { 398 | return fmt.Errorf("failed to query metrics: %w", err) 399 | } 400 | defer rows.Close() 401 | 402 | for rows.Next() { 403 | var bp_name string 404 | var ratio, foo float64 405 | if err := rows.Scan(&bp_name, &foo, &foo, &foo, &ratio); err != nil { 406 | return fmt.Errorf("failed to query metrics: %w", err) 407 | } 408 | 409 | // skip over row if bp hit ratio can't be calculated/is -1 410 | if ratio == -1 { 411 | continue 412 | } 413 | 414 | metrics <- prometheus.MustNewConstMetric(c.bufferpoolHitRatio, prometheus.GaugeValue, ratio, c.dbName, bp_name) 415 | } 416 | 417 | return rows.Err() 418 | } 419 | -------------------------------------------------------------------------------- /collector/collector_test.go: -------------------------------------------------------------------------------- 1 | // Copyright Grafana Labs 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
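//
// The tests below exercise Collect against a sqlmock-backed *sql.DB instead of
// a live DB2 instance: createMockDB returns canned rows for every query the
// collector issues, createQueryErrMockDB makes every query fail, and the
// resulting metric output is compared with the golden files in testdata/
// (all_metrics.prom and query_failure.prom) via prometheus/testutil.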
14 | 15 | //go:build !arm64 16 | 17 | package collector 18 | 19 | import ( 20 | "database/sql" 21 | "database/sql/driver" 22 | "errors" 23 | "os" 24 | "path/filepath" 25 | "strconv" 26 | "testing" 27 | 28 | "github.com/DATA-DOG/go-sqlmock" 29 | "github.com/go-kit/kit/log" 30 | "github.com/prometheus/client_golang/prometheus/testutil" 31 | "github.com/stretchr/testify/require" 32 | ) 33 | 34 | func TestCollector_Collect(t *testing.T) { 35 | t.Run("Metrics match expected", func(t *testing.T) { 36 | db, mock := createMockDB(t) 37 | 38 | col := NewCollector(log.NewJSONLogger(os.Stdout), &Config{}) 39 | col.db = db 40 | 41 | // reading in & comparing metrics 42 | f, err := os.Open(filepath.Join("testdata", "all_metrics.prom")) 43 | require.NoError(t, err) 44 | defer f.Close() 45 | 46 | require.NoError(t, testutil.CollectAndCompare(col, f)) 47 | require.NoError(t, mock.ExpectationsWereMet()) 48 | }) 49 | t.Run("Metrics have no lint errors", func(t *testing.T) { 50 | db, mock := createMockDB(t) 51 | 52 | col := NewCollector(log.NewJSONLogger(os.Stdout), &Config{}) 53 | col.db = db 54 | 55 | p, err := testutil.CollectAndLint(col) 56 | require.NoError(t, err) 57 | require.Empty(t, p) 58 | 59 | require.NoError(t, mock.ExpectationsWereMet()) 60 | }) 61 | t.Run("All queries fail", func(t *testing.T) { 62 | db, mock := createQueryErrMockDB(t) 63 | 64 | col := NewCollector(log.NewJSONLogger(os.Stdout), &Config{}) 65 | col.db = db 66 | 67 | // reading in & comparing metrics 68 | f, err := os.Open(filepath.Join("testdata", "query_failure.prom")) 69 | require.NoError(t, err) 70 | defer f.Close() 71 | 72 | require.NoError(t, testutil.CollectAndCompare(col, f)) 73 | require.NoError(t, mock.ExpectationsWereMet()) 74 | }) 75 | t.Run("Database connection fails", func(t *testing.T) { 76 | col := NewCollector(log.NewJSONLogger(os.Stdout), &Config{}) 77 | 78 | openErr := col.ensureConnection() 79 | require.Error(t, openErr) 80 | 81 | f, err := os.Open(filepath.Join("testdata", "query_failure.prom")) 82 | require.NoError(t, err) 83 | defer f.Close() 84 | 85 | // No metrics should be scraped if the database fails to open 86 | err = testutil.CollectAndCompare(col, f) 87 | require.NoError(t, err) 88 | }) 89 | } 90 | 91 | // //////////////////// 92 | // Helper test funcs // 93 | // //////////////////// 94 | func newRows(t *testing.T, rows [][]string) *sqlmock.Rows { 95 | numRows := len(rows[0]) 96 | 97 | for _, row := range rows { 98 | require.Equal(t, len(row), numRows, "Number of returned values must be equal for all rows") 99 | } 100 | 101 | cols := []string{} 102 | for i := 0; i < numRows; i++ { 103 | cols = append(cols, strconv.FormatInt(int64(i), 10)) 104 | } 105 | 106 | sqlRows := sqlmock.NewRows(cols) 107 | 108 | for _, row := range rows { 109 | rowVals := []driver.Value{} 110 | for _, s := range row { 111 | rowVals = append(rowVals, sql.NullString{ 112 | String: s, 113 | Valid: true, 114 | }) 115 | } 116 | 117 | sqlRows.AddRow(rowVals...) 
118 | } 119 | 120 | return sqlRows 121 | } 122 | 123 | // represents a mock db 124 | // returns predefined results for queries 125 | func createMockDB(t *testing.T) (*sql.DB, sqlmock.Sqlmock) { 126 | t.Helper() 127 | 128 | db, mock, err := sqlmock.New(sqlmock.QueryMatcherOption(sqlmock.QueryMatcherEqual)) 129 | require.NoError(t, err) 130 | 131 | // (sum) connections_top, deadlock_count 132 | mock.ExpectQuery(databaseTableMetricsQuery).WillReturnRows( 133 | newRows(t, [][]string{ 134 | {"18", "3"}, 135 | }), 136 | ).RowsWillBeClosed() 137 | 138 | // (sum) application_active, application_executing 139 | mock.ExpectQuery(applicationMetricsQuery).WillReturnRows( 140 | newRows(t, [][]string{ 141 | {"12", "7"}, 142 | }), 143 | ) 144 | 145 | // (sum) lock_waiting, lock_active, lock_wait_time, lock_timeout_count 146 | mock.ExpectQuery(lockMetricsQuery).WillReturnRows( 147 | newRows(t, [][]string{ 148 | {"3", "5", "44", "2"}, 149 | }), 150 | ) 151 | 152 | // (sum) rows_deleted, rows_inserted, rows_updated, rows_read 153 | mock.ExpectQuery(rowMetricsQuery).WillReturnRows( 154 | newRows(t, [][]string{ 155 | {"33", "44", "55", "66"}, 156 | }), 157 | ) 158 | 159 | // (rows) tbsp_name, total_b, free_b, used_b 160 | mock.ExpectQuery(tablespaceStorageMetricsQuery).WillReturnRows( 161 | newRows(t, [][]string{ 162 | {"tbsp1", "333", "444", "555"}, 163 | {"tbsp2", "666", "777", "888"}, 164 | {"tbsp3", "999", "111", "222"}, 165 | }), 166 | ) 167 | 168 | // (rows) member, blocks_available, blocks_used, log_reads, log_writes 169 | mock.ExpectQuery(logsMetricsQuery).WillReturnRows( 170 | newRows(t, [][]string{ 171 | {"1", "22", "33", "4", "5"}, 172 | {"2", "66", "77", "8", "9"}, 173 | {"3", "11", "22", "3", "4"}, 174 | }), 175 | ) 176 | 177 | // (rows) bp_name, logical_reads, physical_reads, member, hit_ratio 178 | mock.ExpectQuery(bufferpoolMetricsQuery).WillReturnRows( 179 | newRows(t, [][]string{ 180 | {"bp1", "0", "0", "1", "11.22"}, 181 | {"bp2", "0", "0", "2", "33.44"}, 182 | {"bp3", "0", "0", "3", "55.66"}, 183 | {"bp4", "0", "0", "4", "77.88"}, 184 | }), 185 | ) 186 | 187 | mock.ExpectClose() 188 | 189 | return db, mock 190 | } 191 | 192 | func createQueryErrMockDB(t *testing.T) (*sql.DB, sqlmock.Sqlmock) { 193 | t.Helper() 194 | 195 | queryErr := errors.New("the query failed for inexplicable reasons") 196 | 197 | db, mock, err := sqlmock.New(sqlmock.QueryMatcherOption(sqlmock.QueryMatcherEqual)) 198 | require.NoError(t, err) 199 | 200 | mock.ExpectQuery(databaseTableMetricsQuery).WillReturnError(queryErr) 201 | mock.ExpectQuery(applicationMetricsQuery).WillReturnError(queryErr) 202 | mock.ExpectQuery(lockMetricsQuery).WillReturnError(queryErr) 203 | mock.ExpectQuery(rowMetricsQuery).WillReturnError(queryErr) 204 | mock.ExpectQuery(tablespaceStorageMetricsQuery).WillReturnError(queryErr) 205 | mock.ExpectQuery(logsMetricsQuery).WillReturnError(queryErr) 206 | mock.ExpectQuery(bufferpoolMetricsQuery).WillReturnError(queryErr) 207 | 208 | mock.ExpectClose() 209 | 210 | return db, mock 211 | } 212 | -------------------------------------------------------------------------------- /collector/config.go: -------------------------------------------------------------------------------- 1 | // Copyright Grafana Labs 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 
5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 14 | 15 | //go:build !arm64 16 | 17 | package collector 18 | 19 | import ( 20 | "errors" 21 | ) 22 | 23 | type Config struct { 24 | DSN string 25 | DatabaseName string 26 | } 27 | 28 | var ( 29 | errNoDSN = errors.New("DSN must be specified and not empty") 30 | errNoDatabase = errors.New("DATABASE must be specified and not empty") 31 | ) 32 | 33 | func (c *Config) Validate() error { 34 | if c.DSN == "" { 35 | return errNoDSN 36 | } 37 | 38 | if c.DatabaseName == "" { 39 | return errNoDatabase 40 | } 41 | 42 | return nil 43 | } 44 | -------------------------------------------------------------------------------- /collector/config_test.go: -------------------------------------------------------------------------------- 1 | // Copyright Grafana Labs 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
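// A minimal sketch of how a Config is typically filled in (illustrative only; the
// host, port, and credentials below are placeholders rather than values taken from
// this repository). The go_ibm_db driver accepts a keyword=value connection string:
//
//	cfg := Config{
//		DSN:          "DATABASE=sample;HOSTNAME=localhost;PORT=50000;UID=db2inst1;PWD=secret",
//		DatabaseName: "sample",
//	}
//
// Config.Validate (defined in config.go above) only checks that both fields are
// non-empty; it does not parse or verify the DSN itself.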
14 | 15 | //go:build !arm64 16 | 17 | package collector 18 | 19 | import ( 20 | "testing" 21 | 22 | "github.com/stretchr/testify/require" 23 | ) 24 | 25 | func TestConfig_Validate(t *testing.T) { 26 | testCases := []struct { 27 | name string 28 | inputConfig Config 29 | expectedErr error 30 | }{ 31 | { 32 | name: "valid config", 33 | inputConfig: Config{ 34 | DSN: "DATABASE=database;HOSTNAME=localhost;PORT=3333;UID=admin;PWD=password", 35 | DatabaseName: "database", 36 | }, 37 | expectedErr: nil, 38 | }, 39 | { 40 | name: "no database", 41 | inputConfig: Config{ 42 | DSN: "DATABASE=database;HOSTNAME=localhost;PORT=3333;UID=admin;PWD=password", 43 | }, 44 | expectedErr: errNoDatabase, 45 | }, 46 | { 47 | name: "no dsn", 48 | inputConfig: Config{ 49 | DatabaseName: "database", 50 | }, 51 | expectedErr: errNoDSN, 52 | }, 53 | { 54 | name: "empty DSN", 55 | inputConfig: Config{ 56 | DSN: "", 57 | DatabaseName: "database", 58 | }, 59 | expectedErr: errNoDSN, 60 | }, 61 | { 62 | name: "empty DatabaseName", 63 | inputConfig: Config{ 64 | DSN: "DATABASE=database;HOSTNAME=localhost;PORT=3333;UID=admin;PWD=password", 65 | DatabaseName: "", 66 | }, 67 | expectedErr: errNoDatabase, 68 | }, 69 | } 70 | 71 | for _, tc := range testCases { 72 | t.Run(tc.name, func(t *testing.T) { 73 | err := tc.inputConfig.Validate() 74 | if tc.expectedErr != nil { 75 | require.Equal(t, tc.expectedErr, err) 76 | } else { 77 | require.Nil(t, err) 78 | } 79 | }) 80 | } 81 | } 82 | -------------------------------------------------------------------------------- /collector/query.go: -------------------------------------------------------------------------------- 1 | // Copyright Grafana Labs 2 | // 3 | // Licensed under the Apache License, Version 2.0 (the "License"); 4 | // you may not use this file except in compliance with the License. 5 | // You may obtain a copy of the License at 6 | // 7 | // http://www.apache.org/licenses/LICENSE-2.0 8 | // 9 | // Unless required by applicable law or agreed to in writing, software 10 | // distributed under the License is distributed on an "AS IS" BASIS, 11 | // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 12 | // See the License for the specific language governing permissions and 13 | // limitations under the License. 
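// The SQL constants below read from DB2's MON_GET_* monitoring table functions; the
// -2 argument passed to them requests data from all active database members. As a
// worked example of the HIT_RATIO expression in bufferpoolMetricsQuery: with
// logical_reads = 200 and physical_reads = 50, the ratio is
// (1 - 50/200) * 100 = 75.00, meaning 75% of page requests were served from the
// bufferpool without a physical read. (These two figures are illustrative and are
// not taken from the queries or the test data.)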
14 | 15 | //go:build !arm64 16 | 17 | package collector 18 | 19 | const ( 20 | databaseTableMetricsQuery = `SELECT 21 | SUM(connections_top) as connections_top, 22 | SUM(deadlocks) as deadlock_count 23 | FROM TABLE(MON_GET_DATABASE(-2)) 24 | ` 25 | 26 | applicationMetricsQuery = `SELECT 27 | SUM(appls_cur_cons) as application_active, 28 | SUM(appls_in_db2) as application_executing 29 | FROM TABLE(MON_GET_DATABASE(-2)) 30 | ` 31 | 32 | lockMetricsQuery = `SELECT 33 | SUM(num_locks_waiting) as lock_waiting, 34 | SUM(num_locks_held) as lock_active, 35 | SUM(lock_wait_time) as lock_wait_time, 36 | SUM(lock_timeouts) as lock_timeout_count 37 | FROM TABLE(MON_GET_DATABASE(-2)) 38 | ` 39 | 40 | rowMetricsQuery = `SELECT 41 | SUM(rows_deleted) as rows_deleted, 42 | SUM(rows_inserted) as rows_inserted, 43 | SUM(rows_updated) as rows_updated, 44 | SUM(rows_read) as rows_read 45 | FROM TABLE(MON_GET_DATABASE(-2)) 46 | ` 47 | 48 | tablespaceStorageMetricsQuery = `SELECT 49 | tbsp_name, 50 | (tbsp_total_pages*tbsp_page_size) as total_b, 51 | (tbsp_free_pages*tbsp_page_size) as free_b, 52 | (tbsp_used_pages*tbsp_page_size) as used_b 53 | FROM TABLE(MON_GET_TABLESPACE('', -2)) 54 | ` 55 | 56 | logsMetricsQuery = `SELECT 57 | member, 58 | (total_log_available / 4000) as blocks_available, 59 | (total_log_used / 4000) as blocks_used, 60 | log_reads, 61 | log_writes 62 | FROM TABLE(MON_GET_TRANSACTION_LOG(-2)) 63 | ` 64 | 65 | bufferpoolMetricsQuery = `WITH BPMETRICS AS 66 | ( 67 | SELECT 68 | bp_name, 69 | pool_data_l_reads + pool_temp_data_l_reads + 70 | pool_index_l_reads + pool_temp_index_l_reads + 71 | pool_xda_l_reads + pool_temp_xda_l_reads as logical_reads, 72 | pool_data_p_reads + pool_temp_data_p_reads + 73 | pool_index_p_reads + pool_temp_index_p_reads + 74 | pool_xda_p_reads + pool_temp_xda_p_reads as physical_reads, 75 | member 76 | FROM TABLE(MON_GET_BUFFERPOOL('',-2)) AS METRICS 77 | ) 78 | SELECT 79 | VARCHAR(bp_name,20) AS bp_name, 80 | logical_reads, 81 | physical_reads, 82 | member, 83 | CASE WHEN logical_reads > 0 84 | THEN DEC((1 - (FLOAT(physical_reads) / FLOAT(logical_reads))) * 100,5,2) 85 | ELSE -1 86 | END AS HIT_RATIO 87 | FROM BPMETRICS; 88 | ` 89 | ) 90 | -------------------------------------------------------------------------------- /collector/testdata/all_metrics.prom: -------------------------------------------------------------------------------- 1 | # HELP ibm_db2_application_active The number of applications that are currently connected to the database. 2 | # TYPE ibm_db2_application_active gauge 3 | ibm_db2_application_active{database_name=""} 12 4 | # HELP ibm_db2_application_executing The number of applications for which the database manager is currently processing a request. 5 | # TYPE ibm_db2_application_executing gauge 6 | ibm_db2_application_executing{database_name=""} 7 7 | # HELP ibm_db2_bufferpool_hit_ratio The percentage of time that the database manager did not need to load a page from disk to service a page request. 8 | # TYPE ibm_db2_bufferpool_hit_ratio gauge 9 | ibm_db2_bufferpool_hit_ratio{bufferpool_name="bp1",database_name=""} 11.22 10 | ibm_db2_bufferpool_hit_ratio{bufferpool_name="bp2",database_name=""} 33.44 11 | ibm_db2_bufferpool_hit_ratio{bufferpool_name="bp3",database_name=""} 55.66 12 | ibm_db2_bufferpool_hit_ratio{bufferpool_name="bp4",database_name=""} 77.88 13 | # HELP ibm_db2_connections_top_total The maximum number of concurrent connections to the database since the database was activated. 
14 | # TYPE ibm_db2_connections_top_total counter 15 | ibm_db2_connections_top_total{database_name=""} 18 16 | # HELP ibm_db2_deadlock_total The total number of deadlocks that have occurred. 17 | # TYPE ibm_db2_deadlock_total counter 18 | ibm_db2_deadlock_total{database_name=""} 3 19 | # HELP ibm_db2_lock_timeout_total The number of timeouts that a request to lock an object occurred instead of being granted. 20 | # TYPE ibm_db2_lock_timeout_total counter 21 | ibm_db2_lock_timeout_total{database_name=""} 2 22 | # HELP ibm_db2_lock_usage The number of agents waiting on a lock. 23 | # TYPE ibm_db2_lock_usage gauge 24 | ibm_db2_lock_usage{database_name="",lock_state="active"} 5 25 | ibm_db2_lock_usage{database_name="",lock_state="waiting"} 3 26 | # HELP ibm_db2_lock_wait_time The average wait time for a lock. 27 | # TYPE ibm_db2_lock_wait_time gauge 28 | ibm_db2_lock_wait_time{database_name=""} 44 29 | # HELP ibm_db2_log_operations_total The number of log pages read and written to by the logger. 30 | # TYPE ibm_db2_log_operations_total counter 31 | ibm_db2_log_operations_total{database_name="",log_member="1",log_operation_type="read"} 4 32 | ibm_db2_log_operations_total{database_name="",log_member="1",log_operation_type="write"} 5 33 | ibm_db2_log_operations_total{database_name="",log_member="2",log_operation_type="read"} 8 34 | ibm_db2_log_operations_total{database_name="",log_member="2",log_operation_type="write"} 9 35 | ibm_db2_log_operations_total{database_name="",log_member="3",log_operation_type="read"} 3 36 | ibm_db2_log_operations_total{database_name="",log_member="3",log_operation_type="write"} 4 37 | # HELP ibm_db2_log_usage The disk blocks of active logs space in the database that is not being used by uncommitted transactions. Each block correlates to 4 KiB blocks of storage. 38 | # TYPE ibm_db2_log_usage gauge 39 | ibm_db2_log_usage{database_name="",log_member="1",log_usage_type="available"} 22 40 | ibm_db2_log_usage{database_name="",log_member="1",log_usage_type="used"} 33 41 | ibm_db2_log_usage{database_name="",log_member="2",log_usage_type="available"} 66 42 | ibm_db2_log_usage{database_name="",log_member="2",log_usage_type="used"} 77 43 | ibm_db2_log_usage{database_name="",log_member="3",log_usage_type="available"} 11 44 | ibm_db2_log_usage{database_name="",log_member="3",log_usage_type="used"} 22 45 | # HELP ibm_db2_row_total The total number of rows inserted, updated, read or deleted. 46 | # TYPE ibm_db2_row_total counter 47 | ibm_db2_row_total{database_name="",row_state="deleted"} 33 48 | ibm_db2_row_total{database_name="",row_state="inserted"} 44 49 | ibm_db2_row_total{database_name="",row_state="read"} 66 50 | ibm_db2_row_total{database_name="",row_state="updated"} 55 51 | # HELP ibm_db2_tablespace_usage The size and usage of table space in bytes. 
52 | # TYPE ibm_db2_tablespace_usage gauge 53 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp1",tablespace_type="free"} 444 54 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp1",tablespace_type="total"} 333 55 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp1",tablespace_type="used"} 555 56 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp2",tablespace_type="free"} 777 57 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp2",tablespace_type="total"} 666 58 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp2",tablespace_type="used"} 888 59 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp3",tablespace_type="free"} 111 60 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp3",tablespace_type="total"} 999 61 | ibm_db2_tablespace_usage{database_name="",tablespace_name="tbsp3",tablespace_type="used"} 222 62 | # HELP ibm_db2_up Metric indicating the status of the exporter collection. 1 indicates that the connection to IBM DB2 was successful, and all available metrics were collected. A 0 indicates that the exporter failed to collect metrics or to connect to IBM DB2. 63 | # TYPE ibm_db2_up gauge 64 | ibm_db2_up{database_name=""} 1 65 | -------------------------------------------------------------------------------- /collector/testdata/query_failure.prom: -------------------------------------------------------------------------------- 1 | # HELP ibm_db2_up Metric indicating the status of the exporter collection. 1 indicates that the connection to IBM DB2 was successful, and all available metrics were collected. A 0 indicates that the exporter failed to collect metrics or to connect to IBM DB2. 2 | # TYPE ibm_db2_up gauge 3 | ibm_db2_up{database_name=""} 0 4 | -------------------------------------------------------------------------------- /errcheck_excludes.txt: -------------------------------------------------------------------------------- 1 | // Never check for logger errors 2 | (github.com/go-kit/log.Logger).Log 3 | -------------------------------------------------------------------------------- /go.mod: -------------------------------------------------------------------------------- 1 | module github.com/grafana/ibm-db2-prometheus-exporter 2 | 3 | go 1.20 4 | 5 | require ( 6 | github.com/DATA-DOG/go-sqlmock v1.5.0 7 | github.com/go-kit/log v0.2.1 8 | github.com/ibmdb/go_ibm_db v0.4.3 9 | github.com/prometheus/client_golang v1.15.1 10 | github.com/stretchr/testify v1.8.2 11 | ) 12 | 13 | require ( 14 | github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 // indirect 15 | github.com/coreos/go-systemd/v22 v22.5.0 // indirect 16 | github.com/davecgh/go-spew v1.1.1 // indirect 17 | github.com/go-logfmt/logfmt v0.5.1 // indirect 18 | github.com/jpillora/backoff v1.0.0 // indirect 19 | github.com/kr/text v0.2.0 // indirect 20 | github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect 21 | github.com/pmezard/go-difflib v1.0.0 // indirect 22 | github.com/rogpeppe/go-internal v1.10.0 // indirect 23 | github.com/xhit/go-str2duration/v2 v2.1.0 // indirect 24 | golang.org/x/crypto v0.36.0 // indirect 25 | golang.org/x/net v0.38.0 // indirect 26 | golang.org/x/oauth2 v0.6.0 // indirect 27 | golang.org/x/sync v0.12.0 // indirect 28 | golang.org/x/text v0.23.0 // indirect 29 | google.golang.org/appengine v1.6.7 // indirect 30 | gopkg.in/yaml.v2 v2.4.0 // indirect 31 | gopkg.in/yaml.v3 v3.0.1 // indirect 32 | ) 33 | 34 | require ( 35 | 
github.com/alecthomas/kingpin/v2 v2.3.2 36 | github.com/beorn7/perks v1.0.1 // indirect 37 | github.com/cespare/xxhash/v2 v2.2.0 // indirect 38 | github.com/go-kit/kit v0.12.0 39 | github.com/golang/protobuf v1.5.3 // indirect 40 | github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect 41 | github.com/prometheus/client_model v0.3.0 // indirect 42 | github.com/prometheus/common v0.42.0 43 | github.com/prometheus/exporter-toolkit v0.10.0 44 | github.com/prometheus/procfs v0.9.0 // indirect 45 | golang.org/x/sys v0.31.0 // indirect 46 | google.golang.org/protobuf v1.33.0 // indirect 47 | ) 48 | -------------------------------------------------------------------------------- /go.sum: -------------------------------------------------------------------------------- 1 | github.com/DATA-DOG/go-sqlmock v1.5.0 h1:Shsta01QNfFxHCfpW6YH2STWB0MudeXXEWMr20OEh60= 2 | github.com/DATA-DOG/go-sqlmock v1.5.0/go.mod h1:f/Ixk793poVmq4qj/V1dPUg2JEAKC73Q5eFN3EC/SaM= 3 | github.com/alecthomas/kingpin/v2 v2.3.2 h1:H0aULhgmSzN8xQ3nX1uxtdlTHYoPLu5AhHxWrKI6ocU= 4 | github.com/alecthomas/kingpin/v2 v2.3.2/go.mod h1:0gyi0zQnjuFk8xrkNKamJoyUo382HRL7ATRpFZCw6tE= 5 | github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137 h1:s6gZFSlWYmbqAuRjVTiNNhvNRfY2Wxp9nhfyel4rklc= 6 | github.com/alecthomas/units v0.0.0-20211218093645-b94a6e3cc137/go.mod h1:OMCwj8VM1Kc9e19TLln2VL61YJF0x1XFtfdL4JdbSyE= 7 | github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= 8 | github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= 9 | github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44= 10 | github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= 11 | github.com/coreos/go-systemd/v22 v22.5.0 h1:RrqgGjYQKalulkV8NGVIfkXQf6YYmOyiJKk8iXXhfZs= 12 | github.com/coreos/go-systemd/v22 v22.5.0/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc= 13 | github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= 14 | github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 15 | github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= 16 | github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= 17 | github.com/go-kit/kit v0.12.0 h1:e4o3o3IsBfAKQh5Qbbiqyfu97Ku7jrO/JbohvztANh4= 18 | github.com/go-kit/kit v0.12.0/go.mod h1:lHd+EkCZPIwYItmGDDRdhinkzX2A1sj+M9biaEaizzs= 19 | github.com/go-kit/log v0.2.1 h1:MRVx0/zhvdseW+Gza6N9rVzU/IVzaeE1SFI4raAhmBU= 20 | github.com/go-kit/log v0.2.1/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0= 21 | github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA= 22 | github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs= 23 | github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA= 24 | github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 25 | github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U= 26 | github.com/golang/protobuf v1.3.5/go.mod h1:6O5/vntMXwX2lRkT1hjjk0nAC1IDOTvTlVgjlRvqsdk= 27 | github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk= 28 | github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg= 29 | github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY= 30 | github.com/google/go-cmp v0.5.5/go.mod 
h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= 31 | github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38= 32 | github.com/ibmdb/go_ibm_db v0.4.3 h1:CtSj2oPiLBmIRQJFLjipDEulr72cIOgLSPIeQo8jbwg= 33 | github.com/ibmdb/go_ibm_db v0.4.3/go.mod h1:nl5aUh1IzBVExcqYXaZLApaq8RUvTEph3VP49UTmEvg= 34 | github.com/jpillora/backoff v1.0.0 h1:uvFg412JmmHBHw7iwprIxkPMI+sGQ4kzOWsMeHnm2EA= 35 | github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4= 36 | github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE= 37 | github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= 38 | github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= 39 | github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo= 40 | github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4= 41 | github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f h1:KUppIJq7/+SVif2QVs3tOP0zanoHgBEVAwHxUSIzRqU= 42 | github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= 43 | github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= 44 | github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= 45 | github.com/prometheus/client_golang v1.15.1 h1:8tXpTmJbyH5lydzFPoxSIJ0J46jdh3tylbvM1xCv0LI= 46 | github.com/prometheus/client_golang v1.15.1/go.mod h1:e9yaBhRPU2pPNsZwE+JdQl0KEt1N9XgF6zxWmaC0xOk= 47 | github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4= 48 | github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w= 49 | github.com/prometheus/common v0.42.0 h1:EKsfXEYo4JpWMHH5cg+KOUWeuJSov1Id8zGR8eeI1YM= 50 | github.com/prometheus/common v0.42.0/go.mod h1:xBwqVerjNdUDjgODMpudtOMwlOwf2SaTr1yjz4b7Zbc= 51 | github.com/prometheus/exporter-toolkit v0.10.0 h1:yOAzZTi4M22ZzVxD+fhy1URTuNRj/36uQJJ5S8IPza8= 52 | github.com/prometheus/exporter-toolkit v0.10.0/go.mod h1:+sVFzuvV5JDyw+Ih6p3zFxZNVnKQa3x5qPmDSiPu4ZY= 53 | github.com/prometheus/procfs v0.9.0 h1:wzCHvIvM5SxWqYvwgVL7yJY8Lz3PKn49KQtpgMYJfhI= 54 | github.com/prometheus/procfs v0.9.0/go.mod h1:+pB4zwohETzFnmlpe6yd2lSc+0/46IYZRB/chUwxUZY= 55 | github.com/rogpeppe/go-internal v1.10.0 h1:TMyTOH3F/DB16zRVcYyreMH6GnZZrwQVAoYjRBZyWFQ= 56 | github.com/rogpeppe/go-internal v1.10.0/go.mod h1:UQnix2H7Ngw/k4C5ijL5+65zddjncjaFoBhdsK/akog= 57 | github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= 58 | github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= 59 | github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= 60 | github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= 61 | github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg= 62 | github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU= 63 | github.com/stretchr/testify v1.8.2 h1:+h33VjcLVPDHtOdpUCuF+7gSuG3yGIftsP1YvFihtJ8= 64 | github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4= 65 | github.com/xhit/go-str2duration/v2 v2.1.0 h1:lxklc02Drh6ynqX+DdPyp5pCKLUQpRT8bp8Ydu2Bstc= 66 | github.com/xhit/go-str2duration/v2 v2.1.0/go.mod h1:ohY8p+0f07DiV6Em5LKB0s2YpLtXVyJfNt1+BlmyAsU= 67 | golang.org/x/crypto 
v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= 68 | golang.org/x/crypto v0.36.0 h1:AnAEvhDddvBdpY+uR+MyHmuZzzNqXSe/GvuDeob5L34= 69 | golang.org/x/crypto v0.36.0/go.mod h1:Y4J0ReaxCR1IMaabaSMugxJES1EpwhBHhv2bDHklZvc= 70 | golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= 71 | golang.org/x/net v0.38.0 h1:vRMAPTMaeGqVhG5QyLJHqNDwecKTomGeqbnfZyKlBI8= 72 | golang.org/x/net v0.38.0/go.mod h1:ivrbrMbzFq5J41QOQh0siUuly180yBYtLp+CKbEaFx8= 73 | golang.org/x/oauth2 v0.6.0 h1:Lh8GPgSKBfWSwFvtuWOfeI3aAAnbXTSutYxJiOJFgIw= 74 | golang.org/x/oauth2 v0.6.0/go.mod h1:ycmewcwgD4Rpr3eZJLSB4Kyyljb3qDh40vJ8STE5HKw= 75 | golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= 76 | golang.org/x/sync v0.12.0 h1:MHc5BpPuC30uJk597Ri8TV3CNZcTLu6B6z4lJy+g6Jw= 77 | golang.org/x/sync v0.12.0/go.mod h1:1dzgHSNfp02xaA81J2MS99Qcpr2w7fw1gpm99rleRqA= 78 | golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= 79 | golang.org/x/sys v0.31.0 h1:ioabZlmFYtWhL+TRYpcnNlLwhyxaM9kWTDEmfnprqik= 80 | golang.org/x/sys v0.31.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k= 81 | golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= 82 | golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= 83 | golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY= 84 | golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4= 85 | golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= 86 | golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= 87 | google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c= 88 | google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc= 89 | google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw= 90 | google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc= 91 | google.golang.org/protobuf v1.33.0 h1:uNO2rsAINq/JlFpSdYEKIZ0uKD/R9cpdv0T+yoGwGmI= 92 | google.golang.org/protobuf v1.33.0/go.mod h1:c6P6GXX6sHbq/GpV6MGZEdwhWPcYBgnhAHhKbcUYpos= 93 | gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= 94 | gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= 95 | gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 96 | gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= 97 | gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= 98 | gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 99 | gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= 100 | gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= 101 | -------------------------------------------------------------------------------- /mixin/.lint: -------------------------------------------------------------------------------- 1 | exclusions: 2 | template-job-rule: 3 | reason: "Prometheus datasource variable is being named as prometheus_datasource now while linter expects 'datasource'" 4 | panel-datasource-rule: 5 | reason: "Loki datasource variable is being named as 
loki_datasource now while linter expects 'datasource'" 6 | template-datasource-rule: 7 | reason: "Based on new convention we are using variable names prometheus_datasource and loki_datasource whereas linter expects 'datasource'" 8 | template-instance-rule: 9 | reason: "Based on new convention we are using variable names prometheus_datasource and loki_datasource whereas linter expects 'datasource'" 10 | panel-units-rule: 11 | reason: "Custom units are used for better user experience in these panels" 12 | -------------------------------------------------------------------------------- /mixin/Makefile: -------------------------------------------------------------------------------- 1 | JSONNET_FMT := jsonnetfmt -n 2 --max-blank-lines 1 --string-style s --comment-style s 2 | 3 | .PHONY: all 4 | all: build dashboards_out prometheus_alerts.yaml 5 | 6 | vendor: jsonnetfile.json 7 | jb install 8 | 9 | .PHONY: build 10 | build: vendor 11 | 12 | .PHONY: fmt 13 | fmt: 14 | find . -name 'vendor' -prune -o -name '*.libsonnet' -print -o -name '*.jsonnet' -print | \ 15 | xargs -n 1 -- $(JSONNET_FMT) -i 16 | 17 | .PHONY: lint 18 | lint: build 19 | find . -name 'vendor' -prune -o -name '*.libsonnet' -print -o -name '*.jsonnet' -print | \ 20 | while read f; do \ 21 | $(JSONNET_FMT) "$$f" | diff -u "$$f" -; \ 22 | done 23 | mixtool lint mixin.libsonnet 24 | 25 | dashboards_out: mixin.libsonnet config.libsonnet $(wildcard dashboards/*) 26 | @mkdir -p dashboards_out 27 | mixtool generate dashboards mixin.libsonnet -d dashboards_out 28 | 29 | prometheus_alerts.yaml: mixin.libsonnet alerts/*.libsonnet 30 | mixtool generate alerts mixin.libsonnet -a prometheus_alerts.yaml 31 | 32 | .PHONY: clean 33 | clean: 34 | rm -rf dashboards_out prometheus_alerts.yaml -------------------------------------------------------------------------------- /mixin/README.md: -------------------------------------------------------------------------------- 1 | # IBM DB2 Mixin 2 | 3 | The IBM DB2 mixin consists of a configurable Grafana dashboard and alerts based on the [IBM DB2 exporter](../README.md). 4 | 5 | 6 | The IBM DB2 mixin contains the following dashboards: 7 | 8 | - IBM DB2 overview 9 | 10 | The mixin also contains the following alerts: 11 | 12 | - IBMDB2HighLockWaitTime 13 | - IBMDB2HighNumberOfDeadlocks 14 | - IBMDB2LogUsageReachingLimit 15 | 16 | Default thresholds can be configured in `config.libsonnet`. 17 | 18 | ```js 19 | { 20 | _config+:: { 21 | alertsHighLockWaitTime: 2000, //ms 22 | alertsHighNumberOfDeadlocks: 5, //count 23 | alertsLogUsageReachingLimit: 90, //percent 0-100 24 | }, 25 | } 26 | ``` 27 | 28 | ## IBM DB2 overview 29 | The IBM DB2 overview dashboard provides details about the general state of the database like bufferpool hit ratio, active connections, and deadlocks. 30 | 31 | ![First screenshot of the overview dashboard](https://storage.googleapis.com/grafanalabs-integration-assets/ibm-db2/screenshots/ibm-db2-overview-1.png) 32 | ![Second screenshot of the overview dashboard](https://storage.googleapis.com/grafanalabs-integration-assets/ibm-db2/screenshots/ibm-db2-overview-2.png) 33 | 34 | ## Logs 35 | To get IBM DB2 diagnostic logs, [Promtail and Loki need to be installed](https://grafana.com/docs/loki/latest/installation/) and provisioned for logs with your Grafana instance. The default location of the diagnostic log file depends on the DB2 instance that hosts the database being monitored.
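A minimal Promtail scrape sketch for shipping `db2diag.log` to Loki might look like the following (illustrative only; the `job_name` and label values here are assumptions, and `__path__` should point at whichever log location described below matches your instance):

```yaml
scrape_configs:
  - job_name: ibm-db2-diag
    static_configs:
      - targets: [localhost]
        labels:
          log_type: db2diag
          __path__: /home/*/sqllib/db2dump/db2diag.log
```

The `log_type: db2diag` label matches one of the filters applied by the mixin's diagnostic logs panel; depending on how you set the dashboard variables, you may also need `job`, `instance`, and `database_name` labels that line up with the exporter's metric labels.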
For single-member instances, the location will look something like `/home/*/sqllib/db2dump/db2diag.log` (the exact path depends on where your instance of DB2 is located). For other instances of DB2, the path to the log file will look like `/home/*/sqllib/db2dump/DIAG*/db2diag.log`. `DIAG*` represents any number of directories that may be present at that level, each containing its own log file. 36 | 37 | IBM DB2 diagnostic logs are enabled by default in `config.libsonnet` and can be disabled by setting `enableLokiLogs` to `false`. Then run `make` again to regenerate the dashboard: 38 | 39 | ```js 40 | { 41 | _config+:: { 42 | enableLokiLogs: false 43 | }, 44 | } 45 | ``` 46 | 47 | ## Alerts Overview 48 | - IBMDB2HighLockWaitTime: The average wait time for a lock in the database is high. 49 | - IBMDB2HighNumberOfDeadlocks: The number of deadlocks occurring in the database is high. 50 | - IBMDB2LogUsageReachingLimit: The DB2 instance is running low on available log space. 51 | 52 | 53 | 54 | ## Install tools 55 | ```bash 56 | go install github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest 57 | go install github.com/monitoring-mixins/mixtool/cmd/mixtool@latest 58 | ``` 59 | 60 | For linting and formatting, you will also need `jsonnetfmt` installed. If you 61 | have a working Go development environment, it's easiest to run the following: 62 | 63 | ```bash 64 | go install github.com/google/go-jsonnet/cmd/jsonnetfmt@latest 65 | ``` 66 | 67 | The files in `dashboards_out` need to be imported 68 | into your Grafana server. The exact details will depend on your environment. 69 | 70 | `prometheus_alerts.yaml` needs to be imported into Prometheus. 71 | 72 | ## Generate dashboards and alerts 73 | 74 | Edit `config.libsonnet` if required and then build JSON dashboard files for Grafana: 75 | 76 | ```bash 77 | make 78 | ``` 79 | 80 | For more advanced uses of mixins, see 81 | https://github.com/monitoring-mixins/docs. -------------------------------------------------------------------------------- /mixin/alerts/alerts.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | prometheusAlerts+:: { 3 | groups+: [ 4 | { 5 | name: 'ibm-db2-alerts', 6 | rules: [ 7 | { 8 | alert: 'IBMDB2HighLockWaitTime', 9 | expr: ||| 10 | sum without (job) (ibm_db2_lock_wait_time) > %(alertsHighLockWaitTime)s 11 | ||| % $._config, 12 | 'for': '5m', 13 | labels: { 14 | severity: 'warning', 15 | }, 16 | annotations: { 17 | summary: 'The average amount of time waiting for locks to become free is abnormally large.', 18 | description: 19 | ( 20 | 'The average amount of time waiting for locks to become free is {{ $value }}ms for {{$labels.database_name}} which is above the threshold of %(alertsHighLockWaitTime)sms.' 21 | ) % $._config, 22 | }, 23 | }, 24 | { 25 | alert: 'IBMDB2HighNumberOfDeadlocks', 26 | expr: ||| 27 | sum without (job) (increase(ibm_db2_deadlock_total[5m])) > %(alertsHighNumberOfDeadlocks)s 28 | ||| % $._config, 29 | 'for': '5m', 30 | labels: { 31 | severity: 'critical', 32 | }, 33 | annotations: { 34 | summary: 'There are deadlocks occurring in the database.', 35 | description: 36 | ( 37 | 'The number of deadlocks is at {{ $value }} for {{$labels.database_name}} which is above the threshold of %(alertsHighNumberOfDeadlocks)s.'
38 | ) % $._config, 39 | }, 40 | }, 41 | { 42 | alert: 'IBMDB2LogUsageReachingLimit', 43 | expr: ||| 44 | 100 * sum without (job,log_usage_type) (ibm_db2_log_usage{log_usage_type="used"}) / sum without (job,log_usage_type) (ibm_db2_log_usage{log_usage_type="available"}) > %(alertsLogUsageReachingLimit)s 45 | ||| % $._config, 46 | 'for': '5m', 47 | labels: { 48 | severity: 'warning', 49 | }, 50 | annotations: { 51 | summary: 'The DB2 instance is running out of available log space; rotate logs or free up unnecessary storage.', 52 | description: 53 | ( 54 | 'The amount of log space being used by the DB2 instance is at {{ $value }}%% which is above the threshold of %(alertsLogUsageReachingLimit)s%%.' 55 | ) % $._config, 56 | }, 57 | }, 58 | ], 59 | }, 60 | ], 61 | }, 62 | } 63 | -------------------------------------------------------------------------------- /mixin/config.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | _config+:: { 3 | enableMultiCluster: false, 4 | ibmdb2Selector: if self.enableMultiCluster then 'job=~"$job", cluster=~"$cluster"' else 'job=~"$job"', 5 | multiclusterSelector: 'job=~"$job"', 6 | dashboardTags: ['ibm-db2-mixin'], 7 | dashboardPeriod: 'now-3h', 8 | dashboardTimezone: 'default', 9 | dashboardRefresh: '1m', 10 | 11 | // alerts thresholds 12 | alertsHighLockWaitTime: 2000, //ms 13 | alertsHighNumberOfDeadlocks: 5, //count 14 | alertsLogUsageReachingLimit: 90, //percent 0-100 15 | 16 | enableLokiLogs: true, 17 | }, 18 | } 19 | -------------------------------------------------------------------------------- /mixin/dashboards/dashboards.libsonnet: -------------------------------------------------------------------------------- 1 | (import 'ibm-db2-overview.libsonnet') 2 | -------------------------------------------------------------------------------- /mixin/dashboards/ibm-db2-overview.libsonnet: -------------------------------------------------------------------------------- 1 | local g = (import 'grafana-builder/grafana.libsonnet'); 2 | local grafana = (import 'grafonnet/grafana.libsonnet'); 3 | local dashboard = grafana.dashboard; 4 | local template = grafana.template; 5 | local prometheus = grafana.prometheus; 6 | 7 | local dashboardUid = 'ibm-db2-overview'; 8 | 9 | local promDatasourceName = 'prometheus_datasource'; 10 | local lokiDatasourceName = 'loki_datasource'; 11 | 12 | local promDatasource = { 13 | uid: '${%s}' % promDatasourceName, 14 | }; 15 | 16 | local lokiDatasource = { 17 | uid: '${%s}' % lokiDatasourceName, 18 | }; 19 | 20 | local upStatusPanel(matcher) = { 21 | datasource: promDatasource, 22 | targets: [ 23 | prometheus.target( 24 | 'ibm_db2_up{' + matcher + '}', 25 | datasource=promDatasource, 26 | legendFormat='{{instance}}', 27 | ), 28 | ], 29 | type: 'stat', 30 | title: 'Up status', 31 | description: 'Whether the agent integration is up for this database.', 32 | fieldConfig: { 33 | defaults: { 34 | color: { 35 | mode: 'thresholds', 36 | }, 37 | mappings: [ 38 | { 39 | options: { 40 | '0': { 41 | color: 'red', 42 | index: 0, 43 | text: 'Not OK', 44 | }, 45 | '1': { 46 | color: 'green', 47 | index: 1, 48 | text: 'OK', 49 | }, 50 | }, 51 | type: 'value', 52 | }, 53 | ], 54 | thresholds: { 55 | mode: 'absolute', 56 | steps: [ 57 | { 58 | color: 'green', 59 | value: null, 60 | }, 61 | ], 62 | }, 63 | }, 64 | overrides: [], 65 | }, 66 | options: { 67 | colorMode: 'value', 68 | graphMode: 'none', 69 | justifyMode: 'auto', 70 | orientation: 'auto', 71 |
reduceOptions: { 72 | calcs: [ 73 | 'lastNotNull', 74 | ], 75 | fields: '', 76 | values: false, 77 | }, 78 | textMode: 'auto', 79 | }, 80 | pluginVersion: '10.0.1-cloud.2.a7a20fbf', 81 | }; 82 | 83 | local activeConnectionsPanel(matcher) = { 84 | datasource: promDatasource, 85 | targets: [ 86 | prometheus.target( 87 | 'ibm_db2_application_active{' + matcher + '}', 88 | datasource=promDatasource, 89 | legendFormat='{{database_name}}', 90 | ), 91 | ], 92 | type: 'timeseries', 93 | title: 'Active connections', 94 | description: 'The amount of active connections to the database.', 95 | fieldConfig: { 96 | defaults: { 97 | color: { 98 | mode: 'palette-classic', 99 | }, 100 | custom: { 101 | axisCenteredZero: false, 102 | axisColorMode: 'text', 103 | axisLabel: '', 104 | axisPlacement: 'auto', 105 | barAlignment: 0, 106 | drawStyle: 'line', 107 | fillOpacity: 0, 108 | gradientMode: 'none', 109 | hideFrom: { 110 | legend: false, 111 | tooltip: false, 112 | viz: false, 113 | }, 114 | lineInterpolation: 'linear', 115 | lineWidth: 1, 116 | pointSize: 5, 117 | scaleDistribution: { 118 | type: 'linear', 119 | }, 120 | showPoints: 'auto', 121 | spanNulls: false, 122 | stacking: { 123 | group: 'A', 124 | mode: 'none', 125 | }, 126 | thresholdsStyle: { 127 | mode: 'off', 128 | }, 129 | }, 130 | mappings: [], 131 | thresholds: { 132 | mode: 'absolute', 133 | steps: [ 134 | { 135 | color: 'green', 136 | value: null, 137 | }, 138 | ], 139 | }, 140 | unit: 'none', 141 | }, 142 | overrides: [], 143 | }, 144 | options: { 145 | legend: { 146 | calcs: [], 147 | displayMode: 'list', 148 | placement: 'bottom', 149 | showLegend: true, 150 | }, 151 | tooltip: { 152 | mode: 'multi', 153 | sort: 'desc', 154 | }, 155 | }, 156 | }; 157 | 158 | local rowOperationsPanel(matcher) = { 159 | datasource: promDatasource, 160 | targets: [ 161 | prometheus.target( 162 | 'increase(ibm_db2_row_total{' + matcher + '}[$__interval:])', 163 | datasource=promDatasource, 164 | legendFormat='{{database_name}} - {{row_state}}', 165 | interval='1m', 166 | ), 167 | ], 168 | type: 'timeseries', 169 | title: 'Row operations', 170 | description: 'The number of row operations that are being performed on the database.', 171 | fieldConfig: { 172 | defaults: { 173 | color: { 174 | mode: 'palette-classic', 175 | }, 176 | custom: { 177 | axisCenteredZero: false, 178 | axisColorMode: 'text', 179 | axisLabel: '', 180 | axisPlacement: 'auto', 181 | barAlignment: 0, 182 | drawStyle: 'line', 183 | fillOpacity: 0, 184 | gradientMode: 'none', 185 | hideFrom: { 186 | legend: false, 187 | tooltip: false, 188 | viz: false, 189 | }, 190 | lineInterpolation: 'linear', 191 | lineWidth: 1, 192 | pointSize: 5, 193 | scaleDistribution: { 194 | type: 'linear', 195 | }, 196 | showPoints: 'auto', 197 | spanNulls: false, 198 | stacking: { 199 | group: 'A', 200 | mode: 'none', 201 | }, 202 | thresholdsStyle: { 203 | mode: 'off', 204 | }, 205 | }, 206 | mappings: [], 207 | thresholds: { 208 | mode: 'absolute', 209 | steps: [ 210 | { 211 | color: 'green', 212 | value: null, 213 | }, 214 | ], 215 | }, 216 | unit: 'none', 217 | }, 218 | overrides: [], 219 | }, 220 | options: { 221 | legend: { 222 | calcs: [], 223 | displayMode: 'list', 224 | placement: 'bottom', 225 | showLegend: true, 226 | }, 227 | tooltip: { 228 | mode: 'multi', 229 | sort: 'desc', 230 | }, 231 | }, 232 | }; 233 | 234 | local bufferpoolHitRatioPanel(matcher) = { 235 | datasource: promDatasource, 236 | targets: [ 237 | prometheus.target( 238 | 'ibm_db2_bufferpool_hit_ratio{' + matcher + '}', 239 | 
datasource=promDatasource, 240 | legendFormat='{{database_name}} - {{bufferpool_name}}', 241 | ), 242 | ], 243 | type: 'timeseries', 244 | title: 'Bufferpool hit ratio', 245 | description: 'The percentage of time that the database manager did not need to load a page from disk to service a page request.', 246 | fieldConfig: { 247 | defaults: { 248 | color: { 249 | mode: 'palette-classic', 250 | }, 251 | custom: { 252 | axisCenteredZero: false, 253 | axisColorMode: 'text', 254 | axisLabel: '', 255 | axisPlacement: 'auto', 256 | barAlignment: 0, 257 | drawStyle: 'line', 258 | fillOpacity: 0, 259 | gradientMode: 'none', 260 | hideFrom: { 261 | legend: false, 262 | tooltip: false, 263 | viz: false, 264 | }, 265 | lineInterpolation: 'linear', 266 | lineWidth: 1, 267 | pointSize: 5, 268 | scaleDistribution: { 269 | type: 'linear', 270 | }, 271 | showPoints: 'auto', 272 | spanNulls: false, 273 | stacking: { 274 | group: 'A', 275 | mode: 'none', 276 | }, 277 | thresholdsStyle: { 278 | mode: 'off', 279 | }, 280 | }, 281 | mappings: [], 282 | thresholds: { 283 | mode: 'absolute', 284 | steps: [ 285 | { 286 | color: 'green', 287 | value: null, 288 | }, 289 | ], 290 | }, 291 | unit: 'percent', 292 | }, 293 | overrides: [], 294 | }, 295 | options: { 296 | legend: { 297 | calcs: [], 298 | displayMode: 'list', 299 | placement: 'bottom', 300 | showLegend: true, 301 | }, 302 | tooltip: { 303 | mode: 'multi', 304 | sort: 'desc', 305 | }, 306 | }, 307 | }; 308 | 309 | local tablespaceUsagePanel(matcher) = { 310 | datasource: promDatasource, 311 | targets: [ 312 | prometheus.target( 313 | 'ibm_db2_tablespace_usage{' + matcher + ', tablespace_type="used"}', 314 | datasource=promDatasource, 315 | legendFormat='{{database_name}} - {{tablespace_name}}', 316 | ), 317 | ], 318 | type: 'timeseries', 319 | title: 'Tablespace usage', 320 | description: 'The size and usage of table spaces.', 321 | fieldConfig: { 322 | defaults: { 323 | color: { 324 | mode: 'palette-classic', 325 | }, 326 | custom: { 327 | axisCenteredZero: false, 328 | axisColorMode: 'text', 329 | axisLabel: '', 330 | axisPlacement: 'auto', 331 | barAlignment: 0, 332 | drawStyle: 'line', 333 | fillOpacity: 0, 334 | gradientMode: 'none', 335 | hideFrom: { 336 | legend: false, 337 | tooltip: false, 338 | viz: false, 339 | }, 340 | lineInterpolation: 'linear', 341 | lineWidth: 1, 342 | pointSize: 5, 343 | scaleDistribution: { 344 | type: 'linear', 345 | }, 346 | showPoints: 'auto', 347 | spanNulls: false, 348 | stacking: { 349 | group: 'A', 350 | mode: 'none', 351 | }, 352 | thresholdsStyle: { 353 | mode: 'off', 354 | }, 355 | }, 356 | mappings: [], 357 | thresholds: { 358 | mode: 'absolute', 359 | steps: [ 360 | { 361 | color: 'green', 362 | value: null, 363 | }, 364 | { 365 | color: 'red', 366 | value: 80, 367 | }, 368 | ], 369 | }, 370 | unit: 'decbytes', 371 | }, 372 | overrides: [], 373 | }, 374 | options: { 375 | legend: { 376 | calcs: [], 377 | displayMode: 'list', 378 | placement: 'bottom', 379 | showLegend: true, 380 | }, 381 | tooltip: { 382 | mode: 'multi', 383 | sort: 'desc', 384 | }, 385 | }, 386 | }; 387 | 388 | local averageLockWaitTimePanel(matcher) = { 389 | datasource: promDatasource, 390 | targets: [ 391 | prometheus.target( 392 | 'ibm_db2_lock_wait_time{' + matcher + '}', 393 | datasource=promDatasource, 394 | legendFormat='{{database_name}}', 395 | ), 396 | ], 397 | type: 'timeseries', 398 | title: 'Average lock wait time', 399 | description: 'The average wait time for a database while acquiring locks.', 400 | fieldConfig: { 401 | 
defaults: { 402 | color: { 403 | mode: 'palette-classic', 404 | }, 405 | custom: { 406 | axisCenteredZero: false, 407 | axisColorMode: 'text', 408 | axisLabel: '', 409 | axisPlacement: 'auto', 410 | barAlignment: 0, 411 | drawStyle: 'line', 412 | fillOpacity: 0, 413 | gradientMode: 'none', 414 | hideFrom: { 415 | legend: false, 416 | tooltip: false, 417 | viz: false, 418 | }, 419 | lineInterpolation: 'linear', 420 | lineWidth: 1, 421 | pointSize: 5, 422 | scaleDistribution: { 423 | type: 'linear', 424 | }, 425 | showPoints: 'auto', 426 | spanNulls: false, 427 | stacking: { 428 | group: 'A', 429 | mode: 'none', 430 | }, 431 | thresholdsStyle: { 432 | mode: 'off', 433 | }, 434 | }, 435 | mappings: [], 436 | thresholds: { 437 | mode: 'absolute', 438 | steps: [ 439 | { 440 | color: 'green', 441 | value: null, 442 | }, 443 | { 444 | color: 'red', 445 | value: 80, 446 | }, 447 | ], 448 | }, 449 | unit: 'ms', 450 | }, 451 | overrides: [], 452 | }, 453 | options: { 454 | legend: { 455 | calcs: [], 456 | displayMode: 'list', 457 | placement: 'bottom', 458 | showLegend: true, 459 | }, 460 | tooltip: { 461 | mode: 'multi', 462 | sort: 'desc', 463 | }, 464 | }, 465 | }; 466 | 467 | local deadlocksPanel(matcher) = { 468 | datasource: promDatasource, 469 | targets: [ 470 | prometheus.target( 471 | 'increase(ibm_db2_deadlock_total{' + matcher + '}[$__interval:])', 472 | datasource=promDatasource, 473 | legendFormat='{{database_name}}', 474 | interval='1m', 475 | ), 476 | ], 477 | type: 'timeseries', 478 | title: 'Deadlocks', 479 | description: 'The number of deadlocks occurring on the database.', 480 | fieldConfig: { 481 | defaults: { 482 | color: { 483 | mode: 'palette-classic', 484 | }, 485 | custom: { 486 | axisCenteredZero: false, 487 | axisColorMode: 'text', 488 | axisLabel: '', 489 | axisPlacement: 'auto', 490 | barAlignment: 0, 491 | drawStyle: 'line', 492 | fillOpacity: 0, 493 | gradientMode: 'none', 494 | hideFrom: { 495 | legend: false, 496 | tooltip: false, 497 | viz: false, 498 | }, 499 | lineInterpolation: 'linear', 500 | lineWidth: 1, 501 | pointSize: 5, 502 | scaleDistribution: { 503 | type: 'linear', 504 | }, 505 | showPoints: 'auto', 506 | spanNulls: false, 507 | stacking: { 508 | group: 'A', 509 | mode: 'none', 510 | }, 511 | thresholdsStyle: { 512 | mode: 'off', 513 | }, 514 | }, 515 | mappings: [], 516 | thresholds: { 517 | mode: 'absolute', 518 | steps: [ 519 | { 520 | color: 'green', 521 | value: null, 522 | }, 523 | { 524 | color: 'red', 525 | value: 80, 526 | }, 527 | ], 528 | }, 529 | unit: 'none', 530 | }, 531 | overrides: [], 532 | }, 533 | options: { 534 | legend: { 535 | calcs: [], 536 | displayMode: 'list', 537 | placement: 'bottom', 538 | showLegend: true, 539 | }, 540 | tooltip: { 541 | mode: 'multi', 542 | sort: 'desc', 543 | }, 544 | }, 545 | }; 546 | 547 | local locksPanel(matcher) = { 548 | datasource: promDatasource, 549 | targets: [ 550 | prometheus.target( 551 | 'ibm_db2_lock_usage{' + matcher + '}', 552 | datasource=promDatasource, 553 | legendFormat='{{database_name}} - {{lock_state}}', 554 | ), 555 | ], 556 | type: 'timeseries', 557 | title: 'Locks', 558 | description: 'The number of locks active and waiting in use in the database.', 559 | fieldConfig: { 560 | defaults: { 561 | color: { 562 | mode: 'palette-classic', 563 | }, 564 | custom: { 565 | axisCenteredZero: false, 566 | axisColorMode: 'text', 567 | axisLabel: '', 568 | axisPlacement: 'auto', 569 | axisSoftMax: -4, 570 | barAlignment: 0, 571 | drawStyle: 'line', 572 | fillOpacity: 0, 573 | gradientMode: 
'none', 574 | hideFrom: { 575 | legend: false, 576 | tooltip: false, 577 | viz: false, 578 | }, 579 | lineInterpolation: 'linear', 580 | lineWidth: 1, 581 | pointSize: 5, 582 | scaleDistribution: { 583 | type: 'linear', 584 | }, 585 | showPoints: 'auto', 586 | spanNulls: false, 587 | stacking: { 588 | group: 'A', 589 | mode: 'none', 590 | }, 591 | thresholdsStyle: { 592 | mode: 'off', 593 | }, 594 | }, 595 | mappings: [], 596 | thresholds: { 597 | mode: 'absolute', 598 | steps: [ 599 | { 600 | color: 'green', 601 | value: null, 602 | }, 603 | ], 604 | }, 605 | unit: 'none', 606 | }, 607 | overrides: [], 608 | }, 609 | options: { 610 | legend: { 611 | calcs: [], 612 | displayMode: 'list', 613 | placement: 'bottom', 614 | showLegend: true, 615 | }, 616 | tooltip: { 617 | mode: 'multi', 618 | sort: 'desc', 619 | }, 620 | }, 621 | }; 622 | 623 | local logsRow = { 624 | datasource: promDatasource, 625 | targets: [], 626 | type: 'row', 627 | title: 'Logs', 628 | collapsed: false, 629 | }; 630 | 631 | local diagnosticLogsPanel(matcher) = { 632 | datasource: lokiDatasource, 633 | targets: [ 634 | { 635 | datasource: lokiDatasource, 636 | editorMode: 'code', 637 | expr: '{' + matcher + '} |= `` | (filename=~"/home/.*/sqllib/db2dump/DIAG.*/db2diag.log|/home/.*/sqllib/db2dump/db2diag.log" or log_type="db2diag")', 638 | queryType: 'range', 639 | refId: 'A', 640 | }, 641 | ], 642 | type: 'logs', 643 | title: 'Diagnostic logs', 644 | description: 'Recent logs from diagnostic log file.', 645 | options: { 646 | dedupStrategy: 'none', 647 | enableLogDetails: true, 648 | prettifyLogMessage: false, 649 | showCommonLabels: false, 650 | showLabels: false, 651 | showTime: false, 652 | sortOrder: 'Descending', 653 | wrapLogMessage: false, 654 | }, 655 | }; 656 | 657 | local logStorageUsagePanel(matcher) = { 658 | datasource: promDatasource, 659 | targets: [ 660 | prometheus.target( 661 | '100 * sum(ibm_db2_log_usage{' + matcher + ', log_usage_type="used"}) by (instance, job, database_name) / sum(ibm_db2_log_usage{' + matcher + ', log_usage_type="available"}) by (instance, job, database_name)', 662 | datasource=promDatasource, 663 | legendFormat='{{instance}}', 664 | ), 665 | ], 666 | type: 'stat', 667 | title: 'Log storage usage', 668 | description: 'The percentage of allocated storage being used by the IBM DB2 instance.', 669 | fieldConfig: { 670 | defaults: { 671 | color: { 672 | mode: 'thresholds', 673 | }, 674 | mappings: [], 675 | thresholds: { 676 | mode: 'absolute', 677 | steps: [ 678 | { 679 | color: 'green', 680 | value: null, 681 | }, 682 | ], 683 | }, 684 | unit: 'percent', 685 | }, 686 | overrides: [], 687 | }, 688 | options: { 689 | colorMode: 'value', 690 | graphMode: 'none', 691 | justifyMode: 'auto', 692 | orientation: 'auto', 693 | reduceOptions: { 694 | calcs: [ 695 | 'lastNotNull', 696 | ], 697 | fields: '', 698 | values: false, 699 | }, 700 | textMode: 'auto', 701 | }, 702 | pluginVersion: '10.0.1-cloud.2.a7a20fbf', 703 | }; 704 | 705 | local logOperationsPanel(matcher) = { 706 | datasource: promDatasource, 707 | targets: [ 708 | prometheus.target( 709 | 'increase(ibm_db2_log_operations_total{' + matcher + '}[$__interval:])', 710 | datasource=promDatasource, 711 | legendFormat='{{database_name}} - {{log_member}} - {{log_operation_type}}', 712 | interval='1m', 713 | ), 714 | ], 715 | type: 'timeseries', 716 | title: 'Log operations', 717 | description: 'The number of log pages read and written to by the logger.', 718 | fieldConfig: { 719 | defaults: { 720 | color: { 721 | mode: 
'palette-classic', 722 | }, 723 | custom: { 724 | axisCenteredZero: false, 725 | axisColorMode: 'text', 726 | axisLabel: '', 727 | axisPlacement: 'auto', 728 | barAlignment: 0, 729 | drawStyle: 'line', 730 | fillOpacity: 0, 731 | gradientMode: 'none', 732 | hideFrom: { 733 | legend: false, 734 | tooltip: false, 735 | viz: false, 736 | }, 737 | lineInterpolation: 'linear', 738 | lineWidth: 1, 739 | pointSize: 5, 740 | scaleDistribution: { 741 | type: 'linear', 742 | }, 743 | showPoints: 'auto', 744 | spanNulls: false, 745 | stacking: { 746 | group: 'A', 747 | mode: 'none', 748 | }, 749 | thresholdsStyle: { 750 | mode: 'off', 751 | }, 752 | }, 753 | mappings: [], 754 | thresholds: { 755 | mode: 'absolute', 756 | steps: [ 757 | { 758 | color: 'green', 759 | value: null, 760 | }, 761 | ], 762 | }, 763 | unit: 'none', 764 | }, 765 | overrides: [], 766 | }, 767 | options: { 768 | legend: { 769 | calcs: [], 770 | displayMode: 'list', 771 | placement: 'bottom', 772 | showLegend: true, 773 | }, 774 | tooltip: { 775 | mode: 'multi', 776 | sort: 'desc', 777 | }, 778 | }, 779 | }; 780 | 781 | local getMatcher(cfg) = '%(ibmdb2Selector)s, instance=~"$instance", database_name=~"$database_name"' % cfg; 782 | 783 | { 784 | grafanaDashboards+:: { 785 | 'ibm-db2-overview.json': 786 | dashboard.new( 787 | 'IBM DB2 overview', 788 | time_from='%s' % $._config.dashboardPeriod, 789 | tags=($._config.dashboardTags), 790 | timezone='%s' % $._config.dashboardTimezone, 791 | refresh='%s' % $._config.dashboardRefresh, 792 | description='', 793 | uid=dashboardUid, 794 | ) 795 | 796 | .addTemplates( 797 | std.flattenArrays([ 798 | [ 799 | template.datasource( 800 | promDatasourceName, 801 | 'prometheus', 802 | null, 803 | label='Data Source', 804 | refresh='load' 805 | ), 806 | ], 807 | if $._config.enableLokiLogs then [ 808 | template.datasource( 809 | lokiDatasourceName, 810 | 'loki', 811 | null, 812 | label='Loki Datasource', 813 | refresh='load' 814 | ), 815 | ] else [], 816 | [ 817 | template.new( 818 | 'job', 819 | promDatasource, 820 | 'label_values(ibm_db2_application_active,job)', 821 | label='Job', 822 | refresh=1, 823 | includeAll=false, 824 | multi=false, 825 | allValues='', 826 | sort=0 827 | ), 828 | template.new( 829 | 'cluster', 830 | promDatasource, 831 | 'label_values(ibm_db2_application_active{%(multiclusterSelector)s}, cluster)' % $._config, 832 | label='Cluster', 833 | refresh=2, 834 | includeAll=true, 835 | multi=true, 836 | allValues='.*', 837 | hide=if $._config.enableMultiCluster then '' else 'variable', 838 | sort=0 839 | ), 840 | template.new( 841 | 'instance', 842 | promDatasource, 843 | 'label_values(ibm_db2_application_active{%(ibmdb2Selector)s},instance)' % $._config, 844 | label='Instance', 845 | refresh=1, 846 | includeAll=false, 847 | multi=false, 848 | allValues='', 849 | sort=0 850 | ), 851 | template.new( 852 | 'database_name', 853 | promDatasource, 854 | 'label_values(ibm_db2_application_active{%(ibmdb2Selector)s},database_name)' % $._config, 855 | label='Database', 856 | refresh=1, 857 | includeAll=true, 858 | multi=true, 859 | allValues='', 860 | sort=0 861 | ), 862 | ], 863 | ]) 864 | ) 865 | .addPanels( 866 | std.flattenArrays([ 867 | [ 868 | upStatusPanel(getMatcher($._config)) { gridPos: { h: 6, w: 6, x: 0, y: 0 } }, 869 | activeConnectionsPanel(getMatcher($._config)) { gridPos: { h: 6, w: 18, x: 6, y: 0 } }, 870 | rowOperationsPanel(getMatcher($._config)) { gridPos: { h: 6, w: 12, x: 0, y: 6 } }, 871 | bufferpoolHitRatioPanel(getMatcher($._config)) { gridPos: { h: 6, w: 12, 
x: 12, y: 6 } }, 872 | tablespaceUsagePanel(getMatcher($._config)) { gridPos: { h: 6, w: 24, x: 0, y: 12 } }, 873 | averageLockWaitTimePanel(getMatcher($._config)) { gridPos: { h: 6, w: 8, x: 0, y: 18 } }, 874 | deadlocksPanel(getMatcher($._config)) { gridPos: { h: 6, w: 8, x: 8, y: 18 } }, 875 | locksPanel(getMatcher($._config)) { gridPos: { h: 6, w: 8, x: 16, y: 18 } }, 876 | logsRow { gridPos: { h: 1, w: 24, x: 0, y: 24 } }, 877 | ], 878 | if $._config.enableLokiLogs then [ 879 | diagnosticLogsPanel(getMatcher($._config)) { gridPos: { h: 6, w: 24, x: 0, y: 25 } }, 880 | ] else [], 881 | [ 882 | logStorageUsagePanel(getMatcher($._config)) { gridPos: { h: 6, w: 6, x: 0, y: 31 } }, 883 | logOperationsPanel(getMatcher($._config)) { gridPos: { h: 6, w: 18, x: 6, y: 31 } }, 884 | ], 885 | ]) 886 | ), 887 | }, 888 | } 889 | -------------------------------------------------------------------------------- /mixin/jsonnetfile.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": 1, 3 | "dependencies": [ 4 | { 5 | "source": { 6 | "git": { 7 | "remote": "https://github.com/grafana/grafonnet-lib.git", 8 | "subdir": "grafonnet" 9 | } 10 | }, 11 | "version": "master" 12 | } 13 | ], 14 | "legacyImports": true 15 | } 16 | -------------------------------------------------------------------------------- /mixin/jsonnetfile.lock.json: -------------------------------------------------------------------------------- 1 | { 2 | "version": 1, 3 | "dependencies": [ 4 | { 5 | "source": { 6 | "git": { 7 | "remote": "https://github.com/grafana/grafonnet-lib.git", 8 | "subdir": "grafonnet" 9 | } 10 | }, 11 | "version": "a1d61cce1da59c71409b99b5c7568511fec661ea", 12 | "sum": "342u++/7rViR/zj2jeJOjshzglkZ1SY+hFNuyCBFMdc=" 13 | } 14 | ], 15 | "legacyImports": false 16 | } 17 | -------------------------------------------------------------------------------- /mixin/mixin.libsonnet: -------------------------------------------------------------------------------- 1 | (import 'alerts/alerts.libsonnet') + 2 | (import 'dashboards/dashboards.libsonnet') + 3 | (import 'config.libsonnet') 4 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/alert_condition.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Returns a new condition of alert of graph panel. 4 | * Currently the only condition type that exists is a Query condition 5 | * that allows to specify a query letter, time range and an aggregation function. 
6 | * 7 | * @name alertCondition.new 8 | * 9 | * @param evaluatorParams Value of threshold 10 | * @param evaluatorType Type of threshold 11 | * @param operatorType Operator between conditions 12 | * @param queryRefId The letter defines what query to execute from the Metrics tab 13 | * @param queryTimeStart Begging of time range 14 | * @param queryTimeEnd End of time range 15 | * @param reducerParams Params of an aggregation function 16 | * @param reducerType Name of an aggregation function 17 | * 18 | * @return A json that represents a condition of alert 19 | */ 20 | new( 21 | evaluatorParams=[], 22 | evaluatorType='gt', 23 | operatorType='and', 24 | queryRefId='A', 25 | queryTimeEnd='now', 26 | queryTimeStart='5m', 27 | reducerParams=[], 28 | reducerType='avg', 29 | ):: 30 | { 31 | evaluator: { 32 | params: if std.type(evaluatorParams) == 'array' then evaluatorParams else [evaluatorParams], 33 | type: evaluatorType, 34 | }, 35 | operator: { 36 | type: operatorType, 37 | }, 38 | query: { 39 | params: [queryRefId, queryTimeStart, queryTimeEnd], 40 | }, 41 | reducer: { 42 | params: if std.type(reducerParams) == 'array' then reducerParams else [reducerParams], 43 | type: reducerType, 44 | }, 45 | type: 'query', 46 | }, 47 | } 48 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/alertlist.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates an [Alert list panel](https://grafana.com/docs/grafana/latest/panels/visualizations/alert-list-panel/) 4 | * 5 | * @name alertlist.new 6 | * 7 | * @param title (default `''`) 8 | * @param span (optional) 9 | * @param show (default `'current'`) Whether the panel should display the current alert state or recent alert state changes. 10 | * @param limit (default `10`) Sets the maximum number of alerts to list. 
11 | * @param sortOrder (default `'1'`) '1': alerting, '2': no_data, '3': pending, '4': ok, '5': paused 12 | * @param stateFilter (optional) 13 | * @param onlyAlertsOnDashboard (optional) Shows alerts only from the dashboard the alert list is in 14 | * @param transparent (optional) Whether to display the panel without a background 15 | * @param description (optional) 16 | * @param datasource (optional) 17 | */ 18 | new( 19 | title='', 20 | span=null, 21 | show='current', 22 | limit=10, 23 | sortOrder=1, 24 | stateFilter=[], 25 | onlyAlertsOnDashboard=true, 26 | transparent=null, 27 | description=null, 28 | datasource=null, 29 | ):: 30 | { 31 | [if transparent != null then 'transparent']: transparent, 32 | title: title, 33 | [if span != null then 'span']: span, 34 | type: 'alertlist', 35 | show: show, 36 | limit: limit, 37 | sortOrder: sortOrder, 38 | [if show != 'changes' then 'stateFilter']: stateFilter, 39 | onlyAlertsOnDashboard: onlyAlertsOnDashboard, 40 | [if description != null then 'description']: description, 41 | datasource: datasource, 42 | }, 43 | } 44 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/annotation.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | default:: 3 | { 4 | builtIn: 1, 5 | datasource: '-- Grafana --', 6 | enable: true, 7 | hide: true, 8 | iconColor: 'rgba(0, 211, 255, 1)', 9 | name: 'Annotations & Alerts', 10 | type: 'dashboard', 11 | }, 12 | 13 | /** 14 | * @name annotation.datasource 15 | */ 16 | 17 | datasource( 18 | name, 19 | datasource, 20 | expr=null, 21 | enable=true, 22 | hide=false, 23 | iconColor='rgba(255, 96, 96, 1)', 24 | tags=[], 25 | type='tags', 26 | builtIn=null, 27 | ):: 28 | { 29 | datasource: datasource, 30 | enable: enable, 31 | [if expr != null then 'expr']: expr, 32 | hide: hide, 33 | iconColor: iconColor, 34 | name: name, 35 | showIn: 0, 36 | tags: tags, 37 | type: type, 38 | [if builtIn != null then 'builtIn']: builtIn, 39 | }, 40 | } 41 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/bar_gauge_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Create a [bar gauge panel](https://grafana.com/docs/grafana/latest/panels/visualizations/bar-gauge-panel/), 4 | * 5 | * @name barGaugePanel.new 6 | * 7 | * @param title Panel title. 8 | * @param description (optional) Panel description. 9 | * @param datasource (optional) Panel datasource. 10 | * @param unit (optional) The unit of the data. 11 | * @param thresholds (optional) An array of threashold values. 12 | * 13 | * @method addTarget(target) Adds a target object. 14 | * @method addTargets(targets) Adds an array of targets. 15 | */ 16 | new( 17 | title, 18 | description=null, 19 | datasource=null, 20 | unit=null, 21 | thresholds=[], 22 | ):: { 23 | type: 'bargauge', 24 | title: title, 25 | [if description != null then 'description']: description, 26 | datasource: datasource, 27 | targets: [ 28 | ], 29 | fieldConfig: { 30 | defaults: { 31 | unit: unit, 32 | thresholds: { 33 | mode: 'absolute', 34 | steps: thresholds, 35 | }, 36 | }, 37 | }, 38 | _nextTarget:: 0, 39 | addTarget(target):: self { 40 | // automatically ref id in added targets. 
41 | local nextTarget = super._nextTarget, 42 | _nextTarget: nextTarget + 1, 43 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 44 | }, 45 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 46 | }, 47 | } 48 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/cloudmonitoring.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [Google Cloud Monitoring target](https://grafana.com/docs/grafana/latest/datasources/google-cloud-monitoring/) 4 | * 5 | * @name cloudmonitoring.target 6 | * 7 | * @param metric 8 | * @param project 9 | * @param filters (optional) 10 | * @param groupBys (optional) 11 | * @param period (default: `'cloud-monitoring-auto'`) 12 | * @param crossSeriesReducer (default 'REDUCE_MAX') 13 | * @param valueType (default 'INT64') 14 | * @param perSeriesAligner (default 'ALIGN_DELTA') 15 | * @param metricKind (default 'CUMULATIVE') 16 | * @param unit (optional) 17 | * @param alias (optional) 18 | 19 | * @return Panel target 20 | */ 21 | 22 | target( 23 | metric, 24 | project, 25 | filters=[], 26 | groupBys=[], 27 | period='cloud-monitoring-auto', 28 | crossSeriesReducer='REDUCE_MAX', 29 | valueType='INT64', 30 | perSeriesAligner='ALIGN_DELTA', 31 | metricKind='CUMULATIVE', 32 | unit=1, 33 | alias=null, 34 | ):: { 35 | metricQuery: { 36 | [if alias != null then 'aliasBy']: alias, 37 | alignmentPeriod: period, 38 | crossSeriesReducer: crossSeriesReducer, 39 | [if filters != null then 'filters']: filters, 40 | [if groupBys != null then 'groupBys']: groupBys, 41 | metricKind: metricKind, 42 | metricType: metric, 43 | perSeriesAligner: perSeriesAligner, 44 | projectName: project, 45 | unit: unit, 46 | valueType: valueType, 47 | }, 48 | sloQuery: { 49 | [if alias != null then 'aliasBy']: alias, 50 | alignmentPeriod: period, 51 | projectName: project, 52 | selectorName: 'select_slo_health', 53 | serviceId: '', 54 | sloId: '', 55 | }, 56 | }, 57 | } 58 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/cloudwatch.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [CloudWatch target](https://grafana.com/docs/grafana/latest/datasources/cloudwatch/) 4 | * 5 | * @name cloudwatch.target 6 | * 7 | * @param region 8 | * @param namespace 9 | * @param metric 10 | * @param datasource (optional) 11 | * @param statistic (default: `'Average'`) 12 | * @param alias (optional) 13 | * @param highResolution (default: `false`) 14 | * @param period (default: `'auto'`) 15 | * @param dimensions (optional) 16 | * @param id (optional) 17 | * @param expression (optional) 18 | * @param hide (optional) 19 | 20 | * @return Panel target 21 | */ 22 | 23 | target( 24 | region, 25 | namespace, 26 | metric, 27 | datasource=null, 28 | statistic='Average', 29 | alias=null, 30 | highResolution=false, 31 | period='auto', 32 | dimensions={}, 33 | id=null, 34 | expression=null, 35 | hide=null 36 | ):: { 37 | region: region, 38 | namespace: namespace, 39 | metricName: metric, 40 | [if datasource != null then 'datasource']: datasource, 41 | statistics: [statistic], 42 | [if alias != null then 'alias']: alias, 43 | highResolution: highResolution, 44 | period: period, 45 | dimensions: dimensions, 46 | [if id != null then 'id']: id, 47 | [if expression != null then 
'expression']: expression, 48 | [if hide != null then 'hide']: hide, 49 | 50 | }, 51 | } 52 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/dashboard.libsonnet: -------------------------------------------------------------------------------- 1 | local timepickerlib = import 'timepicker.libsonnet'; 2 | 3 | { 4 | /** 5 | * Creates a [dashboard](https://grafana.com/docs/grafana/latest/features/dashboard/dashboards/) 6 | * 7 | * @name dashboard.new 8 | * 9 | * @param title The title of the dashboard 10 | * @param editable (default: `false`) Whether the dashboard is editable via Grafana UI. 11 | * @param style (default: `'dark'`) Theme of dashboard, `'dark'` or `'light'` 12 | * @param tags (optional) Array of tags associated to the dashboard, e.g.`['tag1','tag2']` 13 | * @param time_from (default: `'now-6h'`) 14 | * @param time_to (default: `'now'`) 15 | * @param timezone (default: `'browser'`) Timezone of the dashboard, `'utc'` or `'browser'` 16 | * @param refresh (default: `''`) Auto-refresh interval, e.g. `'30s'` 17 | * @param timepicker (optional) See timepicker API 18 | * @param graphTooltip (default: `'default'`) `'default'` : no shared crosshair or tooltip (0), `'shared_crosshair'`: shared crosshair (1), `'shared_tooltip'`: shared crosshair AND shared tooltip (2) 19 | * @param hideControls (default: `false`) 20 | * @param schemaVersion (default: `14`) Version of the Grafana JSON schema, incremented each time an update brings changes. `26` for Grafana 7.1.5, `22` for Grafana 6.7.4, `16` for Grafana 5.4.5, `14` for Grafana 4.6.3. etc. 21 | * @param uid (default: `''`) Unique dashboard identifier as a string (8-40), that can be chosen by users. Used to identify a dashboard to update when using Grafana REST API. 22 | * @param description (optional) 23 | * 24 | * @method addTemplate(template) Add a template variable 25 | * @method addTemplates(templates) Adds an array of template variables 26 | * @method addAnnotation(annotation) Add an [annotation](https://grafana.com/docs/grafana/latest/dashboards/annotations/) 27 | * @method addPanel(panel,gridPos) Appends a panel, with an optional grid position in grid coordinates, e.g. `gridPos={'x':0, 'y':0, 'w':12, 'h': 9}` 28 | * @method addPanels(panels) Appends an array of panels 29 | * @method addLink(link) Adds a [dashboard link](https://grafana.com/docs/grafana/latest/linking/dashboard-links/) 30 | * @method addLinks(dashboardLink) Adds an array of [dashboard links](https://grafana.com/docs/grafana/latest/linking/dashboard-links/) 31 | * @method addRequired(type, name, id, version) 32 | * @method addInput(name, label, type, pluginId, pluginName, description, value) 33 | * @method addRow(row) Adds a row. This is the legacy row concept from Grafana < 5, when rows were needed for layout. Rows should now be added via `addPanel`. 
34 | */ 35 | new( 36 | title, 37 | editable=false, 38 | style='dark', 39 | tags=[], 40 | time_from='now-6h', 41 | time_to='now', 42 | timezone='browser', 43 | refresh='', 44 | timepicker=timepickerlib.new(), 45 | graphTooltip='default', 46 | hideControls=false, 47 | schemaVersion=14, 48 | uid='', 49 | description=null, 50 | ):: { 51 | local it = self, 52 | _annotations:: [], 53 | [if uid != '' then 'uid']: uid, 54 | editable: editable, 55 | [if description != null then 'description']: description, 56 | gnetId: null, 57 | graphTooltip: 58 | if graphTooltip == 'shared_tooltip' then 2 59 | else if graphTooltip == 'shared_crosshair' then 1 60 | else if graphTooltip == 'default' then 0 61 | else graphTooltip, 62 | hideControls: hideControls, 63 | id: null, 64 | links: [], 65 | panels:: [], 66 | refresh: refresh, 67 | rows: [], 68 | schemaVersion: schemaVersion, 69 | style: style, 70 | tags: tags, 71 | time: { 72 | from: time_from, 73 | to: time_to, 74 | }, 75 | timezone: timezone, 76 | timepicker: timepicker, 77 | title: title, 78 | version: 0, 79 | addAnnotations(annotations):: self { 80 | _annotations+:: annotations, 81 | }, 82 | addAnnotation(a):: self.addAnnotations([a]), 83 | addTemplates(templates):: self { 84 | templates+: templates, 85 | }, 86 | addTemplate(t):: self.addTemplates([t]), 87 | templates:: [], 88 | annotations: { list: it._annotations }, 89 | templating: { list: it.templates }, 90 | _nextPanel:: 2, 91 | addRow(row):: 92 | self { 93 | // automatically number panels in added rows. 94 | // https://github.com/kausalco/public/blob/master/klumps/grafana.libsonnet 95 | local n = std.length(row.panels), 96 | local nextPanel = super._nextPanel, 97 | local panels = std.makeArray(n, function(i) 98 | row.panels[i] { id: nextPanel + i }), 99 | 100 | _nextPanel: nextPanel + n, 101 | rows+: [row { panels: panels }], 102 | }, 103 | addPanels(newpanels):: 104 | self { 105 | // automatically number panels in added rows. 
106 | // https://github.com/kausalco/public/blob/master/klumps/grafana.libsonnet 107 | local n = std.foldl(function(numOfPanels, p) 108 | (if 'panels' in p then 109 | numOfPanels + 1 + std.length(p.panels) 110 | else 111 | numOfPanels + 1), newpanels, 0), 112 | local nextPanel = super._nextPanel, 113 | local _panels = std.makeArray( 114 | std.length(newpanels), function(i) 115 | newpanels[i] { 116 | id: nextPanel + ( 117 | if i == 0 then 118 | 0 119 | else 120 | if 'panels' in _panels[i - 1] then 121 | (_panels[i - 1].id - nextPanel) + 1 + std.length(_panels[i - 1].panels) 122 | else 123 | (_panels[i - 1].id - nextPanel) + 1 124 | 125 | ), 126 | [if 'panels' in newpanels[i] then 'panels']: std.makeArray( 127 | std.length(newpanels[i].panels), function(j) 128 | newpanels[i].panels[j] { 129 | id: 1 + j + 130 | nextPanel + ( 131 | if i == 0 then 132 | 0 133 | else 134 | if 'panels' in _panels[i - 1] then 135 | (_panels[i - 1].id - nextPanel) + 1 + std.length(_panels[i - 1].panels) 136 | else 137 | (_panels[i - 1].id - nextPanel) + 1 138 | 139 | ), 140 | } 141 | ), 142 | } 143 | ), 144 | 145 | _nextPanel: nextPanel + n, 146 | panels+::: _panels, 147 | }, 148 | addPanel(panel, gridPos):: self.addPanels([panel { gridPos: gridPos }]), 149 | addRows(rows):: std.foldl(function(d, row) d.addRow(row), rows, self), 150 | addLink(link):: self { 151 | links+: [link], 152 | }, 153 | addLinks(dashboardLinks):: std.foldl(function(d, t) d.addLink(t), dashboardLinks, self), 154 | required:: [], 155 | __requires: it.required, 156 | addRequired(type, name, id, version):: self { 157 | required+: [{ type: type, name: name, id: id, version: version }], 158 | }, 159 | inputs:: [], 160 | __inputs: it.inputs, 161 | addInput( 162 | name, 163 | label, 164 | type, 165 | pluginId=null, 166 | pluginName=null, 167 | description='', 168 | value=null, 169 | ):: self { 170 | inputs+: [{ 171 | name: name, 172 | label: label, 173 | type: type, 174 | [if pluginId != null then 'pluginId']: pluginId, 175 | [if pluginName != null then 'pluginName']: pluginName, 176 | [if value != null then 'value']: value, 177 | description: description, 178 | }], 179 | }, 180 | }, 181 | } 182 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/dashlist.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [dashlist panel](https://grafana.com/docs/grafana/latest/panels/visualizations/dashboard-list-panel/). 4 | * It requires the dashlist panel plugin in grafana, which is built-in. 5 | * 6 | * @name dashlist.new 7 | * 8 | * @param title The title of the dashlist panel. 
9 | * @param description (optional) Description of the panel 10 | * @param query (optional) Query to search by 11 | * @param tags (optional) Array of tag(s) to search by 12 | * @param recent (default `true`) Displays recently viewed dashboards 13 | * @param search (default `false`) Description of the panel 14 | * @param starred (default `false`) Displays starred dashboards 15 | * @param headings (default `true`) Chosen list selection(starred, recently Viewed, search) is shown as a heading 16 | * @param limit (default `10`) Set maximum items in a list 17 | * @return A json that represents a dashlist panel 18 | */ 19 | new( 20 | title, 21 | description=null, 22 | query=null, 23 | tags=[], 24 | recent=true, 25 | search=false, 26 | starred=false, 27 | headings=true, 28 | limit=10, 29 | ):: { 30 | type: 'dashlist', 31 | title: title, 32 | query: if query != null then query else '', 33 | tags: tags, 34 | recent: recent, 35 | search: search, 36 | starred: starred, 37 | headings: headings, 38 | limit: limit, 39 | [if description != null then 'description']: description, 40 | }, 41 | } 42 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/elasticsearch.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates an [Elasticsearch target](https://grafana.com/docs/grafana/latest/datasources/elasticsearch/) 4 | * 5 | * @name elasticsearch.target 6 | * 7 | * @param query 8 | * @param timeField 9 | * @param id (optional) 10 | * @param datasource (optional) 11 | * @param metrics (optional) 12 | * @param bucketAggs (optional) 13 | * @param alias (optional) 14 | */ 15 | target( 16 | query, 17 | timeField, 18 | id=null, 19 | datasource=null, 20 | metrics=[{ 21 | field: 'value', 22 | id: null, 23 | type: 'percentiles', 24 | settings: { 25 | percents: [ 26 | '90', 27 | ], 28 | }, 29 | }], 30 | bucketAggs=[{ 31 | field: 'timestamp', 32 | id: null, 33 | type: 'date_histogram', 34 | settings: { 35 | interval: '1s', 36 | min_doc_count: 0, 37 | trimEdges: 0, 38 | }, 39 | }], 40 | alias=null, 41 | ):: { 42 | [if datasource != null then 'datasource']: datasource, 43 | query: query, 44 | id: id, 45 | timeField: timeField, 46 | bucketAggs: bucketAggs, 47 | metrics: metrics, 48 | alias: alias, 49 | // TODO: generate bucket ids 50 | }, 51 | } 52 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/gauge_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [gauge panel](https://grafana.com/docs/grafana/latest/panels/visualizations/gauge-panel/). 4 | * 5 | * @name gaugePanel.new 6 | * 7 | * @param title Panel title. 8 | * @param description (optional) Panel description. 9 | * @param transparent (default `false`) Whether to display the panel without a background. 10 | * @param datasource (optional) Panel datasource. 11 | * @param allValues (default `false`) Show all values instead of reducing to one. 12 | * @param valueLimit (optional) Limit of values in all values mode. 13 | * @param reducerFunction (default `'mean'`) Function to use to reduce values to when using single value. 14 | * @param fields (default `''`) Fields that should be included in the panel. 15 | * @param showThresholdLabels (default `false`) Render the threshold values around the gauge bar. 
16 | * @param showThresholdMarkers (default `true`) Render the thresholds as an outer bar. 17 | * @param unit (default `'percent'`) Panel unit field option. 18 | * @param min (optional) Leave empty to calculate based on all values. 19 | * @param max (optional) Leave empty to calculate based on all values. 20 | * @param decimals Number of decimal places to show. 21 | * @param displayName Change the field or series name. 22 | * @param noValue (optional) What to show when there is no value. 23 | * @param thresholdsMode (default `'absolute'`) 'absolute' or 'percentage'. 24 | * @param repeat (optional) Name of variable that should be used to repeat this panel. 25 | * @param repeatDirection (default `'h'`) 'h' for horizontal or 'v' for vertical. 26 | * @param repeatMaxPerRow (optional) Maximum panels per row in repeat mode. 27 | * @param pluginVersion (default `'7'`) Plugin version the panel should be modeled for. This has been tested with the default, '7', and '6.7'. 28 | * 29 | * @method addTarget(target) Adds a target object. 30 | * @method addTargets(targets) Adds an array of targets. 31 | * @method addLink(link) Adds a [panel link](https://grafana.com/docs/grafana/latest/linking/panel-links/). Argument format: `{ title: 'Link Title', url: 'https://...', targetBlank: true }`. 32 | * @method addLinks(links) Adds an array of links. 33 | * @method addThreshold(step) Adds a threshold step. Argument format: `{ color: 'green', value: 0 }`. 34 | * @method addThresholds(steps) Adds an array of threshold steps. 35 | * @method addMapping(mapping) Adds a value mapping. 36 | * @method addMappings(mappings) Adds an array of value mappings. 37 | * @method addDataLink(link) Adds a data link. 38 | * @method addDataLinks(links) Adds an array of data links. 39 | * @param timeFrom (optional) 40 | */ 41 | new( 42 | title, 43 | description=null, 44 | transparent=false, 45 | datasource=null, 46 | allValues=false, 47 | valueLimit=null, 48 | reducerFunction='mean', 49 | fields='', 50 | showThresholdLabels=false, 51 | showThresholdMarkers=true, 52 | unit='percent', 53 | min=0, 54 | max=100, 55 | decimals=null, 56 | displayName=null, 57 | noValue=null, 58 | thresholdsMode='absolute', 59 | repeat=null, 60 | repeatDirection='h', 61 | repeatMaxPerRow=null, 62 | timeFrom=null, 63 | pluginVersion='7', 64 | ):: { 65 | 66 | type: 'gauge', 67 | title: title, 68 | [if description != null then 'description']: description, 69 | transparent: transparent, 70 | datasource: datasource, 71 | targets: [], 72 | links: [], 73 | [if repeat != null then 'repeat']: repeat, 74 | [if repeat != null then 'repeatDirection']: repeatDirection, 75 | [if repeat != null then 'repeatMaxPerRow']: repeatMaxPerRow, 76 | [if timeFrom != null then 'timeFrom']: timeFrom, 77 | 78 | // targets 79 | _nextTarget:: 0, 80 | addTarget(target):: self { 81 | local nextTarget = super._nextTarget, 82 | _nextTarget: nextTarget + 1, 83 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 84 | }, 85 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 86 | 87 | // links 88 | addLink(link):: self { 89 | links+: [link], 90 | }, 91 | addLinks(links):: std.foldl(function(p, l) p.addLink(l), links, self), 92 | 93 | pluginVersion: pluginVersion, 94 | } + ( 95 | 96 | if pluginVersion >= '7' then { 97 | options: { 98 | reduceOptions: { 99 | values: allValues, 100 | [if allValues && valueLimit != null then 'limit']: valueLimit, 101 | calcs: [ 102 | reducerFunction, 103 | ], 104 | fields: fields, 105 | }, 106 | 
showThresholdLabels: showThresholdLabels, 107 | showThresholdMarkers: showThresholdMarkers, 108 | }, 109 | fieldConfig: { 110 | defaults: { 111 | unit: unit, 112 | [if min != null then 'min']: min, 113 | [if max != null then 'max']: max, 114 | [if decimals != null then 'decimals']: decimals, 115 | [if displayName != null then 'displayName']: displayName, 116 | [if noValue != null then 'noValue']: noValue, 117 | thresholds: { 118 | mode: thresholdsMode, 119 | steps: [], 120 | }, 121 | mappings: [], 122 | links: [], 123 | }, 124 | }, 125 | 126 | // thresholds 127 | addThreshold(step):: self { 128 | fieldConfig+: { defaults+: { thresholds+: { steps+: [step] } } }, 129 | }, 130 | 131 | // mappings 132 | _nextMapping:: 0, 133 | addMapping(mapping):: self { 134 | local nextMapping = super._nextMapping, 135 | _nextMapping: nextMapping + 1, 136 | fieldConfig+: { defaults+: { mappings+: [mapping { id: nextMapping }] } }, 137 | }, 138 | 139 | // data links 140 | addDataLink(link):: self { 141 | fieldConfig+: { defaults+: { links+: [link] } }, 142 | }, 143 | 144 | // Overrides 145 | addOverride( 146 | matcher=null, 147 | properties=null, 148 | ):: self { 149 | fieldConfig+: { 150 | overrides+: [ 151 | { 152 | [if matcher != null then 'matcher']: matcher, 153 | [if properties != null then 'properties']: properties, 154 | }, 155 | ], 156 | }, 157 | }, 158 | addOverrides(overrides):: std.foldl(function(p, o) p.addOverride(o.matcher, o.properties), overrides, self), 159 | } else { 160 | 161 | options: { 162 | fieldOptions: { 163 | values: allValues, 164 | [if allValues && valueLimit != null then 'limit']: valueLimit, 165 | calcs: [ 166 | reducerFunction, 167 | ], 168 | fields: fields, 169 | defaults: { 170 | unit: unit, 171 | [if min != null then 'min']: min, 172 | [if max != null then 'max']: max, 173 | [if decimals != null then 'decimals']: decimals, 174 | [if displayName != null then 'displayName']: displayName, 175 | [if noValue != null then 'noValue']: noValue, 176 | thresholds: { 177 | mode: thresholdsMode, 178 | steps: [], 179 | }, 180 | mappings: [], 181 | links: [], 182 | }, 183 | }, 184 | showThresholdLabels: showThresholdLabels, 185 | showThresholdMarkers: showThresholdMarkers, 186 | }, 187 | 188 | // thresholds 189 | addThreshold(step):: self { 190 | options+: { fieldOptions+: { defaults+: { thresholds+: { steps+: [step] } } } }, 191 | }, 192 | 193 | // mappings 194 | _nextMapping:: 0, 195 | addMapping(mapping):: self { 196 | local nextMapping = super._nextMapping, 197 | _nextMapping: nextMapping + 1, 198 | options+: { fieldOptions+: { defaults+: { mappings+: [mapping { id: nextMapping }] } } }, 199 | }, 200 | 201 | // data links 202 | addDataLink(link):: self { 203 | options+: { fieldOptions+: { defaults+: { links+: [link] } } }, 204 | }, 205 | } 206 | ) + { 207 | addThresholds(steps):: std.foldl(function(p, s) p.addThreshold(s), steps, self), 208 | addMappings(mappings):: std.foldl(function(p, m) p.addMapping(m), mappings, self), 209 | addDataLinks(links):: std.foldl(function(p, l) p.addDataLink(l), links, self), 210 | }, 211 | } 212 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/grafana.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | alertlist:: import 'alertlist.libsonnet', 3 | dashboard:: import 'dashboard.libsonnet', 4 | template:: import 'template.libsonnet', 5 | text:: import 'text.libsonnet', 6 | timepicker:: import 'timepicker.libsonnet', 
7 | row:: import 'row.libsonnet', 8 | link:: import 'link.libsonnet', 9 | annotation:: import 'annotation.libsonnet', 10 | graphPanel:: import 'graph_panel.libsonnet', 11 | logPanel:: import 'log_panel.libsonnet', 12 | tablePanel:: import 'table_panel.libsonnet', 13 | singlestat:: import 'singlestat.libsonnet', 14 | pieChartPanel:: import 'pie_chart_panel.libsonnet', 15 | influxdb:: import 'influxdb.libsonnet', 16 | prometheus:: import 'prometheus.libsonnet', 17 | loki:: import 'loki.libsonnet', 18 | sql:: import 'sql.libsonnet', 19 | graphite:: import 'graphite.libsonnet', 20 | alertCondition:: import 'alert_condition.libsonnet', 21 | cloudmonitoring:: import 'cloudmonitoring.libsonnet', 22 | cloudwatch:: import 'cloudwatch.libsonnet', 23 | elasticsearch:: import 'elasticsearch.libsonnet', 24 | heatmapPanel:: import 'heatmap_panel.libsonnet', 25 | dashlist:: import 'dashlist.libsonnet', 26 | pluginlist:: import 'pluginlist.libsonnet', 27 | gauge:: error 'gauge is removed, migrate to gaugePanel', 28 | gaugePanel:: import 'gauge_panel.libsonnet', 29 | barGaugePanel:: import 'bar_gauge_panel.libsonnet', 30 | statPanel:: import 'stat_panel.libsonnet', 31 | transformation:: import 'transformation.libsonnet', 32 | } 33 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/graph_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [graph panel](https://grafana.com/docs/grafana/latest/panels/visualizations/graph-panel/). 4 | * It requires the graph panel plugin in grafana, which is built-in. 5 | * 6 | * @name graphPanel.new 7 | * 8 | * @param title The title of the graph panel. 9 | * @param description (optional) The description of the panel 10 | * @param span (optional) Width of the panel 11 | * @param datasource (optional) Datasource 12 | * @param fill (default `1`) , integer from 0 to 10 13 | * @param fillGradient (default `0`) , integer from 0 to 10 14 | * @param linewidth (default `1`) Line Width, integer from 0 to 10 15 | * @param decimals (optional) Override automatic decimal precision for legend and tooltip. If null, not added to the json output. 16 | * @param decimalsY1 (optional) Override automatic decimal precision for the first Y axis. If null, use decimals parameter. 17 | * @param decimalsY2 (optional) Override automatic decimal precision for the second Y axis. If null, use decimals parameter. 18 | * @param min_span (optional) Min span 19 | * @param format (default `short`) Unit of the Y axes 20 | * @param formatY1 (optional) Unit of the first Y axis 21 | * @param formatY2 (optional) Unit of the second Y axis 22 | * @param min (optional) Min of the Y axes 23 | * @param max (optional) Max of the Y axes 24 | * @param maxDataPoints (optional) If the data source supports it, sets the maximum number of data points for each series returned. 
25 | * @param labelY1 (optional) Label of the first Y axis 26 | * @param labelY2 (optional) Label of the second Y axis 27 | * @param x_axis_mode (default `'time'`) X axis mode, one of [time, series, histogram] 28 | * @param x_axis_values (default `'total'`) Chosen value of series, one of [avg, min, max, total, count] 29 | * @param x_axis_buckets (optional) Restricts the x axis to this amount of buckets 30 | * @param x_axis_min (optional) Restricts the x axis to display from this value if supplied 31 | * @param x_axis_max (optional) Restricts the x axis to display up to this value if supplied 32 | * @param lines (default `true`) Display lines 33 | * @param points (default `false`) Display points 34 | * @param pointradius (default `5`) Radius of the points, allowed values are 0.5 or [1 ... 10] with step 1 35 | * @param bars (default `false`) Display bars 36 | * @param staircase (default `false`) Display line as staircase 37 | * @param dashes (default `false`) Display line as dashes 38 | * @param stack (default `false`) Whether to stack values 39 | * @param repeat (optional) Name of variable that should be used to repeat this panel. 40 | * @param repeatDirection (default `'h'`) 'h' for horizontal or 'v' for vertical. 41 | * @param legend_show (default `true`) Show legend 42 | * @param legend_values (default `false`) Show values in legend 43 | * @param legend_min (default `false`) Show min in legend 44 | * @param legend_max (default `false`) Show max in legend 45 | * @param legend_current (default `false`) Show current in legend 46 | * @param legend_total (default `false`) Show total in legend 47 | * @param legend_avg (default `false`) Show average in legend 48 | * @param legend_alignAsTable (default `false`) Show legend as table 49 | * @param legend_rightSide (default `false`) Show legend to the right 50 | * @param legend_sideWidth (optional) Legend width 51 | * @param legend_sort (optional) Sort order of legend 52 | * @param legend_sortDesc (optional) Sort legend descending 53 | * @param aliasColors (optional) Define color mappings for graphs 54 | * @param thresholds (optional) An array of graph thresholds 55 | * @param logBase1Y (default `1`) Value of logarithm base of the first Y axis 56 | * @param logBase2Y (default `1`) Value of logarithm base of the second Y axis 57 | * @param transparent (default `false`) Whether to display the panel without a background. 58 | * @param value_type (default `'individual'`) Type of tooltip value 59 | * @param shared_tooltip (default `true`) Allow to group or spit tooltips on mouseover within a chart 60 | * @param percentage (defaut: false) show as percentages 61 | * @param interval (defaut: null) A lower limit for the interval. 62 | 63 | * 64 | * @method addTarget(target) Adds a target object. 65 | * @method addTargets(targets) Adds an array of targets. 66 | * @method addSeriesOverride(override) 67 | * @method addYaxis(format,min,max,label,show,logBase,decimals) Adds a Y axis to the graph 68 | * @method addAlert(alert) Adds an alert 69 | * @method addLink(link) Adds a [panel link](https://grafana.com/docs/grafana/latest/linking/panel-links/) 70 | * @method addLinks(links) Adds an array of links. 
71 | */ 72 | new( 73 | title, 74 | span=null, 75 | fill=1, 76 | fillGradient=0, 77 | linewidth=1, 78 | decimals=null, 79 | decimalsY1=null, 80 | decimalsY2=null, 81 | description=null, 82 | min_span=null, 83 | format='short', 84 | formatY1=null, 85 | formatY2=null, 86 | min=null, 87 | max=null, 88 | labelY1=null, 89 | labelY2=null, 90 | x_axis_mode='time', 91 | x_axis_values='total', 92 | x_axis_buckets=null, 93 | x_axis_min=null, 94 | x_axis_max=null, 95 | lines=true, 96 | datasource=null, 97 | points=false, 98 | pointradius=5, 99 | bars=false, 100 | staircase=false, 101 | height=null, 102 | nullPointMode='null', 103 | dashes=false, 104 | stack=false, 105 | repeat=null, 106 | repeatDirection=null, 107 | sort=0, 108 | show_xaxis=true, 109 | legend_show=true, 110 | legend_values=false, 111 | legend_min=false, 112 | legend_max=false, 113 | legend_current=false, 114 | legend_total=false, 115 | legend_avg=false, 116 | legend_alignAsTable=false, 117 | legend_rightSide=false, 118 | legend_sideWidth=null, 119 | legend_hideEmpty=null, 120 | legend_hideZero=null, 121 | legend_sort=null, 122 | legend_sortDesc=null, 123 | aliasColors={}, 124 | thresholds=[], 125 | links=[], 126 | logBase1Y=1, 127 | logBase2Y=1, 128 | transparent=false, 129 | value_type='individual', 130 | shared_tooltip=true, 131 | percentage=false, 132 | maxDataPoints=null, 133 | time_from=null, 134 | time_shift=null, 135 | interval=null 136 | ):: { 137 | title: title, 138 | [if span != null then 'span']: span, 139 | [if min_span != null then 'minSpan']: min_span, 140 | [if decimals != null then 'decimals']: decimals, 141 | type: 'graph', 142 | datasource: datasource, 143 | targets: [ 144 | ], 145 | [if description != null then 'description']: description, 146 | [if height != null then 'height']: height, 147 | renderer: 'flot', 148 | yaxes: [ 149 | self.yaxe( 150 | if formatY1 != null then formatY1 else format, 151 | min, 152 | max, 153 | decimals=(if decimalsY1 != null then decimalsY1 else decimals), 154 | logBase=logBase1Y, 155 | label=labelY1 156 | ), 157 | self.yaxe( 158 | if formatY2 != null then formatY2 else format, 159 | min, 160 | max, 161 | decimals=(if decimalsY2 != null then decimalsY2 else decimals), 162 | logBase=logBase2Y, 163 | label=labelY2 164 | ), 165 | ], 166 | xaxis: { 167 | show: show_xaxis, 168 | mode: x_axis_mode, 169 | name: null, 170 | values: if x_axis_mode == 'series' then [x_axis_values] else [], 171 | buckets: if x_axis_mode == 'histogram' then x_axis_buckets else null, 172 | [if x_axis_min != null then 'min']: x_axis_min, 173 | [if x_axis_max != null then 'max']: x_axis_max, 174 | }, 175 | lines: lines, 176 | fill: fill, 177 | fillGradient: fillGradient, 178 | linewidth: linewidth, 179 | dashes: dashes, 180 | dashLength: 10, 181 | spaceLength: 10, 182 | points: points, 183 | pointradius: pointradius, 184 | bars: bars, 185 | stack: stack, 186 | percentage: percentage, 187 | [if maxDataPoints != null then 'maxDataPoints']: maxDataPoints, 188 | legend: { 189 | show: legend_show, 190 | values: legend_values, 191 | min: legend_min, 192 | max: legend_max, 193 | current: legend_current, 194 | total: legend_total, 195 | alignAsTable: legend_alignAsTable, 196 | rightSide: legend_rightSide, 197 | sideWidth: legend_sideWidth, 198 | avg: legend_avg, 199 | [if legend_hideEmpty != null then 'hideEmpty']: legend_hideEmpty, 200 | [if legend_hideZero != null then 'hideZero']: legend_hideZero, 201 | [if legend_sort != null then 'sort']: legend_sort, 202 | [if legend_sortDesc != null then 'sortDesc']: legend_sortDesc, 
203 | }, 204 | nullPointMode: nullPointMode, 205 | steppedLine: staircase, 206 | tooltip: { 207 | value_type: value_type, 208 | shared: shared_tooltip, 209 | sort: if sort == 'decreasing' then 2 else if sort == 'increasing' then 1 else sort, 210 | }, 211 | timeFrom: time_from, 212 | timeShift: time_shift, 213 | [if interval != null then 'interval']: interval, 214 | [if transparent == true then 'transparent']: transparent, 215 | aliasColors: aliasColors, 216 | repeat: repeat, 217 | [if repeatDirection != null then 'repeatDirection']: repeatDirection, 218 | seriesOverrides: [], 219 | thresholds: thresholds, 220 | links: links, 221 | yaxe( 222 | format='short', 223 | min=null, 224 | max=null, 225 | label=null, 226 | show=true, 227 | logBase=1, 228 | decimals=null, 229 | ):: { 230 | label: label, 231 | show: show, 232 | logBase: logBase, 233 | min: min, 234 | max: max, 235 | format: format, 236 | [if decimals != null then 'decimals']: decimals, 237 | }, 238 | _nextTarget:: 0, 239 | addTarget(target):: self { 240 | // automatically ref id in added targets. 241 | // https://github.com/kausalco/public/blob/master/klumps/grafana.libsonnet 242 | local nextTarget = super._nextTarget, 243 | _nextTarget: nextTarget + 1, 244 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 245 | }, 246 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 247 | addSeriesOverride(override):: self { 248 | seriesOverrides+: [override], 249 | }, 250 | resetYaxes():: self { 251 | yaxes: [], 252 | }, 253 | addYaxis( 254 | format='short', 255 | min=null, 256 | max=null, 257 | label=null, 258 | show=true, 259 | logBase=1, 260 | decimals=null, 261 | ):: self { 262 | yaxes+: [self.yaxe(format, min, max, label, show, logBase, decimals)], 263 | }, 264 | addAlert( 265 | name, 266 | executionErrorState='alerting', 267 | forDuration='5m', 268 | frequency='60s', 269 | handler=1, 270 | message='', 271 | noDataState='no_data', 272 | notifications=[], 273 | alertRuleTags={}, 274 | ):: self { 275 | local it = self, 276 | _conditions:: [], 277 | alert: { 278 | name: name, 279 | conditions: it._conditions, 280 | executionErrorState: executionErrorState, 281 | 'for': forDuration, 282 | frequency: frequency, 283 | handler: handler, 284 | noDataState: noDataState, 285 | notifications: notifications, 286 | message: message, 287 | alertRuleTags: alertRuleTags, 288 | }, 289 | addCondition(condition):: self { 290 | _conditions+: [condition], 291 | }, 292 | addConditions(conditions):: std.foldl(function(p, c) p.addCondition(c), conditions, it), 293 | }, 294 | addLink(link):: self { 295 | links+: [link], 296 | }, 297 | addLinks(links):: std.foldl(function(p, t) p.addLink(t), links, self), 298 | addOverride( 299 | matcher=null, 300 | properties=null, 301 | ):: self { 302 | fieldConfig+: { 303 | overrides+: [ 304 | { 305 | [if matcher != null then 'matcher']: matcher, 306 | [if properties != null then 'properties']: properties, 307 | }, 308 | ], 309 | }, 310 | }, 311 | addOverrides(overrides):: std.foldl(function(p, o) p.addOverride(o.matcher, o.properties), overrides, self), 312 | }, 313 | } 314 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/graphite.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [Graphite target](https://grafana.com/docs/grafana/latest/datasources/graphite/) 4 | * 5 | * @name graphite.target 6 | * 7 | * @param 
target Graphite Query. Nested queries are possible by adding the query reference (refId). 8 | * @param targetFull (optional) Expanding the @target. Used in nested queries. 9 | * @param hide (default `false`) Disable query on graph. 10 | * @param textEditor (default `false`) Enable raw query mode. 11 | * @param datasource (optional) Datasource. 12 | 13 | * @return Panel target 14 | */ 15 | target( 16 | target, 17 | targetFull=null, 18 | hide=false, 19 | textEditor=false, 20 | datasource=null, 21 | ):: { 22 | target: target, 23 | hide: hide, 24 | textEditor: textEditor, 25 | 26 | [if targetFull != null then 'targetFull']: targetFull, 27 | [if datasource != null then 'datasource']: datasource, 28 | }, 29 | } 30 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/heatmap_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [heatmap panel](https://grafana.com/docs/grafana/latest/panels/visualizations/heatmap/). 4 | * Requires the heatmap panel plugin in Grafana, which is built-in. 5 | * 6 | * @name heatmapPanel.new 7 | * 8 | * @param title The title of the heatmap panel 9 | * @param description (optional) Description of panel 10 | * @param datasource (optional) Datasource 11 | * @param min_span (optional) Min span 12 | * @param span (optional) Width of the panel 13 | * @param cards_cardPadding (optional) How much padding to put between bucket cards 14 | * @param cards_cardRound (optional) How much rounding should be applied to the bucket card shape 15 | * @param color_cardColor (default `'#b4ff00'`) Hex value of color used when color_colorScheme is 'opacity' 16 | * @param color_colorScale (default `'sqrt'`) How to scale the color range, 'linear' or 'sqrt' 17 | * @param color_colorScheme (default `'interpolateOranges'`) TODO: document 18 | * @param color_exponent (default `0.5`) TODO: document 19 | * @param color_max (optional) The value for the end of the color range 20 | * @param color_min (optional) The value for the beginning of the color range 21 | * @param color_mode (default `'spectrum'`) How to display difference in frequency with color 22 | * @param dataFormat (default `'timeseries'`) How to format the data 23 | * @param highlightCards (default `true`) TODO: document 24 | * @param hideZeroBuckets (default `false`) Whether or not to hide empty buckets, default is false 25 | * @param legend_show (default `false`) Show legend 26 | * @param minSpan (optional) Minimum span of the panel when repeated on a template variable 27 | * @param repeat (optional) Variable used to repeat the heatmap panel 28 | * @param repeatDirection (optional) Which direction to repeat the panel, 'h' for horizontal and 'v' for vertically 29 | * @param tooltipDecimals (optional) The number of decimal places to display in the tooltip 30 | * @param tooltip_show (default `true`) Whether or not to display a tooltip when hovering over the heatmap 31 | * @param tooltip_showHistogram (default `false`) Whether or not to display a histogram in the tooltip 32 | * @param xAxis_show (default `true`) Whether or not to show the X axis, default true 33 | * @param xBucketNumber (optional) Number of buckets for the X axis 34 | * @param xBucketSize (optional) Size of X axis buckets. Number or interval(10s, 15h, etc.) 
Has priority over xBucketNumber 35 | * @param yAxis_decimals (optional) Override automatic decimal precision for the Y axis 36 | * @param yAxis_format (default `'short'`) Unit of the Y axis 37 | * @param yAxis_logBase (default `1`) Only if dataFormat is 'timeseries' 38 | * @param yAxis_min (optional) Only if dataFormat is 'timeseries', min of the Y axis 39 | * @param yAxis_max (optional) Only if dataFormat is 'timeseries', max of the Y axis 40 | * @param yAxis_show (default `true`) Whether or not to show the Y axis 41 | * @param yAxis_splitFactor (optional) TODO: document 42 | * @param yBucketBound (default `'auto'`) Which bound ('lower' or 'upper') of the bucket to use 43 | * @param yBucketNumber (optional) Number of buckets for the Y axis 44 | * @param yBucketSize (optional) Size of Y axis buckets. Has priority over yBucketNumber 45 | * @param maxDataPoints (optional) The maximum data points per series. Used directly by some data sources and used in calculation of auto interval. With streaming data this value is used for the rolling buffer. 46 | * 47 | * @method addTarget(target) Adds a target object. 48 | * @method addTargets(targets) Adds an array of targets. 49 | */ 50 | new( 51 | title, 52 | datasource=null, 53 | description=null, 54 | cards_cardPadding=null, 55 | cards_cardRound=null, 56 | color_cardColor='#b4ff00', 57 | color_colorScale='sqrt', 58 | color_colorScheme='interpolateOranges', 59 | color_exponent=0.5, 60 | color_max=null, 61 | color_min=null, 62 | color_mode='spectrum', 63 | dataFormat='timeseries', 64 | highlightCards=true, 65 | hideZeroBuckets=false, 66 | legend_show=false, 67 | minSpan=null, 68 | span=null, 69 | repeat=null, 70 | repeatDirection=null, 71 | tooltipDecimals=null, 72 | tooltip_show=true, 73 | tooltip_showHistogram=false, 74 | xAxis_show=true, 75 | xBucketNumber=null, 76 | xBucketSize=null, 77 | yAxis_decimals=null, 78 | yAxis_format='short', 79 | yAxis_logBase=1, 80 | yAxis_min=null, 81 | yAxis_max=null, 82 | yAxis_show=true, 83 | yAxis_splitFactor=null, 84 | yBucketBound='auto', 85 | yBucketNumber=null, 86 | yBucketSize=null, 87 | maxDataPoints=null, 88 | ):: { 89 | title: title, 90 | type: 'heatmap', 91 | [if description != null then 'description']: description, 92 | datasource: datasource, 93 | cards: { 94 | cardPadding: cards_cardPadding, 95 | cardRound: cards_cardRound, 96 | }, 97 | color: { 98 | mode: color_mode, 99 | cardColor: color_cardColor, 100 | colorScale: color_colorScale, 101 | exponent: color_exponent, 102 | [if color_mode == 'spectrum' then 'colorScheme']: color_colorScheme, 103 | [if color_max != null then 'max']: color_max, 104 | [if color_min != null then 'min']: color_min, 105 | }, 106 | [if dataFormat != null then 'dataFormat']: dataFormat, 107 | heatmap: {}, 108 | hideZeroBuckets: hideZeroBuckets, 109 | highlightCards: highlightCards, 110 | legend: { 111 | show: legend_show, 112 | }, 113 | [if minSpan != null then 'minSpan']: minSpan, 114 | [if span != null then 'span']: span, 115 | [if repeat != null then 'repeat']: repeat, 116 | [if repeatDirection != null then 'repeatDirection']: repeatDirection, 117 | tooltip: { 118 | show: tooltip_show, 119 | showHistogram: tooltip_showHistogram, 120 | }, 121 | [if tooltipDecimals != null then 'tooltipDecimals']: tooltipDecimals, 122 | xAxis: { 123 | show: xAxis_show, 124 | }, 125 | xBucketNumber: if dataFormat == 'timeseries' && xBucketSize != null then xBucketNumber else null, 126 | xBucketSize: if dataFormat == 'timeseries' && xBucketSize != null then xBucketSize else null, 127 | yAxis: { 
128 | decimals: yAxis_decimals, 129 | [if dataFormat == 'timeseries' then 'logBase']: yAxis_logBase, 130 | format: yAxis_format, 131 | [if dataFormat == 'timeseries' then 'max']: yAxis_max, 132 | [if dataFormat == 'timeseries' then 'min']: yAxis_min, 133 | show: yAxis_show, 134 | splitFactor: yAxis_splitFactor, 135 | }, 136 | yBucketBound: yBucketBound, 137 | [if dataFormat == 'timeseries' then 'yBucketNumber']: yBucketNumber, 138 | [if dataFormat == 'timeseries' then 'yBucketSize']: yBucketSize, 139 | [if maxDataPoints != null then 'maxDataPoints']: maxDataPoints, 140 | 141 | _nextTarget:: 0, 142 | addTarget(target):: self { 143 | local nextTarget = super._nextTarget, 144 | _nextTarget: nextTarget + 1, 145 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 146 | }, 147 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 148 | }, 149 | 150 | } 151 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/influxdb.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates an [InfluxDB target](https://grafana.com/docs/grafana/latest/datasources/influxdb/) 4 | * 5 | * @name influxdb.target 6 | * 7 | * @param query Raw InfluxQL statement 8 | * 9 | * @param alias (optional) 'Alias By' pattern 10 | * @param datasource (optional) Datasource 11 | * @param hide (optional) Disable query on graph 12 | * 13 | * @param rawQuery (optional) Enable/disable raw query mode 14 | * 15 | * @param policy (default: `'default'`) Tagged query 'From' policy 16 | * @param measurement (optional) Tagged query 'From' measurement 17 | * @param group_time (default: `'$__interval'`) 'Group by' time condition (if set to null, do not groups by time) 18 | * @param group_tags (optional) 'Group by' tags list 19 | * @param fill (default: `'none'`) 'Group by' missing values fill mode (works only with 'Group by time()') 20 | * 21 | * @param resultFormat (default: `'time_series'`) Format results as 'Time series' or 'Table' 22 | * 23 | * @return Panel target 24 | */ 25 | target( 26 | query=null, 27 | 28 | alias=null, 29 | datasource=null, 30 | hide=null, 31 | 32 | rawQuery=null, 33 | 34 | policy='default', 35 | measurement=null, 36 | 37 | group_time='$__interval', 38 | group_tags=[], 39 | fill='none', 40 | 41 | resultFormat='time_series', 42 | ):: { 43 | local it = self, 44 | 45 | [if alias != null then 'alias']: alias, 46 | [if datasource != null then 'datasource']: datasource, 47 | [if hide != null then 'hide']: hide, 48 | 49 | [if query != null then 'query']: query, 50 | [if rawQuery != null then 'rawQuery']: rawQuery, 51 | [if rawQuery == null && query != null then 'rawQuery']: true, 52 | 53 | policy: policy, 54 | [if measurement != null then 'measurement']: measurement, 55 | tags: [], 56 | select: [], 57 | groupBy: 58 | if group_time != null then 59 | [{ type: 'time', params: [group_time] }] + 60 | [{ type: 'tag', params: [tag_name] } for tag_name in group_tags] + 61 | [{ type: 'fill', params: [fill] }] 62 | else 63 | [{ type: 'tag', params: [tag_name] } for tag_name in group_tags], 64 | 65 | resultFormat: resultFormat, 66 | 67 | where(key, operator, value, condition=null):: self { 68 | /* 69 | * Adds query tag condition ('Where' section) 70 | */ 71 | tags: 72 | if std.length(it.tags) == 0 then 73 | [{ key: key, operator: operator, value: value }] 74 | else 75 | it.tags + [{ 76 | key: key, 77 | operator: operator, 78 | value: value, 
79 | condition: if condition == null then 'AND' else condition, 80 | }], 81 | }, 82 | 83 | selectField(value):: self { 84 | /* 85 | * Adds InfluxDB selection ('field(value)' part of 'Select' statement) 86 | */ 87 | select+: [[{ params: [value], type: 'field' }]], 88 | }, 89 | 90 | addConverter(type, params=[]):: self { 91 | /* 92 | * Appends converter (aggregation, selector, etc.) to last added selection 93 | */ 94 | local len = std.length(it.select), 95 | select: 96 | if len == 1 then 97 | [it.select[0] + [{ params: params, type: type }]] 98 | else if len > 1 then 99 | it.select[0:(len - 1)] + [it.select[len - 1] + [{ params: params, type: type }]] 100 | else 101 | [], 102 | }, 103 | }, 104 | } 105 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/link.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates [links](https://grafana.com/docs/grafana/latest/linking/linking-overview/) to navigate to other dashboards. 4 | * 5 | * @param title Human-readable label for the link. 6 | * @param tags Limits the linked dashboards to only the ones with the corresponding tags. Otherwise, Grafana includes links to all other dashboards. 7 | * @param asDropdown (default: `true`) Whether to use a dropdown (with an optional title). If `false`, displays the dashboard links side by side across the top of dashboard. 8 | * @param includeVars (default: `false`) Whether to include template variables currently used as query parameters in the link. Any matching templates in the linked dashboard are set to the values from the link 9 | * @param keepTime (default: `false`) Whether to include the current dashboard time range in the link (e.g. from=now-3h&to=now) 10 | * @param icon (default: `'external link'`) Icon displayed with the link. 11 | * @param url (default: `''`) URL of the link 12 | * @param targetBlank (default: `false`) Whether the link will open in a new window. 13 | * @param type (default: `'dashboards'`) 14 | * 15 | * @name link.dashboards 16 | */ 17 | dashboards( 18 | title, 19 | tags, 20 | asDropdown=true, 21 | includeVars=false, 22 | keepTime=false, 23 | icon='external link', 24 | url='', 25 | targetBlank=false, 26 | type='dashboards', 27 | ):: 28 | { 29 | asDropdown: asDropdown, 30 | icon: icon, 31 | includeVars: includeVars, 32 | keepTime: keepTime, 33 | tags: tags, 34 | title: title, 35 | type: type, 36 | url: url, 37 | targetBlank: targetBlank, 38 | }, 39 | } 40 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/log_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [log panel](https://grafana.com/docs/grafana/latest/panels/visualizations/logs-panel/). 4 | * It requires the log panel plugin in grafana, which is built-in. 5 | * 6 | * @name logPanel.new 7 | * 8 | * @param title (default `''`) The title of the log panel. 
9 | * @param span (optional) Width of the panel 10 | * @param datasource (optional) Datasource 11 | * @showLabels (default `false`) Whether to show or hide labels 12 | * @showTime (default `true`) Whether to show or hide time for each line 13 | * @wrapLogMessage (default `true`) Whether to wrap log line to the next line 14 | * @sortOrder (default `'Descending'`) sort log by time (can be 'Descending' or 'Ascending' ) 15 | * 16 | * @method addTarget(target) Adds a target object 17 | * @method addTargets(targets) Adds an array of targets 18 | */ 19 | new( 20 | title='', 21 | datasource=null, 22 | time_from=null, 23 | time_shift=null, 24 | showLabels=false, 25 | showTime=true, 26 | sortOrder='Descending', 27 | wrapLogMessage=true, 28 | span=12, 29 | height=null, 30 | ):: { 31 | [if height != null then 'height']: height, 32 | span: span, 33 | datasource: datasource, 34 | options: { 35 | showLabels: showLabels, 36 | showTime: showTime, 37 | sortOrder: sortOrder, 38 | wrapLogMessage: wrapLogMessage, 39 | }, 40 | targets: [ 41 | ], 42 | _nextTarget:: 0, 43 | addTarget(target):: self { 44 | // automatically ref id in added targets. 45 | // https://github.com/kausalco/public/blob/master/klumps/grafana.libsonnet 46 | local nextTarget = super._nextTarget, 47 | _nextTarget: nextTarget + 1, 48 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 49 | }, 50 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 51 | timeFrom: time_from, 52 | timeShift: time_shift, 53 | title: title, 54 | type: 'logs', 55 | }, 56 | } 57 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/loki.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [Loki target](https://grafana.com/docs/grafana/latest/datasources/loki/) 4 | * 5 | * @name loki.target 6 | * 7 | * @param expr 8 | * @param hide (optional) Disable query on graph. 9 | * @param legendFormat (optional) Defines the legend. Defaults to ''. 10 | */ 11 | target( 12 | expr, 13 | hide=null, 14 | legendFormat='', 15 | instant=null, 16 | ):: { 17 | [if hide != null then 'hide']: hide, 18 | expr: expr, 19 | legendFormat: legendFormat, 20 | [if instant != null then 'instant']: instant, 21 | }, 22 | } 23 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/pie_chart_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a pie chart panel. 4 | * It requires the [pie chart panel plugin in grafana](https://grafana.com/grafana/plugins/grafana-piechart-panel), 5 | * which needs to be explicitly installed. 6 | * 7 | * @name pieChartPanel.new 8 | * 9 | * @param title The title of the pie chart panel. 
10 | * @param description (default `''`) Description of the panel 11 | * @param span (optional) Width of the panel 12 | * @param min_span (optional) Min span 13 | * @param datasource (optional) Datasource 14 | * @param aliasColors (optional) Define color mappings 15 | * @param pieType (default `'pie'`) Type of pie chart (one of pie or donut) 16 | * @param showLegend (default `true`) Show legend 17 | * @param showLegendPercentage (default `true`) Show percentage values in the legend 18 | * @param legendType (default `'Right side'`) Type of legend (one of 'Right side', 'Under graph' or 'On graph') 19 | * @param valueName (default `'current') Type of tooltip value 20 | * @param repeat (optional) Variable used to repeat the pie chart 21 | * @param repeatDirection (optional) Which direction to repeat the panel, 'h' for horizontal and 'v' for vertical 22 | * @param maxPerRow (optional) Number of panels to display when repeated. Used in combination with repeat. 23 | * @return A json that represents a pie chart panel 24 | * 25 | * @method addTarget(target) Adds a target object. 26 | */ 27 | new( 28 | title, 29 | description='', 30 | span=null, 31 | min_span=null, 32 | datasource=null, 33 | height=null, 34 | aliasColors={}, 35 | pieType='pie', 36 | valueName='current', 37 | showLegend=true, 38 | showLegendPercentage=true, 39 | legendType='Right side', 40 | repeat=null, 41 | repeatDirection=null, 42 | maxPerRow=null, 43 | ):: { 44 | type: 'grafana-piechart-panel', 45 | [if description != null then 'description']: description, 46 | pieType: pieType, 47 | title: title, 48 | aliasColors: aliasColors, 49 | [if span != null then 'span']: span, 50 | [if min_span != null then 'minSpan']: min_span, 51 | [if height != null then 'height']: height, 52 | [if repeat != null then 'repeat']: repeat, 53 | [if repeatDirection != null then 'repeatDirection']: repeatDirection, 54 | [if maxPerRow != null then 'maxPerRow']: maxPerRow, 55 | valueName: valueName, 56 | datasource: datasource, 57 | legend: { 58 | show: showLegend, 59 | values: true, 60 | percentage: showLegendPercentage, 61 | }, 62 | legendType: legendType, 63 | targets: [ 64 | ], 65 | _nextTarget:: 0, 66 | addTarget(target):: self { 67 | local nextTarget = super._nextTarget, 68 | _nextTarget: nextTarget + 1, 69 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 70 | }, 71 | }, 72 | } 73 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/pluginlist.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Returns a new pluginlist panel that can be added in a row. 4 | * It requires the pluginlist panel plugin in grafana, which is built-in. 5 | * 6 | * @name pluginlist.new 7 | * 8 | * @param title The title of the pluginlist panel. 
9 | * @param description (optional) Description of the panel 10 | * @param limit (optional) Set maximum items in a list 11 | * @return A json that represents a pluginlist panel 12 | */ 13 | new( 14 | title, 15 | description=null, 16 | limit=null, 17 | ):: { 18 | type: 'pluginlist', 19 | title: title, 20 | [if limit != null then 'limit']: limit, 21 | [if description != null then 'description']: description, 22 | }, 23 | } 24 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/prometheus.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [Prometheus target](https://grafana.com/docs/grafana/latest/datasources/prometheus/) 4 | * to be added to panels. 5 | * 6 | * @name prometheus.target 7 | * 8 | * @param expr PromQL query to be exercised against Prometheus. Checkout [Prometheus documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/). 9 | * @param format (default `'time_series'`) Switch between `'table'`, `'time_series'` or `'heatmap'`. Table will only work in the Table panel. Heatmap is suitable for displaying metrics of the Histogram type on a Heatmap panel. Under the hood, it converts cumulative histograms to regular ones and sorts series by the bucket bound. 10 | * @param intervalFactor (default `2`) 11 | * @param legendFormat (default `''`) Controls the name of the time series, using name or pattern. For example `{{hostname}}` is replaced with the label value for the label `hostname`. 12 | * @param datasource (optional) Name of the Prometheus datasource. Leave by default otherwise. 13 | * @param interval (optional) Time span used to aggregate or group data points by time. By default Grafana uses an automatic interval calculated based on the width of the graph. 14 | * @param instant (optional) Perform an "instant" query, to return only the latest value that Prometheus has scraped for the requested time series. Instant queries return results much faster than normal range queries. Use them to look up label sets. 15 | * @param hide (optional) Set to `true` to hide the target from the panel. 16 | * 17 | * @return A Prometheus target to be added to panels. 18 | */ 19 | target( 20 | expr, 21 | format='time_series', 22 | intervalFactor=2, 23 | legendFormat='', 24 | datasource=null, 25 | interval=null, 26 | instant=null, 27 | hide=null, 28 | ):: { 29 | [if hide != null then 'hide']: hide, 30 | [if datasource != null then 'datasource']: datasource, 31 | expr: expr, 32 | format: format, 33 | intervalFactor: intervalFactor, 34 | legendFormat: legendFormat, 35 | [if interval != null then 'interval']: interval, 36 | [if instant != null then 'instant']: instant, 37 | }, 38 | } 39 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/row.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [row](https://grafana.com/docs/grafana/latest/features/dashboard/dashboards/#rows). 4 | * Rows are logical dividers within a dashboard and used to group panels together. 5 | * 6 | * @name row.new 7 | * 8 | * @param title The title of the row. 9 | * @param showTitle (default `true` if title is set) Whether to show the row title 10 | * @paral titleSize (default `'h6'`) The size of the title 11 | * @param collapse (default `false`) The initial state of the row when opening the dashboard. 
Panels in a collapsed row are not load until the row is expanded. 12 | * @param repeat (optional) Name of variable that should be used to repeat this row. It is recommended to use the variable in the row title as well. 13 | * 14 | * @method addPanels(panels) Appends an array of nested panels 15 | * @method addPanel(panel,gridPos) Appends a nested panel, with an optional grid position in grid coordinates, e.g. `gridPos={'x':0, 'y':0, 'w':12, 'h': 9}` 16 | */ 17 | new( 18 | title='Dashboard Row', 19 | height=null, 20 | collapse=false, 21 | repeat=null, 22 | showTitle=null, 23 | titleSize='h6' 24 | ):: { 25 | collapse: collapse, 26 | collapsed: collapse, 27 | [if height != null then 'height']: height, 28 | panels: [], 29 | repeat: repeat, 30 | repeatIteration: null, 31 | repeatRowId: null, 32 | showTitle: 33 | if showTitle != null then 34 | showTitle 35 | else 36 | title != 'Dashboard Row', 37 | title: title, 38 | type: 'row', 39 | titleSize: titleSize, 40 | addPanels(panels):: self { 41 | panels+: panels, 42 | }, 43 | addPanel(panel, gridPos={}):: self { 44 | panels+: [panel { gridPos: gridPos }], 45 | }, 46 | }, 47 | } 48 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/singlestat.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a singlestat panel. 4 | * 5 | * @name singlestat.new 6 | * 7 | * @param title The title of the singlestat panel. 8 | * @param format (default `'none'`) Unit 9 | * @param description (default `''`) 10 | * @param interval (optional) 11 | * @param height (optional) 12 | * @param datasource (optional) 13 | * @param span (optional) 14 | * @param min_span (optional) 15 | * @param decimals (optional) 16 | * @param valueName (default `'avg'`) 17 | * @param valueFontSize (default `'80%'`) 18 | * @param prefixFontSize (default `'50%'`) 19 | * @param postfixFontSize (default `'50%'`) 20 | * @param mappingType (default `1`) 21 | * @param repeat (optional) 22 | * @param repeatDirection (optional) 23 | * @param prefix (default `''`) 24 | * @param postfix (default `''`) 25 | * @param colors (default `['#299c46','rgba(237, 129, 40, 0.89)','#d44a3a']`) 26 | * @param colorBackground (default `false`) 27 | * @param colorValue (default `false`) 28 | * @param thresholds (default `''`) 29 | * @param valueMaps (default `{value: 'null',op: '=',text: 'N/A'}`) 30 | * @param rangeMaps (default `{value: 'null',op: '=',text: 'N/A'}`) 31 | * @param transparent (optional) 32 | * @param sparklineFillColor (default `'rgba(31, 118, 189, 0.18)'`) 33 | * @param sparklineFull (default `false`) 34 | * @param sparklineLineColor (default `'rgb(31, 120, 193)'`) 35 | * @param sparklineShow (default `false`) 36 | * @param gaugeShow (default `false`) 37 | * @param gaugeMinValue (default `0`) 38 | * @param gaugeMaxValue (default `100`) 39 | * @param gaugeThresholdMarkers (default `true`) 40 | * @param gaugeThresholdLabels (default `false`) 41 | * @param timeFrom (optional) 42 | * @param links (optional) 43 | * @param tableColumn (default `''`) 44 | * @param maxPerRow (optional) 45 | * @param maxDataPoints (default `100`) 46 | * 47 | * @method addTarget(target) Adds a target object. 
48 | */ 49 | new( 50 | title, 51 | format='none', 52 | description='', 53 | interval=null, 54 | height=null, 55 | datasource=null, 56 | span=null, 57 | min_span=null, 58 | decimals=null, 59 | valueName='avg', 60 | valueFontSize='80%', 61 | prefixFontSize='50%', 62 | postfixFontSize='50%', 63 | mappingType=1, 64 | repeat=null, 65 | repeatDirection=null, 66 | prefix='', 67 | postfix='', 68 | colors=[ 69 | '#299c46', 70 | 'rgba(237, 129, 40, 0.89)', 71 | '#d44a3a', 72 | ], 73 | colorBackground=false, 74 | colorValue=false, 75 | thresholds='', 76 | valueMaps=[ 77 | { 78 | value: 'null', 79 | op: '=', 80 | text: 'N/A', 81 | }, 82 | ], 83 | rangeMaps=[ 84 | { 85 | from: 'null', 86 | to: 'null', 87 | text: 'N/A', 88 | }, 89 | ], 90 | transparent=null, 91 | sparklineFillColor='rgba(31, 118, 189, 0.18)', 92 | sparklineFull=false, 93 | sparklineLineColor='rgb(31, 120, 193)', 94 | sparklineShow=false, 95 | gaugeShow=false, 96 | gaugeMinValue=0, 97 | gaugeMaxValue=100, 98 | gaugeThresholdMarkers=true, 99 | gaugeThresholdLabels=false, 100 | timeFrom=null, 101 | links=[], 102 | tableColumn='', 103 | maxPerRow=null, 104 | maxDataPoints=100, 105 | ):: 106 | { 107 | [if height != null then 'height']: height, 108 | [if description != '' then 'description']: description, 109 | [if repeat != null then 'repeat']: repeat, 110 | [if repeatDirection != null then 'repeatDirection']: repeatDirection, 111 | [if transparent != null then 'transparent']: transparent, 112 | [if min_span != null then 'minSpan']: min_span, 113 | title: title, 114 | [if span != null then 'span']: span, 115 | type: 'singlestat', 116 | datasource: datasource, 117 | targets: [ 118 | ], 119 | links: links, 120 | [if decimals != null then 'decimals']: decimals, 121 | maxDataPoints: maxDataPoints, 122 | interval: interval, 123 | cacheTimeout: null, 124 | format: format, 125 | prefix: prefix, 126 | postfix: postfix, 127 | nullText: null, 128 | valueMaps: valueMaps, 129 | [if maxPerRow != null then 'maxPerRow']: maxPerRow, 130 | mappingTypes: [ 131 | { 132 | name: 'value to text', 133 | value: 1, 134 | }, 135 | { 136 | name: 'range to text', 137 | value: 2, 138 | }, 139 | ], 140 | rangeMaps: rangeMaps, 141 | mappingType: 142 | if mappingType == 'value' 143 | then 144 | 1 145 | else if mappingType == 'range' 146 | then 147 | 2 148 | else 149 | mappingType, 150 | nullPointMode: 'connected', 151 | valueName: valueName, 152 | prefixFontSize: prefixFontSize, 153 | valueFontSize: valueFontSize, 154 | postfixFontSize: postfixFontSize, 155 | thresholds: thresholds, 156 | [if timeFrom != null then 'timeFrom']: timeFrom, 157 | colorBackground: colorBackground, 158 | colorValue: colorValue, 159 | colors: colors, 160 | gauge: { 161 | show: gaugeShow, 162 | minValue: gaugeMinValue, 163 | maxValue: gaugeMaxValue, 164 | thresholdMarkers: gaugeThresholdMarkers, 165 | thresholdLabels: gaugeThresholdLabels, 166 | }, 167 | sparkline: { 168 | fillColor: sparklineFillColor, 169 | full: sparklineFull, 170 | lineColor: sparklineLineColor, 171 | show: sparklineShow, 172 | }, 173 | tableColumn: tableColumn, 174 | _nextTarget:: 0, 175 | addTarget(target):: self { 176 | local nextTarget = super._nextTarget, 177 | _nextTarget: nextTarget + 1, 178 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 179 | }, 180 | }, 181 | } 182 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/sql.libsonnet: 
-------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates an SQL target. 4 | * 5 | * @name sql.target 6 | * 7 | * @param rawSql The SQL query 8 | * @param datasource (optional) 9 | * @param format (default `'time_series'`) 10 | * @param alias (optional) 11 | */ 12 | target( 13 | rawSql, 14 | datasource=null, 15 | format='time_series', 16 | alias=null, 17 | ):: { 18 | [if datasource != null then 'datasource']: datasource, 19 | format: format, 20 | [if alias != null then 'alias']: alias, 21 | rawSql: rawSql, 22 | }, 23 | } 24 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/stat_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [stat panel](https://grafana.com/docs/grafana/latest/panels/visualizations/stat-panel/). 4 | * 5 | * @name statPanel.new 6 | * 7 | * @param title Panel title. 8 | * @param description (optional) Panel description. 9 | * @param transparent (default `false`) Whether to display the panel without a background. 10 | * @param datasource (optional) Panel datasource. 11 | * @param allValues (default `false`) Show all values instead of reducing to one. 12 | * @param valueLimit (optional) Limit of values in all values mode. 13 | * @param reducerFunction (default `'mean'`) Function to use to reduce values to when using single value. 14 | * @param fields (default `''`) Fields that should be included in the panel. 15 | * @param orientation (default `'auto'`) Stacking direction in case of multiple series or fields. 16 | * @param colorMode (default `'value'`) 'value' or 'background'. 17 | * @param graphMode (default `'area'`) 'none' or 'area' to enable sparkline mode. 18 | * @param textMode (default `'auto'`) Control if name and value is displayed or just name. 19 | * @param justifyMode (default `'auto'`) 'auto' or 'center'. 20 | * @param unit (default `'none'`) Panel unit field option. 21 | * @param min (optional) Leave empty to calculate based on all values. 22 | * @param max (optional) Leave empty to calculate based on all values. 23 | * @param decimals (optional) Number of decimal places to show. 24 | * @param displayName (optional) Change the field or series name. 25 | * @param noValue (optional) What to show when there is no value. 26 | * @param thresholdsMode (default `'absolute'`) 'absolute' or 'percentage'. 27 | * @param timeFrom (optional) Override the relative time range. 28 | * @param repeat (optional) Name of variable that should be used to repeat this panel. 29 | * @param repeatDirection (default `'h'`) 'h' for horizontal or 'v' for vertical. 30 | * @param maxPerRow (optional) Maximum panels per row in repeat mode. 31 | * @param pluginVersion (default `'7'`) Plugin version the panel should be modeled for. This has been tested with the default, '7', and '6.7'. 32 | * 33 | * @method addTarget(target) Adds a target object. 34 | * @method addTargets(targets) Adds an array of targets. 35 | * @method addLink(link) Adds a [panel link](https://grafana.com/docs/grafana/latest/linking/panel-links/). Argument format: `{ title: 'Link Title', url: 'https://...', targetBlank: true }`. 36 | * @method addLinks(links) Adds an array of links. 37 | * @method addThreshold(step) Adds a [threshold](https://grafana.com/docs/grafana/latest/panels/thresholds/) step. Argument format: `{ color: 'green', value: 0 }`. 
38 | * @method addThresholds(steps) Adds an array of threshold steps. 39 | * @method addMapping(mapping) Adds a value mapping. 40 | * @method addMappings(mappings) Adds an array of value mappings. 41 | * @method addDataLink(link) Adds a data link. 42 | * @method addDataLinks(links) Adds an array of data links. 43 | */ 44 | new( 45 | title, 46 | description=null, 47 | transparent=false, 48 | datasource=null, 49 | allValues=false, 50 | valueLimit=null, 51 | reducerFunction='mean', 52 | fields='', 53 | orientation='auto', 54 | colorMode='value', 55 | graphMode='area', 56 | textMode='auto', 57 | justifyMode='auto', 58 | unit='none', 59 | min=null, 60 | max=null, 61 | decimals=null, 62 | displayName=null, 63 | noValue=null, 64 | thresholdsMode='absolute', 65 | timeFrom=null, 66 | repeat=null, 67 | repeatDirection='h', 68 | maxPerRow=null, 69 | pluginVersion='7', 70 | ):: { 71 | 72 | type: 'stat', 73 | title: title, 74 | [if description != null then 'description']: description, 75 | transparent: transparent, 76 | datasource: datasource, 77 | targets: [], 78 | links: [], 79 | [if repeat != null then 'repeat']: repeat, 80 | [if repeat != null then 'repeatDirection']: repeatDirection, 81 | [if timeFrom != null then 'timeFrom']: timeFrom, 82 | [if repeat != null then 'maxPerRow']: maxPerRow, 83 | 84 | // targets 85 | _nextTarget:: 0, 86 | addTarget(target):: self { 87 | local nextTarget = super._nextTarget, 88 | _nextTarget: nextTarget + 1, 89 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 90 | }, 91 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 92 | 93 | // links 94 | addLink(link):: self { 95 | links+: [link], 96 | }, 97 | addLinks(links):: std.foldl(function(p, l) p.addLink(l), links, self), 98 | 99 | pluginVersion: pluginVersion, 100 | } + ( 101 | 102 | if pluginVersion >= '7' then { 103 | options: { 104 | reduceOptions: { 105 | values: allValues, 106 | [if allValues && valueLimit != null then 'limit']: valueLimit, 107 | calcs: [ 108 | reducerFunction, 109 | ], 110 | fields: fields, 111 | }, 112 | orientation: orientation, 113 | colorMode: colorMode, 114 | graphMode: graphMode, 115 | justifyMode: justifyMode, 116 | textMode: textMode, 117 | }, 118 | fieldConfig: { 119 | defaults: { 120 | unit: unit, 121 | [if min != null then 'min']: min, 122 | [if max != null then 'max']: max, 123 | [if decimals != null then 'decimals']: decimals, 124 | [if displayName != null then 'displayName']: displayName, 125 | [if noValue != null then 'noValue']: noValue, 126 | thresholds: { 127 | mode: thresholdsMode, 128 | steps: [], 129 | }, 130 | mappings: [], 131 | links: [], 132 | }, 133 | }, 134 | 135 | // thresholds 136 | addThreshold(step):: self { 137 | fieldConfig+: { defaults+: { thresholds+: { steps+: [step] } } }, 138 | }, 139 | 140 | // mappings 141 | _nextMapping:: 0, 142 | addMapping(mapping):: self { 143 | local nextMapping = super._nextMapping, 144 | _nextMapping: nextMapping + 1, 145 | fieldConfig+: { defaults+: { mappings+: [mapping { id: nextMapping }] } }, 146 | }, 147 | 148 | // data links 149 | addDataLink(link):: self { 150 | fieldConfig+: { defaults+: { links+: [link] } }, 151 | }, 152 | 153 | // Overrides 154 | addOverride( 155 | matcher=null, 156 | properties=null, 157 | ):: self { 158 | fieldConfig+: { 159 | overrides+: [ 160 | { 161 | [if matcher != null then 'matcher']: matcher, 162 | [if properties != null then 'properties']: properties, 163 | }, 164 | ], 165 | }, 166 | }, 167 | addOverrides(overrides):: std.foldl(function(p, 
o) p.addOverride(o.matcher, o.properties), overrides, self), 168 | } else { 169 | options: { 170 | fieldOptions: { 171 | values: allValues, 172 | [if allValues && valueLimit != null then 'limit']: valueLimit, 173 | calcs: [ 174 | reducerFunction, 175 | ], 176 | fields: fields, 177 | defaults: { 178 | unit: unit, 179 | [if min != null then 'min']: min, 180 | [if max != null then 'max']: max, 181 | [if decimals != null then 'decimals']: decimals, 182 | [if displayName != null then 'displayName']: displayName, 183 | [if noValue != null then 'noValue']: noValue, 184 | thresholds: { 185 | mode: thresholdsMode, 186 | steps: [], 187 | }, 188 | mappings: [], 189 | links: [], 190 | }, 191 | }, 192 | orientation: orientation, 193 | colorMode: colorMode, 194 | graphMode: graphMode, 195 | justifyMode: justifyMode, 196 | }, 197 | 198 | // thresholds 199 | addThreshold(step):: self { 200 | options+: { fieldOptions+: { defaults+: { thresholds+: { steps+: [step] } } } }, 201 | }, 202 | 203 | // mappings 204 | _nextMapping:: 0, 205 | addMapping(mapping):: self { 206 | local nextMapping = super._nextMapping, 207 | _nextMapping: nextMapping + 1, 208 | options+: { fieldOptions+: { defaults+: { mappings+: [mapping { id: nextMapping }] } } }, 209 | }, 210 | 211 | // data links 212 | addDataLink(link):: self { 213 | options+: { fieldOptions+: { defaults+: { links+: [link] } } }, 214 | }, 215 | } 216 | 217 | ) + { 218 | addThresholds(steps):: std.foldl(function(p, s) p.addThreshold(s), steps, self), 219 | addMappings(mappings):: std.foldl(function(p, m) p.addMapping(m), mappings, self), 220 | addDataLinks(links):: std.foldl(function(p, l) p.addDataLink(l), links, self), 221 | }, 222 | } 223 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/table_panel.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [table panel](https://grafana.com/docs/grafana/latest/panels/visualizations/table-panel/) that can be added in a row. 4 | * It requires the table panel plugin in grafana, which is built-in. 5 | * 6 | * @name table.new 7 | * 8 | * @param title The title of the graph panel. 9 | * @param description (optional) Description of the panel 10 | * @param span (optional) Width of the panel 11 | * @param height (optional) Height of the panel 12 | * @param datasource (optional) Datasource 13 | * @param min_span (optional) Min span 14 | * @param styles (optional) Array of styles for the panel 15 | * @param columns (optional) Array of columns for the panel 16 | * @param sort (optional) Sorting instruction for the panel 17 | * @param transform (optional) Allow table manipulation to present data as desired 18 | * @param transparent (default: 'false') Whether to display the panel without a background 19 | * @param links (optional) Array of links for the panel. 
20 | * @return A json that represents a table panel 21 | * 22 | * @method addTarget(target) Adds a target object 23 | * @method addTargets(targets) Adds an array of targets 24 | * @method addColumn(field, style) Adds a column 25 | * @method hideColumn(field) Hides a column 26 | * @method addLink(link) Adds a link 27 | * @method addTransformation(transformation) Adds a transformation object 28 | * @method addTransformations(transformations) Adds an array of transformations 29 | */ 30 | new( 31 | title, 32 | description=null, 33 | span=null, 34 | min_span=null, 35 | height=null, 36 | datasource=null, 37 | styles=[], 38 | transform=null, 39 | transparent=false, 40 | columns=[], 41 | sort=null, 42 | time_from=null, 43 | time_shift=null, 44 | links=[], 45 | ):: { 46 | type: 'table', 47 | title: title, 48 | [if span != null then 'span']: span, 49 | [if min_span != null then 'minSpan']: min_span, 50 | [if height != null then 'height']: height, 51 | datasource: datasource, 52 | targets: [ 53 | ], 54 | styles: styles, 55 | columns: columns, 56 | timeFrom: time_from, 57 | timeShift: time_shift, 58 | links: links, 59 | [if sort != null then 'sort']: sort, 60 | [if description != null then 'description']: description, 61 | [if transform != null then 'transform']: transform, 62 | [if transparent == true then 'transparent']: transparent, 63 | _nextTarget:: 0, 64 | addTarget(target):: self { 65 | local nextTarget = super._nextTarget, 66 | _nextTarget: nextTarget + 1, 67 | targets+: [target { refId: std.char(std.codepoint('A') + nextTarget) }], 68 | }, 69 | addTargets(targets):: std.foldl(function(p, t) p.addTarget(t), targets, self), 70 | addColumn(field, style):: self { 71 | local style_ = style { pattern: field }, 72 | local column_ = { text: field, value: field }, 73 | styles+: [style_], 74 | columns+: [column_], 75 | }, 76 | hideColumn(field):: self { 77 | styles+: [{ 78 | alias: field, 79 | pattern: field, 80 | type: 'hidden', 81 | }], 82 | }, 83 | addLink(link):: self { 84 | links+: [link], 85 | }, 86 | addTransformation(transformation):: self { 87 | transformations+: [transformation], 88 | }, 89 | addTransformations(transformations):: std.foldl(function(p, t) p.addTransformation(t), transformations, self), 90 | }, 91 | } 92 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/template.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [template](https://grafana.com/docs/grafana/latest/variables/#templates) that can be added to a dashboard. 4 | * 5 | * @name template.new 6 | * 7 | * @param name Name of variable. 8 | * @param datasource Template [datasource](https://grafana.com/docs/grafana/latest/variables/variable-types/add-data-source-variable/) 9 | * @param query [Query expression](https://grafana.com/docs/grafana/latest/variables/variable-types/add-query-variable/) for the datasource. 10 | * @param label (optional) Display name of the variable dropdown. If null, then the dropdown label will be the variable name. 
11 | * @param allValues (optional) Formatting for [multi-value variables](https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/#formatting-multi-value-variables) 12 | * @param tagValuesQuery (default `''`) Group values into [selectable tags](https://grafana.com/docs/grafana/latest/variables/variable-value-tags/) 13 | * @param current (default `null`) Can be `null`, `'all'` for all, or any other custom text value. 14 | * @param hide (default `''`) `''`: the variable dropdown displays the variable Name or Label value. `'label'`: the variable dropdown only displays the selected variable value and a down arrow. Any other value: no variable dropdown is displayed on the dashboard. 15 | * @param regex (default `''`) Regex expression to filter or capture specific parts of the names returned by your data source query. To see examples, refer to [Filter variables with regex](https://grafana.com/docs/grafana/latest/variables/filter-variables-with-regex/). 16 | * @param refresh (default `'never'`) `'never'`: variables queries are cached and values are not updated. This is fine if the values never change, but problematic if they are dynamic and change a lot. `'load'`: Queries the data source every time the dashboard loads. This slows down dashboard loading, because the variable query needs to be completed before dashboard can be initialized. `'time'`: Queries the data source when the dashboard time range changes. Only use this option if your variable options query contains a time range filter or is dependent on the dashboard time range. 17 | * @param includeAll (default `false`) Whether all value option is available or not. 18 | * @param multi (default `false`) Whether multiple values can be selected or not from variable value list. 19 | * @param sort (default `0`) `0`: Without Sort, `1`: Alphabetical (asc), `2`: Alphabetical (desc), `3`: Numerical (asc), `4`: Numerical (desc). 20 | * 21 | * @return A [template](https://grafana.com/docs/grafana/latest/variables/#templates) 22 | */ 23 | new( 24 | name, 25 | datasource, 26 | query, 27 | label=null, 28 | allValues=null, 29 | tagValuesQuery='', 30 | current=null, 31 | hide='', 32 | regex='', 33 | refresh='never', 34 | includeAll=false, 35 | multi=false, 36 | sort=0, 37 | ):: 38 | { 39 | allValue: allValues, 40 | current: $.current(current), 41 | datasource: datasource, 42 | includeAll: includeAll, 43 | hide: $.hide(hide), 44 | label: label, 45 | multi: multi, 46 | name: name, 47 | options: [], 48 | query: query, 49 | refresh: $.refresh(refresh), 50 | regex: regex, 51 | sort: sort, 52 | tagValuesQuery: tagValuesQuery, 53 | tags: [], 54 | tagsQuery: '', 55 | type: 'query', 56 | useTags: false, 57 | }, 58 | /** 59 | * Use an [interval variable](https://grafana.com/docs/grafana/latest/variables/variable-types/add-interval-variable/) to represent time spans such as '1m', '1h', '1d'. You can think of them as a dashboard-wide "group by time" command. Interval variables change how the data is grouped in the visualization. You can also use the Auto Option to return a set number of data points per time span. 60 | * You can use an interval variable as a parameter to group by time (for InfluxDB), date histogram interval (for Elasticsearch), or as a summarize function parameter (for Graphite). 61 | * 62 | * @name template.interval 63 | * 64 | * @param name Variable name 65 | * @param query Comma separated values without spacing of intervals available for selection. Add `'auto'` in the query to turn on the Auto Option. 
Ex: `'auto,5m,10m,20m'`. 66 | * @param current Currently selected interval. Must be one of the values in the query. `'auto'` is allowed if defined in the query. 67 | * @param hide (default `''`) `''`: the variable dropdown displays the variable Name or Label value. `'label'`: the variable dropdown only displays the selected variable value and a down arrow. Any other value: no variable dropdown is displayed on the dashboard. 68 | * @param label (optional) Display name of the variable dropdown. If null, then the dropdown label will be the variable name. 69 | * @param auto_count (default `300`) Valid only if `'auto'` is defined in query. Number of times the current time range will be divided to calculate the value, similar to the Max data points query option. For example, if the current visible time range is 30 minutes, then the auto interval groups the data into 30 one-minute increments. The default value is 30 steps. 70 | * @param auto_min (default `'10s'`) Valid only if `'auto'` is defined in query. The minimum threshold below which the step count intervals will not divide the time. To continue the 30 minute example, if the minimum interval is set to `'2m'`, then Grafana would group the data into 15 two-minute increments. 71 | * 72 | * @return A new interval variable for templating. 73 | */ 74 | interval( 75 | name, 76 | query, 77 | current, 78 | hide='', 79 | label=null, 80 | auto_count=300, 81 | auto_min='10s', 82 | ):: 83 | { 84 | current: $.current(current), 85 | hide: $.hide(hide), 86 | label: label, 87 | name: name, 88 | query: std.join(',', std.filter($.filterAuto, std.split(query, ','))), 89 | refresh: 2, 90 | type: 'interval', 91 | auto: std.count(std.split(query, ','), 'auto') > 0, 92 | auto_count: auto_count, 93 | auto_min: auto_min, 94 | }, 95 | hide(hide):: 96 | if hide == '' then 0 else if hide == 'label' then 1 else 2, 97 | current(current):: { 98 | [if current != null then 'text']: current, 99 | [if current != null then 'value']: if current == 'auto' then 100 | '$__auto_interval' 101 | else if current == 'all' then 102 | '$__all' 103 | else 104 | current, 105 | }, 106 | /** 107 | * Data [source variables](https://grafana.com/docs/grafana/latest/variables/variable-types/add-data-source-variable/) 108 | * allow you to quickly change the data source for an entire dashboard. 109 | * They are useful if you have multiple instances of a data source, perhaps in different environments. 110 | * 111 | * @name template.datasource 112 | * 113 | * @param name Data source variable name. Ex: `'PROMETHEUS_DS'`. 114 | * @param query Type of data source. Ex: `'prometheus'`. 115 | * @param current Ex: `'Prometheus'`. 116 | * @param hide (default `''`) `''`: the variable dropdown displays the variable Name or Label value. `'label'`: the variable dropdown only displays the selected variable value and a down arrow. Any other value: no variable dropdown is displayed on the dashboard. 117 | * @param label (optional) Display name of the variable dropdown. If null, then the dropdown label will be the variable name. 118 | * @param regex (default `''`) Regex filter for which data source instances to choose from in the variable value drop-down list. Leave this field empty to display all instances. 119 | * @param refresh (default `'load'`) `'never'`: Variables queries are cached and values are not updated. This is fine if the values never change, but problematic if they are dynamic and change a lot. `'load'`: Queries the data source every time the dashboard loads. 
This slows down dashboard loading, because the variable query needs to be completed before dashboard can be initialized. `'time'`: Queries the data source when the dashboard time range changes. Only use this option if your variable options query contains a time range filter or is dependent on the dashboard time range. 120 | * 121 | * @return A [data source variable](https://grafana.com/docs/grafana/latest/variables/variable-types/add-data-source-variable/). 122 | */ 123 | datasource( 124 | name, 125 | query, 126 | current, 127 | hide='', 128 | label=null, 129 | regex='', 130 | refresh='load', 131 | ):: { 132 | current: $.current(current), 133 | hide: $.hide(hide), 134 | label: label, 135 | name: name, 136 | options: [], 137 | query: query, 138 | refresh: $.refresh(refresh), 139 | regex: regex, 140 | type: 'datasource', 141 | }, 142 | refresh(refresh):: if refresh == 'never' 143 | then 144 | 0 145 | else if refresh == 'load' 146 | then 147 | 1 148 | else if refresh == 'time' 149 | then 150 | 2 151 | else 152 | refresh, 153 | filterAuto(str):: str != 'auto', 154 | /** 155 | * Use a [custom variable](https://grafana.com/docs/grafana/latest/variables/variable-types/add-custom-variable/) 156 | * for values that do not change. 157 | * 158 | * @name template.custom 159 | * This might be numbers, strings, or even other variables. 160 | * @param name Variable name 161 | * @param query Comma separated without spacing list of selectable values. 162 | * @param current Selected value 163 | * @param refresh (default `'never'`) `'never'`: Variables queries are cached and values are not updated. This is fine if the values never change, but problematic if they are dynamic and change a lot. `'load'`: Queries the data source every time the dashboard loads. This slows down dashboard loading, because the variable query needs to be completed before dashboard can be initialized. `'time'`: Queries the data source when the dashboard time range changes. Only use this option if your variable options query contains a time range filter or is dependent on the dashboard time range. 164 | * @param label (default `''`) Display name of the variable dropdown. If you don’t enter a display name, then the dropdown label will be the variable name. 165 | * @param valuelabels (default `{}`) Display names for values defined in query. For example, if `query='new,old'`, then you may display them as follows `valuelabels={new: 'nouveau', old: 'ancien'}`. 166 | * @param multi (default `false`) Whether multiple values can be selected or not from variable value list. 167 | * @param allValues (optional) Formatting for [multi-value variables](https://grafana.com/docs/grafana/latest/variables/formatting-multi-value-variables/#formatting-multi-value-variables) 168 | * @param includeAll (default `false`) Whether all value option is available or not. 169 | * @param hide (default `''`) `''`: the variable dropdown displays the variable Name or Label value. `'label'`: the variable dropdown only displays the selected variable value and a down arrow. Any other value: no variable dropdown is displayed on the dashboard. 170 | * 171 | * @return A custom variable. 172 | */ 173 | custom( 174 | name, 175 | query, 176 | current, 177 | refresh='never', 178 | label='', 179 | valuelabels={}, 180 | multi=false, 181 | allValues=null, 182 | includeAll=false, 183 | hide='', 184 | ):: 185 | { 186 | // self has dynamic scope, so self may not be myself below. 187 | // '$' can't be used neither as this object is not top-level object. 
188 | local custom = self, 189 | 190 | allValue: allValues, 191 | current: { 192 | // Both 'all' and 'All' are accepted for consistency. 193 | value: if includeAll && (current == 'All' || current == 'all') then 194 | if multi then ['$__all'] else '$__all' 195 | else 196 | current, 197 | text: if std.isArray(current) then 198 | std.join(' + ', std.map(custom.valuelabel, current)) 199 | else 200 | custom.valuelabel(current), 201 | [if multi then 'selected']: true, 202 | }, 203 | options: std.map(self.option, self.query_array(query)), 204 | hide: $.hide(hide), 205 | includeAll: includeAll, 206 | label: label, 207 | refresh: $.refresh(refresh), 208 | multi: multi, 209 | name: name, 210 | query: query, 211 | type: 'custom', 212 | 213 | valuelabel(value):: if value in valuelabels then 214 | valuelabels[value] 215 | else value, 216 | 217 | option(option):: { 218 | text: custom.valuelabel(option), 219 | value: if includeAll && option == 'All' then '$__all' else option, 220 | [if multi then 'selected']: if multi && std.isArray(current) then 221 | std.member(current, option) 222 | else if multi then 223 | current == option 224 | else 225 | null, 226 | }, 227 | query_array(query):: std.split( 228 | if includeAll then 'All,' + query else query, ',' 229 | ), 230 | }, 231 | /** 232 | * [Text box variables](https://grafana.com/docs/grafana/latest/variables/variable-types/add-text-box-variable/) 233 | * display a free text input field with an optional default value. 234 | * This is the most flexible variable, because you can enter any value. 235 | * Use this type of variable if you have metrics with high cardinality or if you want to 236 | * update multiple panels in a dashboard at the same time. 237 | * 238 | * @name template.text 239 | * 240 | * @param name Variable name. 241 | * @param label (default `''`) Display name of the variable dropdown. If you don’t enter a display name, then the dropdown label will be the variable name. 242 | * 243 | * @return A text box variable. 244 | */ 245 | text( 246 | name, 247 | label='' 248 | ):: 249 | { 250 | current: { 251 | selected: false, 252 | text: '', 253 | value: '', 254 | }, 255 | name: name, 256 | label: label, 257 | query: '', 258 | type: 'textbox', 259 | }, 260 | /** 261 | * [Ad hoc filters](https://grafana.com/docs/grafana/latest/variables/variable-types/add-ad-hoc-filters/) 262 | * allow you to add key/value filters that are automatically added to all metric queries 263 | * that use the specified data source. Unlike other variables, you do not use ad hoc filters in queries. 264 | * Instead, you use ad hoc filters to write filters for existing queries. 265 | * Note: Ad hoc filter variables only work with InfluxDB, Prometheus, and Elasticsearch data sources. 266 | * 267 | * @name template.adhoc 268 | * 269 | * @param name Variable name. 270 | * @param datasource Target data source 271 | * @param label (optional) Display name of the variable dropdown. If you don’t enter a display name, then the dropdown label will be the variable name. 272 | * @param hide (default `''`) `''`: the variable dropdown displays the variable Name or Label value. `'label'`: the variable dropdown only displays the selected variable value and a down arrow. Any other value: no variable dropdown is displayed on the dashboard. 
273 | * 274 | * @return An ad hoc filter 275 | */ 276 | adhoc( 277 | name, 278 | datasource, 279 | label=null, 280 | hide='', 281 | ):: 282 | { 283 | datasource: datasource, 284 | hide: $.hide(hide), 285 | label: label, 286 | name: name, 287 | type: 'adhoc', 288 | }, 289 | } 290 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/text.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a [text panel](https://grafana.com/docs/grafana/latest/panels/visualizations/text-panel/). 4 | * 5 | * @name text.new 6 | * 7 | * @param title (default `''`) Panel title. 8 | * @param description (optional) Panel description. 9 | * @param datasource (optional) Panel datasource. 10 | * @param span (optional) 11 | * @param content (default `''`) 12 | * @param mode (default `'markdown'`) Rendering of the content: 'markdown','html', ... 13 | * @param transparent (optional) Whether to display the panel without a background. 14 | * @param repeat (optional) Name of variable that should be used to repeat this panel. 15 | * @param repeatDirection (default `'h'`) 'h' for horizontal or 'v' for vertical. 16 | * @param repeatMaxPerRow (optional) Maximum panels per row in repeat mode. 17 | */ 18 | new( 19 | title='', 20 | span=null, 21 | mode='markdown', 22 | content='', 23 | transparent=null, 24 | description=null, 25 | datasource=null, 26 | repeat=null, 27 | repeatDirection=null, 28 | repeatMaxPerRow=null, 29 | ):: 30 | { 31 | [if transparent != null then 'transparent']: transparent, 32 | title: title, 33 | [if span != null then 'span']: span, 34 | type: 'text', 35 | mode: mode, 36 | content: content, 37 | [if description != null then 'description']: description, 38 | datasource: datasource, 39 | [if repeat != null then 'repeat']: repeat, 40 | [if repeat != null then 'repeatDirection']: repeatDirection, 41 | [if repeat != null then 'maxPerRow']: repeatMaxPerRow, 42 | }, 43 | } 44 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/timepicker.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * Creates a Timepicker 4 | * 5 | * @name timepicker.new 6 | * 7 | * @param refresh_intervals (default: `['5s','10s','30s','1m','5m','15m','30m','1h','2h','1d']`) Array of time durations 8 | * @param time_options (default: `['5m','15m','1h','6h','12h','24h','2d','7d','30d']`) Array of time durations 9 | */ 10 | new( 11 | refresh_intervals=[ 12 | '5s', 13 | '10s', 14 | '30s', 15 | '1m', 16 | '5m', 17 | '15m', 18 | '30m', 19 | '1h', 20 | '2h', 21 | '1d', 22 | ], 23 | time_options=[ 24 | '5m', 25 | '15m', 26 | '1h', 27 | '6h', 28 | '12h', 29 | '24h', 30 | '2d', 31 | '7d', 32 | '30d', 33 | ], 34 | nowDelay=null, 35 | ):: { 36 | refresh_intervals: refresh_intervals, 37 | time_options: time_options, 38 | [if nowDelay != null then 'nowDelay']: nowDelay, 39 | }, 40 | } 41 | -------------------------------------------------------------------------------- /mixin/vendor/github.com/grafana/grafonnet-lib/grafonnet/transformation.libsonnet: -------------------------------------------------------------------------------- 1 | { 2 | /** 3 | * @name transformation.new 4 | */ 5 | new( 6 | id='', 7 | options={} 8 | ):: { 9 | id: id, 10 | options: options, 11 | }, 12 | } 13 | -------------------------------------------------------------------------------- 
/mixin/vendor/grafonnet:
--------------------------------------------------------------------------------
1 | github.com/grafana/grafonnet-lib/grafonnet
--------------------------------------------------------------------------------
/windows_setup.md:
--------------------------------------------------------------------------------
1 | # **ibm-db2-prometheus-exporter on Windows**
2 | 
3 | Although this exporter was originally designed for Linux, it can also run on Windows. The steps in this document outline the differences in the setup process on Windows systems.
4 | 
5 | **Note**: This document was tested against Windows Server 2022 using PowerShell.
6 | 
7 | ## Prerequisites
8 | 
9 | The following technologies must be present:
10 | 
11 | - Git (2.43.0.windows.1)
12 | - Go (1.21.6)
13 | - IBM DB2 (11.5.9)
14 | 
15 | _The versions listed above were used to test the exporter; they are not strict requirements. If difficulties arise, consider switching to similar versions._
16 | 
17 | The final prerequisite is `Chocolatey`, which provides the Windows equivalent of `make` so that the exporter binary can be built. Run these two commands to install `Chocolatey` and `make`, respectively:
18 | 
19 | ```powershell
20 | Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))
21 | choco install make
22 | ```
23 | 
24 | ### ENV Variables
25 | 
26 | As in the Linux instructions, the exporter requires that certain environment variables be set:
27 | 
28 | ```powershell
29 | set-item -path env:LD_LIBRARY_PATH -value "<path>\go\pkg\mod\github.com\ibmdb\clidriver\lib"
30 | set-item -path env:CGO_LDFLAGS -value "-L<path>\go\pkg\mod\github.com\ibmdb\tmp\clidriver\lib"
31 | set-item -path env:CGO_CFLAGS -value "-I<path>\go\pkg\mod\github.com\ibmdb\clidriver\include"
32 | set-item -path env:Db2CLP -value "**$**"
33 | ```
34 | 
35 | _Be sure to replace `<path>` with the path to the directory containing the `go` directory. This will usually be the home directory of the user that installed Go._
36 | 
37 | ### DB2 Setup
38 | 
39 | This guide assumes IBM DB2 is installed and running. If it has not been done already, [initialize the DB2 CLI environment](https://www.ibm.com/support/pages/db21061e-environment-not-initialized-when-running-db2-commands-windows-command-line), explicitly activate the database, and [connect to it](https://www.xtivia.com/blog/how-to-connect-to-a-db2-database/):
40 | 
41 | ```bash
42 | db2cmd -i -w db2clpsetcp
43 | db2 activate database <database name>
44 | db2 connect to <database name>
45 | 
46 | ```
47 | **Note:** Database activation only affects DB2's ability to report metrics; it does not affect DB2's behavior as a database.
48 | 
49 | The final requirement for DB2 is a user with the correct permissions. To give a user `DATAACCESS` permissions, execute the following command:
50 | 
51 | ```bash
52 | db2 grant dataaccess on database to user <username>
53 | ```
54 | 
55 | _Be sure to replace `<username>` with the name of the user._
56 | 
57 | ## Install DB2 Driver
58 | 
59 | As on Linux, the [go_ibm_db driver](https://github.com/ibmdb/go_ibm_db) should be installed and set up:
60 | 
61 | ```bash
62 | go install github.com/ibmdb/go_ibm_db/installer@latest
63 | cd go\pkg\mod\github.com\ibmdb\go_ibm_db@<version>\installer
64 | go run setup.go
65 | ```
66 | 
67 | _Be sure to replace `<version>` with the version of the driver that was installed._
68 | 
69 | ## Exporter
70 | 
71 | After cloning the exporter repository, modify the `go build` command in the `Makefile` so that the output binary is a Windows executable:
72 | 
73 | ```bash
74 | go build -o ./bin/ibm_db2_exporter.exe ./cmd/ibm-db2-exporter/main.go
75 | ```
76 | 
77 | Build the binary and run the exporter, passing in the credentials of a user with the required permissions:
78 | 
79 | ```bash
80 | make exporter
81 | .\bin\ibm_db2_exporter.exe --db="database" --dsn="DATABASE=database;HOSTNAME=localhost;PORT=50000;UID=username;PWD=password"
82 | ```
83 | 
84 | _Be sure to replace each field with information specific to the database and user._
85 | 
86 | At this point, metrics should appear when running the following command:
87 | 
88 | ```bash
89 | curl localhost:9953/metrics
90 | ```
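91 | 
92 | **Note**: In Windows PowerShell, `curl` is an alias for `Invoke-WebRequest`, so the command above may not behave like the Linux `curl`. If it does not, the metrics endpoint can be queried explicitly; the sketch below assumes the exporter is reachable on `localhost` on port `9953`, as used above:
93 | 
94 | ```powershell
95 | # `curl` resolves to the Invoke-WebRequest alias in Windows PowerShell 5.1.
96 | # Query the exporter's metrics endpoint explicitly and print the raw exposition text.
97 | (Invoke-WebRequest -Uri "http://localhost:9953/metrics" -UseBasicParsing).Content
98 | ```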
99 | 
100 | If there are no errors in the output of the exporter, the exporter is configured and running correctly. If errors occur, consult the `Troubleshooting` section of the main README file.
101 | 
--------------------------------------------------------------------------------