├── .gitignore ├── README.md ├── elasticsearch ├── .env ├── README.md ├── docker-compose.yml └── getYaml.sh ├── flink ├── README.md ├── docker-compose-cluster-with-monitor.yml ├── docker-compose-cluster-with-volumes.yml ├── docker-compose-cluster.yml ├── flink-examples-1.0-SNAPSHOT.jar └── images │ ├── 2021-06-26-11-22-39.png │ ├── 2021-06-26-17-06-55.png │ ├── 2021-06-26-17-07-49.png │ ├── 2021-07-14-16-25-37.png │ ├── 2021-07-14-16-26-05.png │ ├── 2021-07-14-16-28-07.png │ ├── 2021-07-14-16-28-47.png │ ├── 2021-07-14-16-35-44.png │ ├── 2021-07-14-16-47-28.png │ └── 2021-07-14-17-44-48.png ├── kafka ├── Dockerfile ├── README.md ├── kafka-kraft-single.yml └── kafka-zk-single.yml └── pulsar ├── README.md ├── conf ├── application.properties ├── bkvm.conf ├── bookkeeper.conf ├── broker.conf ├── proxy.conf └── zookeeper.conf ├── docker-compose-cluster.yml ├── images ├── 2021-06-15-10-35-39.png ├── 2021-06-15-10-47-45.png └── 2021-06-15-10-51-18.png ├── initAccount.sh ├── startCluster.sh ├── startStandalone.sh ├── stopCluster.sh └── stopStandalone.sh /.gitignore: -------------------------------------------------------------------------------- 1 | ### docker volumes ### 2 | data 3 | volumes 4 | 5 | ### Mac store file ### 6 | .DS_Store -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Docker Sample 2 | 3 | ## 说明 4 | 本项目提供常用的组件/工具使用`Docker`或`docker compose`的部署方法,便于更快的学习使用相关组件/工具。 5 | 6 | ## 已完成组件 7 | 8 | ### [pulsar](./pulsar) 9 | 10 | > 相关链接:[pulsar官网](https://pulsar.apache.org), [pulsar官方文档](https://pulsar.apache.org/docs/zh-CN/standalone/) 11 | 12 | - 支持单节点模式部署 13 | - 支持集群模式部署 14 | 15 | ### [flink](./flink) 16 | 17 | > 相关链接:[flink官方文档](https://flink.apache.org/) 18 | 19 | **包含三个版本Flink集群部署方法** 20 | 21 | 1. Flink集群部署 22 | > 常规Flink集群 23 | 24 | 2. 挂载lib目录部署 25 | > 挂载flink客户端的lib目录,便于添加第三方依赖包 26 | 27 | 3. Flink-Monitor集群部署 28 | > 配置Flink集群的[Metrics](https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/metrics/)转发至[Prometheus](https://prometheus.io/),并通过[Grafana](https://grafana.com/)展示Metrics数据 29 | 30 | 31 | ### [elasticsearch](./elasticsearch) 32 | 33 | > 相关链接:[Elasticsearch Guide](https://www.elastic.co/guide/en/elasticsearch/reference/8.1/index.html), [install with docker](https://www.elastic.co/guide/en/elasticsearch/reference/8.1/docker.html) 34 | 35 | - 基于官方docker部署文档实现,es集群+kibana部署 36 | 37 | ### [kafka](./kafka/) 38 | 39 | > 相关链接:[kafka doc](https://kafka.apache.org/documentation/#quickstart),[kafka github](https://github.com/apache/kafka/) 40 | 41 | - 基于Zookeeper的单节点kafka 42 | - 基于Kraft的单节点kafka -------------------------------------------------------------------------------- /elasticsearch/.env: -------------------------------------------------------------------------------- 1 | # Password for the 'elastic' user (at least 6 characters) 2 | ELASTIC_PASSWORD=esadmin 3 | 4 | # Password for the 'kibana_system' user (at least 6 characters) 5 | KIBANA_PASSWORD=esadmin 6 | 7 | # Version of Elastic products 8 | STACK_VERSION=8.1.3 9 | 10 | # Set the cluster name 11 | CLUSTER_NAME=docker-cluster 12 | 13 | # Set to 'basic' or 'trial' to automatically start the 30-day trial 14 | LICENSE=basic 15 | #LICENSE=trial 16 | 17 | # Port to expose Elasticsearch HTTP API to the host 18 | ES_PORT=9200 19 | #ES_PORT=127.0.0.1:9200 20 | 21 | # Port to expose Kibana to the host 22 | KIBANA_PORT=5601 23 | #KIBANA_PORT=80 24 | 25 | # Increase or decrease based on the available host memory (in bytes) 26 | MEM_LIMIT=1073741824 27 | 28 | # Project namespace (defaults to the current folder name if not set) 29 | #COMPOSE_PROJECT_NAME=myproject -------------------------------------------------------------------------------- /elasticsearch/README.md: -------------------------------------------------------------------------------- 1 | ES集群部署方法 2 | === 3 | 4 | ## QuickStart 5 | 6 | ```shell 7 | # git clone 8 | git clone https://github.com/perayb/docker-sample.git 9 | cd elasticsearch 10 | # 下载官方提供的docker-compose.yml文件,修改源为hub.docker.com,设置管理员密码/es版本 11 | sh getYaml.sh 12 | # 启动集群 13 | docker-compose up -d 14 | ``` 15 | - kibana地址: [http://localhost:5601/](http://localhost:5601/) 16 | - 登录kibana用户为 `elastic` ,密码为 `esadmin` 17 | 18 | ## 主要问题解决方法 19 | 20 | 1. 执行`sh getYaml.sh`时报错hostname `raw.githubusercontent.com`无法解析 21 | 22 | ```shell 23 | # 添加以下内容到hosts文件中 24 | 199.232.68.133 raw.githubusercontent.com 25 | ``` 26 | 27 | 2. es节点启动失败,报错: 28 | 29 | ```log 30 | ERROR: [1] bootstrap checks failed es01 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144] 31 | ``` 32 | 33 | 参考官方文档:[https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-prod-prerequisites](https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html#docker-prod-prerequisites) 34 | -------------------------------------------------------------------------------- /elasticsearch/docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: "2.2" 2 | 3 | services: 4 | setup: 5 | image: elasticsearch:${STACK_VERSION} 6 | volumes: 7 | - certs:/usr/share/elasticsearch/config/certs 8 | user: "0" 9 | command: > 10 | bash -c ' 11 | if [ x${ELASTIC_PASSWORD} == x ]; then 12 | echo "Set the ELASTIC_PASSWORD environment variable in the .env file"; 13 | exit 1; 14 | elif [ x${KIBANA_PASSWORD} == x ]; then 15 | echo "Set the KIBANA_PASSWORD environment variable in the .env file"; 16 | exit 1; 17 | fi; 18 | if [ ! -f config/certs/ca.zip ]; then 19 | echo "Creating CA"; 20 | bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip; 21 | unzip config/certs/ca.zip -d config/certs; 22 | fi; 23 | if [ ! -f config/certs/certs.zip ]; then 24 | echo "Creating certs"; 25 | echo -ne \ 26 | "instances:\n"\ 27 | " - name: es01\n"\ 28 | " dns:\n"\ 29 | " - es01\n"\ 30 | " - localhost\n"\ 31 | " ip:\n"\ 32 | " - 127.0.0.1\n"\ 33 | " - name: es02\n"\ 34 | " dns:\n"\ 35 | " - es02\n"\ 36 | " - localhost\n"\ 37 | " ip:\n"\ 38 | " - 127.0.0.1\n"\ 39 | " - name: es03\n"\ 40 | " dns:\n"\ 41 | " - es03\n"\ 42 | " - localhost\n"\ 43 | " ip:\n"\ 44 | " - 127.0.0.1\n"\ 45 | > config/certs/instances.yml; 46 | bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key; 47 | unzip config/certs/certs.zip -d config/certs; 48 | fi; 49 | echo "Setting file permissions" 50 | chown -R root:root config/certs; 51 | find . -type d -exec chmod 750 \{\} \;; 52 | find . -type f -exec chmod 640 \{\} \;; 53 | echo "Waiting for Elasticsearch availability"; 54 | until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done; 55 | echo "Setting kibana_system password"; 56 | until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done; 57 | echo "All done!"; 58 | ' 59 | healthcheck: 60 | test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"] 61 | interval: 1s 62 | timeout: 5s 63 | retries: 120 64 | 65 | es01: 66 | depends_on: 67 | setup: 68 | condition: service_healthy 69 | image: elasticsearch:${STACK_VERSION} 70 | volumes: 71 | - certs:/usr/share/elasticsearch/config/certs 72 | - esdata01:/usr/share/elasticsearch/data 73 | ports: 74 | - ${ES_PORT}:9200 75 | environment: 76 | - node.name=es01 77 | - cluster.name=${CLUSTER_NAME} 78 | - cluster.initial_master_nodes=es01,es02,es03 79 | - discovery.seed_hosts=es02,es03 80 | - ELASTIC_PASSWORD=${ELASTIC_PASSWORD} 81 | - bootstrap.memory_lock=true 82 | - xpack.security.enabled=true 83 | - xpack.security.http.ssl.enabled=true 84 | - xpack.security.http.ssl.key=certs/es01/es01.key 85 | - xpack.security.http.ssl.certificate=certs/es01/es01.crt 86 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt 87 | - xpack.security.http.ssl.verification_mode=certificate 88 | - xpack.security.transport.ssl.enabled=true 89 | - xpack.security.transport.ssl.key=certs/es01/es01.key 90 | - xpack.security.transport.ssl.certificate=certs/es01/es01.crt 91 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt 92 | - xpack.security.transport.ssl.verification_mode=certificate 93 | - xpack.license.self_generated.type=${LICENSE} 94 | mem_limit: ${MEM_LIMIT} 95 | ulimits: 96 | memlock: 97 | soft: -1 98 | hard: -1 99 | healthcheck: 100 | test: 101 | [ 102 | "CMD-SHELL", 103 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'", 104 | ] 105 | interval: 10s 106 | timeout: 10s 107 | retries: 120 108 | 109 | es02: 110 | depends_on: 111 | - es01 112 | image: elasticsearch:${STACK_VERSION} 113 | volumes: 114 | - certs:/usr/share/elasticsearch/config/certs 115 | - esdata02:/usr/share/elasticsearch/data 116 | environment: 117 | - node.name=es02 118 | - cluster.name=${CLUSTER_NAME} 119 | - cluster.initial_master_nodes=es01,es02,es03 120 | - discovery.seed_hosts=es01,es03 121 | - bootstrap.memory_lock=true 122 | - xpack.security.enabled=true 123 | - xpack.security.http.ssl.enabled=true 124 | - xpack.security.http.ssl.key=certs/es02/es02.key 125 | - xpack.security.http.ssl.certificate=certs/es02/es02.crt 126 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt 127 | - xpack.security.http.ssl.verification_mode=certificate 128 | - xpack.security.transport.ssl.enabled=true 129 | - xpack.security.transport.ssl.key=certs/es02/es02.key 130 | - xpack.security.transport.ssl.certificate=certs/es02/es02.crt 131 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt 132 | - xpack.security.transport.ssl.verification_mode=certificate 133 | - xpack.license.self_generated.type=${LICENSE} 134 | mem_limit: ${MEM_LIMIT} 135 | ulimits: 136 | memlock: 137 | soft: -1 138 | hard: -1 139 | healthcheck: 140 | test: 141 | [ 142 | "CMD-SHELL", 143 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'", 144 | ] 145 | interval: 10s 146 | timeout: 10s 147 | retries: 120 148 | 149 | es03: 150 | depends_on: 151 | - es02 152 | image: elasticsearch:${STACK_VERSION} 153 | volumes: 154 | - certs:/usr/share/elasticsearch/config/certs 155 | - esdata03:/usr/share/elasticsearch/data 156 | environment: 157 | - node.name=es03 158 | - cluster.name=${CLUSTER_NAME} 159 | - cluster.initial_master_nodes=es01,es02,es03 160 | - discovery.seed_hosts=es01,es02 161 | - bootstrap.memory_lock=true 162 | - xpack.security.enabled=true 163 | - xpack.security.http.ssl.enabled=true 164 | - xpack.security.http.ssl.key=certs/es03/es03.key 165 | - xpack.security.http.ssl.certificate=certs/es03/es03.crt 166 | - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt 167 | - xpack.security.http.ssl.verification_mode=certificate 168 | - xpack.security.transport.ssl.enabled=true 169 | - xpack.security.transport.ssl.key=certs/es03/es03.key 170 | - xpack.security.transport.ssl.certificate=certs/es03/es03.crt 171 | - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt 172 | - xpack.security.transport.ssl.verification_mode=certificate 173 | - xpack.license.self_generated.type=${LICENSE} 174 | mem_limit: ${MEM_LIMIT} 175 | ulimits: 176 | memlock: 177 | soft: -1 178 | hard: -1 179 | healthcheck: 180 | test: 181 | [ 182 | "CMD-SHELL", 183 | "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'", 184 | ] 185 | interval: 10s 186 | timeout: 10s 187 | retries: 120 188 | 189 | kibana: 190 | depends_on: 191 | es01: 192 | condition: service_healthy 193 | es02: 194 | condition: service_healthy 195 | es03: 196 | condition: service_healthy 197 | image: docker.elastic.co/kibana/kibana:${STACK_VERSION} 198 | volumes: 199 | - certs:/usr/share/kibana/config/certs 200 | - kibanadata:/usr/share/kibana/data 201 | ports: 202 | - ${KIBANA_PORT}:5601 203 | environment: 204 | - SERVERNAME=kibana 205 | - ELASTICSEARCH_HOSTS=https://es01:9200 206 | - ELASTICSEARCH_USERNAME=kibana_system 207 | - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD} 208 | - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt 209 | mem_limit: ${MEM_LIMIT} 210 | healthcheck: 211 | test: 212 | [ 213 | "CMD-SHELL", 214 | "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'", 215 | ] 216 | interval: 10s 217 | timeout: 10s 218 | retries: 120 219 | 220 | volumes: 221 | certs: 222 | driver: local 223 | esdata01: 224 | driver: local 225 | esdata02: 226 | driver: local 227 | esdata03: 228 | driver: local 229 | kibanadata: 230 | driver: local 231 | -------------------------------------------------------------------------------- /elasticsearch/getYaml.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 从官方github下载docker示例文件 4 | # https://github.com/elastic/elasticsearch/blob/master/docs/reference/setup/install/docker 5 | 6 | # 提示无法解析hosts时,添加以下内容到hosts文件中 7 | # 199.232.68.133 raw.githubusercontent.com 8 | 9 | # docker-compose文件 10 | curl -o docker-compose.yml https://raw.githubusercontent.com/elastic/elasticsearch/master/docs/reference/setup/install/docker/docker-compose.yml 11 | # 下载环境配置 12 | curl -o .env https://raw.githubusercontent.com/elastic/elasticsearch/master/docs/reference/setup/install/docker/.env 13 | 14 | # 修改docker-compose.yml的image地址 15 | sed -i 's|docker.elastic.co/elasticsearch/||g' docker-compose.yml 16 | 17 | # 设置es密码为esadmin 18 | sed -i 's/^ELASTIC_PASSWORD=.*$/ELASTIC_PASSWORD=esadmin/g' .env 19 | # 设置kibana管理员密码 20 | sed -i 's/^KIBANA_PASSWORD=.*$/KIBANA_PASSWORD=esadmin/g' .env 21 | # 设置es版本号为8.1.3 22 | sed -i 's/^STACK_VERSION=.*$/STACK_VERSION=8.1.3/g' .env -------------------------------------------------------------------------------- /flink/README.md: -------------------------------------------------------------------------------- 1 | 基于Docker的Flink集群部署和作业提交方法 2 | === 3 | 4 | ## 准备工作 5 | 6 | 1. 安装最新版本的[docker](https://www.docker.com/community-edition) 7 | 2. 拉取[flink](https://hub.docker.com/_/flink), [Prometheus](https://hub.docker.com/r/prom/prometheus), [pushgateway](https://hub.docker.com/r/prom/pushgateway), [grafana](https://hub.docker.com/r/grafana/grafana) 8 | 9 | ``` shell 10 | docker pull flink:scala_2.11-java8 11 | # flink-monitor 涉及额外镜像 12 | docker pull prom/prometheus 13 | docker pull prom/pushgateway 14 | docker pull grafana/grafana 15 | ``` 16 | 17 | ## 部署流程 18 | 19 | **包含三个版本Flink集群部署方法** 20 | 21 | 1. [Flink集群部署](#flink集群部署) 22 | > 常规Flink集群 23 | 24 | 2. [挂载lib目录部署](#挂载lib目录部署) 25 | > 挂载flink客户端的lib目录,便于添加第三方依赖包 26 | 27 | 3. [Flink-Monitor集群部署] 28 | > 配置Flink集群的[Metrics](https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/metrics/)转发至[Prometheus](https://prometheus.io/),并通过[Grafana](https://grafana.com/)展示Metrics数据 29 | ] 30 | 自定义Flink作业提交到集群方法参考[提交自定义flink作业](#提交自定义flink作业) 31 | 32 | ### Flink集群部署 33 | 34 | 启动Flink集群 35 | ``` linux 36 | docker compose -f docker-compose-cluster.yml up -d 37 | ``` 38 | 39 | 移除Flink集群 40 | ``` linux 41 | docker compose -f docker-compose-cluster.yml down 42 | ``` 43 | 44 | 等待Flink集群启动完成后,打开`Flink Web`页面 45 | - http://localhost:8081/#/overview 46 | 47 | `Flink Web`页面打开后,能看到两个taskmanager说明集群部署完成 48 | 49 | ![](images/2021-06-26-11-22-39.png) 50 | 51 | 修改集群taskmanager数量 52 | ``` linux 53 | docker-compose -f docker-compose-cluster.yml scale taskmanager= 54 | ``` 55 | 56 | 停止和启动集群(保留容器): 57 | ``` shell 58 | # 停止容器 59 | docker compose -f docker-compose-cluster.yml stop 60 | # 启动容器 61 | docker compose -f docker-compose-cluster.yml start 62 | ``` 63 | 64 | #### 容器配置说明 65 | 为保证`Flink Web`页面可以看到STDOUT日志,因此修改entrypoint为如下启动方式,修改原理参考- [Flink Web UI不显示STDOUT问题](https://stackoverflow.com/questions/54036010/apache-flink-the-file-stdout-is-not-available-on-the-taskexecutor) 66 | 67 | ``` yml 68 | entrypoint: 69 | - sh 70 | - -c 71 | - | 72 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 73 | /docker-entrypoint.sh jobmanager 74 | sleep 10 && tail -f -n +0 /opt/flink/log/* 75 | ``` 76 | 77 | ### 挂载lib目录部署 78 | > 运行Flink作业时,如果涉及第三方jar包,则需要引入至Flink客户端,因此补充挂在flink lib目录的部署方式。 79 | 80 | 获取已有lib目录文件 81 | ``` shell 82 | # 创建挂载目录 83 | mkdir ./volumes 84 | # 启动Flink集群 85 | docker compose -f docker-compose-cluster.yml up -d 86 | # 获取lib目录文件 87 | docker cp flink_jobmanager_1:/opt/flink/lib volumes/lib 88 | # 移除Flink集群 89 | docker compose -f docker-compose-cluster.yml down 90 | ``` 91 | 92 | 启动挂载lib的Flink集群 93 | ``` linux 94 | docker compose -f docker-compose-cluster-with-volumes.yml up -d 95 | ``` 96 | 97 | 移除集群 98 | ``` linux 99 | docker compose -f docker-compose-cluster-with-volumes.yml down 100 | ``` 101 | 102 | `Flink Web`页面 103 | - http://localhost:8081/#/overview 104 | 105 | 停止和启动集群(保留容器): 106 | ``` shell 107 | # 停止容器 108 | docker compose -f docker-compose-cluster-with-volumes.yml stop 109 | # 启动容器 110 | docker compose -f docker-compose-cluster-with-volumes.yml start 111 | ``` 112 | 113 | ### Flink-Monitor集群部署 114 | 115 | > 配置Flink Metrics监控通过pushgateway转发至prometheus,并通过grafana查看Flink Session和作业的监控结果。 116 | 117 | 启动Flink集群(挂载了lib目录) 118 | ``` linux 119 | docker compose -f docker-compose-cluster-with-monitor.yml up -d 120 | ``` 121 | 122 | 移除Flink集群 123 | ``` linux 124 | docker compose -f docker-compose-cluster-with-monitor.yml down 125 | ``` 126 | 127 | 等待Flink集群启动完成后,验证以下页面是否显示正常: 128 | - `Flink Web`: http://localhost:8081/#/overview 129 | ![](images/2021-07-14-16-25-37.png) 130 | - `PushGateway`: http://localhost:9091 131 | ![](images/2021-07-14-16-26-05.png) 132 | - `Prometheus`: http://localhost:9090/targets 133 | ![](images/2021-07-14-16-28-07.png) 134 | - `Grafana`: http://localhost:3000/login 135 | - username: `admin`, password: `pulsar` 136 | ![](images/2021-07-14-16-28-47.png) 137 | 138 | 启动成功后,配置`Gafana`: 139 | 1. 配置`prometheus`地址为: http://prometheus:9090 140 | ![](images/2021-07-14-16-35-44.png) 141 | 2. 新建Dashboard,添加`prometheus`指标 142 | Panel配置页面可以检索到`prometheus`,并且选择相关指标后,能看到相关数据,说明部署正常 143 | ![](images/2021-07-14-16-47-28.png) 144 | 145 | #### 容器启动流程 146 | 147 | 容器启动流程如下: 148 | 149 | ``` 150 | 启动pushgateway, grafana 151 | -> 启动prometheus 152 | -> 启动flink jobmanager 153 | -> 启动flink taskmanager 154 | ``` 155 | 156 | #### 容器配置说明 157 | 158 | 1. `jobmanager`和`taskmanager`新增metrics相关环境配置,参考[Flink Metrics Reporter文档](https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/metric_reporters/#prometheuspushgateway),prometheuspushgateway部分 159 | 160 | ```yml 161 | environment: 162 | - | 163 | FLINK_PROPERTIES= 164 | jobmanager.rpc.address: jobmanager 165 | metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter 166 | metrics.reporter.promgateway.host: pushgateway 167 | metrics.reporter.promgateway.port: 9091 168 | metrics.reporter.promgateway.jobName: flink-jobmanager 169 | metrics.reporter.promgateway.randomJobNameSuffix: true 170 | metrics.reporter.promgateway.deleteOnShutdown: false 171 | ``` 172 | 173 | 2. `prometheus`重写entrypoint,在启动服务之前增加`pushgateway`配置 174 | ```yml 175 | entrypoint: 176 | - sh 177 | - -c 178 | - | 179 | echo -e " - job_name: pushgateway\n static_configs:\n - targets: ['pushgateway:9091']\n labels:\n instance: pushgateway" >> /etc/prometheus/prometheus.yml 180 | /bin/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.console.libraries=/usr/share/prometheus/console_libraries --web.console.templates=/usr/share/prometheus/consoles 181 | ``` 182 | 183 | ## 提交自定义Flink作业 184 | 185 | ### 页面提交作业 186 | 187 | 1. 登录[Flink Web](http://localhost:8081/#/overview)页面 188 | 2. 点击`Submit New Job` -> `Add New` 添加jar包 189 | 3. 以`flink-examples-1.0-SNAPSHOT.jar`为例:点击`jar`包名称 -> 配置`EntryClass`为`pers.yibo.flink.streaming.demo.hello.HelloFlink` -> 点击`submit`则可提交并运行作业 190 | 191 | **如下所示说明作业正确运行** 192 | 193 | 作业详情显示为`running` 194 | 195 | ![](images/2021-06-26-17-06-55.png) 196 | 197 | 点击作业对应的`taskmanager` -> 点击`stdout`可看到输出数据 198 | 199 | ![](images/2021-06-26-17-07-49.png) 200 | 201 | ### API方式提交作业 202 | 203 | Flink Rest Api文档参考:[Flink Rest Api文档](https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/rest_api/) 204 | 205 | #### Api可用性验证 206 | 本地集群rest验证: 207 | ```bash 208 | curl http://localhost:8081/config 209 | ``` 210 | 211 | 返回结果如下,说明Flink Rest Api访问成功 212 | ```json 213 | { 214 | "refresh-interval": 3000, 215 | "timezone-name": "Coordinated Universal Time", 216 | "timezone-offset": 0, 217 | "flink-version": "1.13.1", 218 | "flink-revision": "a7f3192 @ 2021-05-25T12:02:11+02:00", 219 | "features": { 220 | "web-submit": true 221 | } 222 | } 223 | ``` 224 | 225 | #### 作业提交流程 226 | 227 | 1. 上传jar包至集群,参考接口`/jars/upload` 228 | 229 | ```bash 230 | curl -X POST -H "Expect:" -F "jarfile=@./flink-examples-1.0-SNAPSHOT.jar" http://localhost:8081/jars/upload 231 | ``` 232 | 233 | 返回结果如下说明上传成功,记录上传的jar包名称:`bc4303b4-7bf1-4f34-a2b2-528cd0306fba_flink-examples-1.0-SNAPSHOT.jar` 234 | 235 | ```json 236 | { 237 | "filename": "/tmp/flink-web-b4297ad8-5d61-40cd-8838-21e1f0a31997/flink-web-upload/bc4303b4-7bf1-4f34-a2b2-528cd0306fba_flink-examples-1.0-SNAPSHOT.jar", 238 | "status": "success" 239 | } 240 | ``` 241 | 242 | 2. 运行作业,参考接口`/jars/:jarid/run` 243 | 244 | ```bash 245 | # 路径中的 jarid 为上一步的jar包名称 246 | curl -X POST http://localhost:8081/jars/bc4303b4-7bf1-4f34-a2b2-528cd0306fba_flink-examples-1.0-SNAPSHOT.jar/run?entry-class=pers.yibo.flink.streaming.demo.hello.HelloFlink 247 | ``` 248 | 返回结果为`jobid`说明作业已提交运行 249 | ```json 250 | {"jobid":"a87f4731b423c62ad2f5363e81dd3782"} 251 | ``` 252 | 253 | 3. 页面查询作业运行情况 254 | 可以看到上一步提交的jobid已正常运行 255 | ![](images/2021-07-14-17-44-48.png) 256 | 257 | ## 参考资料 258 | 259 | - [Flink Docker文档](https://ci.apache.org/projects/flink/flink-docs-master/zh/docs/deployment/resource-providers/standalone/docker/) 260 | - [Flink Web UI不显示STDOUT问题](https://stackoverflow.com/questions/54036010/apache-flink-the-file-stdout-is-not-available-on-the-taskexecutor) 261 | - [Flink Metrics文档](https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/metrics/) 262 | - [Flink Metrics Reporter文档](https://ci.apache.org/projects/flink/flink-docs-master/docs/deployment/metric_reporters/#prometheuspushgateway) 263 | - [Flink Rest Api文档](https://ci.apache.org/projects/flink/flink-docs-master/docs/ops/rest_api/) 264 | - [Prometheus Docker文档](https://prometheus.io/docs/prometheus/latest/installation/) 265 | - [Grafana Docker部署文档](https://grafana.com/docs/grafana/latest/installation/docker/) -------------------------------------------------------------------------------- /flink/docker-compose-cluster-with-monitor.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | jobmanager: 5 | image: flink:scala_2.11-java8 6 | ports: 7 | - "8081:8081" 8 | - "6123:6123" 9 | command: jobmanager 10 | environment: 11 | - | 12 | FLINK_PROPERTIES= 13 | jobmanager.rpc.address: jobmanager 14 | metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter 15 | metrics.reporter.promgateway.host: pushgateway 16 | metrics.reporter.promgateway.port: 9091 17 | metrics.reporter.promgateway.jobName: flink-jobmanager 18 | metrics.reporter.promgateway.randomJobNameSuffix: true 19 | metrics.reporter.promgateway.deleteOnShutdown: false 20 | volumes: 21 | - ./volumes/lib:/opt/flink/lib/ 22 | entrypoint: 23 | - sh 24 | - -c 25 | - | 26 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 27 | /docker-entrypoint.sh jobmanager 28 | sleep 10 && tail -f -n +0 /opt/flink/log/* 29 | depends_on: 30 | - pushgateway 31 | 32 | taskmanager: 33 | image: flink:scala_2.11-java8 34 | environment: 35 | - | 36 | FLINK_PROPERTIES= 37 | jobmanager.rpc.address: jobmanager 38 | metrics.reporter.promgateway.class: org.apache.flink.metrics.prometheus.PrometheusPushGatewayReporter 39 | metrics.reporter.promgateway.host: pushgateway 40 | metrics.reporter.promgateway.port: 9091 41 | metrics.reporter.promgateway.jobName: flink-taskmanager 42 | metrics.reporter.promgateway.randomJobNameSuffix: true 43 | metrics.reporter.promgateway.deleteOnShutdown: false 44 | volumes: 45 | - ./volumes/lib:/opt/flink/lib/ 46 | depends_on: 47 | - jobmanager 48 | scale: 2 49 | entrypoint: 50 | - sh 51 | - -c 52 | - | 53 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 54 | /docker-entrypoint.sh taskmanager 55 | sleep 10 && tail -f -n +0 /opt/flink/log/* 56 | 57 | prometheus: 58 | image: prom/prometheus 59 | ports: 60 | - "9090:9090" 61 | entrypoint: 62 | - sh 63 | - -c 64 | - | 65 | echo -e " - job_name: pushgateway\n static_configs:\n - targets: ['pushgateway:9091']\n labels:\n instance: pushgateway" >> /etc/prometheus/prometheus.yml 66 | /bin/prometheus --config.file=/etc/prometheus/prometheus.yml --storage.tsdb.path=/prometheus --web.console.libraries=/usr/share/prometheus/console_libraries --web.console.templates=/usr/share/prometheus/consoles 67 | depends_on: 68 | - pushgateway 69 | 70 | pushgateway: 71 | image: prom/pushgateway 72 | ports: 73 | - "9091:9091" 74 | 75 | grafana: 76 | image: grafana/grafana 77 | ports: 78 | - "3000:3000" -------------------------------------------------------------------------------- /flink/docker-compose-cluster-with-volumes.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | jobmanager: 5 | image: flink:scala_2.11-java8 6 | ports: 7 | - "8081:8081" 8 | - "6123:6123" 9 | command: jobmanager 10 | environment: 11 | - | 12 | FLINK_PROPERTIES= 13 | jobmanager.rpc.address: jobmanager 14 | volumes: 15 | - ./volumes/lib:/opt/flink/lib/ 16 | entrypoint: 17 | - sh 18 | - -c 19 | - | 20 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 21 | /docker-entrypoint.sh jobmanager 22 | sleep 10 && tail -f -n +0 /opt/flink/log/* 23 | 24 | taskmanager: 25 | image: flink:scala_2.11-java8 26 | environment: 27 | - | 28 | FLINK_PROPERTIES= 29 | jobmanager.rpc.address: jobmanager 30 | volumes: 31 | - ./volumes/lib:/opt/flink/lib/ 32 | depends_on: 33 | - jobmanager 34 | scale: 2 35 | entrypoint: 36 | - sh 37 | - -c 38 | - | 39 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 40 | /docker-entrypoint.sh taskmanager 41 | sleep 10 && tail -f -n +0 /opt/flink/log/* 42 | -------------------------------------------------------------------------------- /flink/docker-compose-cluster.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | jobmanager: 5 | image: flink:scala_2.11-java8 6 | ports: 7 | - "8081:8081" 8 | - "6123:6123" 9 | environment: 10 | - | 11 | FLINK_PROPERTIES= 12 | jobmanager.rpc.address: jobmanager 13 | entrypoint: 14 | - sh 15 | - -c 16 | - | 17 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 18 | /docker-entrypoint.sh jobmanager 19 | sleep 10 && tail -f -n +0 /opt/flink/log/* 20 | 21 | taskmanager: 22 | image: flink:scala_2.11-java8 23 | environment: 24 | - | 25 | FLINK_PROPERTIES= 26 | jobmanager.rpc.address: jobmanager 27 | depends_on: 28 | - jobmanager 29 | scale: 2 30 | entrypoint: 31 | - sh 32 | - -c 33 | - | 34 | sed -i 's/start-foreground/start/g' /docker-entrypoint.sh 35 | /docker-entrypoint.sh taskmanager 36 | sleep 10 && tail -f -n +0 /opt/flink/log/* 37 | -------------------------------------------------------------------------------- /flink/flink-examples-1.0-SNAPSHOT.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/flink-examples-1.0-SNAPSHOT.jar -------------------------------------------------------------------------------- /flink/images/2021-06-26-11-22-39.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-06-26-11-22-39.png -------------------------------------------------------------------------------- /flink/images/2021-06-26-17-06-55.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-06-26-17-06-55.png -------------------------------------------------------------------------------- /flink/images/2021-06-26-17-07-49.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-06-26-17-07-49.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-16-25-37.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-16-25-37.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-16-26-05.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-16-26-05.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-16-28-07.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-16-28-07.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-16-28-47.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-16-28-47.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-16-35-44.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-16-35-44.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-16-47-28.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-16-47-28.png -------------------------------------------------------------------------------- /flink/images/2021-07-14-17-44-48.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/flink/images/2021-07-14-17-44-48.png -------------------------------------------------------------------------------- /kafka/Dockerfile: -------------------------------------------------------------------------------- 1 | FROM ubuntu:20.04 2 | 3 | ENV kafka_version=3.1.0 4 | ENV scala_version=2.13 5 | 6 | LABEL author="bobo" 7 | LABEL version="kafka_${scala_version}-${kafka_version}" 8 | 9 | # change ubuntu mirror 10 | RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list 11 | RUN apt-get -qq update 12 | 13 | # change timezone 14 | RUN apt-get install -y tzdata 15 | RUN ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime 16 | RUN echo "Asia/Shanghai" >> /etc/timezone 17 | 18 | # install wget & jdk 19 | RUN apt-get install -y wget openjdk-8-jdk 20 | 21 | # download kafka binary 22 | WORKDIR /opt 23 | RUN wget https://dlcdn.apache.org/kafka/${kafka_version}/kafka_${scala_version}-${kafka_version}.tgz \ 24 | && tar -xzf kafka_${scala_version}-${kafka_version}.tgz \ 25 | && rm kafka_${scala_version}-${kafka_version}.tgz 26 | 27 | WORKDIR /opt/kafka_${scala_version}-${kafka_version} 28 | -------------------------------------------------------------------------------- /kafka/README.md: -------------------------------------------------------------------------------- 1 | kafka部署方法 2 | === 3 | 4 | ## QuickStart 5 | 6 | ### 单节点部署 7 | 8 | > yml文件中的镜像基于[Dockerfile](./Dockerfile)构建,并已上传至[DockerHub](https://hub.docker.com/repository/docker/per495/kafka) 9 | 10 | 1. 基于Zookeeper的单节点kafka容器 11 | ```shell 12 | # 启动 13 | docker-compose -f kafka-zk-single.yml up -d 14 | # 移除 15 | docker-compose -f kafka-zk-single.yml down 16 | ``` 17 | 18 | 2. 基于Kraft的单节点kafka容器 19 | ```shell 20 | # 启动 21 | docker-compose -f kafka-kraft-single.yml up -d 22 | # 移除 23 | docker-compose -f kafka-kraft-single.yml down 24 | ``` 25 | 26 | ## Dockerfile说明 27 | 28 | 基于`ubuntu:20.04`构建包含`kafka binary`文件的镜像,更换了ubuntu源为aliyun,未配置启动命令。 29 | 30 | ## 参考文档 31 | 32 | [kafka doc](https://kafka.apache.org/documentation/#quickstart) 33 | [kafka kraft github](https://github.com/apache/kafka/tree/trunk/config/kraft) -------------------------------------------------------------------------------- /kafka/kafka-kraft-single.yml: -------------------------------------------------------------------------------- 1 | version: "3.8" 2 | 3 | services: 4 | kafka-kraft: 5 | image: per495/kafka:3.1.0 6 | ports: 7 | - 9092:9092 8 | restart: on-failure 9 | healthcheck: 10 | test: bin/kafka-topics.sh --list --bootstrap-server localhost:9092 11 | interval: 30s 12 | timeout: 10s 13 | retries: 3 14 | entrypoint: 15 | - sh 16 | - -c 17 | - | 18 | uuid=$$(bin/kafka-storage.sh random-uuid) && bin/kafka-storage.sh format -t $${uuid} -c config/kraft/server.properties 19 | bin/kafka-server-start.sh config/kraft/server.properties 20 | -------------------------------------------------------------------------------- /kafka/kafka-zk-single.yml: -------------------------------------------------------------------------------- 1 | version: "3.8" 2 | 3 | services: 4 | kafka-zk: 5 | image: per495/kafka:3.1.0 6 | ports: 7 | - 2181:2181 8 | - 9092:9092 9 | restart: on-failure 10 | healthcheck: 11 | test: bin/kafka-topics.sh --list --bootstrap-server localhost:9092 12 | interval: 30s 13 | timeout: 10s 14 | retries: 3 15 | entrypoint: 16 | - sh 17 | - -c 18 | - | 19 | bin/zookeeper-server-start.sh -daemon config/zookeeper.properties 20 | bin/kafka-server-start.sh config/server.properties 21 | -------------------------------------------------------------------------------- /pulsar/README.md: -------------------------------------------------------------------------------- 1 | 基于Docker的pulsar集群部署方法 2 | === 3 | 4 | ## 准备工作 5 | 6 | 1. 安装最新版本的[docker](https://www.docker.com/community-edition) 7 | 2. 拉取[pulsar](https://hub.docker.com/r/apachepulsar/pulsar)和[pulsar-manager](https://hub.docker.com/r/apachepulsar/pulsar-manager)镜像 8 | 9 | ``` linux 10 | docker pull apachepulsar/pulsar:2.7.2 11 | docker pull apachepulsar/pulsar-manager:v0.2.0 12 | ``` 13 | 14 | ## 部署流程 15 | 16 | ### standalone方式部署 17 | 18 | 启动容器: 19 | ``` linux 20 | sh startStandalone.sh 21 | ``` 22 | 23 | 等待`pulsar-manager`启动完成后,初始化管理员信息: 24 | ``` linux 25 | sh initAccount.sh 26 | ``` 27 | 28 | 显示以下内容说明管理员信息初始化成功: 29 | ``` log 30 | > sh initAccount.sh 31 | % Total % Received % Xferd Average Speed Time Time Time Current 32 | Dload Upload Total Spent Left Speed 33 | 100 36 100 36 0 0 130 0 --:--:-- --:--:-- --:--:-- 130 34 | {"message":"Add super user success, please login"}% 35 | ``` 36 | 37 | 管理员信息初始化完成后,可从以下地址登入: 38 | > pulsar-manager: http://127.0.0.1:9527/# 39 | > username: `admin`, password: `pulsar` 40 | 41 | 构建成功后,点击 Clusters -> Cluster Name: standalone -> BROKERS ,可看到broker列表说明`standalone`方式部署完成 42 | 43 | ![](images/2021-06-15-10-35-39.png) 44 | 45 | 删除容器: 46 | ``` linux 47 | sh stopStandalone.sh 48 | ``` 49 | 50 | #### standalone方式容器说明 51 | `pulsar-standalone`容器 52 | - 配置了集群url地址为`pulsar-standalone`(便于`pulsar-manager`容器管理) 53 | - 未挂载数据盘,每次启动均为新实例 54 | ``` linux 55 | docker run -itd \ 56 | --name pulsar-standalone \ 57 | -p 6650:6650 \ 58 | -p 8080:8080 \ 59 | apachepulsar/pulsar:2.7.2 \ 60 | sh -c "bin/pulsar standalone > pulsar.log 2>&1 & \ 61 | sleep 30 && bin/pulsar-admin clusters update standalone \ 62 | --url http://pulsar-standalone:8080 \ 63 | --broker-url pulsar://pulsar-standalone:6650 & \ 64 | tail -F pulsar.log" 65 | ``` 66 | 67 | `pulsar-manager`容器 68 | - 配置了默认集群环境 69 | - 修改容器日志为`后端日志` 70 | ``` linux 71 | docker run -itd \ 72 | --name pulsar-manager \ 73 | -p 9527:9527 -p 7750:7750 \ 74 | -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \ 75 | --link pulsar-standalone \ 76 | --entrypoint="" \ 77 | apachepulsar/pulsar-manager:v0.2.0 \ 78 | sh -c "sed -i '/^default.environment.name/ s|.*|default.environment.name=pulsar-standalone|' /pulsar-manager/pulsar-manager/application.properties & \ 79 | sed -i '/^default.environment.service_url/ s|.*|default.environment.service_url=http://pulsar-standalone:8080|' /pulsar-manager/pulsar-manager/application.properties & \ 80 | /pulsar-manager/entrypoint.sh & \ 81 | tail -F /pulsar-manager/pulsar-manager/pulsar-manager.log" 82 | ``` 83 | 84 | ### cluster方式部署 85 | 86 | 启动集群(自动新建初始化数据目录`data`): 87 | ``` linux 88 | sh startCluster.sh 89 | ``` 90 | 91 | 等待`pulsar-manager`启动完成后,初始化管理员信息: 92 | ``` linux 93 | sh initAccount.sh 94 | ``` 95 | 96 | 显示以下内容说明管理员信息初始化成功: 97 | ``` log 98 | > sh initAccount.sh 99 | % Total % Received % Xferd Average Speed Time Time Time Current 100 | Dload Upload Total Spent Left Speed 101 | 100 36 100 36 0 0 130 0 --:--:-- --:--:-- --:--:-- 130 102 | {"message":"Add super user success, please login"}% 103 | ``` 104 | 105 | 管理员信息初始化完成后,可从以下地址登入: 106 | - Pulsar Manager 107 | > http://127.0.0.1:9527/# 108 | > username: `admin`, password: `pulsar` 109 | - BookKeeper Visual Manager 110 | > http://127.0.0.1:7750/bkvm/#/login 111 | > username: `admin`, password: `admin` 112 | 113 | 构建成功后,以下两个页面正常说明`cluster`方式部署完成 114 | - 点击 Clusters -> Cluster Name: pulsar-cluster-1 -> BROKERS ,可看到broker列表 115 | ![](images/2021-06-15-10-47-45.png) 116 | - 进入`BookKeeper Visual Manager`可看到3个BookKeeper节点 117 | ![](images/2021-06-15-10-51-18.png) 118 | 119 | 删除集群(删除容器,保留数据目录): 120 | ``` linux 121 | sh stopCluster.sh 122 | ``` 123 | 124 | 停止和启动集群(保留容器): 125 | ``` shell 126 | # 停止容器 127 | docker compose -f docker-compose-cluster.yml stop 128 | # 启动容器 129 | docker compose -f docker-compose-cluster.yml start 130 | ``` 131 | 132 | #### cluster方式容器说明 133 | 容器启动流程如下: 134 | 135 | ``` 136 | Zookeeper节点(zk1, zk2, zk3) 137 | -> 初始化集群元数据(init-metadata) 138 | -> Bookeeper节点(bookie1, bookie2, bookie3) 139 | -> puslar broker节点(broker1, broker2, broker3) 140 | -> 反向代理节点(pulsar-proxy) 141 | -> pulsar manager节点(dashboard) 142 | ``` 143 | 144 | 配置文件参考`conf`目录,数据目录为`data`,具体参考[docker-compose-cluster.yml](docker-compose-cluster.yml) 145 | 146 | 所有容器的重启策略为失败后重启 147 | ``` yml 148 | restart: on-failure 149 | ``` 150 | 151 | 日志均可通过`docker logs `查看 152 | 153 | ## 参考资料 154 | 155 | - [在 Docker 里配置单机 Pulsar](https://pulsar.apache.org/docs/zh-CN/next/standalone-docker/) 156 | - [pulsar集群元数据管理](https://pulsar.apache.org/docs/zh-CN/next/admin-api-clusters/) 157 | - [github: pulsar-manager](https://github.com/apache/pulsar-manager) 158 | -------------------------------------------------------------------------------- /pulsar/conf/application.properties: -------------------------------------------------------------------------------- 1 | # 2 | # Licensed under the Apache License, Version 2.0 (the "License"); 3 | # you may not use this file except in compliance with the License. 4 | # You may obtain a copy of the License at 5 | # 6 | # http://www.apache.org/licenses/LICENSE-2.0 7 | # 8 | # Unless required by applicable law or agreed to in writing, software 9 | # distributed under the License is distributed on an "AS IS" BASIS, 10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11 | # See the License for the specific language governing permissions and 12 | # limitations under the License. 13 | # 14 | 15 | spring.cloud.refresh.refreshable=none 16 | server.port=7750 17 | 18 | # configuration log 19 | logging.path= 20 | logging.file=pulsar-manager.log 21 | 22 | # DEBUG print execute sql 23 | logging.level.org.apache=INFO 24 | 25 | mybatis.type-aliases-package=org.apache.pulsar.manager 26 | 27 | # database connection 28 | 29 | # SQLLite 30 | #spring.datasource.driver-class-name=org.sqlite.JDBC 31 | #spring.datasource.url=jdbc:sqlite:pulsar_manager.db 32 | #spring.datasource.initialization-mode=always 33 | #spring.datasource.schema=classpath:/META-INF/sql/sqlite-schema.sql 34 | #spring.datasource.username= 35 | #spring.datasource.password= 36 | 37 | #HerdDB JDBC Driver 38 | spring.datasource.driver-class-name=herddb.jdbc.Driver 39 | # HerdDB - local in memory-only 40 | #spring.datasource.url=jdbc:herddb:local 41 | # HerdDB - start embedded server, data persisted on local disk (directory 'dbdata'), listening on localhost:7000 42 | spring.datasource.url=jdbc:herddb:server:localhost:7000?server.start=true&server.base.dir=dbdata 43 | # HerdDB - start embedded server 'diskless-cluster' mode, WAL and Data persisted on Bookies, Metadata on ZooKeeper in '/herd', listening on localhost:7000 44 | #spring.datasource.url=jdbc:herddb:zookeeper:localhost:2181?server.start=true&server.base.dir=dbdata&server.mode=diskless-cluster&server.node.id=localhost 45 | # HerdDB - connect to standalone server at localhost:7000 46 | #spring.datasource.url=jdbc:herddb:server:localhost:7000 47 | # HerdDB - connect to cluster, uses ZooKeeper for service discovery 48 | #spring.datasource.url=jdbc:herddb:zookeeper:localhost:2181/herd 49 | 50 | 51 | spring.datasource.schema=classpath:/META-INF/sql/herddb-schema.sql 52 | spring.datasource.username=sa 53 | spring.datasource.password=hdb 54 | spring.datasource.initialization-mode=always 55 | 56 | # postgresql configuration 57 | #spring.datasource.driver-class-name=org.postgresql.Driver 58 | #spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/pulsar_manager 59 | #spring.datasource.username=postgres 60 | #spring.datasource.password=postgres 61 | 62 | # zuul config 63 | # https://cloud.spring.io/spring-cloud-static/Dalston.SR5/multi/multi__router_and_filter_zuul.html 64 | # By Default Zuul adds Authorization to be dropped headers list. Below we are manually setting it 65 | zuul.sensitive-headers=Cookie,Set-Cookie 66 | zuul.routes.admin.path=/admin/** 67 | zuul.routes.admin.url=http://localhost:8080/admin/ 68 | zuul.routes.lookup.path=/lookup/** 69 | zuul.routes.lookup.url=http://localhost:8080/lookup/ 70 | 71 | # pagehelper plugin 72 | #pagehelper.helperDialect=sqlite 73 | # force 'mysql' for HerdDB, comment out for postgresql 74 | pagehelper.helperDialect=mysql 75 | 76 | backend.directRequestBroker=true 77 | backend.directRequestHost=http://localhost:8080 78 | backend.jwt.token= 79 | backend.broker.pulsarAdmin.authPlugin= 80 | backend.broker.pulsarAdmin.authParams= 81 | backend.broker.pulsarAdmin.tlsAllowInsecureConnection=false 82 | backend.broker.pulsarAdmin.tlsTrustCertsFilePath= 83 | backend.broker.pulsarAdmin.tlsEnableHostnameVerification=false 84 | 85 | jwt.secret=dab1c8ba-b01b-11e9-b384-186590e06885 86 | jwt.sessionTime=2592000 87 | # If user.management.enable is true, the following account and password will no longer be valid. 88 | pulsar-manager.account=pulsar 89 | pulsar-manager.password=pulsar 90 | # If true, the database is used for user management 91 | user.management.enable=true 92 | 93 | # Optional -> SECRET, PRIVATE, default -> PRIVATE, empty -> disable auth 94 | # SECRET mode -> bin/pulsar tokens create --secret-key file:///path/to/my-secret.key --subject test-user 95 | # PRIVATE mode -> bin/pulsar tokens create --private-key file:///path/to/my-private.key --subject test-user 96 | # Detail information: http://pulsar.apache.org/docs/en/security-token-admin/ 97 | jwt.broker.token.mode= 98 | jwt.broker.secret.key=file:///path/broker-secret.key 99 | jwt.broker.public.key=file:///path/pulsar/broker-public.key 100 | jwt.broker.private.key=file:///path/broker-private.key 101 | 102 | # bookie 103 | bookie.host=http://localhost:8050 104 | bookie.enable=false 105 | 106 | redirect.scheme=http 107 | redirect.host=localhost 108 | redirect.port=9527 109 | 110 | # Stats interval 111 | # millisecond 112 | insert.stats.interval=30000 113 | # millisecond 114 | clear.stats.interval=300000 115 | init.delay.interval=0 116 | 117 | # cluster data reload 118 | cluster.cache.reload.interval.ms=60000 119 | 120 | # Third party login options 121 | third.party.login.option= 122 | 123 | # Github login configuration 124 | github.client.id=your-client-id 125 | github.client.secret=your-client-secret 126 | github.oauth.host=https://github.com/login/oauth/access_token 127 | github.user.info=https://api.github.com/user 128 | github.login.host=https://github.com/login/oauth/authorize 129 | github.redirect.host=http://localhost:9527 130 | 131 | user.access.token.expire=604800 132 | 133 | # thymeleaf configuration for third login. 134 | spring.thymeleaf.cache=false 135 | spring.thymeleaf.prefix=classpath:/templates/ 136 | spring.thymeleaf.check-template-location=true 137 | spring.thymeleaf.suffix=.html 138 | spring.thymeleaf.encoding=UTF-8 139 | spring.thymeleaf.servlet.content-type=text/html 140 | spring.thymeleaf.mode=HTML5 141 | 142 | # default environment configuration 143 | default.environment.name=pulsar-cluster-1 144 | default.environment.service_url=http://pulsar-proxy:8080 145 | 146 | # enable tls encryption 147 | # keytool -import -alias test-keystore -keystore ca-certs -file certs/ca.cert.pem 148 | tls.enabled=false 149 | tls.keystore=keystore-file-path 150 | tls.keystore.password=keystore-password 151 | tls.hostname.verifier=false 152 | tls.pulsar.admin.ca-certs=ca-client-path 153 | 154 | # support peek message, default false 155 | pulsar.peek.message=false 156 | -------------------------------------------------------------------------------- /pulsar/conf/bkvm.conf: -------------------------------------------------------------------------------- 1 | # 2 | # Licensed under the Apache License, Version 2.0 (the "License"); 3 | # you may not use this file except in compliance with the License. 4 | # You may obtain a copy of the License at 5 | # 6 | # http://www.apache.org/licenses/LICENSE-2.0 7 | # 8 | # Unless required by applicable law or agreed to in writing, software 9 | # distributed under the License is distributed on an "AS IS" BASIS, 10 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 11 | # See the License for the specific language governing permissions and 12 | # limitations under the License. 13 | # 14 | 15 | # Change this to true in order to start BKVM 16 | bkvm.enabled=true 17 | 18 | # BookKeeper Connection 19 | # Default value zk+null://127.0.0.1:2181/ledgers works for Pulsar Standalone 20 | metadataServiceUri=zk+null://zk1:2181/ledgers 21 | 22 | # Refresh BK metadata at boot. 23 | # BK metadata are not scanned automatically in BKVM, you have to request it from the UI 24 | metdata.refreshAtBoot=true 25 | 26 | # HerdDB database connection, not to be changed if you are running embedded HerdDB in Pulsar Manager 27 | # If you are using PostGRE SQL you will have to change this configuration 28 | # We want to use the HerdDB database started by PulsarManager itself, by default BKVM wants to start its one database 29 | jdbc.url=jdbc:herddb:localhost:7000?server.mode=standalone&server.start=false 30 | jdbc.startDatabase=false 31 | server.mode=standalone 32 | server.start=false 33 | 34 | -------------------------------------------------------------------------------- /pulsar/conf/bookkeeper.conf: -------------------------------------------------------------------------------- 1 | # 2 | # Licensed to the Apache Software Foundation (ASF) under one 3 | # or more contributor license agreements. See the NOTICE file 4 | # distributed with this work for additional information 5 | # regarding copyright ownership. The ASF licenses this file 6 | # to you under the Apache License, Version 2.0 (the 7 | # "License"); you may not use this file except in compliance 8 | # with the License. You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, 13 | # software distributed under the License is distributed on an 14 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | # KIND, either express or implied. See the License for the 16 | # specific language governing permissions and limitations 17 | # under the License. 18 | # 19 | 20 | ## Bookie settings 21 | 22 | ############################################################################# 23 | ## Server parameters 24 | ############################################################################# 25 | 26 | # Port that bookie server listen on 27 | bookiePort=3181 28 | 29 | # Directories BookKeeper outputs its write ahead log. 30 | # Could define multi directories to store write head logs, separated by ','. 31 | # For example: 32 | # journalDirectories=/tmp/bk-journal1,/tmp/bk-journal2 33 | # If journalDirectories is set, bookies will skip journalDirectory and use 34 | # this setting directory. 35 | # journalDirectories=/tmp/bk-journal 36 | 37 | # Directory Bookkeeper outputs its write ahead log 38 | # @deprecated since 4.5.0. journalDirectories is preferred over journalDirectory. 39 | journalDirectory=data/bookkeeper/journal 40 | 41 | # Configure the bookie to allow/disallow multiple ledger/index/journal directories 42 | # in the same filesystem disk partition 43 | # allowMultipleDirsUnderSameDiskPartition=false 44 | 45 | # Minimum safe Usable size to be available in index directory for bookie to create 46 | # Index File while replaying journal at the time of bookie Start in Readonly Mode (in bytes) 47 | minUsableSizeForIndexFileCreation=1073741824 48 | 49 | # Set the network interface that the bookie should listen on. 50 | # If not set, the bookie will listen on all interfaces. 51 | # listeningInterface=eth0 52 | 53 | # Configure a specific hostname or IP address that the bookie should use to advertise itself to 54 | # clients. If not set, bookie will advertised its own IP address or hostname, depending on the 55 | # listeningInterface and useHostNameAsBookieID settings. 56 | advertisedAddress= 57 | 58 | # Whether the bookie allowed to use a loopback interface as its primary 59 | # interface(i.e. the interface it uses to establish its identity)? 60 | # By default, loopback interfaces are not allowed as the primary 61 | # interface. 62 | # Using a loopback interface as the primary interface usually indicates 63 | # a configuration error. For example, its fairly common in some VPS setups 64 | # to not configure a hostname, or to have the hostname resolve to 65 | # 127.0.0.1. If this is the case, then all bookies in the cluster will 66 | # establish their identities as 127.0.0.1:3181, and only one will be able 67 | # to join the cluster. For VPSs configured like this, you should explicitly 68 | # set the listening interface. 69 | allowLoopback=false 70 | 71 | # Interval to watch whether bookie is dead or not, in milliseconds 72 | bookieDeathWatchInterval=1000 73 | 74 | # When entryLogPerLedgerEnabled is enabled, checkpoint doesn't happens 75 | # when a new active entrylog is created / previous one is rolled over. 76 | # Instead SyncThread checkpoints periodically with 'flushInterval' delay 77 | # (in milliseconds) in between executions. Checkpoint flushes both ledger 78 | # entryLogs and ledger index pages to disk. 79 | # Flushing entrylog and index files will introduce much random disk I/O. 80 | # If separating journal dir and ledger dirs each on different devices, 81 | # flushing would not affect performance. But if putting journal dir 82 | # and ledger dirs on same device, performance degrade significantly 83 | # on too frequent flushing. You can consider increment flush interval 84 | # to get better performance, but you need to pay more time on bookie 85 | # server restart after failure. 86 | # This config is used only when entryLogPerLedgerEnabled is enabled. 87 | flushInterval=60000 88 | 89 | # Allow the expansion of bookie storage capacity. Newly added ledger 90 | # and index dirs must be empty. 91 | # allowStorageExpansion=false 92 | 93 | # Whether the bookie should use its hostname to register with the 94 | # co-ordination service(eg: Zookeeper service). 95 | # When false, bookie will use its ip address for the registration. 96 | # Defaults to false. 97 | useHostNameAsBookieID=false 98 | 99 | # Whether the bookie is allowed to use an ephemeral port (port 0) as its 100 | # server port. By default, an ephemeral port is not allowed. 101 | # Using an ephemeral port as the service port usually indicates a configuration 102 | # error. However, in unit tests, using an ephemeral port will address port 103 | # conflict problems and allow running tests in parallel. 104 | # allowEphemeralPorts=false 105 | 106 | # Whether allow the bookie to listen for BookKeeper clients executed on the local JVM. 107 | # enableLocalTransport=false 108 | 109 | # Whether allow the bookie to disable bind on network interfaces, 110 | # this bookie will be available only to BookKeeper clients executed on the local JVM. 111 | # disableServerSocketBind=false 112 | 113 | # The number of bytes we should use as chunk allocation for 114 | # org.apache.bookkeeper.bookie.SkipListArena 115 | # skipListArenaChunkSize=4194304 116 | 117 | # The max size we should allocate from the skiplist arena. Allocations 118 | # larger than this should be allocated directly by the VM to avoid fragmentation. 119 | # skipListArenaMaxAllocSize=131072 120 | 121 | # The bookie authentication provider factory class name. 122 | # If this is null, no authentication will take place. 123 | # bookieAuthProviderFactoryClass=null 124 | 125 | ############################################################################# 126 | ## Garbage collection settings 127 | ############################################################################# 128 | 129 | # How long the interval to trigger next garbage collection, in milliseconds 130 | # Since garbage collection is running in background, too frequent gc 131 | # will heart performance. It is better to give a higher number of gc 132 | # interval if there is enough disk capacity. 133 | gcWaitTime=900000 134 | 135 | # How long the interval to trigger next garbage collection of overreplicated 136 | # ledgers, in milliseconds [Default: 1 day]. This should not be run very frequently 137 | # since we read the metadata for all the ledgers on the bookie from zk 138 | gcOverreplicatedLedgerWaitTime=86400000 139 | 140 | # Number of threads that should handle write requests. if zero, the writes would 141 | # be handled by netty threads directly. 142 | numAddWorkerThreads=0 143 | 144 | # Number of threads that should handle read requests. if zero, the reads would 145 | # be handled by netty threads directly. 146 | numReadWorkerThreads=8 147 | 148 | # Number of threads that should be used for high priority requests 149 | # (i.e. recovery reads and adds, and fencing). 150 | numHighPriorityWorkerThreads=8 151 | 152 | # If read workers threads are enabled, limit the number of pending requests, to 153 | # avoid the executor queue to grow indefinitely 154 | maxPendingReadRequestsPerThread=2500 155 | 156 | # If add workers threads are enabled, limit the number of pending requests, to 157 | # avoid the executor queue to grow indefinitely 158 | maxPendingAddRequestsPerThread=10000 159 | 160 | # Whether force compaction is allowed when the disk is full or almost full. 161 | # Forcing GC may get some space back, but may also fill up disk space more quickly. 162 | # This is because new log files are created before GC, while old garbage 163 | # log files are deleted after GC. 164 | # isForceGCAllowWhenNoSpace=false 165 | 166 | # True if the bookie should double check readMetadata prior to gc 167 | # verifyMetadataOnGC=false 168 | 169 | ############################################################################# 170 | ## TLS settings 171 | ############################################################################# 172 | 173 | # TLS Provider (JDK or OpenSSL). 174 | # tlsProvider=OpenSSL 175 | 176 | # The path to the class that provides security. 177 | # tlsProviderFactoryClass=org.apache.bookkeeper.security.SSLContextFactory 178 | 179 | # Type of security used by server. 180 | # tlsClientAuthentication=true 181 | 182 | # Bookie Keystore type. 183 | # tlsKeyStoreType=JKS 184 | 185 | # Bookie Keystore location (path). 186 | # tlsKeyStore=null 187 | 188 | # Bookie Keystore password path, if the keystore is protected by a password. 189 | # tlsKeyStorePasswordPath=null 190 | 191 | # Bookie Truststore type. 192 | # tlsTrustStoreType=null 193 | 194 | # Bookie Truststore location (path). 195 | # tlsTrustStore=null 196 | 197 | # Bookie Truststore password path, if the trust store is protected by a password. 198 | # tlsTrustStorePasswordPath=null 199 | 200 | ############################################################################# 201 | ## Long poll request parameter settings 202 | ############################################################################# 203 | 204 | # The number of threads that should handle long poll requests. 205 | # numLongPollWorkerThreads=10 206 | 207 | # The tick duration in milliseconds for long poll requests. 208 | # requestTimerTickDurationMs=10 209 | 210 | # The number of ticks per wheel for the long poll request timer. 211 | # requestTimerNumTicks=1024 212 | 213 | ############################################################################# 214 | ## AutoRecovery settings 215 | ############################################################################# 216 | 217 | # The interval between auditor bookie checks. 218 | # The auditor bookie check, checks ledger metadata to see which bookies should 219 | # contain entries for each ledger. If a bookie which should contain entries is 220 | # unavailable, then the ledger containing that entry is marked for recovery. 221 | # Setting this to 0 disabled the periodic check. Bookie checks will still 222 | # run when a bookie fails. 223 | # The interval is specified in seconds. 224 | auditorPeriodicBookieCheckInterval=86400 225 | 226 | # The number of entries that a replication will rereplicate in parallel. 227 | rereplicationEntryBatchSize=100 228 | 229 | # Auto-replication 230 | # The grace period, in milliseconds, that the replication worker waits before fencing and 231 | # replicating a ledger fragment that's still being written to upon bookie failure. 232 | openLedgerRereplicationGracePeriod=30000 233 | 234 | # Whether the bookie itself can start auto-recovery service also or not 235 | autoRecoveryDaemonEnabled=true 236 | 237 | # How long to wait, in seconds, before starting auto recovery of a lost bookie 238 | lostBookieRecoveryDelay=0 239 | 240 | ############################################################################# 241 | ## Placement settings 242 | ############################################################################# 243 | 244 | # the following settings take effects when `autoRecoveryDaemonEnabled` is true. 245 | 246 | # The ensemble placement policy used for re-replicating entries. 247 | # 248 | # Options: 249 | # - org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy 250 | # - org.apache.bookkeeper.client.RegionAwareEnsemblePlacementPolicy 251 | # 252 | # Default value: 253 | # org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy 254 | # 255 | # ensemblePlacementPolicy=org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy 256 | 257 | ############################################################################# 258 | ## Netty server settings 259 | ############################################################################# 260 | 261 | # This settings is used to enabled/disabled Nagle's algorithm, which is a means of 262 | # improving the efficiency of TCP/IP networks by reducing the number of packets 263 | # that need to be sent over the network. 264 | # If you are sending many small messages, such that more than one can fit in 265 | # a single IP packet, setting server.tcpnodelay to false to enable Nagle algorithm 266 | # can provide better performance. 267 | # Default value is true. 268 | serverTcpNoDelay=true 269 | 270 | # This setting is used to send keep-alive messages on connection-oriented sockets. 271 | # serverSockKeepalive=true 272 | 273 | # The socket linger timeout on close. 274 | # When enabled, a close or shutdown will not return until all queued messages for 275 | # the socket have been successfully sent or the linger timeout has been reached. 276 | # Otherwise, the call returns immediately and the closing is done in the background. 277 | # serverTcpLinger=0 278 | 279 | # The Recv ByteBuf allocator initial buf size. 280 | # byteBufAllocatorSizeInitial=65536 281 | 282 | # The Recv ByteBuf allocator min buf size. 283 | # byteBufAllocatorSizeMin=65536 284 | 285 | # The Recv ByteBuf allocator max buf size. 286 | # byteBufAllocatorSizeMax=1048576 287 | 288 | # The maximum netty frame size in bytes. Any message received larger than this will be rejected. The default value is 5MB. 289 | nettyMaxFrameSizeBytes=5253120 290 | 291 | ############################################################################# 292 | ## Journal settings 293 | ############################################################################# 294 | 295 | # The journal format version to write. 296 | # Available formats are 1-6: 297 | # 1: no header 298 | # 2: a header section was added 299 | # 3: ledger key was introduced 300 | # 4: fencing key was introduced 301 | # 5: expanding header to 512 and padding writes to align sector size configured by `journalAlignmentSize` 302 | # 6: persisting explicitLac is introduced 303 | # By default, it is `6`. 304 | # If you'd like to disable persisting ExplicitLac, you can set this config to < `6` and also 305 | # fileInfoFormatVersionToWrite should be set to 0. If there is mismatch then the serverconfig is considered invalid. 306 | # You can disable `padding-writes` by setting journal version back to `4`. This feature is available in 4.5.0 307 | # and onward versions. 308 | journalFormatVersionToWrite=5 309 | 310 | # Max file size of journal file, in mega bytes 311 | # A new journal file will be created when the old one reaches the file size limitation 312 | journalMaxSizeMB=2048 313 | 314 | # Max number of old journal file to kept 315 | # Keep a number of old journal files would help data recovery in special case 316 | journalMaxBackups=5 317 | 318 | # How much space should we pre-allocate at a time in the journal. 319 | journalPreAllocSizeMB=16 320 | 321 | # Size of the write buffers used for the journal 322 | journalWriteBufferSizeKB=64 323 | 324 | # Should we remove pages from page cache after force write 325 | journalRemoveFromPageCache=true 326 | 327 | # Should the data be fsynced on journal before acknowledgment. 328 | # By default, data sync is enabled to guarantee durability of writes. 329 | # Beware: while disabling data sync in the Bookie journal might improve the bookie write performance, it will also 330 | # introduce the possibility of data loss. With no sync, the journal entries are written in the OS page cache but 331 | # not flushed to disk. In case of power failure, the affected bookie might lose the unflushed data. If the ledger 332 | # is replicated to multiple bookies, the chances of data loss are reduced though still present. 333 | journalSyncData=true 334 | 335 | # Should we group journal force writes, which optimize group commit 336 | # for higher throughput 337 | journalAdaptiveGroupWrites=true 338 | 339 | # Maximum latency to impose on a journal write to achieve grouping 340 | journalMaxGroupWaitMSec=1 341 | 342 | # Maximum writes to buffer to achieve grouping 343 | journalBufferedWritesThreshold=524288 344 | 345 | # The number of threads that should handle journal callbacks 346 | numJournalCallbackThreads=8 347 | 348 | # All the journal writes and commits should be aligned to given size. 349 | # If not, zeros will be padded to align to given size. 350 | # It only takes effects when journalFormatVersionToWrite is set to 5 351 | journalAlignmentSize=4096 352 | 353 | # Maximum entries to buffer to impose on a journal write to achieve grouping. 354 | # journalBufferedEntriesThreshold=0 355 | 356 | # If we should flush the journal when journal queue is empty 357 | journalFlushWhenQueueEmpty=false 358 | 359 | ############################################################################# 360 | ## Ledger storage settings 361 | ############################################################################# 362 | 363 | # Ledger storage implementation class 364 | ledgerStorageClass=org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage 365 | 366 | # Directory Bookkeeper outputs ledger snapshots 367 | # could define multi directories to store snapshots, separated by ',' 368 | # For example: 369 | # ledgerDirectories=/tmp/bk1-data,/tmp/bk2-data 370 | # 371 | # Ideally ledger dirs and journal dir are each in a different device, 372 | # which reduce the contention between random i/o and sequential write. 373 | # It is possible to run with a single disk, but performance will be significantly lower. 374 | ledgerDirectories=data/bookkeeper/ledgers 375 | # Directories to store index files. If not specified, will use ledgerDirectories to store. 376 | # indexDirectories=data/bookkeeper/ledgers 377 | 378 | # Interval at which the auditor will do a check of all ledgers in the cluster. 379 | # By default this runs once a week. The interval is set in seconds. 380 | # To disable the periodic check completely, set this to 0. 381 | # Note that periodic checking will put extra load on the cluster, so it should 382 | # not be run more frequently than once a day. 383 | auditorPeriodicCheckInterval=604800 384 | 385 | # Whether sorted-ledger storage enabled (default true) 386 | # sortedLedgerStorageEnabled=true 387 | 388 | # The skip list data size limitation (default 64MB) in EntryMemTable 389 | # skipListSizeLimit=67108864L 390 | 391 | ############################################################################# 392 | ## Ledger cache settings 393 | ############################################################################# 394 | 395 | # Max number of ledger index files could be opened in bookie server 396 | # If number of ledger index files reaches this limitation, bookie 397 | # server started to swap some ledgers from memory to disk. 398 | # Too frequent swap will affect performance. You can tune this number 399 | # to gain performance according your requirements. 400 | openFileLimit=0 401 | 402 | # The fileinfo format version to write. 403 | # Available formats are 0-1: 404 | # 0: Initial version 405 | # 1: persisting explicitLac is introduced 406 | # By default, it is `1`. 407 | # If you'd like to disable persisting ExplicitLac, you can set this config to 0 and 408 | # also journalFormatVersionToWrite should be set to < 6. If there is mismatch then the 409 | # serverconfig is considered invalid. 410 | fileInfoFormatVersionToWrite=0 411 | 412 | # Size of a index page in ledger cache, in bytes 413 | # A larger index page can improve performance writing page to disk, 414 | # which is efficient when you have small number of ledgers and these 415 | # ledgers have similar number of entries. 416 | # If you have large number of ledgers and each ledger has fewer entries, 417 | # smaller index page would improve memory usage. 418 | # pageSize=8192 419 | 420 | # How many index pages provided in ledger cache 421 | # If number of index pages reaches this limitation, bookie server 422 | # starts to swap some ledgers from memory to disk. You can increment 423 | # this value when you found swap became more frequent. But make sure 424 | # pageLimit*pageSize should not more than JVM max memory limitation, 425 | # otherwise you would got OutOfMemoryException. 426 | # In general, incrementing pageLimit, using smaller index page would 427 | # gain better performance in lager number of ledgers with fewer entries case 428 | # If pageLimit is -1, bookie server will use 1/3 of JVM memory to compute 429 | # the limitation of number of index pages. 430 | pageLimit=0 431 | 432 | ############################################################################# 433 | ## Ledger manager settings 434 | ############################################################################# 435 | 436 | # Ledger Manager Class 437 | # What kind of ledger manager is used to manage how ledgers are stored, managed 438 | # and garbage collected. Try to read 'BookKeeper Internals' for detail info. 439 | # ledgerManagerFactoryClass=org.apache.bookkeeper.meta.HierarchicalLedgerManagerFactory 440 | 441 | # @Deprecated - `ledgerManagerType` is deprecated in favor of using `ledgerManagerFactoryClass`. 442 | # ledgerManagerType=hierarchical 443 | 444 | # Root Zookeeper path to store ledger metadata 445 | # This parameter is used by zookeeper-based ledger manager as a root znode to 446 | # store all ledgers. 447 | zkLedgersRootPath=/ledgers 448 | 449 | ############################################################################# 450 | ## Entry log settings 451 | ############################################################################# 452 | 453 | # Max file size of entry logger, in bytes 454 | # A new entry log file will be created when the old one reaches the file size limitation 455 | logSizeLimit=1073741824 456 | 457 | # Enable/Disable entry logger preallocation 458 | entryLogFilePreallocationEnabled=true 459 | 460 | # Entry log flush interval in bytes. 461 | # Default is 0. 0 or less disables this feature and effectively flush 462 | # happens on log rotation. 463 | # Flushing in smaller chunks but more frequently reduces spikes in disk 464 | # I/O. Flushing too frequently may also affect performance negatively. 465 | flushEntrylogBytes=268435456 466 | 467 | # The number of bytes we should use as capacity for BufferedReadChannel. Default is 512 bytes. 468 | readBufferSizeBytes=4096 469 | 470 | # The number of bytes used as capacity for the write buffer. Default is 64KB. 471 | writeBufferSizeBytes=65536 472 | 473 | # Specifies if entryLog per ledger is enabled/disabled. If it is enabled, then there would be a 474 | # active entrylog for each ledger. It would be ideal to enable this feature if the underlying 475 | # storage device has multiple DiskPartitions or SSD and if in a given moment, entries of fewer 476 | # number of active ledgers are written to a bookie. 477 | # entryLogPerLedgerEnabled=false 478 | 479 | ############################################################################# 480 | ## Entry log compaction settings 481 | ############################################################################# 482 | 483 | # Set the rate at which compaction will readd entries. The unit is adds per second. 484 | compactionRate=1000 485 | 486 | # If bookie is using hostname for registration and in ledger metadata then 487 | # whether to use short hostname or FQDN hostname. Defaults to false. 488 | # useShortHostName=false 489 | 490 | # Threshold of minor compaction 491 | # For those entry log files whose remaining size percentage reaches below 492 | # this threshold will be compacted in a minor compaction. 493 | # If it is set to less than zero, the minor compaction is disabled. 494 | minorCompactionThreshold=0.2 495 | 496 | # Interval to run minor compaction, in seconds 497 | # If it is set to less than zero, the minor compaction is disabled. 498 | # Note: should be greater than gcWaitTime. 499 | minorCompactionInterval=3600 500 | 501 | # Set the maximum number of entries which can be compacted without flushing. 502 | # When compacting, the entries are written to the entrylog and the new offsets 503 | # are cached in memory. Once the entrylog is flushed the index is updated with 504 | # the new offsets. This parameter controls the number of entries added to the 505 | # entrylog before a flush is forced. A higher value for this parameter means 506 | # more memory will be used for offsets. Each offset consists of 3 longs. 507 | # This parameter should _not_ be modified unless you know what you're doing. 508 | # The default is 100,000. 509 | compactionMaxOutstandingRequests=100000 510 | 511 | # Threshold of major compaction 512 | # For those entry log files whose remaining size percentage reaches below 513 | # this threshold will be compacted in a major compaction. 514 | # Those entry log files whose remaining size percentage is still 515 | # higher than the threshold will never be compacted. 516 | # If it is set to less than zero, the minor compaction is disabled. 517 | majorCompactionThreshold=0.5 518 | 519 | # Interval to run major compaction, in seconds 520 | # If it is set to less than zero, the major compaction is disabled. 521 | # Note: should be greater than gcWaitTime. 522 | majorCompactionInterval=86400 523 | 524 | # Throttle compaction by bytes or by entries. 525 | isThrottleByBytes=false 526 | 527 | # Set the rate at which compaction will readd entries. The unit is adds per second. 528 | compactionRateByEntries=1000 529 | 530 | # Set the rate at which compaction will readd entries. The unit is bytes added per second. 531 | compactionRateByBytes=1000000 532 | 533 | ############################################################################# 534 | ## Statistics 535 | ############################################################################# 536 | 537 | # Whether statistics are enabled 538 | # enableStatistics=true 539 | 540 | # Stats Provider Class (if statistics are enabled) 541 | statsProviderClass=org.apache.bookkeeper.stats.prometheus.PrometheusMetricsProvider 542 | 543 | # Default port for Prometheus metrics exporter 544 | prometheusStatsHttpPort=8000 545 | 546 | ############################################################################# 547 | ## Read-only mode support 548 | ############################################################################# 549 | 550 | # If all ledger directories configured are full, then support only read requests for clients. 551 | # If "readOnlyModeEnabled=true" then on all ledger disks full, bookie will be converted 552 | # to read-only mode and serve only read requests. Otherwise the bookie will be shutdown. 553 | # By default this will be disabled. 554 | readOnlyModeEnabled=true 555 | 556 | # Whether the bookie is force started in read only mode or not 557 | # forceReadOnlyBookie=false 558 | 559 | # Persist the bookie status locally on the disks. So the bookies can keep their status upon restarts 560 | # @Since 4.6 561 | # persistBookieStatusEnabled=false 562 | 563 | ############################################################################# 564 | ## Disk utilization 565 | ############################################################################# 566 | 567 | # For each ledger dir, maximum disk space which can be used. 568 | # Default is 0.95f. i.e. 95% of disk can be used at most after which nothing will 569 | # be written to that partition. If all ledger dir partitions are full, then bookie 570 | # will turn to readonly mode if 'readOnlyModeEnabled=true' is set, else it will 571 | # shutdown. 572 | # Valid values should be in between 0 and 1 (exclusive). 573 | diskUsageThreshold=0.95 574 | 575 | # The disk free space low water mark threshold. 576 | # Disk is considered full when usage threshold is exceeded. 577 | # Disk returns back to non-full state when usage is below low water mark threshold. 578 | # This prevents it from going back and forth between these states frequently 579 | # when concurrent writes and compaction are happening. This also prevent bookie from 580 | # switching frequently between read-only and read-writes states in the same cases. 581 | # diskUsageWarnThreshold=0.95 582 | 583 | # Set the disk free space low water mark threshold. Disk is considered full when 584 | # usage threshold is exceeded. Disk returns back to non-full state when usage is 585 | # below low water mark threshold. This prevents it from going back and forth 586 | # between these states frequently when concurrent writes and compaction are 587 | # happening. This also prevent bookie from switching frequently between 588 | # read-only and read-writes states in the same cases. 589 | # diskUsageLwmThreshold=0.90 590 | 591 | # Disk check interval in milli seconds, interval to check the ledger dirs usage. 592 | # Default is 10000 593 | diskCheckInterval=10000 594 | 595 | ############################################################################# 596 | ## ZooKeeper parameters 597 | ############################################################################# 598 | 599 | # A list of one of more servers on which Zookeeper is running. 600 | # The server list can be comma separated values, for example: 601 | zkServers=zk1:2181,zk2:2181,zk3:2181 602 | 603 | # ZooKeeper client session timeout in milliseconds 604 | # Bookie server will exit if it received SESSION_EXPIRED because it 605 | # was partitioned off from ZooKeeper for more than the session timeout 606 | # JVM garbage collection, disk I/O will cause SESSION_EXPIRED. 607 | # Increment this value could help avoiding this issue 608 | zkTimeout=30000 609 | 610 | # The Zookeeper client backoff retry start time in millis. 611 | # zkRetryBackoffStartMs=1000 612 | 613 | # The Zookeeper client backoff retry max time in millis. 614 | # zkRetryBackoffMaxMs=10000 615 | 616 | # Set ACLs on every node written on ZooKeeper, this way only allowed users 617 | # will be able to read and write BookKeeper metadata stored on ZooKeeper. 618 | # In order to make ACLs work you need to setup ZooKeeper JAAS authentication 619 | # all the bookies and Client need to share the same user, and this is usually 620 | # done using Kerberos authentication. See ZooKeeper documentation 621 | zkEnableSecurity=false 622 | 623 | ############################################################################# 624 | ## Server parameters 625 | ############################################################################# 626 | 627 | # The flag enables/disables starting the admin http server. Default value is 'false'. 628 | httpServerEnabled=false 629 | 630 | # The http server port to listen on. Default value is 8080. 631 | # Use `8000` as the port to keep it consistent with prometheus stats provider 632 | httpServerPort=8000 633 | 634 | # The http server class 635 | httpServerClass=org.apache.bookkeeper.http.vertx.VertxHttpServer 636 | 637 | # Configure a list of server components to enable and load on a bookie server. 638 | # This provides the plugin run extra services along with a bookie server. 639 | # 640 | # extraServerComponents= 641 | 642 | 643 | ############################################################################# 644 | ## DB Ledger storage configuration 645 | ############################################################################# 646 | 647 | # These configs are used when the selected 'ledgerStorageClass' is 648 | # org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage 649 | 650 | # Size of Write Cache. Memory is allocated from JVM direct memory. 651 | # Write cache is used to buffer entries before flushing into the entry log 652 | # For good performance, it should be big enough to hold a substantial amount 653 | # of entries in the flush interval 654 | # By default it will be allocated to 1/4th of the available direct memory 655 | dbStorage_writeCacheMaxSizeMb= 656 | 657 | # Size of Read cache. Memory is allocated from JVM direct memory. 658 | # This read cache is pre-filled doing read-ahead whenever a cache miss happens 659 | # By default it will be allocated to 1/4th of the available direct memory 660 | dbStorage_readAheadCacheMaxSizeMb= 661 | 662 | # How many entries to pre-fill in cache after a read cache miss 663 | dbStorage_readAheadCacheBatchSize=1000 664 | 665 | ## RocksDB specific configurations 666 | ## DbLedgerStorage uses RocksDB to store the indexes from 667 | ## (ledgerId, entryId) -> (entryLog, offset) 668 | 669 | # Size of RocksDB block-cache. For best performance, this cache 670 | # should be big enough to hold a significant portion of the index 671 | # database which can reach ~2GB in some cases 672 | # Default is to use 10% of the direct memory size 673 | dbStorage_rocksDB_blockCacheSize= 674 | 675 | # Other RocksDB specific tunables 676 | dbStorage_rocksDB_writeBufferSizeMB=64 677 | dbStorage_rocksDB_sstSizeInMB=64 678 | dbStorage_rocksDB_blockSize=65536 679 | dbStorage_rocksDB_bloomFilterBitsPerKey=10 680 | dbStorage_rocksDB_numLevels=-1 681 | dbStorage_rocksDB_numFilesInLevel0=4 682 | dbStorage_rocksDB_maxSizeInLevel1MB=256 683 | -------------------------------------------------------------------------------- /pulsar/conf/broker.conf: -------------------------------------------------------------------------------- 1 | # 2 | # Licensed to the Apache Software Foundation (ASF) under one 3 | # or more contributor license agreements. See the NOTICE file 4 | # distributed with this work for additional information 5 | # regarding copyright ownership. The ASF licenses this file 6 | # to you under the Apache License, Version 2.0 (the 7 | # "License"); you may not use this file except in compliance 8 | # with the License. You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, 13 | # software distributed under the License is distributed on an 14 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | # KIND, either express or implied. See the License for the 16 | # specific language governing permissions and limitations 17 | # under the License. 18 | # 19 | 20 | ### --- General broker settings --- ### 21 | 22 | # Zookeeper quorum connection string 23 | zookeeperServers=zk1:2181,zk2:2181,zk3:2181 24 | 25 | # Configuration Store connection string 26 | configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 27 | 28 | # Broker data port 29 | brokerServicePort=6650 30 | 31 | # Broker data port for TLS - By default TLS is disabled 32 | brokerServicePortTls= 33 | 34 | # Port to use to server HTTP request 35 | webServicePort=8080 36 | 37 | # Port to use to server HTTPS request - By default TLS is disabled 38 | webServicePortTls= 39 | 40 | # Hostname or IP address the service binds on, default is 0.0.0.0. 41 | bindAddress=0.0.0.0 42 | 43 | # Hostname or IP address the service advertises to the outside world. If not set, the value of InetAddress.getLocalHost().getHostName() is used. 44 | advertisedAddress= 45 | 46 | # Used to specify multiple advertised listeners for the broker. 47 | # The value must format as :pulsar://:, 48 | # multiple listeners should separate with commas. 49 | # Do not use this configuration with advertisedAddress and brokerServicePort. 50 | # The Default value is absent means use advertisedAddress and brokerServicePort. 51 | # advertisedListeners= 52 | 53 | # Used to specify the internal listener name for the broker. 54 | # The listener name must contain in the advertisedListeners. 55 | # The Default value is absent, the broker uses the first listener as the internal listener. 56 | # internalListenerName= 57 | 58 | # Enable or disable the HAProxy protocol. 59 | haProxyProtocolEnabled=false 60 | 61 | # Number of threads to use for Netty IO. Default is set to 2 * Runtime.getRuntime().availableProcessors() 62 | numIOThreads= 63 | 64 | # Number of threads to use for ordered executor. The ordered executor is used to operate with zookeeper, 65 | # such as init zookeeper client, get namespace policies from zookeeper etc. It also used to split bundle. Default is 8 66 | numOrderedExecutorThreads=8 67 | 68 | # Number of threads to use for HTTP requests processing. Default is set to 2 * Runtime.getRuntime().availableProcessors() 69 | numHttpServerThreads= 70 | 71 | # Number of thread pool size to use for pulsar broker service. 72 | # The executor in thread pool will do basic broker operation like load/unload bundle, update managedLedgerConfig, 73 | # update topic/subscription/replicator message dispatch rate, do leader election etc. 74 | # Default is Runtime.getRuntime().availableProcessors() 75 | numExecutorThreadPoolSize= 76 | 77 | # Number of thread pool size to use for pulsar zookeeper callback service 78 | # The cache executor thread pool is used for restarting global zookeeper session. 79 | # Default is 10 80 | numCacheExecutorThreadPoolSize=10 81 | 82 | # Max concurrent web requests 83 | maxConcurrentHttpRequests=1024 84 | 85 | # Flag to control features that are meant to be used when running in standalone mode 86 | isRunningStandalone= 87 | 88 | # Name of the cluster to which this broker belongs to 89 | clusterName=pulsar-cluster-1 90 | 91 | # The maximum number of tenants that each pulsar cluster can create 92 | # This configuration is not precise control, in a concurrent scenario, the threshold will be exceeded 93 | maxTenants=0 94 | 95 | # Enable cluster's failure-domain which can distribute brokers into logical region 96 | failureDomainsEnabled=false 97 | 98 | # Zookeeper session timeout in milliseconds 99 | zooKeeperSessionTimeoutMillis=30000 100 | 101 | # ZooKeeper operation timeout in seconds 102 | zooKeeperOperationTimeoutSeconds=30 103 | 104 | # ZooKeeper cache expiry time in seconds 105 | zooKeeperCacheExpirySeconds=300 106 | 107 | # Time to wait for broker graceful shutdown. After this time elapses, the process will be killed 108 | brokerShutdownTimeoutMs=60000 109 | 110 | # Flag to skip broker shutdown when broker handles Out of memory error 111 | skipBrokerShutdownOnOOM=false 112 | 113 | # Enable backlog quota check. Enforces action on topic when the quota is reached 114 | backlogQuotaCheckEnabled=true 115 | 116 | # How often to check for topics that have reached the quota 117 | backlogQuotaCheckIntervalInSeconds=60 118 | 119 | # Default per-topic backlog quota limit, less than 0 means no limitation. default is -1. 120 | backlogQuotaDefaultLimitGB=-1 121 | 122 | # Default backlog quota retention policy. Default is producer_request_hold 123 | # 'producer_request_hold' Policy which holds producer's send request until the resource becomes available (or holding times out) 124 | # 'producer_exception' Policy which throws javax.jms.ResourceAllocationException to the producer 125 | # 'consumer_backlog_eviction' Policy which evicts the oldest message from the slowest consumer's backlog 126 | backlogQuotaDefaultRetentionPolicy=producer_request_hold 127 | 128 | # Default ttl for namespaces if ttl is not already configured at namespace policies. (disable default-ttl with value 0) 129 | ttlDurationDefaultInSeconds=0 130 | 131 | # Enable topic auto creation if new producer or consumer connected (disable auto creation with value false) 132 | allowAutoTopicCreation=true 133 | 134 | # The type of topic that is allowed to be automatically created.(partitioned/non-partitioned) 135 | allowAutoTopicCreationType=non-partitioned 136 | 137 | # Enable subscription auto creation if new consumer connected (disable auto creation with value false) 138 | allowAutoSubscriptionCreation=true 139 | 140 | # The number of partitioned topics that is allowed to be automatically created if allowAutoTopicCreationType is partitioned. 141 | defaultNumPartitions=1 142 | 143 | # Enable the deletion of inactive topics 144 | brokerDeleteInactiveTopicsEnabled=true 145 | 146 | # How often to check for inactive topics 147 | brokerDeleteInactiveTopicsFrequencySeconds=60 148 | 149 | # Set the inactive topic delete mode. Default is delete_when_no_subscriptions 150 | # 'delete_when_no_subscriptions' mode only delete the topic which has no subscriptions and no active producers 151 | # 'delete_when_subscriptions_caught_up' mode only delete the topic that all subscriptions has no backlogs(caught up) 152 | # and no active producers/consumers 153 | brokerDeleteInactiveTopicsMode=delete_when_no_subscriptions 154 | 155 | # Metadata of inactive partitioned topic will not be cleaned up automatically by default. 156 | # Note: If `allowAutoTopicCreation` and this option are enabled at the same time, 157 | # it may appear that a partitioned topic has just been deleted but is automatically created as a non-partitioned topic. 158 | brokerDeleteInactivePartitionedTopicMetadataEnabled=false 159 | 160 | # Max duration of topic inactivity in seconds, default is not present 161 | # If not present, 'brokerDeleteInactiveTopicsFrequencySeconds' will be used 162 | # Topics that are inactive for longer than this value will be deleted 163 | brokerDeleteInactiveTopicsMaxInactiveDurationSeconds= 164 | 165 | # Max pending publish requests per connection to avoid keeping large number of pending 166 | # requests in memory. Default: 1000 167 | maxPendingPublishdRequestsPerConnection=1000 168 | 169 | # How frequently to proactively check and purge expired messages 170 | messageExpiryCheckIntervalInMinutes=5 171 | 172 | # How long to delay rewinding cursor and dispatching messages when active consumer is changed 173 | activeConsumerFailoverDelayTimeMillis=1000 174 | 175 | # How long to delete inactive subscriptions from last consuming 176 | # When it is 0, inactive subscriptions are not deleted automatically 177 | subscriptionExpirationTimeMinutes=0 178 | 179 | # Enable subscription message redelivery tracker to send redelivery count to consumer (default is enabled) 180 | subscriptionRedeliveryTrackerEnabled=true 181 | 182 | # How frequently to proactively check and purge expired subscription 183 | subscriptionExpiryCheckIntervalInMinutes=5 184 | 185 | # Enable Key_Shared subscription (default is enabled) 186 | subscriptionKeySharedEnable=true 187 | 188 | # On KeyShared subscriptions, with default AUTO_SPLIT mode, use splitting ranges or 189 | # consistent hashing to reassign keys to new consumers 190 | subscriptionKeySharedUseConsistentHashing=false 191 | 192 | # On KeyShared subscriptions, number of points in the consistent-hashing ring. 193 | # The higher the number, the more equal the assignment of keys to consumers 194 | subscriptionKeySharedConsistentHashingReplicaPoints=100 195 | 196 | # Set the default behavior for message deduplication in the broker 197 | # This can be overridden per-namespace. If enabled, broker will reject 198 | # messages that were already stored in the topic 199 | brokerDeduplicationEnabled=false 200 | 201 | # Maximum number of producer information that it's going to be 202 | # persisted for deduplication purposes 203 | brokerDeduplicationMaxNumberOfProducers=10000 204 | 205 | # How often is the thread pool scheduled to check whether a snapshot needs to be taken.(disable with value 0) 206 | brokerDeduplicationSnapshotFrequencyInSeconds=10 207 | # If this time interval is exceeded, a snapshot will be taken. 208 | # It will run simultaneously with `brokerDeduplicationEntriesInterval` 209 | brokerDeduplicationSnapshotIntervalSeconds=120 210 | 211 | # Number of entries after which a dedup info snapshot is taken. 212 | # A larger interval will lead to fewer snapshots being taken, though it would 213 | # increase the topic recovery time when the entries published after the 214 | # snapshot need to be replayed. 215 | brokerDeduplicationEntriesInterval=1000 216 | 217 | # Time of inactivity after which the broker will discard the deduplication information 218 | # relative to a disconnected producer. Default is 6 hours. 219 | brokerDeduplicationProducerInactivityTimeoutMinutes=360 220 | 221 | # When a namespace is created without specifying the number of bundle, this 222 | # value will be used as the default 223 | defaultNumberOfNamespaceBundles=4 224 | 225 | # The maximum number of namespaces that each tenant can create 226 | # This configuration is not precise control, in a concurrent scenario, the threshold will be exceeded 227 | maxNamespacesPerTenant=0 228 | 229 | # Enable check for minimum allowed client library version 230 | clientLibraryVersionCheckEnabled=false 231 | 232 | # Path for the file used to determine the rotation status for the broker when responding 233 | # to service discovery health checks 234 | statusFilePath= 235 | 236 | # If true, (and ModularLoadManagerImpl is being used), the load manager will attempt to 237 | # use only brokers running the latest software version (to minimize impact to bundles) 238 | preferLaterVersions=false 239 | 240 | # Max number of unacknowledged messages allowed to receive messages by a consumer on a shared subscription. Broker will stop sending 241 | # messages to consumer once, this limit reaches until consumer starts acknowledging messages back. 242 | # Using a value of 0, is disabling unackeMessage limit check and consumer can receive messages without any restriction 243 | maxUnackedMessagesPerConsumer=50000 244 | 245 | # Max number of unacknowledged messages allowed per shared subscription. Broker will stop dispatching messages to 246 | # all consumers of the subscription once this limit reaches until consumer starts acknowledging messages back and 247 | # unack count reaches to limit/2. Using a value of 0, is disabling unackedMessage-limit 248 | # check and dispatcher can dispatch messages without any restriction 249 | maxUnackedMessagesPerSubscription=200000 250 | 251 | # Max number of unacknowledged messages allowed per broker. Once this limit reaches, broker will stop dispatching 252 | # messages to all shared subscription which has higher number of unack messages until subscriptions start 253 | # acknowledging messages back and unack count reaches to limit/2. Using a value of 0, is disabling 254 | # unackedMessage-limit check and broker doesn't block dispatchers 255 | maxUnackedMessagesPerBroker=0 256 | 257 | # Once broker reaches maxUnackedMessagesPerBroker limit, it blocks subscriptions which has higher unacked messages 258 | # than this percentage limit and subscription will not receive any new messages until that subscription acks back 259 | # limit/2 messages 260 | maxUnackedMessagesPerSubscriptionOnBrokerBlocked=0.16 261 | 262 | # Tick time to schedule task that checks topic publish rate limiting across all topics 263 | # Reducing to lower value can give more accuracy while throttling publish but 264 | # it uses more CPU to perform frequent check. (Disable publish throttling with value 0) 265 | topicPublisherThrottlingTickTimeMillis=10 266 | 267 | # Tick time to schedule task that checks broker publish rate limiting across all topics 268 | # Reducing to lower value can give more accuracy while throttling publish but 269 | # it uses more CPU to perform frequent check. (Disable publish throttling with value 0) 270 | brokerPublisherThrottlingTickTimeMillis=50 271 | 272 | # Max Rate(in 1 seconds) of Message allowed to publish for a broker if broker publish rate limiting enabled 273 | # (Disable message rate limit with value 0) 274 | brokerPublisherThrottlingMaxMessageRate=0 275 | 276 | # Max Rate(in 1 seconds) of Byte allowed to publish for a broker if broker publish rate limiting enabled. 277 | # (Disable byte rate limit with value 0) 278 | brokerPublisherThrottlingMaxByteRate=0 279 | 280 | # Max Rate(in 1 seconds) of Message allowed to publish for a topic if topic publish rate limiting enabled 281 | # (Disable byte rate limit with value 0) 282 | maxPublishRatePerTopicInMessages=0 283 | 284 | #Max Rate(in 1 seconds) of Byte allowed to publish for a topic if topic publish rate limiting enabled. 285 | # (Disable byte rate limit with value 0) 286 | maxPublishRatePerTopicInBytes=0 287 | 288 | # Too many subscribe requests from a consumer can cause broker rewinding consumer cursors and loading data from bookies, 289 | # hence causing high network bandwidth usage 290 | # When the positive value is set, broker will throttle the subscribe requests for one consumer. 291 | # Otherwise, the throttling will be disabled. The default value of this setting is 0 - throttling is disabled. 292 | subscribeThrottlingRatePerConsumer=0 293 | 294 | # Rate period for {subscribeThrottlingRatePerConsumer}. Default is 30s. 295 | subscribeRatePeriodPerConsumerInSecond=30 296 | 297 | # Default messages per second dispatch throttling-limit for every topic. Using a value of 0, is disabling default 298 | # message dispatch-throttling 299 | dispatchThrottlingRatePerTopicInMsg=0 300 | 301 | # Default bytes per second dispatch throttling-limit for every topic. Using a value of 0, is disabling 302 | # default message-byte dispatch-throttling 303 | dispatchThrottlingRatePerTopicInByte=0 304 | 305 | # Default number of message dispatching throttling-limit for a subscription. 306 | # Using a value of 0, is disabling default message dispatch-throttling. 307 | dispatchThrottlingRatePerSubscriptionInMsg=0 308 | 309 | # Default number of message-bytes dispatching throttling-limit for a subscription. 310 | # Using a value of 0, is disabling default message-byte dispatch-throttling. 311 | dispatchThrottlingRatePerSubscriptionInByte=0 312 | 313 | # Default messages per second dispatch throttling-limit for every replicator in replication. 314 | # Using a value of 0, is disabling replication message dispatch-throttling 315 | dispatchThrottlingRatePerReplicatorInMsg=0 316 | 317 | # Default bytes per second dispatch throttling-limit for every replicator in replication. 318 | # Using a value of 0, is disabling replication message-byte dispatch-throttling 319 | dispatchThrottlingRatePerReplicatorInByte=0 320 | 321 | # Dispatch rate-limiting relative to publish rate. 322 | # (Enabling flag will make broker to dynamically update dispatch-rate relatively to publish-rate: 323 | # throttle-dispatch-rate = (publish-rate + configured dispatch-rate). 324 | dispatchThrottlingRateRelativeToPublishRate=false 325 | 326 | # By default we enable dispatch-throttling for both caught up consumers as well as consumers who have 327 | # backlog. 328 | dispatchThrottlingOnNonBacklogConsumerEnabled=true 329 | 330 | # Max number of entries to read from bookkeeper. By default it is 100 entries. 331 | dispatcherMaxReadBatchSize=100 332 | 333 | # Max size in bytes of entries to read from bookkeeper. By default it is 5MB. 334 | dispatcherMaxReadSizeBytes=5242880 335 | 336 | # Min number of entries to read from bookkeeper. By default it is 1 entries. 337 | # When there is an error occurred on reading entries from bookkeeper, the broker 338 | # will backoff the batch size to this minimum number." 339 | dispatcherMinReadBatchSize=1 340 | 341 | # Max number of entries to dispatch for a shared subscription. By default it is 20 entries. 342 | dispatcherMaxRoundRobinBatchSize=20 343 | 344 | # Precise dispathcer flow control according to history message number of each entry 345 | preciseDispatcherFlowControl=false 346 | 347 | # Max number of concurrent lookup request broker allows to throttle heavy incoming lookup traffic 348 | maxConcurrentLookupRequest=50000 349 | 350 | # Max number of concurrent topic loading request broker allows to control number of zk-operations 351 | maxConcurrentTopicLoadRequest=5000 352 | 353 | # Max concurrent non-persistent message can be processed per connection 354 | maxConcurrentNonPersistentMessagePerConnection=1000 355 | 356 | # Number of worker threads to serve non-persistent topic 357 | numWorkerThreadsForNonPersistentTopic=8 358 | 359 | # Enable broker to load persistent topics 360 | enablePersistentTopics=true 361 | 362 | # Enable broker to load non-persistent topics 363 | enableNonPersistentTopics=true 364 | 365 | # Enable to run bookie along with broker 366 | enableRunBookieTogether=false 367 | 368 | # Enable to run bookie autorecovery along with broker 369 | enableRunBookieAutoRecoveryTogether=false 370 | 371 | # Max number of producers allowed to connect to topic. Once this limit reaches, Broker will reject new producers 372 | # until the number of connected producers decrease. 373 | # Using a value of 0, is disabling maxProducersPerTopic-limit check. 374 | maxProducersPerTopic=0 375 | 376 | # Enforce producer to publish encrypted messages.(default disable). 377 | encryptionRequireOnProducer=false 378 | 379 | # Max number of consumers allowed to connect to topic. Once this limit reaches, Broker will reject new consumers 380 | # until the number of connected consumers decrease. 381 | # Using a value of 0, is disabling maxConsumersPerTopic-limit check. 382 | maxConsumersPerTopic=0 383 | 384 | # Max number of subscriptions allowed to subscribe to topic. Once this limit reaches, broker will reject 385 | # new subscription until the number of subscribed subscriptions decrease. 386 | # Using a value of 0, is disabling maxSubscriptionsPerTopic limit check. 387 | maxSubscriptionsPerTopic=0 388 | 389 | # Max number of consumers allowed to connect to subscription. Once this limit reaches, Broker will reject new consumers 390 | # until the number of connected consumers decrease. 391 | # Using a value of 0, is disabling maxConsumersPerSubscription-limit check. 392 | maxConsumersPerSubscription=0 393 | 394 | # Max size of messages. 395 | maxMessageSize=5242880 396 | 397 | # Interval between checks to see if topics with compaction policies need to be compacted 398 | brokerServiceCompactionMonitorIntervalInSeconds=60 399 | 400 | # Whether to enable the delayed delivery for messages. 401 | # If disabled, messages will be immediately delivered and there will 402 | # be no tracking overhead. 403 | delayedDeliveryEnabled=true 404 | 405 | # Control the tick time for when retrying on delayed delivery, 406 | # affecting the accuracy of the delivery time compared to the scheduled time. 407 | # Default is 1 second. 408 | delayedDeliveryTickTimeMillis=1000 409 | 410 | # Whether to enable acknowledge of batch local index. 411 | acknowledgmentAtBatchIndexLevelEnabled=false 412 | 413 | # Enable tracking of replicated subscriptions state across clusters. 414 | enableReplicatedSubscriptions=true 415 | 416 | # Frequency of snapshots for replicated subscriptions tracking. 417 | replicatedSubscriptionsSnapshotFrequencyMillis=1000 418 | 419 | # Timeout for building a consistent snapshot for tracking replicated subscriptions state. 420 | replicatedSubscriptionsSnapshotTimeoutSeconds=30 421 | 422 | # Max number of snapshot to be cached per subscription. 423 | replicatedSubscriptionsSnapshotMaxCachedPerSubscription=10 424 | 425 | # Max memory size for broker handling messages sending from producers. 426 | # If the processing message size exceed this value, broker will stop read data 427 | # from the connection. The processing messages means messages are sends to broker 428 | # but broker have not send response to client, usually waiting to write to bookies. 429 | # It's shared across all the topics running in the same broker. 430 | # Use -1 to disable the memory limitation. Default is 1/2 of direct memory. 431 | maxMessagePublishBufferSizeInMB= 432 | 433 | # Interval between checks to see if message publish buffer size is exceed the max message publish buffer size 434 | # Use 0 or negative number to disable the max publish buffer limiting. 435 | messagePublishBufferCheckIntervalInMillis=100 436 | 437 | # Check between intervals to see if consumed ledgers need to be trimmed 438 | # Use 0 or negative number to disable the check 439 | retentionCheckIntervalInSeconds=120 440 | 441 | # Max number of partitions per partitioned topic 442 | # Use 0 or negative number to disable the check 443 | maxNumPartitionsPerPartitionedTopic=0 444 | 445 | # There are two policies when zookeeper session expired happens, "shutdown" and "reconnect". 446 | # If uses "shutdown" policy, shutdown the broker when zookeeper session expired happens. 447 | # If uses "reconnect" policy, try to reconnect to zookeeper server and re-register metadata to zookeeper. 448 | # Node: the "reconnect" policy is an experiment feature 449 | zookeeperSessionExpiredPolicy=shutdown 450 | 451 | # Enable or disable system topic 452 | systemTopicEnabled=false 453 | 454 | # Enable or disable topic level policies, topic level policies depends on the system topic 455 | # Please enable the system topic first. 456 | topicLevelPoliciesEnabled=false 457 | 458 | # If a topic remains fenced for this number of seconds, it will be closed forcefully. 459 | # If it is set to 0 or a negative number, the fenced topic will not be closed. 460 | topicFencingTimeoutSeconds=0 461 | 462 | ### --- Authentication --- ### 463 | # Role names that are treated as "proxy roles". If the broker sees a request with 464 | #role as proxyRoles - it will demand to see a valid original principal. 465 | proxyRoles= 466 | 467 | # If this flag is set then the broker authenticates the original Auth data 468 | # else it just accepts the originalPrincipal and authorizes it (if required). 469 | authenticateOriginalAuthData=false 470 | 471 | # Deprecated - Use webServicePortTls and brokerServicePortTls instead 472 | tlsEnabled=false 473 | 474 | # Tls cert refresh duration in seconds (set 0 to check on every new connection) 475 | tlsCertRefreshCheckDurationSec=300 476 | 477 | # Path for the TLS certificate file 478 | tlsCertificateFilePath= 479 | 480 | # Path for the TLS private key file 481 | tlsKeyFilePath= 482 | 483 | # Path for the trusted TLS certificate file. 484 | # This cert is used to verify that any certs presented by connecting clients 485 | # are signed by a certificate authority. If this verification 486 | # fails, then the certs are untrusted and the connections are dropped. 487 | tlsTrustCertsFilePath= 488 | 489 | # Accept untrusted TLS certificate from client. 490 | # If true, a client with a cert which cannot be verified with the 491 | # 'tlsTrustCertsFilePath' cert will allowed to connect to the server, 492 | # though the cert will not be used for client authentication. 493 | tlsAllowInsecureConnection=false 494 | 495 | # Specify the tls protocols the broker will use to negotiate during TLS handshake 496 | # (a comma-separated list of protocol names). 497 | # Examples:- [TLSv1.2, TLSv1.1, TLSv1] 498 | tlsProtocols= 499 | 500 | # Specify the tls cipher the broker will use to negotiate during TLS Handshake 501 | # (a comma-separated list of ciphers). 502 | # Examples:- [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256] 503 | tlsCiphers= 504 | 505 | # Trusted client certificates are required for to connect TLS 506 | # Reject the Connection if the Client Certificate is not trusted. 507 | # In effect, this requires that all connecting clients perform TLS client 508 | # authentication. 509 | tlsRequireTrustedClientCertOnConnect=false 510 | 511 | ### --- KeyStore TLS config variables --- ### 512 | # Enable TLS with KeyStore type configuration in broker. 513 | tlsEnabledWithKeyStore=false 514 | 515 | # TLS Provider for KeyStore type 516 | tlsProvider= 517 | 518 | # TLS KeyStore type configuration in broker: JKS, PKCS12 519 | tlsKeyStoreType=JKS 520 | 521 | # TLS KeyStore path in broker 522 | tlsKeyStore= 523 | 524 | # TLS KeyStore password for broker 525 | tlsKeyStorePassword= 526 | 527 | # TLS TrustStore type configuration in broker: JKS, PKCS12 528 | tlsTrustStoreType=JKS 529 | 530 | # TLS TrustStore path in broker 531 | tlsTrustStore= 532 | 533 | # TLS TrustStore password in broker 534 | tlsTrustStorePassword= 535 | 536 | # Whether internal client use KeyStore type to authenticate with Pulsar brokers 537 | brokerClientTlsEnabledWithKeyStore=false 538 | 539 | # The TLS Provider used by internal client to authenticate with other Pulsar brokers 540 | brokerClientSslProvider= 541 | 542 | # TLS TrustStore type configuration for internal client: JKS, PKCS12 543 | # used by the internal client to authenticate with Pulsar brokers 544 | brokerClientTlsTrustStoreType=JKS 545 | 546 | # TLS TrustStore path for internal client 547 | # used by the internal client to authenticate with Pulsar brokers 548 | brokerClientTlsTrustStore= 549 | 550 | # TLS TrustStore password for internal client, 551 | # used by the internal client to authenticate with Pulsar brokers 552 | brokerClientTlsTrustStorePassword= 553 | 554 | # Specify the tls cipher the internal client will use to negotiate during TLS Handshake 555 | # (a comma-separated list of ciphers) 556 | # e.g. [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256]. 557 | # used by the internal client to authenticate with Pulsar brokers 558 | brokerClientTlsCiphers= 559 | 560 | # Specify the tls protocols the broker will use to negotiate during TLS handshake 561 | # (a comma-separated list of protocol names). 562 | # e.g. [TLSv1.2, TLSv1.1, TLSv1] 563 | # used by the internal client to authenticate with Pulsar brokers 564 | brokerClientTlsProtocols= 565 | 566 | 567 | ### --- Authentication --- ### 568 | 569 | # Enable authentication 570 | authenticationEnabled=false 571 | 572 | # Autentication provider name list, which is comma separated list of class names 573 | authenticationProviders= 574 | 575 | # Interval of time for checking for expired authentication credentials 576 | authenticationRefreshCheckSeconds=60 577 | 578 | # Enforce authorization 579 | authorizationEnabled=false 580 | 581 | # Authorization provider fully qualified class-name 582 | authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider 583 | 584 | # Allow wildcard matching in authorization 585 | # (wildcard matching only applicable if wildcard-char: 586 | # * presents at first or last position eg: *.pulsar.service, pulsar.service.*) 587 | authorizationAllowWildcardsMatching=false 588 | 589 | # Role names that are treated as "super-user", meaning they will be able to do all admin 590 | # operations and publish/consume from all topics 591 | superUserRoles= 592 | 593 | # Authentication settings of the broker itself. Used when the broker connects to other brokers, 594 | # either in same or other clusters 595 | brokerClientTlsEnabled=false 596 | brokerClientAuthenticationPlugin= 597 | brokerClientAuthenticationParameters= 598 | brokerClientTrustCertsFilePath= 599 | 600 | # Supported Athenz provider domain names(comma separated) for authentication 601 | athenzDomainNames= 602 | 603 | # When this parameter is not empty, unauthenticated users perform as anonymousUserRole 604 | anonymousUserRole= 605 | 606 | ### --- Token Authentication Provider --- ### 607 | 608 | ## Symmetric key 609 | # Configure the secret key to be used to validate auth tokens 610 | # The key can be specified like: 611 | # tokenSecretKey=data:;base64,xxxxxxxxx 612 | # tokenSecretKey=file:///my/secret.key 613 | tokenSecretKey= 614 | 615 | ## Asymmetric public/private key pair 616 | # Configure the public key to be used to validate auth tokens 617 | # The key can be specified like: 618 | # tokenPublicKey=data:;base64,xxxxxxxxx 619 | # tokenPublicKey=file:///my/public.key 620 | tokenPublicKey= 621 | 622 | # The token "claim" that will be interpreted as the authentication "role" or "principal" by AuthenticationProviderToken (defaults to "sub" if blank) 623 | tokenAuthClaim= 624 | 625 | # The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. 626 | # If not set, audience will not be verified. 627 | tokenAudienceClaim= 628 | 629 | # The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this. 630 | tokenAudience= 631 | 632 | ### --- SASL Authentication Provider --- ### 633 | 634 | # This is a regexp, which limits the range of possible ids which can connect to the Broker using SASL. 635 | # Default value: `SaslConstants.JAAS_CLIENT_ALLOWED_IDS_DEFAULT`, which is ".*pulsar.*", 636 | # so only clients whose id contains 'pulsar' are allowed to connect. 637 | saslJaasClientAllowedIds= 638 | 639 | # Service Principal, for login context name. 640 | # Default value `SaslConstants.JAAS_DEFAULT_BROKER_SECTION_NAME`, which is "Broker". 641 | saslJaasBrokerSectionName= 642 | 643 | ### --- HTTP Server config --- ### 644 | 645 | # If >0, it will reject all HTTP requests with bodies larged than the configured limit 646 | httpMaxRequestSize=-1 647 | 648 | # Enable the enforcement of limits on the incoming HTTP requests 649 | httpRequestsLimitEnabled=false 650 | 651 | # Max HTTP requests per seconds allowed. The excess of requests will be rejected with HTTP code 429 (Too many requests) 652 | httpRequestsMaxPerSecond=100.0 653 | 654 | 655 | ### --- BookKeeper Client --- ### 656 | 657 | # Metadata service uri that bookkeeper is used for loading corresponding metadata driver 658 | # and resolving its metadata service location. 659 | # This value can be fetched using `bookkeeper shell whatisinstanceid` command in BookKeeper cluster. 660 | # For example: zk+hierarchical://localhost:2181/ledgers 661 | # The metadata service uri list can also be semicolon separated values like below: 662 | # zk+hierarchical://zk1:2181;zk2:2181;zk3:2181/ledgers 663 | bookkeeperMetadataServiceUri= 664 | 665 | # Authentication plugin to use when connecting to bookies 666 | bookkeeperClientAuthenticationPlugin= 667 | 668 | # BookKeeper auth plugin implementatation specifics parameters name and values 669 | bookkeeperClientAuthenticationParametersName= 670 | bookkeeperClientAuthenticationParameters= 671 | 672 | # Timeout for BK add / read operations 673 | bookkeeperClientTimeoutInSeconds=30 674 | 675 | # Speculative reads are initiated if a read request doesn't complete within a certain time 676 | # Using a value of 0, is disabling the speculative reads 677 | bookkeeperClientSpeculativeReadTimeoutInMillis=0 678 | 679 | # Number of channels per bookie 680 | bookkeeperNumberOfChannelsPerBookie=16 681 | 682 | # Use older Bookkeeper wire protocol with bookie 683 | bookkeeperUseV2WireProtocol=true 684 | 685 | # Enable bookies health check. Bookies that have more than the configured number of failure within 686 | # the interval will be quarantined for some time. During this period, new ledgers won't be created 687 | # on these bookies 688 | bookkeeperClientHealthCheckEnabled=true 689 | bookkeeperClientHealthCheckIntervalSeconds=60 690 | bookkeeperClientHealthCheckErrorThresholdPerInterval=5 691 | bookkeeperClientHealthCheckQuarantineTimeInSeconds=1800 692 | 693 | #bookie quarantine ratio to avoid all clients quarantine the high pressure bookie servers at the same time 694 | bookkeeperClientQuarantineRatio=1.0 695 | 696 | # Specify options for the GetBookieInfo check. These settings can be useful 697 | # to help ensure the list of bookies is up to date on the brokers. 698 | bookkeeperGetBookieInfoIntervalSeconds=86400 699 | bookkeeperGetBookieInfoRetryIntervalSeconds=60 700 | 701 | # Enable rack-aware bookie selection policy. BK will chose bookies from different racks when 702 | # forming a new bookie ensemble 703 | # This parameter related to ensemblePlacementPolicy in conf/bookkeeper.conf, if enabled, ensemblePlacementPolicy 704 | # should be set to org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicy 705 | bookkeeperClientRackawarePolicyEnabled=true 706 | 707 | # Enable region-aware bookie selection policy. BK will chose bookies from 708 | # different regions and racks when forming a new bookie ensemble 709 | # If enabled, the value of bookkeeperClientRackawarePolicyEnabled is ignored 710 | # This parameter related to ensemblePlacementPolicy in conf/bookkeeper.conf, if enabled, ensemblePlacementPolicy 711 | # should be set to org.apache.bookkeeper.client.RegionAwareEnsemblePlacementPolicy 712 | bookkeeperClientRegionawarePolicyEnabled=false 713 | 714 | # Minimum number of racks per write quorum. BK rack-aware bookie selection policy will try to 715 | # get bookies from at least 'bookkeeperClientMinNumRacksPerWriteQuorum' racks for a write quorum. 716 | bookkeeperClientMinNumRacksPerWriteQuorum=2 717 | 718 | # Enforces rack-aware bookie selection policy to pick bookies from 'bookkeeperClientMinNumRacksPerWriteQuorum' 719 | # racks for a writeQuorum. 720 | # If BK can't find bookie then it would throw BKNotEnoughBookiesException instead of picking random one. 721 | bookkeeperClientEnforceMinNumRacksPerWriteQuorum=false 722 | 723 | # Enable/disable reordering read sequence on reading entries. 724 | bookkeeperClientReorderReadSequenceEnabled=false 725 | 726 | # Enable bookie isolation by specifying a list of bookie groups to choose from. Any bookie 727 | # outside the specified groups will not be used by the broker 728 | bookkeeperClientIsolationGroups= 729 | 730 | # Enable bookie secondary-isolation group if bookkeeperClientIsolationGroups doesn't 731 | # have enough bookie available. 732 | bookkeeperClientSecondaryIsolationGroups= 733 | 734 | # Minimum bookies that should be available as part of bookkeeperClientIsolationGroups 735 | # else broker will include bookkeeperClientSecondaryIsolationGroups bookies in isolated list. 736 | bookkeeperClientMinAvailableBookiesInIsolationGroups= 737 | 738 | # Enable/disable having read operations for a ledger to be sticky to a single bookie. 739 | # If this flag is enabled, the client will use one single bookie (by preference) to read 740 | # all entries for a ledger. 741 | # 742 | # Disable Sticy Read until {@link https://github.com/apache/bookkeeper/issues/1970} is fixed 743 | bookkeeperEnableStickyReads=false 744 | 745 | # Set the client security provider factory class name. 746 | # Default: org.apache.bookkeeper.tls.TLSContextFactory 747 | bookkeeperTLSProviderFactoryClass=org.apache.bookkeeper.tls.TLSContextFactory 748 | 749 | # Enable tls authentication with bookie 750 | bookkeeperTLSClientAuthentication=false 751 | 752 | # Supported type: PEM, JKS, PKCS12. Default value: PEM 753 | bookkeeperTLSKeyFileType=PEM 754 | 755 | #Supported type: PEM, JKS, PKCS12. Default value: PEM 756 | bookkeeperTLSTrustCertTypes=PEM 757 | 758 | # Path to file containing keystore password, if the client keystore is password protected. 759 | bookkeeperTLSKeyStorePasswordPath= 760 | 761 | # Path to file containing truststore password, if the client truststore is password protected. 762 | bookkeeperTLSTrustStorePasswordPath= 763 | 764 | # Path for the TLS private key file 765 | bookkeeperTLSKeyFilePath= 766 | 767 | # Path for the TLS certificate file 768 | bookkeeperTLSCertificateFilePath= 769 | 770 | # Path for the trusted TLS certificate file 771 | bookkeeperTLSTrustCertsFilePath= 772 | 773 | # Enable/disable disk weight based placement. Default is false 774 | bookkeeperDiskWeightBasedPlacementEnabled=false 775 | 776 | # Set the interval to check the need for sending an explicit LAC 777 | # A value of '0' disables sending any explicit LACs. Default is 0. 778 | bookkeeperExplicitLacIntervalInMills=0 779 | 780 | # Expose bookkeeper client managed ledger stats to prometheus. default is false 781 | # bookkeeperClientExposeStatsToPrometheus=false 782 | 783 | ### --- Managed Ledger --- ### 784 | 785 | # Number of bookies to use when creating a ledger 786 | managedLedgerDefaultEnsembleSize=2 787 | 788 | # Number of copies to store for each message 789 | managedLedgerDefaultWriteQuorum=2 790 | 791 | # Number of guaranteed copies (acks to wait before write is complete) 792 | managedLedgerDefaultAckQuorum=2 793 | 794 | # you can add other configuration options for the BookKeeper client 795 | # by prefixing them with bookkeeper_ 796 | 797 | # How frequently to flush the cursor positions that were accumulated due to rate limiting. (seconds). 798 | # Default is 60 seconds 799 | managedLedgerCursorPositionFlushSeconds = 60 800 | 801 | # Default type of checksum to use when writing to BookKeeper. Default is "CRC32C" 802 | # Other possible options are "CRC32", "MAC" or "DUMMY" (no checksum). 803 | managedLedgerDigestType=CRC32C 804 | 805 | # Number of threads to be used for managed ledger tasks dispatching 806 | managedLedgerNumWorkerThreads=8 807 | 808 | # Number of threads to be used for managed ledger scheduled tasks 809 | managedLedgerNumSchedulerThreads=8 810 | 811 | # Amount of memory to use for caching data payload in managed ledger. This memory 812 | # is allocated from JVM direct memory and it's shared across all the topics 813 | # running in the same broker. By default, uses 1/5th of available direct memory 814 | managedLedgerCacheSizeMB= 815 | 816 | # Whether we should make a copy of the entry payloads when inserting in cache 817 | managedLedgerCacheCopyEntries=false 818 | 819 | # Threshold to which bring down the cache level when eviction is triggered 820 | managedLedgerCacheEvictionWatermark=0.9 821 | 822 | # Configure the cache eviction frequency for the managed ledger cache (evictions/sec) 823 | managedLedgerCacheEvictionFrequency=100.0 824 | 825 | # All entries that have stayed in cache for more than the configured time, will be evicted 826 | managedLedgerCacheEvictionTimeThresholdMillis=1000 827 | 828 | # Configure the threshold (in number of entries) from where a cursor should be considered 'backlogged' 829 | # and thus should be set as inactive. 830 | managedLedgerCursorBackloggedThreshold=1000 831 | 832 | # Rate limit the amount of writes per second generated by consumer acking the messages 833 | managedLedgerDefaultMarkDeleteRateLimit=1.0 834 | 835 | # Max number of entries to append to a ledger before triggering a rollover 836 | # A ledger rollover is triggered on these conditions 837 | # * Either the max rollover time has been reached 838 | # * or max entries have been written to the ledged and at least min-time 839 | # has passed 840 | managedLedgerMaxEntriesPerLedger=50000 841 | 842 | # Minimum time between ledger rollover for a topic 843 | managedLedgerMinLedgerRolloverTimeMinutes=10 844 | 845 | # Maximum time before forcing a ledger rollover for a topic 846 | managedLedgerMaxLedgerRolloverTimeMinutes=240 847 | 848 | # Maximum ledger size before triggering a rollover for a topic (MB) 849 | managedLedgerMaxSizePerLedgerMbytes=2048 850 | 851 | # Delay between a ledger being successfully offloaded to long term storage 852 | # and the ledger being deleted from bookkeeper (default is 4 hours) 853 | managedLedgerOffloadDeletionLagMs=14400000 854 | 855 | # The number of bytes before triggering automatic offload to long term storage 856 | # (default is -1, which is disabled) 857 | managedLedgerOffloadAutoTriggerSizeThresholdBytes=-1 858 | 859 | # Max number of entries to append to a cursor ledger 860 | managedLedgerCursorMaxEntriesPerLedger=50000 861 | 862 | # Max time before triggering a rollover on a cursor ledger 863 | managedLedgerCursorRolloverTimeInSeconds=14400 864 | 865 | # Max number of "acknowledgment holes" that are going to be persistently stored. 866 | # When acknowledging out of order, a consumer will leave holes that are supposed 867 | # to be quickly filled by acking all the messages. The information of which 868 | # messages are acknowledged is persisted by compressing in "ranges" of messages 869 | # that were acknowledged. After the max number of ranges is reached, the information 870 | # will only be tracked in memory and messages will be redelivered in case of 871 | # crashes. 872 | managedLedgerMaxUnackedRangesToPersist=10000 873 | 874 | # Max number of "acknowledgment holes" that can be stored in Zookeeper. If number of unack message range is higher 875 | # than this limit then broker will persist unacked ranges into bookkeeper to avoid additional data overhead into 876 | # zookeeper. 877 | managedLedgerMaxUnackedRangesToPersistInZooKeeper=1000 878 | 879 | # Skip reading non-recoverable/unreadable data-ledger under managed-ledger's list. It helps when data-ledgers gets 880 | # corrupted at bookkeeper and managed-cursor is stuck at that ledger. 881 | autoSkipNonRecoverableData=false 882 | 883 | # Whether to recover cursors lazily when trying to recover a managed ledger backing a persistent topic. 884 | # It can improve write availability of topics. 885 | # The caveat is now when recovered ledger is ready to write we're not sure if all old consumers last mark 886 | # delete position can be recovered or not. 887 | lazyCursorRecovery=false 888 | 889 | # operation timeout while updating managed-ledger metadata. 890 | managedLedgerMetadataOperationsTimeoutSeconds=60 891 | 892 | # Read entries timeout when broker tries to read messages from bookkeeper. 893 | managedLedgerReadEntryTimeoutSeconds=0 894 | 895 | # Add entry timeout when broker tries to publish message to bookkeeper (0 to disable it). 896 | managedLedgerAddEntryTimeoutSeconds=0 897 | 898 | # Managed ledger prometheus stats latency rollover seconds (default: 60s) 899 | managedLedgerPrometheusStatsLatencyRolloverSeconds=60 900 | 901 | # Whether trace managed ledger task execution time 902 | managedLedgerTraceTaskExecution=true 903 | 904 | # New entries check delay for the cursor under the managed ledger. 905 | # If no new messages in the topic, the cursor will try to check again after the delay time. 906 | # For consumption latency sensitive scenario, can set to a smaller value or set to 0. 907 | # Of course, use a smaller value may degrade consumption throughput. Default is 10ms. 908 | managedLedgerNewEntriesCheckDelayInMillis=10 909 | 910 | ### --- Load balancer --- ### 911 | 912 | # Enable load balancer 913 | loadBalancerEnabled=true 914 | 915 | # Percentage of change to trigger load report update 916 | loadBalancerReportUpdateThresholdPercentage=10 917 | 918 | # maximum interval to update load report 919 | loadBalancerReportUpdateMaxIntervalMinutes=15 920 | 921 | # Frequency of report to collect 922 | loadBalancerHostUsageCheckIntervalMinutes=1 923 | 924 | # Enable/disable automatic bundle unloading for load-shedding 925 | loadBalancerSheddingEnabled=true 926 | 927 | # Load shedding interval. Broker periodically checks whether some traffic should be offload from 928 | # some over-loaded broker to other under-loaded brokers 929 | loadBalancerSheddingIntervalMinutes=1 930 | 931 | # Prevent the same topics to be shed and moved to other broker more that once within this timeframe 932 | loadBalancerSheddingGracePeriodMinutes=30 933 | 934 | # Usage threshold to allocate max number of topics to broker 935 | loadBalancerBrokerMaxTopics=50000 936 | 937 | # Usage threshold to determine a broker as over-loaded 938 | loadBalancerBrokerOverloadedThresholdPercentage=85 939 | 940 | # Interval to flush dynamic resource quota to ZooKeeper 941 | loadBalancerResourceQuotaUpdateIntervalMinutes=15 942 | 943 | # enable/disable namespace bundle auto split 944 | loadBalancerAutoBundleSplitEnabled=true 945 | 946 | # enable/disable automatic unloading of split bundles 947 | loadBalancerAutoUnloadSplitBundlesEnabled=true 948 | 949 | # maximum topics in a bundle, otherwise bundle split will be triggered 950 | loadBalancerNamespaceBundleMaxTopics=1000 951 | 952 | # maximum sessions (producers + consumers) in a bundle, otherwise bundle split will be triggered 953 | loadBalancerNamespaceBundleMaxSessions=1000 954 | 955 | # maximum msgRate (in + out) in a bundle, otherwise bundle split will be triggered 956 | loadBalancerNamespaceBundleMaxMsgRate=30000 957 | 958 | # maximum bandwidth (in + out) in a bundle, otherwise bundle split will be triggered 959 | loadBalancerNamespaceBundleMaxBandwidthMbytes=100 960 | 961 | # maximum number of bundles in a namespace 962 | loadBalancerNamespaceMaximumBundles=128 963 | 964 | # Override the auto-detection of the network interfaces max speed. 965 | # This option is useful in some environments (eg: EC2 VMs) where the max speed 966 | # reported by Linux is not reflecting the real bandwidth available to the broker. 967 | # Since the network usage is employed by the load manager to decide when a broker 968 | # is overloaded, it is important to make sure the info is correct or override it 969 | # with the right value here. The configured value can be a double (eg: 0.8) and that 970 | # can be used to trigger load-shedding even before hitting on NIC limits. 971 | loadBalancerOverrideBrokerNicSpeedGbps= 972 | 973 | # Name of load manager to use 974 | loadManagerClassName=org.apache.pulsar.broker.loadbalance.impl.ModularLoadManagerImpl 975 | 976 | # Supported algorithms name for namespace bundle split. 977 | # "range_equally_divide" divides the bundle into two parts with the same hash range size. 978 | # "topic_count_equally_divide" divides the bundle into two parts with the same topics count. 979 | supportedNamespaceBundleSplitAlgorithms=range_equally_divide,topic_count_equally_divide 980 | 981 | # Default algorithm name for namespace bundle split 982 | defaultNamespaceBundleSplitAlgorithm=range_equally_divide 983 | 984 | # load shedding strategy, support OverloadShedder and ThresholdShedder, default is OverloadShedder 985 | loadBalancerLoadSheddingStrategy=org.apache.pulsar.broker.loadbalance.impl.OverloadShedder 986 | 987 | # The broker resource usage threshold. 988 | # When the broker resource usage is gratter than the pulsar cluster average resource usge, 989 | # the threshold shedder will be triggered to offload bundles from the broker. 990 | # It only take effect in ThresholdSheddler strategy. 991 | loadBalancerBrokerThresholdShedderPercentage=10 992 | 993 | # When calculating new resource usage, the history usage accounts for. 994 | # It only take effect in ThresholdSheddler strategy. 995 | loadBalancerHistoryResourcePercentage=0.9 996 | 997 | # The BandWithIn usage weight when calculating new resourde usage. 998 | # It only take effect in ThresholdShedder strategy. 999 | loadBalancerBandwithInResourceWeight=1.0 1000 | 1001 | # The BandWithOut usage weight when calculating new resourde usage. 1002 | # It only take effect in ThresholdShedder strategy. 1003 | loadBalancerBandwithOutResourceWeight=1.0 1004 | 1005 | # The CPU usage weight when calculating new resourde usage. 1006 | # It only take effect in ThresholdShedder strategy. 1007 | loadBalancerCPUResourceWeight=1.0 1008 | 1009 | # The heap memory usage weight when calculating new resourde usage. 1010 | # It only take effect in ThresholdShedder strategy. 1011 | loadBalancerMemoryResourceWeight=1.0 1012 | 1013 | # The direct memory usage weight when calculating new resourde usage. 1014 | # It only take effect in ThresholdShedder strategy. 1015 | loadBalancerDirectMemoryResourceWeight=1.0 1016 | 1017 | # Bundle unload minimum throughput threshold (MB), avoding bundle unload frequently. 1018 | # It only take effect in ThresholdShedder strategy. 1019 | loadBalancerBundleUnloadMinThroughputThreshold=10 1020 | 1021 | ### --- Replication --- ### 1022 | 1023 | # Enable replication metrics 1024 | replicationMetricsEnabled=true 1025 | 1026 | # Max number of connections to open for each broker in a remote cluster 1027 | # More connections host-to-host lead to better throughput over high-latency 1028 | # links. 1029 | replicationConnectionsPerBroker=16 1030 | 1031 | # Replicator producer queue size 1032 | replicationProducerQueueSize=1000 1033 | 1034 | # Replicator prefix used for replicator producer name and cursor name 1035 | replicatorPrefix=pulsar.repl 1036 | 1037 | # Duration to check replication policy to avoid replicator inconsistency 1038 | # due to missing ZooKeeper watch (disable with value 0) 1039 | replicationPolicyCheckDurationSeconds=600 1040 | 1041 | # Default message retention time 1042 | defaultRetentionTimeInMinutes=0 1043 | 1044 | # Default retention size 1045 | defaultRetentionSizeInMB=0 1046 | 1047 | # How often to check whether the connections are still alive 1048 | keepAliveIntervalSeconds=30 1049 | 1050 | # bootstrap namespaces 1051 | bootstrapNamespaces= 1052 | 1053 | ### --- WebSocket --- ### 1054 | 1055 | # Enable the WebSocket API service in broker 1056 | webSocketServiceEnabled=false 1057 | 1058 | # Number of IO threads in Pulsar Client used in WebSocket proxy 1059 | webSocketNumIoThreads=8 1060 | 1061 | # Number of connections per Broker in Pulsar Client used in WebSocket proxy 1062 | webSocketConnectionsPerBroker=8 1063 | 1064 | # Time in milliseconds that idle WebSocket session times out 1065 | webSocketSessionIdleTimeoutMillis=300000 1066 | 1067 | # The maximum size of a text message during parsing in WebSocket proxy 1068 | webSocketMaxTextFrameSize=1048576 1069 | 1070 | ### --- Metrics --- ### 1071 | 1072 | # Enable topic level metrics 1073 | exposeTopicLevelMetricsInPrometheus=true 1074 | 1075 | # Enable consumer level metrics. default is false 1076 | exposeConsumerLevelMetricsInPrometheus=false 1077 | 1078 | # Enable cursor level metrics. default is false 1079 | exposeManagedCursorMetricsInPrometheus=false 1080 | 1081 | # Classname of Pluggable JVM GC metrics logger that can log GC specific metrics 1082 | # jvmGCMetricsLoggerClassName= 1083 | 1084 | ### --- Functions --- ### 1085 | 1086 | # Enable Functions Worker Service in Broker 1087 | functionsWorkerEnabled=false 1088 | 1089 | ### --- Broker Web Stats --- ### 1090 | 1091 | # Enable topic level metrics 1092 | exposePublisherStats=true 1093 | statsUpdateFrequencyInSecs=60 1094 | statsUpdateInitialDelayInSecs=60 1095 | 1096 | # Enable expose the precise backlog stats. 1097 | # Set false to use published counter and consumed counter to calculate, this would be more efficient but may be inaccurate. 1098 | # Default is false. 1099 | exposePreciseBacklogInPrometheus=false 1100 | 1101 | ### --- Schema storage --- ### 1102 | # The schema storage implementation used by this broker 1103 | schemaRegistryStorageClassName=org.apache.pulsar.broker.service.schema.BookkeeperSchemaStorageFactory 1104 | 1105 | # Enforce schema validation on following cases: 1106 | # 1107 | # - if a producer without a schema attempts to produce to a topic with schema, the producer will be 1108 | # failed to connect. PLEASE be carefully on using this, since non-java clients don't support schema. 1109 | # if you enable this setting, it will cause non-java clients failed to produce. 1110 | isSchemaValidationEnforced=false 1111 | 1112 | ### --- Ledger Offloading --- ### 1113 | 1114 | # The directory for all the offloader implementations 1115 | offloadersDirectory=./offloaders 1116 | 1117 | # Driver to use to offload old data to long term storage (Possible values: S3, aws-s3, google-cloud-storage) 1118 | # When using google-cloud-storage, Make sure both Google Cloud Storage and Google Cloud Storage JSON API are enabled for 1119 | # the project (check from Developers Console -> Api&auth -> APIs). 1120 | managedLedgerOffloadDriver= 1121 | 1122 | # Maximum number of thread pool threads for ledger offloading 1123 | managedLedgerOffloadMaxThreads=2 1124 | 1125 | # Maximum prefetch rounds for ledger reading for offloading 1126 | managedLedgerOffloadPrefetchRounds=1 1127 | 1128 | # Use Open Range-Set to cache unacked messages 1129 | managedLedgerUnackedRangesOpenCacheSetEnabled=true 1130 | 1131 | # For Amazon S3 ledger offload, AWS region 1132 | s3ManagedLedgerOffloadRegion= 1133 | 1134 | # For Amazon S3 ledger offload, Bucket to place offloaded ledger into 1135 | s3ManagedLedgerOffloadBucket= 1136 | 1137 | # For Amazon S3 ledger offload, Alternative endpoint to connect to (useful for testing) 1138 | s3ManagedLedgerOffloadServiceEndpoint= 1139 | 1140 | # For Amazon S3 ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) 1141 | s3ManagedLedgerOffloadMaxBlockSizeInBytes=67108864 1142 | 1143 | # For Amazon S3 ledger offload, Read buffer size in bytes (1MB by default) 1144 | s3ManagedLedgerOffloadReadBufferSizeInBytes=1048576 1145 | 1146 | # For Google Cloud Storage ledger offload, region where offload bucket is located. 1147 | # reference this page for more details: https://cloud.google.com/storage/docs/bucket-locations 1148 | gcsManagedLedgerOffloadRegion= 1149 | 1150 | # For Google Cloud Storage ledger offload, Bucket to place offloaded ledger into 1151 | gcsManagedLedgerOffloadBucket= 1152 | 1153 | # For Google Cloud Storage ledger offload, Max block size in bytes. (64MB by default, 5MB minimum) 1154 | gcsManagedLedgerOffloadMaxBlockSizeInBytes=67108864 1155 | 1156 | # For Google Cloud Storage ledger offload, Read buffer size in bytes (1MB by default) 1157 | gcsManagedLedgerOffloadReadBufferSizeInBytes=1048576 1158 | 1159 | # For Google Cloud Storage, path to json file containing service account credentials. 1160 | # For more details, see the "Service Accounts" section of https://support.google.com/googleapi/answer/6158849 1161 | gcsManagedLedgerOffloadServiceAccountKeyFile= 1162 | 1163 | #For File System Storage, file system profile path 1164 | fileSystemProfilePath=../conf/filesystem_offload_core_site.xml 1165 | 1166 | #For File System Storage, file system uri 1167 | fileSystemURI= 1168 | 1169 | ### --- Deprecated config variables --- ### 1170 | 1171 | # Deprecated. Use configurationStoreServers 1172 | globalZookeeperServers= 1173 | 1174 | # Deprecated - Enable TLS when talking with other clusters to replicate messages 1175 | replicationTlsEnabled=false 1176 | 1177 | # Deprecated. Use brokerDeleteInactiveTopicsFrequencySeconds 1178 | brokerServicePurgeInactiveFrequencyInSeconds=60 1179 | 1180 | ### --- Transaction config variables --- ### 1181 | 1182 | # Enable transaction coordinator in broker 1183 | transactionCoordinatorEnabled=false 1184 | transactionMetadataStoreProviderClassName=org.apache.pulsar.transaction.coordinator.impl.MLTransactionMetadataStoreProvider 1185 | -------------------------------------------------------------------------------- /pulsar/conf/proxy.conf: -------------------------------------------------------------------------------- 1 | # 2 | # Licensed to the Apache Software Foundation (ASF) under one 3 | # or more contributor license agreements. See the NOTICE file 4 | # distributed with this work for additional information 5 | # regarding copyright ownership. The ASF licenses this file 6 | # to you under the Apache License, Version 2.0 (the 7 | # "License"); you may not use this file except in compliance 8 | # with the License. You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, 13 | # software distributed under the License is distributed on an 14 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | # KIND, either express or implied. See the License for the 16 | # specific language governing permissions and limitations 17 | # under the License. 18 | # 19 | 20 | ### --- Broker Discovery --- ### 21 | 22 | # The ZooKeeper quorum connection string (as a comma-separated list) 23 | zookeeperServers=zk1:2181,zk2:2181,zk3:2181 24 | 25 | # Configuration store connection string (as a comma-separated list) 26 | configurationStoreServers=zk1:2181,zk2:2181,zk3:2181 27 | 28 | # if Service Discovery is Disabled this url should point to the discovery service provider. 29 | brokerServiceURL= 30 | brokerServiceURLTLS= 31 | 32 | # These settings are unnecessary if `zookeeperServers` is specified 33 | brokerWebServiceURL= 34 | brokerWebServiceURLTLS= 35 | 36 | # If function workers are setup in a separate cluster, configure the following 2 settings 37 | # to point to the function workers cluster 38 | functionWorkerWebServiceURL= 39 | functionWorkerWebServiceURLTLS= 40 | 41 | # ZooKeeper session timeout (in milliseconds) 42 | zookeeperSessionTimeoutMs=30000 43 | 44 | # ZooKeeper cache expiry time in seconds 45 | zooKeeperCacheExpirySeconds=300 46 | 47 | ### --- Server --- ### 48 | 49 | # Hostname or IP address the service binds on, default is 0.0.0.0. 50 | bindAddress=0.0.0.0 51 | 52 | # Hostname or IP address the service advertises to the outside world. 53 | # If not set, the value of `InetAddress.getLocalHost().getHostname()` is used. 54 | advertisedAddress= 55 | 56 | # Enable or disable the HAProxy protocol. 57 | haProxyProtocolEnabled=false 58 | 59 | # The port to use for server binary Protobuf requests 60 | servicePort=6650 61 | 62 | # The port to use to server binary Protobuf TLS requests 63 | servicePortTls= 64 | 65 | # Port that discovery service listen on 66 | webServicePort=8080 67 | 68 | # Port to use to server HTTPS request 69 | webServicePortTls= 70 | 71 | # Path for the file used to determine the rotation status for the proxy instance when responding 72 | # to service discovery health checks 73 | statusFilePath= 74 | 75 | # Proxy log level, default is 0. 76 | # 0: Do not log any tcp channel info 77 | # 1: Parse and log any tcp channel info and command info without message body 78 | # 2: Parse and log channel info, command info and message body 79 | proxyLogLevel=0 80 | 81 | ### ---Authorization --- ### 82 | 83 | # Role names that are treated as "super-users," meaning that they will be able to perform all admin 84 | # operations and publish/consume to/from all topics (as a comma-separated list) 85 | superUserRoles= 86 | 87 | # Whether authorization is enforced by the Pulsar proxy 88 | authorizationEnabled=false 89 | 90 | # Authorization provider as a fully qualified class name 91 | authorizationProvider=org.apache.pulsar.broker.authorization.PulsarAuthorizationProvider 92 | 93 | # Whether client authorization credentials are forwared to the broker for re-authorization. 94 | # Authentication must be enabled via authenticationEnabled=true for this to take effect. 95 | forwardAuthorizationCredentials=false 96 | 97 | ### --- Authentication --- ### 98 | 99 | # Whether authentication is enabled for the Pulsar proxy 100 | authenticationEnabled=false 101 | 102 | # Authentication provider name list (a comma-separated list of class names) 103 | authenticationProviders= 104 | 105 | # When this parameter is not empty, unauthenticated users perform as anonymousUserRole 106 | anonymousUserRole= 107 | 108 | ### --- Client Authentication --- ### 109 | 110 | # The three brokerClient* authentication settings below are for the proxy itself and determine how it 111 | # authenticates with Pulsar brokers 112 | 113 | # The authentication plugin used by the Pulsar proxy to authenticate with Pulsar brokers 114 | brokerClientAuthenticationPlugin= 115 | 116 | # The authentication parameters used by the Pulsar proxy to authenticate with Pulsar brokers 117 | brokerClientAuthenticationParameters= 118 | 119 | # The path to trusted certificates used by the Pulsar proxy to authenticate with Pulsar brokers 120 | brokerClientTrustCertsFilePath= 121 | 122 | # Whether TLS is enabled when communicating with Pulsar brokers 123 | tlsEnabledWithBroker=false 124 | 125 | # Tls cert refresh duration in seconds (set 0 to check on every new connection) 126 | tlsCertRefreshCheckDurationSec=300 127 | 128 | ##### --- Rate Limiting --- ##### 129 | 130 | # Max concurrent inbound connections. The proxy will reject requests beyond that. 131 | maxConcurrentInboundConnections=10000 132 | 133 | # Max concurrent outbound connections. The proxy will error out requests beyond that. 134 | maxConcurrentLookupRequests=50000 135 | 136 | ##### --- TLS --- ##### 137 | 138 | # Deprecated - use servicePortTls and webServicePortTls instead 139 | tlsEnabledInProxy=false 140 | 141 | # Path for the TLS certificate file 142 | tlsCertificateFilePath= 143 | 144 | # Path for the TLS private key file 145 | tlsKeyFilePath= 146 | 147 | # Path for the trusted TLS certificate file. 148 | # This cert is used to verify that any certs presented by connecting clients 149 | # are signed by a certificate authority. If this verification 150 | # fails, then the certs are untrusted and the connections are dropped. 151 | tlsTrustCertsFilePath= 152 | 153 | # Accept untrusted TLS certificate from client. 154 | # If true, a client with a cert which cannot be verified with the 155 | # 'tlsTrustCertsFilePath' cert will allowed to connect to the server, 156 | # though the cert will not be used for client authentication. 157 | tlsAllowInsecureConnection=false 158 | 159 | # Whether the hostname is validated when the proxy creates a TLS connection with brokers 160 | tlsHostnameVerificationEnabled=false 161 | 162 | # Specify the tls protocols the broker will use to negotiate during TLS handshake 163 | # (a comma-separated list of protocol names). 164 | # Examples:- [TLSv1.2, TLSv1.1, TLSv1] 165 | tlsProtocols= 166 | 167 | # Specify the tls cipher the broker will use to negotiate during TLS Handshake 168 | # (a comma-separated list of ciphers). 169 | # Examples:- [TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256] 170 | tlsCiphers= 171 | 172 | # Whether client certificates are required for TLS. Connections are rejected if the client 173 | # certificate isn't trusted. 174 | tlsRequireTrustedClientCertOnConnect=false 175 | 176 | ##### --- HTTP --- ##### 177 | 178 | # Http directs to redirect to non-pulsar services. 179 | httpReverseProxyConfigs= 180 | 181 | # Http output buffer size. The amount of data that will be buffered for http requests 182 | # before it is flushed to the channel. A larger buffer size may result in higher http throughput 183 | # though it may take longer for the client to see data. 184 | # If using HTTP streaming via the reverse proxy, this should be set to the minimum value, 1, 185 | # so that clients see the data as soon as possible. 186 | httpOutputBufferSize=32768 187 | 188 | # Number of threads to use for HTTP requests processing. Default is 189 | # 2 * Runtime.getRuntime().availableProcessors() 190 | httpNumThreads= 191 | 192 | # Enable the enforcement of limits on the incoming HTTP requests 193 | httpRequestsLimitEnabled=false 194 | 195 | # Max HTTP requests per seconds allowed. The excess of requests will be rejected with HTTP code 429 (Too many requests) 196 | httpRequestsMaxPerSecond=100.0 197 | 198 | 199 | ### --- Token Authentication Provider --- ### 200 | 201 | ## Symmetric key 202 | # Configure the secret key to be used to validate auth tokens 203 | # The key can be specified like: 204 | # tokenSecretKey=data:;base64,xxxxxxxxx 205 | # tokenSecretKey=file:///my/secret.key 206 | tokenSecretKey= 207 | 208 | ## Asymmetric public/private key pair 209 | # Configure the public key to be used to validate auth tokens 210 | # The key can be specified like: 211 | # tokenPublicKey=data:;base64,xxxxxxxxx 212 | # tokenPublicKey=file:///my/public.key 213 | tokenPublicKey= 214 | 215 | # The token "claim" that will be interpreted as the authentication "role" or "principal" by AuthenticationProviderToken (defaults to "sub" if blank) 216 | tokenAuthClaim= 217 | 218 | # The token audience "claim" name, e.g. "aud", that will be used to get the audience from token. 219 | # If not set, audience will not be verified. 220 | tokenAudienceClaim= 221 | 222 | # The token audience stands for this broker. The field `tokenAudienceClaim` of a valid token, need contains this. 223 | tokenAudience= 224 | 225 | ### --- WebSocket config variables --- ### 226 | 227 | # Enable or disable the WebSocket servlet. 228 | webSocketServiceEnabled=false 229 | 230 | # Name of the cluster to which this broker belongs to 231 | clusterName= 232 | 233 | ### --- Deprecated config variables --- ### 234 | 235 | # Deprecated. Use configurationStoreServers 236 | globalZookeeperServers= 237 | -------------------------------------------------------------------------------- /pulsar/conf/zookeeper.conf: -------------------------------------------------------------------------------- 1 | # 2 | # Licensed to the Apache Software Foundation (ASF) under one 3 | # or more contributor license agreements. See the NOTICE file 4 | # distributed with this work for additional information 5 | # regarding copyright ownership. The ASF licenses this file 6 | # to you under the Apache License, Version 2.0 (the 7 | # "License"); you may not use this file except in compliance 8 | # with the License. You may obtain a copy of the License at 9 | # 10 | # http://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, 13 | # software distributed under the License is distributed on an 14 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | # KIND, either express or implied. See the License for the 16 | # specific language governing permissions and limitations 17 | # under the License. 18 | # 19 | 20 | # The number of milliseconds of each tick 21 | tickTime=2000 22 | # The number of ticks that the initial 23 | # synchronization phase can take 24 | initLimit=10 25 | # The number of ticks that can pass between 26 | # sending a request and getting an acknowledgement 27 | syncLimit=5 28 | # the directory where the snapshot is stored. 29 | dataDir=data/zookeeper 30 | # the port at which the clients will connect 31 | clientPort=2181 32 | 33 | # the port at which the admin will listen 34 | admin.enableServer=true 35 | admin.serverPort=9990 36 | 37 | # the maximum number of client connections. 38 | # increase this if you need to handle more clients 39 | #maxClientCnxns=60 40 | # 41 | # Be sure to read the maintenance section of the 42 | # administrator guide before turning on autopurge. 43 | # 44 | # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance 45 | # 46 | # The number of snapshots to retain in dataDir 47 | autopurge.snapRetainCount=3 48 | # Purge task interval in hours 49 | # Set to "0" to disable auto purge feature 50 | autopurge.purgeInterval=1 51 | 52 | # Requires updates to be synced to media of the transaction log before finishing 53 | # processing the update. If this option is set to 'no', ZooKeeper will not require 54 | # updates to be synced to the media. 55 | # WARNING: it's not recommended to run a production ZK cluster with forceSync disabled. 56 | forceSync=yes 57 | 58 | #zookeper servers 59 | server.1=zk1:2888:3888 60 | server.2=zk2:2888:3888 61 | server.3=zk3:2888:3888 62 | -------------------------------------------------------------------------------- /pulsar/docker-compose-cluster.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | zk1: 5 | image: apachepulsar/pulsar:2.7.2 6 | hostname: zk1 7 | ports: 8 | - "2181:2181" 9 | volumes: 10 | - ./data/zk1:/pulsar/data/zookeeper/ 11 | - ./conf/zookeeper.conf:/pulsar/conf/zookeeper.conf 12 | restart: on-failure 13 | entrypoint: 14 | - sh 15 | - -c 16 | - | 17 | mkdir -p data/zookeeper 18 | echo 1 > data/zookeeper/myid 19 | bin/pulsar zookeeper 20 | 21 | zk2: 22 | image: apachepulsar/pulsar:2.7.2 23 | hostname: zk2 24 | ports: 25 | - "2182:2181" 26 | volumes: 27 | - ./data/zk2:/pulsar/data/zookeeper/ 28 | - ./conf/zookeeper.conf:/pulsar/conf/zookeeper.conf 29 | restart: on-failure 30 | entrypoint: 31 | - sh 32 | - -c 33 | - | 34 | mkdir -p data/zookeeper 35 | echo 2 > data/zookeeper/myid 36 | bin/pulsar zookeeper 37 | 38 | zk3: 39 | image: apachepulsar/pulsar:2.7.2 40 | hostname: zk3 41 | ports: 42 | - "2183:2181" 43 | volumes: 44 | - ./data/zk3:/pulsar/data/zookeeper/ 45 | - ./conf/zookeeper.conf:/pulsar/conf/zookeeper.conf 46 | restart: on-failure 47 | entrypoint: 48 | - sh 49 | - -c 50 | - | 51 | mkdir -p data/zookeeper 52 | echo 3 > data/zookeeper/myid 53 | bin/pulsar zookeeper 54 | 55 | init-metadata: 56 | image: apachepulsar/pulsar:2.7.2 57 | hostname: init-metadata 58 | restart: on-failure 59 | entrypoint: bin/pulsar initialize-cluster-metadata \ 60 | --cluster pulsar-cluster-1 \ 61 | --zookeeper zk1:2181,zk2:2181,zk3:2181 \ 62 | --configuration-store zk1:2181,zk2:2181,zk3:2181 \ 63 | --web-service-url http://pulsar-proxy:8080 \ 64 | --broker-service-url pulsar://pulsar-proxy:6650 65 | depends_on: 66 | - zk1 67 | - zk2 68 | - zk3 69 | 70 | bookie1: 71 | image: apachepulsar/pulsar:2.7.2 72 | hostname: bookie1 73 | ports: 74 | - "3181:3181" 75 | volumes: 76 | - ./data/bookie1:/pulsar/data/bookkeeper/ 77 | - ./conf/bookkeeper.conf:/pulsar/conf/bookkeeper.conf 78 | restart: on-failure 79 | entrypoint: bin/pulsar bookie 80 | depends_on: 81 | - init-metadata 82 | 83 | bookie2: 84 | image: apachepulsar/pulsar:2.7.2 85 | hostname: bookie2 86 | ports: 87 | - "3182:3181" 88 | volumes: 89 | - ./data/bookie2:/pulsar/data/bookkeeper/ 90 | - ./conf/bookkeeper.conf:/pulsar/conf/bookkeeper.conf 91 | restart: on-failure 92 | entrypoint: bin/pulsar bookie 93 | depends_on: 94 | - init-metadata 95 | 96 | bookie3: 97 | image: apachepulsar/pulsar:2.7.2 98 | hostname: bookie3 99 | ports: 100 | - "3183:3181" 101 | volumes: 102 | - ./data/bookie3:/pulsar/data/bookkeeper/ 103 | - ./conf/bookkeeper.conf:/pulsar/conf/bookkeeper.conf 104 | restart: on-failure 105 | entrypoint: bin/pulsar bookie 106 | depends_on: 107 | - init-metadata 108 | 109 | broker1: 110 | image: apachepulsar/pulsar:2.7.2 111 | hostname: broker1 112 | volumes: 113 | - ./data/broker1:/pulsar/data/broker/ 114 | - ./conf/broker.conf:/pulsar/conf/broker.conf 115 | restart: on-failure 116 | entrypoint: bin/pulsar broker 117 | depends_on: 118 | - bookie1 119 | - bookie2 120 | - bookie3 121 | 122 | broker2: 123 | image: apachepulsar/pulsar:2.7.2 124 | hostname: broker2 125 | volumes: 126 | - ./data/broker2:/pulsar/data/broker/ 127 | - ./conf/broker.conf:/pulsar/conf/broker.conf 128 | restart: on-failure 129 | entrypoint: bin/pulsar broker 130 | depends_on: 131 | - bookie1 132 | - bookie2 133 | - bookie3 134 | 135 | broker3: 136 | image: apachepulsar/pulsar:2.7.2 137 | hostname: broker3 138 | volumes: 139 | - ./data/broker2:/pulsar/data/broker/ 140 | - ./conf/broker.conf:/pulsar/conf/broker.conf 141 | restart: on-failure 142 | entrypoint: bin/pulsar broker 143 | depends_on: 144 | - bookie1 145 | - bookie2 146 | - bookie3 147 | 148 | pulsar-proxy: 149 | image: apachepulsar/pulsar:2.7.2 150 | hostname: pulsar-proxy 151 | ports: 152 | - "6650:6650" 153 | - "8080:8080" 154 | volumes: 155 | - ./conf/proxy.conf:/pulsar/conf/proxy.conf 156 | restart: on-failure 157 | entrypoint: bin/pulsar proxy 158 | depends_on: 159 | - broker1 160 | - broker2 161 | 162 | dashboard: 163 | image: apachepulsar/pulsar-manager:v0.2.0 164 | volumes: 165 | - ./conf/application.properties:/pulsar-manager/pulsar-manager/application.properties 166 | - ./conf/bkvm.conf:/pulsar-manager/pulsar-manager/bkvm.conf 167 | ports: 168 | - "9527:9527" 169 | - "7750:7750" 170 | environment: 171 | SPRING_CONFIGURATION_FILE: /pulsar-manager/pulsar-manager/application.properties 172 | restart: on-failure 173 | entrypoint: 174 | - sh 175 | - -c 176 | - | 177 | /pulsar-manager/entrypoint.sh & 178 | tail -F /pulsar-manager/pulsar-manager/pulsar-manager.log 179 | depends_on: 180 | - pulsar-proxy 181 | 182 | 183 | -------------------------------------------------------------------------------- /pulsar/images/2021-06-15-10-35-39.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/pulsar/images/2021-06-15-10-35-39.png -------------------------------------------------------------------------------- /pulsar/images/2021-06-15-10-47-45.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/pulsar/images/2021-06-15-10-47-45.png -------------------------------------------------------------------------------- /pulsar/images/2021-06-15-10-51-18.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bobo495/docker-sample/57aa49577ed5e1467d75406604f141262d46cd2b/pulsar/images/2021-06-15-10-51-18.png -------------------------------------------------------------------------------- /pulsar/initAccount.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 初始化管理员账号 3 | 4 | CSRF_TOKEN=$(curl http://localhost:7750/pulsar-manager/csrf-token) 5 | 6 | curl \ 7 | -H "X-XSRF-TOKEN: $CSRF_TOKEN" \ 8 | -H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \ 9 | -H 'Content-Type: application/json' \ 10 | -X PUT http://localhost:7750/pulsar-manager/users/superuser \ 11 | -d '{"name": "admin", "password": "pulsar", "description": "test", "email": "username@test.org"}' 12 | -------------------------------------------------------------------------------- /pulsar/startCluster.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 初始化挂载数据目录 4 | rm -rf data 5 | mkdir -p data/{zk1,zk2,zk3} 6 | mkdir -p data/{bookie1,bookie2,bookie3} 7 | mkdir -p data/{broker1,broker2,broker3} 8 | 9 | # 启动服务 10 | docker compose -f docker-compose-cluster.yml up -d 11 | -------------------------------------------------------------------------------- /pulsar/startStandalone.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 删除现有容器(如果存在) 4 | docker rm -f pulsar-standalone 5 | docker rm -f pulsar-manager 6 | 7 | docker run -itd \ 8 | --name pulsar-standalone \ 9 | -p 6650:6650 \ 10 | -p 8080:8080 \ 11 | apachepulsar/pulsar:2.7.2 \ 12 | sh -c "bin/pulsar standalone > pulsar.log 2>&1 & \ 13 | sleep 30 && bin/pulsar-admin clusters update standalone \ 14 | --url http://pulsar-standalone:8080 \ 15 | --broker-url pulsar://pulsar-standalone:6650 & \ 16 | tail -F pulsar.log" 17 | 18 | docker run -itd \ 19 | --name pulsar-manager \ 20 | -p 9527:9527 -p 7750:7750 \ 21 | -e SPRING_CONFIGURATION_FILE=/pulsar-manager/pulsar-manager/application.properties \ 22 | --link pulsar-standalone \ 23 | --entrypoint="" \ 24 | apachepulsar/pulsar-manager:v0.2.0 \ 25 | sh -c "sed -i '/^default.environment.name/ s|.*|default.environment.name=pulsar-standalone|' /pulsar-manager/pulsar-manager/application.properties & \ 26 | sed -i '/^default.environment.service_url/ s|.*|default.environment.service_url=http://pulsar-standalone:8080|' /pulsar-manager/pulsar-manager/application.properties & \ 27 | /pulsar-manager/entrypoint.sh & \ 28 | tail -F /pulsar-manager/pulsar-manager/pulsar-manager.log" 29 | 30 | -------------------------------------------------------------------------------- /pulsar/stopCluster.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 停止服务 4 | docker compose -f docker-compose-cluster.yml down 5 | -------------------------------------------------------------------------------- /pulsar/stopStandalone.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # 删除现有容器(如果存在) 4 | docker rm -f pulsar-standalone 5 | docker rm -f pulsar-manager 6 | 7 | --------------------------------------------------------------------------------