├── .gitignore
├── README.md
├── sasl-scram
│   ├── .env
│   ├── add_kafka_accounts_in_zookeeper.sh
│   ├── docker-compose-scram.yml
│   ├── kafka-consumers-producers.sh
│   ├── secrets
│   │   ├── consumer_jaas.conf
│   │   ├── host.consumer.sasl_scram.config
│   │   ├── host.producer.sasl_scram.config
│   │   ├── kafka_server_jaas.conf
│   │   ├── producer_jaas.conf
│   │   ├── zookeeper_client_jaas.conf
│   │   └── zookeeper_server_jaas.conf
│   └── zookeeper_scripts.sh
├── secrets
│   ├── create-certs.sh
│   ├── host.consumer.ssl.config
│   └── host.producer.ssl.config
├── ssl-only
│   ├── docker-compose-ssl-only.yml
│   └── kafka-consumers-producers.sh
├── start_sasl_scram_cluster.sh
└── start_ssl_only_cluster.sh

/.gitignore:
--------------------------------------------------------------------------------
1 | .DS_Store
2 | venv
3 | __pycache__
4 | *.pyc
5 | *ipynb*
6 | *cache*
7 | .idea
8 | *.csr
9 | *.srl
10 | *.req
11 | *.log
12 | *.jar
13 | Untitled*
14 | docs/_build
15 | docs/README.md
16 | target
17 | .idea
18 | *.iml
19 | local.make
20 | java/.classpath
21 | java/.project
22 | java/.settings/
23 | .vscode/*
24 | 
25 | ### SSL files ##
26 | *.crt
27 | *.jks
28 | *_creds
29 | *.key
30 | *.pem
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Kafka security
2 | 
3 | Implementation of Kafka authentication and authorisation using different approaches.
4 | 
5 | 
6 | ## Run locally
7 | 
8 | Every directory has its own compose file and scripts to test it. In the compose files,
9 | all services use a network named `kafka-cluster-network`, which means that
10 | any container outside the compose file can access the Kafka and Zookeeper nodes by
11 | being attached to this network.
For example:
12 | 
13 | ```
14 | docker run -it --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 kafka-topics --zookeeper zookeeper-1:22181 --list
15 | ```
16 | 
17 | ### Setup environment variables
18 | 
19 | There are two environment variables that need to be configured:
20 | 
21 | - `export KAFKA_SSL_SECRETS_DIR=$PWD/secrets`
22 | - `export KAFKA_SASL_SCRAM_SECRETS_DIR=$PWD/sasl-scram/secrets`
23 | 
24 | 
25 | ### SSL only
26 | 
27 | To start a Kafka and Zookeeper cluster configured with SSL only, run the script `start_ssl_only_cluster.sh`.
28 | 
29 | ### SASL/SCRAM
30 | 
31 | This setup configures both Zookeeper and Kafka to use SASL/SCRAM. To run it:
32 | 
33 | - Make sure the SSL keystores and truststores have been generated (see `secrets/create-certs.sh`) and stored in the directory
34 | `kafka-security-ssl-sasl/secrets`
35 | - Run the command `kafka-security-ssl-sasl/start_sasl_scram_cluster.sh`
36 | - To run a console producer and consumer, see the commands in the file `kafka-security-ssl-sasl/sasl-scram/kafka-consumers-producers.sh`
37 | - To add a new account that can connect to Kafka, see the commands in the script file `sasl-scram/add_kafka_accounts_in_zookeeper.sh`
38 | 
39 | 
40 | ## References
41 | 
42 | - [Red Hat tutorial to configure Zookeeper and Kafka](https://access.redhat.com/documentation/en-us/red_hat_amq_streams/1.0-beta/html/using_amq_streams_on_red_hat_enterprise_linux_rhel/configuring_zookeeper#assembly-configuring-zookeeper-authentication-str)
43 | - [Confluent reference / configure SCRAM](https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_scram.html)
44 | - [Cloudera slides to configure SCRAM](https://www.slideshare.net/JeanPaulAzar1/kafka-tutorial-kafka-security)
45 | - [Blog describing the steps to configure SASL/SCRAM](https://sharebigdata.wordpress.com/category/kafka/multiple-saslplainscram/)
46 | 
--------------------------------------------------------------------------------
/sasl-scram/.env:
-------------------------------------------------------------------------------- 1 | CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS=kafka-broker-1:19094,kafka-broker-2:29094,kafka-broker-3:39094 2 | KAFKA_MIN_INSYNC_REPLICAS=1 3 | KAFKA_SUPER_USERS=User:metricsreporter;User:kafkabroker;User:kafkaclient -------------------------------------------------------------------------------- /sasl-scram/add_kafka_accounts_in_zookeeper.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | 4 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/producer:/etc/kafka/secrets \ 5 | -v ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_client_jaas.conf:/etc/kafka/secrets/zookeeper_client_jaas.conf \ 6 | -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_client_jaas.conf" \ 7 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config \ 8 | 'SCRAM-SHA-256=[iterations=4096,password=password],SCRAM-SHA-512=[iterations=4096,password=password]' \ 9 | --entity-type users --entity-name metricsreporter 10 | 11 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/producer:/etc/kafka/secrets \ 12 | -v ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_client_jaas.conf:/etc/kafka/secrets/zookeeper_client_jaas.conf \ 13 | -e KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_client_jaas.conf" \ 14 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config \ 15 | 'SCRAM-SHA-256=[iterations=4096,password=password],SCRAM-SHA-512=[iterations=4096,password=password]' \ 16 | --entity-type users --entity-name kafkabroker 17 | 18 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/producer:/etc/kafka/secrets \ 19 | -v ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_client_jaas.conf:/etc/kafka/secrets/zookeeper_client_jaas.conf \ 20 | -e 
KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_client_jaas.conf" \ 21 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 kafka-configs --zookeeper zookeeper-1:22181 \ 22 | --describe --entity-type users --entity-name kafkabroker -------------------------------------------------------------------------------- /sasl-scram/docker-compose-scram.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3.5' 3 | services: 4 | zookeeper-add-kafka-users: 5 | image: confluentinc/cp-kafka:5.0.1 6 | container_name: "zookeeper-add-kafka-users" 7 | depends_on: 8 | - zookeeper-1 9 | - zookeeper-2 10 | - zookeeper-3 11 | command: "bash -c 'echo Waiting for Zookeeper to be ready... && \ 12 | cub zk-ready zookeeper-1:22181 120 && \ 13 | cub zk-ready zookeeper-2:32181 120 && \ 14 | cub zk-ready zookeeper-3:42181 120 && \ 15 | kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config 'SCRAM-SHA-256=[iterations=4096,password=password]' --entity-type users --entity-name metricsreporter && \ 16 | kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config 'SCRAM-SHA-256=[iterations=4096,password=password]' --entity-type users --entity-name kafkaclient && \ 17 | kafka-configs --zookeeper zookeeper-1:22181 --alter --add-config 'SCRAM-SHA-256=[iterations=4096,password=password]' --entity-type users --entity-name kafkabroker '" 18 | environment: 19 | KAFKA_BROKER_ID: ignored 20 | KAFKA_ZOOKEEPER_CONNECT: ignored 21 | KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_client_jaas.conf 22 | networks: 23 | - kafka-cluster-network 24 | volumes: 25 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_client_jaas.conf:/etc/kafka/secrets/zookeeper_client_jaas.conf 26 | zookeeper-1: 27 | image: confluentinc/cp-zookeeper:5.0.1 28 | hostname: zookeeper-1 29 | container_name: zookeeper-1 30 | ports: 31 | - "22181:22181" 32 | environment: 33 | ZOOKEEPER_SERVER_ID: 1 34 | 
ZOOKEEPER_CLIENT_PORT: 22181 35 | ZOOKEEPER_TICK_TIME: 2000 36 | ZOOKEEPER_INIT_LIMIT: 5 37 | ZOOKEEPER_SYNC_LIMIT: 2 38 | ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "DEBUG" 39 | ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888 40 | KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_server_jaas.conf 41 | -Dquorum.auth.enableSasl=true 42 | -Dquorum.auth.learnerRequireSasl=true 43 | -Dquorum.auth.serverRequireSasl=true 44 | -Dquorum.cnxn.threads.size=20 45 | -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider 46 | -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider 47 | -DjaasLoginRenew=3600000 48 | -DrequireClientAuthScheme=sasl 49 | -Dquorum.auth.learner.loginContext=QuorumLearner 50 | -Dquorum.auth.server.loginContext=QuorumServer 51 | networks: 52 | - kafka-cluster-network 53 | volumes: 54 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_server_jaas.conf:/etc/kafka/secrets/zookeeper_server_jaas.conf 55 | 56 | zookeeper-2: 57 | image: confluentinc/cp-zookeeper:5.0.1 58 | hostname: zookeeper-2 59 | container_name: zookeeper-2 60 | ports: 61 | - "32181:32181" 62 | environment: 63 | ZOOKEEPER_SERVER_ID: 2 64 | ZOOKEEPER_CLIENT_PORT: 32181 65 | ZOOKEEPER_TICK_TIME: 2000 66 | ZOOKEEPER_INIT_LIMIT: 5 67 | ZOOKEEPER_SYNC_LIMIT: 2 68 | ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "DEBUG" 69 | ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888 70 | KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_server_jaas.conf 71 | -Dquorum.auth.enableSasl=true 72 | -Dquorum.auth.learnerRequireSasl=true 73 | -Dquorum.auth.serverRequireSasl=true 74 | -Dquorum.cnxn.threads.size=20 75 | -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider 76 | -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider 77 | -DjaasLoginRenew=3600000 78 | 
-DrequireClientAuthScheme=sasl 79 | -Dquorum.auth.learner.loginContext=QuorumLearner 80 | -Dquorum.auth.server.loginContext=QuorumServer 81 | networks: 82 | - kafka-cluster-network 83 | volumes: 84 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_server_jaas.conf:/etc/kafka/secrets/zookeeper_server_jaas.conf 85 | 86 | 87 | zookeeper-3: 88 | image: confluentinc/cp-zookeeper:5.0.1 89 | hostname: zookeeper-3 90 | container_name: zookeeper-3 91 | ports: 92 | - "42181:42181" 93 | environment: 94 | ZOOKEEPER_SERVER_ID: 3 95 | ZOOKEEPER_CLIENT_PORT: 42181 96 | ZOOKEEPER_TICK_TIME: 2000 97 | ZOOKEEPER_INIT_LIMIT: 5 98 | ZOOKEEPER_SYNC_LIMIT: 2 99 | ZOOKEEPER_LOG4J_ROOT_LOGLEVEL: "DEBUG" 100 | ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888 101 | KAFKA_OPTS: -Djava.security.auth.login.config=/etc/kafka/secrets/zookeeper_server_jaas.conf 102 | -Dquorum.auth.enableSasl=true 103 | -Dquorum.auth.learnerRequireSasl=true 104 | -Dquorum.auth.serverRequireSasl=true 105 | -Dquorum.cnxn.threads.size=20 106 | -Dzookeeper.authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider 107 | -Dzookeeper.authProvider.2=org.apache.zookeeper.server.auth.DigestAuthenticationProvider 108 | -DjaasLoginRenew=3600000 109 | -DrequireClientAuthScheme=sasl 110 | -Dquorum.auth.learner.loginContext=QuorumLearner 111 | -Dquorum.auth.server.loginContext=QuorumServer 112 | networks: 113 | - kafka-cluster-network 114 | volumes: 115 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}/zookeeper_server_jaas.conf:/etc/kafka/secrets/zookeeper_server_jaas.conf 116 | 117 | kafka-broker-1: 118 | image: confluentinc/cp-kafka:5.0.1 119 | hostname: kafka-broker-1 120 | container_name: kafka-broker-1 121 | ports: 122 | - "19093:19093" 123 | - "19094:19094" 124 | depends_on: 125 | - zookeeper-1 126 | - zookeeper-2 127 | - zookeeper-3 128 | environment: 129 | KAFKA_BROKER_ID: 1 130 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181 131 | 
KAFKA_ADVERTISED_LISTENERS: SSL://kafka-broker-1:19093,SASL_SSL://kafka-broker-1:19094 132 | KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker-1.keystore.jks 133 | KAFKA_SSL_KEYSTORE_CREDENTIALS: broker-1_keystore_creds 134 | KAFKA_SSL_KEY_CREDENTIALS: broker-1_sslkey_creds 135 | KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker-1.truststore.jks 136 | KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker-1_truststore_creds 137 | CONFLUENT_METRICS_REPORTER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker-1.truststore.jks 138 | CONFLUENT_METRICS_REPORTER_SSL_TRUSTSTORE_PASSWORD: confluent 139 | CONFLUENT_METRICS_REPORTER_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker-1.keystore.jks 140 | CONFLUENT_METRICS_REPORTER_SSL_KEYSTORE_PASSWORD: confluent 141 | KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: "HTTPS" 142 | KAFKA_SSL_CLIENT_AUTH: requested 143 | KAFKA_MIN_INSYNC_REPLICAS: ${KAFKA_MIN_INSYNC_REPLICAS} 144 | KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256 145 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL 146 | KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-256 147 | KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 148 | CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_SSL 149 | CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: SCRAM-SHA-256 150 | CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: ${CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS} 151 | KAFKA_OFFSETS_RETENTION_MINUTES: 172800 152 | KAFKA_LOG4J_LOGGERS: "kafka.authorizer.logger=INFO,kafka.controller=INFO" 153 | KAFKA_LOG4J_ROOT_LOGLEVEL: "INFO" 154 | KAFKA_SUPER_USERS: ${KAFKA_SUPER_USERS} 155 | KAFKA_ZOOKEEPER_SASL_ENABLED: "true" 156 | KAFKA_ZOOKEEPER_SET_ACL: "true" 157 | KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer 158 | KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false" 159 | KAFKA_OPTS: -Dzookeeper.sasl.client=true 160 | -Dzookeeper.sasl.clientconfig=Client 161 | -Djava.security.auth.login.config=/etc/kafka/secrets/conf/kafka_server_jaas.conf 162 | volumes: 163 | - 
${KAFKA_SSL_SECRETS_DIR}/broker-1:/etc/kafka/secrets 164 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}:/etc/kafka/secrets/conf 165 | networks: 166 | - kafka-cluster-network 167 | 168 | 169 | kafka-broker-2: 170 | image: confluentinc/cp-kafka:5.0.1 171 | hostname: kafka-broker-2 172 | container_name: kafka-broker-2 173 | ports: 174 | - "29093:29093" 175 | - "29094:29094" 176 | depends_on: 177 | - zookeeper-1 178 | - zookeeper-2 179 | - zookeeper-3 180 | environment: 181 | KAFKA_BROKER_ID: 2 182 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181 183 | KAFKA_ADVERTISED_LISTENERS: SSL://kafka-broker-2:29093,SASL_SSL://kafka-broker-2:29094 184 | KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker-2.keystore.jks 185 | KAFKA_SSL_KEYSTORE_CREDENTIALS: broker-2_keystore_creds 186 | KAFKA_SSL_KEY_CREDENTIALS: broker-2_sslkey_creds 187 | KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker-2.truststore.jks 188 | KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker-2_truststore_creds 189 | CONFLUENT_METRICS_REPORTER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker-2.truststore.jks 190 | CONFLUENT_METRICS_REPORTER_SSL_TRUSTSTORE_PASSWORD: confluent 191 | CONFLUENT_METRICS_REPORTER_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker-2.keystore.jks 192 | CONFLUENT_METRICS_REPORTER_SSL_KEYSTORE_PASSWORD: confluent 193 | KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: "HTTPS" 194 | KAFKA_SSL_CLIENT_AUTH: requested 195 | KAFKA_MIN_INSYNC_REPLICAS: ${KAFKA_MIN_INSYNC_REPLICAS} 196 | KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256 197 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL 198 | KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-256 199 | KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 200 | CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_SSL 201 | CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: SCRAM-SHA-256 202 | CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: ${CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS} 203 | KAFKA_OFFSETS_RETENTION_MINUTES: 172800 204 | KAFKA_LOG4J_LOGGERS: 
"kafka.authorizer.logger=INFO,kafka.controller=INFO" 205 | KAFKA_LOG4J_ROOT_LOGLEVEL: "INFO" 206 | KAFKA_SUPER_USERS: ${KAFKA_SUPER_USERS} 207 | KAFKA_ZOOKEEPER_SASL_ENABLED: "true" 208 | KAFKA_ZOOKEEPER_SET_ACL: "true" 209 | KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer 210 | KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false" 211 | KAFKA_OPTS: -Dzookeeper.sasl.client=true 212 | -Dzookeeper.sasl.clientconfig=Client 213 | -Djava.security.auth.login.config=/etc/kafka/secrets/conf/kafka_server_jaas.conf 214 | volumes: 215 | - ${KAFKA_SSL_SECRETS_DIR}/broker-2:/etc/kafka/secrets 216 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}:/etc/kafka/secrets/conf 217 | networks: 218 | - kafka-cluster-network 219 | 220 | 221 | kafka-broker-3: 222 | image: confluentinc/cp-kafka:5.0.1 223 | hostname: kafka-broker-3 224 | container_name: kafka-broker-3 225 | ports: 226 | - "39093:39093" 227 | - "39094:39094" 228 | depends_on: 229 | - zookeeper-1 230 | - zookeeper-2 231 | - zookeeper-3 232 | environment: 233 | KAFKA_BROKER_ID: 3 234 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181 235 | KAFKA_ADVERTISED_LISTENERS: SSL://kafka-broker-3:39093,SASL_SSL://kafka-broker-3:39094 236 | KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker-3.keystore.jks 237 | KAFKA_SSL_KEYSTORE_CREDENTIALS: broker-3_keystore_creds 238 | KAFKA_SSL_KEY_CREDENTIALS: broker-3_sslkey_creds 239 | KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker-3.truststore.jks 240 | KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker-3_truststore_creds 241 | CONFLUENT_METRICS_REPORTER_SSL_TRUSTSTORE_LOCATION: /etc/kafka/secrets/kafka.broker-3.truststore.jks 242 | CONFLUENT_METRICS_REPORTER_SSL_TRUSTSTORE_PASSWORD: confluent 243 | CONFLUENT_METRICS_REPORTER_SSL_KEYSTORE_LOCATION: /etc/kafka/secrets/kafka.broker-3.keystore.jks 244 | CONFLUENT_METRICS_REPORTER_SSL_KEYSTORE_PASSWORD: confluent 245 | KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: "HTTPS" 246 | KAFKA_SSL_CLIENT_AUTH: requested 247 | KAFKA_MIN_INSYNC_REPLICAS: 
${KAFKA_MIN_INSYNC_REPLICAS} 248 | KAFKA_SASL_ENABLED_MECHANISMS: SCRAM-SHA-256 249 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SASL_SSL 250 | KAFKA_SASL_MECHANISM_INTER_BROKER_PROTOCOL: SCRAM-SHA-256 251 | KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 252 | CONFLUENT_METRICS_REPORTER_SECURITY_PROTOCOL: SASL_SSL 253 | CONFLUENT_METRICS_REPORTER_SASL_MECHANISM: SCRAM-SHA-256 254 | CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS: ${CONFLUENT_METRICS_REPORTER_BOOTSTRAP_SERVERS} 255 | KAFKA_OFFSETS_RETENTION_MINUTES: 172800 256 | KAFKA_LOG4J_LOGGERS: "kafka.authorizer.logger=INFO,kafka.controller=INFO" 257 | KAFKA_LOG4J_ROOT_LOGLEVEL: "INFO" 258 | KAFKA_SUPER_USERS: ${KAFKA_SUPER_USERS} 259 | KAFKA_ZOOKEEPER_SASL_ENABLED: "true" 260 | KAFKA_ZOOKEEPER_SET_ACL: "true" 261 | KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.auth.SimpleAclAuthorizer 262 | KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "false" 263 | KAFKA_OPTS: -Dzookeeper.sasl.client=true 264 | -Dzookeeper.sasl.clientconfig=Client 265 | -Djava.security.auth.login.config=/etc/kafka/secrets/conf/kafka_server_jaas.conf 266 | volumes: 267 | - ${KAFKA_SSL_SECRETS_DIR}/broker-3:/etc/kafka/secrets 268 | - ${KAFKA_SASL_SCRAM_SECRETS_DIR}:/etc/kafka/secrets/conf 269 | networks: 270 | - kafka-cluster-network 271 | 272 | networks: 273 | kafka-cluster-network: 274 | driver: bridge 275 | name: kafka-cluster-network 276 | -------------------------------------------------------------------------------- /sasl-scram/kafka-consumers-producers.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | docker run -it --rm --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 \ 4 | kafka-topics --zookeeper zookeeper-1:22181 --create --topic test --partitions 10 --replication-factor 3 5 | 6 | #Console producer with SSL files mapped in the container 7 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/producer:/etc/kafka/secrets \ 8 | -v 
${KAFKA_SASL_SCRAM_SECRETS_DIR}/host.producer.sasl_scram.config:/etc/kafka/secrets/host.producer.sasl_scram.config \ 9 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 \ 10 | kafka-console-producer --broker-list kafka-broker-1:19094 --topic test \ 11 | --producer.config /etc/kafka/secrets/host.producer.sasl_scram.config 12 | 13 | 14 | 15 | #Console consumer with SSL files mapped in the container 16 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/consumer:/etc/kafka/secrets \ 17 | -v ${KAFKA_SASL_SCRAM_SECRETS_DIR}/host.consumer.sasl_scram.config:/etc/kafka/secrets/host.consumer.sasl_scram.config \ 18 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 \ 19 | kafka-console-consumer --bootstrap-server kafka-broker-1:19094 --topic test --from-beginning \ 20 | --consumer.config /etc/kafka/secrets/host.consumer.sasl_scram.config 21 | -------------------------------------------------------------------------------- /sasl-scram/secrets/consumer_jaas.conf: -------------------------------------------------------------------------------- 1 | KafkaClient { 2 | org.apache.zookeeper.server.auth.DigestLoginModule required 3 | username="console-client" 4 | password="password"; 5 | }; 6 | -------------------------------------------------------------------------------- /sasl-scram/secrets/host.consumer.sasl_scram.config: -------------------------------------------------------------------------------- 1 | group.id=ssl-host 2 | ssl.truststore.location=/etc/kafka/secrets/kafka.consumer.truststore.jks 3 | ssl.truststore.password=confluent 4 | 5 | ssl.keystore.location=/etc/kafka/secrets/kafka.consumer.keystore.jks 6 | ssl.keystore.password=confluent 7 | 8 | ssl.key.password=confluent 9 | ssl.endpoint.identification.algorithm=HTTPS 10 | 11 | sasl.mechanism=SCRAM-SHA-256 12 | security.protocol=SASL_SSL 13 | 14 | sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \ 15 | username="kafkaclient" \ 16 | password="password"; 17 | 
-------------------------------------------------------------------------------- /sasl-scram/secrets/host.producer.sasl_scram.config: -------------------------------------------------------------------------------- 1 | ssl.truststore.location=/etc/kafka/secrets/kafka.producer.truststore.jks 2 | ssl.truststore.password=confluent 3 | 4 | ssl.keystore.location=/etc/kafka/secrets/kafka.producer.keystore.jks 5 | ssl.keystore.password=confluent 6 | 7 | ssl.key.password=confluent 8 | ssl.endpoint.identification.algorithm=HTTPS 9 | 10 | sasl.mechanism=SCRAM-SHA-256 11 | security.protocol=SASL_SSL 12 | 13 | sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \ 14 | username="kafkaclient" \ 15 | password="password"; -------------------------------------------------------------------------------- /sasl-scram/secrets/kafka_server_jaas.conf: -------------------------------------------------------------------------------- 1 | KafkaServer { 2 | org.apache.kafka.common.security.scram.ScramLoginModule required 3 | username="kafkabroker" 4 | password="password"; 5 | }; 6 | Client { 7 | org.apache.kafka.common.security.plain.PlainLoginModule required 8 | username="admin" 9 | password="password"; 10 | }; 11 | KafkaClient { 12 | org.apache.kafka.common.security.scram.ScramLoginModule required 13 | username="metricsreporter" 14 | password="password"; 15 | }; -------------------------------------------------------------------------------- /sasl-scram/secrets/producer_jaas.conf: -------------------------------------------------------------------------------- 1 | KafkaClient { 2 | org.apache.zookeeper.server.auth.DigestLoginModule required 3 | username="console-client" 4 | password="password"; 5 | }; 6 | -------------------------------------------------------------------------------- /sasl-scram/secrets/zookeeper_client_jaas.conf: -------------------------------------------------------------------------------- 1 | Client { 2 | 
org.apache.zookeeper.server.auth.DigestLoginModule required 3 | username="admin" 4 | password="password"; 5 | }; 6 | -------------------------------------------------------------------------------- /sasl-scram/secrets/zookeeper_server_jaas.conf: -------------------------------------------------------------------------------- 1 | Server { 2 | org.apache.zookeeper.server.auth.DigestLoginModule required 3 | user_admin="password"; 4 | }; 5 | QuorumServer { 6 | org.apache.zookeeper.server.auth.DigestLoginModule required 7 | user_zookeeper="password"; 8 | }; 9 | QuorumLearner { 10 | org.apache.zookeeper.server.auth.DigestLoginModule required 11 | username="zookeeper" 12 | password="password"; 13 | }; 14 | -------------------------------------------------------------------------------- /sasl-scram/zookeeper_scripts.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | docker exec -it zookeeper-1 echo stat | nc localhost 22181 | grep Mode 4 | docker exec -it zookeeper-2 echo stat | nc localhost 32181 | grep Mode 5 | docker exec -it zookeeper-3 echo stat | nc localhost 42181 | grep Mode -------------------------------------------------------------------------------- /secrets/create-certs.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | set -o nounset \ 4 | -o errexit \ 5 | -o verbose \ 6 | -o xtrace 7 | 8 | # Generate CA key 9 | openssl req -new -x509 -keyout snakeoil-ca-1.key -out snakeoil-ca-1.crt -days 365 -subj '/CN=ca1.test.husseinjoe.io/OU=Dev/O=HusseinJoe/L=Melbourne/ST=VIC/C=AU' -passin pass:confluent -passout pass:confluent 10 | # openssl req -new -x509 -keyout snakeoil-ca-2.key -out snakeoil-ca-2.crt -days 365 -subj '/CN=ca2.test.husseinjoe.io/OU=TEST/O=CONFLUENT/L=PaloAlto/S=Ca/C=US' -passin pass:confluent -passout pass:confluent 11 | 12 | # Kafkacat 13 | openssl genrsa -des3 -passout "pass:confluent" -out kafkacat.client.key 1024 14 | 
openssl req -passin "pass:confluent" -passout "pass:confluent" -key kafkacat.client.key -new -out kafkacat.client.req -subj '/CN=kafkacat.test.husseinjoe.io/OU=TEST/O=CONFLUENT/L=PaloAlto/S=Ca/C=US' 15 | openssl x509 -req -CA snakeoil-ca-1.crt -CAkey snakeoil-ca-1.key -in kafkacat.client.req -out kafkacat-ca1-signed.pem -days 9999 -CAcreateserial -passin "pass:confluent" 16 | 17 | 18 | 19 | for i in broker-1 broker-2 broker-3 producer consumer 20 | do 21 | echo $i 22 | mkdir ./$i 23 | # Create keystores 24 | keytool -genkey -noprompt \ 25 | -alias $i \ 26 | -dname "CN=kafka-$i, OU=Dev, O=HusseinJoe, L=Melbourne, ST=VIC, C=AU" \ 27 | -keystore ./$i/kafka.$i.keystore.jks \ 28 | -keyalg RSA \ 29 | -storepass confluent \ 30 | -keypass confluent 31 | 32 | # Create CSR, sign the key and import back into keystore 33 | keytool -keystore ./$i/kafka.$i.keystore.jks -alias $i -certreq -file $i.csr -storepass confluent -keypass confluent 34 | 35 | openssl x509 -req -CA snakeoil-ca-1.crt -CAkey snakeoil-ca-1.key -in $i.csr -out $i-ca1-signed.crt -days 9999 -CAcreateserial -passin pass:confluent 36 | 37 | keytool -keystore ./$i/kafka.$i.keystore.jks -alias CARoot -import -file snakeoil-ca-1.crt -storepass confluent -keypass confluent 38 | 39 | keytool -keystore ./$i/kafka.$i.keystore.jks -alias $i -import -file $i-ca1-signed.crt -storepass confluent -keypass confluent 40 | 41 | # Create truststore and import the CA cert. 
42 | keytool -keystore ./$i/kafka.$i.truststore.jks -alias CARoot -import -file snakeoil-ca-1.crt -storepass confluent -keypass confluent 43 | 44 | echo "confluent" > ./$i/${i}_sslkey_creds 45 | echo "confluent" > ./$i/${i}_keystore_creds 46 | echo "confluent" > ./$i/${i}_truststore_creds 47 | done 48 | -------------------------------------------------------------------------------- /secrets/host.consumer.ssl.config: -------------------------------------------------------------------------------- 1 | group.id=ssl-host 2 | ssl.truststore.location=/etc/kafka/secrets/kafka.consumer.truststore.jks 3 | ssl.truststore.password=confluent 4 | 5 | ssl.keystore.location=/etc/kafka/secrets/kafka.consumer.keystore.jks 6 | ssl.keystore.password=confluent 7 | 8 | ssl.key.password=confluent 9 | ssl.endpoint.identification.algorithm= 10 | 11 | security.protocol=SSL 12 | -------------------------------------------------------------------------------- /secrets/host.producer.ssl.config: -------------------------------------------------------------------------------- 1 | ssl.truststore.location=/etc/kafka/secrets/kafka.producer.truststore.jks 2 | ssl.truststore.password=confluent 3 | 4 | ssl.keystore.location=/etc/kafka/secrets/kafka.producer.keystore.jks 5 | ssl.keystore.password=confluent 6 | 7 | ssl.key.password=confluent 8 | ssl.endpoint.identification.algorithm= 9 | 10 | security.protocol=SSL 11 | -------------------------------------------------------------------------------- /ssl-only/docker-compose-ssl-only.yml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3.5' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:5.0.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | ports: 9 | - "22181:22181" 10 | environment: 11 | ZOOKEEPER_SERVER_ID: 1 12 | ZOOKEEPER_CLIENT_PORT: 22181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOOKEEPER_INIT_LIMIT: 5 15 | ZOOKEEPER_SYNC_LIMIT: 2 16 | ZOOKEEPER_SERVERS: 
zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888 17 | networks: 18 | - kafka-cluster-network 19 | 20 | zookeeper-2: 21 | image: confluentinc/cp-zookeeper:5.0.1 22 | hostname: zookeeper-2 23 | container_name: zookeeper-2 24 | ports: 25 | - "32181:32181" 26 | environment: 27 | ZOOKEEPER_SERVER_ID: 2 28 | ZOOKEEPER_CLIENT_PORT: 32181 29 | ZOOKEEPER_TICK_TIME: 2000 30 | ZOOKEEPER_INIT_LIMIT: 5 31 | ZOOKEEPER_SYNC_LIMIT: 2 32 | ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888 33 | networks: 34 | - kafka-cluster-network 35 | 36 | 37 | zookeeper-3: 38 | image: confluentinc/cp-zookeeper:5.0.1 39 | hostname: zookeeper-3 40 | container_name: zookeeper-3 41 | ports: 42 | - "42181:42181" 43 | environment: 44 | ZOOKEEPER_SERVER_ID: 3 45 | ZOOKEEPER_CLIENT_PORT: 42181 46 | ZOOKEEPER_TICK_TIME: 2000 47 | ZOOKEEPER_INIT_LIMIT: 5 48 | ZOOKEEPER_SYNC_LIMIT: 2 49 | ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888 50 | networks: 51 | - kafka-cluster-network 52 | 53 | 54 | kafka-broker-1: 55 | image: confluentinc/cp-kafka:5.0.1 56 | hostname: kafka-broker-1 57 | container_name: kafka-broker-1 58 | ports: 59 | - "19093:19093" 60 | depends_on: 61 | - zookeeper-1 62 | - zookeeper-2 63 | - zookeeper-3 64 | environment: 65 | KAFKA_BROKER_ID: 1 66 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181 67 | KAFKA_ADVERTISED_LISTENERS: SSL://kafka-broker-1:19093 68 | KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker-1.keystore.jks 69 | KAFKA_SSL_KEYSTORE_CREDENTIALS: broker-1_keystore_creds 70 | KAFKA_SSL_KEY_CREDENTIALS: broker-1_sslkey_creds 71 | KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker-1.truststore.jks 72 | KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker-1_truststore_creds 73 | KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: " " 74 | KAFKA_SSL_CLIENT_AUTH: requested 75 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL 76 | KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 77 | volumes: 78 | 
- ${KAFKA_SSL_SECRETS_DIR}/broker-1:/etc/kafka/secrets 79 | networks: 80 | - kafka-cluster-network 81 | 82 | 83 | kafka-broker-2: 84 | image: confluentinc/cp-kafka:5.0.1 85 | hostname: kafka-broker-2 86 | container_name: kafka-broker-2 87 | ports: 88 | - "29093:29093" 89 | depends_on: 90 | - zookeeper-1 91 | - zookeeper-2 92 | - zookeeper-3 93 | environment: 94 | KAFKA_BROKER_ID: 2 95 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181 96 | KAFKA_ADVERTISED_LISTENERS: SSL://kafka-broker-2:29093 97 | KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker-2.keystore.jks 98 | KAFKA_SSL_KEYSTORE_CREDENTIALS: broker-2_keystore_creds 99 | KAFKA_SSL_KEY_CREDENTIALS: broker-2_sslkey_creds 100 | KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker-2.truststore.jks 101 | KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker-2_truststore_creds 102 | KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: " " 103 | KAFKA_SSL_CLIENT_AUTH: requested 104 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL 105 | KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 106 | volumes: 107 | - ${KAFKA_SSL_SECRETS_DIR}/broker-2:/etc/kafka/secrets 108 | networks: 109 | - kafka-cluster-network 110 | 111 | 112 | kafka-broker-3: 113 | image: confluentinc/cp-kafka:5.0.1 114 | hostname: kafka-broker-3 115 | container_name: kafka-broker-3 116 | ports: 117 | - "39093:39093" 118 | depends_on: 119 | - zookeeper-1 120 | - zookeeper-2 121 | - zookeeper-3 122 | environment: 123 | KAFKA_BROKER_ID: 3 124 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:22181,zookeeper-2:32181,zookeeper-3:42181 125 | KAFKA_ADVERTISED_LISTENERS: SSL://kafka-broker-3:39093 126 | KAFKA_SSL_KEYSTORE_FILENAME: kafka.broker-3.keystore.jks 127 | KAFKA_SSL_KEYSTORE_CREDENTIALS: broker-3_keystore_creds 128 | KAFKA_SSL_KEY_CREDENTIALS: broker-3_sslkey_creds 129 | KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.broker-3.truststore.jks 130 | KAFKA_SSL_TRUSTSTORE_CREDENTIALS: broker-3_truststore_creds 131 | KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: " " 132 | KAFKA_SSL_CLIENT_AUTH: 
requested 133 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL 134 | KAFKA_AUTO_CREATE_TOPICS_ENABLE: "true" 135 | volumes: 136 | - ${KAFKA_SSL_SECRETS_DIR}/broker-3:/etc/kafka/secrets 137 | networks: 138 | - kafka-cluster-network 139 | 140 | networks: 141 | kafka-cluster-network: 142 | driver: bridge 143 | name: kafka-cluster-network 144 | -------------------------------------------------------------------------------- /ssl-only/kafka-consumers-producers.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | docker run -it --rm --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 \ 4 | kafka-topics --zookeeper zookeeper-1:22181 --create --topic test --partitions 10 --replication-factor 3 5 | 6 | #Console producer with SSL files mapped in the container 7 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/producer:/etc/kafka/secrets \ 8 | -v ${KAFKA_SSL_SECRETS_DIR}/host.producer.ssl.config:/etc/kafka/secrets/host.producer.ssl.config \ 9 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 kafka-console-producer \ 10 | --broker-list kafka-broker-1:19093,kafka-broker-2:29093,kafka-broker-3:39093 --topic test \ 11 | --producer.config /etc/kafka/secrets/host.producer.ssl.config 12 | 13 | 14 | #Console consumer with SSL files mapped in the container 15 | docker run -it --rm -v ${KAFKA_SSL_SECRETS_DIR}/consumer:/etc/kafka/secrets \ 16 | -v ${KAFKA_SSL_SECRETS_DIR}/host.consumer.ssl.config:/etc/kafka/secrets/host.consumer.ssl.config \ 17 | --network kafka-cluster-network confluentinc/cp-kafka:5.0.1 \ 18 | kafka-console-consumer --bootstrap-server kafka-broker-1:19093,kafka-broker-2:29093,kafka-broker-3:39093 --topic test --from-beginning \ 19 | --consumer.config /etc/kafka/secrets/host.consumer.ssl.config 20 | -------------------------------------------------------------------------------- /start_sasl_scram_cluster.sh: -------------------------------------------------------------------------------- 1 
| #!/usr/bin/env bash 2 | 3 | export KAFKA_SSL_SECRETS_DIR=$PWD/secrets 4 | export KAFKA_SASL_SCRAM_SECRETS_DIR=$PWD/sasl-scram/secrets 5 | cd $PWD/sasl-scram/ 6 | docker-compose -f docker-compose-scram.yml up -d 7 | -------------------------------------------------------------------------------- /start_ssl_only_cluster.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | export KAFKA_SSL_SECRETS_DIR=$PWD/secrets 4 | cd $PWD/ssl-only/ 5 | docker-compose -f docker-compose-ssl-only.yml up -d 6 | --------------------------------------------------------------------------------
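A note on the `--add-config 'SCRAM-SHA-256=[iterations=4096,password=password]'` commands used in `sasl-scram/add_kafka_accounts_in_zookeeper.sh` and `docker-compose-scram.yml`: Kafka does not store the password itself in ZooKeeper. It persists a salted SCRAM credential (salt, StoredKey, ServerKey, iteration count) derived per RFC 5802, where `iterations=4096` is the PBKDF2 cost factor. A minimal Python sketch of that derivation, assuming a made-up fixed salt purely for illustration (Kafka generates a random salt per user):

```python
import base64
import hashlib
import hmac


def scram_sha256_credential(password: str, salt: bytes, iterations: int = 4096) -> dict:
    """Derive the SCRAM-SHA-256 values a broker stores for a user (RFC 5802)."""
    # SaltedPassword = Hi(password, salt, iterations), i.e. PBKDF2-HMAC-SHA256
    salted = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    # The server keeps only H(ClientKey), so the plaintext password is never stored.
    stored_key = hashlib.sha256(client_key).digest()
    return {
        "salt": base64.b64encode(salt).decode(),
        "stored_key": base64.b64encode(stored_key).decode(),
        "server_key": base64.b64encode(server_key).decode(),
        "iterations": iterations,
    }


cred = scram_sha256_credential("password", salt=b"0123456789abcdef")
print(cred["iterations"], cred["stored_key"])
```

This is why raising `iterations` hardens stored credentials against brute force, and why clients (see `host.producer.sasl_scram.config`) only need the plain password in `sasl.jaas.config`: the salted exchange happens at authentication time.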