├── .gitignore ├── Installation Instructions.md ├── LICENSE ├── README.md ├── failureaccess-1.0.1.jar ├── guava-32.1.2-jre.jar ├── module3 ├── demo1 │ └── docker-compose.yaml ├── demo2 │ └── docker-compose.yaml └── demo3 │ ├── docker-compose.yaml │ └── increase_replication.json ├── module4 ├── demo1 │ └── docker-compose.yaml └── demo2 │ ├── docker-compose.yaml │ ├── pom.xml │ └── src │ └── main │ └── java │ └── com │ └── globomantics │ └── Main.java ├── module5 ├── demo1 │ ├── docker-compose.yaml │ ├── pom.xml │ └── src │ │ └── main │ │ └── java │ │ └── com │ │ └── globomantics │ │ └── Main.java └── demo2 │ ├── docker-compose.yaml │ ├── pom.xml │ └── src │ └── main │ └── java │ └── com │ └── globomantics │ ├── Consumer.java │ └── Producer.java ├── module6 ├── demo1 │ ├── docker-compose.yaml │ ├── filesink.properties │ ├── pom.xml │ ├── src │ │ └── main │ │ │ └── java │ │ │ └── com │ │ │ └── globomantics │ │ │ └── LogProducer.java │ └── worker.properties └── demo2 │ ├── docker-compose.yaml │ ├── pom.xml │ ├── src │ └── main │ │ ├── avro │ │ └── Album.avsc │ │ └── java │ │ └── com │ │ └── globomantics │ │ └── AlbumSender.java │ └── worker.properties ├── module7 ├── demo1 │ ├── docker-compose.yaml │ ├── pom.xml │ └── src │ │ └── main │ │ └── java │ │ └── com │ │ └── globomantics │ │ ├── Producer.java │ │ └── SimpleETL.java └── demo2 │ └── docker-compose.yaml ├── module8 ├── demo1 │ ├── clients │ │ ├── pom.xml │ │ └── src │ │ │ └── main │ │ │ └── java │ │ │ └── com │ │ │ └── globomantics │ │ │ ├── Consumer.java │ │ │ └── Producer.java │ ├── docker-compose.yaml │ └── security │ │ ├── generate-ca.sh │ │ ├── generate-keystore.sh │ │ └── generate-truststore.sh ├── demo2 │ ├── clients │ │ ├── pom.xml │ │ └── src │ │ │ └── main │ │ │ └── java │ │ │ └── com │ │ │ └── globomantics │ │ │ ├── Consumer.java │ │ │ └── Producer.java │ ├── docker-compose.yaml │ └── security │ │ ├── generate-ca.sh │ │ ├── generate-keystore.sh │ │ └── generate-truststore.sh └── demo3 │ ├── clients │ 
├── pom.xml │ └── src │ │ └── main │ │ ├── java │ │ └── com │ │ │ └── pluralsight │ │ │ └── kafka │ │ │ └── security │ │ │ └── encryption │ │ │ ├── BasicConsumer.java │ │ │ ├── BasicProducer.java │ │ │ ├── BasicSSLConsumer.java │ │ │ └── BasicSSLProducer.java │ │ └── resources │ │ └── log4j.properties │ ├── docker-compose.yaml │ └── security │ ├── generate-ca.sh │ ├── generate-keystore.sh │ └── generate-truststore.sh └── rest_proxy.yaml /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled class file 2 | *.class 3 | 4 | # Log file 5 | *.log 6 | 7 | # BlueJ files 8 | *.ctxt 9 | 10 | # Mobile Tools for Java (J2ME) 11 | .mtj.tmp/ 12 | 13 | # Package Files # 14 | *.jar 15 | *.war 16 | *.nar 17 | *.ear 18 | *.zip 19 | *.tar.gz 20 | *.rar 21 | 22 | # virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml 23 | hs_err_pid* 24 | replay_pid* 25 | 26 | 27 | .idea 28 | **/zookeeper* 29 | **/broker* 30 | kafka-client.iml 31 | **/target/* 32 | **/.idea 33 | **/*.iml 34 | **/file-log.txt 35 | **/com/globomantics/model 36 | **/security/**/*.jks 37 | **/security/**/ca-cert 38 | **/security/**/ca-key 39 | **/authentication 40 | *.answer 41 | *.pem 42 | *.key 43 | -------------------------------------------------------------------------------- /Installation Instructions.md: -------------------------------------------------------------------------------- 1 | # Installation instructions 2 | 3 | ## Windows 4 | 5 | ### Core requirements (regardless of language) 6 | 7 | **Docker Desktop**: 8 | 9 | 1) Download and install from https://www.docker.com/products/docker-desktop 10 | 11 | **Java 11** 12 | 13 | 1) Download from this link: https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_windows-x64_bin.zip 14 | 15 | 2) Extract the zip file into a folder, e.g. `C:\Program Files\Java\`, and it will create a jdk-11 folder (where the bin folder is a direct sub-folder). 
You may need Administrator privileges to extract the zip file to this location. 16 | 17 | 3) Set the PATH: 18 | 19 | Select Control Panel and then System. 20 | Click Advanced and then Environment Variables. 21 | Add the location of the bin folder of the JDK installation to the PATH variable in System Variables. 22 | The following is a typical value for the `PATH` variable: `C:\WINDOWS\system32;C:\WINDOWS;"C:\Program Files\Java\jdk-11\bin"` 23 | 24 | 4) Set JAVA_HOME: 25 | 26 | Under System Variables, click New. 27 | Enter the variable name as JAVA_HOME. 28 | Enter the variable value as the installation path of the JDK (without the bin sub-folder). 29 | Click OK. 30 | Click Apply Changes. 31 | 32 | 5) Configure the JDK in your IDE of choice (e.g. IntelliJ, Eclipse or VS Code) 33 | 34 | 6) You are set. 35 | 36 | To see if it worked, open up the Command Prompt, type `java -version`, and check that it prints your newly installed JDK. 37 | 38 | **Maven 3** 39 | 40 | 1) Download https://apache.dattatec.com/maven/maven-3/3.8.1/binaries/apache-maven-3.8.1-bin.zip 41 | 42 | 2) Extract the zip file into a folder, e.g. C:\Program Files\maven-3.8.1\ and it will create an apache-maven-3.8.1 folder (where the bin folder is a direct sub-folder). You may need Administrator privileges to extract the zip file to this location. 43 | 44 | 3) Set the PATH: 45 | 46 | Select Control Panel and then System. 47 | Click Advanced and then Environment Variables. 48 | Add the location of the bin folder of the Maven installation to the PATH variable in System Variables. The following is a typical value for the PATH variable: C:\WINDOWS\system32;C:\WINDOWS;"C:\Program Files\maven-3.8.1\apache-maven-3.8.1\bin" 49 | 50 | 51 | To see if it worked, open up the Command Prompt, type `mvn -v`, and check that it prints your newly installed Maven version. 52 | 53 | **Kafka** 54 | 55 | 1) Download this file: https://downloads.apache.org/kafka/3.5.1/kafka_2.13-3.5.1.tgz. 
This matters on Windows: the binary distribution contains all the classes already compiled (the source distribution doesn't). 56 | 57 | 2) Unzip the archive, put it in your home directory (`C:\Users\`), and rename it to a simple name like kafka. This keeps the internally built CLASSPATH short enough to stay under CMD's command-line length limit (roughly 8,191 characters) on Windows 10. 58 | 59 | 3) Open the `C:\Users\\kafka\bin\windows` directory on the command line of your choice with administrative permissions. 60 | 61 | 4) You can run any of the .bat scripts that you need in the course. I suggest adding this folder to your PATH so you can run them from anywhere. 62 | 63 | 64 | ## Mac 65 | 66 | ### Core requirements (regardless of language) 67 | 68 | 69 | **Docker Desktop:** 70 | 71 | 1) Download and install from https://www.docker.com/products/docker-desktop 72 | 73 | **Java 11** 74 | 75 | 1) `brew install openjdk@11` 76 | 1a) Verify: 77 | 78 | ```sh 79 | /usr/libexec/java_home -V 80 | ``` 81 | 82 | 2) Set JAVA_HOME: 83 | ```sh 84 | export JAVA_HOME=$(/usr/libexec/java_home -v 11) 85 | ``` 86 | 87 | **Maven 3** 88 | 89 | 1) `brew install maven` 90 | 91 | **Kafka** 92 | 93 | 1) Download: https://downloads.apache.org/kafka/3.5.1/kafka_2.13-3.5.1.tgz 94 | 2) `tar xvfz ~/Downloads/kafka_2.13-3.5.1.tgz` 95 | 3) Move that folder wherever you like :) 96 | 97 | ## Linux (Debian) 98 | 99 | ### Core requirements (regardless of language) 100 | 101 | 102 | **Docker Desktop:** 103 | 104 | 1) Download and install from https://docs.docker.com/desktop/linux/install/ 105 | 106 | **Java 11** 107 | 108 | 1) `sudo apt install openjdk-11-jdk` 109 | 1a) Verify: 110 | 111 | ```sh 112 | java -version 113 | ``` 114 | 115 | 2) Set JAVA_HOME (the path may differ on non-amd64 machines): 116 | ```sh 117 | export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 118 | ``` 119 | 120 | **Maven 3** 121 | 122 | 1) `sudo apt-get install maven` 123 | 124 | **Kafka** 125 | 126 | 1) Download: 
https://downloads.apache.org/kafka/3.5.1/kafka_2.13-3.5.1.tgz 127 | 2) `tar xvfz ~/Downloads/kafka_2.13-3.5.1.tgz` 128 | 3) Move that folder wherever you like :) 129 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # getting-started-kafka 2 | Course Materials for course "Getting Started with Apache Kafka" @ Pluralsight 3 | -------------------------------------------------------------------------------- /failureaccess-1.0.1.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/axel-sirota/getting-started-kafka/5cf95eefebf950a3787113d43e48c1fc762f9361/failureaccess-1.0.1.jar -------------------------------------------------------------------------------- /guava-32.1.2-jre.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/axel-sirota/getting-started-kafka/5cf95eefebf950a3787113d43e48c1fc762f9361/guava-32.1.2-jre.jar -------------------------------------------------------------------------------- /module3/demo1/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - 
./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: 
true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | -------------------------------------------------------------------------------- /module3/demo2/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 
22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | 
KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | -------------------------------------------------------------------------------- /module3/demo3/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: 
confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: 
HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | -------------------------------------------------------------------------------- /module3/demo3/increase_replication.json: -------------------------------------------------------------------------------- 1 | {"version":1, 2 | "partitions":[ 3 | {"topic":"myorders","partition":0,"replicas":[1,2]}, 4 | {"topic":"myorders","partition":1,"replicas":[1,2]}, 5 | {"topic":"myorders","partition":2,"replicas":[2]} 6 | ] 7 | } -------------------------------------------------------------------------------- /module4/demo1/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 
2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: 
INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | -------------------------------------------------------------------------------- /module4/demo2/docker-compose.yaml: 
-------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | 
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 
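The compose files above all expose per-broker host listeners on ports 9092–9094 and a Confluent REST proxy on 8082. As a quick sanity check after bringing a demo up — a sketch, assuming `kafka-topics.sh` from the Kafka download is on your PATH, and using the `myorders` topic name the course demos use:

```shell
# From the demo directory, start ZooKeeper, the three brokers, and the REST proxy
docker compose up -d

# Create the course topic against any broker's host listener
kafka-topics.sh --bootstrap-server localhost:9092 \
  --create --topic myorders --partitions 3 --replication-factor 2

# Verify the topic is visible, both via the CLI and via the REST proxy
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic myorders
curl -s http://localhost:8082/topics
```

On Windows, run the matching `.bat` scripts from `kafka\bin\windows` instead of the `.sh` scripts.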
-------------------------------------------------------------------------------- /module4/demo2/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.com.globomantics 7 | kafka-client 8 | 1.0-SNAPSHOT 9 | 10 | kafka-client 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- 
/module4/demo2/src/main/java/com/globomantics/Main.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.DoubleSerializer; 7 | import org.apache.kafka.common.serialization.StringSerializer; 8 | import org.slf4j.Logger; 9 | import org.slf4j.LoggerFactory; 10 | 11 | import java.util.Properties; 12 | 13 | public class Main { 14 | private static final Logger log = LoggerFactory.getLogger(Main.class); 15 | private static final String TOPIC = "myorders"; 16 | 17 | public static void main(String[] args) throws InterruptedException { 18 | 19 | Properties props = new Properties(); 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 23 | KafkaProducer<String, Double> producer = new KafkaProducer<>(props); 24 | String stateString = 25 | "AK,AL,AZ,AR,CA,CO,CT,DE,FL,GA," + 26 | "HI,ID,IL,IN,IA,KS,KY,LA,ME,MD," + 27 | "MA,MI,MN,MS,MO,MT,NE,NV,NH,NJ," + 28 | "NM,NY,NC,ND,OH,OK,OR,PA,RI,SC," + 29 | "SD,TN,TX,UT,VT,VA,WA,WV,WI,WY"; 30 | String[] stateArray = stateString.split(","); 31 | for (int i = 0; i < 25000; i++) { 32 | String key = stateArray[(int) Math.floor(Math.random()*(50))]; 33 | double value = Math.floor(Math.random()* (10000-10+1)+10); 34 | ProducerRecord<String, Double> producerRecord = 35 | new ProducerRecord<>(TOPIC, key, value); 36 | 37 | log.info("Sending message with key " + key + " to Kafka"); 38 | 39 | producer.send(producerRecord, (metadata, e) -> { 40 | if (metadata != null) { 41 | System.out.println(producerRecord.key()); 42 | 
System.out.println(producerRecord.value()); 43 | System.out.println(metadata.toString()); 44 | } 45 | }); 46 | Thread.sleep(1000); 47 | } 48 | producer.flush(); 49 | producer.close(); 50 | 51 | log.info("Successfully produced messages to " + TOPIC + " topic"); 52 | 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /module5/demo1/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | 
container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | 
- zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | -------------------------------------------------------------------------------- /module5/demo1/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.com.globomantics 7 | kafka-producer-2 8 | 1.0-SNAPSHOT 9 | 10 | kafka-producer-2 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 
129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /module5/demo1/src/main/java/com/globomantics/Main.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.DoubleSerializer; 7 | import org.apache.kafka.common.serialization.StringSerializer; 8 | import org.slf4j.Logger; 9 | import org.slf4j.LoggerFactory; 10 | 11 | import java.util.Properties; 12 | 13 | public class Main { 14 | private static final Logger log = LoggerFactory.getLogger(Main.class); 15 | private static final String TOPIC = "myorders"; 16 | 17 | public static void main(String[] args) throws InterruptedException { 18 | 19 | Properties props = new Properties(); 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 23 | KafkaProducer<String, Double> producer = new KafkaProducer<>(props); 24 | String stateString = 25 | "AK,AL,AZ,AR,CA,CO,CT,DE,FL,GA," + 26 | "HI,ID,IL,IN,IA,KS,KY,LA,ME,MD," + 27 | "MA,MI,MN,MS,MO,MT,NE,NV,NH,NJ," + 28 | "NM,NY,NC,ND,OH,OK,OR,PA,RI,SC," + 29 | "SD,TN,TX,UT,VT,VA,WA,WV,WI,WY"; 30 | String[] stateArray = stateString.split(","); 31 | for (int i = 0; i < 25000; i++) { 32 | String key = stateArray[(int) Math.floor(Math.random()*(50))]; 33 | double value = 
Math.floor(Math.random()* (10000-10+1)+10); 34 | ProducerRecord<String, Double> producerRecord = 35 | new ProducerRecord<>(TOPIC, key, value); 36 | 37 | log.info("Sending message with key " + key + " to Kafka"); 38 | 39 | producer.send(producerRecord, (metadata, e) -> { 40 | if (metadata != null) { 41 | System.out.println(producerRecord.key()); 42 | System.out.println(producerRecord.value()); 43 | System.out.println(metadata.toString()); 44 | } 45 | }); 46 | Thread.sleep(5000); 47 | } 48 | producer.flush(); 49 | producer.close(); 50 | 51 | log.info("Successfully produced messages to " + TOPIC + " topic"); 52 | 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /module5/demo2/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - 
./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: 
HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | -------------------------------------------------------------------------------- /module5/demo2/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.com.globomantics 7 | kafka-consumer 8 | 1.0-SNAPSHOT 9 | 10 | kafka-consumer 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | 
maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /module5/demo2/src/main/java/com/globomantics/Consumer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.consumer.ConsumerConfig; 4 | import org.apache.kafka.clients.consumer.ConsumerRecord; 5 | import org.apache.kafka.clients.consumer.ConsumerRecords; 6 | import org.apache.kafka.clients.consumer.KafkaConsumer; 7 | import org.apache.kafka.common.serialization.DoubleDeserializer; 8 | import org.apache.kafka.common.serialization.StringDeserializer; 9 | import org.slf4j.Logger; 10 | import org.slf4j.LoggerFactory; 11 | 12 | import java.time.Duration; 13 | import java.util.Collections; 14 | import java.util.Properties; 15 | 16 | public class Consumer { 17 | 18 | private static final Logger log = LoggerFactory.getLogger(Consumer.class); 19 | 20 | public static void main(String[] args) { 21 | 22 | Properties props = new Properties(); 23 | props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 24 | props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 25 | props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, DoubleDeserializer.class.getName()); 26 | 
props.put(ConsumerConfig.GROUP_ID_CONFIG, args[0] + "consumer"); 27 | 28 | KafkaConsumer<String, Double> consumer = new KafkaConsumer<>(props); 29 | 30 | Thread haltedHook = new Thread(consumer::close); 31 | Runtime.getRuntime().addShutdownHook(haltedHook); 32 | 33 | consumer.subscribe(Collections.singletonList("myorders")); 34 | 35 | while (true) { 36 | ConsumerRecords<String, Double> records = consumer.poll(Duration.ofMillis(100)); 37 | records.forEach(Consumer::processRecord); 38 | } 39 | } 40 | 41 | private static void processRecord(ConsumerRecord<String, Double> record) { 42 | log.info("Received message with key: " + record.key() + " and value " + record.value()); 43 | log.info("It comes from partition: " + record.partition()); 44 | try { 45 | Thread.sleep(1000); 46 | } catch (InterruptedException e) { 47 | System.out.println(e.getMessage()); 48 | } 49 | } 50 | } -------------------------------------------------------------------------------- /module5/demo2/src/main/java/com/globomantics/Producer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.DoubleSerializer; 7 | import org.apache.kafka.common.serialization.StringSerializer; 8 | import org.slf4j.Logger; 9 | import org.slf4j.LoggerFactory; 10 | 11 | import java.util.Properties; 12 | 13 | public class Producer { 14 | private static final Logger log = LoggerFactory.getLogger(Producer.class); 15 | private static final String TOPIC = "myorders"; 16 | 17 | public static void main(String[] args) throws InterruptedException { 18 | 19 | Properties props = new Properties(); 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, 
StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 23 | KafkaProducer<String, Double> producer = new KafkaProducer<>(props); 24 | String stateString = 25 | "AK,AL,AZ,AR,CA,CO,CT,DE,FL,GA," + 26 | "HI,ID,IL,IN,IA,KS,KY,LA,ME,MD," + 27 | "MA,MI,MN,MS,MO,MT,NE,NV,NH,NJ," + 28 | "NM,NY,NC,ND,OH,OK,OR,PA,RI,SC," + 29 | "SD,TN,TX,UT,VT,VA,WA,WV,WI,WY"; 30 | String[] stateArray = stateString.split(","); 31 | for (int i = 0; i < 25000; i++) { 32 | String key = stateArray[(int) Math.floor(Math.random()*(50))]; 33 | double value = Math.floor(Math.random()* (10000-10+1)+10); 34 | ProducerRecord<String, Double> producerRecord = 35 | new ProducerRecord<>(TOPIC, key, value); 36 | 37 | log.info("Sending message with key " + key + " to Kafka"); 38 | 39 | producer.send(producerRecord, (metadata, e) -> { 40 | if (metadata != null) { 41 | System.out.println(producerRecord.key()); 42 | System.out.println(producerRecord.value()); 43 | System.out.println(metadata.toString()); 44 | } 45 | }); 46 | Thread.sleep(5000); 47 | } 48 | producer.flush(); 49 | producer.close(); 50 | 51 | log.info("Successfully produced messages to " + TOPIC + " topic"); 52 | 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /module6/demo1/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 
| container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 
HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 129 | schema-registry: 130 | image: confluentinc/cp-schema-registry:7.4.1 131 | hostname: schema-registry 132 | container_name: schema-registry 133 | depends_on: 134 | - rest-proxy 135 | ports: 136 | - "8081:8081" 137 | environment: 138 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 139 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' -------------------------------------------------------------------------------- /module6/demo1/filesink.properties: -------------------------------------------------------------------------------- 1 | #my-file-sink.properties config file 2 | name=local-file-sink 3 | connector.class=FileStreamSink 4 | tasks.max=1 5 | 
file=./file-log.txt 6 | topics=connectlog 7 | -------------------------------------------------------------------------------- /module6/demo1/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.globomantics 7 | kafka-connect-standalone 8 | 1.0-SNAPSHOT 9 | 10 | kafka-connect-standalone 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | 
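LogProducer below, like the other demo producers, passes a send callback that prints metadata and silently drops the exception argument. A minimal sketch of a callback body that also surfaces failures, using a plain `BiConsumer<String, Exception>` to stand in for Kafka's `(RecordMetadata, Exception)` callback so it runs without a broker (class and method names are illustrative assumptions, not part of the repo):

```java
import java.util.function.BiConsumer;

// Sketch of a send callback that reports errors instead of ignoring them.
// Kafka invokes the callback with a null exception on success and a null
// metadata on failure; handle() models both branches.
public class CallbackSketch {

    static String handle(String metadata, Exception error) {
        if (error != null) {
            return "send failed: " + error.getMessage();
        }
        return "sent: " + metadata;
    }

    public static void main(String[] args) {
        BiConsumer<String, Exception> callback =
                (metadata, e) -> System.out.println(handle(metadata, e));
        callback.accept("connectlog-0@42", null);                       // success path
        callback.accept(null, new RuntimeException("broker unavailable")); // failure path
    }
}
```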
-------------------------------------------------------------------------------- /module6/demo1/src/main/java/com/globomantics/LogProducer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.DoubleSerializer; 7 | import org.apache.kafka.common.serialization.IntegerSerializer; 8 | import org.apache.kafka.common.serialization.StringSerializer; 9 | import org.slf4j.Logger; 10 | import org.slf4j.LoggerFactory; 11 | 12 | import java.util.Properties; 13 | 14 | public class LogProducer { 15 | private static final Logger log = LoggerFactory.getLogger(LogProducer.class); 16 | private static final String TOPIC = "connectlog"; 17 | 18 | public static void main(String[] args) throws InterruptedException { 19 | 20 | Properties props = new Properties(); 21 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 22 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 23 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 24 | KafkaProducer<Double, String> producer = new KafkaProducer<>(props); 25 | for (int i = 0; i < 25000; i++) { 26 | double key = Math.floor(Math.random()*(50)); 27 | String value = "Some logging info from Kafka with key " + key; 28 | ProducerRecord<Double, String> producerRecord = 29 | new ProducerRecord<>(TOPIC, key, value); 30 | 31 | log.info("Sending message " + value + " to Kafka"); 32 | 33 | producer.send(producerRecord, (metadata, e) -> { 34 | if (metadata != null) { 35 | System.out.println(producerRecord.key()); 36 | System.out.println(producerRecord.value()); 37 | System.out.println(metadata.toString()); 38 | } 39 | }); 40 | Thread.sleep(1000); 41 
| } 42 | producer.flush(); 43 | producer.close(); 44 | 45 | log.info("Successfully produced messages to " + TOPIC + " topic"); 46 | 47 | } 48 | } 49 | -------------------------------------------------------------------------------- /module6/demo1/worker.properties: -------------------------------------------------------------------------------- 1 | #worker.properties 2 | bootstrap.servers=localhost:9092,localhost:9093,localhost:9094 3 | 4 | # The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will 5 | # need to configure these based on the format they want their data in when loaded from or stored into Kafka 6 | key.converter=org.apache.kafka.connect.storage.StringConverter 7 | value.converter=org.apache.kafka.connect.storage.StringConverter 8 | 9 | # The internal converter used for offsets and config data is configurable and must be specified, but most users will 10 | # always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format. 
11 | internal.key.converter=org.apache.kafka.connect.json.JsonConverter 12 | internal.value.converter=org.apache.kafka.connect.json.JsonConverter 13 | internal.key.converter.schemas.enable=true 14 | internal.value.converter.schemas.enable=true 15 | 16 | offset.storage.file.filename=/tmp/connect.offsets 17 | 18 | # Flush much faster than normal, which is useful for testing/debugging 19 | offset.flush.interval.ms=5000 20 | 21 | # Reload metadata faster too so consumer picks up new topics 22 | consumer.metadata.max.age.ms=10000 23 | plugin.path=/Users/axelsirota/kafka_2.13-3.5.1/libs 24 | schema.registry.url=http://localhost:8081 25 | rest.port=8083 26 | group.id=1 27 | config.storage.topic=kafka_connect_configs 28 | cleanup.policy=compact 29 | offset.storage.topic=kafka_connect_offsets 30 | status.storage.topic=kafka_connect_statuses 31 | -------------------------------------------------------------------------------- /module6/demo2/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 
server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 
96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 129 | schema-registry: 130 | image: confluentinc/cp-schema-registry:7.4.1 131 | hostname: schema-registry 132 | container_name: schema-registry 133 | depends_on: 134 | - rest-proxy 135 | ports: 136 | - "8081:8081" 137 | environment: 138 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 139 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 140 | 141 | mongo: 142 | image: mongo 143 | hostname: mongo 144 | container_name: mongo 145 | ports: 146 | - 27017:27017 147 | -------------------------------------------------------------------------------- /module6/demo2/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.globomantics 7 | kafka-connect-distributed 8 | 1.0-SNAPSHOT 9 | 10 | kafka-connect-distributed 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 
35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /module6/demo2/src/main/avro/Album.avsc: -------------------------------------------------------------------------------- 1 | { 2 | "fields": [ 3 | { 4 | "name": "name", 5 | "type": "string" 6 | }, 7 | { 8 | "name": "year", 9 | "type": "int" 10 | } 11 | ], 12 | "name": "Album", 13 | "namespace": "com.globomantics.model", 14 | "type": "record" 15 | } -------------------------------------------------------------------------------- /module6/demo2/src/main/java/com/globomantics/AlbumSender.java: 
-------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import com.globomantics.model.Album; 4 | import io.confluent.kafka.serializers.KafkaAvroSerializer; 5 | import io.confluent.kafka.serializers.KafkaAvroSerializerConfig; 6 | import org.apache.kafka.clients.producer.KafkaProducer; 7 | import org.apache.kafka.clients.producer.ProducerConfig; 8 | import org.apache.kafka.clients.producer.ProducerRecord; 9 | import org.apache.kafka.common.serialization.DoubleSerializer; 10 | import org.slf4j.Logger; 11 | import org.slf4j.LoggerFactory; 12 | 13 | import java.util.Properties; 14 | 15 | public class AlbumSender { 16 | private static final Logger log = LoggerFactory.getLogger(AlbumSender.class); 17 | private static final String TOPIC = "connect-distributed"; 18 | 19 | public static void main(String[] args) { 20 | 21 | Properties props = new Properties(); 22 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 23 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 24 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class.getName()); 25 | props.put(KafkaAvroSerializerConfig.SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081"); 26 | Album album = Album.newBuilder() 27 | .setName("Use Your Illusion") 28 | .setYear(1991).build(); 29 | KafkaProducer<Double, Album> producer = new KafkaProducer<>(props); 30 | double key = Math.floor(Math.random()*(50)); 31 | ProducerRecord<Double, Album> producerRecord = 32 | new ProducerRecord<>(TOPIC, key, album); 33 | 34 | log.info("Sending message " + album + " to Kafka"); 35 | 36 | producer.send(producerRecord, (metadata, e) -> { 37 | if (metadata != null) { 38 | System.out.println(producerRecord.key()); 39 | System.out.println(producerRecord.value()); 40 | System.out.println(metadata.toString()); 41 | } 42 | }); 43 | producer.flush(); 44 | producer.close(); 45 | 46 |
log.info("Successfully produced messages to " + TOPIC + " topic"); 47 | 48 | } 49 | } 50 | -------------------------------------------------------------------------------- /module6/demo2/worker.properties: -------------------------------------------------------------------------------- 1 | #worker.properties 2 | bootstrap.servers=localhost:9092,localhost:9093,localhost:9094 3 | 4 | # The converters specify the format of data in Kafka and how to translate it into Connect data. Every Connect user will 5 | # need to configure these based on the format they want their data in when loaded from or stored into Kafka. 6 | key.converter=org.apache.kafka.connect.storage.StringConverter 7 | value.converter=io.confluent.connect.avro.AvroConverter 8 | value.converter.schema.registry.url=http://localhost:8081 9 | 10 | # The internal converter used for offsets and config data is configurable and must be specified, but most users will 11 | # always want to use the built-in default. Offset and config data is never visible outside of Kafka Connect in this format.
12 | internal.key.converter=org.apache.kafka.connect.json.JsonConverter 13 | internal.value.converter=org.apache.kafka.connect.json.JsonConverter 14 | internal.key.converter.schemas.enable=true 15 | internal.value.converter.schemas.enable=true 16 | 17 | offset.storage.file.filename=/tmp/connect.offsets 18 | 19 | # Flush much faster than normal, which is useful for testing/debugging 20 | offset.flush.interval.ms=5000 21 | 22 | # Reload metadata faster too so consumer picks up new topics 23 | consumer.metadata.max.age.ms=10000 24 | plugin.path=/Users/axelsirota/kafka_2.13-3.5.1/libs 25 | schema.registry.url=http://localhost:8081 26 | rest.port=8083 27 | group.id=1 28 | config.storage.topic=kafka_connect_configs 29 | cleanup.policy=compact 30 | offset.storage.topic=kafka_connect_offsets 31 | status.storage.topic=kafka_connect_statuses 32 | -------------------------------------------------------------------------------- /module7/demo1/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 
server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 
96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 129 | schema-registry: 130 | image: confluentinc/cp-schema-registry:7.4.1 131 | hostname: schema-registry 132 | container_name: schema-registry 133 | depends_on: 134 | - rest-proxy 135 | ports: 136 | - "8081:8081" 137 | environment: 138 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 139 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' -------------------------------------------------------------------------------- /module7/demo1/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.globomantics 7 | kafka-streams-app 8 | 1.0-SNAPSHOT 9 | 10 | kafka-streams-app 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 
| 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /module7/demo1/src/main/java/com/globomantics/Producer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.IntegerSerializer; 7 | import org.apache.kafka.common.serialization.StringSerializer; 8 | import org.slf4j.Logger; 9 | import org.slf4j.LoggerFactory; 10 | 11 | import java.util.Properties; 12 | import java.util.Random; 13 | import java.util.concurrent.ThreadLocalRandom; 14 | 15 | public class 
Producer { 16 | private static final Logger log = LoggerFactory.getLogger(Producer.class); 17 | private static final Properties props = new Properties(); 18 | 19 | public static void main(String[] args) throws InterruptedException { 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, IntegerSerializer.class.getName()); 23 | KafkaProducer<String, Integer> producer = new KafkaProducer<>(props); 24 | String[] sensors = new String[]{"sensor_1", "sensor_2", "sensor_3"}; 25 | Runtime.getRuntime().addShutdownHook(new Thread(producer::close)); 26 | while (true) { 27 | int idx = new Random().nextInt(sensors.length); 28 | String key = sensors[idx]; 29 | int value = ThreadLocalRandom.current().nextInt(-20, 180 + 1); 30 | ProducerRecord<String, Integer> producerRecord = 31 | new ProducerRecord<>("RawTempReadings", key, value); 32 | 33 | producer.send(producerRecord); 34 | 35 | producer.flush(); 36 | log.info("Successfully produced message from sensor " + key); 37 | Thread.sleep(200); 38 | } 39 | 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /module7/demo1/src/main/java/com/globomantics/SimpleETL.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.common.serialization.Serdes; 4 | import org.apache.kafka.streams.KafkaStreams; 5 | import org.apache.kafka.streams.StreamsBuilder; 6 | import org.apache.kafka.streams.StreamsConfig; 7 | import org.apache.kafka.streams.Topology; 8 | import org.apache.kafka.streams.kstream.KStream; 9 | 10 | import java.util.Properties; 11 | 12 | public class SimpleETL { 13 | 14 | public static void main(String[] args) { 15 | Properties props = new Properties(); 16 | props.put(StreamsConfig.APPLICATION_ID_CONFIG,
"weather.filter"); 17 | props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 18 | props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, 19 | Serdes.String().getClass()); 20 | props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, 21 | Serdes.Integer().getClass()); 22 | 23 | StreamsBuilder builder = new StreamsBuilder(); 24 | KStream<String, Integer> rawReadings = builder.stream("RawTempReadings"); 25 | KStream<String, Integer> validatedReadings = rawReadings 26 | .filter((key, value) -> value > -50 && value < 130); 27 | validatedReadings.to("ValidatedTempReadings"); 28 | 29 | Topology topo = builder.build(); 30 | System.out.println(topo.describe()); 31 | 32 | KafkaStreams streams = new KafkaStreams(topo, props); 33 | Runtime.getRuntime().addShutdownHook(new Thread(streams::close)); 34 | streams.cleanUp(); 35 | streams.start(); 36 | 37 | 38 | System.out.println("Starting simple ETL"); 39 | 40 | } 41 | 42 | } 43 | -------------------------------------------------------------------------------- /module7/demo2/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 |
ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - 
./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 129 | schema-registry: 130 | image: confluentinc/cp-schema-registry:7.4.1 131 | hostname: schema-registry 132 | container_name: schema-registry 133 | depends_on: 134 | - rest-proxy 135 | ports: 136 | - "8081:8081" 137 | environment: 138 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 139 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 140 | 141 | ksqldb-server: 142 | image: confluentinc/cp-ksqldb-server:7.4.1 143 | hostname: ksqldb-server 144 | container_name: ksqldb-server 145 | depends_on: 146 | - schema-registry 147 | ports: 148 | - "8088:8088" 149 | environment: 150 | KSQL_LISTENERS: http://0.0.0.0:8088 151 | KSQL_BOOTSTRAP_SERVERS: broker-1:29092,broker-2:29093,broker-3:29094 152 | KSQL_KSQL_LOGGING_PROCESSING_STREAM_AUTO_CREATE: "true" 153 | KSQL_KSQL_LOGGING_PROCESSING_TOPIC_AUTO_CREATE: "true" 154 | 155 | ksqldb-cli: 156 | image: confluentinc/cp-ksqldb-cli:7.4.1 157 | container_name: ksqldb-cli 158 | 
hostname: ksqldb-cli 159 | depends_on: 160 | - ksqldb-server 161 | entrypoint: /bin/sh 162 | tty: true -------------------------------------------------------------------------------- /module8/demo1/clients/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.com.globomantics 7 | kafka-clients-security-1 8 | 1.0-SNAPSHOT 9 | 10 | kafka-clients-security-1 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 
| 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /module8/demo1/clients/src/main/java/com/globomantics/Consumer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.consumer.ConsumerConfig; 4 | import org.apache.kafka.clients.consumer.ConsumerRecord; 5 | import org.apache.kafka.clients.consumer.ConsumerRecords; 6 | import org.apache.kafka.clients.consumer.KafkaConsumer; 7 | import org.apache.kafka.common.serialization.DoubleDeserializer; 8 | import org.apache.kafka.common.serialization.StringDeserializer; 9 | import org.slf4j.Logger; 10 | import org.slf4j.LoggerFactory; 11 | 12 | import java.time.Duration; 13 | import java.util.Collections; 14 | import java.util.Properties; 15 | 16 | public class Consumer { 17 | 18 | private static final Logger log = LoggerFactory.getLogger(Consumer.class); 19 | 20 | public static void main(String[] args) { 21 | 22 | Properties props = new Properties(); 23 | props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 24 | props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 25 | props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, DoubleDeserializer.class.getName()); 26 | props.put(ConsumerConfig.GROUP_ID_CONFIG, args[0] + "consumer"); 27 | 28 | KafkaConsumer<String, Double> consumer = new KafkaConsumer<>(props); 29 | 30 | Thread haltedHook = new Thread(consumer::close); 31 | Runtime.getRuntime().addShutdownHook(haltedHook); 32 | 33 | consumer.subscribe(Collections.singletonList("myorders")); 34 | 35 | while (true) { 36 | ConsumerRecords<String, Double> records = consumer.poll(Duration.ofMillis(100)); 37 | records.forEach(Consumer::processRecord); 38 | } 39 | } 40 | 41 | private static void processRecord(ConsumerRecord<String, Double> record) { 42 | log.info("Received message with key: " + record.key() + " and value
" + record.value()); 43 | log.info("It comes from partition: " + record.partition()); 44 | try { 45 | Thread.sleep(1000); 46 | } catch (InterruptedException e) { 47 | System.out.println(e.getMessage()); 48 | } 49 | } 50 | } -------------------------------------------------------------------------------- /module8/demo1/clients/src/main/java/com/globomantics/Producer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.DoubleSerializer; 7 | import org.apache.kafka.common.serialization.StringSerializer; 8 | import org.slf4j.Logger; 9 | import org.slf4j.LoggerFactory; 10 | 11 | import java.util.Properties; 12 | 13 | public class Producer { 14 | private static final Logger log = LoggerFactory.getLogger(Producer.class); 15 | private static final String TOPIC = "myorders"; 16 | 17 | public static void main(String[] args) throws InterruptedException { 18 | 19 | Properties props = new Properties(); 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 23 | KafkaProducer<String, Double> producer = new KafkaProducer<>(props); 24 | String stateString = 25 | "AK,AL,AZ,AR,CA,CO,CT,DE,FL,GA," + 26 | "HI,ID,IL,IN,IA,KS,KY,LA,ME,MD," + 27 | "MA,MI,MN,MS,MO,MT,NE,NV,NH,NJ," + 28 | "NM,NY,NC,ND,OH,OK,OR,PA,RI,SC," + 29 | "SD,TN,TX,UT,VT,VA,WA,WV,WI,WY"; 30 | String[] stateArray = stateString.split(","); 31 | for (int i = 0; i < 25000; i++) { 32 | String key = stateArray[(int) Math.floor(Math.random()*(50))]; 33 | double value =
Math.floor(Math.random()* (10000-10+1)+10); 34 | ProducerRecord<String, Double> producerRecord = 35 | new ProducerRecord<>(TOPIC, key, value); 36 | 37 | log.info("Sending message with key " + key + " to Kafka"); 38 | 39 | producer.send(producerRecord, (metadata, e) -> { 40 | if (metadata != null) { 41 | System.out.println(producerRecord.key()); 42 | System.out.println(producerRecord.value()); 43 | System.out.println(metadata.toString()); 44 | } 45 | }); 46 | Thread.sleep(5000); 47 | } 48 | producer.flush(); 49 | producer.close(); 50 | 51 | log.info("Successfully produced messages to " + TOPIC + " topic"); 52 | 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /module8/demo1/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | -
./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: 
HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - "8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 129 | schema-registry: 130 | image: confluentinc/cp-schema-registry:7.4.1 131 | hostname: schema-registry 132 | container_name: schema-registry 133 | depends_on: 134 | - rest-proxy 135 | ports: 136 | - "8081:8081" 137 | environment: 138 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 139 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 140 | -------------------------------------------------------------------------------- /module8/demo1/security/generate-ca.sh: -------------------------------------------------------------------------------- 1 | VALIDITY_DAYS=36500 2 | CA_KEY_FILE="ca-key" 3 | CA_CERT_FILE="ca-cert" 4 | 5 | openssl req -new -x509 -keyout $CA_KEY_FILE -out $CA_CERT_FILE -days $VALIDITY_DAYS 6 | 7 | #### Example Values #### 8 | # Passphrase: password 9 | # Country Name: US 10 | # State or Province: UT 11 | # City: Utah 12 | # Organization Name: Pluralsight 13 | # Organizational Unit Name: Community 14 | # Common Name: pluralsight.com 15 | # Email: learner@pluralsight.com -------------------------------------------------------------------------------- /module8/demo1/security/generate-keystore.sh: -------------------------------------------------------------------------------- 1 | COMMON_NAME=$1 2 | 
ORGANIZATIONAL_UNIT="Community" 3 | ORGANIZATION="Pluralsight" 4 | CITY="Utah" 5 | STATE="UT" 6 | COUNTRY="US" 7 | 8 | CA_ALIAS="ca-root" 9 | CA_CERT_FILE="ca-cert" 10 | VALIDITY_DAYS=36500 11 | 12 | # Generate Keystore with Private Key 13 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -validity $VALIDITY_DAYS -genkey -keyalg RSA -dname "CN=$COMMON_NAME, OU=$ORGANIZATIONAL_UNIT, O=$ORGANIZATION, L=$CITY, ST=$STATE, C=$COUNTRY" 14 | 15 | # Generate Certificate Signing Request (CSR) using the newly created KeyStore 16 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -certreq -file $COMMON_NAME.csr 17 | 18 | # Sign the CSR using the custom CA 19 | openssl x509 -req -CA ca-cert -CAkey ca-key -in $COMMON_NAME.csr -out $COMMON_NAME.signed -days $VALIDITY_DAYS -CAcreateserial 20 | 21 | # Import ROOT CA certificate into Keystore 22 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $CA_ALIAS -importcert -file $CA_CERT_FILE 23 | 24 | # Import newly signed certificate into Keystore 25 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -importcert -file $COMMON_NAME.signed 26 | 27 | # Clean-up 28 | rm $COMMON_NAME.csr 29 | rm $COMMON_NAME.signed 30 | rm ca-cert.srl -------------------------------------------------------------------------------- /module8/demo1/security/generate-truststore.sh: -------------------------------------------------------------------------------- 1 | INSTANCE=$1 2 | CA_ALIAS="ca-root" 3 | CA_CERT_FILE="ca-cert" 4 | 5 | #### Generate Truststore and import ROOT CA certificate #### 6 | keytool -keystore truststore/$INSTANCE.truststore.jks -import -alias $CA_ALIAS -file $CA_CERT_FILE 7 | -------------------------------------------------------------------------------- /module8/demo2/clients/pom.xml: -------------------------------------------------------------------------------- 1 | 4 | 4.0.0 5 | 6 | com.com.globomantics 7 | kafka-clients-security-2 8 | 1.0-SNAPSHOT 9 
| 10 | kafka-clients-security-2 11 | 12 | http://www.example.com 13 | 14 | 15 | UTF-8 16 | 11 17 | 11 18 | 3.5.1 19 | 1.10.1 20 | 7.4.1 21 | 22 | 23 | 24 | 25 | 26 | confluent 27 | https://packages.confluent.io/maven/ 28 | 29 | 30 | 31 | 32 | 33 | 34 | com.fasterxml.jackson.core 35 | jackson-core 36 | 2.15.0 37 | 38 | 39 | com.google.guava 40 | guava 41 | 32.1.2-jre 42 | 43 | 44 | com.google.guava 45 | failureaccess 46 | 1.0.1 47 | 48 | 49 | junit 50 | junit 51 | 4.13.2 52 | test 53 | 54 | 55 | org.apache.kafka 56 | kafka-clients 57 | ${kafka.version} 58 | 59 | 60 | org.assertj 61 | assertj-core 62 | 3.6.2 63 | 64 | 65 | org.slf4j 66 | slf4j-simple 67 | 1.7.25 68 | 69 | 70 | org.apache.kafka 71 | kafka-streams 72 | ${kafka.version} 73 | 74 | 75 | org.apache.avro 76 | avro 77 | ${avro.version} 78 | 79 | 80 | io.confluent 81 | kafka-avro-serializer 82 | ${confluent.version} 83 | 84 | 85 | 86 | 87 | 88 | 89 | 90 | maven-clean-plugin 91 | 3.0.0 92 | 93 | 94 | maven-resources-plugin 95 | 3.0.2 96 | 97 | 98 | maven-compiler-plugin 99 | 3.8.0 100 | 101 | 102 | maven-surefire-plugin 103 | 2.20.1 104 | 105 | 106 | maven-jar-plugin 107 | 3.0.2 108 | 109 | 110 | maven-install-plugin 111 | 2.5.2 112 | 113 | 114 | maven-deploy-plugin 115 | 2.8.2 116 | 117 | 118 | 119 | 120 | 121 | org.apache.avro 122 | avro-maven-plugin 123 | 1.9.2 124 | 125 | 126 | generate-sources 127 | 128 | schema 129 | 130 | 131 | ${project.basedir}/src/main/avro/ 132 | ${project.basedir}/src/main/java/ 133 | 134 | 135 | 136 | 137 | 138 | org.apache.maven.plugins 139 | maven-compiler-plugin 140 | 141 | 1.8 142 | 1.8 143 | 144 | 145 | 146 | 147 | 148 | -------------------------------------------------------------------------------- /module8/demo2/clients/src/main/java/com/globomantics/Consumer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.consumer.ConsumerConfig; 4 | import 
org.apache.kafka.clients.consumer.ConsumerRecord; 5 | import org.apache.kafka.clients.consumer.ConsumerRecords; 6 | import org.apache.kafka.clients.consumer.KafkaConsumer; 7 | import org.apache.kafka.common.serialization.DoubleDeserializer; 8 | import org.apache.kafka.common.serialization.StringDeserializer; 9 | import org.slf4j.Logger; 10 | import org.slf4j.LoggerFactory; 11 | 12 | import java.time.Duration; 13 | import java.util.Collections; 14 | import java.util.Properties; 15 | 16 | public class Consumer { 17 | 18 | private static final Logger log = LoggerFactory.getLogger(Consumer.class); 19 | 20 | public static void main(String[] args) { 21 | 22 | Properties props = new Properties(); 23 | props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 24 | props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 25 | props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, DoubleDeserializer.class.getName()); 26 | props.put(ConsumerConfig.GROUP_ID_CONFIG, "consumer"); 27 | 28 | KafkaConsumer<String, Double> consumer = new KafkaConsumer<>(props); 29 | 30 | Thread haltedHook = new Thread(consumer::close); 31 | Runtime.getRuntime().addShutdownHook(haltedHook); 32 | 33 | consumer.subscribe(Collections.singletonList("myorders")); 34 | 35 | while (true) { 36 | ConsumerRecords<String, Double> records = consumer.poll(Duration.ofMillis(100)); 37 | records.forEach(Consumer::processRecord); 38 | } 39 | } 40 | 41 | private static void processRecord(ConsumerRecord<String, Double> record) { 42 | log.info("Received message with key: " + record.key() + " and value " + record.value()); 43 | log.info("It comes from partition: " + record.partition()); 44 | try { 45 | Thread.sleep(1000); 46 | } catch (InterruptedException e) { 47 | System.out.println(e.getMessage()); 48 | } 49 | } 50 | } --------------------------------------------------------------------------------
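The producers in these demos build each record from a random US state abbreviation (the key) and a random order value computed as `Math.floor(Math.random() * (10000 - 10 + 1) + 10)` (the value). The sketch below is a minimal, pure-JDK illustration of why that formula always yields a whole number in the inclusive range [10, 10000]; it is not part of the demo sources, and the class and method names are hypothetical.

```java
// Standalone sketch of the random key/value generation used by the demo
// producers. Names here (RandomOrderSketch, randomState, randomValue) are
// illustrative only.
public class RandomOrderSketch {

    // Picks a random element, as the producer does with
    // stateArray[(int) Math.floor(Math.random() * 50)].
    static String randomState(String[] states) {
        return states[(int) Math.floor(Math.random() * states.length)];
    }

    // Math.random() is in [0, 1), so Math.random() * (max - min + 1) + min is
    // in [min, max + 1); taking the floor gives a whole-number double in the
    // inclusive range [min, max]. With min=10 and max=10000 this matches the
    // producer's order-value formula.
    static double randomValue(double min, double max) {
        return Math.floor(Math.random() * (max - min + 1) + min);
    }

    public static void main(String[] args) {
        String[] states = {"AK", "AL", "AZ"};
        for (int i = 0; i < 1_000; i++) {
            double v = randomValue(10, 10_000);
            if (v < 10 || v > 10_000 || v != Math.floor(v)) {
                throw new AssertionError("value out of expected range: " + v);
            }
        }
        System.out.println(randomState(states) + " -> " + randomValue(10, 10_000));
    }
}
```

Note that with a 50-element state array, `(int) Math.floor(Math.random() * 50)` can never produce an out-of-bounds index, since `Math.random()` is strictly less than 1.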
/module8/demo2/clients/src/main/java/com/globomantics/Producer.java: -------------------------------------------------------------------------------- 1 | package com.globomantics; 2 | 3 | import org.apache.kafka.clients.producer.KafkaProducer; 4 | import org.apache.kafka.clients.producer.ProducerConfig; 5 | import org.apache.kafka.clients.producer.ProducerRecord; 6 | import org.apache.kafka.common.serialization.DoubleSerializer; 7 | import org.apache.kafka.common.serialization.StringSerializer; 8 | import org.slf4j.Logger; 9 | import org.slf4j.LoggerFactory; 10 | 11 | import java.util.Properties; 12 | 13 | public class Producer { 14 | private static final Logger log = LoggerFactory.getLogger(Producer.class); 15 | private static final String TOPIC = "myorders"; 16 | 17 | public static void main(String[] args) throws InterruptedException { 18 | 19 | Properties props = new Properties(); 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092,localhost:9093,localhost:9094"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, DoubleSerializer.class.getName()); 23 | KafkaProducer<String, Double> producer = new KafkaProducer<>(props); 24 | String stateString = 25 | "AK,AL,AZ,AR,CA,CO,CT,DE,FL,GA," + 26 | "HI,ID,IL,IN,IA,KS,KY,LA,ME,MD," + 27 | "MA,MI,MN,MS,MO,MT,NE,NV,NH,NJ," + 28 | "NM,NY,NC,ND,OH,OK,OR,PA,RI,SC," + 29 | "SD,TN,TX,UT,VT,VA,WA,WV,WI,WY"; 30 | String[] stateArray = stateString.split(","); 31 | for (int i = 0; i < 25000; i++) { 32 | String key = stateArray[(int) Math.floor(Math.random()*(50))]; 33 | double value = Math.floor(Math.random()* (10000-10+1)+10); 34 | ProducerRecord<String, Double> producerRecord = 35 | new ProducerRecord<>(TOPIC, key, value); 36 | 37 | log.info("Sending message with key " + key + " to Kafka"); 38 | 39 | producer.send(producerRecord, (metadata, e) -> { 40 | if (metadata != null) { 41 |
System.out.println(producerRecord.key()); 42 | System.out.println(producerRecord.value()); 43 | System.out.println(metadata.toString()); 44 | } 45 | }); 46 | Thread.sleep(5000); 47 | } 48 | producer.flush(); 49 | producer.close(); 50 | 51 | log.info("Successfully produced messages to " + TOPIC + " topic"); 52 | 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /module8/demo2/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '3' 3 | services: 4 | zookeeper-1: 5 | image: confluentinc/cp-zookeeper:7.4.1 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./zookeeper-1_data:/var/lib/zookeeper/data 10 | - ./zookeeper-1_log:/var/lib/zookeeper/log 11 | environment: 12 | ZOOKEEPER_CLIENT_PORT: 2181 13 | ZOOKEEPER_TICK_TIME: 2000 14 | ZOO_MY_ID: 1 15 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 16 | 17 | zookeeper-2: 18 | image: confluentinc/cp-zookeeper:7.4.1 19 | hostname: zookeeper-2 20 | container_name: zookeeper-2 21 | volumes: 22 | - ./zookeeper-2_data:/var/lib/zookeeper/data 23 | - ./zookeeper-2_log:/var/lib/zookeeper/log 24 | environment: 25 | ZOOKEEPER_CLIENT_PORT: 2181 26 | ZOOKEEPER_TICK_TIME: 2000 27 | ZOO_MY_ID: 2 28 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 29 | 30 | zookeeper-3: 31 | image: confluentinc/cp-zookeeper:7.4.1 32 | hostname: zookeeper-3 33 | container_name: zookeeper-3 34 | volumes: 35 | - ./zookeeper-3_data:/var/lib/zookeeper/data 36 | - ./zookeeper-3_log:/var/lib/zookeeper/log 37 | environment: 38 | ZOOKEEPER_CLIENT_PORT: 2181 39 | ZOOKEEPER_TICK_TIME: 2000 40 | ZOO_MY_ID: 3 41 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 42 | 43 | 44 | broker-1: 45 | image: 
confluentinc/cp-kafka:7.4.1 46 | hostname: broker-1 47 | container_name: broker-1 48 | volumes: 49 | - ./broker-1-data:/var/lib/kafka/data 50 | depends_on: 51 | - zookeeper-1 52 | - zookeeper-2 53 | - zookeeper-3 54 | ports: 55 | - 9092:9092 56 | - 29092:29092 57 | environment: 58 | KAFKA_BROKER_ID: 1 59 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 60 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9092,INTERNAL://broker-1:29092 61 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 62 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 63 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 64 | 65 | broker-2: 66 | image: confluentinc/cp-kafka:7.4.1 67 | hostname: broker-2 68 | container_name: broker-2 69 | volumes: 70 | - ./broker-2-data:/var/lib/kafka/data 71 | depends_on: 72 | - zookeeper-1 73 | - zookeeper-2 74 | - zookeeper-3 75 | - broker-1 76 | ports: 77 | - 9093:9093 78 | - 29093:29093 79 | environment: 80 | KAFKA_BROKER_ID: 2 81 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 82 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9093,INTERNAL://broker-2:29093 83 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 84 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 85 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 86 | 87 | broker-3: 88 | image: confluentinc/cp-kafka:7.4.1 89 | hostname: broker-3 90 | container_name: broker-3 91 | volumes: 92 | - ./broker-3-data:/var/lib/kafka/data 93 | depends_on: 94 | - zookeeper-1 95 | - zookeeper-2 96 | - zookeeper-3 97 | - broker-1 98 | - broker-2 99 | ports: 100 | - 9094:9094 101 | - 29094:29094 102 | environment: 103 | KAFKA_BROKER_ID: 3 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181 105 | KAFKA_ADVERTISED_LISTENERS: HOST://localhost:9094,INTERNAL://broker-3:29094 106 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: HOST:PLAINTEXT,INTERNAL:PLAINTEXT 107 | KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL 108 | KAFKA_SNAPSHOT_TRUST_EMPTY: true 109 | 110 | 111 | rest-proxy: 112 | image: confluentinc/cp-kafka-rest:7.4.1 113 | ports: 114 | - 
"8082:8082" 115 | depends_on: 116 | - zookeeper-1 117 | - zookeeper-2 118 | - zookeeper-3 119 | - broker-1 120 | - broker-2 121 | - broker-3 122 | hostname: rest-proxy 123 | container_name: rest-proxy 124 | environment: 125 | KAFKA_REST_HOST_NAME: rest-proxy 126 | KAFKA_REST_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 127 | KAFKA_REST_LISTENERS: "http://0.0.0.0:8082" 128 | 129 | schema-registry: 130 | image: confluentinc/cp-schema-registry:7.4.1 131 | hostname: schema-registry 132 | container_name: schema-registry 133 | depends_on: 134 | - rest-proxy 135 | ports: 136 | - "8081:8081" 137 | environment: 138 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 139 | SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'broker-1:29092,broker-2:29093,broker-3:29094' 140 | -------------------------------------------------------------------------------- /module8/demo2/security/generate-ca.sh: -------------------------------------------------------------------------------- 1 | VALIDITY_DAYS=36500 2 | CA_KEY_FILE="ca-key" 3 | CA_CERT_FILE="ca-cert" 4 | 5 | openssl req -new -x509 -keyout $CA_KEY_FILE -out $CA_CERT_FILE -days $VALIDITY_DAYS 6 | 7 | #### Example Values #### 8 | # Passphrase: password 9 | # Country Name: US 10 | # State or Province: UT 11 | # City: Utah 12 | # Organization Name: Pluralsight 13 | # Organizational Unit Name: Community 14 | # Common Name: pluralsight.com 15 | # Email: learner@pluralsight.com -------------------------------------------------------------------------------- /module8/demo2/security/generate-keystore.sh: -------------------------------------------------------------------------------- 1 | COMMON_NAME=$1 2 | ORGANIZATIONAL_UNIT="PS" 3 | ORGANIZATION="Pluralsight" 4 | CITY="Boulder" 5 | STATE="CO" 6 | COUNTRY="US" 7 | 8 | CA_ALIAS="ca-root" 9 | CA_CERT_FILE="ca-cert" 10 | VALIDITY_DAYS=36500 11 | 12 | # Generate Keystore with Private Key 13 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -validity 
$VALIDITY_DAYS -genkey -keyalg RSA -dname "CN=$COMMON_NAME, OU=$ORGANIZATIONAL_UNIT, O=$ORGANIZATION, L=$CITY, ST=$STATE, C=$COUNTRY" 14 | 15 | # Generate Certificate Signing Request (CSR) using the newly created KeyStore 16 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -certreq -file $COMMON_NAME.csr 17 | 18 | # Sign the CSR using the custom CA 19 | openssl x509 -req -CA ca-cert -CAkey ca-key -in $COMMON_NAME.csr -out $COMMON_NAME.signed -days $VALIDITY_DAYS -CAcreateserial 20 | 21 | # Import ROOT CA certificate into Keystore 22 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $CA_ALIAS -importcert -file $CA_CERT_FILE 23 | 24 | # Import newly signed certificate into Keystore 25 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -importcert -file $COMMON_NAME.signed 26 | 27 | # Clean-up 28 | rm $COMMON_NAME.csr 29 | rm $COMMON_NAME.signed 30 | rm ca-cert.srl -------------------------------------------------------------------------------- /module8/demo2/security/generate-truststore.sh: -------------------------------------------------------------------------------- 1 | INSTANCE=$1 2 | CA_ALIAS="ca-root" 3 | CA_CERT_FILE="ca-cert" 4 | 5 | #### Generate Truststore and import ROOT CA certificate #### 6 | keytool -keystore truststore/$INSTANCE.truststore.jks -import -alias $CA_ALIAS -file $CA_CERT_FILE 7 | -------------------------------------------------------------------------------- /module8/demo3/clients/pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | com.pluralsight.kafka 8 | kafka-clients-security-3 9 | 1.0.0 10 | 11 | 12 | 11 13 | 11 14 | 15 | 16 | 17 | 18 | 19 | org.apache.kafka 20 | kafka-clients 21 | 2.6.0 22 | 23 | 24 | 25 | de.saly 26 | kafka-end-2-end-encryption 27 | 1.0.1 28 | 29 | 30 | 31 | jakarta.xml.bind 32 | jakarta.xml.bind-api 33 | 2.3.3 34 | 35 | 36 | 37 | com.fasterxml.jackson.core 38 | jackson-databind 39 | 2.11.3 
40 | 41 | 42 | 43 | org.slf4j 44 | slf4j-api 45 | 1.7.30 46 | 47 | 48 | org.slf4j 49 | slf4j-log4j12 50 | 1.7.25 51 | 52 | 53 | log4j 54 | log4j 55 | 1.2.17 56 | 57 | 58 | 59 | 60 | 61 | 62 | org.apache.maven.plugins 63 | maven-compiler-plugin 64 | 65 | 11 66 | 11 67 | 68 | 69 | 70 | 71 | org.apache.maven.plugins 72 | maven-assembly-plugin 73 | 3.1.1 74 | 75 | 76 | 77 | jar-with-dependencies 78 | 79 | 80 | 81 | 82 | 83 | make-assembly 84 | package 85 | 86 | single 87 | 88 | 89 | 90 | 91 | 92 | 93 | 94 | -------------------------------------------------------------------------------- /module8/demo3/clients/src/main/java/com/pluralsight/kafka/security/encryption/BasicConsumer.java: -------------------------------------------------------------------------------- 1 | package com.pluralsight.kafka.security.encryption; 2 | 3 | import org.apache.kafka.clients.consumer.ConsumerConfig; 4 | import org.apache.kafka.clients.consumer.ConsumerRecords; 5 | import org.apache.kafka.clients.consumer.KafkaConsumer; 6 | import org.apache.kafka.common.serialization.StringDeserializer; 7 | import org.slf4j.Logger; 8 | import org.slf4j.LoggerFactory; 9 | 10 | import java.time.Duration; 11 | import java.util.Collections; 12 | import java.util.Properties; 13 | import java.util.concurrent.ExecutionException; 14 | 15 | public class BasicConsumer { 16 | 17 | private static final Logger log = LoggerFactory.getLogger(BasicConsumer.class); 18 | 19 | public static void main(String[] args) throws ExecutionException, InterruptedException { 20 | Properties props = new Properties(); 21 | props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9091,broker-2:9092,broker-3:9093"); 22 | props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 23 | props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 24 | props.put(ConsumerConfig.GROUP_ID_CONFIG, "basic.consumer"); 25 | 26 | KafkaConsumer consumer = new 
KafkaConsumer<>(props); 27 | 28 | Thread haltedHook = new Thread(consumer::close); 29 | Runtime.getRuntime().addShutdownHook(haltedHook); 30 | 31 | consumer.subscribe(Collections.singletonList("basic-topic")); 32 | 33 | while (true) { 34 | ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); 35 | 36 | records.forEach(record -> log.info("Consumed message: " + record.key() + ":" + record.value())); 37 | } 38 | } 39 | } 40 | -------------------------------------------------------------------------------- /module8/demo3/clients/src/main/java/com/pluralsight/kafka/security/encryption/BasicProducer.java: -------------------------------------------------------------------------------- 1 | package com.pluralsight.kafka.security.encryption; 2 | 3 | import org.apache.kafka.clients.producer.*; 4 | import org.apache.kafka.common.serialization.StringSerializer; 5 | import org.slf4j.Logger; 6 | import org.slf4j.LoggerFactory; 7 | 8 | import java.util.Properties; 9 | import java.util.UUID; 10 | import java.util.concurrent.ExecutionException; 11 | 12 | public class BasicProducer { 13 | 14 | private static final Logger log = LoggerFactory.getLogger(BasicProducer.class); 15 | 16 | public static void main(String[] args) throws ExecutionException, InterruptedException { 17 | Properties props = new Properties(); 18 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9091,broker-2:9092,broker-3:9093"); 19 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 20 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 21 | 22 | KafkaProducer producer = new KafkaProducer<>(props); 23 | 24 | Thread haltedHook = new Thread(producer::close); 25 | Runtime.getRuntime().addShutdownHook(haltedHook); 26 | 27 | long i = 0; 28 | while(true) { 29 | String key = String.valueOf(i); 30 | String value = UUID.randomUUID().toString(); 31 | 32 | ProducerRecord producerRecord = 33 | new 
ProducerRecord<>("basic-topic", key, value); 34 | producer.send(producerRecord); 35 | log.info("Message sent: " + key + ":" + value); 36 | 37 | i++; 38 | Thread.sleep(2000); 39 | } 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /module8/demo3/clients/src/main/java/com/pluralsight/kafka/security/encryption/BasicSSLConsumer.java: -------------------------------------------------------------------------------- 1 | package com.pluralsight.kafka.security.encryption; 2 | 3 | import org.apache.kafka.clients.CommonClientConfigs; 4 | import org.apache.kafka.clients.consumer.ConsumerConfig; 5 | import org.apache.kafka.clients.consumer.ConsumerRecords; 6 | import org.apache.kafka.clients.consumer.KafkaConsumer; 7 | import org.apache.kafka.common.config.SslConfigs; 8 | import org.apache.kafka.common.serialization.StringDeserializer; 9 | import org.slf4j.Logger; 10 | import org.slf4j.LoggerFactory; 11 | 12 | import java.time.Duration; 13 | import java.util.Collections; 14 | import java.util.Properties; 15 | import java.util.concurrent.ExecutionException; 16 | 17 | public class BasicSSLConsumer { 18 | 19 | private static final Logger log = LoggerFactory.getLogger(BasicSSLConsumer.class); 20 | 21 | public static void main(String[] args) throws ExecutionException, InterruptedException { 22 | Properties props = new Properties(); 23 | props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9191,broker-2:9192,broker-3:9193"); 24 | props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 25 | props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName()); 26 | props.put(ConsumerConfig.GROUP_ID_CONFIG, "basic.consumer"); 27 | 28 | props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); 29 | 30 | props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/Users/axelsirota/repos/getting-started-kafka/module8/demo3/security/truststore/consumer.truststore.jks"); 
// Replace with the absolute path on your machine 31 | props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password"); 32 | 33 | KafkaConsumer consumer = new KafkaConsumer<>(props); 34 | 35 | Thread haltedHook = new Thread(consumer::close); 36 | Runtime.getRuntime().addShutdownHook(haltedHook); 37 | 38 | consumer.subscribe(Collections.singletonList("basic-topic")); 39 | 40 | while (true) { 41 | ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); 42 | 43 | records.forEach(record -> log.info("Consumed message: " + record.key() + ":" + record.value())); 44 | } 45 | } 46 | } 47 | -------------------------------------------------------------------------------- /module8/demo3/clients/src/main/java/com/pluralsight/kafka/security/encryption/BasicSSLProducer.java: -------------------------------------------------------------------------------- 1 | package com.pluralsight.kafka.security.encryption; 2 | 3 | import org.apache.kafka.clients.CommonClientConfigs; 4 | import org.apache.kafka.clients.producer.*; 5 | import org.apache.kafka.common.config.SslConfigs; 6 | import org.apache.kafka.common.serialization.StringSerializer; 7 | import org.slf4j.Logger; 8 | import org.slf4j.LoggerFactory; 9 | 10 | import java.util.Properties; 11 | import java.util.UUID; 12 | import java.util.concurrent.ExecutionException; 13 | 14 | public class BasicSSLProducer { 15 | 16 | private static final Logger log = LoggerFactory.getLogger(BasicSSLProducer.class); 17 | 18 | public static void main(String[] args) throws ExecutionException, InterruptedException { 19 | Properties props = new Properties(); 20 | props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9191,broker-2:9192,broker-3:9193"); 21 | props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 22 | props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); 23 | 24 | props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL"); 25 | 26 | 
props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/Users/axelsirota/repos/getting-started-kafka/module8/demo3/security/truststore/producer.truststore.jks"); // Replace with the absolute path on your machine 27 | props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "password"); 28 | 29 | KafkaProducer producer = new KafkaProducer<>(props); 30 | 31 | Thread haltedHook = new Thread(producer::close); 32 | Runtime.getRuntime().addShutdownHook(haltedHook); 33 | 34 | long i = 0; 35 | while(true) { 36 | String key = String.valueOf(i); 37 | String value = UUID.randomUUID().toString(); 38 | 39 | ProducerRecord producerRecord = 40 | new ProducerRecord<>("basic-topic", key, value); 41 | producer.send(producerRecord); 42 | log.info("Message sent: " + key + ":" + value); 43 | 44 | i++; 45 | Thread.sleep(20); 46 | } 47 | } 48 | } 49 | -------------------------------------------------------------------------------- /module8/demo3/clients/src/main/resources/log4j.properties: -------------------------------------------------------------------------------- 1 | log4j.rootLogger=INFO, stderr 2 | 3 | log4j.appender.stderr=org.apache.log4j.ConsoleAppender 4 | log4j.appender.stderr.layout=org.apache.log4j.PatternLayout 5 | log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n 6 | log4j.appender.stderr.Target=System.err -------------------------------------------------------------------------------- /module8/demo3/docker-compose.yaml: -------------------------------------------------------------------------------- 1 | --- 2 | version: '2' 3 | services: 4 | zookeeper-1: 5 | image: zookeeper:3.6.2 6 | hostname: zookeeper-1 7 | container_name: zookeeper-1 8 | volumes: 9 | - ./security/keystore/zookeeper-1.keystore.jks:/security/zookeeper-1.keystore.jks 10 | - ./security/truststore/zookeeper-1.truststore.jks:/security/zookeeper-1.truststore.jks 11 | - ./security/authentication/zookeeper_jaas.conf:/security/zookeeper_jaas.conf 12 | environment: 13 | ZOO_MY_ID: 1 14 | ZOO_SERVERS: 
server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 15 | ZOO_CFG_EXTRA: "sslQuorum=true 16 | portUnification=false 17 | serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory 18 | 19 | ssl.quorum.hostnameVerification=false 20 | ssl.quorum.keyStore.location=/security/zookeeper-1.keystore.jks 21 | ssl.quorum.keyStore.password=password 22 | ssl.quorum.trustStore.location=/security/zookeeper-1.truststore.jks 23 | ssl.quorum.trustStore.password=password 24 | 25 | secureClientPort=2281 26 | ssl.hostnameVerification=false 27 | ssl.keyStore.location=/security/zookeeper-1.keystore.jks 28 | ssl.keyStore.password=password 29 | ssl.trustStore.location=/security/zookeeper-1.truststore.jks 30 | ssl.trustStore.password=password" 31 | 32 | zookeeper-2: 33 | image: zookeeper:3.6.2 34 | hostname: zookeeper-2 35 | container_name: zookeeper-2 36 | volumes: 37 | - ./security/keystore/zookeeper-2.keystore.jks:/security/zookeeper-2.keystore.jks 38 | - ./security/truststore/zookeeper-2.truststore.jks:/security/zookeeper-2.truststore.jks 39 | - ./security/authentication/zookeeper_jaas.conf:/security/zookeeper_jaas.conf 40 | environment: 41 | ZOO_MY_ID: 2 42 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 43 | ZOO_CFG_EXTRA: "sslQuorum=true 44 | portUnification=false 45 | serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory 46 | 47 | ssl.quorum.hostnameVerification=false 48 | ssl.quorum.keyStore.location=/security/zookeeper-2.keystore.jks 49 | ssl.quorum.keyStore.password=password 50 | ssl.quorum.trustStore.location=/security/zookeeper-2.truststore.jks 51 | ssl.quorum.trustStore.password=password 52 | 53 | secureClientPort=2281 54 | ssl.hostnameVerification=false 55 | ssl.keyStore.location=/security/zookeeper-2.keystore.jks 56 | ssl.keyStore.password=password 57 | ssl.trustStore.location=/security/zookeeper-2.truststore.jks 58 | 
ssl.trustStore.password=password" 59 | 60 | zookeeper-3: 61 | image: zookeeper:3.6.2 62 | hostname: zookeeper-3 63 | container_name: zookeeper-3 64 | volumes: 65 | - ./security/keystore/zookeeper-3.keystore.jks:/security/zookeeper-3.keystore.jks 66 | - ./security/truststore/zookeeper-3.truststore.jks:/security/zookeeper-3.truststore.jks 67 | - ./security/authentication/zookeeper_jaas.conf:/security/zookeeper_jaas.conf 68 | environment: 69 | ZOO_MY_ID: 3 70 | ZOO_SERVERS: server.1=zookeeper-1:2888:3888;2181 server.2=zookeeper-2:2888:3888;2181 server.3=zookeeper-3:2888:3888;2181 71 | ZOO_CFG_EXTRA: "sslQuorum=true 72 | portUnification=false 73 | serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory 74 | 75 | ssl.quorum.hostnameVerification=false 76 | ssl.quorum.keyStore.location=/security/zookeeper-3.keystore.jks 77 | ssl.quorum.keyStore.password=password 78 | ssl.quorum.trustStore.location=/security/zookeeper-3.truststore.jks 79 | ssl.quorum.trustStore.password=password 80 | 81 | secureClientPort=2281 82 | ssl.hostnameVerification=false 83 | ssl.keyStore.location=/security/zookeeper-3.keystore.jks 84 | ssl.keyStore.password=password 85 | ssl.trustStore.location=/security/zookeeper-3.truststore.jks 86 | ssl.trustStore.password=password" 87 | 88 | broker-1: 89 | image: bsucaciu/kafka:2.6.0 90 | hostname: broker-1 91 | container_name: broker-1 92 | depends_on: 93 | - zookeeper-1 94 | - zookeeper-2 95 | - zookeeper-3 96 | ports: 97 | - "9091:9091" 98 | - "9191:9191" 99 | volumes: 100 | - ./security/keystore/broker-1.keystore.jks:/kafka/security/broker-1.keystore.jks 101 | - ./security/truststore/broker-1.truststore.jks:/kafka/security/broker-1.truststore.jks 102 | environment: 103 | KAFKA_BROKER_ID: 1 104 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2281,zookeeper-2:2281,zookeeper-3:2281 105 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT 106 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-1:9091 107 | KAFKA_LISTENERS: PLAINTEXT://broker-1:9091 108 
| KAFKA_DEFAULT_REPLICATION_FACTOR: 3 109 | KAFKA_MIN_INSYNC_REPLICAS: 2 110 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3 111 | KAFKA_ZOOKEEPER_SSL_CLIENT_ENABLE: "true" 112 | KAFKA_ZOOKEEPER_CLIENT_CNXN_SOCKET: org.apache.zookeeper.ClientCnxnSocketNetty 113 | KAFKA_ZOOKEEPER_SSL_KEYSTORE_LOCATION: /kafka/security/broker-1.keystore.jks 114 | KAFKA_ZOOKEEPER_SSL_KEYSTORE_PASSWORD: password 115 | KAFKA_ZOOKEEPER_SSL_TRUSTSTORE_LOCATION: /kafka/security/broker-1.truststore.jks 116 | KAFKA_ZOOKEEPER_SSL_TRUSTSTORE_PASSWORD: password 117 | KAFKA_SECURITY_INTER_BROKER_PROTOCOL: SSL 118 | 119 | broker-2: 120 | image: bsucaciu/kafka:2.6.0 121 | hostname: broker-2 122 | container_name: broker-2 123 | depends_on: 124 | - zookeeper-1 125 | - zookeeper-2 126 | - zookeeper-3 127 | ports: 128 | - "9092:9092" 129 | - "9192:9192" 130 | volumes: 131 | - ./security/keystore/broker-2.keystore.jks:/kafka/security/broker-2.keystore.jks 132 | - ./security/truststore/broker-2.truststore.jks:/kafka/security/broker-2.truststore.jks 133 | environment: 134 | KAFKA_BROKER_ID: 2 135 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2281,zookeeper-2:2281,zookeeper-3:2281 136 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT 137 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-2:9092 138 | KAFKA_LISTENERS: PLAINTEXT://broker-2:9092 139 | KAFKA_DEFAULT_REPLICATION_FACTOR: 3 140 | KAFKA_MIN_INSYNC_REPLICAS: 2 141 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3 142 | KAFKA_ZOOKEEPER_SSL_CLIENT_ENABLE: "true" 143 | KAFKA_ZOOKEEPER_CLIENT_CNXN_SOCKET: org.apache.zookeeper.ClientCnxnSocketNetty 144 | KAFKA_ZOOKEEPER_SSL_KEYSTORE_LOCATION: /kafka/security/broker-2.keystore.jks 145 | KAFKA_ZOOKEEPER_SSL_KEYSTORE_PASSWORD: password 146 | KAFKA_ZOOKEEPER_SSL_TRUSTSTORE_LOCATION: /kafka/security/broker-2.truststore.jks 147 | KAFKA_ZOOKEEPER_SSL_TRUSTSTORE_PASSWORD: password 148 | KAFKA_SSL_CLIENT_AUTH: none 149 | 150 | broker-3: 151 | image: bsucaciu/kafka:2.6.0 152 | hostname: broker-3 153 | container_name: 
broker-3 154 | depends_on: 155 | - zookeeper-1 156 | - zookeeper-2 157 | - zookeeper-3 158 | ports: 159 | - "9093:9093" 160 | - "9193:9193" 161 | volumes: 162 | - ./security/keystore/broker-3.keystore.jks:/kafka/security/broker-3.keystore.jks 163 | - ./security/truststore/broker-3.truststore.jks:/kafka/security/broker-3.truststore.jks 164 | environment: 165 | KAFKA_BROKER_ID: 3 166 | KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2281,zookeeper-2:2281,zookeeper-3:2281 167 | KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT 168 | KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker-3:9093 169 | KAFKA_LISTENERS: PLAINTEXT://broker-3:9093 170 | KAFKA_DEFAULT_REPLICATION_FACTOR: 3 171 | KAFKA_MIN_INSYNC_REPLICAS: 2 172 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 3 173 | KAFKA_ZOOKEEPER_SSL_CLIENT_ENABLE: "true" 174 | KAFKA_ZOOKEEPER_CLIENT_CNXN_SOCKET: org.apache.zookeeper.ClientCnxnSocketNetty 175 | KAFKA_ZOOKEEPER_SSL_KEYSTORE_LOCATION: /kafka/security/broker-3.keystore.jks 176 | KAFKA_ZOOKEEPER_SSL_KEYSTORE_PASSWORD: password 177 | KAFKA_ZOOKEEPER_SSL_TRUSTSTORE_LOCATION: /kafka/security/broker-3.truststore.jks 178 | KAFKA_ZOOKEEPER_SSL_TRUSTSTORE_PASSWORD: password 179 | 180 | -------------------------------------------------------------------------------- /module8/demo3/security/generate-ca.sh: -------------------------------------------------------------------------------- 1 | VALIDITY_DAYS=36500 2 | CA_KEY_FILE="ca-key" 3 | CA_CERT_FILE="ca-cert" 4 | 5 | openssl req -new -x509 -keyout $CA_KEY_FILE -out $CA_CERT_FILE -days $VALIDITY_DAYS 6 | 7 | #### Example Values #### 8 | # Passphrase: password 9 | # Country Name: US 10 | # State or Province: UT 11 | # City: Utah 12 | # Organization Name: Pluralsight 13 | # Organizational Unit Name: Community 14 | # Common Name: pluralsight.com 15 | # Email: learner@pluralsight.com -------------------------------------------------------------------------------- /module8/demo3/security/generate-keystore.sh: 
-------------------------------------------------------------------------------- 1 | COMMON_NAME=$1 2 | ORGANIZATIONAL_UNIT="Community" 3 | ORGANIZATION="Pluralsight" 4 | CITY="Utah" 5 | STATE="Utah" 6 | COUNTRY="US" 7 | 8 | CA_ALIAS="ca-root" 9 | CA_CERT_FILE="ca-cert" 10 | VALIDITY_DAYS=36500 11 | 12 | # Generate Keystore with Private Key 13 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -validity $VALIDITY_DAYS -genkey -keyalg RSA -dname "CN=$COMMON_NAME, OU=$ORGANIZATIONAL_UNIT, O=$ORGANIZATION, L=$CITY, ST=$STATE, C=$COUNTRY" 14 | 15 | # Generate Certificate Signing Request (CSR) using the newly created KeyStore 16 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -certreq -file $COMMON_NAME.csr 17 | 18 | # Sign the CSR using the custom CA 19 | openssl x509 -req -CA ca-cert -CAkey ca-key -in $COMMON_NAME.csr -out $COMMON_NAME.signed -days $VALIDITY_DAYS -CAcreateserial 20 | 21 | # Import ROOT CA certificate into Keystore 22 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $CA_ALIAS -importcert -file $CA_CERT_FILE 23 | 24 | # Import newly signed certificate into Keystore 25 | keytool -keystore keystore/$COMMON_NAME.keystore.jks -alias $COMMON_NAME -importcert -file $COMMON_NAME.signed 26 | 27 | # Clean-up 28 | rm $COMMON_NAME.csr 29 | rm $COMMON_NAME.signed 30 | rm ca-cert.srl -------------------------------------------------------------------------------- /module8/demo3/security/generate-truststore.sh: -------------------------------------------------------------------------------- 1 | INSTANCE=$1 2 | CA_ALIAS="ca-root" 3 | CA_CERT_FILE="ca-cert" 4 | 5 | #### Generate Truststore and import ROOT CA certificate #### 6 | keytool -keystore truststore/$INSTANCE.truststore.jks -import -alias $CA_ALIAS -file $CA_CERT_FILE 7 | -------------------------------------------------------------------------------- /rest_proxy.yaml: -------------------------------------------------------------------------------- 
1 | openapi: 3.0.1 2 | info: 3 | title: REST Proxy API 4 | description: >- 5 | The Confluent REST Proxy provides a RESTful interface to a Kafka cluster, making it easy to produce and consume messages, view the state of the cluster, and perform administrative actions without using the native Kafka protocol or clients. 6 |

Some example use cases are 7 | 8 |   - Reporting data to Kafka from any frontend app built in any language not supported by official Confluent clients 9 |   - Ingesting messages into a stream processing framework that doesn’t yet support Kafka 10 |   - Scripting administrative actions 11 |
12 | version: 5.2.1 13 | externalDocs: 14 | description: Confluent's API Reference 15 | url: https://docs.confluent.io/current/kafka-rest/ 16 | paths: 17 | /topics: 18 | get: 19 | responses: 20 | 200: 21 | description: successful operation 22 | content: 23 | application/vnd.kafka.v2+json: 24 | schema: 25 | type: array 26 | items: 27 | type: string 28 | application/vnd.kafka.v2+xml: 29 | schema: 30 | type: array 31 | items: 32 | type: string 33 | /topics/{topicName}: 34 | get: 35 | parameters: 36 | - name: topicName 37 | in: path 38 | description: topic name 39 | required: true 40 | schema: 41 | type: string 42 | responses: 43 | 200: 44 | description: successful operation 45 | content: 46 | application/vnd.kafka.v2+json: 47 | schema: 48 | $ref: '#/components/schemas/topic' 49 | application/vnd.kafka.v2+xml: 50 | schema: 51 | $ref: '#/components/schemas/topic' 52 | 404: 53 | description: Error code 40401 -- Topic not found 54 | post: 55 | description: post messages to a given topic 56 | parameters: 57 | - name: topicName 58 | in: path 59 | description: name of topic to produce the messages to 60 | required: true 61 | schema: 62 | type: string 63 | requestBody: 64 | description: message(s) to produce to the given topic 65 | required: true 66 | content: 67 | application/vnd.kafka.binary.v2+json: 68 | schema: 69 | $ref: '#/components/schemas/messages' 70 | application/vnd.kafka.avro.v2+json: 71 | schema: 72 | $ref: '#/components/schemas/messages' 73 | application/vnd.kafka.json.v2+json: 74 | schema: 75 | $ref: '#/components/schemas/messages' 76 | responses: 77 | 200: 78 | description: message(s) posted 79 | content: 80 | application/vnd.kafka.v2+json: 81 | schema: 82 | $ref: '#/components/schemas/messages_response' 83 | 404: 84 | description: 40401 -- Topic not found 85 | 422: 86 | description: >- 87 | 42201 -- Request includes keys and uses a format that requires schemas, but does not include the `key_schema` or `key_schema_id` fields
88 | 42202 -- Request includes values and uses a format that requires schemas, but does not include the `value_schema` or `value_schema_id` fields
89 | 42205 -- Request includes invalid schema. 90 | /topics/{topicName}/partitions: 91 | get: 92 | description: Get a list of partitions for the topic 93 | parameters: 94 | - name: topicName 95 | in: path 96 | description: the name of the topic 97 | required: true 98 | schema: 99 | type: string 100 | responses: 101 | 200: 102 | description: successful 103 | content: 104 | application/vnd.kafka.v2+json: 105 | schema: 106 | $ref: '#/components/schemas/partitions' 107 | 404: 108 | description: topic not found 109 | /topics/{topicName}/partitions/{partitionID}: 110 | get: 111 | description: Get metadata about a single partition in the topic 112 | parameters: 113 | - name: topicName 114 | in: path 115 | description: Name of the topic 116 | required: true 117 | schema: 118 | type: string 119 | - name: partitionID 120 | in: path 121 | description: ID of the partition to inspect 122 | required: true 123 | schema: 124 | type: string 125 | responses: 126 | 200: 127 | description: successful 128 | content: 129 | application/vnd.kafka.v2+json: 130 | schema: 131 | $ref: '#/components/schemas/partitions' 132 | post: 133 | description: Produce messages to one partition of the topic 134 | parameters: 135 | - name: topicName 136 | in: path 137 | description: name of topic to produce the messages to 138 | required: true 139 | schema: 140 | type: string 141 | - name: partitionID 142 | in: path 143 | required: true 144 | schema: 145 | type: integer 146 | requestBody: 147 | description: message(s) to produce to the given topic 148 | required: true 149 | content: 150 | application/vnd.kafka.v2+json: 151 | schema: 152 | $ref: '#/components/schemas/messages' 153 | responses: 154 | 200: 155 | description: message(s) posted 156 | content: 157 | application/vnd.kafka.v2+json: 158 | schema: 159 | $ref: '#/components/schemas/messages_response' 160 | 404: 161 | description: >- 162 | 40401 -- Topic not found
163 | 40402 -- Partition not found 164 | 422: 165 | description: >- 166 | 42201 -- Request includes keys and uses a format that requires schemas, but does not include the `key_schema` or `key_schema_id` fields
167 | 42202 -- Request includes values and uses a format that requires schemas, but does not include the `value_schema` or `value_schema_id` fields
168 | 42205 -- Request includes invalid schema. 169 | /consumers/{group_name}: 170 | description: Create a new consumer instance in the consumer group 171 | post: 172 | parameters: 173 | - name: group_name 174 | in: path 175 | required: true 176 | schema: 177 | type: string 178 | requestBody: 179 | required: true 180 | content: 181 | application/vnd.kafka.v2+json: 182 | schema: 183 | $ref: '#/components/schemas/consumer_group' 184 | responses: 185 | 200: 186 | description: Consumer group created 187 | content: 188 | application/vnd.kafka.v2+json: 189 | schema: 190 | type: object 191 | properties: 192 | instance_id: 193 | type: string 194 | description: Unique ID for the consumer instance in this group 195 | base_uri: 196 | type: string 197 | description: Base URI used to construct URIs for subsequent requests against this consumer instance. This will be of the form `http://hostname:port/consumers/consumer_group/instances/instance_id` 198 | 409: 199 | description: Consumer instance with the specified name already exists 200 | 422: 201 | description: Invalid consumer configuration 202 | /consumers/{group_name}/instances/{instance}: 203 | description: Delete the consumer instance 204 | delete: 205 | parameters: 206 | - name: group_name 207 | in: path 208 | required: true 209 | schema: 210 | type: string 211 | description: The name of the consumer group 212 | - name: instance 213 | in: path 214 | required: true 215 | schema: 216 | type: string 217 | description: The ID of the consumer instance 218 | responses: 219 | 204: 220 | description: Success 221 | 404: 222 | description: 40403 -- Consumer instance not found 223 | /consumers/{group_name}/instances/{instance}/offsets: 224 | post: 225 | description: Commit a list of offsets for the consumer. When the post body is empty, it commits all the records that have been fetched by the consumer instance. 
226 | parameters: 227 | - name: group_name 228 | in: path 229 | required: true 230 | schema: 231 | type: string 232 | description: The name of the consumer group 233 | - name: instance 234 | in: path 235 | required: true 236 | schema: 237 | type: string 238 | description: The ID of the consumer instance 239 | requestBody: 240 | description: The offsets to commit 241 | content: 242 | application/vnd.kafka.v2+json: 243 | schema: 244 | $ref: '#/components/schemas/offsets' 245 | responses: 246 | 200: 247 | description: Success 248 | 404: 249 | description: 40403 -- Consumer instance not found 250 | get: 251 | description: Get the latest committed offsets for the given partitions 252 | parameters: 253 | - name: group_name 254 | in: path 255 | required: true 256 | schema: 257 | type: string 258 | description: The name of the consumer group 259 | - name: instance 260 | in: path 261 | required: true 262 | schema: 263 | type: string 264 | description: The ID of the consumer instance 265 | requestBody: 266 | required: true 267 | content: 268 | application/vnd.kafka.v2+json: 269 | schema: 270 | $ref: '#/components/schemas/offsets_query' 271 | responses: 272 | 200: 273 | description: Success 274 | content: 275 | application/vnd.kafka.v2+json: 276 | schema: 277 | $ref: '#/components/schemas/offsets_response' 278 | 404: 279 | description: >- 280 | 40402 -- Partition not found
281 | 40403 -- Consumer instance not found 282 | /consumers/{group_name}/instances/{instance}/subscription: 283 | post: 284 | description: Subscribe to the given list of topics or a topic pattern to get dynamically assigned partitions. If a prior subscription exists, it will be replaced by the latest subscription. 285 | parameters: 286 | - name: group_name 287 | in: path 288 | required: true 289 | schema: 290 | type: string 291 | description: The name of the consumer group 292 | - name: instance 293 | in: path 294 | required: true 295 | schema: 296 | type: string 297 | description: The ID of the consumer instance 298 | requestBody: 299 | required: true 300 | content: 301 | application/vnd.kafka.v2+json: 302 | schema: 303 | oneOf: 304 | - $ref: '#/components/schemas/topic_subscription' 305 | - $ref: '#/components/schemas/topic_pattern_subscription' 306 | 307 | responses: 308 | 404: 309 | description: 40403 -- Consumer instance not found 310 | 409: 311 | description: 40903 -- Subscription to topics, partitions and pattern are mutually exclusive 312 | 204: 313 | description: Success 314 | get: 315 | description: Get the current subscribed list of topics 316 | parameters: 317 | - name: group_name 318 | in: path 319 | required: true 320 | schema: 321 | type: string 322 | description: The name of the consumer group 323 | - name: instance 324 | in: path 325 | required: true 326 | schema: 327 | type: string 328 | description: The ID of the consumer instance 329 | responses: 330 | 404: 331 | description: 40403 -- Consumer instance not found 332 | 200: 333 | description: Success 334 | content: 335 | application/vnd.kafka.v2+json: 336 | schema: 337 | oneOf: 338 | - $ref: '#/components/schemas/topic_subscription' 339 | - $ref: '#/components/schemas/topic_pattern_subscription' 340 | delete: 341 | description: Unsubscribe from the topics currently subscribed 342 | parameters: 343 | - name: group_name 344 | in: path 345 | required: true 346 | schema: 347 | type: string 348 |
description: The name of the consumer group 349 | - name: instance 350 | in: path 351 | required: true 352 | schema: 353 | type: string 354 | description: The ID of the consumer instance 355 | responses: 356 | 204: 357 | description: success 358 | 404: 359 | description: 40403 -- Consumer instance not found 360 | /consumers/{group_name}/instances/{instance}/assignments: 361 | post: 362 | description: Manually assign a list of partitions to this consumer. 363 | parameters: 364 | - name: group_name 365 | in: path 366 | required: true 367 | schema: 368 | type: string 369 | description: The name of the consumer group 370 | - name: instance 371 | in: path 372 | required: true 373 | schema: 374 | type: string 375 | description: The ID of the consumer instance 376 | requestBody: 377 | required: true 378 | content: 379 | application/vnd.kafka.v2+json: 380 | schema: 381 | $ref: '#/components/schemas/assignment' 382 | responses: 383 | 204: 384 | description: success 385 | 404: 386 | description: 40403 -- Consumer instance not found 387 | 409: 388 | description: 40903 -- Subscription to topics, partitions and pattern are mutually exclusive 389 | get: 390 | description: Get the list of partitions currently manually assigned to this consumer 391 | parameters: 392 | - name: group_name 393 | in: path 394 | required: true 395 | schema: 396 | type: string 397 | description: The name of the consumer group 398 | - name: instance 399 | in: path 400 | required: true 401 | schema: 402 | type: string 403 | description: The ID of the consumer instance 404 | responses: 405 | 200: 406 | description: Success 407 | content: 408 | application/vnd.kafka.v2+json: 409 | schema: 410 | $ref: '#/components/schemas/assignment' 411 | /consumers/{group_name}/instances/{instance}/positions: 412 | post: 413 | description: Overrides the fetch offsets that the consumer will use for the next set of records to fetch 414 | parameters: 415 | - name: group_name 416 | in: path 417 | required: true 418 | schema: 
419 | type: string 420 | description: The name of the consumer group 421 | - name: instance 422 | in: path 423 | required: true 424 | schema: 425 | type: string 426 | description: The ID of the consumer instance 427 | requestBody: 428 | content: 429 | application/vnd.kafka.v2+json: 430 | schema: 431 | type: object 432 | properties: 433 | offsets: 434 | type: array 435 | items: 436 | type: object 437 | properties: 438 | topic: 439 | type: string 440 | partition: 441 | type: integer 442 | description: Partition ID 443 | offset: 444 | type: integer 445 | responses: 446 | 204: 447 | description: success 448 | 404: 449 | description: 40403 -- Consumer instance not found 450 | /consumers/{group_name}/instances/{instance}/beginning: 451 | post: 452 | description: Seek to the first offset for each of the given partitions 453 | parameters: 454 | - name: group_name 455 | in: path 456 | required: true 457 | schema: 458 | type: string 459 | description: The name of the consumer group 460 | - name: instance 461 | in: path 462 | required: true 463 | schema: 464 | type: string 465 | description: The ID of the consumer instance 466 | requestBody: 467 | required: true 468 | content: 469 | application/vnd.kafka.v2+json: 470 | schema: 471 | $ref: '#/components/schemas/partition_positions' 472 | responses: 473 | 204: 474 | description: Success 475 | 404: 476 | description: 40403 -- Consumer instance not found 477 | /consumers/{group_name}/instances/{instance}/end: 478 | post: 479 | description: Seek to the last offset for each of the given partitions 480 | parameters: 481 | - name: group_name 482 | in: path 483 | required: true 484 | schema: 485 | type: string 486 | description: The name of the consumer group 487 | - name: instance 488 | in: path 489 | required: true 490 | schema: 491 | type: string 492 | description: The ID of the consumer instance 493 | requestBody: 494 | required: true 495 | content: 496 | application/vnd.kafka.v2+json: 497 | schema: 498 | $ref: 
'#/components/schemas/partition_positions' 499 | responses: 500 | 204: 501 | description: Success 502 | 404: 503 | description: 40403 -- Consumer instance not found 504 | /consumers/{group_name}/instances/{instance}/records: 505 | get: 506 | description: Fetch data for the topics or partitions specified using one of the subscribe/assign APIs 507 | parameters: 508 | - name: group_name 509 | in: path 510 | required: true 511 | schema: 512 | type: string 513 | description: The name of the consumer group 514 | - name: instance 515 | in: path 516 | required: true 517 | schema: 518 | type: string 519 | description: The ID of the consumer instance 520 | - name: timeout 521 | in: query 522 | description: Maximum amount of milliseconds the REST proxy will spend fetching records 523 | schema: 524 | type: integer 525 | - name: max_bytes 526 | in: query 527 | description: The maximum number of bytes of unencoded keys and values that should be included in the response 528 | schema: 529 | type: integer 530 | responses: 531 | 200: 532 | description: records back from Kafka 533 | content: 534 | application/vnd.kafka.binary.v2+json: 535 | schema: 536 | $ref: '#/components/schemas/records' 537 | application/vnd.kafka.avro.v2+json: 538 | schema: 539 | $ref: '#/components/schemas/records' 540 | application/vnd.kafka.json.v2+json: 541 | schema: 542 | $ref: '#/components/schemas/records' 543 | 404: 544 | description: 40403 -- Consumer instance not found 545 | 406: 546 | description: 40601 -- Consumer format does not match the embedded format requested by the `Accept` header 547 | /brokers: 548 | get: 549 | description: Get a list of brokers 550 | responses: 551 | 200: 552 | description: A list of broker IDs 553 | content: 554 | application/vnd.kafka.v2+json: 555 | schema: 556 | type: object 557 | properties: 558 | brokers: 559 | type: array 560 | items: 561 | type: integer 562 | components: 563 | schemas: 564 | topic: 565 | type: object 566 | properties: 567 | name: 568 | type: string 
569 | description: Name of the topic 570 | configs: 571 | type: object 572 | description: Per-topic configuration overrides 573 | partitions: 574 | type: array 575 | description: List of partitions for this topic 576 | items: 577 | type: object 578 | properties: 579 | partition: 580 | type: integer 581 | description: the ID of this partition 582 | leader: 583 | type: integer 584 | description: the broker ID of the leader for this partition 585 | replicas: 586 | type: array 587 | items: 588 | type: object 589 | properties: 590 | broker: 591 | type: integer 592 | description: broker ID of the replica 593 | leader: 594 | type: boolean 595 | description: true if this replica is the leader for the partition 596 | in_sync: 597 | type: boolean 598 | description: true if this replica is currently in sync with the leader 599 | messages: 600 | type: object 601 | properties: 602 | key_schema: 603 | type: string 604 | description: Full schema encoded as a string (e.g. JSON serialized for Avro data) 605 | value_schema: 606 | type: string 607 | description: Full schema encoded as a string (e.g.
JSON serialized for Avro data) 608 | records: 609 | type: array 610 | description: a list of records to produce to the topic 611 | items: 612 | type: object 613 | properties: 614 | key: 615 | type: string 616 | description: The message key, formatted according to the embedded format, or null to omit a key (optional) 617 | value: 618 | type: string 619 | description: The message value, formatted according to the embedded format 620 | partition: 621 | type: integer 622 | description: Partition to store the message in (optional) 623 | messages_response: 624 | type: object 625 | properties: 626 | key_schema_id: 627 | type: integer 628 | description: The ID for the schema used to produce keys, or null if keys were not used 629 | value_schema_id: 630 | type: integer 631 | description: The ID for the schema used to produce values, or null if values were not used 632 | offsets: 633 | type: array 634 | description: List of partitions and offsets the messages were published to 635 | items: 636 | type: object 637 | properties: 638 | partition: 639 | type: integer 640 | description: Partition the message was published to, or null if publishing the message failed 641 | offset: 642 | type: integer 643 | description: Offset of the message, or null if publishing the message failed 644 | error_code: 645 | type: integer 646 | description: An error code classifying the reason the operation failed, or null if it succeeded 647 | enum: 648 | - 1 649 | - 2 650 | error: 651 | type: string 652 | description: An error message describing why the operation failed, or null if it succeeded 653 | partitions: 654 | type: array 655 | items: 656 | $ref: '#/components/schemas/partition' 657 | partition: 658 | type: object 659 | properties: 660 | partition: 661 | type: integer 662 | description: ID of the partition 663 | leader: 664 | type: integer 665 | description: Broker ID of the leader for this partition 666 | replicas: 667 | type: array 668 | description: List of brokers acting as replicas for this
partition 669 | items: 670 | type: object 671 | properties: 672 | broker: 673 | type: integer 674 | description: Broker ID of the replica 675 | leader: 676 | type: boolean 677 | description: true if this broker is the leader for the partition 678 | in_sync: 679 | type: boolean 680 | description: true if the replica is in sync with the leader 681 | consumer_group: 682 | type: object 683 | properties: 684 | name: 685 | type: string 686 | description: Name for the consumer instance, which will be used in URLs for the consumer 687 | format: 688 | type: string 689 | description: The format of consumed messages, which is used to convert messages into a JSON-compatible form. 690 | enum: 691 | - binary 692 | - avro 693 | - json 694 | default: binary 695 | auto.offset.reset: 696 | type: string 697 | description: sets the `auto.offset.reset` setting for the consumer 698 | auto.commit.enable: 699 | type: string 700 | description: sets the `auto.commit.enable` setting for the consumer 701 | fetch.min.bytes: 702 | type: string 703 | description: sets the `fetch.min.bytes` setting for this consumer specifically 704 | consumer.request.timeout.ms: 705 | type: string 706 | description: sets the `consumer.request.timeout.ms` setting for this consumer specifically 707 | offsets: 708 | type: object 709 | properties: 710 | offsets: 711 | type: array 712 | items: 713 | type: object 714 | properties: 715 | topic: 716 | type: string 717 | description: Name of the topic 718 | partition: 719 | type: integer 720 | description: Partition ID 721 | offset: 722 | type: integer 723 | description: The offset to commit 724 | offsets_query: 725 | type: object 726 | properties: 727 | partitions: 728 | type: array 729 | description: A list of partitions to find the last committed offsets for 730 | items: 731 | type: object 732 | properties: 733 | topic: 734 | type: string 735 | description: Name of the topic 736 | partition: 737 | type: integer 738 | description: Partition ID 739 | offsets_response: 
740 | type: object 741 | properties: 742 | offsets: 743 | type: array 744 | items: 745 | type: object 746 | properties: 747 | topic: 748 | type: string 749 | description: Name of the topic 750 | partition: 751 | type: integer 752 | description: Partition ID 753 | offset: 754 | type: integer 755 | description: Committed offset 756 | metadata: 757 | type: string 758 | description: Metadata for the committed offset 759 | topic_subscription: 760 | type: object 761 | properties: 762 | topics: 763 | type: array 764 | description: A list of topics to subscribe 765 | items: 766 | type: string 767 | description: Name of the topic 768 | topic_pattern_subscription: 769 | type: object 770 | properties: 771 | topic_pattern: 772 | type: string 773 | description: A pattern of topics to subscribe to 774 | assignment: 775 | type: object 776 | properties: 777 | partitions: 778 | type: array 779 | items: 780 | type: object 781 | properties: 782 | topic: 783 | type: string 784 | description: Name of the topic 785 | partition: 786 | type: integer 787 | description: Partition ID 788 | partition_positions: 789 | type: object 790 | properties: 791 | partitions: 792 | type: array 793 | description: A list of partitions 794 | items: 795 | type: object 796 | properties: 797 | topic: 798 | type: string 799 | partition: 800 | type: integer 801 | description: Partition ID 802 | records: 803 | type: array 804 | items: 805 | type: object 806 | properties: 807 | topic: 808 | type: string 809 | description: The topic 810 | key: 811 | type: string 812 | value: 813 | type: string 814 | partition: 815 | type: integer 816 | description: Partition of the message 817 | offset: 818 | type: integer 819 | description: Offset of the message --------------------------------------------------------------------------------
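Read end to end, `rest_proxy.yaml` describes a short produce/consume round trip: produce to a topic, create a consumer instance, subscribe, then fetch records. The sketch below is a minimal, hedged illustration of that flow — the proxy address (`localhost:8082`), topic, group, and instance names are assumptions, not part of the repo. It only assembles the requests (so it runs offline); send them with `urllib.request` or `curl` against a live REST Proxy.

```python
import json

PROXY = "http://localhost:8082"   # assumed REST Proxy address; adjust to your setup
TOPIC = "test-topic"              # hypothetical topic name

# 1) Produce JSON messages: POST /topics/{topicName}. The embedded format
#    (json/binary/avro) is encoded in the Content-Type, per the spec's
#    `messages` schema (a `records` list of key/value objects).
produce_request = {
    "method": "POST",
    "url": f"{PROXY}/topics/{TOPIC}",
    "headers": {"Content-Type": "application/vnd.kafka.json.v2+json"},
    "body": json.dumps({"records": [
        {"key": "album-1", "value": {"artist": "Example Artist"}},
        {"value": {"artist": "Another Artist"}},  # key is optional per the spec
    ]}),
}

# 2) Create a consumer instance: POST /consumers/{group_name}
#    (the `consumer_group` schema: name, format, optional config overrides).
create_consumer = {
    "method": "POST",
    "url": f"{PROXY}/consumers/my-group",
    "headers": {"Content-Type": "application/vnd.kafka.v2+json"},
    "body": json.dumps({"name": "my-instance", "format": "json",
                        "auto.offset.reset": "earliest"}),
}

# 3) Subscribe: POST .../subscription with either a `topics` list or a
#    `topic_pattern` -- the spec marks the two as mutually exclusive (40903).
subscribe = {
    "method": "POST",
    "url": f"{PROXY}/consumers/my-group/instances/my-instance/subscription",
    "headers": {"Content-Type": "application/vnd.kafka.v2+json"},
    "body": json.dumps({"topics": [TOPIC]}),
}

# 4) Fetch records: GET .../records. The Accept header must name the same
#    embedded format chosen at step 2, or the proxy answers 406 (40601).
fetch = {
    "method": "GET",
    "url": f"{PROXY}/consumers/my-group/instances/my-instance/records?timeout=3000",
    "headers": {"Accept": "application/vnd.kafka.json.v2+json"},
}

for step in (produce_request, create_consumer, subscribe, fetch):
    print(step["method"], step["url"])
```

Note that, per the spec, the create-consumer response returns a `base_uri` of the form `http://hostname:port/consumers/consumer_group/instances/instance_id`; subsequent consumer requests should be built from that URI rather than hard-coded as above.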