├── .gitignore ├── LICENSE ├── README.md ├── docker-compose.yml └── media ├── .DS_Store ├── Fencing-1917.jpg ├── Horse-and-Rider-1909-1910-.jpg ├── Kafka-connect-UI-screenshot.png ├── Kafka-topics-UI-screenshot.png ├── Schema-Registry-UI-screenshot.png └── Zoonavigator-screenshot.png /.gitignore: -------------------------------------------------------------------------------- 1 | # Logs 2 | logs 3 | *.log 4 | npm-debug.log* 5 | yarn-debug.log* 6 | yarn-error.log* 7 | 8 | # Runtime data 9 | pids 10 | *.pid 11 | *.seed 12 | *.pid.lock 13 | 14 | # Directory for instrumented libs generated by jscoverage/JSCover 15 | lib-cov 16 | 17 | # Coverage directory used by tools like istanbul 18 | coverage 19 | 20 | # nyc test coverage 21 | .nyc_output 22 | 23 | # Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) 24 | .grunt 25 | 26 | # Bower dependency directory (https://bower.io/) 27 | bower_components 28 | 29 | # node-waf configuration 30 | .lock-wscript 31 | 32 | # Compiled binary addons (https://nodejs.org/api/addons.html) 33 | build/Release 34 | 35 | # Dependency directories 36 | node_modules/ 37 | jspm_packages/ 38 | 39 | # TypeScript v1 declaration files 40 | typings/ 41 | 42 | # Optional npm cache directory 43 | .npm 44 | 45 | # Optional eslint cache 46 | .eslintcache 47 | 48 | # Optional REPL history 49 | .node_repl_history 50 | 51 | # Output of 'npm pack' 52 | *.tgz 53 | 54 | # Yarn Integrity file 55 | .yarn-integrity 56 | 57 | # dotenv environment variables file 58 | .env 59 | 60 | # next.js build output 61 | .next 62 | volumes 63 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 TribalScale 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the 
Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ![media/Horse-and-Rider-1909-1910-.jpg](media/Horse-and-Rider-1909-1910-.jpg) 2 | > *Horse and Rider 1909-1910 - Franz Kafka* 3 | 4 | # 🥞 kafka-waffle-stack 5 | Kafka stack with zookeeper, schema-registry, kafka-rest-proxy, kafka-connect, kafka-connect-ui, zoonavigator-web & zoonavigator-api 6 | 7 | 8 | ## 🚂 What toys will you have? 
9 | 10 | - kafka 11 | * kafka-connect 12 | * kafka-connect-ui 13 | * kafka-topics-ui 14 | * kafka-rest-proxy 15 | 16 | - zookeeper 17 | * zoonavigator-api 18 | * zoonavigator-web 19 | 20 | - schema-registry 21 | * schema-registry-ui 22 | 23 | - ksql 24 | 25 | ## 🖥 Nice UIs to play with 26 | 27 | - Kafka topics UI 28 | http://localhost:8000 29 | 30 | - Schema registry UI 31 | http://localhost:8001 32 | 33 | - Kafka connect UI 34 | http://localhost:8002 35 | 36 | - Zoonavigator web 37 | http://localhost:8003 38 | 39 | 40 | ## 🔧 Installation 41 | 42 | 0. ⭐️ Star the repo 😛 43 | 1. 📦 Install Docker - [instructions](https://docs.docker.com/install) 44 | 2. 🐑 Clone this repo 45 | 3. ⌨️ Run `docker-compose up` 46 | 4. ⏱ Wait... 😅 47 | 5. 🕹 Play and have fun 48 | 49 | ## Posting a message to kafka through kafka-rest-proxy 50 | 51 | ```sh 52 | curl -X POST -H "Content-Type: application/vnd.kafka.json.v2+json" -H "Accept: application/vnd.kafka.v2+json" --data '{"records":[{"value":{"foo":"bar"}}]}' "http://localhost:8082/topics/jsontest" 53 | ``` 54 | 55 | > 📝 You can import 👆 into Postman to play with the values. 
`Import > plain text` 56 | 57 | ## Generating data into the kafka cluster 58 | ### simulate "user" data 59 | ```sh 60 | docker run --network kafka-waffle-stack_default --rm --name datagen-users \ 61 | confluentinc/ksql-examples:5.0.0 \ 62 | ksql-datagen \ 63 | bootstrap-server=kafka:9092 \ 64 | quickstart=users \ 65 | format=json \ 66 | topic=users \ 67 | maxInterval=100 68 | ``` 69 | 70 | ### simulate "pageview" data 71 | ```sh 72 | docker run --network kafka-waffle-stack_default --rm --name datagen-pageviews \ 73 | confluentinc/ksql-examples:5.0.0 \ 74 | ksql-datagen \ 75 | bootstrap-server=kafka:9092 \ 76 | quickstart=pageviews \ 77 | format=delimited \ 78 | topic=pageviews \ 79 | maxInterval=500 80 | ``` 81 | 82 | ## Connecting a ksql cli instance to ksql-server 83 | ```sh 84 | docker run --network=kafka-waffle-stack_default -it confluentinc/cp-ksql-cli:5.0.0 http://ksql-server:8088 85 | ``` 86 | 87 | ### Playing around with ksql 88 | 1. Run the "user" and "pageview" data generators 89 | 2. Run a ksql cli container connected to the ksql-server 90 | 3. Create a ksql stream: `CREATE STREAM pageviews_original (viewtime bigint, userid varchar, pageid varchar) WITH (kafka_topic='pageviews', value_format='DELIMITED');` 91 | 4. Create a ksql table: `CREATE TABLE users_original (registertime BIGINT, gender VARCHAR, regionid VARCHAR, userid VARCHAR) WITH (kafka_topic='users', value_format='JSON', key = 'userid');` 92 | 5. Run a one-off query to select data from a stream: `SELECT pageid FROM pageviews_original LIMIT 3;` 93 | 6. Create a persistent query that joins two topics together and writes to another topic: 94 | ``` 95 | CREATE STREAM pageviews_enriched AS \ 96 | SELECT users_original.userid AS userid, pageid, regionid, gender \ 97 | FROM pageviews_original \ 98 | LEFT JOIN users_original \ 99 | ON pageviews_original.userid = users_original.userid; 100 | ``` 101 | 7. 
View the results as they come in (Ctrl+C to cancel; note that this does not end the underlying query): `SELECT * FROM pageviews_enriched;` 102 | 103 | Check out [the source documentation](https://docs.confluent.io/current/ksql/docs/tutorials/basics-docker.html#ksql-quickstart-docker) for more fun examples! 104 | 105 | 106 | ## 🔍 Want to take a 👀 inside the machines? 107 | 108 | ```sh 109 | docker-compose exec <service-name> /bin/bash 110 | ``` 111 | 112 | **For example** 113 | 114 | ```sh 115 | docker-compose exec kafka /bin/bash 116 | ``` 117 | 118 | ```sh 119 | docker-compose exec kafka /bin/bash 120 | root@kafka:/# ls 121 | bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var 122 | root@kafka:/# cd var/lib/kafka/data/ 123 | root@kafka:/var/lib/kafka/data# ls 124 | __confluent.support.metrics-0 __consumer_offsets-2 __consumer_offsets-31 __consumer_offsets-43 cleaner-offset-checkpoint docker-connect-offsets-19 docker-connect-offsets-9 125 | __consumer_offsets-0 __consumer_offsets-20 __consumer_offsets-32 __consumer_offsets-44 docker-connect-configs-0 docker-connect-offsets-2 docker-connect-status-0 126 | __consumer_offsets-1 __consumer_offsets-21 __consumer_offsets-33 __consumer_offsets-45 docker-connect-offsets-0 docker-connect-offsets-20 docker-connect-status-1 127 | __consumer_offsets-10 __consumer_offsets-22 __consumer_offsets-34 __consumer_offsets-46 docker-connect-offsets-1 docker-connect-offsets-21 docker-connect-status-2 128 | __consumer_offsets-11 __consumer_offsets-23 __consumer_offsets-35 __consumer_offsets-47 docker-connect-offsets-10 docker-connect-offsets-22 docker-connect-status-3 129 | __consumer_offsets-12 __consumer_offsets-24 __consumer_offsets-36 __consumer_offsets-48 docker-connect-offsets-11 docker-connect-offsets-23 docker-connect-status-4 130 | __consumer_offsets-13 __consumer_offsets-25 __consumer_offsets-37 __consumer_offsets-49 docker-connect-offsets-12 docker-connect-offsets-24 log-start-offset-checkpoint 131 | 
__consumer_offsets-14 __consumer_offsets-26 __consumer_offsets-38 __consumer_offsets-5 docker-connect-offsets-13 docker-connect-offsets-3 meta.properties 132 | __consumer_offsets-15 __consumer_offsets-27 __consumer_offsets-39 __consumer_offsets-6 docker-connect-offsets-14 docker-connect-offsets-4 recovery-point-offset-checkpoint 133 | __consumer_offsets-16 __consumer_offsets-28 __consumer_offsets-4 __consumer_offsets-7 docker-connect-offsets-15 docker-connect-offsets-5 replication-offset-checkpoint 134 | __consumer_offsets-17 __consumer_offsets-29 __consumer_offsets-40 __consumer_offsets-8 docker-connect-offsets-16 docker-connect-offsets-6 135 | __consumer_offsets-18 __consumer_offsets-3 __consumer_offsets-41 __consumer_offsets-9 docker-connect-offsets-17 docker-connect-offsets-7 136 | __consumer_offsets-19 __consumer_offsets-30 __consumer_offsets-42 _schemas-0 docker-connect-offsets-18 docker-connect-offsets-8 137 | ``` 138 | 139 | ## 💾 Volumes 140 | 141 | The following volumes will be created inside the `./volumes` folder: 142 | 143 | - 📁 `./volumes/kafka` 144 | - 📁 `./volumes/zookeeper` 145 | 146 | 147 | ## Q&A 148 | ### ⚠️ Issues logging into Zoonavigator? 149 | 150 | When you visit Zoonavigator (http://localhost:8003) for the first time, you will be prompted to enter the following details 👇 151 | 152 | **Connection string:** `zookeeper:2181` 153 | **No user / no password:** leave both fields blank and just click Connect 154 | 155 | ### ⚠️ Unable to connect your application to Kafka broker? 
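Before editing any files, it helps to confirm whether the advertised hostname (`kafka`) actually resolves and whether port 9092 is reachable from where your application runs. A minimal, stdlib-only sketch (the `can_reach` helper is hypothetical and not part of this repo; any real Kafka client would surface the same failure):

```python
import socket


def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        # create_connection resolves the hostname and opens a TCP socket;
        # any DNS or connection failure surfaces as an OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Kafka advertises itself as PLAINTEXT://kafka:9092 (see docker-compose.yml),
    # so the name `kafka` must resolve on the machine running your client.
    print("kafka:9092 reachable:", can_reach("kafka", 9092))
```

If this prints `False` while `can_reach("localhost", 9092)` succeeds, the hostname is the problem, which the fix below addresses.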
155 | 156 | 157 | You may need to add the following line to your hosts file (`/etc/hosts` on Linux/macOS, `C:\Windows\system32\drivers\etc\hosts` on Windows): 158 | 159 | `127.0.0.1 kafka` 160 | 161 | 162 | ## 💡 Ideas 163 | 164 | - Use hotel for a nice UI 165 | - A kafka-stack CLI that starts a UI with all the links 166 | 167 | 168 | ## 🙏 Thanks & Credits 169 | 170 | - Illustration - [The art of Franz Kafka - The Drawings (1907-1917) OpenCulture](http://www.openculture.com/2014/02/the-art-of-franz-kafka-drawings-from-1907-1917.html) 171 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3' 2 | 3 | services: 4 | # ZooKeeper is a centralized service for maintaining configuration information, 5 | # naming, providing distributed synchronization, and providing group services. 6 | # It provides distributed coordination for our Kafka cluster. 7 | # http://zookeeper.apache.org/ 8 | zookeeper: 9 | image: zookeeper:3.4.9 10 | # ZooKeeper is designed to "fail-fast", so it is important to allow it to 11 | # restart automatically. 12 | restart: unless-stopped 13 | hostname: zookeeper 14 | # We'll expose the ZK client port so that we can connect to it from our applications. 15 | ports: 16 | - "2181:2181" 17 | volumes: 18 | - ./volumes/zookeeper/data:/data 19 | - ./volumes/zookeeper/datalog:/datalog 20 | 21 | # Kafka is a distributed streaming platform. It is used to build real-time streaming 22 | # data pipelines that reliably move data between systems and platforms, and to build 23 | # real-time streaming applications that transform or react to the streams of data. 24 | # http://kafka.apache.org/ 25 | kafka: 26 | image: confluentinc/cp-kafka:4.1.0 27 | hostname: kafka 28 | ports: 29 | - "9092:9092" 30 | environment: 31 | # Required. Kafka will publish this address to ZooKeeper so clients know 32 | # how to get in touch with Kafka. 
"PLAINTEXT" indicates that no authentication 33 | # mechanism will be used. 34 | KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://kafka:9092" 35 | # Required. Instructs Kafka how to get in touch with ZooKeeper. 36 | KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181" 37 | # Required when running in a single-node cluster, as we are. We would be able to take the default if we had 38 | # three or more nodes in the cluster. 39 | KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 40 | volumes: 41 | - ./volumes/kafka/data:/var/lib/kafka/data 42 | # As Kafka relies upon ZooKeeper, this will instruct docker to wait until the zookeeper service 43 | # is up before attempting to start Kafka. 44 | depends_on: 45 | - zookeeper 46 | 47 | # Written and open sourced by Confluent, the Schema Registry for Apache Kafka enables 48 | # developers to define standard schemas for their events, share them across the 49 | # organization and safely evolve them in a way that is backward compatible and future proof. 50 | # https://www.confluent.io/confluent-schema-registry/ 51 | schema-registry: 52 | image: confluentinc/cp-schema-registry:4.1.0 53 | hostname: schema-registry 54 | restart: unless-stopped 55 | ports: 56 | - "8081:8081" 57 | environment: 58 | # Required. Schema Registry will contact ZooKeeper to figure out how to connect 59 | # to the Kafka cluster. 60 | SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: zookeeper:2181 61 | # Required. This is the hostname that Schema Registry will advertise in ZooKeeper. 62 | SCHEMA_REGISTRY_HOST_NAME: schema-registry 63 | # Schema Registry relies upon both Kafka and ZooKeeper. This will instruct docker to wait 64 | # until the zookeeper and kafka services are up before attempting to start Schema Registry. 65 | depends_on: 66 | - zookeeper 67 | - kafka 68 | 69 | # A web tool that allows you to create / view / search / evolve / view 70 | # history & configure Avro schemas of your Kafka cluster. 
71 | # https://github.com/Landoop/schema-registry-ui 72 | schema-registry-ui: 73 | image: landoop/schema-registry-ui:0.9.4 74 | hostname: schema-registry-ui 75 | # schema-registry-ui binds to port 8000, but we are going to expose it on our local 76 | # machine on port 8001. 77 | ports: 78 | - "8001:8000" 79 | environment: 80 | # Required. Instructs the UI where it can find the schema registry. 81 | SCHEMAREGISTRY_URL: http://schema-registry:8081/ 82 | # This instructs the docker image to use Caddy to proxy traffic to schema-registry-ui. 83 | PROXY: "true" 84 | # As this is a UI for Schema Registry, we rely upon Schema Registry. Docker will wait for 85 | # the schema-registry service to be up before starting schema-registry-ui. 86 | depends_on: 87 | - schema-registry 88 | 89 | # The Kafka REST Proxy provides a RESTful interface to a Kafka cluster. 90 | # It makes it easy to produce and consume messages, view the state 91 | # of the cluster, and perform administrative actions without using 92 | # the native Kafka protocol or clients. 93 | # https://github.com/confluentinc/kafka-rest 94 | kafka-rest-proxy: 95 | image: confluentinc/cp-kafka-rest:4.1.0 96 | hostname: kafka-rest-proxy 97 | ports: 98 | - "8082:8082" 99 | environment: 100 | # Specifies the ZooKeeper connection string. This service connects 101 | # to ZooKeeper so that it can broadcast its endpoints as well as 102 | # react to the dynamic topology of the Kafka cluster. 103 | KAFKA_REST_ZOOKEEPER_CONNECT: zookeeper:2181 104 | # The address on which Kafka REST will listen for API requests. 105 | KAFKA_REST_LISTENERS: http://0.0.0.0:8082/ 106 | # The base URL for Schema Registry that should be used by the Avro serializer. 107 | KAFKA_REST_SCHEMA_REGISTRY_URL: http://schema-registry:8081/ 108 | # Required. This is the hostname used to generate absolute URLs in responses. 109 | # It defaults to the Java canonical hostname for the container, which might 110 | # not be resolvable in a Docker environment. 
111 | KAFKA_REST_HOST_NAME: kafka-rest-proxy 112 | # The list of Kafka brokers to connect to. This is only used for bootstrapping, 113 | # the addresses provided here are used to initially connect to the cluster, 114 | # after which the cluster will dynamically change. Thanks, ZooKeeper! 115 | KAFKA_REST_BOOTSTRAP_SERVERS: kafka:9092 116 | # Kafka REST relies upon Kafka, ZooKeeper, and Schema Registry. 117 | # This will instruct docker to wait until those services are up 118 | # before attempting to start Kafka REST. 119 | depends_on: 120 | - zookeeper 121 | - kafka 122 | - schema-registry 123 | 124 | # Browse Kafka topics and understand what's happening on your cluster. 125 | # Find topics / view topic metadata / browse topic data 126 | # (kafka messages) / view topic configuration / download data. 127 | # https://github.com/Landoop/kafka-topics-ui 128 | kafka-topics-ui: 129 | image: landoop/kafka-topics-ui:0.9.3 130 | hostname: kafka-topics-ui 131 | ports: 132 | - "8000:8000" 133 | environment: 134 | # Required. Instructs the UI where it can find the Kafka REST Proxy. 135 | KAFKA_REST_PROXY_URL: "http://kafka-rest-proxy:8082/" 136 | # This instructs the docker image to use Caddy to proxy traffic to kafka-topics-ui. 137 | PROXY: "true" 138 | # kafka-topics-ui relies upon Kafka REST. 139 | # This will instruct docker to wait until those services are up 140 | # before attempting to start kafka-topics-ui. 141 | depends_on: 142 | - kafka-rest-proxy 143 | 144 | # Kafka Connect, an open source component of Apache Kafka, 145 | # is a framework for connecting Kafka with external systems 146 | # such as databases, key-value stores, search indexes, and file systems. 147 | # https://docs.confluent.io/current/connect/index.html 148 | kafka-connect: 149 | image: confluentinc/cp-kafka-connect:4.1.0 150 | hostname: kafka-connect 151 | ports: 152 | - "8083:8083" 153 | environment: 154 | # Required. 155 | # The list of Kafka brokers to connect to. 
This is only used for bootstrapping, 156 | # the addresses provided here are used to initially connect to the cluster, 157 | # after which the cluster can dynamically change. Thanks, ZooKeeper! 158 | CONNECT_BOOTSTRAP_SERVERS: "kafka:9092" 159 | # Required. A unique string that identifies the Connect cluster group this worker belongs to. 160 | CONNECT_GROUP_ID: compose-connect-group 161 | # Connect will actually use Kafka topics as a datastore for configuration and other data. #meta 162 | # Required. The name of the topic where connector and task configuration data are stored. 163 | CONNECT_CONFIG_STORAGE_TOPIC: docker-connect-configs 164 | # Required. The name of the topic where connector and task configuration offsets are stored. 165 | CONNECT_OFFSET_STORAGE_TOPIC: docker-connect-offsets 166 | # Required. The name of the topic where connector and task configuration status updates are stored. 167 | CONNECT_STATUS_STORAGE_TOPIC: docker-connect-status 168 | # Required. Converter class for key Connect data. This controls the format of the 169 | # data that will be written to Kafka for source connectors or read from Kafka for sink connectors. 170 | CONNECT_KEY_CONVERTER: io.confluent.connect.avro.AvroConverter 171 | # Allows connect to leverage the power of schema registry. Here we define it for key schemas. 172 | CONNECT_KEY_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081' 173 | # Required. Converter class for value Connect data. This controls the format of the 174 | # data that will be written to Kafka for source connectors or read from Kafka for sink connectors. 175 | CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter 176 | # Allows connect to leverage the power of schema registry. Here we define it for value schemas. 177 | CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081' 178 | # Required. Converter class for internal key Connect data that implements the Converter interface. 
179 | CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter" 180 | # Required. Converter class for offset value Connect data that implements the Converter interface. 181 | CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter" 182 | # Required. The hostname that will be given out to other workers to connect to. 183 | CONNECT_REST_ADVERTISED_HOST_NAME: "kafka-connect" 184 | # The next three are required when running in a single-node cluster, as we are. 185 | # We would be able to take the default (of 3) if we had three or more nodes in the cluster. 186 | CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: "1" 187 | CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: "1" 188 | CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: "1" 189 | # kafka-connect relies upon Kafka and ZooKeeper. 190 | # This will instruct docker to wait until those services are up 191 | # before attempting to start kafka-connect. 192 | depends_on: 193 | - zookeeper 194 | - kafka 195 | 196 | # This is a web tool for Kafka Connect for setting up and managing connectors for multiple connect clusters. 197 | # https://github.com/Landoop/kafka-connect-ui 198 | kafka-connect-ui: 199 | image: landoop/kafka-connect-ui:0.9.4 200 | hostname: kafka-connect-ui 201 | # kafka-connect-ui binds to port 8000, but we are going to expose it on our local 202 | # machine on port 8002. 203 | ports: 204 | - "8002:8000" 205 | environment: 206 | # Required. Instructs the UI where it can find Kafka Connect. 207 | CONNECT_URL: "http://kafka-connect:8083/" 208 | # This instructs the docker image to use Caddy to proxy traffic to kafka-connect-ui. 209 | PROXY: "true" 210 | # kafka-connect-ui relies upon Kafka Connect. 211 | # This will instruct docker to wait until those services are up 212 | # before attempting to start kafka-connect-ui. 213 | depends_on: 214 | - kafka-connect 215 | 216 | # API for ZooNavigator, web-based browser & editor for ZooKeeper. 
217 | # https://github.com/elkozmon/zoonavigator-api 218 | zoonavigator-api: 219 | image: elkozmon/zoonavigator-api:0.4.0 220 | environment: 221 | # The port on which the API service will listen for incoming connections. 222 | SERVER_HTTP_PORT: 9000 223 | restart: unless-stopped 224 | # zoonavigator-api relies upon ZooKeeper. 225 | # This will instruct docker to wait until that service is up 226 | # before attempting to start zoonavigator-api. 227 | depends_on: 228 | - zookeeper 229 | 230 | # Web client for ZooNavigator, web-based browser & editor for ZooKeeper. 231 | # https://github.com/elkozmon/zoonavigator-web 232 | zoonavigator-web: 233 | image: elkozmon/zoonavigator-web:0.4.0 234 | # zoonavigator-web binds to port 8000, but we are going to expose it on our local 235 | # machine on port 8003. 236 | ports: 237 | - "8003:8000" 238 | environment: 239 | # The following two keys instruct the web component how to connect to 240 | # the backing api component. 241 | API_HOST: "zoonavigator-api" 242 | API_PORT: 9000 243 | # zoonavigator-web relies upon zoonavigator-api. 244 | # This will instruct docker to wait until that service is up 245 | # before attempting to start zoonavigator-web. 246 | depends_on: 247 | - zoonavigator-api 248 | restart: unless-stopped 249 | 250 | # KSQL is the open source streaming SQL engine for Apache Kafka. 251 | # It provides an easy-to-use yet powerful interactive SQL 252 | # interface for stream processing on Kafka, without the need to write code 253 | # in a programming language such as Java or Python. KSQL is scalable, elastic, 254 | # fault-tolerant, and real-time. It supports a wide range of streaming operations, 255 | # including data filtering, transformations, aggregations, joins, windowing, and sessionization. 256 | # https://docs.confluent.io/current/ksql/docs/ 257 | ksql-server: 258 | image: confluentinc/cp-ksql-server:5.0.0 259 | ports: 260 | - "8088:8088" 261 | environment: 262 | # Required. 
263 | # The list of Kafka brokers to connect to. This is only used for bootstrapping, 264 | # the addresses provided here are used to initially connect to the cluster, 265 | # after which the cluster can dynamically change. Thanks, ZooKeeper! 266 | KSQL_BOOTSTRAP_SERVERS: kafka:9092 267 | # Controls the REST API endpoint for the KSQL server. 268 | KSQL_LISTENERS: http://0.0.0.0:8088 269 | # The Schema Registry URL path to connect KSQL to. 270 | KSQL_KSQL_SCHEMA_REGISTRY_URL: http://schema-registry:8081 271 | # ksql-server relies upon Kafka and Schema Registry. 272 | # This will instruct docker to wait until those services are up 273 | # before attempting to start ksql-server. 274 | depends_on: 275 | - kafka 276 | - schema-registry 277 | -------------------------------------------------------------------------------- /media/.DS_Store: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/.DS_Store -------------------------------------------------------------------------------- /media/Fencing-1917.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/Fencing-1917.jpg -------------------------------------------------------------------------------- /media/Horse-and-Rider-1909-1910-.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/Horse-and-Rider-1909-1910-.jpg -------------------------------------------------------------------------------- /media/Kafka-connect-UI-screenshot.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/Kafka-connect-UI-screenshot.png -------------------------------------------------------------------------------- /media/Kafka-topics-UI-screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/Kafka-topics-UI-screenshot.png -------------------------------------------------------------------------------- /media/Schema-Registry-UI-screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/Schema-Registry-UI-screenshot.png -------------------------------------------------------------------------------- /media/Zoonavigator-screenshot.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/TribalScale/kafka-waffle-stack/c16e8e607933d61e216bdd49b8c1d476348fbbad/media/Zoonavigator-screenshot.png --------------------------------------------------------------------------------