├── .gitignore
├── LICENSE
├── README.md
├── project.clj
├── src
│   └── clafka
│       ├── core.clj
│       ├── pool.clj
│       ├── proto.clj
│       └── sim.clj
└── test
    └── clafka
        ├── core_test.clj
        └── pool_test.clj
/.gitignore:
--------------------------------------------------------------------------------
1 | /target
2 | /classes
3 | /checkouts
4 | pom.xml
5 | pom.xml.asc
6 | *.jar
7 | *.class
8 | /.lein-*
9 | /.nrepl-port
10 | .hgignore
11 | .hg/
12 | .#*
13 | /doc
14 | /.idea
15 | *.iml
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2015, Mix Radio
2 | All rights reserved.
3 | 
4 | Redistribution and use in source and binary forms, with or without
5 | modification, are permitted provided that the following conditions are met:
6 | 
7 | * Redistributions of source code must retain the above copyright notice, this
8 |   list of conditions and the following disclaimer.
9 | 
10 | * Redistributions in binary form must reproduce the above copyright notice,
11 |   this list of conditions and the following disclaimer in the documentation
12 |   and/or other materials provided with the distribution.
13 | 
14 | * Neither the name of the {organization} nor the names of its
15 |   contributors may be used to endorse or promote products derived from
16 |   this software without specific prior written permission.
17 | 
18 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
19 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
20 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
21 | DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
22 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
24 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
25 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
26 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
27 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
28 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # clafka
2 | 
3 | A clojure kafka client focusing on the simple consumer.
4 | 
5 | ![latest clafka version](https://clojars.org/mixradio/clafka/latest-version.svg)
6 | 
7 | ### Concept
8 | 
9 | To provide the simplest possible consumer and producer interfaces for [kafka](http://kafka.apache.org/documentation.html) by exposing the new java producer api and the simple consumer.
10 | 
11 | It is designed to be used as the basis for more sophisticated consumers whose needs are not met
12 | by the default zookeeper consumer in kafka.
13 | 
14 | ### Docs
15 | 
16 | API docs can be found [here](http://danstone.github.io/clafka)
17 | 
18 | ## Usage
19 | 
20 | Include the following in your lein `project.clj` dependencies
21 | 
22 | ```clojure
23 | [mixradio/clafka "0.2.3"]
24 | ```
25 | 
26 | Most functions are found in `clafka.core`
27 | 
28 | ```clojure
29 | (require '[clafka.core :refer :all])
30 | ```
31 | ### Clients
32 | 
33 | The `IBrokerClient` protocol is the core abstraction of clafka.
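For reference, the whole protocol (from `src/clafka/proto.clj`, shown in full later in this repo) is just three methods:

```clojure
(defprotocol IBrokerClient
  (-fetch [this topic partition offset size])
  (-find-topic-metadata [this topics])
  (-find-offsets [this m time]))
```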
34 | 
35 | You will see functions that accept an `IBrokerClient` use the name `client`, as opposed to low-level fns
36 | that may require a `consumer`. Most functions require a `client`.
37 | 
38 | The kafka `SimpleConsumer` is one implementation of `IBrokerClient`. A pooled implementation is also provided.
39 | 
40 | #### SimpleConsumer Client
41 | I have provided a wrapper for the simple consumer; a good zookeeper consumer API can be found in [clj-kafka](http://github.com/pingles/clj-kafka).
42 | 
43 | First create a `SimpleConsumer` instance with `consumer`
44 | 
45 | ```clojure
46 | (def c (consumer "localhost" 9092))
47 | ;;close a consumer with
48 | (.close c)
49 | ```
50 | 
51 | The `SimpleConsumer` talks to a single broker, and can only receive data for partitions for which that
52 | broker is the leader.
53 | 
54 | 
55 | #### Pooled Client
56 | 
57 | You can use a pooled client in order to:
58 | - Load balance requests over many client/consumer instances
59 | - Support fetching data without worrying about which broker leads a partition.
60 | 
61 | Find the pooled client in `clafka.pool`.
62 | 
63 | ```clojure
64 | (require '[clafka.pool :refer [pool]])
65 | (require '[clafka.proto :refer [shutdown!]])
66 | 
67 | ;; p will balance requests over 2 clients for each listed broker for a total of 4 clients.
68 | (def p (pool [{:host "localhost" :port 9092} {:host "localhost", :port 9093}] 2))
69 | ;; you can use all the regular clafka functions with the pooled client.
70 | (log-seq p "my-topic" 0 0)
71 | ;; =>
72 | ({:message #<byte[] ...>
73 |   :offset 0
74 |   :next-offset 1}, ...)
75 | 
76 | ;;close the pooled client with shutdown!; this will attempt to close all underlying clients.
77 | (shutdown! p)
78 | ```
79 | 
80 | By default the pool will create `SimpleConsumer` instances for each listed broker, but you can override
81 | this behaviour by providing a `:factory` as part of the `pool` config map. See the docstring for more details.
82 | 
83 | Be warned: you can still get `NotLeaderForPartitionException`s if a leader changes as you are making a request,
84 | though this should be rare. You should expect subsequent requests to reflect the new leader, so simply retry in these cases.
85 | 
86 | #### Finding a leader
87 | 
88 | You can ask any broker which broker is the leader for a partition in your cluster via
89 | `find-leader` or `find-leaders`
90 | 
91 | ```clojure
92 | (find-leader c "my-topic" 0)
93 | ;;=>
94 | {:host "localhost" :port 9092 :id 0}
95 | ```
96 | 
97 | #### Fetching
98 | You can fetch some data from a log with `fetch`
99 | 
100 | ```clojure
101 | ;;fetches 1024 bytes of data from topic "my-topic" partition 0, offset 0.
102 | (fetch c "my-topic" 0 0 1024)
103 | ;; =>
104 | {:messages [{:message #<byte[] ...>
105 |              :offset 0
106 |              :next-offset 1}]
107 |  :total-bytes 1024
108 |  :valid-bytes 124
109 |  :error nil
110 |  :error-code 0}
111 | ```
112 | `fetch` will throw exceptions for the various
113 | kafka error codes given by the `ErrorMapping` class in kafka. If you do not want this and would rather check error codes manually, you can do so with the underlying `-fetch` fn.
114 | 
115 | So for example, if the leader for a partition changes, you should expect `fetch` to throw a
116 | `kafka.common.NotLeaderForPartitionException`.
117 | 
118 | At which point, if you wanted to continue fetching from that partition, you would have to construct a
119 | new `consumer`.
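For illustration, a minimal recovery sketch (the `fetch-with-retry` helper and its return shape are illustrative, not part of clafka):

```clojure
;; on a leadership change, ask the broker for the new leader,
;; build a consumer against it and retry the fetch once
(defn fetch-with-retry
  [c topic partition offset size]
  (try
    [c (fetch c topic partition offset size)]
    (catch kafka.common.NotLeaderForPartitionException _
      (let [{:keys [host port]} (find-leader c topic partition)
            c2 (consumer host port)]
        (.close c)
        [c2 (fetch c2 topic partition offset size)]))))
```

It returns the (possibly new) consumer alongside the fetch result so the caller can keep using the right connection.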
120 | 
121 | #### Fetching a seq
122 | 
123 | You can fetch a lazy seq of the entire log until the current head with `fetch-log`
124 | ```clojure
125 | ;;fetches a seq of messages lazily from the log in blocks of *default-fetch-size*
126 | ;;from topic "my-topic" partition 0 offset 0
127 | (fetch-log c "my-topic" 0 0)
128 | ;;=>
129 | ({:message #<byte[] ...>
130 |   :offset 0
131 |   :next-offset 1}, ...)
132 | 
133 | ;;you can also manually specify the fetch size to use... (here we say 1024 bytes)
134 | (fetch-log c "my-topic" 0 0 1024)
135 | ```
136 | 
137 | You can also return an infinite seq of messages with `log-seq`. This sequence does not
138 | terminate when the log is exhausted; rather, it enters a polling mode allowing you to block on new messages being added to the log over time.
139 | 
140 | ```clojure
141 | ;;will use the default size and poll-ms parameters (512KB and 1 second)
142 | (log-seq c "my-topic" 0 0)
143 | ;;=>
144 | ({:message #<byte[] ...>
145 |   :offset 0
146 |   :next-offset 1}, ...)
147 | 
148 | ;;you can manually specify the size and poll-ms through a configuration map
149 | (log-seq c "my-topic" 0 0 {:size 1024, :poll-ms 2000})
150 | ```
151 | 
152 | `fetch-log` and `log-seq` will skip messages that are too large to be fetched with a single fetch,
153 | so tune the fetch size carefully. The default fetch size is 512KB, which should be plenty
154 | for most use cases.
155 | 
156 | ### Producer
157 | 
158 | You can produce messages using the `KafkaProducer` API.
159 | 
160 | Create a producer using configuration as specified: [docs](http://kafka.apache.org/documentation.html#newproducerconfigs)
161 | ```clojure
162 | (def p (producer {"bootstrap.servers" "localhost:9092,localhost:9093"}))
163 | 
164 | ;;close a producer with
165 | (.close p)
166 | ```
167 | **NB** - The config options are specified in the properties style, so always use strings!
168 | 
169 | Then publish a message using `publish!`
170 | 
171 | ```clojure
172 | (publish! p "my-topic" (.getBytes "some-key") (.getBytes "hello world!"))
173 | 
174 | ;;publish! returns a delay that returns some metadata about the publish
175 | @*1
176 | ;;=>
177 | {:topic "my-topic", :offset 0, :partition 0}
178 | 
179 | ```
180 | 
181 | By default `publish!` will take byte arrays for the key and value. If you want, you can use the `KafkaProducer` serialization mechanism by specifying a pair of either functions or `Serializer` instances when you create the producer.
182 | 
183 | ```clojure
184 | ;;using a pair of functions, the first for the key and the second for the value
185 | (def p2 (producer {"bootstrap.servers" "localhost:9092,localhost:9093"}
186 |                   (fn [topic v] (.getBytes v))
187 |                   (fn [topic v] (.getBytes v))))
188 | 
189 | ;;using just a single function for both the key and the value
190 | (def p3 (producer {"bootstrap.servers" "localhost:9092,localhost:9093"}
191 |                   (fn [topic v] (.getBytes v))))
192 | ```
193 | 
194 | ### Other features
195 | 
196 | - You can utilise broker acknowledgment on publish using `publish-ack!`
197 | - You can make requests for offsets at given times using `offsets`, `offset-at`, `earliest-offset` and `latest-offset`
198 | 
199 | ### Contributing
200 | 
201 | PRs welcome!
202 | 
203 | Low hanging fruit:
204 | - There are few type hints!
205 | - Anything that I have missed feature-wise relating to the SimpleConsumer and KafkaProducer.
206 | - More tests would be good 207 | 208 | ## License 209 | 210 | [clafka is released under the 3-clause license ("New BSD License" or "Modified BSD License").](http://github.com/danstone/clafka/blob/master/LICENSE) 211 | 212 | -------------------------------------------------------------------------------- /project.clj: -------------------------------------------------------------------------------- 1 | (defproject mixradio/clafka "0.2.4-SNAPSHOT" 2 | :description "The simplest possible way to read and produce messages for kafka" 3 | :url "http://github.com/mixradio/clafka" 4 | :license "https://github.com/mixradio/clafka/blob/master/LICENSE" 5 | :dependencies [[org.clojure/clojure "1.6.0"] 6 | [org.apache.kafka/kafka_2.9.2 "0.8.2.1" 7 | :exclusions [[com.sun.jmx/jmxri] 8 | [com.sun.jdmk/jmxtools]]]] 9 | 10 | :profiles {:dev {:plugins [[codox "0.8.11"]] 11 | :codox {:src-dir-uri "http://github.com/mixradio/clafka/blob/0.2.3/" 12 | :src-linenum-anchor-prefix "L" 13 | :defaults {:doc/format :markdown}}}}) 14 | -------------------------------------------------------------------------------- /src/clafka/core.clj: -------------------------------------------------------------------------------- 1 | (ns clafka.core 2 | "Contains a clojure interface for the Producer and SimpleConsumer api's" 3 | (:import [kafka.api FetchRequest FetchRequestBuilder OffsetRequest PartitionOffsetRequestInfo] 4 | [kafka.javaapi TopicMetadataRequest] 5 | [kafka.javaapi.consumer SimpleConsumer] 6 | [kafka.common ErrorMapping TopicAndPartition] 7 | [org.apache.kafka.common.serialization Serializer] 8 | [org.apache.kafka.clients.producer KafkaProducer Producer 9 | ProducerRecord RecordMetadata 10 | Callback]) 11 | (:require [clafka.proto :refer :all])) 12 | 13 | (defn ^Serializer kafka-serializer 14 | [serializer] 15 | (if (ifn? serializer) 16 | (reify Serializer 17 | (serialize [this topic v] (serializer topic v)) 18 | (close [this])) 19 | serializer)) 20 | 21 | (defmulti serialize-default (fn [topic v] topic)) 22 | 23 | (defmethod serialize-default :default 24 | [topic v] 25 | v) 26 | 27 | (defn producer 28 | "Pass in a config map for the producer [docs](http://kafka.apache.org/documentation.html#newproducerconfigs) 29 | and a serializer that takes the topic name and a value, and is expected to return a byte array. 30 | 31 | n.b `serialize-default` can be overridden per topic to specify global serialization behaviour" 32 | ([config] 33 | (producer config serialize-default)) 34 | ([config serializer] 35 | (producer config serializer serializer)) 36 | ([config key-serializer value-serializer] 37 | (KafkaProducer. 38 | ^java.util.Map config 39 | (kafka-serializer key-serializer) 40 | (kafka-serializer value-serializer)))) 41 | 42 | (defn producer-record 43 | "Creates a keyed message for the given topic" 44 | ([topic v] 45 | (ProducerRecord. topic v)) 46 | ([topic k v] 47 | (ProducerRecord. topic k v)) 48 | ([topic partition k v] 49 | (ProducerRecord. topic (int partition) k v))) 50 | 51 | (defn record-metadata->map 52 | [^RecordMetadata rm] 53 | {:topic (.topic rm) 54 | :offset (.offset rm) 55 | :partition (.partition rm)}) 56 | 57 | (defn publish! 58 | "Sends a message asynchronously via 'producer', returns a delay that will contain metadata 59 | about what has been sent. The key and value ought to be compatible with the 60 | serializer used by the producer." 
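  ;; Usage sketch (values are illustrative), assuming a producer built with `producer` above:
  ;;   (def p (producer {"bootstrap.servers" "localhost:9092"}))
  ;;   @(publish! p "my-topic" (.getBytes "some-key") (.getBytes "hello world!"))
  ;;   ;;=> {:topic "my-topic", :offset 0, :partition 0}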
61 | ([producer producer-record] 62 | (let [fut (.send ^Producer producer producer-record)] 63 | (delay (record-metadata->map @fut)))) 64 | ([producer topic v] 65 | (publish! producer (producer-record topic v))) 66 | ([producer topic k v] 67 | (publish! producer (producer-record topic k v))) 68 | ([producer topic partition k v] 69 | (publish! producer (producer-record topic partition k v)))) 70 | 71 | (defn ^Callback fn->callback 72 | [f] 73 | (reify Callback 74 | (onCompletion [this rm exc] 75 | (f (when rm (record-metadata->map rm)) 76 | exc)))) 77 | 78 | (defn publish-ack! 79 | "Sends a message asynchronously via 'producer', returns a delay that will contain metadata 80 | about what has been sent. Accepts a function `f` that will be called when the broker has acknowledged 81 | receipt of the message. 82 | The precise acknowledgment semantics will depend on your producer's `acks` setting." 83 | ([producer producer-record f] 84 | (let [fut (.send ^Producer producer producer-record (fn->callback f))] 85 | (delay (record-metadata->map @fut)))) 86 | ([producer topic v f] 87 | (publish-ack! producer (producer-record topic v) f)) 88 | ([producer topic k v f] 89 | (publish-ack! producer (producer-record topic k v) f)) 90 | ([producer topic partition k v f] 91 | (publish-ack! producer (producer-record topic partition k v) f))) 92 | 93 | (defn node->map 94 | [^org.apache.kafka.common.Node n] 95 | (when n 96 | {:id (.id n) 97 | :host (.host n) 98 | :port (.port n)})) 99 | 100 | (defn partition-info->map 101 | [^org.apache.kafka.common.PartitionInfo pi] 102 | (when pi 103 | {:topic (.topic pi) 104 | :partition (.partition pi) 105 | :leader (node->map (.leader pi)) 106 | :replicas (keep node->map (.replicas pi)) 107 | :isr (keep node->map (.inSyncReplicas pi))})) 108 | 109 | (defn partitions-for 110 | "Returns a seq of partition information for the given topic via the producer." 111 | [^Producer producer topic] 112 | (keep partition-info->map (.partitionsFor producer topic))) 113 | 114 | (def ^:dynamic *default-socket-timeout* (* 30 1000)) 115 | 116 | (def ^:dynamic *default-buffer-size* (* 512 1024)) 117 | 118 | (defn consumer 119 | "Creates a SimpleConsumer instance in order to 120 | 1. Query topic metadata with `topic-metadata-request` 121 | 2. Query leadership status with `find-leader` and `find-leaders` 122 | 3. Fetch data with `fetch`, `fetch-log` and `log-seq` 123 | 4. Make offset metadata requests with `offsets` or `offset-at` 124 | 125 | The consumer maintains a tcp connection internally and can be closed with 126 | `.close` - be warned, there appears to be a bug in kafka 0.8.2 whereby this 127 | connection is re-opened if you use the consumer again, subsequent .close calls should close it again. 128 | 129 | The connection is automatically reconnected if the socket is closed/timed out 130 | however, while it is unavailable some operations will return ClosedChannelExceptions" 131 | ([host port] 132 | (consumer host port (str "consumer-" (java.util.UUID/randomUUID)))) 133 | ([host port client-id] 134 | (consumer host port client-id *default-socket-timeout* *default-buffer-size*)) 135 | ([host port client-id socket-timeout buffer-size] 136 | (SimpleConsumer. 
host port socket-timeout buffer-size client-id))) 137 | 138 | (def earliest-time 139 | "A constant that can be used as a `time` value 140 | for offset requests via `offsets` representing the 141 | beginning of the log" 142 | (OffsetRequest/EarliestTime)) 143 | 144 | (def latest-time 145 | "A constant that can be used as a `time` value 146 | for offset requests via `offsets` representing the 147 | head of the log" 148 | (OffsetRequest/LatestTime)) 149 | 150 | (def error-code->kw 151 | "Maps Kafka error codes to keywords" 152 | {(ErrorMapping/BrokerNotAvailableCode) :broker-unavailable 153 | (ErrorMapping/ConsumerCoordinatorNotAvailableCode) :consumer-coordinator-unavailable 154 | (ErrorMapping/InvalidFetchSizeCode) :invalid-fetch-size 155 | (ErrorMapping/InvalidMessageCode) :invalid-message 156 | (ErrorMapping/InvalidTopicCode) :invalid-topic 157 | (ErrorMapping/LeaderNotAvailableCode) :leader-unavailable 158 | (ErrorMapping/MessageSetSizeTooLargeCode) :message-set-too-large 159 | (ErrorMapping/MessageSizeTooLargeCode) :message-too-large 160 | (ErrorMapping/NoError) nil 161 | (ErrorMapping/NotCoordinatorForConsumerCode) :not-coordinator-for-consumer 162 | (ErrorMapping/NotEnoughReplicasAfterAppendCode) :not-enough-replicas-after-append 163 | (ErrorMapping/NotEnoughReplicasCode) :not-enough-replicas 164 | (ErrorMapping/NotLeaderForPartitionCode) :not-leader-for-partition 165 | (ErrorMapping/OffsetMetadataTooLargeCode) :offset-metadata-too-large 166 | (ErrorMapping/OffsetOutOfRangeCode) :offset-out-of-range 167 | (ErrorMapping/OffsetsLoadInProgressCode) :offsets-load-in-progress 168 | (ErrorMapping/ReplicaNotAvailableCode) :replica-unavailable 169 | (ErrorMapping/RequestTimedOutCode) :request-timed-out 170 | (ErrorMapping/StaleControllerEpochCode) :stale-controller-epoch 171 | (ErrorMapping/StaleLeaderEpochCode) :stale-leader-epoch 172 | (ErrorMapping/UnknownCode) :unknown 173 | (ErrorMapping/UnknownTopicOrPartitionCode) :unknown-topic-or-partition}) 174 | 175 | (defn broker->map 176 | [^kafka.cluster.Broker broker] 177 | (when broker 178 | {:host (.host broker) 179 | :port (.port broker) 180 | :id (.id broker)})) 181 | 182 | (defn partition-metadata->map 183 | [^kafka.javaapi.PartitionMetadata partition-metadata] 184 | (when partition-metadata 185 | {:partition-id (.partitionId partition-metadata) 186 | :isr (keep broker->map (.isr partition-metadata)) 187 | :leader (broker->map (.leader partition-metadata)) 188 | :replicas (keep broker->map (.replicas partition-metadata))})) 189 | 190 | (defn topic-metadata->map 191 | [^kafka.javaapi.TopicMetadata topic-metadata] 192 | (when topic-metadata 193 | {:topic (.topic topic-metadata) 194 | :partitions (keep partition-metadata->map (.partitionsMetadata topic-metadata)) 195 | :error-code (.errorCode topic-metadata) 196 | :error (error-code->kw (.errorCode topic-metadata)) 197 | :size (.sizeInBytes topic-metadata)})) 198 | 199 | (defn topic-metadata-request 200 | [^SimpleConsumer consumer topics] 201 | (->> (.send consumer (TopicMetadataRequest. ^java.util.List topics)) 202 | .topicsMetadata 203 | (keep topic-metadata->map))) 204 | 205 | (defn offset-request 206 | [m time client-id] 207 | (let [r (into {} (for [[k v] m 208 | v v] 209 | [(TopicAndPartition. k v) 210 | (PartitionOffsetRequestInfo. time 1)]))] 211 | (kafka.javaapi.OffsetRequest. 
     r (OffsetRequest/CurrentVersion) client-id)))
212 | 
213 | (defn block
214 |   "Returns a map describing a block of data in a kafka log"
215 |   [topic partition offset size]
216 |   {:topic topic
217 |    :partition partition
218 |    :offset offset
219 |    :size size})
220 | 
221 | (defn message-and-offset->map
222 |   [^kafka.message.MessageAndOffset mao]
223 |   (when mao
224 |     {:message (when-let [m (.message mao)]
225 |                 (let [payload (.payload m)
226 |                       bytes (byte-array (.limit payload))]
227 |                   (.get payload bytes)
228 |                   bytes))
229 |      :next-offset (.nextOffset mao)
230 |      :offset (.offset mao)}))
231 | 
232 | (defn fetch-response->map
233 |   [^kafka.javaapi.FetchResponse fetch-response blocks]
234 |   (when fetch-response
235 |     {:has-error? (.hasError fetch-response)
236 |      :data (into {} (for [{:keys [topic partition]} blocks]
237 |                       [{:topic topic
238 |                         :partition partition}
239 |                        (let [ec (.errorCode fetch-response topic partition)
240 |                              ^kafka.javaapi.message.ByteBufferMessageSet
241 |                              ms (.messageSet fetch-response topic partition)]
242 |                          {:error (error-code->kw ec ec) ;; map lookup: unknown codes fall through as themselves, NoError -> nil
243 |                           :error-code ec
244 |                           :total-bytes (when ms (.sizeInBytes ms))
245 |                           :valid-bytes (when ms (.validBytes ms))
246 |                           :messages
247 |                           (keep message-and-offset->map ms)})]))}))
248 | 
249 | (defn- add-fetch
250 |   [^FetchRequestBuilder fb {:keys [topic partition offset size]}]
251 |   (.addFetch fb topic partition offset size))
252 | 
253 | (defn fetch-request
254 |   "Requests the consumer to fetch blocks of data from kafka.
255 |   The blocks are described by maps in the `blocks` collection;
256 |   each block is simply a map of :topic, :partition, :offset and :size."
257 |   [^SimpleConsumer consumer blocks]
258 |   (let [fb (FetchRequestBuilder.)
259 |         ^FetchRequestBuilder fb (reduce add-fetch fb blocks)
260 |         fr (.build fb)]
261 |     (-> (.fetch consumer ^FetchRequest fr)
262 |         (fetch-response->map blocks))))
263 | 
264 | (extend-type SimpleConsumer
265 | 
266 |   ICloseable
267 |   (shutdown! [this]
268 |     (.close this))
269 | 
270 |   IBrokerClient
271 |   (-fetch [this topic partition offset size]
272 |     (let [r (fetch-request this [(block topic partition offset size)])
273 |           data (-> r :data (get {:topic topic :partition partition}))]
274 |       data))
275 |   (-find-topic-metadata [this topics]
276 |     (topic-metadata-request this topics))
277 |   (-find-offsets [this m time]
278 |     (let [r (offset-request m time (.clientId this))
279 |           result (.getOffsetsBefore this r)]
280 |       (into {}
281 |             (for [[k v] m]
282 |               [k (mapv (fn [v] {:offset (first (.offsets result k v))
283 |                                 :partition v
284 |                                 :error-code (.errorCode result k v)
285 |                                 :error (error-code->kw (.errorCode result k v))})
286 |                        v)])))))
287 | 
288 | (defn find-topic-metadata
289 |   "Requests metadata about the topics supplied. Returns a seq containing
290 |   one map for each topic supplied."
291 |   [client topics]
292 |   (for [topic (-find-topic-metadata client topics)]
293 |     (if (:error topic)
294 |       (throw (ErrorMapping/exceptionFor (:error-code topic)))
295 |       topic)))
296 | 
297 | (defn find-leaders
298 |   "Finds partition leaders for the given topics."
299 |   [client topics]
300 |   (map #(update-in % [:partitions] (partial map (juxt :partition-id :leader)))
301 |        (find-topic-metadata client topics)))
302 | 
303 | (defn find-partitions
304 |   "Finds partition metadata for the given topic via the client."
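  ;; Illustrative REPL output (shape follows partition-metadata->map above):
  ;;   (find-partitions c "my-topic")
  ;;   ;;=> ({:partition-id 0, :isr (...), :leader {:host "localhost", :port 9092, :id 0}, :replicas (...)})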
305 |   [client topic]
306 |   (->> (find-topic-metadata client [topic])
307 |        first
308 |        :partitions))
309 | 
310 | (defn find-leader
311 |   "Finds the leader for the given topic partition"
312 |   [client topic partition]
313 |   (first
314 |    (for [{:keys [partitions]} (find-leaders client [topic])
315 |          [id leader] partitions
316 |          :when (= (str id) (str partition))]
317 |      leader)))
318 | 
319 | (defn offsets
320 |   "Makes an offset request to determine the offset at `time` in the log.
321 |   time is a long value or one of the 2 special constants `earliest-time` or
322 |   `latest-time`. m is a map of {topic [partition]}; you will receive back a map
323 |   of {topic [{:offset, :partition}]}"
324 |   [client m time]
325 |   (let [r (-find-offsets client m time)]
326 |     (doseq [[_ vs] r, v vs] ;; check each per-partition map for errors
327 |       (when (:error v)
328 |         (throw (ErrorMapping/exceptionFor (:error-code v)))))
329 |     (into {}
330 |           (for [[k v] r]
331 |             [k (mapv #(dissoc % :error :error-code) v)]))))
332 | 
333 | (defn offset-at
334 |   "Makes a request to find the offset for time `time` in the log
335 |   given by the `topic` and `partition` - you can use the 2 special constants
336 |   `earliest-time` and `latest-time` to ask for the earliest or latest offset"
337 |   [client topic partition time]
338 |   (let [r (offsets client {topic [partition]} time)]
339 |     (-> r (get topic) first :offset)))
340 | 
341 | (defn earliest-offset
342 |   "Finds the earliest offset in the log"
343 |   [client topic partition]
344 |   (offset-at client topic partition earliest-time))
345 | 
346 | (defn latest-offset
347 |   "Finds the latest offset in the log"
348 |   [client topic partition]
349 |   (offset-at client topic partition latest-time))
350 | 
351 | (def ^:dynamic *default-fetch-size* (* 1024 512))
352 | 
353 | (defn fetch
354 |   "Fetches a single block of data from kafka, throws exceptions for broker errors"
355 |   ([client topic partition offset]
356 |    (fetch client topic partition offset *default-fetch-size*))
357 |   ([client topic partition offset size]
358 |    (let [partition (Long/valueOf (str partition))
359 |          data (-fetch client topic partition offset size)]
360 |      (if (:error data)
361 |        (throw (ErrorMapping/exceptionFor (:error-code data)))
362 |        data))))
363 | 
364 | (defn log-head?
365 |   "Are we at the log head?"
366 |   [block]
367 |   (and (not (:error block))
368 |        (empty? (:messages block))
369 |        (= 0
370 |           (:total-bytes block)
371 |           (:valid-bytes block))))
372 | 
373 | (defn next-block-offset
374 |   "Finds the offset to start the next fetch from, via the block's last message (nil if the block has no messages)"
375 |   [block]
376 |   (:next-offset (last (:messages block))))
377 | 
378 | (defn fetch-log
379 |   "Fetch the entire log from the given offset,
380 |   lazily fetches in batches of `size` bytes (default 512KB).
381 |   Returns a seq of messages.
382 |   Messages greater in size than `size` in total will be omitted."
383 |   ([client topic partition offset]
384 |    (fetch-log client topic partition offset *default-fetch-size*))
385 |   ([client topic partition offset size]
386 |    (lazy-seq
387 |     (let [data (fetch client topic partition offset size)]
388 |       (concat (:messages data)
389 |               (if (log-head? data)
390 |                 nil
391 |                 (fetch-log client topic partition
392 |                            (or (next-block-offset data)
393 |                                (inc offset))
394 |                            size)))))))
395 | 
396 | (defn poll-from-offset
397 |   "Fetches a block of data from the kafka log, unless
398 |   the offset is at the head of the log, in which case it polls every `poll-ms`
399 |   milliseconds until a block can be received."
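  ;; Descriptive note: `log-head?` identifies an empty block (no messages,
  ;; zero total/valid bytes); in that case we sleep poll-ms and retry the
  ;; same offset via recur, otherwise the block is returned to the caller.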
400 | [client topic partition offset size poll-ms] 401 | (let [data (fetch client topic partition offset size)] 402 | (if (log-head? data) 403 | (do (Thread/sleep poll-ms) 404 | (recur client topic partition offset size poll-ms)) 405 | data))) 406 | 407 | (def ^:dynamic *default-poll-ms* 1000) 408 | 409 | (defn log-seq 410 | "Returns an infinite seq of messages from offset, requests blocks of data 411 | from kafka in blocks of `:size` (512KB by default). 412 | Messages greater in size than `:size` in total will be omitted. 413 | Once the log has been exhausted will enter a polling mode whereby it tries to retrieve new 414 | blocks every `:poll-ms` milliseconds (1000 by default)." 415 | ([client topic partition offset] 416 | (log-seq client topic partition offset {})) 417 | ([client topic partition offset {:keys [size poll-ms] 418 | :or {size *default-fetch-size* 419 | poll-ms *default-poll-ms*} 420 | :as opts}] 421 | (lazy-seq 422 | (let [data (poll-from-offset client topic partition offset size poll-ms) 423 | messages (:messages data)] 424 | (concat messages 425 | (log-seq client topic partition 426 | (or (next-block-offset data) 427 | (inc offset)) 428 | opts)))))) 429 | -------------------------------------------------------------------------------- /src/clafka/pool.clj: -------------------------------------------------------------------------------- 1 | (ns clafka.pool 2 | "Contains a pooled IBrokerClient which gives up some performance 3 | for greater concurrency" 4 | (:require [clafka.proto :refer :all] 5 | [clafka.core :as clafka]) 6 | (:import [java.util.concurrent LinkedBlockingQueue TimeUnit])) 7 | 8 | 9 | 10 | (defn dequeue-loop 11 | [broker-client queue {:keys [wait-time back-off-time]}] 12 | (let [stop? (atom false) 13 | loop 14 | (future 15 | (loop [] 16 | (when-not @stop? 17 | (when-let [[[f & args] r] (.poll queue wait-time TimeUnit/MILLISECONDS)] 18 | (do 19 | (deliver r (try 20 | (apply f broker-client args) 21 | (catch Throwable e e))) 22 | (when (instance? Throwable @r) 23 | (Thread/sleep back-off-time)))) 24 | (recur))))] 25 | (reify ICloseable 26 | (shutdown! [this] 27 | (reset! stop? true) 28 | @loop)))) 29 | 30 | (def ^:dynamic *req-timeout* (* 1000 15)) 31 | 32 | (defn make-req 33 | [queue f & args] 34 | (let [r (promise)] 35 | (.put queue [(cons f args) r]) 36 | (let [x (deref r *req-timeout* ::error)] 37 | (cond 38 | (instance? Throwable x) (throw x) 39 | (= x ::error) (throw (Exception. "Could not receive a result from underlying client")) 40 | :else x)))) 41 | 42 | (defn make-req-retry 43 | [queue n f & args] 44 | (when (pos? n) 45 | (trampoline (fn ! [n] 46 | (try 47 | (apply make-req queue f args) 48 | (catch InterruptedException e 49 | (throw e)) 50 | (catch Throwable e 51 | (if (= n 0) 52 | (throw e) 53 | (fn [] (! (dec n))))))) 54 | n))) 55 | 56 | (defn with-leader 57 | [pool topic partition f] 58 | (if-let [{:keys [m meta-queue]} pool] 59 | (f (make-req-retry meta-queue (count (mapcat identity (vals m))) 60 | clafka/find-leader topic partition)) 61 | (throw (Exception. (format "No leader could be found for %s %s" topic partition))))) 62 | 63 | (defn with-leader-client 64 | [pool topic partition f & args] 65 | (with-leader 66 | pool topic partition 67 | (fn [leader] 68 | (if-let [cls (seq (get (:m pool) (select-keys leader [:host :port])))] 69 | (apply make-req-retry 70 | (:queue (first cls)) 71 | (count cls) 72 | f args) 73 | (throw (Exception. 
(str "No clients could be found for broker: " leader))))))) 74 | 75 | (defrecord PooledClient [meta-queue shutdown? m] 76 | IBrokerClient 77 | (-fetch [this topic partition offset size] 78 | (when @shutdown? 79 | (throw (Exception. "PooledClient is closed"))) 80 | (with-leader-client 81 | this topic partition 82 | -fetch topic partition offset size)) 83 | (-find-topic-metadata [this topics] 84 | (when @shutdown? 85 | (throw (Exception. "PooledClient is closed"))) 86 | (make-req-retry 87 | meta-queue (count (mapcat identity (vals m))) 88 | -find-topic-metadata topics)) 89 | (-find-offsets [this m time] 90 | (when @shutdown? 91 | (throw (Exception. "PooledClient is closed"))) 92 | (->> (for [[topic partitions] m 93 | partition partitions] 94 | (with-leader-client 95 | this topic partition 96 | -find-offsets {topic [partition]} time)) 97 | (apply merge-with #(reduce conj %1 %2)))) 98 | ICloseable 99 | (shutdown! [this] 100 | (reset! shutdown? true) 101 | (doseq [{:keys [loop client]} (mapcat identity (vals m))] 102 | (try-shutdown! loop) 103 | (shutdown! client)))) 104 | 105 | (def default-config 106 | "The default configuration used for `pool`" 107 | {:wait-time 1000 108 | :back-off-time (* 5 2000) 109 | :factory clafka/consumer}) 110 | 111 | (defn pool 112 | "Creates a pooled client allowing `n` clients per broker. 113 | 114 | Brokers should be collection of maps of the form `{:host host-name, :port port-number}`. 115 | 116 | Re-routes requests requiring leadership to the correct brokers and load balances requests 117 | over the pool of connections. 118 | 119 | Some naive back-off is applied as connections fail so that they should be less likely 120 | to handle subsequent requests for a period of time. 121 | 122 | It is expected that the underlying IBrokerClient's are able to re-establish their own connections, this 123 | is true of the `SimpleConsumer`. 124 | 125 | If a leader changes while still making a request you can still get a `NotLeaderForPartitionException`. 126 | 127 | A fn `:factory` taking host, port as args can be specified in order to create custom IBrokerClients. 128 | Such clients should also be `ICloseable`. (`SimpleConsumer` instances are already `ICloseable`) 129 | 130 | Load balancing is achieved through allocating worker threads for each client. 131 | The configuration supports a `wait-time` which is the poll time on the work queue and a `back-off-time` 132 | which is the time a worker should wait after an error, in hope that it recovers the next time 133 | it picks up some work. 134 | 135 | Shutdown the pool with `clafka.proto/shutdown!`" 136 | ([brokers n] 137 | (pool brokers n {})) 138 | ([brokers n config] 139 | (let [config (merge default-config 140 | config) 141 | factory (:factory config) 142 | meta-queue (LinkedBlockingQueue. 1) 143 | loops (for [broker brokers 144 | :let [queue (LinkedBlockingQueue. 1) 145 | client-factory #(factory (:host broker) (:port broker))] 146 | client (repeatedly n client-factory)] 147 | {:broker broker 148 | :queue queue 149 | :client client 150 | :loop (dequeue-loop client queue config) 151 | :meta-loop (dequeue-loop client meta-queue config)})] 152 | (map->PooledClient 153 | {:meta-queue meta-queue 154 | :shutdown? 
(atom false) 155 | :m (group-by #(select-keys (:broker %) [:host :port]) loops)})))) 156 | -------------------------------------------------------------------------------- /src/clafka/proto.clj: -------------------------------------------------------------------------------- 1 | (ns clafka.proto) 2 | 3 | (defprotocol IBrokerClient 4 | (-fetch [this topic partition offset size]) 5 | (-find-topic-metadata [this topics]) 6 | (-find-offsets [this m time])) 7 | 8 | (defprotocol ICloseable 9 | (shutdown! [this] "closes the resource")) 10 | 11 | (extend-protocol ICloseable 12 | Object 13 | (shutdown! [this] 14 | (try 15 | (.close this) 16 | (catch IllegalArgumentException e 17 | (throw (Exception. "No .close method, please implement ICloseable manually")))))) 18 | 19 | (defn try-shutdown! 20 | [x] 21 | (try 22 | (shutdown! x) 23 | (catch Throwable e 24 | nil))) 25 | -------------------------------------------------------------------------------- /src/clafka/sim.clj: -------------------------------------------------------------------------------- 1 | (ns clafka.sim 2 | "Contains a simulated IBrokerClient for use in tests and at the repl" 3 | (:require [clafka.core :as clafka] 4 | [clafka.proto :refer :all])) 5 | 6 | (def head-block 7 | {:messages [] 8 | :total-bytes 0 9 | :valid-bytes 0}) 10 | 11 | (defn message 12 | ([] 13 | {:sim/bytes 1024}) 14 | ([payload] 15 | {:sim/bytes (count payload) 16 | :message payload})) 17 | 18 | (defn messages 19 | ([n] 20 | (repeat n (message)))) 21 | 22 | (defrecord SimulatedClient [state] 23 | IBrokerClient 24 | (-find-topic-metadata [this topics] 25 | (mapv 26 | #(get-in @state [:topic-metadata %]) 27 | topics)) 28 | (-find-offsets [this m time] 29 | (for [[topic partitions] m] 30 | [topic 31 | (->> partitions 32 | (mapv 33 | #(get-in @state [:offsets topic %])))])) 34 | (-fetch [this topic partition offset size] 35 | (or 36 | (when-not (= (:broker @state) (clafka/find-leader this topic partition)) 37 | {:error :not-leader-for-partition 38 | :error-code 6}) 39 | (when-let [log (get-in @state [:log topic partition])] 40 | (let [first-offset (:offset (first log)) 41 | last-offset (:next-offset (peek log))] 42 | (if (or (< offset first-offset) 43 | (< last-offset offset)) 44 | {:error :offset-out-of-range 45 | :error-code 1} 46 | (when-let [msgs (seq (drop-while #(not= (:offset %) offset) log))] 47 | (first 48 | (reduce (fn [[acc bytes-left] x] 49 | (if (<= (:sim/bytes x) bytes-left) 50 | (-> (update-in acc [:messages] conj x) 51 | (update-in [:valid-bytes] + (:sim/bytes x)) 52 | (assoc :total-bytes size) 53 | (vector (- bytes-left (:sim/bytes x)))) 54 | (reduced [acc bytes-left]))) 55 | [{:messages [] 56 | :total-bytes 0 57 | :valid-bytes 0} size] 58 | msgs)))))) 59 | head-block)) 60 | ICloseable 61 | (shutdown! [this] 62 | (reset! state nil))) 63 | 64 | (defn client 65 | "Create a simulated client suitable for tests and repl usage. 66 | e.g (client (-> (broker \"localhost\" 9092) (+make-leader \"test\" 0) (+messages \"test\" 0 (messages 20)))) 67 | => creates a client that is the leader of topic: test partition: 0, having 20 1024 byte messages in its log." 
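  ;; Descriptive note: the state atom holds the whole simulated world
  ;; (:broker, :topic-metadata, :offsets and :log), which the builder
  ;; fns below (+leader, +message, +messages) update.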
68 | [m] 69 | (map->SimulatedClient {:state (atom m)})) 70 | 71 | (defn broker 72 | [host port] 73 | {:broker 74 | {:host host 75 | :port port}}) 76 | 77 | (defn +leader 78 | [m topic partition broker] 79 | (update-in m 80 | [:topic-metadata topic :partitions] 81 | (fnil conj []) 82 | {:partition-id partition 83 | :leader broker})) 84 | 85 | (defn +make-leader 86 | [m topic partition] 87 | (+leader m topic partition (:broker m))) 88 | 89 | (defn add-leader 90 | "Makes a simulated client recognise the broker as leader for the given partition" 91 | [client topic partition broker] 92 | (swap! (:state client) +leader topic partition broker)) 93 | 94 | (defn make-leader 95 | "Makes the client a leader for the given partition" 96 | [client topic partition] 97 | (swap! (:state client) +make-leader topic partition)) 98 | 99 | (defn +message 100 | [m topic partition message] 101 | (update-in m 102 | [:log topic partition] 103 | (fn [coll] 104 | (conj (or coll []) 105 | (merge message 106 | {:offset (or (:next-offset (peek coll)) 0) 107 | :next-offset (inc (or (:next-offset (peek coll)) 0))}))))) 108 | 109 | (defn +messages 110 | [m topic partition messages] 111 | (reduce #(+message %1 topic partition %2) m messages)) 112 | 113 | (defn add-message 114 | "Adds a message to the simulated client's log" 115 | [client topic partition message] 116 | (swap! (:state client) +message topic partition message) 117 | nil) 118 | -------------------------------------------------------------------------------- /test/clafka/core_test.clj: -------------------------------------------------------------------------------- 1 | (ns clafka.core-test 2 | (:require [clojure.test :refer :all] 3 | [clafka.core :refer :all] 4 | [clafka.proto :refer :all])) 5 | 6 | (def head-block 7 | (fn [] 8 | {:messages [] 9 | :total-bytes 0 10 | :valid-bytes 0})) 11 | 12 | (def example-payload 13 | (.getBytes "hello-world")) 14 | 15 | (defn n-messages 16 | ([last-n n] 17 | (n-messages last-n n example-payload)) 18 | ([last-n n payload] 19 | (fn [] {:messages 20 | (for [n (range last-n n)] 21 | {:message payload 22 | :offset n 23 | :next-offset (inc n)}) 24 | :total-bytes 10}))) 25 | 26 | (defn- slow-pop* 27 | [ref] 28 | (let [h (first @ref)] 29 | (ref-set ref (into [] (rest @ref))) 30 | h)) 31 | 32 | (defrecord SimulatedConsumer [state] 33 | IBrokerClient 34 | (-fetch [this topic partition offset size] 35 | (if-let [next-thunk (dosync (slow-pop* state))] 36 | (next-thunk) 37 | (head-block)))) 38 | 39 | (defn sim-consumer 40 | [steps] 41 | (map->SimulatedConsumer {:state (ref (into [] steps))})) 42 | 43 | (defn error-fetcher 44 | [error-code] 45 | (reify IBrokerClient 46 | (-fetch [this topic partition offset size] 47 | {:error :can-be-anything 48 | :error-code error-code}))) 49 | 50 | (deftest test-fetch 51 | (testing "Errors are thrown if an :error & :error-code is passed back in the fetch response" 52 | (->> 53 | "Error code 1 should cause fetch to throw an OffsetOutOfRangeException" 54 | (is 55 | (thrown? 56 | kafka.common.OffsetOutOfRangeException 57 | (fetch (error-fetcher 1) "can-be-anything" 0 0))))) 58 | 59 | (testing "It should not matter if my partition is a string or already a numeric value" 60 | (->> 61 | "String partitions are passed through to the underlying fetcher as numbers" 62 | (is 63 | (= :a-number 64 | (fetch (reify IBrokerClient 65 | (-fetch [this topic partition offset size] 66 | (if (number? 
                  partition)
67 |                   :a-number
68 |                   :not-a-number)))
69 |                "can-be-anything"
70 |                "12"
71 |                0)))))
72 | 
73 |   (->> "If an exception is thrown by the underlying fetch call it bubbles up"
74 |        (is (thrown? Exception (fetch (sim-consumer [#(throw (Exception.))]) "can-be-anything" 0 0)))))
75 | 
76 | (deftest test-fetch-log
77 | 
78 |   (testing "If any fetch returns an :error and :error-code the seq will also throw when evaluated"
79 |     (let [c (sim-consumer [(n-messages 0 5)
80 |                            #(throw (Exception.))])
81 |           log (fetch-log c "anything" 0 0)]
82 |       (->> "The first fetch is successful"
83 |            (is (= (:messages ((n-messages 0 5)))
84 |                   (take 5 log))))
85 | 
86 |       (->> "The second fetch will throw"
87 |            (is (thrown? Exception
88 |                         (first (drop 5 log)))))))
89 | 
90 |   (testing "Once we are at the log head the sequence ends"
91 |     (let [c (sim-consumer [(n-messages 0 5)
92 |                            head-block])]
93 |       (is (= (:messages ((n-messages 0 5)))
94 |              (fetch-log c "anything" 0 0))))))
95 | 
96 | 
97 | (deftest test-log-seq
98 | 
99 |   (testing "If any fetch returns an :error and :error-code the seq will also throw when evaluated"
100 |     (let [c (sim-consumer [(n-messages 0 5)
101 |                            #(throw (Exception.))])
102 |           log (log-seq c "anything" 0 0)]
103 |       (->> "The first fetch is successful"
104 |            (is (= (:messages ((n-messages 0 5)))
105 |                   (take 5 log))))
106 | 
107 |       (->> "The second fetch will throw"
108 |            (is (thrown? Exception
109 |                         (first (drop 5 log)))))))
110 | 
111 |   (testing "Once at the log head, I will wait until new messages are added to the log"
112 |     (let [c (sim-consumer [(n-messages 0 5)
113 |                            head-block])
114 |           read (atom [])
115 |           block-on-me (promise)
116 |           fetching (future (try (doseq [m (log-seq c "anything" 0 0 {:poll-ms 10})]
117 |                                   (deliver block-on-me :something)
118 |                                   (swap! read conj m))
119 |                                 (catch Exception e
120 |                                   nil)))]
121 |       ;;wait for the first block to be read
122 |       (is (deref block-on-me 2000 false))
123 |       ;;put a new block on the log
124 |       (dosync (alter (:state c) conj (n-messages 5 10)))
125 |       ;;force an exception to kill our future
126 |       (dosync (alter (:state c) conj #(throw (Exception.))))
127 | 
128 |       ;;wait for the future to die
129 |       (when (= :kill (deref fetching 1000 :kill))
130 |         (future-cancel fetching))
131 | 
132 |       (->> "We should have read both the first block, and the second block which was
133 |             added to our log after the log-seq was created"
134 |            (is (= (:messages ((n-messages 0 10)))
135 |                   @read))))))
136 | 
--------------------------------------------------------------------------------
/test/clafka/pool_test.clj:
--------------------------------------------------------------------------------
1 | (ns clafka.pool-test
2 |   (:require [clafka.pool :refer :all]
3 |             [clafka.proto :refer :all]
4 |             [clafka.sim :as sim]
5 |             [clafka.core :as clafka]
6 |             [clojure.test :refer :all]))
7 | 
8 | 
9 | (deftest test-pool
10 |   (let [add-leaders #(-> %
11 |                          (sim/+leader "test" 0 {:host "localhost" :port 9092})
12 |                          (sim/+leader "test" 1 {:host "localhost" :port 9093})
13 |                          (sim/+leader "test" 2 {:host "localhost" :port 9094}))
14 | 
15 |         consumer1 (sim/client (-> (sim/broker "localhost" 9092)
16 |                                   add-leaders
17 |                                   (sim/+message "test" 0 (sim/message (.getBytes "msg1")))))
18 |         consumer2 (sim/client (-> (sim/broker "localhost" 9093)
19 |                                   add-leaders
20 |                                   (sim/+message "test" 1 (sim/message (.getBytes "msg2")))))
21 |         consumer3 (sim/client (-> (sim/broker "localhost" 9094)
22 |                                   add-leaders
23 |                                   (sim/+message "test" 2 (sim/message (.getBytes "msg3")))))
24 | 
25 |         consumers (group-by (comp :broker deref :state) [consumer1 consumer2 consumer3])
26 | 
27 |         pool (pool (keys consumers) 1
28 |                    {:wait-time 50
29 |                     :back-off-time 50
30 |                     :factory (fn [host port] (first (consumers {:host host :port port})))})]
31 | 
32 |     (testing "I should be able to fetch data from any partition and the right consumer will be hit"
33 |       (is (= (seq (.getBytes "msg1"))
34 |              (-> (clafka/fetch pool "test" 0 0)
35 |                  :messages
36 |                  first
37 |                  :message
38 |                  seq)))
39 | 
40 |       (is (= (seq (.getBytes "msg2"))
41 |              (-> (clafka/fetch pool "test" 1 0)
42 |                  :messages
43 |                  first
44 |                  :message
45 |                  seq)))
46 | 
47 |       (is (= (seq (.getBytes "msg3"))
48 |              (-> (clafka/fetch pool "test" 2 0)
49 |                  :messages
50 |                  first
51 |                  :message
52 |                  seq))))
53 | 
54 |     (testing "I can ask about the leaders of each partition (can use any consumer)"
55 |       (is (= {:host "localhost" :port 9092}
56 |              (clafka/find-leader pool "test" 0)))
57 | 
58 |       (is (= {:host "localhost" :port 9093}
59 |              (clafka/find-leader pool "test" 1)))
60 | 
61 |       (is (= {:host "localhost" :port 9094}
62 |              (clafka/find-leader pool "test" 2))))
63 | 
64 |     (testing "I can still get errors..."
65 |       (is (thrown? kafka.common.OffsetOutOfRangeException
66 |                    (clafka/fetch pool "test" 0 100)))
67 | 
68 |       (is (thrown? Exception
69 |                    (clafka/fetch pool "test2" 0 0))))
70 | 
71 |     (shutdown! pool)
72 | 
73 |     (testing "Post shutdown all requests should throw"
74 |       (is (thrown? Exception
75 |                    (clafka/fetch pool "test" 0 0)))
76 |       (is (thrown? Exception
77 |                    (clafka/find-leader pool "test" 1)))
78 |       (is (thrown? Exception
79 |                    (clafka/earliest-offset pool "test" 2))))))
80 | 
--------------------------------------------------------------------------------