├── .ci └── run.sh ├── .github ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE.md └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── .travis.yml ├── CHANGELOG.md ├── CONTRIBUTORS ├── DEVELOPER.md ├── Gemfile ├── LICENSE ├── NOTICE.TXT ├── README.md ├── Rakefile ├── build.gradle ├── docs ├── index.asciidoc ├── input-kafka.asciidoc └── output-kafka.asciidoc ├── gradle.properties ├── gradle └── wrapper │ ├── gradle-wrapper.jar │ └── gradle-wrapper.properties ├── gradlew ├── gradlew.bat ├── kafka_test_setup.sh ├── kafka_test_teardown.sh ├── lib └── logstash │ ├── inputs │ └── kafka.rb │ ├── outputs │ └── kafka.rb │ └── plugin_mixins │ └── kafka │ ├── avro_schema_registry.rb │ └── common.rb ├── logstash-integration-kafka.gemspec ├── setup_keystore_and_truststore.sh ├── spec ├── check_docs_spec.rb ├── fixtures │ ├── jaas.config │ ├── pwd │ └── trust-store_stub.jks ├── integration │ ├── inputs │ │ └── kafka_spec.rb │ └── outputs │ │ └── kafka_spec.rb └── unit │ ├── inputs │ ├── avro_schema_fixture_payment.asvc │ └── kafka_spec.rb │ └── outputs │ └── kafka_spec.rb ├── start_auth_schema_registry.sh ├── start_schema_registry.sh └── stop_schema_registry.sh /.ci/run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # This is intended to be run inside the docker container as the command of the docker-compose. 3 | 4 | env 5 | 6 | set -ex 7 | 8 | export KAFKA_VERSION=3.3.1 9 | ./kafka_test_setup.sh 10 | 11 | bundle exec rspec -fd 12 | bundle exec rspec -fd --tag integration 13 | 14 | ./kafka_test_teardown.sh 15 | -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Logstash 2 | 3 | All contributions are welcome: ideas, patches, documentation, bug reports, 4 | complaints, etc! 5 | 6 | Programming is not a required skill, and there are many ways to help out! 7 | It is more important to us that you are able to contribute. 8 | 9 | That said, some basic guidelines, which you are free to ignore :) 10 | 11 | ## Want to learn? 12 | 13 | Want to lurk about and see what others are doing with Logstash? 14 | 15 | * The irc channel (#logstash on irc.freenode.org) is a good place for this 16 | * The [forum](https://discuss.elastic.co/c/logstash) is also 17 | great for learning from others. 18 | 19 | ## Got Questions? 20 | 21 | Have a problem you want Logstash to solve for you? 22 | 23 | * You can ask a question in the [forum](https://discuss.elastic.co/c/logstash) 24 | * Alternately, you are welcome to join the IRC channel #logstash on 25 | irc.freenode.org and ask for help there! 26 | 27 | ## Have an Idea or Feature Request? 28 | 29 | * File a ticket on [GitHub](https://github.com/elastic/logstash/issues). Please remember that GitHub is used only for issues and feature requests. If you have a general question, the [forum](https://discuss.elastic.co/c/logstash) or IRC would be the best place to ask. 30 | 31 | ## Something Not Working? Found a Bug? 32 | 33 | If you think you found a bug, it probably is a bug. 34 | 35 | * If it is a general Logstash or a pipeline issue, file it in [Logstash GitHub](https://github.com/elasticsearch/logstash/issues) 36 | * If it is specific to a plugin, please file it in the respective repository under [logstash-plugins](https://github.com/logstash-plugins) 37 | * or ask the [forum](https://discuss.elastic.co/c/logstash). 
38 | 39 | # Contributing Documentation and Code Changes 40 | 41 | If you have a bugfix or new feature that you would like to contribute to 42 | logstash, and you think it will take more than a few minutes to produce the fix 43 | (ie; write code), it is worth discussing the change with the Logstash users and developers first! You can reach us via [GitHub](https://github.com/elastic/logstash/issues), the [forum](https://discuss.elastic.co/c/logstash), or via IRC (#logstash on freenode irc) 44 | Please note that Pull Requests without tests will not be merged. If you would like to contribute but do not have experience with writing tests, please ping us on IRC/forum or create a PR and ask our help. 45 | 46 | ## Contributing to plugins 47 | 48 | Check our [documentation](https://www.elastic.co/guide/en/logstash/current/contributing-to-logstash.html) on how to contribute to plugins or write your own! It is super easy! 49 | 50 | ## Contribution Steps 51 | 52 | 1. Test your changes! [Run](https://github.com/elastic/logstash#testing) the test suite 53 | 2. Please make sure you have signed our [Contributor License 54 | Agreement](https://www.elastic.co/contributor-agreement/). We are not 55 | asking you to assign copyright to us, but to give us the right to distribute 56 | your code without restriction. We ask this of all contributors in order to 57 | assure our users of the origin and continuing existence of the code. You 58 | only need to sign the CLA once. 59 | 3. Send a pull request! Push your changes to your fork of the repository and 60 | [submit a pull 61 | request](https://help.github.com/articles/using-pull-requests). In the pull 62 | request, describe what your changes do and mention any bugs/issues related 63 | to the pull request. 64 | 65 | 66 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Please post all product and debugging questions on our [forum](https://discuss.elastic.co/c/logstash). Your questions will reach our wider community members there, and if we confirm that there is a bug, then we can open a new issue here. 2 | 3 | For all general issues, please provide the following details for fast resolution: 4 | 5 | - Version: 6 | - Operating System: 7 | - Config File (if you have sensitive info, please remove it): 8 | - Sample Data: 9 | - Steps to Reproduce: 10 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Thanks for contributing to Logstash! 
If you haven't already signed our CLA, here's a handy link: https://www.elastic.co/contributor-agreement/ 2 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.gem 2 | Gemfile.lock 3 | .bundle 4 | .gradle 5 | .idea 6 | lib/log4j/ 7 | lib/net/ 8 | lib/org/ 9 | vendor/ 10 | build/ 11 | .idea/ 12 | vendor 13 | *.jar 14 | tls_repository/ -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | import: 2 | - logstash-plugins/.ci:travis/travis.yml@1.x 3 | 4 | # lock on version 8.x because use of Jackson 2.13.3 available from 8.3.0 5 | jobs: 6 | exclude: 7 | - env: ELASTIC_STACK_VERSION=7.current 8 | - env: SNAPSHOT=true ELASTIC_STACK_VERSION=7.current 9 | -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ## 11.6.2 2 | - Docs: fixed setting type reference for `sasl_iam_jar_paths` [#192](https://github.com/logstash-plugins/logstash-integration-kafka/pull/192) 3 | 4 | ## 11.6.1 5 | - Expose the SASL client callback class setting to the Logstash configuration [#177](https://github.com/logstash-plugins/logstash-integration-kafka/pull/177) 6 | - Adds a mechanism to load AWS IAM authentication as SASL client libraries at startup [#178](https://github.com/logstash-plugins/logstash-integration-kafka/pull/178) 7 | 8 | ## 11.6.0 9 | - Support additional `oauth` and `sasl` configuration options for configuring kafka client [#189](https://github.com/logstash-plugins/logstash-integration-kafka/pull/189) 10 | 11 | ## 11.5.4 12 | - Update kafka client to 3.8.1 and transitive dependencies [#188](https://github.com/logstash-plugins/logstash-integration-kafka/pull/188) 13 | - Removed Jar Dependencies dependency [#187](https://github.com/logstash-plugins/logstash-integration-kafka/pull/187) 14 | 15 | ## 11.5.3 16 | - Update kafka client to 3.7.1 and transitive dependencies [#186](https://github.com/logstash-plugins/logstash-integration-kafka/pull/186) 17 | 18 | ## 11.5.2 19 | - Update avro to 1.11.4 and confluent kafka to 7.4.7 [#184](https://github.com/logstash-plugins/logstash-integration-kafka/pull/184) 20 | 21 | ## 11.5.1 22 | - Specify that only headers with UTF-8 encoded values are supported in extended decoration [#174](https://github.com/logstash-plugins/logstash-integration-kafka/pull/174) 23 | 24 | ## 11.5.0 25 | - Add "auto_create_topics" option to allow disabling of topic auto creation [#172](https://github.com/logstash-plugins/logstash-integration-kafka/pull/172) 26 | 27 | ## 11.4.2 28 | - Add default client_id of logstash to kafka output [#169](https://github.com/logstash-plugins/logstash-integration-kafka/pull/169) 29 | 30 | ## 11.4.1 31 | - [DOC] Match anchor ID and references for `message_headers` [#164](https://github.com/logstash-plugins/logstash-integration-kafka/pull/164) 32 | 33 | ## 11.4.0 34 | - Add support for setting Kafka message headers in output plugin [#162](https://github.com/logstash-plugins/logstash-integration-kafka/pull/162) 35 | 36 | ## 11.3.4 37 | - Fix "retries" and "value_serializer" error handling in output plugin (#160) [#160](https://github.com/logstash-plugins/logstash-integration-kafka/pull/160) 38 | 39 | ## 11.3.3 40 | - Fix "Can't modify frozen string" error when record value is `nil` 
(tombstones) [#155](https://github.com/logstash-plugins/logstash-integration-kafka/pull/155) 41 | 42 | ## 11.3.2 43 | - Fix: update Avro library [#150](https://github.com/logstash-plugins/logstash-integration-kafka/pull/150) 44 | 45 | ## 11.3.1 46 | - Fix: update snappy dependency [#148](https://github.com/logstash-plugins/logstash-integration-kafka/pull/148) 47 | 48 | ## 11.3.0 49 | - Bump kafka client to 3.4.1 [#145](https://github.com/logstash-plugins/logstash-integration-kafka/pull/145) 50 | 51 | ## 11.2.1 52 | - Fix nil exception to empty headers of record during event metadata assignment [#140](https://github.com/logstash-plugins/logstash-integration-kafka/pull/140) 53 | 54 | ## 11.2.0 55 | - Added TLS truststore and keystore settings specifically to access the schema registry [#137](https://github.com/logstash-plugins/logstash-integration-kafka/pull/137) 56 | 57 | ## 11.1.0 58 | - Added config `group_instance_id` to use the Kafka's consumer static membership feature [#135](https://github.com/logstash-plugins/logstash-integration-kafka/pull/135) 59 | 60 | ## 11.0.0 61 | - Changed Kafka client to 3.3.1, requires Logstash >= 8.3.0. 62 | - Deprecated `default` value for setting `client_dns_lookup` forcing to `use_all_dns_ips` when explicitly used [#130](https://github.com/logstash-plugins/logstash-integration-kafka/pull/130) 63 | - Changed the consumer's poll from using the one that blocks on metadata retrieval to the one that doesn't [#136](https://github.com/logstash-plugins/logstash-integration-kafka/pull/133) 64 | 65 | ## 10.12.1 66 | - Fix: update Avro library on 10.x [#149](https://github.com/logstash-plugins/logstash-integration-kafka/pull/149) 67 | 68 | ## 10.12.0 69 | - bump kafka client to 2.8.1 [#115](https://github.com/logstash-plugins/logstash-integration-kafka/pull/115) 70 | 71 | ## 10.11.0 72 | - Feat: added connections_max_idle_ms setting for output [#118](https://github.com/logstash-plugins/logstash-integration-kafka/pull/118) 73 | - Refactor: mixins to follow shared mixin module naming 74 | 75 | ## 10.10.1 76 | - Update CHANGELOG.md [#114](https://github.com/logstash-plugins/logstash-integration-kafka/pull/114) 77 | 78 | ## 10.10.0 79 | - Added config setting to enable 'zstd' compression in the Kafka output [#112](https://github.com/logstash-plugins/logstash-integration-kafka/pull/112) 80 | 81 | ## 10.9.0 82 | - Refactor: leverage codec when using schema registry [#106](https://github.com/logstash-plugins/logstash-integration-kafka/pull/106) 83 | Previously using `schema_registry_url` parsed the payload as JSON even if `codec => 'plain'` was set, this is no longer the case. 
84 | 85 | ## 10.8.2 86 | - [DOC] Updates description of `enable_auto_commit=false` to clarify that the commit happens after data is fetched AND written to the queue [#90](https://github.com/logstash-plugins/logstash-integration-kafka/pull/90) 87 | - Fix: update to Gradle 7 [#104](https://github.com/logstash-plugins/logstash-integration-kafka/pull/104) 88 | - [DOC] Clarify Kafka client does not support proxy [#103](https://github.com/logstash-plugins/logstash-integration-kafka/pull/103) 89 | 90 | ## 10.8.1 91 | - [DOC] Removed a setting recommendation that is no longer applicable for Kafka 2.0+ [#99](https://github.com/logstash-plugins/logstash-integration-kafka/pull/99) 92 | 93 | ## 10.8.0 94 | - Added config setting to enable schema registry validation to be skipped when an authentication scheme unsupported 95 | by the validator is used [#97](https://github.com/logstash-plugins/logstash-integration-kafka/pull/97) 96 | 97 | ## 10.7.7 98 | - Fix: Correct the settings to allow basic auth to work properly, either by setting `schema_registry_key/secret` or embedding username/password in the 99 | url [#94](https://github.com/logstash-plugins/logstash-integration-kafka/pull/94) 100 | 101 | ## 10.7.6 102 | - Test: specify development dependency version [#91](https://github.com/logstash-plugins/logstash-integration-kafka/pull/91) 103 | 104 | ## 10.7.5 105 | - Improved error handling in the input plugin to avoid errors 'escaping' from the plugin, and crashing the logstash 106 | process [#87](https://github.com/logstash-plugins/logstash-integration-kafka/pull/87) 107 | 108 | ## 10.7.4 109 | - Docs: make sure Kafka clients version is updated in docs [#83](https://github.com/logstash-plugins/logstash-integration-kafka/pull/83) 110 | Since **10.6.0** Kafka client was updated to **2.5.1** 111 | 112 | ## 10.7.3 113 | - Changed `decorate_events` to add also Kafka headers [#78](https://github.com/logstash-plugins/logstash-integration-kafka/pull/78) 114 | 115 | ## 10.7.2 116 | - Update Jersey dependency to version 2.33 [#75](https://github.com/logstash-plugins/logstash-integration-kafka/pull/75) 117 | 118 | ## 10.7.1 119 | - Fix: dropped usage of SHUTDOWN event deprecated since Logstash 5.0 [#71](https://github.com/logstash-plugins/logstash-integration-kafka/pull/71) 120 | 121 | ## 10.7.0 122 | - Switched use from Faraday to Manticore as HTTP client library to access Schema Registry service 123 | to fix issue [#63](https://github.com/logstash-plugins/logstash-integration-kafka/pull/63) 124 | 125 | ## 10.6.0 126 | - Added functionality to Kafka input to use Avro deserializer in retrieving data from Kafka. The schema is retrieved 127 | from an instance of Confluent's Schema Registry service [#51](https://github.com/logstash-plugins/logstash-integration-kafka/pull/51) 128 | 129 | ## 10.5.3 130 | - Fix: set (optional) truststore when endpoint id check disabled [#60](https://github.com/logstash-plugins/logstash-integration-kafka/pull/60). 131 | Since **10.1.0** disabling server host-name verification (`ssl_endpoint_identification_algorithm => ""`) did not allow 132 | the (output) plugin to set `ssl_truststore_location => "..."`. 133 | 134 | ## 10.5.2 135 | - Docs: explain group_id in case of multiple inputs [#59](https://github.com/logstash-plugins/logstash-integration-kafka/pull/59) 136 | 137 | ## 10.5.1 138 | - [DOC]Replaced plugin_header file with plugin_header-integration file. 
[#46](https://github.com/logstash-plugins/logstash-integration-kafka/pull/46) 139 | - [DOC]Update kafka client version across kafka integration docs [#47](https://github.com/logstash-plugins/logstash-integration-kafka/pull/47) 140 | - [DOC]Replace hard-coded kafka client and doc path version numbers with attributes to simplify doc maintenance [#48](https://github.com/logstash-plugins/logstash-integration-kafka/pull/48) 141 | 142 | ## 10.5.0 143 | - Changed: retry sending messages only for retriable exceptions [#27](https://github.com/logstash-plugins/logstash-integration-kafka/pull/29) 144 | 145 | ## 10.4.1 146 | - [DOC] Fixed formatting issues and made minor content edits [#43](https://github.com/logstash-plugins/logstash-integration-kafka/pull/43) 147 | 148 | ## 10.4.0 149 | - added the input `isolation_level` to allow fine control of whether to return transactional messages [#44](https://github.com/logstash-plugins/logstash-integration-kafka/pull/44) 150 | 151 | ## 10.3.0 152 | - added the input and output `client_dns_lookup` parameter to allow control of how DNS requests are made [#28](https://github.com/logstash-plugins/logstash-integration-kafka/pull/28) 153 | 154 | ## 10.2.0 155 | - Changed: config defaults to be aligned with Kafka client defaults [#30](https://github.com/logstash-plugins/logstash-integration-kafka/pull/30) 156 | 157 | ## 10.1.0 158 | - updated kafka client (and its dependencies) to version 2.4.1 ([#16](https://github.com/logstash-plugins/logstash-integration-kafka/pull/16)) 159 | - added the input `client_rack` parameter to enable support for follower fetching 160 | - added the output `partitioner` parameter for tuning partitioning strategy 161 | - Refactor: normalized error logging a bit - make sure exception type is logged 162 | - Fix: properly handle empty ssl_endpoint_identification_algorithm [#8](https://github.com/logstash-plugins/logstash-integration-kafka/pull/8) 163 | - Refactor : made `partition_assignment_strategy` option easier to configure by accepting simple values from an enumerated set instead of requiring lengthy class paths ([#25](https://github.com/logstash-plugins/logstash-integration-kafka/pull/25)) 164 | 165 | ## 10.0.1 166 | - Fix links in changelog pointing to stand-alone plugin changelogs. 167 | - Refactor: scope java_import to plugin class 168 | 169 | ## 10.0.0 170 | - Initial release of the Kafka Integration Plugin, which combines 171 | previously-separate Kafka plugins and shared dependencies into a single 172 | codebase; independent changelogs for previous versions can be found: 173 | - [Kafka Input Plugin @9.1.0](https://github.com/logstash-plugins/logstash-input-kafka/blob/v9.1.0/CHANGELOG.md) 174 | - [Kafka Output Plugin @8.1.0](https://github.com/logstash-plugins/logstash-output-kafka/blob/v8.1.0/CHANGELOG.md) 175 | -------------------------------------------------------------------------------- /CONTRIBUTORS: -------------------------------------------------------------------------------- 1 | The following is a list of people who have contributed ideas, code, bug 2 | reports, or in general have helped logstash along its way. 
3 | 4 | Contributors: 5 | * Joseph Lawson (joekiller) 6 | * Pere Urbón (purbon) 7 | * Pier-Hugues Pellerin (ph) 8 | * Richard Pijnenburg (electrical) 9 | * Suyog Rao (suyograo) 10 | * Tal Levy (talevy) 11 | * João Duarte (jsvd) 12 | * Kurt Hurtado (kurtado) 13 | * Ry Biesemeyer (yaauie) 14 | * Rob Cowart (robcowart) 15 | * Tim te Beek (timtebeek) 16 | 17 | Note: If you've sent us patches, bug reports, or otherwise contributed to 18 | Logstash, and you aren't on the list above and want to be, please let us know 19 | and we'll make sure you're here. Contributions from folks like you are what make 20 | open source awesome. 21 | -------------------------------------------------------------------------------- /DEVELOPER.md: -------------------------------------------------------------------------------- 1 | # logstash-integration-kafka 2 | 3 | Apache Kafka integration for Logstash, including Input and Output plugins. 4 | 5 | # Dependencies 6 | 7 | * Apache Kafka version 0.8.1.1 8 | * jruby-kafka library 9 | 10 | # Plugins 11 | 12 | 13 | ## logstash-input-kafka 14 | 15 | Apache Kafka input for Logstash. This input will consume messages from a Kafka topic using the high level consumer API exposed by Kafka. 16 | 17 | For more information about Kafka, refer to this [documentation](http://kafka.apache.org/documentation.html) 18 | 19 | Information about the high level consumer API can be found [here](http://kafka.apache.org/documentation.html#highlevelconsumerapi) 20 | 21 | ### Logstash Configuration 22 | 23 | See http://kafka.apache.org/documentation.html#consumerconfigs for details about the Kafka consumer options. 24 | 25 | input { 26 | kafka { 27 | topic_id => ... # string (optional), default: nil, The topic to consume messages from. Can be a Java regular expression used as a whitelist of topics. 28 | white_list => ... # string (optional), default: nil, Whitelist of topics to include for consumption. 29 | black_list => ... # string (optional), default: nil, Blacklist of topics to exclude from consumption. 30 | zk_connect => ... # string (optional), default: "localhost:2181", Specifies the ZooKeeper connection string in the form hostname:port 31 | group_id => ... # string (optional), default: "logstash", A string that uniquely identifies the group of consumer processes 32 | reset_beginning => ... # boolean (optional), default: false, Specify whether to jump to the beginning of the queue when there is no initial offset in ZK 33 | auto_offset_reset => ... # string (optional), one of [ "largest", "smallest"] default => 'largest', Where the consumer should start if the group does not already have an established offset or the offset is invalid 34 | consumer_threads => ... # number (optional), default: 1, Number of threads to read from the partitions 35 | queue_size => ... # number (optional), default: 20, Internal Logstash queue size used to hold events in memory 36 | rebalance_max_retries => ... # number (optional), default: 4 37 | rebalance_backoff_ms => ... # number (optional), default: 2000 38 | consumer_timeout_ms => ... # number (optional), default: -1 39 | consumer_restart_on_error => ... # boolean (optional), default: true 40 | consumer_restart_sleep_ms => ... # number (optional), default: 0 41 | decorate_events => ... # boolean (optional), default: false, Option to add Kafka metadata like topic and message size to the event 42 | consumer_id => ... # string (optional), default: nil 43 | fetch_message_max_bytes => ...
# number (optional), default: 1048576 44 | } 45 | } 46 | 47 | The default codec is json. 48 | 49 | ## logstash-output-kafka 50 | 51 | Apache Kafka output for Logstash. This output will produce messages to a Kafka topic using the producer API exposed by Kafka. 52 | 53 | For more information about Kafka, refer to this [documentation](http://kafka.apache.org/documentation.html) 54 | 55 | Information about the producer API can be found [here](http://kafka.apache.org/documentation.html#apidesign) 56 | 57 | ### Logstash Configuration 58 | 59 | See http://kafka.apache.org/documentation.html#producerconfigs for details about the Kafka producer options. 60 | 61 | output { 62 | kafka { 63 | topic_id => ... # string (required), The topic to produce the messages to 64 | broker_list => ... # string (optional), default: "localhost:9092", This is for bootstrapping and the producer will only use it for getting metadata 65 | compression_codec => ... # string (optional), one of ["none", "gzip", "snappy", "lz4", "zstd"], default: "none" 66 | compressed_topics => ... # string (optional), default: "", This parameter allows you to set whether compression should be turned on for particular topics 67 | request_required_acks => ... # number (optional), one of [-1, 0, 1], default: 0, This value controls when a produce request is considered completed 68 | serializer_class => ... # string, (optional) default: "kafka.serializer.StringEncoder", The serializer class for messages. The default encoder takes a byte[] and returns the same byte[] 69 | partitioner_class => ... # string (optional) default: "kafka.producer.DefaultPartitioner" 70 | request_timeout_ms => ... # number (optional) default: 10000 71 | producer_type => ... # string (optional), one of ["sync", "async"] default => 'sync' 72 | key_serializer_class => ... # string (optional) default: kafka.serializer.StringEncoder 73 | message_send_max_retries => ... # number (optional) default: 3 74 | retry_backoff_ms => ... # number (optional) default: 100 75 | topic_metadata_refresh_interval_ms => ... # number (optional) default: 600 * 1000 76 | queue_buffering_max_ms => ... # number (optional) default: 5000 77 | queue_buffering_max_messages => ... # number (optional) default: 10000 78 | queue_enqueue_timeout_ms => ... # number (optional) default: -1 79 | batch_num_messages => ... # number (optional) default: 200 80 | send_buffer_bytes => ... # number (optional) default: 100 * 1024 81 | client_id => ... # string (optional) default: "" 82 | partition_key_format => ... # string (optional) default: nil, Provides a way to specify a partition key as a string 83 | } 84 | } 85 | 86 | The default codec is json for outputs. If you select a codec of plain, Logstash will encode your messages with not only the message 87 | but also a timestamp and hostname.
If you do not want anything but your message passing through, you should make 88 | the output configuration something like: 89 | 90 | output { 91 | kafka { 92 | codec => plain { 93 | format => "%{message}" 94 | } 95 | topic_id => "my_topic_id" 96 | } 97 | } 98 | -------------------------------------------------------------------------------- /Gemfile: -------------------------------------------------------------------------------- 1 | source 'https://rubygems.org' 2 | 3 | gemspec 4 | 5 | logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash" 6 | use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1" 7 | 8 | if Dir.exist?(logstash_path) && use_logstash_source 9 | gem 'logstash-core', :path => "#{logstash_path}/logstash-core" 10 | gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api" 11 | end 12 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 
48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright 2020 Elastic and contributors 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /NOTICE.TXT: -------------------------------------------------------------------------------- 1 | Elasticsearch 2 | Copyright 2012-2019 Elastic NV 3 | 4 | This product includes software developed by The Apache Software 5 | Foundation (http://www.apache.org/). 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Logstash Plugin 2 | 3 | [![Travis Build Status](https://travis-ci.com/logstash-plugins/logstash-integration-kafka.svg)](https://travis-ci.com/logstash-plugins/logstash-integration-kafka) 4 | 5 | This is a plugin for [Logstash](https://github.com/elastic/logstash). 6 | 7 | It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way. 8 | 9 | ## Logging 10 | 11 | Kafka logs do not respect the Log4J2 root logger level and default to INFO; for other levels, you must explicitly set the log level in your Logstash deployment's `log4j2.properties` file, e.g.: 12 | ``` 13 | logger.kafka.name=org.apache.kafka 14 | logger.kafka.appenderRef.console.ref=console 15 | logger.kafka.level=debug 16 | ``` 17 | 18 | ## Documentation 19 | 20 | https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html 21 | 22 | Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation is placed under one [central location](http://www.elastic.co/guide/en/logstash/current/). 23 | 24 | - For formatting code or config examples, you can use the asciidoc `[source,ruby]` directive 25 | - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide 26 | 27 | ## Need Help? 28 | 29 | Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum. 30 | 31 | ## Developing 32 | 33 | ### 1. Plugin Development and Testing 34 | 35 | #### Code 36 | - To get started, you'll need JRuby with the Bundler gem installed. 37 | 38 | - Create a new plugin or clone an existing one from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example). 39 | 40 | - Install dependencies 41 | 42 | ```sh 43 | bundle install 44 | rake install_jars 45 | ``` 46 | 47 | #### Test 48 | 49 | - Update your dependencies 50 | 51 | ```sh 52 | bundle install 53 | rake install_jars 54 | ``` 55 | 56 | - Run unit tests 57 | 58 | ```sh 59 | bundle exec rspec 60 | ``` 61 | 62 | - Run integration tests 63 | 64 | You'll need to have Docker available within your test environment before 65 | running the integration tests. The tests depend on a specific Kafka image 66 | found in Docker Hub called `spotify/kafka`. You will need internet connectivity 67 | to pull in this image if it does not already exist locally. 68 | 69 | ```sh 70 | bundle exec rspec --tag integration 71 | ``` 72 | 73 | ### 2.
Running your unpublished Plugin in Logstash 74 | 75 | #### 2.1 Run in a local Logstash clone 76 | 77 | - Edit Logstash `Gemfile` and add the local plugin path, for example: 78 | ```ruby 79 | gem "logstash-output-kafka", :path => "/your/local/logstash-output-kafka" 80 | ``` 81 | - Install plugin 82 | ```sh 83 | # Logstash 2.3 and higher 84 | bin/logstash-plugin install --no-verify 85 | 86 | # Prior to Logstash 2.3 87 | bin/plugin install --no-verify 88 | 89 | ``` 90 | - Run Logstash with your plugin 91 | ```sh 92 | bin/logstash -e 'output { kafka { topic_id => "kafka_topic" }}' 93 | ``` 94 | At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash. 95 | 96 | #### 2.2 Run in an installed Logstash 97 | 98 | You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory or you can build the gem and install it using: 99 | 100 | - Build your plugin gem 101 | ```sh 102 | gem build logstash-output-kafka.gemspec 103 | ``` 104 | - Install the plugin from the Logstash home 105 | ```sh 106 | bin/plugin install /your/local/plugin/logstash-output-kafka.gem 107 | ``` 108 | - Start Logstash and proceed to test the plugin 109 | 110 | ## Contributing 111 | 112 | All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin. 113 | 114 | Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here. 115 | 116 | It is more important to the community that you are able to contribute. 117 | 118 | For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file. 119 | -------------------------------------------------------------------------------- /Rakefile: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "logstash/devutils/rake" 3 | require "jars/installer" 4 | require "fileutils" 5 | 6 | task :default do 7 | system('rake -vT') 8 | end 9 | 10 | task :vendor do 11 | exit(1) unless system './gradlew --no-daemon vendor' 12 | end 13 | 14 | task :clean do 15 | ["vendor/jar-dependencies", "Gemfile.lock"].each do |p| 16 | FileUtils.rm_rf(p) 17 | end 18 | end 19 | -------------------------------------------------------------------------------- /build.gradle: -------------------------------------------------------------------------------- 1 | import java.nio.file.Files 2 | import static java.nio.file.StandardCopyOption.REPLACE_EXISTING 3 | /* 4 | * Licensed to Elasticsearch under one or more contributor 5 | * license agreements. See the NOTICE file distributed with 6 | * this work for additional information regarding copyright 7 | * ownership. Elasticsearch licenses this file to you under 8 | * the Apache License, Version 2.0 (the "License"); you may 9 | * not use this file except in compliance with the License. 10 | * You may obtain a copy of the License at 11 | * 12 | * http://www.apache.org/licenses/LICENSE-2.0 13 | * 14 | * Unless required by applicable law or agreed to in writing, 15 | * software distributed under the License is distributed on an 16 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 17 | * KIND, either express or implied. 
See the License for the 18 | * specific language governing permissions and limitations 19 | * under the License. 20 | */ 21 | buildscript { 22 | repositories { 23 | mavenCentral() 24 | } 25 | } 26 | 27 | plugins { 28 | id 'java' 29 | id 'maven-publish' 30 | id 'distribution' 31 | id 'idea' 32 | } 33 | 34 | group "org.logstash.integrations" 35 | 36 | java { 37 | sourceCompatibility = JavaVersion.VERSION_1_8 38 | } 39 | 40 | // given https://docs.confluent.io/current/installation/versions-interoperability.html matrix 41 | // Confluent Platform 7.8.x is Apache Kafka 3.8.x 42 | String confluentKafkaVersion = '7.8.0' 43 | String apacheKafkaVersion = '3.8.1' 44 | 45 | repositories { 46 | mavenCentral() 47 | maven { 48 | // Confluent repo for kafka-avro-serializer 49 | url "https://packages.confluent.io/maven/" 50 | } 51 | } 52 | 53 | dependencies { 54 | implementation("io.confluent:kafka-avro-serializer:${confluentKafkaVersion}") { 55 | exclude group: 'org.apache.kafka', module:'kafka-clients' 56 | } 57 | // dependency of kafka-avro-serializer 58 | implementation("io.confluent:kafka-schema-serializer:${confluentKafkaVersion}") { 59 | exclude group: 'org.apache.kafka', module:'kafka-clients' 60 | } 61 | // dependency of kafka-avro-serializer 62 | implementation 'org.apache.avro:avro:1.11.4' 63 | // dependency of kafka-avro-serializer 64 | implementation("io.confluent:kafka-schema-registry-client:${confluentKafkaVersion}") { 65 | exclude group: 'org.apache.kafka', module:'kafka-clients' 66 | } 67 | implementation "org.apache.kafka:kafka-clients:${apacheKafkaVersion}" 68 | // slf4j, zstd, lz4-java, snappy are dependencies from "kafka-clients" 69 | implementation 'org.slf4j:slf4j-api:1.7.36' 70 | implementation 'com.github.luben:zstd-jni:1.5.6-8' 71 | implementation 'org.lz4:lz4-java:1.8.0' 72 | implementation 'org.xerial.snappy:snappy-java:1.1.10.7' 73 | } 74 | task generateGemJarRequiresFile { 75 | doLast { 76 | File jars_file = file('lib/logstash-integration-kafka_jars.rb') 77 | jars_file.newWriter().withWriter { w -> 78 | w << "# AUTOGENERATED BY THE GRADLE SCRIPT. DO NOT EDIT.\n\n" 79 | w << "require \'jar_dependencies\'\n" 80 | configurations.runtimeClasspath.allDependencies.each { 81 | w << "require_jar(\'${it.group}\', \'${it.name}\', \'${it.version}\')\n" 82 | } 83 | } 84 | } 85 | } 86 | 87 | task vendor { 88 | doLast { 89 | String vendorPathPrefix = "vendor/jar-dependencies" 90 | configurations.runtimeClasspath.allDependencies.each { dep -> 91 | File f = configurations.runtimeClasspath.filter { it.absolutePath.contains("${dep.group}/${dep.name}/${dep.version}") }.singleFile 92 | String groupPath = dep.group.replaceAll('\\.', '/') 93 | File newJarFile = file("${vendorPathPrefix}/${groupPath}/${dep.name}/${dep.version}/${dep.name}-${dep.version}.jar") 94 | newJarFile.mkdirs() 95 | Files.copy(f.toPath(), newJarFile.toPath(), REPLACE_EXISTING) 96 | } 97 | } 98 | } 99 | 100 | vendor.dependsOn(generateGemJarRequiresFile) 101 | -------------------------------------------------------------------------------- /docs/index.asciidoc: -------------------------------------------------------------------------------- 1 | :plugin: kafka 2 | :type: integration 3 | :no_codec: 4 | :kafka_client: 3.8.1 5 | 6 | /////////////////////////////////////////// 7 | START - GENERATED VARIABLES, DO NOT EDIT! 
8 | /////////////////////////////////////////// 9 | :version: %VERSION% 10 | :release_date: %RELEASE_DATE% 11 | :changelog_url: %CHANGELOG_URL% 12 | :include_path: ../../../../logstash/docs/include 13 | /////////////////////////////////////////// 14 | END - GENERATED VARIABLES, DO NOT EDIT! 15 | /////////////////////////////////////////// 16 | 17 | [id="plugins-{type}s-{plugin}"] 18 | 19 | === Kafka Integration Plugin 20 | 21 | include::{include_path}/plugin_header.asciidoc[] 22 | 23 | ==== Description 24 | 25 | The Kafka Integration Plugin provides integrated plugins for working with the 26 | https://kafka.apache.org/[Kafka] distributed streaming platform. 27 | 28 | - {logstash-ref}/plugins-inputs-kafka.html[Kafka Input Plugin] 29 | - {logstash-ref}/plugins-outputs-kafka.html[Kafka Output Plugin] 30 | 31 | This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the official 32 | https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka 33 | compatibility reference]. If the linked compatibility wiki is not up-to-date, 34 | please contact Kafka support/community to confirm compatibility. 35 | 36 | :no_codec!: 37 | -------------------------------------------------------------------------------- /docs/output-kafka.asciidoc: -------------------------------------------------------------------------------- 1 | :integration: kafka 2 | :plugin: kafka 3 | :type: output 4 | :default_codec: plain 5 | :kafka_client: 3.8.1 6 | :kafka_client_doc: 38 7 | 8 | /////////////////////////////////////////// 9 | START - GENERATED VARIABLES, DO NOT EDIT! 10 | /////////////////////////////////////////// 11 | :version: %VERSION% 12 | :release_date: %RELEASE_DATE% 13 | :changelog_url: %CHANGELOG_URL% 14 | :include_path: ../../../../logstash/docs/include 15 | /////////////////////////////////////////// 16 | END - GENERATED VARIABLES, DO NOT EDIT! 17 | /////////////////////////////////////////// 18 | 19 | [id="plugins-{type}s-{plugin}"] 20 | 21 | === Kafka output plugin 22 | 23 | include::{include_path}/plugin_header-integration.asciidoc[] 24 | 25 | ==== Description 26 | 27 | Write events to a Kafka topic. 28 | 29 | This plugin uses Kafka Client {kafka_client}. For broker compatibility, see the 30 | official 31 | https://cwiki.apache.org/confluence/display/KAFKA/Compatibility+Matrix[Kafka 32 | compatibility reference]. If the linked compatibility wiki is not up-to-date, 33 | please contact Kafka support/community to confirm compatibility. 34 | 35 | If you require features not yet available in this plugin (including client 36 | version upgrades), please file an issue with details about what you need. 37 | 38 | This output supports connecting to Kafka over: 39 | 40 | * SSL (requires plugin version 3.0.0 or later) 41 | * Kerberos SASL (requires plugin version 5.1.0 or later) 42 | 43 | By default security is disabled but can be turned on as needed. 44 | 45 | The only required configuration is the topic_id. 46 | 47 | The default codec is plain. Logstash will encode your events with not only the 48 | message field but also with a timestamp and hostname. 
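If instead you only want the raw `message` field written to Kafka, without the added timestamp and hostname, a `plain` codec with a `format` string can be used. This is a minimal sketch (the topic name is illustrative):

[source,ruby]
    output {
      kafka {
        codec => plain {
          format => "%{message}"
        }
        topic_id => "mytopic"
      }
    }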
49 | 50 | If you want the full content of your events to be sent as json, you should set 51 | the codec in the output configuration like this: 52 | 53 | [source,ruby] 54 | output { 55 | kafka { 56 | codec => json 57 | topic_id => "mytopic" 58 | } 59 | } 60 | 61 | For more information see 62 | https://kafka.apache.org/{kafka_client_doc}/documentation.html#theproducer 63 | 64 | Kafka producer configuration: 65 | https://kafka.apache.org/{kafka_client_doc}/documentation.html#producerconfigs 66 | 67 | NOTE: This plugin does not support using a proxy when communicating to the Kafka broker. 68 | 69 | [id="plugins-{type}s-{plugin}-aws_msk_iam_auth"] 70 | ==== AWS MSK IAM authentication 71 | If you use AWS MSK, the AWS MSK IAM access control enables you to handle both authentication and authorization for your MSK cluster with AWS IAM. 72 | For more information on this AWS MSK feature see the https://docs.aws.amazon.com/msk/latest/developerguide/iam-access-control.html[AWS documentation]. 73 | 74 | To use this Kafka input with AWS MSK IAM authentication, download the uber jar which contains the client library for 75 | this specific cloud vendor and all the transitive dependencies from this https://github.com/elastic/logstash-kafka-iams-packages/releases[repository]. 76 | Configure the following setting: 77 | ``` 78 | security_protocol => "SASL_SSL" 79 | sasl_mechanism => "AWS_MSK_IAM" 80 | sasl_iam_jar_paths => ["/path/to/aws_iam_uber.jar"] 81 | sasl_jaas_config => "software.amazon.msk.auth.iam.IAMLoginModule required;" 82 | sasl_client_callback_handler_class => "software.amazon.msk.auth.iam.IAMClientCallbackHandler" 83 | ``` 84 | For more IAM authentication configurations, see the https://github.com/aws/aws-msk-iam-auth[AWS MSK IAM authentication library documentation]. 85 | 86 | [id="plugins-{type}s-{plugin}-options"] 87 | ==== Kafka Output Configuration Options 88 | 89 | This plugin supports the following configuration options plus the <> described later. 90 | 91 | NOTE: Some of these options map to a Kafka option. Defaults usually reflect the Kafka default setting, 92 | and might change if Kafka's producer defaults change. 93 | See the https://kafka.apache.org/{kafka_client_doc}/documentation for more details. 
94 | 95 | [cols="<,<,<",options="header",] 96 | |======================================================================= 97 | |Setting |Input type|Required 98 | | <> |<>, one of `["0", "1", "all"]`|No 99 | | <> |<>|No 100 | | <> |<>|No 101 | | <> |<>|No 102 | | <> |<>|No 103 | | <> |<>|No 104 | | <> |<>, one of `["none", "gzip", "snappy", "lz4", "zstd"]`|No 105 | | <> |<>|No 106 | | <> |a valid filesystem path|No 107 | | <> |a valid filesystem path|No 108 | | <> |<>|No 109 | | <> |<>|No 110 | | <> |<>|No 111 | | <> |<>|No 112 | | <> |<>|No 113 | | <> |<>|No 114 | | <> |<>|No 115 | | <> |<>|No 116 | | <> |<>|No 117 | | <> |<>|No 118 | | <> |<>|No 119 | | <> |<>|No 120 | | <> |<>|No 121 | | <> |<>|No 122 | | <> |<>|No 123 | | <> |<>|No 124 | | <> |<>|No 125 | | <> |<>|No 126 | | <> |<>|No 127 | | <> |<>|No 128 | | <> |<>|No 129 | | <> |<>|No 130 | | <> |<>|No 131 | | <> |<>|No 132 | | <> |<>|No 133 | | <> |<>, one of `["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"]`|No 134 | | <> |<>|No 135 | | <> |<>|No 136 | | <> |<>|No 137 | | <> |a valid filesystem path|No 138 | | <> |<>|No 139 | | <> |<>|No 140 | | <> |a valid filesystem path|No 141 | | <> |<>|No 142 | | <> |<>|No 143 | | <> |<>|Yes 144 | | <> |<>|No 145 | |======================================================================= 146 | 147 | Also see <> for a list of options supported by all 148 | output plugins. 149 | 150 |   151 | 152 | [id="plugins-{type}s-{plugin}-acks"] 153 | ===== `acks` 154 | 155 | * Value can be any of: `0`, `1`, `all` 156 | * Default value is `"1"` 157 | 158 | The number of acknowledgments the producer requires the leader to have received 159 | before considering a request complete. 160 | 161 | `acks=0`. The producer will not wait for any acknowledgment from the server. 162 | 163 | `acks=1`. The leader will write the record to its local log, but will respond 164 | without waiting for full acknowledgement from all followers. 165 | 166 | `acks=all`. The leader will wait for the full set of in-sync replicas before 167 | acknowledging the record. 168 | 169 | [id="plugins-{type}s-{plugin}-batch_size"] 170 | ===== `batch_size` 171 | 172 | * Value type is <> 173 | * Default value is `16384`. 174 | 175 | The producer will attempt to batch records together into fewer requests whenever multiple 176 | records are being sent to the same partition. This helps performance on both the client 177 | and the server. This configuration controls the default batch size in bytes. 178 | 179 | [id="plugins-{type}s-{plugin}-bootstrap_servers"] 180 | ===== `bootstrap_servers` 181 | 182 | * Value type is <> 183 | * Default value is `"localhost:9092"` 184 | 185 | This is for bootstrapping and the producer will only use it for getting metadata (topics, 186 | partitions and replicas). The socket connections for sending the actual data will be 187 | established based on the broker information returned in the metadata. The format is 188 | `host1:port1,host2:port2`, and the list can be a subset of brokers or a VIP pointing to a 189 | subset of brokers. 190 | 191 | [id="plugins-{type}s-{plugin}-buffer_memory"] 192 | ===== `buffer_memory` 193 | 194 | * Value type is <> 195 | * Default value is `33554432` (32MB). 196 | 197 | The total bytes of memory the producer can use to buffer records waiting to be sent to the server. 
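As an illustration of the options described above, here is a hedged sketch of an output block that sets them explicitly (broker addresses, topic name, and values are examples only, not recommendations):

[source,ruby]
----------------------------------
output {
  kafka {
    bootstrap_servers => "kafka1:9092,kafka2:9092"
    topic_id => "mytopic"
    acks => "all"
    batch_size => 32768
    buffer_memory => 67108864
  }
}
----------------------------------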
198 | 199 | [id="plugins-{type}s-{plugin}-client_dns_lookup"] 200 | ===== `client_dns_lookup` 201 | 202 | * Value type is <> 203 | * Valid options are `use_all_dns_ips`, `resolve_canonical_bootstrap_servers_only`, `default` 204 | * Default value is `"default"` 205 | 206 | Controls how DNS lookups are done. If set to `use_all_dns_ips`, Logstash tries 207 | all IP addresses returned for a hostname before failing the connection. 208 | If set to `resolve_canonical_bootstrap_servers_only`, each entry will be 209 | resolved and expanded into a list of canonical names. 210 | 211 | [NOTE] 212 | ==== 213 | Starting from Kafka 3 `default` value for `client.dns.lookup` value has been removed. 214 | If not explicitly configured it defaults to `use_all_dns_ips`. 215 | ==== 216 | 217 | [id="plugins-{type}s-{plugin}-client_id"] 218 | ===== `client_id` 219 | 220 | * Value type is <> 221 | * Default value is `"logstash"` 222 | 223 | The id string to pass to the server when making requests. 224 | The purpose of this is to be able to track the source of requests beyond just 225 | ip/port by allowing a logical application name to be included with the request 226 | 227 | [id="plugins-{type}s-{plugin}-compression_type"] 228 | ===== `compression_type` 229 | 230 | * Value can be any of: `none`, `gzip`, `snappy`, `lz4`, `zstd` 231 | * Default value is `"none"` 232 | 233 | The compression type for all data generated by the producer. 234 | The default is none (meaning no compression). Valid values are none, gzip, snappy, lz4, or zstd. 235 | 236 | [id="plugins-{type}s-{plugin}-connections_max_idle_ms"] 237 | ===== `connections_max_idle_ms` 238 | 239 | * Value type is <> 240 | * Default value is `540000` milliseconds (9 minutes). 241 | 242 | Close idle connections after the number of milliseconds specified by this config. 243 | 244 | [id="plugins-{type}s-{plugin}-jaas_path"] 245 | ===== `jaas_path` 246 | 247 | * Value type is <> 248 | * There is no default value for this setting. 249 | 250 | The Java Authentication and Authorization Service (JAAS) API supplies user authentication and authorization 251 | services for Kafka. This setting provides the path to the JAAS file. Sample JAAS file for Kafka client: 252 | [source,java] 253 | ---------------------------------- 254 | KafkaClient { 255 | com.sun.security.auth.module.Krb5LoginModule required 256 | useTicketCache=true 257 | renewTicket=true 258 | serviceName="kafka"; 259 | }; 260 | ---------------------------------- 261 | 262 | Please note that specifying `jaas_path` and `kerberos_config` in the config file will add these 263 | to the global JVM system properties. This means if you have multiple Kafka inputs, all of them would be sharing the same 264 | `jaas_path` and `kerberos_config`. If this is not desirable, you would have to run separate instances of Logstash on 265 | different JVM instances. 266 | 267 | [id="plugins-{type}s-{plugin}-kerberos_config"] 268 | ===== `kerberos_config` 269 | 270 | * Value type is <> 271 | * There is no default value for this setting. 272 | 273 | Optional path to kerberos config file. 
This is krb5.conf style as detailed in https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html 274 | 275 | [id="plugins-{type}s-{plugin}-key_serializer"] 276 | ===== `key_serializer` 277 | 278 | * Value type is <> 279 | * Default value is `"org.apache.kafka.common.serialization.StringSerializer"` 280 | 281 | Serializer class for the key of the message 282 | 283 | [id="plugins-{type}s-{plugin}-linger_ms"] 284 | ===== `linger_ms` 285 | 286 | * Value type is <> 287 | * Default value is `0` 288 | 289 | The producer groups together any records that arrive in between request 290 | transmissions into a single batched request. Normally this occurs only under 291 | load when records arrive faster than they can be sent out. However in some circumstances 292 | the client may want to reduce the number of requests even under moderate load. 293 | This setting accomplishes this by adding a small amount of artificial delay—that is, 294 | rather than immediately sending out a record the producer will wait for up to the given delay 295 | to allow other records to be sent so that the sends can be batched together. 296 | 297 | [id="plugins-{type}s-{plugin}-max_request_size"] 298 | ===== `max_request_size` 299 | 300 | * Value type is <> 301 | * Default value is `1048576` (1MB). 302 | 303 | The maximum size of a request 304 | 305 | [id="plugins-{type}s-{plugin}-message_headers"] 306 | ===== `message_headers` 307 | 308 | * Value type is <> 309 | ** Keys are header names, and must be <> 310 | ** Values are header values, and must be <> 311 | ** Values support interpolation from event field values 312 | * There is no default value for this setting. 313 | 314 | A map of key value pairs, each corresponding to a header name and its value respectively. 315 | Example: 316 | [source,ruby] 317 | ---------------------------------- 318 | message_headers => { "event_timestamp" => "%{@timestamp}" } 319 | ---------------------------------- 320 | 321 | [id="plugins-{type}s-{plugin}-message_key"] 322 | ===== `message_key` 323 | 324 | * Value type is <> 325 | * There is no default value for this setting. 326 | 327 | The key for the message. 328 | 329 | [id="plugins-{type}s-{plugin}-metadata_fetch_timeout_ms"] 330 | ===== `metadata_fetch_timeout_ms` 331 | 332 | * Value type is <> 333 | * Default value is `60000` milliseconds (60 seconds). 334 | 335 | The timeout setting for initial metadata request to fetch topic metadata. 336 | 337 | [id="plugins-{type}s-{plugin}-metadata_max_age_ms"] 338 | ===== `metadata_max_age_ms` 339 | 340 | * Value type is <> 341 | * Default value is `300000` milliseconds (5 minutes). 342 | 343 | The max time in milliseconds before a metadata refresh is forced. 344 | 345 | [id="plugins-{type}s-{plugin}-partitioner"] 346 | ===== `partitioner` 347 | 348 | * Value type is <> 349 | * There is no default value for this setting. 350 | 351 | The default behavior is to hash the `message_key` of an event to get the partition. 352 | When no message key is present, the plugin picks a partition in a round-robin fashion. 
353 | 
354 | Available options for choosing a partitioning strategy are as follows:
355 | 
356 | * `default` uses the default partitioner as described above
357 | * `round_robin` distributes writes to all partitions equally, regardless of `message_key`
358 | * `uniform_sticky` sticks to a partition for the duration of a batch, then randomly picks a new one
359 | 
360 | [id="plugins-{type}s-{plugin}-receive_buffer_bytes"]
361 | ===== `receive_buffer_bytes`
362 | 
363 | * Value type is <<number,number>>
364 | * Default value is `32768` (32KB).
365 | 
366 | The size of the TCP receive buffer to use when reading data.
367 | 
368 | [id="plugins-{type}s-{plugin}-reconnect_backoff_ms"]
369 | ===== `reconnect_backoff_ms`
370 | 
371 | * Value type is <<number,number>>
372 | * Default value is `50`.
373 | 
374 | The amount of time to wait before attempting to reconnect to a given host when a connection fails.
375 | 
376 | [id="plugins-{type}s-{plugin}-request_timeout_ms"]
377 | ===== `request_timeout_ms`
378 | 
379 | * Value type is <<number,number>>
380 | * Default value is `40000` milliseconds (40 seconds).
381 | 
382 | The configuration controls the maximum amount of time the client will wait
383 | for the response of a request. If the response is not received before the timeout
384 | elapses, the client will resend the request if necessary or fail the request if
385 | retries are exhausted.
386 | 
387 | [id="plugins-{type}s-{plugin}-retries"]
388 | ===== `retries`
389 | 
390 | * Value type is <<number,number>>
391 | * There is no default value for this setting.
392 | 
393 | The default retry behavior is to retry until successful. To prevent data loss,
394 | changing this setting is discouraged.
395 | 
396 | If you choose to set `retries`, a value greater than zero will cause the
397 | client to only retry a fixed number of times. This will result in data loss
398 | if a transport fault exists for longer than your retry count (network outage,
399 | Kafka down, etc.).
400 | 
401 | A value less than zero is a configuration error.
402 | 
403 | Starting with version 10.5.0, this plugin will only retry exceptions that are a subclass of
404 | https://kafka.apache.org/{kafka_client_doc}/javadoc/org/apache/kafka/common/errors/RetriableException.html[RetriableException]
405 | and
406 | https://kafka.apache.org/{kafka_client_doc}/javadoc/org/apache/kafka/common/errors/InterruptException.html[InterruptException].
407 | If producing a message throws any other exception, an error is logged and the message is dropped without retrying.
408 | This prevents the Logstash pipeline from hanging indefinitely.
409 | 
410 | In versions prior to 10.5.0, any exception is retried indefinitely unless the `retries` option is configured.
411 | 
412 | [id="plugins-{type}s-{plugin}-retry_backoff_ms"]
413 | ===== `retry_backoff_ms`
414 | 
415 | * Value type is <<number,number>>
416 | * Default value is `100` milliseconds.
417 | 
418 | The amount of time to wait before attempting to retry a failed produce request to a given topic partition.
419 | 
420 | [id="plugins-{type}s-{plugin}-sasl_client_callback_handler_class"]
421 | ===== `sasl_client_callback_handler_class`
422 | * Value type is <<string,string>>
423 | * There is no default value for this setting.
424 | 
425 | The SASL client callback handler class the specified SASL mechanism should use.
426 | 
427 | [id="plugins-{type}s-{plugin}-sasl_oauthbearer_token_endpoint_url"]
428 | ===== `sasl_oauthbearer_token_endpoint_url`
429 | * Value type is <<string,string>>
430 | * There is no default value for this setting.
431 | 
432 | The URL for the OAuth 2.0 issuer token endpoint.
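Tying together the partitioning and retry settings described above, here is a hedged configuration sketch. The topic name and the `order_id` field are hypothetical, and the retry values are illustrative rather than recommended:

[source,ruby]
----------------------------------
output {
  kafka {
    topic_id => "orders"
    # with the default partitioner, events sharing the same key land on the same partition
    message_key => "%{order_id}"
    # uncomment to spread writes evenly across partitions regardless of the key
    # partitioner => "round_robin"
    # bound retries instead of retrying until successful; this accepts possible data loss
    retries => 5
    retry_backoff_ms => 500
  }
}
----------------------------------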
433 | 434 | [id="plugins-{type}s-{plugin}-sasl_oauthbearer_scope_claim_name"] 435 | ===== `sasl_oauthbearer_scope_claim_name` 436 | * Value type is <> 437 | * Default value is `"scope"` 438 | 439 | (optional) The override name of the scope claim. 440 | 441 | [id="plugins-{type}s-{plugin}-sasl_iam_jar_paths"] 442 | ===== `sasl_iam_jar_paths` 443 | * Value type is <> 444 | * There is no default value for this setting. 445 | 446 | Contains the list of paths to jar libraries that contains cloud providers MSK IAM's clients. 447 | There is one jar per provider and can be retrieved as described in <<"plugins-{type}s-{plugin}-aws_msk_iam_auth">>. 448 | 449 | [id="plugins-{type}s-{plugin}-sasl_login_callback_handler_class"] 450 | ===== `sasl_login_callback_handler_class` 451 | * Value type is <> 452 | * There is no default value for this setting. 453 | 454 | The SASL login callback handler class the specified SASL mechanism should use. 455 | 456 | [id="plugins-{type}s-{plugin}-sasl_login_connect_timeout_ms"] 457 | ===== `sasl_login_connect_timeout_ms` 458 | * Value type is <> 459 | * There is no default value for this setting. 460 | 461 | (optional) The duration, in milliseconds, for HTTPS connect timeout 462 | 463 | [id="plugins-{type}s-{plugin}-sasl_login_read_timeout_ms"] 464 | ===== `sasl_login_read_timeout_ms` 465 | * Value type is <> 466 | * There is no default value for this setting. 467 | 468 | (optional) The duration, in milliseconds, for HTTPS read timeout. 469 | 470 | [id="plugins-{type}s-{plugin}-sasl_login_retry_backoff_ms"] 471 | ===== `sasl_login_retry_backoff_ms` 472 | * Value type is <> 473 | * Default value is `100` milliseconds. 474 | 475 | (optional) The duration, in milliseconds, to wait between HTTPS call attempts. 476 | 477 | [id="plugins-{type}s-{plugin}-sasl_login_retry_backoff_max_ms"] 478 | ===== `sasl_login_retry_backoff_max_ms` 479 | * Value type is <> 480 | * Default value is `10000` milliseconds. 481 | 482 | (optional) The maximum duration, in milliseconds, for HTTPS call attempts. 483 | 484 | [id="plugins-{type}s-{plugin}-sasl_jaas_config"] 485 | ===== `sasl_jaas_config` 486 | 487 | * Value type is <> 488 | * There is no default value for this setting. 489 | 490 | JAAS configuration setting local to this plugin instance, as opposed to settings using config file configured using `jaas_path`, which are shared across the JVM. This allows each plugin instance to have its own configuration. 491 | 492 | If both `sasl_jaas_config` and `jaas_path` configurations are set, the setting here takes precedence. 493 | 494 | Example (setting for Azure Event Hub): 495 | [source,ruby] 496 | output { 497 | kafka { 498 | sasl_jaas_config => "org.apache.kafka.common.security.plain.PlainLoginModule required username='auser' password='apassword';" 499 | } 500 | } 501 | 502 | [id="plugins-{type}s-{plugin}-sasl_kerberos_service_name"] 503 | ===== `sasl_kerberos_service_name` 504 | 505 | * Value type is <> 506 | * There is no default value for this setting. 507 | 508 | The Kerberos principal name that Kafka broker runs as. 509 | This can be defined either in Kafka's JAAS config or in Kafka's config. 510 | 511 | [id="plugins-{type}s-{plugin}-sasl_mechanism"] 512 | ===== `sasl_mechanism` 513 | 514 | * Value type is <> 515 | * Default value is `"GSSAPI"` 516 | 517 | http://kafka.apache.org/documentation.html#security_sasl[SASL mechanism] used for client connections. 518 | This may be any mechanism for which a security provider is available. 519 | For AWS MSK IAM authentication use `AWS_MSK_IAM`. 
520 | GSSAPI is the default mechanism. 521 | 522 | [id="plugins-{type}s-{plugin}-security_protocol"] 523 | ===== `security_protocol` 524 | 525 | * Value can be any of: `PLAINTEXT`, `SSL`, `SASL_PLAINTEXT`, `SASL_SSL` 526 | * Default value is `"PLAINTEXT"` 527 | 528 | Security protocol to use, which can be either of PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL 529 | 530 | [id="plugins-{type}s-{plugin}-send_buffer_bytes"] 531 | ===== `send_buffer_bytes` 532 | 533 | * Value type is <> 534 | * Default value is `131072` (128KB). 535 | 536 | The size of the TCP send buffer to use when sending data. 537 | 538 | [id="plugins-{type}s-{plugin}-ssl_endpoint_identification_algorithm"] 539 | ===== `ssl_endpoint_identification_algorithm` 540 | 541 | * Value type is <> 542 | * Default value is `"https"` 543 | 544 | The endpoint identification algorithm, defaults to `"https"`. Set to empty string `""` to disable 545 | 546 | [id="plugins-{type}s-{plugin}-ssl_key_password"] 547 | ===== `ssl_key_password` 548 | 549 | * Value type is <> 550 | * There is no default value for this setting. 551 | 552 | The password of the private key in the key store file. 553 | 554 | [id="plugins-{type}s-{plugin}-ssl_keystore_location"] 555 | ===== `ssl_keystore_location` 556 | 557 | * Value type is <> 558 | * There is no default value for this setting. 559 | 560 | If client authentication is required, this setting stores the keystore path. 561 | 562 | [id="plugins-{type}s-{plugin}-ssl_keystore_password"] 563 | ===== `ssl_keystore_password` 564 | 565 | * Value type is <> 566 | * There is no default value for this setting. 567 | 568 | If client authentication is required, this setting stores the keystore password 569 | 570 | [id="plugins-{type}s-{plugin}-ssl_keystore_type"] 571 | ===== `ssl_keystore_type` 572 | 573 | * Value type is <> 574 | * There is no default value for this setting. 575 | 576 | The keystore type. 577 | 578 | [id="plugins-{type}s-{plugin}-ssl_truststore_location"] 579 | ===== `ssl_truststore_location` 580 | 581 | * Value type is <> 582 | * There is no default value for this setting. 583 | 584 | The JKS truststore path to validate the Kafka broker's certificate. 585 | 586 | [id="plugins-{type}s-{plugin}-ssl_truststore_password"] 587 | ===== `ssl_truststore_password` 588 | 589 | * Value type is <> 590 | * There is no default value for this setting. 591 | 592 | The truststore password 593 | 594 | [id="plugins-{type}s-{plugin}-ssl_truststore_type"] 595 | ===== `ssl_truststore_type` 596 | 597 | * Value type is <> 598 | * There is no default value for this setting. 599 | 600 | The truststore type. 601 | 602 | [id="plugins-{type}s-{plugin}-topic_id"] 603 | ===== `topic_id` 604 | 605 | * This is a required setting. 606 | * Value type is <> 607 | * There is no default value for this setting. 
608 | 609 | The topic to produce messages to 610 | 611 | [id="plugins-{type}s-{plugin}-value_serializer"] 612 | ===== `value_serializer` 613 | 614 | * Value type is <> 615 | * Default value is `"org.apache.kafka.common.serialization.StringSerializer"` 616 | 617 | Serializer class for the value of the message 618 | 619 | 620 | 621 | [id="plugins-{type}s-{plugin}-common-options"] 622 | include::{include_path}/{type}.asciidoc[] 623 | 624 | :default_codec!: 625 | -------------------------------------------------------------------------------- /gradle.properties: -------------------------------------------------------------------------------- 1 | org.gradle.daemon=false 2 | -------------------------------------------------------------------------------- /gradle/wrapper/gradle-wrapper.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/logstash-plugins/logstash-integration-kafka/a5b3251699a931f100902e1e5dd6aa1809f10e25/gradle/wrapper/gradle-wrapper.jar -------------------------------------------------------------------------------- /gradle/wrapper/gradle-wrapper.properties: -------------------------------------------------------------------------------- 1 | distributionBase=GRADLE_USER_HOME 2 | distributionPath=wrapper/dists 3 | distributionUrl=https\://services.gradle.org/distributions/gradle-8.7-bin.zip 4 | networkTimeout=10000 5 | validateDistributionUrl=true 6 | zipStoreBase=GRADLE_USER_HOME 7 | zipStorePath=wrapper/dists 8 | -------------------------------------------------------------------------------- /gradlew: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # 4 | # Copyright © 2015-2021 the original authors. 5 | # 6 | # Licensed under the Apache License, Version 2.0 (the "License"); 7 | # you may not use this file except in compliance with the License. 8 | # You may obtain a copy of the License at 9 | # 10 | # https://www.apache.org/licenses/LICENSE-2.0 11 | # 12 | # Unless required by applicable law or agreed to in writing, software 13 | # distributed under the License is distributed on an "AS IS" BASIS, 14 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | # See the License for the specific language governing permissions and 16 | # limitations under the License. 17 | # 18 | 19 | ############################################################################## 20 | # 21 | # Gradle start up script for POSIX generated by Gradle. 22 | # 23 | # Important for running: 24 | # 25 | # (1) You need a POSIX-compliant shell to run this script. If your /bin/sh is 26 | # noncompliant, but you have some other compliant shell such as ksh or 27 | # bash, then to run this script, type that shell name before the whole 28 | # command line, like: 29 | # 30 | # ksh Gradle 31 | # 32 | # Busybox and similar reduced shells will NOT work, because this script 33 | # requires all of these POSIX shell features: 34 | # * functions; 35 | # * expansions «$var», «${var}», «${var:-default}», «${var+SET}», 36 | # «${var#prefix}», «${var%suffix}», and «$( cmd )»; 37 | # * compound commands having a testable exit status, especially «case»; 38 | # * various built-in commands including «command», «set», and «ulimit». 39 | # 40 | # Important for patching: 41 | # 42 | # (2) This script targets any POSIX shell, so it avoids extensions provided 43 | # by Bash, Ksh, etc; in particular arrays are avoided. 
44 | # 45 | # The "traditional" practice of packing multiple parameters into a 46 | # space-separated string is a well documented source of bugs and security 47 | # problems, so this is (mostly) avoided, by progressively accumulating 48 | # options in "$@", and eventually passing that to Java. 49 | # 50 | # Where the inherited environment variables (DEFAULT_JVM_OPTS, JAVA_OPTS, 51 | # and GRADLE_OPTS) rely on word-splitting, this is performed explicitly; 52 | # see the in-line comments for details. 53 | # 54 | # There are tweaks for specific operating systems such as AIX, CygWin, 55 | # Darwin, MinGW, and NonStop. 56 | # 57 | # (3) This script is generated from the Groovy template 58 | # https://github.com/gradle/gradle/blob/HEAD/subprojects/plugins/src/main/resources/org/gradle/api/internal/plugins/unixStartScript.txt 59 | # within the Gradle project. 60 | # 61 | # You can find Gradle at https://github.com/gradle/gradle/. 62 | # 63 | ############################################################################## 64 | 65 | # Attempt to set APP_HOME 66 | 67 | # Resolve links: $0 may be a link 68 | app_path=$0 69 | 70 | # Need this for daisy-chained symlinks. 71 | while 72 | APP_HOME=${app_path%"${app_path##*/}"} # leaves a trailing /; empty if no leading path 73 | [ -h "$app_path" ] 74 | do 75 | ls=$( ls -ld "$app_path" ) 76 | link=${ls#*' -> '} 77 | case $link in #( 78 | /*) app_path=$link ;; #( 79 | *) app_path=$APP_HOME$link ;; 80 | esac 81 | done 82 | 83 | # This is normally unused 84 | # shellcheck disable=SC2034 85 | APP_BASE_NAME=${0##*/} 86 | # Discard cd standard output in case $CDPATH is set (https://github.com/gradle/gradle/issues/25036) 87 | APP_HOME=$( cd "${APP_HOME:-./}" > /dev/null && pwd -P ) || exit 88 | 89 | # Use the maximum available, or set MAX_FD != -1 to use that value. 90 | MAX_FD=maximum 91 | 92 | warn () { 93 | echo "$*" 94 | } >&2 95 | 96 | die () { 97 | echo 98 | echo "$*" 99 | echo 100 | exit 1 101 | } >&2 102 | 103 | # OS specific support (must be 'true' or 'false'). 104 | cygwin=false 105 | msys=false 106 | darwin=false 107 | nonstop=false 108 | case "$( uname )" in #( 109 | CYGWIN* ) cygwin=true ;; #( 110 | Darwin* ) darwin=true ;; #( 111 | MSYS* | MINGW* ) msys=true ;; #( 112 | NONSTOP* ) nonstop=true ;; 113 | esac 114 | 115 | CLASSPATH=$APP_HOME/gradle/wrapper/gradle-wrapper.jar 116 | 117 | 118 | # Determine the Java command to use to start the JVM. 119 | if [ -n "$JAVA_HOME" ] ; then 120 | if [ -x "$JAVA_HOME/jre/sh/java" ] ; then 121 | # IBM's JDK on AIX uses strange locations for the executables 122 | JAVACMD=$JAVA_HOME/jre/sh/java 123 | else 124 | JAVACMD=$JAVA_HOME/bin/java 125 | fi 126 | if [ ! -x "$JAVACMD" ] ; then 127 | die "ERROR: JAVA_HOME is set to an invalid directory: $JAVA_HOME 128 | 129 | Please set the JAVA_HOME variable in your environment to match the 130 | location of your Java installation." 131 | fi 132 | else 133 | JAVACMD=java 134 | if ! command -v java >/dev/null 2>&1 135 | then 136 | die "ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. 137 | 138 | Please set the JAVA_HOME variable in your environment to match the 139 | location of your Java installation." 140 | fi 141 | fi 142 | 143 | # Increase the maximum file descriptors if we can. 144 | if ! "$cygwin" && ! "$darwin" && ! "$nonstop" ; then 145 | case $MAX_FD in #( 146 | max*) 147 | # In POSIX sh, ulimit -H is undefined. That's why the result is checked to see if it worked. 
148 | # shellcheck disable=SC2039,SC3045 149 | MAX_FD=$( ulimit -H -n ) || 150 | warn "Could not query maximum file descriptor limit" 151 | esac 152 | case $MAX_FD in #( 153 | '' | soft) :;; #( 154 | *) 155 | # In POSIX sh, ulimit -n is undefined. That's why the result is checked to see if it worked. 156 | # shellcheck disable=SC2039,SC3045 157 | ulimit -n "$MAX_FD" || 158 | warn "Could not set maximum file descriptor limit to $MAX_FD" 159 | esac 160 | fi 161 | 162 | # Collect all arguments for the java command, stacking in reverse order: 163 | # * args from the command line 164 | # * the main class name 165 | # * -classpath 166 | # * -D...appname settings 167 | # * --module-path (only if needed) 168 | # * DEFAULT_JVM_OPTS, JAVA_OPTS, and GRADLE_OPTS environment variables. 169 | 170 | # For Cygwin or MSYS, switch paths to Windows format before running java 171 | if "$cygwin" || "$msys" ; then 172 | APP_HOME=$( cygpath --path --mixed "$APP_HOME" ) 173 | CLASSPATH=$( cygpath --path --mixed "$CLASSPATH" ) 174 | 175 | JAVACMD=$( cygpath --unix "$JAVACMD" ) 176 | 177 | # Now convert the arguments - kludge to limit ourselves to /bin/sh 178 | for arg do 179 | if 180 | case $arg in #( 181 | -*) false ;; # don't mess with options #( 182 | /?*) t=${arg#/} t=/${t%%/*} # looks like a POSIX filepath 183 | [ -e "$t" ] ;; #( 184 | *) false ;; 185 | esac 186 | then 187 | arg=$( cygpath --path --ignore --mixed "$arg" ) 188 | fi 189 | # Roll the args list around exactly as many times as the number of 190 | # args, so each arg winds up back in the position where it started, but 191 | # possibly modified. 192 | # 193 | # NB: a `for` loop captures its iteration list before it begins, so 194 | # changing the positional parameters here affects neither the number of 195 | # iterations, nor the values presented in `arg`. 196 | shift # remove old arg 197 | set -- "$@" "$arg" # push replacement arg 198 | done 199 | fi 200 | 201 | 202 | # Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. 203 | DEFAULT_JVM_OPTS='"-Xmx64m" "-Xms64m"' 204 | 205 | # Collect all arguments for the java command: 206 | # * DEFAULT_JVM_OPTS, JAVA_OPTS, JAVA_OPTS, and optsEnvironmentVar are not allowed to contain shell fragments, 207 | # and any embedded shellness will be escaped. 208 | # * For example: A user cannot expect ${Hostname} to be expanded, as it is an environment variable and will be 209 | # treated as '${Hostname}' itself on the command line. 210 | 211 | set -- \ 212 | "-Dorg.gradle.appname=$APP_BASE_NAME" \ 213 | -classpath "$CLASSPATH" \ 214 | org.gradle.wrapper.GradleWrapperMain \ 215 | "$@" 216 | 217 | # Stop when "xargs" is not available. 218 | if ! command -v xargs >/dev/null 2>&1 219 | then 220 | die "xargs is not available" 221 | fi 222 | 223 | # Use "xargs" to parse quoted args. 224 | # 225 | # With -n1 it outputs one arg per line, with the quotes and backslashes removed. 226 | # 227 | # In Bash we could simply go: 228 | # 229 | # readarray ARGS < <( xargs -n1 <<<"$var" ) && 230 | # set -- "${ARGS[@]}" "$@" 231 | # 232 | # but POSIX shell has neither arrays nor command substitution, so instead we 233 | # post-process each arg (as a line of input to sed) to backslash-escape any 234 | # character that might be a shell metacharacter, then use eval to reverse 235 | # that process (while maintaining the separation between arguments), and wrap 236 | # the whole thing up as a single "set" statement. 
237 | # 238 | # This will of course break if any of these variables contains a newline or 239 | # an unmatched quote. 240 | # 241 | 242 | eval "set -- $( 243 | printf '%s\n' "$DEFAULT_JVM_OPTS $JAVA_OPTS $GRADLE_OPTS" | 244 | xargs -n1 | 245 | sed ' s~[^-[:alnum:]+,./:=@_]~\\&~g; ' | 246 | tr '\n' ' ' 247 | )" '"$@"' 248 | 249 | exec "$JAVACMD" "$@" 250 | -------------------------------------------------------------------------------- /gradlew.bat: -------------------------------------------------------------------------------- 1 | @rem 2 | @rem Copyright 2015 the original author or authors. 3 | @rem 4 | @rem Licensed under the Apache License, Version 2.0 (the "License"); 5 | @rem you may not use this file except in compliance with the License. 6 | @rem You may obtain a copy of the License at 7 | @rem 8 | @rem https://www.apache.org/licenses/LICENSE-2.0 9 | @rem 10 | @rem Unless required by applicable law or agreed to in writing, software 11 | @rem distributed under the License is distributed on an "AS IS" BASIS, 12 | @rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | @rem See the License for the specific language governing permissions and 14 | @rem limitations under the License. 15 | @rem 16 | 17 | @if "%DEBUG%"=="" @echo off 18 | @rem ########################################################################## 19 | @rem 20 | @rem Gradle startup script for Windows 21 | @rem 22 | @rem ########################################################################## 23 | 24 | @rem Set local scope for the variables with windows NT shell 25 | if "%OS%"=="Windows_NT" setlocal 26 | 27 | set DIRNAME=%~dp0 28 | if "%DIRNAME%"=="" set DIRNAME=. 29 | @rem This is normally unused 30 | set APP_BASE_NAME=%~n0 31 | set APP_HOME=%DIRNAME% 32 | 33 | @rem Resolve any "." and ".." in APP_HOME to make it shorter. 34 | for %%i in ("%APP_HOME%") do set APP_HOME=%%~fi 35 | 36 | @rem Add default JVM options here. You can also use JAVA_OPTS and GRADLE_OPTS to pass JVM options to this script. 37 | set DEFAULT_JVM_OPTS="-Xmx64m" "-Xms64m" 38 | 39 | @rem Find java.exe 40 | if defined JAVA_HOME goto findJavaFromJavaHome 41 | 42 | set JAVA_EXE=java.exe 43 | %JAVA_EXE% -version >NUL 2>&1 44 | if %ERRORLEVEL% equ 0 goto execute 45 | 46 | echo. 1>&2 47 | echo ERROR: JAVA_HOME is not set and no 'java' command could be found in your PATH. 1>&2 48 | echo. 1>&2 49 | echo Please set the JAVA_HOME variable in your environment to match the 1>&2 50 | echo location of your Java installation. 1>&2 51 | 52 | goto fail 53 | 54 | :findJavaFromJavaHome 55 | set JAVA_HOME=%JAVA_HOME:"=% 56 | set JAVA_EXE=%JAVA_HOME%/bin/java.exe 57 | 58 | if exist "%JAVA_EXE%" goto execute 59 | 60 | echo. 1>&2 61 | echo ERROR: JAVA_HOME is set to an invalid directory: %JAVA_HOME% 1>&2 62 | echo. 1>&2 63 | echo Please set the JAVA_HOME variable in your environment to match the 1>&2 64 | echo location of your Java installation. 1>&2 65 | 66 | goto fail 67 | 68 | :execute 69 | @rem Setup the command line 70 | 71 | set CLASSPATH=%APP_HOME%\gradle\wrapper\gradle-wrapper.jar 72 | 73 | 74 | @rem Execute Gradle 75 | "%JAVA_EXE%" %DEFAULT_JVM_OPTS% %JAVA_OPTS% %GRADLE_OPTS% "-Dorg.gradle.appname=%APP_BASE_NAME%" -classpath "%CLASSPATH%" org.gradle.wrapper.GradleWrapperMain %* 76 | 77 | :end 78 | @rem End local scope for the variables with windows NT shell 79 | if %ERRORLEVEL% equ 0 goto mainEnd 80 | 81 | :fail 82 | rem Set variable GRADLE_EXIT_CONSOLE if you need the _script_ return code instead of 83 | rem the _cmd.exe /c_ return code! 
84 | set EXIT_CODE=%ERRORLEVEL% 85 | if %EXIT_CODE% equ 0 set EXIT_CODE=1 86 | if not ""=="%GRADLE_EXIT_CONSOLE%" exit %EXIT_CODE% 87 | exit /b %EXIT_CODE% 88 | 89 | :mainEnd 90 | if "%OS%"=="Windows_NT" endlocal 91 | 92 | :omega 93 | -------------------------------------------------------------------------------- /kafka_test_setup.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Setup Kafka and create test topics 3 | 4 | set -ex 5 | # check if KAFKA_VERSION env var is set 6 | if [ -n "${KAFKA_VERSION+1}" ]; then 7 | echo "KAFKA_VERSION is $KAFKA_VERSION" 8 | else 9 | KAFKA_VERSION=3.4.1 10 | fi 11 | 12 | export _JAVA_OPTIONS="-Djava.net.preferIPv4Stack=true" 13 | 14 | rm -rf build 15 | mkdir build 16 | 17 | echo "Setup Kafka version $KAFKA_VERSION" 18 | if [ ! -e "kafka_2.13-$KAFKA_VERSION.tgz" ]; then 19 | echo "Kafka not present locally, downloading" 20 | curl -s -o "kafka_2.13-$KAFKA_VERSION.tgz" "https://archive.apache.org/dist/kafka/$KAFKA_VERSION/kafka_2.13-$KAFKA_VERSION.tgz" 21 | fi 22 | cp kafka_2.13-$KAFKA_VERSION.tgz build/kafka.tgz 23 | mkdir build/kafka && tar xzf build/kafka.tgz -C build/kafka --strip-components 1 24 | 25 | echo "Starting ZooKeeper" 26 | build/kafka/bin/zookeeper-server-start.sh -daemon build/kafka/config/zookeeper.properties 27 | sleep 10 28 | echo "Starting Kafka broker" 29 | build/kafka/bin/kafka-server-start.sh -daemon build/kafka/config/server.properties --override advertised.host.name=127.0.0.1 --override log.dirs="${PWD}/build/kafka-logs" 30 | sleep 10 31 | 32 | echo "Setup Confluent Platform" 33 | # check if CONFLUENT_VERSION env var is set 34 | if [ -n "${CONFLUENT_VERSION+1}" ]; then 35 | echo "CONFLUENT_VERSION is $CONFLUENT_VERSION" 36 | else 37 | CONFLUENT_VERSION=7.4.0 38 | fi 39 | if [ ! 
-e confluent-community-$CONFLUENT_VERSION.tar.gz ]; then 40 | echo "Confluent Platform not present locally, downloading" 41 | CONFLUENT_MINOR=$(echo "$CONFLUENT_VERSION" | sed -n 's/^\([[:digit:]]*\.[[:digit:]]*\)\.[[:digit:]]*$/\1/p') 42 | echo "CONFLUENT_MINOR is $CONFLUENT_MINOR" 43 | curl -s -o confluent-community-$CONFLUENT_VERSION.tar.gz http://packages.confluent.io/archive/$CONFLUENT_MINOR/confluent-community-$CONFLUENT_VERSION.tar.gz 44 | fi 45 | cp confluent-community-$CONFLUENT_VERSION.tar.gz build/confluent_platform.tar.gz 46 | mkdir build/confluent_platform && tar xzf build/confluent_platform.tar.gz -C build/confluent_platform --strip-components 1 47 | 48 | echo "Configuring TLS on Schema registry" 49 | rm -Rf tls_repository 50 | mkdir tls_repository 51 | ./setup_keystore_and_truststore.sh 52 | # configure schema-registry to handle https on 8083 port 53 | if [[ "$OSTYPE" == "darwin"* ]]; then 54 | sed -i '' 's/http:\/\/0.0.0.0:8081/http:\/\/0.0.0.0:8081, https:\/\/0.0.0.0:8083/g' build/confluent_platform/etc/schema-registry/schema-registry.properties 55 | else 56 | sed -i 's/http:\/\/0.0.0.0:8081/http:\/\/0.0.0.0:8081, https:\/\/0.0.0.0:8083/g' build/confluent_platform/etc/schema-registry/schema-registry.properties 57 | fi 58 | echo "ssl.keystore.location=`pwd`/tls_repository/schema_reg.jks" >> build/confluent_platform/etc/schema-registry/schema-registry.properties 59 | echo "ssl.keystore.password=changeit" >> build/confluent_platform/etc/schema-registry/schema-registry.properties 60 | echo "ssl.key.password=changeit" >> build/confluent_platform/etc/schema-registry/schema-registry.properties 61 | 62 | cp build/confluent_platform/etc/schema-registry/schema-registry.properties build/confluent_platform/etc/schema-registry/authed-schema-registry.properties 63 | echo "authentication.method=BASIC" >> build/confluent_platform/etc/schema-registry/authed-schema-registry.properties 64 | echo "authentication.roles=admin,developer,user,sr-user" >> build/confluent_platform/etc/schema-registry/authed-schema-registry.properties 65 | echo "authentication.realm=SchemaRegistry-Props" >> build/confluent_platform/etc/schema-registry/authed-schema-registry.properties 66 | cp spec/fixtures/jaas.config build/confluent_platform/etc/schema-registry 67 | cp spec/fixtures/pwd build/confluent_platform/etc/schema-registry 68 | 69 | echo "Setting up test topics with test data" 70 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_topic_plain --bootstrap-server localhost:9092 71 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_topic_plain_with_headers --bootstrap-server localhost:9092 72 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_topic_plain_with_headers_badly --bootstrap-server localhost:9092 73 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_topic_snappy --bootstrap-server localhost:9092 74 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_topic_lz4 --bootstrap-server localhost:9092 75 | build/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic logstash_integration_topic1 --bootstrap-server localhost:9092 76 | build/kafka/bin/kafka-topics.sh --create --partitions 2 --replication-factor 1 --topic logstash_integration_topic2 --bootstrap-server localhost:9092 77 | 
build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_topic3 --bootstrap-server localhost:9092 78 | build/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic logstash_integration_gzip_topic --bootstrap-server localhost:9092 79 | build/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic logstash_integration_snappy_topic --bootstrap-server localhost:9092 80 | build/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic logstash_integration_lz4_topic --bootstrap-server localhost:9092 81 | build/kafka/bin/kafka-topics.sh --create --partitions 1 --replication-factor 1 --topic logstash_integration_zstd_topic --bootstrap-server localhost:9092 82 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_partitioner_topic --bootstrap-server localhost:9092 83 | build/kafka/bin/kafka-topics.sh --create --partitions 3 --replication-factor 1 --topic logstash_integration_static_membership_topic --bootstrap-server localhost:9092 84 | curl -s -o build/apache_logs.txt https://s3.amazonaws.com/data.elasticsearch.org/apache_logs/apache_logs.txt 85 | cat build/apache_logs.txt | build/kafka/bin/kafka-console-producer.sh --topic logstash_integration_topic_plain --broker-list localhost:9092 86 | cat build/apache_logs.txt | build/kafka/bin/kafka-console-producer.sh --topic logstash_integration_topic_snappy --broker-list localhost:9092 --compression-codec snappy 87 | cat build/apache_logs.txt | build/kafka/bin/kafka-console-producer.sh --topic logstash_integration_topic_lz4 --broker-list localhost:9092 --compression-codec lz4 88 | 89 | echo "Setup complete, running specs" 90 | -------------------------------------------------------------------------------- /kafka_test_teardown.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Setup Kafka and create test topics 3 | set -ex 4 | 5 | echo "Unregistering test topics" 6 | build/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic 'logstash_integration_.*' 7 | build/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic 'topic_avro.*' 8 | 9 | echo "Stopping Kafka broker" 10 | build/kafka/bin/kafka-server-stop.sh 11 | echo "Stopping zookeeper" 12 | build/kafka/bin/zookeeper-server-stop.sh 13 | 14 | echo "Clean TLS folder" 15 | rm -Rf tls_repository 16 | -------------------------------------------------------------------------------- /lib/logstash/outputs/kafka.rb: -------------------------------------------------------------------------------- 1 | require 'logstash/namespace' 2 | require 'logstash/outputs/base' 3 | require 'java' 4 | require 'logstash-integration-kafka_jars.rb' 5 | require 'logstash/plugin_mixins/kafka/common' 6 | 7 | # Write events to a Kafka topic. This uses the Kafka Producer API to write messages to a topic on 8 | # the broker. 9 | # 10 | # Here's a compatibility matrix that shows the Kafka client versions that are compatible with each combination 11 | # of Logstash and the Kafka output plugin: 12 | # 13 | # [options="header"] 14 | # |========================================================== 15 | # |Kafka Client Version |Logstash Version |Plugin Version |Why? 
16 | # |0.8 |2.0.0 - 2.x.x |<3.0.0 |Legacy, 0.8 is still popular 17 | # |0.9 |2.0.0 - 2.3.x | 3.x.x |Works with the old Ruby Event API (`event['product']['price'] = 10`) 18 | # |0.9 |2.4.x - 5.x.x | 4.x.x |Works with the new getter/setter APIs (`event.set('[product][price]', 10)`) 19 | # |0.10.0.x |2.4.x - 5.x.x | 5.x.x |Not compatible with the <= 0.9 broker 20 | # |0.10.1.x |2.4.x - 5.x.x | 6.x.x | 21 | # |========================================================== 22 | # 23 | # NOTE: We recommended that you use matching Kafka client and broker versions. During upgrades, you should 24 | # upgrade brokers before clients because brokers target backwards compatibility. For example, the 0.9 broker 25 | # is compatible with both the 0.8 consumer and 0.9 consumer APIs, but not the other way around. 26 | # 27 | # This output supports connecting to Kafka over: 28 | # 29 | # * SSL (requires plugin version 3.0.0 or later) 30 | # * Kerberos SASL (requires plugin version 5.1.0 or later) 31 | # 32 | # By default security is disabled but can be turned on as needed. 33 | # 34 | # The only required configuration is the topic_id. The default codec is plain, 35 | # so events will be persisted on the broker in plain format. Logstash will encode your messages with not 36 | # only the message but also with a timestamp and hostname. If you do not want anything but your message 37 | # passing through, you should make the output configuration something like: 38 | # [source,ruby] 39 | # output { 40 | # kafka { 41 | # codec => plain { 42 | # format => "%{message}" 43 | # } 44 | # topic_id => "mytopic" 45 | # } 46 | # } 47 | # For more information see http://kafka.apache.org/documentation.html#theproducer 48 | # 49 | # Kafka producer configuration: http://kafka.apache.org/documentation.html#newproducerconfigs 50 | class LogStash::Outputs::Kafka < LogStash::Outputs::Base 51 | 52 | java_import org.apache.kafka.clients.producer.ProducerRecord 53 | 54 | include LogStash::PluginMixins::Kafka::Common 55 | 56 | declare_threadsafe! 57 | 58 | config_name 'kafka' 59 | 60 | default :codec, 'plain' 61 | 62 | # The number of acknowledgments the producer requires the leader to have received 63 | # before considering a request complete. 64 | # 65 | # acks=0, the producer will not wait for any acknowledgment from the server at all. 66 | # acks=1, This will mean the leader will write the record to its local log but 67 | # will respond without awaiting full acknowledgement from all followers. 68 | # acks=all, This means the leader will wait for the full set of in-sync replicas to acknowledge the record. 69 | config :acks, :validate => ["0", "1", "all"], :default => "1" 70 | # The producer will attempt to batch records together into fewer requests whenever multiple 71 | # records are being sent to the same partition. This helps performance on both the client 72 | # and the server. This configuration controls the default batch size in bytes. 73 | config :batch_size, :validate => :number, :default => 16_384 # Kafka default 74 | # This is for bootstrapping and the producer will only use it for getting metadata (topics, 75 | # partitions and replicas). The socket connections for sending the actual data will be 76 | # established based on the broker information returned in the metadata. The format is 77 | # `host1:port1,host2:port2`, and the list can be a subset of brokers or a VIP pointing to a 78 | # subset of brokers. 
79 | config :bootstrap_servers, :validate => :string, :default => 'localhost:9092' 80 | # The total bytes of memory the producer can use to buffer records waiting to be sent to the server. 81 | config :buffer_memory, :validate => :number, :default => 33_554_432 # (32M) Kafka default 82 | # The compression type for all data generated by the producer. 83 | # The default is none (i.e. no compression). Valid values are none, gzip, snappy, lz4 or zstd. 84 | config :compression_type, :validate => ["none", "gzip", "snappy", "lz4", "zstd"], :default => "none" 85 | # How DNS lookups should be done. If set to `use_all_dns_ips`, when the lookup returns multiple 86 | # IP addresses for a hostname, they will all be attempted to connect to before failing the 87 | # connection. If the value is `resolve_canonical_bootstrap_servers_only` each entry will be 88 | # resolved and expanded into a list of canonical names. 89 | # Starting from Kafka 3 `default` value for `client.dns.lookup` value has been removed. If explicitly configured it fallbacks to `use_all_dns_ips`. 90 | config :client_dns_lookup, :validate => ["default", "use_all_dns_ips", "resolve_canonical_bootstrap_servers_only"], :default => "use_all_dns_ips" 91 | # The id string to pass to the server when making requests. 92 | # The purpose of this is to be able to track the source of requests beyond just 93 | # ip/port by allowing a logical application name to be included with the request 94 | config :client_id, :validate => :string, :default => "logstash" 95 | # Serializer class for the key of the message 96 | config :key_serializer, :validate => :string, :default => 'org.apache.kafka.common.serialization.StringSerializer' 97 | # The producer groups together any records that arrive in between request 98 | # transmissions into a single batched request. Normally this occurs only under 99 | # load when records arrive faster than they can be sent out. However in some circumstances 100 | # the client may want to reduce the number of requests even under moderate load. 101 | # This setting accomplishes this by adding a small amount of artificial delay—that is, 102 | # rather than immediately sending out a record the producer will wait for up to the given delay 103 | # to allow other records to be sent so that the sends can be batched together. 104 | config :linger_ms, :validate => :number, :default => 0 # Kafka default 105 | # The maximum size of a request 106 | config :max_request_size, :validate => :number, :default => 1_048_576 # (1MB) Kafka default 107 | # The key for the message 108 | config :message_key, :validate => :string 109 | # Headers added to kafka message in the form of key-value pairs 110 | config :message_headers, :validate => :hash, :default => {} 111 | # the timeout setting for initial metadata request to fetch topic metadata. 112 | config :metadata_fetch_timeout_ms, :validate => :number, :default => 60_000 113 | # Partitioner to use - can be `default`, `uniform_sticky`, `round_robin` or a fully qualified class name of a custom partitioner. 114 | config :partitioner, :validate => :string 115 | # The size of the TCP receive buffer to use when reading data 116 | config :receive_buffer_bytes, :validate => :number, :default => 32_768 # (32KB) Kafka default 117 | # The amount of time to wait before attempting to reconnect to a given host when a connection fails. 118 | config :reconnect_backoff_ms, :validate => :number, :default => 50 # Kafka default 119 | # The default retry behavior is to retry until successful. 
To prevent data loss, 120 | # the use of this setting is discouraged. 121 | # 122 | # If you choose to set `retries`, a value greater than zero will cause the 123 | # client to only retry a fixed number of times. This will result in data loss 124 | # if a transient error outlasts your retry count. 125 | # 126 | # A value less than zero is a configuration error. 127 | config :retries, :validate => :number 128 | # The amount of time to wait before attempting to retry a failed produce request to a given topic partition. 129 | config :retry_backoff_ms, :validate => :number, :default => 100 # Kafka default 130 | # The size of the TCP send buffer to use when sending data. 131 | config :send_buffer_bytes, :validate => :number, :default => 131_072 # (128KB) Kafka default 132 | # The truststore type. 133 | config :ssl_truststore_type, :validate => :string 134 | # The JKS truststore path to validate the Kafka broker's certificate. 135 | config :ssl_truststore_location, :validate => :path 136 | # The truststore password 137 | config :ssl_truststore_password, :validate => :password 138 | # The keystore type. 139 | config :ssl_keystore_type, :validate => :string 140 | # If client authentication is required, this setting stores the keystore path. 141 | config :ssl_keystore_location, :validate => :path 142 | # If client authentication is required, this setting stores the keystore password 143 | config :ssl_keystore_password, :validate => :password 144 | # The password of the private key in the key store file. 145 | config :ssl_key_password, :validate => :password 146 | # Algorithm to use when verifying host. Set to "" to disable 147 | config :ssl_endpoint_identification_algorithm, :validate => :string, :default => 'https' 148 | # Security protocol to use, which can be either of PLAINTEXT,SSL,SASL_PLAINTEXT,SASL_SSL 149 | config :security_protocol, :validate => ["PLAINTEXT", "SSL", "SASL_PLAINTEXT", "SASL_SSL"], :default => "PLAINTEXT" 150 | # SASL client callback handler class 151 | config :sasl_client_callback_handler_class, :validate => :string 152 | # Path to the jar containing client and all dependencies for SASL IAM authentication of specific cloud vendor 153 | config :sasl_iam_jar_paths, :validate => :array 154 | # The URL for the OAuth 2.0 issuer token endpoint. 155 | config :sasl_oauthbearer_token_endpoint_url, :validate => :string 156 | # (optional) The override name of the scope claim. 157 | config :sasl_oauthbearer_scope_claim_name, :validate => :string, :default => 'scope' # Kafka default 158 | # SASL login callback handler class 159 | config :sasl_login_callback_handler_class, :validate => :string 160 | # (optional) The duration, in milliseconds, for HTTPS connect timeout 161 | config :sasl_login_connect_timeout_ms, :validate => :number 162 | # (optional) The duration, in milliseconds, for HTTPS read timeout. 163 | config :sasl_login_read_timeout_ms, :validate => :number 164 | # (optional) The duration, in milliseconds, to wait between HTTPS call attempts. 165 | config :sasl_login_retry_backoff_ms, :validate => :number, :default => 100 # Kafka default 166 | # (optional) The maximum duration, in milliseconds, for HTTPS call attempts. 167 | config :sasl_login_retry_backoff_max_ms, :validate => :number, :default => 10000 # Kafka default 168 | # http://kafka.apache.org/documentation.html#security_sasl[SASL mechanism] used for client connections. 169 | # This may be any mechanism for which a security provider is available. 170 | # GSSAPI is the default mechanism. 
171 | config :sasl_mechanism, :validate => :string, :default => "GSSAPI" 172 | # The Kerberos principal name that Kafka broker runs as. 173 | # This can be defined either in Kafka's JAAS config or in Kafka's config. 174 | config :sasl_kerberos_service_name, :validate => :string 175 | # The Java Authentication and Authorization Service (JAAS) API supplies user authentication and authorization 176 | # services for Kafka. This setting provides the path to the JAAS file. Sample JAAS file for Kafka client: 177 | # [source,java] 178 | # ---------------------------------- 179 | # KafkaClient { 180 | # com.sun.security.auth.module.Krb5LoginModule required 181 | # useTicketCache=true 182 | # renewTicket=true 183 | # serviceName="kafka"; 184 | # }; 185 | # ---------------------------------- 186 | # 187 | # Please note that specifying `jaas_path` and `kerberos_config` in the config file will add these 188 | # to the global JVM system properties. This means if you have multiple Kafka inputs, all of them would be sharing the same 189 | # `jaas_path` and `kerberos_config`. If this is not desirable, you would have to run separate instances of Logstash on 190 | # different JVM instances. 191 | config :jaas_path, :validate => :path 192 | # JAAS configuration settings. This allows JAAS config to be a part of the plugin configuration and allows for different JAAS configuration per each plugin config. 193 | config :sasl_jaas_config, :validate => :string 194 | # Optional path to kerberos config file. This is krb5.conf style as detailed in https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html 195 | config :kerberos_config, :validate => :path 196 | 197 | # The topic to produce messages to 198 | config :topic_id, :validate => :string, :required => true 199 | # Serializer class for the value of the message 200 | config :value_serializer, :validate => :string, :default => 'org.apache.kafka.common.serialization.StringSerializer' 201 | 202 | public 203 | def register 204 | @thread_batch_map = Concurrent::Hash.new 205 | 206 | if !@retries.nil? 207 | if @retries < 0 208 | raise LogStash::ConfigurationError, "A negative retry count (#{@retries}) is not valid. Must be a value >= 0" 209 | end 210 | 211 | logger.warn("Kafka output is configured with finite retry. This instructs Logstash to LOSE DATA after a set number of send attempts fails. If you do not want to lose data if Kafka is down, then you must remove the retry setting.", :retries => @retries) 212 | end 213 | 214 | reassign_dns_lookup 215 | 216 | if value_serializer == 'org.apache.kafka.common.serialization.StringSerializer' 217 | @codec.on_event do |event, data| 218 | write_to_kafka(event, data) 219 | end 220 | elsif value_serializer == 'org.apache.kafka.common.serialization.ByteArraySerializer' 221 | @codec.on_event do |event, data| 222 | write_to_kafka(event, data.to_java_bytes) 223 | end 224 | else 225 | raise LogStash::ConfigurationError, "'value_serializer' only supports org.apache.kafka.common.serialization.ByteArraySerializer and org.apache.kafka.common.serialization.StringSerializer" 226 | end 227 | @message_headers.each do |key, value| 228 | if !key.is_a? String 229 | raise LogStash::ConfigurationError, "'message_headers' contains a key that is not a string!" 230 | end 231 | end 232 | @producer = create_producer 233 | end 234 | 235 | def prepare(record) 236 | # This output is threadsafe, so we need to keep a batch per thread. 
237 | @thread_batch_map[Thread.current].add(record) 238 | end 239 | 240 | def multi_receive(events) 241 | t = Thread.current 242 | if !@thread_batch_map.include?(t) 243 | @thread_batch_map[t] = java.util.ArrayList.new(events.size) 244 | end 245 | 246 | events.each do |event| 247 | @codec.encode(event) 248 | end 249 | 250 | batch = @thread_batch_map[t] 251 | if batch.any? 252 | retrying_send(batch) 253 | batch.clear 254 | end 255 | end 256 | 257 | def retrying_send(batch) 258 | remaining = @retries 259 | 260 | while batch.any? 261 | unless remaining.nil? 262 | if remaining < 0 263 | # TODO(sissel): Offer to DLQ? Then again, if it's a transient fault, 264 | # DLQing would make things worse (you dlq data that would be successful 265 | # after the fault is repaired) 266 | logger.info("Exhausted user-configured retry count when sending to Kafka. Dropping these events.", 267 | :max_retries => @retries, :drop_count => batch.count) 268 | break 269 | end 270 | 271 | remaining -= 1 272 | end 273 | 274 | failures = [] 275 | 276 | futures = batch.collect do |record| 277 | begin 278 | # send() can throw an exception even before the future is created. 279 | @producer.send(record) 280 | rescue org.apache.kafka.common.errors.InterruptException, 281 | org.apache.kafka.common.errors.RetriableException => e 282 | logger.info("producer send failed, will retry sending", :exception => e.class, :message => e.message) 283 | failures << record 284 | nil 285 | rescue org.apache.kafka.common.KafkaException => e 286 | # This error is not retriable, drop event 287 | # TODO: add DLQ support 288 | logger.warn("producer send failed, dropping record",:exception => e.class, :message => e.message, 289 | :record_value => record.value) 290 | nil 291 | end 292 | end 293 | 294 | futures.each_with_index do |future, i| 295 | # We cannot skip nils using `futures.compact` because then our index `i` will not align with `batch` 296 | unless future.nil? 297 | begin 298 | future.get 299 | rescue java.util.concurrent.ExecutionException => e 300 | # TODO(sissel): Add metric to count failures, possibly by exception type. 301 | if e.get_cause.is_a? org.apache.kafka.common.errors.RetriableException or 302 | e.get_cause.is_a? org.apache.kafka.common.errors.InterruptException 303 | logger.info("producer send failed, will retry sending", :exception => e.cause.class, 304 | :message => e.cause.message) 305 | failures << batch[i] 306 | elsif e.get_cause.is_a? org.apache.kafka.common.KafkaException 307 | # This error is not retriable, drop event 308 | # TODO: add DLQ support 309 | logger.warn("producer send failed, dropping record", :exception => e.cause.class, 310 | :message => e.cause.message, :record_value => batch[i].value) 311 | end 312 | end 313 | end 314 | end 315 | 316 | # No failures? Cool. Let's move on. 317 | break if failures.empty? 318 | 319 | # Otherwise, retry with any failed transmissions 320 | if remaining.nil? || remaining >= 0 321 | delay = @retry_backoff_ms / 1000.0 322 | logger.info("Sending batch to Kafka failed. Will retry after a delay.", :batch_size => batch.size, 323 | :failures => failures.size, 324 | :sleep => delay) 325 | batch = failures 326 | sleep(delay) 327 | end 328 | end 329 | end 330 | 331 | def close 332 | @producer.close 333 | end 334 | 335 | private 336 | 337 | def write_to_kafka(event, serialized_data) 338 | if @message_key.nil? 
339 | record = ProducerRecord.new(event.sprintf(@topic_id), serialized_data) 340 | else 341 | record = ProducerRecord.new(event.sprintf(@topic_id), event.sprintf(@message_key), serialized_data) 342 | end 343 | @message_headers.each do |key, value| 344 | record.headers().add(key, event.sprintf(value).to_java_bytes) 345 | end 346 | prepare(record) 347 | rescue LogStash::ShutdownSignal 348 | logger.debug('producer received shutdown signal') 349 | rescue => e 350 | logger.warn('producer threw exception, restarting', :exception => e.class, :message => e.message) 351 | end 352 | 353 | def create_producer 354 | begin 355 | props = java.util.Properties.new 356 | kafka = org.apache.kafka.clients.producer.ProducerConfig 357 | 358 | props.put(kafka::ACKS_CONFIG, acks) 359 | props.put(kafka::BATCH_SIZE_CONFIG, batch_size.to_s) 360 | props.put(kafka::BOOTSTRAP_SERVERS_CONFIG, bootstrap_servers) 361 | props.put(kafka::BUFFER_MEMORY_CONFIG, buffer_memory.to_s) 362 | props.put(kafka::COMPRESSION_TYPE_CONFIG, compression_type) 363 | props.put(kafka::CLIENT_DNS_LOOKUP_CONFIG, client_dns_lookup) 364 | props.put(kafka::CLIENT_ID_CONFIG, client_id) unless client_id.nil? 365 | props.put(kafka::KEY_SERIALIZER_CLASS_CONFIG, key_serializer) 366 | props.put(kafka::LINGER_MS_CONFIG, linger_ms.to_s) 367 | props.put(kafka::MAX_REQUEST_SIZE_CONFIG, max_request_size.to_s) 368 | props.put(kafka::METADATA_MAX_AGE_CONFIG, metadata_max_age_ms.to_s) unless metadata_max_age_ms.nil? 369 | unless partitioner.nil? 370 | props.put(kafka::PARTITIONER_CLASS_CONFIG, partitioner = partitioner_class) 371 | logger.debug('producer configured using partitioner', :partitioner_class => partitioner) 372 | end 373 | props.put(kafka::RECEIVE_BUFFER_CONFIG, receive_buffer_bytes.to_s) unless receive_buffer_bytes.nil? 374 | props.put(kafka::RECONNECT_BACKOFF_MS_CONFIG, reconnect_backoff_ms.to_s) unless reconnect_backoff_ms.nil? 375 | props.put(kafka::REQUEST_TIMEOUT_MS_CONFIG, request_timeout_ms.to_s) unless request_timeout_ms.nil? 376 | props.put(kafka::RETRIES_CONFIG, retries.to_s) unless retries.nil? 377 | props.put(kafka::RETRY_BACKOFF_MS_CONFIG, retry_backoff_ms.to_s) 378 | props.put(kafka::SEND_BUFFER_CONFIG, send_buffer_bytes.to_s) 379 | props.put(kafka::VALUE_SERIALIZER_CLASS_CONFIG, value_serializer) 380 | 381 | props.put("security.protocol", security_protocol) unless security_protocol.nil? 382 | 383 | if security_protocol == "SSL" 384 | set_trustore_keystore_config(props) 385 | elsif security_protocol == "SASL_PLAINTEXT" 386 | set_sasl_config(props) 387 | elsif security_protocol == "SASL_SSL" 388 | set_trustore_keystore_config(props) 389 | set_sasl_config(props) 390 | end 391 | 392 | org.apache.kafka.clients.producer.KafkaProducer.new(props) 393 | rescue => e 394 | logger.error("Unable to create Kafka producer from given configuration", 395 | :kafka_error_message => e, 396 | :cause => e.respond_to?(:getCause) ? 
e.getCause() : nil) 397 | raise e 398 | end 399 | end 400 | 401 | def partitioner_class 402 | case partitioner 403 | when 'round_robin' 404 | 'org.apache.kafka.clients.producer.RoundRobinPartitioner' 405 | when 'uniform_sticky' 406 | 'org.apache.kafka.clients.producer.UniformStickyPartitioner' 407 | when 'default' 408 | 'org.apache.kafka.clients.producer.internals.DefaultPartitioner' 409 | else 410 | unless partitioner.index('.') 411 | raise LogStash::ConfigurationError, "unsupported partitioner: #{partitioner.inspect}" 412 | end 413 | partitioner # assume a fully qualified class-name 414 | end 415 | end 416 | 417 | end #class LogStash::Outputs::Kafka 418 | -------------------------------------------------------------------------------- /lib/logstash/plugin_mixins/kafka/avro_schema_registry.rb: -------------------------------------------------------------------------------- 1 | require 'manticore' 2 | 3 | module LogStash module PluginMixins module Kafka 4 | module AvroSchemaRegistry 5 | 6 | def self.included(base) 7 | base.extend(self) 8 | base.setup_schema_registry_config 9 | end 10 | 11 | def setup_schema_registry_config 12 | # Option to set key to access Schema Registry. 13 | config :schema_registry_key, :validate => :string 14 | 15 | # Option to set secret to access Schema Registry. 16 | config :schema_registry_secret, :validate => :password 17 | 18 | # Option to set the endpoint of the Schema Registry. 19 | # This option permit the usage of Avro Kafka deserializer which retrieve the schema of the Avro message from an 20 | # instance of schema registry. If this option has value `value_deserializer_class` nor `topics_pattern` could be valued 21 | config :schema_registry_url, :validate => :uri 22 | 23 | # Option to set the proxy of the Schema Registry. 24 | # This option permits to define a proxy to be used to reach the schema registry service instance. 25 | config :schema_registry_proxy, :validate => :uri 26 | 27 | # If schema registry client authentication is required, this setting stores the keystore path. 28 | config :schema_registry_ssl_keystore_location, :validate => :string 29 | 30 | # The keystore password. 31 | config :schema_registry_ssl_keystore_password, :validate => :password 32 | 33 | # The keystore type 34 | config :schema_registry_ssl_keystore_type, :validate => ['jks', 'PKCS12'], :default => "jks" 35 | 36 | # The JKS truststore path to validate the Schema Registry's certificate. 37 | config :schema_registry_ssl_truststore_location, :validate => :string 38 | 39 | # The truststore password. 40 | config :schema_registry_ssl_truststore_password, :validate => :password 41 | 42 | # The truststore type 43 | config :schema_registry_ssl_truststore_type, :validate => ['jks', 'PKCS12'], :default => "jks" 44 | 45 | # Option to skip validating the schema registry during registration. This can be useful when using 46 | # certificate based auth 47 | config :schema_registry_validation, :validate => ['auto', 'skip'], :default => 'auto' 48 | end 49 | 50 | def check_schema_registry_parameters 51 | if @schema_registry_url 52 | check_for_schema_registry_conflicts 53 | @schema_registry_proxy_host, @schema_registry_proxy_port = split_proxy_into_host_and_port(schema_registry_proxy) 54 | check_for_key_and_secret 55 | check_for_schema_registry_connectivity_and_subjects if schema_registry_validation? 56 | end 57 | end 58 | 59 | def schema_registry_validation? 60 | return false if schema_registry_validation.to_s == 'skip' 61 | return false if using_kerberos? 
# pre-validation doesn't support kerberos 62 | 63 | true 64 | end 65 | 66 | def using_kerberos? 67 | security_protocol == "SASL_PLAINTEXT" || security_protocol == "SASL_SSL" 68 | end 69 | 70 | private 71 | def check_for_schema_registry_conflicts 72 | if @value_deserializer_class != LogStash::Inputs::Kafka::DEFAULT_DESERIALIZER_CLASS 73 | raise LogStash::ConfigurationError, 'Option schema_registry_url prohibit the customization of value_deserializer_class' 74 | end 75 | if @topics_pattern && !@topics_pattern.empty? 76 | raise LogStash::ConfigurationError, 'Option schema_registry_url prohibit the customization of topics_pattern' 77 | end 78 | end 79 | 80 | private 81 | def check_for_schema_registry_connectivity_and_subjects 82 | options = {} 83 | if schema_registry_proxy && !schema_registry_proxy.empty? 84 | options[:proxy] = schema_registry_proxy.to_s 85 | end 86 | if schema_registry_key and !schema_registry_key.empty? 87 | options[:auth] = {:user => schema_registry_key, :password => schema_registry_secret.value} 88 | end 89 | if schema_registry_ssl_truststore_location and !schema_registry_ssl_truststore_location.empty? 90 | options[:ssl] = {} unless options.key?(:ssl) 91 | options[:ssl][:truststore] = schema_registry_ssl_truststore_location unless schema_registry_ssl_truststore_location.nil? 92 | options[:ssl][:truststore_password] = schema_registry_ssl_truststore_password.value unless schema_registry_ssl_truststore_password.nil? 93 | options[:ssl][:truststore_type] = schema_registry_ssl_truststore_type unless schema_registry_ssl_truststore_type.nil? 94 | end 95 | if schema_registry_ssl_keystore_location and !schema_registry_ssl_keystore_location.empty? 96 | options[:ssl] = {} unless options.key? :ssl 97 | options[:ssl][:keystore] = schema_registry_ssl_keystore_location unless schema_registry_ssl_keystore_location.nil? 98 | options[:ssl][:keystore_password] = schema_registry_ssl_keystore_password.value unless schema_registry_ssl_keystore_password.nil? 99 | options[:ssl][:keystore_type] = schema_registry_ssl_keystore_type unless schema_registry_ssl_keystore_type.nil? 100 | end 101 | 102 | client = Manticore::Client.new(options) 103 | begin 104 | response = client.get(@schema_registry_url.uri.to_s + '/subjects').body 105 | rescue Manticore::ManticoreException => e 106 | raise LogStash::ConfigurationError.new("Schema registry service doesn't respond, error: #{e.message}") 107 | end 108 | registered_subjects = JSON.parse response 109 | expected_subjects = @topics.map { |t| "#{t}-value"} 110 | if (expected_subjects & registered_subjects).size != expected_subjects.size 111 | undefined_topic_subjects = expected_subjects - registered_subjects 112 | raise LogStash::ConfigurationError, "The schema registry does not contain definitions for required topic subjects: #{undefined_topic_subjects}" 113 | end 114 | end 115 | 116 | def split_proxy_into_host_and_port(proxy_uri) 117 | return nil unless proxy_uri && !proxy_uri.empty? 118 | 119 | port = proxy_uri.port 120 | 121 | host_spec = "" 122 | host_spec << proxy_uri.scheme || "http" 123 | host_spec << "://" 124 | host_spec << "#{proxy_uri.userinfo}@" if proxy_uri.userinfo 125 | host_spec << proxy_uri.host 126 | 127 | [host_spec, port] 128 | end 129 | 130 | def check_for_key_and_secret 131 | if schema_registry_key and !schema_registry_key.empty? 132 | if !schema_registry_secret or schema_registry_secret.value.empty? 133 | raise LogStash::ConfigurationError, "Setting `schema_registry_secret` is required when `schema_registry_key` is provided." 
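# Illustrative sketch only -- a pipeline config wiring these options together, using placeholder
# values borrowed from this repo's test fixtures (registry port, credentials, truststore):
#
#   input {
#     kafka {
#       topics                                  => ["temperature_stream"]
#       schema_registry_url                     => "https://localhost:8083"
#       schema_registry_key                     => "barney"
#       schema_registry_secret                  => "changeme"
#       schema_registry_ssl_truststore_location => "tls_repository/clienttruststore.jks"
#       schema_registry_ssl_truststore_password => "changeit"
#     }
#   }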
134 | end 135 | end 136 | end 137 | 138 | end 139 | end end end 140 | -------------------------------------------------------------------------------- /lib/logstash/plugin_mixins/kafka/common.rb: -------------------------------------------------------------------------------- 1 | module LogStash module PluginMixins module Kafka 2 | module Common 3 | 4 | def self.included(base) 5 | # COMMON CONFIGURATION SUPPORTED BY BOTH PRODUCER/CONSUMER 6 | 7 | # Close idle connections after the number of milliseconds specified by this config. 8 | base.config :connections_max_idle_ms, :validate => :number, :default => 540_000 # (9m) Kafka default 9 | 10 | # The period of time in milliseconds after which we force a refresh of metadata even if 11 | # we haven't seen any partition leadership changes to proactively discover any new brokers or partitions 12 | base.config :metadata_max_age_ms, :validate => :number, :default => 300_000 # (5m) Kafka default 13 | 14 | # The configuration controls the maximum amount of time the client will wait for the response of a request. 15 | # If the response is not received before the timeout elapses the client will resend the request if necessary 16 | # or fail the request if retries are exhausted. 17 | base.config :request_timeout_ms, :validate => :number, :default => 40_000 # Kafka default 18 | end 19 | 20 | def set_trustore_keystore_config(props) 21 | props.put("ssl.truststore.type", ssl_truststore_type) unless ssl_truststore_type.nil? 22 | props.put("ssl.truststore.location", ssl_truststore_location) unless ssl_truststore_location.nil? 23 | props.put("ssl.truststore.password", ssl_truststore_password.value) unless ssl_truststore_password.nil? 24 | 25 | # Client auth stuff 26 | props.put("ssl.keystore.type", ssl_keystore_type) unless ssl_keystore_type.nil? 27 | props.put("ssl.key.password", ssl_key_password.value) unless ssl_key_password.nil? 28 | props.put("ssl.keystore.location", ssl_keystore_location) unless ssl_keystore_location.nil? 29 | props.put("ssl.keystore.password", ssl_keystore_password.value) unless ssl_keystore_password.nil? 30 | props.put("ssl.endpoint.identification.algorithm", ssl_endpoint_identification_algorithm) unless ssl_endpoint_identification_algorithm.nil? 31 | end 32 | 33 | def set_sasl_config(props) 34 | java.lang.System.setProperty("java.security.auth.login.config", jaas_path) unless jaas_path.nil? 35 | java.lang.System.setProperty("java.security.krb5.conf", kerberos_config) unless kerberos_config.nil? 36 | 37 | props.put("sasl.mechanism", sasl_mechanism) 38 | if sasl_mechanism == "GSSAPI" && sasl_kerberos_service_name.nil? 39 | raise LogStash::ConfigurationError, "sasl_kerberos_service_name must be specified when SASL mechanism is GSSAPI" 40 | end 41 | 42 | props.put("sasl.kerberos.service.name", sasl_kerberos_service_name) unless sasl_kerberos_service_name.nil? 43 | props.put("sasl.jaas.config", sasl_jaas_config) unless sasl_jaas_config.nil? 44 | props.put("sasl.client.callback.handler.class", sasl_client_callback_handler_class) unless sasl_client_callback_handler_class.nil? 45 | props.put("sasl.oauthbearer.token.endpoint.url", sasl_oauthbearer_token_endpoint_url) unless sasl_oauthbearer_token_endpoint_url.nil? 46 | props.put("sasl.oauthbearer.scope.claim.name", sasl_oauthbearer_scope_claim_name) unless sasl_oauthbearer_scope_claim_name.nil? 47 | props.put("sasl.login.callback.handler.class", sasl_login_callback_handler_class) unless sasl_login_callback_handler_class.nil? 
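# The sasl.login.* timeout/backoff settings below are optional and primarily used with the OAUTHBEARER
# mechanism; values are converted to strings because java.util.Properties only stores strings.
# Illustrative sketch of the corresponding plugin options (placeholder values):
#
#   security_protocol                   => "SASL_SSL"
#   sasl_mechanism                      => "OAUTHBEARER"
#   sasl_oauthbearer_token_endpoint_url => "https://auth.example.com/token"
#   sasl_login_connect_timeout_ms       => 15000
#   sasl_login_retry_backoff_ms         => 200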
48 | props.put("sasl.login.connect.timeout.ms", sasl_login_connect_timeout_ms.to_s) unless sasl_login_connect_timeout_ms.nil? 49 | props.put("sasl.login.read.timeout.ms", sasl_login_read_timeout_ms.to_s) unless sasl_login_read_timeout_ms.nil? 50 | props.put("sasl.login.retry.backoff.ms", sasl_login_retry_backoff_ms.to_s) unless sasl_login_retry_backoff_ms.nil? 51 | props.put("sasl.login.retry.backoff.max.ms", sasl_login_retry_backoff_max_ms.to_s) unless sasl_login_retry_backoff_max_ms.nil? 52 | sasl_iam_jar_paths&.each {|jar_path| require jar_path } 53 | end 54 | 55 | def reassign_dns_lookup 56 | if @client_dns_lookup == "default" 57 | @client_dns_lookup = "use_all_dns_ips" 58 | logger.warn("client_dns_lookup setting 'default' value is deprecated, forced to 'use_all_dns_ips', please update your configuration") 59 | deprecation_logger.deprecated("Deprecated value `default` for `client_dns_lookup` option; use `use_all_dns_ips` instead.") 60 | end 61 | end 62 | 63 | end 64 | end end end -------------------------------------------------------------------------------- /logstash-integration-kafka.gemspec: -------------------------------------------------------------------------------- 1 | Gem::Specification.new do |s| 2 | s.name = 'logstash-integration-kafka' 3 | s.version = '11.6.2' 4 | s.licenses = ['Apache-2.0'] 5 | s.summary = "Integration with Kafka - input and output plugins" 6 | s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline "+ 7 | "using $LS_HOME/bin/logstash-plugin install gemname. This gem is not a stand-alone program." 8 | s.authors = ["Elastic"] 9 | s.email = 'info@elastic.co' 10 | s.homepage = "http://www.elastic.co/guide/en/logstash/current/index.html" 11 | s.require_paths = ['lib', 'vendor/jar-dependencies'] 12 | 13 | # Files 14 | s.files = Dir.glob(%w( 15 | lib/**/* 16 | spec/**/* 17 | *.gemspec 18 | *.md 19 | CONTRIBUTORS 20 | Gemfile 21 | LICENSE 22 | NOTICE.TXT 23 | vendor/jar-dependencies/**/*.jar 24 | vendor/jar-dependencies/**/*.rb 25 | VERSION docs/**/* 26 | )) 27 | 28 | # Tests 29 | s.test_files = s.files.grep(%r{^(test|spec|features)/}) 30 | 31 | # Special flag to let us know this is actually a logstash plugin 32 | s.metadata = { 33 | "logstash_plugin" => "true", 34 | "logstash_group" => "integration", 35 | "integration_plugins" => "logstash-input-kafka,logstash-output-kafka" 36 | } 37 | 38 | s.platform = RUBY_PLATFORM 39 | 40 | # Gem dependencies 41 | s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99" 42 | s.add_runtime_dependency "logstash-core", ">= 8.3.0" 43 | 44 | s.add_runtime_dependency 'logstash-codec-json' 45 | s.add_runtime_dependency 'logstash-codec-plain' 46 | s.add_runtime_dependency 'stud', '>= 0.0.22', '< 0.1.0' 47 | s.add_runtime_dependency "manticore", '>= 0.5.4', '< 1.0.0' 48 | s.add_runtime_dependency 'logstash-mixin-deprecation_logger_support', '~>1.0' 49 | 50 | s.add_development_dependency 'logstash-devutils' 51 | s.add_development_dependency 'logstash-codec-line' 52 | s.add_development_dependency 'rspec-wait' 53 | s.add_development_dependency 'digest-crc', '~> 0.5.1' # 0.6.0 started using a C-ext 54 | s.add_development_dependency 'ruby-kafka' # depends on digest-crc 55 | s.add_development_dependency 'snappy' 56 | end 57 | -------------------------------------------------------------------------------- /setup_keystore_and_truststore.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Setup Schema Registry 
keystore and Kafka's schema registry client's truststore 3 | set -ex 4 | 5 | echo "Generating schema registry key store" 6 | keytool -genkey -alias schema_reg -keyalg RSA -keystore tls_repository/schema_reg.jks -keypass changeit -storepass changeit -validity 365 -keysize 2048 -dname "CN=localhost, OU=John Doe, O=Acme Inc, L=Unknown, ST=Unknown, C=IT" 7 | 8 | echo "Exporting schema registry certificate" 9 | keytool -exportcert -rfc -keystore tls_repository/schema_reg.jks -storepass changeit -alias schema_reg -file tls_repository/schema_reg_certificate.pem 10 | 11 | echo "Creating client's truststore and importing schema registry's certificate" 12 | keytool -import -trustcacerts -file tls_repository/schema_reg_certificate.pem -keypass changeit -storepass changeit -keystore tls_repository/clienttruststore.jks -noprompt -------------------------------------------------------------------------------- /spec/check_docs_spec.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require 'logstash-integration-kafka_jars' 3 | 4 | describe "[DOCS]" do 5 | 6 | let(:docs_files) do 7 | ['index.asciidoc', 'input-kafka.asciidoc', 'output-kafka.asciidoc'].map { |name| File.join('docs', name) } 8 | end 9 | 10 | let(:kafka_version_properties) do 11 | loader = java.lang.Thread.currentThread.getContextClassLoader 12 | version = loader.getResource('kafka/kafka-version.properties') 13 | fail "kafka-version.properties missing" unless version 14 | properties = java.util.Properties.new 15 | properties.load version.openStream 16 | properties 17 | end 18 | 19 | it 'is sync-ed with Kafka client version' do 20 | version = kafka_version_properties.get('version') # e.g. '2.5.1' 21 | 22 | fails = docs_files.map do |file| 23 | if line = File.readlines(file).find { |line| line.index(':kafka_client:') } 24 | puts "found #{line.inspect} in #{file}" if $VERBOSE # e.g. ":kafka_client: 2.5\n" 25 | if !version.start_with?(line.strip.split[1]) 26 | "documentation at #{file} is out of sync with kafka-clients version (#{version.inspect}), detected line: #{line.inspect}" 27 | else 28 | nil 29 | end 30 | end 31 | end 32 | 33 | fail "\n" + fails.join("\n") if fails.flatten.any? 
34 | end 35 | 36 | end 37 | -------------------------------------------------------------------------------- /spec/fixtures/jaas.config: -------------------------------------------------------------------------------- 1 | SchemaRegistry-Props { 2 | org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required 3 | file="build/confluent_platform/etc/schema-registry/pwd" 4 | debug="true"; 5 | }; 6 | -------------------------------------------------------------------------------- /spec/fixtures/pwd: -------------------------------------------------------------------------------- 1 | fred: OBF:1w8t1tvf1w261w8v1w1c1tvn1w8x,user,admin 2 | barney: changeme,user,developer 3 | admin:admin,admin 4 | betty: MD5:164c88b302622e17050af52c89945d44,user 5 | wilma: CRYPT:adpexzg3FUZAk,admin,sr-user 6 | -------------------------------------------------------------------------------- /spec/fixtures/trust-store_stub.jks: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/logstash-plugins/logstash-integration-kafka/a5b3251699a931f100902e1e5dd6aa1809f10e25/spec/fixtures/trust-store_stub.jks -------------------------------------------------------------------------------- /spec/integration/inputs/kafka_spec.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "logstash/devutils/rspec/spec_helper" 3 | require "logstash/inputs/kafka" 4 | require "rspec/wait" 5 | require "stud/try" 6 | require "manticore" 7 | require "json" 8 | 9 | # Please run kafka_test_setup.sh prior to executing this integration test. 10 | describe "inputs/kafka", :integration => true do 11 | # Group ids to make sure that the consumers get all the logs. 12 | let(:group_id_1) {rand(36**8).to_s(36)} 13 | let(:group_id_2) {rand(36**8).to_s(36)} 14 | let(:group_id_3) {rand(36**8).to_s(36)} 15 | let(:group_id_4) {rand(36**8).to_s(36)} 16 | let(:group_id_5) {rand(36**8).to_s(36)} 17 | let(:group_id_6) {rand(36**8).to_s(36)} 18 | let(:plain_config) do 19 | { 'topics' => ['logstash_integration_topic_plain'], 'group_id' => group_id_1, 20 | 'auto_offset_reset' => 'earliest' } 21 | end 22 | let(:multi_consumer_config) do 23 | plain_config.merge({"group_id" => group_id_4, "client_id" => "spec", "consumer_threads" => 3}) 24 | end 25 | let(:snappy_config) do 26 | { 'topics' => ['logstash_integration_topic_snappy'], 'group_id' => group_id_1, 27 | 'auto_offset_reset' => 'earliest' } 28 | end 29 | let(:lz4_config) do 30 | { 'topics' => ['logstash_integration_topic_lz4'], 'group_id' => group_id_1, 31 | 'auto_offset_reset' => 'earliest' } 32 | end 33 | let(:pattern_config) do 34 | { 'topics_pattern' => 'logstash_integration_topic_.*', 'group_id' => group_id_2, 35 | 'auto_offset_reset' => 'earliest' } 36 | end 37 | let(:decorate_config) do 38 | { 'topics' => ['logstash_integration_topic_plain'], 'group_id' => group_id_3, 39 | 'auto_offset_reset' => 'earliest', 'decorate_events' => 'true' } 40 | end 41 | let(:decorate_headers_config) do 42 | { 'topics' => ['logstash_integration_topic_plain_with_headers'], 'group_id' => group_id_3, 43 | 'auto_offset_reset' => 'earliest', 'decorate_events' => 'extended' } 44 | end 45 | let(:decorate_bad_headers_config) do 46 | { 'topics' => ['logstash_integration_topic_plain_with_headers_badly'], 'group_id' => group_id_3, 47 | 'auto_offset_reset' => 'earliest', 'decorate_events' => 'extended' } 48 | end 49 | let(:manual_commit_config) do 50 | { 'topics' => ['logstash_integration_topic_plain'], 'group_id' => 
group_id_5, 51 | 'auto_offset_reset' => 'earliest', 'enable_auto_commit' => 'false' } 52 | end 53 | let(:timeout_seconds) { 30 } 54 | let(:num_events) { 103 } 55 | 56 | before(:all) do 57 | # Prepare message with headers with valid UTF-8 chars 58 | header = org.apache.kafka.common.header.internals.RecordHeader.new("name", "John ανδρεα €".to_java_bytes) 59 | record = org.apache.kafka.clients.producer.ProducerRecord.new( 60 | "logstash_integration_topic_plain_with_headers", 0, "key", "value", [header]) 61 | send_message(record) 62 | 63 | # Prepare message with headers with invalid UTF-8 chars 64 | invalid = "日本".encode('Shift_JIS').force_encoding(Encoding::UTF_8).to_java_bytes 65 | header = org.apache.kafka.common.header.internals.RecordHeader.new("name", invalid) 66 | record = org.apache.kafka.clients.producer.ProducerRecord.new( 67 | "logstash_integration_topic_plain_with_headers_badly", 0, "key", "value", [header]) 68 | 69 | send_message(record) 70 | end 71 | 72 | def send_message(record) 73 | props = java.util.Properties.new 74 | kafka = org.apache.kafka.clients.producer.ProducerConfig 75 | props.put(kafka::BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") 76 | props.put(kafka::KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer") 77 | props.put(kafka::VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer") 78 | 79 | producer = org.apache.kafka.clients.producer.KafkaProducer.new(props) 80 | 81 | producer.send(record) 82 | producer.flush 83 | producer.close 84 | end 85 | 86 | describe "#kafka-topics" do 87 | 88 | it "should consume all messages from plain 3-partition topic" do 89 | queue = consume_messages(plain_config, timeout: timeout_seconds, event_count: num_events) 90 | expect(queue.length).to eq(num_events) 91 | end 92 | 93 | it "should consume all messages from snappy 3-partition topic" do 94 | queue = consume_messages(snappy_config, timeout: timeout_seconds, event_count: num_events) 95 | expect(queue.length).to eq(num_events) 96 | end 97 | 98 | it "should consume all messages from lz4 3-partition topic" do 99 | queue = consume_messages(lz4_config, timeout: timeout_seconds, event_count: num_events) 100 | expect(queue.length).to eq(num_events) 101 | end 102 | 103 | it "should consumer all messages with multiple consumers" do 104 | consume_messages(multi_consumer_config, timeout: timeout_seconds, event_count: num_events) do |queue, kafka_input| 105 | expect(queue.length).to eq(num_events) 106 | kafka_input.kafka_consumers.each_with_index do |consumer, i| 107 | expect(consumer.metrics.keys.first.tags["client-id"]).to eq("spec-#{i}") 108 | end 109 | end 110 | end 111 | end 112 | 113 | context "#kafka-topics-pattern" do 114 | it "should consume all messages from all 3 topics" do 115 | total_events = num_events * 3 + 2 116 | queue = consume_messages(pattern_config, timeout: timeout_seconds, event_count: total_events) 117 | expect(queue.length).to eq(total_events) 118 | end 119 | end 120 | 121 | context "#kafka-decorate" do 122 | it "should show the right topic and group name in decorated kafka section" do 123 | start = LogStash::Timestamp.now.time.to_i 124 | consume_messages(decorate_config, timeout: timeout_seconds, event_count: num_events) do |queue, _| 125 | expect(queue.length).to eq(num_events) 126 | event = queue.shift 127 | expect(event.get("[@metadata][kafka][topic]")).to eq("logstash_integration_topic_plain") 128 | expect(event.get("[@metadata][kafka][consumer_group]")).to eq(group_id_3) 129 | 
expect(event.get("[@metadata][kafka][timestamp]")).to be >= start 130 | end 131 | end 132 | 133 | it "should show the right topic and group name in and kafka headers decorated kafka section" do 134 | start = LogStash::Timestamp.now.time.to_i 135 | consume_messages(decorate_headers_config, timeout: timeout_seconds, event_count: 1) do |queue, _| 136 | expect(queue.length).to eq(1) 137 | event = queue.shift 138 | expect(event.get("[@metadata][kafka][topic]")).to eq("logstash_integration_topic_plain_with_headers") 139 | expect(event.get("[@metadata][kafka][consumer_group]")).to eq(group_id_3) 140 | expect(event.get("[@metadata][kafka][timestamp]")).to be >= start 141 | expect(event.get("[@metadata][kafka][headers][name]")).to eq("John ανδρεα €") 142 | end 143 | end 144 | 145 | it "should skip headers not encoded in UTF-8" do 146 | start = LogStash::Timestamp.now.time.to_i 147 | consume_messages(decorate_bad_headers_config, timeout: timeout_seconds, event_count: 1) do |queue, _| 148 | expect(queue.length).to eq(1) 149 | event = queue.shift 150 | expect(event.get("[@metadata][kafka][topic]")).to eq("logstash_integration_topic_plain_with_headers_badly") 151 | expect(event.get("[@metadata][kafka][consumer_group]")).to eq(group_id_3) 152 | expect(event.get("[@metadata][kafka][timestamp]")).to be >= start 153 | 154 | expect(event.include?("[@metadata][kafka][headers][name]")).to eq(false) 155 | end 156 | end 157 | end 158 | 159 | context "#kafka-offset-commit" do 160 | it "should manually commit offsets" do 161 | queue = consume_messages(manual_commit_config, timeout: timeout_seconds, event_count: num_events) 162 | expect(queue.length).to eq(num_events) 163 | end 164 | end 165 | 166 | context 'setting partition_assignment_strategy' do 167 | let(:test_topic) { 'logstash_integration_partitioner_topic' } 168 | let(:consumer_config) do 169 | plain_config.merge( 170 | "topics" => [test_topic], 171 | 'group_id' => group_id_6, 172 | "client_id" => "partition_assignment_strategy-spec", 173 | "consumer_threads" => 2, 174 | "partition_assignment_strategy" => partition_assignment_strategy 175 | ) 176 | end 177 | let(:partition_assignment_strategy) { nil } 178 | 179 | # NOTE: just verify setting works, as its a bit cumbersome to do in a unit spec 180 | [ 'range', 'round_robin', 'sticky', 'org.apache.kafka.clients.consumer.CooperativeStickyAssignor' ].each do |partition_assignment_strategy| 181 | describe partition_assignment_strategy do 182 | let(:partition_assignment_strategy) { partition_assignment_strategy } 183 | it 'consumes data' do 184 | consume_messages(consumer_config, timeout: false, event_count: 0) 185 | end 186 | end 187 | end 188 | end 189 | 190 | context "static membership 'group.instance.id' setting" do 191 | let(:base_config) do 192 | { 193 | "topics" => ["logstash_integration_static_membership_topic"], 194 | "group_id" => "logstash", 195 | "consumer_threads" => 1, 196 | # this is needed because the worker thread could be executed little after the producer sent the "up" message 197 | "auto_offset_reset" => "earliest", 198 | "group_instance_id" => "test_static_group_id" 199 | } 200 | end 201 | let(:consumer_config) { base_config } 202 | let(:logger) { double("logger") } 203 | let(:queue) { java.util.concurrent.ArrayBlockingQueue.new(10) } 204 | let(:kafka_input) { LogStash::Inputs::Kafka.new(consumer_config) } 205 | before :each do 206 | allow(LogStash::Inputs::Kafka).to receive(:logger).and_return(logger) 207 | [:error, :warn, :info, :debug].each do |level| 208 | allow(logger).to receive(level) 
209 | end 210 | 211 | kafka_input.register 212 | end 213 | 214 | it "input plugin disconnects from the broker when another client with same static membership connects" do 215 | expect(logger).to receive(:error).with("Another consumer with same group.instance.id has connected", anything) 216 | 217 | input_worker = java.lang.Thread.new { kafka_input.run(queue) } 218 | begin 219 | input_worker.start 220 | wait_kafka_input_is_ready("logstash_integration_static_membership_topic", queue) 221 | saboteur_kafka_consumer = create_consumer_and_start_consuming("test_static_group_id") 222 | saboteur_kafka_consumer.run # ask to be scheduled 223 | saboteur_kafka_consumer.join 224 | 225 | expect(saboteur_kafka_consumer.value).to eq("saboteur exited") 226 | ensure 227 | input_worker.join(30_000) 228 | end 229 | end 230 | 231 | context "when the plugin is configured with multiple consumer threads" do 232 | let(:consumer_config) { base_config.merge({"consumer_threads" => 2}) } 233 | 234 | it "should avoid to connect with same 'group.instance.id'" do 235 | expect(logger).to_not receive(:error).with("Another consumer with same group.instance.id has connected", anything) 236 | 237 | input_worker = java.lang.Thread.new { kafka_input.run(queue) } 238 | begin 239 | input_worker.start 240 | wait_kafka_input_is_ready("logstash_integration_static_membership_topic", queue) 241 | ensure 242 | kafka_input.stop 243 | input_worker.join(1_000) 244 | end 245 | end 246 | end 247 | end 248 | end 249 | 250 | # return consumer Ruby Thread 251 | def create_consumer_and_start_consuming(static_group_id) 252 | props = java.util.Properties.new 253 | kafka = org.apache.kafka.clients.consumer.ConsumerConfig 254 | props.put(kafka::BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") 255 | props.put(kafka::KEY_DESERIALIZER_CLASS_CONFIG, LogStash::Inputs::Kafka::DEFAULT_DESERIALIZER_CLASS) 256 | props.put(kafka::VALUE_DESERIALIZER_CLASS_CONFIG, LogStash::Inputs::Kafka::DEFAULT_DESERIALIZER_CLASS) 257 | props.put(kafka::GROUP_ID_CONFIG, "logstash") 258 | props.put(kafka::GROUP_INSTANCE_ID_CONFIG, static_group_id) 259 | consumer = org.apache.kafka.clients.consumer.KafkaConsumer.new(props) 260 | 261 | Thread.new do 262 | LogStash::Util::set_thread_name("integration_test_simple_consumer") 263 | begin 264 | consumer.subscribe(["logstash_integration_static_membership_topic"]) 265 | records = consumer.poll(java.time.Duration.ofSeconds(3)) 266 | "saboteur exited" 267 | rescue => e 268 | e # return the exception reached in thread.value 269 | ensure 270 | consumer.close 271 | end 272 | end 273 | end 274 | 275 | private 276 | 277 | def wait_kafka_input_is_ready(topic, queue) 278 | # this is needed to give time to the kafka input to be up and running 279 | header = org.apache.kafka.common.header.internals.RecordHeader.new("name", "Ping Up".to_java_bytes) 280 | record = org.apache.kafka.clients.producer.ProducerRecord.new(topic, 0, "key", "value", [header]) 281 | send_message(record) 282 | 283 | # Wait the message is processed 284 | message = queue.poll(1, java.util.concurrent.TimeUnit::MINUTES) 285 | expect(message).to_not eq(nil) 286 | end 287 | 288 | def consume_messages(config, queue: Queue.new, timeout:, event_count:) 289 | kafka_input = LogStash::Inputs::Kafka.new(config) 290 | kafka_input.register 291 | t = Thread.new { kafka_input.run(queue) } 292 | begin 293 | t.run 294 | wait(timeout).for { queue.length }.to eq(event_count) unless timeout.eql?(false) 295 | block_given? ? 
yield(queue, kafka_input) : queue 296 | ensure 297 | kafka_input.do_stop 298 | t.kill 299 | t.join(30) 300 | end 301 | end 302 | 303 | 304 | describe "schema registry connection options", :integration => true do 305 | schema_registry = Manticore::Client.new 306 | before (:all) do 307 | shutdown_schema_registry 308 | startup_schema_registry(schema_registry) 309 | end 310 | 311 | after(:all) do 312 | shutdown_schema_registry 313 | end 314 | 315 | context "remote endpoint validation" do 316 | it "should fail if not reachable" do 317 | config = {'schema_registry_url' => 'http://localnothost:8081'} 318 | kafka_input = LogStash::Inputs::Kafka.new(config) 319 | expect { kafka_input.register }.to raise_error LogStash::ConfigurationError, /Schema registry service doesn't respond.*/ 320 | end 321 | 322 | it "should fail if any topic is not matched by a subject on the schema registry" do 323 | config = { 324 | 'schema_registry_url' => 'http://localhost:8081', 325 | 'topics' => ['temperature_stream'] 326 | } 327 | 328 | kafka_input = LogStash::Inputs::Kafka.new(config) 329 | expect { kafka_input.register }.to raise_error LogStash::ConfigurationError, /The schema registry does not contain definitions for required topic subjects: \["temperature_stream-value"\]/ 330 | end 331 | 332 | context "register with subject present" do 333 | SUBJECT_NAME = "temperature_stream-value" 334 | 335 | before(:each) do 336 | response = save_avro_schema_to_schema_registry(File.join(Dir.pwd, "spec", "unit", "inputs", "avro_schema_fixture_payment.asvc"), SUBJECT_NAME) 337 | expect( response.code ).to be(200) 338 | end 339 | 340 | after(:each) do 341 | delete_remote_schema(schema_registry, SUBJECT_NAME) 342 | end 343 | 344 | it "should correctly complete registration phase" do 345 | config = { 346 | 'schema_registry_url' => 'http://localhost:8081', 347 | 'topics' => ['temperature_stream'] 348 | } 349 | kafka_input = LogStash::Inputs::Kafka.new(config) 350 | kafka_input.register 351 | end 352 | end 353 | end 354 | end 355 | 356 | def save_avro_schema_to_schema_registry(schema_file, subject_name, proto = 'http', port = 8081, manticore_options = {}) 357 | raw_schema = File.readlines(schema_file).map(&:chomp).join 358 | raw_schema_quoted = raw_schema.gsub('"', '\"') 359 | 360 | client = Manticore::Client.new(manticore_options) 361 | 362 | response = client.post("#{proto}://localhost:#{port}/subjects/#{subject_name}/versions", 363 | body: '{"schema": "' + raw_schema_quoted + '"}', 364 | headers: {"Content-Type" => "application/vnd.schemaregistry.v1+json"}) 365 | response 366 | end 367 | 368 | def delete_remote_schema(schema_registry_client, subject_name) 369 | expect(schema_registry_client.delete("http://localhost:8081/subjects/#{subject_name}").code ).to be(200) 370 | expect(schema_registry_client.delete("http://localhost:8081/subjects/#{subject_name}?permanent=true").code ).to be(200) 371 | end 372 | 373 | # AdminClientConfig = org.alpache.kafka.clients.admin.AdminClientConfig 374 | 375 | def startup_schema_registry(schema_registry, auth=false) 376 | system('./stop_schema_registry.sh') 377 | auth ? system('./start_auth_schema_registry.sh') : system('./start_schema_registry.sh') 378 | url = auth ? 
"http://barney:changeme@localhost:8081" : "http://localhost:8081" 379 | Stud.try(20.times, [Manticore::SocketException, StandardError, RSpec::Expectations::ExpectationNotMetError]) do 380 | expect(schema_registry.get(url).code).to eq(200) 381 | end 382 | end 383 | 384 | shared_examples 'it has endpoints available to' do |tls| 385 | let(:port) { tls ? 8083 : 8081 } 386 | let(:proto) { tls ? 'https' : 'http' } 387 | 388 | manticore_options = { 389 | :ssl => { 390 | :truststore => File.join(Dir.pwd, "tls_repository/clienttruststore.jks"), 391 | :truststore_password => "changeit" 392 | } 393 | } 394 | schema_registry = Manticore::Client.new(manticore_options) 395 | 396 | before(:all) do 397 | startup_schema_registry(schema_registry) 398 | end 399 | 400 | after(:all) do 401 | shutdown_schema_registry 402 | end 403 | 404 | context 'listing subject on clean instance' do 405 | it "should return an empty set" do 406 | subjects = JSON.parse schema_registry.get("#{proto}://localhost:#{port}/subjects").body 407 | expect( subjects ).to be_empty 408 | end 409 | end 410 | 411 | context 'send a schema definition' do 412 | it "save the definition" do 413 | response = save_avro_schema_to_schema_registry(File.join(Dir.pwd, "spec", "unit", "inputs", "avro_schema_fixture_payment.asvc"), "schema_test_1", proto, port, manticore_options) 414 | expect( response.code ).to be(200) 415 | delete_remote_schema(schema_registry, "schema_test_1") 416 | end 417 | 418 | it "delete the schema just added" do 419 | response = save_avro_schema_to_schema_registry(File.join(Dir.pwd, "spec", "unit", "inputs", "avro_schema_fixture_payment.asvc"), "schema_test_1", proto, port, manticore_options) 420 | expect( response.code ).to be(200) 421 | 422 | expect( schema_registry.delete("#{proto}://localhost:#{port}/subjects/schema_test_1?permanent=false").code ).to be(200) 423 | sleep(1) 424 | subjects = JSON.parse schema_registry.get("#{proto}://localhost:#{port}/subjects").body 425 | expect( subjects ).to be_empty 426 | end 427 | end 428 | end 429 | 430 | describe "Schema registry API", :integration => true do 431 | 432 | context "when exposed with HTTPS" do 433 | it_behaves_like 'it has endpoints available to', true 434 | end 435 | 436 | context "when exposed with plain HTTP" do 437 | it_behaves_like 'it has endpoints available to', false 438 | end 439 | end 440 | 441 | def shutdown_schema_registry 442 | system('./stop_schema_registry.sh') 443 | end 444 | 445 | describe "Deserializing with the schema registry", :integration => true do 446 | manticore_options = { 447 | :ssl => { 448 | :truststore => File.join(Dir.pwd, "tls_repository/clienttruststore.jks"), 449 | :truststore_password => "changeit" 450 | } 451 | } 452 | schema_registry = Manticore::Client.new(manticore_options) 453 | 454 | shared_examples 'it reads from a topic using a schema registry' do |with_auth| 455 | 456 | before(:all) do 457 | shutdown_schema_registry 458 | startup_schema_registry(schema_registry, with_auth) 459 | end 460 | 461 | after(:all) do 462 | shutdown_schema_registry 463 | end 464 | 465 | after(:each) do 466 | expect( schema_registry.delete("#{subject_url}/#{avro_topic_name}-value").code ).to be(200) 467 | sleep 1 468 | expect( schema_registry.delete("#{subject_url}/#{avro_topic_name}-value?permanent=true").code ).to be(200) 469 | 470 | Stud.try(3.times, [StandardError, RSpec::Expectations::ExpectationNotMetError]) do 471 | wait(10).for do 472 | subjects = JSON.parse schema_registry.get(subject_url).body 473 | subjects.empty? 
474 | end.to be_truthy 475 | end 476 | end 477 | 478 | let(:base_config) do 479 | { 480 | 'topics' => [avro_topic_name], 'group_id' => group_id_1, 'auto_offset_reset' => 'earliest' 481 | } 482 | end 483 | 484 | let(:group_id_1) {rand(36**8).to_s(36)} 485 | 486 | def delete_topic_if_exists(topic_name, user = nil, password = nil) 487 | props = java.util.Properties.new 488 | props.put(Java::org.apache.kafka.clients.admin.AdminClientConfig::BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") 489 | serdes_config = Java::io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig 490 | unless user.nil? 491 | props.put(serdes_config::BASIC_AUTH_CREDENTIALS_SOURCE, 'USER_INFO') 492 | props.put(serdes_config::USER_INFO_CONFIG, "#{user}:#{password}") 493 | end 494 | admin_client = org.apache.kafka.clients.admin.AdminClient.create(props) 495 | topics_list = admin_client.listTopics().names().get() 496 | if topics_list.contains(topic_name) 497 | result = admin_client.deleteTopics([topic_name]) 498 | result.values.get(topic_name).get() 499 | end 500 | end 501 | 502 | def write_some_data_to(topic_name, user = nil, password = nil) 503 | props = java.util.Properties.new 504 | config = org.apache.kafka.clients.producer.ProducerConfig 505 | 506 | serdes_config = Java::io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig 507 | props.put(serdes_config::SCHEMA_REGISTRY_URL_CONFIG, "http://localhost:8081") 508 | 509 | props.put(config::BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") 510 | unless user.nil? 511 | props.put(serdes_config::BASIC_AUTH_CREDENTIALS_SOURCE, 'USER_INFO') 512 | props.put(serdes_config::USER_INFO_CONFIG, "#{user}:#{password}") 513 | end 514 | props.put(config::KEY_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.java_class) 515 | props.put(config::VALUE_SERIALIZER_CLASS_CONFIG, Java::io.confluent.kafka.serializers.KafkaAvroSerializer.java_class) 516 | 517 | parser = org.apache.avro.Schema::Parser.new() 518 | user_schema = '''{"type":"record", 519 | "name":"myrecord", 520 | "fields":[ 521 | {"name":"str_field", "type": "string"}, 522 | {"name":"map_field", "type": {"type": "map", "values": "string"}} 523 | ]}''' 524 | schema = parser.parse(user_schema) 525 | avro_record = org.apache.avro.generic.GenericData::Record.new(schema) 526 | avro_record.put("str_field", "value1") 527 | avro_record.put("map_field", {"inner_field" => "inner value"}) 528 | 529 | producer = org.apache.kafka.clients.producer.KafkaProducer.new(props) 530 | record = org.apache.kafka.clients.producer.ProducerRecord.new(topic_name, "avro_key", avro_record) 531 | producer.send(record) 532 | end 533 | 534 | it "stored a new schema using Avro Kafka serdes" do 535 | auth ? delete_topic_if_exists(avro_topic_name, user, password) : delete_topic_if_exists(avro_topic_name) 536 | auth ? write_some_data_to(avro_topic_name, user, password) : write_some_data_to(avro_topic_name) 537 | 538 | subjects = JSON.parse schema_registry.get(subject_url).body 539 | expect( subjects ).to contain_exactly("#{avro_topic_name}-value") 540 | 541 | num_events = 1 542 | queue = consume_messages(plain_config, timeout: 30, event_count: num_events) 543 | expect(queue.length).to eq(num_events) 544 | elem = queue.pop 545 | expect( elem.to_hash).not_to include("message") 546 | expect( elem.get("str_field") ).to eq("value1") 547 | expect( elem.get("map_field")["inner_field"] ).to eq("inner value") 548 | end 549 | end 550 | 551 | shared_examples 'with an unauthed schema registry' do |tls| 552 | let(:port) { tls ? 
8083 : 8081 } 553 | let(:proto) { tls ? 'https' : 'http' } 554 | 555 | let(:auth) { false } 556 | let(:avro_topic_name) { "topic_avro" } 557 | let(:subject_url) { "#{proto}://localhost:#{port}/subjects" } 558 | let(:plain_config) { base_config.merge!({ 559 | 'schema_registry_url' => "#{proto}://localhost:#{port}", 560 | 'schema_registry_ssl_truststore_location' => File.join(Dir.pwd, "tls_repository/clienttruststore.jks"), 561 | 'schema_registry_ssl_truststore_password' => 'changeit', 562 | }) } 563 | 564 | it_behaves_like 'it reads from a topic using a schema registry', false 565 | end 566 | 567 | context 'with an unauthed schema registry' do 568 | context "accessed through HTTPS" do 569 | it_behaves_like 'with an unauthed schema registry', true 570 | end 571 | 572 | context "accessed through HTTPS" do 573 | it_behaves_like 'with an unauthed schema registry', false 574 | end 575 | end 576 | 577 | shared_examples 'with an authed schema registry' do |tls| 578 | let(:port) { tls ? 8083 : 8081 } 579 | let(:proto) { tls ? 'https' : 'http' } 580 | let(:auth) { true } 581 | let(:user) { "barney" } 582 | let(:password) { "changeme" } 583 | let(:avro_topic_name) { "topic_avro_auth" } 584 | let(:subject_url) { "#{proto}://#{user}:#{password}@localhost:#{port}/subjects" } 585 | let(:tls_base_config) do 586 | if tls 587 | base_config.merge({ 588 | 'schema_registry_ssl_truststore_location' => ::File.join(Dir.pwd, "tls_repository/clienttruststore.jks"), 589 | 'schema_registry_ssl_truststore_password' => 'changeit', 590 | }) 591 | else 592 | base_config 593 | end 594 | end 595 | 596 | context 'using schema_registry_key' do 597 | let(:plain_config) do 598 | tls_base_config.merge!({ 599 | 'schema_registry_url' => "#{proto}://localhost:#{port}", 600 | 'schema_registry_key' => user, 601 | 'schema_registry_secret' => password, 602 | }) 603 | end 604 | 605 | it_behaves_like 'it reads from a topic using a schema registry', true 606 | end 607 | 608 | context 'using schema_registry_url' do 609 | let(:plain_config) do 610 | tls_base_config.merge!({ 611 | 'schema_registry_url' => "#{proto}://#{user}:#{password}@localhost:#{port}", 612 | }) 613 | end 614 | 615 | it_behaves_like 'it reads from a topic using a schema registry', true 616 | end 617 | end 618 | 619 | context 'with an authed schema registry' do 620 | context "accessed through HTTPS" do 621 | it_behaves_like 'with an authed schema registry', true 622 | end 623 | 624 | context "accessed through HTTPS" do 625 | it_behaves_like 'with an authed schema registry', false 626 | end 627 | end 628 | end -------------------------------------------------------------------------------- /spec/integration/outputs/kafka_spec.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | 3 | require "logstash/devutils/rspec/spec_helper" 4 | require 'logstash/outputs/kafka' 5 | require 'json' 6 | require 'kafka' 7 | 8 | describe "outputs/kafka", :integration => true do 9 | let(:kafka_host) { 'localhost' } 10 | let(:kafka_port) { 9092 } 11 | let(:num_events) { 10 } 12 | 13 | let(:base_config) { {'client_id' => 'kafkaoutputspec'} } 14 | let(:message_content) do 15 | '"GET /scripts/netcat-webserver HTTP/1.1" 200 182 "-" "Mozilla/5.0 (compatible; EasouSpider; +http://www.easou.com/search/spider.html)"' 16 | end 17 | let(:event) do 18 | LogStash::Event.new({ 'message' => 19 | '183.60.215.50 - - [11/Sep/2014:22:00:00 +0000] ' + message_content, 20 | '@timestamp' => LogStash::Timestamp.at(0) 21 | }) 22 | end 23 | 24 | 
let(:kafka_client) { Kafka.new ["#{kafka_host}:#{kafka_port}"] } 25 | 26 | context 'when outputting messages serialized as String' do 27 | let(:test_topic) { 'logstash_integration_topic1' } 28 | let(:num_events) { 3 } 29 | 30 | before :each do 31 | # NOTE: the connections_max_idle_ms is irrelevant just testing that configuration works ... 32 | config = base_config.merge({"topic_id" => test_topic, "connections_max_idle_ms" => 540_000}) 33 | load_kafka_data(config) 34 | end 35 | 36 | it 'should have data integrity' do 37 | messages = fetch_messages(test_topic) 38 | 39 | expect(messages.size).to eq(num_events) 40 | messages.each do |m| 41 | expect(m.value).to eq(event.to_s) 42 | end 43 | end 44 | 45 | end 46 | 47 | context 'when outputting messages serialized as Byte Array' do 48 | let(:test_topic) { 'logstash_integration_topicbytearray' } 49 | let(:num_events) { 3 } 50 | 51 | before :each do 52 | config = base_config.merge( 53 | { 54 | "topic_id" => test_topic, 55 | "value_serializer" => 'org.apache.kafka.common.serialization.ByteArraySerializer' 56 | } 57 | ) 58 | load_kafka_data(config) 59 | end 60 | 61 | it 'should have data integrity' do 62 | messages = fetch_messages(test_topic) 63 | 64 | expect(messages.size).to eq(num_events) 65 | messages.each do |m| 66 | expect(m.value).to eq(event.to_s) 67 | end 68 | end 69 | 70 | end 71 | 72 | context 'when setting message_key' do 73 | let(:num_events) { 10 } 74 | let(:test_topic) { 'logstash_integration_topic2' } 75 | 76 | before :each do 77 | config = base_config.merge({"topic_id" => test_topic, "message_key" => "static_key"}) 78 | load_kafka_data(config) 79 | end 80 | 81 | it 'should send all events to one partition' do 82 | data0 = fetch_messages(test_topic, partition: 0) 83 | data1 = fetch_messages(test_topic, partition: 1) 84 | expect(data0.size == num_events || data1.size == num_events).to be true 85 | end 86 | end 87 | 88 | context 'when using gzip compression' do 89 | let(:test_topic) { 'logstash_integration_gzip_topic' } 90 | 91 | before :each do 92 | config = base_config.merge({"topic_id" => test_topic, "compression_type" => "gzip"}) 93 | load_kafka_data(config) 94 | end 95 | 96 | it 'should have data integrity' do 97 | messages = fetch_messages(test_topic) 98 | 99 | expect(messages.size).to eq(num_events) 100 | messages.each do |m| 101 | expect(m.value).to eq(event.to_s) 102 | end 103 | end 104 | end 105 | 106 | context 'when using snappy compression' do 107 | let(:test_topic) { 'logstash_integration_snappy_topic' } 108 | 109 | before :each do 110 | config = base_config.merge({"topic_id" => test_topic, "compression_type" => "snappy"}) 111 | load_kafka_data(config) 112 | end 113 | 114 | it 'should have data integrity' do 115 | messages = fetch_messages(test_topic) 116 | 117 | expect(messages.size).to eq(num_events) 118 | messages.each do |m| 119 | expect(m.value).to eq(event.to_s) 120 | end 121 | end 122 | end 123 | 124 | context 'when using LZ4 compression' do 125 | let(:test_topic) { 'logstash_integration_lz4_topic' } 126 | 127 | before :each do 128 | config = base_config.merge({"topic_id" => test_topic, "compression_type" => "lz4"}) 129 | load_kafka_data(config) 130 | end 131 | 132 | # NOTE: depends on extlz4 gem which is using a C-extension 133 | # it 'should have data integrity' do 134 | # messages = fetch_messages(test_topic) 135 | # 136 | # expect(messages.size).to eq(num_events) 137 | # messages.each do |m| 138 | # expect(m.value).to eq(event.to_s) 139 | # end 140 | # end 141 | end 142 | 143 | context 'when using zstd 
compression' do 144 | let(:test_topic) { 'logstash_integration_zstd_topic' } 145 | 146 | before :each do 147 | config = base_config.merge({"topic_id" => test_topic, "compression_type" => "zstd"}) 148 | load_kafka_data(config) 149 | end 150 | 151 | # NOTE: depends on zstd-ruby gem which is using a C-extension 152 | # it 'should have data integrity' do 153 | # messages = fetch_messages(test_topic) 154 | # 155 | # expect(messages.size).to eq(num_events) 156 | # messages.each do |m| 157 | # expect(m.value).to eq(event.to_s) 158 | # end 159 | # end 160 | end 161 | 162 | context 'when using multi partition topic' do 163 | let(:num_events) { 100 } # ~ more than (batch.size) 16,384 bytes 164 | let(:test_topic) { 'logstash_integration_topic3' } 165 | 166 | before :each do 167 | config = base_config.merge("topic_id" => test_topic, "partitioner" => 'org.apache.kafka.clients.producer.UniformStickyPartitioner') 168 | load_kafka_data(config) do # let's have a bit more (diverse) dataset 169 | num_events.times.collect do 170 | LogStash::Event.new.tap do |e| 171 | e.set('message', event.get('message').sub('183.60.215.50') { "#{rand(126)+1}.#{rand(126)+1}.#{rand(126)+1}.#{rand(126)+1}" }) 172 | end 173 | end 174 | end 175 | end 176 | 177 | it 'should distribute events to all partitions' do 178 | consumer0_records = fetch_messages(test_topic, partition: 0) 179 | consumer1_records = fetch_messages(test_topic, partition: 1) 180 | consumer2_records = fetch_messages(test_topic, partition: 2) 181 | 182 | all_records = consumer0_records + consumer1_records + consumer2_records 183 | expect(all_records.size).to eq(num_events * 2) 184 | all_records.each do |m| 185 | expect(m.value).to include message_content 186 | end 187 | 188 | expect(consumer0_records.size).to be > 1 189 | expect(consumer1_records.size).to be > 1 190 | expect(consumer2_records.size).to be > 1 191 | end 192 | end 193 | 194 | context 'when setting message_headers' do 195 | let(:num_events) { 10 } 196 | let(:test_topic) { 'logstash_integration_topic4' } 197 | 198 | before :each do 199 | config = base_config.merge({"topic_id" => test_topic, "message_headers" => {"event_timestamp" => "%{@timestamp}"}}) 200 | load_kafka_data(config) 201 | end 202 | 203 | it 'messages should contain headers' do 204 | messages = fetch_messages(test_topic) 205 | 206 | expect(messages.size).to eq(num_events) 207 | messages.each do |m| 208 | expect(m.headers).to eq({"event_timestamp" => LogStash::Timestamp.at(0).to_s}) 209 | end 210 | end 211 | end 212 | 213 | context 'setting partitioner' do 214 | let(:test_topic) { 'logstash_integration_partitioner_topic' } 215 | let(:partitioner) { nil } 216 | 217 | before :each do 218 | @messages_offset = fetch_messages_from_all_partitions 219 | 220 | config = base_config.merge("topic_id" => test_topic, 'partitioner' => partitioner) 221 | load_kafka_data(config) 222 | end 223 | 224 | [ 'default', 'round_robin', 'uniform_sticky' ].each do |partitioner| 225 | describe partitioner do 226 | let(:partitioner) { partitioner } 227 | it 'loads data' do 228 | expect(fetch_messages_from_all_partitions - @messages_offset).to eql num_events 229 | end 230 | end 231 | end 232 | 233 | def fetch_messages_from_all_partitions 234 | 3.times.map { |i| fetch_messages(test_topic, partition: i).size }.sum 235 | end 236 | end 237 | 238 | def load_kafka_data(config) 239 | kafka = LogStash::Outputs::Kafka.new(config) 240 | kafka.register 241 | kafka.multi_receive(num_events.times.collect { event }) 242 | kafka.multi_receive(Array(yield)) if block_given? 
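# NOTE: the optional block above lets a caller supply its own batch of events (used by the
# multi-partition context to build a more diverse dataset); Array(yield) normalizes a single
# event or nil into an array before handing it to multi_receive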
243 | kafka.close 244 | end 245 | 246 | def fetch_messages(topic, partition: 0, offset: :earliest) 247 | kafka_client.fetch_messages(topic: topic, partition: partition, offset: offset) 248 | end 249 | 250 | end 251 | -------------------------------------------------------------------------------- /spec/unit/inputs/avro_schema_fixture_payment.asvc: -------------------------------------------------------------------------------- 1 | {"namespace": "io.confluent.examples.clients.basicavro", 2 | "type": "record", 3 | "name": "Payment", 4 | "fields": [ 5 | {"name": "id", "type": "string"}, 6 | {"name": "amount", "type": "double"} 7 | ] 8 | } 9 | -------------------------------------------------------------------------------- /spec/unit/inputs/kafka_spec.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "logstash/devutils/rspec/spec_helper" 3 | require "logstash/inputs/kafka" 4 | require "concurrent" 5 | 6 | 7 | describe LogStash::Inputs::Kafka do 8 | let(:common_config) { { 'topics' => ['logstash'] } } 9 | let(:config) { common_config } 10 | let(:consumer_double) { double(:consumer) } 11 | let(:needs_raise) { false } 12 | let(:payload) { 13 | 10.times.map do 14 | org.apache.kafka.clients.consumer.ConsumerRecord.new("logstash", 0, 0, "key", "value") 15 | end 16 | } 17 | subject { LogStash::Inputs::Kafka.new(config) } 18 | 19 | describe '#poll' do 20 | before do 21 | polled = false 22 | allow(consumer_double).to receive(:poll) do 23 | if polled 24 | [] 25 | else 26 | polled = true 27 | payload 28 | end 29 | end 30 | end 31 | 32 | it 'should poll' do 33 | expect(consumer_double).to receive(:poll) 34 | expect(subject.do_poll(consumer_double)).to eq(payload) 35 | end 36 | 37 | it 'should return nil if Kafka Exception is encountered' do 38 | expect(consumer_double).to receive(:poll).and_raise(org.apache.kafka.common.errors.TopicAuthorizationException.new('')) 39 | expect(subject.do_poll(consumer_double)).to be_empty 40 | end 41 | 42 | it 'should not throw if Kafka Exception is encountered' do 43 | expect(consumer_double).to receive(:poll).and_raise(org.apache.kafka.common.errors.TopicAuthorizationException.new('')) 44 | expect{subject.do_poll(consumer_double)}.not_to raise_error 45 | end 46 | 47 | it 'should return no records if Assertion Error is encountered' do 48 | expect(consumer_double).to receive(:poll).and_raise(java.lang.AssertionError.new('')) 49 | expect{subject.do_poll(consumer_double)}.to raise_error(java.lang.AssertionError) 50 | end 51 | end 52 | 53 | describe '#maybe_commit_offset' do 54 | context 'with auto commit disabled' do 55 | let(:config) { common_config.merge('enable_auto_commit' => false) } 56 | 57 | it 'should call commit on the consumer' do 58 | expect(consumer_double).to receive(:commitSync) 59 | subject.maybe_commit_offset(consumer_double) 60 | end 61 | it 'should not throw if a Kafka Exception is encountered' do 62 | expect(consumer_double).to receive(:commitSync).and_raise(org.apache.kafka.common.errors.TopicAuthorizationException.new('')) 63 | expect{subject.maybe_commit_offset(consumer_double)}.not_to raise_error 64 | end 65 | 66 | it 'should throw if Assertion Error is encountered' do 67 | expect(consumer_double).to receive(:commitSync).and_raise(java.lang.AssertionError.new('')) 68 | expect{subject.maybe_commit_offset(consumer_double)}.to raise_error(java.lang.AssertionError) 69 | end 70 | end 71 | 72 | context 'with auto commit enabled' do 73 | let(:config) { common_config.merge('enable_auto_commit' 
=> true) } 74 | 75 | it 'should not call commit on the consumer' do 76 | expect(consumer_double).not_to receive(:commitSync) 77 | subject.maybe_commit_offset(consumer_double) 78 | end 79 | end 80 | end 81 | 82 | describe '#register' do 83 | it "should register" do 84 | expect { subject.register }.to_not raise_error 85 | end 86 | 87 | context "when the deprecated `default` is specified" do 88 | let(:config) { common_config.merge('client_dns_lookup' => 'default') } 89 | 90 | it 'should fallback `client_dns_lookup` to `use_all_dns_ips`' do 91 | subject.register 92 | 93 | expect(subject.client_dns_lookup).to eq('use_all_dns_ips') 94 | end 95 | end 96 | end 97 | 98 | describe '#running' do 99 | let(:q) { Queue.new } 100 | let(:config) { common_config.merge('client_id' => 'test') } 101 | 102 | before do 103 | expect(subject).to receive(:create_consumer).once.and_return(consumer_double) 104 | allow(consumer_double).to receive(:wakeup) 105 | allow(consumer_double).to receive(:close) 106 | allow(consumer_double).to receive(:subscribe) 107 | end 108 | 109 | context 'when running' do 110 | before do 111 | polled = false 112 | allow(consumer_double).to receive(:poll) do 113 | if polled 114 | [] 115 | else 116 | polled = true 117 | payload 118 | end 119 | end 120 | 121 | subject.register 122 | t = Thread.new do 123 | sleep(1) 124 | subject.do_stop 125 | end 126 | subject.run(q) 127 | t.join 128 | end 129 | 130 | it 'should process the correct number of events' do 131 | expect(q.size).to eq(10) 132 | end 133 | 134 | it 'should set the consumer thread name' do 135 | expect(subject.instance_variable_get('@runner_threads').first.get_name).to eq("kafka-input-worker-test-0") 136 | end 137 | 138 | context 'with records value frozen' do 139 | # boolean, module name & nil .to_s are frozen by default (https://bugs.ruby-lang.org/issues/16150) 140 | let(:payload) do [ 141 | org.apache.kafka.clients.consumer.ConsumerRecord.new("logstash", 0, 0, "nil", nil), 142 | org.apache.kafka.clients.consumer.ConsumerRecord.new("logstash", 0, 0, "true", true), 143 | org.apache.kafka.clients.consumer.ConsumerRecord.new("logstash", 0, 0, "false", false), 144 | org.apache.kafka.clients.consumer.ConsumerRecord.new("logstash", 0, 0, "frozen", "".freeze) 145 | ] 146 | end 147 | 148 | it "should process events" do 149 | expect(q.size).to eq(4) 150 | end 151 | end 152 | end 153 | 154 | context 'when errors are encountered during poll' do 155 | before do 156 | raised, polled = false 157 | allow(consumer_double).to receive(:poll) do 158 | unless raised 159 | raised = true 160 | raise exception 161 | end 162 | if polled 163 | [] 164 | else 165 | polled = true 166 | payload 167 | end 168 | end 169 | 170 | subject.register 171 | t = Thread.new do 172 | sleep 2 173 | subject.do_stop 174 | end 175 | subject.run(q) 176 | t.join 177 | end 178 | 179 | context "when a Kafka exception is raised" do 180 | let(:exception) { org.apache.kafka.common.errors.TopicAuthorizationException.new('Invalid topic') } 181 | 182 | it 'should poll successfully' do 183 | expect(q.size).to eq(10) 184 | end 185 | end 186 | 187 | context "when a StandardError is raised" do 188 | let(:exception) { StandardError.new('Standard Error') } 189 | 190 | it 'should retry and poll successfully' do 191 | expect(q.size).to eq(10) 192 | end 193 | end 194 | 195 | context "when a java error is raised" do 196 | let(:exception) { java.lang.AssertionError.new('Fatal assertion') } 197 | 198 | it "should not retry" do 199 | expect(q.size).to eq(0) 200 | end 201 | end 202 | end 203 | end 
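# Several of the contexts below exercise the option-to-consumer-property mapping performed by
# create_consumer: snake_case plugin options are passed to the Kafka client as dotted keys with
# values coerced to strings, e.g. (illustrative) 'session_timeout_ms' => 25000 becomes
# 'session.timeout.ms' => '25000'.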
204 | 205 | it 'uses plain codec by default' do 206 | expect( subject.codec ).to respond_to :decode 207 | expect( subject.codec.class ).to be LogStash::Codecs::Plain 208 | end 209 | 210 | context 'with codec option' do 211 | 212 | let(:config) { super().merge 'codec' => 'line' } 213 | 214 | it 'uses specified codec' do 215 | expect( subject.codec ).to respond_to :decode 216 | expect( subject.codec.class ).to be LogStash::Codecs::Line 217 | end 218 | 219 | end 220 | 221 | context 'when oauth is configured' do 222 | let(:config) { super().merge( 223 | 'security_protocol' => 'SASL_PLAINTEXT', 224 | 'sasl_mechanism' => 'OAUTHBEARER', 225 | 'sasl_oauthbearer_token_endpoint_url' => 'https://auth.example.com/token', 226 | 'sasl_oauthbearer_scope_claim_name' => 'custom_scope' 227 | )} 228 | 229 | it "sets oauth properties" do 230 | expect(org.apache.kafka.clients.consumer.KafkaConsumer). 231 | to receive(:new).with(hash_including( 232 | 'security.protocol' => 'SASL_PLAINTEXT', 233 | 'sasl.mechanism' => 'OAUTHBEARER', 234 | 'sasl.oauthbearer.token.endpoint.url' => 'https://auth.example.com/token', 235 | 'sasl.oauthbearer.scope.claim.name' => 'custom_scope' 236 | )).and_return(kafka_client = double('kafka-consumer')) 237 | 238 | expect(subject.send(:create_consumer, 'test-client-1', 'group_instance_id')).to be kafka_client 239 | end 240 | end 241 | 242 | context 'when sasl is configured' do 243 | let(:config) { super().merge( 244 | 'security_protocol' => 'SASL_PLAINTEXT', 245 | 'sasl_mechanism' => 'OAUTHBEARER', 246 | 'sasl_login_connect_timeout_ms' => 15000, 247 | 'sasl_login_read_timeout_ms' => 5000, 248 | 'sasl_login_retry_backoff_ms' => 200, 249 | 'sasl_login_retry_backoff_max_ms' => 15000, 250 | 'sasl_login_callback_handler_class' => 'org.example.CustomLoginHandler' 251 | )} 252 | 253 | it "sets sasl login properties" do 254 | expect(org.apache.kafka.clients.consumer.KafkaConsumer). 
255 | to receive(:new).with(hash_including( 256 | 'security.protocol' => 'SASL_PLAINTEXT', 257 | 'sasl.mechanism' => 'OAUTHBEARER', 258 | 'sasl.login.connect.timeout.ms' => '15000', 259 | 'sasl.login.read.timeout.ms' => '5000', 260 | 'sasl.login.retry.backoff.ms' => '200', 261 | 'sasl.login.retry.backoff.max.ms' => '15000', 262 | 'sasl.login.callback.handler.class' => 'org.example.CustomLoginHandler' 263 | )).and_return(kafka_client = double('kafka-consumer')) 264 | 265 | expect(subject.send(:create_consumer, 'test-client-2', 'group_instance_id')).to be kafka_client 266 | end 267 | end 268 | 269 | describe "schema registry" do 270 | let(:base_config) do { 271 | 'schema_registry_url' => 'http://localhost:8081', 272 | 'topics' => ['logstash'], 273 | 'consumer_threads' => 4 274 | } 275 | end 276 | 277 | context "schema_registry_url" do 278 | let(:config) { base_config } 279 | 280 | it "conflict with value_deserializer_class should fail" do 281 | config['value_deserializer_class'] = 'my.fantasy.Deserializer' 282 | expect { subject.register }.to raise_error LogStash::ConfigurationError, /Option schema_registry_url prohibit the customization of value_deserializer_class/ 283 | end 284 | 285 | it "conflict with topics_pattern should fail" do 286 | config['topics_pattern'] = 'topic_.*' 287 | expect { subject.register }.to raise_error LogStash::ConfigurationError, /Option schema_registry_url prohibit the customization of topics_pattern/ 288 | end 289 | 290 | it 'switches default codec to json' do 291 | expect( subject.codec ).to respond_to :decode 292 | expect( subject.codec.class ).to be LogStash::Codecs::JSON 293 | end 294 | end 295 | 296 | context 'when kerberos auth is used' do 297 | ['SASL_SSL', 'SASL_PLAINTEXT'].each do |protocol| 298 | context "with #{protocol}" do 299 | ['auto', 'skip'].each do |vsr| 300 | context "when validata_schema_registry is #{vsr}" do 301 | let(:config) { base_config.merge({'security_protocol' => protocol, 'schema_registry_validation' => vsr}) } 302 | 303 | it 'skips verification' do 304 | expect(subject).not_to receive(:check_for_schema_registry_connectivity_and_subjects) 305 | expect { subject.register }.not_to raise_error 306 | end 307 | end 308 | end 309 | end 310 | end 311 | end 312 | 313 | context 'when kerberos auth is not used' do 314 | context "when skip_verify is set to auto" do 315 | let(:config) { base_config.merge({'schema_registry_validation' => 'auto'})} 316 | it 'performs verification' do 317 | expect(subject).to receive(:check_for_schema_registry_connectivity_and_subjects) 318 | expect { subject.register }.not_to raise_error 319 | end 320 | end 321 | 322 | context "when skip_verify is set to default" do 323 | let(:config) { base_config } 324 | it 'performs verification' do 325 | expect(subject).to receive(:check_for_schema_registry_connectivity_and_subjects) 326 | expect { subject.register }.not_to raise_error 327 | end 328 | end 329 | 330 | context "when skip_verify is set to skip" do 331 | let(:config) { base_config.merge({'schema_registry_validation' => 'skip'})} 332 | it 'should skip verification' do 333 | expect(subject).not_to receive(:check_for_schema_registry_connectivity_and_subjects) 334 | expect { subject.register }.not_to raise_error 335 | end 336 | end 337 | end 338 | end 339 | 340 | context "decorate_events" do 341 | let(:config) { { 'decorate_events' => 'extended'} } 342 | 343 | it "should raise error for invalid value" do 344 | config['decorate_events'] = 'avoid' 345 | expect { subject.register }.to raise_error 
346 |     end
347 |
348 |     it "should map old true boolean value to :record_props mode" do
349 |       config['decorate_events'] = "true"
350 |       subject.register
351 |       expect(subject.metadata_mode).to include(:record_props)
352 |     end
353 |
354 |     context "guards against nil header" do
355 |       let(:header) { double(:value => nil, :key => "k") }
356 |       let(:headers) { [ header ] }
357 |       let(:record) { double(:headers => headers, :topic => "topic", :partition => 0,
358 |                             :offset => 123456789, :key => "someId", :timestamp => nil ) }
359 |
360 |       it "does not raise error when key is nil" do
361 |         subject.register
362 |         evt = LogStash::Event.new('message' => 'Hello')
363 |         expect { subject.maybe_set_metadata(evt, record) }.not_to raise_error
364 |       end
365 |     end
366 |   end
367 |
368 |   context 'with client_rack' do
369 |     let(:config) { super().merge('client_rack' => 'EU-R1') }
370 |
371 |     it "sets broker rack parameter" do
372 |       expect(org.apache.kafka.clients.consumer.KafkaConsumer).
373 |         to receive(:new).with(hash_including('client.rack' => 'EU-R1')).
374 |         and_return kafka_client = double('kafka-consumer')
375 |
376 |       expect( subject.send(:create_consumer, 'sample_client-0', 'group_instance_id') ).to be kafka_client
377 |     end
378 |   end
379 |
380 |   context 'string integer config' do
381 |     let(:config) { super().merge('session_timeout_ms' => '25000', 'max_poll_interval_ms' => '345000') }
382 |
383 |     it "sets integer values" do
384 |       expect(org.apache.kafka.clients.consumer.KafkaConsumer).
385 |         to receive(:new).with(hash_including('session.timeout.ms' => '25000', 'max.poll.interval.ms' => '345000')).
386 |         and_return kafka_client = double('kafka-consumer')
387 |
388 |       expect( subject.send(:create_consumer, 'sample_client-1', 'group_instance_id') ).to be kafka_client
389 |     end
390 |   end
391 |
392 |   context 'integer config' do
393 |     let(:config) { super().merge('session_timeout_ms' => 25200, 'max_poll_interval_ms' => 123_000) }
394 |
395 |     it "sets integer values" do
396 |       expect(org.apache.kafka.clients.consumer.KafkaConsumer).
397 |         to receive(:new).with(hash_including('session.timeout.ms' => '25200', 'max.poll.interval.ms' => '123000')).
398 |         and_return kafka_client = double('kafka-consumer')
399 |
400 |       expect( subject.send(:create_consumer, 'sample_client-2', 'group_instance_id') ).to be kafka_client
401 |     end
402 |   end
403 |
404 |   context 'string boolean config' do
405 |     let(:config) { super().merge('enable_auto_commit' => 'false', 'check_crcs' => 'true') }
406 |
407 |     it "sets parameters" do
408 |       expect(org.apache.kafka.clients.consumer.KafkaConsumer).
409 |         to receive(:new).with(hash_including('enable.auto.commit' => 'false', 'check.crcs' => 'true')).
410 |         and_return kafka_client = double('kafka-consumer')
411 |
412 |       expect( subject.send(:create_consumer, 'sample_client-3', 'group_instance_id') ).to be kafka_client
413 |       expect( subject.enable_auto_commit ).to be false
414 |     end
415 |   end
416 |
417 |   context 'boolean config' do
418 |     let(:config) { super().merge('enable_auto_commit' => true, 'check_crcs' => false) }
419 |
420 |     it "sets parameters" do
421 |       expect(org.apache.kafka.clients.consumer.KafkaConsumer).
422 |         to receive(:new).with(hash_including('enable.auto.commit' => 'true', 'check.crcs' => 'false')).
423 |         and_return kafka_client = double('kafka-consumer')
424 |
425 |       expect( subject.send(:create_consumer, 'sample_client-4', 'group_instance_id') ).to be kafka_client
426 |       expect( subject.enable_auto_commit ).to be true
427 |     end
428 |   end
429 | end
430 |
--------------------------------------------------------------------------------
/spec/unit/outputs/kafka_spec.rb:
--------------------------------------------------------------------------------
1 | # encoding: utf-8
2 | require "logstash/devutils/rspec/spec_helper"
3 | require 'logstash/outputs/kafka'
4 | require 'json'
5 |
6 | describe "outputs/kafka" do
7 |   let (:simple_kafka_config) {{'topic_id' => 'test'}}
8 |   let (:event) { LogStash::Event.new({'message' => 'hello', 'topic_name' => 'my_topic', 'host' => '172.0.0.1',
9 |                                       '@timestamp' => LogStash::Timestamp.now}) }
10 |
11 |   let(:future) { double('kafka producer future') }
12 |   subject { LogStash::Outputs::Kafka.new(config) }
13 |
14 |   context 'when initializing' do
15 |     it "should register" do
16 |       output = LogStash::Plugin.lookup("output", "kafka").new(simple_kafka_config)
17 |       expect {output.register}.to_not raise_error
18 |     end
19 |
20 |     it 'should populate kafka config with default values' do
21 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
22 |       expect(kafka.bootstrap_servers).to eql 'localhost:9092'
23 |       expect(kafka.topic_id).to eql 'test'
24 |       expect(kafka.key_serializer).to eql 'org.apache.kafka.common.serialization.StringSerializer'
25 |     end
26 |
27 |     it 'should fallback `client_dns_lookup` to `use_all_dns_ips` when the deprecated `default` is specified' do
28 |       simple_kafka_config["client_dns_lookup"] = 'default'
29 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
30 |       kafka.register
31 |
32 |       expect(kafka.client_dns_lookup).to eq('use_all_dns_ips')
33 |     end
34 |   end
35 |
36 |   context 'when outputting messages' do
37 |     it 'should send logstash event to kafka broker' do
38 |       expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).
39 |         with(an_instance_of(org.apache.kafka.clients.producer.ProducerRecord))
40 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
41 |       kafka.register
42 |       kafka.multi_receive([event])
43 |     end
44 |
45 |     it 'should support Event#sprintf placeholders in topic_id' do
46 |       topic_field = 'topic_name'
47 |       expect(org.apache.kafka.clients.producer.ProducerRecord).to receive(:new).
48 |         with("my_topic", event.to_s).and_call_original
49 |       expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
50 |       kafka = LogStash::Outputs::Kafka.new({'topic_id' => "%{#{topic_field}}"})
51 |       kafka.register
52 |       kafka.multi_receive([event])
53 |     end
54 |
55 |     it 'should support field referenced message_keys' do
56 |       expect(org.apache.kafka.clients.producer.ProducerRecord).to receive(:new).
57 |         with("test", "172.0.0.1", event.to_s).and_call_original
58 |       expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
59 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge({"message_key" => "%{host}"}))
60 |       kafka.register
61 |       kafka.multi_receive([event])
62 |     end
63 |
64 |     it 'should support field referenced message_headers' do
65 |       expect(org.apache.kafka.clients.producer.ProducerRecord).to receive(:new).
66 |         with("test", event.to_s).and_call_original
67 |       expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
68 |       expect_any_instance_of(org.apache.kafka.common.header.internals.RecordHeaders).to receive(:add).with("host","172.0.0.1".to_java_bytes).and_call_original
69 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge({"message_headers" => { "host" => "%{host}"}}))
70 |       kafka.register
71 |       kafka.multi_receive([event])
72 |     end
73 |
74 |     it 'should not raise config error when truststore location is not set and ssl is enabled' do
75 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("security_protocol" => "SSL"))
76 |       expect(org.apache.kafka.clients.producer.KafkaProducer).to receive(:new)
77 |       expect { kafka.register }.to_not raise_error
78 |     end
79 |   end
80 |
81 |   context "when KafkaProducer#send() raises a retriable exception" do
82 |     let(:failcount) { (rand * 10).to_i }
83 |     let(:sendcount) { failcount + 1 }
84 |
85 |     let(:exception_classes) { [
86 |       org.apache.kafka.common.errors.TimeoutException,
87 |       org.apache.kafka.common.errors.DisconnectException,
88 |       org.apache.kafka.common.errors.CoordinatorNotAvailableException,
89 |       org.apache.kafka.common.errors.InterruptException,
90 |     ] }
91 |
92 |     before do
93 |       count = 0
94 |       expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send)
95 |         .exactly(sendcount).times do
96 |         if count < failcount # fail 'failcount' times in a row.
97 |           count += 1
98 |           # Pick an exception at random
99 |           raise exception_classes.shuffle.first.new("injected exception for testing")
100 |         else
101 |           count = :done
102 |           future # return future
103 |         end
104 |       end
105 |       expect(future).to receive :get
106 |     end
107 |
108 |     it "should retry until successful" do
109 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
110 |       kafka.register
111 |       kafka.multi_receive([event])
112 |       sleep(1.0) # allow for future.get call
113 |     end
114 |   end
115 |
116 |   context "when KafkaProducer#send() raises a non-retriable exception" do
117 |     let(:failcount) { (rand * 10).to_i }
118 |
119 |     let(:exception_classes) { [
120 |       org.apache.kafka.common.errors.SerializationException,
121 |       org.apache.kafka.common.errors.RecordTooLargeException,
122 |       org.apache.kafka.common.errors.InvalidTopicException
123 |     ] }
124 |
125 |     before do
126 |       count = 0
127 |       expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).exactly(1).times do
128 |         if count < failcount # fail 'failcount' times in a row.
129 |           count += 1
130 |           # Pick an exception at random
131 |           raise exception_classes.shuffle.first.new("injected exception for testing")
132 |         else
133 |           fail 'unexpected producer#send invocation'
134 |         end
135 |       end
136 |     end
137 |
138 |     it "should not retry" do
139 |       kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
140 |       kafka.register
141 |       kafka.multi_receive([event])
142 |     end
143 |   end
144 |
145 |   context "when a send fails" do
146 |     context "and the default retries behavior is used" do
147 |       # Fail this many times and then finally succeed.
148 |       let(:failcount) { (rand * 10).to_i }
149 |
150 |       # Expect KafkaProducer.send() to get called again after every failure, plus the successful one.
151 |       let(:sendcount) { failcount + 1 }
152 |
153 |       it "should retry until successful" do
154 |         count = 0
155 |         success = nil
156 |         expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).exactly(sendcount).times do
157 |           if count < failcount
158 |             count += 1
159 |             # inject some failures.
160 |
161 |             # Return a custom Future that will raise an exception to simulate a Kafka send() problem.
162 |             future = java.util.concurrent.FutureTask.new { raise org.apache.kafka.common.errors.TimeoutException.new("Failed") }
163 |           else
164 |             success = true
165 |             future = java.util.concurrent.FutureTask.new { nil } # return no-op future
166 |           end
167 |           future.tap { Thread.start { future.run } }
168 |         end
169 |         kafka = LogStash::Outputs::Kafka.new(simple_kafka_config)
170 |         kafka.register
171 |         kafka.multi_receive([event])
172 |         expect( success ).to be true
173 |       end
174 |     end
175 |
176 |     context 'when retries is 0' do
177 |       let(:retries) { 0 }
178 |       let(:max_sends) { 1 }
179 |
180 |       it "should only send once" do
181 |         expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).once do
182 |           # Always fail.
183 |           future = java.util.concurrent.FutureTask.new { raise org.apache.kafka.common.errors.TimeoutException.new("Failed") }
184 |           future.run
185 |           future
186 |         end
187 |         kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("retries" => retries))
188 |         kafka.register
189 |         kafka.multi_receive([event])
190 |       end
191 |
192 |       it 'should not sleep' do
193 |         expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).once do
194 |           # Always fail.
195 |           future = java.util.concurrent.FutureTask.new { raise org.apache.kafka.common.errors.TimeoutException.new("Failed") }
196 |           future.run
197 |           future
198 |         end
199 |
200 |         kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("retries" => retries))
201 |         expect(kafka).not_to receive(:sleep).with(anything)
202 |         kafka.register
203 |         kafka.multi_receive([event])
204 |       end
205 |     end
206 |
207 |     context "and when retries is set by the user" do
208 |       let(:retries) { (rand * 10).to_i }
209 |       let(:max_sends) { retries + 1 }
210 |
211 |       it "should give up after retries are exhausted" do
212 |         expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).at_most(max_sends).times do
213 |           # Always fail.
214 |           future = java.util.concurrent.FutureTask.new { raise org.apache.kafka.common.errors.TimeoutException.new("Failed") }
215 |           future.tap { Thread.start { future.run } }
216 |         end
217 |         kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("retries" => retries))
218 |         kafka.register
219 |         kafka.multi_receive([event])
220 |       end
221 |
222 |       it 'should only sleep retries number of times' do
223 |         expect_any_instance_of(org.apache.kafka.clients.producer.KafkaProducer).to receive(:send).at_most(max_sends).times do
224 |           # Always fail.
225 |           future = java.util.concurrent.FutureTask.new { raise org.apache.kafka.common.errors.TimeoutException.new("Failed") }
226 |           future.run
227 |           future
228 |         end
229 |         kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("retries" => retries))
230 |         expect(kafka).to receive(:sleep).exactly(retries).times
231 |         kafka.register
232 |         kafka.multi_receive([event])
233 |       end
234 |     end
235 |     context 'when retries is -1' do
236 |       let(:retries) { -1 }
237 |
238 |       it "should raise a Configuration error" do
239 |         kafka = LogStash::Outputs::Kafka.new(simple_kafka_config.merge("retries" => retries))
240 |         expect { kafka.register }.to raise_error(LogStash::ConfigurationError)
241 |       end
242 |     end
243 |   end
244 |
245 |   describe "value_serializer" do
246 |     let(:output) { LogStash::Plugin.lookup("output", "kafka").new(config) }
247 |
248 |     context "when a random string is set" do
249 |       let(:config) { { "topic_id" => "random", "value_serializer" => "test_string" } }
250 |
251 |       it "raises a ConfigurationError" do
252 |         expect { output.register }.to raise_error(LogStash::ConfigurationError)
253 |       end
254 |     end
255 |   end
256 |
257 |   context 'when ssl endpoint identification disabled' do
258 |
259 |     let(:config) do
260 |       simple_kafka_config.merge(
261 |         'security_protocol' => 'SSL',
262 |         'ssl_endpoint_identification_algorithm' => '',
263 |         'ssl_truststore_location' => truststore_path,
264 |       )
265 |     end
266 |
267 |     let(:truststore_path) do
268 |       File.join(File.dirname(__FILE__), '../../fixtures/trust-store_stub.jks')
269 |     end
270 |
271 |     it 'sets empty ssl.endpoint.identification.algorithm' do
272 |       expect(org.apache.kafka.clients.producer.KafkaProducer).
273 |         to receive(:new).with(hash_including('ssl.endpoint.identification.algorithm' => ''))
274 |       subject.register
275 |     end
276 |
277 |     it 'configures truststore' do
278 |       expect(org.apache.kafka.clients.producer.KafkaProducer).
279 |         to receive(:new).with(hash_including('ssl.truststore.location' => truststore_path))
280 |       subject.register
281 |     end
282 |
283 |   end
284 |
285 |   context 'when oauth is configured' do
286 |     let(:config) {
287 |       simple_kafka_config.merge(
288 |         'security_protocol' => 'SASL_PLAINTEXT',
289 |         'sasl_mechanism' => 'OAUTHBEARER',
290 |         'sasl_oauthbearer_token_endpoint_url' => 'https://auth.example.com/token',
291 |         'sasl_oauthbearer_scope_claim_name' => 'custom_scope'
292 |       )
293 |     }
294 |
295 |     it "sets oauth properties" do
296 |       expect(org.apache.kafka.clients.producer.KafkaProducer).
297 |         to receive(:new).with(hash_including(
298 |           'security.protocol' => 'SASL_PLAINTEXT',
299 |           'sasl.mechanism' => 'OAUTHBEARER',
300 |           'sasl.oauthbearer.token.endpoint.url' => 'https://auth.example.com/token',
301 |           'sasl.oauthbearer.scope.claim.name' => 'custom_scope'
302 |         ))
303 |       subject.register
304 |     end
305 |   end
306 |
307 |   context 'when sasl is configured' do
308 |     let(:config) {
309 |       simple_kafka_config.merge(
310 |         'security_protocol' => 'SASL_PLAINTEXT',
311 |         'sasl_mechanism' => 'OAUTHBEARER',
312 |         'sasl_login_connect_timeout_ms' => 15000,
313 |         'sasl_login_read_timeout_ms' => 5000,
314 |         'sasl_login_retry_backoff_ms' => 200,
315 |         'sasl_login_retry_backoff_max_ms' => 15000,
316 |         'sasl_login_callback_handler_class' => 'org.example.CustomLoginHandler'
317 |       )
318 |     }
319 |
320 |     it "sets sasl login properties" do
321 |       expect(org.apache.kafka.clients.producer.KafkaProducer).
322 |         to receive(:new).with(hash_including(
323 |           'security.protocol' => 'SASL_PLAINTEXT',
324 |           'sasl.mechanism' => 'OAUTHBEARER',
325 |           'sasl.login.connect.timeout.ms' => '15000',
326 |           'sasl.login.read.timeout.ms' => '5000',
327 |           'sasl.login.retry.backoff.ms' => '200',
328 |           'sasl.login.retry.backoff.max.ms' => '15000',
329 |           'sasl.login.callback.handler.class' => 'org.example.CustomLoginHandler'
330 |         ))
331 |       subject.register
332 |     end
333 |   end
334 | end
335 |
--------------------------------------------------------------------------------
/start_auth_schema_registry.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Start a Schema Registry instance with authentication enabled for the integration tests
3 | set -ex
4 |
5 | echo "Starting authed SchemaRegistry"
6 | SCHEMA_REGISTRY_OPTS=-Djava.security.auth.login.config=build/confluent_platform/etc/schema-registry/jaas.config build/confluent_platform/bin/schema-registry-start build/confluent_platform/etc/schema-registry/authed-schema-registry.properties > /dev/null 2>&1 &
--------------------------------------------------------------------------------
/start_schema_registry.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Start a Schema Registry instance for the integration tests
3 | set -ex
4 |
5 | echo "Starting SchemaRegistry"
6 | build/confluent_platform/bin/schema-registry-start build/confluent_platform/etc/schema-registry/schema-registry.properties > /dev/null 2>&1 &
7 |
--------------------------------------------------------------------------------
/stop_schema_registry.sh:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 | # Stop the Schema Registry instance started for the integration tests
3 | set -ex
4 |
5 | echo "Stopping SchemaRegistry"
6 | build/confluent_platform/bin/schema-registry-stop
7 | sleep 5
--------------------------------------------------------------------------------