├── .gitignore ├── LICENSE ├── README.md ├── component ├── pom.xml └── src │ ├── main │ └── java │ │ └── io │ │ └── siddhi │ │ └── extension │ │ └── io │ │ └── kafka │ │ ├── Constants.java │ │ ├── KafkaIOUtils.java │ │ ├── metrics │ │ ├── Metrics.java │ │ ├── SinkMetrics.java │ │ └── SourceMetrics.java │ │ ├── multidc │ │ ├── sink │ │ │ └── KafkaMultiDCSink.java │ │ └── source │ │ │ ├── KafkaMultiDCSource.java │ │ │ └── SourceSynchronizer.java │ │ ├── sink │ │ ├── KafkaReplayRequestSink.java │ │ └── KafkaSink.java │ │ ├── source │ │ ├── ConsumerKafkaGroup.java │ │ ├── KafkaConsumerThread.java │ │ ├── KafkaReplayResponseSource.java │ │ ├── KafkaReplayThread.java │ │ └── KafkaSource.java │ │ └── util │ │ └── KafkaReplayResponseSourceRegistry.java │ └── test │ ├── java │ └── io │ │ └── siddhi │ │ └── extension │ │ └── io │ │ └── kafka │ │ ├── KafkaTestUtil.java │ │ ├── SequencedMessagingTestCase.java │ │ ├── UnitTestAppender.java │ │ ├── multidc │ │ ├── KafkaMultiDCSinkTestCases.java │ │ ├── KafkaMultiDCSourceSynchronizerTestCases.java │ │ └── KafkaMultiDCSourceTestCases.java │ │ ├── sink │ │ ├── ErrorHandlingTestCase.java │ │ ├── KafkaSinkTestCase.java │ │ └── KafkaSinkwithBinaryMapperTestCase.java │ │ └── source │ │ ├── KafkaSourceHATestCase.java │ │ └── KafkaSourceTestCase.java │ └── resources │ ├── log4j.properties │ ├── log4j2.xml │ └── testng.xml ├── docs ├── api │ ├── 4.0.10.md │ ├── 4.0.11.md │ ├── 4.0.12.md │ ├── 4.0.13.md │ ├── 4.0.14.md │ ├── 4.0.15.md │ ├── 4.0.16.md │ ├── 4.0.17.md │ ├── 4.0.7.md │ ├── 4.0.8.md │ ├── 4.0.9.md │ ├── 4.1.0.md │ ├── 4.1.1.md │ ├── 4.1.10.md │ ├── 4.1.11.md │ ├── 4.1.12.md │ ├── 4.1.13.md │ ├── 4.1.14.md │ ├── 4.1.15.md │ ├── 4.1.16.md │ ├── 4.1.17.md │ ├── 4.1.18.md │ ├── 4.1.19.md │ ├── 4.1.2.md │ ├── 4.1.20.md │ ├── 4.1.21.md │ ├── 4.1.3.md │ ├── 4.1.4.md │ ├── 4.1.5.md │ ├── 4.1.6.md │ ├── 4.1.7.md │ ├── 4.1.8.md │ ├── 4.1.9.md │ ├── 4.2.0.md │ ├── 4.2.1.md │ ├── 5.0.0.md │ ├── 5.0.1.md │ ├── 5.0.10.md │ ├── 5.0.11.md │ ├── 5.0.12.md │ ├── 5.0.13.md │ ├── 5.0.14.md │ ├── 5.0.15.md │ ├── 5.0.16.md │ ├── 5.0.17.md │ ├── 5.0.18.md │ ├── 5.0.19.md │ ├── 5.0.2.md │ ├── 5.0.3.md │ ├── 5.0.4.md │ ├── 5.0.5.md │ ├── 5.0.6.md │ ├── 5.0.7.md │ ├── 5.0.8.md │ ├── 5.0.9.md │ └── latest.md ├── assets │ ├── javascripts │ │ └── extra.js │ ├── lib │ │ ├── backtotop │ │ │ ├── img │ │ │ │ └── cd-top-arrow.svg │ │ │ └── js │ │ │ │ ├── main.js │ │ │ │ └── util.js │ │ └── highlightjs │ │ │ ├── default.min.css │ │ │ └── highlight.min.js │ └── stylesheets │ │ └── extra.css ├── images │ ├── favicon.ico │ └── siddhi-logo.svg ├── index.md └── license.md ├── findbugs-exclude.xml ├── issue_template.md ├── mkdocs.yml ├── pom.xml └── pull_request_template.md /.gitignore: -------------------------------------------------------------------------------- 1 | # Compiled class file 2 | *.class 3 | 4 | # Log file 5 | *.log 6 | 7 | # BlueJ files 8 | *.ctxt 9 | 10 | # Mobile Tools for Java (J2ME) 11 | .mtj.tmp/ 12 | 13 | # Package Files # 14 | *.jar 15 | *.war 16 | *.ear 17 | *.zip 18 | *.tar.gz 19 | *.rar 20 | 21 | tmp_kafka* 22 | .idea 23 | target 24 | *.iml 25 | site 26 | .DS_Store 27 | # virtual machine crash logs, see http://www.java.com/en/download/help/error_hotspot.xml 28 | hs_err_pid* -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, 
REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 
179 | 
180 |    To apply the Apache License to your work, attach the following
181 |    boilerplate notice, with the fields enclosed by brackets "{}"
182 |    replaced with your own identifying information. (Don't include
183 |    the brackets!) The text should be enclosed in the appropriate
184 |    comment syntax for the file format. We also recommend that a
185 |    file or class name and description of purpose be included on the
186 |    same "printed page" as the copyright notice for easier
187 |    identification within third-party archives.
188 | 
189 |    Copyright {yyyy} {name of copyright owner}
190 | 
191 |    Licensed under the Apache License, Version 2.0 (the "License");
192 |    you may not use this file except in compliance with the License.
193 |    You may obtain a copy of the License at
194 | 
195 |        http://www.apache.org/licenses/LICENSE-2.0
196 | 
197 |    Unless required by applicable law or agreed to in writing, software
198 |    distributed under the License is distributed on an "AS IS" BASIS,
199 |    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 |    See the License for the specific language governing permissions and
201 |    limitations under the License.
202 | 
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Siddhi IO Kafka
2 | ======================================
3 | 
4 | [![Jenkins Build Status](https://wso2.org/jenkins/job/siddhi/job/siddhi-io-kafka/badge/icon)](https://wso2.org/jenkins/job/siddhi/job/siddhi-io-kafka/)
5 | [![GitHub Release](https://img.shields.io/github/release/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/releases)
6 | [![GitHub Release Date](https://img.shields.io/github/release-date/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/releases)
7 | [![GitHub Open Issues](https://img.shields.io/github/issues-raw/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/issues)
8 | [![GitHub Last Commit](https://img.shields.io/github/last-commit/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/commits/master)
9 | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)
10 | 
11 | The **siddhi-io-kafka extension** is an extension to Siddhi that receives and publishes events from and to Kafka.
12 | 
13 | For information on Siddhi and its features, refer the Siddhi Documentation.
14 | 
15 | ## Download
16 | 
17 | * Versions 5.x and above with group id `io.siddhi.extension.*` from here.
18 | * Versions 4.x and lower with group id `org.wso2.extension.siddhi.*` from here.
19 | 
20 | ## Latest API Docs
21 | 
22 | The latest API Docs version is 5.0.19.
23 | 
24 | ## Features
25 | 
26 | * kafka *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.
To configure a sink to use the Kafka transport, the type parameter must have kafka as its value. A minimal usage sketch is given after this feature list.

27 | * kafka-replay-request *(Sink)*

This sink is used to request the replay of a specific range of events on a specified partition of a topic.

28 | * kafkaMultiDC *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.
To configure a sink to publish events via the Kafka transport, using two Kafka brokers to publish events to the same topic, the type parameter must have kafkaMultiDC as its value.

29 | * kafka *(Source)*

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT, XML, JSON, or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka source creates the default partition for the given topic.

30 | * kafka-replay-response *(Source)*

This source is used to listen to the replayed events requested via a kafka-replay-request sink.

31 | * kafkaMultiDC *(Source)*

The Kafka Multi-Datacenter (DC) source receives records from the same topic in brokers deployed in two different Kafka clusters. It filters out all duplicate messages and ensures that the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML, JSON, and Binary. The Kafka source creates the default partition '0' for a given topic if the topic has not yet been created in the Kafka cluster.
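
The following minimal Siddhi app sketches how the plain kafka source and sink described above fit together. It is an illustrative sketch only: the source parameter names `topic.list`, `partition.no.list`, `group.id`, and `threading.option` are assumed from the extension's API docs (see the Latest API Docs section above) rather than spelled out in this README, and the topic names and broker address are placeholders to adjust for your environment.

```sql
@App:name('KafkaSample')

@source(type='kafka', topic.list='kafka_topic', partition.no.list='0',
        threading.option='single.thread', group.id='test-group',
        bootstrap.servers='localhost:9092',
        @map(type='json'))
define stream SweetProductionStream (name string, amount double);

@sink(type='kafka', topic='kafka_result_topic', partition.no='0',
      bootstrap.servers='localhost:9092',
      @map(type='json'))
define stream LowProductionAlertStream (name string, amount double);

from SweetProductionStream[amount < 100]
select *
insert into LowProductionAlertStream;
```

Here the source consumes JSON events from `kafka_topic`, and every event with `amount < 100` is published back, again as JSON, to `kafka_result_topic` through the sink attached to `LowProductionAlertStream`.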

32 | 
33 | ## Installation
34 | 
35 | For installing this extension in the Streaming Integrator Server, and to add the dependent jars, refer the Streaming Integrator documentation section on downloading and installing Siddhi extensions.\
36 | For installing this extension in the Streaming Integrator Tooling, and to add the dependent jars, refer the Streaming Integrator documentation section on installing Siddhi extensions.
37 | 
38 | ## Dependencies
39 | 
40 | The following JARs, found in the Kafka distribution's `/libs` directory, will be converted to OSGi bundles and copied to `WSO2SI_HOME/lib` and `WSO2SI_HOME/samples/sample-clients/lib`.
41 | 
42 | - kafka_2.11-*.jar
43 | - kafka-clients-*.jar
44 | - metrics-core-*.jar
45 | - scala-library-2.11.*.jar
46 | - scala-parser-combinators_2.11.*.jar (if it exists)
47 | - zkclient-*.jar
48 | - zookeeper-*.jar
49 | 
50 | #### Setup Kafka
51 | 
52 | As a prerequisite, you have to start the Kafka message broker. Please follow the steps below.
53 | 1. Download the Kafka [distribution](https://kafka.apache.org/downloads)
54 | 2. Unzip the distribution and go to the extracted directory
55 | 3. Start ZooKeeper by executing the command below:
```bash
bin/zookeeper-server-start.sh config/zookeeper.properties
```
59 | 4. Start the Kafka broker by executing the command below:
```bash
bin/kafka-server-start.sh config/server.properties
```
63 | Once both processes are up, you can optionally verify the setup with the Kafka console clients, as shown at the end of this README.
64 | Refer to the Kafka documentation for more details: https://kafka.apache.org/quickstart
65 | 
66 | ## Support and Contribution
67 | 
68 | * We encourage users to ask questions and get support via StackOverflow; make sure to add the `siddhi` tag to the question for a better response.
69 | 
70 | * If you find any issues related to the extension, please report them on the issue tracker.
71 | 
72 | * For production support and other contribution-related information, refer the Siddhi Community documentation.
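
#### Verifying the Kafka setup

To sanity-check the broker started in the Setup Kafka steps above, you can round-trip a message with the console clients shipped in the same distribution. This is an optional, illustrative sketch: the topic name `test` is arbitrary, and the `--bootstrap-server` flag assumes a reasonably recent Kafka release (older releases use the `--zookeeper` and `--broker-list` variants instead).

```bash
# Create a topic to test with
bin/kafka-topics.sh --create --topic test --partitions 1 --replication-factor 1 \
    --bootstrap-server localhost:9092

# Type a message into the producer, then read it back with the consumer
bin/kafka-console-producer.sh --topic test --broker-list localhost:9092
bin/kafka-console-consumer.sh --topic test --from-beginning \
    --bootstrap-server localhost:9092
```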
73 | 
--------------------------------------------------------------------------------
/component/pom.xml:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <!--
3 |   ~ Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
4 |   ~
5 |   ~ WSO2 Inc. licenses this file to you under the Apache License,
6 |   ~ Version 2.0 (the "License"); you may not use this file except
7 |   ~ in compliance with the License.
8 |   ~ You may obtain a copy of the License at
9 |   ~
10 |   ~      http://www.apache.org/licenses/LICENSE-2.0
11 |   ~
12 |   ~ Unless required by applicable law or agreed to in writing,
13 |   ~ software distributed under the License is distributed on an
14 |   ~ "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 |   ~ KIND, either express or implied. See the License for the
16 |   ~ specific language governing permissions and limitations
17 |   ~ under the License.
18 |   -->
19 | 
20 | 
21 | <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
22 |     <parent>
23 |         <groupId>io.siddhi.extension.io.kafka</groupId>
24 |         <artifactId>siddhi-io-kafka-parent</artifactId>
25 |         <version>5.0.20-SNAPSHOT</version>
26 |         <relativePath>../pom.xml</relativePath>
27 |     </parent>
28 |     <modelVersion>4.0.0</modelVersion>
29 |     <packaging>bundle</packaging>
30 | 
31 |     <artifactId>siddhi-io-kafka</artifactId>
32 |     <name>Siddhi Extension - Kafka Transport</name>
33 | 
34 |     <dependencies>
35 |         <dependency>
36 |             <groupId>io.siddhi</groupId>
37 |             <artifactId>siddhi-query-api</artifactId>
38 |         </dependency>
39 |         <dependency>
40 |             <groupId>io.siddhi</groupId>
41 |             <artifactId>siddhi-annotations</artifactId>
42 |         </dependency>
43 |         <dependency>
44 |             <groupId>io.siddhi</groupId>
45 |             <artifactId>siddhi-core</artifactId>
46 |         </dependency>
47 |         <dependency>
48 |             <groupId>org.apache.logging.log4j</groupId>
49 |             <artifactId>log4j-core</artifactId>
50 |         </dependency>
51 |         <dependency>
52 |             <groupId>org.testng</groupId>
53 |             <artifactId>testng</artifactId>
54 |             <scope>test</scope>
55 |         </dependency>
56 |         <dependency>
57 |             <groupId>org.apache.kafka</groupId>
58 |             <artifactId>kafka_2.11</artifactId>
59 |         </dependency>
60 | 
61 | 
62 |         <dependency>
63 |             <groupId>org.apache.curator</groupId>
64 |             <artifactId>curator-test</artifactId>
65 |         </dependency>
66 |         <dependency>
67 |             <groupId>org.apache.zookeeper</groupId>
68 |             <artifactId>zookeeper</artifactId>
69 |         </dependency>
70 |         <dependency>
71 |             <groupId>commons-io</groupId>
72 |             <artifactId>commons-io</artifactId>
73 |             <scope>test</scope>
74 |         </dependency>
75 |         <dependency>
76 |             <groupId>io.siddhi.extension.map.xml</groupId>
77 |             <artifactId>siddhi-map-xml</artifactId>
78 |         </dependency>
79 |         <dependency>
80 |             <groupId>io.siddhi.extension.map.binary</groupId>
81 |             <artifactId>siddhi-map-binary</artifactId>
82 |         </dependency>
83 |         <dependency>
84 |             <groupId>io.confluent</groupId>
85 |             <artifactId>common-config</artifactId>
86 |         </dependency>
87 |         <dependency>
88 |             <groupId>io.confluent</groupId>
89 |             <artifactId>common-utils</artifactId>
90 |         </dependency>
91 |         <dependency>
92 |             <groupId>io.confluent</groupId>
93 |             <artifactId>kafka-schema-registry-client</artifactId>
94 |         </dependency>
95 |         <dependency>
96 |             <groupId>io.confluent</groupId>
97 |             <artifactId>kafka-avro-serializer</artifactId>
98 |         </dependency>
99 |         <dependency>
100 |             <groupId>org.jacoco</groupId>
101 |             <artifactId>org.jacoco.agent</artifactId>
102 |             <classifier>runtime</classifier>
103 |             <scope>test</scope>
104 |         </dependency>
105 |         <dependency>
106 |             <groupId>io.siddhi.extension.map.text</groupId>
107 |             <artifactId>siddhi-map-text</artifactId>
108 |             <scope>test</scope>
109 |         </dependency>
110 |         <dependency>
111 |             <groupId>org.wso2.carbon.analytics</groupId>
112 |             <artifactId>org.wso2.carbon.si.metrics.core</artifactId>
113 |         </dependency>
114 |         <dependency>
115 |             <groupId>com.google.protobuf</groupId>
116 |             <artifactId>protobuf-java</artifactId>
117 |         </dependency>
118 |         <dependency>
119 |             <groupId>org.codehaus.jackson</groupId>
120 |             <artifactId>jackson-jaxrs</artifactId>
121 |         </dependency>
122 |         <dependency>
123 |             <groupId>com.fasterxml.jackson.core</groupId>
124 |             <artifactId>jackson-databind</artifactId>
125 |         </dependency>
126 |         <dependency>
127 |             <groupId>com.fasterxml.jackson.core</groupId>
128 |             <artifactId>jackson-core</artifactId>
129 |         </dependency>
130 |         <dependency>
131 |             <groupId>com.fasterxml.jackson.core</groupId>
132 |             <artifactId>jackson-annotations</artifactId>
133 |         </dependency>
134 |         <dependency>
135 |             <groupId>log4j</groupId>
136 |             <artifactId>log4j</artifactId>
137 |             <scope>test</scope>
138 |         </dependency>
139 |     </dependencies>
140 | 
141 |     <profiles>
142 |         <profile>
143 |             <id>documentation-deploy</id>
144 |             <build>
145 |                 <plugins>
146 |                     <plugin>
147 |                         <groupId>io.siddhi</groupId>
148 |                         <artifactId>siddhi-doc-gen</artifactId>
149 |                         <version>${siddhi.version}</version>
150 |                         <executions>
151 |                             <execution>
152 |                                 <phase>compile</phase>
153 |                                 <goals>
154 |                                     <goal>deploy-mkdocs-github-pages</goal>
155 |                                 </goals>
156 |                             </execution>
157 |                         </executions>
158 |                     </plugin>
159 |                 </plugins>
160 |             </build>
161 |         </profile>
162 |     </profiles>
163 | 
164 |     <build>
165 |         <plugins>
166 |             <plugin>
167 |                 <groupId>org.apache.felix</groupId>
168 |                 <artifactId>maven-bundle-plugin</artifactId>
169 |                 <extensions>true</extensions>
170 |                 <configuration>
171 |                     <instructions>
172 |                         <Bundle-SymbolicName>${project.artifactId}</Bundle-SymbolicName>
173 |                         <Bundle-Name>${project.artifactId}</Bundle-Name>
174 |                         <Export-Package>
175 |                             io.siddhi.extension.io.kafka.*,
176 |                             io.confluent.*,
177 |                             !com.google.protobuf.*,
178 |                             org.codehaus.jackson.*,
179 |                         </Export-Package>
180 |                         <Import-Package>
181 |                             io.siddhi.core.*;version="${siddhi.version.range}",
182 |                             io.siddhi.annotation.*;version="${siddhi.version.range}",
183 |                             io.siddhi.query.api.*;version="${siddhi.version.range}",
184 |                             *;resolution:=optional
185 |                         </Import-Package>
186 |                         <Private-Package>
187 |                             com.google.*,
188 |                         </Private-Package>
189 |                         <Include-Resource>
190 |                             META-INF=target/classes/META-INF
191 |                         </Include-Resource>
192 |                     </instructions>
193 |                 </configuration>
194 |             </plugin>
195 |             <plugin>
196 |                 <groupId>org.apache.maven.plugins</groupId>
197 |                 <artifactId>maven-compiler-plugin</artifactId>
198 |                 <configuration>
199 |                     <source>1.8</source>
200 |                     <target>1.8</target>
201 |                 </configuration>
202 |             </plugin>
203 |             <plugin>
204 |                 <groupId>org.apache.maven.plugins</groupId>
205 |                 <artifactId>maven-surefire-plugin</artifactId>
206 |                 <configuration>
207 |                     <suiteXmlFiles>
208 |                         <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile>
209 |                     </suiteXmlFiles>
210 |                 </configuration>
211 |             </plugin>
212 |             <plugin>
213 |                 <groupId>org.jacoco</groupId>
214 |                 <artifactId>jacoco-maven-plugin</artifactId>
215 |             </plugin>
216 |             <plugin>
217 |                 <groupId>io.siddhi</groupId>
218 |                 <artifactId>siddhi-doc-gen</artifactId>
219 |                 <version>${siddhi.version}</version>
220 |                 <executions>
221 |                     <execution>
222 |                         <phase>compile</phase>
223 |                         <goals>
224 |                             <goal>generate-md-docs</goal>
225 |                         </goals>
226 |                     </execution>
227 |                 </executions>
228 |             </plugin>
229 |         </plugins>
230 |     </build>
231 | </project>
232 | 
--------------------------------------------------------------------------------
/component/src/main/java/io/siddhi/extension/io/kafka/Constants.java:
--------------------------------------------------------------------------------
1 | package io.siddhi.extension.io.kafka;
2 | 
3 | /**
4 |  * Constants used in Kafka executions.
5 |  */
6 | public class Constants {
7 |     public static final String TRP_RECORD_TIMESTAMP = "record.timestamp";
8 |     public static final String TRP_EVENT_TIMESTAMP = "event.timestamp";
9 |     public static final String TRP_CHECK_SUM = "check.sum";
10 |     public static final String TRP_TOPIC = "topic";
11 |     public static final String TRP_PARTITION = "partition";
12 |     public static final String TRP_KEY = "key";
13 |     public static final String TRP_OFFSET = "offset";
14 |     public static final String SINK_ID = "sink.id";
15 |     /* Prometheus reporter values */
16 |     public static final String PROMETHEUS_REPORTER_NAME = "prometheus";
17 | }
18 | 
--------------------------------------------------------------------------------
/component/src/main/java/io/siddhi/extension/io/kafka/KafkaIOUtils.java:
--------------------------------------------------------------------------------
1 | /*
2 |  * Copyright (c) 2019, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
3 |  *
4 |  * WSO2 Inc. licenses this file to you under the Apache License,
5 |  * Version 2.0 (the "License"); you may not use this file except
6 |  * in compliance with the License.
7 |  * You may obtain a copy of the License at
8 |  *
9 |  *      http://www.apache.org/licenses/LICENSE-2.0
10 |  *
11 |  * Unless required by applicable law or agreed to in writing,
12 |  * software distributed under the License is distributed on an
13 |  * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
14 |  * KIND, either express or implied. See the License for the
15 |  * specific language governing permissions and limitations
16 |  * under the License.
17 |  */
18 | 
19 | package io.siddhi.extension.io.kafka;
20 | 
21 | import org.apache.logging.log4j.LogManager;
22 | import org.apache.logging.log4j.Logger;
23 | 
24 | import java.util.Properties;
25 | 
26 | /**
27 |  * Utility class for Kafka IO.
28 |  */
29 | public class KafkaIOUtils {
30 | 
31 |     public static final String HEADER_SEPARATOR = ",";
32 |     private static final String ENTRY_SEPARATOR = ":";
33 |     private static final Logger LOG = LogManager.getLogger(KafkaIOUtils.class);
34 | 
35 |     public static void splitHeaderValues(String optionalConfigs, Properties configProperties) {
36 |         if (optionalConfigs != null && !optionalConfigs.isEmpty()) {
37 |             String[] optionalProperties = optionalConfigs.split(HEADER_SEPARATOR);
38 |             if (optionalProperties.length > 0) {
39 |                 for (String header : optionalProperties) {
40 |                     try {
41 |                         String[] configPropertyWithValue = header.split(ENTRY_SEPARATOR, 2);
42 |                         configProperties.put(configPropertyWithValue[0], configPropertyWithValue[1]);
43 |                     } catch (Exception e) {
44 |                         LOG.warn("Optional property '{}' is not defined in the correct format.", header, e);
45 |                     }
46 |                 }
47 |             }
48 |         }
49 |     }
50 | }
51 | 
--------------------------------------------------------------------------------
/component/src/main/java/io/siddhi/extension/io/kafka/metrics/Metrics.java:
--------------------------------------------------------------------------------
1 | /*
2 |  * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
3 |  *
4 |  * WSO2 Inc. licenses this file to you under the Apache License,
5 |  * Version 2.0 (the "License"); you may not use this file except
6 |  * in compliance with the License.
7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.metrics; 20 | 21 | /** 22 | * Parent metric class for Kafka Source and Sink 23 | */ 24 | public class Metrics { 25 | 26 | protected String siddhiAppName; 27 | protected String streamId; 28 | 29 | protected Metrics (String siddhiAppName, String streamId) { 30 | this.siddhiAppName = siddhiAppName; 31 | this.streamId = streamId; 32 | } 33 | } 34 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/metrics/SinkMetrics.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.metrics; 20 | 21 | import org.wso2.carbon.metrics.core.Counter; 22 | import org.wso2.carbon.metrics.core.Level; 23 | import org.wso2.carbon.si.metrics.core.internal.MetricsDataHolder; 24 | 25 | import java.util.Map; 26 | import java.util.concurrent.ConcurrentHashMap; 27 | 28 | /** 29 | * Metric class for Kafka Sink 30 | */ 31 | public class SinkMetrics extends Metrics { 32 | private Map> offsetMap = new ConcurrentHashMap<>(); 33 | private Map> latencyMap = new ConcurrentHashMap<>(); 34 | private Map> messageSizeMap = new ConcurrentHashMap<>(); 35 | private Map> lastMessagePublishedTimeMap = new ConcurrentHashMap<>(); 36 | 37 | public SinkMetrics(String siddhiAppName, String streamId) { 38 | super(siddhiAppName, streamId); 39 | } 40 | 41 | public Counter getTotalWrites() { 42 | return MetricsDataHolder.getInstance().getMetricService() 43 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Total.Writes.%s", siddhiAppName, "kafka"), 44 | Level.INFO); 45 | } 46 | 47 | public Counter getWriteCountPerStream(String streamId, String topic, int partition) { 48 | return MetricsDataHolder.getInstance().getMetricService() 49 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Writes.Per.Stream.%s.%s.%s", 50 | siddhiAppName, topic, "stream_id." + streamId, "partition." + partition) 51 | , Level.INFO); 52 | } 53 | 54 | public Counter getErrorCountWithoutPartition(String topic, String streamId, String errorString) { 55 | return MetricsDataHolder.getInstance().getMetricService() 56 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Errors.Without.Partition.%s.%s.%s", 57 | siddhiAppName, topic, "stream_id." 
+ streamId, "errorString." + errorString), Level.INFO); 58 | } 59 | 60 | public Counter getErrorCountPerStream(String streamId, String topic, int partition, String errorString) { 61 | return MetricsDataHolder.getInstance().getMetricService() 62 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Errors.Per.Stream.%s.%s.%s.%s", 63 | siddhiAppName, topic, "stream_id." + streamId, "partition." + partition, "errorString." + 64 | errorString), Level.INFO); 65 | } 66 | 67 | public void getLastMessageSize(String topic, int partition, String streamId, double messageSize) { 68 | updateMessageSizeMap(topic, partition, messageSize); 69 | MetricsDataHolder.getInstance().getMetricService() 70 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Per.Stream.%s.%s.%s.%s", 71 | siddhiAppName, topic, "partition." + partition, 72 | "streamId." + streamId, "last_message_size_in_bytes"), 73 | Level.INFO, () -> messageSizeMap.get(topic).get(partition)); 74 | } 75 | 76 | public void getLastMessageAckLatency(String topic, int partition, String streamId, long latency) { 77 | updateLatencyMap(topic, partition, latency); 78 | MetricsDataHolder.getInstance().getMetricService() 79 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Per.Stream.%s.%s.%s.%s", 80 | siddhiAppName, topic, "partition." + partition, 81 | "streamId." + streamId, "last_message_latency_in_millis"), 82 | Level.INFO, () -> latencyMap.get(topic).get(partition)); 83 | } 84 | 85 | public void getLastCommittedOffset(String topic, int partition, String streamId, long offset) { 86 | updateOffsetMap(topic, partition, offset); 87 | MetricsDataHolder.getInstance().getMetricService() 88 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Current.Offset.%s.%s.%s", 89 | siddhiAppName, topic, "partition." + partition, "streamId." + streamId), Level.INFO, 90 | () -> offsetMap.get(topic).get(partition)); 91 | } 92 | 93 | public void getLastMessagePublishedTime(String topic, int partition, String streamId, long pushedTimestamp) { 94 | updateLastMessagePublishedTimeMap(topic, partition, pushedTimestamp); 95 | MetricsDataHolder.getInstance().getMetricService() 96 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Sink.Per.Stream.%s.%s.%s.%s", 97 | siddhiAppName, topic, "partition." + partition, 98 | "streamId." 
+ streamId, "last_message_published_at"), 99 | Level.INFO, System::currentTimeMillis); 100 | } 101 | 102 | private void updateOffsetMap(String topic, int partition, long offset) { 103 | Map partitionMap; 104 | if (offsetMap.get(topic) == null) { 105 | partitionMap = new ConcurrentHashMap(); 106 | } else { 107 | partitionMap = offsetMap.get(topic); 108 | } 109 | partitionMap.put(partition, offset); 110 | offsetMap.put(topic, partitionMap); 111 | } 112 | 113 | private void updateLatencyMap(String topic, int partition, long latency) { 114 | Map partitionMap; 115 | if (latencyMap.get(topic) == null) { 116 | partitionMap = new ConcurrentHashMap(); 117 | } else { 118 | partitionMap = latencyMap.get(topic); 119 | } 120 | partitionMap.put(partition, latency); 121 | latencyMap.put(topic, partitionMap); 122 | } 123 | 124 | private void updateMessageSizeMap(String topic, int partition, double messageSize) { 125 | Map partitionMap; 126 | if (messageSizeMap.get(topic) == null) { 127 | partitionMap = new ConcurrentHashMap(); 128 | } else { 129 | partitionMap = messageSizeMap.get(topic); 130 | } 131 | partitionMap.put(partition, messageSize); 132 | messageSizeMap.put(topic, partitionMap); 133 | } 134 | 135 | private void updateLastMessagePublishedTimeMap(String topic, int partition, long pushedTimestamp) { 136 | Map partitionMap; 137 | if (lastMessagePublishedTimeMap.get(topic) == null) { 138 | partitionMap = new ConcurrentHashMap(); 139 | } else { 140 | partitionMap = lastMessagePublishedTimeMap.get(topic); 141 | } 142 | partitionMap.put(partition, pushedTimestamp); 143 | lastMessagePublishedTimeMap.put(topic, partitionMap); 144 | } 145 | } 146 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/metrics/SourceMetrics.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.metrics; 20 | 21 | import org.wso2.carbon.metrics.core.Counter; 22 | import org.wso2.carbon.metrics.core.Level; 23 | import org.wso2.carbon.si.metrics.core.internal.MetricsDataHolder; 24 | 25 | import java.util.Map; 26 | 27 | /** 28 | * Metric class for Kafka Source 29 | */ 30 | public class SourceMetrics extends Metrics { 31 | private Map> topicOffsetMap = null; 32 | private long consumerLag; 33 | 34 | public SourceMetrics(String siddhiAppName, String streamId) { 35 | super(siddhiAppName, streamId); 36 | } 37 | 38 | public Counter getTotalReads() { //to count the total reads from siddhi app level. 
39 | return MetricsDataHolder.getInstance().getMetricService() 40 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Total.Reads.%s", siddhiAppName, "kafka"), 41 | Level.INFO); 42 | } 43 | 44 | public Counter getReadCountPerStream(String topic, Integer partition, String groupId) { 45 | return MetricsDataHolder.getInstance().getMetricService() 46 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Source.Reads.Per.Stream.%s.%s.%s.%s", 47 | siddhiAppName, topic, "stream_id." + streamId, "partition." + partition, "groupId." + groupId) 48 | , Level.INFO); 49 | } 50 | 51 | public void getCurrentOffset(String topic, Integer partition, String groupId) { 52 | if (topicOffsetMap != null) { 53 | MetricsDataHolder.getInstance().getMetricService() 54 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Source.Current.Offset.%s.%s.%s.%s", 55 | siddhiAppName, topic, "partition." + partition, "groupId." + groupId, 56 | "stream_id." + streamId), Level.INFO, () -> topicOffsetMap.get(topic).get(partition)); 57 | } 58 | } 59 | 60 | public Counter getErrorCountPerStream(String topic, String groupId, String errorString) { 61 | return MetricsDataHolder.getInstance().getMetricService() 62 | .counter(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Source.Errors.Per.Stream.%s.%s.%s.%s", 63 | siddhiAppName, topic, "stream_id." + streamId, "groupId." + groupId, "errorString." + 64 | errorString), Level.INFO); 65 | } 66 | 67 | public void getLastMessageConsumedTime(String topic, String groupId) { 68 | MetricsDataHolder.getInstance().getMetricService() 69 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Source.Per.Stream.%s.%s.%s.%s", 70 | siddhiAppName, topic, "groupId." + groupId, 71 | "streamId." + streamId, "last_message_consumed_at"), 72 | Level.INFO, System::currentTimeMillis); 73 | } 74 | 75 | public synchronized void getConsumerLag(String topic, String groupId, int partition, long recordTimestamp) { 76 | setConsumerLag(System.currentTimeMillis() - recordTimestamp); 77 | MetricsDataHolder.getInstance().getMetricService() 78 | .gauge(String.format("io.siddhi.SiddhiApps.%s.Siddhi.Kafka.Source.Per.Stream.%s.%s.%s.%s.%s", 79 | siddhiAppName, topic, "partition." + partition, "groupId." + groupId, 80 | "streamId." + streamId, "consumer_lag"), 81 | Level.INFO, () -> getConsumerLag()); 82 | } 83 | 84 | public void setTopicOffsetMap(Map> topicOffsetMap) { 85 | this.topicOffsetMap = topicOffsetMap; 86 | } 87 | 88 | public long getConsumerLag() { 89 | return consumerLag; 90 | } 91 | 92 | public void setConsumerLag(long consumerLag) { 93 | this.consumerLag = consumerLag; 94 | } 95 | } 96 | 97 | 98 | 99 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/multidc/sink/KafkaMultiDCSink.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. 
See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.multidc.sink; 20 | 21 | import io.siddhi.annotation.Example; 22 | import io.siddhi.annotation.Extension; 23 | import io.siddhi.annotation.Parameter; 24 | import io.siddhi.annotation.util.DataType; 25 | import io.siddhi.core.config.SiddhiAppContext; 26 | import io.siddhi.core.exception.ConnectionUnavailableException; 27 | import io.siddhi.core.util.config.ConfigReader; 28 | import io.siddhi.core.util.snapshot.state.StateFactory; 29 | import io.siddhi.core.util.transport.DynamicOptions; 30 | import io.siddhi.core.util.transport.OptionHolder; 31 | import io.siddhi.extension.io.kafka.KafkaIOUtils; 32 | import io.siddhi.extension.io.kafka.sink.KafkaSink; 33 | import io.siddhi.query.api.definition.StreamDefinition; 34 | import io.siddhi.query.api.exception.SiddhiAppValidationException; 35 | import org.apache.kafka.clients.producer.KafkaProducer; 36 | import org.apache.kafka.clients.producer.Producer; 37 | import org.apache.kafka.clients.producer.ProducerRecord; 38 | import org.apache.logging.log4j.LogManager; 39 | import org.apache.logging.log4j.Logger; 40 | 41 | import java.io.UnsupportedEncodingException; 42 | import java.nio.ByteBuffer; 43 | import java.util.ArrayList; 44 | import java.util.List; 45 | import java.util.Properties; 46 | 47 | /** 48 | * This class implements a Kafka sink to publish Siddhi events to multiple kafka clusters. This sink is useful in 49 | * multi data center deployments where we have two identical setups replicated in two physical locations 50 | */ 51 | @Extension( 52 | name = "kafkaMultiDC", 53 | namespace = "sink", 54 | description = "A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka " + 55 | "cluster. The events can be published in the `TEXT` `XML` `JSON` or `Binary` format.\n" + 56 | "If the topic is not already created in the Kafka cluster, the Kafka sink creates the default " + 57 | "partition for the given topic. The publishing topic and partition can be a dynamic value taken " + 58 | "from the Siddhi event.\n" + 59 | "To configure a sink to publish events via the Kafka transport, and using two Kafka brokers to " + 60 | "publish events to the same topic, the `type` parameter must have `kafkaMultiDC` as its value.", 61 | parameters = { 62 | @Parameter(name = "bootstrap.servers", 63 | description = " This parameter specifies the list of Kafka servers to which the Kafka " + 64 | "sink must publish events. This list should be provided as a set of comma " + 65 | "-separated values. There must be " + 66 | "at least two servers in this list. e.g., `localhost:9092,localhost:9093`.", 67 | type = {DataType.STRING}), 68 | @Parameter(name = "topic", 69 | description = "The topic to which the Kafka sink needs to publish events. Only one " + 70 | "topic must be specified.", 71 | type = {DataType.STRING}), 72 | @Parameter(name = "sequence.id", 73 | description = "A unique identifier to identify the messages published by this sink. 
This ID " + 74 | "allows receivers to identify the sink that published a specific message.", 75 | type = {DataType.STRING}, 76 | optional = true, 77 | defaultValue = "null"), 78 | @Parameter(name = "key", 79 | description = "The key contains the values that are used to maintain ordering in a Kafka" + 80 | " partition.", 81 | type = {DataType.STRING}, 82 | optional = true, 83 | defaultValue = "null"), 84 | @Parameter(name = "partition.no", 85 | description = "The partition number for the given topic. Only one partition ID can be " + 86 | "defined. If no value is specified for this parameter, the Kafka sink publishes " + 87 | "to the default partition of the topic (i.e., 0)", 88 | type = {DataType.INT}, 89 | optional = true, 90 | defaultValue = "0"), 91 | @Parameter(name = "is.binary.message", 92 | description = "In order to send the binary events via kafkaMultiDCSink, it is required to set " 93 | + "this parameter to `true`.", 94 | type = {DataType.BOOL}, 95 | optional = false, 96 | defaultValue = "null"), 97 | @Parameter(name = "optional.configuration", 98 | description = "This parameter contains all the other possible configurations that the " + 99 | "producer is created with. \n" + 100 | "e.g., `producer.type:async,batch.size:200`", 101 | optional = true, 102 | type = {DataType.STRING}, 103 | defaultValue = "null") 104 | }, 105 | examples = { 106 | @Example( 107 | syntax = "@App:name('TestExecutionPlan') \n" + 108 | "define stream FooStream (symbol string, price float, volume long); \n" + 109 | "@info(name = 'query1') \n" + 110 | "@sink(" 111 | + "type='kafkaMultiDC', " 112 | + "topic='myTopic', " 113 | + "partition.no='0'," 114 | + "bootstrap.servers='host1:9092, host2:9092', " 115 | + "@map(type='xml'))" + 116 | "Define stream BarStream (symbol string, price float, volume long);\n" + 117 | "from FooStream select symbol, price, volume insert into BarStream;\n", 118 | description = "This query publishes to the default (i.e., 0th) partition of the brokers in " + 119 | "two data centers ") 120 | } 121 | ) 122 | public class KafkaMultiDCSink extends KafkaSink { 123 | private static final Logger LOG = LogManager.getLogger(KafkaMultiDCSink.class); 124 | List> producers = new ArrayList<>(); 125 | private String topic; 126 | private Integer partitionNo; 127 | 128 | @Override 129 | protected StateFactory init(StreamDefinition outputStreamDefinition, OptionHolder optionHolder, 130 | ConfigReader sinkConfigReader, 131 | SiddhiAppContext siddhiAppContext) { 132 | StateFactory stateStateFactory = super.init(outputStreamDefinition, optionHolder, 133 | sinkConfigReader, siddhiAppContext); 134 | topic = optionHolder.validateAndGetStaticValue(KAFKA_PUBLISH_TOPIC); 135 | partitionNo = Integer.parseInt(optionHolder.validateAndGetStaticValue(KAFKA_PARTITION_NO, "0")); 136 | if (bootstrapServers.split(",").length != 2) { 137 | throw new SiddhiAppValidationException("There should be two servers listed in 'bootstrap.servers' " + 138 | "configuration"); 139 | } 140 | 141 | return stateStateFactory; 142 | } 143 | 144 | @Override 145 | public String[] getSupportedDynamicOptions() { 146 | return new String[]{KAFKA_MESSAGE_KEY}; 147 | } 148 | 149 | @Override 150 | public void connect() throws ConnectionUnavailableException { 151 | Properties props = new Properties(); 152 | props.put("acks", "all"); 153 | props.put("retries", 0); 154 | props.put("batch.size", 16384); 155 | props.put("linger.ms", 1); 156 | props.put("buffer.memory", 33554432); 157 | props.put("key.serializer", 
"org.apache.kafka.common.serialization.StringSerializer"); 158 | 159 | if (!isBinaryMessage) { 160 | props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer"); 161 | } else { 162 | props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer"); 163 | } 164 | 165 | KafkaIOUtils.splitHeaderValues(optionalConfigs, props); 166 | 167 | String[] bootstrapServersList = bootstrapServers.split(","); 168 | for (int index = 0; index < bootstrapServersList.length; index++) { 169 | String server = bootstrapServersList[index].trim(); 170 | props.put("bootstrap.servers", server); 171 | Producer producer = new KafkaProducer<>(props); 172 | producers.add(producer); 173 | LOG.info("Kafka producer created for Kafka cluster :{}", server); 174 | } 175 | } 176 | 177 | 178 | @Override 179 | public void publish(Object payload, DynamicOptions dynamicOptions, KafkaSinkState kafkaSinkState) 180 | throws ConnectionUnavailableException { 181 | String key = keyOption.getValue(dynamicOptions); 182 | Object payloadToSend = null; 183 | try { 184 | if (payload instanceof String) { 185 | 186 | // If it is required to send the message as string message. 187 | if (!isBinaryMessage) { 188 | StringBuilder strPayload = new StringBuilder(); 189 | strPayload.append(sequenceId).append(SEQ_NO_HEADER_FIELD_SEPERATOR). 190 | append(kafkaSinkState.lastSentSequenceNo).append(SEQ_NO_HEADER_DELIMITER). 191 | append(payload.toString()); 192 | payloadToSend = strPayload.toString(); 193 | kafkaSinkState.lastSentSequenceNo.incrementAndGet(); 194 | 195 | // If it is required to send 'xml`, 'json' or 'test' mapping payload as a byte stream through kafka. 196 | } else { 197 | byte[] byteEvents = payload.toString().getBytes("UTF-8"); 198 | payloadToSend = getSequencedBinaryPayloadToSend(byteEvents, kafkaSinkState); 199 | kafkaSinkState.lastSentSequenceNo.incrementAndGet(); 200 | } 201 | //if the received payload to send is binary. 202 | } else { 203 | byte[] byteEvents = ((ByteBuffer) payload).array(); 204 | payloadToSend = getSequencedBinaryPayloadToSend(byteEvents, kafkaSinkState); 205 | kafkaSinkState.lastSentSequenceNo.incrementAndGet(); 206 | } 207 | } catch (UnsupportedEncodingException e) { 208 | LOG.error("Error while converting the received string payload to byte[].", e); 209 | } 210 | 211 | for (Producer producer : producers) { 212 | try { 213 | producer.send(new ProducerRecord<>(topic, partitionNo, key, payloadToSend)); 214 | } catch (Exception e) { 215 | LOG.error("Failed to publish the message to [topic] {}. Error: {}. Sequence Number " + 216 | ": {}", topic, e.getMessage(), kafkaSinkState.lastSentSequenceNo.get() - 1, e); 217 | } 218 | } 219 | } 220 | 221 | @Override 222 | public void disconnect() { 223 | for (Producer producer : producers) { 224 | producer.flush(); 225 | producer.close(); 226 | } 227 | producers.clear(); 228 | } 229 | } 230 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/multidc/source/SourceSynchronizer.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 
7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | package io.siddhi.extension.io.kafka.multidc.source; 19 | 20 | import io.siddhi.core.stream.input.source.SourceEventListener; 21 | import org.apache.logging.log4j.LogManager; 22 | import org.apache.logging.log4j.Logger; 23 | 24 | import java.util.ArrayList; 25 | import java.util.HashMap; 26 | import java.util.List; 27 | import java.util.Map; 28 | import java.util.Timer; 29 | import java.util.TimerTask; 30 | import java.util.TreeMap; 31 | import java.util.concurrent.atomic.AtomicBoolean; 32 | 33 | 34 | /** 35 | * The source Synchronize to merge events from two kafka source 36 | */ 37 | public class SourceSynchronizer { 38 | private static final Logger LOG = LogManager.getLogger(SourceSynchronizer.class); 39 | private final SourceEventListener eventListener; 40 | boolean isEventGap = false; 41 | // Buffer events sorting by the sequence number. 42 | Map eventBuffer = new TreeMap<>(); 43 | Map perSourceReceivedSeqNo = new HashMap<>(); 44 | Timer flushBufferTimer = new Timer(true); 45 | String[] bootstrapServers = new String[2]; 46 | List toRemoveSeqNos = new ArrayList<>(); 47 | private Long lastConsumedSeqNo = -1L; 48 | private int maxBufferSize; 49 | private int bufferInterval; 50 | private AtomicBoolean isFlushTaskDue = new AtomicBoolean(false); 51 | 52 | public SourceSynchronizer(SourceEventListener eventListener, String[] bootstrapServers, int maxBufferSize, 53 | int bufferFlushInterval) { 54 | this.eventListener = eventListener; 55 | this.bootstrapServers[0] = bootstrapServers[0]; 56 | this.bootstrapServers[1] = bootstrapServers[1]; 57 | this.maxBufferSize = maxBufferSize; 58 | this.bufferInterval = bufferFlushInterval; 59 | 60 | perSourceReceivedSeqNo.put(bootstrapServers[0], -1L); 61 | perSourceReceivedSeqNo.put(bootstrapServers[1], -1L); 62 | } 63 | 64 | private synchronized void forceFlushBuffer(long flushTillSeqNo) { 65 | for (Map.Entry entry : eventBuffer.entrySet()) { 66 | Long sequenceNumber = entry.getKey(); 67 | BufferValueHolder eventHolder = entry.getValue(); 68 | if ((sequenceNumber > lastConsumedSeqNo) && 69 | (sequenceNumber <= flushTillSeqNo)) { 70 | if (LOG.isDebugEnabled()) { 71 | LOG.debug("Updating the lastConsumedSeqNo={} as the event is forcefully flushed, " + 72 | "from the source {}", sequenceNumber, eventHolder.getSourceId()); 73 | } 74 | 75 | if (!(sequenceNumber < lastConsumedSeqNo) && 76 | (lastConsumedSeqNo != sequenceNumber + 1)) { 77 | LOG.warn("Events lost from sequence {} to {}", lastConsumedSeqNo + 1, sequenceNumber - 1); 78 | } 79 | 80 | lastConsumedSeqNo = sequenceNumber; 81 | toRemoveSeqNos.add(sequenceNumber); 82 | eventListener.onEvent(eventHolder.getEvent(), eventHolder.getObjects()); 83 | } 84 | } 85 | toRemoveSeqNos.forEach(seqNo -> eventBuffer.remove(seqNo)); // To avoid concurrent modification. 
86 | toRemoveSeqNos.clear(); 87 | } 88 | 89 | private synchronized void flushBuffer() { 90 | if (LOG.isDebugEnabled()) { 91 | LOG.debug("Start flushing buffer"); 92 | } 93 | for (Map.Entry entry : eventBuffer.entrySet()) { 94 | Long sequenceNumber = entry.getKey(); 95 | BufferValueHolder eventHolder = entry.getValue(); 96 | if (sequenceNumber <= lastConsumedSeqNo) { 97 | if (LOG.isDebugEnabled()) { 98 | LOG.debug("Message with sequence {} already received. Dropping the event from the buffer", 99 | sequenceNumber); 100 | } 101 | toRemoveSeqNos.add(sequenceNumber); 102 | continue; 103 | } else if (sequenceNumber == lastConsumedSeqNo + 1) { 104 | isEventGap = false; 105 | lastConsumedSeqNo++; 106 | if (LOG.isDebugEnabled()) { 107 | LOG.debug("Message with sequence {} flushed from buffer. Updating lastConsumedSeqNo={}", 108 | sequenceNumber, lastConsumedSeqNo); 109 | } 110 | 111 | toRemoveSeqNos.add(sequenceNumber); 112 | eventListener.onEvent(eventHolder.getEvent(), eventHolder.getObjects()); 113 | } else { 114 | isEventGap = true; 115 | if (LOG.isDebugEnabled()) { 116 | LOG.debug("Gap detected while flushing the buffer. Flushed message sequence={}. Expected " + 117 | "sequence={}. Stop flushing the buffer.", sequenceNumber, lastConsumedSeqNo + 1); 118 | } 119 | break; 120 | } 121 | } 122 | 123 | toRemoveSeqNos.forEach(seqNo -> eventBuffer.remove(seqNo)); // To avoid concurrent modification. 124 | toRemoveSeqNos.clear(); 125 | if (LOG.isDebugEnabled()) { 126 | LOG.debug("End flushing buffer"); 127 | } 128 | } 129 | 130 | private synchronized void bufferEvent(String sourceId, long sequenceNumber, Object event, Object[] objects) { 131 | if (LOG.isDebugEnabled()) { 132 | LOG.debug("Buffering Event. SourceId={}, SequenceNumber={}", sourceId, sequenceNumber); 133 | } 134 | 135 | if (eventBuffer.size() >= maxBufferSize) { 136 | long flushTillSeq = Math.max( 137 | perSourceReceivedSeqNo.get(bootstrapServers[0]), 138 | perSourceReceivedSeqNo.get(bootstrapServers[1])); 139 | LOG.info("Buffer size exceeded. Force flushing events till the sequence {}", sequenceNumber); 140 | forceFlushBuffer(flushTillSeq); 141 | } 142 | eventBuffer.put(sequenceNumber, new BufferValueHolder(event, sourceId, objects)); 143 | } 144 | 145 | public synchronized void onEvent(String sourceId, long sequenceNumber, Object event, Object[] objects) { 146 | perSourceReceivedSeqNo.put(sourceId, sequenceNumber); 147 | 148 | if (sequenceNumber <= lastConsumedSeqNo) { 149 | if (LOG.isDebugEnabled()) { 150 | LOG.debug("Message with sequence {} already received. Dropping the event from source {}:{}", 151 | sequenceNumber, sourceId, event); 152 | } 153 | } else if (sequenceNumber == lastConsumedSeqNo + 1) { 154 | lastConsumedSeqNo++; 155 | if (LOG.isDebugEnabled()) { 156 | LOG.debug("Message with sequence {} received from source {}. Updating lastConsumedSeqNo={}", 157 | sequenceNumber, sourceId, lastConsumedSeqNo); 158 | } 159 | eventListener.onEvent(event, objects); 160 | 161 | // Gap is filled by receiving the next expected sequence number 162 | if (!eventBuffer.isEmpty()) { 163 | flushBuffer(); 164 | } 165 | } else { // Sequence number is greater than the expected sequence number 166 | if (isEventGap) { 167 | if (LOG.isDebugEnabled()) { 168 | LOG.debug("Message with sequence {} from source{}. 
Couldn't fill the gap, buffering the event.", 169 | sequenceNumber, sourceId); 170 | } 171 | 172 | bufferEvent(sourceId, sequenceNumber, event, objects); 173 | long flushTillSeq = Math.min(perSourceReceivedSeqNo.get(bootstrapServers[0]), 174 | perSourceReceivedSeqNo.get(bootstrapServers[1])); 175 | isEventGap = false; 176 | forceFlushBuffer(flushTillSeq); 177 | } else { 178 | if (LOG.isDebugEnabled()) { 179 | LOG.debug("Gap detected. Message with sequence {} received from source {}." + 180 | " Expected sequence number is {}. Starting buffering events", 181 | sequenceNumber, sourceId, lastConsumedSeqNo + 1); 182 | } 183 | isEventGap = true; 184 | bufferEvent(sourceId, sequenceNumber, event, objects); 185 | 186 | if (!isFlushTaskDue.get()) { 187 | flushBufferTimer.schedule(new BufferFlushTask(), bufferInterval); 188 | isFlushTaskDue.set(true); 189 | } 190 | } 191 | } 192 | } 193 | 194 | public synchronized Long getLastConsumedSeqNo() { 195 | return lastConsumedSeqNo; 196 | } 197 | 198 | public synchronized void setLastConsumedSeqNo(long seqNo) { 199 | this.lastConsumedSeqNo = seqNo; 200 | } 201 | 202 | static class BufferValueHolder { 203 | Object[] objects; 204 | private Object event; 205 | private String sourceId; 206 | 207 | BufferValueHolder(Object event, String sourceId, Object[] objects) { 208 | this.event = event; 209 | this.sourceId = sourceId; 210 | this.objects = objects; 211 | } 212 | 213 | Object[] getObjects() { 214 | return objects; 215 | } 216 | 217 | String getSourceId() { 218 | return sourceId; 219 | } 220 | 221 | public Object getEvent() { 222 | return event; 223 | } 224 | } 225 | 226 | class BufferFlushTask extends TimerTask { 227 | private final Logger log = LogManager.getLogger(BufferFlushTask.class); 228 | 229 | @Override 230 | public synchronized void run() { 231 | isFlushTaskDue.set(false); 232 | long flushTillSeq = Math.max(perSourceReceivedSeqNo.get(bootstrapServers[0]), 233 | perSourceReceivedSeqNo.get(bootstrapServers[1])); 234 | if (log.isDebugEnabled()) { 235 | log.debug("Executing the buffer flushing task. Flushing buffers till {}", flushTillSeq); 236 | } 237 | forceFlushBuffer(flushTillSeq); 238 | } 239 | } 240 | } 241 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/sink/KafkaReplayRequestSink.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 
17 | */ 18 | package io.siddhi.extension.io.kafka.sink; 19 | 20 | import io.siddhi.annotation.Example; 21 | import io.siddhi.annotation.Extension; 22 | import io.siddhi.annotation.Parameter; 23 | import io.siddhi.annotation.util.DataType; 24 | import io.siddhi.core.config.SiddhiAppContext; 25 | import io.siddhi.core.event.Event; 26 | import io.siddhi.core.exception.ConnectionUnavailableException; 27 | import io.siddhi.core.stream.ServiceDeploymentInfo; 28 | import io.siddhi.core.stream.output.sink.Sink; 29 | import io.siddhi.core.util.config.ConfigReader; 30 | import io.siddhi.core.util.snapshot.state.State; 31 | import io.siddhi.core.util.snapshot.state.StateFactory; 32 | import io.siddhi.core.util.transport.DynamicOptions; 33 | import io.siddhi.core.util.transport.OptionHolder; 34 | import io.siddhi.extension.io.kafka.Constants; 35 | import io.siddhi.extension.io.kafka.util.KafkaReplayResponseSourceRegistry; 36 | import io.siddhi.query.api.definition.StreamDefinition; 37 | 38 | /** 39 | * This class implements a Kafka Replay Request Sink 40 | */ 41 | @Extension( 42 | name = "kafka-replay-request", 43 | namespace = "sink", 44 | description = "This sink is used to request replay of specific range of events on a specified partition of a " + 45 | "topic.", 46 | parameters = { 47 | @Parameter(name = "sink.id", 48 | description = "a unique SINK_ID should be set. This sink id will be used to match with the " + 49 | "appropriate kafka-replay-response source", 50 | type = {DataType.STRING}) 51 | }, 52 | examples = { 53 | @Example( 54 | syntax = "@App:name('TestKafkaReplay')\n" + 55 | "\n" + 56 | "@sink(type='kafka-replay-request', sink.id='1')\n" + 57 | "define stream BarStream (topicForReplay string, partitionForReplay string, " + 58 | "startOffset string, endOffset string);\n" + 59 | "\n" + 60 | "@info(name = 'query1')\n" + 61 | "@source(type='kafka-replay-response', group.id='group', threading.option=" + 62 | "'single.thread', bootstrap.servers='localhost:9092', sink.id='1',\n" + 63 | "@map(type='json'))\n" + 64 | "Define stream FooStream (symbol string, amount double);\n" + 65 | "\n" + 66 | "@sink(type='log')\n" + 67 | "Define stream logStream(symbol string, amount double);\n" + 68 | "\n" + 69 | "from FooStream select * insert into logStream;", 70 | description = "In this app we can send replay request events into BarStream and observe the " + 71 | "replayed events in the logStream") 72 | } 73 | ) 74 | 75 | public class KafkaReplayRequestSink extends Sink { 76 | private String sinkID; 77 | 78 | @Override 79 | public Class[] getSupportedInputEventClasses() { 80 | return new Class[]{String.class, Event.class}; 81 | } 82 | 83 | @Override 84 | protected ServiceDeploymentInfo exposeServiceDeploymentInfo() { 85 | return null; 86 | } 87 | 88 | @Override 89 | public String[] getSupportedDynamicOptions() { 90 | return new String[0]; 91 | } 92 | 93 | @Override 94 | protected StateFactory init(StreamDefinition outputStreamDefinition, OptionHolder optionHolder, 95 | ConfigReader sinkConfigReader, SiddhiAppContext siddhiAppContext) { 96 | this.sinkID = optionHolder.validateAndGetOption(Constants.SINK_ID).getValue(); 97 | return null; 98 | } 99 | 100 | @Override 101 | public void publish(Object payload, DynamicOptions dynamicOptions, State state) 102 | throws ConnectionUnavailableException { 103 | String partitionForReplay; 104 | String startOffset; 105 | String endOffset; 106 | String replayTopic; 107 | Object[] replayParams; 108 | if (payload instanceof Event[]) { 109 | replayParams = ((Event[]) 
payload)[0].getData(); 110 | } else if (payload instanceof Event) { 111 | replayParams = ((Event) payload).getData(); 112 | } else { 113 | throw new ConnectionUnavailableException("Unknown type"); 114 | } 115 | replayTopic = (String) replayParams[0]; 116 | partitionForReplay = (String) replayParams[1]; 117 | startOffset = (String) replayParams[2]; 118 | endOffset = (String) replayParams[3]; 119 | KafkaReplayResponseSourceRegistry.getInstance().getKafkaReplayResponseSource(sinkID) 120 | .onReplayRequest(partitionForReplay, startOffset, endOffset, replayTopic); 121 | } 122 | 123 | @Override 124 | public void connect() throws ConnectionUnavailableException { 125 | 126 | } 127 | 128 | @Override 129 | public void disconnect() { 130 | 131 | } 132 | 133 | @Override 134 | public void destroy() { 135 | 136 | } 137 | } 138 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/source/ConsumerKafkaGroup.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.source; 20 | 21 | import io.siddhi.core.stream.input.source.SourceEventListener; 22 | import io.siddhi.extension.io.kafka.metrics.SourceMetrics; 23 | import org.apache.logging.log4j.LogManager; 24 | import org.apache.logging.log4j.Logger; 25 | 26 | import java.util.ArrayList; 27 | import java.util.Arrays; 28 | import java.util.List; 29 | import java.util.Properties; 30 | import java.util.concurrent.ExecutorService; 31 | import java.util.concurrent.Future; 32 | 33 | /** 34 | * This processes the Kafka messages using a thread pool. 
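 *
 * Thread fan-out, as implemented in the constructor below: with T topics and P partitions,
 * 'single.thread' runs one consumer thread for all topics and partitions, 'topic.wise' runs one
 * thread per topic (T threads), and 'partition.wise' runs one thread per topic-partition pair
 * (T x P threads).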
35 | */ 36 | public class ConsumerKafkaGroup { 37 | private static final Logger LOG = LogManager.getLogger(ConsumerKafkaGroup.class); 38 | private final String topics[]; 39 | private final String partitions[]; 40 | private final Properties props; 41 | private List kafkaConsumerThreadList = new ArrayList<>(); 42 | private ExecutorService executorService; 43 | private String threadingOption; 44 | private boolean isBinaryMessage; 45 | private KafkaSource.KafkaSourceState kafkaSourceState; 46 | private List> futureList = new ArrayList<>(); 47 | 48 | ConsumerKafkaGroup(String[] topics, String[] partitions, Properties props, String threadingOption, 49 | ExecutorService executorService, boolean isBinaryMessage, boolean enableOffsetCommit, 50 | boolean enableAsyncCommit, SourceEventListener sourceEventListener, 51 | String[] requiredProperties, SourceMetrics metrics) { 52 | this.threadingOption = threadingOption; 53 | this.topics = topics; 54 | this.partitions = partitions; 55 | this.props = props; 56 | this.executorService = executorService; 57 | this.isBinaryMessage = isBinaryMessage; 58 | 59 | if (KafkaSource.SINGLE_THREADED.equals(threadingOption)) { 60 | KafkaConsumerThread kafkaConsumerThread = 61 | new KafkaConsumerThread(sourceEventListener, topics, partitions, props, 62 | false, isBinaryMessage, enableOffsetCommit, enableAsyncCommit, 63 | requiredProperties, metrics); 64 | kafkaConsumerThreadList.add(kafkaConsumerThread); 65 | LOG.info("Kafka Consumer thread starting to listen on topic(s): {} with partition/s: {}", 66 | Arrays.toString(topics), Arrays.toString(partitions)); 67 | } else if (KafkaSource.TOPIC_WISE.equals(threadingOption)) { 68 | for (String topic : topics) { 69 | KafkaConsumerThread kafkaConsumerThread = 70 | new KafkaConsumerThread(sourceEventListener, new String[]{topic}, partitions, props, 71 | false, isBinaryMessage, enableOffsetCommit, enableAsyncCommit, 72 | requiredProperties, metrics); 73 | kafkaConsumerThreadList.add(kafkaConsumerThread); 74 | LOG.info("Kafka Consumer thread starting to listen on topic: {} with partition/s: {}", topic, 75 | Arrays.toString(partitions)); 76 | } 77 | } else if (KafkaSource.PARTITION_WISE.equals(threadingOption)) { 78 | for (String topic : topics) { 79 | for (String partition : partitions) { 80 | KafkaConsumerThread kafkaConsumerThread = 81 | new KafkaConsumerThread(sourceEventListener, new String[]{topic}, 82 | new String[]{partition}, props, true, 83 | isBinaryMessage, enableOffsetCommit, enableAsyncCommit, requiredProperties, 84 | metrics); 85 | kafkaConsumerThreadList.add(kafkaConsumerThread); 86 | LOG.info("Kafka Consumer thread starting to listen on topic: {} with partition: {}", topic, 87 | partition); 88 | } 89 | } 90 | } 91 | } 92 | 93 | void pause() { 94 | kafkaConsumerThreadList.forEach(KafkaConsumerThread::pause); 95 | } 96 | 97 | void resume() { 98 | kafkaConsumerThreadList.forEach(KafkaConsumerThread::resume); 99 | } 100 | 101 | void restoreState() { 102 | kafkaConsumerThreadList.forEach(kafkaConsumerThread -> kafkaConsumerThread.restore()); 103 | } 104 | 105 | void shutdown() { 106 | kafkaConsumerThreadList.forEach(KafkaConsumerThread::shutdownConsumer); 107 | futureList.forEach(future -> { 108 | if (!future.isCancelled()) { 109 | future.cancel(true); 110 | } 111 | }); 112 | } 113 | 114 | void run() { 115 | try { 116 | for (KafkaConsumerThread consumerThread : kafkaConsumerThreadList) { 117 | futureList.add(executorService.submit(consumerThread)); 118 | } 119 | } catch (Throwable t) { 120 | LOG.error("Error while 
creating KafkaConsumerThread for topic(s): {}", Arrays.toString(topics), t); 121 | } 122 | } 123 | 124 | public void setKafkaSourceState(KafkaSource.KafkaSourceState kafkaSourceState) { 125 | this.kafkaSourceState = kafkaSourceState; 126 | for (KafkaConsumerThread consumer : kafkaConsumerThreadList) { 127 | consumer.setKafkaSourceState(kafkaSourceState); 128 | } 129 | } 130 | } 131 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/source/KafkaReplayResponseSource.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | package io.siddhi.extension.io.kafka.source; 19 | 20 | 21 | /** 22 | * This class implements a Kafka source to receive events from a kafka cluster. 23 | */ 24 | 25 | import io.siddhi.annotation.Example; 26 | import io.siddhi.annotation.Extension; 27 | import io.siddhi.annotation.Parameter; 28 | import io.siddhi.annotation.util.DataType; 29 | import io.siddhi.core.config.SiddhiAppContext; 30 | import io.siddhi.core.exception.ConnectionUnavailableException; 31 | import io.siddhi.core.exception.SiddhiAppRuntimeException; 32 | import io.siddhi.core.stream.input.source.SourceEventListener; 33 | import io.siddhi.core.util.config.ConfigReader; 34 | import io.siddhi.core.util.snapshot.state.StateFactory; 35 | import io.siddhi.core.util.transport.OptionHolder; 36 | import io.siddhi.extension.io.kafka.Constants; 37 | import io.siddhi.extension.io.kafka.util.KafkaReplayResponseSourceRegistry; 38 | 39 | import java.util.ArrayList; 40 | import java.util.List; 41 | import java.util.concurrent.ExecutorService; 42 | import java.util.concurrent.Future; 43 | 44 | /** 45 | * This source is used to listen to replayed events requested from kafka-replay-request sink 46 | */ 47 | @Extension( 48 | name = "kafka-replay-response", 49 | namespace = "source", 50 | description = "This source is used to listen to replayed events requested from kafka-replay-request sink", 51 | parameters = { 52 | @Parameter(name = "bootstrap.servers", 53 | description = "This specifies the list of Kafka servers to which the Kafka source " + 54 | "must listen. This list can be provided as a set of comma-separated values.\n" + 55 | "e.g., `localhost:9092,localhost:9093`", 56 | type = {DataType.STRING}), 57 | @Parameter(name = "group.id", 58 | description = "This is an ID to identify the Kafka source group. The group ID ensures " + 59 | "that sources with the same topic and partition that are in the same group do not" + 60 | " receive the same event.", 61 | type = {DataType.STRING}), 62 | @Parameter(name = "threading.option", 63 | description = " This specifies whether the Kafka source is to be run on a single thread," + 64 | " or in multiple threads based on a condition. 
Possible values are as follows:\n" + 65 | "`single.thread`: To run the Kafka source on a single thread.\n" + 66 | "`topic.wise`: To use a separate thread per topic.\n" + 67 | "`partition.wise`: To use a separate thread per partition.", 68 | type = {DataType.STRING}), 69 | @Parameter( 70 | name = "sink.id", 71 | description = "a unique SINK_ID that pairs this source with the corresponding " + 72 | "kafka-replay-request sink.", 73 | type = {DataType.STRING}), 74 | }, 75 | examples = { 76 | @Example( 77 | syntax = "@App:name('TestKafkaReplay')\n" + 78 | "\n" + 79 | "@sink(type='kafka-replay-request', sink.id='1')\n" + 80 | "define stream BarStream (topicForReplay string, partitionForReplay string, " + 81 | "startOffset string, endOffset string);\n" + 82 | "\n" + 83 | "@info(name = 'query1')\n" + 84 | "@source(type='kafka-replay-response', group.id='group', threading.option=" + 85 | "'single.thread', bootstrap.servers='localhost:9092', sink.id='1',\n" + 86 | "@map(type='json'))\n" + 87 | "Define stream FooStream (symbol string, amount double);\n" + 88 | "\n" + 89 | "@sink(type='log')\n" + 90 | "Define stream logStream(symbol string, amount double);\n" + 91 | "\n" + 92 | "from FooStream select * insert into logStream;", 93 | description = "In this app we can send replay request events into BarStream and observe the " + 94 | "replayed events in the logStream") 95 | } 96 | ) 97 | public class KafkaReplayResponseSource extends KafkaSource { 98 | private String sinkId; 99 | private List<Future<?>> futureList = new ArrayList<>(); 100 | private List<KafkaReplayThread> kafkaReplayThreadList = new ArrayList<>(); 101 | 102 | @Override 103 | public StateFactory<KafkaSourceState> init(SourceEventListener sourceEventListener, OptionHolder optionHolder, 104 | String[] requiredProperties, ConfigReader configReader, 105 | SiddhiAppContext siddhiAppContext) { 106 | this.siddhiAppContext = siddhiAppContext; 107 | this.optionHolder = optionHolder; 108 | this.requiredProperties = requiredProperties.clone(); 109 | this.sourceEventListener = sourceEventListener; 110 | if (configReader != null) { 111 | bootstrapServers = configReader.readConfig(ADAPTOR_SUBSCRIBER_ZOOKEEPER_CONNECT_SERVERS, 112 | optionHolder.validateAndGetStaticValue(ADAPTOR_SUBSCRIBER_ZOOKEEPER_CONNECT_SERVERS)); 113 | groupID = configReader.readConfig(ADAPTOR_SUBSCRIBER_GROUP_ID, 114 | optionHolder.validateAndGetStaticValue(ADAPTOR_SUBSCRIBER_GROUP_ID)); 115 | threadingOption = configReader.readConfig(THREADING_OPTION, 116 | optionHolder.validateAndGetStaticValue(THREADING_OPTION)); 117 | seqEnabled = configReader.readConfig(SEQ_ENABLED, 118 | optionHolder.validateAndGetStaticValue(SEQ_ENABLED, "false")) 119 | .equalsIgnoreCase("true"); 120 | optionalConfigs = configReader.readConfig(ADAPTOR_OPTIONAL_CONFIGURATION_PROPERTIES, 121 | optionHolder.validateAndGetStaticValue(ADAPTOR_OPTIONAL_CONFIGURATION_PROPERTIES, null)); 122 | isBinaryMessage = Boolean.parseBoolean(configReader.readConfig(IS_BINARY_MESSAGE, 123 | optionHolder.validateAndGetStaticValue(IS_BINARY_MESSAGE, "false"))); 124 | enableOffsetCommit = Boolean.parseBoolean(configReader.readConfig(ADAPTOR_ENABLE_OFFSET_COMMIT, 125 | optionHolder.validateAndGetStaticValue(ADAPTOR_ENABLE_OFFSET_COMMIT, "true"))); 126 | enableAsyncCommit = Boolean.parseBoolean(configReader.readConfig(ADAPTOR_ENABLE_ASYNC_COMMIT, 127 | optionHolder.validateAndGetStaticValue(ADAPTOR_ENABLE_ASYNC_COMMIT, "true"))); 128 | 129 | } else { 130 | bootstrapServers = optionHolder.validateAndGetStaticValue(ADAPTOR_SUBSCRIBER_ZOOKEEPER_CONNECT_SERVERS); 131 | groupID = optionHolder.validateAndGetStaticValue(ADAPTOR_SUBSCRIBER_GROUP_ID); 132 | 
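// Note: seqEnabled, optionalConfigs, isBinaryMessage, and the offset-commit flags are re-read
// unconditionally after this if/else block, so those later assignments are the ones that take effect.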
threadingOption = optionHolder.validateAndGetStaticValue(THREADING_OPTION); 132 | optionalConfigs = optionHolder.validateAndGetStaticValue(ADAPTOR_OPTIONAL_CONFIGURATION_PROPERTIES, null); 133 | isBinaryMessage = Boolean.parseBoolean(optionHolder.validateAndGetStaticValue(IS_BINARY_MESSAGE, 134 | "false")); 135 | enableOffsetCommit = Boolean.parseBoolean(optionHolder. 136 | validateAndGetStaticValue(ADAPTOR_ENABLE_OFFSET_COMMIT, "true")); 137 | enableAsyncCommit = Boolean.parseBoolean(optionHolder.validateAndGetStaticValue(ADAPTOR_ENABLE_ASYNC_COMMIT, 138 | "true")); 139 | } 140 | seqEnabled = optionHolder.validateAndGetStaticValue(SEQ_ENABLED, "false").equalsIgnoreCase("true"); 141 | optionalConfigs = optionHolder.validateAndGetStaticValue(ADAPTOR_OPTIONAL_CONFIGURATION_PROPERTIES, null); 142 | isBinaryMessage = Boolean.parseBoolean(optionHolder.validateAndGetStaticValue(IS_BINARY_MESSAGE, 143 | "false")); 144 | enableOffsetCommit = Boolean.parseBoolean(optionHolder.validateAndGetStaticValue(ADAPTOR_ENABLE_OFFSET_COMMIT, 145 | "true")); 146 | enableAsyncCommit = Boolean.parseBoolean(optionHolder.validateAndGetStaticValue(ADAPTOR_ENABLE_ASYNC_COMMIT, 147 | "true")); 148 | this.sinkId = optionHolder.validateAndGetStaticValue(Constants.SINK_ID); 149 | KafkaReplayResponseSourceRegistry.getInstance().putKafkaReplayResponseSource(sinkId, this); 150 | return () -> new KafkaSourceState(seqEnabled); 151 | } 152 | 153 | @Override 154 | public void connect(ConnectionCallback connectionCallback, KafkaSourceState kafkaSourceState) { 155 | } 156 | 157 | public void onReplayRequest(String partitionForReplay, String startOffset, String endOffset, String replayTopic) 158 | throws ConnectionUnavailableException { 159 | try { 160 | String[] partitionAsListForReplay = new String[]{partitionForReplay}; 161 | ExecutorService executorService = siddhiAppContext.getExecutorService(); 162 | KafkaReplayThread kafkaReplayThread = 163 | new KafkaReplayThread(sourceEventListener, new String[]{replayTopic}, partitionAsListForReplay, 164 | KafkaSource.createConsumerConfig(bootstrapServers, groupID, optionalConfigs, 165 | isBinaryMessage, enableOffsetCommit), 166 | false, isBinaryMessage, enableOffsetCommit, 167 | enableAsyncCommit, requiredProperties, Integer.parseInt(startOffset), 168 | Integer.parseInt(endOffset), futureList.size(), sinkId); 169 | kafkaReplayThreadList.add(kafkaReplayThread); 170 | futureList.add(executorService.submit(kafkaReplayThread)); 171 | } catch (SiddhiAppRuntimeException e) { 172 | throw e; 173 | } catch (Throwable e) { 174 | throw new ConnectionUnavailableException("Error when initiating connection with Kafka server: " + 175 | bootstrapServers + " in Siddhi App: " + siddhiAppContext.getName(), e); 176 | } 177 | } 178 | 179 | public void onReplayFinish(int threadId) { 180 | kafkaReplayThreadList.get(threadId).shutdownConsumer(); 181 | Future future = futureList.get(threadId); 182 | if (!future.isCancelled()) { 183 | future.cancel(true); 184 | } 185 | } 186 | } 187 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/source/KafkaReplayThread.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 
7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.source; 20 | 21 | import io.siddhi.core.stream.input.source.SourceEventListener; 22 | import io.siddhi.extension.io.kafka.util.KafkaReplayResponseSourceRegistry; 23 | import org.apache.kafka.clients.consumer.ConsumerRecord; 24 | 25 | import java.util.Properties; 26 | 27 | /** 28 | * This runnable processes each Kafka message and sends it to siddhi. 29 | */ 30 | public class KafkaReplayThread extends KafkaConsumerThread { 31 | private int startOffset; 32 | private int endOffset; 33 | private int threadId; 34 | private String sinkId; 35 | 36 | KafkaReplayThread(SourceEventListener sourceEventListener, String[] topics, String[] partitions, Properties props, 37 | boolean isPartitionWiseThreading, boolean isBinaryMessage, boolean enableOffsetCommit, 38 | boolean enableAsyncCommit, String[] requiredProperties, int startOffset, int endOffset, 39 | int threadId, String sinkId) { 40 | super(sourceEventListener, topics, partitions, props, isPartitionWiseThreading, isBinaryMessage, 41 | enableOffsetCommit, enableAsyncCommit, requiredProperties, null); 42 | this.threadId = threadId; 43 | this.sinkId = sinkId; 44 | this.startOffset = startOffset; 45 | this.endOffset = endOffset; 46 | this.isReplayThread = true; 47 | } 48 | 49 | @Override 50 | void seekToRequiredOffset() { 51 | consumer.seekToBeginning(partitionsList); 52 | } 53 | 54 | @Override 55 | boolean isRecordAfterStartOffset(ConsumerRecord record) { 56 | return record.offset() >= startOffset; 57 | } 58 | 59 | @Override 60 | boolean endReplay(ConsumerRecord record) { 61 | if (record.offset() >= endOffset) { 62 | KafkaReplayResponseSourceRegistry.getInstance().getKafkaReplayResponseSource(sinkId) 63 | .onReplayFinish(threadId); 64 | return true; 65 | } else { 66 | return false; 67 | } 68 | } 69 | } 70 | -------------------------------------------------------------------------------- /component/src/main/java/io/siddhi/extension/io/kafka/util/KafkaReplayResponseSourceRegistry.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2020, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 
17 | */ 18 | package io.siddhi.extension.io.kafka.util; 19 | 20 | import io.siddhi.extension.io.kafka.source.KafkaReplayResponseSource; 21 | 22 | import java.util.Collections; 23 | import java.util.HashMap; 24 | import java.util.Map; 25 | 26 | /** 27 | * A class to register a KafkaReplayResponseSource against its sink.id or source.id. Used by KafkaReplayRequestSink 28 | * to route replay requests to the matching source. 29 | */ 30 | public class KafkaReplayResponseSourceRegistry { 31 | private static KafkaReplayResponseSourceRegistry instance = new KafkaReplayResponseSourceRegistry(); 32 | private Map<String, KafkaReplayResponseSource> kafkaReplayResponseSourceHashMap = Collections.synchronizedMap( 33 | new HashMap<>()); 34 | 35 | private KafkaReplayResponseSourceRegistry() {} 36 | 37 | public static KafkaReplayResponseSourceRegistry getInstance() { 38 | return instance; 39 | } 40 | 41 | public void putKafkaReplayResponseSource(String key, KafkaReplayResponseSource source) { 42 | kafkaReplayResponseSourceHashMap.put(key, source); 43 | } 44 | 45 | public KafkaReplayResponseSource getKafkaReplayResponseSource(String key) { 46 | return kafkaReplayResponseSourceHashMap.get(key); 47 | } 48 | public void removeKafkaReplayResponseSource(String key) { 49 | kafkaReplayResponseSourceHashMap.remove(key); 50 | } 51 | } 52 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/KafkaTestUtil.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka; 20 | 21 | import kafka.admin.AdminUtils; 22 | import kafka.admin.RackAwareMode; 23 | import kafka.common.TopicExistsException; 24 | import kafka.server.KafkaConfig; 25 | import kafka.server.KafkaServerStartable; 26 | import kafka.utils.ZKStringSerializer$; 27 | import kafka.utils.ZkUtils; 28 | import org.I0Itec.zkclient.ZkClient; 29 | import org.I0Itec.zkclient.ZkConnection; 30 | import org.apache.commons.io.FileUtils; 31 | import org.apache.curator.test.TestingServer; 32 | import org.apache.kafka.clients.producer.KafkaProducer; 33 | import org.apache.kafka.clients.producer.Producer; 34 | import org.apache.kafka.clients.producer.ProducerRecord; 35 | import org.apache.logging.log4j.LogManager; 36 | import org.apache.logging.log4j.Logger; 37 | 38 | import java.io.File; 39 | import java.io.IOException; 40 | import java.util.Properties; 41 | 42 | /** 43 | * Class defining the constants and helper methods for the Kafka test cases. 
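 *
 * A minimal sketch of the intended lifecycle, using only the helpers defined below (topic name
 * and event count are illustrative):
 * <pre>{@code
 * KafkaTestUtil.setupKafkaBroker();                          // embedded ZooKeeper (2181) + broker (9092)
 * KafkaTestUtil.createTopic(new String[]{"myTopic"}, 1);     // one partition
 * KafkaTestUtil.kafkaPublisher(new String[]{"myTopic"}, 1, 10,
 *         false, null, true);                                // publish 10 XML events to the default broker
 * KafkaTestUtil.deleteTopic(new String[]{"myTopic"});
 * KafkaTestUtil.stopKafkaBroker();
 * }</pre>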
44 | */ 45 | public class KafkaTestUtil { 46 | public static final String ZK_SERVER_CON_STRING = "localhost:2181"; 47 | public static final String ZK_SERVER2_CON_STRING = "localhost:2182"; 48 | private static final Logger log = LogManager.getLogger(KafkaTestUtil.class); 49 | private static final String kafkaLogDir = "tmp_kafka_dir"; 50 | private static final String kafkaLogDir2 = "tmp_kafka_dir2"; 51 | private static final long CLEANER_BUFFER_SIZE = 2 * 1024 * 1024L; 52 | private static TestingServer zkTestServer; 53 | private static TestingServer zkTestServer2; 54 | private static KafkaServerStartable kafkaServer; 55 | private static KafkaServerStartable kafkaServer2; 56 | 57 | public static void cleanLogDir() { 58 | try { 59 | File f = new File(kafkaLogDir); 60 | FileUtils.deleteDirectory(f); 61 | } catch (IOException e) { 62 | log.error("Failed to clean up: " + e); 63 | } 64 | } 65 | 66 | public static void cleanLogDir2() { 67 | try { 68 | File f = new File(kafkaLogDir2); 69 | FileUtils.deleteDirectory(f); 70 | } catch (IOException e) { 71 | log.error("Failed to clean up: " + e); 72 | } 73 | } 74 | 75 | //---- private methods -------- 76 | public static void setupKafkaBroker() { 77 | try { 78 | log.info("#############################################################################################"); 79 | log.info("################################# ZOOKEEPER STARTED ######################################"); 80 | log.info("#############################################################################################"); 81 | // mock zookeeper 82 | zkTestServer = new TestingServer(2181); 83 | // mock kafka 84 | Properties props = new Properties(); 85 | props.put("broker.id", "0"); 86 | props.put("host.name", "localhost"); 87 | props.put("port", "9092"); 88 | props.put("log.dir", kafkaLogDir); 89 | props.put("zookeeper.connect", zkTestServer.getConnectString()); 90 | props.put("replica.socket.timeout.ms", "30000"); 91 | props.put("delete.topic.enable", "true"); 92 | props.put("log.cleaner.dedupe.buffer.size", CLEANER_BUFFER_SIZE); 93 | KafkaConfig config = new KafkaConfig(props); 94 | kafkaServer = new KafkaServerStartable(config); 95 | kafkaServer.startup(); 96 | } catch (Exception e) { 97 | log.error("Error running local Kafka broker / Zookeeper", e); 98 | } 99 | } 100 | 101 | public static void setupKafkaBroker2() { 102 | try { 103 | log.info("#############################################################################################"); 104 | log.info("################################# ZOOKEEPER 2 STARTED ####################################"); 105 | log.info("#############################################################################################"); 106 | // mock zookeeper 107 | zkTestServer2 = new TestingServer(2182); 108 | // mock kafka 109 | Properties props = new Properties(); 110 | props.put("broker.id", "1"); 111 | props.put("host.name", "localhost"); 112 | props.put("port", "9093"); 113 | props.put("log.dir", kafkaLogDir2); 114 | props.put("zookeeper.connect", zkTestServer2.getConnectString()); 115 | props.put("replica.socket.timeout.ms", "30000"); 116 | props.put("delete.topic.enable", "true"); 117 | props.put("log.cleaner.dedupe.buffer.size", CLEANER_BUFFER_SIZE); 118 | KafkaConfig config = new KafkaConfig(props); 119 | kafkaServer2 = new KafkaServerStartable(config); 120 | kafkaServer2.startup(); 121 | 122 | } catch (Exception e) { 123 | log.error("Error running local Kafka broker 2", e); 124 | } 125 | } 126 | 127 | public static void stopKafkaBroker2() { 128 | 
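// Shut down the broker before its ZooKeeper; the sleeps below give in-flight shutdown work time to drain.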
log.info("#############################################################################################"); 129 | log.info("################################# ZOOKEEPER 2 STOPPED ####################################"); 130 | log.info("#############################################################################################"); 131 | try { 132 | if (kafkaServer2 != null) { 133 | kafkaServer2.shutdown(); 134 | kafkaServer2.awaitShutdown(); 135 | } 136 | Thread.sleep(5000); 137 | if (zkTestServer2 != null) { 138 | zkTestServer2.stop(); 139 | } 140 | Thread.sleep(5000); 141 | cleanLogDir2(); 142 | } catch (InterruptedException e) { 143 | log.error(e.getMessage(), e); 144 | } catch (IOException e) { 145 | log.error("Error shutting down 2nd Kafka broker / Zookeeper", e); 146 | } 147 | } 148 | 149 | public static void stopKafkaBroker() { 150 | log.info("#############################################################################################"); 151 | log.info("################################# ZOOKEEPER STOPPED ######################################"); 152 | log.info("#############################################################################################"); 153 | try { 154 | if (kafkaServer != null) { 155 | kafkaServer.shutdown(); 156 | kafkaServer.awaitShutdown(); 157 | } 158 | Thread.sleep(500); 159 | if (zkTestServer != null) { 160 | zkTestServer.stop(); 161 | } 162 | Thread.sleep(500); 163 | cleanLogDir(); 164 | } catch (InterruptedException e) { 165 | log.error(e.getMessage(), e); 166 | } catch (IOException e) { 167 | log.error("Error shutting down Kafka broker / Zookeeper", e); 168 | } 169 | } 170 | 171 | 172 | public static void createTopic(String topics[], int numOfPartitions) { 173 | createTopic(ZK_SERVER_CON_STRING, topics, numOfPartitions); 174 | } 175 | 176 | public static void createTopic(String connectionString, String topics[], int numOfPartitions) { 177 | ZkClient zkClient = new ZkClient(connectionString, 30000, 30000, ZKStringSerializer$.MODULE$); 178 | ZkConnection zkConnection = new ZkConnection(connectionString); 179 | ZkUtils zkUtils = new ZkUtils(zkClient, zkConnection, false); 180 | for (String topic : topics) { 181 | try { 182 | AdminUtils.createTopic(zkUtils, topic, numOfPartitions, 1, new Properties(), 183 | RackAwareMode.Enforced$.MODULE$); 184 | } catch (TopicExistsException e) { 185 | log.warn("topic exists for: " + topic); 186 | } 187 | } 188 | zkClient.close(); 189 | } 190 | 191 | public static void deleteTopic(String topics[]) { 192 | deleteTopic("localhost:2181", topics); 193 | } 194 | 195 | public static void deleteTopic(String connectionString, String topics[]) { 196 | ZkClient zkClient = new ZkClient(connectionString, 30000, 30000, ZKStringSerializer$.MODULE$); 197 | ZkConnection zkConnection = new ZkConnection(connectionString); 198 | ZkUtils zkUtils = new ZkUtils(zkClient, zkConnection, false); 199 | for (String topic : topics) { 200 | AdminUtils.deleteTopic(zkUtils, topic); 201 | } 202 | zkClient.close(); 203 | } 204 | 205 | public static void kafkaPublisher(String topics[], int numOfPartitions, int numberOfEventsPerTopic, boolean 206 | publishWithPartition, String bootstrapServers, boolean isXML) { 207 | kafkaPublisher(topics, numOfPartitions, numberOfEventsPerTopic, 1000, publishWithPartition, 208 | bootstrapServers, isXML); 209 | } 210 | 211 | public static void kafkaPublisher(String topics[], int numOfPartitions, int numberOfEventsPerTopic, long sleep, 212 | boolean publishWithPartition, String bootstrapServers, boolean isXML) { 213 | 
Properties props = new Properties(); 214 | if (null == bootstrapServers) { 215 | props.put("bootstrap.servers", "localhost:9092"); 216 | } else { 217 | props.put("bootstrap.servers", bootstrapServers); 218 | } 219 | props.put("acks", "all"); 220 | props.put("retries", 0); 221 | props.put("batch.size", 16384); 222 | props.put("linger.ms", 1); 223 | props.put("buffer.memory", 33559000); 224 | props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer"); 225 | props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer"); 226 | Producer<String, String> producer = new KafkaProducer<>(props); 227 | for (String topic : topics) { 228 | for (int i = 0; i < numberOfEventsPerTopic; i++) { 229 | String msg; 230 | if (isXML) { 231 | msg = "<events>" 232 | + "<event>" 233 | + "<symbol>" + topic + "</symbol>" 234 | + "<price>12.5</price>" 235 | + "<volume>" + i + "</volume>" 236 | + "</event>" 237 | + "</events>"; 238 | } else { 239 | msg = "symbol:\"" + topic + "\",\nprice:12.5,\nvolume:" + i; 240 | } 241 | 242 | try { 243 | Thread.sleep(sleep); 244 | } catch (InterruptedException e) { 245 | } 246 | if (numOfPartitions > 1 || publishWithPartition) { 247 | log.info("producing: " + msg + " into partition: " + (i % numOfPartitions)); 248 | producer.send(new ProducerRecord<>(topic, (i % numOfPartitions), 249 | System.currentTimeMillis(), null, msg)); 250 | } else { 251 | log.info("producing: " + msg); 252 | producer.send(new ProducerRecord<>(topic, null, System.currentTimeMillis(), null, msg)); 253 | } 254 | } 255 | } 256 | producer.flush(); 257 | producer.close(); 258 | try { 259 | Thread.sleep(2000); 260 | } catch (InterruptedException e) { 261 | log.error("Thread sleep failed", e); 262 | } 263 | } 264 | } 265 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/UnitTestAppender.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2019 WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 
17 | * 18 | */ 19 | 20 | package io.siddhi.extension.io.kafka; 21 | 22 | import org.apache.logging.log4j.core.Appender; 23 | import org.apache.logging.log4j.core.Core; 24 | import org.apache.logging.log4j.core.Filter; 25 | import org.apache.logging.log4j.core.LogEvent; 26 | import org.apache.logging.log4j.core.appender.AbstractAppender; 27 | import org.apache.logging.log4j.core.config.plugins.Plugin; 28 | import org.apache.logging.log4j.core.config.plugins.PluginAttribute; 29 | import org.apache.logging.log4j.core.config.plugins.PluginElement; 30 | import org.apache.logging.log4j.core.config.plugins.PluginFactory; 31 | import org.mvel2.util.StringAppender; 32 | 33 | @Plugin(name = "UnitTestAppender", 34 | category = Core.CATEGORY_NAME, elementType = Appender.ELEMENT_TYPE) 35 | public class UnitTestAppender extends AbstractAppender { 36 | 37 | private StringAppender messages = new StringAppender(); 38 | 39 | public UnitTestAppender(String name, Filter filter) { 40 | 41 | super(name, filter, null); 42 | } 43 | 44 | @PluginFactory 45 | public static UnitTestAppender createAppender( 46 | @PluginAttribute("name") String name, 47 | @PluginElement("Filter") Filter filter) { 48 | 49 | return new UnitTestAppender(name, filter); 50 | } 51 | 52 | public String getMessages() { 53 | 54 | String results = messages.toString(); 55 | if (results.isEmpty()) { 56 | return null; 57 | } 58 | return results; 59 | } 60 | 61 | @Override 62 | public void append(LogEvent event) { 63 | 64 | messages.append(event.getMessage().getFormattedMessage()); 65 | } 66 | 67 | } 68 | 69 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/multidc/KafkaMultiDCSinkTestCases.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.multidc; 20 | 21 | import io.siddhi.core.SiddhiAppRuntime; 22 | import io.siddhi.core.SiddhiManager; 23 | import io.siddhi.core.event.Event; 24 | import io.siddhi.core.stream.input.InputHandler; 25 | import io.siddhi.core.stream.output.StreamCallback; 26 | import io.siddhi.core.util.SiddhiTestHelper; 27 | import io.siddhi.extension.io.kafka.KafkaTestUtil; 28 | import org.apache.logging.log4j.LogManager; 29 | import org.apache.logging.log4j.Logger; 30 | import org.junit.Assert; 31 | import org.testng.annotations.AfterClass; 32 | import org.testng.annotations.BeforeClass; 33 | import org.testng.annotations.BeforeMethod; 34 | import org.testng.annotations.Test; 35 | 36 | import java.rmi.RemoteException; 37 | import java.util.concurrent.ExecutorService; 38 | import java.util.concurrent.Executors; 39 | import java.util.concurrent.atomic.AtomicInteger; 40 | 41 | /** 42 | * Class implementing the Test cases for KafkaMultiDCSink. 
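 *
 * Topology used by the scenarios below: two independent single-broker clusters (localhost:9092
 * and localhost:9093), one plain kafka source app attached to each, and a single kafkaMultiDC
 * sink publishing every event to both. With both brokers up, each of the three published events
 * is counted twice (6 in total); with one broker down, once (3 in total).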
43 | */ 44 | public class KafkaMultiDCSinkTestCases { 45 | private static final Logger LOG = LogManager.getLogger(KafkaMultiDCSinkTestCases.class); 46 | private static ExecutorService executorService; 47 | private AtomicInteger count; 48 | 49 | @BeforeClass 50 | public static void init() throws Exception { 51 | try { 52 | executorService = Executors.newFixedThreadPool(5); 53 | KafkaTestUtil.cleanLogDir(); 54 | KafkaTestUtil.setupKafkaBroker(); 55 | Thread.sleep(1000); 56 | KafkaTestUtil.cleanLogDir2(); 57 | KafkaTestUtil.setupKafkaBroker2(); 58 | Thread.sleep(1000); 59 | } catch (Exception e) { 60 | throw new RemoteException("Exception caught when starting server", e); 61 | } 62 | } 63 | 64 | @AfterClass 65 | public static void stopKafkaBroker() throws InterruptedException { 66 | KafkaTestUtil.stopKafkaBroker(); 67 | Thread.sleep(1000); 68 | KafkaTestUtil.stopKafkaBroker2(); 69 | Thread.sleep(1000); 70 | while (!executorService.isShutdown() || !executorService.isTerminated()) { 71 | executorService.shutdown(); 72 | } 73 | } 74 | 75 | @BeforeMethod 76 | public void reset() { 77 | count = new AtomicInteger(0); 78 | } 79 | 80 | @Test 81 | public void testMultiDCSinkWithBothBrokersRunning() throws InterruptedException { 82 | LOG.info("Creating test for publishing events for static topic without a partition"); 83 | String[] topics = new String[]{"myTopic"}; 84 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER_CON_STRING, topics, 1); 85 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER2_CON_STRING, topics, 1); 86 | Thread.sleep(4000); 87 | 88 | SiddhiManager sourceOneSiddhiManager = new SiddhiManager(); 89 | SiddhiAppRuntime sourceOneApp = sourceOneSiddhiManager.createSiddhiAppRuntime( 90 | "@App:name('SourceOneSiddhiApp') " + 91 | "define stream BarStream2 (symbol string, price float, volume long); " + 92 | "@info(name = 'query1') " + 93 | "@source(type='kafka', topic.list='myTopic', group.id='single_topic_test'," + 94 | "partition.no.list='0', seq.enabled='true'," + 95 | "threading.option='single.thread', bootstrap.servers='localhost:9092'," + 96 | "@map(type='xml'))" + 97 | "Define stream FooStream2 (symbol string, price float, volume long);" + 98 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 99 | 100 | sourceOneApp.addCallback("BarStream2", new StreamCallback() { 101 | @Override 102 | public void receive(Event[] events) { 103 | for (Event event : events) { 104 | LOG.info(event); 105 | count.getAndIncrement(); 106 | } 107 | } 108 | }); 109 | sourceOneApp.start(); 110 | Thread.sleep(4000); 111 | 112 | SiddhiManager sourceTwoSiddhiManager = new SiddhiManager(); 113 | SiddhiAppRuntime sourceTwoApp = sourceTwoSiddhiManager.createSiddhiAppRuntime( 114 | "@App:name('SourceTwoSiddhiApp') " + 115 | "define stream BarStream2 (symbol string, price float, volume long); " + 116 | "@info(name = 'query1') " + 117 | "@source(type='kafka', topic.list='myTopic', group.id='single_topic_test'," + 118 | "partition.no.list='0', seq.enabled='true'," + 119 | "threading.option='single.thread', bootstrap.servers='localhost:9093'," + 120 | "@map(type='xml'))" + 121 | "Define stream FooStream2 (symbol string, price float, volume long);" + 122 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 123 | 124 | sourceTwoApp.addCallback("BarStream2", new StreamCallback() { 125 | @Override 126 | public void receive(Event[] events) { 127 | for (Event event : events) { 128 | LOG.info(event); 129 | count.getAndIncrement(); 130 | } 131 | } 132 | }); 133 | 
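// Start the second consumer app; it mirrors the first but consumes from the broker on localhost:9093.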
sourceTwoApp.start(); 134 | Thread.sleep(4000); 135 | 136 | String sinkApp = "@App:name('SinkSiddhiApp') \n" 137 | + "define stream FooStream (symbol string, price float, volume long); \n" 138 | + "@info(name = 'query1') \n" 139 | + "@sink(" 140 | + "type='kafkaMultiDC', " 141 | + "topic='myTopic', " 142 | + "partition='0'," 143 | + "bootstrap.servers='localhost:9092,localhost:9093', " 144 | + "@map(type='xml'))" + 145 | "Define stream BarStream (symbol string, price float, volume long);\n" + 146 | "from FooStream select symbol, price, volume insert into BarStream;\n"; 147 | 148 | SiddhiManager siddhiManager = new SiddhiManager(); 149 | SiddhiAppRuntime siddhiAppRuntimeSink = siddhiManager.createSiddhiAppRuntime(sinkApp); 150 | InputHandler fooStream = siddhiAppRuntimeSink.getInputHandler("BarStream"); 151 | siddhiAppRuntimeSink.start(); 152 | Thread.sleep(4000); 153 | fooStream.send(new Object[]{"WSO2", 55.6f, 100L}); 154 | fooStream.send(new Object[]{"WSO2", 75.6f, 102L}); 155 | fooStream.send(new Object[]{"WSO2", 57.6f, 103L}); 156 | 157 | SiddhiTestHelper.waitForEvents(8000, 2, count, 40000); 158 | Assert.assertEquals(6, count.get()); 159 | sourceOneApp.shutdown(); 160 | sourceTwoApp.shutdown(); 161 | siddhiAppRuntimeSink.shutdown(); 162 | Thread.sleep(1000); 163 | 164 | } 165 | 166 | /* 167 | Even if one of the brokers are failing publishing should not be stopped for the other broker. Therefore, one 168 | siddhi app must receive events. 169 | */ 170 | @Test (dependsOnMethods = "testMultiDCSinkWithBothBrokersRunning") 171 | public void testMultiDCSinkWithOneBrokersFailing() throws InterruptedException { 172 | LOG.info("Creating test for publishing events for static topic without a partition"); 173 | String topics[] = new String[]{"myTopic"}; 174 | KafkaTestUtil.createTopic(topics, 1); 175 | 176 | // Stopping 2nd Kafka broker to mimic a broker failure 177 | KafkaTestUtil.stopKafkaBroker2(); 178 | Thread.sleep(4000); 179 | 180 | SiddhiManager sourceOneSiddhiManager = new SiddhiManager(); 181 | SiddhiAppRuntime sourceOneApp = sourceOneSiddhiManager.createSiddhiAppRuntime( 182 | "@App:name('SourceSiddhiApp') " + 183 | "define stream BarStream2 (symbol string, price float, volume long); " + 184 | "@info(name = 'query1') " + 185 | "@source(type='kafka', topic.list='myTopic', group.id='single_topic_test'," + 186 | "partition.no.list='0', seq.enabled='true'," + 187 | "threading.option='single.thread', bootstrap.servers='localhost:9092'," + 188 | "@map(type='xml'))" + 189 | "Define stream FooStream2 (symbol string, price float, volume long);" + 190 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 191 | 192 | sourceOneApp.addCallback("BarStream2", new StreamCallback() { 193 | @Override 194 | public synchronized void receive(Event[] events) { 195 | for (Event event : events) { 196 | LOG.info(event); 197 | count.getAndIncrement(); 198 | } 199 | } 200 | }); 201 | sourceOneApp.start(); 202 | Thread.sleep(4000); 203 | 204 | String sinkApp = "@App:name('SinkSiddhiApp') \n" 205 | + "define stream FooStream (symbol string, price float, volume long); \n" 206 | + "@info(name = 'query1') \n" 207 | + "@sink(" 208 | + "type='kafkaMultiDC', " 209 | + "topic='myTopic', " 210 | + "partition='0'," 211 | + "bootstrap.servers='localhost:9092,localhost:9093', " 212 | + "@map(type='xml'))" + 213 | "Define stream BarStream (symbol string, price float, volume long);\n" + 214 | "from FooStream select symbol, price, volume insert into BarStream;\n"; 215 | 216 | SiddhiManager siddhiManager = new 
SiddhiManager(); 217 | SiddhiAppRuntime siddhiAppRuntimeSink = siddhiManager.createSiddhiAppRuntime(sinkApp); 218 | InputHandler fooStream = siddhiAppRuntimeSink.getInputHandler("BarStream"); 219 | siddhiAppRuntimeSink.start(); 220 | Thread.sleep(4000); 221 | fooStream.send(new Object[]{"WSO2", 55.6f, 100L}); 222 | fooStream.send(new Object[]{"WSO2", 75.6f, 102L}); 223 | fooStream.send(new Object[]{"WSO2", 57.6f, 103L}); 224 | Thread.sleep(4000); 225 | 226 | Assert.assertEquals(3, count.get()); 227 | sourceOneApp.shutdown(); 228 | siddhiAppRuntimeSink.shutdown(); 229 | 230 | } 231 | 232 | } 233 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/multidc/KafkaMultiDCSourceSynchronizerTestCases.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.multidc; 20 | 21 | import io.siddhi.core.stream.input.source.SourceEventListener; 22 | import io.siddhi.extension.io.kafka.multidc.source.SourceSynchronizer; 23 | import io.siddhi.query.api.definition.StreamDefinition; 24 | import org.apache.logging.log4j.LogManager; 25 | import org.apache.logging.log4j.Logger; 26 | import org.junit.Assert; 27 | import org.junit.Test; 28 | import org.testng.annotations.BeforeMethod; 29 | 30 | import java.util.ArrayList; 31 | import java.util.List; 32 | 33 | /** 34 | * Class implementing the Test cases for KafkaMultiDCSource Synchronize Test Case. 
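 *
 * Each scenario feeds interleaved per-source sequence numbers into a SourceSynchronizer and
 * asserts the de-duplicated, in-order merge seen by the listener. A minimal sketch of the
 * harness used below (listener, server list, and timeouts as in the tests):
 * <pre>{@code
 * SourceSynchronizer sync = new SourceSynchronizer(eventListener, servers, 1000, 1000);
 * sync.onEvent(SOURCE_1, 0, buildDummyEvent(SOURCE_1, 0), new String[2]); // next expected: emitted
 * sync.onEvent(SOURCE_1, 2, buildDummyEvent(SOURCE_1, 2), new String[2]); // gap: buffered
 * sync.onEvent(SOURCE_2, 1, buildDummyEvent(SOURCE_2, 1), new String[2]); // fills the gap; 1 then 2 emitted
 * }</pre>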
35 | */ 36 | public class KafkaMultiDCSourceSynchronizerTestCases { 37 | private static final Logger LOG = LogManager.getLogger(KafkaMultiDCSourceSynchronizerTestCases.class); 38 | private static final String SOURCE_1 = "source:9000"; 39 | private static final String SOURCE_2 = "source2:9000"; 40 | private static String[] servers = {SOURCE_1, SOURCE_2}; 41 | private List eventsArrived = new ArrayList<>(); 42 | private SourceEventListener eventListener = new SourceEventListener() { 43 | @Override 44 | public StreamDefinition getStreamDefinition() { 45 | return null; 46 | } 47 | 48 | @Override 49 | public void onEvent(Object eventObject, Object[] transportProperties) { 50 | eventsArrived.add(eventObject); 51 | } 52 | 53 | @Override 54 | public void onEvent(Object o, String[] strings) { 55 | eventsArrived.add(o); 56 | } 57 | 58 | @Override 59 | public void onEvent(Object eventObject, Object[] transportProperties, String[] transportSyncProperties) { 60 | eventsArrived.add(eventObject); 61 | } 62 | 63 | @Override 64 | public void onEvent(Object o, String[] strings, String[] strings1) { 65 | eventsArrived.add(o); 66 | } 67 | }; 68 | 69 | private static String buildDummyEvent(String source, long seqNo) { 70 | StringBuilder builder = new StringBuilder(); 71 | builder.append(source).append(":").append(seqNo); 72 | return builder.toString(); 73 | } 74 | 75 | @BeforeMethod 76 | public void reset() { 77 | eventsArrived.clear(); 78 | } 79 | 80 | private boolean compareEventSequnce(List expectedEventSequence) { 81 | if (eventsArrived.size() != expectedEventSequence.size()) { 82 | LOG.info("Expected number of events and actual number of events are different. " + 83 | "Expected=" + expectedEventSequence.size() + ". Arrived=" + eventsArrived.size()); 84 | return false; 85 | } 86 | for (int i = 0; i < expectedEventSequence.size(); i++) { 87 | if (!eventsArrived.get(i).toString().equals(expectedEventSequence.get(i))) { 88 | LOG.warn("Event " + i + " in the expected and arrived events are different." 
89 | + " Expected=" + expectedEventSequence.get(i).toString() 90 | + ", Arrived=" + eventsArrived.get(i).toString()); 91 | return false; 92 | } 93 | } 94 | return true; 95 | } 96 | 97 | private void sendEvent(SourceSynchronizer synchronizer, String source, long seqNo) { 98 | String[] dummyDynamicOptions = new String[2]; 99 | synchronizer.onEvent(source, seqNo, buildDummyEvent(source, seqNo), dummyDynamicOptions); 100 | } 101 | 102 | /* 103 | Source1: 0 - 1 - 2 104 | Source2: 0 - 1 - 2 105 | */ 106 | @Test 107 | public void testWithoutGaps1() { 108 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 1000); 109 | 110 | sendEvent(synchronizer, SOURCE_1, 0); 111 | 112 | sendEvent(synchronizer, SOURCE_2, 0); 113 | 114 | sendEvent(synchronizer, SOURCE_1, 1); 115 | 116 | sendEvent(synchronizer, SOURCE_2, 1); 117 | 118 | sendEvent(synchronizer, SOURCE_1, 2); 119 | 120 | sendEvent(synchronizer, SOURCE_2, 2); 121 | 122 | List expectedEvents = new ArrayList<>(); 123 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 124 | expectedEvents.add(buildDummyEvent(SOURCE_1, 1)); 125 | expectedEvents.add(buildDummyEvent(SOURCE_1, 2)); 126 | 127 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 128 | } 129 | 130 | /* 131 | Source1: 0 1 2 132 | Source2: 0 1 2 133 | */ 134 | @Test 135 | public void testWithoutGaps2() { 136 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 1000); 137 | 138 | sendEvent(synchronizer, SOURCE_1, 0); 139 | sendEvent(synchronizer, SOURCE_1, 1); 140 | sendEvent(synchronizer, SOURCE_1, 2); 141 | 142 | sendEvent(synchronizer, SOURCE_2, 0); 143 | sendEvent(synchronizer, SOURCE_2, 1); 144 | sendEvent(synchronizer, SOURCE_2, 2); 145 | 146 | List expectedEvents = new ArrayList<>(); 147 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 148 | expectedEvents.add(buildDummyEvent(SOURCE_1, 1)); 149 | expectedEvents.add(buildDummyEvent(SOURCE_1, 2)); 150 | 151 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 152 | } 153 | 154 | /* 155 | Source1: 0 - - - 1 2 156 | Source2: 0 1 2 157 | */ 158 | @Test 159 | public void testWithoutGaps3() { 160 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 1000); 161 | 162 | sendEvent(synchronizer, SOURCE_1, 0); 163 | 164 | sendEvent(synchronizer, SOURCE_2, 0); 165 | sendEvent(synchronizer, SOURCE_2, 1); 166 | sendEvent(synchronizer, SOURCE_2, 2); 167 | 168 | sendEvent(synchronizer, SOURCE_1, 1); 169 | sendEvent(synchronizer, SOURCE_1, 2); 170 | 171 | List expectedEvents = new ArrayList<>(); 172 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 173 | expectedEvents.add(buildDummyEvent(SOURCE_2, 1)); 174 | expectedEvents.add(buildDummyEvent(SOURCE_2, 2)); 175 | 176 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 177 | } 178 | 179 | /* 180 | Source1: 0 2 - - - 181 | Source2: 0 1 2 182 | */ 183 | @Test 184 | public void testGapFiledByOtherSource() { 185 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 1000); 186 | 187 | sendEvent(synchronizer, SOURCE_1, 0); 188 | sendEvent(synchronizer, SOURCE_1, 2); 189 | 190 | sendEvent(synchronizer, SOURCE_2, 0); 191 | sendEvent(synchronizer, SOURCE_2, 1); 192 | sendEvent(synchronizer, SOURCE_2, 2); 193 | 194 | List expectedEvents = new ArrayList<>(); 195 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 196 | expectedEvents.add(buildDummyEvent(SOURCE_2, 1)); 197 | expectedEvents.add(buildDummyEvent(SOURCE_1, 2)); 198 | 199 | 
Assert.assertTrue(compareEventSequnce(expectedEvents)); 200 | } 201 | 202 | /* 203 | Source1: 0 4 - - - - 5 204 | Source2: 0 1 2 3 205 | */ 206 | @Test 207 | public void testMultiMessageGapFiledByOtherSource() throws InterruptedException { 208 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 209 | 210 | sendEvent(synchronizer, SOURCE_1, 0); 211 | sendEvent(synchronizer, SOURCE_1, 4); 212 | 213 | sendEvent(synchronizer, SOURCE_2, 0); 214 | sendEvent(synchronizer, SOURCE_2, 1); 215 | sendEvent(synchronizer, SOURCE_2, 2); 216 | sendEvent(synchronizer, SOURCE_2, 3); 217 | 218 | sendEvent(synchronizer, SOURCE_1, 5); 219 | 220 | List expectedEvents = new ArrayList<>(); 221 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 222 | expectedEvents.add(buildDummyEvent(SOURCE_2, 1)); 223 | expectedEvents.add(buildDummyEvent(SOURCE_2, 2)); 224 | expectedEvents.add(buildDummyEvent(SOURCE_2, 3)); 225 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 226 | expectedEvents.add(buildDummyEvent(SOURCE_1, 5)); 227 | 228 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 229 | } 230 | 231 | /* 232 | Source1: 0 4 - - 5 233 | Source2: 0 1 - 2 3 234 | */ 235 | @Test 236 | public void testMultiMessageGapFiledByOtherSource1() throws InterruptedException { 237 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 238 | 239 | sendEvent(synchronizer, SOURCE_1, 0); 240 | sendEvent(synchronizer, SOURCE_1, 4); 241 | 242 | sendEvent(synchronizer, SOURCE_2, 0); 243 | sendEvent(synchronizer, SOURCE_2, 1); 244 | 245 | sendEvent(synchronizer, SOURCE_1, 5); 246 | 247 | sendEvent(synchronizer, SOURCE_2, 2); 248 | sendEvent(synchronizer, SOURCE_2, 3); 249 | 250 | List expectedEvents = new ArrayList<>(); 251 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 252 | expectedEvents.add(buildDummyEvent(SOURCE_2, 1)); 253 | expectedEvents.add(buildDummyEvent(SOURCE_2, 2)); 254 | expectedEvents.add(buildDummyEvent(SOURCE_2, 3)); 255 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 256 | expectedEvents.add(buildDummyEvent(SOURCE_1, 5)); 257 | 258 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 259 | } 260 | 261 | /* 262 | Source1: 3 - - 4 263 | Source2: 0 1 - 2 3 264 | */ 265 | @Test 266 | public void testMultiMessageGapFiledByOtherSource2() throws InterruptedException { 267 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 268 | 269 | sendEvent(synchronizer, SOURCE_1, 3); 270 | 271 | sendEvent(synchronizer, SOURCE_2, 0); 272 | sendEvent(synchronizer, SOURCE_2, 1); 273 | 274 | sendEvent(synchronizer, SOURCE_1, 4); 275 | 276 | sendEvent(synchronizer, SOURCE_2, 2); 277 | sendEvent(synchronizer, SOURCE_2, 3); 278 | 279 | List expectedEvents = new ArrayList<>(); 280 | expectedEvents.add(buildDummyEvent(SOURCE_2, 0)); 281 | expectedEvents.add(buildDummyEvent(SOURCE_2, 1)); 282 | expectedEvents.add(buildDummyEvent(SOURCE_2, 2)); 283 | expectedEvents.add(buildDummyEvent(SOURCE_1, 3)); 284 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 285 | 286 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 287 | } 288 | 289 | 290 | /* 291 | Source1: 3 4 292 | Source2: 0 1 2 5 293 | */ 294 | @Test 295 | public void testMultiMessageGapFiledByOtherSource3() throws InterruptedException { 296 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 297 | 298 | sendEvent(synchronizer, SOURCE_1, 3); 299 | sendEvent(synchronizer, 
SOURCE_1, 4); 300 | 301 | sendEvent(synchronizer, SOURCE_2, 0); 302 | sendEvent(synchronizer, SOURCE_2, 1); 303 | sendEvent(synchronizer, SOURCE_2, 2); 304 | sendEvent(synchronizer, SOURCE_2, 5); 305 | 306 | List expectedEvents = new ArrayList<>(); 307 | expectedEvents.add(buildDummyEvent(SOURCE_2, 0)); 308 | expectedEvents.add(buildDummyEvent(SOURCE_2, 1)); 309 | expectedEvents.add(buildDummyEvent(SOURCE_2, 2)); 310 | expectedEvents.add(buildDummyEvent(SOURCE_1, 3)); 311 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 312 | expectedEvents.add(buildDummyEvent(SOURCE_2, 5)); 313 | 314 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 315 | } 316 | 317 | /* 318 | Source1: 0 1 4 - 5 319 | Source2: 0 1 - - - 6 320 | */ 321 | @Test 322 | public void testUnrecoverableGap() throws InterruptedException { 323 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 324 | 325 | sendEvent(synchronizer, SOURCE_1, 0); 326 | sendEvent(synchronizer, SOURCE_1, 1); 327 | sendEvent(synchronizer, SOURCE_1, 4); 328 | 329 | sendEvent(synchronizer, SOURCE_2, 0); 330 | sendEvent(synchronizer, SOURCE_2, 1); 331 | sendEvent(synchronizer, SOURCE_2, 6); 332 | 333 | sendEvent(synchronizer, SOURCE_1, 5); 334 | 335 | List expectedEvents = new ArrayList<>(); 336 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 337 | expectedEvents.add(buildDummyEvent(SOURCE_1, 1)); 338 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 339 | expectedEvents.add(buildDummyEvent(SOURCE_1, 5)); 340 | expectedEvents.add(buildDummyEvent(SOURCE_2, 6)); 341 | 342 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 343 | } 344 | 345 | /* 346 | Source1: 0 1 4 - 5 8 9 347 | Source2: 0 1 - 8 348 | */ 349 | @Test 350 | public void testTwoUnrecoverableGaps() throws InterruptedException { 351 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 352 | 353 | sendEvent(synchronizer, SOURCE_1, 0); 354 | sendEvent(synchronizer, SOURCE_1, 1); 355 | sendEvent(synchronizer, SOURCE_1, 4); 356 | 357 | sendEvent(synchronizer, SOURCE_2, 0); 358 | sendEvent(synchronizer, SOURCE_2, 1); 359 | sendEvent(synchronizer, SOURCE_2, 8); 360 | 361 | sendEvent(synchronizer, SOURCE_1, 5); 362 | sendEvent(synchronizer, SOURCE_1, 8); 363 | sendEvent(synchronizer, SOURCE_1, 9); 364 | 365 | List expectedEvents = new ArrayList<>(); 366 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 367 | expectedEvents.add(buildDummyEvent(SOURCE_1, 1)); 368 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 369 | expectedEvents.add(buildDummyEvent(SOURCE_1, 5)); 370 | expectedEvents.add(buildDummyEvent(SOURCE_1, 8)); 371 | expectedEvents.add(buildDummyEvent(SOURCE_1, 9)); 372 | 373 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 374 | } 375 | 376 | /* 377 | Source1: 0 1 4 - 5 8 9 - - 12 378 | Source2: 0 1 - 8 - - 10 11 379 | */ 380 | @Test 381 | public void testMultipleoUnrecoverableGaps() throws InterruptedException { 382 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 383 | 384 | sendEvent(synchronizer, SOURCE_1, 0); 385 | sendEvent(synchronizer, SOURCE_1, 1); 386 | sendEvent(synchronizer, SOURCE_1, 4); 387 | 388 | sendEvent(synchronizer, SOURCE_2, 0); 389 | sendEvent(synchronizer, SOURCE_2, 1); 390 | sendEvent(synchronizer, SOURCE_2, 8); 391 | 392 | sendEvent(synchronizer, SOURCE_1, 5); 393 | sendEvent(synchronizer, SOURCE_1, 8); 394 | sendEvent(synchronizer, SOURCE_1, 9); 395 | 396 | sendEvent(synchronizer, 
SOURCE_2, 10); 397 | sendEvent(synchronizer, SOURCE_2, 11); 398 | 399 | sendEvent(synchronizer, SOURCE_1, 12); 400 | 401 | List expectedEvents = new ArrayList<>(); 402 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 403 | expectedEvents.add(buildDummyEvent(SOURCE_1, 1)); 404 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 405 | expectedEvents.add(buildDummyEvent(SOURCE_1, 5)); 406 | expectedEvents.add(buildDummyEvent(SOURCE_1, 8)); 407 | expectedEvents.add(buildDummyEvent(SOURCE_1, 9)); 408 | expectedEvents.add(buildDummyEvent(SOURCE_2, 10)); 409 | expectedEvents.add(buildDummyEvent(SOURCE_2, 11)); 410 | expectedEvents.add(buildDummyEvent(SOURCE_1, 12)); 411 | 412 | 413 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 414 | } 415 | 416 | /* 417 | Source1: 0 1 4 - 5 8 9 418 | Source2: 0 1 - 8 11 12 419 | */ 420 | @Test 421 | public void testTwoUnrecoverableGapFlushedWithTimer() throws InterruptedException { 422 | SourceSynchronizer synchronizer = new SourceSynchronizer(eventListener, servers, 1000, 10 * 1000); 423 | 424 | sendEvent(synchronizer, SOURCE_1, 0); 425 | sendEvent(synchronizer, SOURCE_1, 1); 426 | sendEvent(synchronizer, SOURCE_1, 4); 427 | 428 | sendEvent(synchronizer, SOURCE_2, 0); 429 | sendEvent(synchronizer, SOURCE_2, 1); 430 | sendEvent(synchronizer, SOURCE_2, 8); 431 | 432 | sendEvent(synchronizer, SOURCE_1, 5); 433 | sendEvent(synchronizer, SOURCE_1, 8); 434 | sendEvent(synchronizer, SOURCE_1, 9); 435 | 436 | sendEvent(synchronizer, SOURCE_2, 11); 437 | sendEvent(synchronizer, SOURCE_2, 12); 438 | 439 | // Since no events are received from source1, it will wait for a tolerance period expecting the events to come. 440 | // When the time expires it will forcefully flush the events. 441 | Thread.sleep(15 * 1000); 442 | 443 | List expectedEvents = new ArrayList<>(); 444 | expectedEvents.add(buildDummyEvent(SOURCE_1, 0)); 445 | expectedEvents.add(buildDummyEvent(SOURCE_1, 1)); 446 | expectedEvents.add(buildDummyEvent(SOURCE_1, 4)); 447 | expectedEvents.add(buildDummyEvent(SOURCE_1, 5)); 448 | expectedEvents.add(buildDummyEvent(SOURCE_1, 8)); 449 | expectedEvents.add(buildDummyEvent(SOURCE_1, 9)); 450 | expectedEvents.add(buildDummyEvent(SOURCE_2, 11)); 451 | expectedEvents.add(buildDummyEvent(SOURCE_2, 12)); 452 | 453 | Assert.assertTrue(compareEventSequnce(expectedEvents)); 454 | } 455 | 456 | } 457 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/multidc/KafkaMultiDCSourceTestCases.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 
17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.multidc; 20 | 21 | import io.siddhi.core.SiddhiAppRuntime; 22 | import io.siddhi.core.SiddhiManager; 23 | import io.siddhi.core.event.Event; 24 | import io.siddhi.core.stream.input.InputHandler; 25 | import io.siddhi.core.stream.output.StreamCallback; 26 | import io.siddhi.extension.io.kafka.KafkaTestUtil; 27 | import org.apache.logging.log4j.LogManager; 28 | import org.apache.logging.log4j.Logger; 29 | import org.junit.Assert; 30 | import org.testng.annotations.AfterClass; 31 | import org.testng.annotations.BeforeClass; 32 | import org.testng.annotations.BeforeMethod; 33 | import org.testng.annotations.Test; 34 | 35 | import java.rmi.RemoteException; 36 | import java.util.ArrayList; 37 | import java.util.List; 38 | import java.util.concurrent.ExecutorService; 39 | import java.util.concurrent.Executors; 40 | 41 | /** 42 | * Class implementing the Test cases for Sequenced Messaging. 43 | */ 44 | public class KafkaMultiDCSourceTestCases { 45 | private static final Logger LOG = LogManager.getLogger(KafkaMultiDCSourceTestCases.class); 46 | private static ExecutorService executorService; 47 | private volatile int count; 48 | private volatile boolean eventArrived; 49 | private volatile List receivedEventNameList; 50 | private volatile List receivedValueList; 51 | 52 | @BeforeClass 53 | public static void init() throws Exception { 54 | try { 55 | executorService = Executors.newFixedThreadPool(5); 56 | KafkaTestUtil.cleanLogDir(); 57 | KafkaTestUtil.setupKafkaBroker(); 58 | Thread.sleep(1000); 59 | KafkaTestUtil.cleanLogDir2(); 60 | KafkaTestUtil.setupKafkaBroker2(); 61 | Thread.sleep(1000); 62 | } catch (Exception e) { 63 | throw new RemoteException("Exception caught when starting server", e); 64 | } 65 | } 66 | 67 | @AfterClass 68 | public static void stopKafkaBroker() throws InterruptedException { 69 | KafkaTestUtil.stopKafkaBroker(); 70 | Thread.sleep(1000); 71 | KafkaTestUtil.stopKafkaBroker2(); 72 | Thread.sleep(1000); 73 | while (!executorService.isShutdown() || !executorService.isTerminated()) { 74 | executorService.shutdown(); 75 | } 76 | } 77 | 78 | @BeforeMethod 79 | public void reset() { 80 | count = 0; 81 | eventArrived = false; 82 | } 83 | 84 | @Test 85 | public void testMultiDCSourceWithBothBrokersRunning() throws InterruptedException { 86 | LOG.info("Creating test for publishing events for static topic without a partition"); 87 | String topics[] = new String[]{"myTopic"}; 88 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER_CON_STRING, topics, 1); 89 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER2_CON_STRING, topics, 1); 90 | Thread.sleep(4000); 91 | receivedEventNameList = new ArrayList<>(3); 92 | receivedValueList = new ArrayList<>(3); 93 | 94 | SiddhiManager sourceOneSiddhiManager = new SiddhiManager(); 95 | SiddhiAppRuntime sourceOneApp = sourceOneSiddhiManager.createSiddhiAppRuntime( 96 | "@App:name('SourceOneSiddhiApp') " + 97 | "define stream BarStream2 (symbol string, price float, volume long); " + 98 | "@info(name = 'query1') " + 99 | "@source(type='kafkaMultiDC', " + 100 | "topic='myTopic', " + 101 | "partition='0', " + 102 | "bootstrap.servers='localhost:9092,localhost:9093'," + 103 | "@map(type='xml'))" + 104 | "Define stream FooStream2 (symbol string, price float, volume long);" + 105 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 106 | 107 | sourceOneApp.addCallback("BarStream2", new StreamCallback() { 108 | @Override 109 | public synchronized void receive(Event[] events) { 110 | 
for (Event event : events) { 111 | LOG.info(event); 112 | eventArrived = true; 113 | count++; 114 | receivedEventNameList.add(event.getData(0).toString()); 115 | receivedValueList.add((long) event.getData(2)); 116 | } 117 | } 118 | }); 119 | sourceOneApp.start(); 120 | Thread.sleep(4000); 121 | 122 | 123 | String sinkApp = "@App:name('SinkSiddhiApp') \n" 124 | + "define stream FooStream (symbol string, price float, volume long); \n" 125 | + "@info(name = 'query1') \n" 126 | + "@sink(" 127 | + "type='kafkaMultiDC', " 128 | + "topic='myTopic', " 129 | + "partition='0'," 130 | + "bootstrap.servers='localhost:9092,localhost:9093', " 131 | + "@map(type='xml'))" + 132 | "Define stream BarStream (symbol string, price float, volume long);\n" + 133 | "from FooStream select symbol, price, volume insert into BarStream;\n"; 134 | 135 | SiddhiManager siddhiManager = new SiddhiManager(); 136 | SiddhiAppRuntime siddhiAppRuntimeSink = siddhiManager.createSiddhiAppRuntime(sinkApp); 137 | InputHandler fooStream = siddhiAppRuntimeSink.getInputHandler("BarStream"); 138 | siddhiAppRuntimeSink.start(); 139 | Thread.sleep(4000); 140 | fooStream.send(new Object[]{"WSO2", 55.6f, 100L}); 141 | fooStream.send(new Object[]{"WSO2", 75.6f, 102L}); 142 | fooStream.send(new Object[]{"WSO2", 57.6f, 103L}); 143 | Thread.sleep(4000); 144 | 145 | Assert.assertTrue(count == 3); 146 | } 147 | 148 | @Test(description = "Test the scenario: MultiDC sink and source with binary mapper") 149 | public void testMultiDCSourceWithBothBrokersRunningUsingBinaryMapper() throws InterruptedException { 150 | LOG.info("Creating test for publishing events for static topic without a partition"); 151 | String topics[] = new String[]{"myTopic2"}; 152 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER_CON_STRING, topics, 1); 153 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER2_CON_STRING, topics, 1); 154 | Thread.sleep(4000); 155 | receivedEventNameList = new ArrayList<>(3); 156 | receivedValueList = new ArrayList<>(3); 157 | 158 | SiddhiManager sourceOneSiddhiManager = new SiddhiManager(); 159 | SiddhiAppRuntime sourceOneApp = sourceOneSiddhiManager.createSiddhiAppRuntime( 160 | "@App:name('SourceOneSiddhiApp') " + 161 | "define stream BarStream2 (symbol string, price float, volume long); " + 162 | "@info(name = 'query1') " + 163 | "@source(type='kafkaMultiDC', " + 164 | "topic='myTopic2', " + 165 | "partition='0', " + 166 | "is.binary.message='true'," + 167 | "bootstrap.servers='localhost:9092,localhost:9093'," + 168 | "@map(type='binary'))" + 169 | "Define stream FooStream2 (symbol string, price float, volume long);" + 170 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 171 | 172 | sourceOneApp.addCallback("BarStream2", new StreamCallback() { 173 | @Override 174 | public synchronized void receive(Event[] events) { 175 | for (Event event : events) { 176 | LOG.info(event); 177 | eventArrived = true; 178 | count++; 179 | receivedEventNameList.add(event.getData(0).toString()); 180 | receivedValueList.add((long) event.getData(2)); 181 | } 182 | } 183 | }); 184 | sourceOneApp.start(); 185 | Thread.sleep(4000); 186 | 187 | 188 | String sinkApp = "@App:name('SinkSiddhiApp') \n" 189 | + "define stream FooStream (symbol string, price float, volume long); \n" 190 | + "@info(name = 'query1') \n" 191 | + "@sink(" 192 | + "type='kafkaMultiDC', " 193 | + "topic='myTopic2', " 194 | + "is.binary.message='true'," 195 | + "partition='0'," 196 | + "bootstrap.servers='localhost:9092,localhost:9093', " 197 | + "@map(type='binary'))" + 
198 | "Define stream BarStream (symbol string, price float, volume long);\n" + 199 | "from FooStream select symbol, price, volume insert into BarStream;\n"; 200 | 201 | SiddhiManager siddhiManager = new SiddhiManager(); 202 | 203 | SiddhiAppRuntime siddhiAppRuntimeSink = siddhiManager.createSiddhiAppRuntime(sinkApp); 204 | InputHandler fooStream = siddhiAppRuntimeSink.getInputHandler("BarStream"); 205 | siddhiAppRuntimeSink.start(); 206 | Thread.sleep(4000); 207 | fooStream.send(new Object[]{"WSO2", 55.6f, 100L}); 208 | fooStream.send(new Object[]{"WSO2", 75.6f, 102L}); 209 | fooStream.send(new Object[]{"WSO2", 57.6f, 103L}); 210 | Thread.sleep(4000); 211 | 212 | Assert.assertTrue(count == 3); 213 | } 214 | 215 | @Test(description = "Test the scenario: Send and Received event via multiDC sink and source as byte stream " 216 | + "using xml mapper") 217 | public void testMultiDCSourceWithBothBrokersRunningUsingXmlMapper() throws InterruptedException { 218 | LOG.info("Creating test for publishing events for static topic without a partition"); 219 | String topics[] = new String[]{"myTopic3"}; 220 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER_CON_STRING, topics, 1); 221 | KafkaTestUtil.createTopic(KafkaTestUtil.ZK_SERVER2_CON_STRING, topics, 1); 222 | Thread.sleep(4000); 223 | receivedEventNameList = new ArrayList<>(3); 224 | receivedValueList = new ArrayList<>(3); 225 | 226 | SiddhiManager sourceOneSiddhiManager = new SiddhiManager(); 227 | SiddhiAppRuntime sourceOneApp = sourceOneSiddhiManager.createSiddhiAppRuntime( 228 | "@App:name('SourceOneSiddhiApp') " + 229 | "define stream BarStream2 (symbol string, price float, volume long); " + 230 | "@info(name = 'query1') " + 231 | "@source(type='kafkaMultiDC', " + 232 | "topic='myTopic3', " + 233 | "partition='0', " + 234 | "is.binary.message='true'," + 235 | "bootstrap.servers='localhost:9092,localhost:9093'," + 236 | "@map(type='xml'))" + 237 | "Define stream FooStream2 (symbol string, price float, volume long);" + 238 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 239 | 240 | sourceOneApp.addCallback("BarStream2", new StreamCallback() { 241 | @Override 242 | public synchronized void receive(Event[] events) { 243 | for (Event event : events) { 244 | LOG.info(event); 245 | eventArrived = true; 246 | count++; 247 | receivedEventNameList.add(event.getData(0).toString()); 248 | receivedValueList.add((long) event.getData(2)); 249 | } 250 | } 251 | }); 252 | sourceOneApp.start(); 253 | Thread.sleep(4000); 254 | 255 | 256 | String sinkApp = "@App:name('SinkSiddhiApp') \n" 257 | + "define stream FooStream (symbol string, price float, volume long); \n" 258 | + "@info(name = 'query1') \n" 259 | + "@sink(" 260 | + "type='kafkaMultiDC', " 261 | + "topic='myTopic3', " 262 | + "is.binary.message='true'," 263 | + "partition='0'," 264 | + "bootstrap.servers='localhost:9092,localhost:9093', " 265 | + "@map(type='xml'))" + 266 | "Define stream BarStream (symbol string, price float, volume long);\n" + 267 | "from FooStream select symbol, price, volume insert into BarStream;\n"; 268 | 269 | SiddhiManager siddhiManager = new SiddhiManager(); 270 | 271 | SiddhiAppRuntime siddhiAppRuntimeSink = siddhiManager.createSiddhiAppRuntime(sinkApp); 272 | InputHandler fooStream = siddhiAppRuntimeSink.getInputHandler("BarStream"); 273 | siddhiAppRuntimeSink.start(); 274 | Thread.sleep(4000); 275 | fooStream.send(new Object[]{"WSO2", 55.6f, 100L}); 276 | fooStream.send(new Object[]{"WSO2", 75.6f, 102L}); 277 | fooStream.send(new Object[]{"WSO2", 
57.6f, 103L}); 278 | Thread.sleep(4000); 279 | 280 | Assert.assertTrue(count == 3); 281 | } 282 | } 283 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/sink/ErrorHandlingTestCase.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2021, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.sink; 20 | 21 | import io.siddhi.core.SiddhiAppRuntime; 22 | import io.siddhi.core.SiddhiManager; 23 | import io.siddhi.core.event.Event; 24 | import io.siddhi.core.stream.input.InputHandler; 25 | import io.siddhi.core.stream.output.StreamCallback; 26 | import org.apache.logging.log4j.LogManager; 27 | import org.apache.logging.log4j.Logger; 28 | import org.testng.AssertJUnit; 29 | import org.testng.annotations.BeforeMethod; 30 | import org.testng.annotations.Test; 31 | 32 | public class ErrorHandlingTestCase { 33 | private static final Logger LOG = LogManager.getLogger(ErrorHandlingTestCase.class); 34 | private volatile int count; 35 | private volatile boolean eventArrived; 36 | 37 | @BeforeMethod 38 | public void initClassVariables() { 39 | count = 0; 40 | eventArrived = false; 41 | } 42 | 43 | @Test 44 | public void testErrorStream() throws InterruptedException { 45 | LOG.info("Sending messages to error stream when the broker is not available"); 46 | SiddhiManager siddhiManager = new SiddhiManager(); 47 | SiddhiAppRuntime siddhiAppRuntimeSource = siddhiManager.createSiddhiAppRuntime( 48 | "@App:name('ErrorHandlerApp') " + 49 | "define stream inputStream (symbol string, price float, volume long); " + 50 | "define stream kafkaErrorStream (symbol string, price float, volume long); " + 51 | "@info(name = 'query1') " + 52 | "@OnError(action='STREAM')" + 53 | "@sink(type='kafka', topic='single_topic', is.synchronous='true', " + 54 | "on.error='STREAM', " + 55 | "bootstrap.servers='localhost:9092',\n" + 56 | "optional.configuration='retry.backoff.ms:1,metadata.fetch.timeout.ms:10," + 57 | "request.timeout.ms:5," + 58 | "timeout.ms:10', " + 59 | "@map(type='xml'))" + 60 | "Define stream kafkaSinkStream (symbol string, price float, volume long);" + 61 | "from inputStream select symbol, price, volume insert into kafkaSinkStream;" + 62 | "from !kafkaSinkStream select symbol, price, volume insert into kafkaErrorStream;"); 63 | siddhiAppRuntimeSource.addCallback("kafkaErrorStream", new StreamCallback() { 64 | @Override 65 | public void receive(Event[] events) { 66 | for (Event event : events) { 67 | LOG.info(event); 68 | eventArrived = true; 69 | count++; 70 | } 71 | } 72 | }); 73 | siddhiAppRuntimeSource.start(); 74 | InputHandler fooStream = siddhiAppRuntimeSource.getInputHandler("inputStream"); 75 | fooStream.send(new Object[]{"single_topic", 55.6f, 100L}); 76 | Thread.sleep(1000); 77 | 
AssertJUnit.assertTrue(eventArrived); 78 | AssertJUnit.assertEquals(1, count); 79 | siddhiAppRuntimeSource.shutdown(); 80 | } 81 | } 82 | -------------------------------------------------------------------------------- /component/src/test/java/io/siddhi/extension/io/kafka/sink/KafkaSinkwithBinaryMapperTestCase.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2017, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 17 | */ 18 | 19 | package io.siddhi.extension.io.kafka.sink; 20 | 21 | import io.siddhi.core.SiddhiAppRuntime; 22 | import io.siddhi.core.SiddhiManager; 23 | import io.siddhi.core.event.Event; 24 | import io.siddhi.core.stream.input.InputHandler; 25 | import io.siddhi.core.stream.output.StreamCallback; 26 | import io.siddhi.core.util.EventPrinter; 27 | import io.siddhi.extension.io.kafka.KafkaTestUtil; 28 | import org.I0Itec.zkclient.exception.ZkTimeoutException; 29 | import org.apache.logging.log4j.LogManager; 30 | import org.apache.logging.log4j.Logger; 31 | import org.testng.AssertJUnit; 32 | import org.testng.annotations.AfterClass; 33 | import org.testng.annotations.BeforeClass; 34 | import org.testng.annotations.BeforeMethod; 35 | import org.testng.annotations.Test; 36 | 37 | import java.rmi.RemoteException; 38 | import java.util.ArrayList; 39 | import java.util.List; 40 | 41 | /** 42 | * Test Class Implementing send message via binary mapping. 
43 | */ 44 | public class KafkaSinkwithBinaryMapperTestCase { 45 | private static final Logger LOG = LogManager.getLogger(KafkaSinkwithBinaryMapperTestCase.class); 46 | private volatile int count; 47 | private volatile boolean eventArrived; 48 | private volatile List receivedEventNameList; 49 | private volatile List receivedValueList; 50 | 51 | @BeforeClass 52 | public static void init() throws Exception { 53 | try { 54 | KafkaTestUtil.cleanLogDir(); 55 | KafkaTestUtil.setupKafkaBroker(); 56 | Thread.sleep(1000); 57 | } catch (Exception e) { 58 | throw new RemoteException("Exception caught when starting server", e); 59 | } 60 | } 61 | 62 | @AfterClass 63 | public static void stopKafkaBroker() { 64 | KafkaTestUtil.stopKafkaBroker(); 65 | } 66 | 67 | @BeforeMethod 68 | public void init2() { 69 | count = 0; 70 | eventArrived = false; 71 | } 72 | 73 | @Test 74 | public void testPublisherUsingBinaryMapper() throws InterruptedException { 75 | LOG.info("Creating test for publishing events using binary mapper."); 76 | String topics[] = new String[]{"single_topic"}; 77 | KafkaTestUtil.createTopic(topics, 1); 78 | receivedEventNameList = new ArrayList<>(3); 79 | receivedValueList = new ArrayList<>(3); 80 | try { 81 | SiddhiManager siddhiManager = new SiddhiManager(); 82 | SiddhiAppRuntime siddhiAppRuntimeSource = siddhiManager.createSiddhiAppRuntime( 83 | "@App:name('TestExecutionPlan1') " + 84 | "define stream BarStream2 (symbol string, price float, volume long); " + 85 | "@info(name = 'query1') " + 86 | "@source(type='kafka', topic.list='single_topic', group.id='single_topic_test', " + 87 | "threading.option='single.thread', bootstrap.servers='localhost:9092', " + 88 | "is.binary.message='true'," + 89 | "@map(type='binary'))" + 90 | "Define stream FooStream2 (symbol string, price float, volume long);" + 91 | "from FooStream2 select symbol, price, volume insert into BarStream2;"); 92 | siddhiAppRuntimeSource.addCallback("BarStream2", new StreamCallback() { 93 | @Override 94 | public void receive(Event[] events) { 95 | EventPrinter.print(events); 96 | for (Event event : events) { 97 | LOG.info(event); 98 | eventArrived = true; 99 | count++; 100 | receivedEventNameList.add(event.getData(0).toString()); 101 | receivedValueList.add((long) event.getData(2)); 102 | } 103 | } 104 | }); 105 | siddhiAppRuntimeSource.start(); 106 | SiddhiAppRuntime siddhiAppRuntime = siddhiManager.createSiddhiAppRuntime( 107 | "@App:name('TestExecutionPlan') " + 108 | "define stream FooStream (symbol string, price float, volume long); " + 109 | "@info(name = 'query1') " + 110 | "@sink(type='kafka', topic='single_topic', bootstrap.servers='localhost:9092', " + 111 | "is.binary.message = 'true'," + 112 | "@map(type='binary'))" + 113 | "Define stream BarStream (symbol string, price float, volume long);" + 114 | "from FooStream select symbol, price, volume insert into BarStream;"); 115 | InputHandler fooStream = siddhiAppRuntime.getInputHandler("FooStream"); 116 | siddhiAppRuntime.start(); 117 | fooStream.send(new Object[]{"single_topic", 55.6f, 100L}); 118 | fooStream.send(new Object[]{"single_topic2", 75.6f, 102L}); 119 | fooStream.send(new Object[]{"single_topic3", 57.6f, 103L}); 120 | Thread.sleep(2000); 121 | List expectedNames = new ArrayList<>(2); 122 | expectedNames.add("single_topic"); 123 | expectedNames.add("single_topic2"); 124 | expectedNames.add("single_topic3"); 125 | List expectedValues = new ArrayList<>(2); 126 | expectedValues.add(100L); 127 | expectedValues.add(102L); 128 | expectedValues.add(103L); 129 | 
AssertJUnit.assertEquals("Kafka Sink did not publish the expected events", expectedNames, 130 | receivedEventNameList); 131 | AssertJUnit.assertEquals("Kafka Sink did not publish the expected events", expectedValues, receivedValueList); 132 | AssertJUnit.assertEquals(3, count); 133 | KafkaTestUtil.deleteTopic(topics); 134 | siddhiAppRuntime.shutdown(); 135 | siddhiAppRuntimeSource.shutdown(); 136 | } catch (ZkTimeoutException ex) { 137 | LOG.warn("Zookeeper may not be available.", ex); 138 | } 139 | } 140 | } 141 | -------------------------------------------------------------------------------- /component/src/test/resources/log4j.properties: -------------------------------------------------------------------------------- 1 | # 2 | # Copyright (c) 2016, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | # 4 | # WSO2 Inc. licenses this file to you under the Apache License, 5 | # Version 2.0 (the "License"); you may not use this file except 6 | # in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, 12 | # software distributed under the License is distributed on an 13 | # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | # KIND, either express or implied. See the License for the 15 | # specific language governing permissions and limitations 16 | # under the License. 17 | # 18 | # For the general syntax of property based configuration files see the 19 | # documentation of org.apache.log4j.PropertyConfigurator. 20 | # The root category uses the appender called A1. Since no priority is 21 | # specified, the root category assumes the default priority for root 22 | # which is DEBUG in log4j. The root category is the only category that 23 | # has a default priority. All other categories need not be assigned a 24 | # priority in which case they inherit their priority from the 25 | # hierarchy. 26 | #log4j.rootLogger=DEBUG, stdout 27 | log4j.rootLogger=ERROR, stdout 28 | log4j.logger.io.siddhi.extension.io.kafka=DEBUG 29 | log4j.appender.stdout=org.apache.log4j.ConsoleAppender 30 | log4j.appender.stdout.layout=org.apache.log4j.PatternLayout 31 | log4j.appender.stdout.layout.ConversionPattern=[%t] %-5p %c %x - %m%n 32 | log4j.logger.org.apache.zookeeper=ERROR, stdout 33 | log4j.logger.kafka.consumer=ERROR, stdout 34 | log4j.logger.kafka.utils=ERROR, stdout 35 | log4j.logger.org.I0Itec.zkclient=ERROR, stdout 36 | log4j.logger.org.apache.zookeeper.ZooKeeper=ERROR, stdout 37 | log4j.logger.kafka.producer=ERROR, stdout 38 | -------------------------------------------------------------------------------- /component/src/test/resources/log4j2.xml: -------------------------------------------------------------------------------- [XML markup not captured in this dump] -------------------------------------------------------------------------------- /component/src/test/resources/testng.xml: -------------------------------------------------------------------------------- [XML markup not captured in this dump] -------------------------------------------------------------------------------- /docs/assets/javascripts/extra.js: -------------------------------------------------------------------------------- 1 | /* 2 | ~ Copyright (c) WSO2 Inc. (http://wso2.com) All Rights Reserved. 
3 | ~ 4 | ~ Licensed under the Apache License, Version 2.0 (the "License"); 5 | ~ you may not use this file except in compliance with the License. 6 | ~ You may obtain a copy of the License at 7 | ~ 8 | ~ http://www.apache.org/licenses/LICENSE-2.0 9 | ~ 10 | ~ Unless required by applicable law or agreed to in writing, software 11 | ~ distributed under the License is distributed on an "AS IS" BASIS, 12 | ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | ~ See the License for the specific language governing permissions and 14 | ~ limitations under the License. 15 | */ 16 | 17 | var logo = document.querySelector('.md-logo'); 18 | var logoTitle = logo.title; 19 | logo.setAttribute('href', 'https://siddhi.io/') 20 | 21 | var header = document.querySelector('.md-header-nav__title'); 22 | var headerContent = document.querySelectorAll('.md-header-nav__title span')[1].textContent; 23 | var url = document.querySelector('.md-nav__item a.md-nav__link').href 24 | header.innerHTML = '' + logoTitle + '' + 25 | '' + headerContent + '' 26 | 27 | 28 | /* 29 | * TOC position highlight on scroll 30 | */ 31 | 32 | var observeeList = document.querySelectorAll(".md-sidebar__inner > .md-nav--secondary .md-nav__link"); 33 | var listElems = document.querySelectorAll(".md-sidebar__inner > .md-nav--secondary > ul li"); 34 | var config = {attributes: true, childList: true, subtree: true}; 35 | 36 | var callback = function (mutationsList, observer) { 37 | for (var mutation of mutationsList) { 38 | if (mutation.type == 'attributes') { 39 | mutation.target.parentNode.setAttribute(mutation.attributeName, 40 | mutation.target.getAttribute(mutation.attributeName)); 41 | scrollerPosition(mutation); 42 | } 43 | } 44 | }; 45 | var observer = new MutationObserver(callback); 46 | 47 | listElems[0].classList.add('active'); 48 | 49 | for (var i = 0; i < observeeList.length; i++) { 50 | var el = observeeList[i]; 51 | 52 | observer.observe(el, config); 53 | 54 | el.onclick = function (e) { 55 | listElems.forEach(function (elm) { 56 | if (elm.classList) { 57 | elm.classList.remove('active'); 58 | } 59 | }); 60 | 61 | e.target.parentNode.classList.add('active'); 62 | } 63 | } 64 | 65 | function scrollerPosition(mutation) { 66 | var blurList = document.querySelectorAll(".md-sidebar__inner > .md-nav--secondary > ul li > .md-nav__link[data-md-state='blur']"); 67 | 68 | listElems.forEach(function (el) { 69 | if (el.classList) { 70 | el.classList.remove('active'); 71 | } 72 | }); 73 | 74 | if (blurList.length > 0) { 75 | if (mutation.target.getAttribute('data-md-state') === 'blur') { 76 | if (mutation.target.parentNode.querySelector('ul li')) { 77 | mutation.target.parentNode.querySelector('ul li').classList.add('active'); 78 | } else { 79 | setActive(mutation.target.parentNode); 80 | } 81 | } else { 82 | mutation.target.parentNode.classList.add('active'); 83 | } 84 | } else { 85 | if (listElems.length > 0) { 86 | listElems[0].classList.add('active'); 87 | } 88 | } 89 | } 90 | 91 | function setActive(parentNode, i) { 92 | i = i || 0; 93 | if (i === 5) { 94 | return; 95 | } 96 | if (parentNode.nextElementSibling) { 97 | parentNode.nextElementSibling.classList.add('active'); 98 | return; 99 | } 100 | setActive(parentNode.parentNode.parentNode.parentNode, ++i); 101 | } 102 | -------------------------------------------------------------------------------- /docs/assets/lib/backtotop/img/cd-top-arrow.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 6 | 7 | 8 | 
-------------------------------------------------------------------------------- /docs/assets/lib/backtotop/js/main.js: -------------------------------------------------------------------------------- 1 | (function(){ 2 | // Back to Top - by CodyHouse.co 3 | var backTop = document.getElementsByClassName('js-cd-top')[0], 4 | offset = 300, // browser window scroll (in pixels) after which the "back to top" link is shown 5 | offsetOpacity = 1200, //browser window scroll (in pixels) after which the "back to top" link opacity is reduced 6 | scrollDuration = 700, 7 | scrolling = false; 8 | 9 | if( backTop ) { 10 | //update back to top visibility on scrolling 11 | window.addEventListener("scroll", function(event) { 12 | if( !scrolling ) { 13 | scrolling = true; 14 | (!window.requestAnimationFrame) ? setTimeout(checkBackToTop, 250) : window.requestAnimationFrame(checkBackToTop); 15 | } 16 | }); 17 | 18 | //smooth scroll to top 19 | backTop.addEventListener('click', function(event) { 20 | event.preventDefault(); 21 | (!window.requestAnimationFrame) ? window.scrollTo(0, 0) : Util.scrollTo(0, scrollDuration); 22 | }); 23 | } 24 | 25 | function checkBackToTop() { 26 | var windowTop = window.scrollY || document.documentElement.scrollTop; 27 | ( windowTop > offset ) ? Util.addClass(backTop, 'cd-top--is-visible') : Util.removeClass(backTop, 'cd-top--is-visible cd-top--fade-out'); 28 | ( windowTop > offsetOpacity ) && Util.addClass(backTop, 'cd-top--fade-out'); 29 | scrolling = false; 30 | } 31 | })(); -------------------------------------------------------------------------------- /docs/assets/lib/backtotop/js/util.js: -------------------------------------------------------------------------------- 1 | // Utility function 2 | function Util () {}; 3 | 4 | /* 5 | class manipulation functions 6 | */ 7 | Util.hasClass = function(el, className) { 8 | if (el.classList) return el.classList.contains(className); 9 | else return !!el.className.match(new RegExp('(\\s|^)' + className + '(\\s|$)')); 10 | }; 11 | 12 | Util.addClass = function(el, className) { 13 | var classList = className.split(' '); 14 | if (el.classList) el.classList.add(classList[0]); 15 | else if (!Util.hasClass(el, classList[0])) el.className += " " + classList[0]; 16 | if (classList.length > 1) Util.addClass(el, classList.slice(1).join(' ')); 17 | }; 18 | 19 | Util.removeClass = function(el, className) { 20 | var classList = className.split(' '); 21 | if (el.classList) el.classList.remove(classList[0]); 22 | else if(Util.hasClass(el, classList[0])) { 23 | var reg = new RegExp('(\\s|^)' + classList[0] + '(\\s|$)'); 24 | el.className=el.className.replace(reg, ' '); 25 | } 26 | if (classList.length > 1) Util.removeClass(el, classList.slice(1).join(' ')); 27 | }; 28 | 29 | Util.toggleClass = function(el, className, bool) { 30 | if(bool) Util.addClass(el, className); 31 | else Util.removeClass(el, className); 32 | }; 33 | 34 | Util.setAttributes = function(el, attrs) { 35 | for(var key in attrs) { 36 | el.setAttribute(key, attrs[key]); 37 | } 38 | }; 39 | 40 | /* 41 | DOM manipulation 42 | */ 43 | Util.getChildrenByClassName = function(el, className) { 44 | var children = el.children, 45 | childrenByClass = []; 46 | for (var i = 0; i < el.children.length; i++) { 47 | if (Util.hasClass(el.children[i], className)) childrenByClass.push(el.children[i]); 48 | } 49 | return childrenByClass; 50 | }; 51 | 52 | /* 53 | Animate height of an element 54 | */ 55 | Util.setHeight = function(start, to, element, duration, cb) { 56 | var change = to - start, 57 | 
currentTime = null; 58 | 59 | var animateHeight = function(timestamp){ 60 | if (!currentTime) currentTime = timestamp; 61 | var progress = timestamp - currentTime; 62 | var val = parseInt((progress/duration)*change + start); 63 | element.setAttribute("style", "height:"+val+"px;"); 64 | if(progress < duration) { 65 | window.requestAnimationFrame(animateHeight); 66 | } else { 67 | cb(); 68 | } 69 | }; 70 | 71 | //set the height of the element before starting animation -> fix bug on Safari 72 | element.setAttribute("style", "height:"+start+"px;"); 73 | window.requestAnimationFrame(animateHeight); 74 | }; 75 | 76 | /* 77 | Smooth Scroll 78 | */ 79 | 80 | Util.scrollTo = function(final, duration, cb) { 81 | var start = window.scrollY || document.documentElement.scrollTop, 82 | currentTime = null; 83 | 84 | var animateScroll = function(timestamp){ 85 | if (!currentTime) currentTime = timestamp; 86 | var progress = timestamp - currentTime; 87 | if(progress > duration) progress = duration; 88 | var val = Math.easeInOutQuad(progress, start, final-start, duration); 89 | window.scrollTo(0, val); 90 | if(progress < duration) { 91 | window.requestAnimationFrame(animateScroll); 92 | } else { 93 | cb && cb(); 94 | } 95 | }; 96 | 97 | window.requestAnimationFrame(animateScroll); 98 | }; 99 | 100 | /* 101 | Focus utility classes 102 | */ 103 | 104 | //Move focus to an element 105 | Util.moveFocus = function (element) { 106 | if( !element ) element = document.getElementsByTagName("body")[0]; 107 | element.focus(); 108 | if (document.activeElement !== element) { 109 | element.setAttribute('tabindex','-1'); 110 | element.focus(); 111 | } 112 | }; 113 | 114 | /* 115 | Misc 116 | */ 117 | 118 | Util.getIndexInArray = function(array, el) { 119 | return Array.prototype.indexOf.call(array, el); 120 | }; 121 | 122 | Util.cssSupports = function(property, value) { 123 | if('CSS' in window) { 124 | return CSS.supports(property, value); 125 | } else { 126 | var jsProperty = property.replace(/-([a-z])/g, function (g) { return g[1].toUpperCase();}); 127 | return jsProperty in document.body.style; 128 | } 129 | }; 130 | 131 | /* 132 | Polyfills 133 | */ 134 | //Closest() method 135 | if (!Element.prototype.matches) { 136 | Element.prototype.matches = Element.prototype.msMatchesSelector || Element.prototype.webkitMatchesSelector; 137 | } 138 | 139 | if (!Element.prototype.closest) { 140 | Element.prototype.closest = function(s) { 141 | var el = this; 142 | if (!document.documentElement.contains(el)) return null; 143 | do { 144 | if (el.matches(s)) return el; 145 | el = el.parentElement || el.parentNode; 146 | } while (el !== null && el.nodeType === 1); 147 | return null; 148 | }; 149 | } 150 | 151 | //Custom Event() constructor 152 | if ( typeof window.CustomEvent !== "function" ) { 153 | 154 | function CustomEvent ( event, params ) { 155 | params = params || { bubbles: false, cancelable: false, detail: undefined }; 156 | var evt = document.createEvent( 'CustomEvent' ); 157 | evt.initCustomEvent( event, params.bubbles, params.cancelable, params.detail ); 158 | return evt; 159 | } 160 | 161 | CustomEvent.prototype = window.Event.prototype; 162 | 163 | window.CustomEvent = CustomEvent; 164 | } 165 | 166 | /* 167 | Animation curves 168 | */ 169 | Math.easeInOutQuad = function (t, b, c, d) { 170 | t /= d/2; 171 | if (t < 1) return c/2*t*t + b; 172 | t--; 173 | return -c/2 * (t*(t-2) - 1) + b; 174 | }; -------------------------------------------------------------------------------- /docs/assets/lib/highlightjs/default.min.css: 
-------------------------------------------------------------------------------- 1 | .hljs{display:block;overflow-x:auto;padding:.5em;background:#F0F0F0}.hljs,.hljs-subst{color:#444}.hljs-comment{color:#888888}.hljs-keyword,.hljs-attribute,.hljs-selector-tag,.hljs-meta-keyword,.hljs-doctag,.hljs-name{font-weight:bold}.hljs-type,.hljs-string,.hljs-number,.hljs-selector-id,.hljs-selector-class,.hljs-quote,.hljs-template-tag,.hljs-deletion{color:#880000}.hljs-title,.hljs-section{color:#880000;font-weight:bold}.hljs-regexp,.hljs-symbol,.hljs-variable,.hljs-template-variable,.hljs-link,.hljs-selector-attr,.hljs-selector-pseudo{color:#BC6060}.hljs-literal{color:#78A960}.hljs-built_in,.hljs-bullet,.hljs-code,.hljs-addition{color:#397300}.hljs-meta{color:#1f7199}.hljs-meta-string{color:#4d99bf}.hljs-emphasis{font-style:italic}.hljs-strong{font-weight:bold} -------------------------------------------------------------------------------- /docs/assets/stylesheets/extra.css: -------------------------------------------------------------------------------- 1 | /* 2 | * Copyright (c) 2019, WSO2 Inc. (http://www.wso2.org) All Rights Reserved. 3 | * 4 | * WSO2 Inc. licenses this file to you under the Apache License, 5 | * Version 2.0 (the "License"); you may not use this file except 6 | * in compliance with the License. 7 | * You may obtain a copy of the License at 8 | * 9 | * http://www.apache.org/licenses/LICENSE-2.0 10 | * 11 | * Unless required by applicable law or agreed to in writing, 12 | * software distributed under the License is distributed on an 13 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 14 | * KIND, either express or implied. See the License for the 15 | * specific language governing permissions and limitations 16 | * under the License. 
17 | */ 18 | 19 | 20 | .md-header-nav__button.md-logo img { 21 | width: 80px; 22 | height: 20px; 23 | } 24 | 25 | .extension-title-low { 26 | font-weight: 100; 27 | padding-left: 15px; 28 | } 29 | 30 | .extension-title { 31 | font-weight: 700; 32 | margin-right: 5px; 33 | } 34 | 35 | .extension-title:hover { 36 | opacity: .7; 37 | } 38 | 39 | .md-header-nav__title { 40 | padding: 1px 0; 41 | } 42 | 43 | .md-main > .md-main__inner > .md-content { 44 | -webkit-transition: margin 0.2s linear; 45 | -khtml-transition: margin 0.2s linear; 46 | -moz-transition: margin 0.2s linear; 47 | -ms-transition: margin 0.2s linear; 48 | transition: margin 0.2s linear; 49 | } 50 | 51 | .md-main .md-sidebar.md-sidebar--secondary { 52 | -webkit-transition: width 0.2s linear; 53 | -khtml-transition: width 0.2s linear; 54 | -moz-transition: width 0.2s linear; 55 | -ms-transition: width 0.2s linear; 56 | transition: width 0.2s linear; 57 | padding-bottom: 80px; 58 | } 59 | 60 | .md-main.hide-toc .md-content { 61 | margin-right: 0; 62 | } 63 | 64 | .md-main.hide-toc .md-sidebar.md-sidebar--secondary { 65 | width: 0; 66 | } 67 | 68 | .md-header { 69 | height: 2.6rem; 70 | } 71 | 72 | .md-header-nav__topic { 73 | top: 0; 74 | margin-top: .2rem; 75 | font-weight: bold; 76 | color: darkslategray; 77 | } 78 | 79 | /*White header*/ 80 | [data-md-color-primary=teal] .md-header { 81 | background-color: #ffffff; 82 | color: #212121; 83 | border-top: 4px solid #009688; 84 | box-shadow: 0 0 0.2rem #009688, 0 0.2rem 0.4rem rgba(0,0,0,.2); 85 | } 86 | 87 | @media only screen and (min-width: 76.25em) { 88 | .md-source { 89 | padding-right: 0; 90 | text-align: right; 91 | opacity: 0.7; 92 | } 93 | } 94 | 95 | html .md-typeset .superfences-tabs > label:hover { 96 | color: #009688; 97 | } 98 | 99 | .md-search__input { 100 | background-color: #dedede; 101 | } 102 | 103 | .md-search__input::placeholder { 104 | color: #404040; 105 | } 106 | 107 | .md-nav-link-wrapper { 108 | display: block; 109 | margin-top: .625em; 110 | transition: color .125s; 111 | text-overflow: ellipsis; 112 | cursor: pointer; 113 | overflow: hidden 114 | } 115 | 116 | .md-nav__item--nested > .md-nav-link-wrapper > .md-nav__link { 117 | display: inline; 118 | } 119 | 120 | .md-nav__item--nested > .md-nav-link-wrapper > .md-nav__link:after { 121 | content: "\E313"; 122 | display: inline-block; 123 | vertical-align: middle; 124 | } 125 | 126 | .md-nav__item--nested .md-nav__toggle:checked ~ .md-nav-link-wrapper > .md-nav__link:after { 127 | -webkit-transform: rotateX(180deg); 128 | transform: rotateX(180deg) 129 | } 130 | 131 | [data-md-color-primary=deep-orange] .md-nav-link-wrapper a:focus, 132 | [data-md-color-primary=deep-orange] .md-nav-link-wrapper a:hover { 133 | color: #009688; 134 | } 135 | 136 | .hljs-title, 137 | .hljs-section { 138 | color: #009688; 139 | font-weight: normal; 140 | } 141 | 142 | .hljs-type, 143 | .hljs-string, 144 | .hljs-number, 145 | .hljs-selector-id, 146 | .hljs-selector-class, 147 | .hljs-quote, 148 | .hljs-template-tag, 149 | .hljs-deletion { 150 | color: #009688; 151 | } 152 | 153 | .home_icon { 154 | height: 45px; 155 | margin-right: -12px; 156 | vertical-align: middle; 157 | } 158 | 159 | .home_icon a { 160 | margin-top: 4px; 161 | } 162 | 163 | .home_icon a i { 164 | font-size: 25px; 165 | } 166 | 167 | .md-nav__link[data-md-state=blur] { 168 | color: rgba(0, 0, 0, .54); 169 | } 170 | 171 | .quick_links { 172 | float: right; 173 | } 174 | 175 | .nav_link { 176 | color: #fff; 177 | font-size: 22px; 178 | -webkit-transition: 
right 0.2s linear; 179 | -khtml-transition: right 0.2s linear; 180 | -moz-transition: right 0.2s linear; 181 | -ms-transition: right 0.2s linear; 182 | transition: right 0.2s linear, color .25s, opacity .1s; 183 | z-index: 2; 184 | padding-left: 20px; 185 | opacity: 0; 186 | display: none; 187 | } 188 | 189 | .nav_link.active:hover { 190 | opacity: 1; 191 | } 192 | 193 | .edit_link.active { 194 | display: block; 195 | opacity: 0.7; 196 | margin-top: 18px; 197 | } 198 | 199 | .md-header-nav { 200 | padding-right: 0; 201 | } 202 | 203 | @media only screen and (min-width: 76.25em) { 204 | .md-search__inner { 205 | margin-right: 0; 206 | } 207 | } 208 | 209 | @media only screen and (min-width: 60em) { 210 | .md-search { 211 | padding-right: 0; 212 | } 213 | } 214 | 215 | @media only screen and (max-width: 76.1875em) { 216 | html .md-nav--primary .md-nav__title--site .md-nav__button { 217 | font-size: 1.9rem; 218 | padding: 0 0 0 .4rem; 219 | height: 2.2rem; 220 | } 221 | 222 | .extension-title { 223 | display: none; 224 | } 225 | 226 | html [data-md-color-primary=teal] .md-nav--primary .md-nav__title--site { 227 | background-color: #fff; 228 | border-top: 3px solid #009688; 229 | box-shadow: 0 0 0.2rem rgba(0, 0, 0, .1), 0 0.2rem 0.4rem rgba(0, 0, 0, .2); 230 | color: black; 231 | } 232 | 233 | .md-header-nav__source { 234 | display: block; 235 | } 236 | 237 | .md-source__icon + .md-source__repository { 238 | display: none; 239 | } 240 | 241 | } 242 | 243 | .feedbackBtn { 244 | transition: all 450ms cubic-bezier(0.23, 1, 0.32, 1) 0ms; 245 | background: rgb(38, 50, 56);; 246 | color: #fff; 247 | font-size: .7rem; 248 | position: fixed; 249 | right: 0; 250 | top: 50%; 251 | border-radius: 6px 0px 0px 6px; 252 | writing-mode: vertical-lr; 253 | padding: 15px 2px; 254 | line-height: 30px; 255 | box-shadow: 0 1px 0 rgba(153, 153, 153, 0.25) inset, 0 -1px 0 rgba(0, 0, 0, 0.25) inset; 256 | } 257 | 258 | .feedbackBtn:hover { 259 | background-color: #3c464c; 260 | } 261 | 262 | .md-footer-nav { 263 | background-color: rgba(0, 0, 0, 0.67); 264 | } 265 | 266 | .md-footer-nav__link { 267 | padding-top: .4rem; 268 | padding-bottom: 0; 269 | } 270 | 271 | .md-footer-nav__inner { 272 | height: 3rem; 273 | overflow: hidden; 274 | } 275 | 276 | .md-footer-nav__direction { 277 | font-size: .5rem; 278 | top: 3px; 279 | } 280 | 281 | .md-footer-nav__title { 282 | font-size: .7rem; 283 | } 284 | 285 | .md-footer-nav .md-flex__cell { 286 | vertical-align: baseline; 287 | } 288 | 289 | .md-footer-copyright__highlight { 290 | padding-right: 10px; 291 | border-right: 1px solid #575757; 292 | margin-right: 10px; 293 | display: inline-block; 294 | } 295 | 296 | .text--replace { 297 | overflow: hidden; 298 | color: transparent; 299 | text-indent: 100%; 300 | white-space: nowrap 301 | } 302 | 303 | .cd-top { 304 | position: fixed; 305 | bottom: 20px; 306 | right: 20px; 307 | display: inline-block; 308 | height: 40px; 309 | width: 40px; 310 | box-shadow: 0 0 10px rgba(0, 0, 0, 0.05); 311 | background: url(../lib/backtotop/img/cd-top-arrow.svg) no-repeat center 50%; 312 | background-color: hsla(174, 100%, 29%, 0.8); 313 | } 314 | 315 | .js .cd-top { 316 | visibility: hidden; 317 | opacity: 0; 318 | transition: opacity .3s, visibility .3s, background-color .3s 319 | } 320 | 321 | .js .cd-top--is-visible { 322 | visibility: visible; 323 | opacity: 1 324 | } 325 | 326 | .js .cd-top--fade-out { 327 | opacity: .5 328 | } 329 | 330 | .js .cd-top:hover { 331 | background-color: hsl(174, 100%, 29%); 332 | opacity: 1 333 | } 334 | 
335 | .md-nav__source { 336 | display: none; 337 | } 338 | 339 | @media only screen and (max-width: 1220px) { 340 | .nav_link { 341 | display: none; 342 | } 343 | } 344 | 345 | .md-nav--secondary ul > li.md-nav__item { 346 | border-left: 4px solid transparent; 347 | transition: border 500ms; 348 | margin-left: -3px; 349 | } 350 | 351 | .md-sidebar--secondary .md-nav--secondary > ul { 352 | border-left: 2px solid #ccc; 353 | margin-left: 3px; 354 | } 355 | 356 | .md-nav--secondary > ul li.md-nav__item.active { 357 | border-color: #242424 !important; 358 | } 359 | 360 | .md-sidebar--secondary .md-nav--secondary > ul ul { 361 | margin-left: -15px; 362 | } 363 | 364 | .md-sidebar--secondary .md-nav--secondary > ul ul > li { 365 | padding-left: 30px; 366 | } 367 | 368 | .md-sidebar--secondary .md-nav--secondary > ul ul ul { 369 | margin-left: -31px; 370 | } 371 | 372 | .md-sidebar--secondary .md-nav--secondary > ul ul ul > li { 373 | padding-left: 45px; 374 | } 375 | 376 | .md-sidebar--secondary .md-nav--secondary > ul ul ul ul { 377 | margin-left: -46px; 378 | } 379 | 380 | .md-sidebar--secondary .md-nav--secondary > ul ul ul ul > li { 381 | padding-left: 60px; 382 | } 383 | 384 | .md-sidebar--secondary .md-nav--secondary > ul ul ul ul ul { 385 | margin-left: -61px; 386 | } 387 | 388 | .md-sidebar--secondary .md-nav--secondary > ul ul ul ul ul > li { 389 | padding-left: 75px; 390 | } 391 | 392 | .md-sidebar { 393 | position: fixed; 394 | } 395 | 396 | .md-sidebar[data-md-state=lock] { 397 | top: 3.9rem; 398 | } 399 | 400 | @media screen and (min-width: 1220px) and (max-width: 1599px) { 401 | .nav_link { 402 | top: 62px; 403 | } 404 | } 405 | 406 | .md-content__icon, 407 | .md-footer-nav__button, 408 | .md-header-nav__button, 409 | .md-nav__button, 410 | .md-nav__title::before, 411 | .md-search-result__article--document::before { 412 | margin: 0.3rem; 413 | } 414 | 415 | .md-source__icon { 416 | float: right; 417 | text-align: right; 418 | width: 1.8rem; 419 | } 420 | 421 | @media (min-width: 60em) { 422 | .md-header-nav__source { 423 | padding-right: 0; 424 | } 425 | } 426 | 427 | .md-source { 428 | opacity: 0.7; 429 | } 430 | 431 | .md-source:hover { 432 | opacity: 1; 433 | } 434 | 435 | @media only screen and (min-width: 45em) { 436 | .md-footer-social { 437 | padding: 2px 0; 438 | } 439 | } 440 | 441 | .md-nav__button img { 442 | width: 150%; 443 | } 444 | -------------------------------------------------------------------------------- /docs/images/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/siddhi-io/siddhi-io-kafka/0453381b4ba6d6453358597eb2243d5fa10c5f02/docs/images/favicon.ico -------------------------------------------------------------------------------- /docs/images/siddhi-logo.svg: -------------------------------------------------------------------------------- 1 | 2 | 3 | 47 | -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | Siddhi IO Kafka 2 | ====================================== 3 | 4 | [![Jenkins Build Status](https://wso2.org/jenkins/job/siddhi/job/siddhi-io-kafka/badge/icon)](https://wso2.org/jenkins/job/siddhi/job/siddhi-io-kafka/) 5 | [![GitHub Release](https://img.shields.io/github/release/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/releases) 6 | [![GitHub Release 
Date](https://img.shields.io/github/release-date/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/releases) 7 | [![GitHub Open Issues](https://img.shields.io/github/issues-raw/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/issues) 8 | [![GitHub Last Commit](https://img.shields.io/github/last-commit/siddhi-io/siddhi-io-kafka.svg)](https://github.com/siddhi-io/siddhi-io-kafka/commits/master) 9 | [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0) 10 | 11 | The **siddhi-io-kafka extension** is an extension to Siddhi that receives and publishes events from and to Kafka. 12 | 13 | For information on Siddhi and its features, refer to the Siddhi Documentation. 14 | 15 | ## Download 16 | 17 | * Versions 5.x and above with group id `io.siddhi.extension.*` from here. 18 | * Versions 4.x and lower with group id `org.wso2.extension.siddhi.*` from here. 19 | 20 | ## Latest API Docs 21 | 22 | Latest API Docs is 5.0.19. 23 | 24 | ## Features 25 | 26 | * kafka *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.
To configure a sink to use the Kafka transport, the type parameter should have `kafka` as its value; a minimal configuration sketch is given after this feature list.

27 | * kafka-replay-request *(Sink)*

This sink is used to request replay of a specific range of events on a specified partition of a topic.

28 | * kafkaMultiDC *(Sink)*

A Kafka sink publishes events processed by WSO2 SP to a topic with a partition for a Kafka cluster. The events can be published in the TEXT, XML, JSON, or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka sink creates the default partition for the given topic. The publishing topic and partition can be dynamic values taken from the Siddhi event.
To configure a sink that publishes events via the Kafka transport using two Kafka brokers for the same topic, the type parameter must have `kafkaMultiDC` as its value; see the second sketch after this feature list.

29 | * kafka *(Source)*

A Kafka source receives events to be processed by WSO2 SP from a topic with a partition for a Kafka cluster. The events received can be in the TEXT, XML, JSON, or Binary format.
If the topic is not already created in the Kafka cluster, the Kafka source creates the default partition for the given topic.

30 | * kafka-replay-response *(Source)*

This source is used to listen to replayed events requested via the kafka-replay-request sink.

31 | * kafkaMultiDC *(Source)*

The Kafka Multi-Datacenter (DC) source receives records from the same topic in brokers deployed in two different Kafka clusters. It filters out all the duplicate messages and ensures that the events are received in the correct order using sequential numbering. It receives events in formats such as TEXT, XML, JSON, and Binary. The Kafka source creates the default partition '0' for a given topic if the topic has not yet been created in the Kafka cluster.
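
The following is a minimal sketch of a plain kafka sink and source wired into one Siddhi app. It is based on the configurations exercised in this repository's test cases; the topic names, group id, and stream names are illustrative assumptions, and it presumes a broker reachable at localhost:9092.

```
-- Minimal pass-through sketch (illustrative): consume from one assumed topic, republish to another.
@App:name('KafkaSampleApp')

-- Source: consumes XML-mapped events from the assumed topic 'kafka_input_topic' on a single consumer thread.
@source(type='kafka', topic.list='kafka_input_topic', group.id='sample_group', threading.option='single.thread', bootstrap.servers='localhost:9092', @map(type='xml'))
define stream InputStream (symbol string, price float, volume long);

-- Sink: publishes the selected events to the assumed topic 'kafka_result_topic'.
@sink(type='kafka', topic='kafka_result_topic', bootstrap.servers='localhost:9092', @map(type='xml'))
define stream OutputStream (symbol string, price float, volume long);

@info(name = 'query1')
from InputStream select symbol, price, volume insert into OutputStream;
```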
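
A corresponding sketch for the kafkaMultiDC sink and source, mirroring the two-broker configuration used in KafkaMultiDCSourceTestCases (the topic name and the broker pair localhost:9092,localhost:9093 are taken from those tests; the stream names and the final query are illustrative assumptions):

```
-- kafkaMultiDC sketch (illustrative): the sink publishes each event to both brokers; the source
-- reads from both, filters duplicates, and restores order via sequence numbers.
@App:name('KafkaMultiDCSampleApp')

@sink(type='kafkaMultiDC', topic='myTopic', partition='0', bootstrap.servers='localhost:9092,localhost:9093', @map(type='xml'))
define stream PublishStream (symbol string, price float, volume long);

@source(type='kafkaMultiDC', topic='myTopic', partition='0', bootstrap.servers='localhost:9092,localhost:9093', @map(type='xml'))
define stream ReceiveStream (symbol string, price float, volume long);

@info(name = 'query1')
from ReceiveStream select symbol, price, volume insert into ProcessedStream;
```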

32 | 33 | ## Installation 34 | 35 | For installing this extension in the Streaming Integrator Server, and to add the dependent jars, refer to the Streaming Integrator documentation section on downloading and installing siddhi extensions.\ 36 | For installing this extension in the Streaming Integrator Tooling, and to add the dependent jars, refer to the Streaming Integrator documentation section on installing siddhi extensions. 37 | 38 | ## Dependencies 39 | 40 | The following JARs, taken from the Kafka distribution's `/libs` directory, will be converted to OSGi and copied to `WSO2SI_HOME/lib` and `WSO2SI_HOME/samples/sample-clients/lib`. 41 | 42 | - kafka_2.11-*.jar 43 | - kafka-clients-*.jar 44 | - metrics-core-*.jar 45 | - scala-library-2.11.*.jar 46 | - scala-parser-combinators_2.11.*.jar (if exists) 47 | - zkclient-*.jar 48 | - zookeeper-*.jar 49 | 50 | #### Setup Kafka 51 | 52 | As a prerequisite, you have to start the Kafka message broker. Please follow the steps below. 53 | 1. Download the Kafka [distribution](https://kafka.apache.org/downloads) 54 | 2. Unzip the above distribution and go to the `bin` directory 55 | 3. Start ZooKeeper by executing the command below: 56 | ```bash 57 | zookeeper-server-start.sh config/zookeeper.properties 58 | ``` 59 | 4. Start the Kafka broker by executing the command below: 60 | ```bash 61 | kafka-server-start.sh config/server.properties 62 | ``` 63 | 64 | Refer to the Kafka documentation for more details: https://kafka.apache.org/quickstart 65 | 66 | ## Support and Contribution 67 | 68 | * We encourage users to ask questions and get support via StackOverflow; make sure to add the `siddhi` tag to the question for a better response. 69 | 70 | * If you find any issues related to the extension, please report them on the issue tracker. 71 | 72 | * For production support and other contribution-related information, refer to the Siddhi Community documentation. 73 | -------------------------------------------------------------------------------- /docs/license.md: -------------------------------------------------------------------------------- 1 | Copyright (c) 2019 WSO2 Inc. () All Rights Reserved. 2 | 3 | WSO2 Inc. licenses this file to you under the Apache License, 4 | Version 2.0 (the "License"); you may not use this file except 5 | in compliance with the License. 6 | You may obtain a copy of the License at 7 | 8 | 9 | 10 | Unless required by applicable law or agreed to in writing, 11 | software distributed under the License is distributed on an 12 | "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 13 | KIND, either express or implied. See the License for the 14 | specific language governing permissions and limitations 15 | under the License. 16 | 17 | ``` 18 | ------------------------------------------------------------------------- 19 | Apache License 20 | Version 2.0, January 2004 21 | http://www.apache.org/licenses/ 22 | 23 | 24 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 25 | 26 | 1. Definitions. 27 | 28 | "License" shall mean the terms and conditions for use, reproduction, 29 | and distribution as defined by Sections 1 through 9 of this document. 30 | 31 | "Licensor" shall mean the copyright owner or entity authorized by 32 | the copyright owner that is granting the License. 33 | 34 | "Legal Entity" shall mean the union of the acting entity and all 35 | other entities that control, are controlled by, or are under common 36 | control with that entity. 
For the purposes of this definition, 37 | "control" means (i) the power, direct or indirect, to cause the 38 | direction or management of such entity, whether by contract or 39 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 40 | outstanding shares, or (iii) beneficial ownership of such entity. 41 | 42 | "You" (or "Your") shall mean an individual or Legal Entity 43 | exercising permissions granted by this License. 44 | 45 | "Source" form shall mean the preferred form for making modifications, 46 | including but not limited to software source code, documentation 47 | source, and configuration files. 48 | 49 | "Object" form shall mean any form resulting from mechanical 50 | transformation or translation of a Source form, including but 51 | not limited to compiled object code, generated documentation, 52 | and conversions to other media types. 53 | 54 | "Work" shall mean the work of authorship, whether in Source or 55 | Object form, made available under the License, as indicated by a 56 | copyright notice that is included in or attached to the work 57 | (an example is provided in the Appendix below). 58 | 59 | "Derivative Works" shall mean any work, whether in Source or Object 60 | form, that is based on (or derived from) the Work and for which the 61 | editorial revisions, annotations, elaborations, or other modifications 62 | represent, as a whole, an original work of authorship. For the purposes 63 | of this License, Derivative Works shall not include works that remain 64 | separable from, or merely link (or bind by name) to the interfaces of, 65 | the Work and Derivative Works thereof. 66 | 67 | "Contribution" shall mean any work of authorship, including 68 | the original version of the Work and any modifications or additions 69 | to that Work or Derivative Works thereof, that is intentionally 70 | submitted to Licensor for inclusion in the Work by the copyright owner 71 | or by an individual or Legal Entity authorized to submit on behalf of 72 | the copyright owner. For the purposes of this definition, "submitted" 73 | means any form of electronic, verbal, or written communication sent 74 | to the Licensor or its representatives, including but not limited to 75 | communication on electronic mailing lists, source code control systems, 76 | and issue tracking systems that are managed by, or on behalf of, the 77 | Licensor for the purpose of discussing and improving the Work, but 78 | excluding communication that is conspicuously marked or otherwise 79 | designated in writing by the copyright owner as "Not a Contribution." 80 | 81 | "Contributor" shall mean Licensor and any individual or Legal Entity 82 | on behalf of whom a Contribution has been received by Licensor and 83 | subsequently incorporated within the Work. 84 | 85 | 2. Grant of Copyright License. Subject to the terms and conditions of 86 | this License, each Contributor hereby grants to You a perpetual, 87 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 88 | copyright license to reproduce, prepare Derivative Works of, 89 | publicly display, publicly perform, sublicense, and distribute the 90 | Work and such Derivative Works in Source or Object form. 91 | 92 | 3. Grant of Patent License.
Subject to the terms and conditions of 93 | this License, each Contributor hereby grants to You a perpetual, 94 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 95 | (except as stated in this section) patent license to make, have made, 96 | use, offer to sell, sell, import, and otherwise transfer the Work, 97 | where such license applies only to those patent claims licensable 98 | by such Contributor that are necessarily infringed by their 99 | Contribution(s) alone or by combination of their Contribution(s) 100 | with the Work to which such Contribution(s) was submitted. If You 101 | institute patent litigation against any entity (including a 102 | cross-claim or counterclaim in a lawsuit) alleging that the Work 103 | or a Contribution incorporated within the Work constitutes direct 104 | or contributory patent infringement, then any patent licenses 105 | granted to You under this License for that Work shall terminate 106 | as of the date such litigation is filed. 107 | 108 | 4. Redistribution. You may reproduce and distribute copies of the 109 | Work or Derivative Works thereof in any medium, with or without 110 | modifications, and in Source or Object form, provided that You 111 | meet the following conditions: 112 | 113 | (a) You must give any other recipients of the Work or 114 | Derivative Works a copy of this License; and 115 | 116 | (b) You must cause any modified files to carry prominent notices 117 | stating that You changed the files; and 118 | 119 | (c) You must retain, in the Source form of any Derivative Works 120 | that You distribute, all copyright, patent, trademark, and 121 | attribution notices from the Source form of the Work, 122 | excluding those notices that do not pertain to any part of 123 | the Derivative Works; and 124 | 125 | (d) If the Work includes a "NOTICE" text file as part of its 126 | distribution, then any Derivative Works that You distribute must 127 | include a readable copy of the attribution notices contained 128 | within such NOTICE file, excluding those notices that do not 129 | pertain to any part of the Derivative Works, in at least one 130 | of the following places: within a NOTICE text file distributed 131 | as part of the Derivative Works; within the Source form or 132 | documentation, if provided along with the Derivative Works; or, 133 | within a display generated by the Derivative Works, if and 134 | wherever such third-party notices normally appear. The contents 135 | of the NOTICE file are for informational purposes only and 136 | do not modify the License. You may add Your own attribution 137 | notices within Derivative Works that You distribute, alongside 138 | or as an addendum to the NOTICE text from the Work, provided 139 | that such additional attribution notices cannot be construed 140 | as modifying the License. 141 | 142 | You may add Your own copyright statement to Your modifications and 143 | may provide additional or different license terms and conditions 144 | for use, reproduction, or distribution of Your modifications, or 145 | for any such Derivative Works as a whole, provided Your use, 146 | reproduction, and distribution of the Work otherwise complies with 147 | the conditions stated in this License. 148 | 149 | 5. Submission of Contributions. Unless You explicitly state otherwise, 150 | any Contribution intentionally submitted for inclusion in the Work 151 | by You to the Licensor shall be under the terms and conditions of 152 | this License, without any additional terms or conditions.
153 | Notwithstanding the above, nothing herein shall supersede or modify 154 | the terms of any separate license agreement you may have executed 155 | with Licensor regarding such Contributions. 156 | 157 | 6. Trademarks. This License does not grant permission to use the trade 158 | names, trademarks, service marks, or product names of the Licensor, 159 | except as required for reasonable and customary use in describing the 160 | origin of the Work and reproducing the content of the NOTICE file. 161 | 162 | 7. Disclaimer of Warranty. Unless required by applicable law or 163 | agreed to in writing, Licensor provides the Work (and each 164 | Contributor provides its Contributions) on an "AS IS" BASIS, 165 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 166 | implied, including, without limitation, any warranties or conditions 167 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 168 | PARTICULAR PURPOSE. You are solely responsible for determining the 169 | appropriateness of using or redistributing the Work and assume any 170 | risks associated with Your exercise of permissions under this License. 171 | 172 | 8. Limitation of Liability. In no event and under no legal theory, 173 | whether in tort (including negligence), contract, or otherwise, 174 | unless required by applicable law (such as deliberate and grossly 175 | negligent acts) or agreed to in writing, shall any Contributor be 176 | liable to You for damages, including any direct, indirect, special, 177 | incidental, or consequential damages of any character arising as a 178 | result of this License or out of the use or inability to use the 179 | Work (including but not limited to damages for loss of goodwill, 180 | work stoppage, computer failure or malfunction, or any and all 181 | other commercial damages or losses), even if such Contributor 182 | has been advised of the possibility of such damages. 183 | 184 | 9. Accepting Warranty or Additional Liability. While redistributing 185 | the Work or Derivative Works thereof, You may choose to offer, 186 | and charge a fee for, acceptance of support, warranty, indemnity, 187 | or other liability obligations and/or rights consistent with this 188 | License. However, in accepting such obligations, You may act only 189 | on Your own behalf and on Your sole responsibility, not on behalf 190 | of any other Contributor, and only if You agree to indemnify, 191 | defend, and hold each Contributor harmless for any liability 192 | incurred by, or claims asserted against, such Contributor by reason 193 | of your accepting any such warranty or additional liability.
194 | 195 | END OF TERMS AND CONDITIONS 196 | ``` 197 | -------------------------------------------------------------------------------- /findbugs-exclude.xml: -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- /issue_template.md: -------------------------------------------------------------------------------- 1 | **Description:** 2 | 3 | 4 | **Suggested Labels:** 5 | 6 | 7 | **Suggested Assignees:** 8 | 9 | 10 | **Affected Product Version:** 11 | 12 | **OS, DB, other environment details and versions:** 13 | 14 | **Steps to reproduce:** 15 | 16 | 17 | **Related Issues:** 18 | -------------------------------------------------------------------------------- /mkdocs.yml: -------------------------------------------------------------------------------- 1 | site_name: Siddhi IO Kafka 2 | site_description: Siddhi IO Kafka Extension 3 | repo_name: Siddhi IO Kafka 4 | repo_url: https://github.com/siddhi-io/siddhi-io-kafka/ 5 | edit_uri: https://github.com/siddhi-io/siddhi-io-kafka/blob/master/ 6 | copyright: Siddhi - Documentation 7 | theme: 8 | name: material 9 | logo: images/siddhi-logo.svg 10 | favicon: images/favicon.ico 11 | palette: 12 | primary: teal 13 | accent: teal 14 | extra_css: 15 | - assets/stylesheets/extra.css 16 | extra_javascript: 17 | - assets/javascripts/extra.js 18 | extra: 19 | social: 20 | - type: github 21 | link: https://github.com/siddhi-io/siddhi 22 | - type: medium 23 | link: https://medium.com/siddhi-io 24 | - type: twitter 25 | link: https://twitter.com/siddhi_io 26 | - type: linkedin 27 | link: https://www.linkedin.com/groups/13553064 28 | google_analytics: 29 | - UA-103065-28 30 | - auto 31 | markdown_extensions: 32 | - markdown.extensions.admonition 33 | pages: 34 | - Information: index.md 35 | - API Docs: 36 | - latest: api/latest.md 37 | - 5.0.19: api/5.0.19.md 38 | - 5.0.18: api/5.0.18.md 39 | - 5.0.17: api/5.0.17.md 40 | - 5.0.16: api/5.0.16.md 41 | - 5.0.15: api/5.0.15.md 42 | - 5.0.14: api/5.0.14.md 43 | - 5.0.13: api/5.0.13.md 44 | - 5.0.12: api/5.0.12.md 45 | - 5.0.11: api/5.0.11.md 46 | - 5.0.10: api/5.0.10.md 47 | - 5.0.9: api/5.0.9.md 48 | - 5.0.8: api/5.0.8.md 49 | - 5.0.7: api/5.0.7.md 50 | - 5.0.6: api/5.0.6.md 51 | - 5.0.5: api/5.0.5.md 52 | - 5.0.4: api/5.0.4.md 53 | - 5.0.3: api/5.0.3.md 54 | - 5.0.2: api/5.0.2.md 55 | - 5.0.1: api/5.0.1.md 56 | - 5.0.0: api/5.0.0.md 57 | - 4.2.1: api/4.2.1.md 58 | - 4.2.0: api/4.2.0.md 59 | - 4.1.21: api/4.1.21.md 60 | - 4.1.20: api/4.1.20.md 61 | - 4.1.19: api/4.1.19.md 62 | - 4.1.18: api/4.1.18.md 63 | - 4.1.17: api/4.1.17.md 64 | - 4.1.16: api/4.1.16.md 65 | - 4.1.15: api/4.1.15.md 66 | - 4.1.14: api/4.1.14.md 67 | - 4.1.13: api/4.1.13.md 68 | - 4.1.12: api/4.1.12.md 69 | - 4.1.11: api/4.1.11.md 70 | - 4.1.10: api/4.1.10.md 71 | - 4.1.9: api/4.1.9.md 72 | - 4.1.8: api/4.1.8.md 73 | - 4.1.7: api/4.1.7.md 74 | - 4.1.6: api/4.1.6.md 75 | - 4.1.5: api/4.1.5.md 76 | - 4.1.4: api/4.1.4.md 77 | - 4.1.3: api/4.1.3.md 78 | - 4.1.2: api/4.1.2.md 79 | - 4.1.1: api/4.1.1.md 80 | - 4.1.0: api/4.1.0.md 81 | - 4.0.17: api/4.0.17.md 82 | - 4.0.16: api/4.0.16.md 83 | - 4.0.15: api/4.0.15.md 84 | - 4.0.14: api/4.0.14.md 85 | - 4.0.13: api/4.0.13.md 86 | - 4.0.12: api/4.0.12.md 87 | - 4.0.11: api/4.0.11.md 88 | - 4.0.10: api/4.0.10.md 89 | - 4.0.9: api/4.0.9.md 90 | - 4.0.8:
api/4.0.8.md 91 | - 4.0.7: api/4.0.7.md 92 | - License: license.md 93 | -------------------------------------------------------------------------------- /pull_request_template.md: -------------------------------------------------------------------------------- 1 | ## Purpose 2 | > Describe the problems, issues, or needs driving this feature/fix and include links to related issues in the following format: Resolves issue1, issue2, etc. 3 | 4 | ## Goals 5 | > Describe the solutions that this feature/fix will introduce to resolve the problems described above 6 | 7 | ## Approach 8 | > Describe how you are implementing the solutions. Include an animated GIF or screenshot if the change affects the UI (email documentation@wso2.com to review all UI text). Include a link to a Markdown file or Google doc if the feature write-up is too long to paste here. 9 | 10 | ## User stories 11 | > Summary of user stories addressed by this change 12 | 13 | ## Release note 14 | > Brief description of the new feature or bug fix as it will appear in the release notes 15 | 16 | ## Documentation 17 | > Link(s) to product documentation that addresses the changes of this PR. If no doc impact, enter “N/A” plus brief explanation of why there’s no doc impact 18 | 19 | ## Training 20 | > Link to the PR for changes to the training content in https://github.com/wso2/WSO2-Training, if applicable 21 | 22 | ## Certification 23 | > Type “Sent” when you have provided new/updated certification questions, plus four answers for each question (correct answer highlighted in bold), based on this change. Certification questions/answers should be sent to certification@wso2.com and NOT pasted in this PR. If there is no impact on certification exams, type “N/A” and explain why. 24 | 25 | ## Marketing 26 | > Link to drafts of marketing content that will describe and promote this feature, including product page changes, technical articles, blog posts, videos, etc., if applicable 27 | 28 | ## Automation tests 29 | - Unit tests 30 | > Code coverage information 31 | - Integration tests 32 | > Details about the test cases and coverage 33 | 34 | ## Security checks 35 | - Followed secure coding standards in http://wso2.com/technical-reports/wso2-secure-engineering-guidelines? yes/no 36 | - Ran FindSecurityBugs plugin and verified report? yes/no 37 | - Confirmed that this PR doesn't commit any keys, passwords, tokens, usernames, or other secrets? yes/no 38 | 39 | ## Samples 40 | > Provide high-level details about the samples related to this feature 41 | 42 | ## Related PRs 43 | > List any other related PRs 44 | 45 | ## Migrations (if applicable) 46 | > Describe migration steps and platforms on which migration has been tested 47 | 48 | ## Test environment 49 | > List all JDK versions, operating systems, databases, and browser/versions on which this feature/fix was tested 50 | 51 | ## Learning 52 | > Describe the research phase and any blog posts, patterns, libraries, or add-ons you used to solve the problem. --------------------------------------------------------------------------------