├── LICENSE ├── README.md ├── build_community.sh ├── build_confluent.sh ├── build_lab.sh ├── community.Dockerfile ├── confluent.Dockerfile ├── docker-compose.yml └── lab.Dockerfile /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 
61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 
179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # cp-kafka-connect-custom 2 | 3 | Tooling to build a custom Confluent Platform Kafka Connect container with additional connectors from Confluent Hub. 4 | 5 | ![connect](https://user-images.githubusercontent.com/10326954/65959234-c465ca80-e451-11e9-8d66-7f43fae78ffb.png) 6 | 7 | ## Docker Hub 8 | 9 | The containers are available on Docker Hub [HERE](https://hub.docker.com/r/robcowart/cp-kafka-connect-custom). 10 | 11 | There are two container options, which can be identified by the suffix added to their tags. 12 | 13 | suffix | description 14 | :---|:--- 15 | `_community` | Includes only those connectors released under the [Confluent Community License](http://www.confluent.io/confluent-community-license), or an Open Source license (e.g. MIT or Apache 2.0). 16 | `_confluent` | Includes all connectors in the `_community` container, as well as those licensed under the [Confluent Software Evaluation License](https://www.confluent.io/software-evaluation-license).
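For example, to try a published image locally (a minimal sketch: the tag below is the `_community` build produced by `build_community.sh`, `KAFKA_BROKER` is a placeholder for your own bootstrap server, and only the core worker settings are shown; see `docker-compose.yml` in this repository for a fully commented configuration):

```shell
# Pull the community build from Docker Hub.
docker pull robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_community

# Start a single worker. KAFKA_BROKER is a placeholder; the three storage
# topics are created in that Kafka cluster on first start.
docker run -d --name cp-kafka-connect -p 8083:8083 \
  -e CONNECT_BOOTSTRAP_SERVERS='KAFKA_BROKER:9092' \
  -e CONNECT_GROUP_ID='connect-cluster' \
  -e CONNECT_CONFIG_STORAGE_TOPIC='connect-config' \
  -e CONNECT_OFFSET_STORAGE_TOPIC='connect-offset' \
  -e CONNECT_STATUS_STORAGE_TOPIC='connect-status' \
  -e CONNECT_KEY_CONVERTER='org.apache.kafka.connect.json.JsonConverter' \
  -e CONNECT_VALUE_CONVERTER='org.apache.kafka.connect.json.JsonConverter' \
  -e CONNECT_REST_ADVERTISED_HOST_NAME='localhost' \
  robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_community

# List the connector plugins baked into the image via the Connect REST API.
curl -s http://localhost:8083/connector-plugins
```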
17 | 18 | ## Bundled Connectors 19 | 20 | ### Community 21 | 22 | Connector | Version 23 | :--- | ---: 24 | confluentinc/connect-transforms | 1.3.0 25 | confluentinc/kafka-connect-avro-converter | 5.4.1 26 | confluentinc/kafka-connect-datagen | 0.3.1 27 | confluentinc/kafka-connect-elasticsearch | 5.4.1 28 | confluentinc/kafka-connect-hdfs | 5.4.1 29 | confluentinc/kafka-connect-jdbc | 5.4.1 30 | confluentinc/kafka-connect-s3 | 5.4.1 31 | confluentinc/kafka-connect-vertica | 1.0.2 32 | bkatwal/bkatwal-kafka-connect-solr-sink | 2.0 33 | blueapron/kafka-connect-protobuf-converter | 3.1.0 34 | C0urante/kafka-connect-reddit | 0.1.2 35 | chaitalisagesh/kafka-connect-log-analytics | 0.1 36 | cjmatta/kafka-connect-irc | 5.0.0 37 | debezium/debezium-connector-mongodb | 1.0.0 38 | debezium/debezium-connector-mysql | 1.0.0 39 | debezium/debezium-connector-postgresql | 1.0.0 40 | debezium/debezium-connector-sqlserver | 1.0.0 41 | dhananjaypatkar/kafka-connect-phoenix | 0.1 42 | fbascheper/kafka-connect-telegram | 0.2.0 43 | hpgrahsl/kafka-connect-mongodb | 1.3.1 44 | humio/kafka-connect-hec-sink | 1.0 45 | jaredpetersen/kafka-connect-arangodb | 1.0.4 46 | jcustenborder/kafka-connect-aerospike | 0.2.4 47 | jcustenborder/kafka-connect-json-schema | 0.0.2.1 48 | jcustenborder/kafka-connect-memcached | 0.1.0.10 49 | jcustenborder/kafka-connect-redis | 0.0.2.11 50 | jcustenborder/kafka-connect-simulator | 0.1.120 51 | jcustenborder/kafka-connect-solr | 0.1.34 52 | jcustenborder/kafka-connect-spooldir | 2.0.43 53 | jcustenborder/kafka-connect-transform-common | 0.1.0.34 54 | jcustenborder/kafka-connect-transform-fix | 0.1.0.1 55 | jcustenborder/kafka-connect-transform-maxmind | 0.1.0.10 56 | jcustenborder/kafka-connect-transform-xml | 0.1.0.17 57 | jcustenborder/kafka-connect-twitter | 0.3.33 58 | juxt/kafka-connect-crux | 19.12-1.6.1-alpha 59 | kaliy/kafka-connect-rss | 0.1.0 60 | mdrogalis/voluble | 0.1.0 61 | microsoft/kafka-connect-iothub | 0.6 62 | mongodb/kafka-connect-mongodb | 1.0.1 63 | neo4j/kafka-connect-neo4j | 1.0.2 64 | nishutayal/kafka-connect-hbase | 1.0.1 65 | opencredo/kafka-connect-venafi | 0.9.5 66 | rockset/kafka-connect-rockset | 1.2.1 67 | sanjuthomas/kafka-connect-gcp-bigtable | 1.0.7 68 | ScyllaDB/kafka-connect-scylladb | 1.0.0 69 | splunk/kafka-connect-splunk | 1.1.1 70 | streamthoughts/kafka-connect-file-pulse | 1.2.1 71 | thomaskwscott/kafka-connect-shell-sink | 5.1.0 72 | thomaskwscott/kafka-connect-shell-source | 5.1.0 73 | wepay/kafka-connect-bigquery | 1.6.1 74 | yugabyteinc/yb-kafka-connector | 1.0.0 75 | zeebe-io/kafka-connect-zeebe | 0.22.0 76 | 77 | 78 | ### Confluent 79 | 80 | Connector | Version 81 | :--- | ---: 82 | confluentinc/kafka-connect-activemq | 5.4.1 83 | confluentinc/kafka-connect-activemq-sink | 1.1.2 84 | confluentinc/kafka-connect-appdynamics-metrics | 1.1.0-preview 85 | confluentinc/kafka-connect-aws-cloudwatch-metrics | 1.1.0 86 | confluentinc/kafka-connect-aws-cloudwatch-logs | 1.0.2 87 | confluentinc/kafka-connect-aws-dynamodb | 1.0.2 88 | confluentinc/kafka-connect-aws-lambda | 1.0.1 89 | confluentinc/kafka-connect-aws-redshift | 1.0.1 90 | confluentinc/kafka-connect-azure-blob-storage-source | 1.2.1 91 | confluentinc/kafka-connect-azure-blob-storage | 1.3.1 92 | confluentinc/kafka-connect-azure-data-lake-gen1-storage | 1.3.1 93 | confluentinc/kafka-connect-azure-data-lake-gen2-storage | 1.3.1 94 | confluentinc/kafka-connect-azure-event-hubs | 1.0.1 95 | confluentinc/kafka-connect-azure-functions | 1.0.5 96 | confluentinc/kafka-connect-azure-search | 
1.0.1 97 | confluentinc/kafka-connect-azure-service-bus | 1.0.2 98 | confluentinc/kafka-connect-azure-sql-dw | 1.0.2 99 | confluentinc/kafka-connect-cassandra | 1.2.0 100 | confluentinc/kafka-connect-data-diode | 1.1.1 101 | confluentinc/kafka-connect-datadog-metrics | 1.1.0-preview 102 | confluentinc/kafka-connect-firebase | 1.1.1 103 | confluentinc/kafka-connect-gcp-bigtable | 1.0.4 104 | confluentinc/kafka-connect-gcp-dataproc-sink | 1.0.2 105 | confluentinc/kafka-connect-gcp-functions | 1.0.6 106 | confluentinc/kafka-connect-gcp-pubsub | 1.0.2 107 | confluentinc/kafka-connect-gcp-spanner | 1.0.2 108 | confluentinc/kafka-connect-gcs | 5.5.0 109 | confluentinc/kafka-connect-gcs-source | 1.2.1 110 | confluentinc/kafka-connect-hbase | 1.0.4 111 | confluentinc/kafka-connect-hdfs3 | 1.0.5 112 | confluentinc/kafka-connect-hdfs2-source | 1.2.1-preview 113 | confluentinc/kafka-connect-hdfs3-source | 1.2.1 114 | confluentinc/kafka-connect-http | 1.0.8 115 | confluentinc/kafka-connect-ibmmq | 5.4.1 116 | confluentinc/kafka-connect-ibmmq-sink | 1.1.2 117 | confluentinc/kafka-connect-influxdb | 1.1.2 118 | confluentinc/kafka-connect-jms | 5.4.1 119 | confluentinc/kafka-connect-jms-sink | 1.1.2 120 | confluentinc/kafka-connect-kinesis | 1.1.4 121 | confluentinc/kafka-connect-kudu | 1.0.1 122 | confluentinc/kafka-connect-maprdb | 1.1.1 123 | confluentinc/kafka-connect-mqtt | 1.2.3 124 | confluentinc/kafka-connect-netezza | 1.0.1 125 | confluentinc/kafka-connect-omnisci | 1.0.2 126 | confluentinc/kafka-connect-pagerduty | 1.0.0-preview 127 | confluentinc/kafka-connect-pivotal-gemfire | 1.0.1 128 | confluentinc/kafka-connect-prometheus-metrics | 1.1.0-preview 129 | confluentinc/kafka-connect-rabbitmq | 1.2.0 130 | confluentinc/kafka-connect-replicator | 5.4.1 131 | confluentinc/kafka-connect-s3-source | 1.2.1 132 | confluentinc/kafka-connect-salesforce | 1.4.2 133 | confluentinc/kafka-connect-salesforce-bulk-api | 1.4.2-preview 134 | confluentinc/kafka-connect-servicenow | 1.1.2 135 | confluentinc/kafka-connect-sftp | 1.0.4 136 | confluentinc/kafka-connect-snmp | 1.0.0-preview 137 | confluentinc/kafka-connect-solace-sink | 1.1.2-preview 138 | confluentinc/kafka-connect-solace-source | 1.1.0 139 | confluentinc/kafka-connect-splunk-source | 1.0.2 140 | confluentinc/kafka-connect-sqs | 1.0.3 141 | confluentinc/kafka-connect-syslog | 1.2.6 142 | confluentinc/kafka-connect-teradata | 1.0.2 143 | confluentinc/kafka-connect-tibco-sink | 1.1.2 144 | confluentinc/kafka-connect-tibco-source | 1.1.0 145 | -------------------------------------------------------------------------------- /build_community.sh: -------------------------------------------------------------------------------- 1 | #------------------------------------------------------------------------------ 2 | # Copyright 2019 Robert Cowart 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | #------------------------------------------------------------------------------ 16 | 17 | docker build --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') -f ./community.Dockerfile -t robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_community . 18 | -------------------------------------------------------------------------------- /build_confluent.sh: -------------------------------------------------------------------------------- 1 | #------------------------------------------------------------------------------ 2 | # Copyright 2019 Robert Cowart 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | #------------------------------------------------------------------------------ 16 | 17 | docker build --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') -f ./confluent.Dockerfile -t robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_confluent . 18 | -------------------------------------------------------------------------------- /build_lab.sh: -------------------------------------------------------------------------------- 1 | #------------------------------------------------------------------------------ 2 | # Copyright 2019 Robert Cowart 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | #------------------------------------------------------------------------------ 16 | 17 | docker build --build-arg BUILD_DATE=$(date -u +'%Y-%m-%dT%H:%M:%SZ') -f ./lab.Dockerfile -t robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_lab . 18 | -------------------------------------------------------------------------------- /community.Dockerfile: -------------------------------------------------------------------------------- 1 | #------------------------------------------------------------------------------ 2 | # Copyright 2019 Robert Cowart 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 
15 | #------------------------------------------------------------------------------ 16 | 17 | FROM confluentinc/cp-kafka-connect-base 18 | 19 | ARG BUILD_DATE 20 | 21 | LABEL org.opencontainers.image.created="$BUILD_DATE" \ 22 | org.opencontainers.image.authors="elastiflow@gmail.com" \ 23 | org.opencontainers.image.url="https://hub.docker.com/r/robcowart/cp-kafka-connect-custom" \ 24 | org.opencontainers.image.documentation="https://github.com/robcowart/cp-kafka-connect-custom/README.md" \ 25 | org.opencontainers.image.source="https://github.com/robcowart/cp-kafka-connect-custom" \ 26 | org.opencontainers.image.version="6.0.0_1.0.3_community" \ 27 | org.opencontainers.image.vendor="Robert Cowart" \ 28 | org.opencontainers.image.title="cp-kafka-connect-custom" \ 29 | org.opencontainers.image.description="A custom Confluent Platform Kafka Connect container with additional community licensed connectors from Confluent Hub." 30 | 31 | RUN confluent-hub install --no-prompt apache/kafka-connect-geode:latest && \ 32 | confluent-hub install --no-prompt bkatwal/bkatwal-kafka-connect-solr-sink:latest && \ 33 | confluent-hub install --no-prompt batchsh/sink-connector:latest && \ 34 | confluent-hub install --no-prompt blueapron/kafka-connect-protobuf-converter:latest && \ 35 | confluent-hub install --no-prompt C0urante/kafka-connect-reddit:latest && \ 36 | confluent-hub install --no-prompt camunda/kafka-connect-zeebe:latest && \ 37 | confluent-hub install --no-prompt castorm/kafka-connect-http:latest && \ 38 | confluent-hub install --no-prompt chaitalisagesh/kafka-connect-log-analytics:latest && \ 39 | confluent-hub install --no-prompt cjmatta/kafka-connect-irc:latest && \ 40 | confluent-hub install --no-prompt cjmatta/kafka-connect-sse:latest && \ 41 | confluent-hub install --no-prompt confluentinc/connect-transforms:latest && \ 42 | confluent-hub install --no-prompt confluentinc/kafka-connect-avro-converter:latest && \ 43 | confluent-hub install --no-prompt confluentinc/kafka-connect-datagen:latest && \ 44 | confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest && \ 45 | confluent-hub install --no-prompt confluentinc/kafka-connect-hdfs:latest && \ 46 | confluent-hub install --no-prompt confluentinc/kafka-connect-jdbc:latest && \ 47 | confluent-hub install --no-prompt confluentinc/kafka-connect-s3:latest && \ 48 | confluent-hub install --no-prompt confluentinc/kafka-connect-vertica:latest && \ 49 | confluent-hub install --no-prompt couchbase/kafka-connect-couchbase:latest && \ 50 | confluent-hub install --no-prompt datadog/kafka-connect-logs:latest && \ 51 | confluent-hub install --no-prompt datastax/kafka-connect-cassandra-sink:latest && \ 52 | confluent-hub install --no-prompt debezium/debezium-connector-mongodb:latest && \ 53 | confluent-hub install --no-prompt debezium/debezium-connector-mysql:latest && \ 54 | confluent-hub install --no-prompt debezium/debezium-connector-postgresql:latest && \ 55 | confluent-hub install --no-prompt debezium/debezium-connector-sqlserver:latest && \ 56 | confluent-hub install --no-prompt dhananjaypatkar/kafka-connect-phoenix:latest && \ 57 | confluent-hub install --no-prompt f0xdx/kafka-connect-wrap-smt:latest && \ 58 | confluent-hub install --no-prompt fbascheper/kafka-connect-telegram:latest && \ 59 | confluent-hub install --no-prompt findinpath/kafka-connect-nested-set-jdbc-sink:latest && \ 60 | confluent-hub install --no-prompt hpgrahsl/kafka-connect-mongodb:latest && \ 61 | confluent-hub install --no-prompt 
humio/kafka-connect-hec-sink:latest && \ 62 | confluent-hub install --no-prompt jaredpetersen/kafka-connect-arangodb:latest && \ 63 | confluent-hub install --no-prompt jcustenborder/kafka-connect-aerospike:latest && \ 64 | confluent-hub install --no-prompt jcustenborder/kafka-connect-email:latest && \ 65 | confluent-hub install --no-prompt jcustenborder/kafka-connect-flume-avro:latest && \ 66 | confluent-hub install --no-prompt jcustenborder/kafka-connect-json-schema:latest && \ 67 | confluent-hub install --no-prompt jcustenborder/kafka-connect-memcached:latest && \ 68 | confluent-hub install --no-prompt jcustenborder/kafka-connect-opentsdb:latest && \ 69 | confluent-hub install --no-prompt jcustenborder/kafka-connect-redis:latest && \ 70 | confluent-hub install --no-prompt jcustenborder/kafka-connect-simulator:latest && \ 71 | confluent-hub install --no-prompt jcustenborder/kafka-connect-solr:latest && \ 72 | confluent-hub install --no-prompt jcustenborder/kafka-connect-spooldir:latest && \ 73 | confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-cobol:latest && \ 74 | confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-common:latest && \ 75 | confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-fix:latest && \ 76 | confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-maxmind:latest && \ 77 | confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-xml:latest && \ 78 | confluent-hub install --no-prompt jcustenborder/kafka-connect-twitter:latest && \ 79 | confluent-hub install --no-prompt juxt/kafka-connect-crux:latest && \ 80 | confluent-hub install --no-prompt kaliy/kafka-connect-rss:latest && \ 81 | confluent-hub install --no-prompt marklogic/kafka-marklogic-connector:latest && \ 82 | confluent-hub install --no-prompt mdrogalis/voluble:latest && \ 83 | confluent-hub install --no-prompt memsql/memsql-kafka-connector:latest && \ 84 | confluent-hub install --no-prompt microsoft/kafka-connect-iothub:latest && \ 85 | confluent-hub install --no-prompt microsoftcorporation/kafka-sink-azure-kusto:latest && \ 86 | confluent-hub install --no-prompt mmolimar/kafka-connect-fs:latest && \ 87 | confluent-hub install --no-prompt mongodb/kafka-connect-mongodb:latest && \ 88 | confluent-hub install --no-prompt neo4j/kafka-connect-neo4j:latest && \ 89 | confluent-hub install --no-prompt newrelic/newrelic-kafka-connector:latest && \ 90 | confluent-hub install --no-prompt nishutayal/kafka-connect-hbase:latest && \ 91 | confluent-hub install --no-prompt norsktipping/kafka-connect-jdbc_flatten:latest && \ 92 | confluent-hub install --no-prompt opencredo/kafka-connect-venafi:latest && \ 93 | confluent-hub install --no-prompt riferrei/kafka-connect-pulsar:latest && \ 94 | confluent-hub install --no-prompt rockset/kafka-connect-rockset:latest && \ 95 | confluent-hub install --no-prompt rudderstack/kafka-connect-rudderstack:latest && \ 96 | confluent-hub install --no-prompt sanjuthomas/kafka-connect-gcp-bigtable:latest && \ 97 | confluent-hub install --no-prompt sanjuthomas/kafka-connect-orientdb:latest && \ 98 | confluent-hub install --no-prompt ScyllaDB/kafka-connect-scylladb:latest && \ 99 | confluent-hub install --no-prompt snowflakeinc/snowflake-kafka-connector:latest && \ 100 | confluent-hub install --no-prompt splunk/kafka-connect-splunk:latest && \ 101 | confluent-hub install --no-prompt spoudinc/spoud-agoora:latest && \ 102 | confluent-hub install --no-prompt streamthoughts/kafka-connect-file-pulse:latest && \ 
103 | confluent-hub install --no-prompt thomaskwscott/kafka-connect-shell-sink:latest && \ 104 | confluent-hub install --no-prompt thomaskwscott/kafka-connect-shell-source:latest && \ 105 | confluent-hub install --no-prompt wepay/kafka-connect-bigquery:latest && \ 106 | confluent-hub install --no-prompt yugabyteinc/yb-kafka-connector:latest && \ 107 | confluent-hub install --no-prompt zeebe-io/kafka-connect-zeebe:latest 108 | -------------------------------------------------------------------------------- /confluent.Dockerfile: -------------------------------------------------------------------------------- 1 | #------------------------------------------------------------------------------ 2 | # Copyright 2019 Robert Cowart 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | #------------------------------------------------------------------------------ 16 | 17 | FROM robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_community 18 | 19 | ARG BUILD_DATE 20 | 21 | LABEL org.opencontainers.image.created="$BUILD_DATE" \ 22 | org.opencontainers.image.authors="elastiflow@gmail.com" \ 23 | org.opencontainers.image.url="https://hub.docker.com/r/robcowart/cp-kafka-connect-custom" \ 24 | org.opencontainers.image.documentation="https://github.com/robcowart/cp-kafka-connect-custom/README.md" \ 25 | org.opencontainers.image.source="https://github.com/robcowart/cp-kafka-connect-custom" \ 26 | org.opencontainers.image.version="6.0.0_1.0.3_confluent" \ 27 | org.opencontainers.image.vendor="Robert Cowart" \ 28 | org.opencontainers.image.title="cp-kafka-connect-custom" \ 29 | org.opencontainers.image.description="A custom Confluent Platform Kafka Connect container with additional community and Confluent licensed connectors from Confluent Hub." 
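# Example (a sketch, not part of the published build): either image can be
# extended the same way by deriving a new Dockerfile from it and installing
# further plugins. The connector id below is a placeholder; pinning an explicit
# version instead of :latest makes rebuilds reproducible.
#
#   FROM robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_confluent
#   RUN confluent-hub install --no-prompt <owner>/<connector>:1.2.3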
30 | 31 | RUN confluent-hub install --no-prompt confluentinc/connect-transforms:latest && \ 32 | confluent-hub install --no-prompt confluentinc/kafka-connect-activemq:latest && \ 33 | confluent-hub install --no-prompt confluentinc/kafka-connect-activemq-sink:latest && \ 34 | confluent-hub install --no-prompt confluentinc/kafka-connect-amps:latest && \ 35 | confluent-hub install --no-prompt confluentinc/kafka-connect-appdynamics-metrics:latest && \ 36 | confluent-hub install --no-prompt confluentinc/kafka-connect-aws-cloudwatch-logs:latest && \ 37 | confluent-hub install --no-prompt confluentinc/kafka-connect-aws-cloudwatch-metrics:latest && \ 38 | confluent-hub install --no-prompt confluentinc/kafka-connect-aws-dynamodb:latest && \ 39 | confluent-hub install --no-prompt confluentinc/kafka-connect-aws-lambda:latest && \ 40 | confluent-hub install --no-prompt confluentinc/kafka-connect-aws-redshift:latest && \ 41 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-blob-storage:latest && \ 42 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-blob-storage-source:latest && \ 43 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-data-lake-gen1-storage:latest && \ 44 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-data-lake-gen2-storage:latest && \ 45 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-event-hubs:latest && \ 46 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-functions:latest && \ 47 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-search:latest && \ 48 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-service-bus:latest && \ 49 | confluent-hub install --no-prompt confluentinc/kafka-connect-azure-sql-dw:latest && \ 50 | confluent-hub install --no-prompt confluentinc/kafka-connect-cassandra:latest && \ 51 | confluent-hub install --no-prompt confluentinc/kafka-connect-data-diode:latest && \ 52 | confluent-hub install --no-prompt confluentinc/kafka-connect-datadog-metrics:latest && \ 53 | confluent-hub install --no-prompt confluentinc/kafka-connect-firebase:latest && \ 54 | confluent-hub install --no-prompt confluentinc/kafka-connect-ftps:latest && \ 55 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcp-bigtable:latest && \ 56 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcp-dataproc-sink:latest && \ 57 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcp-functions:latest && \ 58 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcp-pubsub:latest && \ 59 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcp-spanner:latest && \ 60 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcs:latest && \ 61 | confluent-hub install --no-prompt confluentinc/kafka-connect-gcs-source:latest && \ 62 | confluent-hub install --no-prompt confluentinc/kafka-connect-github:latest && \ 63 | confluent-hub install --no-prompt confluentinc/kafka-connect-hbase:latest && \ 64 | confluent-hub install --no-prompt confluentinc/kafka-connect-hdfs2-source:latest && \ 65 | confluent-hub install --no-prompt confluentinc/kafka-connect-hdfs3:latest && \ 66 | confluent-hub install --no-prompt confluentinc/kafka-connect-hdfs3-source:latest && \ 67 | confluent-hub install --no-prompt confluentinc/kafka-connect-http:latest && \ 68 | confluent-hub install --no-prompt confluentinc/kafka-connect-ibmmq:latest && \ 69 | confluent-hub install --no-prompt 
confluentinc/kafka-connect-ibmmq-sink:latest && \ 70 | confluent-hub install --no-prompt confluentinc/kafka-connect-influxdb:latest && \ 71 | confluent-hub install --no-prompt confluentinc/kafka-connect-jira:latest && \ 72 | confluent-hub install --no-prompt confluentinc/kafka-connect-jms:latest && \ 73 | confluent-hub install --no-prompt confluentinc/kafka-connect-jms-sink:latest && \ 74 | confluent-hub install --no-prompt confluentinc/kafka-connect-kinesis:latest && \ 75 | confluent-hub install --no-prompt confluentinc/kafka-connect-kudu:latest && \ 76 | confluent-hub install --no-prompt confluentinc/kafka-connect-maprdb:latest && \ 77 | confluent-hub install --no-prompt confluentinc/kafka-connect-marketo:latest && \ 78 | confluent-hub install --no-prompt confluentinc/kafka-connect-mqtt:latest && \ 79 | confluent-hub install --no-prompt confluentinc/kafka-connect-netezza:latest && \ 80 | confluent-hub install --no-prompt confluentinc/kafka-connect-omnisci:latest && \ 81 | confluent-hub install --no-prompt confluentinc/kafka-connect-pagerduty:latest && \ 82 | confluent-hub install --no-prompt confluentinc/kafka-connect-pivotal-gemfire:latest && \ 83 | confluent-hub install --no-prompt confluentinc/kafka-connect-prometheus-metrics:latest && \ 84 | confluent-hub install --no-prompt confluentinc/kafka-connect-rabbitmq:latest && \ 85 | confluent-hub install --no-prompt confluentinc/kafka-connect-rabbitmq-sink:latest && \ 86 | confluent-hub install --no-prompt confluentinc/kafka-connect-replicator:latest && \ 87 | confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:latest && \ 88 | confluent-hub install --no-prompt confluentinc/kafka-connect-salesforce:latest && \ 89 | confluent-hub install --no-prompt confluentinc/kafka-connect-salesforce-bulk-api:latest && \ 90 | confluent-hub install --no-prompt confluentinc/kafka-connect-servicenow:latest && \ 91 | confluent-hub install --no-prompt confluentinc/kafka-connect-sftp:latest && \ 92 | confluent-hub install --no-prompt confluentinc/kafka-connect-snmp:latest && \ 93 | confluent-hub install --no-prompt confluentinc/kafka-connect-solace-sink:latest && \ 94 | confluent-hub install --no-prompt confluentinc/kafka-connect-solace-source:latest && \ 95 | confluent-hub install --no-prompt confluentinc/kafka-connect-splunk-source:latest && \ 96 | confluent-hub install --no-prompt confluentinc/kafka-connect-sqs:latest && \ 97 | confluent-hub install --no-prompt confluentinc/kafka-connect-syslog:latest && \ 98 | confluent-hub install --no-prompt confluentinc/kafka-connect-teradata:latest && \ 99 | confluent-hub install --no-prompt confluentinc/kafka-connect-tibco-sink:latest && \ 100 | confluent-hub install --no-prompt confluentinc/kafka-connect-tibco-source:latest && \ 101 | confluent-hub install --no-prompt confluentinc/kafka-connect-vertica:latest && \ 102 | confluent-hub install --no-prompt confluentinc/kafka-connect-zendesk:latest 103 | -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | #------------------------------------------------------------------------------ 2 | # Copyright 2019 Robert Cowart 3 | # 4 | # Licensed under the Apache License, Version 2.0 (the "License"); 5 | # you may not use this file except in compliance with the License. 
6 | # You may obtain a copy of the License at 7 | # 8 | # http://www.apache.org/licenses/LICENSE-2.0 9 | # 10 | # Unless required by applicable law or agreed to in writing, software 11 | # distributed under the License is distributed on an "AS IS" BASIS, 12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | # See the License for the specific language governing permissions and 14 | # limitations under the License. 15 | #------------------------------------------------------------------------------ 16 | 17 | version: '3' 18 | services: 19 | cp-kafka-connect: 20 | image: robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_community 21 | container_name: cp-kafka-connect 22 | restart: unless-stopped 23 | network_mode: bridge 24 | ports: 25 | - 8083:8083/tcp 26 | environment: 27 | ########## General ########## 28 | 29 | # group.id 30 | # A unique string that identifies the Connect cluster group this worker belongs to. 31 | # Type: string 32 | CONNECT_GROUP_ID: 'connect-cluster' 33 | 34 | # client.id 35 | # An id string to pass to the server when making requests. The purpose of this is to be able to track the 36 | # source of requests beyond just ip/port by allowing a logical application name to be included in server-side 37 | # request logging. 38 | # Type: string 39 | # Default: "" 40 | #CONNECT_CLIENT_ID: '' 41 | 42 | # bootstrap.servers 43 | # A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client 44 | # will make use of all servers irrespective of which servers are specified here for bootstrapping—this list 45 | # only impacts the initial hosts used to discover the full set of servers. This list should be in the form 46 | # host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the 47 | # full cluster membership (which may change dynamically), this list need not contain the full set of servers 48 | # (you may want more than one, though, in case a server is down). 49 | # Type: list 50 | # Default: localhost:9092 51 | CONNECT_BOOTSTRAP_SERVERS: '192.2.0.11:9092,192.2.0.12:9092,192.2.0.13:9092' 52 | 53 | # security.protocol 54 | # Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. 55 | # Type: string 56 | # Default: PLAINTEXT 57 | CONNECT_SECURITY_PROTOCOL: 'PLAINTEXT' 58 | 59 | # plugin.path 60 | # List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The 61 | # list should consist of top level directories that include any combination of: a) directories immediately 62 | # containing jars with plugins and their dependencies b) uber-jars with plugins and their dependencies c) 63 | # directories immediately containing the package directory structure of classes of plugins and their 64 | # dependencies Note: symlinks will be followed to discover dependencies or plugins. Examples: 65 | # plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors 66 | # Type: list 67 | # Default: null 68 | CONNECT_PLUGIN_PATH: '/usr/share/java,/usr/share/confluent-hub-components' 69 | 70 | # connect.protocol 71 | # Compatibility mode for Kafka Connect Protocol 72 | # Type: string 73 | # Default: compatible [eager, compatible] 74 | #CONNECT_CONNECT_PROTOCOL: 'compatible' 75 | 76 | 77 | 78 | ########## REST ########## 79 | 80 | # listeners 81 | # List of comma-separated URIs the REST API will listen on. The supported protocols are HTTP and HTTPS. 
Specify 82 | # hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of 83 | # legal listener lists: HTTP://myhost:8083,HTTPS://myhost:8084 84 | # Type: list 85 | # Default: null 86 | CONNECT_LISTENERS: 'http://0.0.0.0:8083' 87 | 88 | # rest.host.name 89 | # Hostname for the REST API. If this is set, it will only bind to this interface. 90 | # Type: string 91 | # Default: null 92 | CONNECT_REST_HOST_NAME: '0.0.0.0' 93 | 94 | # rest.port 95 | # Port for the REST API to listen on. 96 | # Type: int 97 | # Default: 8083 98 | CONNECT_REST_PORT: 8083 99 | 100 | 101 | # rest.advertised.listener 102 | # Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use. 103 | # Type: string 104 | # Default: null 105 | CONNECT_REST_ADVERTISED_LISTENER: 'http' 106 | 107 | # rest.advertised.host.name 108 | # If this is set, this is the hostname that will be given out to other workers to connect to. 109 | # Type: string 110 | # Default: null 111 | CONNECT_REST_ADVERTISED_HOST_NAME: '192.2.0.11' 112 | 113 | # rest.advertised.port 114 | # If this is set, this is the port that will be given out to other workers to connect to. 115 | # Type: int 116 | # Default: null 117 | CONNECT_REST_ADVERTISED_PORT: 8083 118 | 119 | 120 | # rest.extension.classes 121 | # Comma-separated names of ConnectRestExtension classes, loaded and called in the order specified. Implementing 122 | # the interface ConnectRestExtension allows you to inject into Connect's REST API user defined resources like 123 | # filters. Typically used to add custom capability like logging, security, etc. 124 | # Type: list 125 | # Default: "" 126 | #CONNECT_REST_EXTENSION_CLASSES: '' 127 | 128 | 129 | 130 | ########## Buffers ########## 131 | 132 | # receive.buffer.bytes 133 | # The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default 134 | # will be used. 135 | # Type: int 136 | # Default: 32768 [0,...] 137 | #CONNECT_RECEIVE_BUFFER_BYTES: 32768 138 | 139 | # send.buffer.bytes 140 | # The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will 141 | # be used. 142 | # Type: int 143 | # Default: 131072 [0,...] 144 | #CONNECT_SEND_BUFFER_BYTES: 131072 145 | 146 | 147 | 148 | ########## Storage Topics ########## 149 | 150 | # config.storage.topic 151 | # The name of the Kafka topic where connector configurations are stored 152 | # Type: string 153 | CONNECT_CONFIG_STORAGE_TOPIC: 'connect-config' 154 | 155 | # config.storage.replication.factor 156 | # Replication factor used when creating the configuration storage topic 157 | # Type: short 158 | # Default: 3 [1,...] 159 | CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 2 160 | 161 | 162 | # offset.storage.topic 163 | # The name of the Kafka topic where connector offsets are stored 164 | # Type: string 165 | CONNECT_OFFSET_STORAGE_TOPIC: 'connect-offset' 166 | 167 | # offset.storage.partitions 168 | # The number of partitions used when creating the offset storage topic 169 | # Type: int 170 | # Default: 25 [1,...] 171 | CONNECT_OFFSET_STORAGE_PARTITIONS: 12 172 | 173 | # offset.storage.replication.factor 174 | # Replication factor used when creating the offset storage topic 175 | # Type: short 176 | # Default: 3 [1,...] 177 | CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 2 178 | 179 | # offset.flush.interval.ms 180 | # Interval at which to try committing offsets for tasks.
181 | # Type: long 182 | # Default: 60000 183 | CONNECT_OFFSET_FLUSH_INTERVAL_MS: 60000 184 | 185 | # offset.flush.timeout.ms 186 | # Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to 187 | # offset storage before cancelling the process and restoring the offset data to be committed in a future 188 | # attempt. 189 | # Type: long 190 | # Default: 5000 191 | CONNECT_OFFSET_FLUSH_TIMEOUT_MS: 5000 192 | 193 | # offset.storage.file.filename 194 | # The file to store connector offsets in. By storing offsets on disk, a standalone process can be stopped and 195 | # started on a single node and resume where it previously left off. 196 | # Type: string 197 | # Default: "" 198 | #CONNECT_OFFSET_STORAGE_FILE_FILENAME: '' 199 | 200 | # status.storage.topic 201 | # The name of the Kafka topic where connector and task status are stored 202 | # Type: string 203 | CONNECT_STATUS_STORAGE_TOPIC: 'connect-status' 204 | 205 | # status.storage.partitions 206 | # The number of partitions used when creating the status storage topic 207 | # Type: int 208 | # Default: 5 [1,...] 209 | CONNECT_STATUS_STORAGE_PARTITIONS: 3 210 | 211 | # status.storage.replication.factor 212 | # Replication factor used when creating the status storage topic 213 | # Type: short 214 | # Default: 3 [1,...] 215 | CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 2 216 | 217 | 218 | 219 | ########## Converters ########## 220 | 221 | # key.converter 222 | # Converter class used to convert between Kafka Connect format and the serialized form that is written to 223 | # Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is 224 | # independent of connectors it allows any connector to work with any serialization format. Examples of common 225 | # formats include JSON and Avro. 226 | # Type: class 227 | CONNECT_KEY_CONVERTER: 'org.apache.kafka.connect.json.JsonConverter' 228 | 229 | # value.converter 230 | # Converter class used to convert between Kafka Connect format and the serialized form that is written to 231 | # Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is 232 | # independent of connectors it allows any connector to work with any serialization format. Examples of common 233 | # formats include JSON and Avro. 234 | # Type: class 235 | CONNECT_VALUE_CONVERTER: 'org.apache.kafka.connect.json.JsonConverter' 236 | 237 | # internal.key.converter 238 | # Converter class used to convert between Kafka Connect format and the serialized form that is written to 239 | # Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is 240 | # independent of connectors it allows any connector to work with any serialization format. Examples of common 241 | # formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by 242 | # the framework, such as configs and offsets, so users can typically use any functioning Converter 243 | # implementation. Deprecated; will be removed in an upcoming version. 244 | # Type: class 245 | # Default: org.apache.kafka.connect.json.JsonConverter 246 | CONNECT_INTERNAL_KEY_CONVERTER: 'org.apache.kafka.connect.json.JsonConverter' 247 | 248 | # internal.value.converter 249 | # Converter class used to convert between Kafka Connect format and the serialized form that is written to 250 | # Kafka. 
This controls the format of the values in messages written to or read from Kafka, and since this is 251 | # independent of connectors it allows any connector to work with any serialization format. Examples of common 252 | # formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by 253 | # the framework, such as configs and offsets, so users can typically use any functioning Converter 254 | # implementation. Deprecated; will be removed in an upcoming version. 255 | # Type: class 256 | # Default: org.apache.kafka.connect.json.JsonConverter 257 | CONNECT_INTERNAL_VALUE_CONVERTER: 'org.apache.kafka.connect.json.JsonConverter' 258 | 259 | 260 | 261 | ########## Access Control ########## 262 | 263 | # access.control.allow.methods 264 | # Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The 265 | # default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD. 266 | # Type: string 267 | # Default: "" 268 | #CONNECT_ACCESS_CONTROL_ALLOW_METHODS: '' 269 | 270 | # access.control.allow.origin 271 | # Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, 272 | # set this to the domain of the application that should be permitted to access the API, or '*' to allow access 273 | # from any domain. The default value only allows access from the domain of the REST API. 274 | # Type: string 275 | # Default: "" 276 | #CONNECT_ACCESS_CONTROL_ALLOW_ORIGIN: '' 277 | 278 | 279 | 280 | ########## SSL ########## 281 | 282 | # ssl.protocol 283 | # The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. 284 | # Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older 285 | # JVMs, but their usage is discouraged due to known security vulnerabilities. 286 | # Type: string 287 | # Default: TLS 288 | #CONNECT_SSL_PROTOCOL: 'TLS' 289 | 290 | # ssl.enabled.protocols 291 | # The list of protocols enabled for SSL connections. 292 | # Type: list 293 | # Default: TLSv1.2,TLSv1.1,TLSv1 294 | #CONNECT_SSL_ENABLED_PROTOCOLS: 'TLSv1.2,TLSv1.1,TLSv1' 295 | 296 | # ssl.provider 297 | # The name of the security provider used for SSL connections. Default value is the default security provider of 298 | # the JVM. 299 | # Type: string 300 | # Default: null 301 | #CONNECT_SSL_PROVIDER: '' 302 | 303 | # ssl.cipher.suites 304 | # A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange 305 | # algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. 306 | # By default all the available cipher suites are supported. 307 | # Type: list 308 | # Default: null 309 | #CONNECT_SSL_CIPHER_SUITES: '' 310 | 311 | # ssl.key.password 312 | # The password of the private key in the key store file. This is optional for client. 313 | # Type: password 314 | # Default: null 315 | #CONNECT_SSL_KEY_PASSWORD: '' 316 | 317 | # ssl.keystore.location 318 | # The location of the key store file. This is optional for client and can be used for two-way authentication 319 | # for client. 320 | # Type: string 321 | # Default: null 322 | #CONNECT_SSL_KEYSTORE_LOCATION: '' 323 | 324 | # ssl.keystore.password 325 | # The store password for the key store file. This is optional for client and only needed if 326 | # ssl.keystore.location is configured.
327 | # Type: password 328 | # Default: null 329 | #CONNECT_SSL_KEYSTORE_PASSWORD: '' 330 | 331 | # ssl.keystore.type 332 | # The file format of the key store file. This is optional for client. 333 | # Type: string 334 | # Default: JKS 335 | #CONNECT_SSL_KEYSTORE_TYPE: 'JKS' 336 | 337 | # ssl.truststore.location 338 | # The location of the trust store file. 339 | # Type: string 340 | # Default: null 341 | #CONNECT_SSL_TRUSTSTORE_LOCATION: '' 342 | 343 | # ssl.truststore.password 344 | # The password for the trust store file. If a password is not set access to the truststore is still available, 345 | # but integrity checking is disabled. 346 | # Type: password 347 | # Default: null 348 | #CONNECT_SSL_TRUSTSTORE_PASSWORD: '' 349 | 350 | # ssl.truststore.type 351 | # The file format of the trust store file. 352 | # Type: string 353 | # Default: JKS 354 | #CONNECT_SSL_TRUSTSTORE_TYPE: 'JKS' 355 | 356 | # ssl.keymanager.algorithm 357 | # The algorithm used by key manager factory for SSL connections. Default value is the key manager factory 358 | # algorithm configured for the Java Virtual Machine. 359 | # Type: string 360 | # Default: SunX509 361 | #CONNECT_SSL_KEYMANAGER_ALGORITHM: 'SunX509' 362 | 363 | # ssl.trustmanager.algorithm 364 | # The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory 365 | # algorithm configured for the Java Virtual Machine. 366 | # Type: string 367 | # Default: PKIX 368 | #CONNECT_SSL_TRUSTMANAGER_ALGORITHM: 'PKIX' 369 | 370 | # ssl.client.auth 371 | # Configures the Kafka broker to request client authentication. The following settings are common: 372 | # required - If set to required client authentication is required. 373 | # requested - This means client authentication is optional. Unlike required, if this option is set the client 374 | # can choose not to provide authentication information about itself. 375 | # none - This means client authentication is not needed. 376 | # Type: string 377 | # Default: none 378 | #CONNECT_SSL_CLIENT_AUTH: 'none' 379 | 380 | # ssl.endpoint.identification.algorithm 381 | # The endpoint identification algorithm to validate server hostname using server certificate. 382 | # Type: string 383 | # Default: https 384 | #CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: 'https' 385 | 386 | # ssl.secure.random.implementation 387 | # The SecureRandom PRNG implementation to use for SSL cryptography operations. 388 | # Type: string 389 | # Default: null 390 | #CONNECT_SSL_SECURE_RANDOM_IMPLEMENTATION: '' 391 | 392 | 393 | 394 | ########## SASL ########## 395 | 396 | # sasl.client.callback.handler.class 397 | # The fully qualified name of a SASL client callback handler class that implements the 398 | # AuthenticateCallbackHandler interface. 399 | # Type: class 400 | # Default: null 401 | #CONNECT_SASL_CLIENT_CALLBACK_HANDLER_CLASS: '' 402 | 403 | # sasl.jaas.config 404 | # JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS 405 | # configuration file format is described here. The format for the value is: 'loginModuleClass controlFlag 406 | # (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism 407 | # name in lower-case. For example, 408 | # listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required; 409 | # Type: password 410 | # Default: null 411 | #CONNECT_SASL_JAAS_CONFIG: '' 412 | 413 | # sasl.mechanism 414 | # SASL mechanism used for client connections.
392 | 
393 | 
394 | ########## SASL ##########
395 | 
396 | # sasl.client.callback.handler.class
397 | # The fully qualified name of a SASL client callback handler class that implements the
398 | # AuthenticateCallbackHandler interface.
399 | # Type: class
400 | # Default: null
401 | #CONNECT_SASL_CLIENT_CALLBACK_HANDLER_CLASS: ''
402 | 
403 | # sasl.jaas.config
404 | # JAAS login context parameters for SASL connections in the format used by JAAS configuration files (see the
405 | # Java JAAS documentation for the file format). The format for the value is: 'loginModuleClass controlFlag
406 | # (optionName=optionValue)*;'. For brokers, the config must be prefixed with the listener prefix and SASL
407 | # mechanism name in lower-case. For example,
408 | # listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;
409 | # Type: password
410 | # Default: null
411 | #CONNECT_SASL_JAAS_CONFIG: ''
412 | 
413 | # sasl.mechanism
414 | # SASL mechanism used for client connections. This may be any mechanism for which a security provider is
415 | # available. GSSAPI is the default mechanism.
416 | # Type: string
417 | # Default: GSSAPI
418 | #CONNECT_SASL_MECHANISM: 'GSSAPI'
419 | 
420 | # sasl.kerberos.service.name
421 | # The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in
422 | # Kafka's config.
423 | # Type: string
424 | # Default: null
425 | #CONNECT_SASL_KERBEROS_SERVICE_NAME: ''
426 | 
427 | # sasl.kerberos.kinit.cmd
428 | # Kerberos kinit command path.
429 | # Type: string
430 | # Default: /usr/bin/kinit
431 | #CONNECT_SASL_KERBEROS_KINIT_CMD: '/usr/bin/kinit'
432 | 
433 | # sasl.kerberos.min.time.before.relogin
434 | # Login thread sleep time between refresh attempts.
435 | # Type: long
436 | # Default: 60000
437 | #CONNECT_SASL_KERBEROS_MIN_TIME_BEFORE_RELOGIN: 60000
438 | 
439 | # sasl.kerberos.ticket.renew.jitter
440 | # Percentage of random jitter added to the renewal time.
441 | # Type: double
442 | # Default: 0.05
443 | #CONNECT_SASL_KERBEROS_TICKET_RENEW_JITTER: 0.05
444 | 
445 | # sasl.kerberos.ticket.renew.window.factor
446 | # The login thread will sleep until the specified window factor of time from the last refresh to the ticket's
447 | # expiry has been reached, at which time it will try to renew the ticket.
448 | # Type: double
449 | # Default: 0.8
450 | #CONNECT_SASL_KERBEROS_TICKET_RENEW_WINDOW_FACTOR: 0.8
451 | 
452 | # sasl.login.callback.handler.class
453 | # The fully qualified name of a SASL login callback handler class that implements the
454 | # AuthenticateCallbackHandler interface. For brokers, the login callback handler config must be prefixed with
455 | # the listener prefix and SASL mechanism name in lower-case. For example,
456 | # listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler
457 | # Type: class
458 | # Default: null
459 | #CONNECT_SASL_LOGIN_CALLBACK_HANDLER_CLASS: ''
460 | 
461 | # sasl.login.class
462 | # The fully qualified name of a class that implements the Login interface. For brokers, the login config must
463 | # be prefixed with the listener prefix and SASL mechanism name in lower-case. For example,
464 | # listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin
465 | # Type: class
466 | # Default: null
467 | #CONNECT_SASL_LOGIN_CLASS: ''
468 | 
469 | # sasl.login.refresh.buffer.seconds
470 | # The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds.
471 | # If a refresh would otherwise occur closer to expiration than the number of buffer seconds, then the refresh
472 | # will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600
473 | # (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and
474 | # sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a
475 | # credential. Currently applies only to OAUTHBEARER.
476 | # Type: short
477 | # Default: 300 [0,...,3600]
478 | #CONNECT_SASL_LOGIN_REFRESH_BUFFER_SECONDS: 300
479 | 
480 | # sasl.login.refresh.min.period.seconds
481 | # The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds.
482 | # Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is
483 | # specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the
484 | # remaining lifetime of a credential. Currently applies only to OAUTHBEARER.
485 | # Type: short
486 | # Default: 60 [0,...,900]
487 | #CONNECT_SASL_LOGIN_REFRESH_MIN_PERIOD_SECONDS: 60
488 | 
489 | # sasl.login.refresh.window.factor
490 | # Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has
491 | # been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0
492 | # (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to
493 | # OAUTHBEARER.
494 | # Type: double
495 | # Default: 0.8 [0.5,...,1.0]
496 | #CONNECT_SASL_LOGIN_REFRESH_WINDOW_FACTOR: 0.8
497 | 
498 | # sasl.login.refresh.window.jitter
499 | # The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh
500 | # thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is
501 | # used if no value is specified. Currently applies only to OAUTHBEARER.
502 | # Type: double
503 | # Default: 0.05 [0.0,...,0.25]
504 | #CONNECT_SASL_LOGIN_REFRESH_WINDOW_JITTER: 0.05
505 | 
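# --- Illustrative example (not part of the original file) ---
# A minimal sketch of SASL/SCRAM authentication for the worker's connections to the brokers,
# assuming a hypothetical 'connect' user. The security.protocol setting and the credentials shown
# here are assumptions for illustration; real credentials belong in an externalized secret.
#CONNECT_SECURITY_PROTOCOL: 'SASL_SSL'
#CONNECT_SASL_MECHANISM: 'SCRAM-SHA-256'
#CONNECT_SASL_JAAS_CONFIG: 'org.apache.kafka.common.security.scram.ScramLoginModule required username="connect" password="connect-secret";'
# -------------------------------------------------------------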
506 | 
507 | 
508 | ########## Timeouts and Backoffs ##########
509 | 
510 | # connections.max.idle.ms
511 | # Close idle connections after the number of milliseconds specified by this config.
512 | # Type: long
513 | # Default: 540000
514 | #CONNECT_CONNECTIONS_MAX_IDLE_MS: 540000
515 | 
516 | # heartbeat.interval.ms
517 | # The expected time between heartbeats to the group coordinator when using Kafka's group management facilities.
518 | # Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new
519 | # members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be
520 | # set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal
521 | # rebalances.
522 | # Type: int
523 | # Default: 3000
524 | #CONNECT_HEARTBEAT_INTERVAL_MS: 3000
525 | 
526 | # metadata.max.age.ms
527 | # The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any
528 | # partition leadership changes, to proactively discover any new brokers or partitions.
529 | # Type: long
530 | # Default: 300000 [0,...]
531 | #CONNECT_METADATA_MAX_AGE_MS: 300000
532 | 
533 | # rebalance.timeout.ms
534 | # The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a
535 | # limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout
536 | # is exceeded, then the worker will be removed from the group, which will cause offset commit failures.
537 | # Type: int
538 | # Default: 60000
539 | #CONNECT_REBALANCE_TIMEOUT_MS: 60000
540 | 
541 | # reconnect.backoff.max.ms
542 | # The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed
543 | # to connect. If provided, the backoff per host will increase exponentially for each consecutive connection
544 | # failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid
545 | # connection storms.
546 | # Type: long
547 | # Default: 1000 [0,...]
548 | #CONNECT_RECONNECT_BACKOFF_MAX_MS: 1000
549 | 
550 | # reconnect.backoff.ms
551 | # The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly
552 | # connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a
553 | # broker.
554 | # Type: long
555 | # Default: 50 [0,...]
556 | #CONNECT_RECONNECT_BACKOFF_MS: 50
557 | 
558 | # request.timeout.ms
559 | # This configuration controls the maximum amount of time the client will wait for the response to a request.
560 | # If the response is not received before the timeout elapses, the client will resend the request if necessary
561 | # or fail the request if retries are exhausted.
562 | # Type: int
563 | # Default: 40000 [0,...]
564 | #CONNECT_REQUEST_TIMEOUT_MS: 40000
565 | 
566 | # retry.backoff.ms
567 | # The amount of time to wait before attempting to retry a failed request to a given topic partition. This
568 | # avoids repeatedly sending requests in a tight loop under some failure scenarios.
569 | # Type: long
570 | # Default: 100 [0,...]
571 | #CONNECT_RETRY_BACKOFF_MS: 100
572 | 
573 | # scheduled.rebalance.max.delay.ms
574 | # The maximum delay that is scheduled in order to wait for the return of one or more departed workers before
575 | # rebalancing and reassigning their connectors and tasks to the group.
576 | # Type: int
577 | # Default: 300000 [0,...,2147483647]
578 | #CONNECT_SCHEDULED_REBALANCE_MAX_DELAY_MS: 300000
579 | 
580 | # session.timeout.ms
581 | # The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to
582 | # the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then
583 | # the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the
584 | # allowable range as configured in the broker configuration by group.min.session.timeout.ms and
585 | # group.max.session.timeout.ms.
586 | # Type: int
587 | # Default: 10000
588 | #CONNECT_SESSION_TIMEOUT_MS: 10000
589 | 
590 | # task.shutdown.graceful.timeout.ms
591 | # Amount of time to wait for tasks to shutdown gracefully. This is the total amount of time, not per task.
592 | # Shutdown is triggered for all tasks, then they are waited on sequentially.
593 | # Type: long
594 | # Default: 5000
595 | #CONNECT_TASK_SHUTDOWN_GRACEFUL_TIMEOUT_MS: 5000
596 | 
597 | # worker.sync.timeout.ms
598 | # When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this
599 | # amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.
600 | # Type: int
601 | # Default: 3000
602 | #CONNECT_WORKER_SYNC_TIMEOUT_MS: 3000
603 | 
604 | # worker.unsync.backoff.ms
605 | # When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave
606 | # the Connect cluster for this long before rejoining.
607 | # Type: int
608 | # Default: 300000
609 | #CONNECT_WORKER_UNSYNC_BACKOFF_MS: 300000
610 | 
611 | 
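# --- Illustrative example (not part of the original file) ---
# A minimal sketch of tuning worker failure detection, keeping heartbeat.interval.ms at no more
# than 1/3 of session.timeout.ms as the note above recommends. The values are assumptions chosen
# to make failure detection more tolerant (e.g., over a flaky network), not recommendations for
# every deployment, and session.timeout.ms must stay within the broker's configured range.
#CONNECT_SESSION_TIMEOUT_MS: 30000
#CONNECT_HEARTBEAT_INTERVAL_MS: 10000
# -------------------------------------------------------------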
612 | ########## Additional Configuration ##########
613 | 
614 | # client.dns.lookup
615 | # Controls how the client uses DNS lookups. If set to use_all_dns_ips then, when the lookup returns multiple IP
616 | # addresses for a hostname, the client will attempt to connect to all of them before failing the connection.
617 | # Applies to both bootstrap and advertised servers. If the value is resolve_canonical_bootstrap_servers_only,
618 | # each entry will be resolved and expanded into a list of canonical names.
619 | # Type: string
620 | # Default: default [default, use_all_dns_ips, resolve_canonical_bootstrap_servers_only]
621 | #CONNECT_CLIENT_DNS_LOOKUP: 'default'
622 | 
623 | # config.providers
624 | # Comma-separated names of ConfigProvider classes, loaded and used in the order specified. Implementing the
625 | # ConfigProvider interface allows you to replace variable references in connector configurations, such as for
626 | # externalized secrets.
627 | # Type: list
628 | # Default: ""
629 | #CONNECT_CONFIG_PROVIDERS: ''
630 | 
631 | # connector.client.config.override.policy
632 | # Class name or alias of an implementation of ConnectorClientConfigOverridePolicy. Defines which client
633 | # configurations can be overridden by the connector. The default implementation is `None`. The other possible
634 | # policies in the framework include `All` and `Principal`.
635 | # Type: string
636 | # Default: None
637 | #CONNECT_CONNECTOR_CLIENT_CONFIG_OVERRIDE_POLICY: 'None'
638 | 
639 | # header.converter
640 | # HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to
641 | # Kafka. This controls the format of the header values in messages written to or read from Kafka, and since
642 | # this is independent of connectors it allows any connector to work with any serialization format. Examples of
643 | # common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header
644 | # values to strings and deserialize them by inferring the schemas.
645 | # Type: class
646 | # Default: org.apache.kafka.connect.storage.SimpleHeaderConverter
647 | #CONNECT_HEADER_CONVERTER: 'org.apache.kafka.connect.storage.SimpleHeaderConverter'
648 | 
649 | 
650 | 
651 | ########## Metric Reporter ##########
652 | 
653 | # metric.reporters
654 | # A list of classes to use as metrics reporters. Implementing the
655 | # org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of
656 | # new metric creation. The JmxReporter is always included to register JMX statistics.
657 | # Type: list
658 | # Default: ""
659 | #CONNECT_METRIC_REPORTERS: ''
660 | 
661 | # metrics.num.samples
662 | # The number of samples maintained to compute metrics.
663 | # Type: int
664 | # Default: 2 [1,...]
665 | #CONNECT_METRICS_NUM_SAMPLES: 2
666 | 
667 | # metrics.recording.level
668 | # The highest recording level for metrics.
669 | # Type: string
670 | # Default: INFO [INFO, DEBUG]
671 | #CONNECT_METRICS_RECORDING_LEVEL: 'INFO'
672 | 
673 | # metrics.sample.window.ms
674 | # The window of time a metrics sample is computed over.
675 | # Type: long
676 | # Default: 30000 [0,...]
677 | #CONNECT_METRICS_SAMPLE_WINDOW_MS: 30000
678 | 
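# --- Illustrative example (not part of the original file) ---
# A minimal sketch of externalizing secrets via the config.providers mechanism described above,
# using Kafka's built-in FileConfigProvider. The provider name 'file' and the properties path are
# assumptions; connector configs could then reference values as
# ${file:/etc/kafka/secrets/connect.properties:some.key} instead of embedding them in plain text.
#CONNECT_CONFIG_PROVIDERS: 'file'
#CONNECT_CONFIG_PROVIDERS_FILE_CLASS: 'org.apache.kafka.common.config.provider.FileConfigProvider'
# -------------------------------------------------------------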
--------------------------------------------------------------------------------
/lab.Dockerfile:
--------------------------------------------------------------------------------
1 | #------------------------------------------------------------------------------
2 | # Copyright 2019 Robert Cowart
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | #   http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | #------------------------------------------------------------------------------
16 | 
17 | FROM confluentinc/cp-kafka-connect-base:6.0.0
18 | 
19 | ARG BUILD_DATE
20 | 
21 | LABEL org.opencontainers.image.created="$BUILD_DATE" \
22 |       org.opencontainers.image.authors="elastiflow@gmail.com" \
23 |       org.opencontainers.image.url="https://hub.docker.com/r/robcowart/cp-kafka-connect-custom" \
24 |       org.opencontainers.image.documentation="https://github.com/robcowart/cp-kafka-connect-custom/README.md" \
25 |       org.opencontainers.image.source="https://github.com/robcowart/cp-kafka-connect-custom" \
26 |       org.opencontainers.image.version="6.0.0_1.0.3_lab" \
27 |       org.opencontainers.image.vendor="Robert Cowart" \
28 |       org.opencontainers.image.title="cp-kafka-connect-custom" \
29 |       org.opencontainers.image.description="A custom Confluent Platform Kafka Connect container with additional connectors from Confluent Hub."
30 | 
31 | RUN confluent-hub install --no-prompt confluentinc/connect-transforms:latest && \
32 |     confluent-hub install --no-prompt confluentinc/kafka-connect-avro-converter:latest && \
33 |     confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest && \
34 |     confluent-hub install --no-prompt confluentinc/kafka-connect-s3:latest && \
35 |     confluent-hub install --no-prompt blueapron/kafka-connect-protobuf-converter:latest && \
36 |     confluent-hub install --no-prompt debezium/debezium-connector-mongodb:latest && \
37 |     confluent-hub install --no-prompt debezium/debezium-connector-mysql:latest && \
38 |     confluent-hub install --no-prompt debezium/debezium-connector-postgresql:latest && \
39 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-json-schema:latest && \
40 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-memcached:latest && \
41 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-redis:latest && \
42 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-common:latest && \
43 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-fix:latest && \
44 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-maxmind:latest && \
45 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-transform-xml:latest && \
46 |     confluent-hub install --no-prompt jcustenborder/kafka-connect-twitter:latest && \
47 |     confluent-hub install --no-prompt mongodb/kafka-connect-mongodb:latest && \
48 |     confluent-hub install --no-prompt neo4j/kafka-connect-neo4j:latest && \
49 |     confluent-hub install --no-prompt splunk/kafka-connect-splunk:latest && \
50 |     confluent-hub install --no-prompt confluentinc/kafka-connect-cassandra:latest && \
51 |     confluent-hub install --no-prompt confluentinc/kafka-connect-http:latest && \
52 |     confluent-hub install --no-prompt confluentinc/kafka-connect-influxdb:latest && \
53 |     confluent-hub install --no-prompt confluentinc/kafka-connect-mqtt:latest && \
54 |     confluent-hub install --no-prompt confluentinc/kafka-connect-rabbitmq:latest && \
55 |     confluent-hub install --no-prompt confluentinc/kafka-connect-replicator:latest && \
56 |     confluent-hub install --no-prompt confluentinc/kafka-connect-s3-source:latest && \
57 |     confluent-hub install --no-prompt confluentinc/kafka-connect-sftp:latest
58 | 
--------------------------------------------------------------------------------
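# --- Illustrative example (not part of the original file) ---
# A minimal sketch of wiring an image built from lab.Dockerfile into a docker-compose service.
# The image tag (derived from the version label above), service name, and bootstrap address are
# assumptions for illustration; the actual build tags and the full worker environment come from
# the repository's build scripts and docker-compose.yml.
#  connect:
#    image: robcowart/cp-kafka-connect-custom:6.0.0_1.0.3_lab
#    ports:
#      - 8083:8083
#    environment:
#      CONNECT_BOOTSTRAP_SERVERS: 'kafka:29092'
#      CONNECT_GROUP_ID: 'connect'
#      CONNECT_REST_ADVERTISED_HOST_NAME: 'connect'
# -------------------------------------------------------------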