├── .github
│   └── PULL_REQUEST_TEMPLATE.md
├── .gitignore
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── LICENSE
├── NOTICE
├── README.md
├── pom.xml
└── src
    ├── main
    │   └── java
    │       └── com
    │           └── amazonaws
    │               └── services
    │                   └── kinesisanalytics
    │                       └── flink
    │                           └── connectors
    │                               ├── config
    │                               │   ├── AWSConfigConstants.java
    │                               │   └── ProducerConfigConstants.java
    │                               ├── exception
    │                               │   ├── FlinkKinesisFirehoseException.java
    │                               │   ├── RecordCouldNotBeBuffered.java
    │                               │   ├── RecordCouldNotBeSentException.java
    │                               │   ├── SerializationException.java
    │                               │   └── TimeoutExpiredException.java
    │                               ├── producer
    │                               │   ├── FlinkKinesisFirehoseProducer.java
    │                               │   ├── IProducer.java
    │                               │   └── impl
    │                               │       ├── FirehoseProducer.java
    │                               │       └── FirehoseProducerConfiguration.java
    │                               ├── provider
    │                               │   └── credential
    │                               │       ├── AssumeRoleCredentialsProvider.java
    │                               │       ├── BasicCredentialProvider.java
    │                               │       ├── CredentialProvider.java
    │                               │       ├── DefaultCredentialProvider.java
    │                               │       ├── EnvironmentCredentialProvider.java
    │                               │       ├── ProfileCredentialProvider.java
    │                               │       ├── SystemCredentialProvider.java
    │                               │       └── factory
    │                               │           └── CredentialProviderFactory.java
    │                               ├── serialization
    │                               │   ├── JsonSerializationSchema.java
    │                               │   └── KinesisFirehoseSerializationSchema.java
    │                               └── util
    │                                   └── AWSUtil.java
    └── test
        ├── java
        │   └── com
        │       └── amazonaws
        │           └── services
        │               └── kinesisanalytics
        │                   └── flink
        │                       └── connectors
        │                           ├── config
        │                           │   └── AWSConfigConstantsTest.java
        │                           ├── firehose
        │                           │   └── examples
        │                           │       ├── AssumeRoleSimpleStreamString.java
        │                           │       ├── SimpleStreamString.java
        │                           │       ├── SimpleWordCount.java
        │                           │       └── WordCountData.java
        │                           ├── producer
        │                           │   ├── FlinkKinesisFirehoseProducerTest.java
        │                           │   └── impl
        │                           │       ├── FirehoseProducerConfigurationTest.java
        │                           │       └── FirehoseProducerTest.java
        │                           ├── provider
        │                           │   └── credential
        │                           │       ├── AssumeRoleCredentialsProviderTest.java
        │                           │       ├── BasicCredentialProviderTest.java
        │                           │       ├── CredentialProviderFactoryTest.java
        │                           │       ├── CredentialProviderTest.java
        │                           │       └── ProfileCredentialProviderTest.java
        │                           ├── serialization
        │                           │   └── JsonSerializationSchemaTest.java
        │                           ├── testutils
        │                           │   └── TestUtils.java
        │                           └── util
        │                               └── AWSUtilTest.java
        └── resources
            ├── logback.xml
            └── profile
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | *Issue #, if available:*
2 | 
3 | *Description of changes:*
4 | 
5 | 
6 | By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
7 | 
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | target/
2 | .idea/
3 | *.iml
4 | 
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
 1 | # Changelog
 2 | 
 3 | ## Release 2.1.0 (February 20th, 2023)
 4 | * Fix issue where region enum values are not available for new regions when the AWS SDK is not updated
 5 |   ([PR#22](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/22))
 6 | 
 7 | ## Release 2.0.0 (July 30th, 2020)
 8 | ### Milestones
 9 | - [Milestone v2.0.0](https://github.com/aws/aws-kinesisanalytics-flink-connectors/milestone/2)
10 | - [Milestone v2.0.0-RC1](https://github.com/aws/aws-kinesisanalytics-flink-connectors/milestone/1)
11 | 
12 | ### New Features
13 | * Add Assume Role credential provider
14 |   ([PR#8](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/8))
15 | 
16 | ### Bug Fixes
17 | * Prevent Firehose Sink from exporting batches that exceed the maximum size of
18 |   [4MiB per call](https://docs.aws.amazon.com/firehose/latest/dev/limits.html).
19 |   Limits maximum batch size to 1MiB in regions with a 1MiB/second throughput quota
20 |   ([PR#9](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/9))
21 | 
22 | * Fix `ProfileCredentialsProvider` when specifying a custom path to the configuration file
23 |   ([PR#4](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/4))
24 | 
25 | ### Other Changes
26 | * Update AWS SDK from version `1.11.379` to `1.11.803`
27 |   ([PR#10](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/10))
28 | 
29 | * Remove dependency on Guava
30 |   ([PR#10](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/10))
31 | 
32 | * Shaded and relocated AWS SDK dependency
33 |   ([PR#7](https://github.com/aws/aws-kinesisanalytics-flink-connectors/pull/7))
34 | 
35 | ### Migration Notes
36 | * The following dependencies have been removed/relocated.
37 |   Any transitive references will no longer resolve in upstream projects:
38 | 
39 |   ```xml
40 |   <!-- Removed -->
41 |   <dependency>
42 |       <groupId>com.google.guava</groupId>
43 |       <artifactId>guava</artifactId>
44 |       <version>${guava.version}</version>
45 |   </dependency>
46 | 
47 |   <!-- Shaded and relocated -->
48 |   <dependency>
49 |       <groupId>com.amazonaws</groupId>
50 |       <artifactId>aws-java-sdk-iam</artifactId>
51 |       <version>${aws-sdk.version}</version>
52 |   </dependency>
53 |   <dependency>
54 |       <groupId>com.amazonaws</groupId>
55 |       <artifactId>aws-java-sdk-kinesis</artifactId>
56 |       <version>${aws-sdk.version}</version>
57 |   </dependency>
58 |   ```
59 | 
--------------------------------------------------------------------------------
/CODE_OF_CONDUCT.md:
--------------------------------------------------------------------------------
1 | ## Code of Conduct
2 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
3 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
4 | opensource-codeofconduct@amazon.com with any additional questions or comments.
 5 | 
--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
 1 | # Contributing Guidelines
 2 | 
 3 | Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional
 4 | documentation, we greatly value feedback and contributions from our community.
 5 | 
 6 | Please read through this document before submitting any issues or pull requests to ensure we have all the necessary
 7 | information to effectively respond to your bug report or contribution.
 8 | 
 9 | 
10 | ## Reporting Bugs/Feature Requests
11 | 
12 | We welcome you to use the GitHub issue tracker to report bugs or suggest features.
13 | 
14 | When filing an issue, please check [existing open](https://github.com/aws/aws-kinesisanalytics-flink-connectors/issues), or [recently closed](https://github.com/aws/aws-kinesisanalytics-flink-connectors/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20), issues to make sure somebody else hasn't already
15 | reported the issue. Please try to include as much information as you can. Details like these are incredibly useful:
16 | 
17 | * A reproducible test case or series of steps
18 | * The version of our code being used
19 | * Any modifications you've made relevant to the bug
20 | * Anything unusual about your environment or deployment
21 | 
22 | 
23 | ## Contributing via Pull Requests
24 | Contributions via pull requests are much appreciated. Before sending us a pull request, please ensure that:
25 | 
26 | 1. You are working against the latest source on the *master* branch.
27 | 2. You check existing open, and recently merged, pull requests to make sure someone else hasn't addressed the problem already.
28 | 3. You open an issue to discuss any significant work - we would hate for your time to be wasted.
29 | 
30 | To send us a pull request, please:
31 | 
32 | 1. Fork the repository.
33 | 2. Modify the source; please focus on the specific change you are contributing. If you also reformat all the code, it will be hard for us to focus on your change.
34 | 3. Ensure local tests pass.
35 | 4. Commit to your fork using clear commit messages.
36 | 5. Send us a pull request, answering any default questions in the pull request interface.
37 | 6. Pay attention to any automated CI failures reported in the pull request, and stay involved in the conversation.
38 | 
39 | GitHub provides additional documentation on [forking a repository](https://help.github.com/articles/fork-a-repo/) and
40 | [creating a pull request](https://help.github.com/articles/creating-a-pull-request/).
41 | 
42 | 
43 | ## Finding contributions to work on
44 | Looking at the existing issues is a great way to find something to contribute to. As our projects, by default, use the default GitHub issue labels (enhancement/bug/duplicate/help wanted/invalid/question/wontfix), looking at any ['help wanted'](https://github.com/aws/aws-kinesisanalytics-flink-connectors/labels/help%20wanted) issues is a great place to start.
45 | 
46 | 
47 | ## Code of Conduct
48 | This project has adopted the [Amazon Open Source Code of Conduct](https://aws.github.io/code-of-conduct).
49 | For more information see the [Code of Conduct FAQ](https://aws.github.io/code-of-conduct-faq) or contact
50 | opensource-codeofconduct@amazon.com with any additional questions or comments.
51 | 
52 | 
53 | ## Security issue notifications
54 | If you discover a potential security issue in this project we ask that you notify AWS/Amazon Security via our [vulnerability reporting page](http://aws.amazon.com/security/vulnerability-reporting/). Please do **not** create a public GitHub issue.
55 | 
56 | 
57 | ## Licensing
58 | 
59 | See the [LICENSE](https://github.com/aws/aws-kinesisanalytics-flink-connectors/blob/master/LICENSE) file for our project's licensing. We will ask you to confirm the licensing of your contribution.
60 | 61 | We may ask you to sign a [Contributor License Agreement (CLA)](http://en.wikipedia.org/wiki/Contributor_License_Agreement) for larger changes. 62 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 
35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. 
Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. 
We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright [yyyy] [name of copyright owner] 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /NOTICE: -------------------------------------------------------------------------------- 1 | AWS Kinesisanalytics Flink Connectors 2 | Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. 3 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Amazon Kinesis Flink Connectors 2 | 3 | This repository contains various Apache Flink connectors to connect to [AWS Kinesis][kinesis] data sources and sinks. 4 | 5 | ## Amazon Kinesis Data Firehose Producer for Apache Flink 6 | This Producer allows Flink applications to push directly to [Kinesis Firehose][firehose]. 
 7 | - [AWS Documentation][firehose-documentation]
 8 | - [Issues][issues]
 9 | 
10 | ### Quickstart
11 | Configure and instantiate a `FlinkKinesisFirehoseProducer`:
12 | 
13 | ```java
14 | Properties config = new Properties();
15 | config.setProperty(AWSConfigConstants.AWS_REGION, region);
16 | 
17 | FlinkKinesisFirehoseProducer<String> sink = new FlinkKinesisFirehoseProducer<>(streamName, new SimpleStringSchema(), config);
18 | ```
19 | 
20 | ### Getting Started
21 | Follow the [example instructions][example] to create an end-to-end application:
22 | - Write data into a [Kinesis Data Stream][kds]
23 | - Process the streaming data using [Kinesis Data Analytics][kda]
24 | - Write the results to a [Kinesis Firehose][firehose] using the `FlinkKinesisFirehoseProducer`
25 | - Store the data in an [S3 Bucket][s3]
26 | 
27 | ### Building from Source
28 | 1. You will need to install Java 1.8+ and Maven
29 | 1. Clone the repository from GitHub
30 | 1. Build using Maven from the project root directory:
31 | 1. `$ mvn clean install`
32 | 
33 | ### Flink Version Matrix
34 | Flink maintains backwards compatibility for the Sink interface used by the Firehose Producer.
35 | This project is compatible with Flink 1.x; there is no guarantee it will support Flink 2.x should it be released in the future.
36 | 
37 | Connector Version | Flink Version | Release Date
38 | ----------------- | ------------- | ------------
39 | 2.1.0             | 1.x           | Feb, 2023
40 | 2.0.0             | 1.x           | Jul, 2020
41 | 1.1.0             | 1.x           | Dec, 2019
42 | 1.0.1             | 1.x           | Dec, 2018
43 | 1.0.0             | 1.x           | Dec, 2018
44 | 
45 | ## License
46 | 
47 | This library is licensed under the Apache 2.0 License.
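As a companion to the Quickstart, the sketch below shows how a configuration that assumes an IAM role could be assembled. The literal property keys mirror the constants defined in `AWSConfigConstants` (`AWS_REGION`, `AWS_CREDENTIALS_PROVIDER`, and the `roleArn`/`roleSessionName` helpers); the class name, region, role ARN, and session name are placeholder values, and passing the provider type as the enum name string is an assumption of this sketch, not documented behavior.

```java
import java.util.Properties;

/**
 * Sketch: building a producer configuration that assumes an IAM role.
 * In application code, prefer the constants and helper methods in
 * AWSConfigConstants over the literal keys used here.
 */
public class FirehoseAssumeRoleConfig {

    public static Properties build() {
        Properties config = new Properties();
        // AWSConfigConstants.AWS_REGION
        config.setProperty("aws.region", "us-east-1");
        // AWSConfigConstants.AWS_CREDENTIALS_PROVIDER, set to the ASSUME_ROLE provider type
        config.setProperty("aws.credentials.provider", "ASSUME_ROLE");
        // AWSConfigConstants.roleArn(AWS_CREDENTIALS_PROVIDER) -> "aws.credentials.provider.role.arn"
        config.setProperty("aws.credentials.provider.role.arn",
                "arn:aws:iam::123456789012:role/example-firehose-role"); // placeholder ARN
        // AWSConfigConstants.roleSessionName(AWS_CREDENTIALS_PROVIDER) -> "aws.credentials.provider.role.sessionName"
        config.setProperty("aws.credentials.provider.role.sessionName", "flink-firehose-session");
        return config;
    }

    public static void main(String[] args) {
        // Print the assembled configuration for inspection
        build().forEach((k, v) -> System.out.println(k + " = " + v));
    }
}
```

The resulting `Properties` object would then be passed to the `FlinkKinesisFirehoseProducer` constructor exactly as in the Quickstart above.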
48 | 49 | [kinesis]: https://aws.amazon.com/kinesis 50 | [firehose]: https://aws.amazon.com/kinesis/data-firehose/ 51 | [kds]: https://aws.amazon.com/kinesis/data-streams/ 52 | [kda]: https://aws.amazon.com/kinesis/data-analytics/ 53 | [s3]: https://aws.amazon.com/s3/ 54 | [firehose-documentation]: https://docs.aws.amazon.com/kinesisanalytics/latest/java/how-sinks.html#sinks-firehose-create 55 | [issues]: https://github.com/aws/aws-kinesisanalytics-flink-connectors/issues 56 | [example]: https://docs.aws.amazon.com/kinesisanalytics/latest/java/get-started-exercise-fh.html -------------------------------------------------------------------------------- /pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 21 | 22 | 4.0.0 23 | com.amazonaws 24 | aws-kinesisanalytics-flink 25 | 2.2.0-SNAPSHOT 26 | jar 27 | Amazon Kinesis Analytics Java Flink Connectors 28 | This library contains various Apache Flink connectors to connect to AWS data sources and sinks. 
29 | https://aws.amazon.com/kinesis/data-analytics/ 30 | 31 | 32 | amazonwebservices 33 | Amazon Web Services 34 | https://aws.amazon.com/ 35 | 36 | developer 37 | 38 | 39 | 40 | 41 | 42 | The Apache License, Version 2.0 43 | http://www.apache.org/licenses/LICENSE-2.0.txt 44 | 45 | 46 | 47 | 48 | scm:git:https://github.com/aws/aws-kinesisanalytics-flink-connectors.git 49 | scm:git:git@github.com:aws/aws-kinesisanalytics-flink-connectors.git 50 | https://github.com/aws/aws-kinesisanalytics-flink-connectors/tree/master 51 | 52 | 53 | 54 | UTF-8 55 | 1.8.3 56 | 1.11.803 57 | 2.11 58 | 1.8 59 | ${java.version} 60 | ${java.version} 61 | 3.1 62 | 7.5.1 63 | 3.16.1 64 | 3.3.3 65 | 0.8.5 66 | 1.2.3 67 | 3.2.4 68 | https://mvnrepository.com/artifact/com.amazonaws/aws-kinesisanalytics-flink 69 | 70 | 71 | 72 | 73 | com.amazonaws 74 | aws-java-sdk-iam 75 | ${aws-sdk.version} 76 | 77 | 78 | com.amazonaws 79 | aws-java-sdk-kinesis 80 | ${aws-sdk.version} 81 | 82 | 83 | com.amazonaws 84 | aws-java-sdk-sts 85 | ${aws-sdk.version} 86 | 87 | 88 | 89 | 90 | 91 | org.apache.flink 92 | flink-streaming-java_${scala.binary.version} 93 | ${flink.version} 94 | provided 95 | 96 | 97 | 98 | 99 | org.testng 100 | testng 101 | ${testng.version} 102 | test 103 | 104 | 105 | 106 | org.mockito 107 | mockito-core 108 | ${mockito.version} 109 | test 110 | 111 | 112 | 113 | org.assertj 114 | assertj-core 115 | ${assertj.version} 116 | test 117 | 118 | 119 | 120 | ch.qos.logback 121 | logback-classic 122 | ${logback.version} 123 | test 124 | 125 | 126 | 127 | 128 | 129 | ossrh 130 | https://oss.sonatype.org/content/repositories/snapshots 131 | 132 | 133 | 134 | 135 | 136 | 137 | org.apache.maven.plugins 138 | maven-compiler-plugin 139 | ${maven-compiler-plugin.version} 140 | 141 | ${java.version} 142 | ${java.version} 143 | 144 | 145 | 146 | org.jacoco 147 | jacoco-maven-plugin 148 | ${jacoco.version} 149 | 150 | 151 | 152 | prepare-agent 153 | 154 | 155 | 156 | report 157 | test 158 | 159 | report 
160 | 161 | 162 | 163 | 164 | 165 | 166 | org.apache.maven.plugins 167 | maven-shade-plugin 168 | ${maven-shade-plugin.version} 169 | 170 | 171 | shade 172 | package 173 | 174 | shade 175 | 176 | 177 | true 178 | 179 | 180 | com.amazonaws:* 181 | org.apache.httpcomponents:* 182 | 183 | 184 | 185 | 186 | com.amazonaws 187 | com.amazonaws.services.kinesisanalytics.shaded.com.amazonaws 188 | 189 | 190 | com.amazonaws.services.kinesisanalytics.flink.connectors.** 191 | 192 | 193 | 194 | org.apache.http 195 | com.amazonaws.services.kinesisanalytics.shaded.org.apache.http 196 | 197 | 198 | 199 | 200 | 201 | *:* 202 | 203 | META-INF/** 204 | 205 | 206 | 207 | 208 | 209 | 210 | 211 | 212 | 213 | 214 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/config/AWSConfigConstants.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.config; 20 | 21 | import org.apache.commons.lang3.StringUtils; 22 | import org.apache.commons.lang3.Validate; 23 | 24 | import javax.annotation.Nonnull; 25 | import javax.annotation.Nullable; 26 | 27 | /** 28 | * AWS Kinesis Firehose configuration constants 29 | */ 30 | public class AWSConfigConstants { 31 | 32 | public enum CredentialProviderType { 33 | 34 | /** Look for AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the passed configuration */ 35 | BASIC, 36 | 37 | /** Look for the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to create AWS credentials. */ 38 | ENV_VARIABLES, 39 | 40 | /** Look for Java system properties aws.accessKeyId and aws.secretKey to create AWS credentials. */ 41 | SYS_PROPERTIES, 42 | 43 | /** Use an AWS credentials profile file to create the AWS credentials. */ 44 | PROFILE, 45 | 46 | /** Create AWS credentials by assuming a role. The credentials for assuming the role must be supplied. **/ 47 | ASSUME_ROLE, 48 | 49 | /** A credentials provider chain will be used that searches for credentials in this order: 50 | * ENV_VARIABLES, SYS_PROPERTIES, PROFILE, and finally the EC2 instance metadata. **/ 51 | AUTO 52 | } 53 | 54 | /** The AWS access key for provider type basic */ 55 | public static final String AWS_ACCESS_KEY_ID = "aws_access_key_id"; 56 | 57 | /** The AWS secret key for provider type basic */ 58 | public static final String AWS_SECRET_ACCESS_KEY = "aws_secret_access_key"; 59 | 60 | /** The AWS Kinesis Firehose region; if not specified, defaults to us-east-1 */ 61 | public static final String AWS_REGION = "aws.region"; 62 | 63 | /** 64 | * The credential provider type to use when AWS credentials are required 65 | * (AUTO is used if not set, unless the access key id and secret access key are set, then BASIC is used).
66 | */ 67 | public static final String AWS_CREDENTIALS_PROVIDER = "aws.credentials.provider"; 68 | 69 | /** The Kinesis Firehose endpoint */ 70 | public static final String AWS_KINESIS_FIREHOSE_ENDPOINT = "aws.kinesis.firehose.endpoint"; 71 | 72 | public static final String AWS_KINESIS_FIREHOSE_ENDPOINT_SIGNING_REGION = "aws.kinesis.firehose.endpoint.signing.region"; 73 | 74 | /** Optional configuration in case the provider is AwsProfileCredentialProvider */ 75 | public static final String AWS_PROFILE_NAME = profileName(AWS_CREDENTIALS_PROVIDER); 76 | 77 | /** Optional configuration in case the provider is AwsProfileCredentialProvider */ 78 | public static final String AWS_PROFILE_PATH = profilePath(AWS_CREDENTIALS_PROVIDER); 79 | 80 | /** The role ARN to use when credential provider type is set to ASSUME_ROLE. */ 81 | public static final String AWS_ROLE_ARN = roleArn(AWS_CREDENTIALS_PROVIDER); 82 | 83 | /** The role session name to use when credential provider type is set to ASSUME_ROLE. */ 84 | public static final String AWS_ROLE_SESSION_NAME = roleSessionName(AWS_CREDENTIALS_PROVIDER); 85 | 86 | /** The external ID to use when credential provider type is set to ASSUME_ROLE. */ 87 | public static final String AWS_ROLE_EXTERNAL_ID = externalId(AWS_CREDENTIALS_PROVIDER); 88 | 89 | /** 90 | * The credentials provider that provides credentials for assuming the role when credential 91 | * provider type is set to ASSUME_ROLE. 92 | * Roles can be nested, so AWS_ROLE_CREDENTIALS_PROVIDER can again be set to "ASSUME_ROLE" 93 | */ 94 | public static final String AWS_ROLE_CREDENTIALS_PROVIDER = roleCredentialsProvider(AWS_CREDENTIALS_PROVIDER); 95 | 96 | private AWSConfigConstants() { 97 | // Prevent instantiation 98 | } 99 | 100 | @Nonnull 101 | public static String accessKeyId(@Nullable String prefix) { 102 | return (!StringUtils.isEmpty(prefix) ? prefix + ".basic." 
: "") + AWS_ACCESS_KEY_ID; 103 | } 104 | 105 | @Nonnull 106 | public static String accessKeyId() { 107 | return accessKeyId(null); 108 | } 109 | 110 | @Nonnull 111 | public static String secretKey(@Nullable String prefix) { 112 | return (!StringUtils.isEmpty(prefix) ? prefix + ".basic." : "") + AWS_SECRET_ACCESS_KEY; 113 | } 114 | 115 | @Nonnull 116 | public static String secretKey() { 117 | return secretKey(null); 118 | } 119 | 120 | @Nonnull 121 | public static String profilePath(@Nonnull String prefix) { 122 | Validate.notBlank(prefix); 123 | return prefix + ".profile.path"; 124 | } 125 | 126 | @Nonnull 127 | public static String profileName(@Nonnull String prefix) { 128 | Validate.notBlank(prefix); 129 | return prefix + ".profile.name"; 130 | } 131 | 132 | @Nonnull 133 | public static String roleArn(@Nonnull String prefix) { 134 | Validate.notBlank(prefix); 135 | return prefix + ".role.arn"; 136 | } 137 | 138 | @Nonnull 139 | public static String roleSessionName(@Nonnull String prefix) { 140 | Validate.notBlank(prefix); 141 | return prefix + ".role.sessionName"; 142 | } 143 | 144 | @Nonnull 145 | public static String externalId(@Nonnull String prefix) { 146 | Validate.notBlank(prefix); 147 | return prefix + ".role.externalId"; 148 | } 149 | 150 | @Nonnull 151 | public static String roleCredentialsProvider(@Nonnull String prefix) { 152 | Validate.notBlank(prefix); 153 | return prefix + ".role.provider"; 154 | } 155 | } 156 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/config/ProducerConfigConstants.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. 
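The helper methods above compose dotted property keys from a prefix. The following is a minimal standalone sketch that mirrors that concatenation logic to show which keys the constants resolve to; the class name `ConfigKeySketch` is an illustrative stand-in, not part of the library.

```java
// Sketch mirroring the key-composition logic of AWSConfigConstants above.
// ConfigKeySketch is a hypothetical name; only the string logic is taken from the source.
public class ConfigKeySketch {
    static final String AWS_CREDENTIALS_PROVIDER = "aws.credentials.provider";

    static String roleArn(String prefix) { return prefix + ".role.arn"; }
    static String roleSessionName(String prefix) { return prefix + ".role.sessionName"; }

    static String accessKeyId(String prefix) {
        // A non-empty prefix gains a ".basic." segment before the bare key.
        return (prefix != null && !prefix.isEmpty() ? prefix + ".basic." : "") + "aws_access_key_id";
    }

    public static void main(String[] args) {
        System.out.println(roleArn(AWS_CREDENTIALS_PROVIDER));     // aws.credentials.provider.role.arn
        System.out.println(accessKeyId(AWS_CREDENTIALS_PROVIDER)); // aws.credentials.provider.basic.aws_access_key_id
        System.out.println(accessKeyId(null));                     // aws_access_key_id
    }
}
```

So `AWS_ROLE_ARN` resolves to `aws.credentials.provider.role.arn`, while the unprefixed `accessKeyId()` overload returns the bare `aws_access_key_id` key.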
The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.config; 20 | 21 | import java.util.concurrent.TimeUnit; 22 | 23 | public class ProducerConfigConstants { 24 | 25 | /** The default MAX buffer size. Users may specify a larger buffer if needed, since we don't bound it. 26 | * However, a larger value should be used with caution, since Kinesis Firehose limits PutRecordBatch to 500 records or 4 MiB per call. 27 | * Please refer to https://docs.aws.amazon.com/firehose/latest/dev/limits.html for further reference.
28 | * */ 29 | public static final int DEFAULT_MAX_BUFFER_SIZE = 500; 30 | 31 | /** The maximum number of bytes that can be sent in a single PutRecordBatch operation */ 32 | public static final int DEFAULT_MAXIMUM_BATCH_BYTES = 4 * 1_024 * 1_024; 33 | 34 | /** The default MAX timeout before the buffer is flushed */ 35 | public static final long DEFAULT_MAX_BUFFER_TIMEOUT = TimeUnit.MINUTES.toMillis(5); 36 | 37 | /** The default MAX timeout for a given addUserRecord operation */ 38 | public static final long DEFAULT_MAX_OPERATION_TIMEOUT = TimeUnit.MINUTES.toMillis(5); 39 | 40 | /** The default wait time in milliseconds in case the buffer is full */ 41 | public static final long DEFAULT_WAIT_TIME_FOR_BUFFER_FULL = 100L; 42 | 43 | /** The default interval between buffer flushes */ 44 | public static final long DEFAULT_INTERVAL_BETWEEN_FLUSHES = 50L; 45 | 46 | /** The default MAX number of retries in case of recoverable failures */ 47 | public static final int DEFAULT_NUMBER_OF_RETRIES = 10; 48 | 49 | /** The default MAX backoff timeout 50 | * https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ 51 | * */ 52 | public static final long DEFAULT_MAX_BACKOFF = 100L; 53 | 54 | /** The default BASE timeout to be used for jittered backoff 55 | * https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ 56 | * */ 57 | public static final long DEFAULT_BASE_BACKOFF = 10L; 58 | 59 | /** The reduced-quota maximum throughput. 60 | * Some regions have lower throughput quotas than others. 61 | * Please refer to https://docs.aws.amazon.com/firehose/latest/dev/limits.html for further reference.
*/ 62 | public static final int REDUCED_QUOTA_MAXIMUM_THROUGHPUT = 1_024 * 1_024; 63 | 64 | public static final String FIREHOSE_PRODUCER_BUFFER_MAX_SIZE = "firehose.producer.batch.size"; 65 | public static final String FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES = "firehose.producer.batch.bytes"; 66 | public static final String FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT = "firehose.producer.buffer.timeout"; 67 | public static final String FIREHOSE_PRODUCER_BUFFER_FULL_WAIT_TIMEOUT = "firehose.producer.buffer.full.wait.timeout"; 68 | public static final String FIREHOSE_PRODUCER_BUFFER_FLUSH_TIMEOUT = "firehose.producer.buffer.flush.timeout"; 69 | public static final String FIREHOSE_PRODUCER_BUFFER_FLUSH_MAX_NUMBER_OF_RETRIES = "firehose.producer.buffer.flush.retries"; 70 | public static final String FIREHOSE_PRODUCER_BUFFER_MAX_BACKOFF_TIMEOUT = "firehose.producer.buffer.max.backoff"; 71 | public static final String FIREHOSE_PRODUCER_BUFFER_BASE_BACKOFF_TIMEOUT = "firehose.producer.buffer.base.backoff"; 72 | public static final String FIREHOSE_PRODUCER_MAX_OPERATION_TIMEOUT = "firehose.producer.operation.timeout"; 73 | 74 | private ProducerConfigConstants() { 75 | // Prevent instantiation 76 | } 77 | } 78 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/exception/FlinkKinesisFirehoseException.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. 
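The DEFAULT_BASE_BACKOFF and DEFAULT_MAX_BACKOFF constants above correspond to the "full jitter" scheme described in the linked AWS blog post: each retry sleeps a random duration between 0 and min(cap, base * 2^attempt). A self-contained sketch under those default values (illustrative only; the library's actual retry loop lives in FirehoseProducer, and the class/method names here are stand-ins):

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative full-jitter backoff using the defaults above
// (base = 10 ms, cap = 100 ms); not the library's own implementation.
public class JitterBackoffSketch {
    static final long BASE_BACKOFF_MILLIS = 10L;
    static final long MAX_BACKOFF_MILLIS = 100L;

    /** Returns a sleep time in [0, min(cap, base * 2^attempt)]. */
    static long fullJitterBackoff(int attempt) {
        long exponential = Math.min(MAX_BACKOFF_MILLIS, BASE_BACKOFF_MILLIS * (1L << attempt));
        return ThreadLocalRandom.current().nextLong(exponential + 1); // bound is exclusive, hence +1
    }

    public static void main(String[] args) {
        for (int attempt = 0; attempt < 10; attempt++) {
            System.out.printf("attempt %d -> sleep %d ms%n", attempt, fullJitterBackoff(attempt));
        }
    }
}
```

With a 100 ms cap the exponential term saturates by the fourth attempt, so with DEFAULT_NUMBER_OF_RETRIES = 10 the total added latency stays modest while retries are decorrelated.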
You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.exception; 20 | 21 | public class FlinkKinesisFirehoseException extends Exception { 22 | 23 | public FlinkKinesisFirehoseException(final String msg, final Throwable ex) { 24 | super(msg, ex); 25 | } 26 | 27 | public FlinkKinesisFirehoseException(final String msg) { 28 | super(msg); 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/exception/RecordCouldNotBeBuffered.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.exception; 20 | 21 | public class RecordCouldNotBeBuffered extends FlinkKinesisFirehoseException { 22 | 23 | public RecordCouldNotBeBuffered(String msg, Throwable ex) { 24 | super(msg, ex); 25 | } 26 | 27 | public RecordCouldNotBeBuffered(String msg) { 28 | super(msg); 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/exception/RecordCouldNotBeSentException.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.exception; 20 | 21 | public class RecordCouldNotBeSentException extends RuntimeException { 22 | 23 | public RecordCouldNotBeSentException(final String msg, final Throwable ex) { 24 | super(msg, ex); 25 | } 26 | 27 | public RecordCouldNotBeSentException(final String msg) { 28 | super(msg); 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/exception/SerializationException.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.exception; 20 | 21 | public class SerializationException extends RuntimeException { 22 | 23 | public SerializationException(final String msg, final Throwable t) { 24 | super(msg, t); 25 | } 26 | 27 | public SerializationException(final String msg) { 28 | super(msg); 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/exception/TimeoutExpiredException.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License.
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.exception; 20 | 21 | public class TimeoutExpiredException extends FlinkKinesisFirehoseException { 22 | 23 | public TimeoutExpiredException(String msg, Throwable ex) { 24 | super(msg, ex); 25 | } 26 | 27 | public TimeoutExpiredException(String msg) { 28 | super(msg); 29 | } 30 | } 31 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/producer/FlinkKinesisFirehoseProducer.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.producer; 20 | 21 | 22 | import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.FlinkKinesisFirehoseException; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.RecordCouldNotBeSentException; 24 | import com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl.FirehoseProducer; 25 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.CredentialProvider; 26 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.factory.CredentialProviderFactory; 27 | import com.amazonaws.services.kinesisanalytics.flink.connectors.serialization.KinesisFirehoseSerializationSchema; 28 | import com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil; 29 | import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose; 30 | import com.amazonaws.services.kinesisfirehose.model.Record; 31 | import org.apache.commons.lang3.Validate; 32 | import org.apache.flink.api.common.serialization.SerializationSchema; 33 | import org.apache.flink.configuration.Configuration; 34 | import org.apache.flink.runtime.state.FunctionInitializationContext; 35 | import org.apache.flink.runtime.state.FunctionSnapshotContext; 36 | import org.apache.flink.streaming.api.checkpoint.CheckpointedFunction; 37 | import org.apache.flink.streaming.api.functions.sink.RichSinkFunction; 38 | import org.slf4j.Logger; 39 | import org.slf4j.LoggerFactory; 40 | 41 | import javax.annotation.Nonnull; 42 | import java.nio.ByteBuffer; 43 | import java.util.Properties; 44 | 45 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER; 46 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType; 47 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl.FirehoseProducer.UserRecordResult; 48 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.getCredentialProviderType; 49 | 50 | public class FlinkKinesisFirehoseProducer<OUT> extends RichSinkFunction<OUT> implements CheckpointedFunction { 51 | private static final Logger LOGGER = LoggerFactory.getLogger(FlinkKinesisFirehoseProducer.class); 52 | 53 | private final KinesisFirehoseSerializationSchema<OUT> schema; 54 | private final Properties config; 55 | private final CredentialProviderType credentialProviderType; 56 | 57 | /** Name of the default delivery stream to produce to. Can be overwritten by the serialization schema */ 58 | private final String defaultDeliveryStream; 59 | 60 | /** Specifies whether to stop and fail in case of an error */ 61 | private boolean failOnError; 62 | 63 | /** Remembers the last asynchronously thrown exception */ 64 | private transient volatile Throwable lastThrownException; 65 | 66 | /** The credential provider should not be serialized */ 67 | private transient CredentialProvider credentialsProvider; 68 | 69 | /** The AWS client cannot be serialized when building the Flink job graph */ 70 | private transient AmazonKinesisFirehose firehoseClient; 71 | 72 | /** The AWS Kinesis Firehose producer */ 73 | private transient IProducer<Record, UserRecordResult> firehoseProducer; 74 | 75 | /** 76 | * Creates a new Flink Kinesis Firehose Producer. 77 | * @param deliveryStream The AWS Kinesis Firehose delivery stream. 78 | * @param schema The serialization schema for the given data type. 79 | * @param configProps The properties used to configure the Kinesis Firehose client. 80 | * @param credentialProviderType The specified credential provider type.
81 | */ 82 | public FlinkKinesisFirehoseProducer(final String deliveryStream, 83 | final KinesisFirehoseSerializationSchema<OUT> schema, 84 | final Properties configProps, 85 | final CredentialProviderType credentialProviderType) { 86 | this.defaultDeliveryStream = Validate.notBlank(deliveryStream, "Delivery stream cannot be null or empty"); 87 | this.schema = Validate.notNull(schema, "Kinesis serialization schema cannot be null"); 88 | this.config = Validate.notNull(configProps, "Configuration properties cannot be null"); 89 | this.credentialProviderType = Validate.notNull(credentialProviderType, 90 | "Credential Provider type cannot be null"); 91 | } 92 | 93 | public FlinkKinesisFirehoseProducer(final String deliveryStream, final SerializationSchema<OUT> schema, 94 | final Properties configProps, 95 | final CredentialProviderType credentialProviderType) { 96 | this(deliveryStream, new KinesisFirehoseSerializationSchema<OUT>() { 97 | @Override 98 | public ByteBuffer serialize(OUT element) { 99 | return ByteBuffer.wrap(schema.serialize(element)); 100 | } 101 | }, configProps, credentialProviderType); 102 | } 103 | 104 | public FlinkKinesisFirehoseProducer(final String deliveryStream, final KinesisFirehoseSerializationSchema<OUT> schema, 105 | final Properties configProps) { 106 | this(deliveryStream, schema, configProps, getCredentialProviderType(configProps, AWS_CREDENTIALS_PROVIDER)); 107 | } 108 | 109 | public FlinkKinesisFirehoseProducer(final String deliveryStream, final SerializationSchema<OUT> schema, 110 | final Properties configProps) { 111 | this(deliveryStream, schema, configProps, getCredentialProviderType(configProps, AWS_CREDENTIALS_PROVIDER)); 112 | } 113 | 114 | public void setFailOnError(final boolean failOnError) { 115 | this.failOnError = failOnError; 116 | } 117 | 118 | @Override 119 | public void open(Configuration parameters) throws Exception { 120 | super.open(parameters); 121 | 122 | this.credentialsProvider =
CredentialProviderFactory.newCredentialProvider(credentialProviderType, config); 123 | LOGGER.info("Credential provider: {}", credentialsProvider.getAwsCredentialsProvider().getClass().getName()); 124 | 125 | this.firehoseClient = createKinesisFirehoseClient(); 126 | this.firehoseProducer = createFirehoseProducer(); 127 | 128 | LOGGER.info("Started Kinesis Firehose client. Delivering to stream: {}", defaultDeliveryStream); 129 | } 130 | 131 | @Nonnull 132 | AmazonKinesisFirehose createKinesisFirehoseClient() { 133 | return AWSUtil.createKinesisFirehoseClientFromConfiguration(config, credentialsProvider); 134 | } 135 | 136 | @Nonnull 137 | IProducer<Record, UserRecordResult> createFirehoseProducer() { 138 | return new FirehoseProducer<>(defaultDeliveryStream, firehoseClient, config); 139 | } 140 | 141 | @Override 142 | public void invoke(final OUT value, final Context context) throws Exception { 143 | Validate.notNull(value); 144 | ByteBuffer serializedValue = schema.serialize(value); 145 | 146 | Validate.validState((firehoseProducer != null && !firehoseProducer.isDestroyed()), 147 | "Firehose producer has been destroyed"); 148 | Validate.validState(firehoseClient != null, "Kinesis Firehose client has been closed"); 149 | 150 | propagateAsyncExceptions(); 151 | 152 | firehoseProducer 153 | .addUserRecord(new Record().withData(serializedValue)) 154 | .handleAsync((record, throwable) -> { 155 | if (throwable != null) { 156 | final String msg = "An error has occurred trying to write a record."; 157 | if (failOnError) { 158 | lastThrownException = throwable; 159 | } else { 160 | LOGGER.warn(msg, throwable); 161 | } 162 | } 163 | 164 | if (record != null && !record.isSuccessful()) { 165 | final String msg = "Record could not be successfully sent."; 166 | if (failOnError && lastThrownException == null) { 167 | lastThrownException = new RecordCouldNotBeSentException(msg, record.getException()); 168 | } else { 169 | LOGGER.warn(msg, record.getException()); 170 | } 171 | } 172 | 173 | return null;
174 | }); 175 | } 176 | 177 | @Override 178 | public void snapshotState(final FunctionSnapshotContext functionSnapshotContext) throws Exception { 179 | // Propagate any exception that was previously thrown asynchronously. 180 | propagateAsyncExceptions(); 181 | 182 | // Force the Firehose producer to flush the buffer. 183 | LOGGER.debug("Outstanding records before snapshot: {}", firehoseProducer.getOutstandingRecordsCount()); 184 | flushSync(); 185 | LOGGER.debug("Outstanding records after snapshot: {}", firehoseProducer.getOutstandingRecordsCount()); 186 | if (firehoseProducer.getOutstandingRecordsCount() > 0) { 187 | throw new IllegalStateException("An error has occurred trying to flush the buffer synchronously."); 188 | } 189 | 190 | // If the flush produced any exceptions, we should propagate them as well and fail the checkpoint. 191 | propagateAsyncExceptions(); 192 | } 193 | 194 | @Override 195 | public void initializeState(final FunctionInitializationContext functionInitializationContext) throws Exception { 196 | // No-op 197 | } 198 | 199 | @Override 200 | public void close() throws Exception { 201 | try { 202 | super.close(); 203 | propagateAsyncExceptions(); 204 | } catch (Exception ex) { 205 | LOGGER.error(ex.getMessage(), ex); 206 | throw ex; 207 | } finally { 208 | flushSync(); 209 | firehoseProducer.destroy(); 210 | if (firehoseClient != null) { 211 | LOGGER.debug("Shutting down Kinesis Firehose client..."); 212 | firehoseClient.shutdown(); 213 | } 214 | } 215 | } 216 | 217 | private void propagateAsyncExceptions() throws Exception { 218 | if (lastThrownException == null) { 219 | return; 220 | } 221 | 222 | final String msg = "An exception has been thrown while trying to process a record"; 223 | if (failOnError) { 224 | throw new FlinkKinesisFirehoseException(msg, lastThrownException); 225 | } else { 226 | LOGGER.warn(msg, lastThrownException); 227 | lastThrownException = null; 228 | } 229 | } 230 | 231 | /** 232 | * This method waits until the
buffer is flushed, an error occurs, or the thread is interrupted. 233 | */ 234 | private void flushSync() { 235 | while (firehoseProducer.getOutstandingRecordsCount() > 0 && !firehoseProducer.isFlushFailed()) { 236 | firehoseProducer.flush(); 237 | try { 238 | LOGGER.debug("Number of outstanding records before going to sleep: {}", firehoseProducer.getOutstandingRecordsCount()); 239 | Thread.sleep(500); 240 | } catch (InterruptedException ex) { 241 | LOGGER.warn("Flushing has been interrupted."); 242 | Thread.currentThread().interrupt(); 243 | break; 244 | } 245 | } 246 | } 247 | } 248 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/producer/IProducer.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License.
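The synchronous flush in FlinkKinesisFirehoseProducer is a poll-and-sleep loop: trigger a flush, then sleep while records remain outstanding and no flush failure has been flagged. The following is a minimal standalone sketch of the same pattern against a toy in-memory counter; `FlushLoopSketch` and its drain-by-two `flush()` are illustrative stand-ins, not the library's FirehoseProducer.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for the producer's outstanding-record bookkeeping; illustrative only.
public class FlushLoopSketch {
    private final AtomicInteger outstanding = new AtomicInteger(5);
    private volatile boolean flushFailed = false;

    /** Pretend flush: drains up to two records per call. */
    void flush() { outstanding.updateAndGet(n -> Math.max(0, n - 2)); }

    int getOutstandingRecordsCount() { return outstanding.get(); }

    /** Mirrors the poll-and-sleep structure of flushSync() above. */
    void flushSync() {
        while (getOutstandingRecordsCount() > 0 && !flushFailed) {
            flush();
            try {
                Thread.sleep(10); // the real loop sleeps 500 ms between polls
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt(); // restore interrupt status and give up
                break;
            }
        }
    }

    public static void main(String[] args) {
        FlushLoopSketch sketch = new FlushLoopSketch();
        sketch.flushSync();
        System.out.println("outstanding after flushSync: " + sketch.getOutstandingRecordsCount());
    }
}
```

The `isFlushFailed()` guard in the real loop matters: without it, a permanently failing flush would spin forever inside snapshotState and block checkpointing.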
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.producer; 20 | 21 | import java.util.concurrent.CompletableFuture; 22 | 23 | /** 24 | * Interface responsible for sending data to a specific sink 25 | */ 26 | public interface IProducer<R, T> { 27 | 28 | /** 29 | * This method should send data to a specific destination. 30 | * @param record the record to be sent 31 | * @return a {@code CompletableFuture} with the result of the operation. 32 | * @throws Exception if the record cannot be accepted 33 | */ 34 | CompletableFuture<T> addUserRecord(final R record) throws Exception; 35 | 36 | /** 37 | * This method should send data to a specific destination 38 | * @param record the record to be sent 39 | * @param operationTimeoutInMillis the expected operation timeout 40 | * @return a {@code CompletableFuture} with the result of the operation. 41 | * @throws Exception if the record cannot be accepted 42 | */ 43 | CompletableFuture<T> addUserRecord(final R record, final long operationTimeoutInMillis) throws Exception; 44 | 45 | /** 46 | * Destroys and releases any used resources. 47 | * @throws Exception if resources cannot be released cleanly 48 | */ 49 | void destroy() throws Exception; 50 | 51 | /** 52 | * Returns whether the producer has been destroyed or not 53 | * @return {@code true} if the producer has been destroyed 54 | */ 55 | boolean isDestroyed(); 56 | 57 | /** 58 | * Should return the number of outstanding records if the producer implements buffering. 59 | * @return an integer with the number of outstanding records. 60 | */ 61 | int getOutstandingRecordsCount(); 62 | 63 | /** 64 | * This method flushes the buffer immediately. 65 | */ 66 | void flush(); 67 | 68 | /** 69 | * Performs a synchronous flush on the buffer, waiting until the whole buffer is drained. 70 | */ 71 | void flushSync(); 72 | 73 | /** 74 | * A flag representing whether the flush has failed or not. 75 | * @return {@code boolean} representing the success or failure of the flush buffer operation.
76 | */ 77 | boolean isFlushFailed(); 78 | } 79 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/producer/impl/FirehoseProducerConfiguration.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
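The IProducer contract pairs each addUserRecord call with a CompletableFuture that completes when the record is (or fails to be) delivered. Below is a hypothetical, in-memory illustration of that shape; `InMemoryProducerSketch` is not the library's FirehoseProducer, and every record "succeeds" immediately instead of being batched to Firehose.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-memory producer following the IProducer shape above.
public class InMemoryProducerSketch {
    private final AtomicInteger outstanding = new AtomicInteger();

    public CompletableFuture<Boolean> addUserRecord(String record) {
        outstanding.incrementAndGet();
        // Complete asynchronously, as the real producer does from its flush path.
        return CompletableFuture.supplyAsync(() -> {
            outstanding.decrementAndGet();
            return true; // stands in for UserRecordResult.isSuccessful()
        });
    }

    public int getOutstandingRecordsCount() { return outstanding.get(); }

    public static void main(String[] args) {
        InMemoryProducerSketch producer = new InMemoryProducerSketch();
        boolean delivered = producer.addUserRecord("hello").join();
        System.out.println("delivered: " + delivered
                + ", outstanding: " + producer.getOutstandingRecordsCount());
    }
}
```

A caller that needs synchronous semantics can simply `join()` the future, which is essentially what the sink's flushSync/snapshotState path achieves in bulk by polling getOutstandingRecordsCount().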
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl; 20 | 21 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 22 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil; 24 | import org.apache.commons.lang3.Validate; 25 | 26 | import javax.annotation.Nonnull; 27 | import javax.annotation.Nullable; 28 | import java.util.Properties; 29 | 30 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAXIMUM_BATCH_BYTES; 31 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAX_BUFFER_SIZE; 32 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_BASE_BACKOFF_TIMEOUT; 33 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_FLUSH_MAX_NUMBER_OF_RETRIES; 34 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_FLUSH_TIMEOUT; 35 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_FULL_WAIT_TIMEOUT; 36 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_BACKOFF_TIMEOUT; 37 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES; 38 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_SIZE; 39 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT; 40 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_MAX_OPERATION_TIMEOUT; 41 | import static java.util.Optional.ofNullable; 42 | 43 | /** An immutable configuration class for {@link FirehoseProducer}. */ 44 | public class FirehoseProducerConfiguration { 45 | 46 | /** The default MAX producerBuffer size. Users should be able to specify a smaller producerBuffer if needed. 47 | * However, this value should be changed with caution, since Kinesis Firehose limits PutRecordBatch to 500 records or 4 MiB per call. 48 | * Please refer to https://docs.aws.amazon.com/firehose/latest/dev/limits.html for further reference. 49 | * */ 50 | private final int maxBufferSize; 51 | 52 | /** The maximum number of bytes that can be sent in a single PutRecordBatch operation */ 53 | private final int maxPutRecordBatchBytes; 54 | 55 | /** The timeout after which the producerBuffer must be flushed if no other flush condition has been met previously */ 56 | private final long bufferTimeoutInMillis; 57 | 58 | /** The wait time in milliseconds in case the producerBuffer is full */ 59 | private final long bufferFullWaitTimeoutInMillis; 60 | 61 | /** The interval between producerBuffer flushes */ 62 | private final long bufferTimeoutBetweenFlushes; 63 | 64 | /** The MAX number of retries in case of recoverable failures */ 65 | private final int numberOfRetries; 66 | 67 | /** The default MAX backoff timeout 68 | * https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ 69 | */ 70 | private final long maxBackOffInMillis; 71 | 72 | /** The default BASE timeout to be used for jittered backoff 73 | * https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/ 74 | */ 75 | private final long baseBackOffInMillis; 76 | 77 | /** The MAX timeout for a given addUserRecord operation
*/ 78 | private final long maxOperationTimeoutInMillis; 79 | 80 | private FirehoseProducerConfiguration(@Nonnull final Builder builder) { 81 | this.maxBufferSize = builder.maxBufferSize; 82 | this.maxPutRecordBatchBytes = builder.maxPutRecordBatchBytes; 83 | this.bufferTimeoutInMillis = builder.bufferTimeoutInMillis; 84 | this.bufferFullWaitTimeoutInMillis = builder.bufferFullWaitTimeoutInMillis; 85 | this.bufferTimeoutBetweenFlushes = builder.bufferTimeoutBetweenFlushes; 86 | this.numberOfRetries = builder.numberOfRetries; 87 | this.maxBackOffInMillis = builder.maxBackOffInMillis; 88 | this.baseBackOffInMillis = builder.baseBackOffInMillis; 89 | this.maxOperationTimeoutInMillis = builder.maxOperationTimeoutInMillis; 90 | } 91 | 92 | /** 93 | * The max producer buffer size; the maximum number of records that will be sent in a PutRecordBatch request. 94 | * @return the max producer buffer size. 95 | */ 96 | public int getMaxBufferSize() { 97 | return maxBufferSize; 98 | } 99 | 100 | /** 101 | * The maximum number of bytes that will be sent in a single PutRecordBatch operation. 102 | * @return the maximum number of PutRecordBatch bytes 103 | */ 104 | public int getMaxPutRecordBatchBytes() { 105 | return maxPutRecordBatchBytes; 106 | } 107 | 108 | /** 109 | * The timeout in milliseconds after which the producerBuffer must be flushed if no other flush condition has been met. 110 | * @return the buffer flush timeout in milliseconds 111 | */ 112 | public long getBufferTimeoutInMillis() { 113 | return bufferTimeoutInMillis; 114 | } 115 | 116 | /** 117 | * The wait time in milliseconds in case a producerBuffer is full. 118 | * @return The wait time in milliseconds 119 | */ 120 | public long getBufferFullWaitTimeoutInMillis() { 121 | return bufferFullWaitTimeoutInMillis; 122 | } 123 | 124 | /** 125 | * The interval between producerBuffer flushes. 
126 | * @return The interval between producerBuffer flushes 127 | */ 128 | public long getBufferTimeoutBetweenFlushes() { 129 | return bufferTimeoutBetweenFlushes; 130 | } 131 | 132 | /** 133 | * The max number of retries in case of recoverable failures. 134 | * @return the max number of retries in case of recoverable failures 135 | */ 136 | public int getNumberOfRetries() { 137 | return numberOfRetries; 138 | } 139 | 140 | /** 141 | * The max backoff timeout (https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) 142 | * @return The max backoff timeout 143 | */ 144 | public long getMaxBackOffInMillis() { 145 | return maxBackOffInMillis; 146 | } 147 | 148 | /** 149 | * The base backoff timeout (https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) 150 | * @return The base backoff timeout 151 | */ 152 | public long getBaseBackOffInMillis() { 153 | return baseBackOffInMillis; 154 | } 155 | 156 | /** 157 | * The max timeout for a given addUserRecord operation. 
158 | * @return the max timeout for a given addUserRecord operation 159 | */ 160 | public long getMaxOperationTimeoutInMillis() { 161 | return maxOperationTimeoutInMillis; 162 | } 163 | 164 | @Nonnull 165 | public static Builder builder(@Nonnull final Properties config) { 166 | final String region = config.getProperty(AWSConfigConstants.AWS_REGION); 167 | return builder(region).withProperties(config); 168 | } 169 | 170 | @Nonnull 171 | public static Builder builder(@Nullable final String region) { 172 | return new Builder(region); 173 | } 174 | 175 | public static class Builder { 176 | private int maxBufferSize = ProducerConfigConstants.DEFAULT_MAX_BUFFER_SIZE; 177 | private int maxPutRecordBatchBytes; 178 | private int numberOfRetries = ProducerConfigConstants.DEFAULT_NUMBER_OF_RETRIES; 179 | private long bufferTimeoutInMillis = ProducerConfigConstants.DEFAULT_MAX_BUFFER_TIMEOUT; 180 | private long maxOperationTimeoutInMillis = ProducerConfigConstants.DEFAULT_MAX_OPERATION_TIMEOUT; 181 | private long bufferFullWaitTimeoutInMillis = ProducerConfigConstants.DEFAULT_WAIT_TIME_FOR_BUFFER_FULL; 182 | private long bufferTimeoutBetweenFlushes = ProducerConfigConstants.DEFAULT_INTERVAL_BETWEEN_FLUSHES; 183 | private long maxBackOffInMillis = ProducerConfigConstants.DEFAULT_MAX_BACKOFF; 184 | private long baseBackOffInMillis = ProducerConfigConstants.DEFAULT_BASE_BACKOFF; 185 | 186 | public Builder(@Nullable final String region) { 187 | this.maxPutRecordBatchBytes = AWSUtil.getDefaultMaxPutRecordBatchBytes(region); 188 | } 189 | 190 | @Nonnull 191 | public FirehoseProducerConfiguration build() { 192 | return new FirehoseProducerConfiguration(this); 193 | } 194 | 195 | /** 196 | * The max producer buffer size; the maximum number of records that will be sent in a PutRecordBatch request. 
197 | * @param maxBufferSize the max producer buffer size 198 | * @return this builder 199 | */ 200 | @Nonnull 201 | public Builder withMaxBufferSize(final int maxBufferSize) { 202 | Validate.isTrue(maxBufferSize > 0 && maxBufferSize <= DEFAULT_MAX_BUFFER_SIZE, 203 | String.format("Buffer size must be between 1 and %d", DEFAULT_MAX_BUFFER_SIZE)); 204 | 205 | this.maxBufferSize = maxBufferSize; 206 | return this; 207 | } 208 | 209 | /** 210 | * The maximum number of bytes that will be sent in a single PutRecordBatch operation. 211 | * @param maxPutRecordBatchBytes the maximum number of PutRecordBatch bytes 212 | * @return this builder 213 | */ 214 | @Nonnull 215 | public Builder withMaxPutRecordBatchBytes(final int maxPutRecordBatchBytes) { 216 | Validate.isTrue(maxPutRecordBatchBytes > 0 && maxPutRecordBatchBytes <= DEFAULT_MAXIMUM_BATCH_BYTES, 217 | String.format("Maximum batch size in bytes must be between 1 and %d", DEFAULT_MAXIMUM_BATCH_BYTES)); 218 | 219 | this.maxPutRecordBatchBytes = maxPutRecordBatchBytes; 220 | return this; 221 | } 222 | 223 | /** 224 | * The max number of retries in case of recoverable failures. 225 | * @param numberOfRetries the max number of retries in case of recoverable failures. 226 | * @return this builder 227 | */ 228 | @Nonnull 229 | public Builder withNumberOfRetries(final int numberOfRetries) { 230 | Validate.isTrue(numberOfRetries >= 0, "Number of retries cannot be negative."); 231 | 232 | this.numberOfRetries = numberOfRetries; 233 | return this; 234 | } 235 | 236 | /** 237 | * The timeout in milliseconds after which the producerBuffer must be flushed if no other flush condition has been met. 
238 | * @param bufferTimeoutInMillis the buffer flush timeout in milliseconds 239 | * @return this builder 240 | */ 241 | @Nonnull 242 | public Builder withBufferTimeoutInMillis(final long bufferTimeoutInMillis) { 243 | Validate.isTrue(bufferTimeoutInMillis >= 0, "Flush timeout cannot be negative."); 244 | 245 | this.bufferTimeoutInMillis = bufferTimeoutInMillis; 246 | return this; 247 | } 248 | 249 | /** 250 | * The max timeout for a given addUserRecord operation. 251 | * @param maxOperationTimeoutInMillis The max timeout for a given addUserRecord operation 252 | * @return this builder 253 | */ 254 | @Nonnull 255 | public Builder withMaxOperationTimeoutInMillis(final long maxOperationTimeoutInMillis) { 256 | Validate.isTrue(maxOperationTimeoutInMillis >= 0, "Max operation timeout cannot be negative."); 257 | 258 | this.maxOperationTimeoutInMillis = maxOperationTimeoutInMillis; 259 | return this; 260 | } 261 | 262 | /** 263 | * The wait time in milliseconds in case a producerBuffer is full. 264 | * @param bufferFullWaitTimeoutInMillis the wait time in milliseconds in case a producerBuffer is full 265 | * @return this builder 266 | */ 267 | @Nonnull 268 | public Builder withBufferFullWaitTimeoutInMillis(final long bufferFullWaitTimeoutInMillis) { 269 | Validate.isTrue(bufferFullWaitTimeoutInMillis >= 0, "Buffer full wait timeout cannot be negative."); 270 | 271 | this.bufferFullWaitTimeoutInMillis = bufferFullWaitTimeoutInMillis; 272 | return this; 273 | } 274 | 275 | /** 276 | * The interval between producerBuffer flushes. 
277 | * @param bufferTimeoutBetweenFlushes the interval between producerBuffer flushes 278 | * @return this builder 279 | */ 280 | @Nonnull 281 | public Builder withBufferTimeoutBetweenFlushes(final long bufferTimeoutBetweenFlushes) { 282 | Validate.isTrue(bufferTimeoutBetweenFlushes >= 0, "Interval between flushes cannot be negative."); 283 | 284 | this.bufferTimeoutBetweenFlushes = bufferTimeoutBetweenFlushes; 285 | return this; 286 | } 287 | 288 | /** 289 | * The max backoff timeout (https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) 290 | * @param maxBackOffInMillis the max backoff timeout 291 | * @return this builder 292 | */ 293 | @Nonnull 294 | public Builder withMaxBackOffInMillis(final long maxBackOffInMillis) { 295 | Validate.isTrue(maxBackOffInMillis >= 0, "Max backoff timeout cannot be negative."); 296 | 297 | this.maxBackOffInMillis = maxBackOffInMillis; 298 | return this; 299 | } 300 | 301 | /** 302 | * The base backoff timeout (https://aws.amazon.com/blogs/architecture/exponential-backoff-and-jitter/) 303 | * @param baseBackOffInMillis The base backoff timeout 304 | * @return this builder 305 | */ 306 | @Nonnull 307 | public Builder withBaseBackOffInMillis(final long baseBackOffInMillis) { 308 | Validate.isTrue(baseBackOffInMillis >= 0, "Base backoff timeout cannot be negative."); 309 | 310 | this.baseBackOffInMillis = baseBackOffInMillis; 311 | return this; 312 | } 313 | 314 | /** 315 | * Populates this Builder with values from the given Properties. 
316 | * @param config the configuration properties 317 | * @return this builder 318 | */ 319 | @Nonnull 320 | public Builder withProperties(@Nonnull final Properties config) { 321 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_MAX_SIZE)) 322 | .map(Integer::parseInt) 323 | .ifPresent(this::withMaxBufferSize); 324 | 325 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES)) 326 | .map(Integer::parseInt) 327 | .ifPresent(this::withMaxPutRecordBatchBytes); 328 | 329 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_FLUSH_MAX_NUMBER_OF_RETRIES)) 330 | .map(Integer::parseInt) 331 | .ifPresent(this::withNumberOfRetries); 332 | 333 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT)) 334 | .map(Long::parseLong) 335 | .ifPresent(this::withBufferTimeoutInMillis); 336 | 337 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_FULL_WAIT_TIMEOUT)) 338 | .map(Long::parseLong) 339 | .ifPresent(this::withBufferFullWaitTimeoutInMillis); 340 | 341 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_FLUSH_TIMEOUT)) 342 | .map(Long::parseLong) 343 | .ifPresent(this::withBufferTimeoutBetweenFlushes); 344 | 345 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_MAX_BACKOFF_TIMEOUT)) 346 | .map(Long::parseLong) 347 | .ifPresent(this::withMaxBackOffInMillis); 348 | 349 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_BUFFER_BASE_BACKOFF_TIMEOUT)) 350 | .map(Long::parseLong) 351 | .ifPresent(this::withBaseBackOffInMillis); 352 | 353 | ofNullable(config.getProperty(FIREHOSE_PRODUCER_MAX_OPERATION_TIMEOUT)) 354 | .map(Long::parseLong) 355 | .ifPresent(this::withMaxOperationTimeoutInMillis); 356 | 357 | return this; 358 | } 359 | } 360 | } 361 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/AssumeRoleCredentialsProvider.java: 
-------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentialsProvider; 22 | import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 24 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.factory.CredentialProviderFactory; 25 | import com.amazonaws.services.securitytoken.AWSSecurityTokenService; 26 | import com.amazonaws.services.securitytoken.AWSSecurityTokenServiceClientBuilder; 27 | 28 | import javax.annotation.Nonnull; 29 | import java.util.Properties; 30 | 31 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER; 32 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.getCredentialProviderType; 33 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateAssumeRoleCredentialsProvider; 34 | 35 | public class AssumeRoleCredentialsProvider extends CredentialProvider { 36 | 37 | public AssumeRoleCredentialsProvider(final Properties properties, final String providerKey) { 38 | super(validateAssumeRoleCredentialsProvider(properties, providerKey), providerKey); 39 | } 40 | 41 | public AssumeRoleCredentialsProvider(final Properties properties) { 42 | this(properties, AWS_CREDENTIALS_PROVIDER); 43 | } 44 | 45 | @Override 46 | public AWSCredentialsProvider getAwsCredentialsProvider() { 47 | final String baseCredentialsProviderKey = AWSConfigConstants.roleCredentialsProvider(providerKey); 48 | final AWSConfigConstants.CredentialProviderType baseCredentialsProviderType = getCredentialProviderType(properties, baseCredentialsProviderKey); 49 | final CredentialProvider baseCredentialsProvider = 50 | CredentialProviderFactory.newCredentialProvider(baseCredentialsProviderType, properties, baseCredentialsProviderKey); 51 | final AWSSecurityTokenService baseCredentials = AWSSecurityTokenServiceClientBuilder.standard() 52 | .withCredentials(baseCredentialsProvider.getAwsCredentialsProvider()) 53 | .withRegion(properties.getProperty(AWSConfigConstants.AWS_REGION)) 54 | .build(); 55 | 56 | return createAwsCredentialsProvider( 57 | properties.getProperty(AWSConfigConstants.roleArn(providerKey)), 58 | properties.getProperty(AWSConfigConstants.roleSessionName(providerKey)), 59 | properties.getProperty(AWSConfigConstants.externalId(providerKey)), 60 | baseCredentials); 61 | } 62 | 63 | AWSCredentialsProvider createAwsCredentialsProvider(@Nonnull String roleArn, 64 | @Nonnull String roleSessionName, 65 | @Nonnull String externalId, 66 | @Nonnull AWSSecurityTokenService securityTokenService) { 67 | return new STSAssumeRoleSessionCredentialsProvider.Builder(roleArn, roleSessionName) 68 | .withExternalId(externalId) 69 | .withStsClient(securityTokenService) 70 | 
.build(); 71 | } 72 | } 73 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/BasicCredentialProvider.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentials; 22 | import com.amazonaws.auth.AWSCredentialsProvider; 23 | import com.amazonaws.auth.BasicAWSCredentials; 24 | 25 | import java.util.Properties; 26 | 27 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.accessKeyId; 28 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.secretKey; 29 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateBasicProviderConfiguration; 30 | 31 | public class BasicCredentialProvider extends CredentialProvider { 32 | 33 | public BasicCredentialProvider(final Properties properties, final String providerKey) { 34 | super(validateBasicProviderConfiguration(properties, providerKey), providerKey); 35 | } 36 | 37 | public BasicCredentialProvider(Properties properties) { 38 | this(properties, null); 39 | } 40 | 41 | @Override 42 | public AWSCredentialsProvider getAwsCredentialsProvider() { 43 | return new AWSCredentialsProvider() { 44 | @Override 45 | public AWSCredentials getCredentials() { 46 | return new BasicAWSCredentials(properties.getProperty(accessKeyId(providerKey)), properties.getProperty(secretKey(providerKey))); 47 | } 48 | 49 | @Override 50 | public void refresh() { 51 | 52 | } 53 | }; 54 | } 55 | } 56 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/CredentialProvider.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. 
The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentialsProvider; 22 | import com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil; 23 | 24 | import java.util.Properties; 25 | 26 | public abstract class CredentialProvider { 27 | 28 | final Properties properties; 29 | final String providerKey; 30 | 31 | CredentialProvider(final Properties properties, final String providerKey) { 32 | this.properties = AWSUtil.validateConfiguration(properties); 33 | this.providerKey = providerKey == null ? 
"" : providerKey; 34 | } 35 | 36 | public CredentialProvider(final Properties properties) { 37 | this(properties, null); 38 | } 39 | 40 | public abstract AWSCredentialsProvider getAwsCredentialsProvider(); 41 | 42 | protected Properties getProperties() { 43 | return this.properties; 44 | } 45 | 46 | protected String getProviderKey() { 47 | return this.providerKey; 48 | } 49 | } 50 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/DefaultCredentialProvider.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentialsProvider; 22 | import com.amazonaws.auth.DefaultAWSCredentialsProviderChain; 23 | 24 | import java.util.Properties; 25 | 26 | public class DefaultCredentialProvider extends CredentialProvider { 27 | 28 | public DefaultCredentialProvider(final Properties properties, final String providerKey) { 29 | super(properties, providerKey); 30 | } 31 | 32 | public DefaultCredentialProvider(final Properties properties) { 33 | this(properties, null); 34 | } 35 | 36 | @Override 37 | public AWSCredentialsProvider getAwsCredentialsProvider() { 38 | return new DefaultAWSCredentialsProviderChain(); 39 | } 40 | } 41 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/EnvironmentCredentialProvider.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentialsProvider; 22 | import com.amazonaws.auth.EnvironmentVariableCredentialsProvider; 23 | 24 | import java.util.Properties; 25 | 26 | public class EnvironmentCredentialProvider extends CredentialProvider { 27 | 28 | 29 | public EnvironmentCredentialProvider(final Properties properties, final String providerKey) { 30 | super(properties, providerKey); 31 | } 32 | 33 | public EnvironmentCredentialProvider(final Properties properties) { 34 | this(properties, null); 35 | } 36 | 37 | @Override 38 | public AWSCredentialsProvider getAwsCredentialsProvider() { 39 | return new EnvironmentVariableCredentialsProvider(); 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/ProfileCredentialProvider.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentialsProvider; 22 | import com.amazonaws.auth.profile.ProfileCredentialsProvider; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 24 | import org.apache.commons.lang3.StringUtils; 25 | 26 | import java.util.Properties; 27 | 28 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER; 29 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateProfileProviderConfiguration; 30 | 31 | public class ProfileCredentialProvider extends CredentialProvider { 32 | 33 | 34 | public ProfileCredentialProvider(final Properties properties, final String providerKey) { 35 | super(validateProfileProviderConfiguration(properties, providerKey), providerKey); 36 | } 37 | 38 | public ProfileCredentialProvider(final Properties properties) { 39 | this(properties, AWS_CREDENTIALS_PROVIDER); 40 | } 41 | 42 | @Override 43 | public AWSCredentialsProvider getAwsCredentialsProvider() { 44 | final String profileName = properties.getProperty(AWSConfigConstants.profileName(providerKey)); 45 | final String profilePath = properties.getProperty(AWSConfigConstants.profilePath(providerKey)); 46 | 47 | return StringUtils.isEmpty(profilePath) ? new ProfileCredentialsProvider(profileName) : 48 | new ProfileCredentialsProvider(profilePath, profileName); 49 | } 50 | } 51 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/SystemCredentialProvider.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. 
See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentialsProvider; 22 | import com.amazonaws.auth.SystemPropertiesCredentialsProvider; 23 | 24 | import java.util.Properties; 25 | 26 | public class SystemCredentialProvider extends CredentialProvider { 27 | 28 | 29 | public SystemCredentialProvider(final Properties properties, final String providerKey) { 30 | super(properties, providerKey); 31 | } 32 | 33 | public SystemCredentialProvider(final Properties properties) { 34 | this(properties, null); 35 | } 36 | 37 | @Override 38 | public AWSCredentialsProvider getAwsCredentialsProvider() { 39 | return new SystemPropertiesCredentialsProvider(); 40 | } 41 | } 42 | -------------------------------------------------------------------------------- /src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/factory/CredentialProviderFactory.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. 
See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.factory; 20 | 21 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType; 22 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.AssumeRoleCredentialsProvider; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.BasicCredentialProvider; 24 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.CredentialProvider; 25 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.DefaultCredentialProvider; 26 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.EnvironmentCredentialProvider; 27 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.ProfileCredentialProvider; 28 | import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.SystemCredentialProvider; 29 | import org.apache.commons.lang3.Validate; 30 | 31 | import java.util.Properties; 32 | 33 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER; 
34 | 35 | public final class CredentialProviderFactory { 36 | 37 | private CredentialProviderFactory() { 38 | 39 | } 40 | 41 | public static CredentialProvider newCredentialProvider(final CredentialProviderType credentialProviderType, 42 | final Properties awsConfigProps, 43 | final String awsConfigCredentialProviderKey) { 44 | Validate.notNull(awsConfigProps, "AWS configuration properties cannot be null"); 45 | 46 | if (credentialProviderType == null) { 47 | return new DefaultCredentialProvider(awsConfigProps, awsConfigCredentialProviderKey); 48 | } 49 | 50 | switch (credentialProviderType) { 51 | case BASIC: 52 | // For basic provider, allow the top-level provider key to be missing 53 | if (AWS_CREDENTIALS_PROVIDER.equals(awsConfigCredentialProviderKey) 54 | && !awsConfigProps.containsKey(AWS_CREDENTIALS_PROVIDER)) { 55 | return new BasicCredentialProvider(awsConfigProps, null); 56 | } else { 57 | return new BasicCredentialProvider(awsConfigProps, awsConfigCredentialProviderKey); 58 | } 59 | case PROFILE: 60 | return new ProfileCredentialProvider(awsConfigProps, awsConfigCredentialProviderKey); 61 | case ENV_VARIABLES: 62 | return new EnvironmentCredentialProvider(awsConfigProps, awsConfigCredentialProviderKey); 63 | case SYS_PROPERTIES: 64 | return new SystemCredentialProvider(awsConfigProps, awsConfigCredentialProviderKey); 65 | case ASSUME_ROLE: 66 | return new AssumeRoleCredentialsProvider(awsConfigProps, awsConfigCredentialProviderKey); 67 | default: 68 | case AUTO: 69 | return new DefaultCredentialProvider(awsConfigProps, awsConfigCredentialProviderKey); 70 | } 71 | } 72 | 73 | public static CredentialProvider newCredentialProvider(final CredentialProviderType credentialProviderType, 74 | final Properties awsConfigProps) { 75 | return newCredentialProvider(credentialProviderType, awsConfigProps, AWS_CREDENTIALS_PROVIDER); 76 | } 77 | } 78 | -------------------------------------------------------------------------------- 
/src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/serialization/JsonSerializationSchema.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.serialization;

import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.SerializationException;
import org.apache.commons.lang3.Validate;
import org.apache.flink.api.common.serialization.SerializationSchema;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.core.JsonProcessingException;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.ObjectMapper;

public class JsonSerializationSchema<T> implements SerializationSchema<T> {

    private static final ObjectMapper mapper = new ObjectMapper();

    /**
     * Serializes the incoming element into a JSON byte array.
     *
     * @param element The incoming element to be serialized
     * @return The serialized element.
     */
    @Override
    public byte[] serialize(T element) {
        Validate.notNull(element);
        try {
            return mapper.writeValueAsBytes(element);
        } catch (JsonProcessingException e) {
            throw new SerializationException("Failed trying to serialize", e);
        }
    }
}
--------------------------------------------------------------------------------
/src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/serialization/KinesisFirehoseSerializationSchema.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.serialization;

import java.io.Serializable;
import java.nio.ByteBuffer;

public interface KinesisFirehoseSerializationSchema<T> extends Serializable {

    ByteBuffer serialize(T element);
}
--------------------------------------------------------------------------------
/src/main/java/com/amazonaws/services/kinesisanalytics/flink/connectors/util/AWSUtil.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.util;

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants;
import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.CredentialProvider;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehoseClientBuilder;
import org.apache.commons.lang3.StringUtils;
import org.apache.commons.lang3.Validate;

import javax.annotation.Nonnull;
import javax.annotation.Nullable;
import java.util.Properties;

import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_KINESIS_FIREHOSE_ENDPOINT;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_KINESIS_FIREHOSE_ENDPOINT_SIGNING_REGION;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_REGION;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.accessKeyId;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.profileName;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.roleArn;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.roleSessionName;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.secretKey;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAXIMUM_BATCH_BYTES;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.REDUCED_QUOTA_MAXIMUM_THROUGHPUT;

public final class AWSUtil {

    private AWSUtil() {
    }

    public static AmazonKinesisFirehose createKinesisFirehoseClientFromConfiguration(
            @Nonnull final Properties configProps,
            @Nonnull final CredentialProvider credentialsProvider) {
        validateConfiguration(configProps);
        Validate.notNull(credentialsProvider, "Credential Provider cannot be null.");

        AmazonKinesisFirehoseClientBuilder firehoseClientBuilder = AmazonKinesisFirehoseClientBuilder
                .standard()
                .withCredentials(credentialsProvider.getAwsCredentialsProvider());

        final String region = configProps.getProperty(AWS_REGION, null);
        final String firehoseEndpoint = configProps.getProperty(AWS_KINESIS_FIREHOSE_ENDPOINT, null);
        final String firehoseEndpointSigningRegion = configProps.getProperty(
                AWS_KINESIS_FIREHOSE_ENDPOINT_SIGNING_REGION, null);

        firehoseClientBuilder = (region != null)
                ? firehoseClientBuilder.withRegion(region)
                : firehoseClientBuilder.withEndpointConfiguration(
                        new AwsClientBuilder.EndpointConfiguration(firehoseEndpoint, firehoseEndpointSigningRegion));

        return firehoseClientBuilder.build();
    }

    public static Properties validateConfiguration(final Properties configProps) {
        Validate.notNull(configProps, "Configuration properties cannot be null.");

        if (!configProps.containsKey(AWS_REGION) ^ (configProps.containsKey(AWS_KINESIS_FIREHOSE_ENDPOINT)
                && configProps.containsKey(AWS_KINESIS_FIREHOSE_ENDPOINT_SIGNING_REGION))) {

            throw new IllegalArgumentException(
                    "Either the AWS region or the AWS Firehose endpoint and endpoint signing region must be specified.");
        }

        return configProps;
    }

    public static Properties validateBasicProviderConfiguration(final Properties configProps, final String providerKey) {
        validateConfiguration(configProps);

        Validate.isTrue(configProps.containsKey(accessKeyId(providerKey)),
                "AWS access key must be specified with credential provider BASIC.");
        Validate.isTrue(configProps.containsKey(secretKey(providerKey)),
                "AWS secret key must be specified with credential provider BASIC.");

        return configProps;
    }

    public static Properties validateBasicProviderConfiguration(final Properties configProps) {
        return validateBasicProviderConfiguration(configProps, null);
    }

    public static boolean containsBasicProperties(final Properties configProps, final String providerKey) {
        Validate.notNull(configProps);
        return configProps.containsKey(accessKeyId(providerKey)) && configProps.containsKey(secretKey(providerKey));
    }

    public static AWSConfigConstants.CredentialProviderType getCredentialProviderType(final Properties configProps,
                                                                                      final String providerKey) {
        if (providerKey == null || !configProps.containsKey(providerKey)) {
            return containsBasicProperties(configProps, providerKey)
                    ? AWSConfigConstants.CredentialProviderType.BASIC
                    : AWSConfigConstants.CredentialProviderType.AUTO;
        }

        final String providerTypeString = configProps.getProperty(providerKey);
        if (StringUtils.isEmpty(providerTypeString)) {
            return AWSConfigConstants.CredentialProviderType.AUTO;
        }

        try {
            return AWSConfigConstants.CredentialProviderType.valueOf(providerTypeString);
        } catch (IllegalArgumentException e) {
            return AWSConfigConstants.CredentialProviderType.AUTO;
        }
    }

    public static Properties validateProfileProviderConfiguration(final Properties configProps, final String providerKey) {
        validateConfiguration(configProps);
        Validate.notBlank(providerKey);

        Validate.isTrue(configProps.containsKey(profileName(providerKey)),
                "AWS profile name should be specified with credential provider PROFILE.");

        return configProps;
    }

    public static Properties validateProfileProviderConfiguration(final Properties configProps) {
        return validateProfileProviderConfiguration(configProps, AWS_CREDENTIALS_PROVIDER);
    }

    public static Properties validateAssumeRoleCredentialsProvider(final Properties configProps, final String providerKey) {
        validateConfiguration(configProps);

        Validate.isTrue(configProps.containsKey(roleArn(providerKey)),
                "AWS role arn to be assumed must be provided with credential provider type ASSUME_ROLE");
        Validate.isTrue(configProps.containsKey(roleSessionName(providerKey)),
                "AWS role session name must be provided with credential provider type ASSUME_ROLE");

        return configProps;
    }

    public static Properties validateAssumeRoleCredentialsProvider(final Properties configProps) {
        return validateAssumeRoleCredentialsProvider(configProps, AWS_CREDENTIALS_PROVIDER);
    }

    /**
     * Computes a sensible maximum put-record batch size based on region.
     * The service allows a maximum batch size of 4 MiB per call, which would exceed the
     * 1 MiB/second quota in some regions.
     * https://docs.aws.amazon.com/firehose/latest/dev/limits.html
     *
     * If the region is null, this falls back to the lower batch size.
     * Customers can override this value in the producer properties.
     *
     * @param region the region the producer is running in
     * @return a sensible maximum batch size
     */
    public static int getDefaultMaxPutRecordBatchBytes(@Nullable final String region) {
        if (region != null) {
            switch (region) {
                case "us-east-1":
                case "us-west-2":
                case "eu-west-1":
                    return DEFAULT_MAXIMUM_BATCH_BYTES;
            }
        }
        return REDUCED_QUOTA_MAXIMUM_THROUGHPUT;
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/config/AWSConfigConstantsTest.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.config;

import org.testng.annotations.Test;

import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;

public class AWSConfigConstantsTest {

    @Test
    public void testAccessKeyId() {
        assertThat(AWSConfigConstants.accessKeyId("prefix")).isEqualTo("prefix.basic.aws_access_key_id");
    }

    @Test
    public void testAccessKeyId_null() {
        assertThat(AWSConfigConstants.accessKeyId(null)).isEqualTo("aws_access_key_id");
    }

    @Test
    public void testAccessKeyId_empty() {
        assertThat(AWSConfigConstants.accessKeyId("")).isEqualTo("aws_access_key_id");
    }

    @Test
    public void testAccessKeyId_noPrefix() {
        assertThat(AWSConfigConstants.accessKeyId()).isEqualTo("aws_access_key_id");
    }

    @Test
    public void testSecretKey() {
        assertThat(AWSConfigConstants.secretKey("prefix")).isEqualTo("prefix.basic.aws_secret_access_key");
    }

    @Test
    public void testSecretKey_null() {
        assertThat(AWSConfigConstants.secretKey(null)).isEqualTo("aws_secret_access_key");
    }

    @Test
    public void testSecretKey_empty() {
        assertThat(AWSConfigConstants.secretKey("")).isEqualTo("aws_secret_access_key");
    }

    @Test
    public void testSecretKey_noPrefix() {
        assertThat(AWSConfigConstants.secretKey()).isEqualTo("aws_secret_access_key");
    }

    @Test
    public void testProfilePath() {
        assertThat(AWSConfigConstants.profilePath("prefix")).isEqualTo("prefix.profile.path");
    }

    @Test
    public void testProfilePath_empty() {
        assertThatExceptionOfType(IllegalArgumentException.class)
                .isThrownBy(() -> AWSConfigConstants.profilePath(""));
    }

    @Test
    public void testProfileName() {
        assertThat(AWSConfigConstants.profileName("prefix")).isEqualTo("prefix.profile.name");
    }

    @Test
    public void testProfileName_empty() {
        assertThatExceptionOfType(IllegalArgumentException.class)
                .isThrownBy(() -> AWSConfigConstants.profileName(""));
    }

    @Test
    public void testRoleArn() {
        assertThat(AWSConfigConstants.roleArn("prefix")).isEqualTo("prefix.role.arn");
    }

    @Test
    public void testRoleArn_empty() {
        assertThatExceptionOfType(IllegalArgumentException.class)
                .isThrownBy(() -> AWSConfigConstants.roleArn(""));
    }

    @Test
    public void testRoleSessionName() {
        assertThat(AWSConfigConstants.roleSessionName("prefix")).isEqualTo("prefix.role.sessionName");
    }

    @Test
    public void testRoleSessionName_empty() {
        assertThatExceptionOfType(IllegalArgumentException.class)
                .isThrownBy(() -> AWSConfigConstants.roleSessionName(""));
    }

    @Test
    public void testExternalId() {
        assertThat(AWSConfigConstants.externalId("prefix")).isEqualTo("prefix.role.externalId");
    }

    @Test
    public void testExternalId_empty() {
        assertThatExceptionOfType(IllegalArgumentException.class)
                .isThrownBy(() -> AWSConfigConstants.externalId(""));
    }

    @Test
    public void testRoleCredentialsProvider() {
        assertThat(AWSConfigConstants.roleCredentialsProvider("prefix")).isEqualTo("prefix.role.provider");
    }

    @Test
    public void testRoleCredentialsProvider_empty() {
        assertThatExceptionOfType(IllegalArgumentException.class)
                .isThrownBy(() -> AWSConfigConstants.roleCredentialsProvider(""));
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/firehose/examples/AssumeRoleSimpleStreamString.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.firehose.examples;

import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants;
import com.amazonaws.services.kinesisanalytics.flink.connectors.producer.FlinkKinesisFirehoseProducer;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.util.Properties;

import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.ASSUME_ROLE;

/**
 * This example application streams dummy data to the specified Firehose delivery stream using the
 * assume-role authentication mechanism.
 * See https://docs.aws.amazon.com/kinesisanalytics/latest/java/examples-cross.html for more information.
 */
public class AssumeRoleSimpleStreamString {

    private static final String SINK_NAME = "Flink Kinesis Firehose Sink";
    private static final String STREAM_NAME = "";
    private static final String ROLE_ARN = "";
    private static final String ROLE_SESSION_NAME = "";
    private static final String REGION = "us-east-1";

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        DataStream<String> simpleStringStream = env.addSource(new SimpleStreamString.EventsGenerator());

        Properties configProps = new Properties();
        configProps.setProperty(AWSConfigConstants.AWS_CREDENTIALS_PROVIDER, ASSUME_ROLE.name());
        configProps.setProperty(AWSConfigConstants.AWS_ROLE_ARN, ROLE_ARN);
        configProps.setProperty(AWSConfigConstants.AWS_ROLE_SESSION_NAME, ROLE_SESSION_NAME);
        configProps.setProperty(AWSConfigConstants.AWS_REGION, REGION);

        FlinkKinesisFirehoseProducer<String> producer =
                new FlinkKinesisFirehoseProducer<>(STREAM_NAME, new SimpleStringSchema(), configProps);

        simpleStringStream.addSink(producer).name(SINK_NAME);
        env.execute();
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/firehose/examples/SimpleStreamString.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.firehose.examples;

import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants;
import com.amazonaws.services.kinesisanalytics.flink.connectors.producer.FlinkKinesisFirehoseProducer;
import org.apache.commons.lang3.RandomStringUtils;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

import java.util.Properties;

public class SimpleStreamString {

    private static final String SINK_NAME = "Flink Kinesis Firehose Sink";

    public static void main(String[] args) throws Exception {
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        DataStream<String> simpleStringStream = env.addSource(new EventsGenerator());

        Properties configProps = new Properties();
        configProps.setProperty(AWSConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
        configProps.setProperty(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
        configProps.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");

        FlinkKinesisFirehoseProducer<String> producer =
                new FlinkKinesisFirehoseProducer<>("firehose-delivery-stream-name", new SimpleStringSchema(),
                        configProps);

        simpleStringStream.addSink(producer).name(SINK_NAME);
        env.execute();
    }

    /**
     * Data generator that creates strings starting with a sequence number followed by a dash and 12 random characters.
     */
    public static class EventsGenerator implements SourceFunction<String> {
        private boolean running = true;

        @Override
        public void run(SourceContext<String> ctx) throws Exception {
            long seq = 0;
            while (running) {
                Thread.sleep(10);
                ctx.collect((seq++) + "-" + RandomStringUtils.randomAlphabetic(12));
            }
        }

        @Override
        public void cancel() {
            running = false;
        }
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/firehose/examples/SimpleWordCount.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.firehose.examples;

import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants;
import com.amazonaws.services.kinesisanalytics.flink.connectors.producer.FlinkKinesisFirehoseProducer;
import com.amazonaws.services.kinesisanalytics.flink.connectors.serialization.JsonSerializationSchema;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

import java.util.Arrays;
import java.util.Properties;

public class SimpleWordCount {

    private static final String SINK_NAME = "Flink Kinesis Firehose Sink";

    public static void main(String[] args) throws Exception {

        // set up the execution environment
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        // get input data
        DataStream<String> text = env.fromElements(WordCountData.WORDS);

        DataStream<Tuple2<String, Integer>> counts =
                // normalize and split each line
                text.map(line -> line.toLowerCase().split("\\W+"))
                        // convert each split line into pairs (2-tuples) containing: (word, 1)
                        .flatMap(new FlatMapFunction<String[], Tuple2<String, Integer>>() {
                            @Override
                            public void flatMap(String[] value, Collector<Tuple2<String, Integer>> out) throws Exception {
                                Arrays.stream(value)
                                        .filter(t -> t.length() > 0)
                                        .forEach(t -> out.collect(new Tuple2<>(t, 1)));
                            }
                        })
                        // group by the tuple field "0" and sum up tuple field "1"
                        .keyBy(0)
                        .sum(1);

        Properties configProps = new Properties();
        configProps.setProperty(AWSConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
        configProps.setProperty(AWSConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");
        configProps.setProperty(AWSConfigConstants.AWS_REGION, "us-east-1");

        FlinkKinesisFirehoseProducer<Tuple2<String, Integer>> producer =
                new FlinkKinesisFirehoseProducer<>("firehose-delivery-stream", new JsonSerializationSchema<>(),
                        configProps);

        counts.addSink(producer).name(SINK_NAME);
        env.execute();
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/firehose/examples/WordCountData.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.firehose.examples;

import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class WordCountData {

    public static final String[] WORDS = new String[] {
        "To be, or not to be,--that is the question:--",
        "Whether 'tis nobler in the mind to suffer",
        "The slings and arrows of outrageous fortune",
        "Or to take arms against a sea of troubles,",
        "And by opposing end them?--To die,--to sleep,--",
        "No more; and by a sleep to say we end",
        "The heartache, and the thousand natural shocks",
        "That flesh is heir to,--'tis a consummation",
        "Devoutly to be wish'd. To die,--to sleep;--",
        "To sleep! perchance to dream:--ay, there's the rub;",
        "For in that sleep of death what dreams may come,",
        "When we have shuffled off this mortal coil,",
        "Must give us pause: there's the respect",
        "That makes calamity of so long life;",
        "For who would bear the whips and scorns of time,",
        "The oppressor's wrong, the proud man's contumely,",
        "The pangs of despis'd love, the law's delay,",
        "The insolence of office, and the spurns",
        "That patient merit of the unworthy takes,",
        "When he himself might his quietus make",
        "With a bare bodkin? who would these fardels bear,",
        "To grunt and sweat under a weary life,",
        "But that the dread of something after death,--",
        "The undiscover'd country, from whose bourn",
        "No traveller returns,--puzzles the will,",
        "And makes us rather bear those ills we have",
        "Than fly to others that we know not of?",
        "Thus conscience does make cowards of us all;",
        "And thus the native hue of resolution",
        "Is sicklied o'er with the pale cast of thought;",
        "And enterprises of great pith and moment,",
        "With this regard, their currents turn awry,",
        "And lose the name of action.--Soft you now!",
        "The fair Ophelia!--Nymph, in thy orisons",
        "Be all my sins remember'd."
    };

    public static DataSet<String> getDefaultTextLineDataSet(ExecutionEnvironment env) {
        return env.fromElements(WORDS);
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/producer/FlinkKinesisFirehoseProducerTest.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.producer; 20 | 21 | import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.FlinkKinesisFirehoseException; 22 | import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.RecordCouldNotBeBuffered; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.RecordCouldNotBeSentException; 24 | import com.amazonaws.services.kinesisanalytics.flink.connectors.serialization.KinesisFirehoseSerializationSchema; 25 | import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose; 26 | import com.amazonaws.services.kinesisfirehose.model.Record; 27 | import org.apache.flink.api.common.serialization.SerializationSchema; 28 | import org.apache.flink.configuration.Configuration; 29 | import org.apache.flink.runtime.state.FunctionSnapshotContext; 30 | import org.mockito.Mock; 31 | import org.mockito.MockitoAnnotations; 32 | import org.slf4j.Logger; 33 | import org.slf4j.LoggerFactory; 34 | import org.testng.annotations.BeforeMethod; 35 | import org.testng.annotations.DataProvider; 36 | import org.testng.annotations.Test; 37 | 38 | import javax.annotation.Nonnull; 39 | import java.util.Properties; 40 | import java.util.concurrent.CompletableFuture; 41 | 42 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType; 43 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl.FirehoseProducer.UserRecordResult; 44 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.DEFAULT_DELIVERY_STREAM; 45 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.DEFAULT_TEST_ERROR_MSG; 46 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.getContext; 47 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.getKinesisFirehoseSerializationSchema; 48 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.getSerializationSchema; 49 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.getStandardProperties; 50 | import static org.apache.flink.streaming.api.functions.sink.SinkFunction.Context; 51 | import static org.assertj.core.api.Assertions.assertThat; 52 | import static org.assertj.core.api.Assertions.assertThatExceptionOfType; 53 | import static org.mockito.ArgumentMatchers.any; 54 | import static org.mockito.Mockito.doNothing; 55 | import static org.mockito.Mockito.doReturn; 56 | import static org.mockito.Mockito.mock; 57 | import static org.mockito.Mockito.never; 58 | import static org.mockito.Mockito.spy; 59 | import static org.mockito.Mockito.times; 60 | import static org.mockito.Mockito.verify; 61 | import static org.mockito.Mockito.when; 62 | import static org.testng.Assert.assertNotNull; 63 | import static org.testng.Assert.fail; 64 | 65 | public class FlinkKinesisFirehoseProducerTest { 66 | private static final Logger LOGGER = LoggerFactory.getLogger(FlinkKinesisFirehoseProducerTest.class); 67 | 68 | private FlinkKinesisFirehoseProducer flinkKinesisFirehoseProducer; 69 | private Context context; 70 | private final Configuration properties = new Configuration(); 71 | 72 | @Mock 73 | private AmazonKinesisFirehose kinesisFirehoseClient; 74 | 75 | @Mock 76 | private IProducer firehoseProducer; 77 | 78 | @BeforeMethod 79 | public void init() { 80 | MockitoAnnotations.initMocks(this); 81 | 82 | flinkKinesisFirehoseProducer = createProducer(); 83 | doReturn(firehoseProducer).when(flinkKinesisFirehoseProducer).createFirehoseProducer(); 84 | 
doReturn(kinesisFirehoseClient).when(flinkKinesisFirehoseProducer).createKinesisFirehoseClient(); 85 | 86 | context = getContext(); 87 | } 88 | 89 | @DataProvider(name = "kinesisFirehoseSerializationProvider") 90 | public Object[][] kinesisFirehoseSerializationProvider() { 91 | 92 | return new Object[][]{ 93 | {DEFAULT_DELIVERY_STREAM, getKinesisFirehoseSerializationSchema(), getStandardProperties(), null}, 94 | {DEFAULT_DELIVERY_STREAM, getKinesisFirehoseSerializationSchema(), getStandardProperties(), CredentialProviderType.BASIC}, 95 | }; 96 | } 97 | 98 | @DataProvider(name = "serializationSchemaProvider") 99 | public Object[][] serializationSchemaProvider() { 100 | return new Object[][] { 101 | {DEFAULT_DELIVERY_STREAM, getSerializationSchema(), getStandardProperties(), null}, 102 | {DEFAULT_DELIVERY_STREAM, getSerializationSchema(), getStandardProperties(), CredentialProviderType.BASIC} 103 | }; 104 | } 105 | 106 | @Test(dataProvider = "kinesisFirehoseSerializationProvider") 107 | public void testFlinkKinesisFirehoseProducerHappyCase(final String deliveryStream, 108 | final KinesisFirehoseSerializationSchema schema, 109 | final Properties configProps, 110 | final CredentialProviderType credentialType) { 111 | FlinkKinesisFirehoseProducer firehoseProducer = (credentialType != null) ? 112 | new FlinkKinesisFirehoseProducer<>(deliveryStream, schema, configProps, credentialType) : 113 | new FlinkKinesisFirehoseProducer<>(deliveryStream, schema, configProps); 114 | assertNotNull(firehoseProducer); 115 | } 116 | 117 | @Test(dataProvider = "serializationSchemaProvider") 118 | public void testFlinkKinesisFirehoseProducerWithSerializationSchemaHappyCase(final String deliveryStream , 119 | final SerializationSchema schema, 120 | final Properties configProps, 121 | CredentialProviderType credentialType) { 122 | FlinkKinesisFirehoseProducer firehoseProducer = (credentialType != null) ? 
123 | new FlinkKinesisFirehoseProducer<>(deliveryStream, schema, configProps, credentialType) : 124 | new FlinkKinesisFirehoseProducer<>(deliveryStream, schema, configProps); 125 | 126 | assertNotNull(firehoseProducer); 127 | } 128 | 129 | /** 130 | * Verifies that an async error is rethrown when closing the sink (producer). 131 | */ 132 | @Test 133 | public void testAsyncErrorRethrownOnClose() throws Exception { 134 | try { 135 | flinkKinesisFirehoseProducer.setFailOnError(true); 136 | when(firehoseProducer.addUserRecord(any(Record.class))) 137 | .thenReturn(getUserRecordResult(true, false)); 138 | 139 | flinkKinesisFirehoseProducer.open(properties); 140 | flinkKinesisFirehoseProducer.invoke("Test", context); 141 | Thread.sleep(1000); 142 | flinkKinesisFirehoseProducer.close(); 143 | 144 | LOGGER.warn("Should not reach this line"); 145 | fail(); 146 | } catch (FlinkKinesisFirehoseException ex) { 147 | LOGGER.info("Exception has been thrown inside testAsyncErrorRethrownOnClose"); 148 | exceptionAssert(ex); 149 | 150 | } finally { 151 | verify(flinkKinesisFirehoseProducer, times(1)).open(properties); 152 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test", context); 153 | verify(flinkKinesisFirehoseProducer, times(1)).close(); 154 | } 155 | } 156 | 157 | /** 158 | * Verifies that an async error is rethrown during invoke. 
159 | */ 160 | @Test 161 | public void testAsyncErrorRethrownOnInvoke() throws Exception { 162 | try { 163 | flinkKinesisFirehoseProducer.setFailOnError(true); 164 | when(firehoseProducer.addUserRecord(any(Record.class))) 165 | .thenReturn(getUserRecordResult(true, false)) 166 | .thenReturn(getUserRecordResult(false, true)); 167 | 168 | flinkKinesisFirehoseProducer.open(properties); 169 | flinkKinesisFirehoseProducer.invoke("Test", context); 170 | Thread.sleep(1000); 171 | flinkKinesisFirehoseProducer.invoke("Test2", context); 172 | LOGGER.warn("Should not reach this line"); 173 | fail(); 174 | 175 | } catch (FlinkKinesisFirehoseException ex) { 176 | LOGGER.info("Exception has been thrown inside testAsyncErrorRethrownOnInvoke"); 177 | exceptionAssert(ex); 178 | 179 | } finally { 180 | verify(flinkKinesisFirehoseProducer, times(1)).open(properties); 181 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test", context); 182 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test2", context); 183 | verify(flinkKinesisFirehoseProducer, never()).close(); 184 | } 185 | } 186 | 187 | @Test 188 | public void testAsyncErrorRethrownWhenRecordFailedToSend() throws Exception { 189 | flinkKinesisFirehoseProducer.setFailOnError(true); 190 | 191 | UserRecordResult recordResult = new UserRecordResult(); 192 | recordResult.setSuccessful(false); 193 | recordResult.setException(new RuntimeException("A bad thing has happened")); 194 | 195 | when(firehoseProducer.addUserRecord(any(Record.class))) 196 | .thenReturn(CompletableFuture.completedFuture(recordResult)); 197 | 198 | flinkKinesisFirehoseProducer.open(properties); 199 | flinkKinesisFirehoseProducer.invoke("Test", context); 200 | 201 | assertThatExceptionOfType(FlinkKinesisFirehoseException.class) 202 | .isThrownBy(() -> flinkKinesisFirehoseProducer.close()) 203 | .withMessageContaining("An exception has been thrown while trying to process a record") 204 | .withCauseInstanceOf(RecordCouldNotBeSentException.class) 205 
| .withStackTraceContaining("A bad thing has happened"); 206 | } 207 | 208 | /** 209 | * Verifies that an async error is not rethrown on invoke when failOnError is disabled. 210 | * This is the default scenario for FlinkKinesisFirehoseProducer. 211 | */ 212 | @Test 213 | public void testAsyncErrorNotRethrowOnInvoke() throws Exception { 214 | flinkKinesisFirehoseProducer.setFailOnError(false); 215 | 216 | when(firehoseProducer.addUserRecord(any(Record.class))) 217 | .thenReturn(getUserRecordResult(true, false)) 218 | .thenReturn(getUserRecordResult(true, true)); 219 | 220 | flinkKinesisFirehoseProducer.open(properties); 221 | flinkKinesisFirehoseProducer.invoke("Test", context); 222 | flinkKinesisFirehoseProducer.invoke("Test2", context); 223 | 224 | verify(flinkKinesisFirehoseProducer, times(1)).open(properties); 225 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test", context); 226 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test2", context); 227 | verify(flinkKinesisFirehoseProducer, never()).close(); 228 | } 229 | 230 | @Test 231 | public void testFlinkKinesisFirehoseProducerHappyWorkflow() throws Exception { 232 | 233 | when(firehoseProducer.addUserRecord(any(Record.class))) 234 | .thenReturn(getUserRecordResult(false, true)); 235 | 236 | flinkKinesisFirehoseProducer.open(properties); 237 | flinkKinesisFirehoseProducer.invoke("Test", context); 238 | flinkKinesisFirehoseProducer.close(); 239 | 240 | verify(flinkKinesisFirehoseProducer, times(1)).open(properties); 241 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test", context); 242 | verify(flinkKinesisFirehoseProducer, times(1)).close(); 243 | } 244 | 245 | @Test 246 | public void testFlinkKinesisFirehoseProducerCloseAndFlushHappyWorkflow() throws Exception { 247 | 248 | when(firehoseProducer.addUserRecord(any(Record.class))) 249 | .thenReturn(getUserRecordResult(false, true)); 250 | 251 | doNothing().when(firehoseProducer).flush(); 252 | 253 | 
when(firehoseProducer.getOutstandingRecordsCount()).thenReturn(1).thenReturn(0); 254 | when(firehoseProducer.isFlushFailed()).thenReturn(false); 255 | 256 | flinkKinesisFirehoseProducer.open(properties); 257 | flinkKinesisFirehoseProducer.invoke("Test", context); 258 | flinkKinesisFirehoseProducer.close(); 259 | 260 | verify(firehoseProducer, times(1)).flush(); 261 | } 262 | 263 | @Test 264 | public void testFlinkKinesisFirehoseProducerTakeSnapshotHappyWorkflow() throws Exception { 265 | 266 | when(firehoseProducer.addUserRecord(any(Record.class))) 267 | .thenReturn(getUserRecordResult(false, true)); 268 | 269 | doNothing().when(firehoseProducer).flush(); 270 | 271 | when(firehoseProducer.getOutstandingRecordsCount()).thenReturn(1).thenReturn(1).thenReturn(1).thenReturn(0); 272 | when(firehoseProducer.isFlushFailed()).thenReturn(false); 273 | 274 | FunctionSnapshotContext functionContext = mock(FunctionSnapshotContext.class); 275 | flinkKinesisFirehoseProducer.open(properties); 276 | flinkKinesisFirehoseProducer.invoke("Test", context); 277 | flinkKinesisFirehoseProducer.snapshotState(functionContext); 278 | 279 | verify(firehoseProducer, times(1)).flush(); 280 | } 281 | 282 | @Test(expectedExceptions = IllegalStateException.class, 283 | expectedExceptionsMessageRegExp = "An error has occurred trying to flush the buffer synchronously.*") 284 | public void testFlinkKinesisFirehoseProducerTakeSnapshotFailedFlush() throws Exception { 285 | 286 | when(firehoseProducer.addUserRecord(any(Record.class))) 287 | .thenReturn(getUserRecordResult(false, true)); 288 | 289 | doNothing().when(firehoseProducer).flush(); 290 | 291 | when(firehoseProducer.getOutstandingRecordsCount()).thenReturn(1).thenReturn(1); 292 | when(firehoseProducer.isFlushFailed()).thenReturn(false).thenReturn(true); 293 | 294 | FunctionSnapshotContext functionContext = mock(FunctionSnapshotContext.class); 295 | flinkKinesisFirehoseProducer.open(properties); 296 | flinkKinesisFirehoseProducer.invoke("Test", 
context); 297 | flinkKinesisFirehoseProducer.snapshotState(functionContext); 298 | 299 | fail("We should not reach here."); 300 | } 301 | 302 | /** 303 | * Verifies that async errors are not rethrown when closing the sink (producer). 304 | * This is the default scenario for FlinkKinesisFirehoseProducer. 305 | */ 306 | @Test 307 | public void testAsyncErrorNotRethrownOnClose() throws Exception { 308 | 309 | flinkKinesisFirehoseProducer.setFailOnError(false); 310 | 311 | when(firehoseProducer.addUserRecord(any(Record.class))) 312 | .thenReturn(getUserRecordResult(true, false)) 313 | .thenReturn(getUserRecordResult(true, false)); 314 | 315 | flinkKinesisFirehoseProducer.open(properties); 316 | flinkKinesisFirehoseProducer.invoke("Test", context); 317 | flinkKinesisFirehoseProducer.invoke("Test2", context); 318 | flinkKinesisFirehoseProducer.close(); 319 | 320 | verify(flinkKinesisFirehoseProducer, times(1)).open(properties); 321 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test", context); 322 | verify(flinkKinesisFirehoseProducer, times(1)).invoke("Test2", context); 323 | verify(flinkKinesisFirehoseProducer, times(1)).close(); 324 | } 325 | 326 | private void exceptionAssert(FlinkKinesisFirehoseException ex) { 327 | final String expectedErrorMsg = "An exception has been thrown while trying to process a record"; 328 | LOGGER.info(ex.getMessage()); 329 | assertThat(ex.getMessage()).isEqualTo(expectedErrorMsg); 330 | 331 | assertThat(ex.getCause()).isInstanceOf(RecordCouldNotBeBuffered.class); 332 | 333 | LOGGER.info(ex.getCause().getMessage()); 334 | assertThat(ex.getCause().getMessage()).isEqualTo(DEFAULT_TEST_ERROR_MSG); 335 | } 336 | 337 | @Nonnull 338 | private CompletableFuture<UserRecordResult> getUserRecordResult(final boolean isFailedRecord, final boolean isSuccessful) { 339 | UserRecordResult recordResult = new UserRecordResult().setSuccessful(isSuccessful); 340 | 341 | if (isFailedRecord) { 342 | CompletableFuture<UserRecordResult> future = new 
CompletableFuture<>(); 343 | future.completeExceptionally(new RecordCouldNotBeBuffered(DEFAULT_TEST_ERROR_MSG)); 344 | return future; 345 | } else { 346 | return CompletableFuture.completedFuture(recordResult); 347 | } 348 | } 349 | 350 | @Nonnull 351 | private FlinkKinesisFirehoseProducer createProducer() { 352 | return spy(new FlinkKinesisFirehoseProducer<>(DEFAULT_DELIVERY_STREAM, 353 | getKinesisFirehoseSerializationSchema(), getStandardProperties())); 354 | } 355 | } 356 | -------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/producer/impl/FirehoseProducerConfigurationTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl; 20 | 21 | import org.testng.annotations.Test; 22 | 23 | import javax.annotation.Nonnull; 24 | import java.util.Properties; 25 | 26 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_BASE_BACKOFF; 27 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_INTERVAL_BETWEEN_FLUSHES; 28 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAXIMUM_BATCH_BYTES; 29 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAX_BACKOFF; 30 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAX_BUFFER_SIZE; 31 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAX_BUFFER_TIMEOUT; 32 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAX_OPERATION_TIMEOUT; 33 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_NUMBER_OF_RETRIES; 34 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_WAIT_TIME_FOR_BUFFER_FULL; 35 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_BASE_BACKOFF_TIMEOUT; 36 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_FLUSH_MAX_NUMBER_OF_RETRIES; 37 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_FLUSH_TIMEOUT; 38 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_FULL_WAIT_TIMEOUT; 39 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_BACKOFF_TIMEOUT; 40 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES; 41 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_SIZE; 42 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT; 43 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_MAX_OPERATION_TIMEOUT; 44 | import static org.assertj.core.api.Assertions.assertThat; 45 | import static org.assertj.core.api.Assertions.assertThatExceptionOfType; 46 | 47 | public class FirehoseProducerConfigurationTest { 48 | private static final String REGION = "us-east-1"; 49 | 50 | @Test 51 | public void testBuilderWithDefaultProperties() { 52 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration.builder(REGION).build(); 53 | 54 | assertThat(configuration.getMaxBufferSize()).isEqualTo(DEFAULT_MAX_BUFFER_SIZE); 55 | assertThat(configuration.getMaxPutRecordBatchBytes()).isEqualTo(DEFAULT_MAXIMUM_BATCH_BYTES); 56 | assertThat(configuration.getNumberOfRetries()).isEqualTo(DEFAULT_NUMBER_OF_RETRIES); 57 | assertThat(configuration.getBufferFullWaitTimeoutInMillis()).isEqualTo(DEFAULT_WAIT_TIME_FOR_BUFFER_FULL); 58 | assertThat(configuration.getBufferTimeoutInMillis()).isEqualTo(DEFAULT_MAX_BUFFER_TIMEOUT); 59 | assertThat(configuration.getBufferTimeoutBetweenFlushes()).isEqualTo(DEFAULT_INTERVAL_BETWEEN_FLUSHES); 60 | assertThat(configuration.getMaxBackOffInMillis()).isEqualTo(DEFAULT_MAX_BACKOFF); 61 | 
assertThat(configuration.getBaseBackOffInMillis()).isEqualTo(DEFAULT_BASE_BACKOFF); 62 | assertThat(configuration.getMaxOperationTimeoutInMillis()).isEqualTo(DEFAULT_MAX_OPERATION_TIMEOUT); 63 | } 64 | 65 | @Test 66 | public void testBuilderWithMaxBufferSize() { 67 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 68 | .builder(REGION) 69 | .withMaxBufferSize(250) 70 | .build(); 71 | 72 | assertThat(configuration.getMaxBufferSize()).isEqualTo(250); 73 | } 74 | 75 | @Test 76 | public void testBuilderWithMaxBufferSizeRejectsZero() { 77 | assertThatExceptionOfType(IllegalArgumentException.class) 78 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withMaxBufferSize(0)) 79 | .withMessageContaining("Buffer size must be between 1 and 500"); 80 | } 81 | 82 | @Test 83 | public void testBuilderWithMaxBufferSizeRejectsUpperLimit() { 84 | assertThatExceptionOfType(IllegalArgumentException.class) 85 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withMaxBufferSize(501)) 86 | .withMessageContaining("Buffer size must be between 1 and 500"); 87 | } 88 | 89 | @Test 90 | public void testBuilderWithMaxPutRecordBatchBytes() { 91 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 92 | .builder(REGION) 93 | .withMaxPutRecordBatchBytes(100) 94 | .build(); 95 | 96 | assertThat(configuration.getMaxPutRecordBatchBytes()).isEqualTo(100); 97 | } 98 | 99 | @Test 100 | public void testBuilderWithMaxPutRecordBatchBytesRejectsZero() { 101 | assertThatExceptionOfType(IllegalArgumentException.class) 102 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withMaxPutRecordBatchBytes(0)) 103 | .withMessageContaining("Maximum batch size in bytes must be between 1 and 4194304"); 104 | } 105 | 106 | @Test 107 | public void testBuilderWithMaxPutRecordBatchBytesRejectsUpperLimit() { 108 | assertThatExceptionOfType(IllegalArgumentException.class) 109 | .isThrownBy(() -> 
FirehoseProducerConfiguration.builder(REGION).withMaxPutRecordBatchBytes(4194305)) 110 | .withMessageContaining("Maximum batch size in bytes must be between 1 and 4194304"); 111 | } 112 | 113 | @Test 114 | public void testBuilderWithNumberOfRetries() { 115 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 116 | .builder(REGION) 117 | .withNumberOfRetries(100) 118 | .build(); 119 | 120 | assertThat(configuration.getNumberOfRetries()).isEqualTo(100); 121 | } 122 | 123 | @Test 124 | public void testBuilderWithNumberOfRetriesRejectsNegative() { 125 | assertThatExceptionOfType(IllegalArgumentException.class) 126 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withNumberOfRetries(-1)) 127 | .withMessageContaining("Number of retries cannot be negative"); 128 | } 129 | 130 | @Test 131 | public void testBuilderWithBufferTimeoutInMillis() { 132 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 133 | .builder(REGION) 134 | .withBufferTimeoutInMillis(12345L) 135 | .build(); 136 | 137 | assertThat(configuration.getBufferTimeoutInMillis()).isEqualTo(12345L); 138 | } 139 | 140 | @Test 141 | public void testBuilderWithBufferTimeoutInMillisRejects() { 142 | assertThatExceptionOfType(IllegalArgumentException.class) 143 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withBufferTimeoutInMillis(-1)) 144 | .withMessageContaining("Flush timeout should be greater than 0"); 145 | } 146 | 147 | @Test 148 | public void testBuilderWithMaxOperationTimeoutInMillis() { 149 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 150 | .builder(REGION) 151 | .withMaxOperationTimeoutInMillis(999L) 152 | .build(); 153 | 154 | assertThat(configuration.getMaxOperationTimeoutInMillis()).isEqualTo(999L); 155 | } 156 | 157 | @Test 158 | public void testBuilderWithMaxOperationTimeoutInMillisRejectsNegative() { 159 | assertThatExceptionOfType(IllegalArgumentException.class) 160 | .isThrownBy(() 
-> FirehoseProducerConfiguration.builder(REGION).withMaxOperationTimeoutInMillis(-1)) 161 | .withMessageContaining("Max operation timeout should be greater than 0"); 162 | } 163 | 164 | @Test 165 | public void testBuilderWithBufferFullWaitTimeoutInMillis() { 166 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 167 | .builder(REGION) 168 | .withBufferFullWaitTimeoutInMillis(1L) 169 | .build(); 170 | 171 | assertThat(configuration.getBufferFullWaitTimeoutInMillis()).isEqualTo(1L); 172 | } 173 | 174 | @Test 175 | public void testBuilderWithBufferFullWaitTimeoutInMillisRejectsNegative() { 176 | assertThatExceptionOfType(IllegalArgumentException.class) 177 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withBufferFullWaitTimeoutInMillis(-1)) 178 | .withMessageContaining("Buffer full waiting timeout should be greater than 0"); 179 | } 180 | 181 | @Test 182 | public void testBuilderWithBufferTimeoutBetweenFlushes() { 183 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 184 | .builder(REGION) 185 | .withBufferTimeoutBetweenFlushes(2L) 186 | .build(); 187 | 188 | assertThat(configuration.getBufferTimeoutBetweenFlushes()).isEqualTo(2L); 189 | } 190 | 191 | @Test 192 | public void testBuilderWithBufferTimeoutBetweenFlushesRejectsNegative() { 193 | assertThatExceptionOfType(IllegalArgumentException.class) 194 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withBufferTimeoutBetweenFlushes(-1)) 195 | .withMessageContaining("Interval between flushes cannot be negative"); 196 | } 197 | 198 | @Test 199 | public void testBuilderWithMaxBackOffInMillis() { 200 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 201 | .builder(REGION) 202 | .withMaxBackOffInMillis(3L) 203 | .build(); 204 | 205 | assertThat(configuration.getMaxBackOffInMillis()).isEqualTo(3L); 206 | } 207 | 208 | @Test 209 | public void testBuilderWithMaxBackOffInMillisRejectsNegative() { 210 | 
assertThatExceptionOfType(IllegalArgumentException.class) 211 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withMaxBackOffInMillis(-1)) 212 | .withMessageContaining("Max backoff timeout should be greater than 0"); 213 | } 214 | 215 | @Test 216 | public void testBuilderWithBaseBackOffInMillis() { 217 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 218 | .builder(REGION) 219 | .withBaseBackOffInMillis(4L) 220 | .build(); 221 | 222 | assertThat(configuration.getBaseBackOffInMillis()).isEqualTo(4L); 223 | } 224 | 225 | @Test 226 | public void testBuilderWithBaseBackOffInMillisRejectsNegative() { 227 | assertThatExceptionOfType(IllegalArgumentException.class) 228 | .isThrownBy(() -> FirehoseProducerConfiguration.builder(REGION).withBaseBackOffInMillis(-1)) 229 | .withMessageContaining("Base backoff timeout should be greater than 0"); 230 | } 231 | 232 | @Test 233 | public void testBuilderWithMaxBufferSizeFromProperties() { 234 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 235 | .builder(REGION) 236 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_MAX_SIZE, "250")) 237 | .build(); 238 | 239 | assertThat(configuration.getMaxBufferSize()).isEqualTo(250); 240 | } 241 | 242 | @Test 243 | public void testBuilderWithMaxPutRecordBatchBytesFromProperties() { 244 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 245 | .builder(REGION) 246 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES, "100")) 247 | .build(); 248 | 249 | assertThat(configuration.getMaxPutRecordBatchBytes()).isEqualTo(100); 250 | } 251 | 252 | @Test 253 | public void testBuilderWithNumberOfRetriesFromProperties() { 254 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 255 | .builder(REGION) 256 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_FLUSH_MAX_NUMBER_OF_RETRIES, "100")) 257 | .build(); 258 | 259 | 
assertThat(configuration.getNumberOfRetries()).isEqualTo(100); 260 | } 261 | 262 | @Test 263 | public void testBuilderWithBufferTimeoutInMillisFromProperties() { 264 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 265 | .builder(REGION) 266 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT, "12345")) 267 | .build(); 268 | 269 | assertThat(configuration.getBufferTimeoutInMillis()).isEqualTo(12345L); 270 | } 271 | 272 | @Test 273 | public void testBuilderWithMaxOperationTimeoutInMillisFromProperties() { 274 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 275 | .builder(REGION) 276 | .withProperties(props(FIREHOSE_PRODUCER_MAX_OPERATION_TIMEOUT, "999")) 277 | .build(); 278 | 279 | assertThat(configuration.getMaxOperationTimeoutInMillis()).isEqualTo(999L); 280 | } 281 | 282 | @Test 283 | public void testBuilderWithBufferFullWaitTimeoutInMillisFromProperties() { 284 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 285 | .builder(REGION) 286 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_FULL_WAIT_TIMEOUT, "1")) 287 | .build(); 288 | 289 | assertThat(configuration.getBufferFullWaitTimeoutInMillis()).isEqualTo(1L); 290 | } 291 | 292 | @Test 293 | public void testBuilderWithBufferTimeoutBetweenFlushesFromProperties() { 294 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 295 | .builder(REGION) 296 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_FLUSH_TIMEOUT, "2")) 297 | .build(); 298 | 299 | assertThat(configuration.getBufferTimeoutBetweenFlushes()).isEqualTo(2L); 300 | } 301 | 302 | @Test 303 | public void testBuilderWithMaxBackOffInMillisFromProperties() { 304 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 305 | .builder(REGION) 306 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_MAX_BACKOFF_TIMEOUT, "3")) 307 | .build(); 308 | 309 | assertThat(configuration.getMaxBackOffInMillis()).isEqualTo(3L); 310 | } 311 
| 312 | @Test 313 | public void testBuilderWithBaseBackOffInMillisFromProperties() { 314 | FirehoseProducerConfiguration configuration = FirehoseProducerConfiguration 315 | .builder(REGION) 316 | .withProperties(props(FIREHOSE_PRODUCER_BUFFER_BASE_BACKOFF_TIMEOUT, "4")) 317 | .build(); 318 | 319 | assertThat(configuration.getBaseBackOffInMillis()).isEqualTo(4L); 320 | } 321 | 322 | @Nonnull 323 | private Properties props(@Nonnull final String key, @Nonnull final String value) { 324 | Properties properties = new Properties(); 325 | properties.setProperty(key, value); 326 | return properties; 327 | } 328 | } -------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/producer/impl/FirehoseProducerTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl; 20 | 21 | import com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl.FirehoseProducer.FirehoseThreadFactory; 22 | import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose; 23 | import com.amazonaws.services.kinesisfirehose.model.PutRecordBatchRequest; 24 | import com.amazonaws.services.kinesisfirehose.model.PutRecordBatchResponseEntry; 25 | import com.amazonaws.services.kinesisfirehose.model.PutRecordBatchResult; 26 | import com.amazonaws.services.kinesisfirehose.model.Record; 27 | import org.apache.commons.lang3.RandomStringUtils; 28 | import org.mockito.ArgumentCaptor; 29 | import org.mockito.Captor; 30 | import org.mockito.Mock; 31 | import org.mockito.MockitoAnnotations; 32 | import org.slf4j.Logger; 33 | import org.slf4j.LoggerFactory; 34 | import org.testng.annotations.BeforeMethod; 35 | import org.testng.annotations.Test; 36 | 37 | import javax.annotation.Nonnull; 38 | import java.nio.ByteBuffer; 39 | import java.util.ArrayList; 40 | import java.util.List; 41 | import java.util.Properties; 42 | import java.util.concurrent.Callable; 43 | import java.util.concurrent.CompletableFuture; 44 | import java.util.concurrent.ExecutorService; 45 | import java.util.concurrent.Executors; 46 | import java.util.concurrent.Future; 47 | import java.util.stream.IntStream; 48 | 49 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_REGION; 50 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAX_BUFFER_SIZE; 51 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES; 52 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT; 53 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.producer.impl.FirehoseProducer.UserRecordResult; 54 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.testutils.TestUtils.DEFAULT_DELIVERY_STREAM; 55 | import static org.assertj.core.api.Assertions.assertThat; 56 | import static org.mockito.ArgumentMatchers.any; 57 | import static org.mockito.Mockito.mock; 58 | import static org.mockito.Mockito.times; 59 | import static org.mockito.Mockito.verify; 60 | import static org.mockito.Mockito.when; 61 | import static org.testng.Assert.fail; 62 | 63 | /** 64 | * All tests rely on a best effort to simulate and wait for how a multi-threaded system should behave, 65 | * aiming for deterministic results; however, the results and timing depend on the operating system scheduler and the JVM. 66 | * So, if any of these tests fail, you may want to increase the sleep timeout or perhaps comment out the failing ones. 67 | */ 68 | public class FirehoseProducerTest { 69 | private static final Logger LOGGER = LoggerFactory.getLogger(FirehoseProducerTest.class); 70 | 71 | private static final int KB_512 = 512 * 1_024; 72 | 73 | @Mock 74 | private AmazonKinesisFirehose firehoseClient; 75 | 76 | private FirehoseProducer<Record> firehoseProducer; 77 | 78 | @Captor 79 | private ArgumentCaptor<PutRecordBatchRequest> putRecordCaptor; 80 | 81 | @BeforeMethod 82 | public void init() { 83 | MockitoAnnotations.initMocks(this); 84 | 85 | this.firehoseProducer = createFirehoseProducer(); 86 | } 87 | 88 | @Test 89 | public void testFirehoseProducerSingleThreadHappyCase() throws Exception { 90 | PutRecordBatchResult successResult = new PutRecordBatchResult(); 91 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(successResult); 92 | 93 | for (int i = 0; i < DEFAULT_MAX_BUFFER_SIZE; ++i) { 94 | addRecord(firehoseProducer); 95 | } 96 | Thread.sleep(2000); 97 | 98 | LOGGER.debug("Number of outstanding records: {}", firehoseProducer.getOutstandingRecordsCount()); 99
| assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 100 | } 101 | 102 | @Test 103 | public void testFirehoseProducerMultiThreadHappyCase() throws Exception { 104 | PutRecordBatchResult successResult = new PutRecordBatchResult(); 105 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(successResult); 106 | 107 | ExecutorService exec = Executors.newFixedThreadPool(4); 108 | List<Callable<CompletableFuture<UserRecordResult>>> futures = new ArrayList<>(); 109 | 110 | for (int j = 0; j < DEFAULT_MAX_BUFFER_SIZE; ++j) { 111 | futures.add(() -> addRecord(firehoseProducer)); 112 | } 113 | 114 | exec.invokeAll(futures); 115 | Thread.currentThread().join(3000); 116 | LOGGER.debug("Number of outstanding items: {}", firehoseProducer.getOutstandingRecordsCount()); 117 | assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 118 | } 119 | 120 | @Test 121 | public void testFirehoseProducerMultiThreadFlushSyncHappyCase() throws Exception { 122 | PutRecordBatchResult successResult = mock(PutRecordBatchResult.class); 123 | ArgumentCaptor<PutRecordBatchRequest> captor = ArgumentCaptor.forClass(PutRecordBatchRequest.class); 124 | 125 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(successResult); 126 | 127 | ExecutorService exec = Executors.newFixedThreadPool(4); 128 | List<Callable<CompletableFuture<UserRecordResult>>> futures = new ArrayList<>(); 129 | 130 | for (int j = 0; j < 400; ++j) { 131 | futures.add(() -> addRecord(firehoseProducer)); 132 | } 133 | 134 | List<Future<CompletableFuture<UserRecordResult>>> results = exec.invokeAll(futures); 135 | 136 | for (Future<CompletableFuture<UserRecordResult>> f : results) { 137 | while (!f.isDone()) { 138 | Thread.sleep(100); 139 | } 140 | CompletableFuture<UserRecordResult> fi = f.get(); 141 | UserRecordResult r = fi.get(); 142 | assertThat(r.isSuccessful()).isTrue(); 143 | } 144 | firehoseProducer.flushSync(); 145 | 146 | LOGGER.debug("Number of outstanding items: {}", firehoseProducer.getOutstandingRecordsCount()); 147 | verify(firehoseClient).putRecordBatch(captor.capture()); 148 | 
assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 149 | assertThat(firehoseProducer.isFlushFailed()).isFalse(); 150 | } 151 | 152 | 153 | @Test 154 | public void testFirehoseProducerMultiThreadFlushAndWaitHappyCase() throws Exception { 155 | PutRecordBatchResult successResult = mock(PutRecordBatchResult.class); 156 | ArgumentCaptor<PutRecordBatchRequest> captor = ArgumentCaptor.forClass(PutRecordBatchRequest.class); 157 | 158 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(successResult); 159 | 160 | ExecutorService exec = Executors.newFixedThreadPool(4); 161 | List<Callable<CompletableFuture<UserRecordResult>>> futures = new ArrayList<>(); 162 | 163 | for (int j = 0; j < 400; ++j) { 164 | futures.add(() -> addRecord(firehoseProducer)); 165 | } 166 | 167 | List<Future<CompletableFuture<UserRecordResult>>> results = exec.invokeAll(futures); 168 | 169 | for (Future<CompletableFuture<UserRecordResult>> f : results) { 170 | while (!f.isDone()) { 171 | Thread.sleep(100); 172 | } 173 | CompletableFuture<UserRecordResult> fi = f.get(); 174 | UserRecordResult r = fi.get(); 175 | assertThat(r.isSuccessful()).isTrue(); 176 | } 177 | 178 | while (firehoseProducer.getOutstandingRecordsCount() > 0 && !firehoseProducer.isFlushFailed()) { 179 | firehoseProducer.flush(); 180 | try { 181 | Thread.sleep(500); 182 | } catch (InterruptedException ex) { 183 | fail(); 184 | } 185 | } 186 | 187 | LOGGER.debug("Number of outstanding items: {}", firehoseProducer.getOutstandingRecordsCount()); 188 | verify(firehoseClient).putRecordBatch(captor.capture()); 189 | assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 190 | assertThat(firehoseProducer.isFlushFailed()).isFalse(); 191 | } 192 | 193 | @Test 194 | public void testFirehoseProducerSingleThreadTimeoutExpiredHappyCase() throws Exception { 195 | PutRecordBatchResult successResult = new PutRecordBatchResult(); 196 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(successResult); 197 | 198 | for (int i = 0; i < 100; ++i) { 199 | addRecord(firehoseProducer); 200 | } 201 | Thread.sleep(2000); 202
| assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 203 | } 204 | 205 | @Test 206 | public void testFirehoseProducerSingleThreadBufferIsFullHappyCase() throws Exception { 207 | PutRecordBatchResult successResult = new PutRecordBatchResult(); 208 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(successResult); 209 | 210 | for (int i = 0; i < 2 * DEFAULT_MAX_BUFFER_SIZE; ++i) { 211 | addRecord(firehoseProducer); 212 | } 213 | 214 | Thread.sleep(2000); 215 | assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 216 | } 217 | 218 | /** 219 | * This test checks whether the consumer thread has performed the work; no exception can be 220 | * caught here, so the assertion is based on whether or not the buffer was flushed. 221 | */ 222 | @Test 223 | public void testFirehoseProducerSingleThreadFailedToSendRecords() throws Exception { 224 | PutRecordBatchResult failedResult = new PutRecordBatchResult() 225 | .withFailedPutCount(1) 226 | .withRequestResponses(new PutRecordBatchResponseEntry() 227 | .withErrorCode("400") 228 | .withErrorMessage("Invalid Schema")); 229 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))).thenReturn(failedResult); 230 | 231 | for (int i = 0; i < DEFAULT_MAX_BUFFER_SIZE; ++i) { 232 | addRecord(firehoseProducer); 233 | } 234 | Thread.sleep(2000); 235 | assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(DEFAULT_MAX_BUFFER_SIZE); 236 | assertThat(firehoseProducer.isFlushFailed()).isTrue(); 237 | } 238 | 239 | @Test 240 | public void testFirehoseProducerBatchesRecords() throws Exception { 241 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))) 242 | .thenReturn(new PutRecordBatchResult()); 243 | 244 | // Fill up the maximum capacity: 8 * 512kB = 4MB 245 | IntStream.range(0, 8).forEach(i -> addRecord(firehoseProducer, KB_512)); 246 | 247 | // Add a single byte to overflow
the maximum 248 | addRecord(firehoseProducer, 1); 249 | 250 | Thread.sleep(3000); 251 | 252 | assertThat(firehoseProducer.getOutstandingRecordsCount()).isEqualTo(0); 253 | verify(firehoseClient, times(2)).putRecordBatch(putRecordCaptor.capture()); 254 | 255 | // The first batch should contain 8 records (512kB each, 4MB in total), the second should contain the remaining record 256 | assertThat(putRecordCaptor.getAllValues().get(0).getRecords()) 257 | .hasSize(8).allMatch(e -> e.getData().limit() == KB_512); 258 | 259 | assertThat(putRecordCaptor.getAllValues().get(1).getRecords()) 260 | .hasSize(1).allMatch(e -> e.getData().limit() == 1); 261 | } 262 | 263 | @Test 264 | public void testFirehoseProducerBatchesRecordsWithCustomBatchSize() throws Exception { 265 | Properties config = new Properties(); 266 | config.setProperty(FIREHOSE_PRODUCER_BUFFER_MAX_BATCH_BYTES, "100"); 267 | 268 | FirehoseProducer<Record> producer = createFirehoseProducer(config); 269 | 270 | when(firehoseClient.putRecordBatch(any(PutRecordBatchRequest.class))) 271 | .thenReturn(new PutRecordBatchResult()); 272 | 273 | // Overflow the maximum batch size: 2 * 100 bytes = 200 bytes 274 | IntStream.range(0, 2).forEach(i -> addRecord(producer, 100)); 275 | 276 | Thread.sleep(3000); 277 | 278 | assertThat(producer.getOutstandingRecordsCount()).isEqualTo(0); 279 | verify(firehoseClient, times(2)).putRecordBatch(putRecordCaptor.capture()); 280 | 281 | // The first batch should contain 1 record (up to 100 bytes), the second should contain the remaining record 282 | assertThat(putRecordCaptor.getAllValues().get(0).getRecords()) 283 | .hasSize(1).allMatch(e -> e.getData().limit() == 100); 284 | 285 | assertThat(putRecordCaptor.getAllValues().get(1).getRecords()) 286 | .hasSize(1).allMatch(e -> e.getData().limit() == 100); 287 | } 288 | 289 | @Test 290 | public void testThreadFactoryNewThreadName() { 291 | FirehoseThreadFactory threadFactory = new FirehoseThreadFactory(); 292 | Thread thread1 = threadFactory.newThread(() ->
LOGGER.info("Running task 1")); 293 | Thread thread2 = threadFactory.newThread(() -> LOGGER.info("Running task 2")); 294 | Thread thread3 = threadFactory.newThread(() -> LOGGER.info("Running task 3")); 295 | 296 | // Thread numbers come from a static counter, so the first thread's number cannot be guaranteed deterministically. 297 | // Work out thread1's number and then check subsequent thread names 298 | int threadNumber = Integer.parseInt(thread1.getName().substring(thread1.getName().lastIndexOf('-') + 1)); 299 | 300 | assertThat(thread1.getName()).isEqualTo("kda-writer-thread-" + threadNumber++); 301 | assertThat(thread1.isDaemon()).isFalse(); 302 | 303 | assertThat(thread2.getName()).isEqualTo("kda-writer-thread-" + threadNumber++); 304 | assertThat(thread2.isDaemon()).isFalse(); 305 | 306 | assertThat(thread3.getName()).isEqualTo("kda-writer-thread-" + threadNumber); 307 | assertThat(thread3.isDaemon()).isFalse(); 308 | } 309 | 310 | @Nonnull 311 | private CompletableFuture<UserRecordResult> addRecord(final FirehoseProducer<Record> producer) { 312 | return addRecord(producer, 64); 313 | } 314 | 315 | @Nonnull 316 | private CompletableFuture<UserRecordResult> addRecord(final FirehoseProducer<Record> producer, final int length) { 317 | try { 318 | Record record = new Record().withData(ByteBuffer.wrap( 319 | RandomStringUtils.randomAlphabetic(length).getBytes())); 320 | 321 | return producer.addUserRecord(record); 322 | } catch (Exception e) { 323 | throw new RuntimeException(e); 324 | } 325 | } 326 | 327 | @Nonnull 328 | private FirehoseProducer<Record> createFirehoseProducer() { 329 | return createFirehoseProducer(new Properties()); 330 | } 331 | 332 | @Nonnull 333 | private FirehoseProducer<Record> createFirehoseProducer(@Nonnull final Properties config) { 334 | config.setProperty(FIREHOSE_PRODUCER_BUFFER_MAX_TIMEOUT, "1000"); 335 | config.setProperty(AWS_REGION, "us-east-1"); 336 | return new FirehoseProducer<>(DEFAULT_DELIVERY_STREAM, firehoseClient, config); 337 | } 338 | } 339 | 
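The class-level comment in FirehoseProducerTest warns that these tests hinge on fixed `Thread.sleep` intervals and the OS scheduler. One way to reduce that flakiness is to poll for the expected state instead of sleeping a fixed time. The sketch below is a hypothetical helper — `awaitCondition` is an illustrative name, not part of this connector's API:

```java
import java.time.Duration;
import java.util.function.BooleanSupplier;

// Hypothetical test helper (not part of the connector): polls a condition
// until it holds or a deadline passes, instead of a single fixed sleep.
final class AwaitUtil {

    static boolean awaitCondition(BooleanSupplier condition, Duration timeout)
            throws InterruptedException {
        long deadlineNanos = System.nanoTime() + timeout.toNanos();
        while (System.nanoTime() < deadlineNanos) {
            if (condition.getAsBoolean()) {
                return true;          // condition held before the deadline
            }
            Thread.sleep(25);         // short poll interval keeps wasted wait low
        }
        return condition.getAsBoolean(); // one final check at the deadline
    }
}
```

A test could then replace `Thread.sleep(2000)` plus an assertion with something like `assertThat(AwaitUtil.awaitCondition(() -> firehoseProducer.getOutstandingRecordsCount() == 0, Duration.ofSeconds(5))).isTrue()`, which succeeds as soon as the flusher thread drains the buffer rather than after a worst-case wait.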
-------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/AssumeRoleCredentialsProviderTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider; 22 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 23 | import org.testng.annotations.Test; 24 | 25 | import javax.annotation.Nonnull; 26 | import java.util.Properties; 27 | 28 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER; 29 | import static org.assertj.core.api.Assertions.assertThat; 30 | import static org.mockito.ArgumentMatchers.any; 31 | import static org.mockito.ArgumentMatchers.anyString; 32 | import static org.mockito.Mockito.doReturn; 33 | import static org.mockito.Mockito.eq; 34 | import static org.mockito.Mockito.mock; 35 | import static org.mockito.Mockito.spy; 36 | import static org.mockito.Mockito.verify; 37 | 38 | public class AssumeRoleCredentialsProviderTest { 39 | 40 | @Test 41 | public void testGetAwsCredentialsProviderWithDefaultPrefix() { 42 | Properties properties = createAssumeRoleProperties(AWS_CREDENTIALS_PROVIDER); 43 | AssumeRoleCredentialsProvider credentialsProvider = new AssumeRoleCredentialsProvider(properties); 44 | 45 | assertGetAwsCredentialsProvider(credentialsProvider); 46 | } 47 | 48 | @Test 49 | public void testGetAwsCredentialsProviderWithCustomPrefix() { 50 | Properties properties = createAssumeRoleProperties("prefix"); 51 | AssumeRoleCredentialsProvider credentialsProvider = new AssumeRoleCredentialsProvider(properties, "prefix"); 52 | 53 | assertGetAwsCredentialsProvider(credentialsProvider); 54 | } 55 | 56 | private void assertGetAwsCredentialsProvider(@Nonnull final AssumeRoleCredentialsProvider credentialsProvider) { 57 | STSAssumeRoleSessionCredentialsProvider expected = mock(STSAssumeRoleSessionCredentialsProvider.class); 58 | 59 | AssumeRoleCredentialsProvider provider = spy(credentialsProvider); 60 | 
doReturn(expected).when(provider).createAwsCredentialsProvider(any(), anyString(), anyString(), any()); 61 | 62 | assertThat(provider.getAwsCredentialsProvider()).isEqualTo(expected); 63 | verify(provider).createAwsCredentialsProvider(eq("arn-1234567812345678"), eq("session-name"), eq("external-id"), any()); 64 | } 65 | 66 | @Nonnull 67 | private Properties createAssumeRoleProperties(@Nonnull final String prefix) { 68 | Properties properties = new Properties(); 69 | properties.put(AWSConfigConstants.AWS_REGION, "eu-west-2"); 70 | properties.put(AWSConfigConstants.roleArn(prefix), "arn-1234567812345678"); 71 | properties.put(AWSConfigConstants.roleSessionName(prefix), "session-name"); 72 | properties.put(AWSConfigConstants.externalId(prefix), "external-id"); 73 | return properties; 74 | } 75 | } -------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/BasicCredentialProviderTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 
17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.auth.AWSCredentials; 22 | import com.amazonaws.auth.AWSCredentialsProvider; 23 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 24 | import org.testng.annotations.BeforeMethod; 25 | import org.testng.annotations.Test; 26 | 27 | import java.util.Properties; 28 | 29 | import static org.assertj.core.api.Assertions.assertThat; 30 | 31 | public class BasicCredentialProviderTest { 32 | 33 | private BasicCredentialProvider basicCredentialProvider; 34 | 35 | @BeforeMethod 36 | public void setUp() { 37 | Properties properties = new Properties(); 38 | properties.put(AWSConfigConstants.accessKeyId(), "ACCESS"); 39 | properties.put(AWSConfigConstants.secretKey(), "SECRET"); 40 | properties.put(AWSConfigConstants.AWS_REGION, "eu-west-2"); 41 | 42 | basicCredentialProvider = new BasicCredentialProvider(properties); 43 | } 44 | 45 | @Test 46 | public void testGetAwsCredentialsProvider() { 47 | AWSCredentials credentials = basicCredentialProvider.getAwsCredentialsProvider().getCredentials(); 48 | 49 | assertThat(credentials.getAWSAccessKeyId()).isEqualTo("ACCESS"); 50 | assertThat(credentials.getAWSSecretKey()).isEqualTo("SECRET"); 51 | } 52 | 53 | @Test 54 | public void testGetAwsCredentialsProviderSuppliesCredentialsAfterRefresh() { 55 | AWSCredentialsProvider provider = basicCredentialProvider.getAwsCredentialsProvider(); 56 | provider.refresh(); 57 | 58 | AWSCredentials credentials = provider.getCredentials(); 59 | assertThat(credentials.getAWSAccessKeyId()).isEqualTo("ACCESS"); 60 | assertThat(credentials.getAWSSecretKey()).isEqualTo("SECRET"); 61 | } 62 | } -------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/CredentialProviderFactoryTest.java: 
-------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 22 | import org.testng.annotations.BeforeMethod; 23 | import org.testng.annotations.Test; 24 | 25 | import java.util.Properties; 26 | 27 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_ACCESS_KEY_ID; 28 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER; 29 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_PROFILE_NAME; 30 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_REGION; 31 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_SECRET_ACCESS_KEY; 32 | import static 
com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.ASSUME_ROLE; 33 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.AUTO; 34 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.BASIC; 35 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.ENV_VARIABLES; 36 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.PROFILE; 37 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.SYS_PROPERTIES; 38 | import static com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.factory.CredentialProviderFactory.newCredentialProvider; 39 | import static org.assertj.core.api.Assertions.assertThat; 40 | import static org.assertj.core.api.Assertions.assertThatExceptionOfType; 41 | 42 | public class CredentialProviderFactoryTest { 43 | 44 | private Properties configProps; 45 | 46 | @BeforeMethod 47 | public void setUp() { 48 | configProps = new Properties(); 49 | configProps.setProperty(AWS_REGION, "us-west-2"); 50 | } 51 | 52 | @Test 53 | public void testBasicCredentialProviderHappyCase() { 54 | configProps.setProperty(AWS_ACCESS_KEY_ID, "accessKeyId"); 55 | configProps.setProperty(AWS_SECRET_ACCESS_KEY, "secretAccessKey"); 56 | CredentialProvider credentialProvider = newCredentialProvider(BASIC, configProps); 57 | assertThat(credentialProvider).isInstanceOf(BasicCredentialProvider.class); 58 | } 59 | 60 | @Test 61 | public void testBasicCredentialProviderWithNullProviderKey() { 62 | configProps.setProperty(AWS_ACCESS_KEY_ID, "accessKeyId"); 63 | configProps.setProperty(AWS_SECRET_ACCESS_KEY, "secretAccessKey"); 64 | CredentialProvider credentialProvider = 
newCredentialProvider(BASIC, configProps, null); 65 | assertThat(credentialProvider).isInstanceOf(BasicCredentialProvider.class); 66 | } 67 | 68 | @Test 69 | public void testBasicCredentialProviderInvalidConfigurationProperties() { 70 | assertThatExceptionOfType(IllegalArgumentException.class) 71 | .isThrownBy(() -> newCredentialProvider(BASIC, configProps)) 72 | .withMessageContaining("AWS access key must be specified with credential provider BASIC."); 73 | } 74 | 75 | @Test 76 | public void testProfileCredentialProviderHappyCase() { 77 | configProps.setProperty(AWS_PROFILE_NAME, "TEST"); 78 | CredentialProvider credentialProvider = newCredentialProvider(PROFILE, configProps); 79 | assertThat(credentialProvider).isInstanceOf(ProfileCredentialProvider.class); 80 | } 81 | 82 | @Test 83 | public void testProfileCredentialProviderInvalidConfigurationProperties() { 84 | assertThatExceptionOfType(IllegalArgumentException.class) 85 | .isThrownBy(() -> newCredentialProvider(PROFILE, configProps)) 86 | .withMessageContaining("AWS profile name should be specified with credential provider PROFILE."); 87 | } 88 | 89 | @Test 90 | public void testEnvironmentCredentialProviderHappyCase() { 91 | CredentialProvider credentialProvider = newCredentialProvider(ENV_VARIABLES, configProps); 92 | assertThat(credentialProvider).isInstanceOf(EnvironmentCredentialProvider.class); 93 | } 94 | 95 | @Test 96 | public void testSystemCredentialProviderHappyCase() { 97 | CredentialProvider credentialProvider = newCredentialProvider(SYS_PROPERTIES, configProps); 98 | assertThat(credentialProvider).isInstanceOf(SystemCredentialProvider.class); 99 | } 100 | 101 | @Test 102 | public void testDefaultCredentialProviderHappyCase() { 103 | CredentialProvider credentialProvider = newCredentialProvider(AUTO, configProps); 104 | assertThat(credentialProvider).isInstanceOf(DefaultCredentialProvider.class); 105 | } 106 | 107 | @Test 108 | public void testCredentialProviderWithNullProvider() { 109 | 
CredentialProvider credentialProvider = newCredentialProvider(null, configProps); 110 | assertThat(credentialProvider).isInstanceOf(DefaultCredentialProvider.class); 111 | } 112 | 113 | @Test 114 | public void testAssumeRoleCredentialProviderHappyCase() { 115 | configProps.setProperty(AWSConfigConstants.roleArn(AWS_CREDENTIALS_PROVIDER), "arn-1234567812345678"); 116 | configProps.setProperty(AWSConfigConstants.roleSessionName(AWS_CREDENTIALS_PROVIDER), "role-session"); 117 | CredentialProvider credentialProvider = newCredentialProvider(ASSUME_ROLE, configProps); 118 | assertThat(credentialProvider).isInstanceOf(AssumeRoleCredentialsProvider.class); 119 | } 120 | 121 | @Test 122 | public void testAssumeRoleCredentialProviderInvalidConfigurationProperties() { 123 | assertThatExceptionOfType(IllegalArgumentException.class) 124 | .isThrownBy(() -> newCredentialProvider(ASSUME_ROLE, configProps)) 125 | .withMessageContaining("AWS role arn to be assumed must be provided with credential provider type ASSUME_ROLE"); 126 | } 127 | } 128 | -------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/CredentialProviderTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. The ASF licenses this file 6 | * to you under the Apache License, Version 2.0 (the 7 | * "License"); you may not use this file except in compliance 8 | * with the License. 
You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, software 13 | * distributed under the License is distributed on an "AS IS" BASIS, 14 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 15 | * See the License for the specific language governing permissions and 16 | * limitations under the License. 17 | */ 18 | 19 | package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential; 20 | 21 | import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants; 22 | import org.testng.annotations.Test; 23 | 24 | import java.util.Properties; 25 | 26 | import static org.assertj.core.api.Assertions.assertThat; 27 | 28 | public class CredentialProviderTest { 29 | 30 | @Test 31 | public void testGetProperties() { 32 | String key = "key"; 33 | Properties properties = new Properties(); 34 | properties.put(AWSConfigConstants.accessKeyId(key), "ACCESS"); 35 | properties.put(AWSConfigConstants.secretKey(key), "SECRET"); 36 | properties.put(AWSConfigConstants.AWS_REGION, "eu-west-2"); 37 | 38 | CredentialProvider provider = new BasicCredentialProvider(properties, key); 39 | 40 | assertThat(provider.getProperties()).isEqualTo(properties); 41 | assertThat(provider.getProviderKey()).isEqualTo(key); 42 | } 43 | } -------------------------------------------------------------------------------- /src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/provider/credential/ProfileCredentialProviderTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to the Apache Software Foundation (ASF) under one 3 | * or more contributor license agreements. See the NOTICE file 4 | * distributed with this work for additional information 5 | * regarding copyright ownership. 
The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants;
import org.testng.annotations.Test;

import java.util.Properties;

import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_REGION;
import static org.assertj.core.api.Assertions.assertThat;

public class ProfileCredentialProviderTest {

    @Test
    public void testGetAwsCredentialsProvider() {
        Properties properties = new Properties();
        properties.put(AWS_REGION, "eu-west-2");
        properties.put(AWSConfigConstants.profileName(AWS_CREDENTIALS_PROVIDER), "default");
        properties.put(AWSConfigConstants.profilePath(AWS_CREDENTIALS_PROVIDER), "src/test/resources/profile");

        AWSCredentials credentials = new ProfileCredentialProvider(properties)
                .getAwsCredentialsProvider().getCredentials();

        assertThat(credentials.getAWSAccessKeyId()).isEqualTo("AKIAIOSFODNN7EXAMPLE");
        assertThat(credentials.getAWSSecretKey()).isEqualTo("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY");
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/serialization/JsonSerializationSchemaTest.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.serialization;

import com.amazonaws.services.kinesisanalytics.flink.connectors.exception.SerializationException;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.annotation.JsonCreator;
import org.apache.flink.shaded.jackson2.com.fasterxml.jackson.databind.annotation.JsonSerialize;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import static org.testng.Assert.assertNotNull;

public class JsonSerializationSchemaTest {

    private JsonSerializationSchema<TestSerializable> serializationSchema;

    @BeforeMethod
    public void init() {
        serializationSchema = new JsonSerializationSchema<>();
    }

    @Test
    public void testJsonSerializationSchemaHappyCase() {
        TestSerializable serializable = new TestSerializable(1, "Test description");
        byte[] serialized = serializationSchema.serialize(serializable);
        assertNotNull(serialized);
    }

    @Test(expectedExceptions = NullPointerException.class)
    public void testJsonSerializationSchemaNullCase() {
        serializationSchema.serialize(null);
    }

    @Test(expectedExceptions = SerializationException.class,
        expectedExceptionsMessageRegExp = "Failed trying to serialize.*")
    public void testJsonSerializationSchemaInvalidSerializable() {
        JsonSerializationSchema<TestInvalidSerializable> serializationSchema =
            new JsonSerializationSchema<>();

        TestInvalidSerializable invalidSerializable = new TestInvalidSerializable("Unit", "Test");
        serializationSchema.serialize(invalidSerializable);
    }

    private static class TestSerializable {
        @JsonSerialize
        private final int id;
        @JsonSerialize
        private final String description;

        @JsonCreator
        public TestSerializable(final int id, final String desc) {
            this.id = id;
            this.description = desc;
        }
    }

    private static class TestInvalidSerializable {
        private final String firstName;
        private final String lastName;

        public TestInvalidSerializable(final String firstName, final String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/testutils/TestUtils.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.testutils;

import com.amazonaws.services.kinesisanalytics.flink.connectors.serialization.KinesisFirehoseSerializationSchema;
import org.apache.flink.api.common.serialization.SerializationSchema;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_ACCESS_KEY_ID;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_REGION;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_SECRET_ACCESS_KEY;
import static org.apache.flink.streaming.api.functions.sink.SinkFunction.Context;

public final class TestUtils {

    private TestUtils() {

    }

    public static final String DEFAULT_DELIVERY_STREAM = "test-stream";
    public static final String DEFAULT_TEST_ERROR_MSG = "Test exception";

    public static Properties getStandardProperties() {
        Properties config = new Properties();
        config.setProperty(AWS_REGION, "us-east-1");
        config.setProperty(AWS_ACCESS_KEY_ID, "accessKeyId");
        config.setProperty(AWS_SECRET_ACCESS_KEY, "awsSecretAccessKey");
        return config;
    }

    public static KinesisFirehoseSerializationSchema<String> getKinesisFirehoseSerializationSchema() {
        return (KinesisFirehoseSerializationSchema<String>) element -> ByteBuffer.wrap(element.getBytes(StandardCharsets.UTF_8));
    }

    public static SerializationSchema<String> getSerializationSchema() {
        return (SerializationSchema<String>) element ->
            ByteBuffer.wrap(element.getBytes(StandardCharsets.UTF_8)).array();
    }

    public static Context getContext() {
        return new Context() {
            @Override
            public long currentProcessingTime() {
                return System.currentTimeMillis();
            }

            @Override
            public long currentWatermark() {
                return 10L;
            }

            @Override
            public Long timestamp() {
                return System.currentTimeMillis();
            }
        };
    }
}
--------------------------------------------------------------------------------
/src/test/java/com/amazonaws/services/kinesisanalytics/flink/connectors/util/AWSUtilTest.java:
--------------------------------------------------------------------------------
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.amazonaws.services.kinesisanalytics.flink.connectors.util;

import com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants;
import com.amazonaws.services.kinesisanalytics.flink.connectors.provider.credential.BasicCredentialProvider;
import com.amazonaws.services.kinesisfirehose.AmazonKinesisFirehose;
import org.testng.annotations.BeforeMethod;
import org.testng.annotations.Test;

import javax.annotation.Nonnull;
import java.util.Properties;

import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_ACCESS_KEY_ID;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_CREDENTIALS_PROVIDER;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_PROFILE_NAME;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_REGION;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.AWS_SECRET_ACCESS_KEY;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.ASSUME_ROLE;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.AUTO;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.AWSConfigConstants.CredentialProviderType.BASIC;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.DEFAULT_MAXIMUM_BATCH_BYTES;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.config.ProducerConfigConstants.REDUCED_QUOTA_MAXIMUM_THROUGHPUT;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.createKinesisFirehoseClientFromConfiguration;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.getCredentialProviderType;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateAssumeRoleCredentialsProvider;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateBasicProviderConfiguration;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateConfiguration;
import static com.amazonaws.services.kinesisanalytics.flink.connectors.util.AWSUtil.validateProfileProviderConfiguration;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.assertThatExceptionOfType;

public class AWSUtilTest {

    private Properties configProps;

    @BeforeMethod
    public void setUp() {
        configProps = new Properties();
        configProps.setProperty(AWS_ACCESS_KEY_ID, "DUMMY");
        configProps.setProperty(AWS_SECRET_ACCESS_KEY, "DUMMY-SECRET");
        configProps.setProperty(AWS_PROFILE_NAME, "Test");
        configProps.setProperty(AWS_REGION, "us-east-1");
    }

    @Test
    public void testCreateKinesisFirehoseClientFromConfigurationWithNullConfiguration() {
        assertThatExceptionOfType(NullPointerException.class)
            .isThrownBy(() -> createKinesisFirehoseClientFromConfiguration(null, new BasicCredentialProvider(configProps)))
            .withMessageContaining("Configuration properties cannot be null");
    }

    @Test
    public void testCreateKinesisFirehoseClientFromConfigurationWithNullCredentialProvider() {
        assertThatExceptionOfType(NullPointerException.class)
            .isThrownBy(() -> createKinesisFirehoseClientFromConfiguration(configProps, null))
            .withMessageContaining("Credential Provider cannot be null");
    }

    @Test
    public void testCreateKinesisFirehoseClientFromConfigurationHappyCase() {
        AmazonKinesisFirehose firehoseClient = createKinesisFirehoseClientFromConfiguration(configProps,
            new BasicCredentialProvider(configProps));

        assertThat(firehoseClient).isNotNull();
    }

    @Test
    public void testValidateConfigurationWithNullConfiguration() {
        assertThatExceptionOfType(NullPointerException.class)
            .isThrownBy(() -> validateConfiguration(null))
            .withMessageContaining("Configuration properties cannot be null");
    }

    @Test
    public void testValidateConfigurationWithNoRegionOrFirehoseEndpoint() {
        assertThatExceptionOfType(IllegalArgumentException.class)
            .isThrownBy(() -> validateConfiguration(new Properties()))
            .withMessageContaining("Either AWS region should be specified or AWS Firehose endpoint and endpoint signing region");
    }

    @Test
    public void testValidateConfigurationHappyCase() {
        Properties config = validateConfiguration(configProps);
        assertThat(configProps).isEqualTo(config);
    }

    @Test
    public void testValidateBasicConfigurationHappyCase() {
        Properties config = validateBasicProviderConfiguration(configProps);
        assertThat(configProps).isEqualTo(config);
    }

    @Test
    public void testValidateBasicConfigurationWithNullConfiguration() {
        assertThatExceptionOfType(NullPointerException.class)
            .isThrownBy(() -> validateBasicProviderConfiguration(null))
            .withMessageContaining("Configuration properties cannot be null");
    }

    @Test
    public void testValidateBasicConfigurationWithNoAwsAccessKeyId() {
        configProps.remove(AWS_ACCESS_KEY_ID);

        assertThatExceptionOfType(IllegalArgumentException.class)
            .isThrownBy(() -> validateBasicProviderConfiguration(configProps))
            .withMessageContaining("AWS access key must be specified with credential provider BASIC");
    }

    @Test
    public void testValidateBasicConfigurationWithNoAwsSecretKeyId() {
        configProps.remove(AWS_SECRET_ACCESS_KEY);

        assertThatExceptionOfType(IllegalArgumentException.class)
            .isThrownBy(() -> validateBasicProviderConfiguration(configProps))
            .withMessageContaining("AWS secret key must be specified with credential provider BASIC");
    }

    @Test
    public void testValidateProfileProviderConfigurationWithNullConfiguration() {
        assertThatExceptionOfType(NullPointerException.class)
            .isThrownBy(() -> validateProfileProviderConfiguration(null))
            .withMessageContaining("Configuration properties cannot be null");
    }

    @Test
    public void testValidateProfileProviderConfigurationHappyCase() {
        Properties config = validateProfileProviderConfiguration(configProps);
        assertThat(configProps).isEqualTo(config);
    }

    @Test
    public void testValidateProfileProviderConfigurationWithNoProfileName() {
        configProps.remove(AWS_PROFILE_NAME);

        assertThatExceptionOfType(IllegalArgumentException.class)
            .isThrownBy(() -> validateProfileProviderConfiguration(configProps))
            .withMessageContaining("AWS profile name should be specified with credential provider PROFILE");
    }

    @Test
    public void testValidateAssumeRoleProviderConfigurationHappyCase() {
        Properties properties = buildAssumeRoleProperties();
        assertThat(validateAssumeRoleCredentialsProvider(properties)).isEqualTo(properties);
    }

    @Test
    public void testValidateAssumeRoleProviderConfigurationWithNoRoleArn() {
        Properties properties = buildAssumeRoleProperties();
        properties.remove(AWSConfigConstants.roleArn(AWS_CREDENTIALS_PROVIDER));

        assertThatExceptionOfType(IllegalArgumentException.class)
            .isThrownBy(() -> validateAssumeRoleCredentialsProvider(properties))
            .withMessageContaining("AWS role arn to be assumed must be provided with credential provider type ASSUME_ROLE");
    }

    @Test
    public void testValidateAssumeRoleProviderConfigurationWithNoRoleSessionName() {
        Properties properties = buildAssumeRoleProperties();
        properties.remove(AWSConfigConstants.roleSessionName(AWS_CREDENTIALS_PROVIDER));

        assertThatExceptionOfType(IllegalArgumentException.class)
            .isThrownBy(() -> validateAssumeRoleCredentialsProvider(properties))
            .withMessageContaining("AWS role session name must be provided with credential provider type ASSUME_ROLE");
    }

    @Test
    public void testValidateAssumeRoleProviderConfigurationWithNullConfiguration() {
        assertThatExceptionOfType(NullPointerException.class)
            .isThrownBy(() -> validateAssumeRoleCredentialsProvider(null))
            .withMessageContaining("Configuration properties cannot be null");
    }

    @Nonnull
    private Properties buildAssumeRoleProperties() {
        Properties properties = new Properties();
        properties.putAll(configProps);
        properties.put(AWSConfigConstants.roleArn(AWS_CREDENTIALS_PROVIDER), "arn-1234567812345678");
        properties.put(AWSConfigConstants.roleSessionName(AWS_CREDENTIALS_PROVIDER), "session-name");
        return properties;
    }

    @Test
    public void testGetCredentialProviderTypeIsAutoNullProviderKey() {
        assertThat(getCredentialProviderType(new Properties(), null)).isEqualTo(AUTO);
    }

    @Test
    public void testGetCredentialProviderTypeIsAutoWithProviderKeyMismatch() {
        assertThat(getCredentialProviderType(configProps, "missing-key")).isEqualTo(AUTO);
    }

    @Test
    public void testGetCredentialProviderTypeIsAutoMissingAccessKey() {
        configProps.remove(AWS_ACCESS_KEY_ID);

        assertThat(getCredentialProviderType(configProps, null)).isEqualTo(AUTO);
    }

    @Test
    public void testGetCredentialProviderTypeIsAutoMissingSecretKey() {
        configProps.remove(AWS_SECRET_ACCESS_KEY);

        assertThat(getCredentialProviderType(configProps, null)).isEqualTo(AUTO);
    }

    @Test
    public void testGetCredentialProviderTypeIsBasic() {
        assertThat(getCredentialProviderType(configProps, null)).isEqualTo(BASIC);
    }

    @Test
    public void testGetCredentialProviderTypeIsAutoWithEmptyProviderKey() {
        configProps.setProperty("key", "");

        assertThat(getCredentialProviderType(configProps, "key")).isEqualTo(AUTO);
    }

    @Test
    public void testGetCredentialProviderTypeIsAutoWithBadConfiguration() {
        configProps.setProperty("key", "Bad");

        assertThat(getCredentialProviderType(configProps, "key")).isEqualTo(AUTO);
    }

    @Test
    public void testGetCredentialProviderTypeIsParsedFromProviderKey() {
        configProps.setProperty("key", "ASSUME_ROLE");

        assertThat(getCredentialProviderType(configProps, "key")).isEqualTo(ASSUME_ROLE);
    }

    @Test
    public void testGetDefaultMaxPutRecordBatchBytesForNullRegion() {
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes(null)).isEqualTo(REDUCED_QUOTA_MAXIMUM_THROUGHPUT);
    }

    @Test
    public void testGetDefaultMaxPutRecordBatchBytesForHighQuotaRegions() {
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes("us-east-1")).isEqualTo(DEFAULT_MAXIMUM_BATCH_BYTES);
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes("us-west-2")).isEqualTo(DEFAULT_MAXIMUM_BATCH_BYTES);
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes("eu-west-1")).isEqualTo(DEFAULT_MAXIMUM_BATCH_BYTES);
    }

    @Test
    public void testGetDefaultMaxPutRecordBatchBytesForReducedQuotaRegions() {
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes("us-east-2")).isEqualTo(REDUCED_QUOTA_MAXIMUM_THROUGHPUT);
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes("us-west-1")).isEqualTo(REDUCED_QUOTA_MAXIMUM_THROUGHPUT);
        assertThat(AWSUtil.getDefaultMaxPutRecordBatchBytes("eu-west-2")).isEqualTo(REDUCED_QUOTA_MAXIMUM_THROUGHPUT);
    }
}
--------------------------------------------------------------------------------
/src/test/resources/logback.xml:
--------------------------------------------------------------------------------
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one
  or more contributor license agreements. See the NOTICE file
  distributed with this work for additional information
  regarding copyright ownership. The ASF licenses this file
  to you under the Apache License, Version 2.0 (the
  "License"); you may not use this file except in compliance
  with the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d [%thread] %-5level %logger{36} - %msg %n</pattern>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>
--------------------------------------------------------------------------------
/src/test/resources/profile:
--------------------------------------------------------------------------------
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
--------------------------------------------------------------------------------