├── .github ├── CONTRIBUTING.md ├── ISSUE_TEMPLATE.md └── PULL_REQUEST_TEMPLATE.md ├── .gitignore ├── .travis.yml ├── CHANGELOG.md ├── CONTRIBUTORS ├── Gemfile ├── LICENSE ├── NOTICE.TXT ├── README.md ├── Rakefile ├── docs └── index.asciidoc ├── lib └── logstash │ └── inputs │ ├── sqs.rb │ └── sqs │ └── patch.rb ├── logstash-input-sqs.gemspec └── spec ├── inputs └── sqs_spec.rb ├── integration └── sqs_spec.rb ├── spec_helper.rb └── support └── helpers.rb /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Logstash 2 | 3 | All contributions are welcome: ideas, patches, documentation, bug reports, 4 | complaints, etc! 5 | 6 | Programming is not a required skill, and there are many ways to help out! 7 | It is more important to us that you are able to contribute. 8 | 9 | That said, some basic guidelines, which you are free to ignore :) 10 | 11 | ## Want to learn? 12 | 13 | Want to lurk about and see what others are doing with Logstash? 14 | 15 | * The irc channel (#logstash on irc.freenode.org) is a good place for this 16 | * The [forum](https://discuss.elastic.co/c/logstash) is also 17 | great for learning from others. 18 | 19 | ## Got Questions? 20 | 21 | Have a problem you want Logstash to solve for you? 22 | 23 | * You can ask a question in the [forum](https://discuss.elastic.co/c/logstash) 24 | * Alternately, you are welcome to join the IRC channel #logstash on 25 | irc.freenode.org and ask for help there! 26 | 27 | ## Have an Idea or Feature Request? 28 | 29 | * File a ticket on [GitHub](https://github.com/elastic/logstash/issues). Please remember that GitHub is used only for issues and feature requests. If you have a general question, the [forum](https://discuss.elastic.co/c/logstash) or IRC would be the best place to ask. 30 | 31 | ## Something Not Working? Found a Bug? 32 | 33 | If you think you found a bug, it probably is a bug. 34 | 35 | * If it is a general Logstash or a pipeline issue, file it in [Logstash GitHub](https://github.com/elasticsearch/logstash/issues) 36 | * If it is specific to a plugin, please file it in the respective repository under [logstash-plugins](https://github.com/logstash-plugins) 37 | * or ask the [forum](https://discuss.elastic.co/c/logstash). 38 | 39 | # Contributing Documentation and Code Changes 40 | 41 | If you have a bugfix or new feature that you would like to contribute to 42 | logstash, and you think it will take more than a few minutes to produce the fix 43 | (ie; write code), it is worth discussing the change with the Logstash users and developers first! You can reach us via [GitHub](https://github.com/elastic/logstash/issues), the [forum](https://discuss.elastic.co/c/logstash), or via IRC (#logstash on freenode irc) 44 | Please note that Pull Requests without tests will not be merged. If you would like to contribute but do not have experience with writing tests, please ping us on IRC/forum or create a PR and ask our help. 45 | 46 | ## Contributing to plugins 47 | 48 | Check our [documentation](https://www.elastic.co/guide/en/logstash/current/contributing-to-logstash.html) on how to contribute to plugins or write your own! It is super easy! 49 | 50 | ## Contribution Steps 51 | 52 | 1. Test your changes! [Run](https://github.com/elastic/logstash#testing) the test suite 53 | 2. Please make sure you have signed our [Contributor License 54 | Agreement](https://www.elastic.co/contributor-agreement/). 
We are not 55 | asking you to assign copyright to us, but to give us the right to distribute 56 | your code without restriction. We ask this of all contributors in order to 57 | assure our users of the origin and continuing existence of the code. You 58 | only need to sign the CLA once. 59 | 3. Send a pull request! Push your changes to your fork of the repository and 60 | [submit a pull 61 | request](https://help.github.com/articles/using-pull-requests). In the pull 62 | request, describe what your changes do and mention any bugs/issues related 63 | to the pull request. 64 | 65 | 66 | -------------------------------------------------------------------------------- /.github/ISSUE_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Please post all product and debugging questions on our [forum](https://discuss.elastic.co/c/logstash). Your questions will reach our wider community members there, and if we confirm that there is a bug, then we can open a new issue here. 2 | 3 | For all general issues, please provide the following details for fast resolution: 4 | 5 | - Version: 6 | - Operating System: 7 | - Config File (if you have sensitive info, please remove it): 8 | - Sample Data: 9 | - Steps to Reproduce: 10 | -------------------------------------------------------------------------------- /.github/PULL_REQUEST_TEMPLATE.md: -------------------------------------------------------------------------------- 1 | Thanks for contributing to Logstash! If you haven't already signed our CLA, here's a handy link: https://www.elastic.co/contributor-agreement/ 2 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | *.gem 2 | Gemfile.lock 3 | .bundle 4 | vendor 5 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | import: 2 | - logstash-plugins/.ci:travis/travis.yml@1.x -------------------------------------------------------------------------------- /CHANGELOG.md: -------------------------------------------------------------------------------- 1 | ## 3.3.2 2 | - Fix an issue that prevented timely shutdown when subscribed to an inactive queue [#65](https://github.com/logstash-plugins/logstash-input-sqs/pull/65) 3 | 4 | ## 3.3.1 5 | - Refactoring: used logstash-mixin-aws to leverage shared code to manage `additional_settings` [#64](https://github.com/logstash-plugins/logstash-input-sqs/pull/64) 6 | 7 | ## 3.3.0 8 | - Feature: Add `additional_settings` option to fine-grain configuration of AWS client [#61](https://github.com/logstash-plugins/logstash-input-sqs/pull/61) 9 | 10 | ## 3.2.0 11 | - Feature: Add `queue_owner_aws_account_id` parameter for cross-account queues [#60](https://github.com/logstash-plugins/logstash-input-sqs/pull/60) 12 | 13 | ## 3.1.3 14 | - Fix: retry networking errors (with backoff) [#57](https://github.com/logstash-plugins/logstash-input-sqs/pull/57) 15 | 16 | ## 3.1.2 17 | - Added support for multiple events inside same message from SQS [#48](https://github.com/logstash-plugins/logstash-input-sqs/pull/48/files) 18 | 19 | ## 3.1.1 20 | - Docs: Set the default_codec doc attribute. 
21 | 22 | ## 3.1.0 23 | - Add documentation for endpoint, role_arn and role_session_name #46 24 | - Fix sample IAM policy to match to match the documentation #32 25 | 26 | ## 3.0.6 27 | - Update gemspec summary 28 | 29 | ## 3.0.5 30 | - Fix some documentation issues 31 | 32 | ## 3.0.3 33 | - Monkey-patch the AWS-SDK to prevent "uninitialized constant" errors. 34 | 35 | ## 3.0.2 36 | - Relax constraint on logstash-core-plugin-api to >= 1.60 <= 2.99 37 | 38 | ## 3.0.1 39 | - Republish all the gems under jruby. 40 | ## 3.0.0 41 | - Update the plugin to the version 2.0 of the plugin api, this change is required for Logstash 5.0 compatibility. See https://github.com/elastic/logstash/issues/5141 42 | # 2.0.5 43 | - Depend on logstash-core-plugin-api instead of logstash-core, removing the need to mass update plugins on major releases of logstash 44 | # 2.0.4 45 | - New dependency requirements for logstash-core for the 5.0 release 46 | ## 2.0.3 47 | - Fixes #22, wrong key use on the stats object 48 | ## 2.0.0 49 | - Plugins were updated to follow the new shutdown semantic, this mainly allows Logstash to instruct input plugins to terminate gracefully, 50 | instead of using Thread.raise on the plugins' threads. Ref: https://github.com/elastic/logstash/pull/3895 51 | - Dependency on logstash-core update to 2.0 52 | 53 | # 1.1.0 54 | - AWS ruby SDK v2 upgrade 55 | - Replaces aws-sdk dependencies with mixin-aws 56 | - Removes unnecessary de-allocation 57 | - Move the code into smaller methods to allow easier mocking and testing 58 | - Add the option to configure polling frequency 59 | - Adding a monkey patch to make sure `LogStash::ShutdownSignal` doesn't get catch by AWS RetryError. 60 | -------------------------------------------------------------------------------- /CONTRIBUTORS: -------------------------------------------------------------------------------- 1 | The following is a list of people who have contributed ideas, code, bug 2 | reports, or in general have helped logstash along its way. 3 | 4 | Contributors: 5 | * Aaron Broad (AaronTheApe) 6 | * Colin Surprenant (colinsurprenant) 7 | * Jordan Sissel (jordansissel) 8 | * Joshua Spence (joshuaspence) 9 | * Pier-Hugues Pellerin (ph) 10 | * Richard Pijnenburg (electrical) 11 | * Sean Laurent (organicveggie) 12 | * Suyog Rao (suyograo) 13 | * Toby Collier (tobyw4n) 14 | * emmalzhang 15 | 16 | Note: If you've sent us patches, bug reports, or otherwise contributed to 17 | Logstash, and you aren't on the list above and want to be, please let us know 18 | and we'll make sure you're here. Contributions from folks like you are what make 19 | open source awesome. 
20 | -------------------------------------------------------------------------------- /Gemfile: -------------------------------------------------------------------------------- 1 | source 'https://rubygems.org' 2 | 3 | gemspec 4 | 5 | logstash_path = ENV["LOGSTASH_PATH"] || "../../logstash" 6 | use_logstash_source = ENV["LOGSTASH_SOURCE"] && ENV["LOGSTASH_SOURCE"].to_s == "1" 7 | 8 | if Dir.exist?(logstash_path) && use_logstash_source 9 | gem 'logstash-core', :path => "#{logstash_path}/logstash-core" 10 | gem 'logstash-core-plugin-api', :path => "#{logstash_path}/logstash-core-plugin-api" 11 | end 12 | 13 | if RUBY_VERSION == "1.9.3" 14 | gem 'rake', '12.2.1' 15 | end 16 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions. 9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. 
For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright 2020 Elastic and contributors 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /NOTICE.TXT: -------------------------------------------------------------------------------- 1 | Elasticsearch 2 | Copyright 2012-2015 Elasticsearch 3 | 4 | This product includes software developed by The Apache Software 5 | Foundation (http://www.apache.org/). 
-------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Logstash Plugin 2 | 3 | [![Travis Build Status](https://travis-ci.com/logstash-plugins/logstash-input-sqs.svg)](https://travis-ci.com/logstash-plugins/logstash-input-sqs) 4 | 5 | This is a plugin for [Logstash](https://github.com/elastic/logstash). 6 | 7 | It is fully free and fully open source. The license is Apache 2.0, meaning you are pretty much free to use it however you want in whatever way. 8 | 9 | ## Documentation 10 | 11 | Logstash provides infrastructure to automatically generate documentation for this plugin. We use the asciidoc format to write documentation so any comments in the source code will be first converted into asciidoc and then into html. All plugin documentation are placed under one [central location](http://www.elastic.co/guide/en/logstash/current/). 12 | 13 | - For formatting code or config example, you can use the asciidoc `[source,ruby]` directive 14 | - For more asciidoc formatting tips, see the excellent reference here https://github.com/elastic/docs#asciidoc-guide 15 | 16 | ## Need Help? 17 | 18 | Need help? Try #logstash on freenode IRC or the https://discuss.elastic.co/c/logstash discussion forum. 19 | 20 | ## Developing 21 | 22 | ### 1. Plugin Developement and Testing 23 | 24 | #### Code 25 | - To get started, you'll need JRuby with the Bundler gem installed. 26 | 27 | - Create a new plugin or clone and existing from the GitHub [logstash-plugins](https://github.com/logstash-plugins) organization. We also provide [example plugins](https://github.com/logstash-plugins?query=example). 28 | 29 | - Install dependencies 30 | ```sh 31 | bundle install 32 | ``` 33 | 34 | #### Test 35 | 36 | - Update your dependencies 37 | 38 | ```sh 39 | bundle install 40 | ``` 41 | 42 | - Run tests 43 | 44 | ```sh 45 | bundle exec rspec 46 | ``` 47 | 48 | ### 2. Running your unpublished Plugin in Logstash 49 | 50 | #### 2.1 Run in a local Logstash clone 51 | 52 | - Edit Logstash `Gemfile` and add the local plugin path, for example: 53 | ```ruby 54 | gem "logstash-filter-awesome", :path => "/your/local/logstash-filter-awesome" 55 | ``` 56 | - Install plugin 57 | ```sh 58 | # Logstash 2.3 and higher 59 | bin/logstash-plugin install --no-verify 60 | 61 | # Prior to Logstash 2.3 62 | bin/plugin install --no-verify 63 | 64 | ``` 65 | - Run Logstash with your plugin 66 | ```sh 67 | bin/logstash -e 'filter {awesome {}}' 68 | ``` 69 | At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply rerun Logstash. 70 | 71 | #### 2.2 Run in an installed Logstash 72 | 73 | You can use the same **2.1** method to run your plugin in an installed Logstash by editing its `Gemfile` and pointing the `:path` to your local plugin development directory or you can build the gem and install it using: 74 | 75 | - Build your plugin gem 76 | ```sh 77 | gem build logstash-filter-awesome.gemspec 78 | ``` 79 | - Install the plugin from the Logstash home 80 | ```sh 81 | # Logstash 2.3 and higher 82 | bin/logstash-plugin install --no-verify 83 | 84 | # Prior to Logstash 2.3 85 | bin/plugin install --no-verify 86 | 87 | ``` 88 | - Start Logstash and proceed to test the plugin 89 | 90 | ## Contributing 91 | 92 | All contributions are welcome: ideas, patches, documentation, bug reports, complaints, and even something you drew up on a napkin. 
93 | 94 | Programming is not a required skill. Whatever you've seen about open source and maintainers or community members saying "send patches or die" - you will not see that here. 95 | 96 | It is more important to the community that you are able to contribute. 97 | 98 | For more information about contributing, see the [CONTRIBUTING](https://github.com/elastic/logstash/blob/master/CONTRIBUTING.md) file. -------------------------------------------------------------------------------- /Rakefile: -------------------------------------------------------------------------------- 1 | @files=[] 2 | 3 | task :default do 4 | system("rake -T") 5 | end 6 | 7 | require "logstash/devutils/rake" 8 | -------------------------------------------------------------------------------- /docs/index.asciidoc: -------------------------------------------------------------------------------- 1 | :plugin: sqs 2 | :type: input 3 | :default_codec: json 4 | 5 | /////////////////////////////////////////// 6 | START - GENERATED VARIABLES, DO NOT EDIT! 7 | /////////////////////////////////////////// 8 | :version: %VERSION% 9 | :release_date: %RELEASE_DATE% 10 | :changelog_url: %CHANGELOG_URL% 11 | :include_path: ../../../../logstash/docs/include 12 | /////////////////////////////////////////// 13 | END - GENERATED VARIABLES, DO NOT EDIT! 14 | /////////////////////////////////////////// 15 | 16 | [id="plugins-{type}s-{plugin}"] 17 | 18 | === Sqs input plugin 19 | 20 | include::{include_path}/plugin_header.asciidoc[] 21 | 22 | ==== Description 23 | 24 | 25 | Pull events from an Amazon Web Services Simple Queue Service (SQS) queue. 26 | 27 | SQS is a simple, scalable queue system that is part of the 28 | Amazon Web Services suite of tools. 29 | 30 | Although SQS is similar to other queuing systems like AMQP, it 31 | uses a custom API and requires that you have an AWS account. 32 | See http://aws.amazon.com/sqs/ for more details on how SQS works, 33 | what the pricing schedule looks like and how to setup a queue. 34 | 35 | To use this plugin, you *must*: 36 | 37 | * Have an AWS account 38 | * Setup an SQS queue 39 | * Create an identity that has access to consume messages from the queue. 40 | 41 | The "consumer" identity must have the following permissions on the queue: 42 | 43 | * `sqs:ChangeMessageVisibility` 44 | * `sqs:ChangeMessageVisibilityBatch` 45 | * `sqs:DeleteMessage` 46 | * `sqs:DeleteMessageBatch` 47 | * `sqs:GetQueueAttributes` 48 | * `sqs:GetQueueUrl` 49 | * `sqs:ListQueues` 50 | * `sqs:ReceiveMessage` 51 | 52 | Typically, you should setup an IAM policy, create a user and apply the IAM policy to the user. 53 | A sample policy is as follows: 54 | [source,json] 55 | { 56 | "Statement": [ 57 | { 58 | "Action": [ 59 | "sqs:ChangeMessageVisibility", 60 | "sqs:ChangeMessageVisibilityBatch", 61 | "sqs:DeleteMessage", 62 | "sqs:DeleteMessageBatch", 63 | "sqs:GetQueueAttributes", 64 | "sqs:GetQueueUrl", 65 | "sqs:ListQueues", 66 | "sqs:ReceiveMessage" 67 | ], 68 | "Effect": "Allow", 69 | "Resource": [ 70 | "arn:aws:sqs:us-east-1:123456789012:Logstash" 71 | ] 72 | } 73 | ] 74 | } 75 | 76 | See http://aws.amazon.com/iam/ for more details on setting up AWS identities. 77 | 78 | 79 | [id="plugins-{type}s-{plugin}-options"] 80 | ==== Sqs Input Configuration Options 81 | 82 | This plugin supports the following configuration options plus the <> described later. 
83 | 84 | [cols="<,<,<",options="header",] 85 | |======================================================================= 86 | |Setting |Input type|Required 87 | | <> |<>|No 88 | | <> |<>|No 89 | | <> |<>|No 90 | | <> |<>|No 91 | | <> |<>|No 92 | | <> |<>|No 93 | | <> |<>|No 94 | | <> |<>|No 95 | | <> |<>|Yes 96 | | <> |<>|No 97 | | <> |<>|No 98 | | <> |<>|No 99 | | <> |<>|No 100 | | <> |<>|No 101 | | <> |<>|No 102 | | <> |<>|No 103 | | <> |<>|No 104 | |======================================================================= 105 | 106 | Also see <> for a list of options supported by all 107 | input plugins. 108 | 109 |   110 | 111 | [id="plugins-{type}s-{plugin}-access_key_id"] 112 | ===== `access_key_id` 113 | 114 | * Value type is <> 115 | * There is no default value for this setting. 116 | 117 | This plugin uses the AWS SDK and supports several ways to get credentials, which will be tried in this order: 118 | 119 | 1. Static configuration, using `access_key_id` and `secret_access_key` params in logstash plugin config 120 | 2. External credentials file specified by `aws_credentials_file` 121 | 3. Environment variables `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` 122 | 4. Environment variables `AMAZON_ACCESS_KEY_ID` and `AMAZON_SECRET_ACCESS_KEY` 123 | 5. IAM Instance Profile (available when running inside EC2) 124 | 125 | [id="plugins-{type}s-{plugin}-additional_settings"] 126 | ===== `additional_settings` 127 | 128 | * Value type is <> 129 | * Default value is `{}` 130 | 131 | Key-value pairs of settings and corresponding values used to parametrize 132 | the connection to SQS. See full list in https://docs.aws.amazon.com/sdk-for-ruby/v2/api/Aws/SQS/Client.html[the AWS SDK documentation]. Example: 133 | 134 | [source,ruby] 135 | input { 136 | sqs { 137 | access_key_id => "1234" 138 | secret_access_key => "secret" 139 | queue => "logstash-test-queue" 140 | additional_settings => { 141 | force_path_style => true 142 | follow_redirects => false 143 | } 144 | } 145 | } 146 | 147 | [id="plugins-{type}s-{plugin}-aws_credentials_file"] 148 | ===== `aws_credentials_file` 149 | 150 | * Value type is <> 151 | * There is no default value for this setting. 152 | 153 | Path to YAML file containing a hash of AWS credentials. 154 | This file will only be loaded if `access_key_id` and 155 | `secret_access_key` aren't set. The contents of the 156 | file should look like this: 157 | 158 | [source,ruby] 159 | ---------------------------------- 160 | :access_key_id: "12345" 161 | :secret_access_key: "54321" 162 | ---------------------------------- 163 | 164 | [id="plugins-{type}s-{plugin}-endpoint"] 165 | ===== `endpoint` 166 | 167 | * Value type is <> 168 | * There is no default value for this setting. 169 | 170 | The endpoint to connect to. By default it is constructed using the value of `region`. 171 | This is useful when connecting to S3 compatible services, but beware that these aren't 172 | guaranteed to work correctly with the AWS SDK. 173 | 174 | [id="plugins-{type}s-{plugin}-id_field"] 175 | ===== `id_field` 176 | 177 | * Value type is <> 178 | * There is no default value for this setting. 179 | 180 | Name of the event field in which to store the SQS message ID 181 | 182 | [id="plugins-{type}s-{plugin}-md5_field"] 183 | ===== `md5_field` 184 | 185 | * Value type is <> 186 | * There is no default value for this setting. 
187 | 188 | Name of the event field in which to store the SQS message MD5 checksum 189 | 190 | [id="plugins-{type}s-{plugin}-polling_frequency"] 191 | ===== `polling_frequency` 192 | 193 | * Value type is <> 194 | * Default value is `20` 195 | 196 | Polling frequency, default is 20 seconds 197 | 198 | [id="plugins-{type}s-{plugin}-proxy_uri"] 199 | ===== `proxy_uri` 200 | 201 | * Value type is <> 202 | * There is no default value for this setting. 203 | 204 | URI to proxy server if required 205 | 206 | [id="plugins-{type}s-{plugin}-queue"] 207 | ===== `queue` 208 | 209 | * This is a required setting. 210 | * Value type is <> 211 | * There is no default value for this setting. 212 | 213 | Name of the SQS Queue name to pull messages from. Note that this is just the name of the queue, not the URL or ARN. 214 | 215 | [id="plugins-{type}s-{plugin}-queue_owner_aws_account_id"] 216 | ===== `queue_owner_aws_account_id` 217 | 218 | * Value type is <> 219 | * There is no default value for this setting. 220 | 221 | ID of the AWS account owning the queue if you want to use a https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-basic-examples-of-sqs-policies.html#grant-two-permissions-to-one-account[cross-account queue] with embedded policy. Note that AWS SDK only support numerical account ID and not account aliases. 222 | 223 | [id="plugins-{type}s-{plugin}-region"] 224 | ===== `region` 225 | 226 | * Value type is <> 227 | * Default value is `"us-east-1"` 228 | 229 | The AWS Region 230 | 231 | [id="plugins-{type}s-{plugin}-role_arn"] 232 | ===== `role_arn` 233 | 234 | * Value type is <> 235 | * There is no default value for this setting. 236 | 237 | The AWS IAM Role to assume, if any. 238 | This is used to generate temporary credentials, typically for cross-account access. 239 | See the https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html[AssumeRole API documentation] for more information. 240 | 241 | [id="plugins-{type}s-{plugin}-role_session_name"] 242 | ===== `role_session_name` 243 | 244 | * Value type is <> 245 | * Default value is `"logstash"` 246 | 247 | Session name to use when assuming an IAM role. 248 | 249 | [id="plugins-{type}s-{plugin}-secret_access_key"] 250 | ===== `secret_access_key` 251 | 252 | * Value type is <> 253 | * There is no default value for this setting. 254 | 255 | The AWS Secret Access Key 256 | 257 | [id="plugins-{type}s-{plugin}-sent_timestamp_field"] 258 | ===== `sent_timestamp_field` 259 | 260 | * Value type is <> 261 | * There is no default value for this setting. 262 | 263 | Name of the event field in which to store the SQS message Sent Timestamp 264 | 265 | [id="plugins-{type}s-{plugin}-session_token"] 266 | ===== `session_token` 267 | 268 | * Value type is <> 269 | * There is no default value for this setting. 
270 | 271 | The AWS Session token for temporary credential 272 | 273 | [id="plugins-{type}s-{plugin}-threads"] 274 | ===== `threads` 275 | 276 | * Value type is <> 277 | * Default value is `1` 278 | 279 | 280 | 281 | 282 | 283 | [id="plugins-{type}s-{plugin}-common-options"] 284 | include::{include_path}/{type}.asciidoc[] 285 | 286 | :default_codec!: 287 | -------------------------------------------------------------------------------- /lib/logstash/inputs/sqs.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | # 3 | require "logstash/inputs/threadable" 4 | require "logstash/namespace" 5 | require "logstash/timestamp" 6 | require "logstash/plugin_mixins/aws_config" 7 | require "logstash/errors" 8 | require 'logstash/inputs/sqs/patch' 9 | 10 | # Forcibly load all modules marked to be lazily loaded. 11 | # 12 | # It is recommended that this is called prior to launching threads. See 13 | # https://aws.amazon.com/blogs/developer/threading-with-the-aws-sdk-for-ruby/. 14 | Aws.eager_autoload! 15 | 16 | # Pull events from an Amazon Web Services Simple Queue Service (SQS) queue. 17 | # 18 | # SQS is a simple, scalable queue system that is part of the 19 | # Amazon Web Services suite of tools. 20 | # 21 | # Although SQS is similar to other queuing systems like AMQP, it 22 | # uses a custom API and requires that you have an AWS account. 23 | # See http://aws.amazon.com/sqs/ for more details on how SQS works, 24 | # what the pricing schedule looks like and how to setup a queue. 25 | # 26 | # To use this plugin, you *must*: 27 | # 28 | # * Have an AWS account 29 | # * Setup an SQS queue 30 | # * Create an identify that has access to consume messages from the queue. 31 | # 32 | # The "consumer" identity must have the following permissions on the queue: 33 | # 34 | # * `sqs:ChangeMessageVisibility` 35 | # * `sqs:ChangeMessageVisibilityBatch` 36 | # * `sqs:DeleteMessage` 37 | # * `sqs:DeleteMessageBatch` 38 | # * `sqs:GetQueueAttributes` 39 | # * `sqs:GetQueueUrl` 40 | # * `sqs:ListQueues` 41 | # * `sqs:ReceiveMessage` 42 | # 43 | # Typically, you should setup an IAM policy, create a user and apply the IAM policy to the user. 44 | # A sample policy is as follows: 45 | # [source,json] 46 | # { 47 | # "Statement": [ 48 | # { 49 | # "Action": [ 50 | # "sqs:ChangeMessageVisibility", 51 | # "sqs:ChangeMessageVisibilityBatch", 52 | # "sqs:GetQueueAttributes", 53 | # "sqs:GetQueueUrl", 54 | # "sqs:ListQueues", 55 | # "sqs:SendMessage", 56 | # "sqs:SendMessageBatch" 57 | # ], 58 | # "Effect": "Allow", 59 | # "Resource": [ 60 | # "arn:aws:sqs:us-east-1:123456789012:Logstash" 61 | # ] 62 | # } 63 | # ] 64 | # } 65 | # 66 | # See http://aws.amazon.com/iam/ for more details on setting up AWS identities. 67 | # 68 | class LogStash::Inputs::SQS < LogStash::Inputs::Threadable 69 | include LogStash::PluginMixins::AwsConfig::V2 70 | 71 | MAX_TIME_BEFORE_GIVING_UP = 60 72 | MAX_MESSAGES_TO_FETCH = 10 # Between 1-10 in the AWS-SDK doc 73 | SENT_TIMESTAMP = "SentTimestamp" 74 | SQS_ATTRIBUTES = [SENT_TIMESTAMP] 75 | BACKOFF_SLEEP_TIME = 1 76 | BACKOFF_FACTOR = 2 77 | DEFAULT_POLLING_FREQUENCY = 20 78 | 79 | config_name "sqs" 80 | 81 | default :codec, "json" 82 | 83 | config :additional_settings, :validate => :hash, :default => {} 84 | 85 | # Name of the SQS Queue name to pull messages from. Note that this is just the name of the queue, not the URL or ARN. 
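# For example, the queue from the sample policy above (arn:aws:sqs:us-east-1:123456789012:Logstash)
# has the URL https://sqs.us-east-1.amazonaws.com/123456789012/Logstash, and this setting would
# simply be "Logstash".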
86 | config :queue, :validate => :string, :required => true 87 | 88 | # Account ID of the AWS account which owns the queue. 89 | config :queue_owner_aws_account_id, :validate => :string, :required => false 90 | 91 | # Name of the event field in which to store the SQS message ID 92 | config :id_field, :validate => :string 93 | 94 | # Name of the event field in which to store the SQS message MD5 checksum 95 | config :md5_field, :validate => :string 96 | 97 | # Name of the event field in which to store the SQS message Sent Timestamp 98 | config :sent_timestamp_field, :validate => :string 99 | 100 | # Polling frequency, default is 20 seconds 101 | config :polling_frequency, :validate => :number, :default => DEFAULT_POLLING_FREQUENCY 102 | 103 | attr_reader :poller 104 | 105 | def register 106 | require "aws-sdk" 107 | @logger.info("Registering SQS input", :queue => @queue, :queue_owner_aws_account_id => @queue_owner_aws_account_id) 108 | 109 | setup_queue 110 | end 111 | 112 | def queue_url(aws_sqs_client) 113 | if @queue_owner_aws_account_id 114 | return aws_sqs_client.get_queue_url({:queue_name => @queue, :queue_owner_aws_account_id => @queue_owner_aws_account_id})[:queue_url] 115 | else 116 | return aws_sqs_client.get_queue_url(:queue_name => @queue)[:queue_url] 117 | end 118 | end 119 | 120 | def setup_queue 121 | aws_sqs_client = Aws::SQS::Client.new(aws_options_hash || {}) 122 | poller = Aws::SQS::QueuePoller.new(queue_url(aws_sqs_client), :client => aws_sqs_client) 123 | poller.before_request { |stats| throw :stop_polling if stop? } 124 | 125 | @poller = poller 126 | rescue Aws::SQS::Errors::ServiceError, Seahorse::Client::NetworkingError => e 127 | @logger.error("Cannot establish connection to Amazon SQS", exception_details(e)) 128 | raise LogStash::ConfigurationError, "Verify the SQS queue name and your credentials" 129 | end 130 | 131 | def polling_options 132 | { 133 | :max_number_of_messages => MAX_MESSAGES_TO_FETCH, 134 | :attribute_names => SQS_ATTRIBUTES, 135 | :wait_time_seconds => @polling_frequency 136 | } 137 | end 138 | 139 | def add_sqs_data(event, message) 140 | event.set(@id_field, message.message_id) if @id_field 141 | event.set(@md5_field, message.md5_of_body) if @md5_field 142 | event.set(@sent_timestamp_field, convert_epoch_to_timestamp(message.attributes[SENT_TIMESTAMP])) if @sent_timestamp_field 143 | event 144 | end 145 | 146 | def handle_message(message, output_queue) 147 | @codec.decode(message.body) do |event| 148 | add_sqs_data(event, message) 149 | decorate(event) 150 | output_queue << event 151 | end 152 | end 153 | 154 | def run(output_queue) 155 | @logger.debug("Polling SQS queue", :polling_options => polling_options) 156 | 157 | run_with_backoff do 158 | poller.poll(polling_options) do |messages, stats| 159 | break if stop? 160 | messages.each {|message| handle_message(message, output_queue) } 161 | @logger.debug("SQS Stats:", :request_count => stats.request_count, 162 | :received_message_count => stats.received_message_count, 163 | :last_message_received_at => stats.last_message_received_at) if @logger.debug? 164 | end 165 | end 166 | end 167 | 168 | private 169 | 170 | # Runs an AWS request inside a Ruby block with an exponential backoff in case 171 | # we experience a ServiceError. 172 | # 173 | # @param [Block] block Ruby code block to execute. 
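# With the defaults defined above (BACKOFF_SLEEP_TIME = 1, BACKOFF_FACTOR = 2), consecutive
# retries sleep roughly 1s, 2s, 4s, ... and the delay stops doubling once it exceeds
# MAX_TIME_BEFORE_GIVING_UP (60 seconds); for the rescued error classes the loop keeps
# retrying at that capped delay rather than giving up.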
174 | def run_with_backoff(&block) 175 | sleep_time = BACKOFF_SLEEP_TIME 176 | begin 177 | block.call 178 | rescue Aws::SQS::Errors::ServiceError, Seahorse::Client::NetworkingError => e 179 | @logger.warn("SQS error ... retrying with exponential backoff", exception_details(e, sleep_time)) 180 | sleep_time = backoff_sleep(sleep_time) 181 | retry 182 | end 183 | end 184 | 185 | def backoff_sleep(sleep_time) 186 | sleep(sleep_time) 187 | sleep_time > MAX_TIME_BEFORE_GIVING_UP ? sleep_time : sleep_time * BACKOFF_FACTOR 188 | end 189 | 190 | def convert_epoch_to_timestamp(time) 191 | LogStash::Timestamp.at(time.to_i / 1000) 192 | end 193 | 194 | def exception_details(e, sleep_time = nil) 195 | details = { :queue => @queue, :exception => e.class, :message => e.message } 196 | details[:code] = e.code if e.is_a?(Aws::SQS::Errors::ServiceError) && e.code 197 | details[:cause] = e.original_error if e.respond_to?(:original_error) && e.original_error # Seahorse::Client::NetworkingError 198 | details[:sleep_time] = sleep_time if sleep_time 199 | details[:backtrace] = e.backtrace if @logger.debug? 200 | details 201 | end 202 | 203 | end # class LogStash::Inputs::SQS 204 | -------------------------------------------------------------------------------- /lib/logstash/inputs/sqs/patch.rb: -------------------------------------------------------------------------------- 1 | # This patch was stolen from logstash-plugins/logstash-output-sqs#20. 2 | # 3 | # This patch is a workaround for a JRuby issue which has been fixed in JRuby 4 | # 9000, but not in JRuby 1.7. See https://github.com/jruby/jruby/issues/3645 5 | # and https://github.com/jruby/jruby/issues/3920. This is necessary because the 6 | # `aws-sdk` is doing tricky name discovery to generate the correct error class. 7 | # 8 | # As per https://github.com/aws/aws-sdk-ruby/issues/1301#issuecomment-261115960, 9 | # this patch may be short-lived anyway. 10 | require 'aws-sdk' 11 | 12 | begin 13 | old_stderr = $stderr 14 | $stderr = StringIO.new 15 | 16 | module Aws 17 | const_set(:SQS, Aws::SQS) 18 | end 19 | ensure 20 | $stderr = old_stderr 21 | end 22 | -------------------------------------------------------------------------------- /logstash-input-sqs.gemspec: -------------------------------------------------------------------------------- 1 | Gem::Specification.new do |s| 2 | s.name = 'logstash-input-sqs' 3 | s.version = '3.3.2' 4 | s.licenses = ['Apache-2.0'] 5 | s.summary = "Pulls events from an Amazon Web Services Simple Queue Service queue" 6 | s.description = "This gem is a Logstash plugin required to be installed on top of the Logstash core pipeline using $LS_HOME/bin/logstash-plugin install gemname. 
This gem is not a stand-alone program" 7 | s.authors = ["Elastic"] 8 | s.email = 'info@elastic.co' 9 | s.homepage = "http://www.elastic.co/guide/en/logstash/current/index.html" 10 | s.require_paths = ["lib"] 11 | 12 | # Files 13 | s.files = Dir["lib/**/*","spec/**/*","*.gemspec","*.md","CONTRIBUTORS","Gemfile","LICENSE","NOTICE.TXT", "vendor/jar-dependencies/**/*.jar", "vendor/jar-dependencies/**/*.rb", "VERSION", "docs/**/*"] 14 | 15 | # Tests 16 | s.test_files = s.files.grep(%r{^(test|spec|features)/}) 17 | 18 | # Special flag to let us know this is actually a logstash plugin 19 | s.metadata = { "logstash_plugin" => "true", "logstash_group" => "input" } 20 | 21 | # Gem dependencies 22 | s.add_runtime_dependency "logstash-core-plugin-api", ">= 1.60", "<= 2.99" 23 | 24 | s.add_runtime_dependency 'logstash-codec-json' 25 | s.add_runtime_dependency 'logstash-mixin-aws', '>= 5.1.0' 26 | 27 | s.add_development_dependency 'logstash-devutils' 28 | s.add_development_dependency "logstash-codec-json_lines" 29 | end 30 | 31 | -------------------------------------------------------------------------------- /spec/inputs/sqs_spec.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "spec_helper" 3 | require "logstash/devutils/rspec/shared_examples" 4 | require "logstash/inputs/sqs" 5 | require "logstash/errors" 6 | require "logstash/event" 7 | require "logstash/json" 8 | require "aws-sdk" 9 | require "ostruct" 10 | 11 | describe LogStash::Inputs::SQS do 12 | let(:queue_name) { "the-infinite-pandora-box" } 13 | let(:queue_url) { "https://sqs.test.local/#{queue_name}" } 14 | let(:config) do 15 | { 16 | "region" => "us-east-1", 17 | "access_key_id" => "123", 18 | "secret_access_key" => "secret", 19 | "queue" => queue_name 20 | } 21 | end 22 | 23 | let(:input) { LogStash::Inputs::SQS.new(config) } 24 | let(:decoded_message) { { "bonjour" => "awesome" } } 25 | let(:encoded_message) { double("sqs_message", :body => LogStash::Json::dump(decoded_message)) } 26 | 27 | subject { input } 28 | 29 | let(:mock_sqs) { Aws::SQS::Client.new({ :stub_responses => true }) } 30 | 31 | 32 | context "with invalid credentials" do 33 | before do 34 | expect(Aws::SQS::Client).to receive(:new).and_return(mock_sqs) 35 | expect(mock_sqs).to receive(:get_queue_url).with({ :queue_name => queue_name }) { raise Aws::SQS::Errors::ServiceError.new("bad-something", "bad token") } 36 | end 37 | 38 | it "raises a Configuration error if the credentials are bad" do 39 | expect { subject.register }.to raise_error(LogStash::ConfigurationError) 40 | end 41 | end 42 | 43 | context "valid credentials" do 44 | let(:queue) { [] } 45 | 46 | it "doesn't raise an error with valid credentials" do 47 | expect(Aws::SQS::Client).to receive(:new).and_return(mock_sqs) 48 | expect(mock_sqs).to receive(:get_queue_url).with({ :queue_name => queue_name }).and_return({:queue_url => queue_url }) 49 | expect { subject.register }.not_to raise_error 50 | end 51 | 52 | context "when queue_aws_account_id option is specified" do 53 | let(:queue_account_id) { "123456789012" } 54 | let(:config) do 55 | { 56 | "region" => "us-east-1", 57 | "access_key_id" => "123", 58 | "secret_access_key" => "secret", 59 | "queue" => queue_name, 60 | "queue_owner_aws_account_id" => queue_account_id 61 | } 62 | end 63 | it "passes the option to sqs client" do 64 | expect(Aws::SQS::Client).to receive(:new).and_return(mock_sqs) 65 | expect(mock_sqs).to receive(:get_queue_url).with({ :queue_name => queue_name, 
:queue_owner_aws_account_id => queue_account_id }).and_return({:queue_url => queue_url }) 66 | expect { subject.register }.not_to raise_error 67 | end 68 | end 69 | 70 | describe "additional_settings" do 71 | context "supported settings" do 72 | let(:config) { 73 | { 74 | "additional_settings" => { "force_path_style" => 'true', "ssl_verify_peer" => 'false', "profile" => 'logstash' }, 75 | "queue" => queue_name 76 | } 77 | } 78 | 79 | it 'should instantiate Aws::SQS clients with force_path_style set' do 80 | expect(Aws::SQS::Client).to receive(:new).and_return(mock_sqs) 81 | # mock a remote call to retrieve the queue URL 82 | expect(mock_sqs).to receive(:get_queue_url).with({ :queue_name => queue_name }).and_return({:queue_url => queue_url }) 83 | 84 | expect(subject.aws_options_hash).to include({:force_path_style => true, :ssl_verify_peer => false, :profile => 'logstash'}) 85 | 86 | expect { subject.register }.not_to raise_error 87 | end 88 | end 89 | 90 | context "unsupported settings" do 91 | let(:config) { 92 | { 93 | "additional_settings" => { "stub_responses" => 'true', "invalid_option" => "invalid" }, 94 | "queue" => queue_name 95 | } 96 | } 97 | 98 | it 'must fail with ArgumentError' do 99 | expect {subject.register}.to raise_error(ArgumentError, /invalid_option/) 100 | end 101 | end 102 | 103 | end 104 | 105 | context "when interrupting the plugin" do 106 | before do 107 | expect(Aws::SQS::Client).to receive(:new).and_return(mock_sqs) 108 | expect(mock_sqs).to receive(:get_queue_url).with({ :queue_name => queue_name }).and_return({:queue_url => queue_url }) 109 | expect(subject).to receive(:poller).and_return(mock_sqs).at_least(:once) 110 | 111 | # We have to make sure we create a bunch of events 112 | # so we actually really try to stop the plugin. 113 | # 114 | # rspec's `and_yield` allow you to define a fix amount of possible 115 | # yielded values and doesn't allow you to create infinite loop. 116 | # And since we are actually creating thread we need to make sure 117 | # we have enough work to keep the thread working until we kill it.. 
118 | # 119 | # I haven't found a way to make it rspec friendly 120 | mock_sqs.instance_eval do 121 | def poll(polling_options = {}) 122 | loop do 123 | yield [OpenStruct.new(:body => LogStash::Json::dump({ "message" => "hello world"}))], OpenStruct.new 124 | end 125 | end 126 | end 127 | end 128 | 129 | it_behaves_like "an interruptible input plugin" 130 | end 131 | 132 | context "enrich event" do 133 | let(:event) { LogStash::Event.new } 134 | 135 | let(:message_id) { "123" } 136 | let(:md5_of_body) { "dr strange" } 137 | let(:sent_timestamp) { LogStash::Timestamp.new } 138 | let(:epoch_timestamp) { (sent_timestamp.utc.to_f * 1000).to_i } 139 | 140 | let(:id_field) { "my_id_field" } 141 | let(:md5_field) { "my_md5_field" } 142 | let(:sent_timestamp_field) { "my_sent_timestamp_field" } 143 | 144 | let(:message) do 145 | double("message", :message_id => message_id, :md5_of_body => md5_of_body, :attributes => { LogStash::Inputs::SQS::SENT_TIMESTAMP => epoch_timestamp } ) 146 | end 147 | 148 | subject { input.add_sqs_data(event, message) } 149 | 150 | context "when the option is specified" do 151 | let(:config) do 152 | { 153 | "region" => "us-east-1", 154 | "access_key_id" => "123", 155 | "secret_access_key" => "secret", 156 | "queue" => queue_name, 157 | "id_field" => id_field, 158 | "md5_field" => md5_field, 159 | "sent_timestamp_field" => sent_timestamp_field 160 | } 161 | end 162 | 163 | it "add the `message_id`" do 164 | expect(subject.get(id_field)).to eq(message_id) 165 | end 166 | 167 | it "add the `md5_of_body`" do 168 | expect(subject.get(md5_field)).to eq(md5_of_body) 169 | end 170 | 171 | it "add the `sent_timestamp`" do 172 | expect(subject.get(sent_timestamp_field).to_i).to eq(sent_timestamp.to_i) 173 | end 174 | end 175 | 176 | context "when the option isn't specified" do 177 | it "doesnt add the `message_id`" do 178 | expect(subject).not_to include(id_field) 179 | end 180 | 181 | it "doesnt add the `md5_of_body`" do 182 | expect(subject).not_to include(md5_field) 183 | end 184 | 185 | it "doesnt add the `sent_timestamp`" do 186 | expect(subject).not_to include(sent_timestamp_field) 187 | end 188 | end 189 | end 190 | 191 | context "when decoding body" do 192 | subject { LogStash::Inputs::SQS::new(config.merge({ "codec" => "json" })) } 193 | 194 | it "uses the specified codec" do 195 | subject.handle_message(encoded_message, queue) 196 | expect(queue.pop.get("bonjour")).to eq(decoded_message["bonjour"]) 197 | end 198 | end 199 | 200 | context "receiving messages" do 201 | before do 202 | expect(subject).to receive(:poller).and_return(mock_sqs).at_least(:once) 203 | end 204 | 205 | it "creates logstash event" do 206 | expect(mock_sqs).to receive(:poll).with(anything()).and_yield([encoded_message], double("stats")) 207 | subject.run(queue) 208 | expect(queue.pop.get("bonjour")).to eq(decoded_message["bonjour"]) 209 | end 210 | 211 | context 'can create multiple events' do 212 | require "logstash/codecs/json_lines" 213 | let(:config) { super().merge({ "codec" => "json_lines" }) } 214 | let(:first_message) { { "sequence" => "first" } } 215 | let(:second_message) { { "sequence" => "second" } } 216 | let(:encoded_message) { double("sqs_message", :body => "#{LogStash::Json::dump(first_message)}\n#{LogStash::Json::dump(second_message)}\n") } 217 | 218 | it 'creates multiple events' do 219 | expect(mock_sqs).to receive(:poll).with(anything()).and_yield([encoded_message], double("stats")) 220 | subject.run(queue) 221 | events = queue.map{ |e|e.get('sequence')} 222 | expect(events).to 
match_array([first_message['sequence'], second_message['sequence']]) 223 | end 224 | end 225 | end 226 | 227 | context "on errors" do 228 | let(:payload) { "Hello world" } 229 | 230 | before do 231 | expect(subject).to receive(:poller).and_return(mock_sqs).at_least(:once) 232 | end 233 | 234 | context "SQS error" do 235 | it "retry to fetch messages" do 236 | # change the poller implementation to raise SQS errors. 237 | had_error = false 238 | 239 | # actually using the child of `Object` to do an expectation of `#sleep` 240 | expect(subject).to receive(:sleep).with(LogStash::Inputs::SQS::BACKOFF_SLEEP_TIME) 241 | expect(mock_sqs).to receive(:poll).with(anything()).at_most(2) do 242 | unless had_error 243 | had_error = true 244 | raise Aws::SQS::Errors::ServiceError.new("testing", "testing exception") 245 | end 246 | 247 | queue << payload 248 | end 249 | 250 | subject.run(queue) 251 | 252 | expect(queue.size).to eq(1) 253 | expect(queue.pop).to eq(payload) 254 | end 255 | end 256 | 257 | context "SQS error (retries)" do 258 | 259 | it "retry to fetch messages" do 260 | sleep_time = LogStash::Inputs::SQS::BACKOFF_SLEEP_TIME 261 | expect(subject).to receive(:sleep).with(sleep_time) 262 | expect(subject).to receive(:sleep).with(sleep_time * 2) 263 | expect(subject).to receive(:sleep).with(sleep_time * 4) 264 | 265 | error_count = 0 266 | expect(mock_sqs).to receive(:poll).with(anything()).at_most(4) do 267 | error_count += 1 268 | if error_count <= 3 269 | raise Aws::SQS::Errors::QueueDoesNotExist.new("testing", "testing exception (#{error_count})") 270 | end 271 | 272 | queue << payload 273 | end 274 | 275 | subject.run(queue) 276 | 277 | expect(queue.size).to eq(1) 278 | expect(queue.pop).to eq(payload) 279 | end 280 | 281 | end 282 | 283 | context "networking error" do 284 | 285 | before(:all) { require 'seahorse/client/networking_error' } 286 | 287 | it "retry to fetch messages" do 288 | sleep_time = LogStash::Inputs::SQS::BACKOFF_SLEEP_TIME 289 | expect(subject).to receive(:sleep).with(sleep_time).twice 290 | 291 | error_count = 0 292 | expect(mock_sqs).to receive(:poll).with(anything()).at_most(5) do 293 | error_count += 1 294 | if error_count == 1 295 | raise Seahorse::Client::NetworkingError.new(Net::OpenTimeout.new, 'timeout') 296 | end 297 | if error_count == 3 298 | raise Seahorse::Client::NetworkingError.new(SocketError.new('spec-error')) 299 | end 300 | 301 | queue << payload 302 | end 303 | 304 | subject.run(queue) 305 | subject.run(queue) 306 | 307 | expect(queue.size).to eq(2) 308 | expect(queue.pop).to eq(payload) 309 | end 310 | 311 | end 312 | 313 | context "other error" do 314 | it "stops executing the code and raise the exception" do 315 | expect(mock_sqs).to receive(:poll).with(anything()).at_most(2) do 316 | raise RuntimeError 317 | end 318 | 319 | expect { subject.run(queue) }.to raise_error(RuntimeError) 320 | end 321 | end 322 | end 323 | end 324 | end 325 | -------------------------------------------------------------------------------- /spec/integration/sqs_spec.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "spec_helper" 3 | require "logstash/inputs/sqs" 4 | require "logstash/event" 5 | require "logstash/json" 6 | require "aws-sdk" 7 | require_relative "../support/helpers" 8 | require "thread" 9 | 10 | Thread.abort_on_exception = true 11 | 12 | describe "LogStash::Inputs::SQS integration", :integration => true do 13 | let(:decoded_message) { { "drstrange" => "is-he-really-that-strange" } } 14 
| let(:encoded_message) { LogStash::Json.dump(decoded_message) } 15 | let(:queue) { Queue.new } 16 | 17 | let(:input) { LogStash::Inputs::SQS.new(options) } 18 | 19 | context "with invalid credentials" do 20 | let(:options) do 21 | { 22 | "queue" => "do-not-exist", 23 | "access_key_id" => "bad_access", 24 | "secret_access_key" => "bad_secret_key", 25 | "region" => ENV["AWS_REGION"] 26 | } 27 | end 28 | 29 | subject { input } 30 | 31 | it "raises a Configuration error if the credentials are bad" do 32 | expect { subject.register }.to raise_error(LogStash::ConfigurationError) 33 | end 34 | end 35 | 36 | context "with valid credentials" do 37 | let(:options) do 38 | { 39 | "queue" => ENV["SQS_QUEUE_NAME"], 40 | "access_key_id" => ENV['AWS_ACCESS_KEY_ID'], 41 | "secret_access_key" => ENV['AWS_SECRET_ACCESS_KEY'], 42 | "region" => ENV["AWS_REGION"] 43 | } 44 | end 45 | 46 | before :each do 47 | push_sqs_event(encoded_message) 48 | input.register 49 | @server = Thread.new { input.run(queue) } 50 | end 51 | 52 | after do 53 | @server.kill 54 | end 55 | 56 | subject { queue.pop } 57 | 58 | it "creates logstash events" do 59 | expect(subject["drstrange"]).to eq(decoded_message["drstrange"]) 60 | end 61 | 62 | context "when the optionals fields are not specified" do 63 | let(:id_field) { "my_id_field" } 64 | let(:md5_field) { "my_md5_field" } 65 | let(:sent_timestamp_field) { "my_sent_timestamp_field" } 66 | 67 | it "add the `message_id`" do 68 | expect(subject[id_field]).to be_nil 69 | end 70 | 71 | it "add the `md5_of_body`" do 72 | expect(subject[md5_field]).to be_nil 73 | end 74 | 75 | it "add the `sent_timestamp`" do 76 | expect(subject[sent_timestamp_field]).to be_nil 77 | end 78 | 79 | end 80 | 81 | context "when the optionals fields are specified" do 82 | let(:id_field) { "my_id_field" } 83 | let(:md5_field) { "my_md5_field" } 84 | let(:sent_timestamp_field) { "my_sent_timestamp_field" } 85 | 86 | let(:options) do 87 | { 88 | "queue" => ENV["SQS_QUEUE_NAME"], 89 | "access_key_id" => ENV['AWS_ACCESS_KEY_ID'], 90 | "secret_access_key" => ENV['AWS_SECRET_ACCESS_KEY'], 91 | "region" => ENV["AWS_REGION"], 92 | "id_field" => id_field, 93 | "md5_field" => md5_field, 94 | "sent_timestamp_field" => sent_timestamp_field 95 | } 96 | end 97 | 98 | it "add the `message_id`" do 99 | expect(subject[id_field]).not_to be_nil 100 | end 101 | 102 | it "add the `md5_of_body`" do 103 | expect(subject[md5_field]).not_to be_nil 104 | end 105 | 106 | it "add the `sent_timestamp`" do 107 | expect(subject[sent_timestamp_field]).not_to be_nil 108 | end 109 | end 110 | end 111 | end 112 | -------------------------------------------------------------------------------- /spec/spec_helper.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | require "logstash/devutils/rspec/spec_helper" 3 | -------------------------------------------------------------------------------- /spec/support/helpers.rb: -------------------------------------------------------------------------------- 1 | # encoding: utf-8 2 | def push_sqs_event(message) 3 | client = Aws::SQS::Client.new 4 | queue_url = client.get_queue_url(:queue_name => ENV["SQS_QUEUE_NAME"]) 5 | 6 | client.send_message({ 7 | queue_url: queue_url.queue_url, 8 | message_body: message, 9 | }) 10 | end 11 | --------------------------------------------------------------------------------