├── .gitignore ├── CONTRIBUTING.md ├── LICENSE.txt ├── README.md ├── dev-tools └── release.py ├── pom.xml └── src ├── main ├── assemblies │ └── plugin.xml ├── java │ └── org │ │ └── elasticsearch │ │ ├── plugin │ │ └── river │ │ │ └── rabbitmq │ │ │ └── RabbitmqRiverPlugin.java │ │ └── river │ │ └── rabbitmq │ │ ├── RabbitmqRiver.java │ │ └── RabbitmqRiverModule.java └── resources │ └── es-plugin.properties └── test └── java └── org └── elasticsearch └── river └── rabbitmq ├── AbstractRabbitMQTest.java ├── RabbitMQIntegrationTest.java └── script ├── MockScript.java └── MockScriptFactory.java /.gitignore: -------------------------------------------------------------------------------- 1 | /data 2 | /work 3 | /logs 4 | /.idea 5 | /target 6 | .DS_Store 7 | *.iml 8 | /.project 9 | /.settings 10 | /.classpath 11 | /plugin_tools 12 | /.local-execution-hints.log 13 | /.local-*-execution-hints.log 14 | /eclipse-build/ 15 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | Contributing to elasticsearch 2 | ============================= 3 | 4 | Elasticsearch is an open source project and we love to receive contributions from our community — you! There are many ways to contribute, from writing tutorials or blog posts, improving the documentation, submitting bug reports and feature requests or writing code which can be incorporated into Elasticsearch itself. 5 | 6 | Bug reports 7 | ----------- 8 | 9 | If you think you have found a bug in Elasticsearch, first make sure that you are testing against the [latest version of Elasticsearch](http://www.elasticsearch.org/download/) - your issue may already have been fixed. If not, search our [issues list](https://github.com/elasticsearch/elasticsearch/issues) on GitHub in case a similar issue has already been opened. 10 | 11 | It is very helpful if you can prepare a reproduction of the bug. 
In other words, provide a small test case which we can run to confirm your bug. This makes it easier to find and fix the problem. Test cases should be provided as `curl` commands which we can copy and paste into a terminal to run them locally, for example: 12 | 13 | ```sh 14 | # delete the index 15 | curl -XDELETE localhost:9200/test 16 | 17 | # insert a document 18 | curl -XPUT localhost:9200/test/test/1 -d '{ 19 | "title": "test document" 20 | }' 21 | 22 | # this should return XXXX but instead returns YYY 23 | curl .... 24 | ``` 25 | 26 | Provide as much information as you can. You may think that the problem lies with your query, when actually it depends on how your data is indexed. The easier it is for us to recreate your problem, the faster it is likely to be fixed. 27 | 28 | Feature requests 29 | ---------------- 30 | 31 | If you find yourself wishing for a feature that doesn't exist in Elasticsearch, you are probably not alone. There are bound to be others out there with similar needs. Many of the features that Elasticsearch has today have been added because our users saw the need. 32 | Open an issue on our [issues list](https://github.com/elasticsearch/elasticsearch/issues) on GitHub describing the feature you would like to see, why you need it, and how it should work. 33 | 34 | Contributing code and documentation changes 35 | ------------------------------------------- 36 | 37 | If you have a bugfix or new feature that you would like to contribute to Elasticsearch, please find or open an issue about it first. Talk about what you would like to do. It may be that somebody is already working on it, or that there are particular issues that you should know about before implementing the change. 38 | 39 | We enjoy working with contributors to get their code accepted. There are many approaches to fixing a problem and it is important to find the best approach before writing too much code.
40 | 41 | The process for contributing to any of the [Elasticsearch repositories](https://github.com/elasticsearch/) is similar. Details for individual projects can be found below. 42 | 43 | ### Fork and clone the repository 44 | 45 | You will need to fork the main Elasticsearch code or documentation repository and clone it to your local machine. See 46 | [github help page](https://help.github.com/articles/fork-a-repo) for help. 47 | 48 | Further instructions for specific projects are given below. 49 | 50 | ### Submitting your changes 51 | 52 | Once your changes and tests are ready to submit for review: 53 | 54 | 1. Test your changes 55 | Run the test suite to make sure that nothing is broken. 56 | 57 | 2. Sign the Contributor License Agreement 58 | Please make sure you have signed our [Contributor License Agreement](http://www.elasticsearch.org/contributor-agreement/). We are not asking you to assign copyright to us, but to give us the right to distribute your code without restriction. We ask this of all contributors in order to assure our users of the origin and continuing existence of the code. You only need to sign the CLA once. 59 | 60 | 3. Rebase your changes 61 | Update your local repository with the most recent code from the main Elasticsearch repository, and rebase your branch on top of the latest master branch. We prefer your changes to be squashed into a single commit. 62 | 63 | 4. Submit a pull request 64 | Push your local changes to your forked copy of the repository and [submit a pull request](https://help.github.com/articles/using-pull-requests). In the pull request, describe what your changes do and mention the number of the issue where discussion has taken place, eg "Closes #123". 65 | 66 | Then sit back and wait. There will probably be discussion about the pull request and, if any changes are needed, we would love to work with you to get your pull request merged into Elasticsearch. 
67 | 68 | 69 | Contributing to the Elasticsearch plugin 70 | ---------------------------------------- 71 | 72 | **Repository:** [https://github.com/elasticsearch/elasticsearch-river-rabbitmq](https://github.com/elasticsearch/elasticsearch-river-rabbitmq) 73 | 74 | Make sure you have [Maven](http://maven.apache.org) installed, as Elasticsearch uses it as its build system. Integration with IntelliJ and Eclipse should work out of the box. Eclipse users can automatically configure their IDE by running `mvn eclipse:eclipse` and then importing the project into their workspace: `File > Import > Existing project into workspace`. 75 | 76 | Please follow these formatting guidelines: 77 | 78 | * Java indent is 4 spaces 79 | * Line width is 140 characters 80 | * The rest is left to Java coding standards 81 | * Disable “auto-format on save”; automatic reformatting generates spurious formatting changes that make reviews much harder. If your IDE supports formatting only modified chunks, that is fine to do. 82 | 83 | To create a distribution from the source, simply run: 84 | 85 | ```sh 86 | cd elasticsearch-river-rabbitmq/ 87 | mvn clean package -DskipTests 88 | ``` 89 | 90 | You will find the newly built packages under `./target/releases/`. 91 | 92 | Before submitting your changes, run the test suite to make sure that nothing is broken: 93 | 94 | ```sh 95 | mvn clean test 96 | ``` 97 | 98 | Source: [Contributing to elasticsearch](http://www.elasticsearch.org/contributing-to-elasticsearch/) 99 | -------------------------------------------------------------------------------- /LICENSE.txt: -------------------------------------------------------------------------------- 1 | 2 | Apache License 3 | Version 2.0, January 2004 4 | http://www.apache.org/licenses/ 5 | 6 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 7 | 8 | 1. Definitions.
9 | 10 | "License" shall mean the terms and conditions for use, reproduction, 11 | and distribution as defined by Sections 1 through 9 of this document. 12 | 13 | "Licensor" shall mean the copyright owner or entity authorized by 14 | the copyright owner that is granting the License. 15 | 16 | "Legal Entity" shall mean the union of the acting entity and all 17 | other entities that control, are controlled by, or are under common 18 | control with that entity. For the purposes of this definition, 19 | "control" means (i) the power, direct or indirect, to cause the 20 | direction or management of such entity, whether by contract or 21 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 22 | outstanding shares, or (iii) beneficial ownership of such entity. 23 | 24 | "You" (or "Your") shall mean an individual or Legal Entity 25 | exercising permissions granted by this License. 26 | 27 | "Source" form shall mean the preferred form for making modifications, 28 | including but not limited to software source code, documentation 29 | source, and configuration files. 30 | 31 | "Object" form shall mean any form resulting from mechanical 32 | transformation or translation of a Source form, including but 33 | not limited to compiled object code, generated documentation, 34 | and conversions to other media types. 35 | 36 | "Work" shall mean the work of authorship, whether in Source or 37 | Object form, made available under the License, as indicated by a 38 | copyright notice that is included in or attached to the work 39 | (an example is provided in the Appendix below). 40 | 41 | "Derivative Works" shall mean any work, whether in Source or Object 42 | form, that is based on (or derived from) the Work and for which the 43 | editorial revisions, annotations, elaborations, or other modifications 44 | represent, as a whole, an original work of authorship. 
For the purposes 45 | of this License, Derivative Works shall not include works that remain 46 | separable from, or merely link (or bind by name) to the interfaces of, 47 | the Work and Derivative Works thereof. 48 | 49 | "Contribution" shall mean any work of authorship, including 50 | the original version of the Work and any modifications or additions 51 | to that Work or Derivative Works thereof, that is intentionally 52 | submitted to Licensor for inclusion in the Work by the copyright owner 53 | or by an individual or Legal Entity authorized to submit on behalf of 54 | the copyright owner. For the purposes of this definition, "submitted" 55 | means any form of electronic, verbal, or written communication sent 56 | to the Licensor or its representatives, including but not limited to 57 | communication on electronic mailing lists, source code control systems, 58 | and issue tracking systems that are managed by, or on behalf of, the 59 | Licensor for the purpose of discussing and improving the Work, but 60 | excluding communication that is conspicuously marked or otherwise 61 | designated in writing by the copyright owner as "Not a Contribution." 62 | 63 | "Contributor" shall mean Licensor and any individual or Legal Entity 64 | on behalf of whom a Contribution has been received by Licensor and 65 | subsequently incorporated within the Work. 66 | 67 | 2. Grant of Copyright License. Subject to the terms and conditions of 68 | this License, each Contributor hereby grants to You a perpetual, 69 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 70 | copyright license to reproduce, prepare Derivative Works of, 71 | publicly display, publicly perform, sublicense, and distribute the 72 | Work and such Derivative Works in Source or Object form. 73 | 74 | 3. Grant of Patent License. 
Subject to the terms and conditions of 75 | this License, each Contributor hereby grants to You a perpetual, 76 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 77 | (except as stated in this section) patent license to make, have made, 78 | use, offer to sell, sell, import, and otherwise transfer the Work, 79 | where such license applies only to those patent claims licensable 80 | by such Contributor that are necessarily infringed by their 81 | Contribution(s) alone or by combination of their Contribution(s) 82 | with the Work to which such Contribution(s) was submitted. If You 83 | institute patent litigation against any entity (including a 84 | cross-claim or counterclaim in a lawsuit) alleging that the Work 85 | or a Contribution incorporated within the Work constitutes direct 86 | or contributory patent infringement, then any patent licenses 87 | granted to You under this License for that Work shall terminate 88 | as of the date such litigation is filed. 89 | 90 | 4. Redistribution. 
You may reproduce and distribute copies of the 91 | Work or Derivative Works thereof in any medium, with or without 92 | modifications, and in Source or Object form, provided that You 93 | meet the following conditions: 94 | 95 | (a) You must give any other recipients of the Work or 96 | Derivative Works a copy of this License; and 97 | 98 | (b) You must cause any modified files to carry prominent notices 99 | stating that You changed the files; and 100 | 101 | (c) You must retain, in the Source form of any Derivative Works 102 | that You distribute, all copyright, patent, trademark, and 103 | attribution notices from the Source form of the Work, 104 | excluding those notices that do not pertain to any part of 105 | the Derivative Works; and 106 | 107 | (d) If the Work includes a "NOTICE" text file as part of its 108 | distribution, then any Derivative Works that You distribute must 109 | include a readable copy of the attribution notices contained 110 | within such NOTICE file, excluding those notices that do not 111 | pertain to any part of the Derivative Works, in at least one 112 | of the following places: within a NOTICE text file distributed 113 | as part of the Derivative Works; within the Source form or 114 | documentation, if provided along with the Derivative Works; or, 115 | within a display generated by the Derivative Works, if and 116 | wherever such third-party notices normally appear. The contents 117 | of the NOTICE file are for informational purposes only and 118 | do not modify the License. You may add Your own attribution 119 | notices within Derivative Works that You distribute, alongside 120 | or as an addendum to the NOTICE text from the Work, provided 121 | that such additional attribution notices cannot be construed 122 | as modifying the License. 
123 | 124 | You may add Your own copyright statement to Your modifications and 125 | may provide additional or different license terms and conditions 126 | for use, reproduction, or distribution of Your modifications, or 127 | for any such Derivative Works as a whole, provided Your use, 128 | reproduction, and distribution of the Work otherwise complies with 129 | the conditions stated in this License. 130 | 131 | 5. Submission of Contributions. Unless You explicitly state otherwise, 132 | any Contribution intentionally submitted for inclusion in the Work 133 | by You to the Licensor shall be under the terms and conditions of 134 | this License, without any additional terms or conditions. 135 | Notwithstanding the above, nothing herein shall supersede or modify 136 | the terms of any separate license agreement you may have executed 137 | with Licensor regarding such Contributions. 138 | 139 | 6. Trademarks. This License does not grant permission to use the trade 140 | names, trademarks, service marks, or product names of the Licensor, 141 | except as required for reasonable and customary use in describing the 142 | origin of the Work and reproducing the content of the NOTICE file. 143 | 144 | 7. Disclaimer of Warranty. Unless required by applicable law or 145 | agreed to in writing, Licensor provides the Work (and each 146 | Contributor provides its Contributions) on an "AS IS" BASIS, 147 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 148 | implied, including, without limitation, any warranties or conditions 149 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 150 | PARTICULAR PURPOSE. You are solely responsible for determining the 151 | appropriateness of using or redistributing the Work and assume any 152 | risks associated with Your exercise of permissions under this License. 153 | 154 | 8. Limitation of Liability. 
In no event and under no legal theory, 155 | whether in tort (including negligence), contract, or otherwise, 156 | unless required by applicable law (such as deliberate and grossly 157 | negligent acts) or agreed to in writing, shall any Contributor be 158 | liable to You for damages, including any direct, indirect, special, 159 | incidental, or consequential damages of any character arising as a 160 | result of this License or out of the use or inability to use the 161 | Work (including but not limited to damages for loss of goodwill, 162 | work stoppage, computer failure or malfunction, or any and all 163 | other commercial damages or losses), even if such Contributor 164 | has been advised of the possibility of such damages. 165 | 166 | 9. Accepting Warranty or Additional Liability. While redistributing 167 | the Work or Derivative Works thereof, You may choose to offer, 168 | and charge a fee for, acceptance of support, warranty, indemnity, 169 | or other liability obligations and/or rights consistent with this 170 | License. However, in accepting such obligations, You may act only 171 | on Your own behalf and on Your sole responsibility, not on behalf 172 | of any other Contributor, and only if You agree to indemnify, 173 | defend, and hold each Contributor harmless for any liability 174 | incurred by, or claims asserted against, such Contributor by reason 175 | of your accepting any such warranty or additional liability. 176 | 177 | END OF TERMS AND CONDITIONS 178 | 179 | APPENDIX: How to apply the Apache License to your work. 180 | 181 | To apply the Apache License to your work, attach the following 182 | boilerplate notice, with the fields enclosed by brackets "[]" 183 | replaced with your own identifying information. (Don't include 184 | the brackets!) The text should be enclosed in the appropriate 185 | comment syntax for the file format. 
We also recommend that a 186 | file or class name and description of purpose be included on the 187 | same "printed page" as the copyright notice for easier 188 | identification within third-party archives. 189 | 190 | Copyright [yyyy] [name of copyright owner] 191 | 192 | Licensed under the Apache License, Version 2.0 (the "License"); 193 | you may not use this file except in compliance with the License. 194 | You may obtain a copy of the License at 195 | 196 | http://www.apache.org/licenses/LICENSE-2.0 197 | 198 | Unless required by applicable law or agreed to in writing, software 199 | distributed under the License is distributed on an "AS IS" BASIS, 200 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 201 | See the License for the specific language governing permissions and 202 | limitations under the License. 203 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | **Important**: This project has been discontinued since elasticsearch 2.0. 2 | 3 | ---- 4 | 5 | RabbitMQ River Plugin for Elasticsearch 6 | ================================== 7 | 8 | The RabbitMQ River plugin allows indexing [bulk format messages](http://www.elasticsearch.org/guide/reference/api/bulk/) into elasticsearch. 9 | The RabbitMQ River automatically indexes a [RabbitMQ](http://www.rabbitmq.com/) queue.
The format of the messages follows the bulk api format: 10 | 11 | ```javascript 12 | { "index" : { "_index" : "twitter", "_type" : "tweet", "_id" : "1" } } 13 | { "tweet" : { "text" : "this is a tweet" } } 14 | { "delete" : { "_index" : "twitter", "_type" : "tweet", "_id" : "2" } } 15 | { "create" : { "_index" : "twitter", "_type" : "tweet", "_id" : "1" } } 16 | { "tweet" : { "text" : "another tweet" } } 17 | ``` 18 | 19 | **Rivers are [deprecated](https://www.elastic.co/blog/deprecating_rivers) and will be removed in the future.** 20 | Have a look at [logstash rabbitmq input](http://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html). 21 | 22 | In order to install the plugin, run: 23 | 24 | ```sh 25 | bin/plugin install elasticsearch/elasticsearch-river-rabbitmq/2.6.0 26 | ``` 27 | 28 | You need to install a version matching your Elasticsearch version: 29 | 30 | | Elasticsearch | RabbitMQ River | Docs | 31 | |------------------------|-------------------|------------------------------------------------------------------------------------------------------------------------------------| 32 | | master | Build from source | See below | 33 | | es-1.x | Build from source | [2.7.0-SNAPSHOT](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/es-1.x/#version-270-snapshot-for-elasticsearch-1x)| 34 | | es-1.6 | 2.6.0 | [2.6.0](https://github.com/elastic/elasticsearch-river-rabbitmq/tree/v2.6.0/#version-260-for-elasticsearch-16) | 35 | | es-1.5 | 2.5.0 | [2.5.0](https://github.com/elastic/elasticsearch-river-rabbitmq/tree/v2.5.0/#version-250-for-elasticsearch-15) | 36 | | es-1.4 | 2.4.1 | [2.4.1](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/v2.4.1/#version-241-for-elasticsearch-14) | 37 | | es-1.3 | 2.3.0 | [2.3.0](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/v2.3.0/#version-230-for-elasticsearch-13) | 38 | | es-1.2 | 2.2.0 | 
[2.2.0](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/v2.2.0/#version-220-for-elasticsearch-12) | 39 | | es-1.1 | 2.0.0 | [2.0.0](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/v2.0.0/#rabbitmq-river-plugin-for-elasticsearch) | 40 | | es-1.0 | 2.0.0 | [2.0.0](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/v2.0.0/#rabbitmq-river-plugin-for-elasticsearch) | 41 | | es-0.90 | 1.6.0 | [1.6.0](https://github.com/elasticsearch/elasticsearch-river-rabbitmq/tree/v1.6.0/#rabbitmq-river-plugin-for-elasticsearch) | 42 | 43 | To build a `SNAPSHOT` version, you need to build it with Maven: 44 | 45 | ```bash 46 | mvn clean install 47 | plugin --install river-rabbitmq \ 48 | --url file:target/releases/elasticsearch-river-rabbitmq-X.X.X-SNAPSHOT.zip 49 | ``` 50 | 51 | Create river 52 | ------------ 53 | 54 | Creating the rabbitmq river is as simple as (all configuration parameters are provided, with default values): 55 | 56 | ```sh 57 | curl -XPUT 'localhost:9200/_river/my_river/_meta' -d '{ 58 | "type" : "rabbitmq", 59 | "rabbitmq" : { 60 | "host" : "localhost", 61 | "port" : 5672, 62 | "user" : "guest", 63 | "pass" : "guest", 64 | "vhost" : "/", 65 | "queue" : "elasticsearch", 66 | "exchange" : "elasticsearch", 67 | "routing_key" : "elasticsearch", 68 | "exchange_declare" : true, 69 | "exchange_type" : "direct", 70 | "exchange_durable" : true, 71 | "queue_declare" : true, 72 | "queue_bind" : true, 73 | "queue_durable" : true, 74 | "queue_auto_delete" : false, 75 | "heartbeat" : "30m", 76 | "qos_prefetch_size" : 0, 77 | "qos_prefetch_count" : 10, 78 | "nack_errors" : true 79 | }, 80 | "index" : { 81 | "bulk_size" : 100, 82 | "bulk_timeout" : "10ms", 83 | "ordered" : false, 84 | "replication" : "default" 85 | } 86 | }' 87 | ``` 88 | 89 | You can disable exchange or queue declaration by setting `exchange_declare` or `queue_declare` to `false` 90 | (`true` by default). 
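Once the river is running, any message whose body is a valid bulk request will be indexed. Assembling such a body programmatically can be sketched as follows (the `bulk_payload` helper and the sample documents are illustrative, not part of the plugin; the resulting string would then be published to the `elasticsearch` exchange with any AMQP client):

```python
import json

def bulk_payload(actions):
    """Serialize (action, source) pairs into the newline-delimited bulk format."""
    lines = []
    for action, source in actions:
        lines.append(json.dumps(action))
        # delete actions carry no source line in the bulk format
        if source is not None:
            lines.append(json.dumps(source))
    return "\n".join(lines) + "\n"

payload = bulk_payload([
    ({"index": {"_index": "twitter", "_type": "tweet", "_id": "1"}},
     {"tweet": {"text": "this is a tweet"}}),
    ({"delete": {"_index": "twitter", "_type": "tweet", "_id": "2"}}, None),
])
print(payload)
```

Note that the bulk format is newline-delimited and the body must end with a trailing newline.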
91 | You can disable queue binding by setting `queue_bind` to `false` (`true` by default). 92 | 93 | Addresses (host-port pairs) are also available. This is useful for taking advantage of RabbitMQ HA (active/active) without any RabbitMQ load balancer 94 | (http://www.rabbitmq.com/ha.html). 95 | 96 | ```javascript 97 | ... 98 | "rabbitmq" : { 99 | "addresses" : [ 100 | { 101 | "host" : "rabbitmq-host1", 102 | "port" : 5672 103 | }, 104 | { 105 | "host" : "rabbitmq-host2", 106 | "port" : 5672 107 | } 108 | ], 109 | "user" : "guest", 110 | "pass" : "guest", 111 | "vhost" : "/", 112 | ... 113 | } 114 | ... 115 | ``` 116 | 117 | The river automatically bulks queue messages if the queue is overloaded, allowing for faster catchup with the 118 | messages streamed into the queue. The `ordered` flag makes sure that messages are indexed in the 119 | same order as they arrive in the queue, by blocking on the bulk request before picking up the next data to be indexed. 120 | It can also be used as a simple way to throttle indexing. 121 | 122 | You can set the `heartbeat` option to define a heartbeat for the RabbitMQ river even if no more messages are intended to be consumed 123 | (defaults to `30m`). 124 | 125 | Replication mode is set to the node's default value. You can change it by forcing `replication` to `async` or `sync`. 126 | 127 | By default, when an exception happens while executing a bulk, failing messages are marked as rejected. 128 | You can ignore errors and ack messages in any case by setting `nack_errors` to `false`. 129 | 130 | Setting `qos_prefetch_size` defines the maximum amount of content (measured in octets) that the server will deliver 131 | (`0` if unlimited - the default). 132 | 133 | Setting `qos_prefetch_count` defines the maximum number of messages that the server will deliver (`0` if unlimited). 134 | Defaults to `bulk_size*2`. 135 | 136 | Scripting 137 | --------- 138 | 139 | The RabbitMQ river can call scripts to modify or filter messages.
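A filter script's contract is simple: it receives a message body and returns a replacement body, or null to drop the message. The following standalone Python sketch only mimics that drop-on-null behavior (the river itself runs such scripts through elasticsearch's scripting module, so this function is purely illustrative):

```python
def apply_body_filter(bodies, filter_fn):
    """Run each message body through filter_fn; a None result drops the message."""
    kept = []
    for body in bodies:
        result = filter_fn(body)
        # a null (None) return value skips the message from the indexing flow
        if result is not None:
            kept.append(result)
    return kept

# Illustrative filter: drop bodies mentioning "spam", uppercase the rest.
filtered = apply_body_filter(
    ["hello", "spam offer", "world"],
    lambda body: None if "spam" in body else body.upper(),
)
```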
140 | 141 | ### Full bulk scripting 142 | 143 | To enable bulk scripting, use the following configuration options: 144 | 145 | ```sh 146 | curl -XPUT 'localhost:9200/_river/my_river/_meta' -d '{ 147 | "type" : "rabbitmq", 148 | "rabbitmq" : { 149 | ... 150 | }, 151 | "index" : { 152 | ... 153 | }, 154 | "bulk_script_filter" : { 155 | "script" : "myscript", 156 | "script_lang" : "native", 157 | "script_params" : { 158 | "param1" : "val1", 159 | "param2" : "val2" 160 | ... 161 | } 162 | } 163 | }' 164 | ``` 165 | 166 | * `script` is optional and is the name of the script registered in `elasticsearch.yml`. Basically, add the following 167 | property: `script.native.myscript.type: sample.MyNativeScriptFactory` and make this class available to the elasticsearch 168 | classloader. 169 | * `script_lang` defaults to `native`. 170 | * `script_params` are optional configuration arguments for the script. 171 | 172 | The script will receive a variable called `body` which contains a String representation of RabbitMQ's message body. 173 | That `body` can be modified by the script, and the script must return the new body as a String as well. 174 | If the returned body is null, that message is skipped from the indexing flow. 175 | 176 | For more information see the [Scripting module](http://www.elasticsearch.org/guide/reference/modules/scripting/) 177 | 178 | ### Doc per doc scripting 179 | 180 | You may also want to apply scripts document per document. This only works for index and create operations. 181 | 182 | To enable scripting, use the following configuration options: 183 | 184 | ```sh 185 | curl -XPUT 'localhost:9200/_river/my_river/_meta' -d '{ 186 | "type" : "rabbitmq", 187 | "rabbitmq" : { 188 | ... 189 | }, 190 | "index" : { 191 | ...
192 | }, 193 | "script_filter" : { 194 | "script" : "ctx.type1.field1 += param1", 195 | "script_lang" : "mvel", 196 | "script_params" : { 197 | "param1" : 1 198 | } 199 | } 200 | }' 201 | ``` 202 | 203 | * `script` is your script source, written here in `mvel`. 204 | * `script_lang` defaults to `mvel`. 205 | * `script_params` are optional configuration arguments for the script. 206 | 207 | The script will receive a variable called `ctx` which contains a String representation of the current document 208 | meant to be indexed or created. 209 | 210 | For more information see the [Scripting module](http://www.elasticsearch.org/guide/reference/modules/scripting/) 211 | 212 | Tests 213 | ===== 214 | 215 | Integration tests in this plugin require a working RabbitMQ service and are therefore disabled by default. 216 | You need to launch `rabbitmq-server` locally before starting the integration tests. 217 | 218 | To run the tests: 219 | 220 | ```sh 221 | mvn clean test -Dtests.rabbitmq=true 222 | ``` 223 | 224 | 225 | License 226 | ------- 227 | 228 | This software is licensed under the Apache 2 license, quoted below. 229 | 230 | Copyright 2009-2014 Elasticsearch 231 | 232 | Licensed under the Apache License, Version 2.0 (the "License"); you may not 233 | use this file except in compliance with the License. You may obtain a copy of 234 | the License at 235 | 236 | http://www.apache.org/licenses/LICENSE-2.0 237 | 238 | Unless required by applicable law or agreed to in writing, software 239 | distributed under the License is distributed on an "AS IS" BASIS, WITHOUT 240 | WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the 241 | License for the specific language governing permissions and limitations under 242 | the License.
243 | -------------------------------------------------------------------------------- /dev-tools/release.py: -------------------------------------------------------------------------------- 1 | # Licensed to Elasticsearch under one or more contributor 2 | # license agreements. See the NOTICE file distributed with 3 | # this work for additional information regarding copyright 4 | # ownership. Elasticsearch licenses this file to you under 5 | # the Apache License, Version 2.0 (the "License"); you may 6 | # not use this file except in compliance with the License. 7 | # You may obtain a copy of the License at 8 | # 9 | # http://www.apache.org/licenses/LICENSE-2.0 10 | # 11 | # Unless required by applicable law or agreed to in writing, 12 | # software distributed under the License is distributed on 13 | # an 'AS IS' BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, 14 | # either express or implied. See the License for the specific 15 | # language governing permissions and limitations under the License. 16 | 17 | import datetime 18 | import os 19 | import shutil 20 | import sys 21 | import time 22 | import urllib.error 23 | import urllib.request 24 | import zipfile 25 | 26 | from os.path import dirname, abspath 27 | 28 | """ 29 | This tool builds a release from a given elasticsearch plugin branch. 30 | 31 | It is basically a wrapper on top of launch_release.py which: 32 | 33 | - tries to get a more recent version of launch_release.py in ...
34 | - downloads it if needed 35 | - launches it, passing all arguments to it, like: 36 | 37 | $ python3 dev_tools/release.py --branch master --publish --remote origin 38 | 39 | Important options: 40 | 41 | # Dry run 42 | $ python3 dev_tools/release.py 43 | 44 | # Dry run without tests 45 | $ python3 dev_tools/release.py --skiptests 46 | 47 | # Release, publish artifacts and announce 48 | $ python3 dev_tools/release.py --publish 49 | 50 | See full documentation in launch_release.py 51 | """ 52 | env = os.environ 53 | 54 | # Change this if the source repository for your scripts is at a different location 55 | SOURCE_REPO = 'elasticsearch/elasticsearch-plugins-script' 56 | # Download the script again if the local copy is more than 1 day old 57 | SCRIPT_OBSOLETE_DAYS = 1 58 | # Files from master.zip that we ignore 59 | IGNORED_FILES = ['.gitignore', 'README.md'] 60 | 61 | 62 | ROOT_DIR = abspath(os.path.join(abspath(dirname(__file__)), '../')) 63 | TARGET_TOOLS_DIR = ROOT_DIR + '/plugin_tools' 64 | DEV_TOOLS_DIR = ROOT_DIR + '/dev-tools' 65 | BUILD_RELEASE_FILENAME = 'release.zip' 66 | BUILD_RELEASE_FILE = TARGET_TOOLS_DIR + '/' + BUILD_RELEASE_FILENAME 67 | SOURCE_URL = 'https://github.com/%s/archive/master.zip' % SOURCE_REPO 68 | 69 | # Download a recent version of the release plugin tool 70 | try: 71 | os.mkdir(TARGET_TOOLS_DIR) 72 | print('directory %s created' % TARGET_TOOLS_DIR) 73 | except FileExistsError: 74 | pass 75 | 76 | 77 | try: 78 | # we check latest update.
If we ran an update recently, we 79 | # are not going to check it again 80 | download = True 81 | 82 | try: 83 | last_download_time = datetime.datetime.fromtimestamp(os.path.getmtime(BUILD_RELEASE_FILE)) 84 | if (datetime.datetime.now()-last_download_time).days < SCRIPT_OBSOLETE_DAYS: 85 | download = False 86 | except FileNotFoundError: 87 | pass 88 | 89 | if download: 90 | urllib.request.urlretrieve(SOURCE_URL, BUILD_RELEASE_FILE) 91 | with zipfile.ZipFile(BUILD_RELEASE_FILE) as myzip: 92 | for member in myzip.infolist(): 93 | filename = os.path.basename(member.filename) 94 | # skip directories 95 | if not filename: 96 | continue 97 | if filename in IGNORED_FILES: 98 | continue 99 | 100 | # copy file (taken from zipfile's extract) 101 | source = myzip.open(member.filename) 102 | target = open(os.path.join(TARGET_TOOLS_DIR, filename), "wb") 103 | with source, target: 104 | shutil.copyfileobj(source, target) 105 | # We keep the original date 106 | date_time = time.mktime(member.date_time + (0, 0, -1)) 107 | os.utime(os.path.join(TARGET_TOOLS_DIR, filename), (date_time, date_time)) 108 | print('plugin-tools updated from %s' % SOURCE_URL) 109 | except urllib.error.HTTPError: 110 | pass 111 | 112 | 113 | # Let's see if we need to update the release.py script itself 114 | source_time = os.path.getmtime(TARGET_TOOLS_DIR + '/release.py') 115 | repo_time = os.path.getmtime(DEV_TOOLS_DIR + '/release.py') 116 | if source_time > repo_time: 117 | input('release.py needs an update.
Press a key to update it...') 118 | shutil.copyfile(TARGET_TOOLS_DIR + '/release.py', DEV_TOOLS_DIR + '/release.py') 119 | 120 | # We can launch the build process 121 | # make sure python3 is used if python3 is available 122 | # (some systems default to python 2; os.system returns the exit status, 123 | # so 0 means the python3 binary exists) 124 | PYTHON = 'python' 125 | if os.system('python3 --version > /dev/null 2>&1') == 0: 126 | PYTHON = 'python3' 127 | 128 | 129 | 130 | release_args = '' 131 | for x in range(1, len(sys.argv)): 132 | release_args += ' ' + sys.argv[x] 133 | 134 | os.system('%s %s/build_release.py %s' % (PYTHON, TARGET_TOOLS_DIR, release_args)) 135 | -------------------------------------------------------------------------------- /pom.xml: -------------------------------------------------------------------------------- 1 | 2 | 5 | 4.0.0 6 | 7 | org.elasticsearch 8 | elasticsearch-river-rabbitmq 9 | 3.0.0-SNAPSHOT 10 | jar 11 | Elasticsearch RabbitMQ River plugin 12 | The RabbitMQ River plugin allows indexing bulk-format messages into elasticsearch. 
13 | https://github.com/elastic/elasticsearch-river-rabbitmq/ 14 | 2009 15 | 16 | 17 | The Apache Software License, Version 2.0 18 | http://www.apache.org/licenses/LICENSE-2.0.txt 19 | repo 20 | 21 | 22 | 23 | scm:git:git@github.com:elastic/elasticsearch-river-rabbitmq.git 24 | scm:git:git@github.com:elastic/elasticsearch-river-rabbitmq.git 25 | http://github.com/elastic/elasticsearch-river-rabbitmq 26 | 27 | 28 | 29 | org.elasticsearch 30 | elasticsearch-plugin 31 | 2.0.0-SNAPSHOT 32 | 33 | 34 | 35 | 3.3.4 36 | 1 37 | 38 | warn 39 | 40 | 41 | 42 | 43 | com.rabbitmq 44 | amqp-client 45 | ${amqp-client.version} 46 | 47 | 48 | 49 | org.codehaus.groovy 50 | groovy-all 51 | indy 52 | test 53 | 54 | 55 | 56 | 57 | 58 | 59 | org.apache.maven.plugins 60 | maven-assembly-plugin 61 | 62 | 63 | 64 | 65 | 66 | 67 | oss-snapshots 68 | Sonatype OSS Snapshots 69 | https://oss.sonatype.org/content/repositories/snapshots/ 70 | 71 | 72 | 73 | -------------------------------------------------------------------------------- /src/main/assemblies/plugin.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | plugin 4 | 5 | zip 6 | 7 | false 8 | 9 | 10 | / 11 | true 12 | true 13 | 14 | org.elasticsearch:elasticsearch 15 | 16 | 17 | 18 | / 19 | true 20 | true 21 | 22 | com.rabbitmq:amqp-client 23 | 24 | 25 | 26 | -------------------------------------------------------------------------------- /src/main/java/org/elasticsearch/plugin/river/rabbitmq/RabbitmqRiverPlugin.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License. 
8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License. 18 | */ 19 | 20 | package org.elasticsearch.plugin.river.rabbitmq; 21 | 22 | import org.elasticsearch.common.inject.Inject; 23 | import org.elasticsearch.plugins.AbstractPlugin; 24 | import org.elasticsearch.river.RiversModule; 25 | import org.elasticsearch.river.rabbitmq.RabbitmqRiverModule; 26 | 27 | /** 28 | * 29 | */ 30 | public class RabbitmqRiverPlugin extends AbstractPlugin { 31 | 32 | @Inject 33 | public RabbitmqRiverPlugin() { 34 | } 35 | 36 | @Override 37 | public String name() { 38 | return "river-rabbitmq"; 39 | } 40 | 41 | @Override 42 | public String description() { 43 | return "River RabbitMQ Plugin"; 44 | } 45 | 46 | public void onModule(RiversModule module) { 47 | module.registerRiver("rabbitmq", RabbitmqRiverModule.class); 48 | } 49 | } 50 | -------------------------------------------------------------------------------- /src/main/java/org/elasticsearch/river/rabbitmq/RabbitmqRiver.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License. 
8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License. 18 | */ 19 | 20 | package org.elasticsearch.river.rabbitmq; 21 | 22 | import com.rabbitmq.client.*; 23 | 24 | import org.elasticsearch.action.ActionListener; 25 | import org.elasticsearch.action.bulk.BulkRequestBuilder; 26 | import org.elasticsearch.action.bulk.BulkResponse; 27 | import org.elasticsearch.client.Client; 28 | import org.elasticsearch.common.collect.Lists; 29 | import org.elasticsearch.common.collect.Maps; 30 | import org.elasticsearch.common.inject.Inject; 31 | import org.elasticsearch.common.io.FastStringReader; 32 | import org.elasticsearch.common.jackson.core.JsonFactory; 33 | import org.elasticsearch.common.unit.TimeValue; 34 | import org.elasticsearch.common.util.concurrent.EsExecutors; 35 | import org.elasticsearch.common.xcontent.XContentFactory; 36 | import org.elasticsearch.common.xcontent.XContentType; 37 | import org.elasticsearch.common.xcontent.json.JsonXContentParser; 38 | import org.elasticsearch.common.xcontent.support.XContentMapValues; 39 | import org.elasticsearch.river.AbstractRiverComponent; 40 | import org.elasticsearch.river.River; 41 | import org.elasticsearch.river.RiverName; 42 | import org.elasticsearch.river.RiverSettings; 43 | import org.elasticsearch.script.ExecutableScript; 44 | import org.elasticsearch.script.Script; 45 | import org.elasticsearch.script.ScriptContext; 46 | import org.elasticsearch.script.ScriptService; 47 | 48 | import java.io.BufferedReader; 49 | import java.io.IOException; 50 | import java.io.StringReader; 51 | import java.nio.charset.StandardCharsets; 
52 | import java.util.ArrayList; 53 | import java.util.HashMap; 54 | import java.util.List; 55 | import java.util.Locale; 56 | import java.util.Map; 57 | 58 | /** 59 | * 60 | */ 61 | public class RabbitmqRiver extends AbstractRiverComponent implements River { 62 | 63 | private static final Map AUTHORIZED_SCRIPT_VARS; 64 | 65 | private final Client client; 66 | 67 | private final Address[] rabbitAddresses; 68 | private final String rabbitUser; 69 | private final String rabbitPassword; 70 | private final String rabbitVhost; 71 | 72 | private final String rabbitQueue; 73 | private final boolean rabbitQueueDeclare; 74 | private final boolean rabbitQueueBind; 75 | private final String rabbitExchange; 76 | private final String rabbitExchangeType; 77 | private final String rabbitRoutingKey; 78 | private final boolean rabbitExchangeDurable; 79 | private final boolean rabbitExchangeDeclare; 80 | private final boolean rabbitQueueDurable; 81 | private final boolean rabbitQueueAutoDelete; 82 | private final int rabbitQosPrefetchSize; 83 | private final int rabbitQosPrefetchCount; 84 | private Map rabbitQueueArgs = null; //extra arguments passed to queue for creation (ha settings for example) 85 | private final TimeValue rabbitHeartbeat; 86 | private final boolean rabbitNackErrors; 87 | 88 | private final int bulkSize; 89 | private final TimeValue bulkTimeout; 90 | private final boolean ordered; 91 | 92 | private final ScriptService scriptService; 93 | private final ExecutableScript bulkScript; 94 | private final ExecutableScript script; 95 | 96 | private volatile boolean closed = false; 97 | 98 | private volatile Thread thread; 99 | 100 | private volatile ConnectionFactory connectionFactory; 101 | 102 | static { 103 | AUTHORIZED_SCRIPT_VARS = new HashMap(); 104 | AUTHORIZED_SCRIPT_VARS.put("_index", "_index"); 105 | AUTHORIZED_SCRIPT_VARS.put("_type", "_type"); 106 | AUTHORIZED_SCRIPT_VARS.put("_id", "_id"); 107 | AUTHORIZED_SCRIPT_VARS.put("_version", "_version"); 108 | 
AUTHORIZED_SCRIPT_VARS.put("version", "_version"); 109 | AUTHORIZED_SCRIPT_VARS.put("_routing", "_routing"); 110 | AUTHORIZED_SCRIPT_VARS.put("routing", "_routing"); 111 | AUTHORIZED_SCRIPT_VARS.put("_parent", "_parent"); 112 | AUTHORIZED_SCRIPT_VARS.put("parent", "_parent"); 113 | AUTHORIZED_SCRIPT_VARS.put("_timestamp", "_timestamp"); 114 | AUTHORIZED_SCRIPT_VARS.put("timestamp", "_timestamp"); 115 | AUTHORIZED_SCRIPT_VARS.put("_ttl", "_ttl"); 116 | AUTHORIZED_SCRIPT_VARS.put("ttl", "_ttl"); 117 | } 118 | 119 | @SuppressWarnings({"unchecked"}) 120 | @Inject 121 | public RabbitmqRiver(RiverName riverName, RiverSettings settings, Client client, ScriptService scriptService) { 122 | super(riverName, settings); 123 | this.client = client; 124 | this.scriptService = scriptService; 125 | 126 | if (settings.settings().containsKey("rabbitmq")) { 127 | Map rabbitSettings = (Map) settings.settings().get("rabbitmq"); 128 | 129 | if (rabbitSettings.containsKey("addresses")) { 130 | List
<Address> addresses = new ArrayList<>
(); 131 | for(Map address : (List>) rabbitSettings.get("addresses")) { 132 | addresses.add( new Address(XContentMapValues.nodeStringValue(address.get("host"), "localhost"), 133 | XContentMapValues.nodeIntegerValue(address.get("port"), AMQP.PROTOCOL.PORT))); 134 | } 135 | rabbitAddresses = addresses.toArray(new Address[addresses.size()]); 136 | } else { 137 | String rabbitHost = XContentMapValues.nodeStringValue(rabbitSettings.get("host"), "localhost"); 138 | int rabbitPort = XContentMapValues.nodeIntegerValue(rabbitSettings.get("port"), AMQP.PROTOCOL.PORT); 139 | rabbitAddresses = new Address[]{ new Address(rabbitHost, rabbitPort) }; 140 | } 141 | 142 | rabbitUser = XContentMapValues.nodeStringValue(rabbitSettings.get("user"), "guest"); 143 | rabbitPassword = XContentMapValues.nodeStringValue(rabbitSettings.get("pass"), "guest"); 144 | rabbitVhost = XContentMapValues.nodeStringValue(rabbitSettings.get("vhost"), "/"); 145 | 146 | rabbitQueue = XContentMapValues.nodeStringValue(rabbitSettings.get("queue"), "elasticsearch"); 147 | rabbitExchange = XContentMapValues.nodeStringValue(rabbitSettings.get("exchange"), "elasticsearch"); 148 | rabbitRoutingKey = XContentMapValues.nodeStringValue(rabbitSettings.get("routing_key"), "elasticsearch"); 149 | 150 | rabbitExchangeDeclare = XContentMapValues.nodeBooleanValue(rabbitSettings.get("exchange_declare"), true); 151 | if (rabbitExchangeDeclare) { 152 | 153 | rabbitExchangeType = XContentMapValues.nodeStringValue(rabbitSettings.get("exchange_type"), "direct"); 154 | rabbitExchangeDurable = XContentMapValues.nodeBooleanValue(rabbitSettings.get("exchange_durable"), true); 155 | } else { 156 | rabbitExchangeType = "direct"; 157 | rabbitExchangeDurable = true; 158 | } 159 | 160 | rabbitQueueDeclare = XContentMapValues.nodeBooleanValue(rabbitSettings.get("queue_declare"), true); 161 | if (rabbitQueueDeclare) { 162 | rabbitQueueDurable = XContentMapValues.nodeBooleanValue(rabbitSettings.get("queue_durable"), true); 163 | 
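// For illustration only — not part of the original source. A river definition
// (_meta document) carrying the queue settings parsed in this block might look
// like the following; the keys match the rabbitSettings lookups above, and the
// values are hypothetical:
//
//   {
//     "type" : "rabbitmq",
//     "rabbitmq" : {
//       "queue" : "elasticsearch",
//       "queue_declare" : true,
//       "queue_durable" : true,
//       "queue_auto_delete" : false,
//       "args" : { "x-ha-policy" : "all" }
//     }
//   }
//
// Any key left unset falls back to the default passed to XContentMapValues.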
rabbitQueueAutoDelete = XContentMapValues.nodeBooleanValue(rabbitSettings.get("queue_auto_delete"), false); 164 | if (rabbitSettings.containsKey("args")) { 165 | rabbitQueueArgs = (Map) rabbitSettings.get("args"); 166 | } 167 | } else { 168 | rabbitQueueDurable = true; 169 | rabbitQueueAutoDelete = false; 170 | } 171 | rabbitQueueBind = XContentMapValues.nodeBooleanValue(rabbitSettings.get("queue_bind"), true); 172 | 173 | rabbitHeartbeat = TimeValue.parseTimeValue(XContentMapValues.nodeStringValue( 174 | rabbitSettings.get("heartbeat"), "30m"), TimeValue.timeValueMinutes(30)); 175 | rabbitNackErrors = XContentMapValues.nodeBooleanValue(rabbitSettings.get("nack_errors"), true); 176 | } else { 177 | rabbitAddresses = new Address[]{ new Address("localhost", AMQP.PROTOCOL.PORT) }; 178 | rabbitUser = "guest"; 179 | rabbitPassword = "guest"; 180 | rabbitVhost = "/"; 181 | 182 | rabbitQueue = "elasticsearch"; 183 | rabbitQueueAutoDelete = false; 184 | rabbitQueueDurable = true; 185 | rabbitExchange = "elasticsearch"; 186 | rabbitExchangeType = "direct"; 187 | rabbitExchangeDurable = true; 188 | rabbitRoutingKey = "elasticsearch"; 189 | 190 | rabbitExchangeDeclare = true; 191 | rabbitQueueDeclare = true; 192 | rabbitQueueBind = true; 193 | 194 | rabbitHeartbeat = TimeValue.timeValueMinutes(30); 195 | rabbitNackErrors = true; 196 | } 197 | 198 | if (settings.settings().containsKey("index")) { 199 | Map indexSettings = (Map) settings.settings().get("index"); 200 | bulkSize = XContentMapValues.nodeIntegerValue(indexSettings.get("bulk_size"), 100); 201 | if (indexSettings.containsKey("bulk_timeout")) { 202 | bulkTimeout = TimeValue.parseTimeValue(XContentMapValues.nodeStringValue(indexSettings.get("bulk_timeout"), "10ms"), TimeValue.timeValueMillis(10)); 203 | } else { 204 | bulkTimeout = TimeValue.timeValueMillis(10); 205 | } 206 | ordered = XContentMapValues.nodeBooleanValue(indexSettings.get("ordered"), false); 207 | } else { 208 | bulkSize = 100; 209 | bulkTimeout = 
TimeValue.timeValueMillis(10); 210 | ordered = false; 211 | } 212 | 213 | if (settings.settings().containsKey("rabbitmq")) { 214 | Map rabbitSettings = (Map) settings.settings().get("rabbitmq"); 215 | rabbitQosPrefetchSize = XContentMapValues.nodeIntegerValue(rabbitSettings.get("qos_prefetch_size"), 0); 216 | rabbitQosPrefetchCount = XContentMapValues.nodeIntegerValue(rabbitSettings.get("qos_prefetch_count"), bulkSize * 2); 217 | } else { 218 | rabbitQosPrefetchSize = 0; 219 | rabbitQosPrefetchCount = bulkSize * 2; 220 | } 221 | 222 | bulkScript = buildScript("bulk_script_filter"); 223 | script = buildScript("script_filter"); 224 | } 225 | 226 | @Override 227 | public void start() { 228 | connectionFactory = new ConnectionFactory(); 229 | connectionFactory.setUsername(rabbitUser); 230 | connectionFactory.setPassword(rabbitPassword); 231 | connectionFactory.setVirtualHost(rabbitVhost); 232 | connectionFactory.setRequestedHeartbeat(new Long(rabbitHeartbeat.getSeconds()).intValue()); 233 | 234 | logger.info("creating rabbitmq river, addresses [{}], user [{}], vhost [{}]", rabbitAddresses, connectionFactory.getUsername(), connectionFactory.getVirtualHost()); 235 | 236 | thread = EsExecutors.daemonThreadFactory(settings.globalSettings(), "rabbitmq_river").newThread(new Consumer()); 237 | thread.start(); 238 | } 239 | 240 | @Override 241 | public void close() { 242 | if (closed) { 243 | return; 244 | } 245 | logger.info("closing rabbitmq river"); 246 | closed = true; 247 | thread.interrupt(); 248 | } 249 | 250 | private class Consumer implements Runnable { 251 | 252 | private Connection connection; 253 | 254 | private Channel channel; 255 | 256 | @Override 257 | public void run() { 258 | while (true) { 259 | if (closed) { 260 | break; 261 | } 262 | try { 263 | connection = connectionFactory.newConnection(rabbitAddresses); 264 | channel = connection.createChannel(); 265 | } catch (Exception e) { 266 | if (!closed) { 267 | logger.warn("failed to created a connection / 
channel", e); 268 | } else { 269 | continue; 270 | } 271 | cleanup(0, "failed to connect"); 272 | try { 273 | Thread.sleep(5000); 274 | } catch (InterruptedException e1) { 275 | // ignore, if we are closing, we will exit later 276 | } 277 | } 278 | 279 | QueueingConsumer consumer = new QueueingConsumer(channel); 280 | // define the queue 281 | try { 282 | if (rabbitQueueDeclare) { 283 | // only declare the queue if we should 284 | channel.queueDeclare(rabbitQueue/*queue*/, rabbitQueueDurable/*durable*/, false/*exclusive*/, rabbitQueueAutoDelete/*autoDelete*/, rabbitQueueArgs/*extra args*/); 285 | } 286 | if (rabbitExchangeDeclare) { 287 | // only declare the exchange if we should 288 | channel.exchangeDeclare(rabbitExchange/*exchange*/, rabbitExchangeType/*type*/, rabbitExchangeDurable); 289 | } 290 | if (rabbitQueueBind) { 291 | // only bind queue if we should 292 | channel.queueBind(rabbitQueue/*queue*/, rabbitExchange/*exchange*/, rabbitRoutingKey/*routingKey*/); 293 | } 294 | channel.basicQos(rabbitQosPrefetchSize/*qos_prefetch_size*/, rabbitQosPrefetchCount/*qos_prefetch_count*/, false); 295 | channel.basicConsume(rabbitQueue/*queue*/, false/*noAck*/, consumer); 296 | } catch (Exception e) { 297 | if (!closed) { 298 | logger.warn("failed to create queue. Check your queue settings. 
Throttling river for 10s."); 299 | // Print expected settings 300 | if (rabbitQueueDeclare) { 301 | logger.debug("expected settings: queue [{}], durable [{}], exclusive [{}], auto_delete [{}], args [{}]", 302 | rabbitQueue, rabbitQueueDurable, false, rabbitQueueAutoDelete, rabbitQueueArgs); 303 | } 304 | if (rabbitExchangeDeclare) { 305 | logger.debug("expected settings: exchange [{}], type [{}], durable [{}]", 306 | rabbitExchange, rabbitExchangeType, rabbitExchangeDurable); 307 | } 308 | if (rabbitQueueBind) { 309 | logger.debug("expected settings for queue binding: queue [{}], exchange [{}], routing_key [{}]", 310 | rabbitQueue, rabbitExchange, rabbitRoutingKey); 311 | } 312 | 313 | try { 314 | Thread.sleep(10000); 315 | } catch (InterruptedException e1) { 316 | // ignore, if we are closing, we will exit later 317 | } 318 | } 319 | cleanup(0, "failed to create queue"); 320 | continue; 321 | } 322 | 323 | // now use the queue to listen for messages 324 | while (true) { 325 | if (closed) { 326 | break; 327 | } 328 | QueueingConsumer.Delivery task; 329 | try { 330 | task = consumer.nextDelivery(); 331 | } catch (Exception e) { 332 | if (!closed) { 333 | logger.error("failed to get next message, reconnecting...", e); 334 | } 335 | cleanup(0, "failed to get message"); 336 | break; 337 | } 338 | 339 | if (task != null && task.getBody() != null) { 340 | final List deliveryTags = Lists.newArrayList(); 341 | 342 | BulkRequestBuilder bulkRequestBuilder = client.prepareBulk(); 343 | 344 | try { 345 | processBody(task.getBody(), bulkRequestBuilder); 346 | } catch (Exception e) { 347 | logger.warn("failed to parse request for delivery tag [{}], ack'ing...", e, task.getEnvelope().getDeliveryTag()); 348 | try { 349 | channel.basicAck(task.getEnvelope().getDeliveryTag(), false); 350 | } catch (IOException e1) { 351 | logger.warn("failed to ack [{}]", e1, task.getEnvelope().getDeliveryTag()); 352 | } 353 | continue; 354 | } 355 | 356 | 
deliveryTags.add(task.getEnvelope().getDeliveryTag()); 357 | 358 | if (bulkRequestBuilder.numberOfActions() < bulkSize) { 359 | // try and spin some more of those without timeout, so we have a bigger bulk (bounded by the bulk size) 360 | try { 361 | while ((task = consumer.nextDelivery(bulkTimeout.millis())) != null) { 362 | try { 363 | processBody(task.getBody(), bulkRequestBuilder); 364 | deliveryTags.add(task.getEnvelope().getDeliveryTag()); 365 | } catch (Throwable e) { 366 | logger.warn("failed to parse request for delivery tag [{}], ack'ing...", e, task.getEnvelope().getDeliveryTag()); 367 | try { 368 | channel.basicAck(task.getEnvelope().getDeliveryTag(), false); 369 | } catch (Exception e1) { 370 | logger.warn("failed to ack on failure [{}]", e1, task.getEnvelope().getDeliveryTag()); 371 | } 372 | } 373 | if (bulkRequestBuilder.numberOfActions() >= bulkSize) { 374 | break; 375 | } 376 | } 377 | } catch (InterruptedException e) { 378 | if (closed) { 379 | break; 380 | } 381 | } catch (ShutdownSignalException sse) { 382 | logger.warn("Received a shutdown signal! initiatedByApplication: [{}], hard error: [{}]", sse, 383 | sse.isInitiatedByApplication(), sse.isHardError()); 384 | if (!closed && sse.isInitiatedByApplication()) { 385 | logger.error("failed to get next message, reconnecting...", sse); 386 | } 387 | cleanup(0, "failed to get message"); 388 | break; 389 | } 390 | } 391 | 392 | if (logger.isTraceEnabled()) { 393 | logger.trace("executing bulk with [{}] actions", bulkRequestBuilder.numberOfActions()); 394 | } 395 | 396 | if (ordered) { 397 | try { 398 | if (bulkRequestBuilder.numberOfActions() > 0) { 399 | BulkResponse response = bulkRequestBuilder.execute().actionGet(); 400 | if (response.hasFailures()) { 401 | // TODO write to exception queue? 
402 | logger.warn("failed to execute" + response.buildFailureMessage()); 403 | } 404 | } 405 | for (Long deliveryTag : deliveryTags) { 406 | try { 407 | channel.basicAck(deliveryTag, false); 408 | } catch (Exception e1) { 409 | logger.warn("failed to ack [{}]", e1, deliveryTag); 410 | } 411 | } 412 | } catch (Exception e) { 413 | logger.warn("failed to execute bulk", e); 414 | if (rabbitNackErrors) { 415 | logger.warn("failed to execute bulk for delivery tags [{}], nack'ing", e, deliveryTags); 416 | for (Long deliveryTag : deliveryTags) { 417 | try { 418 | channel.basicNack(deliveryTag, false, false); 419 | } catch (Exception e1) { 420 | logger.warn("failed to nack [{}]", e1, deliveryTag); 421 | } 422 | } 423 | } else { 424 | logger.warn("failed to execute bulk for delivery tags [{}], ignoring", e, deliveryTags); 425 | } 426 | } 427 | } else { 428 | if (bulkRequestBuilder.numberOfActions()>0) { 429 | bulkRequestBuilder.execute(new ActionListener() { 430 | @Override 431 | public void onResponse(BulkResponse response) { 432 | if (response.hasFailures()) { 433 | // TODO write to exception queue? 
434 | logger.warn("failed to execute" + response.buildFailureMessage()); 435 | } 436 | for (Long deliveryTag : deliveryTags) { 437 | try { 438 | channel.basicAck(deliveryTag, false); 439 | } catch (Exception e1) { 440 | logger.warn("failed to ack [{}]", e1, deliveryTag); 441 | } 442 | } 443 | } 444 | 445 | @Override 446 | public void onFailure(Throwable e) { 447 | if (rabbitNackErrors) { 448 | logger.warn("failed to execute bulk for delivery tags [{}], nack'ing", e, deliveryTags); 449 | for (Long deliveryTag : deliveryTags) { 450 | try { 451 | channel.basicNack(deliveryTag, false, false); 452 | } catch (Exception e1) { 453 | logger.warn("failed to nack [{}]", e1, deliveryTag); 454 | } 455 | } 456 | } else { 457 | logger.warn("failed to execute bulk for delivery tags [{}], ignoring", e, deliveryTags); 458 | } 459 | } 460 | }); 461 | } 462 | } 463 | } 464 | } 465 | } 466 | cleanup(0, "closing river"); 467 | } 468 | 469 | private void cleanup(int code, String message) { 470 | try { 471 | if (channel != null && channel.isOpen()) { 472 | channel.close(code, message); 473 | } 474 | } catch (Exception e) { 475 | logger.debug("failed to close channel on [{}]", e, message); 476 | } 477 | try { 478 | if (connection != null && connection.isOpen()) { 479 | connection.close(code, message); 480 | } 481 | } catch (Exception e) { 482 | logger.debug("failed to close connection on [{}]", e, message); 483 | } 484 | } 485 | 486 | private void processBody(byte[] body, BulkRequestBuilder bulkRequestBuilder) throws Exception { 487 | if (body == null) return; 488 | 489 | // first, the "full bulk" script 490 | if (bulkScript != null) { 491 | String bodyStr = new String(body, StandardCharsets.UTF_8); 492 | bulkScript.setNextVar("body", bodyStr); 493 | String newBodyStr = (String) bulkScript.run(); 494 | if (newBodyStr == null) return ; 495 | body = newBodyStr.getBytes(StandardCharsets.UTF_8); 496 | } 497 | 498 | // second, the "doc per doc" script 499 | if (script != null) { 500 | 
processBodyPerLine(body, bulkRequestBuilder); 501 | } else { 502 | bulkRequestBuilder.add(body, 0, body.length); 503 | } 504 | } 505 | 506 | private void processBodyPerLine(byte[] body, BulkRequestBuilder bulkRequestBuilder) throws Exception { 507 | BufferedReader reader = new BufferedReader(new FastStringReader(new String(body, StandardCharsets.UTF_8))); 508 | 509 | JsonFactory factory = new JsonFactory(); 510 | for (String line = reader.readLine(); line != null; line = reader.readLine()) { 511 | JsonXContentParser parser = new JsonXContentParser(factory.createParser(line)); 512 | Map asMap = parser.map(); 513 | 514 | if (asMap.get("delete") != null) { 515 | // We don't touch deleteRequests 516 | String newContent = line + "\n"; 517 | bulkRequestBuilder.add(newContent.getBytes(StandardCharsets.UTF_8), 0, newContent.getBytes(StandardCharsets.UTF_8).length); 518 | } else { 519 | // But we send other requests to the script Engine in ctx field 520 | Map ctx; 521 | String payload = null; 522 | try { 523 | payload = reader.readLine(); 524 | ctx = XContentFactory.xContent(XContentType.JSON).createParser(payload).mapAndClose(); 525 | } catch (IOException e) { 526 | logger.warn("failed to parse {}", e, payload); 527 | continue; 528 | } 529 | 530 | // Sets some vars 531 | script.setNextVar("ctx", ctx); 532 | 533 | if (!asMap.isEmpty()) { 534 | for (Map.Entry bulkItem : asMap.entrySet()) { 535 | String action = bulkItem.getKey().toLowerCase(Locale.ROOT); 536 | if ("index".equals(action) || "update".equals(action) || "create".equals(action)) { 537 | script.setNextVar("_action", action); 538 | 539 | Object bulkData = bulkItem.getValue(); 540 | if ((bulkData != null) && (bulkData instanceof Map)) { 541 | Map bulkItemMap = ((Map) bulkData); 542 | for(Object dataKey : bulkItemMap.keySet()) { 543 | if (AUTHORIZED_SCRIPT_VARS.containsKey(dataKey)) { 544 | script.setNextVar(AUTHORIZED_SCRIPT_VARS.get(dataKey), bulkItemMap.get(dataKey)); 545 | } 546 | } 547 | } 548 | } 549 | } 550 | 
} 551 | 552 | script.run(); 553 | ctx = (Map) script.unwrap(ctx); 554 | if (ctx != null) { 555 | // Adding header 556 | StringBuffer request = new StringBuffer(line); 557 | request.append("\n"); 558 | // Adding new payload 559 | request.append(XContentFactory.jsonBuilder().map(ctx).string()); 560 | request.append("\n"); 561 | 562 | if (logger.isTraceEnabled()) { 563 | logger.trace("new bulk request is now: {}", request.toString()); 564 | } 565 | byte[] binRequest = request.toString().getBytes(StandardCharsets.UTF_8); 566 | bulkRequestBuilder.add(binRequest, 0, binRequest.length); 567 | } 568 | } 569 | } 570 | } 571 | } 572 | 573 | /** 574 | * Build an executable script if provided as settings 575 | * @param settingName 576 | * @return 577 | */ 578 | private ExecutableScript buildScript(String settingName) { 579 | if (settings.settings().containsKey(settingName)) { 580 | Map scriptSettings = (Map) settings.settings().get(settingName); 581 | if (scriptSettings.containsKey("script")) { 582 | String scriptLang = "groovy"; 583 | if (scriptSettings.containsKey("script_lang")) { 584 | scriptLang = scriptSettings.get("script_lang").toString(); 585 | } 586 | Map scriptParams = null; 587 | if (scriptSettings.containsKey("script_params")) { 588 | scriptParams = (Map) scriptSettings.get("script_params"); 589 | } else { 590 | scriptParams = Maps.newHashMap(); 591 | } 592 | return scriptService.executable( 593 | new Script(scriptLang, scriptSettings.get("script").toString(), ScriptService.ScriptType.INLINE, scriptParams), 594 | ScriptContext.Standard.UPDATE); 595 | } 596 | } 597 | 598 | return null; 599 | } 600 | } 601 | -------------------------------------------------------------------------------- /src/main/java/org/elasticsearch/river/rabbitmq/RabbitmqRiverModule.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. 
See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License. 8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License. 18 | */ 19 | 20 | package org.elasticsearch.river.rabbitmq; 21 | 22 | import org.elasticsearch.common.inject.AbstractModule; 23 | import org.elasticsearch.river.River; 24 | 25 | /** 26 | * 27 | */ 28 | public class RabbitmqRiverModule extends AbstractModule { 29 | 30 | @Override 31 | protected void configure() { 32 | bind(River.class).to(RabbitmqRiver.class).asEagerSingleton(); 33 | } 34 | } 35 | -------------------------------------------------------------------------------- /src/main/resources/es-plugin.properties: -------------------------------------------------------------------------------- 1 | plugin=org.elasticsearch.plugin.river.rabbitmq.RabbitmqRiverPlugin 2 | version=${project.version} 3 | -------------------------------------------------------------------------------- /src/test/java/org/elasticsearch/river/rabbitmq/AbstractRabbitMQTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. 
Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License. 8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License. 18 | */ 19 | 20 | package org.elasticsearch.river.rabbitmq; 21 | 22 | import org.elasticsearch.test.ElasticsearchIntegrationTest; 23 | import org.elasticsearch.test.ElasticsearchIntegrationTest.ThirdParty; 24 | 25 | /** 26 | * Base class for tests that require RabbitMQ to run. RabbitMQ tests are disabled by default. 27 | *

28 | To enable these tests, add -Dtests.thirdparty=true 29 | *

30 | */ 31 | @ThirdParty 32 | public abstract class AbstractRabbitMQTest extends ElasticsearchIntegrationTest { 33 | 34 | } 35 | -------------------------------------------------------------------------------- /src/test/java/org/elasticsearch/river/rabbitmq/RabbitMQIntegrationTest.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License. 8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License. 
18 | */ 19 | 20 | package org.elasticsearch.river.rabbitmq; 21 | 22 | import com.carrotsearch.randomizedtesting.annotations.Repeat; 23 | import com.rabbitmq.client.AMQP; 24 | import com.rabbitmq.client.Channel; 25 | import com.rabbitmq.client.Connection; 26 | import com.rabbitmq.client.ConnectionFactory; 27 | 28 | import org.elasticsearch.Version; 29 | import org.elasticsearch.action.count.CountResponse; 30 | import org.elasticsearch.action.get.GetResponse; 31 | import org.elasticsearch.action.index.IndexResponse; 32 | import org.elasticsearch.action.search.SearchResponse; 33 | import org.elasticsearch.common.Strings; 34 | import org.elasticsearch.common.base.Predicate; 35 | import org.elasticsearch.common.settings.Settings; 36 | import org.elasticsearch.common.xcontent.XContentBuilder; 37 | import org.elasticsearch.indices.IndexMissingException; 38 | import org.elasticsearch.river.RiverIndexName; 39 | import org.elasticsearch.river.rabbitmq.script.MockScriptFactory; 40 | import org.elasticsearch.search.SearchHit; 41 | import org.elasticsearch.test.ElasticsearchIntegrationTest; 42 | import org.elasticsearch.test.store.MockFSDirectoryService; 43 | import org.junit.Test; 44 | 45 | import java.net.ConnectException; 46 | import java.nio.charset.StandardCharsets; 47 | import java.util.HashSet; 48 | import java.util.Set; 49 | import java.util.concurrent.TimeUnit; 50 | 51 | import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder; 52 | import static org.hamcrest.CoreMatchers.*; 53 | import static org.hamcrest.Matchers.equalTo; 54 | 55 | /** 56 | * Integration tests for the RabbitMQ river.
57 | * A RabbitMQ instance must be running on localhost:5672 (the default AMQP port). 58 | */ 59 | @ElasticsearchIntegrationTest.ClusterScope( 60 | scope = ElasticsearchIntegrationTest.Scope.SUITE, 61 | numDataNodes = 1, 62 | numClientNodes = 0, 63 | transportClientRatio = 0.0) 64 | public class RabbitMQIntegrationTest extends AbstractRabbitMQTest { 65 | 66 | private interface InjectorHook { 67 | public void inject(); 68 | } 69 | 70 | private static final String testDbPrefix = "elasticsearch_test_"; 71 | 72 | @Override 73 | protected Settings nodeSettings(int nodeOrdinal) { 74 | return Settings.settingsBuilder() 75 | .put(super.nodeSettings(nodeOrdinal)) 76 | .put("script.native.mock_script.type", MockScriptFactory.class) 77 | .put("threadpool.bulk.queue_size", 200) 78 | .build(); 79 | } 80 | 81 | @Override 82 | public Settings indexSettings() { 83 | return Settings.builder() 84 | .put(super.indexSettings()) 85 | .put(MockFSDirectoryService.RANDOM_PREVENT_DOUBLE_WRITE, false) 86 | .put(MockFSDirectoryService.RANDOM_NO_DELETE_OPEN_FILE, false) 87 | .build(); 88 | } 89 | 90 | @Override 91 | protected int numberOfReplicas() { 92 | return 0; 93 | } 94 | 95 | @Override 96 | protected int numberOfShards() { 97 | return 1; 98 | } 99 | 100 | private String getDbName() { 101 | String testName = testDbPrefix.concat(Strings.toUnderscoreCase(getTestName())); 102 | if (testName.contains(" ")) { 103 | testName = Strings.split(testName, " ")[0]; 104 | } 105 | testName = testName.concat("_").concat(Version.CURRENT.number().replace('.', '_')); 106 | return testName; 107 | } 108 | 109 | private void launchTest(XContentBuilder river, 110 | final int numMessages, 111 | final int numDocsPerMessage, 112 | InjectorHook injectorHook, 113 | boolean delete, 114 | boolean update 115 | ) 116 | throws Exception { 117 | 118 | final String dbName = getDbName(); 119 | logger.info(" --> create index [{}]", dbName); 120 | try { 121 | client().admin().indices().prepareDelete(dbName).get(); 122 | } catch
(IndexMissingException e) { 123 | // No worries. 124 | } 125 | try { 126 | createIndex(dbName); 127 | } catch (IndexMissingException e) { 128 | // No worries. 129 | } 130 | ensureGreen(dbName); 131 | 132 | logger.info(" -> Checking that RabbitMQ is running"); 133 | // We try to connect to RabbitMQ. 134 | // If it is not running, we fail fast with an explicit error message 135 | Channel channel = null; 136 | Connection connection = null; 137 | try { 138 | logger.info(" --> connecting to rabbitmq"); 139 | ConnectionFactory factory = new ConnectionFactory(); 140 | factory.setHost("localhost"); 141 | factory.setPort(AMQP.PROTOCOL.PORT); 142 | connection = factory.newConnection(); 143 | } catch (ConnectException ce) { 144 | throw new Exception("RabbitMQ service is not launched on localhost:" + AMQP.PROTOCOL.PORT + 145 | ". Cannot start integration test. " + 146 | "Launch `rabbitmq-server` first.", ce); 147 | } 148 | 149 | try { 150 | logger.info(" -> Creating [{}] channel", dbName); 151 | channel = connection.createChannel(); 152 | 153 | logger.info(" -> Creating queue [{}]", dbName); 154 | channel.queueDeclare(dbName, true, false, false, null); 155 | 156 | // We purge the queue in case something is left over from a previous run 157 | logger.info(" -> Purging [{}] queue", dbName); 158 | channel.queuePurge(dbName); 159 | 160 | logger.info(" -> Put [{}] messages with [{}] documents each = [{}] docs", numMessages, numDocsPerMessage, 161 | numMessages * numDocsPerMessage); 162 | final Set<String> removed = new HashSet<String>(); 163 | int nbUpdated = 0; 164 | for (int i = 0; i < numMessages; i++) { 165 | StringBuilder message = new StringBuilder(); 166 | 167 | for (int j = 0; j < numDocsPerMessage; j++) { 168 | if (logger.isTraceEnabled()) { 169 | logger.trace(" -> Indexing document [{}] - [{}][{}]", i + "_" + j, i, j); 170 | } 171 | message.append("{ \"index\" : { \"_index\" : \"" + dbName + "\", \"_type\" : \"typex\", \"_id\" : \""+ i + "_" + j +"\" } }\n"); 172 | message.append("{ \"field\" : \"" + i +
"_" + j + "\",\"numeric\" : " + i * j + " }\n"); 173 | 174 | // Sometimes we update a document 175 | if (update && rarely()) { 176 | String id = between(0, i) + "_" + between(0, j); 177 | // We can only update a document if it has not been removed 178 | if (!removed.contains(id)) { 179 | logger.debug(" -> Updating document [{}] - [{}][{}]", id, i, j); 180 | message.append("{ \"update\" : { \"_index\" : \"" + dbName + "\", \"_type\" : \"typex\", \"_id\" : \""+ id +"\" } }\n"); 181 | message.append("{ \"doc\": { \"foo\" : \"bar\", \"field2\" : \"" + i + "_" + j + "\" }}\n"); 182 | nbUpdated++; 183 | } 184 | } 185 | 186 | // Sometimes we delete a document 187 | if (delete && rarely()) { 188 | String id = between(0, i) + "_" + between(0, j); 189 | if (!removed.contains(id)) { 190 | logger.debug(" -> Removing document [{}] - [{}][{}]", id, i, j); 191 | message.append("{ \"delete\" : { \"_index\" : \"" + dbName + "\", \"_type\" : \"typex\", \"_id\" : \""+ id +"\" } }\n"); 192 | removed.add(id); 193 | } 194 | } 195 | } 196 | 197 | channel.basicPublish("", dbName, null, message.toString().getBytes(StandardCharsets.UTF_8)); 198 | } 199 | 200 | logger.info(" -> We removed [{}] docs and updated [{}] docs", removed.size(), nbUpdated); 201 | 202 | if (injectorHook != null) { 203 | logger.info(" -> Injecting extra data"); 204 | injectorHook.inject(); 205 | } 206 | 207 | logger.info(" --> create river"); 208 | IndexResponse indexResponse = index("_river", dbName, "_meta", river); 209 | assertTrue(indexResponse.isCreated()); 210 | 211 | logger.info(" --> checking that river [{}] was created", dbName); 212 | assertThat(awaitBusy(new Predicate<Object>() { 213 | public boolean apply(Object obj) { 214 | GetResponse response = client().prepareGet(RiverIndexName.Conf.DEFAULT_INDEX_NAME, dbName, "_status").get(); 215 | return response.isExists(); 216 | } 217 | }, 5, TimeUnit.SECONDS), equalTo(true)); 218 | 219 | 220 | // Check that all docs have been processed by the river 221 | logger.info(" --> waiting for
expected number of docs: [{}]", numDocsPerMessage * numMessages - removed.size()); 222 | assertThat(awaitBusy(new Predicate<Object>() { 223 | public boolean apply(Object obj) { 224 | try { 225 | refresh(); 226 | int expected = numDocsPerMessage * numMessages - removed.size(); 227 | CountResponse response = client().prepareCount(dbName).get(); 228 | logger.debug(" -> got {} docs, expected {}", response.getCount(), expected); 229 | return response.getCount() == expected; 230 | } catch (IndexMissingException e) { 231 | return false; 232 | } 233 | } 234 | }, 20, TimeUnit.SECONDS), equalTo(true)); 235 | } finally { 236 | if (channel != null && channel.isOpen()) { 237 | channel.close(); 238 | } 239 | if (connection != null && connection.isOpen()) { 240 | connection.close(); 241 | } 242 | 243 | // Delete the river 244 | GetResponse response = client().prepareGet(RiverIndexName.Conf.DEFAULT_INDEX_NAME, dbName, "_status").get(); 245 | if (response.isExists()) { 246 | client().prepareDelete(RiverIndexName.Conf.DEFAULT_INDEX_NAME, dbName, "_meta").get(); 247 | client().prepareDelete(RiverIndexName.Conf.DEFAULT_INDEX_NAME, dbName, "_status").get(); 248 | } 249 | 250 | assertThat(awaitBusy(new Predicate<Object>() { 251 | public boolean apply(Object obj) { 252 | GetResponse response = client().prepareGet(RiverIndexName.Conf.DEFAULT_INDEX_NAME, dbName, "_status").get(); 253 | return response.isExists(); 254 | } 255 | }, 5, TimeUnit.SECONDS), equalTo(false)); 256 | } 257 | } 258 | 259 | @Test @Repeat(iterations = 10) 260 | public void testSimpleRiver() throws Exception { 261 | launchTest(jsonBuilder() 262 | .startObject() 263 | .field("type", "rabbitmq") 264 | .startObject("rabbitmq") 265 | .field("queue", getDbName()) 266 | .endObject() 267 | .startObject("index") 268 | .field("ordered", true) 269 | .endObject() 270 | .endObject(), randomIntBetween(1, 10), randomIntBetween(1, 500), null, true, true); 271 | } 272 | 273 | @Test 274 | public void testHeartbeat() throws Exception { 275 |
launchTest(jsonBuilder() 276 | .startObject() 277 | .field("type", "rabbitmq") 278 | .startObject("rabbitmq") 279 | .field("queue", getDbName()) 280 | .field("heartbeat", "100ms") 281 | .endObject() 282 | .startObject("index") 283 | .field("ordered", true) 284 | .endObject() 285 | .endObject(), randomIntBetween(1, 10), randomIntBetween(1, 500), null, true, true); 286 | } 287 | 288 | @Test 289 | public void testConsumers() throws Exception { 290 | launchTest(jsonBuilder() 291 | .startObject() 292 | .field("type", "rabbitmq") 293 | .startObject("rabbitmq") 294 | .field("queue", getDbName()) 295 | .field("num_consumers", 5) 296 | .endObject() 297 | .startObject("index") 298 | .field("ordered", true) 299 | .endObject() 300 | .endObject(), randomIntBetween(5, 20), randomIntBetween(100, 1000), null, false, false); 301 | } 302 | 303 | @Test 304 | public void testInlineScript() throws Exception { 305 | launchTest(jsonBuilder() 306 | .startObject() 307 | .field("type", "rabbitmq") 308 | .startObject("rabbitmq") 309 | .field("queue", getDbName()) 310 | .endObject() 311 | .startObject("script_filter") 312 | .field("script", " if (ctx.numeric != null) {ctx.numeric += param1}") 313 | .startObject("script_params") 314 | .field("param1", 1) 315 | .endObject() 316 | .endObject() 317 | .startObject("index") 318 | .field("ordered", true) 319 | .endObject() 320 | .endObject(), 3, 10, null, true, true); 321 | 322 | // The script filter should have added data that the raw documents do not contain 323 | SearchResponse response = client().prepareSearch(getDbName()) 324 | .addField("numeric") 325 | .get(); 326 | 327 | logger.info(" --> Search response: {}", response.toString()); 328 | 329 | for (SearchHit hit : response.getHits().getHits()) { 330 | assertThat(hit.field("numeric"), notNullValue()); 331 | assertThat(hit.field("numeric").getValue(), instanceOf(Integer.class)); 332 | // Value is based on id 333 | String[] id = Strings.split(hit.getId(), "_"); 334 | int expected = Integer.parseInt(id[0]) *
Integer.parseInt(id[1]) + 1; 335 | assertThat((Integer) hit.field("numeric").getValue(), is(expected)); 336 | } 337 | 338 | } 339 | 340 | @Test 341 | public void testInlineScriptWithAdditionalInfos() throws Exception { 342 | launchTest(jsonBuilder() 343 | .startObject() 344 | .field("type", "rabbitmq") 345 | .startObject("rabbitmq") 346 | .field("queue", getDbName()) 347 | .endObject() 348 | .startObject("script_filter") 349 | .field("script", "ctx.bulkindextype = _index + '#' + _type; ctx.lengthid = (_id != null ? _id.length() : 0)") 350 | .endObject() 351 | .startObject("index") 352 | .field("ordered", true) 353 | .endObject() 354 | .endObject(), 3, 10, null, true, true); 355 | 356 | // The script filter should have added data that the raw documents do not contain 357 | SearchResponse response = client().prepareSearch(getDbName()) 358 | .addField("bulkindextype") 359 | .addField("lengthid") 360 | .get(); 361 | 362 | logger.info(" --> Search response: {}", response.toString()); 363 | 364 | for (SearchHit hit : response.getHits().getHits()) { 365 | assertThat(hit.field("bulkindextype"), notNullValue()); 366 | assertThat(hit.field("bulkindextype").getValue(), equalTo(hit.getIndex() + "#" + hit.getType())); 367 | 368 | assertThat(hit.field("lengthid"), notNullValue()); 369 | assertThat(hit.field("lengthid").getValue(), instanceOf(Integer.class)); 370 | assertThat(hit.field("lengthid").getValue(), equalTo(hit.getId().length())); 371 | } 372 | 373 | } 374 | 375 | @Test 376 | public void testNativeScript() throws Exception { 377 | launchTest(jsonBuilder() 378 | .startObject() 379 | .field("type", "rabbitmq") 380 | .startObject("rabbitmq") 381 | .field("queue", getDbName()) 382 | .endObject() 383 | .startObject("bulk_script_filter") 384 | .field("script", "mock_script") 385 | .field("script_lang", "native") 386 | .endObject() 387 | .startObject("index") 388 | .field("ordered", true) 389 | .endObject() 390 | .endObject(), 3, 10, null, true, true); 391 | 392 | // The bulk script filter should have added data that the raw documents do not contain 393 | SearchResponse response = client().prepareSearch(getDbName()) 394 | .addField("numeric") 395 | .get(); 396 | 397 | logger.info(" --> Search response: {}", response.toString()); 398 | 399 | for (SearchHit hit : response.getHits().getHits()) { 400 | assertThat(hit.field("numeric"), notNullValue()); 401 | assertThat(hit.field("numeric").getValue(), instanceOf(Integer.class)); 402 | // Value is based on id 403 | String[] id = Strings.split(hit.getId(), "_"); 404 | int expected = Integer.parseInt(id[0]) * Integer.parseInt(id[1]) + 1; 405 | assertThat((Integer) hit.field("numeric").getValue(), is(expected)); 406 | } 407 | } 408 | 409 | @Test 410 | public void testBothScript() throws Exception { 411 | launchTest(jsonBuilder() 412 | .startObject() 413 | .field("type", "rabbitmq") 414 | .startObject("rabbitmq") 415 | .field("queue", getDbName()) 416 | .endObject() 417 | .startObject("script_filter") 418 | .field("script", "if (ctx.numeric != null) {ctx.numeric += param1}") 419 | .startObject("script_params") 420 | .field("param1", 1) 421 | .endObject() 422 | .endObject() 423 | .startObject("bulk_script_filter") 424 | .field("script", "mock_script") 425 | .field("script_lang", "native") 426 | .endObject() 427 | .startObject("index") 428 | .field("ordered", true) 429 | .endObject() 430 | .endObject(), 3, 10, null, true, true); 431 | 432 | // Both script filters should have incremented the "numeric" field 433 | SearchResponse response = client().prepareSearch(getDbName()) 434 | .addField("numeric") 435 | .get(); 436 | 437 | logger.info(" --> Search response: {}", response.toString()); 438 | 439 | for (SearchHit hit : response.getHits().getHits()) { 440 | assertThat(hit.field("numeric"), notNullValue()); 441 | assertThat(hit.field("numeric").getValue(), instanceOf(Integer.class)); 442 | // Value is based on id 443 | String[] id = Strings.split(hit.getId(), "_"); 444 | int expected = Integer.parseInt(id[0]) * Integer.parseInt(id[1]) + 2; 445 |
assertThat((Integer) hit.field("numeric").getValue(), is(expected)); 446 | } 447 | } 448 | } 449 | -------------------------------------------------------------------------------- /src/test/java/org/elasticsearch/river/rabbitmq/script/MockScript.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License. 8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License.
18 | */ 19 | 20 | package org.elasticsearch.river.rabbitmq.script; 21 | 22 | import org.elasticsearch.common.jackson.core.JsonFactory; 23 | import org.elasticsearch.common.logging.ESLogger; 24 | import org.elasticsearch.common.logging.ESLoggerFactory; 25 | import org.elasticsearch.common.xcontent.XContentFactory; 26 | import org.elasticsearch.common.xcontent.json.JsonXContentParser; 27 | import org.elasticsearch.script.AbstractExecutableScript; 28 | 29 | import java.io.*; 30 | import java.util.Map; 31 | 32 | public class MockScript extends AbstractExecutableScript { 33 | 34 | private final ESLogger logger = ESLoggerFactory.getLogger(MockScript.class.getName()); 35 | private final Map<String, Object> params; 36 | 37 | public MockScript(Map<String, Object> params) { 38 | super(); 39 | this.params = params; 40 | } 41 | 42 | @Override 43 | public void setNextVar(String name, Object value) { 44 | params.put(name, value); 45 | } 46 | 47 | @Override 48 | public Object run() { 49 | String body = (String) params.get("body"); 50 | BufferedReader reader = new BufferedReader(new StringReader(body)); 51 | 52 | CharArrayWriter charArrayWriter = new CharArrayWriter(); 53 | BufferedWriter writer = new BufferedWriter(charArrayWriter); 54 | 55 | try { 56 | process(reader, writer); 57 | } catch (IOException e) { 58 | // Wrap as unchecked so the failure surfaces in the test 59 | throw new RuntimeException(e); 60 | } 61 | 62 | String outputBody = charArrayWriter.toString(); 63 | logger.debug("input message: {}", body); 64 | logger.debug("output message: {}", outputBody); 65 | 66 | return outputBody; 67 | } 68 | 69 | private void process(BufferedReader reader, BufferedWriter writer) throws IOException { 70 | JsonFactory factory = new JsonFactory(); 71 | for (String header = reader.readLine(); header != null; header = reader.readLine()) { 72 | String content = null; 73 | JsonXContentParser parser = new JsonXContentParser(factory.createParser(header)); 74 | Map<String, Object> headerAsMap = parser.map(); 75 | 76 | if (headerAsMap.containsKey("create") || 77 |
headerAsMap.containsKey("index") || 78 | headerAsMap.containsKey("update")) { 79 | // create/index/update operations: the header line is followed by a content line 80 | content = reader.readLine(); 81 | 82 | JsonXContentParser contentParser = new JsonXContentParser(factory.createParser(content)); 83 | Map<String, Object> contentAsMap = contentParser.map(); 84 | 85 | Object numeric = contentAsMap.get("numeric"); 86 | if (numeric != null) { 87 | if (numeric instanceof Integer) { 88 | Integer integer = (Integer) numeric; 89 | contentAsMap.put("numeric", ++integer); 90 | 91 | content = XContentFactory.jsonBuilder().map(contentAsMap).string(); 92 | } else { 93 | logger.warn("We don't know what to do with that numeric value: {}", numeric.getClass().getName()); 94 | } 95 | } 96 | } else if (headerAsMap.containsKey("delete")) { 97 | // No content line follows a "delete" action 98 | } else { 99 | // Unknown action: log it and pass the line through unchanged 100 | logger.warn("We don't know what to do with that line: {}", header); 101 | } 102 | writer.write(header); 103 | writer.newLine(); 104 | if (content != null) { 105 | writer.write(content); 106 | writer.newLine(); 107 | } 108 | } 109 | writer.flush(); 110 | writer.close(); 111 | } 112 | } 113 | -------------------------------------------------------------------------------- /src/test/java/org/elasticsearch/river/rabbitmq/script/MockScriptFactory.java: -------------------------------------------------------------------------------- 1 | /* 2 | * Licensed to Elasticsearch under one or more contributor 3 | * license agreements. See the NOTICE file distributed with 4 | * this work for additional information regarding copyright 5 | * ownership. Elasticsearch licenses this file to you under 6 | * the Apache License, Version 2.0 (the "License"); you may 7 | * not use this file except in compliance with the License.
8 | * You may obtain a copy of the License at 9 | * 10 | * http://www.apache.org/licenses/LICENSE-2.0 11 | * 12 | * Unless required by applicable law or agreed to in writing, 13 | * software distributed under the License is distributed on an 14 | * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | * KIND, either express or implied. See the License for the 16 | * specific language governing permissions and limitations 17 | * under the License. 18 | */ 19 | 20 | package org.elasticsearch.river.rabbitmq.script; 21 | 22 | import org.elasticsearch.script.ExecutableScript; 23 | import org.elasticsearch.script.NativeScriptFactory; 24 | 25 | import java.util.Map; 26 | 27 | public class MockScriptFactory implements NativeScriptFactory { 28 | 29 | @Override 30 | public ExecutableScript newScript(Map<String, Object> params) { 31 | return new MockScript(params); 32 | } 33 | } 34 | --------------------------------------------------------------------------------
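The integration test above hand-builds Elasticsearch bulk-API payloads (one action line followed by one source line per document) and publishes them to the queue the river consumes. As a minimal sketch of that framing outside the test harness (Python for brevity, no broker involved; the function name and index/type values are illustrative, not part of the plugin):

```python
import json

def build_bulk_message(index, doc_type, num_docs, msg_id=0):
    """Mirror the test's framing: one "index" action line, then one source line, per doc."""
    lines = []
    for j in range(num_docs):
        doc_id = "%d_%d" % (msg_id, j)
        # action line: tells the bulk API where the following source line goes
        lines.append(json.dumps({"index": {"_index": index, "_type": doc_type, "_id": doc_id}}))
        # source line: the document body, matching the fields used in launchTest()
        lines.append(json.dumps({"field": doc_id, "numeric": msg_id * j}))
    # the bulk format requires a trailing newline after the last line
    return "\n".join(lines) + "\n"

msg = build_bulk_message("elasticsearch_test_demo", "typex", 2)
print(msg)
```

The resulting string is exactly what the test passes to `channel.basicPublish(...)` as UTF-8 bytes; the river reads it off the queue and feeds it to the bulk API unchanged.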