├── LICENSE ├── README.md ├── bin ├── control ├── install └── setup ├── conf ├── elasticsearch.yml └── logging.yml ├── env └── ELASTICSEARCH_VERSION ├── hooks ├── publish-http-url ├── publish-unicast-host └── set-unicast-hosts ├── logs └── .gitkeep ├── metadata ├── managed_files.yml └── manifest.yml ├── template ├── .gitkeep └── plugins.txt └── usr ├── bin ├── elasticsearch ├── elasticsearch.in.sh └── plugin └── lib ├── antlr-runtime-3.5.jar ├── apache-log4j-extras-1.2.17.jar ├── asm-4.1.jar ├── asm-commons-4.1.jar ├── elasticsearch-1.7.1.jar ├── groovy-all-2.4.4.jar ├── jna-4.1.0.jar ├── jts-1.13.jar ├── log4j-1.2.17.jar ├── lucene-analyzers-common-4.10.4.jar ├── lucene-core-4.10.4.jar ├── lucene-expressions-4.10.4.jar ├── lucene-grouping-4.10.4.jar ├── lucene-highlighter-4.10.4.jar ├── lucene-join-4.10.4.jar ├── lucene-memory-4.10.4.jar ├── lucene-misc-4.10.4.jar ├── lucene-queries-4.10.4.jar ├── lucene-queryparser-4.10.4.jar ├── lucene-sandbox-4.10.4.jar ├── lucene-spatial-4.10.4.jar ├── lucene-suggest-4.10.4.jar ├── sigar ├── libsigar-amd64-freebsd-6.so ├── libsigar-amd64-linux.so ├── libsigar-amd64-solaris.so ├── libsigar-ia64-linux.so ├── libsigar-sparc-solaris.so ├── libsigar-sparc64-solaris.so ├── libsigar-universal-macosx.dylib ├── libsigar-universal64-macosx.dylib ├── libsigar-x86-freebsd-5.so ├── libsigar-x86-freebsd-6.so ├── libsigar-x86-linux.so ├── libsigar-x86-solaris.so ├── sigar-1.6.4.jar ├── sigar-amd64-winnt.dll ├── sigar-x86-winnt.dll └── sigar-x86-winnt.lib └── spatial4j-0.4.1.jar /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 
39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. 
You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | OpenShift ElasticSearch Cartridge 2 | ================================= 3 | Downloadable ElasticSearch cartridge for OpenShift. 4 | 5 | To create your scalable ElasticSearch app, run: 6 | 7 | rhc app create http://cartreflect-claytondev.rhcloud.com/github/rbrower3/openshift-elasticsearch-cartridge -s 8 | 9 | **NOTE:** your app currently must be a scalable app or this cartridge will not run. 
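Once the app is created, a quick sanity check is to hit the cluster health endpoint through the app's public URL. This is only a sketch: it assumes the app was created with the name `elasticsearch` and that `<namespace>` is your OpenShift domain.

    curl "http://elasticsearch-<namespace>.rhcloud.com/_cluster/health?pretty"

A `status` of `green` or `yellow` in the response means the node is up and answering HTTP requests.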
10 | 11 | 12 | Adding additional cluster nodes 13 | =============================== 14 | To add more nodes to the cluster, simply add more gears: 15 | 16 | rhc cartridge scale -a elasticsearch 17 | 18 | 19 | Plugins 20 | ======= 21 | To install ElasticSearch plugins - 22 | * create new app in openshift 23 | * edit the `plugins.txt` file 24 | * commit 25 | * push your changes to openshift. 26 | 27 | The above steps have been tested in OpenShift Online (v2). For Openshift Enterprise, in case it does not have internet access, you may need to copy the plugins and install them. 28 | -------------------------------------------------------------------------------- /bin/control: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | source $OPENSHIFT_CARTRIDGE_SDK_BASH 4 | 5 | PID_FILE=$OPENSHIFT_ELASTICSEARCH_DIR/run/elasticsearch.pid 6 | 7 | function _install_plugins() 8 | { 9 | local PLUGIN_CMD="$OPENSHIFT_ELASTICSEARCH_DIR/usr/bin/plugin -Des.path.plugins=$OPENSHIFT_DATA_DIR/elasticsearch-plugins" 10 | 11 | local old_plugins=$($PLUGIN_CMD --list | awk '/-/{print $2}' | xargs) 12 | if [ -n "$old_plugins" -a "$old_plugins" != "No" ]; then #ARGH! 13 | echo "Removing old ElasticSearch plugins..." 14 | for plugin in $old_plugins; do 15 | $PLUGIN_CMD --remove $plugin 16 | done 17 | fi 18 | 19 | echo "Installing ElasticSearch plugins..." 20 | 21 | local plugins="$(grep -v '^#' $OPENSHIFT_REPO_DIR/plugins.txt 2>/dev/null | xargs)" 22 | 23 | if [ "${plugins}" ]; then 24 | for plugin in ${plugins}; do 25 | local name=$(echo $plugin | cut -f 1 -d =) 26 | local url=$(echo $plugin | cut -f 2 -d =) 27 | if [ "$name" == "$url" ]; then 28 | $PLUGIN_CMD --install $name 29 | else 30 | $PLUGIN_CMD --url $url --install $name 31 | fi 32 | done 33 | fi 34 | } 35 | 36 | function _is_running() { 37 | if [ -f $PID_FILE ]; then 38 | local zpid=$(cat $PID_FILE 2> /dev/null) 39 | local myid=$(id -u) 40 | if `ps -opid,args --pid $zpid 2>&1 &> /dev/null`; then 41 | return 0 42 | fi 43 | fi 44 | 45 | return 1 46 | } 47 | 48 | function start() { 49 | if _is_running; then 50 | echo "ElasticSearch is already running" 1>&2 51 | return 0 52 | fi 53 | 54 | export PUBLISH_HOST=$(python -c "import socket; print socket.gethostbyname('$OPENSHIFT_GEAR_DNS')") 55 | 56 | $OPENSHIFT_ELASTICSEARCH_DIR/usr/bin/elasticsearch -d -p $PID_FILE 57 | } 58 | 59 | function stop() { 60 | if ! _is_running; then 61 | echo "ElasticSearch is already stopped" 1>&2 62 | return 0 63 | fi 64 | 65 | if [ -f $PID_FILE ]; then 66 | local zpid=$(cat $PID_FILE 2> /dev/null) 67 | fi 68 | 69 | if [ -n $zpid ]; then 70 | /bin/kill $zpid 71 | local ret=$? 
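# If the TERM signal was delivered, give the daemon up to 10 seconds to exit:
# kill -0 only checks that the PID still exists, and the loop below breaks out
# early as soon as the process is gone.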
72 | if [ $ret -eq 0 ]; then 73 | local TIMEOUT=10 74 | while [ $TIMEOUT -gt 0 ] && _is_running ; do 75 | /bin/kill -0 "$zpid" > /dev/null 2>&1 || break 76 | sleep 1 77 | let TIMEOUT=${TIMEOUT}-1 78 | done 79 | fi 80 | fi 81 | } 82 | 83 | function restart() { 84 | stop 85 | start 86 | } 87 | 88 | function status() { 89 | local output="" 90 | if output=$(curl http://$OPENSHIFT_ELASTICSEARCH_IP:$OPENSHIFT_ELASTICSEARCH_PORT/ &> /dev/null); then 91 | client_result "Application is running" 92 | else 93 | client_result "Application is either stopped or inaccessible" 94 | fi 95 | } 96 | 97 | function deploy() { 98 | _install_plugins 99 | } 100 | 101 | case "$1" in 102 | start) start ;; 103 | stop) stop ;; 104 | restart | reload ) restart $1 ;; 105 | status) status ;; 106 | deploy) deploy ;; 107 | *) exit 0 108 | esac 109 | -------------------------------------------------------------------------------- /bin/install: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | mkdir -p $OPENSHIFT_ELASTICSEARCH_DIR/run $OPENSHIFT_DATA_DIR/{elasticsearch,elasticsearch-plugins} 4 | 5 | touch $OPENSHIFT_ELASTICSEARCH_DIR/env/OPENSHIFT_ELASTICSEARCH_CLUSTER 6 | -------------------------------------------------------------------------------- /bin/setup: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | -------------------------------------------------------------------------------- /conf/elasticsearch.yml: -------------------------------------------------------------------------------- 1 | ##################### Elasticsearch Configuration Example ##################### 2 | 3 | # This file contains an overview of various configuration settings, 4 | # targeted at operations staff. Application developers should 5 | # consult the guide at . 6 | # 7 | # The installation procedure is covered at 8 | # . 9 | # 10 | # Elasticsearch comes with reasonable defaults for most settings, 11 | # so you can try it out without bothering with configuration. 12 | # 13 | # Most of the time, these defaults are just fine for running a production 14 | # cluster. If you're fine-tuning your cluster, or wondering about the 15 | # effect of a certain configuration option, please _do ask_ on the 16 | # mailing list or IRC channel [http://elasticsearch.org/community]. 17 | 18 | # Any element in the configuration can be replaced with environment variables 19 | # by placing them in ${...} notation. For example: 20 | # 21 | #node.rack: ${RACK_ENV_VAR} 22 | 23 | # For information on supported formats and syntax for the config file, see 24 | # 25 | 26 | 27 | ################################### Cluster ################################### 28 | 29 | # Cluster name identifies your cluster for auto-discovery. If you're running 30 | # multiple clusters on the same network, make sure you're using unique names. 31 | # 32 | cluster.name: elasticsearch-${OPENSHIFT_APP_UUID} 33 | 34 | 35 | #################################### Node ##################################### 36 | 37 | # Node names are generated dynamically on startup, so you're relieved 38 | # from configuring them manually. You can tie this node to a specific name: 39 | # 40 | node.name: ${OPENSHIFT_GEAR_UUID} 41 | 42 | # Every node can be configured to allow or deny being eligible as the master, 43 | # and to allow or deny to store the data.
44 | # 45 | # Allow this node to be eligible as a master node (enabled by default): 46 | # 47 | #node.master: true 48 | # 49 | # Allow this node to store data (enabled by default): 50 | # 51 | #node.data: true 52 | 53 | # You can exploit these settings to design advanced cluster topologies. 54 | # 55 | # 1. You want this node to never become a master node, only to hold data. 56 | # This will be the "workhorse" of your cluster. 57 | # 58 | #node.master: false 59 | #node.data: true 60 | # 61 | # 2. You want this node to only serve as a master: to not store any data and 62 | # to have free resources. This will be the "coordinator" of your cluster. 63 | # 64 | #node.master: true 65 | #node.data: false 66 | # 67 | # 3. You want this node to be neither master nor data node, but 68 | # to act as a "search load balancer" (fetching data from nodes, 69 | # aggregating results, etc.) 70 | # 71 | #node.master: false 72 | #node.data: false 73 | 74 | # Use the Cluster Health API [http://localhost:9200/_cluster/health], the 75 | # Node Info API [http://localhost:9200/_nodes] or GUI tools 76 | # such as , 77 | # , 78 | # and 79 | # to inspect the cluster state. 80 | 81 | # A node can have generic attributes associated with it, which can later be used 82 | # for customized shard allocation filtering, or allocation awareness. An attribute 83 | # is a simple key value pair, similar to node.key: value, here is an example: 84 | # 85 | #node.rack: rack314 86 | 87 | # By default, multiple nodes are allowed to start from the same installation location 88 | # to disable it, set the following: 89 | #node.max_local_storage_nodes: 1 90 | 91 | 92 | #################################### Index #################################### 93 | 94 | # You can set a number of options (such as shard/replica options, mapping 95 | # or analyzer definitions, translog settings, ...) for indices globally, 96 | # in this file. 97 | # 98 | # Note, that it makes more sense to configure index settings specifically for 99 | # a certain index, either when creating it or by using the index templates API. 100 | # 101 | # See and 102 | # 103 | # for more information. 104 | 105 | # Set the number of shards (splits) of an index (5 by default): 106 | # 107 | #index.number_of_shards: 5 108 | 109 | # Set the number of replicas (additional copies) of an index (1 by default): 110 | # 111 | #index.number_of_replicas: 1 112 | 113 | # Note, that for development on a local machine, with small indices, it usually 114 | # makes sense to "disable" the distributed features: 115 | # 116 | #index.number_of_shards: 1 117 | #index.number_of_replicas: 0 118 | 119 | # These settings directly affect the performance of index and search operations 120 | # in your cluster. Assuming you have enough machines to hold shards and 121 | # replicas, the rule of thumb is: 122 | # 123 | # 1. Having more *shards* enhances the _indexing_ performance and allows to 124 | # _distribute_ a big index across machines. 125 | # 2. Having more *replicas* enhances the _search_ performance and improves the 126 | # cluster _availability_. 127 | # 128 | # The "number_of_shards" is a one-time setting for an index. 129 | # 130 | # The "number_of_replicas" can be increased or decreased anytime, 131 | # by using the Index Update Settings API. 132 | # 133 | # Elasticsearch takes care about load balancing, relocating, gathering the 134 | # results from nodes, etc. Experiment with different settings to fine-tune 135 | # your setup. 
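# Worked example (illustrative numbers only): with index.number_of_shards: 2
# and index.number_of_replicas: 1, an index consists of 2 primary shards plus
# 2 replica shards (4 shard copies in total), so at least 2 gears are needed
# before every replica can be allocated and the cluster health turns green.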
136 | 137 | # Use the Index Status API () to inspect 138 | # the index status. 139 | 140 | 141 | #################################### Paths #################################### 142 | 143 | # Path to directory containing configuration (this file and logging.yml): 144 | # 145 | #path.conf: /path/to/conf 146 | 147 | # Path to directory where to store index data allocated for this node. 148 | # 149 | path.data: ${OPENSHIFT_DATA_DIR}/elasticsearch 150 | # 151 | # Can optionally include more than one location, causing data to be striped across 152 | # the locations (a la RAID 0) on a file level, favouring locations with most free 153 | # space on creation. For example: 154 | # 155 | #path.data: /path/to/data1,/path/to/data2 156 | 157 | # Path to temporary files: 158 | # 159 | #path.work: /path/to/work 160 | 161 | # Path to log files: 162 | # 163 | path.logs: ${OPENSHIFT_ELASTICSEARCH_DIR}/logs 164 | 165 | # Path to where plugins are installed: 166 | # 167 | path.plugins: ${OPENSHIFT_DATA_DIR}/elasticsearch-plugins 168 | 169 | 170 | #################################### Plugin ################################### 171 | 172 | # If a plugin listed here is not installed for current node, the node will not start. 173 | # 174 | #plugin.mandatory: mapper-attachments,lang-groovy 175 | 176 | 177 | ################################### Memory #################################### 178 | 179 | # Elasticsearch performs poorly when JVM starts swapping: you should ensure that 180 | # it _never_ swaps. 181 | # 182 | # Set this property to true to lock the memory: 183 | # 184 | #bootstrap.mlockall: true 185 | 186 | # Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set 187 | # to the same value, and that the machine has enough memory to allocate 188 | # for Elasticsearch, leaving enough memory for the operating system itself. 189 | # 190 | # You should also make sure that the Elasticsearch process is allowed to lock 191 | # the memory, eg. by using `ulimit -l unlimited`. 192 | 193 | 194 | ############################## Network And HTTP ############################### 195 | 196 | # Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens 197 | # on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node 198 | # communication. (the range means that if the port is busy, it will automatically 199 | # try the next port). 200 | 201 | # Set the bind address specifically (IPv4 or IPv6): 202 | # 203 | network.bind_host: ${OPENSHIFT_ELASTICSEARCH_IP} 204 | 205 | # Set the address other nodes will use to communicate with this node. If not 206 | # set, it is automatically derived. It must point to an actual IP address. 
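# In this cartridge, PUBLISH_HOST is exported by start() in bin/control
# (resolved from ${OPENSHIFT_GEAR_DNS}), and the transport publish port below
# is the gear's public proxy port, so that nodes running on different gears
# can reach one another across hosts.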
207 | # 208 | network.publish_host: ${PUBLISH_HOST} 209 | transport.publish_port: ${OPENSHIFT_ELASTICSEARCH_TRANSPORT_PROXY_PORT} 210 | 211 | # Set both 'bind_host' and 'publish_host': 212 | # 213 | #network.host: 192.168.0.1 214 | 215 | # Set a custom port for the node to node communication (9300 by default): 216 | # 217 | #transport.tcp.port: 9300 218 | 219 | # Enable compression for all communication between nodes (disabled by default): 220 | # 221 | #transport.tcp.compress: true 222 | 223 | # Set a custom port to listen for HTTP traffic: 224 | # 225 | #http.port: 9200 226 | 227 | # Set a custom allowed content length: 228 | # 229 | #http.max_content_length: 100mb 230 | 231 | # Disable HTTP completely: 232 | # 233 | #http.enabled: false 234 | 235 | 236 | ################################### Gateway ################################### 237 | 238 | # The gateway allows for persisting the cluster state between full cluster 239 | # restarts. Every change to the state (such as adding an index) will be stored 240 | # in the gateway, and when the cluster starts up for the first time, 241 | # it will read its state from the gateway. 242 | 243 | # There are several types of gateway implementations. For more information, see 244 | # . 245 | 246 | # The default gateway type is the "local" gateway (recommended): 247 | # 248 | #gateway.type: local 249 | 250 | # Settings below control how and when to start the initial recovery process on 251 | # a full cluster restart (to reuse as much local data as possible when using shared 252 | # gateway). 253 | 254 | # Allow recovery process after N nodes in a cluster are up: 255 | # 256 | #gateway.recover_after_nodes: 1 257 | 258 | # Set the timeout to initiate the recovery process, once the N nodes 259 | # from previous setting are up (accepts time value): 260 | # 261 | #gateway.recover_after_time: 5m 262 | 263 | # Set how many nodes are expected in this cluster. Once these N nodes 264 | # are up (and recover_after_nodes is met), begin recovery process immediately 265 | # (without waiting for recover_after_time to expire): 266 | # 267 | #gateway.expected_nodes: 2 268 | 269 | 270 | ############################# Recovery Throttling ############################# 271 | 272 | # These settings allow to control the process of shards allocation between 273 | # nodes during initial recovery, replica allocation, rebalancing, 274 | # or when adding and removing nodes. 275 | 276 | # Set the number of concurrent recoveries happening on a node: 277 | # 278 | # 1. During the initial recovery 279 | # 280 | #cluster.routing.allocation.node_initial_primaries_recoveries: 4 281 | # 282 | # 2. During adding/removing nodes, rebalancing, etc 283 | # 284 | #cluster.routing.allocation.node_concurrent_recoveries: 2 285 | 286 | # Set to throttle throughput when recovering (eg. 100mb, by default 20mb): 287 | # 288 | #indices.recovery.max_bytes_per_sec: 20mb 289 | 290 | # Set to limit the number of open concurrent streams when 291 | # recovering a shard from a peer: 292 | # 293 | #indices.recovery.concurrent_streams: 5 294 | 295 | 296 | ################################## Discovery ################################## 297 | 298 | # Discovery infrastructure ensures nodes can be found within a cluster 299 | # and master node is elected. Multicast discovery is the default. 300 | 301 | # Set to ensure a node sees N other master eligible nodes to be considered 302 | # operational within the cluster. This should be set to a quorum/majority of 303 | # the master-eligible nodes in the cluster. 
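# For example, a cluster of 3 master-eligible gears has a quorum of 2
# (floor(3 / 2) + 1), so setting this to 2 would guard against split-brain.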
304 | # 305 | #discovery.zen.minimum_master_nodes: 1 306 | 307 | # Set the time to wait for ping responses from other nodes when discovering. 308 | # Set this option to a higher value on a slow or congested network 309 | # to minimize discovery failures: 310 | # 311 | #discovery.zen.ping.timeout: 3s 312 | 313 | # For more information, see 314 | # 315 | 316 | # Unicast discovery allows to explicitly control which nodes will be used 317 | # to discover the cluster. It can be used when multicast is not present, 318 | # or to restrict the cluster communication-wise. 319 | # 320 | # 1. Disable multicast discovery (enabled by default): 321 | # 322 | discovery.zen.ping.multicast.enabled: false 323 | # 324 | # 2. Configure an initial list of master nodes in the cluster 325 | # to perform discovery when new nodes (master or data) are started: 326 | # 327 | discovery.zen.ping.unicast.hosts: ${OPENSHIFT_ELASTICSEARCH_CLUSTER} 328 | 329 | # EC2 discovery allows to use AWS EC2 API in order to perform discovery. 330 | # 331 | # You have to install the cloud-aws plugin for enabling the EC2 discovery. 332 | # 333 | # For more information, see 334 | # 335 | # 336 | # See 337 | # for a step-by-step tutorial. 338 | 339 | # GCE discovery allows to use Google Compute Engine API in order to perform discovery. 340 | # 341 | # You have to install the cloud-gce plugin for enabling the GCE discovery. 342 | # 343 | # For more information, see . 344 | 345 | # Azure discovery allows to use Azure API in order to perform discovery. 346 | # 347 | # You have to install the cloud-azure plugin for enabling the Azure discovery. 348 | # 349 | # For more information, see . 350 | 351 | ################################## Slow Log ################################## 352 | 353 | # Shard level query and fetch threshold logging. 354 | 355 | #index.search.slowlog.threshold.query.warn: 10s 356 | #index.search.slowlog.threshold.query.info: 5s 357 | #index.search.slowlog.threshold.query.debug: 2s 358 | #index.search.slowlog.threshold.query.trace: 500ms 359 | 360 | #index.search.slowlog.threshold.fetch.warn: 1s 361 | #index.search.slowlog.threshold.fetch.info: 800ms 362 | #index.search.slowlog.threshold.fetch.debug: 500ms 363 | #index.search.slowlog.threshold.fetch.trace: 200ms 364 | 365 | #index.indexing.slowlog.threshold.index.warn: 10s 366 | #index.indexing.slowlog.threshold.index.info: 5s 367 | #index.indexing.slowlog.threshold.index.debug: 2s 368 | #index.indexing.slowlog.threshold.index.trace: 500ms 369 | 370 | ################################## GC Logging ################################ 371 | 372 | #monitor.jvm.gc.young.warn: 1000ms 373 | #monitor.jvm.gc.young.info: 700ms 374 | #monitor.jvm.gc.young.debug: 400ms 375 | 376 | #monitor.jvm.gc.old.warn: 10s 377 | #monitor.jvm.gc.old.info: 5s 378 | #monitor.jvm.gc.old.debug: 2s 379 | 380 | ################################## Marvel Logging ############################# 381 | marvel.agent.enabled: true 382 | marvel.agent.exporter.es.hosts: ${OPENSHIFT_ELASTICSEARCH_IP}:${OPENSHIFT_ELASTICSEARCH_PORT} 383 | marvel.agent.interval: 60s 384 | 385 | ################################## Security ################################ 386 | 387 | # Uncomment if you want to enable JSONP as a valid return transport on the 388 | # http server. With this enabled, it may pose a security risk, so disabling 389 | # it unless you need it is recommended (it is disabled by default). 
390 | # 391 | #http.jsonp.enable: true 392 | -------------------------------------------------------------------------------- /conf/logging.yml: -------------------------------------------------------------------------------- 1 | # you can override this using by setting a system property, for example -Des.logger.level=DEBUG 2 | es.logger.level: INFO 3 | rootLogger: ${es.logger.level}, console, file 4 | logger: 5 | # log action execution errors for easier debugging 6 | action: DEBUG 7 | # reduce the logging for aws, too much is logged under the default INFO 8 | com.amazonaws: WARN 9 | 10 | # gateway 11 | #gateway: DEBUG 12 | #index.gateway: DEBUG 13 | 14 | # peer shard recovery 15 | #indices.recovery: DEBUG 16 | 17 | # discovery 18 | #discovery: TRACE 19 | 20 | index.search.slowlog: TRACE, index_search_slow_log_file 21 | index.indexing.slowlog: TRACE, index_indexing_slow_log_file 22 | 23 | additivity: 24 | index.search.slowlog: false 25 | index.indexing.slowlog: false 26 | 27 | appender: 28 | console: 29 | type: console 30 | layout: 31 | type: consolePattern 32 | conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 33 | 34 | file: 35 | type: dailyRollingFile 36 | file: ${path.logs}/${cluster.name}.log 37 | datePattern: "'.'yyyy-MM-dd" 38 | layout: 39 | type: pattern 40 | conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 41 | 42 | # Use the following log4j-extras RollingFileAppender to enable gzip compression of log files. 43 | # For more information see https://logging.apache.org/log4j/extras/apidocs/org/apache/log4j/rolling/RollingFileAppender.html 44 | #file: 45 | #type: extrasRollingFile 46 | #file: ${path.logs}/${cluster.name}.log 47 | #rollingPolicy: timeBased 48 | #rollingPolicy.FileNamePattern: ${path.logs}/${cluster.name}.log.%d{yyyy-MM-dd}.gz 49 | #layout: 50 | #type: pattern 51 | #conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 52 | 53 | index_search_slow_log_file: 54 | type: dailyRollingFile 55 | file: ${path.logs}/${cluster.name}_index_search_slowlog.log 56 | datePattern: "'.'yyyy-MM-dd" 57 | layout: 58 | type: pattern 59 | conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 60 | 61 | index_indexing_slow_log_file: 62 | type: dailyRollingFile 63 | file: ${path.logs}/${cluster.name}_index_indexing_slowlog.log 64 | datePattern: "'.'yyyy-MM-dd" 65 | layout: 66 | type: pattern 67 | conversionPattern: "[%d{ISO8601}][%-5p][%-25c] %m%n" 68 | -------------------------------------------------------------------------------- /env/ELASTICSEARCH_VERSION: -------------------------------------------------------------------------------- 1 | 1.7.1 2 | -------------------------------------------------------------------------------- /hooks/publish-http-url: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # 3 | # This hook is needed only for backwards compatibility with OpenShift Origin release 2. 4 | # 5 | 6 | # Exit on any errors 7 | set -e 8 | 9 | # Get gear ip address. 10 | if ! gip=$(facter ipaddress); then 11 | gip=$(python -c "import socket; print socket.gethostbyname('$(hostname)')") 12 | fi 13 | 14 | # 15 | # Publish this gear's HTTP URL/endpoint. 
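# Output format: <gear dns>|<gear ip>:<http proxy port>, for example
# (hypothetical values): app-ns.rhcloud.com|10.1.2.3:35561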
16 | # 17 | echo "${OPENSHIFT_GEAR_DNS}|${gip}:${OPENSHIFT_ELASTICSEARCH_PROXY_PORT}" 18 | -------------------------------------------------------------------------------- /hooks/publish-unicast-host: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | ip_address=$(python -c "import socket; print socket.gethostbyname('$OPENSHIFT_GEAR_DNS')") 4 | echo $ip_address:$OPENSHIFT_ELASTICSEARCH_TRANSPORT_PROXY_PORT 5 | -------------------------------------------------------------------------------- /hooks/set-unicast-hosts: -------------------------------------------------------------------------------- 1 | #!/bin/bash -e 2 | 3 | source $OPENSHIFT_CARTRIDGE_SDK_BASH 4 | 5 | list= 6 | kvargs=$(echo "${@:4}" | tr -d "\n" ) 7 | for arg in $kvargs; do 8 | ip=$(echo "$arg" | cut -f 2 -d '=' | tr -d "'") 9 | if [ -z "$list" ]; then 10 | list="$ip" 11 | else 12 | list="$list,$ip" 13 | fi 14 | done 15 | 16 | echo $list > $OPENSHIFT_ELASTICSEARCH_DIR/env/OPENSHIFT_ELASTICSEARCH_CLUSTER 17 | -------------------------------------------------------------------------------- /logs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/logs/.gitkeep -------------------------------------------------------------------------------- /metadata/managed_files.yml: -------------------------------------------------------------------------------- 1 | --- 2 | locked_files: 3 | - bin/ 4 | - bin/* 5 | - env/ 6 | - env/ELASTICSEARCH_VERSION 7 | - hooks/ 8 | - hooks/* 9 | processed_templates: 10 | - conf/*.erb 11 | -------------------------------------------------------------------------------- /metadata/manifest.yml: -------------------------------------------------------------------------------- 1 | Name: elasticsearch 2 | Cartridge-Short-Name: ELASTICSEARCH 3 | Display-Name: ElasticSearch 1.7.1 4 | Description: "ElasticSearch 1.7.1" 5 | Version: 1.7.1 6 | License: Apache 7 | Vendor: elasticsearch.org 8 | Cartridge-Version: 1.7.1 9 | Cartridge-Vendor: rbrower3 10 | 11 | Categories: 12 | - web_framework 13 | - service 14 | 15 | Provides: 16 | - elasticsearch 17 | - elasticsearch-1.7.1 18 | 19 | Publishes: 20 | publish-unicast-host: 21 | Type: NET_TCP:elasticsearch-cluster-info 22 | publish-http-url: 23 | Type: NET_TCP:httpd-proxy-info 24 | publish-gear-endpoint: 25 | Type: NET_TCP:gear-endpoint-info 26 | 27 | Subscribes: 28 | set-unicast-hosts: 29 | Type: NET_TCP:elasticsearch-cluster-info 30 | 31 | Scaling: 32 | Min: 1 33 | Max: -1 34 | 35 | Endpoints: 36 | - Private-IP-Name: IP 37 | Private-Port-Name: PORT 38 | Private-Port: 9200 39 | Public-Port-Name: PROXY_PORT 40 | Protocols: 41 | - http 42 | Mappings: 43 | - Frontend: '' 44 | Backend: '' 45 | 46 | - Private-IP-Name: IP 47 | Private-Port-Name: TRANSPORT_PORT 48 | Private-Port: 9300 49 | Public-Port-Name: TRANSPORT_PROXY_PORT 50 | -------------------------------------------------------------------------------- /template/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/template/.gitkeep -------------------------------------------------------------------------------- /template/plugins.txt: -------------------------------------------------------------------------------- 1 | ## 2 | ## ElasticSearch plugins to 
install, one per line. 3 | ## Edit this file, commit and push. All plugins will be 4 | ## installed inside all gears. 5 | ## 6 | ## To use a specific URL to download plugin code from, use: 7 | ## [PLUGIN_NAME]=[URL] 8 | ## Ex: 9 | ## myplugin=https://mysite.com/elasticsearch/my-awesome-plugin.zip 10 | ## 11 | 12 | ## Description: 13 | ## ElasticSearch-head is a web front end for browsing and 14 | ## interacting with an Elastic Search cluster. 15 | ## 16 | ## URL: 17 | ## http://mobz.github.io/elasticsearch-head 18 | 19 | ## Uncomment line below to install this plugin 20 | ## mobz/elasticsearch-head 21 | 22 | ## Description: 23 | ## Paramedic is a simple yet sexy tool to monitor and inspect 24 | ## ElasticSearch clusters. It displays real-time statistics and 25 | ## information about your nodes and indices, as well as shard 26 | ## allocation within the cluster. 27 | ## 28 | ## URL: 29 | ## https://github.com/karmi/elasticsearch-paramedic 30 | 31 | ## Uncomment line below to install this plugin 32 | ## karmi/elasticsearch-paramedic 33 | 34 | ## Description: 35 | ## Site plugin for Elasticsearch to help understand and debug queries. 36 | ## 37 | ## URL: 38 | ## https://github.com/polyfractal/elasticsearch-inquisitor 39 | 40 | ## Uncomment line below to install this plugin 41 | ## polyfractal/elasticsearch-inquisitor 42 | 43 | ## Description: 44 | ## SegmentSpy is a tool to watch the segments in your indices. 45 | ## Segment graphs update in real-time, allowing you to watch as 46 | ## ElasticSearch (Lucene) merges your segments 47 | ## 48 | ## URL: 49 | ## https://github.com/polyfractal/elasticsearch-segmentspy 50 | 51 | ## Uncomment line below to install this plugin 52 | ## polyfractal/elasticsearch-segmentspy 53 | 54 | ## Description: 55 | ## Plugin shows the elasticsearch node and cluster stats. 56 | ## 57 | ## URL: 58 | ## http://www.elasticsearch.org/overview/marvel/ 59 | 60 | ## Uncomment line below to install this plugin 61 | ## elasticsearch/marvel/latest 62 | -------------------------------------------------------------------------------- /usr/bin/elasticsearch: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | # OPTIONS: 4 | # -d daemonize (run in background) 5 | # -p pidfile write PID to 6 | # -h 7 | # --help print command line options 8 | # -v print elasticsearch version, then exit 9 | # -D prop set JAVA system property 10 | # -X prop set non-standard JAVA system property 11 | # --prop=val 12 | # --prop val set elasticsearch property (i.e. -Des.=) 13 | 14 | # CONTROLLING STARTUP: 15 | # 16 | # This script relies on few environment variables to determine startup 17 | # behavior, those variables are: 18 | # 19 | # ES_CLASSPATH -- A Java classpath containing everything necessary to run. 20 | # JAVA_OPTS -- Additional arguments to the JVM for heap size, etc 21 | # ES_JAVA_OPTS -- External Java Opts on top of the defaults set 22 | # 23 | # 24 | # Optionally, exact memory values can be set using the following values, note, 25 | # they can still be set using the `ES_JAVA_OPTS`. Sample format include "512m", and "10g". 26 | # 27 | # ES_HEAP_SIZE -- Sets both the minimum and maximum memory to allocate (recommended) 28 | # 29 | # As a convenience, a fragment of shell is sourced in order to set one or 30 | # more of these variables. This so-called `include' can be placed in a 31 | # number of locations and will be searched for in order. 
The lowest 32 | # priority search path is the same directory as the startup script, and 33 | # since this is the location of the sample in the project tree, it should 34 | # almost work Out Of The Box. 35 | # 36 | # Any serious use-case though will likely require customization of the 37 | # include. For production installations, it is recommended that you copy 38 | # the sample to one of /usr/share/elasticsearch/elasticsearch.in.sh, 39 | # /usr/local/share/elasticsearch/elasticsearch.in.sh, or 40 | # /opt/elasticsearch/elasticsearch.in.sh and make your modifications there. 41 | # 42 | # Another option is to specify the full path to the include file in the 43 | # environment. For example: 44 | # 45 | # $ ES_INCLUDE=/path/to/in.sh elasticsearch -p /var/run/es.pid 46 | # 47 | # Note: This is particularly handy for running multiple instances on a 48 | # single installation, or for quick tests. 49 | # 50 | # If you would rather configure startup entirely from the environment, you 51 | # can disable the include by exporting an empty ES_INCLUDE, or by 52 | # ensuring that no include files exist in the aforementioned search list. 53 | # Be aware that you will be entirely responsible for populating the needed 54 | # environment variables. 55 | 56 | 57 | # Maven will replace the project.name with elasticsearch below. If that 58 | # hasn't been done, we assume that this is not a packaged version and the 59 | # user has forgotten to run Maven to create a package. 60 | IS_PACKAGED_VERSION='elasticsearch' 61 | if [ "$IS_PACKAGED_VERSION" != "elasticsearch" ]; then 62 | cat >&2 << EOF 63 | Error: You must build the project with Maven or download a pre-built package 64 | before you can run Elasticsearch. See 'Building from Source' in README.textile 65 | or visit http://www.elasticsearch.org/download to get a pre-built package. 66 | EOF 67 | exit 1 68 | fi 69 | 70 | CDPATH="" 71 | SCRIPT="$0" 72 | 73 | # SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path. 74 | while [ -h "$SCRIPT" ] ; do 75 | ls=`ls -ld "$SCRIPT"` 76 | # Drop everything prior to -> 77 | link=`expr "$ls" : '.*-> \(.*\)$'` 78 | if expr "$link" : '/.*' > /dev/null; then 79 | SCRIPT="$link" 80 | else 81 | SCRIPT=`dirname "$SCRIPT"`/"$link" 82 | fi 83 | done 84 | 85 | # determine elasticsearch home 86 | ES_HOME=`dirname "$SCRIPT"`/.. 87 | 88 | # make ELASTICSEARCH_HOME absolute 89 | ES_HOME=`cd "$ES_HOME"; pwd` 90 | 91 | 92 | # If an include wasn't specified in the environment, then search for one... 93 | if [ "x$ES_INCLUDE" = "x" ]; then 94 | # Locations (in order) to use when searching for an include file. 95 | for include in /usr/share/elasticsearch/elasticsearch.in.sh \ 96 | /usr/local/share/elasticsearch/elasticsearch.in.sh \ 97 | /opt/elasticsearch/elasticsearch.in.sh \ 98 | ~/.elasticsearch.in.sh \ 99 | $ES_HOME/bin/elasticsearch.in.sh \ 100 | "`dirname "$0"`"/elasticsearch.in.sh; do 101 | if [ -r "$include" ]; then 102 | . "$include" 103 | break 104 | fi 105 | done 106 | # ...otherwise, source the specified include. 107 | elif [ -r "$ES_INCLUDE" ]; then 108 | . "$ES_INCLUDE" 109 | fi 110 | 111 | if [ -x "$JAVA_HOME/bin/java" ]; then 112 | JAVA="$JAVA_HOME/bin/java" 113 | else 114 | JAVA=`which java` 115 | fi 116 | 117 | if [ ! -x "$JAVA" ]; then 118 | echo "Could not find any executable java binary. 
Please install java in your PATH or set JAVA_HOME" 119 | exit 1 120 | fi 121 | 122 | if [ -z "$ES_CLASSPATH" ]; then 123 | echo "You must set the ES_CLASSPATH var" >&2 124 | exit 1 125 | fi 126 | 127 | # Special-case path variables. 128 | case `uname` in 129 | CYGWIN*) 130 | ES_CLASSPATH=`cygpath -p -w "$ES_CLASSPATH"` 131 | ES_HOME=`cygpath -p -w "$ES_HOME"` 132 | ;; 133 | esac 134 | 135 | launch_service() 136 | { 137 | pidpath=$1 138 | daemonized=$2 139 | props=$3 140 | es_parms="-Delasticsearch" 141 | es_parms="$es_parms -Des.path.conf=$OPENSHIFT_ELASTICSEARCH_DIR/conf" 142 | 143 | if [ "x$pidpath" != "x" ]; then 144 | es_parms="$es_parms -Des.pidfile=$pidpath" 145 | fi 146 | 147 | # Make sure we dont use any predefined locale, as we check some exception message strings and rely on english language 148 | # As those strings are created by the OS, they are dependant on the configured locale 149 | LANG=en_US.UTF-8 150 | LC_ALL=en_US.UTF-8 151 | 152 | # The es-foreground option will tell Elasticsearch not to close stdout/stderr, but it's up to us not to daemonize. 153 | if [ "x$daemonized" = "x" ]; then 154 | es_parms="$es_parms -Des.foreground=yes" 155 | exec "$JAVA" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home="$ES_HOME" -cp "$ES_CLASSPATH" $props \ 156 | org.elasticsearch.bootstrap.Elasticsearch 157 | # exec without running it in the background, makes it replace this shell, we'll never get here... 158 | # no need to return something 159 | else 160 | # Startup Elasticsearch, background it, and write the pid. 161 | exec "$JAVA" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home="$ES_HOME" -cp "$ES_CLASSPATH" $props \ 162 | org.elasticsearch.bootstrap.Elasticsearch <&- & 163 | return $? 164 | fi 165 | } 166 | 167 | # Print command line usage / help 168 | usage() { 169 | echo "Usage: $0 [-vdh] [-p pidfile] [-D prop] [-X prop]" 170 | echo "Start elasticsearch." 171 | echo " -d daemonize (run in background)" 172 | echo " -p pidfile write PID to " 173 | echo " -h" 174 | echo " --help print command line options" 175 | echo " -v print elasticsearch version, then exit" 176 | echo " -D prop set JAVA system property" 177 | echo " -X prop set non-standard JAVA system property" 178 | echo " --prop=val" 179 | echo " --prop val set elasticsearch property (i.e. -Des.=)" 180 | } 181 | 182 | # Parse any long getopt options and put them into properties before calling getopt below 183 | # Be dash compatible to make sure running under ubuntu works 184 | ARGV="" 185 | while [ $# -gt 0 ] 186 | do 187 | case $1 in 188 | --help) ARGV="$ARGV -h"; shift;; 189 | --*=*) properties="$properties -Des.${1#--}" 190 | shift 1 191 | ;; 192 | --*) [ $# -le 1 ] && { 193 | echo "Option requires an argument: '$1'." 194 | shift 195 | continue 196 | } 197 | properties="$properties -Des.${1#--}=$2" 198 | shift 2 199 | ;; 200 | *) ARGV="$ARGV $1" ; shift 201 | esac 202 | done 203 | 204 | # Parse any command line options. 
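# The getopt spec "vdhp:D:X:" means -v, -d and -h take no argument, while
# -p, -D and -X each require a value (marked by the trailing colon).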
205 | args=`getopt vdhp:D:X: $ARGV` 206 | eval set -- "$args" 207 | 208 | while true; do 209 | case $1 in 210 | -v) 211 | "$JAVA" $JAVA_OPTS $ES_JAVA_OPTS $es_parms -Des.path.home="$ES_HOME" -cp "$ES_CLASSPATH" $props \ 212 | org.elasticsearch.Version 213 | exit 0 214 | ;; 215 | -p) 216 | pidfile="$2" 217 | shift 2 218 | ;; 219 | -d) 220 | daemonized="yes" 221 | shift 222 | ;; 223 | -h) 224 | usage 225 | exit 0 226 | ;; 227 | -D) 228 | properties="$properties -D$2" 229 | shift 2 230 | ;; 231 | -X) 232 | properties="$properties -X$2" 233 | shift 2 234 | ;; 235 | --) 236 | shift 237 | break 238 | ;; 239 | *) 240 | echo "Error parsing argument $1!" >&2 241 | usage 242 | exit 1 243 | ;; 244 | esac 245 | done 246 | 247 | # Start up the service 248 | launch_service "$pidfile" "$daemonized" "$properties" 249 | 250 | exit $? -------------------------------------------------------------------------------- /usr/bin/elasticsearch.in.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ES_CLASSPATH=$ES_CLASSPATH:$ES_HOME/lib/elasticsearch-1.7.1.jar:$ES_HOME/lib/*:$ES_HOME/lib/sigar/* 4 | 5 | if [ "x$ES_MIN_MEM" = "x" ]; then 6 | ES_MIN_MEM=256m 7 | fi 8 | if [ "x$ES_MAX_MEM" = "x" ]; then 9 | ES_MAX_MEM=1g 10 | fi 11 | if [ "x$ES_HEAP_SIZE" != "x" ]; then 12 | ES_MIN_MEM=$ES_HEAP_SIZE 13 | ES_MAX_MEM=$ES_HEAP_SIZE 14 | fi 15 | 16 | # min and max heap sizes should be set to the same value to avoid 17 | # stop-the-world GC pauses during resize, and so that we can lock the 18 | # heap in memory on startup to prevent any of it from being swapped 19 | # out. 20 | JAVA_OPTS="$JAVA_OPTS -Xms${ES_MIN_MEM}" 21 | JAVA_OPTS="$JAVA_OPTS -Xmx${ES_MAX_MEM}" 22 | 23 | # new generation 24 | if [ "x$ES_HEAP_NEWSIZE" != "x" ]; then 25 | JAVA_OPTS="$JAVA_OPTS -Xmn${ES_HEAP_NEWSIZE}" 26 | fi 27 | 28 | # max direct memory 29 | if [ "x$ES_DIRECT_SIZE" != "x" ]; then 30 | JAVA_OPTS="$JAVA_OPTS -XX:MaxDirectMemorySize=${ES_DIRECT_SIZE}" 31 | fi 32 | 33 | # set to headless, just in case 34 | JAVA_OPTS="$JAVA_OPTS -Djava.awt.headless=true" 35 | 36 | # Force the JVM to use IPv4 stack 37 | if [ "x$ES_USE_IPV4" != "x" ]; then 38 | JAVA_OPTS="$JAVA_OPTS -Djava.net.preferIPv4Stack=true" 39 | fi 40 | 41 | JAVA_OPTS="$JAVA_OPTS -XX:+UseParNewGC" 42 | JAVA_OPTS="$JAVA_OPTS -XX:+UseConcMarkSweepGC" 43 | 44 | JAVA_OPTS="$JAVA_OPTS -XX:CMSInitiatingOccupancyFraction=75" 45 | JAVA_OPTS="$JAVA_OPTS -XX:+UseCMSInitiatingOccupancyOnly" 46 | 47 | # GC logging options 48 | if [ "x$ES_USE_GC_LOGGING" != "x" ]; then 49 | JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCDetails" 50 | JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCTimeStamps" 51 | JAVA_OPTS="$JAVA_OPTS -XX:+PrintClassHistogram" 52 | JAVA_OPTS="$JAVA_OPTS -XX:+PrintTenuringDistribution" 53 | JAVA_OPTS="$JAVA_OPTS -XX:+PrintGCApplicationStoppedTime" 54 | JAVA_OPTS="$JAVA_OPTS -Xloggc:/var/log/elasticsearch/gc.log" 55 | fi 56 | 57 | # Causes the JVM to dump its heap on OutOfMemory. 58 | JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError" 59 | # The path to the heap dump location, note directory must exists and have enough 60 | # space for a full heap dump. 61 | #JAVA_OPTS="$JAVA_OPTS -XX:HeapDumpPath=$ES_HOME/logs/heapdump.hprof" 62 | 63 | # Disables explicit GC 64 | JAVA_OPTS="$JAVA_OPTS -XX:+DisableExplicitGC" 65 | 66 | # Ensure UTF-8 encoding by default (e.g. 
filenames) 67 | JAVA_OPTS="$JAVA_OPTS -Dfile.encoding=UTF-8" -------------------------------------------------------------------------------- /usr/bin/plugin: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | CDPATH="" 4 | SCRIPT="$0" 5 | 6 | # SCRIPT may be an arbitrarily deep series of symlinks. Loop until we have the concrete path. 7 | while [ -h "$SCRIPT" ] ; do 8 | ls=`ls -ld "$SCRIPT"` 9 | # Drop everything prior to -> 10 | link=`expr "$ls" : '.*-> \(.*\)$'` 11 | if expr "$link" : '/.*' > /dev/null; then 12 | SCRIPT="$link" 13 | else 14 | SCRIPT=`dirname "$SCRIPT"`/"$link" 15 | fi 16 | done 17 | 18 | # determine elasticsearch home 19 | ES_HOME=`dirname "$SCRIPT"`/.. 20 | 21 | # make ELASTICSEARCH_HOME absolute 22 | ES_HOME=`cd "$ES_HOME"; pwd` 23 | 24 | 25 | if [ -x "$JAVA_HOME/bin/java" ]; then 26 | JAVA=$JAVA_HOME/bin/java 27 | else 28 | JAVA=`which java` 29 | fi 30 | 31 | # real getopt cannot be used because we need to hand options over to the PluginManager 32 | while [ $# -gt 0 ]; do 33 | case $1 in 34 | -D*=*) 35 | properties="$properties $1" 36 | ;; 37 | -D*) 38 | var=$1 39 | shift 40 | properties="$properties $var=$1" 41 | ;; 42 | *) 43 | args="$args $1" 44 | esac 45 | shift 46 | done 47 | 48 | exec "$JAVA" $JAVA_OPTS $ES_JAVA_OPTS -Xmx64m -Xms16m -Delasticsearch -Des.path.home="$ES_HOME" $properties -cp "$ES_HOME/lib/*" org.elasticsearch.plugins.PluginManager $args 49 | -------------------------------------------------------------------------------- /usr/lib/antlr-runtime-3.5.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/antlr-runtime-3.5.jar -------------------------------------------------------------------------------- /usr/lib/apache-log4j-extras-1.2.17.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/apache-log4j-extras-1.2.17.jar -------------------------------------------------------------------------------- /usr/lib/asm-4.1.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/asm-4.1.jar -------------------------------------------------------------------------------- /usr/lib/asm-commons-4.1.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/asm-commons-4.1.jar -------------------------------------------------------------------------------- /usr/lib/elasticsearch-1.7.1.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/elasticsearch-1.7.1.jar -------------------------------------------------------------------------------- /usr/lib/groovy-all-2.4.4.jar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/groovy-all-2.4.4.jar 
--------------------------------------------------------------------------------
/usr/lib/jna-4.1.0.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/jna-4.1.0.jar
--------------------------------------------------------------------------------
/usr/lib/jts-1.13.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/jts-1.13.jar
--------------------------------------------------------------------------------
/usr/lib/log4j-1.2.17.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/log4j-1.2.17.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-analyzers-common-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-analyzers-common-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-core-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-core-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-expressions-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-expressions-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-grouping-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-grouping-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-highlighter-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-highlighter-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-join-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-join-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-memory-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-memory-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-misc-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-misc-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-queries-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-queries-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-queryparser-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-queryparser-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-sandbox-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-sandbox-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-spatial-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-spatial-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/lucene-suggest-4.10.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/lucene-suggest-4.10.4.jar
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-amd64-freebsd-6.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-amd64-freebsd-6.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-amd64-linux.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-amd64-linux.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-amd64-solaris.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-amd64-solaris.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-ia64-linux.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-ia64-linux.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-sparc-solaris.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-sparc-solaris.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-sparc64-solaris.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-sparc64-solaris.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-universal-macosx.dylib:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-universal-macosx.dylib
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-universal64-macosx.dylib:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-universal64-macosx.dylib
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-x86-freebsd-5.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-x86-freebsd-5.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-x86-freebsd-6.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-x86-freebsd-6.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-x86-linux.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-x86-linux.so
--------------------------------------------------------------------------------
/usr/lib/sigar/libsigar-x86-solaris.so:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/libsigar-x86-solaris.so
--------------------------------------------------------------------------------
/usr/lib/sigar/sigar-1.6.4.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/sigar-1.6.4.jar
--------------------------------------------------------------------------------
/usr/lib/sigar/sigar-amd64-winnt.dll:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/sigar-amd64-winnt.dll
--------------------------------------------------------------------------------
/usr/lib/sigar/sigar-x86-winnt.dll:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/sigar-x86-winnt.dll
--------------------------------------------------------------------------------
/usr/lib/sigar/sigar-x86-winnt.lib:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/sigar/sigar-x86-winnt.lib
--------------------------------------------------------------------------------
/usr/lib/spatial4j-0.4.1.jar:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/rbrower3/openshift-elasticsearch-cartridge/29f5bbe5a90bdbf94fb21c6e88be48f7f6ebe866/usr/lib/spatial4j-0.4.1.jar
--------------------------------------------------------------------------------
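
Usage note on /usr/bin/plugin (shown earlier in this dump): the wrapper collects any -D options, forwards them together with the remaining arguments to org.elasticsearch.plugins.PluginManager, and runs it against the cartridge's usr/lib classpath, so it accepts the same commands as the stock Elasticsearch 1.x bin/plugin script. A minimal sketch of how it might be invoked; the plugin name and the proxy properties below are illustrative examples, not files or settings shipped in this repository:

    # install a community plugin by its <org>/<name> coordinates
    ./usr/bin/plugin --install mobz/elasticsearch-head

    # -D options are gathered by the wrapper and passed through as JVM system
    # properties, e.g. standard Java proxy settings for the download step
    ./usr/bin/plugin -Dhttp.proxyHost=proxy.example.com -Dhttp.proxyPort=3128 --install mobz/elasticsearch-head

    # list currently installed plugins
    ./usr/bin/plugin --list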