├── .gitignore ├── DEVELOPMENT.md ├── LICENSE ├── Makefile ├── README.md ├── admin ├── build-release-task.sh └── build-release.sh ├── apps └── riak_explorer │ ├── ebin │ ├── include │ └── src ├── docs ├── .gitkeep └── RELEASES.md ├── include └── re_wm.hrl ├── output.html ├── pkg.vars.config ├── priv └── ember_riak_explorer │ └── dist │ ├── assets │ ├── ember-riak-explorer-1c08463dc4905f38189fbf189c9d4683.js │ ├── ember-riak-explorer-9ecfc362e73f3261881b1283c9960926.css │ ├── images │ │ ├── ajax-loading-big-a5291414ac90a6ef191134f917631dec.gif │ │ ├── riak-14a5288eb5ad20a7958cda159f71e7dc.png │ │ └── sample_logo-eb3d11f4175609d291de514c8ab27f6c.png │ ├── vendor-4ce527910826be98f1da04503769e514.js │ └── vendor-ce9df154529b9de706ffbbd59803695f.css │ ├── crossdomain.xml │ ├── index.html │ └── robots.txt ├── rebar ├── rebar.config ├── rebar.config.script ├── rel ├── files │ ├── advanced.config │ ├── cert.pem │ ├── key.pem │ └── riak_explorer.schema ├── rebar.config ├── reltool.config └── vars.config ├── src ├── re_cluster.erl ├── re_file_util.erl ├── re_job.erl ├── re_job_manager.erl ├── re_node.erl ├── re_node_control.erl ├── re_node_job.erl ├── re_riak_patch.erl ├── re_sup.erl ├── re_wm.erl ├── re_wm_control.erl ├── re_wm_explore.erl ├── re_wm_proxy.erl ├── re_wm_static.erl ├── riak_control_sup.erl ├── riak_explorer.app.src └── riak_explorer.erl ├── test └── integration │ ├── re_wm_explore_test.erl │ └── ret.erl ├── tools.mk └── vagrant └── ubuntu └── trusty64 └── Vagrantfile /.gitignore: -------------------------------------------------------------------------------- 1 | .eunit 2 | deps 3 | *.o 4 | *.plt 5 | erl_crash.dump 6 | *.beam 7 | _build 8 | *.tar.gz 9 | ebin/*.beam 10 | ebin/*.app 11 | rel/riak_explorer 12 | rel/riak-addon 13 | rel/root 14 | .concrete/DEV_MODE 15 | .rebar 16 | configure.sh 17 | .DS_Store 18 | *~ 19 | .vagrant 20 | packages 21 | oauth.txt 22 | rel/sandbox 23 | -------------------------------------------------------------------------------- 
/DEVELOPMENT.md: -------------------------------------------------------------------------------- 1 | # Development 2 | 3 | The Riak Explorer project consists of two sub-projects: 4 | the Erlang API (this repo), and the front-end JS 5 | [riak-explorer-gui](https://github.com/basho-labs/riak-explorer-gui). 6 | The GUI is optional; you can run the API as a standalone add-on to Riak. 7 | 8 | ## Requirements - Explorer API 9 | 10 | * [Install Erlang](http://docs.basho.com/riak/latest/ops/building/installing/erlang/) 11 | * [Install Riak](http://docs.basho.com/riak/latest/ops/building/installing/) 12 | (from a pre-built package, or from source) 13 | * Clone `riak_explorer` (this repo) and `cd` into it. 14 | 15 | ## Developer Instructions - Explorer API 16 | These instructions assume that you have Erlang installed, and Riak installed 17 | and started. 18 | 19 | #### Compile the API code 20 | 21 | 1. `make` - Loads and compiles all dependencies (depends on `erl`) 22 | 23 | 2. `make rel` - Performs release tasks and creates `rel/riak_explorer`, 24 | where its executable and config files will be located. 25 | 26 | 3. `make stage` - Enables faster development cycles; only needs to be run once to set up lib and static web symlinks 27 | 28 | #### Configure and Test the connection to Riak 29 | 30 | Riak Explorer only interacts with the clusters specified in its config file 31 | (`riak_explorer.conf`). 32 | 33 | By default, it will try to connect to a `default` cluster, expecting your 34 | local Riak node to have the id `riak@127.0.0.1`. If your Riak's node id is 35 | different (check Riak's `riak.conf` file), you will need to change Explorer's 36 | config to point it in the right direction. (For advanced Riak users: also 37 | double-check that Riak's Erlang cookie matches Explorer's cookie.) 38 | 39 | 4. Verify settings in `rel/riak_explorer/etc/riak_explorer.conf` 40 | 41 | 5.
`./rel/riak_explorer/bin/riak_explorer start` (or `console|attach|stop`) - 42 | Starts the `riak_explorer` Erlang API, as well as the Webmachine web server 43 | that will be serving the Ember.js GUI's HTTP and AJAX requests. 44 | 45 | 6. `curl localhost:9000/explore/ping` - Test that Explorer is up and running. 46 | By default, Explorer listens on port `9000` (you can change this in 47 | `riak_explorer.conf`). 48 | 49 | 7. `curl localhost:9000/explore/clusters/default/nodes` - Test to make sure 50 | Explorer can connect to the default cluster. You should see a response like 51 | `{"nodes":[{"id":"riak@127.0.0.1"}], ...` If the list of nodes comes back 52 | empty, make sure Riak is started, and the node id and Erlang cookie are 53 | correct. 54 | 55 | #### Running Explorer from within Riak (W.I.P.) 56 | 57 | (work in progress, to be expanded) 58 | 59 | First, locate the path to your Riak `lib` directory (as well as `lib/basho-patches`, 60 | within it). If your Riak 61 | is installed locally from source (or from the Mac OS X app), it will be located 62 | at `/lib`. If you installed it on a server 63 | OS from package, see the [basho-patches section](http://docs.basho.com/riak/latest/ops/upgrading/rolling-upgrades/#Basho-Patches) for an idea of where Riak `lib/` is installed. 64 | 65 | (#TODO - explain where Riak's `priv/` is located). 66 | 67 | Let's export these locations to variables in your terminal session, to make the 68 | instructions easier. (The instructions below are for a local Mac OS X install 69 | from `Riak.app`.) 70 | 71 | ```bash 72 | export RIAK_PATH=/Applications/Riak.app/Contents/Resources/riak-2.1.0 73 | export RIAK_LIB=$RIAK_PATH/lib 74 | export RIAK_PRIV=$RIAK_PATH/priv 75 | ``` 76 | 77 | The `basho-patches` directory will be located within `$RIAK_LIB`. This directory 78 | allows hot-loading of custom Erlang modules into Riak, and that's exactly the 79 | mechanism the Explorer API will be using. 80 | 81 | 1. `make riak-addon` 82 | 83 | 2. 
`cp rel/riak-addon/ebin/* $RIAK_LIB/basho-patches/` 84 | 85 | 3. `rm -rf $RIAK_PRIV/*` <- are we sure this part doesn't get rid of anything 86 | necessary? 87 | 88 | 4. `cp -R rel/riak-addon/priv/* $RIAK_PRIV` 89 | 90 | #### Run the Tests 91 | 92 | 8. `make test` - Recompiles `src` and executes unit tests 93 | 94 | 9. `make itest` - Recompiles `src`, executes integration tests (run `./rel/riak_explorer/bin/riak_explorer start` first) 95 | 96 | ## Developer Instructions - Explorer GUI (optional) 97 | 98 | For instructions on how to install the front-end GUI (and its dependencies), 99 | see the README at 100 | [riak-explorer-gui](https://github.com/basho-labs/riak-explorer-gui). 101 | 102 | ## Thank You! 103 | 104 | Thank you for being part of the community! We love you for it. 105 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 
22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. 
For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. 
If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. 
You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. 
You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "{}" 182 | replaced with your own identifying information. 
(Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright {yyyy} {name of copyright owner} 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | 203 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | REPO ?= riak_explorer 2 | PKG_VERSION ?= $(shell git describe --tags --abbrev=0 | tr - .) 
3 | OS_FAMILY ?= ubuntu 4 | OS_VERSION ?= 14.04 5 | PKGNAME ?= $(REPO)-$(PKG_VERSION)-$(OS_FAMILY)-$(OS_VERSION).tar.gz 6 | OAUTH_TOKEN ?= $(shell cat oauth.txt) 7 | GIT_TAG ?= $(shell git describe --tags --abbrev=0) 8 | RELEASE_ID ?= $(shell curl -sS https://api.github.com/repos/basho-labs/$(REPO)/releases/tags/$(GIT_TAG)?access_token=$(OAUTH_TOKEN) | python -c 'import sys, json; print(json.load(sys.stdin)["id"])') 9 | DEPLOY_BASE ?= "https://uploads.github.com/repos/basho-labs/$(REPO)/releases/$(RELEASE_ID)/assets?access_token=$(OAUTH_TOKEN)&name=$(PKGNAME)" 10 | DOWNLOAD_BASE ?= https://github.com/basho-labs/$(REPO)/releases/download/$(GIT_TAG)/$(PKGNAME) 11 | 12 | BASE_DIR = $(shell pwd) 13 | ERLANG_BIN = $(shell dirname $(shell which erl)) 14 | REBAR ?= $(BASE_DIR)/rebar 15 | OVERLAY_VARS ?= 16 | 17 | ifneq (,$(shell whereis sha256sum | awk '{print $$2}';)) 18 | SHASUM = sha256sum 19 | else 20 | SHASUM = shasum -a 256 21 | endif 22 | 23 | .PHONY: deps 24 | 25 | all: compile 26 | 27 | compile: deps 28 | $(REBAR) compile 29 | recompile: 30 | $(REBAR) compile skip_deps=true 31 | deps: 32 | $(REBAR) get-deps 33 | clean: cleantest relclean clean-sandbox 34 | -rm -rf packages 35 | clean-sandbox: 36 | -rm -rf rel/sandbox 37 | cleantest: 38 | rm -rf .eunit/* 39 | test: cleantest 40 | $(REBAR) skip_deps=true eunit 41 | itest: recompile cleantest 42 | INTEGRATION_TEST=true $(REBAR) skip_deps=true eunit 43 | reitest: cleantest 44 | INTEGRATION_TEST=true $(REBAR) skip_deps=true eunit 45 | relclean: 46 | -rm -rf rel/riak_explorer 47 | -rm -rf rel/root 48 | rel: relclean deps compile 49 | $(REBAR) compile 50 | $(REBAR) skip_deps=true generate $(OVERLAY_VARS) 51 | stage: rel 52 | $(foreach dep,$(wildcard deps/*), rm -rf rel/riak_explorer/lib/$(shell basename $(dep))-* && ln -sf $(abspath $(dep)) rel/riak_explorer/lib;) 53 | $(foreach app,$(wildcard apps/*), rm -rf rel/riak_explorer/lib/$(shell basename $(app))-* && ln -sf $(abspath $(app)) rel/riak_explorer/lib;) 54 | rm -rf
rel/riak_explorer/priv/ember_riak_explorer/dist && ln -sf $(abspath priv/ember_riak_explorer/dist) rel/riak_explorer/priv/ember_riak_explorer 55 | 56 | riak-addon: 57 | -rm -rf rel/riak-addon 58 | mkdir -p rel/riak-addon/ebin 59 | mkdir -p rel/riak-addon/priv 60 | $(REBAR) compile 61 | cp -R deps/riakc/ebin/* rel/riak-addon/ebin/ 62 | cp -R deps/riak_pb/ebin/* rel/riak-addon/ebin/ 63 | cp -R deps/protobuffs/ebin/* rel/riak-addon/ebin/ 64 | cp -R ebin/* rel/riak-addon/ebin/ 65 | cp -R priv/* rel/riak-addon/priv/ 66 | 67 | ## 68 | ## Packaging targets 69 | ## 70 | tarball-standalone: rel 71 | echo "Creating packages/"$(PKGNAME) 72 | mkdir -p packages 73 | tar -C rel -czf $(PKGNAME) $(REPO)/ 74 | mv $(PKGNAME) packages/ 75 | cd packages && $(SHASUM) $(PKGNAME) > $(PKGNAME).sha 76 | cd packages && echo "$(DOWNLOAD_BASE)" > remote.txt 77 | cd packages && echo "$(BASE_DIR)/packages/$(PKGNAME)" > local.txt 78 | sync-standalone: 79 | echo "Uploading to "$(DOWNLOAD_BASE) 80 | @cd packages && \ 81 | curl -XPOST -sS -H 'Content-Type: application/gzip' $(DEPLOY_BASE) --data-binary @$(PKGNAME) && \ 82 | curl -XPOST -sS -H 'Content-Type: application/octet-stream' $(DEPLOY_BASE).sha --data-binary @$(PKGNAME).sha 83 | 84 | ASSET_ID ?= $(shell curl -sS https://api.github.com/repos/basho-labs/$(REPO)/releases/$(RELEASE_ID)/assets?access_token=$(OAUTH_TOKEN) | python -c 'import sys, json; print("".join([str(asset["id"]) if asset["name"] == "$(PKGNAME)" else "" for asset in json.load(sys.stdin)]))') 85 | ASSET_SHA_ID ?= $(shell curl -sS https://api.github.com/repos/basho-labs/$(REPO)/releases/$(RELEASE_ID)/assets?access_token=$(OAUTH_TOKEN) | python -c 'import sys, json; print("".join([str(asset["id"]) if asset["name"] == "$(PKGNAME).sha" else "" for asset in json.load(sys.stdin)]))') 86 | DELETE_DEPLOY_BASE ?= "https://api.github.com/repos/basho-labs/$(REPO)/releases/assets/$(ASSET_ID)?access_token=$(OAUTH_TOKEN)" 87 | DELETE_SHA_DEPLOY_BASE ?=
"https://api.github.com/repos/basho-labs/$(REPO)/releases/assets/$(ASSET_SHA_ID)?access_token=$(OAUTH_TOKEN)" 88 | 89 | sync-delete-standalone: 90 | echo "Deleting "$(DOWNLOAD_BASE) 91 | - $(shell curl -XDELETE -sS $(DELETE_DEPLOY_BASE)) 92 | - $(shell curl -XDELETE -sS $(DELETE_SHA_DEPLOY_BASE)) 93 | 94 | RIAK_BASE ?= root 95 | PATCH_PKG_VERSION ?= $(PKG_VERSION).patch 96 | PATCH_PKGNAME ?= $(REPO)-$(PATCH_PKG_VERSION)-$(OS_FAMILY)-$(OS_VERSION).tar.gz 97 | PATCH_DEPLOY_BASE ?= "https://uploads.github.com/repos/basho-labs/$(REPO)/releases/$(RELEASE_ID)/assets?access_token=$(OAUTH_TOKEN)&name=$(PATCH_PKGNAME)" 98 | PATCH_DOWNLOAD_BASE ?= https://github.com/basho-labs/$(REPO)/releases/download/$(GIT_TAG)/$(PATCH_PKGNAME) 99 | 100 | reltarball: RIAK_BASE = . 101 | reltarball: PATCH_PKG_VERSION = $(PKG_VERSION).relpatch 102 | reltarball: compile clean-sandbox 103 | reltarball: 104 | $(call build-tarball) 105 | 106 | relsync: RIAK_BASE = . 107 | relsync: PATCH_PKG_VERSION = $(PKG_VERSION).relpatch 108 | relsync: 109 | $(call do-sync) 110 | 111 | tarball: compile clean-sandbox 112 | tarball: 113 | $(call build-tarball) 114 | 115 | sync: 116 | $(call do-sync) 117 | 118 | define build-tarball 119 | echo "Creating packages/"$(PATCH_PKGNAME) 120 | -rm -rf rel/sandbox/$(RIAK_BASE) 121 | mkdir -p rel/sandbox/$(RIAK_BASE)/riak/lib/basho-patches 122 | mkdir -p rel/sandbox/$(RIAK_BASE)/riak/priv 123 | cp -R deps/riakc/ebin/* rel/sandbox/$(RIAK_BASE)/riak/lib/basho-patches/ 124 | cp -R deps/riak_pb/ebin/* rel/sandbox/$(RIAK_BASE)/riak/lib/basho-patches/ 125 | cp -R deps/protobuffs/ebin/* rel/sandbox/$(RIAK_BASE)/riak/lib/basho-patches/ 126 | cp -R ebin/* rel/sandbox/$(RIAK_BASE)/riak/lib/basho-patches/ 127 | cp -R priv/* rel/sandbox/$(RIAK_BASE)/riak/priv/ 128 | mkdir -p packages 129 | tar -C rel/sandbox -czf $(PATCH_PKGNAME) $(RIAK_BASE) 130 | mv $(PATCH_PKGNAME) packages/ 131 | cd packages && $(SHASUM) $(PATCH_PKGNAME) > $(PATCH_PKGNAME).sha 132 | cd packages && echo 
"$(PATCH_DOWNLOAD_BASE)" > remote.txt 133 | cd packages && echo "$(BASE_DIR)/packages/$(PATCH_PKGNAME)" > local.txt 134 | endef 135 | 136 | define do-sync 137 | echo "Uploading to "$(PATCH_DOWNLOAD_BASE) 138 | @cd packages && \ 139 | curl -XPOST -sS -H 'Content-Type: application/gzip' $(PATCH_DEPLOY_BASE) --data-binary @$(PATCH_PKGNAME) && \ 140 | curl -XPOST -sS -H 'Content-Type: application/octet-stream' $(PATCH_DEPLOY_BASE).sha --data-binary @$(PATCH_PKGNAME).sha 141 | endef 142 | 143 | PATCH_ASSET_ID ?= $(shell curl -sS https://api.github.com/repos/basho-labs/$(REPO)/releases/$(RELEASE_ID)/assets?access_token=$(OAUTH_TOKEN) | python -c 'import sys, json; print("".join([str(asset["id"]) if asset["name"] == "$(PATCH_PKGNAME)" else "" for asset in json.load(sys.stdin)]))') 144 | PATCH_ASSET_SHA_ID ?= $(shell curl -sS https://api.github.com/repos/basho-labs/$(REPO)/releases/$(RELEASE_ID)/assets?access_token=$(OAUTH_TOKEN) | python -c 'import sys, json; print("".join([str(asset["id"]) if asset["name"] == "$(PATCH_PKGNAME).sha" else "" for asset in json.load(sys.stdin)]))') 145 | PATCH_DELETE_DEPLOY_BASE ?= "https://api.github.com/repos/basho-labs/$(REPO)/releases/assets/$(PATCH_ASSET_ID)?access_token=$(OAUTH_TOKEN)" 146 | PATCH_DELETE_SHA_DEPLOY_BASE ?= "https://api.github.com/repos/basho-labs/$(REPO)/releases/assets/$(PATCH_ASSET_SHA_ID)?access_token=$(OAUTH_TOKEN)" 147 | 148 | sync-delete: 149 | echo "Deleting "$(PATCH_DOWNLOAD_BASE) 150 | - $(shell curl -XDELETE -sS $(PATCH_DELETE_DEPLOY_BASE)) 151 | - $(shell curl -XDELETE -sS $(PATCH_DELETE_SHA_DEPLOY_BASE)) 152 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Riak Explorer 2 | 3 | [![Join the chat at
https://gitter.im/basho-labs/riak_explorer](https://badges.gitter.im/basho-labs/riak_explorer.svg)](https://gitter.im/basho-labs/riak_explorer?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) 4 | 5 | Riak Explorer provides browsing and admin capabilities for [Riak 6 | KV](http://basho.com/products/riak-kv/) and [Riak TS](http://basho.com/products/riak-ts/), a distributed NoSQL data 7 | store that offers high availability, fault tolerance, operational simplicity, 8 | and scalability. 9 | 10 | Riak Explorer is useful in both development and production environments. It includes 11 | convenient ways to browse Bucket Types, Buckets, and Keys; view and edit Riak 12 | Objects; and more. To prevent heavy I/O requests from key listings, be sure to 13 | edit the config file to reflect the environment as [explained in Using Riak 14 | Explorer](#using-riak-explorer). 15 | 16 | * [Installation](#installation) 17 | * [System Architecture](#system-architecture) 18 | * [Using Riak Explorer](#using-riak-explorer) 19 | * [Development / Contributing](#development--contributing) 20 | 21 | ## Installation 22 | 23 | ### Installing from a pre-built Package 24 | 25 | The easiest way to install Riak Explorer (for non-developers) is to use one of 26 | the pre-compiled packages below. These include both the Erlang backend API code 27 | (this repository), and the front-end Ember.js GUI code (from 28 | [riak-explorer-gui](https://github.com/basho-labs/riak-explorer-gui)). 29 | 30 | #### Standalone Version 31 | 32 | 1. Download and extract a `.tar.gz` release from the releases page: [https://github.com/basho-labs/riak_explorer/releases](https://github.com/basho-labs/riak_explorer/releases). 33 | * *Note: If you'd like support for additional OSes, please [open an Issue](https://github.com/basho-labs/riak_explorer/issues)* 34 | 35 | 2.
Verify the default settings in `riak_explorer/etc/riak_explorer.conf` 36 | will work for your configuration (primarily that port 9000 is available on your 37 | host). Pay special attention to the development mode setting; it should be `off` 38 | in a production environment to prevent accidental key listings. 39 | 40 | 3. Run `./riak_explorer/bin/riak_explorer start` to start the `riak_explorer` 41 | application 42 | 43 | 4. Navigate to [http://localhost:9000/](http://localhost:9000/) 44 | 45 | #### Riak Patch Version 46 | 47 | 1. Download and extract a `patch` release from the releases page: [https://github.com/basho-labs/riak_explorer/releases](https://github.com/basho-labs/riak_explorer/releases). 48 | 49 | 2. Locate your Riak installation and run `cp -R root/riak/lib/basho-patches/* /path/to/riak/lib/basho-patches/` and `cp -R root/riak/priv /path/to/riak/priv`. 51 | 52 | 3. Run `riak/bin/riak start` 52 | 53 | 4. Navigate to [http://localhost:8098/admin](http://localhost:8098/admin) 54 | 55 | ### Installing the Dev Environment 56 | 57 | For developer install instructions (and contribution guidelines), see the 58 | [Development / Contributing](#development--contributing) section, below. 59 | 60 | ## System Architecture 61 | 62 | *Front-end GUI:* [Ember.js](http://emberjs.com/). 63 | See the [riak-explorer-gui](https://github.com/basho-labs/riak-explorer-gui) 64 | repository for the front-end code and setup instructions. 65 | 66 | *Back-end:* [Erlang](http://www.erlang.org/) is the primary development language 67 | and [WebMachine](http://webmachine.github.io/) is used to serve a RESTful 68 | API (and, optionally, to serve the Ember.js GUI app). 69 | 70 | ## Using Riak Explorer 71 | 72 | ### Development Mode 73 | 74 | The concept of "Development Mode" is crucial to Riak Explorer.
75 | 76 | Because Explorer allows users to perform operations that are disastrous in 77 | production (such as List Keys or List Buckets operations), the app is careful 78 | to enable those operations *only in Development Mode*. This setting is 79 | toggled in the Explorer config file, on a per-cluster basis. 80 | 81 | If you take a look in `rel/riak_explorer/etc/riak_explorer.conf`, you will see 82 | a line like: 83 | 84 | ``` 85 | clusters.default.development_mode = on 86 | ``` 87 | 88 | This means that the `default` cluster has Dev Mode *enabled*, and *WILL* allow 89 | prohibitively expensive operations such as Streaming List Keys. Operators are strongly 90 | encouraged to either: 91 | 92 | a. Not point Explorer at production clusters, or 93 | 94 | b. If Explorer is used with production clusters, set `development_mode = off` for 95 | that cluster. 96 | 97 | ### Table Row, Key, and Bucket List Caches (in Dev Mode only) 98 | 99 | Even in Dev Mode, Explorer tries not to run listing operations 100 | more than necessary. To that end, the API runs the list operation once, when requested by the user, and then *caches* the 101 | result in a text file, on disk. The GUI app user, when browsing a list, only 102 | interacts with those caches. 103 | 104 | ### Explorer API endpoints 105 | 106 | The three types of API endpoints available are: 107 | 108 | 1. The **Riak proxy** endpoints, `/riak/nodes/` and `/riak/clusters/`. The app 109 | uses these endpoints to make calls to the plain [Riak HTTP 110 | API](http://docs.basho.com/riak/latest/dev/references/http/). The proxy 111 | endpoints are used for several reasons, primarily due to CORS issues 112 | (on the Riak API side). 113 | 114 | So, for example, `curl localhost:9000/riak/nodes/riak@127.0.0.1/ping` 115 | proxies the request to that specific node's [ping HTTP API](http://docs.basho.com/riak/latest/dev/references/http/ping/).
116 | 117 | Similarly, using `curl localhost:9000/riak/clusters/default/ping` 118 | proxies the request to the *cluster* (which also ends up going to that same 119 | node, since this cluster just has one node in it). 120 | 121 | In general, it is preferable to use the `clusters/` proxy endpoint (unless 122 | you specifically want to access an individual node's REST API). 123 | 124 | 2. **Explore** endpoints, at `/explore/`. Think of these as an enhancement to 125 | Riak's own HTTP API, filling in missing functionality. For example, 126 | the plain Riak API doesn't have a 'list bucket types' feature -- 127 | that can only be done via the `riak-admin` CLI. The Explorer endpoints enable 128 | this, at `/explore/clusters/$cluster/bucket_types`. 129 | 130 | 3. **Control** endpoints at `/control/`. These provide a REST API to cluster 131 | operations that are normally available only through the [Riak Admin 132 | CLI](http://docs.basho.com/riak/latest/ops/running/tools/riak-admin/) 133 | (for example, `riak-admin cluster join`). 134 | 135 | ### API Documentation 136 | 137 | For in-depth documentation of the available API endpoints, complete with 138 | sample responses, see [Riak Explorer API](http://basho-labs.github.io/riak_explorer/docs/api.html).
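Responses from these endpoints are plain JSON, so they are easy to script against. As a minimal sketch, the snippet below extracts node ids from the `{"nodes": [...]}` response shape shown in DEVELOPMENT.md; the payload is inlined here for illustration, where a live script would instead fetch `http://localhost:9000/explore/clusters/default/nodes`:

```python
import json

# Inlined example payload in the documented response shape;
# a live call would GET /explore/clusters/default/nodes instead.
payload = '{"nodes": [{"id": "riak@127.0.0.1"}]}'

data = json.loads(payload)
node_ids = [node["id"] for node in data["nodes"]]
print(node_ids)  # → ['riak@127.0.0.1']
```

An empty `node_ids` list is the same signal described above for the curl test: check that Riak is started and that the node id and Erlang cookie are correct.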
139 | 140 | You can also generate this API documentation locally, using 141 | [aglio](https://github.com/danielgtaylor/aglio): 142 | 143 | ``` 144 | # install the Aglio renderer for the API Blueprint markdown format 145 | npm install -g aglio 146 | 147 | # generate the documentation 148 | aglio -i API.apib.md --theme-full-width -o docs/api.html 149 | 150 | # open the generated docs in your browser 151 | open docs/api.html 152 | ``` 153 | 154 | The source code for these docs is in [API Blueprint Format](https://github.com/apiaryio/api-blueprint/blob/master/API%20Blueprint%20Specification.md) (see also the [sample API markup example](https://raw.githubusercontent.com/danielgtaylor/aglio/master/example.apib)), 155 | and is located in the [API.apib.md](https://github.com/basho-labs/riak_explorer/blob/gh-pages/API.apib.md) file on the `gh-pages` branch. 156 | 157 | **To add to the API documentation:** 158 | 159 | 1. Check out the `gh-pages` branch: 160 | 161 | ``` 162 | git checkout gh-pages 163 | ``` 164 | 165 | 2. Make changes to the source markup. 166 | 3. Generate the HTML using `aglio` (see above). 167 | 4. Commit both the source markup and the generated HTML. 168 | 5. `git push origin gh-pages` 169 | 170 | #### Full API Endpoint Listing 171 | 172 | Riak Explorer exposes a REST API (by default located at [http://localhost:9000/explore](http://localhost:9000/explore)).
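The routes in the listing below are templates: each `$`-prefixed segment gets substituted with a concrete value. As a rough illustration (the base URL and the `explore_keys_url` helper name are assumptions for this sketch, not part of the API), expanding the keys-listing route might look like:

```shell
#!/usr/bin/env bash
# Illustrative only: expand the /explore keys-listing route template.
# BASE and the example names below are assumptions, not fixed by the API.
BASE="http://localhost:9000"

explore_keys_url() {
  local cluster=$1 bucket_type=$2 bucket=$3
  echo "${BASE}/explore/clusters/${cluster}/bucket_types/${bucket_type}/buckets/${bucket}/keys"
}

explore_keys_url default default mybucket
```

The expanded URL can then be passed to `curl`, just like the proxy examples earlier.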
173 | 174 | Following are the available routes (these can also be obtained from `/explore/routes`): 175 | 176 | ``` 177 | /explore/nodes/$node/bucket_types/$bucket_type/buckets/$bucket/keys 178 | /explore/nodes/$node/bucket_types/$bucket_type/buckets/$bucket/refresh_keys/source/riak_kv 179 | /explore/clusters/$cluster/bucket_types/$bucket_type/buckets/$bucket/keys 180 | /explore/clusters/$cluster/bucket_types/$bucket_type/buckets/$bucket/refresh_keys/source/riak_kv 181 | /explore/nodes/$node/bucket_types/$bucket_type/buckets/$bucket/$resource (Resources: [jobs]) 182 | /explore/nodes/$node/bucket_types/$bucket_type/buckets/$bucket 183 | /explore/nodes/$node/bucket_types/$bucket_type/buckets 184 | /explore/nodes/$node/bucket_types/$bucket_type/refresh_buckets/source/riak_kv 185 | /explore/clusters/$cluster/bucket_types/$bucket_type/buckets/$bucket/$resource (Resources: [jobs]) 186 | /explore/clusters/$cluster/bucket_types/$bucket_type/buckets/$bucket 187 | /explore/clusters/$cluster/bucket_types/$bucket_type/buckets 188 | /explore/clusters/$cluster/bucket_types/$bucket_type/refresh_buckets/source/riak_kv 189 | /explore/nodes/$node/bucket_types/$bucket_type/$resource (Resources: [jobs]) 190 | /explore/nodes/$node/bucket_types/$bucket_type 191 | /explore/nodes/$node/bucket_types 192 | /explore/clusters/$cluster/bucket_types/$bucket_type/$resource (Resources: [jobs]) 193 | /explore/clusters/$cluster/bucket_types/$bucket_type 194 | /explore/clusters/$cluster/bucket_types 195 | /explore/nodes/$node 196 | /explore/nodes/$node/$resource (Resources: [config]) 197 | /explore/clusters/$cluster/nodes/$node 198 | /explore/clusters/$cluster/nodes/$node/$resource (Resources: [config]) 199 | /explore/clusters/$cluster/nodes 200 | /explore/nodes/$node/config/files/$file 201 | /explore/nodes/$node/config/files 202 | /explore/clusters/$cluster/nodes/$node/config/files/$file 203 | /explore/clusters/$cluster/nodes/$node/config/files 204 | /explore/nodes/$node/log/files/$file 205 | 
/explore/nodes/$node/log/files 206 | /explore/clusters/$cluster/nodes/$node/log/files/$file 207 | /explore/clusters/$cluster/nodes/$node/log/files 208 | /explore/clusters/$cluster 209 | /explore/clusters 210 | /explore 211 | /explore/$resource (Resources: [routes,props,jobs,ping]) 212 | /control/nodes/$node/repl-fullsync-stop/$arg1 213 | /control/nodes/$node/repl-fullsync-stop 214 | /control/nodes/$node/repl-fullsync-start/$arg1 215 | /control/nodes/$node/repl-fullsync-start 216 | /control/nodes/$node/repl-fullsync-disable/$arg1 217 | /control/nodes/$node/repl-fullsync-enable/$arg1 218 | /control/nodes/$node/repl-realtime-stop/$arg1 219 | /control/nodes/$node/repl-realtime-stop 220 | /control/nodes/$node/repl-realtime-start/$arg1 221 | /control/nodes/$node/repl-realtime-start 222 | /control/nodes/$node/repl-realtime-disable/$arg1 223 | /control/nodes/$node/repl-realtime-enable/$arg1 224 | /control/nodes/$node/repl-clusterstats-realtime 225 | /control/nodes/$node/repl-clusterstats-proxy_get 226 | /control/nodes/$node/repl-clusterstats-fullsync 227 | /control/nodes/$node/repl-clusterstats-fs_coordinate 228 | /control/nodes/$node/repl-clusterstats-cluster_mgr 229 | /control/nodes/$node/repl-clusterstats/$arg1/$arg2 230 | /control/nodes/$node/repl-clusterstats 231 | /control/nodes/$node/repl-connections 232 | /control/nodes/$node/repl-disconnect/$arg1 233 | /control/nodes/$node/repl-connect/$arg1/$arg2 234 | /control/nodes/$node/repl-clustername/$arg1 235 | /control/nodes/$node/repl-clustername 236 | /control/nodes/$node/aae-status 237 | /control/nodes/$node/transfers 238 | /control/nodes/$node/ringready 239 | /control/nodes/$node/status 240 | /control/nodes/$node/clear 241 | /control/nodes/$node/commit 242 | /control/nodes/$node/plan 243 | /control/nodes/$node/force-replace/$arg1/$arg2 244 | /control/nodes/$node/staged-replace/$arg1/$arg2 245 | /control/nodes/$node/replace/$arg1/$arg2 246 | /control/nodes/$node/force-remove/$arg1 247 | 
/control/nodes/$node/staged-leave/$arg1 248 | /control/nodes/$node/staged-leave 249 | /control/nodes/$node/staged-join/$arg1 250 | /control/nodes/$node/leave/$arg1 251 | /control/nodes/$node/join/$arg1 252 | /control/nodes/$node/repair 253 | /control/clusters/$cluster/repl-fullsync-stop/$arg1 254 | /control/clusters/$cluster/repl-fullsync-stop 255 | /control/clusters/$cluster/repl-fullsync-start/$arg1 256 | /control/clusters/$cluster/repl-fullsync-start 257 | /control/clusters/$cluster/repl-fullsync-disable/$arg1 258 | /control/clusters/$cluster/repl-fullsync-enable/$arg1 259 | /control/clusters/$cluster/repl-realtime-stop/$arg1 260 | /control/clusters/$cluster/repl-realtime-stop 261 | /control/clusters/$cluster/repl-realtime-start/$arg1 262 | /control/clusters/$cluster/repl-realtime-start 263 | /control/clusters/$cluster/repl-realtime-disable/$arg1 264 | /control/clusters/$cluster/repl-realtime-enable/$arg1 265 | /control/clusters/$cluster/repl-clusterstats-realtime 266 | /control/clusters/$cluster/repl-clusterstats-proxy_get 267 | /control/clusters/$cluster/repl-clusterstats-fullsync 268 | /control/clusters/$cluster/repl-clusterstats-fs_coordinate 269 | /control/clusters/$cluster/repl-clusterstats-cluster_mgr 270 | /control/clusters/$cluster/repl-clusterstats/$arg1/$arg2 271 | /control/clusters/$cluster/repl-clusterstats 272 | /control/clusters/$cluster/repl-connections 273 | /control/clusters/$cluster/repl-disconnect/$arg1 274 | /control/clusters/$cluster/repl-connect/$arg1/$arg2 275 | /control/clusters/$cluster/repl-clustername/$arg1 276 | /control/clusters/$cluster/repl-clustername 277 | /control/clusters/$cluster/aae-status 278 | /control/clusters/$cluster/transfers 279 | /control/clusters/$cluster/ringready 280 | /control/clusters/$cluster/status 281 | /control/clusters/$cluster/clear 282 | /control/clusters/$cluster/commit 283 | /control/clusters/$cluster/plan 284 | /control/clusters/$cluster/force-replace/$arg1/$arg2 285 | 
/control/clusters/$cluster/staged-replace/$arg1/$arg2 286 | /control/clusters/$cluster/replace/$arg1/$arg2 287 | /control/clusters/$cluster/force-remove/$arg1 288 | /control/clusters/$cluster/staged-leave/$arg1 289 | /control/clusters/$cluster/staged-leave 290 | /control/clusters/$cluster/staged-join/$arg1 291 | /control/clusters/$cluster/leave/$arg1 292 | /control/clusters/$cluster/join/$arg1 293 | /control/clusters/$cluster/repair 294 | /riak/nodes/$node/$* (Riak Direct HTTP Proxy) 295 | /riak/clusters/$cluster/$* (Riak Direct HTTP Proxy) 296 | /$* (Static Endpoint) 297 | ``` 298 | 299 | Explanation: 300 | 301 | * `$cluster`: Specifying `default` will use the cluster that this riak_explorer is connected to. 302 | * `$node`: Example: `riak@127.0.0.1` 303 | * `$bucket_type`: Example: `default` 304 | * `$bucket`: Example: `mybucket` 305 | * `$key`: Example: `mykey` 306 | * `$schema`: Example: `_yz_default` 307 | * `$index`: Example: `myindex` 308 | * `$*`: Wildcard with deep paths. Example: `assets/ember-riak-explorer.js` for the static route, or `ping` for the riak_proxy route 309 | * `$resource`: A list of valid `resources` for a given module can be found in `explore.resources` 310 | 311 | ## Development / Contributing 312 | 313 | For developer installation instructions and environment setup, visit 314 | [DEVELOPMENT.md](DEVELOPMENT.md). 315 | 316 | * Whether your contribution is for a bug fix or a feature request, **create an [Issue](https://github.com/basho/riak_explorer/issues)** and let us know what you are thinking. 317 | * **For bugs**, if you have already found a fix, feel free to submit a Pull Request referencing the Issue you created. 318 | * **For feature requests**, we want to improve upon the library incrementally, which means small changes at a time. In order to ensure your PR can be reviewed in a timely manner, please keep PRs small, e.g. <10 files and <500 lines changed.
If you think this is unrealistic, then mention that within the Issue and we can discuss it. 319 | 320 | Once you're ready to contribute code back to this repo, start with these steps: 321 | 322 | * Fork the appropriate sub-projects that are affected by your change 323 | * Create a topic branch for your change and check out that branch 324 | `git checkout -b some-topic-branch` 325 | * Make your changes and run the test suite if one is provided (see below) 326 | * Commit your changes and push them to your fork 327 | * Open a pull request for the appropriate project 328 | * Contributors will review your pull request, suggest changes, offer feedback, and merge it when it's ready 329 | * To report a bug or issue, please open a new issue against this repository 330 | 331 | You can [read the full guidelines for bug reporting and code contributions](http://docs.basho.com/riak/latest/community/bugs/) on the Riak Docs. 332 | 333 | And **thank you!** Your contribution is incredibly important to us. It'd be great for you to add it to a current or past community release note [here](https://github.com/basho-labs/the-riak-community/tree/master/release-notes). 334 | 335 | ### Seeding Data (For developers and testers) 336 | 337 | Some suggestions for creating sample data to try out the Explorer GUI: 338 | 339 | 1. Set up a couple of clusters in `riak_explorer.conf`. Have one or more with 340 | `development_mode = on`, and one or more with it set to `off` (meaning, in 341 | production mode). 342 | 343 | 2. Enable Search in Riak's config file (`riak.conf`). Set up a [Search 344 | Index](http://docs.basho.com/riak/latest/dev/using/search/#Simple-Setup).
345 | For example, to create a search index named `test-users-idx` that uses 346 | the default schema, do a PUT from the command line (assuming your Riak 347 | node is available on `localhost`, using the default HTTP port `8098`): 348 | 349 | ``` 350 | curl -XPUT http://localhost:8098/search/index/test-users-idx 351 | ``` 352 | 353 | 3. Set up a `test-users` Bucket Type, and associate it with the `test-users-idx` Search 354 | index created above: 355 | 356 | ``` 357 | riak-admin bucket-type create test-users '{"props":{"search_index":"test-users-idx"}}' 358 | riak-admin bucket-type activate test-users 359 | ``` 360 | 361 | 4. Create and activate a Bucket Type for each main [Riak Data 362 | Type](http://docs.basho.com/riak/latest/dev/using/data-types/): 363 | 364 | ``` 365 | riak-admin bucket-type create maps '{"props":{"datatype":"map"}}' 366 | riak-admin bucket-type activate maps 367 | riak-admin bucket-type create sets '{"props":{"datatype":"set"}}' 368 | riak-admin bucket-type activate sets 369 | riak-admin bucket-type create counters '{"props":{"datatype":"counter"}}' 370 | riak-admin bucket-type activate counters 371 | ``` 372 | 373 | 5. Create and activate a `test-carts` Bucket Type, with [Siblings](http://docs.basho.com/riak/latest/dev/using/conflict-resolution/#Siblings) 374 | enabled: 375 | 376 | ``` 377 | riak-admin bucket-type create test-carts '{"props":{"allow_mult":true}}'
riak-admin bucket-type activate test-carts 378 | ``` 379 | 380 | 6. Insert some sample Counter type objects, say to the `test-page-loads` bucket: 381 | 382 | ``` 383 | curl localhost:8098/types/counters/buckets/test-page-loads/datatypes/page123 -XPOST \ 384 | -H "Content-Type: application/json" \ 385 | -d '{"increment": 5}' 386 | 387 | curl localhost:8098/types/counters/buckets/test-page-loads/datatypes/page456 -XPOST \ 388 | -H "Content-Type: application/json" \ 389 | -d '{"increment": 1}' 390 | ``` 391 | 392 | 7. Insert some sample Set type objects, say to the `test-cities-visited` bucket: 393 | 394 | ``` 395 | curl localhost:8098/types/sets/buckets/test-cities-visited/datatypes/user123 -XPOST \ 396 | -H "Content-Type: application/json" \ 397 | -d '{"add_all":["Toronto", "Montreal", "Quebec", "New York City"]}' 398 | 399 | curl localhost:8098/types/sets/buckets/test-cities-visited/datatypes/user456 -XPOST \ 400 | -H "Content-Type: application/json" \ 401 | -d '{"add_all":["Washington D.C.", "Los Angeles", "Las Vegas"]}' 402 | ``` 403 | 404 | 8. Insert some sample Map type objects, say to the `test-tweets` bucket: 405 | 406 | ``` 407 | curl localhost:8098/types/maps/buckets/test-tweets/datatypes/user123 -XPOST \ 408 | -H "Content-Type: application/json" \ 409 | -d '{"update":{ "favorited_flag": "disable", "id_str_register": "240859602684612608", "favourites_count_counter": 24, "entities_map":{ "update": { "urls_set":{ "add_all": ["url1", "url2", "url3"]}} } }}' 410 | 411 | curl localhost:8098/types/maps/buckets/test-tweets/datatypes/user456 -XPOST \ 412 | -H "Content-Type: application/json" \ 413 | -d '{"update":{ "favorited_flag": "enable", "id_str_register": "240859602699912715", "favourites_count_counter": 1, "entities_map":{ "update": { "urls_set":{ "add_all": ["url4", "url5", "url6"]}} } }}' 414 | ``` 415 | 416 | 9. Insert some objects with sample custom metadata headers and Secondary Index headers: 417 | 418 | ``` 419 | curl localhost:8098/types/default/buckets/user-accounts/keys/user123 -XPUT \ 420 | -H 'X-Riak-Meta-date-created: 2015-01-01' \ 421 | -H 'X-Riak-Meta-last-accessed: 2015-09-01' \ 422 | -H 'X-Riak-Index-email_bin: user@gmail.com' \ 423 | -H 'X-Riak-Index-country_bin: usa' \ 424 | -H 'Content-Type: application/json' \ 425 | -d '{"name":"User One", "id":"user123"}' 426 | ``` 427 | 428 | ### Related Projects 429 | - [riak-explorer-gui](https://github.com/basho-labs/riak-explorer-gui) - the 430 | front-end Ember.js GUI code to go along with the Explorer API.
431 | - [riak_control](https://github.com/basho/riak_control) - legacy official Riak 432 | GUI 433 | - [riak_cs_control](https://github.com/basho/riak_cs_control) - legacy official 434 | Riak S2 (Riak CS) GUI 435 | - [rekon](https://github.com/basho/rekon) - legacy unofficial 436 | JavaScript Riak GUI (old bucket / object explorer). 437 | -------------------------------------------------------------------------------- /admin/build-release-task.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | if [ "$#" -ne 1 ]; then 4 | >&2 echo "Usage: $0 OS_NAME" 5 | exit 1 6 | fi 7 | 8 | REX_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" 9 | 10 | case $1 in 11 | "centos-7") 12 | ERLANG_TARBALL=https://basholabs.artifactoryonline.com/basholabs/build/centos-7/erlang/OTP_R16B02_basho10/erlang-OTP_R16B02_basho10.tgz 13 | export OS_FAMILY=centos 14 | export OS_VERSION=7 15 | ;; 16 | 17 | "debian-8") 18 | ERLANG_TARBALL=https://basholabs.artifactoryonline.com/basholabs/build/debian-8/erlang/OTP_R16B02_basho10/erlang-OTP_R16B02_basho10.tgz 19 | export OS_FAMILY=debian 20 | export OS_VERSION=8 21 | ;; 22 | 23 | "ubuntu-14.04") 24 | ERLANG_TARBALL=https://basholabs.artifactoryonline.com/basholabs/build/ubuntu-14.04/erlang/OTP_R16B02_basho10/erlang-OTP_R16B02_basho10.tgz 25 | export OS_FAMILY=ubuntu 26 | export OS_VERSION=14.04 27 | ;; 28 | 29 | "osx") 30 | export OS_FAMILY=osx 31 | export OS_VERSION=10.11 32 | ;; 33 | 34 | *) 35 | >&2 echo "Unsupported OS" 36 | exit 1 37 | ;; 38 | esac 39 | 40 | if [ "osx" != "${OS_FAMILY}" ]; then 41 | if [ -z "${ERLANG_TARBALL}" ]; then 42 | >&2 echo "Shouldn't get here, ERLANG_TARBALL variable should be set" 43 | exit 1 44 | fi 45 | 46 | # Download and install Erlang 47 | mkdir -p /usr/lib/erlang 48 | cd /usr/lib/erlang 49 | curl -O -sSL "${ERLANG_TARBALL}" 50 | tar xf erlang-OTP_R16B02_basho10.tgz 51 | 52 | # Add Erlang to the PATH 53 | export
PATH=$PATH:/usr/lib/erlang/bin 54 | else 55 | # Assumes Erlang is installed via kerl in ~/erlang 56 | . ~/erlang/R16B02/activate 57 | fi 58 | 59 | # Build and upload the release artifacts 60 | cd "${REX_DIR}" 61 | make tarball 62 | make reltarball 63 | make tarball-standalone 64 | make sync 65 | make relsync 66 | make sync-standalone 67 | -------------------------------------------------------------------------------- /admin/build-release.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | # Capture the output of all of the commands 4 | OUTPUT_LOG=/tmp/riak_explorer_build.out 5 | OUTPUT_PIPE=/tmp/riak_explorer_build-output.pipe 6 | 7 | if [ ! -e ${OUTPUT_PIPE} ]; then 8 | mkfifo ${OUTPUT_PIPE} 9 | fi 10 | 11 | if [ -e ${OUTPUT_LOG} ]; then 12 | rm ${OUTPUT_LOG} 13 | fi 14 | 15 | exec 3>&1 4>&2 16 | tee ${OUTPUT_LOG} < ${OUTPUT_PIPE} >&3 & 17 | tpid=$! 18 | exec > ${OUTPUT_PIPE} 2>&1 19 | 20 | REX_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )" 21 | REX_ADMIN_SCRIPTS_DIR="${REX_DIR}"/admin 22 | 23 | # Make sure the OAuth token exists 24 | if [ !
-e "${REX_DIR}"/oauth.txt ]; then 25 | >&2 echo "Github OAUTH token required in ${REX_DIR}/oauth.txt" 26 | exit 1 27 | fi 28 | 29 | declare -a OS_NAMES 30 | OS_NAMES=( centos-7 debian-8 ubuntu-14.04 ) 31 | 32 | # Make sure latest image is pulled 33 | for os in "${OS_NAMES[@]}"; do 34 | docker pull basho/build-essential:"${os}" | grep -E "^Status" 35 | done 36 | 37 | # Run the build in container for each OS 38 | for os in "${OS_NAMES[@]}"; do 39 | docker run --rm -it \ 40 | -v ${REX_DIR}:/riak_explorer \ 41 | basho/build-essential:"${os}" \ 42 | /riak_explorer/admin/build-release-task.sh "${os}" 43 | done 44 | 45 | # Also run the build locally for macOS 46 | if [ "$(uname)" == "Darwin" ]; then 47 | ${REX_ADMIN_SCRIPTS_DIR}/build-release-task.sh osx 48 | fi 49 | 50 | exec 1>&3 3>&- 2>&4 4>&- 51 | wait ${tpid} 52 | 53 | rm ${OUTPUT_PIPE} 54 | -------------------------------------------------------------------------------- /apps/riak_explorer/ebin: -------------------------------------------------------------------------------- 1 | ../../ebin -------------------------------------------------------------------------------- /apps/riak_explorer/include: -------------------------------------------------------------------------------- 1 | ../../include -------------------------------------------------------------------------------- /apps/riak_explorer/src: -------------------------------------------------------------------------------- 1 | ../../src -------------------------------------------------------------------------------- /docs/.gitkeep: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/basho-labs/riak_explorer/3d06ca2b14a57ed2850e56efc280d7e5b5f4bbbc/docs/.gitkeep -------------------------------------------------------------------------------- /docs/RELEASES.md: -------------------------------------------------------------------------------- 1 | # Riak Explorer Releases 2 | 3 | ## Requirements 4 | 5 | 
1. macOS machine 6 | 2. Either: 7 | a. Docker on macOS, or 8 | b. An Ubuntu 14.04 machine, a 9 | CentOS 7 machine, and a 10 | Debian 8 machine 11 | 3. GitHub OAuth token 12 | 13 | ## Tag 14 | 15 | 1. Visit https://github.com/basho-labs/riak_explorer/releases and click `Draft a new release`. 16 | 2. Create a new tag and release with the naming convention X.X.X (e.g. `1.1.3`) 17 | 18 | ## Clone 19 | 20 | SSH into each of the three machines listed above and run the following: 21 | 22 | ``` 23 | git clone https://github.com/basho-labs/riak_explorer.git 24 | cd riak_explorer 25 | echo "YOUR_OAUTH_TOKEN" > oauth.txt 26 | ``` 27 | 28 | If you are working from a local copy, make sure you have run the following after 29 | creating the release on GitHub to get the tag: 30 | 31 | ``` 32 | git pull --tags --all 33 | ``` 34 | 35 | ## Build and Upload 36 | 37 | We have release automation scripts in the [admin](../admin) directory. If you 38 | run the main `build-release.sh` script in that directory, it will build and 39 | upload the release for each of the following platforms: 40 | 41 | * OS X 42 | * CentOS 7 43 | * Debian 8 44 | * Ubuntu 14.04 45 | 46 | The Linux releases will be done in Docker, using 47 | the [basho/build-essential](https://hub.docker.com/r/basho/build-essential/) 48 | Docker image. 49 | 50 | This assumes Erlang is installed on OS X 51 | via [kerl](https://github.com/kerl/kerl) and is found in `~/erlang/R16B02`.
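Because the OS X leg of the build sources the kerl activation script directly, it can be worth checking that the expected install exists before kicking anything off. Here is a minimal pre-flight sketch; the `~/erlang/R16B02` path is the one this document assumes, and `has_kerl_erlang` is a hypothetical helper, not part of the release scripts:

```shell
#!/usr/bin/env bash
# Sketch: verify a kerl-managed Erlang install before starting the build.
# Assumption: kerl places an `activate` script at the root of each build dir.
has_kerl_erlang() {
  [ -f "$1/activate" ]
}

if has_kerl_erlang "$HOME/erlang/R16B02"; then
  echo "kerl Erlang found; OS X build can proceed"
else
  echo "no kerl Erlang at ~/erlang/R16B02; only the Docker builds will work" >&2
fi
```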
52 | 53 | Kick off the build by doing the following: 54 | 55 | ``` 56 | ./admin/build-release.sh 57 | ``` 58 | 59 | If you wish to do this by hand on each machine, do the following: 60 | 61 | * OSX 62 | 63 | ``` 64 | OS_FAMILY=osx OS_VERSION=10.11 make tarball 65 | OS_FAMILY=osx OS_VERSION=10.11 make reltarball 66 | OS_FAMILY=osx OS_VERSION=10.11 make tarball-standalone 67 | OS_FAMILY=osx OS_VERSION=10.11 make sync 68 | OS_FAMILY=osx OS_VERSION=10.11 make relsync 69 | OS_FAMILY=osx OS_VERSION=10.11 make sync-standalone 70 | ``` 71 | 72 | * Ubuntu 73 | 74 | ``` 75 | OS_FAMILY=ubuntu OS_VERSION=14.04 make tarball 76 | OS_FAMILY=ubuntu OS_VERSION=14.04 make reltarball 77 | OS_FAMILY=ubuntu OS_VERSION=14.04 make tarball-standalone 78 | OS_FAMILY=ubuntu OS_VERSION=14.04 make sync 79 | OS_FAMILY=ubuntu OS_VERSION=14.04 make relsync 80 | OS_FAMILY=ubuntu OS_VERSION=14.04 make sync-standalone 81 | ``` 82 | 83 | * CentOS 84 | 85 | ``` 86 | OS_FAMILY=centos OS_VERSION=7 make tarball 87 | OS_FAMILY=centos OS_VERSION=7 make reltarball 88 | OS_FAMILY=centos OS_VERSION=7 make tarball-standalone 89 | OS_FAMILY=centos OS_VERSION=7 make sync 90 | OS_FAMILY=centos OS_VERSION=7 make relsync 91 | OS_FAMILY=centos OS_VERSION=7 make sync-standalone 92 | ``` 93 | 94 | * Debian 95 | 96 | ``` 97 | OS_FAMILY=debian OS_VERSION=8 make tarball 98 | OS_FAMILY=debian OS_VERSION=8 make reltarball 99 | OS_FAMILY=debian OS_VERSION=8 make tarball-standalone 100 | OS_FAMILY=debian OS_VERSION=8 make sync 101 | OS_FAMILY=debian OS_VERSION=8 make relsync 102 | OS_FAMILY=debian OS_VERSION=8 make sync-standalone 103 | ``` 104 | 105 | ## Verify 106 | 107 | 1. Navigate to https://github.com/basho-labs/riak_explorer/releases/tag/YOUR_TAG 108 | 2. Verify that all 3 sets of the following files are listed as downloads for the release: 109 | 1. riak_explorer-X.X.X-OS_FAMILY-OS_VERSION-tar.gz 110 | 2. riak_explorer-X.X.X-OS_FAMILY-OS_VERSION-tar.gz.sha 111 | 3. 
riak_explorer-X.X.X.patch-OS_FAMILY-OS_VERSION-tar.gz 112 | 4. riak_explorer-X.X.X.patch-OS_FAMILY-OS_VERSION-tar.gz.sha 113 | 5. riak_explorer-X.X.X.relpatch-OS_FAMILY-OS_VERSION-tar.gz 114 | 6. riak_explorer-X.X.X.relpatch-OS_FAMILY-OS_VERSION-tar.gz.sha 115 | -------------------------------------------------------------------------------- /include/re_wm.hrl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -define(ACCEPT(T), {T, accept_content}). 22 | -define(PROVIDE(T), {T, provide_content}). 23 | -define(JSON_TYPE, "application/json"). 24 | -define(TEXT_TYPE, "text/plain"). 25 | -define(OCTET_TYPE, "application/octet-stream"). 26 | -define(FORM_TYPE, "application/x-www-form-urlencoded"). 27 | -define(PROVIDE_TEXT, [{?TEXT_TYPE, provide_text_content}]). 28 | -define(ACCEPT_TEXT, [?ACCEPT(?FORM_TYPE), 29 | ?ACCEPT(?OCTET_TYPE), 30 | ?ACCEPT(?TEXT_TYPE), 31 | ?ACCEPT(?JSON_TYPE)]).
32 | 33 | -record(route, {base = [] :: [string()], 34 | path :: [string() | atom()], 35 | available = true :: {module(), atom()} | boolean(), 36 | methods = ['GET'] :: [atom()], 37 | accepts = [] :: [{string(), atom()}], 38 | provides = [?PROVIDE(?JSON_TYPE)] :: [{string(), atom()}] | 39 | {module(), atom()}, 40 | exists = true :: {module(), atom()} | boolean(), 41 | content = [{success, true}] :: {module(), atom()} | 42 | nonempty_list(), 43 | accept = ?ACCEPT_TEXT :: {module(), atom()} | undefined, 44 | delete :: {module(), atom()} | undefined, 45 | post_create = false :: boolean(), 46 | post_path :: {module(), atom()} | undefined, 47 | last_modified :: {module(), atom()} | undefined}). 48 | 49 | -type route() :: #route{}. 50 | -------------------------------------------------------------------------------- /pkg.vars.config: -------------------------------------------------------------------------------- 1 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 2 | %% ex: ts=4 sw=4 et 3 | 4 | %% 5 | %% Packaging 6 | %% 7 | {package_name, "riak_explorer"}. 8 | {package_install_name, "riak_explorer"}. 9 | {package_install_user, "riak_explorer"}. 10 | {package_install_group, "riak"}. 11 | {package_install_user_desc, "riak_explorer user"}. 12 | {package_shortdesc, "Riak development tool and admin GUI"}. 13 | {package_desc, "Riak development tool and admin GUI"}. 14 | {package_commands, {list, [[{name, "riak_explorer"}]]}}. 15 | {package_patch_dir, "basho-patches"}. 16 | {bin_or_sbin, "sbin"}. 17 | {license_type, "Apache 2"}. 18 | {copyright, "2015 Basho Technologies, Inc"}. 19 | {vendor_name, "Basho Technologies, Inc"}. 20 | {vendor_url, "http://basho.com"}. 21 | {vendor_contact_name, "Basho Package Maintainer"}. 22 | {vendor_contact_email, "packaging@basho.com"}. 
23 | {license_full_text, 24 | "This file is provided to you under the Apache License,\n" 25 | "Version 2.0 (the \"License\"); you may not use this file\n" 26 | "except in compliance with the License. You may obtain\n" 27 | "a copy of the License at\n" 28 | "\n" 29 | " http://www.apache.org/licenses/LICENSE-2.0\n" 30 | "\n" 31 | "Unless required by applicable law or agreed to in writing,\n" 32 | "software distributed under the License is distributed on an\n" 33 | "\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n" 34 | "KIND, either express or implied. See the License for the\n" 35 | "specific language governing permissions and limitations\n" 36 | "under the License."}. 37 | {solaris_pkgname, "BASHOriak_explorer"}. 38 | -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/assets/images/ajax-loading-big-a5291414ac90a6ef191134f917631dec.gif: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/basho-labs/riak_explorer/3d06ca2b14a57ed2850e56efc280d7e5b5f4bbbc/priv/ember_riak_explorer/dist/assets/images/ajax-loading-big-a5291414ac90a6ef191134f917631dec.gif -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/assets/images/riak-14a5288eb5ad20a7958cda159f71e7dc.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/basho-labs/riak_explorer/3d06ca2b14a57ed2850e56efc280d7e5b5f4bbbc/priv/ember_riak_explorer/dist/assets/images/riak-14a5288eb5ad20a7958cda159f71e7dc.png -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/assets/images/sample_logo-eb3d11f4175609d291de514c8ab27f6c.png: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/basho-labs/riak_explorer/3d06ca2b14a57ed2850e56efc280d7e5b5f4bbbc/priv/ember_riak_explorer/dist/assets/images/sample_logo-eb3d11f4175609d291de514c8ab27f6c.png -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/assets/vendor-ce9df154529b9de706ffbbd59803695f.css: -------------------------------------------------------------------------------- 1 | .cm-tab-wrap-hack:after,.tooltip:after{content:''}.loading-slider{position:fixed;overflow:hidden;top:0;left:0;height:2px;width:100%;z-index:999}.loading-slider.expanding{text-align:center}.loading-slider span{position:inherit;height:2px;background-color:red}/*! tooltip 0.1.0 - 18th Dec 2013 | https://github.com/darsain/tooltip */.tooltip{position:absolute;top:10px;max-width:200px;color:#fff;background:#3a3c47;text-shadow:-1px -1px 0 rgba(0,0,0,.2);-webkit-touch-callout:none;-webkit-user-select:none;user-select:none;pointer-events:none;font-size:14px;padding:6px 10px;border-radius:3px}.tooltip:after{position:absolute;width:10px;height:10px;margin:-5px;background:inherit;-webkit-transform:rotate(45deg);-ms-transform:rotate(45deg);transform:rotate(45deg)}.tooltip.top-left:after,.tooltip.top-right:after,.tooltip.top:after{bottom:0}.tooltip.bottom-left:after,.tooltip.bottom-right:after,.tooltip.bottom:after{top:0}.tooltip.bottom:after,.tooltip.top:after{left:50%}.tooltip.bottom-left:after,.tooltip.top-left:after{right:15px}.tooltip.bottom-right:after,.tooltip.top-right:after{left:15px}.tooltip.left-bottom:after,.tooltip.left-top:after,.tooltip.left:after{right:0}.tooltip.right-bottom:after,.tooltip.right-top:after,.tooltip.right:after{left:0}.tooltip.left:after,.tooltip.right:after{top:50%}.tooltip.left-top:after,.tooltip.right-top:after{bottom:15px}.tooltip.left-bottom:after,.tooltip.right-bottom:after{top:15px}.tooltip.fade{opacity:0;transition:opacity .2s 
ease-out}.tooltip.fade.in{opacity:1;transition-duration:.1s}.tooltip.slide{opacity:0;transition:-webkit-transform .2s ease-out;transition:transform .2s ease-out;transition-property:-webkit-transform,opacity;transition-property:transform,opacity}.tooltip.slide.top,.tooltip.slide.top-left,.tooltip.slide.top-right{-webkit-transform:translateY(15px);transform:translateY(15px)}.tooltip.slide.bottom,.tooltip.slide.bottom-left,.tooltip.slide.bottom-right{-webkit-transform:translateY(-15px);transform:translateY(-15px)}.tooltip.slide.left,.tooltip.slide.left-bottom,.tooltip.slide.left-top{-webkit-transform:translateX(15px);transform:translateX(15px)}.tooltip.slide.right,.tooltip.slide.right-bottom,.tooltip.slide.right-top{-webkit-transform:translateX(-15px);transform:translateX(-15px)}.tooltip.slide.in{opacity:1;-webkit-transform:none;transform:none;transition-duration:.1s}.tooltip.grow{-webkit-transform:scale(0);transform:scale(0);transition:-webkit-transform .2s ease-out;transition:transform .2s ease-out}.tooltip.grow.top{-webkit-transform:translateY(60%) scale(0);transform:translateY(60%) scale(0)}.tooltip.grow.top-left{-webkit-transform:translateY(60%) translateX(40%) scale(0);transform:translateY(60%) translateX(40%) scale(0)}.tooltip.grow.top-right{-webkit-transform:translateY(60%) translateX(-40%) scale(0);transform:translateY(60%) translateX(-40%) scale(0)}.tooltip.grow.bottom{-webkit-transform:translateY(-60%) scale(0);transform:translateY(-60%) scale(0)}.tooltip.grow.bottom-left{-webkit-transform:translateY(-60%) translateX(40%) scale(0);transform:translateY(-60%) translateX(40%) scale(0)}.tooltip.grow.bottom-right{-webkit-transform:translateY(-60%) translateX(-40%) scale(0);transform:translateY(-60%) translateX(-40%) scale(0)}.tooltip.grow.left{-webkit-transform:translateX(53%) scale(0);transform:translateX(53%) scale(0)}.tooltip.grow.left-top{-webkit-transform:translateX(53%) translateY(40%) scale(0);transform:translateX(53%) translateY(40%) 
scale(0)}.tooltip.grow.left-bottom{-webkit-transform:translateX(53%) translateY(-40%) scale(0);transform:translateX(53%) translateY(-40%) scale(0)}.tooltip.grow.right{-webkit-transform:translateX(-53%) scale(0);transform:translateX(-53%) scale(0)}.tooltip.grow.right-top{-webkit-transform:translateX(-53%) translateY(40%) scale(0);transform:translateX(-53%) translateY(40%) scale(0)}.tooltip.grow.right-bottom{-webkit-transform:translateX(-53%) translateY(-40%) scale(0);transform:translateX(-53%) translateY(-40%) scale(0)}.tooltip.grow.in{-webkit-transform:none;transform:none;transition-duration:.1s}.tooltip.light{color:#3a3c47;background:#fff;text-shadow:none}.tooltip.success{background:#8dc572}.tooltip.warning{background:#ddc12e}.tooltip.error{background:#be6464}.CodeMirror{font-family:monospace;height:300px;color:#000}.CodeMirror-lines{padding:4px 0}.CodeMirror pre{padding:0 4px}.CodeMirror-gutter-filler,.CodeMirror-scrollbar-filler{background-color:#fff}.CodeMirror-gutters{border-right:1px solid #ddd;background-color:#f7f7f7;white-space:nowrap}.CodeMirror-linenumber{padding:0 3px 0 5px;min-width:20px;text-align:right;color:#999;white-space:nowrap}.CodeMirror-guttermarker{color:#000}.CodeMirror-guttermarker-subtle{color:#999}.CodeMirror-cursor{border-left:1px solid #000;border-right:none;width:0}.CodeMirror div.CodeMirror-secondarycursor{border-left:1px solid silver}.cm-fat-cursor .CodeMirror-cursor{width:auto;border:0;background:#7e7}.cm-fat-cursor div.CodeMirror-cursors{z-index:1}.cm-animate-fat-cursor{width:auto;border:0;-webkit-animation:blink 1.06s steps(1) infinite;-moz-animation:blink 1.06s steps(1) infinite;animation:blink 1.06s steps(1) infinite;background-color:#7e7}@-moz-keyframes blink{50%{background-color:transparent}}@-webkit-keyframes blink{50%{background-color:transparent}}@keyframes blink{50%{background-color:transparent}}.cm-tab{display:inline-block;text-decoration:inherit}.CodeMirror-ruler{border-left:1px solid #ccc;position:absolute}.cm-s-default 
.cm-header{color:#00f}.cm-s-default .cm-quote{color:#090}.cm-negative{color:#d44}.cm-positive{color:#292}.cm-header,.cm-strong{font-weight:700}.cm-em{font-style:italic}.cm-link{text-decoration:underline}.cm-strikethrough{text-decoration:line-through}.cm-s-default .cm-keyword{color:#708}.cm-s-default .cm-atom{color:#219}.cm-s-default .cm-number{color:#164}.cm-s-default .cm-def{color:#00f}.cm-s-default .cm-variable-2{color:#05a}.cm-s-default .cm-variable-3{color:#085}.cm-s-default .cm-comment{color:#a50}.cm-s-default .cm-string{color:#a11}.cm-s-default .cm-string-2{color:#f50}.cm-s-default .cm-meta,.cm-s-default .cm-qualifier{color:#555}.cm-s-default .cm-builtin{color:#30a}.cm-s-default .cm-bracket{color:#997}.cm-s-default .cm-tag{color:#170}.cm-s-default .cm-attribute{color:#00c}.cm-s-default .cm-hr{color:#999}.cm-s-default .cm-link{color:#00c}.cm-invalidchar,.cm-s-default .cm-error{color:red}.CodeMirror-composing{border-bottom:2px solid}div.CodeMirror span.CodeMirror-matchingbracket{color:#0f0}div.CodeMirror span.CodeMirror-nonmatchingbracket{color:#f22}.CodeMirror-matchingtag{background:rgba(255,150,0,.3)}.CodeMirror-activeline-background{background:#e8f2ff}.CodeMirror{position:relative;overflow:hidden;background:#fff}.CodeMirror-scroll{overflow:scroll!important;margin-bottom:-30px;margin-right:-30px;padding-bottom:30px;height:100%;outline:0;position:relative}.CodeMirror-sizer{position:relative;border-right:30px solid 
transparent}.CodeMirror-gutter-filler,.CodeMirror-hscrollbar,.CodeMirror-scrollbar-filler,.CodeMirror-vscrollbar{position:absolute;z-index:6;display:none}.CodeMirror-vscrollbar{right:0;top:0;overflow-x:hidden;overflow-y:scroll}.CodeMirror-hscrollbar{bottom:0;left:0;overflow-y:hidden;overflow-x:scroll}.CodeMirror-scrollbar-filler{right:0;bottom:0}.CodeMirror-gutter-filler{left:0;bottom:0}.CodeMirror-gutters{position:absolute;left:0;top:0;min-height:100%;z-index:3}.CodeMirror-gutter{white-space:normal;height:100%;display:inline-block;vertical-align:top;margin-bottom:-30px}.CodeMirror-gutter-wrapper{position:absolute;z-index:4;background:0 0!important;border:none!important;-webkit-user-select:none;-moz-user-select:none;user-select:none}.CodeMirror-gutter-background{position:absolute;top:0;bottom:0;z-index:4}.CodeMirror-gutter-elt{position:absolute;cursor:default;z-index:4}.CodeMirror-lines{cursor:text;min-height:1px}.CodeMirror pre{-moz-border-radius:0;-webkit-border-radius:0;border-radius:0;border-width:0;background:0 0;font-family:inherit;font-size:inherit;margin:0;white-space:pre;word-wrap:normal;line-height:inherit;color:inherit;z-index:2;position:relative;overflow:visible;-webkit-tap-highlight-color:transparent;-webkit-font-variant-ligatures:none;font-variant-ligatures:none}.CodeMirror-wrap pre{word-wrap:break-word;white-space:pre-wrap;word-break:normal}.CodeMirror-linebackground{position:absolute;left:0;right:0;top:0;bottom:0;z-index:0}.CodeMirror-linewidget{position:relative;z-index:2;overflow:auto}.CodeMirror-code{outline:0}.CodeMirror-gutter,.CodeMirror-gutters,.CodeMirror-linenumber,.CodeMirror-scroll,.CodeMirror-sizer{-moz-box-sizing:content-box;box-sizing:content-box}.CodeMirror-measure{position:absolute;width:100%;height:0;overflow:hidden;visibility:hidden}.CodeMirror-cursor{position:absolute}.CodeMirror-measure pre{position:static}div.CodeMirror-cursors{visibility:hidden;position:relative;z-index:3}.CodeMirror-focused 
div.CodeMirror-cursors,div.CodeMirror-dragcursors{visibility:visible}.CodeMirror-selected{background:#d9d9d9}.CodeMirror-focused .CodeMirror-selected{background:#d7d4f0}.CodeMirror-crosshair{cursor:crosshair}.CodeMirror-line::selection,.CodeMirror-line>span::selection,.CodeMirror-line>span>span::selection{background:#d7d4f0}.CodeMirror-line::-moz-selection,.CodeMirror-line>span::-moz-selection,.CodeMirror-line>span>span::-moz-selection{background:#d7d4f0}.cm-searching{background:#ffa;background:rgba(255,255,0,.4)}.cm-force-border{padding-right:.1px}@media print{.CodeMirror div.CodeMirror-cursors{visibility:hidden}}span.CodeMirror-selectedtext{background:0 0}.cm-s-material{background-color:#263238;color:rgba(233,237,237,1)}.cm-s-material .CodeMirror-gutters{background:#263238;color:#537f7e;border:none}.cm-s-material .CodeMirror-guttermarker,.cm-s-material .CodeMirror-guttermarker-subtle,.cm-s-material .CodeMirror-linenumber{color:#537f7e}.cm-s-material .CodeMirror-cursor{border-left:1px solid #f8f8f0}.cm-s-material div.CodeMirror-selected{background:rgba(255,255,255,.15)}.cm-s-material.CodeMirror-focused div.CodeMirror-selected{background:rgba(255,255,255,.1)}.cm-s-material .CodeMirror-line::selection,.cm-s-material .CodeMirror-line>span::selection,.cm-s-material .CodeMirror-line>span>span::selection{background:rgba(255,255,255,.1)}.cm-s-material .CodeMirror-line::-moz-selection,.cm-s-material .CodeMirror-line>span::-moz-selection,.cm-s-material .CodeMirror-line>span>span::-moz-selection{background:rgba(255,255,255,.1)}.cm-s-material .CodeMirror-activeline-background{background:rgba(0,0,0,0)}.cm-s-material .cm-keyword{color:rgba(199,146,234,1)}.cm-s-material .cm-operator{color:rgba(233,237,237,1)}.cm-s-material .cm-variable-2{color:#80CBC4}.cm-s-material .cm-builtin{color:#DECB6B}.cm-s-material .cm-atom,.cm-s-material .cm-number{color:#F77669}.cm-s-material .cm-def{color:rgba(233,237,237,1)}.cm-s-material .cm-string{color:#C3E88D}.cm-s-material 
.cm-string-2{color:#80CBC4}.cm-s-material .cm-comment{color:#546E7A}.cm-s-material .cm-variable{color:#82B1FF}.cm-s-material .cm-meta,.cm-s-material .cm-tag{color:#80CBC4}.cm-s-material .cm-attribute{color:#FFCB6B}.cm-s-material .cm-property{color:#80CBAE}.cm-s-material .cm-qualifier,.cm-s-material .cm-variable-3{color:#DECB6B}.cm-s-material .cm-tag{color:rgba(255,83,112,1)}.cm-s-material .cm-error{color:rgba(255,255,255,1);background-color:#EC5F67}.cm-s-material .CodeMirror-matchingbracket{text-decoration:underline;color:#fff!important}[contenteditable=true]:empty:not(:focus):before{content:attr(placeholder)}[contenteditable=true]{cursor:text}.ember-content-editable:empty{color:#a9a9a9}.ember-content-editable:empty:after{content:"\0000a0"} -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/crossdomain.xml: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 15 | 16 | -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | Riak Explorer 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | 20 |
21 | 22 | 23 | -------------------------------------------------------------------------------- /priv/ember_riak_explorer/dist/robots.txt: -------------------------------------------------------------------------------- 1 | # http://www.robotstxt.org 2 | User-agent: * 3 | Disallow: 4 | -------------------------------------------------------------------------------- /rebar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/basho-labs/riak_explorer/3d06ca2b14a57ed2850e56efc280d7e5b5f4bbbc/rebar -------------------------------------------------------------------------------- /rebar.config: -------------------------------------------------------------------------------- 1 | {sub_dirs, ["rel"]}. 2 | 3 | {cover_enabled, true}. 4 | 5 | {lib_dirs, ["deps", "apps"]}. 6 | 7 | {erl_opts, [debug_info, warnings_as_errors, {parse_transform, lager_transform}]}. 8 | 9 | {xref_checks, []}. 10 | {xref_queries, [{"(XC - UC) || (XU - X - B)", []}]}. 11 | 12 | {reset_after_eunit, true}. 13 | 14 | {deps, [ 15 | {riakc, ".*", {git, "git://github.com/basho/riak-erlang-client", {tag, "2.4.1"}}}, 16 | {node_package, ".*", {git, "git://github.com/basho/node_package", {branch, "no-epmd"}}}, 17 | {webmachine, "1.10.*", {git, "git://github.com/basho/webmachine", {tag, "1.10.8"}}}, 18 | {ibrowse, "4.0.2", {git, "git://github.com/cmullaparthi/ibrowse.git", {tag, "v4.0.2"}}}, 19 | {lager, ".*", {git, "git://github.com/basho/lager", {tag, "2.0.3"}}}, 20 | {lager_syslog, ".*", {git, "git://github.com/basho/lager_syslog", {tag, "2.0.3"}}}, 21 | {cuttlefish, ".*", {git, "git://github.com/basho/cuttlefish", {tag, "2.0.1"}}}, 22 | {eper, ".*", {git, "git://github.com/basho/eper.git", "0.78"}} 23 | ]}. 
24 | -------------------------------------------------------------------------------- /rebar.config.script: -------------------------------------------------------------------------------- 1 | case os:getenv("INTEGRATION_TEST") of 2 | false -> CONFIG; % env var not defined 3 | [] -> CONFIG; % env var set to empty string 4 | _ -> lists:keystore(eunit_compile_opts, 1, CONFIG, {eunit_compile_opts, [{d, integration_test}]}) 5 | end. -------------------------------------------------------------------------------- /rel/files/advanced.config: -------------------------------------------------------------------------------- 1 | [{riak_explorer, []}]. 2 | -------------------------------------------------------------------------------- /rel/files/cert.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIICvTCCAiYCCQDgxT3HogRJ/TANBgkqhkiG9w0BAQUFADCBojELMAkGA1UEBhMC 3 | VVMxFjAUBgNVBAgTDU1hc3NhY2h1c2V0dHMxEjAQBgNVBAcTCUNhbWJyaWRnZTEf 4 | MB0GA1UEChMWQmFzaG8gVGVjaG5vbG9naWVzIEluYzENMAsGA1UECxMEUmlhazEY 5 | MBYGA1UEAxMPSmVmZnJleSBNYXNzdW5nMR0wGwYJKoZIhvcNAQkBFg5qZWZmQGJh 6 | c2hvLmNvbTAeFw0xMTEwMzExNjQ3NTNaFw0yMTEwMjgxNjQ3NTNaMIGiMQswCQYD 7 | VQQGEwJVUzEWMBQGA1UECBMNTWFzc2FjaHVzZXR0czESMBAGA1UEBxMJQ2FtYnJp 8 | ZGdlMR8wHQYDVQQKExZCYXNobyBUZWNobm9sb2dpZXMgSW5jMQ0wCwYDVQQLEwRS 9 | aWFrMRgwFgYDVQQDEw9KZWZmcmV5IE1hc3N1bmcxHTAbBgkqhkiG9w0BCQEWDmpl 10 | ZmZAYmFzaG8uY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCkys/CB8Ce 11 | fO19JsFiL2K5pODMWFmXxfQgvARB2rIvoJ4R4mNKI639xRbR+gCPreJvZRw8trgD 12 | 8sARFv8J1SlqqYRDN8zfMlvolXh6Atujou/LwjUpTA3pMe9lZrWU1+JZOMAk79lz 13 | O/1etfR12By0SqNfDgUMIIZST7i3Fw2IkwIDAQABMA0GCSqGSIb3DQEBBQUAA4GB 14 | ABTOXMBYoEn0biwqaDcZzInZvZHupFkmkOABumgJ5xVDQ/9LIzy9mfg0Ko+JERlM 15 | 0w0dliu2sfFoXLps9EohjIzrU1B3CwSNvNqPmcj9V4k0iXrsvDfbG9eJ9nYaUY0Y 16 | L5I9/KAOIf3fEmnFbjtmyiLVhrM6kBB3fvoAVQfwL6cZ 17 | -----END CERTIFICATE----- 18 | 
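The `rebar.config.script` shown above gates integration-test compilation on the `INTEGRATION_TEST` environment variable. A minimal sketch of the transformation it applies (the sample `CONFIG` value here is hypothetical; `lists:keystore/4` and the `{d, integration_test}` macro come from the script itself):

```erlang
%% When INTEGRATION_TEST is set to a non-empty value, the script stores
%% an eunit_compile_opts tuple into rebar's CONFIG proplist, so code
%% guarded by -ifdef(integration_test) is compiled into the eunit run.
Config0 = [{cover_enabled, true}],                       %% hypothetical input
Config1 = lists:keystore(eunit_compile_opts, 1, Config0,
                         {eunit_compile_opts, [{d, integration_test}]}).
%% Config1 =:= [{cover_enabled, true},
%%              {eunit_compile_opts, [{d, integration_test}]}]
```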
-------------------------------------------------------------------------------- /rel/files/key.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIICXAIBAAKBgQCkys/CB8CefO19JsFiL2K5pODMWFmXxfQgvARB2rIvoJ4R4mNK 3 | I639xRbR+gCPreJvZRw8trgD8sARFv8J1SlqqYRDN8zfMlvolXh6Atujou/LwjUp 4 | TA3pMe9lZrWU1+JZOMAk79lzO/1etfR12By0SqNfDgUMIIZST7i3Fw2IkwIDAQAB 5 | AoGANAW2eolZ/G5xxo2ChQ1yfCqZsMi/V9NtExxnt6Zjk/d/jyPJtnD3D2K1ponm 6 | vXTmQ8ZGmMAR7WUnzv1Ue/UoAntcyXwKAm+T+2IUJiir/qzYKLSn8FJ3wA+OWYKs 7 | 1nSryi54IuNenKUslxMPDPk/0bM6nZS2AvbNPYhX7a8evXkCQQDNdsvDO3Ofn0pJ 8 | +3bMLH5Ch/adrJ0TfF1H8n9pxiuq813ppffXsyzv3haTbGHOtEM5tILYbHmO16h1 9 | vY+hhoHHAkEAzVMVRaDefjp1qKfoyRm9ySa3GgH71t1dvm1jGTRxNk9M8pekLDz+ 10 | GWyTzffM2/+8Xz4RFzLjqAoAjzBGMeAC1QJAANiycjV2fnvbhH6CuMieJIwG2hNx 11 | +jiS8c7v83GbkHK8OlAyuzLDxqE1mpnhtUZM2JoDx/x6a7o7uXB0fQfe1QJBAKxi 12 | d/aYhJS4IjaymqfUm9m5TntgdP9FpcIOdugfdnmhhLochLK7lp7j4QhJZ07B3Hae 13 | Vp0Clc5sb2HIpvaS2+0CQEV5NxPjavmlCQksQvU2OAQvTW3Sm9lahTl4XvdVxfj0 14 | G9sZ2erg7MIo2LF4V6FM6Hbfoj/FhAMOXlXUoUGs1uI= 15 | -----END RSA PRIVATE KEY----- 16 | -------------------------------------------------------------------------------- /rel/files/riak_explorer.schema: -------------------------------------------------------------------------------- 1 | %%-*- mode: erlang -*- 2 | 3 | %% @doc listen port and IP address 4 | {mapping, "listener", "riak_explorer.host", [ 5 | {default, {"{{riak_explorer_ip}}" , {{riak_explorer_port}} }}, 6 | {datatype, ip}, 7 | {validators, ["valid_host"]} 8 | ]}. 9 | 10 | {mapping, "clusters.default.riak_node", "riak_explorer.clusters", [ 11 | {default, "{{cluster_riak_node}}"} 12 | ]}. 13 | 14 | {mapping, "clusters.default.development_mode", "riak_explorer.clusters", [ 15 | {default, {{cluster_development_mode}} }, 16 | {datatype, flag} 17 | ]}. 
18 | 19 | {mapping, "clusters.$name.riak_node", "riak_explorer.clusters", [ 20 | {default, "{{cluster_riak_node}}"}, 21 | hidden 22 | ]}. 23 | 24 | {mapping, "clusters.$name.development_mode", "riak_explorer.clusters", [ 25 | {default, {{cluster_development_mode}} }, 26 | {datatype, flag}, 27 | hidden 28 | ]}. 29 | 30 | {translation, 31 | "riak_explorer.clusters", 32 | fun(Conf, _Schema) -> 33 | lists:filtermap(fun({Path, Value}) -> 34 | case Path of 35 | ["clusters",Cluster, "riak_node"] -> 36 | ClusterConfig = 37 | {list_to_atom(Cluster), 38 | [{riak_node, Value}, 39 | {development_mode, cuttlefish:conf_get(["clusters", Cluster, "development_mode"], Conf)} 40 | ] 41 | }, 42 | {true, ClusterConfig}; 43 | _ -> false 44 | end 45 | end, Conf) 46 | end 47 | }. 48 | 49 | %% @doc Cookie for distributed node communication. All nodes in the 50 | %% same cluster should use the same cookie or they will not be able to 51 | %% communicate. 52 | {mapping, "distributed_cookie", "vm_args.-setcookie", [ 53 | {default, "riak"} 54 | ]}. 55 | 56 | %% @doc Default cert location for https can be overridden 57 | %% with the ssl config variable, for example: 58 | {mapping, "ssl.certfile", "riak_explorer.ssl.certfile", [ 59 | {datatype, file}, 60 | {commented, "$(platform_etc_dir)/cert.pem"} 61 | ]}. 62 | 63 | %% @doc Default key location for https can be overridden with the ssl 64 | %% config variable, for example: 65 | {mapping, "ssl.keyfile", "riak_explorer.ssl.keyfile", [ 66 | {datatype, file}, 67 | {commented, "$(platform_etc_dir)/key.pem"} 68 | ]}. 69 | 70 | %% @doc Platform-specific installation paths 71 | {mapping, "platform_bin_dir", "riak_explorer.platform_bin_dir", [ 72 | {datatype, directory}, 73 | {default, "{{platform_bin_dir}}"} 74 | ]}. 75 | 76 | %% @see platform_bin_dir 77 | {mapping, "platform_data_dir", "riak_explorer.platform_data_dir", [ 78 | {datatype, directory}, 79 | {default, "{{platform_data_dir}}"} 80 | ]}. 
81 | 82 | %% @see platform_bin_dir 83 | {mapping, "platform_etc_dir", "riak_explorer.platform_etc_dir", [ 84 | {datatype, directory}, 85 | {default, "{{platform_etc_dir}}"} 86 | ]}. 87 | 88 | %% @see platform_bin_dir 89 | {mapping, "platform_lib_dir", "riak_explorer.platform_lib_dir", [ 90 | {datatype, directory}, 91 | {default, "{{platform_lib_dir}}"} 92 | ]}. 93 | 94 | %% @see platform_bin_dir 95 | {mapping, "platform_log_dir", "riak_explorer.platform_log_dir", [ 96 | {datatype, directory}, 97 | {default, "{{platform_log_dir}}"} 98 | ]}. 99 | 100 | %% @doc Where to emit the default log messages (typically at 'info' 101 | %% severity): 102 | %% off: disabled 103 | %% file: the file specified by log.console.file 104 | %% console: to standard output (seen when using `riak attach-direct`) 105 | %% both: log.console.file and standard out. 106 | {mapping, "log.console", "lager.handlers", [ 107 | {default, {{console_log_default}} }, 108 | {datatype, {enum, [off, file, console, both]}} 109 | ]}. 110 | 111 | %% @doc The severity level of the console log, default is 'info'. 112 | {mapping, "log.console.level", "lager.handlers", [ 113 | {default, info}, 114 | {datatype, {enum, [debug, info, notice, warning, error, critical, alert, emergency, none]}} 115 | ]}. 116 | 117 | %% @doc When 'log.console' is set to 'file' or 'both', the file where 118 | %% console messages will be logged. 119 | {mapping, "log.console.file", "lager.handlers", [ 120 | {default, "$(platform_log_dir)/console.log"}, 121 | {datatype, file} 122 | ]}. 123 | 124 | %% @doc The file where error messages will be logged. 125 | {mapping, "log.error.file", "lager.handlers", [ 126 | {default, "$(platform_log_dir)/error.log"}, 127 | {datatype, file} 128 | ]}. 129 | 130 | %% @doc When set to 'on', enables log output to syslog. 131 | {mapping, "log.syslog", "lager.handlers", [ 132 | {default, off}, 133 | {datatype, flag} 134 | ]}. 
135 | 136 | %% @doc Syslog default identifier 137 | {mapping, "log.syslog.ident", "lager.handlers", [ 138 | {default, "riak_explorer"}, 139 | hidden 140 | ]}. 141 | 142 | %% @doc Syslog facility to log entries from riak_explorer. 143 | {mapping, "log.syslog.facility", "lager.handlers", [ 144 | {default, daemon}, 145 | {datatype, {enum,[kern, user, mail, daemon, auth, syslog, 146 | lpr, news, uucp, clock, authpriv, ftp, 147 | cron, local0, local1, local2, local3, 148 | local4, local5, local6, local7]}}, 149 | hidden 150 | ]}. 151 | 152 | %% @doc The severity level at which to log entries to syslog, default is 'info'. 153 | {mapping, "log.syslog.level", "lager.handlers", [ 154 | {default, info}, 155 | {datatype, {enum, [debug, info, notice, warning, error, critical, alert, emergency, none]}}, 156 | hidden 157 | ]}. 158 | 159 | {translation, 160 | "lager.handlers", 161 | fun(Conf) -> 162 | SyslogHandler = case cuttlefish:conf_get("log.syslog", Conf) of 163 | true -> 164 | Ident = cuttlefish:conf_get("log.syslog.ident", Conf), 165 | Facility = cuttlefish:conf_get("log.syslog.facility", Conf), 166 | LogLevel = cuttlefish:conf_get("log.syslog.level", Conf), 167 | [{lager_syslog_backend, [Ident, Facility, LogLevel]}]; 168 | _ -> [] 169 | end, 170 | ErrorHandler = case cuttlefish:conf_get("log.error.file", Conf) of 171 | undefined -> []; 172 | ErrorFilename -> [{lager_file_backend, [{file, ErrorFilename}, 173 | {level, error}, 174 | {size, 10485760}, 175 | {date, "$D0"}, 176 | {count, 5}]}] 177 | end, 178 | 179 | ConsoleLogLevel = cuttlefish:conf_get("log.console.level", Conf), 180 | ConsoleLogFile = cuttlefish:conf_get("log.console.file", Conf), 181 | 182 | ConsoleHandler = {lager_console_backend, ConsoleLogLevel}, 183 | ConsoleFileHandler = {lager_file_backend, [{file, ConsoleLogFile}, 184 | {level, ConsoleLogLevel}, 185 | {size, 10485760}, 186 | {date, "$D0"}, 187 | {count, 5}]}, 188 | 189 | ConsoleHandlers = case cuttlefish:conf_get("log.console", Conf) of 190 | off -> 
[]; 191 | file -> [ConsoleFileHandler]; 192 | console -> [ConsoleHandler]; 193 | both -> [ConsoleHandler, ConsoleFileHandler]; 194 | _ -> [] 195 | end, 196 | SyslogHandler ++ ConsoleHandlers ++ ErrorHandler 197 | end 198 | }. 199 | 200 | 201 | %% @doc Whether to enable Erlang's built-in error logger. 202 | {mapping, "sasl", "sasl.sasl_error_logger", [ 203 | {default, off}, 204 | {datatype, flag}, 205 | hidden 206 | ]}. 207 | 208 | %% @doc Whether to enable the crash log. 209 | {mapping, "log.crash", "lager.crash_log", [ 210 | {default, on}, 211 | {datatype, flag} 212 | ]}. 213 | 214 | %% @doc If the crash log is enabled, the file where its messages will 215 | %% be written. 216 | {mapping, "log.crash.file", "lager.crash_log", [ 217 | {default, "$(platform_log_dir)/crash.log"}, 218 | {datatype, file} 219 | ]}. 220 | 221 | {translation, 222 | "lager.crash_log", 223 | fun(Conf) -> 224 | case cuttlefish:conf_get("log.crash", Conf) of 225 | false -> undefined; 226 | _ -> 227 | cuttlefish:conf_get("log.crash.file", Conf, "{{platform_log_dir}}/crash.log") 228 | end 229 | end}. 230 | 231 | %% @doc Maximum size in bytes of individual messages in the crash log 232 | {mapping, "log.crash.maximum_message_size", "lager.crash_log_msg_size", [ 233 | {default, "64KB"}, 234 | {datatype, bytesize} 235 | ]}. 236 | 237 | %% @doc Maximum size of the crash log in bytes, before it is rotated 238 | {mapping, "log.crash.size", "lager.crash_log_size", [ 239 | {default, "10MB"}, 240 | {datatype, bytesize} 241 | ]}. 242 | 243 | %% @doc The schedule on which to rotate the crash log. For more 244 | %% information see: 245 | %% https://github.com/basho/lager/blob/master/README.md#internal-log-rotation 246 | {mapping, "log.crash.rotation", "lager.crash_log_date", [ 247 | {default, "$D0"} 248 | ]}. 249 | 250 | %% @doc The number of rotated crash logs to keep. When set to 251 | %% 'current', only the current open log file is kept. 
252 | {mapping, "log.crash.rotation.keep", "lager.crash_log_count", [ 253 | {default, 5}, 254 | {datatype, [integer, {atom, current}]}, 255 | {validators, ["rotation_count"]} 256 | ]}. 257 | 258 | {validator, 259 | "rotation_count", 260 | "must be 'current' or a positive integer", 261 | fun(current) -> true; 262 | (Int) when is_integer(Int) andalso Int >= 0 -> true; 263 | (_) -> false 264 | end}. 265 | 266 | {translation, 267 | "lager.crash_log_count", 268 | fun(Conf) -> 269 | case cuttlefish:conf_get("log.crash.rotation.keep", Conf) of 270 | current -> 0; 271 | Int -> Int 272 | end 273 | end}. 274 | 275 | %% @doc Whether to redirect error_logger messages into lager - 276 | %% defaults to true 277 | {mapping, "log.error.redirect", "lager.error_logger_redirect", [ 278 | {default, on}, 279 | {datatype, flag}, 280 | hidden 281 | ]}. 282 | 283 | %% @doc Maximum number of error_logger messages to handle in a second 284 | {mapping, "log.error.messages_per_second", "lager.error_logger_hwm", [ 285 | {default, 100}, 286 | {datatype, integer}, 287 | hidden 288 | ]}. 289 | 290 | 291 | %% VM scheduler collapse, part 1 of 2 292 | {mapping, "erlang.schedulers.force_wakeup_interval", "vm_args.+sfwi", [ 293 | {default, 500}, 294 | {datatype, integer}, 295 | merge 296 | ]}. 297 | 298 | %% VM scheduler collapse, part 2 of 2 299 | {mapping, "erlang.schedulers.compaction_of_load", "vm_args.+scl", [ 300 | {default, false}, 301 | merge 302 | ]}. -------------------------------------------------------------------------------- /rel/rebar.config: -------------------------------------------------------------------------------- 1 | {plugin_dir, "../deps/cuttlefish/src"}. 2 | {plugins, [cuttlefish_rebar_plugin]}. 3 | {cuttlefish_filename, "riak_explorer.conf"}. 
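As an illustration of the `clusters.$name` mappings and the `"riak_explorer.clusters"` translation in `riak_explorer.schema` above, here is a hedged sketch of how user configuration becomes application environment (the cluster name `mycluster` and its address are made up for this example):

```erlang
%% riak_explorer.conf:
%%   clusters.mycluster.riak_node = riak@10.0.0.1
%%   clusters.mycluster.development_mode = on
%%
%% The translation's filtermap keys on ["clusters", Cluster, "riak_node"]
%% and looks up the matching development_mode (a flag, so on -> true),
%% yielding roughly this generated app environment:
%%   {riak_explorer,
%%    [{clusters,
%%      [{mycluster, [{riak_node, "riak@10.0.0.1"},
%%                    {development_mode, true}]}]}]}
```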
4 | -------------------------------------------------------------------------------- /rel/reltool.config: -------------------------------------------------------------------------------- 1 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 2 | %% ex: ft=erlang ts=4 sw=4 et 3 | {sys, [ 4 | {lib_dirs, ["../deps", "../apps"]}, 5 | {rel, "riak_explorer", "2.1.0", 6 | [ 7 | kernel, 8 | stdlib, 9 | sasl, 10 | public_key, 11 | ssl, 12 | os_mon, 13 | crypto, 14 | runtime_tools, 15 | mochiweb, 16 | webmachine, 17 | lager, 18 | lager_syslog, 19 | riak_explorer 20 | ]}, 21 | {rel, "start_clean", "", 22 | [ 23 | kernel, 24 | stdlib 25 | ]}, 26 | {boot_rel, "riak_explorer"}, 27 | {profile, embedded}, 28 | {excl_sys_filters, ["^bin/.*", 29 | "^erts.*/bin/(dialyzer|typer)"]}, 30 | {excl_archive_filters, [".*"]}, 31 | {app, sasl, [{incl_cond, include}]}, 32 | {app, cuttlefish, [{incl_cond, include}]}, 33 | {app, lager, [{incl_cond, include}]}, 34 | {app, riak_explorer, [{incl_cond, include}]} 35 | ]}. 36 | 37 | 38 | {target_dir, "riak_explorer"}. 39 | 40 | {overlay_vars, "vars.config"}. 41 | 42 | {overlay, [ 43 | %% Scan for scripts in included apps 44 | %% Setup basic dirs 45 | {mkdir, "log"}, 46 | {mkdir, "data/riak_explorer"}, 47 | {mkdir, "priv/ember_riak_explorer"}, 48 | 49 | %% Copy base files for starting and interacting w/ node 50 | {copy, "../deps/node_package/priv/base/erl", 51 | "{{erts_vsn}}/bin/erl"}, 52 | {copy, "../deps/cuttlefish/cuttlefish", 53 | "{{erts_vsn}}/bin/cuttlefish"}, 54 | {copy, "../deps/node_package/priv/base/nodetool", 55 | "{{erts_vsn}}/bin/nodetool"}, 56 | {template, "../deps/node_package/priv/base/runner", 57 | "bin/riak_explorer"}, 58 | {template, "../deps/node_package/priv/base/env.sh", 59 | "lib/env.sh"}, 60 | 61 | %% Copy static web files 62 | {copy, "../priv/ember_riak_explorer/dist", 63 | "priv/ember_riak_explorer/dist"}, 64 | 65 | %% Copy config files 66 | %% Cuttlefish Schema Files have a priority order. 
67 | %% Anything in a file prefixed with 00- will override 68 | %% anything in a file with a higher numbered prefix. 69 | {template, "files/riak_explorer.schema", "lib/00-riak_explorer.schema"}, 70 | {template, "../deps/cuttlefish/priv/erlang_vm.schema", "lib/11-erlang_vm.schema"}, 71 | 72 | {template, "files/advanced.config", "etc/advanced.config"}, 73 | 74 | {mkdir, "lib/basho-patches"}, 75 | 76 | %% Copy ssl certs 77 | {template, "files/cert.pem", "etc/cert.pem"}, 78 | {template, "files/key.pem", "etc/key.pem"} 79 | 80 | ]}. 81 | -------------------------------------------------------------------------------- /rel/vars.config: -------------------------------------------------------------------------------- 1 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 2 | %% ex: ft=erlang ts=4 sw=4 et 3 | 4 | %% Platform-specific installation paths 5 | {platform_bin_dir, "./bin"}. 6 | {platform_data_dir, "./data"}. 7 | {platform_etc_dir, "./etc"}. 8 | {platform_lib_dir, "./lib"}. 9 | {platform_log_dir, "./log"}. 10 | 11 | %% 12 | %% etc/app.config 13 | %% 14 | {riak_explorer_ip, "127.0.0.1"}. 15 | {riak_explorer_port, 9000}. 16 | {cluster_riak_node, "riak@127.0.0.1"}. 17 | {cluster_development_mode, on}. 18 | 19 | %% 20 | %% etc/vm.args 21 | %% 22 | {node, "riak_explorer@127.0.0.1"}. 23 | {crash_dump, "{{platform_log_dir}}/erl_crash.dump"}. 24 | 25 | %% 26 | %% bin/riak_explorer 27 | %% 28 | {data_dir, "{{target_dir}}/data"}. 29 | {runner_script_dir, "\`cd \\`dirname $0\\` && /bin/pwd\`"}. 30 | {runner_base_dir, "{{runner_script_dir}}/.."}. 31 | {runner_etc_dir, "$RUNNER_BASE_DIR/etc"}. 32 | {runner_log_dir, "$RUNNER_BASE_DIR/log"}. 33 | {runner_lib_dir, "$RUNNER_BASE_DIR/lib"}. 34 | {runner_patch_dir, "$RUNNER_BASE_DIR/lib/basho-patches"}. 35 | {pipe_dir, "/tmp/$RUNNER_BASE_DIR/"}. 36 | {runner_user, ""}. 37 | {runner_wait_process, "re_sup"}. 38 | 39 | %% lager 40 | {console_log_default, file}. 41 | 42 | %% cuttlefish 43 | {cuttlefish, "on"}. 
44 | {cuttlefish_conf, "riak_explorer.conf"}. 45 | -------------------------------------------------------------------------------- /src/re_cluster.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_cluster). 22 | 23 | -compile({no_auto_import,[nodes/1]}). 24 | 25 | -export([exists/1, 26 | props/1, 27 | from_props/2, 28 | riak_node/1, 29 | development_mode/1, 30 | nodes/1]). 31 | 32 | -type(re_cluster() :: atom()). 33 | -export_type([re_cluster/0]). 34 | 35 | -type(re_cluster_prop() :: 36 | {id, re_cluster()} | 37 | {development_mode, boolean()} | 38 | {riak_type, re_node:re_node_type() | unavailable} | 39 | {riak_version, binary() | unavailable} | 40 | {available, boolean()}). 41 | 42 | -type(re_cluster_props() :: [re_cluster_prop()]). 43 | -export_type([re_cluster_props/0]). 44 | 45 | %%%=================================================================== 46 | %%% API 47 | %%%=================================================================== 48 | 49 | -spec exists(re_cluster()) -> boolean(). 
50 | exists(C) -> 51 | riak_explorer:cluster_config(C) =/= {error, not_found}. 52 | 53 | -spec props(re_cluster()) -> {error, not_found} | re_cluster_props(). 54 | props(C) -> 55 | case riak_explorer:cluster_config(C) of 56 | {error, not_found} -> {error, not_found}; 57 | Props -> from_props(C, Props) 58 | end. 59 | 60 | -spec from_props(re_cluster(), re_cluster_props()) -> re_cluster_props(). 61 | from_props(C, Props) -> 62 | N = case proplists:get_value(riak_node, Props, default_riak_node()) of 63 | N1 when is_list(N1) -> list_to_atom(N1); 64 | N2 when is_atom(N2) -> N2; 65 | _ -> default_riak_node() 66 | end, 67 | [{id,C}, 68 | {riak_node, N}, 69 | {development_mode, proplists:get_value(development_mode, Props, true)}, 70 | {riak_type, proplists:get_value(riak_type, Props, re_node:type(N))}, 71 | {riak_version, proplists:get_value(riak_version, Props, re_node:version(N))}, 72 | {available, proplists:get_value(available, Props, re_node:available(N))}]. 73 | 74 | -spec riak_node(re_cluster()) -> {error, not_found} | re_node:re_node(). 75 | riak_node(C) -> 76 | case props(C) of 77 | {error, not_found} -> 78 | {error, not_found}; 79 | Props -> 80 | proplists:get_value(riak_node, Props, {error, not_found}) 81 | end. 82 | 83 | -spec development_mode(re_cluster()) -> boolean(). 84 | development_mode(C) -> 85 | case riak_explorer:cluster_config(C) of 86 | {error, not_found} -> 87 | true; 88 | Props -> 89 | proplists:get_value(development_mode, Props, true) 90 | end. 91 | 92 | -spec nodes(re_cluster()) -> {error, term()} | [re_node:re_node_props()]. 93 | nodes(C) -> 94 | case riak_node(C) of 95 | {error, not_found} -> 96 | {error, not_found}; 97 | N -> 98 | case re_node:ring_members(N) of 99 | {error, Reason} -> 100 | {error, Reason}; 101 | Nodes -> 102 | lists:map(fun(N1) -> re_node:props(N1) end, Nodes) 103 | end 104 | end. 
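A brief usage sketch for the `re_cluster` API above (illustrative only; it assumes the stock `default` cluster from `rel/vars.config` and a reachable local Riak node, so actual results vary with your configuration):

```erlang
%% From an attached riak_explorer console:
1> re_cluster:exists(default).
true
2> re_cluster:riak_node(default).
'riak@127.0.0.1'
3> re_cluster:development_mode(default).
true
```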
105 | 106 | %%%=================================================================== 107 | %%% Private 108 | %%%=================================================================== 109 | 110 | -spec default_riak_node() -> re_node:re_node(). 111 | default_riak_node() -> 112 | case riak_explorer:is_riak() of 113 | false -> 'riak@127.0.0.1'; 114 | true -> node() 115 | end. 116 | -------------------------------------------------------------------------------- /src/re_file_util.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_file_util). 22 | 23 | -export([find_single_file/1, 24 | clean_dir/1, 25 | partial_file/3, 26 | timestamp_string/0, 27 | timestamp_human/1, 28 | sort_file/1, 29 | add_slash/1, 30 | ensure_data_dir/1, 31 | for_each_line_in_file/4]). 32 | 33 | %%%=================================================================== 34 | %%% API 35 | %%%=================================================================== 36 | 37 | -spec find_single_file(string()) -> {error, term()} | string(). 
38 | find_single_file(Dir) -> 39 | case file:list_dir(Dir) of 40 | {ok, [File|_]} -> 41 | File; 42 | {ok, []} -> 43 | {error, not_found}; 44 | {error, Reason} -> 45 | {error, Reason} 46 | end. 47 | 48 | -spec clean_dir(string()) -> {error, term()} | ok. 49 | clean_dir(Dir) -> 50 | case file:list_dir(Dir) of 51 | {ok, OldFiles} -> 52 | [ file:delete(filename:join([Dir, F])) || F <- OldFiles ], 53 | ok; 54 | {error, Reason} -> 55 | {error, Reason} 56 | end. 57 | 58 | -spec partial_file(string(), non_neg_integer(), non_neg_integer()) -> 59 | {non_neg_integer(), non_neg_integer(), 60 | non_neg_integer(), non_neg_integer(), 61 | [string()]}. 62 | partial_file(File, Start, Rows) -> 63 | {T, RC, S, E, LinesR} = for_each_line_in_file(File, 64 | fun(Entry, {T, RC, S, E, Accum}) -> 65 | case should_add_entry(T, S, E) of 66 | true -> 67 | B = re:replace(Entry, "(^\\s+)|(\\s+$)", "", [global,{return,list}]), 68 | {T + 1, RC + 1, S, E, [list_to_binary(B)|Accum]}; 69 | _ -> {T + 1, RC, S, E, Accum} 70 | end 71 | end, [read], {0, 0, Start, Start+Rows,[]}), 72 | {T, RC, S, E, lists:reverse(LinesR)}. 73 | 74 | -spec timestamp_string() -> string(). 75 | timestamp_string() -> 76 | %% change erlang:now() to erlang:timestamp() for R18 in the future 77 | {{Year,Month,Day},{Hour,Min,Sec}} = calendar:now_to_universal_time(erlang:now()), 78 | lists:flatten(io_lib:fwrite("~4..0B~2.10.0B~2.10.0B~2.10.0B~2.10.0B~2.10.0B",[Year, Month, Day, Hour, Min, Sec])). 79 | 80 | -spec timestamp_human(string()) -> string(). 81 | timestamp_human(Time) -> 82 | Year = lists:sublist(Time, 4), 83 | Month = lists:sublist(Time, 5, 2), 84 | Day = lists:sublist(Time, 7, 2), 85 | Hour = lists:sublist(Time, 9, 2), 86 | Min = lists:sublist(Time, 11, 2), 87 | Sec = lists:sublist(Time, 13, 2), 88 | lists:flatten(io_lib:fwrite("~s-~s-~s ~s:~s:~s",[Year, Month, Day, Hour, Min, Sec])). 89 | 90 | -spec sort_file(string()) -> ok. 
91 | sort_file(Filename) -> 92 | UnsortedLines = re_file_util:for_each_line_in_file(Filename, 93 | fun(Entry, Accum) -> [string:strip(Entry, both, $\n)| Accum] end, 94 | [read], []), 95 | SortedLines = lists:sort(UnsortedLines), 96 | NewFile = string:join(SortedLines, io_lib:nl()), 97 | file:write_file(Filename, NewFile, [write]), 98 | ok. 99 | 100 | 101 | add_slash(Str) -> 102 | case lists:last(Str) of 103 | $/ -> Str; 104 | _ -> Str ++ "/" 105 | end. 106 | 107 | ensure_data_dir(Path) -> 108 | DataPath = add_slash(filename:join([riak_explorer:data_dir() | Path])), 109 | filelib:ensure_dir(DataPath), 110 | DataPath. 111 | 112 | for_each_line_in_file(Name, Proc, Mode, Accum0) -> 113 | {ok, Device} = file:open(Name, Mode), 114 | for_each_line(Device, Proc, Accum0). 115 | 116 | %%%=================================================================== 117 | %%% Private 118 | %%%=================================================================== 119 | 120 | for_each_line(Device, Proc, Accum) -> 121 | case io:get_line(Device, "") of 122 | eof -> 123 | file:close(Device), Accum; 124 | Line -> 125 | NewAccum = Proc(Line, Accum), 126 | for_each_line(Device, Proc, NewAccum) 127 | end. 128 | 129 | should_add_entry(Total, _Start, Stop) when Total > Stop -> false; 130 | should_add_entry(Total, Start, _Stop) when Total >= Start -> true; 131 | should_add_entry(_Total, _Start, _Stop) -> false. 132 | -------------------------------------------------------------------------------- /src/re_job.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License.
You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_job). 22 | 23 | -behaviour(gen_fsm). 24 | 25 | -export([start_link/0]). 26 | 27 | -export([start_job/2, 28 | get_info/1, 29 | set_meta/2, 30 | set_error/2, 31 | set_finish/1]). 32 | 33 | -export([ready/2, 34 | ready/3, 35 | started/2, 36 | started/3, 37 | failed/2, 38 | failed/3, 39 | finished/2, 40 | finished/3]). 41 | 42 | -export([init/1, 43 | handle_event/3, 44 | handle_sync_event/4, 45 | handle_info/3, 46 | terminate/3, 47 | code_change/4]). 48 | 49 | -record(state, { 50 | proc :: pid() | undefined, 51 | meta :: term() | undefined, 52 | error :: term() | undefined 53 | }). 54 | 55 | -type(state_name() :: atom()). 56 | 57 | %%%=================================================================== 58 | %%% API 59 | %%%=================================================================== 60 | 61 | -spec start_link() -> {ok, pid()} | {error, term()}. 62 | start_link() -> 63 | gen_fsm:start_link(?MODULE, [], []). 64 | 65 | -spec start_job(pid(), term()) -> {error, term()} | ok. 66 | start_job(Pid, MFA) -> 67 | gen_fsm:sync_send_event(Pid, {start_job, MFA}). 68 | 69 | -spec get_info(pid()) -> {error, term()} | [{atom(), term()}]. 70 | get_info(Pid) -> 71 | gen_fsm:sync_send_all_state_event(Pid, get_info). 72 | 73 | -spec set_meta(pid(), term()) -> {error, term()} | ok. 74 | set_meta(Pid, Meta) -> 75 | gen_fsm:sync_send_event(Pid, {set_meta, Meta}). 
76 | 77 | -spec set_error(pid(), term()) -> {error, term()} | ok. 78 | set_error(Pid, Error) -> 79 | gen_fsm:sync_send_all_state_event(Pid, {set_error, Error}). 80 | 81 | -spec set_finish(pid()) -> {error, term()} | ok. 82 | set_finish(Pid) -> 83 | gen_fsm:sync_send_event(Pid, set_finish). 84 | 85 | %%%=================================================================== 86 | %%% Callbacks 87 | %%%=================================================================== 88 | 89 | -spec init([]) -> {ok, state_name(), #state{}}. 90 | init([]) -> 91 | {ok, ready, #state{}}. 92 | 93 | -type(async_reply() :: 94 | {next_state, state_name(), #state{}} | 95 | {next_state, state_name(), #state{}, timeout()} | 96 | {stop, term(), #state{}}). 97 | 98 | -spec ready(term(), #state{}) -> async_reply(). 99 | ready(_Event, State) -> 100 | {stop, {error, unhandled_event}, State}. 101 | 102 | -spec started(term(), #state{}) -> async_reply(). 103 | started(_Event, State) -> 104 | {stop, {error, unhandled_event}, State}. 105 | 106 | -spec failed(term(), #state{}) -> async_reply(). 107 | failed(_Event, State) -> 108 | {stop, {error, unhandled_event}, State}. 109 | 110 | -spec finished(term(), #state{}) -> async_reply(). 111 | finished(_Event, State) -> 112 | {stop, {error, unhandled_event}, State}. 113 | 114 | -type(sync_reply() :: 115 | {next_state, state_name(), #state{}} | 116 | {next_state, state_name(), #state{}, timeout()} | 117 | {reply, term(), state_name(), #state{}} | 118 | {reply, term(), state_name(), #state{}, timeout()} | 119 | {stop, term(), #state{}} | 120 | {stop, term(), term(), #state{}}). 121 | 122 | -spec ready(term(), {pid(), term()}, #state{}) -> sync_reply(). 123 | ready({start_job, MFA}, _From, _State) -> 124 | do_start_job(MFA, #state{}); 125 | ready(_Event, _From, State) -> 126 | {reply, {error, unhandled_event}, ready, State}. 127 | 128 | -spec started(term(), {pid(), term()}, #state{}) -> sync_reply().
129 | started({start_job, _}, _From, State) -> 130 | {reply, {error, already_started}, started, State}; 131 | started({set_meta, Meta}, _From, State) -> 132 | State1 = State#state{meta=Meta}, 133 | {reply, ok, started, State1}; 134 | started(set_finish, _From, State) -> 135 | {reply, ok, finished, State}; 136 | started(_Event, _From, State) -> 137 | {reply, {error, unhandled_event}, started, State}. 138 | 139 | -spec failed(term(), {pid(), term()}, #state{}) -> sync_reply(). 140 | failed({start_job, MFA}, _From, _State) -> 141 | do_start_job(MFA, #state{}); 142 | failed(_Event, _From, State) -> 143 | {reply, {error, unhandled_event}, failed, State}. 144 | 145 | -spec finished(term(), {pid(), term()}, #state{}) -> sync_reply(). 146 | finished({start_job, MFA}, _From, _State) -> 147 | do_start_job(MFA, #state{}); 148 | finished(_Event, _From, State) -> 149 | {reply, {error, unhandled_event}, finished, State}. 150 | 151 | -spec handle_event(term(), state_name(), #state{}) -> async_reply(). 152 | handle_event(_Event, _StateName, State) -> 153 | {stop, {error, unhandled_event}, State}. 154 | 155 | -spec handle_sync_event(term(), {pid(), term()}, state_name(), #state{}) -> 156 | sync_reply(). 157 | handle_sync_event(get_info, _From, StateName, 158 | State=#state{meta=Meta, error=Error}) -> 159 | Info = [{state, StateName}, 160 | {meta, Meta}, 161 | {error, Error}], 162 | {reply, Info, StateName, State}; 163 | handle_sync_event({set_error, Error}, _From, _, State) -> 164 | State1 = State#state{error=Error}, 165 | {reply, ok, failed, State1}; 166 | handle_sync_event(_Event, _From, StateName, State) -> 167 | {reply, {error, unhandled_event}, StateName, State}. 168 | 169 | -spec handle_info(term(), state_name(), #state{}) -> async_reply(). 170 | handle_info(_Info, StateName, State) -> 171 | {next_state, StateName, State}. 172 | 173 | -spec terminate(term(), state_name(), #state{}) -> ok. 174 | terminate(_Reason, _StateName, _State) -> 175 | ok.
176 | 177 | -spec code_change(term(), state_name(), #state{}, term()) -> 178 | {ok, state_name(), #state{}}. 179 | code_change(_OldVsn, StateName, State, _Extra) -> 180 | {ok, StateName, State}. 181 | 182 | %%%=================================================================== 183 | %%% Private 184 | %%%=================================================================== 185 | 186 | do_start_job({M, F, A}, State) -> 187 | case erlang:apply(M, F, A) of 188 | P when is_pid(P) -> 189 | State1 = State#state{proc = P}, 190 | {reply, ok, started, State1}; 191 | {error, Reason} -> 192 | {reply, {error, Reason}, ready, State} 193 | end. 194 | -------------------------------------------------------------------------------- /src/re_job_manager.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_job_manager). 22 | 23 | -behaviour(supervisor). 24 | 25 | -export([start_link/0]). 26 | 27 | -export([add_job/2, 28 | get_job/1, 29 | get_jobs/0]). 30 | 31 | -export([init/1]). 
32 | 33 | %%%=================================================================== 34 | %%% API 35 | %%%=================================================================== 36 | 37 | -spec start_link() -> {ok, pid()}. 38 | start_link() -> 39 | supervisor:start_link({local, ?MODULE}, ?MODULE, {}). 40 | 41 | -spec add_job(atom(), {module(), atom(), [term()]}) -> ok | {error, term()}. 42 | add_job(Id, MFA) -> 43 | case get_job_pid(Id) of 44 | {ok, Pid} -> 45 | re_job:start_job(Pid, MFA); 46 | {error, not_found} -> 47 | JobSpec = 48 | {Id, 49 | {re_job, start_link, []}, 50 | transient, 5000, worker, [re_job]}, 51 | case supervisor:start_child(?MODULE, JobSpec) of 52 | {ok, Pid} -> 53 | re_job:start_job(Pid, MFA); 54 | {error, {already_started, Pid}} -> 55 | re_job:start_job(Pid, MFA); 56 | {error, Reason} -> 57 | {error, Reason} 58 | end 59 | end. 60 | 61 | -spec get_job(atom()) -> {error, term()} | [{atom(), term()}]. 62 | get_job(Id) -> 63 | case get_job_pid(Id) of 64 | {ok, Pid} -> 65 | re_job:get_info(Pid); 66 | {error, not_found} -> 67 | {error, not_found} 68 | end. 69 | 70 | -spec get_jobs() -> [{atom(), term()}]. 71 | get_jobs() -> 72 | [ {Id, re_job:get_info(Pid)} 73 | || {Id, Pid} <- get_job_pids() ]. 74 | 75 | -spec get_job_pid(atom()) -> {error, not_found} | {ok, pid()}. 76 | get_job_pid(Id) -> 77 | case lists:keyfind(Id, 1, get_job_pids()) of 78 | {_, Pid} -> 79 | {ok, Pid}; 80 | false -> 81 | {error, not_found} 82 | end. 83 | 84 | %%%=================================================================== 85 | %%% Callbacks 86 | %%%=================================================================== 87 | 88 | -spec init({}) -> 89 | {ok, {{supervisor:strategy(), 1, 1}, [supervisor:child_spec()]}}. 90 | init({}) -> 91 | {ok, {{one_for_one, 1, 1}, []}}. 92 | 93 | %% get_jobs() -> 94 | %% gen_server:call(?MODULE, {get_jobs}). 95 | 96 | %% get(Id) -> 97 | %% gen_server:call(?MODULE, {get_info, Id}).
98 | 99 | %% set_meta(Id, Meta) -> 100 | %% gen_server:cast(?MODULE, {set_meta, Id, Meta}). 101 | 102 | %% error(Id, Meta) -> 103 | %% gen_server:cast(?MODULE, {error, Id, Meta}). 104 | 105 | %% finish(Id) -> 106 | %% gen_server:cast(?MODULE, {finish, Id}). 107 | 108 | %% init(_Args) -> 109 | %% process_flag(trap_exit, true), 110 | %% {ok, #state{}}. 111 | 112 | %% handle_call({create, Id, {M, F, A}}, _From, State=#state{jobs=Jobs}) -> 113 | %% lager:info("Creating job: ~p, {~p, ~p, ~p}. Existing Jobs: ~p.", [Id, M, F, A, Jobs]), 114 | %% case proplists:get_value(Id, Jobs) of 115 | %% #job{status=in_progress} -> 116 | %% {reply, [{error, already_started}], State}; 117 | %% _ -> 118 | %% Pid = spawn(M, F, A), 119 | %% J = #job{pid=Pid, status=in_progress, meta=[]}, 120 | %% {reply, ok, State#state{jobs=put_job(Id, Jobs, J, [])}} 121 | %% end; 122 | 123 | %% handle_call({get_jobs}, _From, State=#state{jobs=Jobs}) -> 124 | %% R = lists:map(fun({Id, #job{status=S,meta=M}}) -> [{id, Id},{status, S},{meta, M}] end, Jobs), 125 | %% {reply, R, State}; 126 | 127 | %% handle_call({get_info, Id}, _From, State=#state{jobs=Jobs}) -> 128 | %% case proplists:get_value(Id, Jobs) of 129 | %% J=#job{} -> 130 | %% {reply, [{id, Id}, {status, J#job.status}, {meta, J#job.meta}], State}; 131 | %% _ -> 132 | %% {reply, [{error, not_found}], State} 133 | %% end. 
134 | 135 | %% handle_cast({set_meta, Id, Meta}, State=#state{jobs=Jobs}) -> 136 | %% case proplists:get_value(Id, Jobs) of 137 | %% J=#job{} -> 138 | %% {noreply, State#state{ 139 | %% jobs=put_job(Id, Jobs, J#job{meta=Meta}, [])}}; 140 | %% _ -> 141 | %% {noreply, State} 142 | %% end; 143 | 144 | %% handle_cast({error, Id, Meta}, State=#state{jobs=Jobs}) -> 145 | %% case proplists:get_value(Id, Jobs) of 146 | %% J=#job{} -> 147 | %% {noreply, State#state{ 148 | %% jobs=put_job(Id, Jobs, J#job{status=error, meta=Meta}, [])}}; 149 | %% _ -> 150 | %% {noreply, State} 151 | %% end; 152 | 153 | %% handle_cast({finish, Id}, State=#state{jobs=Jobs}) -> 154 | %% case proplists:get_value(Id, Jobs) of 155 | %% J=#job{} -> 156 | %% {noreply, State#state{ 157 | %% jobs=put_job(Id, Jobs, J#job{status=done}, [])}}; 158 | %% _ -> 159 | %% {noreply, State} 160 | %% end. 161 | 162 | %% handle_info({'EXIT', _Pid, _Reason}, State) -> 163 | %% {noreply, State}. 164 | 165 | %% code_change(_OldVsn, State, _Extra) -> 166 | %% {ok, State}. 167 | 168 | %% terminate(Reason, _State) -> 169 | %% lager:error("Job manager terminated, reason: ~p", [Reason]), 170 | %% ok. 171 | 172 | %%%=================================================================== 173 | %%% Private 174 | %%%=================================================================== 175 | 176 | %% put_job(Id, [], Job, Accum) -> 177 | %% case proplists:is_defined(Id, Accum) of 178 | %% true -> 179 | %% lists:reverse(Accum); 180 | %% false -> 181 | %% lists:reverse([{Id, Job}|Accum]) 182 | %% end; 183 | %% put_job(Id, [{Id, _}|Rest], Job, Accum) -> 184 | %% put_job(Id, Rest, Job, [{Id, Job}|Accum]); 185 | %% put_job(Id, [Old|Rest], Job, Accum) -> 186 | %% put_job(Id, Rest, Job, [Old|Accum]). 187 | 188 | -spec get_job_pids() -> [{atom(), pid()}]. 189 | get_job_pids() -> 190 | [ {Id, Pid} || {Id, Pid, _, _} <- supervisor:which_children(?MODULE) ]. 
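%% Usage sketch (the job id, cluster id, node name, and bucket names below
%% are placeholders): a job is registered under an atom id and can then be
%% polled for status.
%%
%%   ok = re_job_manager:add_job(my_list_keys,
%%            {re_node_job, start_list_keys,
%%             [default, 'riak@127.0.0.1', <<"default">>, <<"mybucket">>, []]}),
%%   re_job_manager:get_job(my_list_keys).
%%   %% e.g. [{state, started}, {meta, [{count, 100}]}, {error, undefined}]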
191 | -------------------------------------------------------------------------------- /src/re_node_job.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | -module(re_node_job). 21 | 22 | -export([start_list_buckets/4, 23 | start_list_keys/5, 24 | start_delete_bucket/5, 25 | start_list_keys_ts/4]). 26 | 27 | -export([init_list_buckets/1, 28 | init_list_keys/1, 29 | init_delete_bucket/1, 30 | init_list_keys_ts/1]). 31 | 32 | %%%=================================================================== 33 | %%% API 34 | %%%=================================================================== 35 | 36 | -spec start_list_buckets(re_cluster:re_cluster(), re_node:re_node(), binary(), [term()]) -> 37 | pid(). 38 | start_list_buckets(Cluster, Node, BucketType, Options) -> 39 | spawn(?MODULE, init_list_buckets, [[self(), Cluster, Node, BucketType, Options]]). 40 | 41 | -spec start_list_keys(re_cluster:re_cluster(), re_node:re_node(), binary(), binary(), [term()]) -> 42 | pid(). 
43 | start_list_keys(Cluster, Node, BucketType, Bucket, Options) -> 44 | spawn(?MODULE, init_list_keys, [[self(), Cluster, Node, BucketType, Bucket, Options]]). 45 | 46 | -spec start_delete_bucket(re_cluster:re_cluster(), re_node:re_node(), binary(), binary(), [term()]) -> 47 | pid(). 48 | start_delete_bucket(Cluster, Node, BucketType, Bucket, Options) -> 49 | spawn(?MODULE, init_delete_bucket, [[self(), Cluster, Node, BucketType, Bucket, Options]]). 50 | 51 | -spec start_list_keys_ts(re_cluster:re_cluster(), re_node:re_node(), binary(), [term()]) -> 52 | pid(). 53 | start_list_keys_ts(Cluster, Node, Table, Options) -> 54 | spawn(?MODULE, init_list_keys_ts, [[self(), Cluster, Node, Table, Options]]). 55 | 56 | %%%=================================================================== 57 | %%% Callbacks 58 | %%%=================================================================== 59 | 60 | -spec init_list_buckets([term()]) -> {error, term()} | ok. 61 | init_list_buckets([From, Cluster, Node, BucketType, Options]) -> 62 | Dir = re_file_util:ensure_data_dir( 63 | ["buckets", atom_to_list(Cluster), binary_to_list(BucketType)]), 64 | C = re_node:client(Node), 65 | Req = riakc_pb_socket:stream_list_buckets(C, BucketType), 66 | do_list(From, Req, Dir, Options). 67 | 68 | -spec init_list_keys([term()]) -> {error, term()} | ok. 69 | init_list_keys([From, Cluster, Node, BucketType, Bucket, Options]) -> 70 | Dir = re_file_util:ensure_data_dir( 71 | ["keys", atom_to_list(Cluster), binary_to_list(BucketType), 72 | binary_to_list(Bucket)]), 73 | C = re_node:client(Node), 74 | Req = riakc_pb_socket:stream_list_keys(C, {BucketType, Bucket}), 75 | do_list(From, Req, Dir, Options). 76 | 77 | -spec init_delete_bucket([term()]) -> {error, term()} | ok. 
78 | init_delete_bucket([From, Cluster, Node, BucketType, Bucket, Options]) -> 79 | Dir = re_file_util:ensure_data_dir( 80 | ["keys", atom_to_list(Cluster), binary_to_list(BucketType), 81 | binary_to_list(Bucket)]), 82 | case re_file_util:find_single_file(Dir) of 83 | {error, Reason} -> 84 | {error, Reason}; 85 | File -> 86 | DirFile = filename:join([Dir, File]), 87 | C = re_node:client(Node), 88 | Fun = 89 | fun(Entry0, {Oks0, Errors0}) -> 90 | RT = BucketType, 91 | RB = Bucket, 92 | RK = list_to_binary(re:replace(Entry0, "(^\\s+)|(\\s+$)", "", [global,{return,list}])), 93 | {Oks1,Errors1} = 94 | case riakc_pb_socket:delete(C, {RT,RB}, RK) of 95 | ok -> 96 | riakc_pb_socket:get(C, {RT,RB}, RK), 97 | {Oks0+1, Errors0}; 98 | {error, Reason} -> 99 | lager:warning("Failed to delete types/~p/buckets/~p/keys/~p with reason ~p", 100 | [RT, RB, RK, Reason]), 101 | {Oks0, Errors0+1} 102 | end, 103 | re_job:set_meta(From, [{oks, Oks1},{errors,Errors1}]), 104 | {Oks1,Errors1} 105 | end, 106 | {Os,Es} = re_file_util:for_each_line_in_file(DirFile, Fun, [read], {0, 0}), 107 | lager:info("Completed deletion of types/~p/buckets/~p with ~p successful deletes and ~p errors", 108 | [BucketType, Bucket, Os, Es]), 109 | re_job:set_finish(From), 110 | case proplists:get_value(refresh_cache, Options, false) of 111 | true -> 112 | re_node:clean_buckets_cache(Cluster, BucketType), 113 | re_node:clean_keys_cache(Cluster, BucketType, Bucket), 114 | ok; 115 | false -> 116 | ok 117 | end 118 | end. 119 | 120 | -spec init_list_keys_ts([term()]) -> {error, term()} | ok. 121 | init_list_keys_ts([From, Cluster, Node, Table, Options]) -> 122 | Dir = re_file_util:ensure_data_dir( 123 | ["keys_ts", atom_to_list(Cluster), binary_to_list(Table)]), 124 | C = re_node:client(Node), 125 | Req = riakc_ts:stream_list_keys(C, Table, []), 126 | do_list(From, Req, Dir, Options). 
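%% Usage sketch (cluster/node/bucket names are placeholders): these starters
%% are normally invoked through re_job_manager so that progress updates land
%% in the owning re_job process, but they can also be spawned directly:
%%
%%   Pid = re_node_job:start_list_keys(default, 'riak@127.0.0.1',
%%                                     <<"default">>, <<"mybucket">>,
%%                                     [{sort, true}]),
%%
%% Results are streamed to a timestamped file under the Explorer data
%% directory and optionally sorted when the stream completes.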
127 | 128 | %%%=================================================================== 129 | %%% Private 130 | %%%=================================================================== 131 | 132 | do_list(From, {ok, ReqId}, Dir, Options) -> 133 | re_file_util:clean_dir(Dir), 134 | TimeStamp = re_file_util:timestamp_string(), 135 | Filename = filename:join([Dir, TimeStamp]), 136 | file:write_file(Filename, "", [write]), 137 | lager:info("List started for file: ~p.", [Filename]), 138 | {ok, Device} = file:open(Filename, [append]), 139 | case write_loop(From, ReqId, Device, []) of 140 | ok -> 141 | case proplists:get_value(sort, Options, true) of 142 | true -> 143 | re_file_util:sort_file(Filename); 144 | _ -> 145 | ok 146 | end, 147 | lager:info("File ~p written.", [Filename]), 148 | re_job:set_finish(From), 149 | file:close(Device), 150 | ok; 151 | {error, Reason} -> 152 | re_job:set_error(From, Reason), 153 | file:close(Device), 154 | {error, Reason} 155 | end; 156 | do_list(From, {error, Reason}, Dir, _) -> 157 | re_job:set_error(From, Reason), 158 | lager:error("Bucket list failed for dir: ~p with reason: ~p", [Dir, Reason]), 159 | {error, Reason}. 160 | 161 | -spec write_loop(pid(), term(), term(), term()) -> 162 | {error, term()} | ok. 
163 | write_loop(From, ReqId, Device, Meta) -> 164 | case re_job:set_meta(From, Meta) of 165 | {error, Reason} -> 166 | {error, Reason}; 167 | ok -> 168 | receive 169 | {ReqId, done} -> 170 | ok; 171 | {ReqId, {error, Reason}} -> 172 | {error, Reason}; 173 | {ReqId, {_, Entries}} -> 174 | case length(Entries) > 0 of 175 | true -> 176 | Entries1 = [ case E of 177 | {_,_,_} -> 178 | ts_entry_to_list(E); 179 | _ -> 180 | binary_to_list(E) 181 | end 182 | || E <- Entries ], 183 | io:fwrite(Device, "~s", [string:join(Entries1, io_lib:nl()) ++ io_lib:nl()]); 184 | _ -> 185 | ok 186 | end, 187 | Meta1 = [{count, proplists:get_value(count, Meta, 0) + length(Entries)}], 188 | write_loop(From, ReqId, Device, Meta1); 189 | Reason -> 190 | {error, Reason} 191 | end 192 | 193 | end. 194 | 195 | ts_entry_to_list({F1, F2, F3}) -> 196 | binary_to_list(list_to_binary(mochijson2:encode([F1, F2, F3]))). 197 | -------------------------------------------------------------------------------- /src/re_riak_patch.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License.
18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_riak_patch). 22 | -export([ 23 | is_enterprise/0, 24 | is_timeseries/0, 25 | version/0, 26 | bucket_type_create/1, 27 | bucket_type_activate/1, 28 | bucket_type_update/1, 29 | bucket_types/0, 30 | bucket_type_exists/1, 31 | riak_version/0, 32 | tail_log/2, 33 | get_log_files/0, 34 | get_config/1, 35 | get_config_files/0, 36 | effective_config/0]). 37 | 38 | %%%=================================================================== 39 | %%% API 40 | %%%=================================================================== 41 | 42 | %% Increment this when code changes 43 | version() -> 14. 44 | 45 | is_enterprise() -> 46 | case code:ensure_loaded(riak_repl_console) of 47 | {module,riak_repl_console} -> true; 48 | _ -> false 49 | end. 50 | 51 | is_timeseries() -> 52 | case code:ensure_loaded(riak_ql_ddl) of 53 | {module,riak_ql_ddl} -> true; 54 | _ -> false 55 | end. 56 | 57 | bucket_type_print_status(Type, undefined) -> 58 | {error, list_to_binary(io_lib:format("~ts is not an existing bucket type", [Type]))}; 59 | bucket_type_print_status(Type, created) -> 60 | {error, list_to_binary(io_lib:format("~ts has been created but cannot be activated yet", [Type]))}; 61 | bucket_type_print_status(Type, ready) -> 62 | list_to_binary(io_lib:format("~ts has been created and may be activated", [Type])); 63 | bucket_type_print_status(Type, active) -> 64 | list_to_binary(io_lib:format("~ts is active", [Type])). 65 | 66 | bucket_type_activate([TypeStr]) -> 67 | Type = unicode:characters_to_binary(TypeStr, utf8, utf8), 68 | bucket_type_print_activate_result(Type, riak_core_bucket_type:activate(Type), false). 
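%% Usage sketch (the type name is a placeholder): activation takes the same
%% single-element argument list the riak-admin CLI would pass in:
%%
%%   re_riak_patch:bucket_type_activate(["my_type"]).
%%   %% e.g. <<"my_type has been activated">>, or an {error, Binary}
%%   %% tuple when the type does not exist or is not yet ready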
69 | 70 | bucket_type_print_activate_result(Type, ok, _) -> 71 | list_to_binary(io_lib:format("~ts has been activated", [Type])); 72 | bucket_type_print_activate_result(Type, {error, undefined}, _IsFirst) -> 73 | bucket_type_print_status(Type, undefined); 74 | bucket_type_print_activate_result(Type, {error, not_ready}, _IsFirst) -> 75 | bucket_type_print_status(Type, created). 76 | 77 | bucket_type_create([TypeStr, PropsStr]) -> 78 | Type = unicode:characters_to_binary(TypeStr, utf8, utf8), 79 | CreateTypeFn = 80 | fun(Props) -> 81 | Result = riak_core_bucket_type:create(Type, Props), 82 | bucket_type_print_create_result(Type, Result) 83 | end, 84 | Result = bucket_type_create(CreateTypeFn, Type, decode_json_props(PropsStr)), 85 | wait_for({re_riak_patch, bucket_type_exists, TypeStr}, 10), 86 | Result. 87 | 88 | bucket_type_exists(Type) -> 89 | case riak_core_bucket_type:status(Type) of 90 | ready -> true; 91 | _ -> false 92 | end. 93 | 94 | %% Attempt to decode the json to string or provide defaults if empty. 95 | %% mochijson2 has no types exported so returning any. 96 | -spec decode_json_props(JsonProps::string()) -> any(). 97 | decode_json_props("") -> 98 | {struct, [{<<"props">>, {struct, []}}]}; 99 | decode_json_props(JsonProps) -> 100 | catch mochijson2:decode(JsonProps). 101 | 102 | -spec bucket_type_create( 103 | CreateTypeFn :: fun(([proplists:property()]) -> ok), 104 | Type :: binary(), 105 | JSON :: any()) -> ok | error. 
106 | bucket_type_create(CreateTypeFn, Type, {struct, Fields}) -> 107 | case Fields of 108 | [{<<"props", _/binary>>, {struct, Props1}}] -> 109 | case code:ensure_loaded(riak_kv_ts_util) of 110 | {module,riak_kv_ts_util} -> 111 | case catch riak_kv_ts_util:maybe_parse_table_def(Type, Props1) of 112 | {ok, Props2} -> 113 | Props3 = [riak_kv_wm_utils:erlify_bucket_prop(P) || P <- Props2], 114 | CreateTypeFn(Props3); 115 | {error, ErrorMessage} when is_list(ErrorMessage) orelse is_binary(ErrorMessage) -> 116 | bucket_type_print_create_result_error_header(Type), 117 | {error, list_to_binary(io_lib:format("~ts", [ErrorMessage]))}; 118 | {error, Error} -> 119 | bucket_type_print_create_result(Type, {error, Error}) 120 | end; 121 | _ -> 122 | Props2 = [riak_kv_wm_utils:erlify_bucket_prop(P) || P <- Props1], 123 | CreateTypeFn(Props2) 124 | end; 125 | _ -> 126 | {error, list_to_binary(io_lib:format("Cannot create bucket type ~ts: no props field found in json", [Type]))} 127 | end; 128 | bucket_type_create(_, Type, _) -> 129 | {error, list_to_binary(io_lib:format("Cannot create bucket type ~ts: invalid json", [Type]))}. 130 | 131 | bucket_type_print_create_result(Type, ok) -> 132 | list_to_binary(io_lib:format("~ts created", [Type])); 133 | bucket_type_print_create_result(Type, {error, Reason}) -> 134 | bucket_type_print_create_result_error_header(Type), 135 | {error, list_to_binary(io_lib:format("Error creating bucket type: ~p", [Reason]))}. 136 | 137 | bucket_type_print_create_result_error_header(Type) -> 138 | {error, list_to_binary(io_lib:format("Error creating bucket type ~ts:", [Type]))}. 139 | 140 | bucket_type_update([TypeStr, PropsStr]) -> 141 | Type = unicode:characters_to_binary(TypeStr, utf8, utf8), 142 | bucket_type_update(Type, catch mochijson2:decode(PropsStr)). 
143 | 144 | bucket_type_update(Type, {struct, Fields}) -> 145 | case proplists:get_value(<<"props">>, Fields) of 146 | {struct, Props} -> 147 | ErlProps = [riak_kv_wm_utils:erlify_bucket_prop(P) || P <- Props], 148 | bucket_type_print_update_result(Type, riak_core_bucket_type:update(Type, ErlProps)); 149 | _ -> 150 | {error, list_to_binary(io_lib:format("Cannot create bucket type ~ts: no props field found in json", [Type]))} 151 | end; 152 | bucket_type_update(Type, _) -> 153 | {error, list_to_binary(io_lib:format("Cannot update bucket type: ~ts: invalid json", [Type]))}. 154 | 155 | bucket_type_print_update_result(Type, ok) -> 156 | list_to_binary(io_lib:format("~ts updated", [Type])); 157 | bucket_type_print_update_result(Type, {error, Reason}) -> 158 | {error, list_to_binary(io_lib:format("Error updating bucket type ~ts, Reason:~p", [Type, Reason]))}. 159 | 160 | bucket_types() -> 161 | It = riak_core_bucket_type:iterator(), 162 | List0 = [[{name, <<"default">>},{props, [{active, true} | format_props(riak_core_bucket_props:defaults(), [])]}]], 163 | fetch_types(It, List0). 164 | 165 | riak_version() -> 166 | LibDir = app_helper:get_env(riak_core, platform_lib_dir), 167 | EnvFile = filename:join([LibDir, "env.sh"]), 168 | {ok, Device} = file:open(EnvFile, read), 169 | Proc = fun(Entry, Accum) -> 170 | case re:run(Entry, "^APP_VERSION=(.*)", [{capture, all_but_first, list}]) of 171 | {match, [Version]} -> list_to_binary(Version); 172 | _ -> Accum 173 | end 174 | end, 175 | for_each_line(Device, Proc, undefined). 
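%% Usage sketch: the version is read from the APP_VERSION line of the
%% generated env.sh under platform_lib_dir (the value shown is illustrative):
%%
%%   re_riak_patch:riak_version().
%%   %% e.g. <<"2.1.4">>, or `undefined` if no APP_VERSION line is found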
176 | 177 | tail_log(Name, NumLines) -> 178 | LogDir = app_helper:get_env(riak_core, platform_log_dir), 179 | LogFile = filename:join([LogDir, Name]), 180 | case file:open(LogFile, read) of 181 | {ok, Device} -> 182 | Proc = fun(Entry, Accum) -> [Entry|Accum] end, 183 | Lines0 = for_each_line(Device, Proc, []), 184 | Count = length(Lines0), 185 | Lines1 = lists:nthtail(max(Count-NumLines, 0), Lines0), 186 | {Count, lists:map(fun(Line) -> list_to_binary(re:replace(Line, "(^\\s+)|(\\s+$)", "", [global,{return,list}])) end, Lines1)}; 187 | _ -> 188 | {error, not_found} 189 | end. 190 | 191 | get_config_files() -> 192 | EtcDir = app_helper:get_env(riak_core, platform_etc_dir), 193 | 194 | case file:list_dir(EtcDir) of 195 | {ok,Files} -> Files; 196 | _ -> [] 197 | end. 198 | 199 | get_log_files() -> 200 | LogDir = app_helper:get_env(riak_core, platform_log_dir), 201 | 202 | case file:list_dir(LogDir) of 203 | {ok,Files} -> Files; 204 | _ -> [] 205 | end. 206 | 207 | get_config(Name) -> 208 | EtcDir = app_helper:get_env(riak_core, platform_etc_dir), 209 | EtcFile = filename:join([EtcDir, Name]), 210 | case file:open(EtcFile, read) of 211 | {ok, Device} -> 212 | Proc = fun(Entry, Accum) -> 213 | B = re:replace(Entry, "(^\\s+)|(\\s+$)", "", [global,{return,list}]), 214 | [list_to_binary(B)|Accum] 215 | end, 216 | Lines = for_each_line(Device, Proc, ""), 217 | lists:reverse(Lines); 218 | _ -> 219 | {error, not_found} 220 | end. 
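%% Usage sketch (file names depend on the node's configuration):
%%
%%   re_riak_patch:tail_log("console.log", 10).
%%   %% -> {TotalLineCount, [<<"last line">>, ...]} or {error, not_found}
%%
%%   re_riak_patch:get_config("riak.conf").
%%   %% -> [<<"line">>, ...], the whitespace-trimmed file contents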
221 | 222 | effective_config() -> 223 | EtcDir = app_helper:get_env(riak_core, platform_etc_dir), 224 | Conf = load_riak_conf(EtcDir), 225 | Schema = load_schema(), 226 | {AppConfigExists, _} = check_existence(EtcDir, "app.config"), 227 | {VMArgsExists, _} = check_existence(EtcDir, "vm.args"), 228 | case {AppConfigExists, VMArgsExists} of 229 | {false, false} -> 230 | AdvConfig = load_advanced_config(EtcDir), 231 | EffectiveString = string:join(cuttlefish_effective:build(Conf, Schema, AdvConfig), "\n"), 232 | EffectiveProplist = generate_effective_proplist(EffectiveString), 233 | case AdvConfig of 234 | [] -> [{config, EffectiveProplist}]; 235 | _ -> [{config, EffectiveProplist}, {advanced_config, format_value(AdvConfig)}] 236 | end; 237 | _ -> 238 | {error, legacy_config} 239 | end. 240 | 241 | %%%=================================================================== 242 | %%% Private 243 | %%%=================================================================== 244 | 245 | check_existence(EtcDir, Filename) -> 246 | FullName = filename:join(EtcDir, Filename), 247 | Exists = filelib:is_file(FullName), 248 | lager:info("Checking ~s exists... ~p", [FullName, Exists]), 249 | {Exists, FullName}. 250 | 251 | load_riak_conf(EtcDir) -> 252 | case check_existence(EtcDir, "riak.conf") of 253 | {true, ConfFile} -> 254 | cuttlefish_conf:files([ConfFile]); 255 | _ -> 256 | {error, riak_conf_not_found} 257 | end. 258 | 259 | load_schema() -> 260 | SchemaDir = app_helper:get_env(riak_core, platform_lib_dir), 261 | SchemaFiles = [filename:join(SchemaDir, Filename) || Filename <- filelib:wildcard("*.schema", SchemaDir)], 262 | SortedSchemaFiles = lists:sort(SchemaFiles), 263 | cuttlefish_schema:files(SortedSchemaFiles). 
264 | 265 | load_advanced_config(EtcDir) -> 266 | case check_existence(EtcDir, "advanced.config") of 267 | {true, AdvancedConfigFile} -> 268 | case file:consult(AdvancedConfigFile) of 269 | {ok, [AdvancedConfig]} -> 270 | AdvancedConfig; 271 | {error, Error} -> 272 | lager:error("Error parsing advanced.config: ~s", [file:format_error(Error)]), 273 | %% Fall back to an empty config so effective_config/0 can still build 274 | [] 275 | end; 276 | _ -> 277 | [] 278 | end. 279 | 280 | generate_effective_proplist(EffectiveString) -> 281 | EffectivePropList = conf_parse:parse(EffectiveString), 282 | lists:foldl( 283 | fun({Variable, Value}, Acc) -> 284 | Var = list_to_binary(string:join(Variable, ".")), 285 | Val = case Value of 286 | S when is_list(S) -> list_to_binary(S); 287 | _ -> Value 288 | end, 289 | [{Var, Val} | Acc] 290 | end, 291 | [], 292 | EffectivePropList 293 | ). 294 | 295 | for_each_line(Device, Proc, Accum) -> 296 | case io:get_line(Device, "") of 297 | eof -> 298 | file:close(Device), Accum; 299 | Line -> 300 | NewAccum = Proc(Line, Accum), 301 | for_each_line(Device, Proc, NewAccum) 302 | end. 303 | 304 | fetch_types(It, Acc) -> 305 | case riak_core_bucket_type:itr_done(It) of 306 | true -> 307 | riak_core_bucket_type:itr_close(It), 308 | lists:reverse(Acc); 309 | _ -> 310 | {Type, Props} = riak_core_bucket_type:itr_value(It), 311 | T = [{name, Type},{props, format_props(Props, [])}], 312 | fetch_types(riak_core_bucket_type:itr_next(It), [T | Acc]) 313 | end. 314 | 315 | format_props([], Acc) -> 316 | lists:reverse(Acc); 317 | format_props([{ddl, Val} | Rest], Acc) -> 318 | format_props(Rest, [{ddl, format_ddl(Val)} | Acc]); 319 | format_props([{Name, Val} | Rest], Acc) -> 320 | format_props(Rest, [{Name, format_value(Val)} | Acc]). 321 | 322 | format_list([], Acc) -> 323 | lists:reverse(Acc); 324 | format_list([Val | Rest], Acc) -> 325 | format_list(Rest, [format_value(Val) | Acc]). 
326 | 327 | format_value(Val) when is_list(Val) -> 328 | format_list(Val, []); 329 | format_value(Val) when is_number(Val) -> 330 | Val; 331 | format_value(Val) when is_binary(Val) -> 332 | Val; 333 | format_value(true) -> 334 | true; 335 | format_value(false) -> 336 | false; 337 | format_value(Val) -> 338 | list_to_binary(lists:flatten(io_lib:format("~p", [Val]))). 339 | 340 | format_ddl({ddl_v1, Name, Fields, {key_v1, PartitionKey}, {key_v1, LocalKey}}) -> 341 | [{name, Name}, 342 | {fields, format_ddl_fields(Fields, [])}, 343 | {partition_key, format_ddl_key(PartitionKey, [])}, 344 | {local_key, format_ddl_key(LocalKey, [])}]; 345 | format_ddl({ddl_v2, Name, Fields, {key_v1, PartitionKey}, {key_v1, LocalKey}, DDLVersion}) -> 346 | [{name, Name}, 347 | {fields, format_ddl_fields(Fields, [])}, 348 | {partition_key, format_ddl_key(PartitionKey, [])}, 349 | {local_key, format_ddl_key(LocalKey, [])}, 350 | {minimum_capability, DDLVersion}]. 351 | 352 | format_ddl_fields([], Accum) -> 353 | lists:reverse(Accum); 354 | format_ddl_fields([{riak_field_v1,Name,Position,Type,Optional}|Fields], Accum) -> 355 | Field = {Name, [{position, Position}, 356 | {type, Type}, 357 | {optional, Optional}]}, 358 | format_ddl_fields(Fields, [Field | Accum]). 
359 | 360 | format_ddl_key([], Accum) -> 361 | lists:reverse(Accum); 362 | format_ddl_key([{param_v1,[Name]}|Keys], Accum) -> 363 | format_ddl_key(Keys, [Name | Accum]); 364 | format_ddl_key([{param_v2,[Name], Ordering}|Keys], Accum) -> 365 | Key = {Name, [{ordering, Ordering}]}, 366 | format_ddl_key(Keys, [Key | Accum]); 367 | format_ddl_key([{hash_fn_v1,riak_ql_quanta, 368 | Fn,[{param_v1,[Name]},N,Unit],_Type}|Keys], Accum) -> 369 | format_ddl_key(Keys, [list_to_binary( 370 | atom_to_list(Fn) ++ "(" ++ binary_to_list(Name) ++ "," ++ 371 | integer_to_list(N) ++ "," ++ 372 | atom_to_list(Unit) ++ ")")|Accum]); 373 | format_ddl_key([{hash_fn_v1,riak_ql_quanta, 374 | Fn,[{param_v2,[Name],Ordering},N,Unit],_Type}|Keys], Accum) -> 375 | format_ddl_key(Keys, [{list_to_binary( 376 | atom_to_list(Fn) ++ "(" ++ binary_to_list(Name) ++ "," ++ 377 | integer_to_list(N) ++ "," ++ 378 | atom_to_list(Unit) ++ ")"), [{ordering, Ordering}]}|Accum]). 379 | 380 | %% @doc Wait for `Check' for the given number of `Seconds'. 381 | wait_for(_, 0) -> 382 | ok; 383 | wait_for(Check={M,F,A}, Seconds) when Seconds > 0 -> 384 | case M:F(A) of 385 | true -> 386 | ok; 387 | false -> 388 | lager:debug("Waiting for ~p:~p(~p)...", [M, F, A]), 389 | timer:sleep(1000), 390 | wait_for(Check, Seconds - 1) 391 | end. 392 | -------------------------------------------------------------------------------- /src/re_sup.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. 
You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_sup). 22 | -behaviour(supervisor). 23 | -export([start_link/1]). 24 | -export([init/1]). 25 | 26 | 27 | %%%=================================================================== 28 | %%% API 29 | %%%=================================================================== 30 | 31 | start_link(StartArgs) -> 32 | supervisor:start_link({local, ?MODULE}, ?MODULE, StartArgs). 33 | 34 | %%%=================================================================== 35 | %%% Callbacks 36 | %%%=================================================================== 37 | 38 | init(Args) -> 39 | JobManager = {re_job_manager, 40 | {re_job_manager, start_link, []}, 41 | permanent, 5000, worker, [re_job_manager]}, 42 | Specs = case webmachine_is_running() of 43 | true -> 44 | add_wm_routes(Args), 45 | [JobManager]; 46 | _ -> 47 | Web = {webmachine_mochiweb, 48 | {webmachine_mochiweb, start, [web_config(Args)]}, 49 | permanent, 5000, worker, [mochiweb_socket_server]}, 50 | [Web, JobManager] 51 | end, 52 | 53 | {ok, { {one_for_one, 10, 10}, Specs} }. 54 | 55 | %% ==================================================================== 56 | %% Private 57 | %% ==================================================================== 58 | 59 | add_wm_routes(Args) -> 60 | [webmachine_router:add_route(R) || R <- lists:reverse(re_wm:dispatch(Args))]. 
61 | 62 | web_config(Args) -> 63 | {Ip, Port} = riak_explorer:host_port(), 64 | WebConfig0 = [ 65 | {ip, Ip}, 66 | {port, Port}, 67 | {nodelay, true}, 68 | {log_dir, "log"}, 69 | {dispatch, re_wm:dispatch(Args)} 70 | ], 71 | WebConfig1 = case application:get_env(riak_explorer, ssl) of 72 | {ok, SSLOpts} -> 73 | WebConfig0 ++ [{ssl, true}, {ssl_opts, SSLOpts}]; 74 | undefined -> 75 | WebConfig0 76 | end, 77 | WebConfig1. 78 | 79 | -spec webmachine_is_running() -> boolean(). 80 | webmachine_is_running() -> 81 | try 82 | case mochiweb_socket_server:get(webmachine_mochiweb, port) of 83 | P when is_integer(P) -> true; 84 | _ -> false 85 | end 86 | catch 87 | _:_ -> 88 | riak_explorer:is_riak() 89 | end. 90 | -------------------------------------------------------------------------------- /src/re_wm.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_wm). 22 | 23 | -export([resources/0, 24 | routes/0, 25 | dispatch/0, 26 | dispatch/1, 27 | base_route/0]). 
28 | 29 | -export([rd_url/1, 30 | rd_accepts/2, 31 | add_content/2, 32 | add_error/2, 33 | rd_content/2, 34 | rd_cluster_exists/1, 35 | rd_cluster/1, 36 | rd_node_exists/1, 37 | rd_node/1, 38 | maybe_atomize/1, 39 | maybe_to_list/1, 40 | url_decode/1 41 | ]). 42 | 43 | -export([init/1, 44 | service_available/2, 45 | allowed_methods/2, 46 | content_types_provided/2, 47 | content_types_accepted/2, 48 | resource_exists/2, 49 | provide_content/2, 50 | delete_resource/2, 51 | process_post/2, 52 | provide_text_content/2, 53 | provide_static_content/2, 54 | accept_content/2, 55 | post_is_create/2, 56 | create_path/2, 57 | last_modified/2]). 58 | 59 | -include_lib("webmachine/include/webmachine.hrl"). 60 | -include("re_wm.hrl"). 61 | 62 | -record(ctx, { 63 | proxy :: {module(), atom()} | undefined, 64 | route :: route() 65 | }). 66 | 67 | %%%=================================================================== 68 | %%% API 69 | %%%=================================================================== 70 | 71 | %%% Routing 72 | 73 | -spec resources() -> [module()]. 74 | resources() -> 75 | [ 76 | re_wm_explore, 77 | re_wm_control, 78 | re_wm_proxy, 79 | re_wm_static 80 | ]. 81 | 82 | -spec routes() -> [route()]. 83 | routes() -> 84 | routes(resources(), []). 85 | 86 | -spec routes([module()], [route()]) -> [route()]. 87 | routes([], Routes) -> 88 | Routes; 89 | routes([Resource|Rest], Routes) -> 90 | routes(Rest, Routes ++ Resource:routes()). 91 | 92 | -spec dispatch() -> [{[string() | atom()], module(), [term()]}]. 93 | dispatch() -> 94 | dispatch([]). 95 | 96 | -spec dispatch([term()]) -> [{[string() | atom()], module(), [term()]}]. 97 | dispatch(Args) -> 98 | WmRoutes = build_wm_routes(base_route(), routes(), []), 99 | [ {R, M, A ++ Args} || {R, M, A} <- WmRoutes ]. 100 | 101 | -spec base_route() -> [string()]. 102 | base_route() -> 103 | case riak_explorer:is_riak() of 104 | false -> []; 105 | true -> ["admin"] 106 | end. 
107 | 108 | %%% Utility 109 | 110 | -spec rd_url(#wm_reqdata{}) -> string(). 111 | rd_url(ReqData) -> 112 | BaseUrl = wrq:base_uri(ReqData), 113 | case base_route() of 114 | [] -> 115 | BaseUrl ++ "/"; 116 | [R] -> 117 | BaseUrl ++ "/" ++ R ++ "/" 118 | end. 119 | 120 | -spec rd_accepts(string(), #wm_reqdata{}) -> boolean(). 121 | rd_accepts(CT, ReqData) -> 122 | case wrq:get_req_header("Accept", ReqData) of 123 | undefined -> 124 | true; 125 | Accept -> 126 | string:str(Accept,CT) > 0 127 | end. 128 | 129 | -spec add_content(term(), #wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 130 | add_content({error, not_found}, ReqData) -> 131 | {{halt, 404}, ReqData}; 132 | add_content({error, Reason}, ReqData) -> 133 | {{halt, 500}, add_error(Reason, ReqData)}; 134 | add_content(ok, ReqData) -> 135 | {true, ReqData}; 136 | add_content(Content, ReqData) -> 137 | Tokens = string:tokens(wrq:path(ReqData), "/"), 138 | Last = lists:nth(length(Tokens), Tokens), 139 | {true, wrq:append_to_response_body(mochijson2:encode([{list_to_binary(Last), Content}]), ReqData)}. 140 | 141 | -spec add_error(term(), #wm_reqdata{}) -> #wm_reqdata{}. 142 | add_error(Error, ReqData) -> 143 | wrq:append_to_response_body(mochijson2:encode([{error, list_to_binary(io_lib:format("~p", [Error]))}]), ReqData). 144 | 145 | -spec rd_content(term(), #wm_reqdata{}) -> 146 | {[{binary(), term()}], #wm_reqdata{}}. 147 | rd_content({error, not_found}, ReqData) -> 148 | {{halt, 404}, ReqData}; 149 | rd_content({error, Reason}, ReqData) -> 150 | {{halt, 500}, add_error(Reason, ReqData)}; 151 | rd_content(Content, ReqData) -> 152 | Tokens = string:tokens(wrq:path(ReqData), "/"), 153 | Last = lists:nth(length(Tokens), Tokens), 154 | {[{list_to_binary(Last), Content}], ReqData}. 155 | 156 | -spec rd_cluster_exists(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 157 | rd_cluster_exists(ReqData) -> 158 | C = rd_cluster(ReqData), 159 | {re_cluster:exists(C), ReqData}. 
160 | 161 | -spec rd_cluster(#wm_reqdata{}) -> re_cluster:re_cluster(). 162 | rd_cluster(ReqData) -> 163 | maybe_atomize(wrq:path_info(cluster, ReqData)). 164 | 165 | -spec rd_node_exists(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 166 | rd_node_exists(ReqData) -> 167 | case rd_cluster_exists(ReqData) of 168 | {true,_} -> 169 | case rd_node(ReqData) of 170 | {error, not_found} -> 171 | {false, ReqData}; 172 | N -> 173 | {re_node:exists(N), ReqData} 174 | end; 175 | _ -> 176 | {false, ReqData} 177 | end. 178 | 179 | -spec rd_node(#wm_reqdata{}) -> {error, not_found} | re_node:re_node(). 180 | rd_node(ReqData) -> 181 | N = url_decode(wrq:path_info(node, ReqData)), 182 | N1 = maybe_atomize(N), 183 | 184 | case N1 of 185 | undefined -> 186 | C = rd_cluster(ReqData), 187 | re_cluster:riak_node(maybe_atomize(C)); 188 | N2 -> 189 | N2 190 | end. 191 | 192 | maybe_to_list(Data) when is_list(Data) -> Data; 193 | maybe_to_list(Data) when is_atom(Data) -> atom_to_list(Data). 194 | 195 | maybe_atomize(Data) when is_list(Data) -> list_to_atom(Data); 196 | maybe_atomize(Data) when is_atom(Data) -> Data. 197 | 198 | url_decode(Data) -> 199 | re:replace(maybe_to_list(Data), "%40", "@", [{return, list}]). 200 | 201 | %%%=================================================================== 202 | %%% Webmachine Callbacks 203 | %%%=================================================================== 204 | 205 | init(Args) -> 206 | Ctx = 207 | case proplists:get_value(proxy, Args) of 208 | undefined -> 209 | #ctx{}; 210 | {PM, PF} -> 211 | #ctx{proxy = {PM, PF}} 212 | end, 213 | {ok, Ctx}. 
214 | 215 | service_available(ReqData, Ctx) -> 216 | Route = case get_route(base_route(), routes(), ReqData) of 217 | #route{}=R -> 218 | R; 219 | _ -> 220 | [R] = re_wm_static:routes(), 221 | R 222 | end, 223 | {Available, ReqData1} = 224 | case Route#route.available of 225 | {M, F} -> maybe_proxy_request(M, F, ReqData, Ctx); 226 | Bool -> {Bool, ReqData} 227 | end, 228 | {Available, ReqData1, Ctx#ctx{route = Route}}. 229 | 230 | allowed_methods(ReqData, Ctx = #ctx{route = Route}) -> 231 | {Route#route.methods, ReqData, Ctx}. 232 | 233 | content_types_provided(ReqData, Ctx = #ctx{route = Route}) -> 234 | case Route#route.provides of 235 | {M, F} -> 236 | {CTs, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 237 | {CTs, ReqData1, Ctx}; 238 | Provides -> 239 | {Provides, ReqData, Ctx} 240 | end. 241 | 242 | content_types_accepted(ReqData, Ctx = #ctx{route = Route}) -> 243 | {Route#route.accepts, ReqData, Ctx}. 244 | 245 | resource_exists(ReqData, Ctx = #ctx{route = #route{exists = {M, F}}}) -> 246 | {Success, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 247 | {Success, ReqData1, Ctx}; 248 | resource_exists(ReqData, Ctx = #ctx{route = #route{exists = Exists}}) 249 | when is_boolean(Exists) -> 250 | {Exists, ReqData, Ctx}. 251 | 252 | delete_resource(ReqData, Ctx = #ctx{route = #route{delete = {M, F}}}) -> 253 | {Success, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 254 | {Success, ReqData1, Ctx}. 255 | 256 | provide_content(ReqData, Ctx = #ctx{route = #route{content = {M, F}}}) -> 257 | case maybe_proxy_request(M, F, ReqData, Ctx) of 258 | {{halt,_}=Body, ReqData1} -> 259 | {Body, ReqData1, Ctx}; 260 | {Body, ReqData1} -> 261 | {mochijson2:encode(Body), ReqData1, Ctx} 262 | end. 
263 | 264 | provide_text_content(ReqData, Ctx = #ctx{route = #route{content = {M, F}}}) -> 265 | {Body, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 266 | case Body of 267 | {halt,_}=B -> 268 | {B, ReqData1, Ctx}; 269 | B when is_binary(B) -> 270 | {binary_to_list(B), ReqData1, Ctx}; 271 | [{_,Props}]=B -> 272 | %% TODO: Improve 273 | case {proplists:get_value(lines, Props), 274 | proplists:get_value(keys, Props), 275 | proplists:get_value(buckets, Props)} of 276 | {undefined, undefined, undefined} -> 277 | {mochijson2:encode(B), ReqData1, Ctx}; 278 | {Values, undefined, undefined} -> 279 | Lines = [binary_to_list(L) || L <- Values], 280 | {string:join(Lines, io_lib:nl()), ReqData1, Ctx}; 281 | {undefined, Values, undefined} -> 282 | Lines = [binary_to_list(L) || L <- Values], 283 | {string:join(Lines, io_lib:nl()), ReqData1, Ctx}; 284 | {undefined, undefined, Values} -> 285 | Lines = [binary_to_list(L) || L <- Values], 286 | {string:join(Lines, io_lib:nl()), ReqData1, Ctx} 287 | end 288 | end. 289 | 290 | provide_static_content(ReqData, Ctx = #ctx{route = #route{content = {M, F}}}) -> 291 | {Body, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 292 | {Body, ReqData1, Ctx}. 293 | 294 | accept_content(ReqData, Ctx = #ctx{route = #route{accept = {M, F}}}) -> 295 | {Success, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 296 | {Success, ReqData1, Ctx}; 297 | accept_content(ReqData, Ctx = #ctx{route = #route{accept = undefined}}) -> 298 | {false, ReqData, Ctx}. 299 | 300 | process_post(ReqData, Ctx = #ctx{route = #route{accept = {M, F}}}) -> 301 | {Success, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 302 | {Success, ReqData1, Ctx}. 303 | 304 | post_is_create(ReqData, Ctx = #ctx{route = #route{post_create = PostCreate}}) -> 305 | {PostCreate, ReqData, Ctx}. 
306 | 307 | create_path(ReqData, Ctx = #ctx{route = #route{post_path = {M, F}}}) -> 308 | {Path, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 309 | {Path, ReqData1, Ctx}. 310 | 311 | last_modified(ReqData, Ctx = #ctx{route = #route{last_modified = undefined}}) -> 312 | {undefined, ReqData, Ctx}; 313 | last_modified(ReqData, Ctx = #ctx{route = #route{last_modified = {M, F}}}) -> 314 | {LM, ReqData1} = maybe_proxy_request(M, F, ReqData, Ctx), 315 | {LM, ReqData1, Ctx}. 316 | 317 | %% ==================================================================== 318 | %% Private 319 | %% ==================================================================== 320 | 321 | get_route(_, [], _ReqData) -> 322 | undefined; 323 | get_route(BaseRoute, [Route=#route{base=[],path=Paths} | Rest], ReqData) -> 324 | case get_route_path(BaseRoute, [], Paths, Route, ReqData) of 325 | undefined -> 326 | get_route(BaseRoute, Rest, ReqData); 327 | R -> R 328 | end; 329 | get_route(BaseRoute, [Route=#route{base=Bases,path=[]} | Rest], ReqData) -> 330 | case get_route_path(BaseRoute, [], Bases, Route, ReqData) of 331 | undefined -> 332 | get_route(BaseRoute, Rest, ReqData); 333 | R -> R 334 | end; 335 | get_route(BaseRoute, [Route=#route{base=Bases,path=Paths} | Rest], ReqData) -> 336 | case get_route_base(BaseRoute, Bases, Paths, Route, ReqData) of 337 | undefined -> 338 | get_route(BaseRoute, Rest, ReqData); 339 | R -> R 340 | end. 341 | 342 | get_route_base(_, [], _, _, _) -> 343 | undefined; 344 | get_route_base(BaseRoute, [Base|Rest], Paths, Route, ReqData) -> 345 | case get_route_path(BaseRoute, Base, Paths, Route, ReqData) of 346 | undefined -> 347 | get_route_base(BaseRoute, Rest, Paths, Route, ReqData); 348 | R -> R 349 | end. 
350 | 351 | get_route_path(_, _, [], _, _) -> 352 | undefined; 353 | get_route_path(BaseRoute, Base, [Path|Rest], Route, ReqData) -> 354 | ReqPath = string:tokens(wrq:path(ReqData), "/"), 355 | case expand_path(BaseRoute ++ Base ++ Path, ReqData, []) of 356 | ReqPath -> 357 | Route; 358 | _ -> 359 | get_route_path(BaseRoute, Base, Rest, Route, ReqData) 360 | end. 361 | 362 | expand_path([], _ReqData, Acc) -> 363 | lists:reverse(Acc); 364 | expand_path([Part|Rest], ReqData, Acc) when is_list(Part) -> 365 | expand_path(Rest, ReqData, [Part | Acc]); 366 | expand_path(['*'|Rest], ReqData, Acc) -> 367 | Tokens = string:tokens(wrq:path(ReqData), "/"), 368 | case length(Acc) > length(Tokens) of 369 | true -> 370 | undefined; 371 | false -> 372 | expand_path(Rest, ReqData, lists:reverse(lists:nthtail(length(Acc), Tokens)) ++ Acc) 373 | end; 374 | expand_path([Part|Rest], ReqData, Acc) when is_atom(Part) -> 375 | expand_path(Rest, ReqData, [wrq:path_info(Part, ReqData) | Acc]). 376 | 377 | build_wm_routes(_BaseRoute, [], Acc) -> 378 | lists:reverse(lists:flatten(Acc)); 379 | build_wm_routes(BaseRoute, [#route{base = [], path = Paths} | Rest], Acc) -> 380 | build_wm_routes(BaseRoute, Rest, [build_wm_route(BaseRoute, [], Paths, []) | Acc]); 381 | build_wm_routes(BaseRoute, [#route{base = Bases, path = []} | Rest], Acc) -> 382 | build_wm_routes(BaseRoute, Rest, [build_wm_route(BaseRoute, [], Bases, []) | Acc]); 383 | build_wm_routes(BaseRoute, [#route{base = Bases, path = Paths} | Rest], Acc) -> 384 | build_wm_routes(BaseRoute, Rest, [build_wm_routes(BaseRoute, Bases, Paths, []) | Acc]). 385 | 386 | build_wm_routes(_BaseRoute, [], _, Acc) -> 387 | Acc; 388 | build_wm_routes(BaseRoute, [Base|Rest], Paths, Acc) -> 389 | build_wm_routes(BaseRoute, Rest, Paths, [build_wm_route(BaseRoute, Base, Paths, [])|Acc]). 
390 | 391 | build_wm_route(_, _, [], Acc) -> 392 | Acc; 393 | build_wm_route(BaseRoute, Base, [Path|Rest], Acc) -> 394 | build_wm_route(BaseRoute, Base, Rest, [{BaseRoute ++ Base ++ Path, ?MODULE, []}|Acc]). 395 | 396 | maybe_proxy_request(M, F, ReqData, #ctx{proxy = undefined}) -> 397 | M:F(ReqData); 398 | maybe_proxy_request(M, F, ReqData, #ctx{proxy = {PM, PF}}) -> 399 | case PM:PF(M, F, ReqData) of 400 | {ok, Result} -> 401 | Result; 402 | {forward, local} -> 403 | M:F(ReqData); 404 | {forward, {location, Location, Path, NewPath}} -> 405 | re_wm_proxy:send_proxy_request(Location, Path, NewPath, ReqData) 406 | end. 407 | -------------------------------------------------------------------------------- /src/re_wm_control.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_wm_control). 22 | 23 | -export([routes/0]). 24 | 25 | -export([command0_exists/1, 26 | command0/1, 27 | command1_exists/1, 28 | command1/1, 29 | command2_exists/1, 30 | command2/1]). 31 | 32 | -define(BASE, "control"). 
33 | -define( 34 | CONTROL_BASE, 35 | [ 36 | [?BASE, "clusters", cluster], 37 | [?BASE, "nodes", node] 38 | ]). 39 | 40 | -define( 41 | COMMAND0_ROUTES, 42 | [ 43 | ["repair"], 44 | ["staged-leave"], 45 | ["plan"], 46 | ["commit"], 47 | ["clear"], 48 | ["status"], 49 | ["ringready"], 50 | ["transfers"], 51 | ["aae-status"], 52 | ["repl-clustername"], 53 | ["repl-connections"], 54 | ["repl-clusterstats"], 55 | ["repl-clusterstats-cluster_mgr"], 56 | ["repl-clusterstats-fs_coordinate"], 57 | ["repl-clusterstats-fullsync"], 58 | ["repl-clusterstats-proxy_get"], 59 | ["repl-clusterstats-realtime"], 60 | ["repl-realtime-start"], 61 | ["repl-realtime-stop"], 62 | ["repl-fullsync-start"], 63 | ["repl-fullsync-stop"]]). 64 | 65 | -define( 66 | COMMAND1_ROUTES, 67 | [ 68 | ["join", arg1], 69 | ["leave", arg1], 70 | ["staged-join", arg1], 71 | ["staged-leave", arg1], 72 | ["force-remove", arg1], 73 | ["repl-clustername", arg1], 74 | ["repl-disconnect", arg1], 75 | ["repl-realtime-enable", arg1], 76 | ["repl-realtime-disable", arg1], 77 | ["repl-realtime-start", arg1], 78 | ["repl-realtime-stop", arg1], 79 | ["repl-fullsync-enable", arg1], 80 | ["repl-fullsync-disable", arg1], 81 | ["repl-fullsync-start", arg1], 82 | ["repl-fullsync-stop", arg1]]). 83 | 84 | -define( 85 | COMMAND2_ROUTES, 86 | [ 87 | ["replace", arg1, arg2], 88 | ["staged-replace", arg1, arg2], 89 | ["force-replace", arg1, arg2], 90 | ["repl-connect", arg1, arg2], 91 | ["repl-clusterstats", arg1, arg2]]). 92 | 93 | -include_lib("webmachine/include/webmachine.hrl"). 94 | -include("re_wm.hrl"). 
95 | 96 | %%%=================================================================== 97 | %%% API 98 | %%%=================================================================== 99 | 100 | routes() -> 101 | [%% Base 102 | #route{base=?CONTROL_BASE, 103 | path=?COMMAND0_ROUTES, 104 | exists={?MODULE, command0_exists}, 105 | content={?MODULE,command0}}, 106 | #route{base=?CONTROL_BASE, 107 | path=?COMMAND1_ROUTES, 108 | exists={?MODULE, command1_exists}, 109 | content={?MODULE,command1}}, 110 | #route{base=?CONTROL_BASE, 111 | path=?COMMAND2_ROUTES, 112 | exists={?MODULE, command2_exists}, 113 | content={?MODULE,command2}} 114 | ]. 115 | 116 | %%%=================================================================== 117 | %%% Callbacks 118 | %%%=================================================================== 119 | 120 | command0_exists(ReqData) -> 121 | Command = rd_command(ReqData), 122 | case re_wm:rd_node_exists(ReqData) of 123 | {true, _} -> 124 | {lists:member([Command], ?COMMAND0_ROUTES), ReqData}; 125 | _ -> 126 | {false, ReqData} 127 | end. 
128 | 129 | 130 | command0(ReqData) -> 131 | Node = re_wm:rd_node(ReqData), 132 | Command = rd_command(ReqData), 133 | Response = 134 | case Command of 135 | "repair" -> re_node_control:repair(Node); 136 | "staged-leave" -> re_node_control:staged_leave(Node); 137 | "plan" -> re_node_control:plan(Node); 138 | "commit" -> re_node_control:commit(Node); 139 | "clear" -> re_node_control:clear(Node); 140 | "status" -> re_node_control:status(Node); 141 | "ringready" -> re_node_control:ringready(Node); 142 | "transfers" -> re_node_control:transfers(Node); 143 | "aae-status" -> re_node_control:aae_status(Node); 144 | "repl-clustername" -> re_node_control:repl_clustername(Node); 145 | "repl-connections" -> re_node_control:repl_connections(Node); 146 | "repl-realtime-start" -> re_node_control:repl_realtime_start(Node); 147 | "repl-realtime-stop" -> re_node_control:repl_realtime_stop(Node); 148 | "repl-fullsync-start" -> re_node_control:repl_fullsync_start(Node); 149 | "repl-fullsync-stop" -> re_node_control:repl_fullsync_stop(Node); 150 | "repl-clusterstats" -> re_node_control:repl_clusterstats(Node); 151 | "repl-clusterstats-cluster_mgr" -> re_node_control:repl_clusterstats_cluster_mgr(Node); 152 | "repl-clusterstats-fs_coordinate" -> re_node_control:repl_clusterstats_fs_coordinate(Node); 153 | "repl-clusterstats-fullsync" -> re_node_control:repl_clusterstats_fullsync(Node); 154 | "repl-clusterstats-proxy_get" -> re_node_control:repl_clusterstats_proxy_get(Node); 155 | "repl-clusterstats-realtime" -> re_node_control:repl_clusterstats_realtime(Node); 156 | _ -> {error, not_found} 157 | end, 158 | re_wm:rd_content(Response, ReqData). 159 | 160 | command1_exists(ReqData) -> 161 | Command = rd_command(ReqData), 162 | case re_wm:rd_node_exists(ReqData) of 163 | {true, _} -> 164 | {lists:member([Command, arg1], ?COMMAND1_ROUTES), ReqData}; 165 | _ -> 166 | {false, ReqData} 167 | end. 
168 | 169 | command1(ReqData) -> 170 | Node = re_wm:rd_node(ReqData), 171 | Arg1 = rd_arg1(ReqData), 172 | Command = rd_command(ReqData), 173 | Response = 174 | case Command of 175 | "join" -> re_node_control:join(Node, Arg1); 176 | "staged-join" -> re_node_control:staged_join(Node, Arg1); 177 | "leave" -> re_node_control:leave(Node, Arg1); 178 | "staged-leave" -> re_node_control:staged_leave(Node, Arg1); 179 | "force-remove" -> re_node_control:force_remove(Node, Arg1); 180 | "repl-clustername" -> re_node_control:repl_clustername(Node, Arg1); 181 | "repl-disconnect" -> re_node_control:repl_disconnect(Node, Arg1); 182 | "repl-realtime-enable" -> re_node_control:repl_realtime_enable(Node, Arg1); 183 | "repl-realtime-disable" -> re_node_control:repl_realtime_disable(Node, Arg1); 184 | "repl-realtime-start" -> re_node_control:repl_realtime_start(Node, Arg1); 185 | "repl-realtime-stop" -> re_node_control:repl_realtime_stop(Node, Arg1); 186 | "repl-fullsync-enable" -> re_node_control:repl_fullsync_enable(Node, Arg1); 187 | "repl-fullsync-disable" -> re_node_control:repl_fullsync_disable(Node, Arg1); 188 | "repl-fullsync-start" -> re_node_control:repl_fullsync_start(Node, Arg1); 189 | "repl-fullsync-stop" -> re_node_control:repl_fullsync_stop(Node, Arg1); 190 | _ -> {error, not_found} 191 | end, 192 | re_wm:rd_content(Response, ReqData). 193 | 194 | command2_exists(ReqData) -> 195 | Command = rd_command(ReqData), 196 | case re_wm:rd_node_exists(ReqData) of 197 | {true, _} -> 198 | {lists:member([Command, arg1, arg2], ?COMMAND2_ROUTES), ReqData}; 199 | _ -> 200 | {false, ReqData} 201 | end. 
202 | 203 | command2(ReqData) -> 204 | Node = re_wm:rd_node(ReqData), 205 | Arg1 = rd_arg1(ReqData), 206 | Arg2 = rd_arg2(ReqData), 207 | Command = rd_command(ReqData), 208 | Response = 209 | case Command of 210 | "staged-replace" -> re_node_control:staged_replace(Node, Arg1, Arg2); 211 | "replace" -> re_node_control:replace(Node, Arg1, Arg2); 212 | "force-replace" -> re_node_control:force_replace(Node, Arg1, Arg2); 213 | "repl-connect" -> re_node_control:repl_connect(Node, Arg1, Arg2); 214 | "repl-clusterstats" -> re_node_control:repl_clusterstats(Node, Arg1, Arg2); 215 | _ -> {error, not_found} 216 | end, 217 | re_wm:rd_content(Response, ReqData). 218 | 219 | %% ==================================================================== 220 | %% Private 221 | %% ==================================================================== 222 | 223 | rd_command(ReqData) -> 224 | case riak_explorer:is_riak() of 225 | true -> 226 | lists:nth(5, string:tokens(wrq:path(ReqData), "/")); 227 | false -> 228 | lists:nth(4, string:tokens(wrq:path(ReqData), "/")) 229 | end. 230 | 231 | rd_arg1(ReqData) -> 232 | re_wm:maybe_atomize(re_wm:url_decode(wrq:path_info(arg1, ReqData))). 233 | 234 | rd_arg2(ReqData) -> 235 | re_wm:maybe_atomize(re_wm:url_decode(wrq:path_info(arg2, ReqData))). 236 | -------------------------------------------------------------------------------- /src/re_wm_explore.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. 
You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_wm_explore). 22 | 23 | -export([routes/0]). 24 | 25 | -export([home/1, 26 | ping/1, 27 | routes/1, 28 | props/1, 29 | jobs/1]). 30 | 31 | -export([clusters/1, 32 | cluster/1]). 33 | 34 | -export([nodes/1, 35 | node/1, 36 | node_config/1, 37 | node_config_files/1, 38 | node_config_file_exists/1, 39 | node_config_file/1, 40 | node_log_files/1, 41 | node_log_file_exists/1, 42 | node_log_file/1, 43 | tables/1, 44 | table_put/1, 45 | tables_query/1, 46 | table_exists/1, 47 | table/1, 48 | table_query/1, 49 | refresh_keys_ts/1, 50 | keys_ts/1, 51 | keys_ts_put/1, 52 | keys_ts_delete/1]). 53 | 54 | -export([bucket_types/1, 55 | bucket_type_exists/1, 56 | bucket_type/1, 57 | bucket_type_put/1, 58 | bucket_type_jobs/1]). 59 | 60 | -export([refresh_buckets/1, 61 | buckets/1, 62 | buckets_delete/1, 63 | buckets_put/1, 64 | bucket_exists/1, 65 | bucket/1, 66 | bucket_delete/1, 67 | bucket_jobs/1]). 68 | 69 | -export([refresh_keys/1, 70 | keys/1, 71 | keys_delete/1, 72 | keys_put/1]). 73 | 74 | -define(BASE, "explore"). 75 | -define(EXPLORE_BASE, [?BASE]). 76 | 77 | -define(CLUSTER_BASE, [?BASE, "clusters", cluster]). 78 | -define(CLUSTER_NODE, ?CLUSTER_BASE ++ ["nodes", node]). 79 | -define(BASE_NODE, [?BASE, "nodes", node]). 80 | -define(NODE_BASE, [?CLUSTER_BASE, ?CLUSTER_NODE, ?BASE_NODE]). 81 | 82 | -define(CLUSTER_TYPE, ?CLUSTER_BASE ++ ["bucket_types", bucket_type]). 
83 | -define(CLUSTER_NODE_TYPE, ?CLUSTER_BASE ++ ["nodes", node, "bucket_types", bucket_type]). 84 | -define(BASE_TYPE, [?BASE, "nodes", node, "bucket_types", bucket_type]). 85 | -define(BUCKET_TYPE_BASE, [?CLUSTER_TYPE, ?CLUSTER_NODE_TYPE, ?BASE_TYPE]). 86 | 87 | -define(CLUSTER_BUCKET, ?CLUSTER_TYPE ++ ["buckets", bucket]). 88 | -define(CLUSTER_NODE_BUCKET, ?CLUSTER_NODE_TYPE ++ ["buckets", bucket]). 89 | -define(BASE_BUCKET, ?BASE_TYPE ++ ["buckets", bucket]). 90 | -define(BUCKET_BASE, [?CLUSTER_BUCKET, ?CLUSTER_NODE_BUCKET, ?BASE_BUCKET]). 91 | 92 | -include_lib("webmachine/include/webmachine.hrl"). 93 | -include("re_wm.hrl"). 94 | 95 | %%%=================================================================== 96 | %%% API 97 | %%%=================================================================== 98 | 99 | -spec routes() -> [route()]. 100 | routes() -> 101 | [%% Base 102 | #route{base=[], path=[?EXPLORE_BASE], content={?MODULE,home}}, 103 | #route{base=[?EXPLORE_BASE], path=[["ping"]], content={?MODULE,ping}}, 104 | #route{base=[?EXPLORE_BASE], path=[["routes"]], content={?MODULE,routes}}, 105 | #route{base=[?EXPLORE_BASE], path=[["props"]], content={?MODULE,props}}, 106 | #route{base=[?EXPLORE_BASE], path=[["jobs"]], content={?MODULE,jobs}}, 107 | %% Cluster 108 | #route{base=[?EXPLORE_BASE], 109 | path=[["clusters"]], 110 | content={?MODULE,clusters}}, 111 | #route{base=[?EXPLORE_BASE], 112 | path=[["clusters",cluster]], 113 | exists={re_wm,rd_cluster_exists}, 114 | content={?MODULE,cluster}}, 115 | %% Node 116 | #route{base=[?EXPLORE_BASE,?CLUSTER_BASE], 117 | path=[["nodes"]], 118 | exists={re_wm,rd_cluster_exists}, 119 | content={?MODULE,nodes}}, 120 | #route{base=?NODE_BASE, 121 | path=[], 122 | exists={re_wm,rd_node_exists}, 123 | content={?MODULE,node}}, 124 | #route{base=?NODE_BASE, 125 | path=[["config"]], 126 | exists={re_wm,rd_node_exists}, 127 | content={?MODULE,node_config}}, 128 | #route{base=?NODE_BASE, 129 | path=[["config", "files"]], 130 | 
exists={re_wm,rd_node_exists}, 131 | content={?MODULE,node_config_files}}, 132 | #route{base=?NODE_BASE, 133 | path=[["config", "files", file]], 134 | provides=?PROVIDE_TEXT ++ [?PROVIDE(?JSON_TYPE)], 135 | exists={?MODULE,node_config_file_exists}, 136 | content={?MODULE,node_config_file}}, 137 | #route{base=?NODE_BASE, 138 | path=[["log", "files"]], 139 | exists={re_wm,rd_node_exists}, 140 | content={?MODULE,node_log_files}}, 141 | #route{base=?NODE_BASE, 142 | path=[["log", "files", file]], 143 | provides=?PROVIDE_TEXT ++ [?PROVIDE(?JSON_TYPE)], 144 | exists={?MODULE,node_log_file_exists}, 145 | content={?MODULE,node_log_file}}, 146 | #route{base=?NODE_BASE, 147 | path=[["tables"]], 148 | exists={re_wm,rd_node_exists}, 149 | content={?MODULE,tables}}, 150 | #route{base=?NODE_BASE, 151 | path=[["tables", "query"]], 152 | methods=['POST'], 153 | exists={re_wm,rd_node_exists}, 154 | accepts=?ACCEPT_TEXT, 155 | accept={?MODULE,tables_query}}, 156 | #route{base=?NODE_BASE, 157 | path=[["tables", table]], 158 | methods=['PUT','GET'], 159 | exists={?MODULE,table_exists}, 160 | content={?MODULE,table}, 161 | accepts = ?ACCEPT_TEXT, 162 | accept = {?MODULE, table_put}}, 163 | #route{base=?NODE_BASE, 164 | path=[["tables", table, "refresh_keys", "source", "riak_kv"]], 165 | methods=['POST'], 166 | exists={?MODULE,table_exists}, 167 | accept={?MODULE,refresh_keys_ts}}, 168 | #route{base=?NODE_BASE, 169 | path=[["tables", table, "keys"]], 170 | provides=[?PROVIDE(?JSON_TYPE)] ++ ?PROVIDE_TEXT, 171 | methods=['PUT','GET','DELETE'], 172 | exists={?MODULE,table_exists}, 173 | content={?MODULE,keys_ts}, 174 | accepts=?ACCEPT_TEXT, 175 | accept={?MODULE,keys_ts_put}, 176 | delete={?MODULE,keys_ts_delete}}, 177 | #route{base=?NODE_BASE, 178 | path=[["tables", table, "query"]], 179 | methods=['POST'], 180 | exists={?MODULE,table_exists}, 181 | accepts=?ACCEPT_TEXT, 182 | accept={?MODULE,table_query}}, 183 | %% Bucket Type 184 | #route{base=?NODE_BASE, 185 | path=[["bucket_types"]], 
186 | exists={re_wm,rd_node_exists}, 187 | content={?MODULE,bucket_types}}, 188 | #route{base=?NODE_BASE, 189 | path=[["bucket_types", bucket_type]], 190 | methods=['PUT','GET'], 191 | exists={?MODULE,bucket_type_exists}, 192 | content={?MODULE,bucket_type}, 193 | accepts=?ACCEPT_TEXT, 194 | accept={?MODULE,bucket_type_put}}, 195 | #route{base=?BUCKET_TYPE_BASE, 196 | path=[["jobs"]], 197 | exists={?MODULE,bucket_type_exists}, 198 | content={?MODULE,bucket_type_jobs}}, 199 | %% Bucket 200 | #route{base=?BUCKET_TYPE_BASE, 201 | path=[["refresh_buckets", "source", "riak_kv"]], 202 | methods=['POST'], 203 | exists={?MODULE,bucket_type_exists}, 204 | accept={?MODULE,refresh_buckets}}, 205 | #route{base=?BUCKET_TYPE_BASE, 206 | path=[["buckets"]], 207 | provides=[?PROVIDE(?JSON_TYPE)] ++ ?PROVIDE_TEXT, 208 | methods=['PUT','GET','DELETE'], 209 | exists={?MODULE,bucket_type_exists}, 210 | content={?MODULE,buckets}, 211 | accepts=?ACCEPT_TEXT, 212 | accept={?MODULE,buckets_put}, 213 | delete={?MODULE,buckets_delete}}, 214 | #route{base=?BUCKET_TYPE_BASE, 215 | path=[["buckets",bucket]], 216 | methods=['GET', 'DELETE'], 217 | exists={?MODULE,bucket_exists}, 218 | content={?MODULE,bucket}, 219 | delete={?MODULE,bucket_delete}}, 220 | #route{base=?BUCKET_TYPE_BASE, 221 | path=[["buckets",bucket,"jobs"]], 222 | exists={?MODULE,bucket_exists}, 223 | content={?MODULE,bucket_jobs}}, 224 | %% Key 225 | #route{base=?BUCKET_BASE, 226 | path=[["refresh_keys", "source", "riak_kv"]], 227 | methods=['POST'], 228 | exists={?MODULE,bucket_exists}, 229 | accept={?MODULE,refresh_keys}}, 230 | #route{base=?BUCKET_BASE, 231 | path=[["keys"]], 232 | provides=[?PROVIDE(?JSON_TYPE)] ++ ?PROVIDE_TEXT, 233 | methods=['PUT','GET','DELETE'], 234 | exists={?MODULE,bucket_exists}, 235 | content={?MODULE,keys}, 236 | accepts=?ACCEPT_TEXT, 237 | accept={?MODULE,keys_put}, 238 | delete={?MODULE,keys_delete}} 239 | ]. 
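In the route table above, string segments such as `"clusters"` must match the request path literally, while atom segments such as `cluster`, `node`, and `bucket_type` act as wildcards whose matched values the handlers later read back with `wrq:path_info/2`. A rough Python illustration of that matching rule (this is an illustration of the idea, not the webmachine dispatcher; `Var` stands in for an Erlang atom segment):

```python
# Match a concrete path against a route pattern in which strings are
# literals and Var placeholders capture the segment value, mimicking
# how webmachine binds atom segments into path_info.

class Var:
    """Stands in for an Erlang atom segment such as `cluster` or `node`."""
    def __init__(self, name):
        self.name = name

def match_route(pattern, segments):
    if len(pattern) != len(segments):
        return None
    bindings = {}
    for part, seg in zip(pattern, segments):
        if isinstance(part, Var):
            bindings[part.name] = seg  # capture, like wrq:path_info/2
        elif part != seg:
            return None                # literal segment did not match
    return bindings
```

This is why the macros near the top of the module can build deep paths like `?CLUSTER_NODE_TYPE` by list concatenation: a route is just a list of literals and placeholders.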
240 | 241 | %%%=================================================================== 242 | %%% Callbacks 243 | %%%=================================================================== 244 | 245 | %%% Explore 246 | 247 | -spec home(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 248 | home(ReqData) -> 249 | re_wm:rd_content(<<"riak_explorer api">>, ReqData). 250 | 251 | -spec ping(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 252 | ping(ReqData) -> 253 | re_wm:rd_content(pong, ReqData). 254 | 255 | -spec routes(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 256 | routes(ReqData) -> 257 | Formatted = 258 | [ [{methods, Methods}, 259 | {base, [ [ case is_atom(Part) of 260 | true -> Part; 261 | false -> list_to_binary(Part) 262 | end|| Part <- Base] 263 | || Base <- Bases]}, 264 | {path, [ [ case is_atom(Part) of 265 | true -> Part; 266 | false -> list_to_binary(Part) 267 | end|| Part <- Path] 268 | || Path <- Paths]}] 269 | || #route{base=Bases,path=Paths,methods=Methods} <- re_wm:routes()], 270 | re_wm:rd_content(Formatted, ReqData). 271 | 272 | -spec props(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 273 | props(ReqData) -> 274 | re_wm:rd_content(riak_explorer:props(), ReqData). 275 | 276 | -spec jobs(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 277 | jobs(ReqData) -> 278 | Jobs = case re_job_manager:get_jobs() of 279 | {error, not_found} -> []; 280 | J -> [J] 281 | end, 282 | re_wm:rd_content(Jobs, ReqData). 283 | 284 | %%% Cluster 285 | 286 | -spec clusters(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 287 | clusters(ReqData) -> 288 | re_wm:rd_content(riak_explorer:clusters(), ReqData). 289 | 290 | -spec cluster(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 291 | cluster(ReqData) -> 292 | C = re_wm:rd_cluster(ReqData), 293 | re_wm:rd_content(re_cluster:props(C), ReqData). 294 | 295 | %%% Node 296 | 297 | -spec nodes(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 298 | nodes(ReqData) -> 299 | C = re_wm:rd_cluster(ReqData), 300 | re_wm:rd_content(re_cluster:nodes(C), ReqData). 
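Several of the listing handlers further down (`keys_ts`, `buckets`, `keys`) page their cached results with `start` and `rows` query parameters, using `wrq:get_qs_value/3` to default them to `"0"` and `"1000"` before `list_to_integer/1`. The pattern, sketched in Python against an assumed query dict:

```python
# Sketch of the start/rows paging used by the cache-listing handlers:
# missing query parameters fall back to "0" and "1000" (as strings,
# mirroring wrq:get_qs_value/3) before integer conversion.

def page_params(query):
    start = int(query.get("start", "0"))
    rows = int(query.get("rows", "1000"))
    return start, rows

def page(items, query):
    start, rows = page_params(query)
    return items[start:start + rows]
```

Keeping the defaults as strings and converting afterwards matches the Erlang code exactly, where the query string API only ever yields strings.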
301 | 302 | -spec node(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 303 | node(ReqData) -> 304 | N = re_wm:rd_node(ReqData), 305 | re_wm:rd_content(re_node:props(N), ReqData). 306 | 307 | -spec node_config(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 308 | node_config(ReqData) -> 309 | N = re_wm:rd_node(ReqData), 310 | re_wm:rd_content(re_node:config(N), ReqData). 311 | 312 | -spec node_config_files(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 313 | node_config_files(ReqData) -> 314 | N = re_wm:rd_node(ReqData), 315 | re_wm:rd_content(re_node:config_files(N), ReqData). 316 | 317 | -spec node_config_file_exists(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 318 | node_config_file_exists(ReqData) -> 319 | N = re_wm:rd_node(ReqData), 320 | F = wrq:path_info(file, ReqData), 321 | case re_wm:rd_node_exists(ReqData) of 322 | {true,_} -> 323 | {re_node:config_file_exists(N, F), ReqData}; 324 | _ -> 325 | {false, ReqData} 326 | end. 327 | 328 | -spec node_config_file(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 329 | node_config_file(ReqData) -> 330 | N = re_wm:rd_node(ReqData), 331 | F = wrq:path_info(file, ReqData), 332 | Result = re_node:config_file(N, F), 333 | re_wm:rd_content(Result, ReqData). 334 | 335 | -spec node_log_files(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 336 | node_log_files(ReqData) -> 337 | N = re_wm:rd_node(ReqData), 338 | re_wm:rd_content(re_node:log_files(N), ReqData). 339 | 340 | -spec node_log_file_exists(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 341 | node_log_file_exists(ReqData) -> 342 | N = re_wm:rd_node(ReqData), 343 | F = wrq:path_info(file, ReqData), 344 | case re_wm:rd_node_exists(ReqData) of 345 | {true,_} -> 346 | {re_node:log_file_exists(N, F), ReqData}; 347 | _ -> 348 | {false, ReqData} 349 | end. 350 | 351 | -spec node_log_file(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 
352 | node_log_file(ReqData) -> 353 | Rows = list_to_integer(wrq:get_qs_value("rows","1000",ReqData)), 354 | N = re_wm:rd_node(ReqData), 355 | F = wrq:path_info(file, ReqData), 356 | Result = re_node:log_file(N, F, Rows), 357 | re_wm:rd_content(Result, ReqData). 358 | 359 | -spec tables(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 360 | tables(ReqData) -> 361 | N = re_wm:rd_node(ReqData), 362 | re_wm:rd_content(re_node:tables(N), ReqData). 363 | 364 | -spec tables_query(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 365 | tables_query(ReqData) -> 366 | N = re_wm:rd_node(ReqData), 367 | Query = wrq:req_body(ReqData), 368 | Response = re_node:query_ts(N, Query), 369 | re_wm:add_content(Response, ReqData). 370 | 371 | -spec table_exists(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 372 | table_exists(ReqData) -> 373 | N = re_wm:rd_node(ReqData), 374 | Table = list_to_binary(wrq:path_info(table, ReqData)), 375 | case re_wm:rd_node_exists(ReqData) of 376 | {true,_} -> 377 | {re_node:table_exists(N, Table), ReqData}; 378 | _ -> 379 | {false, ReqData} 380 | end. 381 | 382 | -spec table(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 383 | table(ReqData) -> 384 | N = re_wm:rd_node(ReqData), 385 | Table = list_to_binary(wrq:path_info(table, ReqData)), 386 | re_wm:rd_content(re_node:table(N, Table), ReqData). 387 | 388 | -spec table_put(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 389 | table_put(ReqData) -> 390 | N = re_wm:rd_node(ReqData), 391 | Table = list_to_binary(wrq:path_info(table, ReqData)), 392 | RawRows = wrq:req_body(ReqData), 393 | Rows = mochijson2:decode(RawRows), 394 | case re_node:put_ts(N, Table, Rows) of 395 | [{ts, [{success, true}]}] -> 396 | {true, ReqData}; 397 | _ -> 398 | {false, ReqData} 399 | end. 400 | 401 | -spec table_query(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 
402 | table_query(ReqData) -> 403 | N = re_wm:rd_node(ReqData), 404 | Table = list_to_binary(wrq:path_info(table, ReqData)), 405 | QueryRaw = wrq:req_body(ReqData), 406 | Query = mochijson2:decode(QueryRaw), 407 | Response = re_node:get_ts(N, Table, Query), 408 | re_wm:add_content(Response, ReqData). 409 | 410 | -spec refresh_keys_ts(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 411 | refresh_keys_ts(ReqData) -> 412 | C = re_wm:rd_cluster(ReqData), 413 | N = re_wm:rd_node(ReqData), 414 | T = list_to_binary(wrq:path_info(table, ReqData)), 415 | Sort = list_to_atom(wrq:get_qs_value("sort","true",ReqData)), 416 | Options = [{sort, Sort}], 417 | JobsPath = string:substr(wrq:path(ReqData),1, 418 | string:str(wrq:path(ReqData), 419 | "refresh_keys") - 1) ++ "jobs", 420 | JobResponse = re_node:list_keys_ts(C, N, T, Options), 421 | set_jobs_response(JobResponse, JobsPath, ReqData). 422 | 423 | -spec keys_ts(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 424 | keys_ts(ReqData) -> 425 | C = re_wm:rd_cluster(ReqData), 426 | T = list_to_binary(wrq:path_info(table, ReqData)), 427 | Start = list_to_integer(wrq:get_qs_value("start","0",ReqData)), 428 | Rows = list_to_integer(wrq:get_qs_value("rows","1000",ReqData)), 429 | Result = re_node:list_keys_ts_cache(C, T, Start, Rows), 430 | re_wm:rd_content(Result, ReqData). 431 | 432 | -spec keys_ts_delete(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 433 | keys_ts_delete(ReqData) -> 434 | C = re_wm:rd_cluster(ReqData), 435 | T = list_to_binary(wrq:path_info(table, ReqData)), 436 | Response = re_node:clean_keys_ts_cache(C, T), 437 | re_wm:add_content(Response, ReqData). 438 | 439 | -spec keys_ts_put(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 
440 | keys_ts_put(ReqData) -> 441 | C = re_wm:rd_cluster(ReqData), 442 | T = list_to_binary(wrq:path_info(table, ReqData)), 443 | RawValue = wrq:req_body(ReqData), 444 | Keys = case wrq:get_req_header("Content-Type", ReqData) of 445 | "application/json" -> 446 | {struct, [{<<"keys">>, K}]} = mochijson2:decode(RawValue), 447 | K; 448 | _ -> 449 | KeysStr = string:tokens(RawValue, "\n"), 450 | lists:map(fun(K) -> list_to_binary(K) end, KeysStr) 451 | end, 452 | Response = re_node:put_keys_ts_cache(C, T, Keys), 453 | re_wm:add_content(Response, ReqData). 454 | 455 | %%% Bucket Type 456 | 457 | -spec bucket_types(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 458 | bucket_types(ReqData) -> 459 | N = re_wm:rd_node(ReqData), 460 | re_wm:rd_content(re_node:bucket_types(N), ReqData). 461 | 462 | -spec bucket_type_exists(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 463 | bucket_type_exists(ReqData) -> 464 | N = re_wm:rd_node(ReqData), 465 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 466 | case re_wm:rd_node_exists(ReqData) of 467 | {true,_} -> 468 | {re_node:bucket_type_exists(N, T), ReqData}; 469 | _ -> 470 | {false, ReqData} 471 | end. 472 | 473 | -spec bucket_type(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 474 | bucket_type(ReqData) -> 475 | N = re_wm:rd_node(ReqData), 476 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 477 | re_wm:rd_content(re_node:bucket_type(N, T), ReqData). 478 | 479 | -spec bucket_type_put(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 480 | bucket_type_put(ReqData) -> 481 | N = re_wm:rd_node(ReqData), 482 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 483 | RawValue = wrq:req_body(ReqData), 484 | Response = re_node:create_bucket_type(N, T, RawValue), 485 | re_wm:add_content(Response, ReqData). 486 | 487 | -spec bucket_type_jobs(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 
488 | bucket_type_jobs(ReqData) -> 489 | Jobs = case re_job_manager:get_job(list_buckets) of 490 | {error, not_found} -> []; 491 | J -> [J] 492 | end, 493 | re_wm:rd_content(Jobs, ReqData). 494 | 495 | %%% Bucket 496 | 497 | -spec refresh_buckets(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 498 | refresh_buckets(ReqData) -> 499 | C = re_wm:rd_cluster(ReqData), 500 | N = re_wm:rd_node(ReqData), 501 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 502 | Sort = list_to_atom(wrq:get_qs_value("sort","true",ReqData)), 503 | Options = [{sort, Sort}], 504 | JobsPath = string:substr(wrq:path(ReqData),1, 505 | string:str(wrq:path(ReqData), 506 | "refresh_buckets") - 1) ++ "jobs", 507 | JobResponse = re_node:list_buckets(C, N, T, Options), 508 | set_jobs_response(JobResponse, JobsPath, ReqData). 509 | 510 | -spec buckets(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 511 | buckets(ReqData) -> 512 | C = re_wm:rd_cluster(ReqData), 513 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 514 | Start = list_to_integer(wrq:get_qs_value("start","0",ReqData)), 515 | Rows = list_to_integer(wrq:get_qs_value("rows","1000",ReqData)), 516 | Result = re_node:list_buckets_cache(C, T, Start, Rows), 517 | re_wm:rd_content(Result, ReqData). 518 | 519 | -spec buckets_delete(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 520 | buckets_delete(ReqData) -> 521 | C = re_wm:rd_cluster(ReqData), 522 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 523 | Response = re_node:clean_buckets_cache(C, T), 524 | re_wm:add_content(Response, ReqData). 525 | 526 | -spec buckets_put(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 
527 | buckets_put(ReqData) -> 528 | C = re_wm:rd_cluster(ReqData), 529 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 530 | RawValue = wrq:req_body(ReqData), 531 | Buckets = case wrq:get_req_header("Content-Type", ReqData) of 532 | "application/json" -> 533 | {struct, [{<<"buckets">>, B}]} = mochijson2:decode(RawValue), 534 | B; 535 | _ -> 536 | BucketsStr = string:tokens(RawValue, "\n"), 537 | lists:map(fun(B) -> list_to_binary(B) end, BucketsStr) 538 | end, 539 | Response = re_node:put_buckets_cache(C, T, Buckets), 540 | re_wm:add_content(Response, ReqData). 541 | 542 | -spec bucket_exists(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 543 | bucket_exists(ReqData) -> 544 | bucket_type_exists(ReqData). 545 | 546 | -spec bucket(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 547 | bucket(ReqData) -> 548 | B = list_to_binary(wrq:path_info(bucket, ReqData)), 549 | re_wm:rd_content( 550 | [{id,B}, {props, []}], ReqData). 551 | 552 | -spec bucket_delete(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 553 | bucket_delete(ReqData) -> 554 | C = re_wm:rd_cluster(ReqData), 555 | N = re_wm:rd_node(ReqData), 556 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 557 | B = list_to_binary(wrq:path_info(bucket, ReqData)), 558 | JobsPath = wrq:path(ReqData) ++ "/jobs", 559 | JobResponse = re_node:delete_bucket(C, N, T, B), 560 | set_jobs_response(JobResponse, JobsPath, ReqData). 561 | 562 | -spec bucket_jobs(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 563 | bucket_jobs(ReqData) -> 564 | KeysJobs = case re_job_manager:get_job(list_keys) of 565 | {error, not_found} -> []; 566 | KJ -> [KJ] 567 | end, 568 | DeleteBucketJobs = case re_job_manager:get_job(delete_bucket) of 569 | {error, not_found} -> []; 570 | DJ -> [DJ] 571 | end, 572 | re_wm:rd_content( 573 | KeysJobs ++ DeleteBucketJobs, ReqData). 574 | 575 | -spec refresh_keys(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 
576 | refresh_keys(ReqData) -> 577 | C = re_wm:rd_cluster(ReqData), 578 | N = re_wm:rd_node(ReqData), 579 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 580 | B = list_to_binary(wrq:path_info(bucket, ReqData)), 581 | Sort = list_to_atom(wrq:get_qs_value("sort","true",ReqData)), 582 | Options = [{sort, Sort}], 583 | JobsPath = string:substr(wrq:path(ReqData),1, 584 | string:str(wrq:path(ReqData), 585 | "refresh_keys") - 1) ++ "jobs", 586 | JobResponse = re_node:list_keys(C, N, T, B, Options), 587 | set_jobs_response(JobResponse, JobsPath, ReqData). 588 | 589 | -spec keys(#wm_reqdata{}) -> {term(), #wm_reqdata{}}. 590 | keys(ReqData) -> 591 | C = re_wm:rd_cluster(ReqData), 592 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 593 | B = list_to_binary(wrq:path_info(bucket, ReqData)), 594 | Start = list_to_integer(wrq:get_qs_value("start","0",ReqData)), 595 | Rows = list_to_integer(wrq:get_qs_value("rows","1000",ReqData)), 596 | Result = re_node:list_keys_cache(C, T, B, Start, Rows), 597 | re_wm:rd_content(Result, ReqData). 598 | 599 | -spec keys_delete(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 600 | keys_delete(ReqData) -> 601 | C = re_wm:rd_cluster(ReqData), 602 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 603 | B = list_to_binary(wrq:path_info(bucket, ReqData)), 604 | Response = re_node:clean_keys_cache(C, T, B), 605 | re_wm:add_content(Response, ReqData). 606 | 607 | -spec keys_put(#wm_reqdata{}) -> {boolean(), #wm_reqdata{}}. 
608 | keys_put(ReqData) -> 609 | C = re_wm:rd_cluster(ReqData), 610 | T = list_to_binary(wrq:path_info(bucket_type, ReqData)), 611 | B = list_to_binary(wrq:path_info(bucket, ReqData)), 612 | RawValue = wrq:req_body(ReqData), 613 | Keys = case wrq:get_req_header("Content-Type", ReqData) of 614 | "application/json" -> 615 | {struct, [{<<"keys">>, K}]} = mochijson2:decode(RawValue), 616 | K; 617 | _ -> 618 | KeysStr = string:tokens(RawValue, "\n"), 619 | lists:map(fun(K) -> list_to_binary(K) end, KeysStr) 620 | end, 621 | Response = re_node:put_keys_cache(C, T, B, Keys), 622 | re_wm:add_content(Response, ReqData). 623 | 624 | %% ==================================================================== 625 | %% Private 626 | %% ==================================================================== 627 | 628 | -spec set_jobs_response(term(), string(), #wm_reqdata{}) -> 629 | {{halt, 202|102|403}, #wm_reqdata{}}. 630 | set_jobs_response(ok, JobsPath, ReqData) -> 631 | ReqData1 = wrq:set_resp_headers([{"Location",JobsPath}], ReqData), 632 | {{halt, 202}, ReqData1}; 633 | set_jobs_response({error, already_started}, JobsPath, ReqData) -> 634 | ReqData1 = wrq:set_resp_headers([{"Location",JobsPath}], ReqData), 635 | {{halt, 102}, ReqData1}; 636 | set_jobs_response({error, developer_mode_off}, _, ReqData) -> 637 | {{halt, 403}, ReqData}; 638 | set_jobs_response({error, Reason}, _, ReqData) -> 639 | {{halt, 500}, re_wm:add_error(Reason, ReqData)}. 640 | -------------------------------------------------------------------------------- /src/re_wm_proxy.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. 
You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_wm_proxy). 22 | 23 | -export([routes/0]). 24 | 25 | -export([send_proxy_request/4]). 26 | 27 | -export([proxy_available/1]). 28 | 29 | -define(BASE, "riak"). 30 | -define(PROXY_BASE, [?BASE]). 31 | 32 | -include_lib("webmachine/include/webmachine.hrl"). 33 | -include("re_wm.hrl"). 34 | 35 | %%%=================================================================== 36 | %%% API 37 | %%%=================================================================== 38 | 39 | -spec routes() -> [route()]. 40 | routes() -> 41 | [#route{base=[?PROXY_BASE], 42 | path=[["clusters", cluster, '*']], 43 | available={?MODULE,proxy_available}}, 44 | #route{base=[?PROXY_BASE], 45 | path=[["nodes", node, '*']], 46 | available={?MODULE,proxy_available}} 47 | ]. 
48 | 49 | send_proxy_request(Location, RelPath, NewLocation, ReqData) -> 50 | Path = lists:append( 51 | [Location, 52 | RelPath, 53 | case wrq:req_qs(ReqData) of 54 | [] -> []; 55 | Qs -> [$?|mochiweb_util:urlencode(Qs)] 56 | end]), 57 | 58 | lager:info("Sending Explorer Proxy Request to: ~p", [Path]), 59 | 60 | %% translate webmachine details to ibrowse details 61 | Headers = clean_request_headers( 62 | mochiweb_headers:to_list(wrq:req_headers(ReqData))), 63 | Method = wm_to_ibrowse_method(wrq:method(ReqData)), 64 | ReqBody = case wrq:req_body(ReqData) of 65 | undefined -> []; 66 | B -> B 67 | end, 68 | 69 | case ibrowse:send_req(Path, Headers, Method, ReqBody) of 70 | {ok, Status, RiakHeaders, RespBody} -> 71 | RespHeaders = fix_location(RiakHeaders, NewLocation), 72 | {{halt, list_to_integer(Status)}, 73 | wrq:set_resp_headers(RespHeaders, 74 | wrq:set_resp_body(RespBody, ReqData))}; 75 | {error, Reason} -> 76 | re_wm:add_content({error, Reason}, ReqData); 77 | _ -> 78 | {false, ReqData} 79 | end. 80 | 81 | %%%=================================================================== 82 | %%% Callbacks 83 | %%%=================================================================== 84 | 85 | proxy_available(ReqData) -> 86 | case {re_wm:rd_cluster_exists(ReqData), 87 | re_wm:rd_node_exists(ReqData)} of 88 | {{true,_}, {true,_}} -> 89 | send_proxy_request(ReqData); 90 | E -> 91 | lager:info("E: ~p", [E]), 92 | {{halt, 404}, ReqData} 93 | end. 
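Before forwarding, `send_proxy_request/4` rebuilds the query string and strips request headers that must not be replayed verbatim to the target Riak node (`clean_request_headers/1`, defined in the private section of this module). The filtering idea in Python — the header names are taken from the source, the dict-based interface is illustrative:

```python
# Drop request headers that should not be forwarded to the proxied
# Riak node (the target sets its own Host/Content-Length); everything
# else passes through unchanged.

SKIPPED_HEADERS = {"Host", "Content-Length", "X-Requested-With", "Referer"}

def clean_request_headers(headers):
    return {k: v for k, v in headers.items() if k not in SKIPPED_HEADERS}
```

Forwarding `Content-Length` in particular would be wrong whenever the proxy re-encodes the body, so the downstream HTTP client (ibrowse, in the Erlang code) is left to compute it.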
94 | 95 | %% ==================================================================== 96 | %% Private 97 | %% ==================================================================== 98 | 99 | send_proxy_request(ReqData) -> 100 | N = re_wm:rd_node(ReqData), 101 | C = re_wm:rd_cluster(ReqData), 102 | 103 | case re_node:http_listener(N) of 104 | {error, Reason} -> 105 | re_wm:add_content({error, Reason}, ReqData); 106 | Listener -> 107 | RiakPath = "http://" ++ binary_to_list(Listener) ++ "/", 108 | Path = wrq:disp_path(ReqData), 109 | NewLocation = re_wm:rd_url(ReqData) ++ ?BASE++"/clusters/"++atom_to_list(C), 110 | send_proxy_request(RiakPath, Path, NewLocation, ReqData) 111 | end. 112 | 113 | clean_request_headers(Headers) -> 114 | [{K,V} || {K,V} <- Headers, 115 | K /= 'Host', 116 | K /= 'Content-Length', 117 | K /= 'X-Requested-With', 118 | K /= 'Referer']. 119 | 120 | wm_to_ibrowse_method(Method) when is_list(Method) -> 121 | list_to_atom(string:to_lower(Method)); 122 | wm_to_ibrowse_method(Method) when is_atom(Method) -> 123 | wm_to_ibrowse_method(atom_to_list(Method)). 124 | 125 | fix_location([], _) -> []; 126 | fix_location([{"Location", RiakDataPath}|Rest], NewLocation) -> 127 | [{"Location", NewLocation++RiakDataPath}|Rest]; 128 | fix_location([H|T], NewLocation) -> 129 | [H|fix_location(T, NewLocation)]. 130 | -------------------------------------------------------------------------------- /src/re_wm_static.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. 
You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(re_wm_static). 22 | 23 | -export([routes/0]). 24 | 25 | -export([static_types/1, 26 | static_file_exists/1, 27 | static_last_modified/1, 28 | static_file/1]). 29 | 30 | -define(STATIC_ROOT, "priv/ember_riak_explorer/dist"). 31 | -define(DEFAULT_INDEX, "index.html"). 32 | 33 | -include_lib("kernel/include/file.hrl"). 34 | -include_lib("webmachine/include/webmachine.hrl"). 35 | -include("re_wm.hrl"). 36 | 37 | %%%=================================================================== 38 | %%% API 39 | %%%=================================================================== 40 | 41 | routes() -> 42 | [ 43 | #route{base = [], 44 | path = [['*']], 45 | exists = {?MODULE, static_file_exists}, 46 | provides = {?MODULE, static_types}, 47 | content = {?MODULE, static_file}, 48 | last_modified = {?MODULE, static_last_modified}} 49 | ]. 50 | 51 | %%%=================================================================== 52 | %%% Callbacks 53 | %%%=================================================================== 54 | 55 | %% Static. 56 | 57 | static_types(ReqData) -> 58 | Resource = static_filename(ReqData), 59 | CT = webmachine_util:guess_mime(Resource), 60 | {[{CT, provide_static_content}], ReqData}. 
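`static_types/1` guesses the response Content-Type from the requested filename, and `static_filename/1` (in the private section below) maps an empty dispatch path to the bundled `index.html`. A small Python sketch of that resolution, using the standard `mimetypes` module in place of `webmachine_util:guess_mime/1`:

```python
# Resolve a webmachine dispatch path to a file under the static root
# and guess its MIME type; an empty path serves the bundled index page.
import mimetypes
import os

STATIC_ROOT = "priv/ember_riak_explorer/dist"
DEFAULT_INDEX = "index.html"

def static_filename(disp_path):
    # Empty dispatch path falls back to the index page, as in static_filename/1.
    return os.path.join(STATIC_ROOT, disp_path or DEFAULT_INDEX)

def static_type(disp_path):
    guessed, _encoding = mimetypes.guess_type(static_filename(disp_path))
    return guessed or "application/octet-stream"
```

Guessing from the filename rather than the file contents is cheap and matches what the Erlang resource does; the fingerprinted asset names in `priv/ember_riak_explorer/dist` keep their real extensions, so the guess is reliable.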
61 | 62 | static_file_exists(ReqData) -> 63 | case wrq:disp_path(ReqData) of 64 | "" -> 65 | case [lists:nth(length(wrq:path(ReqData)), wrq:path(ReqData))] of 66 | "/" -> 67 | {true, ReqData}; 68 | _ -> 69 | {{halt, 302}, wrq:set_resp_header("Location", re_wm:rd_url(ReqData), ReqData)} 70 | end; 71 | _ -> 72 | Filename = static_filename(ReqData), 73 | {filelib:is_regular(Filename), ReqData} 74 | end. 75 | 76 | static_last_modified(ReqData) -> 77 | LM = filelib:last_modified(static_filename(ReqData)), 78 | {LM, ReqData}. 79 | 80 | static_file(ReqData) -> 81 | Filename = static_filename(ReqData), 82 | {ok, Response} = file:read_file(Filename), 83 | ET = hash_body(Response), 84 | ReqData1 = wrq:set_resp_header("ETag", webmachine_util:quoted_string(ET), ReqData), 85 | {Response, ReqData1}. 86 | 87 | %% ==================================================================== 88 | %% Private 89 | %% ==================================================================== 90 | 91 | static_filename(ReqData) -> 92 | case wrq:disp_path(ReqData) of 93 | "" -> 94 | filename:join([?STATIC_ROOT, "index.html"]); 95 | F -> 96 | filename:join([?STATIC_ROOT, F]) 97 | end. 98 | 99 | hash_body(Body) -> mochihex:to_hex(binary_to_list(crypto:hash(sha,Body))). 100 | -------------------------------------------------------------------------------- /src/riak_control_sup.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2011 Basho Technologies, Inc. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. 
You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | %% 21 | %% @doc Application supervisor. 22 | 23 | -module(riak_control_sup). 24 | 25 | -behaviour(supervisor). 26 | 27 | %% API 28 | -export([start_link/0]). 29 | 30 | %% Supervisor callbacks 31 | -export([init/1]). 32 | 33 | %% Helper macro for declaring children of supervisor 34 | -define(CHILD(I, Type), {I, {I, start_link, []}, permanent, 5000, Type, [I]}). 35 | 36 | %% =================================================================== 37 | %% API functions 38 | %% =================================================================== 39 | 40 | start_link() -> 41 | supervisor:start_link({local, ?MODULE}, ?MODULE, []). 
42 | 43 | %% =================================================================== 44 | %% Supervisor callbacks 45 | %% =================================================================== 46 | 47 | init([]) -> 48 | RiakControlSession={riak_control_session, 49 | {riak_control_session, start_link, []}, 50 | permanent, 51 | 5000, 52 | worker, 53 | [riak_control_session]}, 54 | ExplorerSpec = {riak_explorer, 55 | {riak_explorer, start, [[]]}, 56 | temporary, 5000, worker, [riak_explorer]}, 57 | 58 | %% determine if riak_control is enabled or not 59 | case app_helper:get_env(riak_control, enabled, false) of 60 | true -> 61 | Resources = [riak_control_wm_gui, 62 | riak_control_wm_cluster, 63 | riak_control_wm_nodes, 64 | riak_control_wm_partitions], 65 | Routes = lists:append([Resource:routes() || Resource <- Resources]), 66 | _ = [webmachine_router:add_route(R) || R <- Routes], 67 | 68 | %% start riak control 69 | {ok, { {one_for_one, 5, 10}, [RiakControlSession, ExplorerSpec]} }; 70 | _ -> 71 | {ok, { {one_for_one, 5, 10}, [ExplorerSpec] } } 72 | end. 73 | -------------------------------------------------------------------------------- /src/riak_explorer.app.src: -------------------------------------------------------------------------------- 1 | %% -*- erlang -*- 2 | {application, riak_explorer, 3 | [ 4 | {description, "Riak development tool and admin GUI"}, 5 | {vsn, git}, 6 | {registered, []}, 7 | {modules, []}, 8 | {applications, [ 9 | kernel, 10 | stdlib, 11 | inets, 12 | crypto, 13 | mochiweb, 14 | webmachine, 15 | lager, 16 | ibrowse 17 | ]}, 18 | {mod, { riak_explorer, []}}, 19 | {env, [ 20 | {listener, {"127.0.0.1", 9000}} 21 | ]} 22 | ]}. 23 | -------------------------------------------------------------------------------- /src/riak_explorer.erl: -------------------------------------------------------------------------------- 1 | %% ------------------------------------------------------------------- 2 | %% 3 | %% Copyright (c) 2015 Basho Technologies, Inc. 
All Rights Reserved. 4 | %% 5 | %% This file is provided to you under the Apache License, 6 | %% Version 2.0 (the "License"); you may not use this file 7 | %% except in compliance with the License. You may obtain 8 | %% a copy of the License at 9 | %% 10 | %% http://www.apache.org/licenses/LICENSE-2.0 11 | %% 12 | %% Unless required by applicable law or agreed to in writing, 13 | %% software distributed under the License is distributed on an 14 | %% "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY 15 | %% KIND, either express or implied. See the License for the 16 | %% specific language governing permissions and limitations 17 | %% under the License. 18 | %% 19 | %% ------------------------------------------------------------------- 20 | 21 | -module(riak_explorer). 22 | 23 | -compile({no_auto_import,[nodes/1]}). 24 | 25 | -behaviour(application). 26 | 27 | -export([start/1, 28 | start/2, 29 | stop/1]). 30 | 31 | -export([cluster_config/1, 32 | cluster_configs/0, 33 | clusters/0]). 34 | 35 | -export([is_riak/0, 36 | url/0, 37 | url/2, 38 | props/0, 39 | host_port/0, 40 | data_dir/0]). 41 | 42 | %%%=================================================================== 43 | %%% API 44 | %%%=================================================================== 45 | 46 | -spec cluster_config(re_cluster:re_cluster()) -> {error, not_found} | re_cluster:re_cluster_props(). 47 | cluster_config(undefined) -> 48 | cluster_config(default); 49 | cluster_config(C) -> 50 | proplists:get_value(C, cluster_configs(), {error, not_found}). 51 | 52 | -spec cluster_configs() -> [{re_cluster:re_cluster(), re_cluster:re_cluster_props()}]. 
53 | cluster_configs() -> 54 | case application:get_env(riak_explorer, clusters) of 55 | undefined -> 56 | [{default, []}]; 57 | {ok, Clusters} -> 58 | Clusters1 = lists:keysort(1, Clusters), 59 | Clusters2 = 60 | case should_clean_default_cluster() of 61 | true -> 62 | proplists:delete(default, Clusters1); 63 | false -> 64 | Clusters1 65 | end, 66 | case length(Clusters2) of 67 | 0 -> [{default, []}]; 68 | _ -> Clusters2 69 | end 70 | end. 71 | 72 | -spec clusters() -> [{re_cluster:re_cluster(), re_cluster:re_cluster_props()}]. 73 | clusters() -> 74 | [re_cluster:from_props(C, Props) || {C, Props} <- cluster_configs()]. 75 | 76 | -spec is_riak() -> boolean(). 77 | is_riak() -> 78 | case code:ensure_loaded(riak_core) of 79 | {module,riak_core} -> true; 80 | _ -> false 81 | end. 82 | 83 | -spec url() -> string(). 84 | url() -> 85 | {Ip, Port} = host_port(), 86 | url(Ip, Port). 87 | 88 | -spec url(string(), integer()) -> string(). 89 | url(Ip, Port) -> 90 | Base = 91 | case is_riak() of 92 | true -> 93 | case re_node:http_listener(node()) of 94 | {error, _} -> 95 | "/"; 96 | U -> 97 | binary_to_list(U) ++ "/" 98 | end; 99 | false -> 100 | "http://" ++ Ip ++ ":" ++ integer_to_list(Port) ++ "/" 101 | end, 102 | case re_wm:base_route() of 103 | [] -> 104 | Base; 105 | [R] -> 106 | Base ++ R ++ "/" 107 | end. 108 | 109 | -spec props() -> [{atom(), atom() | binary()}]. 110 | props() -> 111 | props_to_bin(application:get_all_env(riak_explorer), []). 112 | 113 | -spec host_port() -> {string(), integer()}. 114 | host_port() -> 115 | case application:get_env(riak_explorer, host) of 116 | {ok, {_, _} = HostPort} -> HostPort; 117 | undefined -> {"0.0.0.0", 9000} 118 | end. 119 | 120 | -spec data_dir() -> string(). 121 | data_dir() -> 122 | Def = "./data", 123 | Dir = application:get_env(riak_explorer, platform_data_dir, Def), 124 | Dir. 
125 | 126 | %%%=================================================================== 127 | %%% Callbacks 128 | %%%=================================================================== 129 | 130 | start(StartArgs) -> 131 | start(normal, StartArgs). 132 | 133 | start(_Type, StartArgs) -> 134 | re_sup:start_link([StartArgs]). 135 | 136 | stop(_State) -> 137 | ok. 138 | 139 | %%%=================================================================== 140 | %%% Private 141 | %%%=================================================================== 142 | 143 | -spec should_clean_default_cluster() -> boolean(). 144 | should_clean_default_cluster() -> 145 | case application:get_env(riak_explorer, platform_etc_dir) of 146 | {ok, EtcDir} -> 147 | EtcFile = filename:join([EtcDir, "riak_explorer.conf"]), 148 | Proc = fun(Entry, Accum) -> 149 | case re:run(Entry, "^clusters.default.*$", []) of 150 | {match, _} -> false; 151 | _ -> Accum 152 | end 153 | end, 154 | re_file_util:for_each_line_in_file(EtcFile, Proc, read, true); 155 | _ -> 156 | false 157 | end. 158 | 159 | -spec props_to_bin([{atom(), term()}], [{atom(), atom() | binary()}]) -> [{atom(), atom() | binary()}]. 160 | props_to_bin([], Accum) -> lists:reverse(Accum); 161 | props_to_bin([{Name, {Host, Port}} | Rest], Accum) -> 162 | props_to_bin(Rest, [{Name, list_to_binary(url(Host, Port))} | Accum]); 163 | props_to_bin([{Name, []} | Rest], Accum) -> 164 | props_to_bin(Rest, [{Name, []} | Accum]); 165 | props_to_bin([{Name, [{_, _} | _]=Nested} | Rest], Accum) -> 166 | props_to_bin(Rest, [{Name, props_to_bin(Nested, [])} | Accum]); 167 | props_to_bin([{Name, Value} | Rest], Accum) when is_list(Value) -> 168 | props_to_bin(Rest, [{Name, list_to_binary(Value)} | Accum]); 169 | props_to_bin([{Name, Value} | Rest], Accum) -> 170 | props_to_bin(Rest, [{Name, Value} | Accum]). 
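For quick experimentation, the standalone branch of `url/2` above (`"http://" ++ Ip ++ ":" ++ integer_to_list(Port) ++ "/"`, plus an optional base route segment from `re_wm:base_route()`) can be mirrored in a short Python sketch. This is a hypothetical helper for illustration only, not part of this repo; the `"explore"` base route below is just an example value:

```python
def explorer_url(ip: str, port: int, base_route: str = "") -> str:
    """Mirror of the non-Riak (standalone) branch of riak_explorer:url/2:
    http://<ip>:<port>/ plus an optional trailing base-route segment."""
    base = f"http://{ip}:{port}/"
    return f"{base}{base_route}/" if base_route else base

# Matches the default listener {"127.0.0.1", 9000} from riak_explorer.app.src
print(explorer_url("127.0.0.1", 9000))             # http://127.0.0.1:9000/
print(explorer_url("127.0.0.1", 9000, "explore"))  # http://127.0.0.1:9000/explore/
```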
171 | -------------------------------------------------------------------------------- /test/integration/re_wm_explore_test.erl: -------------------------------------------------------------------------------- 1 | %% @doc Test the API in various ways. 2 | -module(re_wm_explore_test). 3 | 4 | -compile(export_all). 5 | -ifdef(integration_test). 6 | -include_lib("eunit/include/eunit.hrl"). 7 | -include("re_wm.hrl"). 8 | 9 | %%%%%%%%%%%%%%%%%%%%%%%%%% 10 | %%% TEST DESCRIPTIONS %%% 11 | %%%%%%%%%%%%%%%%%%%%%%%%%% 12 | re_wm_explore_test_() -> 13 | {setup, 14 | %% Setup 15 | fun () -> 16 | {RiakType, _} = riak_type(), 17 | if RiakType == ts -> 18 | ?debugFmt("On TS, so creating bucket", []), 19 | {ok, _, _} = ret:http(put, 20 | "http://localhost:9000/explore/clusters/default/bucket_types/GeoCheckin", 21 | <<"{\"props\":{\"table_def\": \"CREATE TABLE GeoCheckin (myfamily varchar not null, myseries varchar not null, time timestamp not null, myint sint64 not null, mytext varchar not null, myfloat double not null, mybool boolean not null, PRIMARY KEY ((myfamily, myseries, quantum(time, 15, 'm')), myfamily, myseries, time))\"}}">>); 22 | true -> ?debugFmt("On KV, skipping bucket creation", []) 23 | end 24 | end, 25 | %% No cleanup 26 | fun (_) -> ok end, 27 | %% Tests 28 | fun (_) -> 29 | {timeout, 60, [ 30 | all_routes() 31 | ]} 32 | end}. 33 | 34 | %%%%%%%%%%%%%%%%%%%% 35 | %%% ACTUAL TESTS %%% 36 | %%%%%%%%%%%%%%%%%%%% 37 | 38 | all_routes() -> 39 | Routes = re_wm:routes(), 40 | RiakType = riak_type(), 41 | lists:flatten([ [ assert_paths(Method, Base, Paths, RiakType, []) 42 | || Method <- Methods ] 43 | || #route{base=[Base|_],path=Paths,methods=Methods} <- Routes ]). 
44 | 45 | assert_paths(_, _, [], _, Accum) -> lists:reverse(Accum); 46 | assert_paths(Method, Base, [Path|Paths], RiakType, Accum) -> 47 | case is_testable_path(Base, Path, RiakType) of 48 | true -> 49 | Url = ret:url(to_path_str(Base) ++ "/" ++ to_path_str(Path)), 50 | Body = path_body(Method, Path), 51 | {ok, Code, Content} = ret:http(Method, Url, Body), 52 | ExpectedCode = path_code(Method, Path), 53 | assert_paths(Method, Base, Paths, RiakType, [?_assertEqual({ExpectedCode, Method, Url, Content}, {Code, Method, Url, Content})|Accum]); 54 | false -> 55 | assert_paths(Method, Base, Paths, RiakType, Accum) 56 | end. 57 | 58 | 59 | to_path_str(["config", "files", file]) -> 60 | string:join(["config", "files", "riak.conf"], "/"); 61 | to_path_str(["log", "files", file]) -> 62 | string:join(["log", "files", "console.log"], "/"); 63 | to_path_str(Path) -> 64 | string:join([ path_part(P, Path) || P <- Path ], "/"). 65 | 66 | path_part(P, _) when is_list(P) -> P; 67 | path_part(cluster, _) -> "default"; 68 | path_part(node, _) -> "riak@127.0.0.1"; 69 | path_part(bucket_type, _) -> "mytype"; 70 | path_part(bucket, _) -> "test"; 71 | path_part(table, _) -> "GeoCheckin"; 72 | path_part(arg1, _) -> "riak@127.0.0.1"; 73 | path_part(arg2, _) -> "riak@127.0.0.1"; 74 | path_part('*',["clusters",cluster,'*']) -> "ping"; 75 | path_part('*',["nodes",node,'*']) -> "ping"; 76 | path_part('*',['*']) -> "index.html". 
77 | 78 | path_body('PUT', ["keys"]) -> 79 | <<"{\"keys\":[\"test\"]}">>; 80 | path_body('PUT', ["tables", table, "keys"]) -> 81 | <<"{\"keys\":[[\"family1\", \"series1\", 25]]}">>; 82 | path_body('PUT', ["buckets"]) -> 83 | <<"{\"buckets\":[\"test\"]}">>; 84 | path_body('PUT', ["bucket_types", bucket_type]) -> 85 | <<"{\"props\":{\"n_val\":3}}">>; 86 | path_body('PUT', ["tables", table]) -> 87 | <<"[[\"family1\", \"series1\", 25, \"hot\", 23]]">>; 88 | path_body('POST', ["tables", "query"]) -> 89 | <<"select * from GeoCheckin where time > 24 and time < 26 and myfamily = 'family1' and myseries = 'series1'">>; 90 | path_body('POST', ["tables", table, "query"]) -> 91 | <<"[\"family2\", \"series99\", 26]">>; 92 | path_body(_, _) -> 93 | <<>>. 94 | 95 | path_code('POST', ["tables", "query"]) -> "200"; 96 | path_code('POST', ["tables", table, "query"]) -> "200"; 97 | path_code('POST', _) -> "202"; 98 | path_code('GET', ["staged-leave"]) -> "500"; 99 | path_code('GET', ["commit"]) -> "500"; 100 | path_code('GET', ["join", arg1]) -> "500"; 101 | path_code('GET', ["leave", arg1]) -> "500"; 102 | path_code('GET', ["staged-join", arg1]) -> "500"; 103 | path_code('GET', ["staged-leave", arg1]) -> "500"; 104 | path_code('GET', ["force-remove", arg1]) -> "500"; 105 | path_code('GET', ["repl-fullsync-start", arg1]) -> "500"; 106 | path_code('GET', _) -> "200"; 107 | path_code('DELETE', ["buckets",bucket]) -> "202"; 108 | path_code('DELETE', _) -> "204"; 109 | path_code('PUT', ["bucket_types", bucket_type]) -> "200"; 110 | path_code('PUT', _) -> "204". 111 | 112 | %%%%%%%%%%%%%%%%%%%%%%%% 113 | %%% HELPER FUNCTIONS %%% 114 | %%%%%%%%%%%%%%%%%%%%%%%% 115 | 116 | home() -> 117 | render_json([{explore, <<"riak_explorer api">>}]). 118 | 119 | ping() -> 120 | render_json([{ping, pong}]). 121 | 122 | render_json(Data) -> 123 | Body = binary_to_list(list_to_binary(mochijson2:encode(Data))), 124 | Body. 
125 | 126 | riak_type() -> 127 | {_, _, Data} = ret:http(get, "http://localhost:9000/explore/clusters/default"), 128 | {struct, JsonData} = mochijson2:decode(Data), 129 | {struct, Cluster} = proplists:get_value(<<"default">>, JsonData), 130 | RiakType = binary_to_list(proplists:get_value(<<"riak_type">>, Cluster)), 131 | case {lists:prefix("ts", RiakType), 132 | lists:suffix("ee", RiakType)} of 133 | {false, false} -> {kv, oss}; 134 | {false, true} -> {kv, ee}; 135 | {true, false} -> {ts, oss}; 136 | {true, true} -> {ts, ee} 137 | end. 138 | 139 | %% The '*repl*' paths are not testable when Riak OSS is being used 140 | %% The '*tables*' paths are not testable when Riak KV is being used 141 | is_testable_path(Base, [Path|_], RiakType) -> 142 | case {lists:prefix("repl", Path), 143 | lists:prefix("tables", Path), 144 | RiakType} of 145 | {true, _, {_, oss}} -> 146 | ?debugFmt("Skipping ~p/~p because we are on Riak OSS.~n", [Base, Path]), 147 | false; 148 | {_, true, {kv, _}} -> 149 | ?debugFmt("Skipping ~p/~p because we are on Riak KV.~n", [Base, Path]), 150 | false; 151 | _ -> true 152 | end. 153 | 154 | -endif. 155 | -------------------------------------------------------------------------------- /test/integration/ret.erl: -------------------------------------------------------------------------------- 1 | %% @doc Test helper functions. 2 | -module(ret). 3 | 4 | -compile(export_all). 5 | 6 | -include_lib("eunit/include/eunit.hrl"). 7 | 8 | -ifdef(integration_test). 9 | 10 | fmt(S, Args) -> 11 | lists:flatten(io_lib:format(S, Args)). 12 | 13 | re_host() -> 14 | case os:getenv("RE_HOST") of 15 | false -> "localhost"; 16 | Host -> Host 17 | end. 18 | 19 | re_port() -> 20 | case os:getenv("RE_PORT") of 21 | false -> 9000; 22 | Port -> list_to_integer(Port) 23 | end. 24 | 25 | url(Path) -> 26 | fmt("http://~s:~B/~s", [re_host(), re_port(), Path]). 
27 | 28 | ensure_ibrowse() -> 29 | case whereis(ibrowse) of 30 | undefined -> ibrowse:start(); 31 | Any when is_pid(Any)-> ok 32 | end. 33 | 34 | http(Method, URL, Body, H0, ReturnHeader) -> 35 | Method1 = list_to_atom(string:to_lower(atom_to_list(Method))), 36 | ensure_ibrowse(), 37 | Opts = [], 38 | 39 | Headers = H0 ++ [ 40 | {"Content-Type", "application/json"} 41 | ], 42 | 43 | Res = ibrowse:send_req(URL, Headers, Method1, Body, Opts), 44 | 45 | case ReturnHeader of 46 | true -> Res; 47 | _ -> 48 | case Res of 49 | {error, Reason} -> 50 | {ok, 000, Reason}; 51 | {ok, S, _, B} -> 52 | {ok, S, B} 53 | end 54 | end. 55 | 56 | http(Method, URL) -> 57 | http(Method, URL, <<>>, [], false). 58 | http(Method, URL, Body) -> 59 | http(Method, URL, Body, [], false). 60 | http(Method, URL, Body, Headers) -> 61 | http(Method, URL, Body, Headers, false). 62 | 63 | -endif. 64 | -------------------------------------------------------------------------------- /tools.mk: -------------------------------------------------------------------------------- 1 | REBAR ?= ./rebar 2 | 3 | compile-no-deps: 4 | ${REBAR} compile skip_deps=true 5 | 6 | test: compile 7 | ${REBAR} eunit skip_deps=true 8 | 9 | docs: 10 | ${REBAR} doc skip_deps=true 11 | 12 | xref: compile 13 | ${REBAR} xref skip_deps=true 14 | 15 | PLT ?= $(HOME)/.combo_dialyzer_plt 16 | LOCAL_PLT = .local_dialyzer_plt 17 | DIALYZER_FLAGS ?= -Wunmatched_returns 18 | 19 | ${PLT}: compile 20 | @if [ -f $(PLT) ]; then \ 21 | dialyzer --check_plt --plt $(PLT) --apps $(DIALYZER_APPS) && \ 22 | dialyzer --add_to_plt --plt $(PLT) --output_plt $(PLT) --apps $(DIALYZER_APPS) ; test $$? -ne 1; \ 23 | else \ 24 | dialyzer --build_plt --output_plt $(PLT) --apps $(DIALYZER_APPS); test $$? 
-ne 1; \ 25 | fi 26 | 27 | ${LOCAL_PLT}: compile 28 | @if [ -d deps ]; then \ 29 | if [ -f $(LOCAL_PLT) ]; then \ 30 | dialyzer --check_plt --plt $(LOCAL_PLT) deps/*/ebin && \ 31 | dialyzer --add_to_plt --plt $(LOCAL_PLT) --output_plt $(LOCAL_PLT) deps/*/ebin ; test $$? -ne 1; \ 32 | else \ 33 | dialyzer --build_plt --output_plt $(LOCAL_PLT) deps/*/ebin ; test $$? -ne 1; \ 34 | fi \ 35 | fi 36 | 37 | dialyzer-run: 38 | @echo "==> $(shell basename $(shell pwd)) (dialyzer)" 39 | @if [ -f $(LOCAL_PLT) ]; then \ 40 | PLTS="$(PLT) $(LOCAL_PLT)"; \ 41 | else \ 42 | PLTS=$(PLT); \ 43 | fi; \ 44 | if [ -f dialyzer.ignore-warnings ]; then \ 45 | if [ $$(grep -cvE '[^[:space:]]' dialyzer.ignore-warnings) -ne 0 ]; then \ 46 | echo "ERROR: dialyzer.ignore-warnings contains a blank/empty line, this will match all messages!"; \ 47 | exit 1; \ 48 | fi; \ 49 | dialyzer $(DIALYZER_FLAGS) --plts $${PLTS} -c ebin > dialyzer_warnings ; \ 50 | egrep -v "^\s*(done|Checking|Proceeding|Compiling)" dialyzer_warnings | grep -F -f dialyzer.ignore-warnings -v > dialyzer_unhandled_warnings ; \ 51 | cat dialyzer_unhandled_warnings ; \ 52 | [ $$(cat dialyzer_unhandled_warnings | wc -l) -eq 0 ] ; \ 53 | else \ 54 | dialyzer $(DIALYZER_FLAGS) --plts $${PLTS} -c ebin; \ 55 | fi 56 | 57 | dialyzer-quick: compile-no-deps dialyzer-run 58 | 59 | dialyzer: ${PLT} ${LOCAL_PLT} dialyzer-run 60 | 61 | cleanplt: 62 | @echo 63 | @echo "Are you sure? It takes several minutes to re-build." 64 | @echo Deleting $(PLT) and $(LOCAL_PLT) in 5 seconds. 
65 | @echo 66 | sleep 5 67 | rm $(PLT) 68 | rm $(LOCAL_PLT) 69 | 70 | -------------------------------------------------------------------------------- /vagrant/ubuntu/trusty64/Vagrantfile: -------------------------------------------------------------------------------- 1 | # -*- mode: ruby -*- 2 | # vi: set ft=ruby : 3 | 4 | Vagrant.configure(2) do |config| 5 | config.vm.box = "ubuntu/trusty64" 6 | 7 | config.vm.provider "virtualbox" do |vb| 8 | vb.memory = "2048" 9 | end 10 | 11 | config.vm.synced_folder "./../../../", "/riak_explorer" 12 | 13 | config.vm.provision "shell", inline: <<-SHELL 14 | apt-get -y update 15 | apt-get -y install build-essential libncurses5-dev openssl libssl-dev fop xsltproc unixodbc-dev git curl autoconf 16 | 17 | # wget http://s3.amazonaws.com/downloads.basho.com/erlang/otp_src_R16B02-basho8.tar.gz 18 | # tar zxvf otp_src_R16B02-basho8.tar.gz 19 | # cd OTP_R16B02_basho8 20 | # autoconf 21 | # cd erts && autoheader && autoconf 22 | # ./configure && make && sudo make install 23 | 24 | # Need to figure out what is wrong with OTP R16B02-basho8. For now, this 25 | # installs R16B03 26 | apt-get -y install erlang 27 | apt-get -y install npm 28 | npm install -g nodejs 29 | npm install -g ember-cli 30 | npm install -g bower 31 | cd /usr/bin && ln -fs nodejs node 32 | npm install -g phantomjs 33 | cd /riak_explorer && make package-linux 34 | 35 | # Relocatable Riak, just in case 36 | cd 37 | apt-get -y install build-essential libc6-dev-i386 git 38 | apt-get -y install libpam0g-dev 39 | wget http://s3.amazonaws.com/downloads.basho.com/riak/2.1/2.1.1/riak-2.1.1.tar.gz 40 | tar zxvf riak-2.1.1.tar.gz 41 | cd riak-2.1.1 42 | make rel 43 | cd rel && tar -zcvf riak_linux_amd64.tar.gz riak 44 | mv riak_linux_amd64.tar.gz /riak_explorer/_build/ 45 | SHELL 46 | end 47 | --------------------------------------------------------------------------------
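As a closing reference: the ETag that `hash_body/1` in `src/re_wm_static.erl` attaches to static responses is a hex-encoded SHA-1 digest of the file body (`mochihex:to_hex(crypto:hash(sha, Body))`). The same digest can be reproduced outside Erlang, e.g. when verifying responses from a client; a minimal Python sketch:

```python
import hashlib

def hash_body(body: bytes) -> str:
    # Equivalent of mochihex:to_hex(crypto:hash(sha, Body)) in re_wm_static.erl:
    # a lowercase hex SHA-1 digest of the response body.
    return hashlib.sha1(body).hexdigest()

print(hash_body(b"hello"))  # aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d
```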