├── .gitignore ├── CHANGES.md ├── LICENSE ├── Makefile ├── README.md ├── actordb_console ├── adbt.thrift ├── c_src └── cmdshell.c ├── etc ├── app.config ├── init.sql └── vm.args ├── pkg.vars.config ├── priv └── mkconsole.escript ├── rebar ├── rebar.config ├── rel ├── files │ ├── actordb_console │ ├── app.config │ ├── dbfiles │ ├── erl │ ├── nodetool │ ├── sqlite_extensions_here │ ├── sys.config │ └── vm.args ├── reltool.config └── vars.config ├── src ├── README ├── actordb_console.erl └── actordb_wxconsole.erl ├── startdev.sh ├── test ├── dist_test.erl ├── emysql.ez ├── etc │ ├── app.config │ └── simple_schema.yaml ├── mysql.ez ├── repl_test.erl └── test_util.erl └── winbuild /.gitignore: -------------------------------------------------------------------------------- 1 | # /deps 2 | /distdir 3 | /log 4 | .detest 5 | var/ 6 | ebin/ 7 | *.beam 8 | -------------------------------------------------------------------------------- /CHANGES.md: -------------------------------------------------------------------------------- 1 | **0.10.29 - 1 Oct 2019** 2 | - Bugfix: Transaction restart after it was interrupted. 3 | 4 | **0.10.28 - 23 Sep 2019** 5 | - Bugfix: Starting a node in a cluster might not complete successfully. 6 | - Bugfix: Raft tunnel connection within driver had a missing send timeout. 7 | - Bugfix: Tunnel connection might not update due to queue being full. 8 | - Bugfix: When copying an actor from one node to another there might be a crash when trying to unlock an actor that has already been unlocked. 9 | - Bugfix: Raft election logic now runs on detected network changes to sync. 10 | 11 | **0.10.27 - 26 Jan 2019** 12 | - Improvements for when LMDB is larger than RAM 13 | 14 | **0.10.26 - 23 Nov 2018** 15 | - Bugfix: Fixes for Erlang 20.0+ 16 | 17 | 18 | **0.10.25 - 26 Apr 2017** 19 | - Bugfix: Proper handling of CREATE TRIGGER statements in schema. 20 | - Bugfix: actordb_console -noshell -q was not printing output. 
21 | - Bugfix: show schema was not working from mysql console. 22 | 23 | 24 | **0.10.24 - 22 Nov 2016** 25 | - Bugfix: 0.10.23 shipped with a broken app.config 26 | 27 | 28 | **0.10.23 - 21 Nov 2016** 29 | - Bugfix: Multiple multiactor transactions could deadlock each other. A mechanism was added that guarantees progress. 30 | - Feature: extensions folder where you can add .dll/.so/.dylib sqlite extension files. This way you can use custom SQL functionality. 31 | - Feature: allow SQLite's pragma statements: freelist_count, page_count, table_info, index_list, index_xinfo. 32 | - Feature: Upgrade to sqlite 3.15.1 33 | 34 | 35 | **0.10.22 - 9 Aug 2016** 36 | - Bugfix: ActorDB did not start when running on more than 8 cores. 37 | - Feature: Upgrade to sqlite 3.14.0 38 | - Feature: thrift_framed setting in app.config and -framed in actordb_console. 39 | 40 | 41 | **0.10.21 - 26 Apr 2016** 42 | - Bugfix: Memory leak on read lmdb transactions. 43 | - Bugfix: Distributed transactions: the actor where the transaction failed could crash needlessly. 44 | - Bugfix: Update to SQLite 3.12.2 (http://www.sqlite.org/news.html). 45 | - Bugfix: On OSX fix linking to SSL. 46 | - Bugfix: actordb_console printing blobs. 47 | - Bugfix: Adding a node to an already initialized cluster could stall. 48 | - Change: Increased the number of generated shards per node from 4 to 12. The previously low number was a leftover from the old days. Existing installations are unaffected. 49 | 50 | 51 | **0.10.20 - 31 Mar 2016** 52 | - Bugfix: Running an empty write that increases evnum but does not actually execute a statement could result in checkpoints not completing because of the gap in the eventlog. 53 | - Bugfix: Errors for update/insert statements did not clean up the transaction properly. 54 | - Feature: actordb_tool print now has an optional actorindex parameter. 55 | 56 | 57 | **0.10.19 - 27 Mar 2016** 58 | - Bugfix: In rare cases an endless loop could develop while waiting for a write to complete on an actor. 
59 | - Feature: Select only certain actors by name: actor mytype(* when id like 'actorname%') 60 | - Feature: safe flag on reads. Requires a read to be confirmed by a majority of nodes. 61 | 62 | 63 | **0.10.18 - 23 Mar 2016** 64 | - Bugfix: Better fix for yesterday's bug. 65 | - Bugfix: Self-heal mechanism when an actor is left in an intermediate state due to abnormal shutdown. 66 | 67 | 68 | **0.10.17 - 22 Mar 2016** 69 | - Bugfix: Bug in new query interrupt code for 0.10.16 caused queries to get terminated too early. 70 | 71 | 72 | **0.10.16 - 22 Mar 2016** 73 | - Bugfix: Thrift API update. Some non-SQL query calls were not checking the user. 74 | - Bugfix: Increase sqlite per actor cache_size. It seems having it too low makes some queries fail. 75 | - Bugfix: Election RPCs are now limited to 200ms. 76 | - Bugfix: query_timeout only worked for read queries. Default timeout has been lowered to 10s. 77 | - Feature: show queries to see currently running queries. 78 | - Feature: Use a lock free task queue for the storage engine. 79 | - Feature: RPC optimizations by using batching. 80 | - Feature: Use C based atomic integers to count queries for backpressure mechanism. Important because we need constant global reads and writes, which is hard to achieve efficiently in Erlang. 81 | 82 | 83 | **0.10.15 - 29 Feb 2016** 84 | - Bugfix: Driver work queue thread safety issue. 85 | - Bugfix: Windows backup fix. Windows backups are now always compact. 86 | - Bugfix: Executing a backup could produce a (harmless) error once backup has already been completed. 87 | - Bugfix: Bugfix for :memory: sqlite instances. On close sqlite may call undo in wal.c. 88 | - Bugfix: Bugfixes in catchup code, so that when a node comes back online it knows which actors to update. 89 | - Bugfix: Build of actordb_console required erlang-wx to be installed, even though it was only used on windows. 90 | - Bugfix: Replication bug caused unnecessary node chatter until next write. 
91 | - Bugfix: When calling a non-existing actor without the create flag, we might create a slow-moving endless loop. 92 | - Feature: Update sqlite to 3.11.0 93 | - Feature: Compact backup which produces a smaller file, but may take longer and use more CPU time. 94 | 95 | 96 | **0.10.14 - 9 Feb 2016** 97 | - Bugfix: PRAGMA copy response. Copy succeeded, but error was returned. 98 | - Bugfix: When initializing with localnode(), rpcport was ignored so only the default worked. 99 | - Feature: Remove some LMDB workarounds that are no longer necessary in 0.9.17. Performance and reliability are improved as a result. 100 | - Feature: Added client_inactivity_timeout setting to app.config. 101 | - Feature: Update sqlite to 3.10.2. 102 | 103 | 104 | **0.10.13 - 16 Dec 2015** 105 | - Bugfix: Parameterized queries over thrift with create flag were failing. Bug was created in 0.10.12. 106 | - Bugfix: actordb_client can now be started without parameters and pool connections added later. 107 | - Feature: Upgrade LMDB to 0.9.17 108 | 109 | 110 | **0.10.12 - 27 Nov 2015** 111 | - Bugfix: Mysql protocol bugfix for passing strings using parameterized queries. 112 | - Bugfix: Import from backup. It now imports users, schema and idmax. Only nodes/groups need to be recreated. 113 | - Bugfix: Bugfix with {{uniqid}} tags in sql. Previously they only worked in multi-actor transactions. 114 | - Change: Server will now always return integers in bigint from thrift. This simplifies clients. 115 | 116 | 117 | **0.10.11 - 26 Nov 2015** 118 | - Bugfix: Thrift bugfix for blobs, data may have been returned in text instead of blob. 119 | - Bugfix: Thrift bugfix for returning floats, bug was created in 0.10.8. 120 | - Bugfix: PRAGMA exists failed with parameterized queries. 
121 | 122 | 123 | **0.10.10 - 24 Nov 2015** 124 | - Feature: "show status" command to display status of ActorDB node 125 | - Feature: "show shards" command to display shards running on an ActorDB node 126 | - Bugfix: Go mysql driver compatibility bugfix. 127 | - Bugfix: Bugfix for boolean bind parameters. 128 | - Bugfix: On initialize through actordb_console do not print help headers. 129 | - Bugfix: On import from backup, carry over idmax. 130 | - Bugfix: Race condition on saving global state. 131 | 132 | 133 | **0.10.9 - 21 Nov 2015** 134 | - Bugfix: If an actor was inactive while the schema was updated, it might not apply the update on next start. 135 | 136 | 137 | **0.10.8 - 20 Nov 2015** 138 | - Bugfix: Schema was updated right after a read to an actor instead of before. 139 | - Bugfix: When actor moves to a new shard, replace original actor database with redirect marker. This way we do not waste space. 140 | - Feature: Added blob to thrift value type. Somehow we managed to forget about it. 141 | 142 | 143 | **0.10.7 - 10 Nov 2015** 144 | - Bugfix: Cleaner handling of nocreate error (trying to execute on an actor that does not exist without create flag). 145 | - Bugfix: Updating schema for an actor failed. Bug in schema check before updating. 146 | - Bugfix: For certain types of rare sql execute errors, sqlite statement may not have been cleaned up. 147 | 148 | 149 | **0.10.6 - 2 Nov 2015** 150 | - Bugfix: Initialization with actordb_console -f "path/to/file.sql" fixed. Bug was created in 0.10.5. 151 | 152 | 153 | **0.10.5 - 28 Oct 2015** 154 | - Feature: Add state table to config. This is to store various config items like schema version. 155 | - Feature: actordb_console flag: -q "select ...." to run single queries and exit. 156 | - Feature: actordb_console flag: -print < default|min|csv|csvh > to actordb_console. 157 | - Feature: actordb_console flag: -pw to set password directly (instead of being prompted). 
158 | - Feature: actordb_console flag: -use < actordb|config|schema > which db to use by default. 159 | - Feature: actordb_console flag: -noshell for use with -q, recommended but not required. 160 | - Feature: new thrift calls: uniqid, actor_types, actor_tables, actor_columns 161 | - Bugfix: If {{RESULT}} was present in a multiactor write, query was wrongly interpreted as a read and failed when executing. 162 | - Bugfix: Invalid return result on pragma delete statements, causing thrift to return an error when in fact everything was ok. 163 | 164 | 165 | **0.10.4 - 5 Oct 2015** 166 | - Bugfix: Batching writes to an actor is much improved. 167 | - Bugfix: sql parsing bug on insanely written sEleCT statements. 168 | - Bugfix: Thrift interface returned error when doing multiple inserts using a single parameterized query. 169 | - Bugfix: On SQL error thrift interface was not returning the sql error code. 170 | 171 | 172 | **0.10.3 - 24 Sept 2015** 173 | - Bugfix: When deleting actor, some data was left behind. 174 | - Bugfix: Backup fix for lmdb. 175 | - Bugfix: Some vm.args changes that result in better erlang efficiency. 176 | - Bugfix: actordb_console now ignores invalid flags. 177 | 178 | 179 | **0.10.2 - 16 Sept 2015** 180 | - Bugfix: Actor statement parsing could read wrong actor type. 181 | - Bugfix: PRAGMA list on kv types did not work. 182 | - Bugfix: Integer overflow in driver. 183 | - Bugfix: Some write results could be ignored if write delays happen. 184 | 185 | 186 | **0.10.1 - 11 Sept 2015** 187 | - Bugfix: If api_network_interface was not set, thrift was not listening on all interfaces. 188 | - Bugfix: Cleaned up error results in console. 189 | - Bugfix: Improper error handling of certain kinds of invalid read queries. 190 | 191 | 192 | **0.10 - 9 Sept 2015** 193 | - Feature: ActorDB now uses LMDB as a storage engine. SQLite is still the SQL engine. Performance is vastly improved. Old versions are automatically imported. Old files will be left intact. 
A new file named lmdb will appear in every storage folder and it will contain all data. 194 | - Feature: Separate Read/Write threads for improved concurrency. 195 | - Feature: Write batching. If an actor receives many writes at once, they will be a part of a single replication event. 196 | - Feature: Added LIMIT and OFFSET to pragma list: "actor type1(*);PRAGMA list LIMIT 1000 OFFSET 10;" 197 | - Feature: actordb_console to manage the database. It replaces actordbctrl. 198 | - Feature: actordb_tool for backups and diagnostics. 199 | - Feature: Expanded thrift interface. ActorDB can now be completely controlled through thrift. 200 | - Feature: Finally added user management. Thrift interface supports safer mysql style login that does not send the actual password over the wire. 201 | - Feature: Expanded configuration options in app.config. 202 | - Feature: Added fsync flag to queries. Writes with fsync will be synced to disk before response is sent to client. You can set fsync to safe in app.config, which will have the same result for all writes. 203 | - Feature: MySQL protocol support for prepared statements. 204 | - Feature: Thrift protocol support for parameterized queries. 205 | - Bugfix: MySQL console bugfix for pragma statements. 206 | - Bugfix: ActorDB was listening on all interfaces instead of just the one it is actually configured to use. 207 | - Bugfix: Removing nodes was broken. 208 | 209 | 210 | **0.9 - 7 May 2015** 211 | - Bugfix: Pragma exists was still creating actor files when actor did not actually exist. 212 | - Bugfix: Add timeout to recovery process so that it can get started again if leader changes. 213 | 214 | 215 | **0.9pre9 - 30 Apr 2015** 216 | - Bugfix: Replication with noop append entries could cause a follower to never catch up. 217 | - Improvement: Added {error,consensus_timeout} when queries are unable to be processed because not enough nodes are available. 
218 | - Improvement: Multiactor updates now return how many actors have been changed. 219 | 220 | **0.9pre8 - 13 Apr 2015** 221 | - Bugfix: Transaction size bugfix; it turns out 0.9pre7 just made it rarer. 222 | - Bugfix: If actor was deleted, pragma exists still returned true. 223 | - Bugfix: Actor delete did not complete if schema had an autoincrement. 224 | 225 | 226 | **0.9pre7 - 2 Apr 2015** 227 | - Bugfix: Manually setting the cache size limited transaction size. 228 | 229 | 230 | **0.9pre6 - 31 Mar 2015** 231 | - Bugfix: Rename two thrift values as they had language conflicts. 232 | 233 | 234 | **0.9pre5 - 30 Mar 2015** 235 | - Bugfix: Raft replication bugfix 236 | 237 | 238 | **0.9pre4 - 24 Mar 2015** 239 | 240 | - Bugfix: New driver had improper Raft handling on conflicts 241 | - Bugfix: Driver/sqlite settings were improper, resulting in slowdowns when creating new actors 242 | - Bugfix: Cluster latency detection was not working 243 | - Feature: Basic thrift interface 244 | 245 | 246 | **0.9pre3 - 3 Mar 2015** 247 | 248 | - Bugfix: Sometimes initialized database did not start completely when restarted. 249 | 250 | **0.9pre2 - 27 Feb 2015** 251 | 252 | - Bugfix: Sometimes driver crashed when closing program. This did not affect any data. 253 | 254 | **0.9pre1 - 18 Feb 2015** 255 | 256 | - Feature: We have rewritten the replication core of ActorDB. It now uses the Raft algorithm for replication, both for global state and for individual actor state. 257 | - Feature: New sqlite driver (actordb_driver). It uses a combined wal file for all actors. This is a giant performance improvement. 258 | - Bugfix: Use variable as actor name. 259 | - Bugfix: Reads across actors add the right actor name to the actor column. 260 | - Bugfix: Mysql protocol: do not close connections for invalid ActorDB sql statements. 261 | 262 | 263 | **0.5.2 - 26 Feb 2014** 264 | 265 | - Feature: Windows build. 
It requires a 64-bit Erlang version to be installed from http://www.erlang.org/download.html 266 | - Change: Simplified and optimized shard rebalancing when adding new nodes. 267 | - Change: Switched to MurmurHash3 for sharding instead of erlang:phash2. It has a better spread. This and the above change mean 0.5.2 is backwards incompatible. We promise to make it the last such change. 268 | - Bugfix: Java mysql driver compatibility. 269 | - Bugfix: Using an underscore (_) in a key did not work for KV datatypes (you can use ascii characters, numbers, dots and underscore). 270 | - Bugfix: Will no longer create folders in bkdcore/priv 271 | 272 | 273 | **0.5.1** 274 | 275 | - Feature: actordb:types/0, actordb:tables/1, actordb:columns/2 (useful calls when embedded). 276 | - Feature: Return number of rows changed and last insert rowid in non select statements. 277 | - Bugfix: Check schema is valid before saving (did not check in all cases before), check if kv table has right name and types. 278 | - Bugfix: Significantly improved actor migration reliability during cluster rebalancing. 279 | - Bugfix: For "actor type(*);..." queries in mysql console. 280 | 281 | 282 | **0.5** 283 | 284 | - Initial public release 285 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | 2 | Mozilla Public License 3 | Version 2.0 4 | 1. Definitions 5 | 6 | 1.1. “Contributor” 7 | 8 | means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software. 9 | 1.2. “Contributor Version” 10 | 11 | means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor’s Contribution. 12 | 1.3. “Contribution” 13 | 14 | means Covered Software of a particular Contributor. 15 | 1.4. 
“Covered Software” 16 | 17 | means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof. 18 | 1.5. “Incompatible With Secondary Licenses” 19 | 20 | means 21 | 22 | that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or 23 | 24 | that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License. 25 | 26 | 1.6. “Executable Form” 27 | 28 | means any form of the work other than Source Code Form. 29 | 1.7. “Larger Work” 30 | 31 | means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software. 32 | 1.8. “License” 33 | 34 | means this document. 35 | 1.9. “Licensable” 36 | 37 | means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License. 38 | 1.10. “Modifications” 39 | 40 | means any of the following: 41 | 42 | any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or 43 | 44 | any new file in Source Code Form that contains any Covered Software. 45 | 46 | 1.11. “Patent Claims” of a Contributor 47 | 48 | means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version. 49 | 1.12. 
“Secondary License” 50 | 51 | means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses. 52 | 1.13. “Source Code Form” 53 | 54 | means the form of the work preferred for making modifications. 55 | 1.14. “You” (or “Your”) 56 | 57 | means an individual or a legal entity exercising rights under this License. For legal entities, “You” includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, “control” means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity. 58 | 59 | 2. License Grants and Conditions 60 | 2.1. Grants 61 | 62 | Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license: 63 | 64 | under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and 65 | 66 | under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version. 67 | 68 | 2.2. Effective Date 69 | 70 | The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution. 71 | 2.3. Limitations on Grant Scope 72 | 73 | The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. 
Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor: 74 | 75 | for any code that a Contributor has removed from Covered Software; or 76 | 77 | for infringements caused by: (i) Your and any other third party’s modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or 78 | 79 | under Patent Claims infringed by Covered Software in the absence of its Contributions. 80 | 81 | This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4). 82 | 2.4. Subsequent Licenses 83 | 84 | No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3). 85 | 2.5. Representation 86 | 87 | Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License. 88 | 2.6. Fair Use 89 | 90 | This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents. 91 | 2.7. Conditions 92 | 93 | Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1. 94 | 3. Responsibilities 95 | 3.1. Distribution of Source Form 96 | 97 | All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients’ rights in the Source Code Form. 
98 | 3.2. Distribution of Executable Form 99 | 100 | If You distribute Covered Software in Executable Form then: 101 | 102 | such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and 103 | 104 | You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients’ rights in the Source Code Form under this License. 105 | 106 | 3.3. Distribution of a Larger Work 107 | 108 | You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s). 109 | 3.4. Notices 110 | 111 | You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies. 112 | 3.5. Application of Additional Terms 113 | 114 | You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. 
However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction. 115 | 4. Inability to Comply Due to Statute or Regulation 116 | 117 | If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it. 118 | 5. Termination 119 | 120 | 5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. 
Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice. 121 | 122 | 5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate. 123 | 124 | 5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination. 125 | 6. Disclaimer of Warranty 126 | 127 | Covered Software is provided under this License on an “as is” basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer. 128 | 7. 
Limitation of Liability 129 | 130 | Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party’s negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You. 131 | 8. Litigation 132 | 133 | Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party’s ability to bring cross-claims or counter-claims. 134 | 9. Miscellaneous 135 | 136 | This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor. 137 | 10. Versions of the License 138 | 10.1. New Versions 139 | 140 | Mozilla Foundation is the license steward. 
Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number. 141 | 10.2. Effect of New Versions 142 | 143 | You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward. 144 | 10.3. Modified Versions 145 | 146 | If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License). 147 | 10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses 148 | 149 | If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached. 150 | Exhibit A - Source Code Form License Notice 151 | 152 | This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/. 153 | 154 | If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice. 155 | 156 | You may add additional accurate notices of copyright ownership. 157 | Exhibit B - “Incompatible With Secondary Licenses” Notice 158 | 159 | This Source Code Form is “Incompatible With Secondary Licenses”, as defined by the Mozilla Public License, v. 2.0. 
160 | 161 | -------------------------------------------------------------------------------- /Makefile: -------------------------------------------------------------------------------- 1 | REPO ?= actordb 2 | PKG_REVISION ?= $(shell git describe --tags) 3 | PKG_BUILD = 1 4 | BASE_DIR = $(shell pwd) 5 | ERLANG_BIN = $(shell dirname $(shell which erl)) 6 | REBAR ?= $(BASE_DIR)/rebar 7 | OVERLAY_VARS ?= 8 | uname_S := $(shell sh -c 'uname -s 2>/dev/null || echo not') 9 | 10 | ifeq ($(uname_S),Darwin) 11 | SHELLCMD = gcc c_src/cmdshell.c -I/usr/local/opt/readline/include/ /usr/local/opt/readline/lib/libreadline.a -lncurses -o priv/cmdshell 12 | else ifeq ($(shell cat /etc/redhat-release | cut -d' ' -f1),CentOS) 13 | SHELLCMD = gcc c_src/cmdshell.c -lreadline -lncurses -o priv/cmdshell 14 | else 15 | SHELLCMD = gcc c_src/cmdshell.c -static -Wl,-Bdynamic,-lgcc_s,-Bstatic -lreadline -lncurses -ltinfo -o priv/cmdshell 16 | endif 17 | 18 | ifeq ($(uname_S),Darwin) 19 | TOOLCMD = gcc deps/actordb_driver/c_src/tool.c deps/actordb_driver/c_src/mdb.c deps/actordb_driver/c_src/midl.c deps/actordb_driver/c_src/lz4.c -D_TESTAPP_=1 -DMDB_MAXKEYSIZE=0 -DSQLITE_DEFAULT_PAGE_SIZE=4096 -DSQLITE_DEFAULT_WAL_AUTOCHECKPOINT=0 -o actordb_tool 20 | else 21 | TOOLCMD = gcc deps/actordb_driver/c_src/tool.c deps/actordb_driver/c_src/mdb.c deps/actordb_driver/c_src/midl.c deps/actordb_driver/c_src/lz4.c -D_TESTAPP_=1 -DMDB_MAXKEYSIZE=0 -DSQLITE_DEFAULT_PAGE_SIZE=4096 -DSQLITE_DEFAULT_WAL_AUTOCHECKPOINT=0 -lpthread -ldl -o actordb_tool 22 | endif 23 | 24 | $(if $(ERLANG_BIN),,$(warning "Warning: No Erlang found in your path, this will probably not work")) 25 | 26 | all: deps compile 27 | 28 | .PHONY: rel deps 29 | 30 | # thrift -gen erl -out deps/adbt/src/ adbt.thrift 31 | compile: 32 | $(SHELLCMD) 33 | $(TOOLCMD) 34 | ./rebar compile 35 | ./priv/mkconsole.escript 36 | 37 | test_cluster: 38 | ./detest test/dist_test.erl cluster 39 | 40 | test: compile test_cluster 41 | 42 | 43 | recompile: 44 | 
./rebar update-deps 45 | ./rebar compile 46 | 47 | update: 48 | ./rebar update-deps 49 | 50 | deps: 51 | ./rebar get-deps 52 | 53 | clean: 54 | ./rebar clean 55 | 56 | 57 | distclean: clean relclean ballclean 58 | ./rebar delete-deps 59 | 60 | generate: 61 | ./rebar generate $(OVERLAY_VARS) 62 | 63 | rel: deps compile generate 64 | 65 | relclean: 66 | rm -rf rel/riak 67 | 68 | ballclean: 69 | rm -rf $(PKG_ID).tar.gz distdir 70 | 71 | ARCH= $(shell uname -m) 72 | 73 | ## 74 | ## Version and naming variables for distribution and packaging 75 | ## 76 | 77 | # Tag from git with style <tagname>-<commits_since_tag>-<current_commit_hash> 78 | # Ex: When on a tag: actordb-1.0.3 (no commits since tag) 79 | # For most normal Commits: actordb-1.1.0pre1-27-g1170096 80 | # Last tag: actordb-1.1.0pre1 81 | # Commits since tag: 27 82 | # Hash of commit: g1170096 83 | REPO_TAG := $(shell git describe --tags) 84 | 85 | # Split off repo name 86 | # Changes to 1.0.3 or 1.1.0pre1-27-g1170096 from example above 87 | REVISION = $(shell echo $(REPO_TAG) | sed -e 's/^$(REPO)-//') 88 | 89 | # Primary version identifier, strip off commit information 90 | # Changes to 1.0.3 or 1.1.0pre1 from example above 91 | MAJOR_VERSION ?= $(shell echo $(REVISION) | sed -e 's/\([0-9.]*\)-.*/\1/') 92 | 93 | 94 | ## 95 | ## Release tarball creation 96 | ## Generates a tarball that includes all the deps sources so no checkouts are necessary 97 | ## 98 | 99 | # Use git archive to make a clean copy of a repository at a current 100 | # revision and copy to a new directory 101 | archive_git = git archive --format=tar --prefix=$(1)/ HEAD | (cd $(2) && tar xf -) 102 | 103 | # Alternative to git archive to remove .git directory, but not any 104 | # other files outside of the source tree (used for eleveldb which 105 | # brings in leveldb) 106 | clean_git = cp -R ../../$(1) $(2)/deps/ && find $(2)/$(1) -name .git -type d | xargs rm -rf 107 | 108 | # Determines which function to call.
eleveldb is treated as a special case 109 | archive = $(call archive_git,$(1),$(2)) 110 | 111 | 112 | # Checkout tag, fetch deps (so we don't have to do it multiple times) and collect 113 | # the version of all the dependencies into the MANIFEST_FILE 114 | CLONEDIR ?= actordb-clone 115 | MANIFEST_FILE ?= dependency_manifest.git 116 | get_dist_deps = mkdir distdir && \ 117 | git clone . distdir/$(CLONEDIR) && \ 118 | cd distdir/$(CLONEDIR) && \ 119 | git checkout $(REPO_TAG) && \ 120 | $(MAKE) deps && \ 121 | echo "- Dependencies and their tags at build time of $(REPO) at $(REPO_TAG)" > $(MANIFEST_FILE) && \ 122 | for dep in deps/*; do \ 123 | cd $${dep} && \ 124 | printf "$${dep} version `git describe --long --tags 2>/dev/null || git rev-parse HEAD`\n" >> ../../$(MANIFEST_FILE) && \ 125 | cd ../..; done && \ 126 | LC_ALL=POSIX && export LC_ALL && sort $(MANIFEST_FILE) > $(MANIFEST_FILE).tmp && mv $(MANIFEST_FILE).tmp $(MANIFEST_FILE); 127 | 128 | 129 | # Name resulting directory & tar file based on current status of the git tag 130 | # If it is a tagged release (PKG_VERSION == MAJOR_VERSION), use the toplevel 131 | # tag as the package name, otherwise generate a unique hash of all the 132 | # dependencies revisions to make the package name unique. 133 | # This enables the toplevel repository package to change names 134 | # when underlying dependencies change. 135 | NAME_HASH = $(shell git hash-object distdir/$(CLONEDIR)/$(MANIFEST_FILE) 2>/dev/null | cut -c 1-8) 136 | ifeq ($(REVISION), $(MAJOR_VERSION)) 137 | PKG_ID := $(REPO_TAG) 138 | else 139 | PKG_ID = $(REPO)-$(MAJOR_VERSION)-$(NAME_HASH) 140 | endif 141 | 142 | # To ensure a clean build, copy the CLONEDIR at a specific tag to a new directory 143 | # which will be the basis of the src tar file (and packages) 144 | # The vsn.git file is required by rebar to be able to build from the resulting 145 | # tar file 146 | build_clean_dir = cd distdir/$(CLONEDIR) && \ 147 | $(call archive_git,$(PKG_ID),..) 
&& \ 148 | cp $(MANIFEST_FILE) ../$(PKG_ID)/ && \ 149 | mkdir ../$(PKG_ID)/deps && \ 150 | for dep in deps/*; do \ 151 | cd $${dep} && \ 152 | mkdir -p ../../../$(PKG_ID)/$${dep}/priv && \ 153 | $(call archive,$${dep},../../../$(PKG_ID)) && \ 154 | printf "`git describe --long --tags 2>/dev/null || git rev-parse HEAD`" > ../../../$(PKG_ID)/$${dep}/priv/vsn.git && \ 155 | cd ../..; \ 156 | done 157 | 158 | 159 | distdir/$(CLONEDIR)/$(MANIFEST_FILE): 160 | $(if $(REPO_TAG), $(call get_dist_deps), $(error "You can't generate a release tarball from a non-tagged revision. Run 'git checkout <tag>', then 'make dist'")) 161 | 162 | distdir/$(PKG_ID): distdir/$(CLONEDIR)/$(MANIFEST_FILE) 163 | $(call build_clean_dir) 164 | 165 | distdir/$(PKG_ID).tar.gz: distdir/$(PKG_ID) 166 | tar -C distdir -czf distdir/$(PKG_ID).tar.gz $(PKG_ID) 167 | 168 | dist: distdir/$(PKG_ID).tar.gz 169 | cp distdir/$(PKG_ID).tar.gz . 170 | 171 | ballclean: 172 | rm -rf $(PKG_ID).tar.gz distdir 173 | 174 | pkgclean: ballclean 175 | rm -rf package 176 | 177 | ## 178 | ## Packaging targets 179 | ## 180 | 181 | # Yes another variable, this one is repo-<version>-<hash> 183 | PKG_VERSION = $(shell echo $(PKG_ID) | sed -e 's/^$(REPO)-//') 184 | 185 | package: distdir/$(PKG_ID).tar.gz 186 | ln -s distdir package 187 | $(MAKE) -C package -f $(PKG_ID)/deps/node_package/Makefile 188 | 189 | .PHONY: package 190 | export PKG_VERSION PKG_ID PKG_BUILD BASE_DIR ERLANG_BIN REBAR OVERLAY_VARS RELEASE 191 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | ### ActorDB is a distributed SQL database... 2 | 3 | with the scalability of a KV store, while keeping the query capabilities of a relational database. 4 | 5 | ActorDB is ideal as a server side database for [apps](http://www.actordb.com/docs-examples.html#example_filesync). Think of running a large mail service, dropbox, evernote, etc.
They all require server side storage for user data, but the vast majority of queries are within a specific user. With many users, the server side database can get very large. Using ActorDB you can keep a full relational database for every user and not be forced into painful scaling strategies that require you to throw away everything that makes relational databases good. 6 | 7 | ActorDB is a database that does not hide sharding from you. It makes it explicit, so you can keep fully relational chunks (i.e. actors) for 99% of your database queries. 8 | 9 | Even if your data model is not easily partitioned, ActorDB has a powerful KV data type that you can use instead. An [ActorDB KV](http://www.actordb.com/docs-kvstore.html#about_kv_store) type is an SQL table that is partitioned across all servers. That table can have sub tables linked to it using foreign keys. 10 | 11 | You can run queries or transactions on a single actor or across any number of actors. ActorDB can run on a single server or many servers. Writing to one actor is completely independent of writes to another actor, unless they are participating in the same transaction. 12 | 13 | Servers can be added and schema can be updated at any time while the database is running. 14 | 15 | Homepage: http://www.actordb.com/ 16 | 17 | For any questions you can use: https://gitter.im/actordb/ 18 | 19 | ActorDB is: 20 | 21 | * A distributed relational SQL database. 22 | * Consistent (not eventually consistent). 23 | * Distributed. 24 | * Redundant. 25 | * Massively concurrent. 26 | * No single point of failure. 27 | * ACID. 28 | * Connectable over MySQL protocol and [Thrift](https://github.com/biokoda/actordb/blob/master/adbt.thrift). 29 | * Replicated safely using the Raft distributed consensus algorithm. 30 | 31 | Advantages 32 | 33 | * Complete horizontal scalability. All nodes are equivalent and you can have as many nodes as you need. 34 | * Full-featured ACID database.
35 | * Suitable for very large datasets over many actors and servers. 36 | * No special drivers needed. Use the MySQL driver of your language of choice. 37 | * Easy to configure and administer. 38 | * No global locks. Only the actors (one or many) involved in a transaction are locked during a write. All other actors are unaffected. 39 | * Uses stable, reliable SQL and storage engines: SQLite on top of LMDB. 40 | * Inherits SQLite features like JSON support and common table expressions. 41 | 42 | ### Would you like to contribute? 43 | 44 | What we would most like to see is more client libraries on top of Thrift. Thrift-generated code can be a bit verbose. Generally it is much nicer to implement an interface to it that hides some boilerplate code and uses nicer types. 45 | 46 | Also if you have any ideas, thoughts on possible improvements or bugs to report, contact us using GitHub issues. 47 | 48 | So if you're interested in contributing, use your language of choice. Generate a thrift interface using our [adbt.thrift](https://github.com/biokoda/actordb/blob/master/adbt.thrift), then write a clean interface to it. 49 | 50 | We will list any outside contributions here.
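As a sketch of the boilerplate such a wrapper would hide: every cell of a `ReadResult` row arrives as a `Val` union with exactly one field set, and callers just want plain values. The field names below follow adbt.thrift; the plain dicts standing in for the Thrift-generated `Val` objects (and the helper names) are hypothetical, not part of any existing client library.

```python
# Hypothetical sketch: flatten Thrift ReadResult rows into native values.
# Plain dicts stand in for the generated Val union objects; in a real
# wrapper you would read the same attributes off the Thrift structs.

def val_to_native(val):
    """Unwrap a Val union (one field set) into a native Python value."""
    if val.get("isnull"):
        return None
    # Field names as declared in the Val union in adbt.thrift.
    for field in ("bigint", "integer", "smallint", "real",
                  "bval", "text", "blob"):
        if val.get(field) is not None:
            return val[field]
    return None

def rows_to_dicts(columns, rows):
    """Combine ReadResult columns and rows of Vals into a list of dicts."""
    return [dict(zip(columns, (val_to_native(v) for v in row)))
            for row in rows]

result = rows_to_dicts(["id", "txt"],
                       [[{"bigint": 1}, {"text": "hello"}],
                        [{"bigint": 2}, {"isnull": True}]])
# result == [{"id": 1, "txt": "hello"}, {"id": 2, "txt": None}]
```

A wrapper built this way lets application code iterate over dicts instead of pattern-matching union fields at every call site.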
51 | 52 | ### Learn more 53 | 54 | Documentation: http://www.actordb.com/docs-about.html 55 | 56 | Story: http://blog.biokoda.com/post/112206754025/why-we-built-actordb 57 | 58 | How SQLite runs on top of LMDB: http://blog.biokoda.com/post/133121776825/actordb-how-and-why-we-run-sqlite-on-top-of-lmdb 59 | 60 | How to configure and run: http://www.actordb.com/docs-configuration.html 61 | 62 | Change log: https://github.com/biokoda/actordb/blob/master/CHANGES.md 63 | 64 | ### Client libs 65 | 66 | Erlang: https://github.com/biokoda/actordb_client 67 | 68 | .NET 2.0: https://github.com/hq-io/actordb-net 69 | 70 | ### Builds 71 | 72 | **ubuntu/debian package (64bit)** 73 | 74 | https://dzbscw1ubdtyw.cloudfront.net/actordb_0.10.29-1_amd64.deb 75 | 76 | **osx package (64bit):** 77 | 78 | https://dzbscw1ubdtyw.cloudfront.net/actordb-0.10.29-OSX-x86_64.tar.gz 79 | 80 | **red hat/centos package (64bit):** 81 | 82 | Centos 7: https://dzbscw1ubdtyw.cloudfront.net/actordb-0.10.29-1.el7.x86_64.rpm 83 | 84 | **general linux build (64bit)** 85 | 86 | https://dzbscw1ubdtyw.cloudfront.net/actordb-0.10.29-linux.tar.gz 87 | 88 | **windows package (64bit):** 89 | 90 | https://dzbscw1ubdtyw.cloudfront.net/actordb-0.10.25-win-x86_64.zip 91 | -------------------------------------------------------------------------------- /actordb_console: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | ./priv/cmdshell escript ./priv/actordb_console "$@" 3 | -------------------------------------------------------------------------------- /adbt.thrift: -------------------------------------------------------------------------------- 1 | namespace java com.actordb.thrift 2 | namespace cpp com.actordb 3 | namespace csharp Actordb 4 | namespace py actordb 5 | namespace php actordb 6 | namespace perl Actordb 7 | namespace rb ActordbThrift 8 | namespace erl adbt 9 | namespace go actordb 10 | 11 | const string VERSION = "1.4.0" 12 | 13 | union Val 14 | { 15 | 1: 
i64 bigint, 16 | 2: i32 integer, 17 | 3: i16 smallint, 18 | 4: double real, 19 | 5: bool bval, 20 | 6: string text, 21 | 7: bool isnull, 22 | 8: binary blob 23 | } 24 | 25 | struct ReadResult 26 | { 27 | 1: required bool hasMore, // not used yet 28 | 2: required list<string> columns, 29 | 3: required list<list<Val>> rows 30 | } 31 | 32 | struct WriteResult 33 | { 34 | 1: required i64 lastChangeRowid, 35 | 2: required i64 rowsChanged 36 | } 37 | 38 | struct LoginResult 39 | { 40 | 1: required bool success, 41 | 2: optional string error 42 | 3: optional list<string> readaccess; 43 | 4: optional list<string> writeaccess; 44 | } 45 | 46 | union QueryResult 47 | { 48 | 1: ReadResult rdRes, 49 | 2: WriteResult wrRes, 50 | } 51 | 52 | union Result 53 | { 54 | 1: ReadResult rdRes, 55 | 2: WriteResult wrRes, 56 | 3: list<Result> batchRes 57 | } 58 | 59 | enum ErrorCode { 60 | NotLoggedIn = 1, 61 | EmptyActorName = 2, 62 | InvalidActorName = 3, // Invalid characters 63 | InvalidType = 4, // Invalid actor type 64 | NotPermitted = 5, // Accessing actor that user does not have permission for 65 | SqlError = 6, 66 | ConsensusTimeout = 7, // After writing not enough nodes responded to confirm. 67 | // Write may later be successfully replicated, or it may be 68 | // abandoned. 69 | ConsensusImpossibleAtm = 8, // Query was not executed because not enough nodes are online. 70 | LocalNodeMissing = 9, // When creating a cluster, node where init was attempted was missing 71 | // from the node list. 72 | MissingGroupInsert = 10, // When creating a cluster, if no groups are specified 73 | MissingNodesInsert = 11, // When creating a cluster, if no nodes are specified 74 | MissingRootUser = 12, // When creating a cluster, if no root user was specified 75 | LoginFailed = 13, // Username and/or password was incorrect 76 | NotInitialized = 14, // Query before actordb initialized 77 | NoCreate = 15, // Query without create flag was attempted on an actor which does not exist.
78 | Error = 100 // Unknown error 79 | } 80 | 81 | exception InvalidRequestException { 82 | 1: required ErrorCode code, 83 | 2: required string info 84 | } 85 | 86 | service Actordb { 87 | 88 | string protocolVersion(), 89 | 90 | LoginResult login(1: required string username, 2: required string password) throws (1:InvalidRequestException ire), 91 | 92 | // For safer login, get 20 bytes of cryptographically random data, use it to hash password for login call. 93 | // It uses the same hashing algorithm as mysql: 94 | // SHA1( password ) XOR SHA1( "20-bytes random data from server" <concat> SHA1( SHA1( password ) ) ) 95 | binary salt(), 96 | 97 | // Initialize instance/cluster(s), create users 98 | Result exec_config(1: required string sql) throws (1:InvalidRequestException ire), 99 | 100 | // Change schema 101 | Result exec_schema(1: required string sql) throws (1:InvalidRequestException ire), 102 | 103 | // query for a single actor of type 104 | Result exec_single(1: required string actorname, 2: required string actortype, 3: required string sql, 4: list<string> flags = []) throws (1:InvalidRequestException ire), 105 | 106 | // query for a single actor of type with parameterized query (ex.: "insert into tab values (?1,?2,?3)") 107 | // This is faster and safer.
108 | Result exec_single_param(1: required string actorname, 2: required string actortype, 3: required string sql, 4: list<string> flags = [], 5: list<list<list<Val>>> bindingvals = []) throws (1:InvalidRequestException ire), 109 | 110 | Result exec_single_batch_param(1: required string actorname, 2: required string actortype, 3: required string sql, 4: list<string> flags = [], 5: list<list<list<Val>>> bindingvals = []) throws (1:InvalidRequestException ire), 111 | 112 | // query over multiple actors of type 113 | Result exec_multi(1: required list<string> actors, 2: required string actortype, 3: required string sql, 4: list<string> flags = []) throws (1:InvalidRequestException ire), 114 | 115 | // query over all actors for type 116 | Result exec_all(1: required string actortype, 2: required string sql, 3: list<string> flags = []) throws (1:InvalidRequestException ire), 117 | 118 | // all in sql: actor sometype(actorname) create; select * from mytab; 119 | Result exec_sql(1: required string sql) throws (1:InvalidRequestException ire), 120 | 121 | // all in sql but with parameterized query 122 | Result exec_sql_param(1: required string sql, 2: list<list<Val>> bindingvals = []) throws (1:InvalidRequestException ire), 123 | 124 | // Which actor types in schema. 125 | list<string> actor_types() throws (1:InvalidRequestException ire), 126 | 127 | // Which tables are in an actor type. 128 | list<string> actor_tables(1: required string actor_type) throws (1:InvalidRequestException ire), 129 | 130 | // Which columns for actor type and table. 131 | map<string,string> actor_columns(1: required string actor_type, 2: required string actor_table) throws (1:InvalidRequestException ire), 132 | 133 | // Returns a unique integer 134 | i64 uniqid() throws (1:InvalidRequestException ire) 135 | 136 | } 137 | -------------------------------------------------------------------------------- /c_src/cmdshell.c: -------------------------------------------------------------------------------- 1 | // This Source Code Form is subject to the terms of the Mozilla Public 2 | // License, v. 2.0.
If a copy of the MPL was not distributed with this 3 | // file, You can obtain one at http://mozilla.org/MPL/2.0/. 4 | #include <stdio.h> 5 | #include <stdlib.h> 6 | #include <string.h> 7 | #ifndef _WIN32 8 | #include <unistd.h> 9 | #include <sys/wait.h> 10 | #endif 11 | #include <readline/readline.h> 12 | #include <readline/history.h> 13 | #ifdef __linux__ 14 | #include <sys/prctl.h> 15 | #endif 16 | #include <signal.h> 17 | #include <fcntl.h> 18 | #include <sys/stat.h> 19 | #include <sys/time.h> 20 | #include <sys/select.h> 21 | #include <errno.h> 22 | #define READ 0 23 | #define WRITE 1 24 | 25 | // gcc c_src/cmdshell.c -I/usr/local/Cellar/readline/6.3.8/include/ /usr/local/Cellar/readline/6.3.8/lib/libreadline.a -lncurses -o cmdshell 26 | 27 | char running=1; 28 | char havestdin=1; 29 | char doshell=1; 30 | int infp, outfp; 31 | int req = 0, resp = 0; 32 | const char *prompt = "actordb> "; 33 | char pipe_req[128]; 34 | char pipe_resp[128]; 35 | unsigned long long int mytime = 0; 36 | 37 | pid_t popen2(char* argv[], int argc, int *infp, int *outfp) 38 | { 39 | int p_stdin[2], p_stdout[2]; 40 | pid_t pid; 41 | 42 | if (pipe(p_stdin) != 0 || pipe(p_stdout) != 0) 43 | return -1; 44 | 45 | pid = fork(); 46 | if (pid < 0) 47 | return pid; 48 | 49 | else if (pid == 0) 50 | { 51 | char *args[argc]; 52 | int i; 53 | args[0] = argv[1]; 54 | // args[1] = "pipe"; 55 | // args[2] = pipe_req; 56 | // args[3] = pipe_resp; 57 | for (i = 2; i < argc; i++) 58 | args[i-2+1] = argv[i]; 59 | args[i-2+1] = NULL; 60 | 61 | close(p_stdin[WRITE]); 62 | dup2(p_stdin[READ], READ); 63 | close(p_stdout[READ]); 64 | dup2(p_stdout[WRITE], WRITE); 65 | // execl("/bin/sh", "sh", "-c", command, NULL); 66 | execvp(argv[1],args); 67 | exit(1); 68 | } 69 | 70 | if (infp == NULL) 71 | close(p_stdin[WRITE]); 72 | else 73 | *infp = p_stdin[WRITE]; 74 | if (outfp == NULL) 75 | close(p_stdout[READ]); 76 | else 77 | *outfp = p_stdout[READ]; 78 | return pid; 79 | } 80 | 81 | void proc_exit() 82 | { 83 | running = 0; 84 | } 85 | 86 | static void rl_handler(char* line) 87 | { 88 | if (line == NULL) 89 | { 90 | // running = 0; 91 | havestdin = 0; 92 | return; 93 | } 94 | if (strlen(line)
> 1) 95 | add_history(line); 96 | 97 | // #ifdef TIOCGWINSZ 98 | // { 99 | // struct winsize ws; 100 | // if (ioctl(0, TIOCGWINSZ, &ws) >= 0) 101 | // { 102 | // char dim[30]; 103 | // sprintf(dim,"dim=%d,%d\n",(int)ws.ws_row, (int)ws.ws_col), 104 | // write(req, dim, strlen(dim)); 105 | // } 106 | // } 107 | // #endif 108 | if (strlen(line) == 0) 109 | { 110 | if (write(req, "\n", strlen("\n")) < 0) 111 | running = 0; 112 | } 113 | else if (write(req, line, strlen(line)) < 0) 114 | { 115 | running = 0; 116 | } 117 | } 118 | 119 | void stop() 120 | { 121 | running = 0; 122 | // close(STDIN_FILENO); 123 | close(resp); 124 | } 125 | 126 | int main(int argc, char *argv[]) 127 | { 128 | signal(SIGCHLD, proc_exit); 129 | signal(SIGQUIT, stop); 130 | signal(SIGINT, stop); 131 | int nread = 0, sread = 0, i; 132 | char buf[1024*64]; 133 | struct timeval tv; 134 | gettimeofday(&tv, NULL); 135 | FILE *comfile = NULL; 136 | 137 | if (argc < 2) 138 | { 139 | printf("Missing program to execute\n"); 140 | return 0; 141 | } 142 | 143 | for (i = 1; i < argc; i++) 144 | { 145 | if (strcmp(argv[i],"-noshell") == 0) 146 | doshell = 0; 147 | } 148 | 149 | 150 | mytime = tv.tv_sec*1000 + (tv.tv_usec / 1000); 151 | snprintf(pipe_req,125,"/tmp/actordb.%llu.req",mytime); 152 | snprintf(pipe_resp,125,"/tmp/actordb.%llu.resp",mytime); 153 | 154 | comfile = fopen("/tmp/comfile","wb"); 155 | sprintf(buf,"%s\n%s",pipe_req,pipe_resp); 156 | fwrite(buf,strlen(buf),1,comfile); 157 | fclose(comfile); 158 | 159 | mkfifo(pipe_req, 0666); 160 | mkfifo(pipe_resp, 0666); 161 | 162 | resp = open(pipe_resp,O_RDONLY | O_NONBLOCK); 163 | if (resp == -1) 164 | { 165 | printf("Unable to open resp pipe\n"); 166 | goto finished; 167 | } 168 | if (popen2(argv, argc, &infp, &outfp) <= 0) 169 | { 170 | printf("Unable to exec your-program-B\n"); 171 | goto finished; 172 | } 173 | 174 | 175 | req = open(pipe_req,O_WRONLY); 176 | if (req == -1) 177 | { 178 | printf("Unable to open req pipe\n"); 179 | goto finished; 
180 | } 181 | if (doshell) 182 | rl_callback_handler_install(prompt, &rl_handler); 183 | 184 | while (running) 185 | { 186 | int rc; 187 | fd_set fdread; 188 | FD_ZERO(&fdread); 189 | if (havestdin) 190 | FD_SET(STDIN_FILENO, &fdread); 191 | FD_SET(resp, &fdread); 192 | 193 | rc = select(resp+1, &fdread, NULL, NULL, NULL); 194 | if (!running) 195 | break; 196 | 197 | if (havestdin && FD_ISSET(STDIN_FILENO, &fdread)) 198 | { 199 | if (doshell) 200 | rl_callback_read_char(); 201 | else 202 | { 203 | char line[256]; 204 | memset(line,0,sizeof(line)); 205 | fgets(line,sizeof(line),fdopen(STDIN_FILENO,"r")); 206 | for (i = 0; line[i] != 0 && i < sizeof(line); i++) 207 | { 208 | if (line[i] == '\n') 209 | { 210 | line[i] = 0; 211 | break; 212 | } 213 | } 214 | 215 | if (line[0] == 0) 216 | { 217 | havestdin = 0; 218 | } 219 | else if (write(req, line, strlen(line)) < 0) 220 | { 221 | havestdin = 0; 222 | } 223 | } 224 | } 225 | else if (FD_ISSET(resp, &fdread)) 226 | { 227 | int offset = 0; 228 | sread = read(resp, buf, sizeof(buf)-1); 229 | if (sread <= 0 && errno == EWOULDBLOCK) 230 | continue; 231 | else if (sread <= 0) 232 | { 233 | rl_set_prompt(""); 234 | rl_redisplay(); 235 | break; 236 | } 237 | buf[sread] = 0; 238 | if (buf[0] == '~' && buf[1] == '~') 239 | { 240 | int i = 0; 241 | for (i = 0; i < sread; i++) 242 | { 243 | if (buf[i] == '\n' || buf[i] == '\r') 244 | { 245 | buf[i] = 0; 246 | break; 247 | } 248 | } 249 | if (strcmp(buf+2,"getpass") == 0) 250 | { 251 | char *line = NULL; 252 | rl_save_prompt(); 253 | rl_message(""); 254 | rl_callback_handler_remove(); 255 | line = getpass("Password:"); 256 | write(req, line, strlen(line)); 257 | rl_callback_handler_install(prompt, &rl_handler); 258 | continue; 259 | } 260 | else 261 | { 262 | rl_set_prompt(buf+2); 263 | rl_redisplay(); 264 | } 265 | 266 | for (; i < sread; i++) 267 | { 268 | if (buf[i] >= '!') 269 | break; 270 | } 271 | if (i == sread) 272 | continue; 273 | 274 | offset = i; 275 | sread = 
strlen(buf+offset); 276 | } 277 | 278 | rl_save_prompt(); 279 | rl_replace_line("",0); 280 | rl_redisplay(); 281 | printf("%.*s",sread,buf+offset); 282 | rl_clear_message(); 283 | rl_restore_prompt(); 284 | rl_redisplay(); 285 | // printlog(buf,sread); 286 | } 287 | } 288 | rl_callback_handler_remove(); 289 | finished: 290 | unlink(pipe_req); 291 | unlink(pipe_resp); 292 | 293 | return 0; 294 | } 295 | -------------------------------------------------------------------------------- /etc/app.config: -------------------------------------------------------------------------------- 1 | [ 2 | {actordb_core,[ 3 | 4 | {main_db_folder,"var"}, 5 | % {extra_db_folders,["/someplace/else"]}, 6 | 7 | % ActorDB uses the LMDB storage engine, which requires a maximum database size. 8 | % If in doubt, pick 90% of your HD free space. 9 | % If using multiple folders, this will limit DB size for every folder. 10 | % Values can be in TB, GB, MB. 11 | {max_db_size,"1TB"}, 12 | 13 | % Thrift API is the recommended interface 14 | % https://github.com/biokoda/actordb_thrift/blob/master/adbt.thrift 15 | {thrift_port,33306}, 16 | % Use framed thrift protocol. If this is enabled actordb_console must be executed with -framed. 17 | % {thrift_framed, false}, 18 | 19 | % Which port to listen to for mysql protocol. 0 for disabled 20 | {mysql_protocol, 33307}, 21 | 22 | % After how much idle time ActorDB will close client connections. 23 | % 0 means never and will wait for tcp close. 24 | {client_inactivity_timeout, 10000}, 25 | 26 | % Which network interface thrift/mysql should listen on 27 | % Default is all interfaces. You should set it to whichever interface you are using. 28 | %{api_network_interface,"127.0.0.1"}, 29 | 30 | % How many transaction managers per node. 31 | % They are used for transactions over more than one actor but not for writes to a single actor. 32 | % A higher number means more transaction managers can work at the same time.
33 | % Too low and the server will not be utilized well enough, too high and it will do too 34 | % much work at the same time. 35 | {num_transaction_managers,12}, 36 | 37 | % Maximum number of milliseconds to wait for DB queries to complete before they are canceled. 38 | % Set to 0 for no timeout. 39 | {query_timeout,10000}, 40 | 41 | % safe - Writes will always be synced to disk before replying to client. 42 | % Hopefully you are using a HD that actually executes a sync when told to do so. 43 | % This has a significant performance impact. 44 | % interval - in milliseconds. This will issue fsyncs on a timer. A compromise for speed and safety. 45 | % ActorDB is a distributed database, in case of a server crash writes will still be 46 | % replicated to other nodes, which means the degree of safety is reasonable for a lot of use cases. 47 | % fast - issue writes as fast as possible, let the OS/hardware decide when to sync. In case of crash, data 48 | % is still replicated to other nodes, so it's not as dangerous as it may sound. 49 | % If some actors are more important than others, you can add an fsync flag to write queries. Successful write 50 | % queries with fsync flag are fsynced to disk before replying to client. 51 | %{fsync, safe}, 52 | {fsync, {interval,1000}}, 53 | %{fsync, fast}, 54 | 55 | % If set it will enable automatic loading of sqlite extensions. This will 56 | % cause all shared libraries in the folder to be loaded on ActorDB startup. If they 57 | % are valid sqlite3 extensions they will be available through SQL. 58 | % Entry point name of the extension must match filename. 59 | {sqlite_extensions_folder,"extensions"}, 60 | 61 | % Every actor requires replication space. 62 | % This way if one of the nodes falls behind for whatever reason, it can receive what it missed. 63 | % Worst case scenario is at least one of the nodes being offline for a longer period of time. 64 | % These parameters govern how much replication space actors should hold.
Once a node is too far behind 65 | % it will require a full resync of the actor. 66 | % Values are in pages and are per actor. {5000,0.1} means keep max(5000,0.1*ActorSize) 67 | % of replication space. A page is 4096 bytes, but when stored on disk it is actually compressed. 68 | {max_replication_space, {5000,0.1}} 69 | ]}, 70 | 71 | 72 | {bkdcore,[ 73 | {autocompile,["bkdcore","actordb_core","actordb_thrift"]}, 74 | % rpcport should not change once ActorDB has been initialized. 75 | {rpcport,44380} 76 | ]}, 77 | 78 | {kernel, [ 79 | {inet_dist_listen_min, 44370}, 80 | {inet_dist_listen_max, 44370} 81 | ]}, 82 | 83 | {lager, [ 84 | {colored, true}, 85 | {handlers, [ 86 | {lager_console_backend, [{level,debug}, 87 | {formatter,lager_default_formatter}, 88 | {formatter_config,[time," ",pid," ",module," ",line," [",severity,"] ", message, "\n"]} 89 | ]}, 90 | {lager_file_backend, [{file, "log/error.log"}, {level, error}, {formatter, lager_default_formatter}, 91 | {formatter_config, [date, " ", time," [",severity,"] ",pid," ",module, " ",line, " ", message, "\n"]}]}, 92 | {lager_file_backend, [{file, "log/console.log"}, {level, info}]} 93 | ]} 94 | ]}, 95 | 96 | %% SASL config 97 | {sasl, [ 98 | {sasl_error_logger, {file, "log/sasl-error.log"}}, 99 | {errlog_type, error}, 100 | {error_logger_mf_dir, "log/sasl"}, % Log directory 101 | {error_logger_mf_maxbytes, 10485760}, % 10 MB max file size 102 | {error_logger_mf_maxfiles, 5} % 5 files max 103 | ]} 104 | ]. 105 | -------------------------------------------------------------------------------- /etc/init.sql: -------------------------------------------------------------------------------- 1 | --WARNING: Every SQL statement must be on its own line. 2 | 3 | -- First initialize the node. Create group, create node and create root user. Only this created 4 | -- user is able to change schema or change configuration. Once initialization is done 5 | -- console will be connected as this user.
Every user created in this stage will have all privileges. 6 | -- Which means you should not create more than one. Add additional users later. 7 | use config 8 | insert into groups values ('grp1','cluster') 9 | -- localnode() is whatever is in vm.args (-name ....) for the node we are connected to. 10 | insert into nodes values (localnode(),'grp1') 11 | CREATE USER 'root' IDENTIFIED BY 'rootpass' 12 | commit 13 | 14 | -- Still in config db, now add a second user to run queries with 15 | CREATE USER 'myuser' IDENTIFIED BY 'mypass' 16 | -- * means user has access to all actor types 17 | GRANT read,write ON * to 'myuser' 18 | -- We could also set a user that only has access to type1 actors with 19 | -- CREATE USER 'type1user' IDENTIFIED BY 'type1pass' 20 | -- GRANT read,write ON type1 to 'type1user'; 21 | commit 22 | 23 | -- Set schema 24 | use schema 25 | actor type1 26 | CREATE TABLE tab (id INTEGER PRIMARY KEY, txt TEXT) 27 | CREATE TABLE tab1 (id INTEGER PRIMARY KEY, txt TEXT) 28 | ALTER TABLE tab ADD i INTEGER 29 | CREATE TABLE tabx (id INTEGER PRIMARY KEY CHECK (typeof(id) == 'integer'), txt TEXT CHECK (typeof(txt) == 'text')) 30 | actor type2 31 | CREATE TABLE asdf (id INTEGER PRIMARY KEY AUTOINCREMENT, txt BLOB) 32 | -- KV type of actor. This means counters is a sharded table across all nodes. 33 | actor counters kv 34 | CREATE TABLE actors (id TEXT PRIMARY KEY, hash INTEGER, val INTEGER) WITHOUT ROWID 35 | -- Another KV type. This one with another sub table.
36 | -- Any sub table must use foreign key on actors.id and have "on delete cascade" 37 | actor filesystem kv 38 | CREATE TABLE actors (id TEXT PRIMARY KEY, hash INTEGER, size INTEGER) WITHOUT ROWID 39 | CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, fileid TEXT, uid INTEGER, FOREIGN KEY (fileid) REFERENCES actors(id) ON DELETE CASCADE) 40 | commit 41 | -------------------------------------------------------------------------------- /etc/vm.args: -------------------------------------------------------------------------------- 1 | -name testnd@127.0.0.1 2 | -setcookie actordb 3 | +A 3 4 | +stbt db 5 | -------------------------------------------------------------------------------- /pkg.vars.config: -------------------------------------------------------------------------------- 1 | {package_name, "actordb"}. 2 | {package_install_name, "actordb"}. 3 | {package_install_user, "actordb"}. 4 | {package_install_group, "actordb"}. 5 | {package_install_user_desc, "ActorDB user"}. 6 | {package_commands, {list, [[{name, "actordb"}],[{name, "actordb_console"}],[{name,"actordb_tool"}]]}}. 7 | {package_shortdesc, "ActorDB is a distributed SQL database"}. 8 | {package_desc, "ActorDB is a distributed SQL database"}. 9 | {bin_or_sbin, "sbin"}. 10 | {package_patch_dir, "actordb-patches"}. 11 | {solaris_pkgname, "BIOKODAactordb"}. 12 | {license_type, "Mozilla Public License 2.0"}. 13 | {copyright, "2016 Biokoda d.o.o"}. 14 | {vendor_name, "Biokoda d.o.o."}. 15 | {vendor_url, "http://www.biokoda.com"}. 16 | {vendor_contact_name, "Biokoda"}. 17 | {vendor_contact_email, "actordb@biokoda.com"}. 18 | {license_full_text, "This Source Code Form is subject to the terms of the Mozilla Public\n" 19 | "License, v. 2.0. If a copy of the MPL was not distributed with this\n" 20 | "file, You can obtain one at http://mozilla.org/MPL/2.0/."}. 
21 | 22 | -------------------------------------------------------------------------------- /priv/mkconsole.escript: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env escript 2 | 3 | 4 | main(BinFiles1) -> 5 | BinFiles = ["deps/bkdcore/ebin/butil.beam", 6 | "deps/actordb_core/ebin/actordb_sql.beam", 7 | "ebin/actordb_console.beam"] ++ BinFiles1, 8 | Apps = [thrift,lager,poolboy, actordb_client,adbt], 9 | 10 | file:make_dir("ebin"), 11 | 12 | {ok, _Mod,Bin} = compile:file("src/actordb_console.erl", [binary,return_errors]), 13 | filelib:ensure_dir("ebin/"), 14 | file:write_file("ebin/actordb_console.beam",Bin), 15 | 16 | case os:type() of 17 | {win32, _} -> 18 | EndName = "actordb_wxconsole", 19 | {ok, _,WxBin} = compile:file("src/actordb_wxconsole.erl", [binary,return_errors]), 20 | filelib:ensure_dir("ebin/"), 21 | file:write_file("ebin/actordb_wxconsole.beam",WxBin); 22 | _ -> 23 | EndName = "actordb_console" 24 | end, 25 | 26 | %% Add ebin paths to our path 27 | % true = code:add_path("ebin"), 28 | ok = code:add_paths(filelib:wildcard("deps/*/ebin")), 29 | 30 | %% Read the contents of the files in ebin(s) 31 | Files1 = [begin 32 | FileList = filelib:wildcard("deps/"++atom_to_list(Dir)++"/ebin/*.*") ++ filelib:wildcard("ebin/*.*"), 33 | [{filename:basename(Nm),element(2,file:read_file(Nm))} || Nm <- FileList] 34 | end || Dir <- Apps], 35 | 36 | Files = [{filename:basename(Fn),element(2,file:read_file(Fn))} || Fn <- BinFiles]++lists:flatten(Files1), 37 | 38 | case zip:create("mem", Files, [memory]) of 39 | {ok, {"mem", ZipBin}} -> 40 | Script = <<"#!/usr/bin/env escript\n%%! 
+Bc \n", ZipBin/binary>>, 41 | case file:write_file("priv/"++EndName, Script) of 42 | ok -> ok; 43 | {error, WriteError} -> 44 | io:format("Failed to write ~p: ~p\n", [EndName,WriteError]), 45 | halt(1) 46 | end; 47 | {error, ZipError} -> 48 | io:format("Failed to construct ~p archive: ~p\n", [EndName,ZipError]), 49 | halt(1) 50 | end, 51 | 52 | %% Finally, update executable perms for our script 53 | case os:type() of 54 | {unix,_} -> 55 | [] = os:cmd("chmod a+x actordb_console"), 56 | ok; 57 | _ -> 58 | ok 59 | end. 60 | -------------------------------------------------------------------------------- /rebar: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/biokoda/actordb/cf48f819c6cd1a5d4d50ff977db1617b3cece460/rebar -------------------------------------------------------------------------------- /rebar.config: -------------------------------------------------------------------------------- 1 | {erl_opts, [{parse_transform, lager_transform}]}. 2 | 3 | {sub_dirs, ["rel"]}. 4 | 5 | {cover_enabled, true}. 6 | 7 | {lib_dirs, ["deps"]}. 8 | 9 | {erl_opts, [{src_dirs,["src"]}]}. 10 | 11 | {deps, [ 12 | {actordb_core,".*",{git,"https://github.com/biokoda/actordb_core.git",{branch,"master"}}}, 13 | {node_package,".*",{git,"https://github.com/biokoda/node_package.git",{branch,"4.0"}}}, 14 | {actordb_client,".*",{git,"https://github.com/biokoda/actordb_client",{tag,"master"}}} 15 | ]}. 
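The escript builder above works by zipping the compiled .beam files and prepending an escript shebang plus a `%%!` emulator-args line, so the resulting file is both a valid script and a valid archive. A minimal sketch of that packaging idea in Python (the member name and payload below are placeholders, not real beam files):

```python
import io
import zipfile

def make_escript(archive_members, emu_args="+Bc"):
    """Build an escript-style self-executing archive: a shebang line,
    an emulator-args line, then a zip archive of the member files."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        for name, payload in archive_members.items():
            z.writestr(name, payload)
    header = ("#!/usr/bin/env escript\n%%! " + emu_args + " \n").encode()
    return header + buf.getvalue()

script = make_escript({"actordb_console.beam": b"placeholder bytecode"})
assert script.startswith(b"#!/usr/bin/env escript\n")
assert b"PK" in script  # zip local-file-header magic follows the header
```

The escript loader tolerates the leading header bytes because zip archives are located from the end of the file, which is why this shebang-plus-zip layout works at all.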
16 | -------------------------------------------------------------------------------- /rel/files/actordb_console: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | {{runner_base_dir}}/{{erts_vsn}}/bin/cmdshell {{runner_base_dir}}/{{erts_vsn}}/bin/escript {{runner_base_dir}}/{{erts_vsn}}/bin/actordb_console "$@" 3 | 4 | -------------------------------------------------------------------------------- /rel/files/app.config: -------------------------------------------------------------------------------- 1 | [ 2 | {actordb_core,[ 3 | 4 | {main_db_folder,"{{platform_data_dir}}"}, 5 | % {extra_db_folders,["/harddrive2/actordb","/harddrive3/actordb"]}, 6 | 7 | % ActorDB uses the LMDB storage engine, which requires a maximum database size. 8 | % If in doubt, pick 90% of your disk's free space. 9 | % If using multiple folders, this will limit DB size for every folder. 10 | % Values can be in TB, GB, MB. 11 | {max_db_size,"1TB"}, 12 | 13 | % The Thrift API is the recommended interface: 14 | % https://github.com/biokoda/actordb_thrift/blob/master/adbt.thrift 15 | % 0 to disable the thrift interface. 16 | {thrift_port,33306}, 17 | % Use the framed thrift protocol. If this is enabled actordb_console must be executed with -framed. 18 | % {thrift_framed, false}, 19 | 20 | % Which port to listen on for the mysql protocol. 0 to disable. 21 | {mysql_protocol, 33307}, 22 | 23 | % How long a client connection may be inactive before ActorDB closes it. 24 | % 0 means never; ActorDB will wait for the TCP close. 25 | %{client_inactivity_timeout, 0}, 26 | 27 | % Which network interface thrift/mysql should listen on. 28 | % Default is all interfaces. You should set it to whichever interface you are using. 29 | % {api_network_interface,"0.0.0.0"}, 30 | 31 | % How many transaction managers per node. 32 | % They are used for transactions over more than one actor, but not for writes to a single actor.
33 | % A higher number means more transaction managers can work at the same time. 34 | % Too low and the server will not be utilized well; too high and it will do too 35 | % much work at the same time. 36 | {num_transaction_managers,12}, 37 | 38 | % Maximum number of milliseconds to wait for DB queries to complete before they are canceled. 39 | % Set to 0 for no timeout. 40 | {query_timeout,10000}, 41 | 42 | % safe - Writes will always be synced to disk before replying to the client. 43 | % Hopefully you are using a disk that actually executes a sync when told to do so. 44 | % This has a significant performance impact. 45 | % interval - in milliseconds. This will issue fsyncs on a timer. A compromise between speed and safety. 46 | % ActorDB is a distributed database; in case of a server crash, writes will still be 47 | % replicated to other nodes, which means the degree of safety is reasonable for a lot of use cases. 48 | % fast - issue writes as fast as possible and let the OS/hardware decide when to sync. In case of a crash, data 49 | % is still replicated to other nodes, so it's not as dangerous as it may sound. 50 | % If some actors are more important than others, you can add an fsync flag to write queries. Successful write 51 | % queries with the fsync flag are fsynced to disk before replying to the client. 52 | %{fsync, safe}, 53 | {fsync, {interval,1000}}, 54 | %{fsync, fast}, 55 | 56 | % If set, this enables automatic loading of sqlite extensions. All 57 | % shared libraries in the folder will be loaded on ActorDB startup. If they 58 | % are valid sqlite3 extensions they will be available through SQL. 59 | % The entry point name of each extension must match its filename. 60 | % {sqlite_extensions_folder,"{{extensions_dir}}"}, 61 | 62 | % Every actor requires replication space. 63 | % This way, if one of the nodes falls behind for whatever reason, it can receive what it missed.
64 | % Worst case scenario is at least one of the nodes being offline for a longer period of time. 65 | % These parameters govern how much replication space actors should hold. Once a node is too far behind 66 | % it will require a full resync of actor. 67 | % Values are in pages and are per actor. {5000,0.1} means keep max(5000,0.1*ActorSize) 68 | % of replication space. A page is 4096 bytes, but when stored on disk is actually compressed. 69 | {max_replication_space, {5000,0.1}} 70 | ]}, 71 | 72 | {bkdcore,[ 73 | % rpcport should not change once ActorDB has been initialized. 74 | {rpcport,44380}, 75 | {autoload_files,false} 76 | ]}, 77 | 78 | {kernel, [ 79 | {inet_dist_listen_min, 44370}, 80 | {inet_dist_listen_max, 44379} 81 | ]}, 82 | 83 | {lager, [ 84 | {crash_log, "{{platform_log_dir}}/crash.log"}, 85 | {error_logger_hwm, 100}, 86 | {handlers, [ 87 | {lager_file_backend, [{size, 10485760},{file, "{{platform_log_dir}}/error.log"}, {level, error}, {formatter, lager_default_formatter}, 88 | {formatter_config, [date, " ", time," [",severity,"] ",pid," ",module, " ",line, " ", message, "\n"]}]}, 89 | {lager_file_backend, [{file, "{{platform_log_dir}}/console.log"}, {level, info}, {size, 10485760}, {formatter, lager_default_formatter}, {formatter_config, [date, " ", time," [",severity,"] ",pid," ",module, " ",line, " ", message, "\n"]}]} 90 | ]} 91 | ]}, 92 | 93 | 94 | %% SASL config 95 | {sasl, [ 96 | {sasl_error_logger, {file, "{{platform_log_dir}}/sasl-error.log"}}, 97 | {errlog_type, error}, 98 | {error_logger_mf_dir, "{{platform_log_dir}}/sasl"}, % Log directory 99 | {error_logger_mf_maxbytes, 10485760}, % 10 MB max file size 100 | {error_logger_mf_maxfiles, 5} % 5 files max 101 | ]} 102 | ]. 
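The `{max_replication_space, {5000,0.1}}` setting above keeps `max(5000, 0.1 * ActorSize)` pages of replication log per actor, where a page is 4096 bytes before on-disk compression. A small sketch of that rule (function name is ours, not ActorDB's):

```python
def replication_pages(actor_size_pages, min_pages=5000, fraction=0.1):
    """{5000,0.1} keeps max(5000, 0.1 * ActorSize) pages of replication
    space for an actor; below that, a lagging node needs a full resync."""
    return max(min_pages, int(fraction * actor_size_pages))

assert replication_pages(1000) == 5000      # small actor: the 5000-page floor applies
assert replication_pages(100000) == 10000   # large actor: 10% of its pages
```

With the default floor, each actor reserves at least 5000 * 4096 bytes (about 20 MB) of replication space, uncompressed.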
103 | -------------------------------------------------------------------------------- /rel/files/dbfiles: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/biokoda/actordb/cf48f819c6cd1a5d4d50ff977db1617b3cece460/rel/files/dbfiles -------------------------------------------------------------------------------- /rel/files/erl: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | 3 | ## This script replaces the default "erl" in erts-VSN/bin. This is necessary 4 | ## as escript depends on erl and in turn, erl depends on having access to a 5 | ## bootscript (start.boot). Note that this script is ONLY invoked as a side-effect 6 | ## of running escript -- the embedded node bypasses erl and uses erlexec directly 7 | ## (as it should). 8 | ## 9 | ## Note that this script makes the assumption that there is a start_clean.boot 10 | ## file available in $ROOTDIR/release/VSN. 11 | 12 | # Determine the abspath of where this script is executing from. 13 | ERTS_BIN_DIR=$(cd ${0%/*} && pwd) 14 | 15 | # Now determine the root directory -- this script runs from erts-VSN/bin, 16 | # so we simply need to strip off two dirs from the end of the ERTS_BIN_DIR 17 | # path. 18 | ROOTDIR=${ERTS_BIN_DIR%/*/*} 19 | 20 | # Parse out release and erts info 21 | START_ERL=`cat $ROOTDIR/releases/start_erl.data` 22 | ERTS_VSN=${START_ERL% *} 23 | APP_VSN=${START_ERL#* } 24 | 25 | BINDIR=$ROOTDIR/erts-$ERTS_VSN/bin 26 | EMU=beam 27 | PROGNAME=`echo $0 | sed 's/.*\\///'` 28 | CMD="$BINDIR/erlexec" 29 | export EMU 30 | export ROOTDIR 31 | export BINDIR 32 | export PROGNAME 33 | 34 | exec $CMD -boot $ROOTDIR/releases/$APP_VSN/start_clean ${1+"$@"} 35 | -------------------------------------------------------------------------------- /rel/files/nodetool: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env escript 2 | %%! 
+fnu 3 | %% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*- 4 | %% ex: ft=erlang ts=4 sw=4 et 5 | %% ------------------------------------------------------------------- 6 | %% 7 | %% nodetool: Helper Script for interacting with live nodes 8 | %% 9 | %% ------------------------------------------------------------------- 10 | 11 | %% installed by node_package (github.com/basho/node_package) 12 | 13 | main(Args) -> 14 | io:setopts([{encoding, utf8}]), 15 | ok = start_epmd(), 16 | %% Extract the args 17 | {RestArgs, TargetNode} = process_args(Args, [], undefined), 18 | 19 | %% Extract the RPC timeout from process dictionary, if it's defined 20 | RpcTimeout = case erlang:get(rpctimeout) of 21 | undefined -> 22 | 60000; 23 | Value -> 24 | Value 25 | end, 26 | 27 | %% process_args() has side-effects (e.g. when processing "-name"), 28 | %% so take care of app-starting business first. 29 | [application:start(App) || App <- [crypto, public_key, ssl]], 30 | 31 | %% any commands that don't need a running node 32 | case RestArgs of 33 | ["chkconfig", File] -> 34 | chkconfig(File); 35 | ["chkconfig", "-config", File|_] -> 36 | chkconfig(File); 37 | _ -> 38 | ok 39 | end, 40 | 41 | %% See if the node is currently running -- if it's not, we'll bail 42 | case {net_kernel:hidden_connect_node(TargetNode), net_adm:ping(TargetNode)} of 43 | {true, pong} -> 44 | ok; 45 | {false, pong} -> 46 | io:format(standard_error, "failed to connect to node ~p.\n", [TargetNode]), 47 | halt(1); 48 | {_, pang} -> 49 | io:format(standard_error, "Node ~p not responding to pings.\n", [TargetNode]), 50 | halt(1) 51 | end, 52 | 53 | case RestArgs of 54 | ["getpid"] -> 55 | io:format("~p\n", [list_to_integer(rpc:call(TargetNode, os, getpid, []))]); 56 | ["ping"] -> 57 | %% If we got this far, the node already responded to a ping, so just dump 58 | %% a "pong" 59 | io:format("pong\n"); 60 | ["stop"] -> 61 | io:format("~p\n", [rpc:call(TargetNode, init, stop, [], RpcTimeout)]); 62 |
["restart"] -> 63 | io:format("~p\n", [rpc:call(TargetNode, init, restart, [], RpcTimeout)]); 64 | ["reboot"] -> 65 | io:format("~p\n", [rpc:call(TargetNode, init, reboot, [], RpcTimeout)]); 66 | ["rpc", Module, Function | RpcArgs] -> 67 | case rpc:call(TargetNode, list_to_atom(Module), list_to_atom(Function), 68 | [RpcArgs], RpcTimeout) of 69 | ok -> 70 | ok; 71 | {badrpc, Reason} -> 72 | io:format(standard_error, "RPC to ~p failed: ~p\n", [TargetNode, Reason]), 73 | halt(1); 74 | _ -> 75 | halt(1) 76 | end; 77 | ["rpc_infinity", Module, Function | RpcArgs] -> 78 | case rpc:call(TargetNode, list_to_atom(Module), list_to_atom(Function), [RpcArgs], infinity) of 79 | ok -> 80 | ok; 81 | {badrpc, Reason} -> 82 | io:format(standard_error, "RPC to ~p failed: ~p\n", [TargetNode, Reason]), 83 | halt(1); 84 | _ -> 85 | halt(1) 86 | end; 87 | ["rpcterms", Module, Function, ArgsAsString] -> 88 | case rpc:call(TargetNode, list_to_atom(Module), list_to_atom(Function), 89 | consult(ArgsAsString), RpcTimeout) of 90 | {badrpc, Reason} -> 91 | io:format(standard_error, "RPC to ~p failed: ~p\n", [TargetNode, Reason]), 92 | halt(1); 93 | Other -> 94 | io:format("~p\n", [Other]) 95 | end; 96 | Other -> 97 | io:format("Other: ~p\n", [Other]), 98 | io:format("Usage: nodetool {ping|stop|restart|reboot|chkconfig}\n") 99 | end, 100 | net_kernel:stop(). 
101 | 102 | process_args([], Acc, TargetNode) -> 103 | {lists:reverse(Acc), TargetNode}; 104 | process_args(["-setcookie", Cookie | Rest], Acc, TargetNode) -> 105 | erlang:set_cookie(node(), list_to_atom(Cookie)), 106 | process_args(Rest, Acc, TargetNode); 107 | process_args(["-name", TargetName | Rest], Acc, _) -> 108 | ThisNode = append_node_suffix(TargetName, "_maint_"), 109 | {ok, _} = net_kernel:start([ThisNode, longnames]), 110 | process_args(Rest, Acc, nodename(TargetName)); 111 | process_args(["-sname", TargetName | Rest], Acc, _) -> 112 | ThisNode = append_node_suffix(TargetName, "_maint_"), 113 | {ok, _} = net_kernel:start([ThisNode, shortnames]), 114 | process_args(Rest, Acc, nodename(TargetName)); 115 | process_args(["-rpctimeout", TimeoutStr | Rest], Acc, TargetNode) -> 116 | Timeout = case TimeoutStr of 117 | "infinity" -> infinity; 118 | _ -> list_to_integer(TimeoutStr) 119 | end, 120 | erlang:put(rpctimeout, Timeout), 121 | process_args(Rest, Acc, TargetNode); 122 | process_args([Arg | Rest], Acc, Opts) -> 123 | process_args(Rest, [Arg | Acc], Opts). 124 | 125 | 126 | start_epmd() -> 127 | [] = os:cmd(epmd_path() ++ " -daemon"), 128 | ok. 129 | 130 | epmd_path() -> 131 | ErtsBinDir = filename:dirname(escript:script_name()), 132 | Name = "epmd", 133 | case os:find_executable(Name, ErtsBinDir) of 134 | false -> 135 | case os:find_executable(Name) of 136 | false -> 137 | io:format("Could not find epmd.~n"), 138 | halt(1); 139 | GlobalEpmd -> 140 | GlobalEpmd 141 | end; 142 | Epmd -> 143 | Epmd 144 | end. 145 | 146 | 147 | nodename(Name) -> 148 | case string:tokens(Name, "@") of 149 | [_Node, _Host] -> 150 | list_to_atom(Name); 151 | [Node] -> 152 | [_, Host] = string:tokens(atom_to_list(node()), "@"), 153 | list_to_atom(lists:concat([Node, "@", Host])) 154 | end. 
155 | 156 | append_node_suffix(Name, Suffix) -> 157 | case string:tokens(Name, "@") of 158 | [Node, Host] -> 159 | list_to_atom(lists:concat([Node, Suffix, os:getpid(), "@", Host])); 160 | [Node] -> 161 | list_to_atom(lists:concat([Node, Suffix, os:getpid()])) 162 | end. 163 | 164 | chkconfig(File) -> 165 | case file:consult(File) of 166 | {ok, _} -> 167 | io:format("ok\n"), 168 | halt(0); 169 | {error, {Line, Mod, Term}} -> 170 | io:format(standard_error, 171 | ["Error on line ", 172 | file:format_error({Line, Mod, Term}), "\n"], []), 173 | halt(1); 174 | {error, R} -> 175 | io:format(standard_error, 176 | ["Error reading config file: ", 177 | file:format_error(R), "\n"], []), 178 | halt(1) 179 | end. 180 | 181 | 182 | %% 183 | %% Given a string or binary, parse it into a list of terms, ala file:consult/0 184 | %% 185 | consult(Str) when is_list(Str) -> 186 | consult([], Str, []); 187 | consult(Bin) when is_binary(Bin)-> 188 | consult([], binary_to_list(Bin), []). 189 | 190 | consult(Cont, Str, Acc) -> 191 | case erl_scan:tokens(Cont, Str, 0) of 192 | {done, Result, Remaining} -> 193 | case Result of 194 | {ok, Tokens, _} -> 195 | {ok, Term} = erl_parse:parse_term(Tokens), 196 | consult([], Remaining, [Term | Acc]); 197 | {eof, _Other} -> 198 | lists:reverse(Acc); 199 | {error, Info, _} -> 200 | {error, Info} 201 | end; 202 | {more, Cont1} -> 203 | consult(Cont1, eof, Acc) 204 | end. 
205 | -------------------------------------------------------------------------------- /rel/files/sqlite_extensions_here: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/biokoda/actordb/cf48f819c6cd1a5d4d50ff977db1617b3cece460/rel/files/sqlite_extensions_here -------------------------------------------------------------------------------- /rel/files/sys.config: -------------------------------------------------------------------------------- 1 | [ 2 | %% SASL config 3 | {sasl, [ 4 | {sasl_error_logger, {file, "log/sasl-error.log"}}, 5 | {errlog_type, error}, 6 | {error_logger_mf_dir, "log/sasl"}, % Log directory 7 | {error_logger_mf_maxbytes, 10485760}, % 10 MB max file size 8 | {error_logger_mf_maxfiles, 5} % 5 files max 9 | ]} 10 | ]. 11 | 12 | -------------------------------------------------------------------------------- /rel/files/vm.args: -------------------------------------------------------------------------------- 1 | ## Name of the node 2 | -name node1@127.0.0.1 3 | 4 | ## Cookie for distributed erlang 5 | -setcookie actordb 6 | 7 | ## Heartbeat management; auto-restarts VM if it dies or becomes unresponsive 8 | ## (Disabled by default..use with caution!) 9 | ##-heart 10 | 11 | -smp enable 12 | 13 | ## Enable kernel poll. 14 | +K true 15 | 16 | ## This assumes a single disk. 3 per disk is enough. 17 | +A 3 18 | 19 | ## Max number of erlang processes. 20 | +P 256000 21 | 22 | ## Bind schedulers to cores: http://www.erlang.org/doc/man/erl.html#+sbt 23 | +stbt db 24 | 25 | +C multi_time_warp 26 | 27 | ## Max number of concurrent ports/sockets. 28 | -env ERL_MAX_PORTS 65536 29 | 30 | -env ERL_FULLSWEEP_AFTER 10 31 | 32 | ## Every client connection uses an ETS table. If you have lots of connections to the database you may 33 | ## want to increase this value. 
34 | -env ERL_MAX_ETS_TABLES 8192 35 | -------------------------------------------------------------------------------- /rel/reltool.config: -------------------------------------------------------------------------------- 1 | {sys, [ 2 | {lib_dirs, ["../deps"]}, 3 | {erts, [{mod_cond, derived}, {app_file, strip}]}, 4 | {app_file, strip}, 5 | {rel, "actordb", "0.10.25", 6 | [ 7 | kernel, 8 | stdlib, 9 | sasl, 10 | lager, 11 | ranch, 12 | actordb_core, 13 | bkdcore, 14 | distreg, 15 | lager, 16 | adbt, 17 | crypto 18 | ]}, 19 | {rel, "start_clean", "", 20 | [ 21 | kernel, 22 | stdlib 23 | ]}, 24 | 25 | {boot_rel, "actordb"}, 26 | {profile, embedded}, 27 | {relocatable,true}, 28 | {debug_info,strip}, 29 | {incl_cond, derived}, 30 | {excl_archive_filters, [".*"]}, %% Do not archive built libs 31 | {excl_sys_filters, ["^bin/.*", "^erts.*/bin/(dialyzer|typer)", 32 | "^erts.*/(doc|info|include|lib|man|src)"]}, 33 | {excl_app_filters, ["\.gitignore"]}, 34 | {app, sasl, [{incl_cond, include}]}, 35 | {app, stdlib, [{incl_cond, include}]}, 36 | {app, kernel, [{incl_cond, include}]}, 37 | {app, actordb_core, [{incl_cond, include}]}, 38 | {app, lager, [{incl_cond, include}]}, 39 | {app, distreg, [{incl_cond, include}]}, 40 | {app, ranch, [{incl_cond, include}]}, 41 | {app, hipe, [{incl_cond, exclude}]}, 42 | {app, wx, [{incl_cond, exclude}]}, 43 | {app, common_test, [{incl_cond, exclude}]}, 44 | {app, test_server, [{incl_cond, exclude}]} 45 | ]}. 46 | 47 | {target_dir, "actordb"}. 48 | 49 | {overlay_vars, "vars.config"}. 
50 | 51 | {overlay, [ 52 | {mkdir, "log/sasl"}, 53 | {mkdir, "data"}, 54 | {mkdir, "{{extensions_dir}}"}, 55 | {copy, "../deps/node_package/priv/base/erl", "{{erts_vsn}}/bin/erl"}, 56 | {copy, "files/nodetool", "{{erts_vsn}}/bin/nodetool"}, 57 | {copy, "../priv/actordb_console","{{erts_vsn}}/bin/actordb_console"}, 58 | {copy, "../priv/cmdshell","{{erts_vsn}}/bin/cmdshell"}, 59 | {template, "../deps/node_package/priv/base/runner", "bin/actordb"}, 60 | {template, "../deps/node_package/priv/base/env.sh", "lib/env.sh"}, 61 | {copy, "files/dbfiles", "data/dbfileshere"}, 62 | {copy, "files/sqlite_extensions_here", "{{extensions_dir}}/sqlite_extensions_here"}, 63 | 64 | {template, "files/app.config", "etc/app.config"}, 65 | {copy, "../etc/init.sql", "etc/init.example.sql"}, 66 | {template, "files/vm.args", "etc/vm.args"}, 67 | 68 | {template, "files/actordb_console", "bin/actordb_console"}, 69 | {copy, "../actordb_tool","bin/actordb_tool"} 70 | ]}. 71 | -------------------------------------------------------------------------------- /rel/vars.config: -------------------------------------------------------------------------------- 1 | %% -*- tab-width: 4;erlang-indent-level: 4;indent-tabs-mode: nil -*- 2 | %% ex: ft=erlang ts=4 sw=4 et 3 | 4 | {platform_bin_dir, "./bin"}. 5 | {platform_data_dir, "./data"}. 6 | {platform_etc_dir, "./etc"}. 7 | {platform_lib_dir, "./lib"}. 8 | {platform_log_dir, "./log"}. 9 | {platform_base_dir, "."}. 10 | {extensions_dir, "{{platform_base_dir}}/extensions"}. 11 | 12 | {runner_ulimit_warn,32768}. 13 | {platform_patch_dir,"actordb-patches"}. 14 | 15 | 16 | {runner_script_dir, "$(cd ${0%/*} && pwd)"}. 17 | {runner_base_dir, "{{runner_script_dir}}/.."}. 18 | {runner_etc_dir, "$RUNNER_BASE_DIR/etc"}. 19 | {runner_log_dir, "$RUNNER_BASE_DIR/log"}. 20 | {runner_lib_dir, "$RUNNER_BASE_DIR/lib"}. 21 | {runner_patch_dir, "$RUNNER_BASE_DIR/lib/actordb-patches"}. 22 | {pipe_dir, "/tmp/$RUNNER_BASE_DIR/"}. 23 | {runner_user, ""}. 
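Values in vars.config may reference other overlay vars with `{{name}}` (for example `extensions_dir` expands through `platform_base_dir`); reltool's overlay templating resolves these references. A simplified Python sketch of that expansion, assuming plain single-level `{{name}}` substitution rather than rebar's full mustache-style templating:

```python
import re

def expand(vars_map):
    """Resolve {{name}} references between overlay vars (simplified
    sketch of the template expansion applied to vars.config values)."""
    resolved = dict(vars_map)
    for _ in range(len(resolved)):  # enough passes for chained references
        for key, val in resolved.items():
            resolved[key] = re.sub(
                r"\{\{(\w+)\}\}",
                lambda m: resolved.get(m.group(1), m.group(0)),
                val)
    return resolved

vars_map = {"platform_base_dir": ".",
            "extensions_dir": "{{platform_base_dir}}/extensions"}
assert expand(vars_map)["extensions_dir"] == "./extensions"
```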
24 | {runner_wait_process, "actordb_sup"}. 25 | -------------------------------------------------------------------------------- /src/README: -------------------------------------------------------------------------------- 1 | This is just a toplevel project. The actual ActorDB code is in: biokoda/actordb_core, biokoda/actordb_driver -------------------------------------------------------------------------------- /src/actordb_console.erl: -------------------------------------------------------------------------------- 1 | % This Source Code Form is subject to the terms of the Mozilla Public 2 | % License, v. 2.0. If a copy of the MPL was not distributed with this 3 | % file, You can obtain one at http://mozilla.org/MPL/2.0/. 4 | -module(actordb_console). 5 | -export([main/1,cmd/1, map_print/1, connection_event/2, query_time/6]). 6 | % -include_lib("actordb_core/include/actordb.hrl"). 7 | -define(PROMPT,"actordb>"). 8 | 9 | 10 | -define(COMMANDS,delim()++"Databases:\n"++ 11 | "use config (use c) initialize/add nodes and user account management\n"++ 12 | "use schema (use s) set schema\n"++ 13 | "use actordb (use a) (default) run queries on database\n"++ 14 | delim()++ 15 | "Commands:\n"++ 16 | "open (windows only) open and execute .sql file\n"++ 17 | "q exit\n"++ 18 | "h print this header\n"++ 19 | "commit (c) execute transaction\n"++ 20 | "rollback (r) abort transaction\n"++ 21 | "print (p) print transaction\n" 22 | "show (s) show schema\n" 23 | "show status show database status\n" 24 | "show queries show currently running queries\n" 25 | "show shards show shards on node\n"++delim()). 26 | 27 | delim() -> 28 | "*******************************************************************\n". 29 | 30 | connection_event(_F, _A) -> 31 | % TS = io_lib:format("~p ",[time()]), 32 | % AS = io_lib:format(_F,_A), 33 | % file:write_file("console.log",[TS,AS,"\n"],[append]), 34 | ok. 
35 | 36 | query_time(_QMS, _QPO, _QST, _From, _CallFunc, _Args) -> 37 | % L = io_lib:format("Query in={~p,~p,~p}, src=~p, func=~p, args=~p~n",[_QMS, _QPO, _QST, _From,_CallFunc,_Args]), 38 | % file:write_file("console.log",L,[append]), 39 | ok. 40 | 41 | 42 | % curdb changed with use statements 43 | % actordb - default can run queries directly 44 | % config - for adding groups and nodes 45 | % schema - for changing schema 46 | -record(dp,{env = shell, curdb = actordb, req, resp, stop = false, buffer = [], wait = true, 47 | addr = "127.0.0.1", port = 33306, username = "", framed = false, 48 | password = "", execute, timeout_after = infinity, print, noshell = false}). 49 | 50 | main(Args) -> 51 | register(home,self()), 52 | % For some reason we must call this to load crypto immediately. 53 | % Otherwise it fails when using wx. 54 | application:ensure_all_started(crypto), 55 | crypto:hash(sha, "asdf"), 56 | case os:type() of 57 | {win32,_} -> 58 | spawn(fun() -> (catch actordb_wxconsole:wxrun()),halt(1) end), 59 | timer:sleep(50), 60 | P = parse_args(#dp{env = wx}, Args); 61 | % {unix,darwin} -> 62 | % spawn(fun() -> (catch wxrun()),halt(1) end), 63 | % P = parse_args(#dp{env = wx}, Args); 64 | _ -> 65 | {ok,Comfile} = file:read_file("/tmp/comfile"), 66 | [Req,Resp] = binary:split(Comfile,<<"\n">>), 67 | file:delete("/tmp/comfile"), 68 | ReqPipe = open_port(binary_to_list(Req), [in,eof,binary]), 69 | RespPipe = open_port(binary_to_list(Resp), [out,eof,binary]), 70 | P = setpw(parse_args(#dp{req = ReqPipe, resp = RespPipe, env = shell},Args)) 71 | end, 72 | dologin(P), 73 | case P#dp.execute of 74 | undefined -> 75 | case P#dp.curdb of 76 | actordb -> 77 | print(P,?COMMANDS), 78 | dopipe(P); 79 | _ -> 80 | change_prompt(P), 81 | print(P,?COMMANDS), 82 | dopipe(P) 83 | end; 84 | {script,Bin} -> 85 | cmd_lines(P,binary:split(Bin,<<"\n">>,[global])), 86 | halt(1); 87 | {qry,Q} -> 88 | send_query(P#dp{timeout_after = 2000},list_to_binary(Q)) 89 | end. 
90 | 91 | setpw(#dp{password = prompt} = P) -> 92 | case dopipe(P#dp{timeout_after = 50}) of 93 | timeout -> 94 | print(P,"~~~~getpass"), 95 | Pw = dopipe(P), 96 | P#dp{password = Pw}; 97 | Pw -> 98 | P#dp{password = Pw} 99 | end; 100 | setpw(P) -> 101 | P. 102 | 103 | dologin(P) -> 104 | application:stop(actordb_client), 105 | PoolInfo = [{size, 1}, {max_overflow, 5}], 106 | WorkerParams = [{hostname, P#dp.addr}, 107 | {username, P#dp.username}, 108 | {password, P#dp.password}, 109 | {port,P#dp.port}, 110 | {framed, P#dp.framed} 111 | ], 112 | actordb_client:set_trace_callback(?MODULE), 113 | case actordb_client:start(PoolInfo,WorkerParams) of 114 | ok -> 115 | % actordb_client:set_trace_callback(?MODULE), 116 | % print(P,"Connected to DB\n"), 117 | ok; 118 | {error,{login_failed,Msg}} when P#dp.env == wx -> 119 | print(P,Msg), 120 | wxproc ! dologin; 121 | Err -> 122 | print(P,"Connect/login error: ~p~n",[Err]), 123 | cmd(P,<<"q">>) 124 | end. 125 | 126 | 127 | 128 | cmd_lines(P,[H|T]) -> 129 | case rem_spaces(H) of 130 | <<"//",_/binary>> -> 131 | cmd_lines(P,T); 132 | <<"%",_/binary>> -> 133 | cmd_lines(P,T); 134 | <<"--",_/binary>> -> 135 | cmd_lines(P,T); 136 | _ -> 137 | cmd_lines(cmd(P,H),T) 138 | end; 139 | cmd_lines(P,[]) -> 140 | P. 141 | 142 | rem_spaces(<<" ",X/binary>>) -> 143 | rem_spaces(X); 144 | rem_spaces(<<"\n",X/binary>>) -> 145 | rem_spaces(X); 146 | rem_spaces(<<"\r",X/binary>>) -> 147 | rem_spaces(X); 148 | rem_spaces(X) -> 149 | X. 150 | 151 | parse_args(P,["-h"|_]) -> 152 | L = "Flags:\n"++ 153 | " -h Print this help and exit.\n"++ 154 | " -u Set username. You will be prompted for password. Not required if ActorDB is uninitialized.\n"++ 155 | " -pw Set login password (optional). This will avoid the prompt.\n"++ 156 | " -f Execute statements from a file and exit.\n"++ 157 | " -use actordb (def), config or schema.\n"++ 158 | " -q \"query\" Execute query and exit.\n"++ 159 | " -noshell Do not create a shell.
Useful when running queries with -q.\n"++ 160 | " -framed Use framed thrift protocol (must be enabled on server as well).\n"++ 161 | " -print min|csv|csvh Set the output format.\n", 162 | % " -w wait for commit to send query to actordb\n", 163 | print(P,"Call with: actordb_console -u username IP[:ThriftPort]\n"++L), 164 | halt(1); 165 | parse_args(P,["-f",File|T]) -> 166 | {ok,F} = file:read_file(File), 167 | parse_args(P#dp{execute = {script,F}},T); 168 | parse_args(P,["-u",Username|T]) -> 169 | case P#dp.password of 170 | [_|_] -> 171 | parse_args(P#dp{username = Username},T); 172 | _ -> 173 | parse_args(P#dp{username = Username, password = prompt},T) 174 | end; 175 | parse_args(P,["-use",A|T]) when A == "a"; A == "actordb"; A == "actor" -> 176 | parse_args(P#dp{curdb = actordb},T); 177 | parse_args(P,["-use",A|T]) when A == "c"; A == "config" -> 178 | parse_args(P#dp{curdb = config},T); 179 | parse_args(P,["-use",A|T]) when A == "s"; A == "schema" -> 180 | parse_args(P#dp{curdb = schema},T); 181 | parse_args(P,["-q",Q|T]) -> 182 | parse_args(P#dp{execute = {qry, Q}, wait = false},T); 183 | parse_args(P,["-pw",Password|T]) -> 184 | parse_args(P#dp{password = Password},T); 185 | parse_args(P,["-w"|T]) -> 186 | parse_args(P#dp{wait = true},T); 187 | parse_args(P,["-framed"|T]) -> 188 | parse_args(P#dp{framed = true}, T); 189 | parse_args(P,["-print", "min"|T]) -> 190 | parse_args(P#dp{print = min},T); 191 | parse_args(P,["-print", "csv"|T]) -> 192 | parse_args(P#dp{print = csv},T); 193 | parse_args(P,["-print", "csvh"|T]) -> 194 | parse_args(P#dp{print = csvh},T); 195 | parse_args(P,["-print", _|T]) -> 196 | parse_args(P,T); 197 | parse_args(P,["-noshell"|T]) -> 198 | parse_args(P#dp{noshell = true},T); 199 | parse_args(P,["-"++Something|T]) -> 200 | print(P,"Unrecognized option: ~s",["-"++Something]), 201 | parse_args(P,T); 202 | parse_args(P,[Addr|T]) -> 203 | case string:tokens(Addr,":") of 204 | [Address,Port] -> 205 | parse_args(P#dp{addr = Address, port = list_to_integer(Port)},T); 206 |
[Address] -> 207 | parse_args(P#dp{addr = Address},T) 208 | end; 209 | parse_args(P,[]) when P#dp.noshell, P#dp.execute == undefined -> 210 | halt(1); 211 | parse_args(P,[]) -> 212 | P. 213 | 214 | cmd(C) -> 215 | cmd(#dp{env = test},iolist_to_binary(C)). 216 | cmd(P,<<";",Rem/binary>>) -> 217 | cmd(P,Rem); 218 | cmd(P,<<>>) -> 219 | P; 220 | cmd(P,<<"dim=",Dim/binary>>) -> 221 | case binary:split(Dim,<<"\n">>) of 222 | [DimBin,Rem] -> 223 | ok; 224 | [DimBin] -> 225 | Rem = <<>> 226 | end, 227 | case binary:split(DimBin,<<",">>) of 228 | [_Rows,_Cols] -> 229 | % print(P,"Rows=~p,Cols=~p",[_Rows,_Cols]); 230 | ok; 231 | _ -> 232 | ok 233 | end, 234 | cmd(P,Rem); 235 | cmd(P,<<"h">>) -> 236 | print(P,?COMMANDS); 237 | cmd(P,<<"s">>) -> 238 | cmd(P,<<>>,show); 239 | cmd(P,<<"S">>) -> 240 | cmd(P,<<>>,show); 241 | cmd(P,<<"c">>) -> 242 | cmd(P,<<>>,commit); 243 | cmd(P,<<"C">>) -> 244 | cmd(P,<<>>,commit); 245 | cmd(P,<<"p">>) -> 246 | cmd(P,<<>>,print); 247 | cmd(P,<<"P">>) -> 248 | cmd(P,<<>>,print); 249 | cmd(P,<<"r">>) -> 250 | cmd(P,<<>>,rollback); 251 | cmd(P,<<"R">>) -> 252 | cmd(P,<<>>,rollback); 253 | cmd(_P,<<"q">>) -> 254 | case whereis(wxproc) of 255 | undefined -> 256 | halt(1); 257 | _ -> 258 | wxproc ! stop, 259 | timer:sleep(200), 260 | halt(1) 261 | end; 262 | cmd(P,Bin) when is_binary(Bin) -> 263 | cmd(P,Bin,actordb_sql:parse(Bin)). 
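The last `parse_args/2` clause above accepts the server address as `IP[:ThriftPort]`, keeping the default thrift port from the `#dp` record (33306) when no port is given. The same parse as a Python sketch:

```python
def parse_addr(addr, default_port=33306):
    """parse_args/2 accepts IP[:ThriftPort]; without an explicit port,
    the default thrift port (33306) is kept."""
    if ":" in addr:
        host, port = addr.split(":", 1)
        return host, int(port)
    return addr, default_port

assert parse_addr("10.0.0.1:33307") == ("10.0.0.1", 33307)
assert parse_addr("10.0.0.1") == ("10.0.0.1", 33306)
```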
264 | cmd(P,Bin,Tuple) -> 265 | case Tuple of 266 | {use,<<"c">>} -> 267 | cmd(P,<<>>,{use,<<"config">>}); 268 | {use,<<"s">>} -> 269 | cmd(P,<<>>,{use,<<"schema">>}); 270 | {use,<<"a">>} -> 271 | cmd(P,<<>>,{use,<<"actordb">>}); 272 | {use,<<"C">>} -> 273 | cmd(P,<<>>,{use,<<"config">>}); 274 | {use,<<"S">>} -> 275 | cmd(P,<<>>,{use,<<"schema">>}); 276 | {use,<<"A">>} -> 277 | cmd(P,<<>>,{use,<<"actordb">>}); 278 | {use,Name} -> 279 | case string:to_lower(binary_to_list(Name)) of 280 | "actordb" -> 281 | print_help(change_prompt(P#dp{curdb = actordb})); 282 | "config" -> 283 | print_help(change_prompt(P#dp{curdb = config})); 284 | "schema" -> 285 | print_help(change_prompt(P#dp{curdb = schema})); 286 | _ -> 287 | print(P,"Invalid db") 288 | end; 289 | {show,<<>>} -> 290 | cmd(P,Bin,{show,<<"schema">>}); 291 | {show,Show} -> 292 | case string:to_lower(binary_to_list(Show)) of 293 | "queries"++_ -> 294 | send_query(change_prompt(P#dp{buffer = []}), <<"show queries;">>); 295 | "schema"++_ -> 296 | cmd(P,Bin,show); 297 | "status"++_ -> 298 | send_query(change_prompt(P#dp{buffer = []}), <<"show status;">>); 299 | "shards"++_ -> 300 | send_query(change_prompt(P#dp{buffer = []}), <<"show shards;">>) 301 | end; 302 | show when P#dp.curdb == config -> 303 | send_cfg_query(change_prompt(P#dp{buffer = []}),<<"show schema;">>); 304 | show -> 305 | send_schema_query(change_prompt(P#dp{buffer = []}),<<"show schema;">>); 306 | print -> 307 | print(P,io_lib:fwrite("~s",[butil:iolist_join(lists:reverse(P#dp.buffer),"\n")])); 308 | rollback -> 309 | change_prompt(P#dp{buffer = []}); 310 | commit when P#dp.buffer == [] -> 311 | print(P,"Nothing to commit."); 312 | commit when P#dp.curdb == schema -> 313 | send_schema_query(change_prompt(P#dp{buffer = []}),lists:reverse(P#dp.buffer)); 314 | commit when P#dp.curdb == config -> 315 | send_cfg_query(change_prompt(P#dp{buffer = []}),lists:reverse(P#dp.buffer)); 316 | commit -> 317 | send_query(change_prompt(P#dp{buffer = 
[]}),lists:reverse(P#dp.buffer)); 318 | {commit,_,_} -> 319 | cmd(P,<<>>,commit); 320 | _ when P#dp.curdb == actordb -> 321 | append(P,Bin); 322 | % Let actordb deal with it, unless it is config db 323 | {fail,_} when P#dp.curdb /= config andalso (P#dp.wait orelse P#dp.curdb == schema) -> 324 | append(P,Bin); 325 | {fail,_} -> 326 | print(P,"Unrecognized command."); 327 | % R when element(1,R) == show -> 328 | % cmd_show(P,R); 329 | {actor,Type,SubType} -> 330 | cmd_actor(P,{actor,Type,SubType},Bin); 331 | % create_table -> 332 | % change_prompt(cmd_create(P,Bin)); 333 | R when element(1,R) == select -> 334 | cmd_select(P,R,Bin); 335 | R when element(1,R) == insert -> 336 | cmd_insert(P,R,Bin); 337 | R when element(1,R) == update -> 338 | cmd_update(P,R,Bin); 339 | R when element(1,R) == delete -> 340 | cmd_delete(P,R,Bin); 341 | R when element(1,R) == management -> 342 | cmd_usermng(P,R,Bin); 343 | _ when is_tuple(Tuple), is_tuple(element(3,Tuple)), is_binary(element(2,Tuple)) -> 344 | RemBin = element(2,Tuple), 345 | ThisSize = byte_size(Bin) - byte_size(RemBin), 346 | NextSize = byte_size(RemBin), 347 | <<This:ThisSize/binary, Next:NextSize/binary>> = Bin, 348 | cmd(cmd(P,This,element(1,Tuple)), Next); 349 | _ -> 350 | print(P,"Unrecognized command. ~p",[P#dp.curdb]) 351 | end. 352 | 353 | % cmd_show(#dp{curdb = actordb} = P,_R) -> 354 | % P; 355 | % cmd_show(P,_R) -> 356 | % P. 357 | 358 | append(P,<<>>) -> 359 | P; 360 | append(P,Bin) -> 361 | case binary:last(Bin) of 362 | $; -> 363 | change_prompt(P#dp{buffer = [Bin|P#dp.buffer]}); 364 | C when C == $\s; C == $\t; C == $\r; C == $\n -> 365 | S = byte_size(Bin)-1, 366 | <<Bin1:S/binary, _>> = Bin, 367 | append(P,Bin1); 368 | _ -> 369 | change_prompt(P#dp{buffer = [[Bin,";"]|P#dp.buffer]}) 370 | end. 371 | 372 | cmd_actor(#dp{curdb = config} = P,_,_) -> 373 | print(P,"actor statements do not belong in config db"); 374 | cmd_actor(P,_,Bin) -> 375 | append(P,Bin). 376 | % cmd_actor(#dp{curdb = actordb} = P,{actor,_Type},Bin) -> 377 | % P#dp{buffer = [Bin|P#dp.buffer]}. 
378 | 379 | cmd_insert(#dp{curdb = actordb, wait = false} = P,_,Bin) -> 380 | send_query(P,Bin); 381 | cmd_insert(#dp{curdb = config, wait = false} = P, _, Bin) -> 382 | send_cfg_query(P,Bin); 383 | cmd_insert(P,_,Bin) -> 384 | append(P,Bin). 385 | 386 | cmd_usermng(#dp{curdb = config} = P,_,Bin) -> 387 | append(P,Bin); 388 | cmd_usermng(P,_,_) -> 389 | print(P,"Not in config database."). 390 | 391 | cmd_update(#dp{curdb = actordb, wait = false} = P,_,Bin) -> 392 | send_query(P,Bin); 393 | cmd_update(#dp{curdb = config, wait = false} = P,_,Bin) -> 394 | send_cfg_query(P,Bin); 395 | cmd_update(P,_,Bin) -> 396 | append(P,Bin). 397 | 398 | cmd_select(#dp{curdb = actordb, wait = false} = P,_,Bin) -> 399 | send_query(P,Bin); 400 | cmd_select(#dp{curdb = actordb} = P,_,Bin) -> 401 | append(P,Bin); 402 | cmd_select(#dp{curdb = schema} = P,_,_) -> 403 | print(P,"select statements do not belong in schema."); 404 | cmd_select(P,_,Bin) -> 405 | send_cfg_query(P,Bin). 406 | 407 | % cmd_create(#dp{curdb = actordb, wait = false} = P,Bin) -> 408 | % send_query(P,Bin); 409 | % cmd_create(#dp{curdb = actordb} = P,Bin) -> 410 | % append(P,Bin); 411 | % cmd_create(P,_) -> 412 | % print(P,"Can not run create on current db."). 413 | 414 | cmd_delete(#dp{curdb = actordb, wait = false} = P,_R,Bin) -> 415 | send_query(P,Bin); 416 | cmd_delete(#dp{curdb = actordb} = P,_R,Bin) -> 417 | append(P,Bin); 418 | cmd_delete(P,_,Bin) -> 419 | append(P,Bin). 420 | 421 | send_cfg_query(P,Bin) -> 422 | Cfg = actordb_client:config([{query_timeout, 5000},{blob_tuple,true}]), 423 | case catch actordb_client:exec_config(Cfg,butil:tobin(Bin)) of 424 | {ok,{false,Map}} -> 425 | map_print(P,Map); 426 | {ok,{changes,_Rowid,_NChanged}} -> 427 | print(P,"Config updated.",[]); 428 | {error,{login_failed,_}} when P#dp.env == wx -> 429 | wxproc ! dologin, 430 | P; 431 | {'EXIT',{noproc,_}} -> 432 | print(P,"No session."), 433 | halt(1); 434 | Err -> 435 | print(P,"Error: ~p",[Err]) 436 | end. 
437 | 438 | send_schema_query(P,Bin) -> 439 | Cfg = actordb_client:config([{query_timeout, 5000},{blob_tuple,true}]), 440 | case catch actordb_client:exec_schema(Cfg, butil:tobin(Bin)) of 441 | {ok,{false,Map}} -> 442 | map_print(P,Map); 443 | {ok,{changes,_Rowid,_NChanged}} -> 444 | print(P,"Schema updated.",[]); 445 | {error,{login_failed,_}} when P#dp.env == wx -> 446 | wxproc ! dologin, 447 | P; 448 | {'EXIT',{noproc,_}} -> 449 | print(P,"No session."), 450 | halt(1); 451 | Err -> 452 | print(P,"Error: ~p",[Err]) 453 | end. 454 | 455 | send_query(P,Bin) when P#dp.buffer /= [] -> 456 | send_query(P#dp{buffer = []},lists:reverse((append(P,Bin))#dp.buffer)); 457 | send_query(P,Bin) -> 458 | Cfg = actordb_client:config([{query_timeout, 5000},{blob_tuple,true}]), 459 | case catch actordb_client:exec(Cfg, butil:tobin(Bin)) of 460 | {ok,{false,Map}} -> 461 | map_print(P,Map); 462 | {ok,{changes,Rowid,NChanged}} -> 463 | print(P,"Rowid: ~p, Rows changed: ~p",[Rowid,NChanged]); 464 | {error,{login_failed,_}} when P#dp.env == wx -> 465 | wxproc ! dologin, 466 | P; 467 | {'EXIT',{noproc,_}} -> 468 | print(P,"No session."), 469 | halt(1); 470 | Err -> 471 | print(P,"Error: ~p",[Err]) 472 | end. 473 | 474 | print(P,F) -> 475 | print(P,F,[]). 476 | print(#dp{env = test} = P,F,A) -> 477 | io:format(F++"~n",A), 478 | P; 479 | print(#dp{env = wx} = P,F,A) -> 480 | io:format(F,A), 481 | io:format("\n"), 482 | wxproc ! {print,io_lib:fwrite(F,A)}, 483 | P; 484 | print(P,F,A) -> 485 | port_command(P#dp.resp, [unicode:characters_to_binary(io_lib:format(F,A)),<<"\r\n">>]), 486 | P. 487 | 488 | change_prompt(P) when P#dp.execute /= undefined -> 489 | P; 490 | change_prompt(#dp{env = wx} = P) -> 491 | case P#dp.curdb of 492 | actordb -> 493 | wxproc ! {prompt,"actordb"++uncommited(P)++"> "}; 494 | config -> 495 | wxproc ! {prompt,"actordb:config"++uncommited(P)++"> "}; 496 | schema -> 497 | wxproc ! 
{prompt,"actordb:schema"++uncommited(P)++"> "} 498 | end, 499 | P; 500 | change_prompt(P) -> 501 | case P#dp.curdb of 502 | actordb -> 503 | print(P,"~~~~actordb"++uncommited(P)++"> "); 504 | config -> 505 | print(P,"~~~~actordb:config"++uncommited(P)++"> "); 506 | schema -> 507 | print(P,"~~~~actordb:schema"++uncommited(P)++"> ") 508 | end. 509 | 510 | uncommited(#dp{buffer = []}) -> 511 | ""; 512 | uncommited(P) -> 513 | " ("++integer_to_list(length(P#dp.buffer))++")". 514 | 515 | print_help(P) when P#dp.execute /= undefined -> 516 | P; 517 | print_help(#dp{env = test} = P) -> 518 | P; 519 | print_help(#dp{curdb = actordb} = P) -> 520 | P; 521 | % print_help(#dp{curdb = users} = P) -> 522 | % print(P,"MySQL commands https://dev.mysql.com/doc/refman/5.1/en/user-account-management.html"); 523 | print_help(#dp{curdb = config} = P) -> 524 | % Url = "https://dev.mysql.com/doc/refman/5.1/en/user-account-management.html\n", 525 | Usr = "CREATE USER 'myuser' IDENTIFIED BY 'mypass'\n", 526 | Usrg = "GRANT read,write ON * to 'myuser'\n", 527 | U = "For user account management:\n"++Usr++Usrg, 528 | E = "To create/modify servers, run inserts into these tables: \n"++ 529 | "CREATE TABLE groups (name TEXT, type TEXT DEFAULT 'cluster');\n"++ 530 | "CREATE TABLE nodes (name TEXT, group_name TEXT);\n", 531 | N = "(optional) To store various configuration info you can use:\n"++ 532 | "CREATE TABLE state (id TEXT, val);\n", 533 | print(P,delim()++U++delim()++E++delim()++N++delim()); 534 | print_help(#dp{curdb = schema} = P) -> 535 | S = "actor type1; CREATE TABLE tab (id INTEGER PRIMARY KEY, val TEXT);\n", 536 | R = "WARNING: Schema is not overwritten but appended.\n"++ 537 | " Any pre-existing type will have old and new\n"++ 538 | " statements as its schema.\n", 539 | print(P,delim()++"Create or modify schema for actor types. Example:\n"++S++delim()++R++delim()). 
540 | 541 | dopipe(#dp{stop = true}) -> 542 | ok; 543 | dopipe(#dp{env = wx} = P) -> 544 | After = P#dp.timeout_after, 545 | receive 546 | {login,U,Pw} -> 547 | NP = P#dp{username = U, password = Pw}, 548 | dologin(NP), 549 | dopipe(NP); 550 | {dofile,Pth} -> 551 | {ok,Bin} = file:read_file(Pth), 552 | dopipe(cmd_lines(P,binary:split(Bin,<<"\n">>,[global]))); 553 | {exec,Str} -> 554 | case catch cmd(P,Str) of 555 | #dp{} = NP -> 556 | dopipe(NP); 557 | X -> 558 | print(P,io_lib:fwrite("~p",[X])), 559 | dopipe(P) 560 | end 561 | after After -> 562 | timeout 563 | end; 564 | dopipe(P) -> 565 | After = P#dp.timeout_after, 566 | receive 567 | {_, {data, Data}} -> 568 | Line = string:tokens(binary_to_list(Data),"\n"), 569 | case Line of 570 | [Q] when (Q == "Q" orelse Q == "q") andalso P#dp.password /= prompt -> 571 | print(P,"Bye!"), 572 | ok; 573 | _ when P#dp.password == prompt -> 574 | Data; 575 | _ -> 576 | case catch cmd(P,Data) of 577 | #dp{} = NP -> 578 | dopipe(NP); 579 | X -> 580 | % port_command(P#dp.resp, [io_lib:fwrite("~p",[X]),<<"\n">>]), 581 | print(P,io_lib:fwrite("~p",[X])), 582 | dopipe(P) 583 | end 584 | end; 585 | X -> 586 | port_command(P#dp.resp, [io_lib:fwrite("~p",[X]),<<"\n">>]), 587 | io:format("Received ~p~n",[X]) 588 | after After -> 589 | timeout 590 | end. 591 | 592 | map_print(M) when is_list(M) -> 593 | map_print(#dp{env = test},M); 594 | map_print(M) -> 595 | map_print([M]). 
596 | map_print(#dp{print = Pr} = P,[]) when Pr /= undefined -> 597 | P; 598 | map_print(P,[]) -> 599 | print(P,"No results."); 600 | map_print(#dp{print = min} = P,L) -> 601 | Str = [begin 602 | ML = lists:keysort(1,[{atom_to_list(K),V} || {K,V} <- maps:to_list(M)]), 603 | [butil:iolist_join([to_unicode(V) || {_,V} <- ML],"|")] 604 | end || M <- L], 605 | print(P,butil:iolist_join(Str,"\n")), 606 | P; 607 | map_print(#dp{print = Csv} = P,L) when Csv == csv; Csv == csvh -> 608 | Str = [begin 609 | ML = lists:keysort(1,[{atom_to_list(K),V} || {K,V} <- maps:to_list(M)]), 610 | [butil:iolist_join([quote(V) || {_,V} <- ML],",")] 611 | end || M <- L], 612 | case Csv of 613 | csvh -> 614 | Keys = lists:sort([atom_to_list(K) || K <- maps:keys(hd(L))]), 615 | print(P,butil:iolist_join(Keys,",")); 616 | _ -> 617 | ok 618 | end, 619 | print(P,butil:iolist_join(Str,"\n")), 620 | P; 621 | map_print(P,M) -> 622 | Keys = maps:keys(hd(M)), 623 | map_print(P,Keys,M,[]). 624 | 625 | quote(X) when is_list(X); is_binary(X) -> 626 | [$\",to_unicode(re:replace(X, "\"", "\"\"", [global, unicode,{return,list}])),$\"]; 627 | quote(X) -> 628 | to_unicode(X). 629 | 630 | to_unicode(undefined) -> 631 | "null"; 632 | to_unicode({blob,B}) -> 633 | "0x"++binary_to_list(butil:dec2hex(B)); 634 | to_unicode(B) when is_binary(B) -> 635 | case unicode:characters_to_list(B) of 636 | R when is_list(R) -> 637 | R; 638 | _ -> 639 | "0x"++butil:tolist(butil:dec2hex(B)) 640 | end; 641 | to_unicode(B) when is_list(B) -> 642 | to_unicode(iolist_to_binary(B)); 643 | to_unicode(B) -> 644 | to_unicode(butil:tobin(B)). 
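The CSV branch of `map_print/2` above escapes each field through `quote/1`: the field is wrapped in double quotes and any embedded double quote is doubled (RFC 4180-style escaping). A self-contained sketch of the same rule, with non-string values passed through unquoted (the module name is illustrative, not part of ActorDB):

```erlang
-module(csv_quote_demo).
-export([quote/1]).

%% Sketch of the CSV field escaping used by quote/1 above:
%% wrap strings/binaries in double quotes and double any embedded
%% double quote; leave numbers unquoted. Illustrative only.
quote(X) when is_list(X); is_binary(X) ->
    [$", re:replace(X, "\"", "\"\"", [global, unicode, {return, list}]), $"];
quote(X) when is_integer(X) ->
    integer_to_list(X);
quote(X) when is_float(X) ->
    float_to_list(X).
```

Doubling quotes rather than backslash-escaping is what spreadsheet tools and most CSV parsers expect, which is why the console uses it for `-print csv` output.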
645 | 646 | map_print(P,[Key|T],Maps,L) -> 647 | Lenk = length(butil:tolist(Key)), 648 | Len = lists:max([Lenk|[length(to_unicode(maps:get(Key,M))) || M <- Maps]]), 649 | map_print(P,T,Maps,[{Key,Len}|L]); 650 | map_print(P,[],Maps,L1) -> 651 | L = lists:reverse(L1), 652 | Width = lists:sum([Len || {_,Len} <- L]), 653 | Chars = length(L)+1 + Width, 654 | Delim = string:right("",Chars,$*), 655 | Delim1 = string:right("",Chars,$-), 656 | print(P,"~s",[Delim]), 657 | StrKeys = [io_lib:format("~ts",[string:left(to_unicode(K),Len+1,$\s)]) || {K,Len} <- L], 658 | print(P,"~ts|",[StrKeys]), 659 | print(P,"~s",[Delim1]), 660 | map_print1(P,Maps,L), 661 | print(P,"~s",[Delim1]). 662 | 663 | map_print1(P,[M|T],Keys) -> 664 | print(P,[[io_lib:format("~ts",[string:left(to_unicode(maps:get(K,M)),Len+1,$\s)]) || {K,Len} <- Keys],"|"]), 665 | map_print1(P,T,Keys); 666 | map_print1(_,[],_) -> 667 | []. 668 | -------------------------------------------------------------------------------- /src/actordb_wxconsole.erl: -------------------------------------------------------------------------------- 1 | -module(actordb_wxconsole). 2 | -include_lib("wx/include/wx.hrl"). 3 | -define(PROMPT,"actordb>"). 4 | -export([main/1, wxrun/0]). 5 | 6 | main(A) -> 7 | actordb_console:main(A). 8 | 9 | 10 | 11 | -record(wc,{wx, dlg, input, disp, prompt = ?PROMPT,history_pos = 0,current = "",history = []}). 
12 | wxrun() -> 13 | register(wxproc,self()), 14 | Wx = wx:new(), 15 | Dlg = wxDialog:new(Wx,-1,"ActorDB Shell",[{size,{640,480}},{style,?wxRESIZE_BORDER bor ?wxDEFAULT_DIALOG_STYLE}]), 16 | Sizer = wxBoxSizer:new(?wxVERTICAL), 17 | TextDisplay = wxTextCtrl:new(Dlg,4,[{style, ?wxTE_MULTILINE bor ?wxTE_READONLY}]), 18 | TextInput = wxTextCtrl:new(Dlg,5,[{style, ?wxDEFAULT bor ?wxTE_PROCESS_ENTER}]), 19 | SzFlags = [{proportion, 0}, {border, 4}, {flag, ?wxALL}], 20 | wxSizer:add(Sizer,TextDisplay,[{flag, ?wxEXPAND},{proportion, 1}|SzFlags]), 21 | wxSizer:add(Sizer,TextInput,[{proportion, 0},{border, 4}, {flag, ?wxEXPAND}]), 22 | wxTextCtrl:setEditable(TextInput,true), 23 | wxDialog:setSizer(Dlg,Sizer), 24 | wxDialog:show(Dlg), 25 | wxWindow:setFocus(TextInput), 26 | wxTextCtrl:writeText(TextInput,?PROMPT), 27 | wxEvtHandler:connect(Dlg,close_window), 28 | wxEvtHandler:connect(TextInput,command_text_enter), 29 | wxEvtHandler:connect(TextDisplay,key_down), 30 | % I guess its a broken erlang wx implementation. I don't see how I can read 31 | % text from clipboard 32 | % wxEvtHandler:connect(TextInput,command_text_paste,[{skip,false}]), 33 | wxEvtHandler:connect(TextInput,key_down,[{callback,fun input/2},{userData,{TextInput,?PROMPT}}]), 34 | wxloop(#wc{wx = Wx, dlg = Dlg, input = TextInput, disp = TextDisplay}), 35 | wx:destroy(Wx). 36 | 37 | -record(lg,{dlg, uinp, pinp, btn}). 38 | 39 | wxloop(P) -> 40 | receive 41 | {prompt,Str} -> 42 | wxTextCtrl:setValue(P#wc.input,Str), 43 | wxTextCtrl:setInsertionPoint(P#wc.input,length(Str)), 44 | wxEvtHandler:disconnect(P#wc.input,key_down), 45 | wxEvtHandler:connect(P#wc.input,key_down,[{callback,fun input/2},{userData,{P#wc.input,Str}}]), 46 | self() ! 
{print,""}, 47 | wxloop(P#wc{prompt = Str}); 48 | {print,Str} -> 49 | wxTextCtrl:writeText(P#wc.disp,Str), 50 | wxTextCtrl:writeText(P#wc.disp,"\n"), 51 | wxloop(P); 52 | up -> 53 | case P#wc.history_pos of 54 | 0 -> 55 | Str = wxTextCtrl:getValue(P#wc.input), 56 | Cur = lists:sublist(Str,length(P#wc.prompt)+1,length(Str)); 57 | _ -> 58 | Cur = P#wc.current 59 | end, 60 | case catch lists:nth(P#wc.history_pos+1,P#wc.history) of 61 | {'EXIT',_} -> 62 | wxloop(P); 63 | NewStr -> 64 | wxTextCtrl:setValue(P#wc.input,P#wc.prompt++NewStr), 65 | wxTextCtrl:setInsertionPoint(P#wc.input,length(NewStr)+length(P#wc.prompt)), 66 | wxloop(P#wc{current = Cur, history_pos = P#wc.history_pos+1}) 67 | end; 68 | down -> 69 | case P#wc.history_pos > 0 of 70 | true -> 71 | case P#wc.history_pos > 1 of 72 | true -> 73 | Str = lists:nth(P#wc.history_pos-1,P#wc.history); 74 | false -> 75 | Str = P#wc.current 76 | end, 77 | wxTextCtrl:setValue(P#wc.input,P#wc.prompt++Str), 78 | wxTextCtrl:setInsertionPoint(P#wc.input,length(Str)+length(P#wc.prompt)), 79 | wxloop(P#wc{history_pos = P#wc.history_pos-1}); 80 | false -> 81 | wxloop(P) 82 | end; 83 | stop -> 84 | ok; 85 | Wx when Wx#wx.obj == P#wc.disp -> 86 | wxWindow:setFocus(P#wc.input), 87 | wxloop(P); 88 | Wx when Wx#wx.obj == P#wc.input -> 89 | Cmd = Wx#wx.event, 90 | case Cmd of 91 | #wxMouse{} -> 92 | % self() ! {print,"MOUSE!"}, 93 | wxloop(P); 94 | % #wxClipboardText{} -> 95 | % Clip = wxClipboard:get(), 96 | % self() ! 
{print,io_lib:fwrite("~p~n",[get(clip)])}, 97 | % wxloop(Disp,Input,Prompt); 98 | _ -> 99 | wxTextCtrl:setValue(P#wc.input,P#wc.prompt), 100 | wxTextCtrl:setInsertionPoint(P#wc.input,length(P#wc.prompt)), 101 | Str = Cmd#wxCommand.cmdString, 102 | Print = lists:sublist(Str,length(P#wc.prompt)+1,length(Str)), 103 | case Print of 104 | "open" -> 105 | % spawn(fun() -> 106 | % {wildCard,"*.sql"} 107 | File = wxFileDialog:new(P#wc.dlg,[{defaultDir,"."}]), 108 | case wxDialog:showModal(File) == ?wxID_OK of 109 | true -> 110 | home ! {dofile,wxFileDialog:getPath(File)}; 111 | _ -> 112 | ok 113 | end, 114 | wxFileDialog:destroy(File), 115 | wxWindow:setFocus(P#wc.input); 116 | "login" -> 117 | self() ! dologin, 118 | wxloop(P); 119 | _ -> 120 | self() ! {print,Str}, 121 | home ! {exec, unicode:characters_to_binary(Print)} 122 | end, 123 | wxloop(P#wc{history = [Print|P#wc.history]}) 124 | end; 125 | Wx when element(1,Wx) == wx -> 126 | Cmd = Wx#wx.event, 127 | case Cmd of 128 | #wxClose{} -> 129 | halt(1); 130 | _ -> 131 | wxloop(P) 132 | end; 133 | dologin -> 134 | Dlg = wxDialog:new(P#wc.dlg,7,"Login",[{style,?wxRESIZE_BORDER bor ?wxDEFAULT_DIALOG_STYLE}]), 135 | VSizer = wxBoxSizer:new(?wxVERTICAL), 136 | HSizer1 = wxBoxSizer:new(?wxHORIZONTAL), 137 | HSizer2 = wxBoxSizer:new(?wxHORIZONTAL), 138 | ULabel = wxTextCtrl:new(Dlg,-1,[{value,"Username:"},{style, ?wxDEFAULT bor ?wxTE_READONLY}]), 139 | PLabel = wxTextCtrl:new(Dlg,-1,[{value,"Password:"},{style, ?wxDEFAULT bor ?wxTE_READONLY}]), 140 | UInp = wxTextCtrl:new(Dlg,-1,[{style, ?wxDEFAULT bor ?wxTE_PROCESS_ENTER}]), 141 | PInp = wxTextCtrl:new(Dlg,-1,[{style, ?wxDEFAULT bor ?wxTE_PASSWORD bor ?wxTE_PROCESS_ENTER}]), 142 | Btn = wxButton:new(Dlg,-1,[{label,"Login"}]), 143 | SzFlags = [{proportion, 0}, {border, 4}, {flag, ?wxALL}], 144 | wxSizer:add(HSizer1,ULabel,[{flag, ?wxEXPAND},{proportion, 1}|SzFlags]), 145 | wxSizer:add(HSizer1,UInp,[{proportion, 0},{border, 4}, {flag, ?wxEXPAND}]), 146 | 
wxSizer:add(HSizer2,PLabel,[{flag, ?wxEXPAND},{proportion, 1}|SzFlags]), 147 | wxSizer:add(HSizer2,PInp,[{proportion, 0},{border, 4}, {flag, ?wxEXPAND}]), 148 | wxSizer:add(VSizer,HSizer1), 149 | wxSizer:add(VSizer,HSizer2), 150 | wxSizer:add(VSizer,Btn,[{proportion,0},{flag,?wxEXPAND}]), 151 | wxDialog:setSizerAndFit(Dlg,VSizer), 152 | UP = #lg{dlg = Dlg, uinp = UInp, pinp = PInp, btn = Btn}, 153 | wxEvtHandler:connect(Btn,command_button_clicked,[{callback,fun btn/2},{userData,UP}]), 154 | wxEvtHandler:connect(UInp,command_text_enter,[{callback,fun uinp/2},{userData,UP}]), 155 | wxEvtHandler:connect(PInp,command_text_enter,[{callback,fun pinp/2},{userData,UP}]), 156 | wxEvtHandler:connect(UInp,key_down,[{callback,fun ukey/2},{userData,UP}]), 157 | wxEvtHandler:connect(PInp,key_down,[{callback,fun pkey/2},{userData,UP}]), 158 | wxWindow:setFocus(UInp), 159 | wxDialog:showModal(Dlg), 160 | wxDialog:destroy(Dlg), 161 | wxWindow:setFocus(P#wc.input), 162 | wxloop(P) 163 | end. 164 | 165 | uinp(Wx,_Obj) -> 166 | P = Wx#wx.userData, 167 | wxWindow:setFocus(P#lg.pinp). 168 | pinp(Wx,_Obj) -> 169 | P = Wx#wx.userData, 170 | U = wxTextCtrl:getValue(P#lg.uinp), 171 | Pw = wxTextCtrl:getValue(P#lg.pinp), 172 | home ! {login,U,Pw}, 173 | wxWindow:close(P#lg.dlg). 174 | 175 | ukey(Wx,Obj) -> 176 | Cmd = Wx#wx.event, 177 | P = Wx#wx.userData, 178 | case Cmd of 179 | #wxKey{keyCode = ?WXK_TAB} -> 180 | wxWindow:setFocus(P#lg.pinp); 181 | _ -> 182 | wxEvent:skip(Obj) 183 | end. 184 | pkey(Wx,Obj) -> 185 | Cmd = Wx#wx.event, 186 | P = Wx#wx.userData, 187 | case Cmd of 188 | #wxKey{keyCode = ?WXK_TAB} -> 189 | wxWindow:setFocus(P#lg.btn); 190 | _ -> 191 | wxEvent:skip(Obj) 192 | end. 193 | btn(Wx,Obj) -> 194 | pinp(Wx,Obj). 195 | 196 | 197 | input(Wx, Obj) -> 198 | Cmd = Wx#wx.event, 199 | case Cmd of 200 | #wxKey{keyCode = ?WXK_UP} -> 201 | % wxEvent:skip(Obj); 202 | wxproc ! up, 203 | ok; 204 | #wxKey{keyCode = ?WXK_DOWN} -> 205 | % wxEvent:skip(Obj); 206 | wxproc ! 
down, 207 | ok; 208 | #wxKey{keyCode = ?WXK_HOME} -> 209 | {Input,Prompt} = Wx#wx.userData, 210 | wxTextCtrl:setInsertionPoint(Input,length(Prompt)); 211 | %wxEvent:skip(Obj); 212 | #wxKey{keyCode = K} when K == ?WXK_BACK; K == ?WXK_LEFT -> 213 | {Input,Prompt} = Wx#wx.userData, 214 | Len = length(wxTextCtrl:getValue(Input)), 215 | case Len > length(Prompt) of 216 | true -> 217 | wxEvent:skip(Obj); 218 | false -> 219 | ok 220 | end; 221 | _ -> 222 | wxEvent:skip(Obj) 223 | end. 224 | -------------------------------------------------------------------------------- /startdev.sh: -------------------------------------------------------------------------------- 1 | #!/bin/sh 2 | case $1 in 3 | valgrind) 4 | export VALGRIND_MISC_FLAGS="-v --fair-sched=yes --trace-children=yes --tool=memcheck --suppressions=../otp/erts/emulator/valgrind/suppress.standard --show-possibly-lost=no" 5 | exec ../otp/bin/cerl -valgrind -pa ebin deps/*/ebin -config etc/app.config -s actordb_core -args_file etc/vm.args 6 | ;; 7 | debug) 8 | exec ../otp/bin/cerl -debug -pa ebin deps/*/ebin -config etc/app.config -s actordb_core -args_file etc/vm.args 9 | ;; 10 | *) 11 | exec erl +C multi_time_warp -pa ebin deps/*/ebin -config etc/app.config -s actordb_core -args_file etc/vm.args +stbt db 12 | esac 13 | 14 | 15 | -------------------------------------------------------------------------------- /test/dist_test.erl: -------------------------------------------------------------------------------- 1 | % This Source Code Form is subject to the terms of the Mozilla Public 2 | % License, v. 2.0. If a copy of the MPL was not distributed with this 3 | % file, You can obtain one at http://mozilla.org/MPL/2.0/. 4 | 5 | % ./detest test/dist_test.erl single 6 | % ./detest test/dist_test.erl cluster 7 | 8 | -module(dist_test). 9 | -export([cfg/1,setup/1,cleanup/1,run/1]). 10 | -define(INF(F,Param),io:format("~p ~p ~p:~p ~s~n",[ltime(),self(),?MODULE,?LINE,io_lib:fwrite(F,Param)])). 
11 | -define(INF(F),?INF(F,[])). 12 | -define(NUMACTORS,100). 13 | -include_lib("eunit/include/eunit.hrl"). 14 | -include("test_util.erl"). 15 | numactors() -> 16 | ?NUMACTORS. 17 | -define(ND1,[{name,node1},{rpcport,50101},{dist_port, 50100},{connect_offset, 1000}, 18 | {pmd_nodes,["'node1@127.0.0.1:50100'","'node2@127.0.0.1:50200'","'node3@127.0.0.1:50300'","'node4@127.0.0.1:50400'","'node5@127.0.0.1:50500'"]}]). 19 | -define(ND2,[{name,node2},{rpcport,50201},{dist_port, 50200},{connect_offset, 2000}, 20 | {pmd_nodes,["'node1@127.0.0.1:50100'","'node2@127.0.0.1:50200'","'node3@127.0.0.1:50300'","'node4@127.0.0.1:50400'","'node5@127.0.0.1:50500'"]}]). 21 | -define(ND3,[{name,node3},{rpcport,50301},{dist_port, 50300},{connect_offset, 3000}, 22 | {pmd_nodes,["'node1@127.0.0.1:50100'","'node2@127.0.0.1:50200'","'node3@127.0.0.1:50300'","'node4@127.0.0.1:50400'","'node5@127.0.0.1:50500'"]}]). 23 | -define(ND4,[{name,node4},{rpcport,50401},{dist_port, 50400},{connect_offset, 4000}, 24 | {pmd_nodes,["'node1@127.0.0.1:50100'","'node2@127.0.0.1:50200'","'node3@127.0.0.1:50300'","'node4@127.0.0.1:50400'","'node5@127.0.0.1:50500'"]}]). 25 | -define(ND5,[{name,node5},{rpcport,50501},{dist_port, 50500},{connect_offset, 5000}, 26 | {pmd_nodes,["'node1@127.0.0.1:50100'","'node2@127.0.0.1:50200'","'node3@127.0.0.1:50300'","'node4@127.0.0.1:50400'","'node5@127.0.0.1:50500'"]}]). 27 | -define(ALL_NODES,[?ND1,?ND2,?ND3,?ND4,?ND5]). 
28 | 29 | %{erlcmd,"../otp/bin/cerl -valgrind"},{erlenv,[{"VALGRIND_MISC_FLAGS","-v --leak-check=full --tool=memcheck --track-origins=no "++ 30 | % "--suppressions=../otp/erts/emulator/valgrind/suppress.standard --show-possibly-lost=no"}]} 31 | cfg(Args) -> 32 | case Args of 33 | [TT|_] when TT == "single"; TT == "addsecond"; TT == "endless1"; TT == "addclusters"; TT == "mysql"; TT == "checkredirect" -> 34 | Nodes = [?ND1]; 35 | ["multicluster"|_] -> 36 | Nodes = [?ND1,?ND2,?ND3,?ND4]; 37 | [TT|_] when TT == "addthentake"; TT == "addcluster"; TT == "endless2"; TT == "queue" -> 38 | Nodes = [?ND1,?ND2]; 39 | {Nodes,_Groups} -> 40 | ok; 41 | [] = Nodes -> 42 | io:format("ERROR:~n"), 43 | io:format("No test type provided. Available tests: "++ 44 | "single, cluster, multicluster, mysql, addsecond, missingnode, addthentake, addcluster, failednodes, checkredirect, "++ 45 | "endless1, endless2, addclusters~n~n"), 46 | throw(noparam); 47 | ["partitions"] -> 48 | Nodes = [?ND1,?ND2,?ND3,?ND4,?ND5]; 49 | ["cluster"] -> 50 | Nodes = [?ND1,?ND2,?ND3,?ND4,?ND5]; 51 | _ -> 52 | Nodes = [?ND1,?ND2,?ND3] 53 | end, 54 | [ 55 | % these dtl files get nodes value as a parameter and whatever you add here. 56 | {global_cfg,[]}, 57 | % Config files per node. For every node, its property list is added when rendering. 58 | % if name contains app.config or vm.args it gets automatically added to run node command 59 | % do not set cookie or name of node in vm.args this is set by detest 60 | {per_node_cfg,["test/etc/app.config"]}, 61 | % cmd is appended to erl execute command, it should execute your app. 62 | % It can be set for every node individually. Add it to that list if you need it, it will override this value. 
63 | {cmd,"-s actordb_core +S 2 +A 2 +sfwi 100 +sbwt none -start_epmd false -epmd_module actordb_pmd"}, 64 | 65 | % optional command to start erlang with 66 | % {erlcmd,"../otp/bin/cerl -valgrind"}, 67 | 68 | % optional environment variables for erlang 69 | %{erlenv,[{"VALGRIND_MISC_FLAGS","-v --leak-check=full --tool=memcheck --track-origins=no "++ 70 | % "--suppressions=../otp/erts/emulator/valgrind/suppress.standard --show-possibly-lost=no"}]}, 71 | 72 | % in ms, how long to wait to connect to node. If running with valgrind it takes a while. 73 | {connect_timeout,10000}, 74 | 75 | % in ms, how long to wait for application start once node is started 76 | {app_wait_timeout,10000}, 77 | 78 | % which app to wait for to consider node started 79 | {wait_for_app,actordb_core}, 80 | % What RPC to execute for stopping nodes (optional, def. is {init,stop,[]}) 81 | {stop,{actordb_core,stop_complete,[]}}, 82 | {internode_bw, 1024 * 128}, 83 | {nodes,Nodes} 84 | ]. 85 | 86 | % Before starting nodes 87 | setup(Param) -> 88 | filelib:ensure_dir([butil:ds_val(path,Param),"/log"]). 89 | 90 | % Nodes have been closed 91 | cleanup(_Param) -> 92 | ok. 93 | 94 | run(Param) -> 95 | ets:new(?GLOBAL, [named_table,public,set]), 96 | butil:ds_add(counter, 0, ?GLOBAL), 97 | [TestType|_] = butil:ds_val(args,Param), 98 | run(Param,TestType), 99 | ok. 
100 | 101 | run(Param,TType) when TType == "single"; TType == "cluster"; TType == "multicluster" -> 102 | Nd1 = butil:ds_val(node1,Param), 103 | Nd2 = butil:ds_val(node2,Param), 104 | Nd3 = butil:ds_val(node3,Param), 105 | Nd4 = butil:ds_val(node4,Param), 106 | Nd5 = butil:ds_val(node5,Param), 107 | Ndl = [N || N <- [Nd1,Nd2,Nd3,Nd4,Nd5], N /= undefined], 108 | lager:info("~p",[Param]), 109 | % rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],3000), 110 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 111 | timer:sleep(100), 112 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 113 | ok = wait_tree(Nd1,10000), 114 | % detest_net:reg_caller(), 115 | basic_write(Ndl), 116 | basic_read(Ndl), 117 | basic_write(Ndl), 118 | basic_read(Ndl), 119 | multiupdate_write(Ndl), 120 | multiupdate_read(Ndl), 121 | kv_readwrite(Ndl), 122 | basic_write(Ndl), 123 | basic_read(Ndl), 124 | copyactor(Ndl), 125 | [detest:stop_node(Nd) || Nd <- [Nd1,Nd2,Nd3,Nd4,Nd5], Nd /= undefined], 126 | detest:add_node(?ND1), 127 | case TType of 128 | "cluster" -> 129 | detest:add_node(?ND2), 130 | detest:add_node(?ND3), 131 | detest:add_node(?ND4), 132 | detest:add_node(?ND5); 133 | "multicluster" -> 134 | detest:add_node(?ND2), 135 | detest:add_node(?ND3), 136 | detest:add_node(?ND4); 137 | _ -> 138 | ok 139 | end, 140 | basic_write(Ndl), 141 | 142 | % detest_net:shape_traffic_rand(), 143 | % spawn_writes(2,undefined), 144 | 145 | detest_net:isolation_group_set([Nd2],nd2), 146 | spawn_writes(2,Ndl), 147 | detest_net:isolation_group_remove(nd2), 148 | err_write(Ndl), 149 | basic_write(Ndl), 150 | 151 | detest_net:isolation_group_set([Nd3],nd3), 152 | spawn_writes(2,Ndl), 153 | detest_net:isolation_group_remove(nd3), 154 | spawn_writes(2,Ndl), 155 | 156 | ?INF("Isolating nd1 nd2"), 157 | detest_net:isolation_group_set([Nd1,Nd2],nd1), 158 | Ndl1 = [N || N <- [Nd3,Nd4,Nd5], N /= undefined], 159 | spawn_writes(2,Ndl1), 160 | 
detest_net:isolation_group_remove(nd1), 161 | ?INF("Isolation end nd1 nd2"), 162 | 163 | ?INF("Isolating nd2 nd3"), 164 | detest_net:isolation_group_set([Nd2,Nd3],nd1), 165 | Ndl2 = [N || N <- [Nd4,Nd5], N /= undefined], 166 | spawn_writes(2,Ndl2), 167 | detest_net:isolation_group_remove(nd1), 168 | ?INF("End isolation nd2 nd3"), 169 | 170 | spawn_writes(2,Ndl), 171 | ok; 172 | % ok = check_multiupdate_deadlock(Ndl); 173 | run(Param,"queue") -> 174 | Nd1 = butil:ds_val(node1,Param), 175 | Nd2 = butil:ds_val(node2,Param), 176 | Ndl = [Nd1,Nd2], 177 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,"queue")],3000), 178 | timer:sleep(100), 179 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 180 | ok = wait_tree(Nd1,10000), 181 | rpc:call(Nd1,actordb_test,q_test,[20,3000]), 182 | ok; 183 | run(Param,"partitions") -> 184 | Nd1 = butil:ds_val(node1,Param), 185 | Nd2 = butil:ds_val(node2,Param), 186 | Nd3 = butil:ds_val(node3,Param), 187 | Nd4 = butil:ds_val(node4,Param), 188 | Nd5 = butil:ds_val(node5,Param), 189 | Ndl = [N || N <- [Nd1,Nd2,Nd3,Nd4,Nd5], N /= undefined], 190 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,"partitions")],6000), 191 | timer:sleep(100), 192 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema2()],3000), 193 | ok = wait_tree(Nd1,10000), 194 | timer:sleep(2000), 195 | % Create actor 196 | {ok,_} = exec(Ndl,<<"actor type1(ac1) create; insert into tab values (1,'sometext',2);">>), 197 | timer:sleep(2000), 198 | lager:info("Isolating nd1, nd2"), 199 | detest:isolate([Nd1,Nd2],minority), 200 | % timer:sleep(1000), 201 | lager:info("Calling write on ac1 to minority partition"), 202 | {error,consensus_timeout} = exec([Nd1],<<"actor type1(ac1) create; insert into tab values (2,'minority',2);">>,infinity), 203 | 204 | lager:info("Calling write on ac1 to majority partition"), 205 | Res1 = {ok,[{columns,{<<"id">>,<<"txt">>,<<"i">>}},{rows,[{1,<<"sometext">>,2.0}]}]}, 206 | Res2 = 
{ok,[{columns,{<<"id">>,<<"txt">>,<<"i">>}},{rows,[{2,<<"majority">>,2.0},{1,<<"sometext">>,2.0}]}]}, 207 | Res3 = {ok,[{columns,{<<"id">>,<<"txt">>,<<"i">>}}, 208 | {rows,[{3,<<"majority_2">>,2.0},{2,<<"majority">>,2.0},{1,<<"sometext">>,2.0}]}]}, 209 | Res4 = {ok,[{columns,{<<"id">>,<<"txt">>,<<"i">>}}, 210 | {rows,[{4,<<"majority_3">>,2.0},{3,<<"majority_2">>,2.0},{2,<<"majority">>,2.0},{1,<<"sometext">>,2.0}]}]}, 211 | Res1 = exec([Nd3],<<"actor type1(ac1); select * from tab;">>), 212 | {ok,_} = exec([Nd3],<<"actor type1(ac1) create; insert into tab values (2,'majority',2);">>), 213 | Res2 = exec([Nd3],<<"actor type1(ac1); select * from tab;">>), 214 | detest:isolate_end([Nd1,Nd2]), 215 | % timer:sleep(100), 216 | {ok,_} = exec([Nd3],<<"actor type1(ac1) create; insert into tab values (3,'majority_2',2);">>), 217 | Res3 = exec([Nd1],<<"actor type1(ac1); select * from tab;">>), 218 | lager:info("Majority wins!"), 219 | % timer:sleep(1000), 220 | detest:isolate([Nd4,Nd5],minority), 221 | {ok,_} = exec([Nd3],<<"actor type1(ac1) create; insert into tab values (4,'majority_3',2);">>), 222 | Res4 = exec([Nd3],<<"actor type1(ac1); select * from tab;">>), 223 | % lager:info("Read ~p",[Res4]), 224 | ok; 225 | run(Param,"remnode" = TType) -> 226 | Nd1 = butil:ds_val(node1,Param), 227 | Nd2 = butil:ds_val(node2,Param), 228 | Nd3 = butil:ds_val(node3,Param), 229 | Ndl = [Nd1,Nd2,Nd3], 230 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 231 | timer:sleep(100), 232 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 233 | ok = wait_tree(Nd1,10000), 234 | basic_write(Ndl), 235 | detest:stop_node(Nd3), 236 | {ok,_} = rpc:call(Nd1,actordb_config,exec,["delete from nodes where name like 'node3%'"],3000), 237 | timer:sleep(300), 238 | lager:info("Nodelist now: ~p",[rpc:call(Nd1,bkdcore,nodelist,[])]), 239 | lager:info("Nodelist now: ~p",[rpc:call(Nd1,ets,tab2list,[globalets])]), 240 | ok = wait_tree(Nd1,10000), 241 | basic_write(Ndl), 242 | 
ok; 243 | run(Param,"mysql" = TType) -> 244 | true = code:add_path("test/mysql.ez"), 245 | true = code:add_path("test/emysql.ez"), 246 | application:ensure_all_started(emysql), 247 | Nd1 = butil:ds_val(node1,Param), 248 | Ndl = [Nd1], 249 | % rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],3000), 250 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 251 | timer:sleep(100), 252 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema2()],3000), 253 | {ok,_} = rpc:call(Nd1,actordb_config,exec,["CREATE USER 'myuser' IDENTIFIED BY 'mypass';GRANT read,write ON * to 'myuser';"],3000), 254 | 255 | ok = wait_tree(Nd1,10000), 256 | 257 | [_,Host] = string:tokens(butil:tolist(Nd1),"@"), 258 | lager:info("OK?"), 259 | POOL = (catch emysql:add_pool(pool, [{size,1},{host,Host},{port,butil:ds_val(rpcport,?ND1)-10000},{encoding,utf8},{user,"myuser"},{password,"mypass"}])), 260 | lager:info("OK ~p",[POOL]), 261 | MyOpt = [{host,Host},{port,butil:ds_val(rpcport,?ND1)-10000},{user,"myuser"},{password,"mypass"},{database,"actordb"}], 262 | {ok,Pid} = mysql:start_link(MyOpt), 263 | 264 | FirstInsert = [111,<<"aaaa">>,1.2], 265 | SecondInsert = [1,<<"insert with prepared statement!">>,3.0], 266 | ThirdInsert = [2,<<"second insert with prepared statement!">>,5.0], 267 | 268 | lager:info("SHOW SCHEMA; ~p",[mysql:query(Pid, <<"show schema">>)]), 269 | ok = mysql:query(Pid, <<"actor type1(ac1) create;INSERT INTO tab VALUES (111,'aaaa',1.2);">>), 270 | {ok,_Cols,[FirstInsert] = _Rows} = mysql:query(Pid, <<"actor type1(ac1); select * from tab;">>), 271 | lager:info("Cols=~p, rows=~p", [_Cols, _Rows]), 272 | 273 | {ok,Id} = mysql:prepare(Pid, <<"actor type1(ac1);INSERT INTO tab VALUES ($1,$2,$3);">>), 274 | 275 | ok = mysql:execute(Pid,Id,[1,"insert with prepared statement!",3.0]), 276 | 277 | {ok,_Cols,[SecondInsert,FirstInsert] = _Rows1} = mysql:query(Pid, <<"actor type1(ac1); select * from tab;">>), 278 | lager:info("Cols=~p, rows=~p", 
[_Cols, _Rows1]), 279 | 280 | {ok,Id1} = mysql:prepare(Pid, <<"actor type1(ac1);select * from tab where id=$1;">>), 281 | {ok,_Cols,[SecondInsert] = _Rows2} = mysql:execute(Pid, Id1, [1]), 282 | lager:info("Using select with prepared statement: Cols=~p, rows=~p", [_Cols, _Rows2]), 283 | 284 | {ok,[<<"token">>],[[<<"#r0000;">>]]} = mysql:query(Pid, <<"PREPARE stmt1 () FOR type1 AS select * from tab;">>), 285 | {ok,[<<"token">>],[[InsertToken]]} = mysql:query(Pid, <<"PREPARE stmt2 () FOR type1 AS insert into tab values ($1,$2,$3);">>), 286 | timer:sleep(300), 287 | {ok,_Cols,[SecondInsert,FirstInsert]} = mysql:query(Pid,<<"actor type1(ac1);EXECUTE stmt1 ();">>), 288 | 289 | {ok,Id2} = mysql:prepare(Pid, <<"actor type1(ac1);",InsertToken/binary>>), 290 | ok = mysql:execute(Pid,Id2,ThirdInsert), 291 | 292 | {ok,_Cols,[SecondInsert,ThirdInsert,FirstInsert]} = PrepRes = mysql:query(Pid,<<"actor type1(ac1);EXECUTE stmt1 ();">>), 293 | io:format("PrepRes ~p~n",[PrepRes]), 294 | 295 | [emysql:execute(pool,<<"actor type1(emysql) create;INSERT INTO tabau (txt) values ('",(butil:tobin(N))/binary,"-TEXT');">>) || N <- lists:seq(1,500)], 296 | emysql:execute(pool,<<"actor type1(emysql) create;INSERT INTO tabau (txt) values ('aa');">>), 297 | emysql:execute(pool,<<"actor type1(emysql) create;INSERT INTO tabau (txt) values ('bb');">>), 298 | emysql:execute(pool,<<"actor type1(emysql) create;INSERT INTO tabau (txt) values ('cc');">>), 299 | 300 | io:format("Res:~p~n",[emysql:execute(pool,"actor type1(emysql);select * from tabau;")]), 301 | io:format("Res:~p~n",[emysql:execute(pool,"actor type1(emysql);select * from sqlite_sequence;")]), 302 | io:format("mysql: ~p~n",[mysql:query(Pid,"actor type1(emysql);select * from tabau;")]), 303 | ok; 304 | run(Param,"checkredirect" = TType) -> 305 | [Nd1,Path] = butil:ds_vals([node1,path],Param), 306 | Ndl = [Nd1], 307 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 308 | timer:sleep(100), 309 | {ok,_} = 
rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 310 | ok = wait_tree(Nd1,10000), 311 | basic_write(Ndl), 312 | 313 | % Add second node 314 | Nd2 = detest:add_node(?ND2), 315 | % Configure it in its own cluster 316 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[iolist_to_binary([grp(2), nds([Nd2],2)])],3000), 317 | ok = wait_modified_tree(Nd2,[Nd1,Nd2],30000), 318 | basic_write(Ndl), 319 | NoopRes = [rpc:call(Nd1,actordb_sqlproc,call, [{<<"ac",(butil:tobin(N))/binary>>,type1},[],noop,actordb_actor,onlylocal]) || N <- lists:seq(1,100)], 320 | Redirect = [ok || {redirect, _} <- NoopRes], 321 | true = length(Redirect) > length(NoopRes)*0.3, 322 | lager:info("Moved actors=~p, out of total=~p",[length(Redirect), length(NoopRes)]), 323 | ok; 324 | run(Param,"addsecond" = TType) -> 325 | [Nd1,Path] = butil:ds_vals([node1,path],Param), 326 | Ndl = [Nd1], 327 | % rpc:call(Nd1,actordb_cmd,cmd,[init,commit,Path++"/node1/etc"],3000), 328 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 329 | timer:sleep(100), 330 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 331 | 332 | ok = wait_tree(Nd1,10000), 333 | basic_write(Ndl), 334 | basic_read(Ndl), 335 | %test_add_second(Ndl), 336 | Nd2 = detest:add_node(?ND2), 337 | % rpc:call(Nd1,actordb_cmd,cmd,[updatenodes,commit,Path++"/node1/etc"],3000), 338 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[nds([Nd2],1)],3000), 339 | 340 | ok = wait_modified_tree(Nd2,[Nd1,Nd2],30000), 341 | basic_write(Ndl), 342 | kv_readwrite(Ndl), 343 | multiupdate_write(Ndl), 344 | multiupdate_read(Ndl), 345 | basic_write(Ndl), 346 | basic_read(Ndl); 347 | run(Param,"missingnode" = TType) -> 348 | Nd1 = butil:ds_val(node1,Param), 349 | Nd2 = butil:ds_val(node2,Param), 350 | Nd3 = butil:ds_val(node3,Param), 351 | Ndl = [Nd1,Nd2,Nd3], 352 | % rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],3000), 353 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 354 | 
timer:sleep(100), 355 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 356 | 357 | ok = wait_tree(Nd1,10000), 358 | basic_write(Ndl), 359 | basic_read(Ndl), 360 | basic_write(Ndl), 361 | basic_read(Ndl), 362 | kv_readwrite(Ndl), 363 | multiupdate_write(Ndl), 364 | multiupdate_read(Ndl), 365 | copyactor(Ndl), 366 | detest:stop_node(Nd3), 367 | basic_write(Ndl), 368 | basic_write(Ndl); 369 | run(Param,"addthentake" = TType) -> 370 | Path = butil:ds_val(path,Param), 371 | Nd1 = butil:ds_val(node1,Param), 372 | Nd2 = butil:ds_val(node2,Param), 373 | Ndl = [Nd1,Nd2], 374 | % rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],3000), 375 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 376 | timer:sleep(100), 377 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 378 | 379 | ok = wait_tree(Nd1,10000), 380 | basic_write(Ndl), 381 | basic_read(Ndl), 382 | Nd3 = detest:add_node(?ND3), 383 | % rpc:call(Nd1,actordb_cmd,cmd,[updatenodes,commit,Path++"/node1/etc"],3000), 384 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[nds([Nd3],1)],3000), 385 | ok = wait_modified_tree(Nd3,[Nd1,Nd2,Nd3],30000), 386 | basic_read(Ndl), 387 | basic_write(Ndl), 388 | kv_readwrite(Ndl), 389 | multiupdate_write(Ndl), 390 | multiupdate_read(Ndl), 391 | detest:stop_node(Nd2), 392 | basic_write(Ndl), 393 | basic_read(Ndl), 394 | copyactor(Ndl); 395 | % run(Param,"addcluster") -> 396 | % Nd1 = butil:ds_val(node1,Param), 397 | % Nd2 = butil:ds_val(node2,Param), 398 | % Ndl = [Nd1,Nd2], 399 | % rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],3000), 400 | % ok = wait_tree(Nd1,10000), 401 | % basic_write(Ndl), 402 | % basic_read(Ndl), 403 | % kv_readwrite(Ndl), 404 | % Nd3 = detest:add_node(?ND3,[{global_cfg,[{"test/nodes.yaml",[{groups,?TWOGRPS([?ND1,?ND2],[?ND3,?ND4])}]},"test/schema.yaml"]}]), 405 | % Nd4 = 
detest:add_node(?ND4,[{global_cfg,[{"test/nodes.yaml",[{groups,?TWOGRPS([?ND1,?ND2],[?ND3,?ND4])}]},"test/schema.yaml"]}]), 406 | % rpc:call(Nd1,actordb_cmd,cmd,[updatenodes,commit,butil:ds_val(path,Param)++"/node1/etc"],3000), 407 | % ok = wait_modified_tree(Nd3,[Nd1,Nd2,Nd3],60000), 408 | % ok = wait_modified_tree(Nd4,[Nd1,Nd2,Nd3,Nd4],60000), 409 | % basic_write(Ndl), 410 | % basic_read(Ndl), 411 | % multiupdate_write(Ndl), 412 | % multiupdate_read(Ndl); 413 | run(Param,"failednodes" = TType) -> 414 | Nd1 = butil:ds_val(node1,Param), 415 | Nd2 = butil:ds_val(node2,Param), 416 | Nd3 = butil:ds_val(node3,Param), 417 | Ndl = [Nd1], 418 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 419 | timer:sleep(100), 420 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 421 | 422 | ok = wait_tree(Nd1,10000), 423 | basic_write(Ndl), 424 | basic_write(Ndl), 425 | basic_read(Ndl), 426 | kv_readwrite(Ndl), 427 | multiupdate_write(Ndl), 428 | multiupdate_read(Ndl), 429 | detest:stop_node(Nd2), 430 | basic_write(Ndl), 431 | detest:add_node(?ND2), 432 | basic_write(Ndl), 433 | detest:stop_node(Nd2), 434 | detest:stop_node(Nd3), 435 | detest:add_node(?ND2), 436 | detest:add_node(?ND3), 437 | basic_write(Ndl); 438 | run(Param,"endless"++Num = TType) -> 439 | Nd1 = butil:ds_val(node1,Param), 440 | NWriters = 5000, 441 | WriterMaxSleep = 10, 442 | case butil:toint(Num) of 443 | 1 -> 444 | Ndl = [Nd1]; 445 | 2 -> 446 | Nd2 = butil:ds_val(node2,Param), 447 | Ndl = [Nd1,Nd2] 448 | end, 449 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 450 | timer:sleep(100), 451 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 452 | 453 | ok = wait_tree(Nd1,600000), 454 | Home = self(), 455 | ets:new(writecounter, [named_table,public,set,{write_concurrency,true}]), 456 | butil:ds_add(wnum,0,writecounter), 457 | butil:ds_add(wnum_sec,0,writecounter), 458 | Pids = [spawn_monitor(fun() -> 
rseed(N),writer(Home,Nd1,N,WriterMaxSleep,0) end) || N <- lists:seq(1,NWriters)], 459 | lager:info("Test will run until you stop it or something crashes."), 460 | wait_crash(Ndl); 461 | run(Param,"addclusters" = TType) -> 462 | Nd1 = butil:ds_val(node1,Param), 463 | Ndl = [Nd1], 464 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[init(Ndl,TType)],3000), 465 | timer:sleep(100), 466 | {ok,_} = rpc:call(Nd1,actordb_config,exec_schema,[schema1()],3000), 467 | 468 | ok = wait_tree(Nd1,60000), 469 | AdNodesProc = spawn_link(fun() -> addclusters(butil:ds_val(path,Param),Nd1,[?ND1]) end), 470 | make_actors(0), 471 | AdNodesProc ! done; 472 | run(Param,Nm) -> 473 | lager:info("Unknown test type ~p",[Nm]). 474 | 475 | port(Nd) -> 476 | [Nd1,_] = string:tokens(butil:tolist(Nd),"@"), 477 | NdNm = list_to_atom(Nd1), 478 | hd([butil:ds_val(rpcport,Obj) || Obj <- ?ALL_NODES, butil:ds_val(name,Obj) == NdNm]). 479 | 480 | grp(N) -> 481 | "insert into groups values ('grp"++butil:tolist(N)++"','cluster');". 482 | nds(Ndl,Grp) -> 483 | [["insert into nodes values ('",butil:tolist(Nd),":",butil:tolist(port(Nd)),"','grp",butil:tolist(Grp),"');"] || Nd <- Ndl]. 484 | usr() -> 485 | "CREATE USER 'root' IDENTIFIED BY 'rootpass'". 486 | 487 | init(Ndl,TT) when TT == "single"; TT == "cluster"; TT == "addthentake"; TT == "addcluster"; TT == "endless2"; 488 | TT == "addsecond"; TT == "endless1"; TT == "addclusters"; TT == "mysql"; 489 | TT == "remnode"; TT == "partitions"; TT == "queue"; TT == "checkredirect"; TT == "missingnode" -> 490 | [grp(1),nds(Ndl,1),usr()]; 491 | init([N1,N2,N3,N4],"multicluster") -> 492 | [grp(1),grp(2),nds([N1,N2],1),nds([N3,N4],2),usr()]. 
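The grp/nds/usr helpers above build the cluster-init statements as iolists that init/2 concatenates and ships to actordb_config:exec. As a sketch, for a hypothetical single node named "node1" whose rpcport is 45551 (illustrative values, not taken from any node definition here), the flattened SQL would read:

```erlang
%% Illustrative only: what init([Nd1],"single") flattens to for a
%% hypothetical node "node1" with rpcport 45551.
init_sql_example() ->
    <<"insert into groups values ('grp1','cluster');",
      "insert into nodes values ('node1:45551','grp1');",
      "CREATE USER 'root' IDENTIFIED BY 'rootpass'">>.
```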
493 | 494 | schema1() -> 495 | ["actor type1;", 496 | "CREATE TABLE tab (id INTEGER PRIMARY KEY, txt TEXT, i INTEGER);", 497 | "CREATE TABLE tab1 (id INTEGER PRIMARY KEY, txt TEXT);", 498 | "CREATE TABLE tab2 (id INTEGER PRIMARY KEY, txt TEXT);", 499 | "actor thread;", 500 | "CREATE TABLE thread (id INTEGER PRIMARY KEY, msg TEXT, user INTEGER);", 501 | "actor user;", 502 | "CREATE TABLE userinfo (id INTEGER PRIMARY KEY, name TEXT);", 503 | "actor counters kv;", 504 | "CREATE TABLE actors (id TEXT UNIQUE, hash INTEGER, val INTEGER);", 505 | "actor filesystem kv;", 506 | "CREATE TABLE actors (id TEXT UNIQUE, hash INTEGER, size INTEGER);", 507 | "CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, fileid TEXT, uid INTEGER, FOREIGN KEY (fileid) REFERENCES actors(id) ON DELETE CASCADE);"]. 508 | 509 | schema2() -> 510 | ["actor type1;", 511 | "CREATE TABLE tab (id INTEGER PRIMARY KEY, txt TEXT, i FLOAT);", 512 | "CREATE TABLE tabau (id INTEGER PRIMARY KEY AUTOINCREMENT, txt TEXT);"]. 513 | -------------------------------------------------------------------------------- /test/emysql.ez: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/biokoda/actordb/cf48f819c6cd1a5d4d50ff977db1617b3cece460/test/emysql.ez -------------------------------------------------------------------------------- /test/etc/app.config: -------------------------------------------------------------------------------- 1 | [ 2 | {actordb_core,[ 3 | 4 | {main_db_folder,"{{ basepath }}/{{ name }}"}, 5 | %{extra_db_folders,["{{ basepath }}/{{ name }}1"]}, 6 | 7 | % ActorDB uses LMDB storage engine, which requires a max size of database. 8 | % If in doubt, pick 90% of your HD free space. 9 | % If using multiple folders, this will limit DB size for every folder. 10 | % Values can be in TB, GB, MB. 11 | {max_db_size,"5GB"}, 12 | 13 | % Which port to listen to for mysql protocol. 
0 for disabled 14 | {mysql_protocol, {{ rpcport|add:"-10000" }} }, 15 | {thrift_port, {{ rpcport|add:"-11000" }} }, 16 | 17 | % How many transaction managers per node. 18 | % They are used for transactions over more than one actor, but not for writes to a single actor. 19 | % A higher number means more transaction managers can work at the same time. 20 | % Too low and the server will not be utilized well enough; too high and it will do too 21 | % much work at the same time. 22 | {num_transaction_managers,12}, 23 | 24 | {num_shards, 4}, 25 | 26 | % Maximum number of milliseconds to wait for DB queries to complete before they are canceled. 27 | % Set to 0 for no timeout. 28 | {query_timeout,60000}, 29 | 30 | % safe - Writes will always be synced to disk before replying to the client. 31 | % Hopefully you are using a HD that actually executes a sync when told to do so. 32 | % This has a significant performance impact. 33 | % interval - in milliseconds. This will issue fsyncs on a timer. A compromise between speed and safety. 34 | % ActorDB is a distributed database; in case of a server crash, writes will still be 35 | % replicated to other nodes, which means the degree of safety is reasonable for a lot of use cases. 36 | % fast - issue writes as fast as possible and let the OS/hardware decide when to sync. In case of a crash, data 37 | % is still replicated to other nodes, so it's not as dangerous as it may sound. 38 | % If some actors are more important than others, you can add an fsync flag to write queries. Successful write 39 | % queries with the fsync flag are fsynced to disk before replying to the client. 40 | %{fsync, safe}, 41 | {fsync, {interval,20000}}, 42 | %{fsync, fast}, 43 | 44 | % Every actor requires replication space. 45 | % This way, if one of the nodes falls behind for whatever reason, it can receive what it missed. 46 | % The worst case scenario is at least one of the nodes being offline for a longer period of time.
47 | % These parameters govern how much replication space actors should hold. Once a node is too far behind 48 | % it will require a full resync of the actor. 49 | % Values are in pages and are per actor. {5000,0.1} means keep max(5000,0.1*ActorSize) 50 | % of replication space. A page is 4096 bytes, but when stored on disk it is actually compressed. 51 | {replication_space, {5000,0.1}}, 52 | 53 | {pmd,[{connect_offset,{{ connect_offset }} }, {nodes,[ 54 | {{ pmd_nodes|join:"," }} 55 | ]}]} 56 | ]}, 57 | 58 | 59 | {bkdcore,[ 60 | {connect_offset, {{ connect_offset }} }, 61 | {autocompile,[]}, 62 | {docompile,false}, 63 | {rpcport,{{ rpcport }}} 64 | ]}, 65 | 66 | {kernel, [ 67 | {inet_dist_listen_min, {{ rpcport|add:"-1" }} }, 68 | {inet_dist_listen_max, {{ rpcport|add:"-1" }} } 69 | ]}, 70 | 71 | {lager, [ 72 | {colored, true}, 73 | {handlers, [ 74 | {lager_console_backend,[debug,{lager_default_formatter, [time," ",pid," ",node," ",module," ",line, 75 | " [",severity,"] ", message, "\n"]}]}, 76 | {lager_file_backend, [{file, "{{ basepath }}/log/{{ name }}.error.log"}, {level, error}, {formatter, lager_default_formatter}, 77 | {formatter_config, [date, " ", time," [",severity,"] ",pid," ",module, " ",line, " ", message, "\n"]}]}, 78 | {lager_file_backend, [{file, "{{ basepath }}/log/{{ name }}.info.log"}, {level, debug}, {formatter, lager_default_formatter}, 79 | {formatter_config, [date, " ", time," [",severity,"] ",pid," ",module, " ",line, " ", message, "\n"]}]} 80 | ]} 81 | ]}, 82 | 83 | %% SASL config 84 | {sasl, [ 85 | {sasl_error_logger, {file, "{{ basepath }}/log/{{ name }}.sasl-error.log"}}, 86 | {errlog_type, error}, 87 | {error_logger_mf_dir, "{{ basepath }}/log/{{ name }}"}, % Log directory 88 | {error_logger_mf_maxbytes, 10485760}, % 10 MB max file size 89 | {error_logger_mf_maxfiles, 5} % 5 files max 90 | ]} 91 | ].
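The replication_space comment above says that {5000,0.1} keeps max(5000, 0.1*ActorSize) pages of replication space per actor. A small sketch of that rule (the helper name is mine, not ActorDB's internal code):

```erlang
%% Illustrative only: how a {Fixed, Fraction} replication_space
%% setting bounds the replication pages kept per actor.
repl_pages({Fixed, Frac}, ActorPages) ->
    max(Fixed, round(Frac * ActorPages)).

%% repl_pages({5000, 0.1}, 10000)  -> 5000   (fixed floor applies)
%% repl_pages({5000, 0.1}, 200000) -> 20000  (fraction applies)
```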
92 | -------------------------------------------------------------------------------- /test/etc/simple_schema.yaml: -------------------------------------------------------------------------------- 1 | type1: 2 | - CREATE TABLE tab (id TEXT PRIMARY KEY, val INTEGER) WITHOUT ROWID 3 | -------------------------------------------------------------------------------- /test/mysql.ez: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/biokoda/actordb/cf48f819c6cd1a5d4d50ff977db1617b3cece460/test/mysql.ez -------------------------------------------------------------------------------- /test/repl_test.erl: -------------------------------------------------------------------------------- 1 | % This Source Code Form is subject to the terms of the Mozilla Public 2 | % License, v. 2.0. If a copy of the MPL was not distributed with this 3 | % file, You can obtain one at http://mozilla.org/MPL/2.0/. 4 | 5 | % Execute with: ./detest test/repl_test.erl 6 | -module(repl_test). 7 | -export([cfg/1,setup/1,cleanup/1,run/1]). 8 | -export([killconns/0]). 9 | -define(INF(F,Param),io:format("~p ~p:~p ~s~n",[ltime(),?MODULE,?LINE,io_lib:fwrite(F,Param)])). 10 | -define(INF(F),?INF(F,[])). 11 | -define(NUMACTORS,100). 12 | numactors() -> 13 | ?NUMACTORS. 14 | -include_lib("eunit/include/eunit.hrl"). 15 | -include("test_util.erl"). 16 | -include_lib("kernel/include/file.hrl"). 17 | 18 | % These nodes should be running in LXC on the local machine 19 | -define(ND1,[{name,node1},{rpcport,45551},{ssh,"node1",22,"/opt/actordb",[{user,"root"}]}]). 20 | -define(ND2,[{name,node2},{rpcport,45552},{ssh,"node2",22,"/opt/actordb",[{user,"root"}]}]). 21 | -define(ND3,[{name,node3},{rpcport,45553},{ssh,"node3",22,"/opt/actordb",[{user,"root"}]}]). 22 | -define(ND4,[{name,node4},{rpcport,45554},{ssh,"node4",22,"/opt/actordb",[{user,"root"}]}]). 23 | -define(ND5,[{name,node5},{rpcport,45555},{ssh,"node5",22,"/opt/actordb",[{user,"root"}]}]).
24 | 25 | 26 | cfg(Args) -> 27 | case Args of 28 | ["partition"] -> 29 | Nodes = [?ND1,?ND2,?ND3,?ND4,?ND5], 30 | Groups = [[{name,"grp1"},{nodes,[node1]}],[{name,"grp2"},{nodes,[node2]}],[{name,"grp3"},{nodes,[node3]}], 31 | [{name,"grp4"},{nodes,[node4]}],[{name,"grp5"},{nodes,[node5]}]]; 32 | ["random"] -> 33 | Nodes = [?ND1,?ND2,?ND3,?ND4,?ND5], 34 | Groups = [[{name,"grp1"},{nodes,[node1,node2,node3,node4,node5]}]] 35 | end, 36 | [ 37 | % these dtl files get the nodes value as a parameter, plus whatever you add here. 38 | {global_cfg,[{"test/etc/nodes.yaml",[{groups,Groups}]}, 39 | % the schema does not need any additional parameters, but we do have to rename it 40 | {{"test/etc/simple_schema.yaml","schema.yaml"},[]}]}, 41 | % Config files per node. For every node, its property list is added when rendering. 42 | % if the name contains app.config or vm.args it gets automatically added to the run-node command 43 | % do not set the cookie or node name in vm.args; these are set by detest 44 | {per_node_cfg,["test/etc/app.config"]}, 45 | % cmd is appended to the erl command; it should start your app. 46 | % It can be set for every node individually. Add it to a node's property list if you need it; it will override this value. 47 | {cmd,"-s actordb_core +S 2 +A 2"}, 48 | 49 | {detest_name,'detest@home'}, 50 | 51 | % in ms, how long to wait to connect to a node. If running with valgrind it takes a while. 52 | {connect_timeout,20000}, 53 | 54 | % in ms, how long to wait for the application to start once a node is started 55 | {app_wait_timeout,20000}, 56 | 57 | % which app to wait for to consider the node started 58 | {wait_for_app,actordb_core}, 59 | % What RPC to execute for stopping nodes (optional, default is {init,stop,[]}) 60 | {stop,{actordb_core,stop_complete,[]}}, 61 | {nodes,Nodes} 62 | ]. 63 | 64 | % Before starting nodes 65 | setup(Param) -> 66 | filelib:ensure_dir([butil:ds_val(path,Param),"/log/"]), 67 | butil:set_permission([butil:ds_val(path,Param),"/log"]).
68 | 69 | % Nodes have been closed 70 | cleanup(_Param) -> 71 | [detest:cmd(Nd,"iptables --flush") || Nd <- [?ND1,?ND2,?ND3,?ND4,?ND5]], 72 | ok. 73 | 74 | run(Param) -> 75 | case butil:ds_val(args,Param) of 76 | [TestType|_] -> 77 | run(TestType,Param); 78 | [] -> 79 | run("partition",Param) 80 | end. 81 | 82 | run("partition",Param) -> 83 | [Nd1,Nd2,Nd3,Nd4,Nd5|_] = Ndl = butil:ds_vals([node1,node2,node3,node4,node5],Param), 84 | lager:info("Calling node to init ~p, connected to: ~p",[Nd1,nodes(connected)]), 85 | "ok" = rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],10000), 86 | ok = wait_tree(Nd1,10000), 87 | timer:sleep(1000), 88 | 89 | lager:info("Isolating node1,node2, me ~p",[node()]), 90 | isolate([?ND1,?ND2],[?ND3,?ND4,?ND5]), 91 | rpc:call(Nd1,?MODULE,killconns,[]), 92 | rpc:call(Nd2,?MODULE,killconns,[]), 93 | rpc:call(Nd3,?MODULE,killconns,[]), 94 | rpc:call(Nd4,?MODULE,killconns,[]), 95 | rpc:call(Nd5,?MODULE,killconns,[]), 96 | timer:sleep(2000), 97 | 98 | % nd1 should be leader but now it can only communicate with node2 99 | {badrpc,_} = rpc:call(Nd1,actordb_sharedstate,write_global,[key,123],5000), 100 | lager:info("Abandoned call, trying in ~p",[Nd3]), 101 | [1,2,3,4,5,6,7,8,9,10] = rpc:call(Nd3,lists,seq,[1,10],5000), 102 | ok = rpc:call(Nd3,actordb_sharedstate,write_global,[key1,321],8000), 103 | 104 | 105 | %123 = rpc:call(Nd1,actordb_sharedstate,read,[<<"global">>,key],15000), 106 | 321 = rpc:call(Nd3,actordb_sharedstate,read,[<<"global">>,key1],15000), 107 | lager:info("REACHED END SUCCESSFULLY"), 108 | ok; 109 | run("random",Param) -> 110 | NWriters = 3, 111 | [Nd1,Nd2,Nd3,Nd4,Nd5|_] = Ndl = butil:ds_vals([node1,node2,node3,node4,node5],Param), 112 | "ok" = rpc:call(Nd1,actordb_cmd,cmd,[init,commit,butil:ds_val(path,Param)++"/node1/etc"],10000), 113 | ok = wait_tree(Nd1,10000), 114 | timer:sleep(1000), 115 | 116 | % Execute workers for every node. 
Each worker will send updates to its assigned node, but they will all go 117 | % to the same actor. Every worker has its own row that it works in. 118 | Home = self(), 119 | Pids = [spawn_monitor(fun() -> incwriter(Home,Nd,0) end) || Nd <- Ndl], 120 | 121 | ok. 122 | 123 | 124 | isolate([ToIsolate|T], [_|_] = IsolateFrom) -> 125 | [begin 126 | Cmd1 = "iptables -A INPUT -s "++butil:tolist(butil:ds_val(name,F))++" -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j DROP", 127 | Cmd2 = "iptables -A OUTPUT -s "++butil:tolist(butil:ds_val(name,ToIsolate))++" -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j DROP", 128 | lager:info("On node=~p, running: ~s",[butil:ds_val(name,ToIsolate),Cmd1]), 129 | lager:info("On node=~p, running: ~s",[butil:ds_val(name,F),Cmd2]), 130 | detest:cmd(ToIsolate,Cmd1), 131 | detest:cmd(F,Cmd2) 132 | end || F <- IsolateFrom], 133 | isolate(T,IsolateFrom); 134 | isolate([],_) -> 135 | ok. 136 | 137 | 138 | % Called on nodes. Kills all RPC connections. 139 | killconns() -> 140 | L = supervisor:which_children(ranch_server:get_connections_sup(bkdcore_in)), 141 | [exit(Pid,stop) || {bkdcore_rpc,Pid,worker,[bkdcore_rpc]} <- L]. 142 | 143 | 144 | % Increment a counter. Every 100 writes, check that the number that should be written is actually the one in the DB.
145 | incwriter(Home,Nd,0) -> 146 | {ok,_} = exec([Nd],<<"actor type1(testactor) create;", 147 | "insert into tab values ('",(butil:tobin(Nd))/binary,"',1);">>), 148 | incwriter(Home,Nd,1); 149 | incwriter(Home,Nd,N) -> 150 | checkhome(Home), 151 | case N rem 100 of 152 | 0 -> 153 | case exec([Nd],<<"actor type1(testactor);", 154 | "select * from tab WHERE id='",(butil:tobin(Nd))/binary,"';">>) of 155 | {ok,[{columns,_},{rows,[{_,N}]}]} -> 156 | ok; 157 | {ok,[{columns,_},{rows,[{_,N1}]}]} -> 158 | lager:error("Value mismatch, should be=~p, is=~p",[N,N1]), 159 | exit(normal); 160 | {error,_} -> 161 | ok 162 | end; 163 | _ -> 164 | ok 165 | end, 166 | case exec([Nd],<<"actor type1(testactor);", 167 | "update tab set val=val+1 WHERE id='",(butil:tobin(Nd))/binary,"';">>) of 168 | {ok,_} -> 169 | incwriter(Home,Nd,N+1); 170 | {error,_Err} -> 171 | lager:info("Failed update ~p",[Nd]), 172 | incwriter(Home,Nd,N) 173 | end. 174 | -------------------------------------------------------------------------------- /test/test_util.erl: -------------------------------------------------------------------------------- 1 | % This Source Code Form is subject to the terms of the Mozilla Public 2 | % License, v. 2.0. If a copy of the MPL was not distributed with this 3 | % file, You can obtain one at http://mozilla.org/MPL/2.0/. 4 | -define(GLOBAL,globstate). 5 | 6 | basic_write(Ndl) -> 7 | basic_write(Ndl,1). 8 | basic_write(Ndl,Dir) -> 9 | basic_write(Ndl,<<"SOME TEXT">>,Dir). 10 | basic_write(Ndl,Txt,Dir) -> 11 | ?INF("Basic write",[]), 12 | Seq = case Dir of 13 | 1 -> 14 | [1,?NUMACTORS,1]; 15 | -1 -> 16 | [?NUMACTORS, 1, -1] 17 | end, 18 | [begin 19 | Now = flatnow(), 20 | ?INF("Write ac~p ~p",[N,Now]), 21 | run_sql(Ndl,<<"actor type1(ac",(integer_to_binary(N))/binary,") create; insert into tab values (", 22 | (integer_to_binary(Now))/binary,",'",Txt/binary,"',1);">>) 23 | % ?INF("~p",[R]) 24 | end 25 | || N <- apply(lists,seq,Seq)]. 
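basic_write/3 above picks its iteration order by handing apply(lists,seq,Seq) either [1,?NUMACTORS,1] or [?NUMACTORS,1,-1]. A quick sketch of the trick (the function name is mine, for illustration):

```erlang
%% lists:seq/3 with a negative increment walks the actor ids in
%% reverse, so Dir = 1 | -1 selects the sweep direction.
seq_for(1, N)  -> lists:seq(1, N, 1);   %% e.g. N=3 -> [1,2,3]
seq_for(-1, N) -> lists:seq(N, 1, -1).  %% e.g. N=3 -> [3,2,1]
```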
26 | 27 | run_sql(Ndl,Sql) -> 28 | R = exec(Ndl,Sql), 29 | case R of 30 | {error,consensus_timeout} -> 31 | ?INF("Write retry after consensus_timeout: ~p",[Sql]), 32 | timer:sleep(50), 33 | run_sql(Ndl,Sql); 34 | {ok,Ok} -> 35 | {ok,Ok}; 36 | E -> 37 | ?INF("Write error ~p on sql=~p",[E,Sql]), 38 | exit(E) 39 | end. 40 | 41 | 42 | check_multiupdate_deadlock(Ndl) -> 43 | L = ["ac1","ac2","ac3","ac4","ac5"], 44 | Pids = [begin 45 | % Sort L randomly 46 | RL = lists:sort(fun(_,_) -> rand:uniform() > rand:uniform() end,L), 47 | {Pid,_} = spawn_monitor(fun() -> 48 | Ins = ["insert into tab values (", 49 | (integer_to_binary(flatnow())),",'deadlock?',1);"], 50 | SQL = [["actor type1(",Actor,"); ",Ins] || Actor <- RL], 51 | SQLB = iolist_to_binary(SQL), 52 | ?INF("Running: ~p",[SQLB]), 53 | {ok,_} = exec(Ndl,SQLB), 54 | exit(ok) 55 | end), 56 | Pid 57 | end || _ <- lists:seq(1,50)], 58 | wait_dl_resp(Pids). 59 | wait_dl_resp([H|T]) -> 60 | receive 61 | {'DOWN',_Monitor,_,H,ok} -> 62 | ?INF("got deadlock test multiupdate response"), 63 | wait_dl_resp(T); 64 | {'DOWN',_Monitor,_,H,Resp} -> 65 | Resp 66 | after 10000 -> 67 | timeout 68 | end; 69 | wait_dl_resp([]) -> 70 | ok. 71 | 72 | spawn_writes(N,Ndl) -> 73 | spawn_writes(N,Ndl,[]). 74 | spawn_writes(0,_,L) -> 75 | wait_async(L); 76 | spawn_writes(N,Ndl,L) -> 77 | case N rem 2 == 0 of 78 | true -> 79 | Dir = 1; 80 | false -> 81 | Dir = -1 82 | end, 83 | {Pid,_} = spawn_monitor(fun() -> detest_net:reg_caller(), basic_write(Ndl,Dir) end), 84 | spawn_writes(N-1,Ndl,[Pid|L]). 85 | 86 | wait_async([H|T]) -> 87 | receive 88 | {'DOWN',_Monitor,_,H,normal} -> 89 | wait_async(T); 90 | {'DOWN',_Monitor,_,H,Err} -> 91 | exit(Err) 92 | end; 93 | wait_async([]) -> 94 | ok. 95 | 96 | 97 | err_write(Ndl) -> 98 | err_write(Ndl,<<"SOME TEXT">>). 
99 | err_write(Ndl,Txt) -> 100 | ?INF("Error write!",[]), 101 | [begin 102 | ?INF("Write ac~p",[N]), 103 | {sql_error,_} = _R = exec(Ndl,<<"actor type1(ac",(integer_to_binary(N))/binary,") create; insert into ta values (", 104 | (integer_to_binary(flatnow()))/binary,",'",Txt/binary,"',1);">>) 105 | % ?INF("~p",[R]) 106 | end 107 | || N <- lists:seq(1,?NUMACTORS)]. 108 | 109 | basic_read(Ndl) -> 110 | ?INF("Basic read",[]), 111 | [begin 112 | ?INF("Read ac~p",[N]), 113 | {ok,[{columns,_},{rows,[{_,<<_/binary>>,_}|_]}]} = 114 | exec(Ndl,<<"actor type1(ac",(integer_to_binary(N))/binary,") create safe; select * from tab;">>) 115 | end 116 | || N <- lists:seq(1,?NUMACTORS)]. 117 | 118 | copyactor(Ndl) -> 119 | ?INF("Copy actor",[]), 120 | {ok,_} = exec(Ndl,["actor type1(newcopy);PRAGMA copy=ac1;"]), 121 | {ok,[{columns,_},{rows,[{_,<<_/binary>>,_}|_]}]} = exec(Ndl,<<"actor type1(newcopy) create; select * from tab;">>), 122 | {ok,[{columns,_},{rows,Rows1}]} = exec(Ndl,["actor type1(*);pragma list;"]), 123 | ?debugFmt("List actors ~p",[Rows1]), 124 | ?assertEqual(true, lists:member({<<"newcopy">>},Rows1)), 125 | [begin 126 | {ok,_} = exec(Ndl,["actor type1(newcopy",integer_to_list(N),");PRAGMA copy=ac",integer_to_list(N),";"]), 127 | {ok,[{columns,_},{rows,[{_,<<_/binary>>,_}|_]}]} = exec(Ndl,["actor type1(newcopy",integer_to_list(N),") create; select * from tab;"]) 128 | end 129 | || N <- lists:seq(1,10)].
130 | 131 | 132 | multiupdate_write(Ndl) -> 133 | ?debugFmt("multiupdates",[]), 134 | % Insert names of 2 actors in table tab2 of actor "all" 135 | ?assertMatch({ok,_},exec(Ndl,["actor type1(all) create;", 136 | "insert into tab2 values (1,'a1');", 137 | "insert into tab2 values (2,'a2');"])), 138 | 139 | ?debugFmt("multiupdate fail insert",[]), 140 | % Fail test 141 | ?assertMatch({ok,{changes,_,_}},exec(Ndl,["actor thread(first) create;", 142 | "insert into thread values (1,'a1',10);", 143 | "actor thread(second) create;", 144 | "insert into thread values (1,'a1',10);"])), 145 | ?assertMatch({ok,[{columns,{<<"id">>,<<"msg">>,<<"user">>}}, 146 | {rows,[{1,<<"a1">>,10}]}]}, 147 | exec(Ndl,["actor thread(first);select * from thread;"])), 148 | ?assertMatch({ok,[{columns,{<<"id">>,<<"msg">>,<<"user">>}}, 149 | {rows,[{1,<<"a1">>,10}]}]}, 150 | exec(Ndl,["actor thread(second);select * from thread;"])), 151 | ?debugFmt("multiupdates fail",[]), 152 | ?assertMatch({error,abandoned},exec(Ndl,["actor thread(first) create;", 153 | "update thread set msg='a3' where id=1;", 154 | "actor thread(second) create;", 155 | "update thread set msg='a3' where i=2;"])), 156 | ?debugFmt("multiupdates still old data",[]), 157 | ?assertMatch({ok,[{columns,{<<"id">>,<<"msg">>,<<"user">>}},{rows,[{1,<<"a1">>,10}]}]}, 158 | exec(Ndl,["actor thread(first);select * from thread;"])), 159 | ?assertMatch({ok,[{columns,{<<"id">>,<<"msg">>,<<"user">>}},{rows,[{1,<<"a1">>,10}]}]}, 160 | exec(Ndl,["actor thread(second);select * from thread;"])), 161 | 162 | ?debugFmt("multiupdates foreach insert",[]), 163 | % Select everything from tab2 for actor "all". 164 | % The actor name is in the txt column; for every row, take that actor and insert a value with the same unique integer id.
165 | Res = exec(Ndl,["actor type1(all);", 166 | "{{ACTORS}}SELECT * FROM tab2;", 167 | "actor type1(foreach X.txt in ACTORS) create;", 168 | "insert into tab2 values ({{uniqid.s}},'{{X.txt}}');"]), 169 | % ?debugFmt("Res ~p~n",[Res]), 170 | ?assertMatch({ok,{changes,_,_}},Res), 171 | 172 | ?debugFmt("multiupdates delete actors",[]), 173 | ?assertMatch({ok,{changes,0,5}},exec(Ndl,["actor type1(ac100,ac99,ac98,ac97,ac96);PRAGMA delete;"])), 174 | ?debugFmt("Deleting individual actor",[]), 175 | ?assertMatch({ok,{changes,_,_}},exec(Ndl,["actor type1(ac95);PRAGMA delete;"])), 176 | 177 | ?debugFmt("multiupdates creating thread",[]), 178 | ?assertMatch({ok,{changes,_,_}},exec(Ndl,["actor thread(1) create;", 179 | "INSERT INTO thread VALUES (100,'message',10);", 180 | "INSERT INTO thread VALUES (101,'secondmsg',20);", 181 | "actor user(10) create;", 182 | "INSERT INTO userinfo VALUES (1,'user1');", 183 | "actor user(20) create;", 184 | "INSERT INTO userinfo VALUES (1,'user2');"])), 185 | ok. 
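The foreach statement above is ActorDB's fan-out pattern: a {{VAR}}SELECT captures rows from one actor, and the following actor statement iterates over them. Restated as a standalone binary (the statement text is taken directly from the test above; only the wrapper function is mine):

```erlang
%% The fan-out statement exercised above: read tab2 from actor "all",
%% then insert one row into every actor named in the txt column.
foreach_example() ->
    <<"actor type1(all);",
      "{{ACTORS}}SELECT * FROM tab2;",
      "actor type1(foreach X.txt in ACTORS) create;",
      "insert into tab2 values ({{uniqid.s}},'{{X.txt}}');">>.
```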
186 | 187 | multiupdate_read(Ndl) -> 188 | ?INF("multiupdate read all type1",[]), 189 | Res = exec(Ndl,["actor type1(*);", 190 | "{{RESULT}}SELECT * FROM tab;"]), 191 | ?assertMatch({ok,[_,_]},Res), 192 | {ok,[{columns,Cols},{rows,Rows}]} = Res, 193 | ?INF("Result all actors ~p",[{Cols,lists:keysort(4,Rows)}]), 194 | ?assertEqual({<<"id">>,<<"txt">>,<<"i">>,<<"actor">>},Cols), 195 | % 6 actors were deleted, 2 were added 196 | ?assertEqual((numactors()-6)*2,length(Rows)), 197 | 198 | ?INF("multiupdate read thread and user",[]), 199 | % Add username column to result 200 | {ok,ResForum} = exec(Ndl,["actor thread(1);", 201 | "{{RESULT}}SELECT * FROM thread;" 202 | "actor user(for X.user in RESULT);", 203 | "{{A}}SELECT * FROM userinfo WHERE id=1;", 204 | "{{X.username=A.name}}" 205 | ]), 206 | ?assertMatch([{columns,{<<"id">>,<<"msg">>,<<"user">>,<<"username">>}}, 207 | {rows,[{101,<<"secondmsg">>,20,<<"user2">>}, 208 | {100,<<"message">>,10,<<"user1">>}]}], 209 | ResForum), 210 | {ok,[{columns,_},{rows,Rows1}]} = exec(Ndl,["actor type1(*);pragma list;"]), 211 | Num = numactors()-6+3, 212 | %?INF("Num=~p",[Num]), 213 | ?assertEqual(Num,length(Rows1)), 214 | %?INF("Rows ~p",[lists:sort(Rows1)]), 215 | ?assertMatch({ok,[{columns,_},{rows,[{Num}]}]},exec(Ndl,["actor type1(*);pragma count;"])), 216 | ok. 
217 | 218 | 219 | 220 | kv_readwrite(Ndl) -> 221 | ?debugFmt("~p",[[iolist_to_binary(["actor counters(id",butil:tolist(N),");", 222 | "insert into actors values ('id",butil:tolist(N),"',{{hash(id",butil:tolist(N),")}},", 223 | butil:tolist(N),");"])|| N <- lists:seq(1,1)]]), 224 | [?assertMatch({ok,_},exec(Ndl,["actor counters(id",butil:tolist(N),");", 225 | "insert into actors values ('id",butil:tolist(N),"',{{hash(id",butil:tolist(N),")}},",butil:tolist(N),");"])) 226 | || N <- lists:seq(1,numactors())], 227 | [?assertMatch({ok,[{columns,_},{rows,[{_,_,N}]}]}, 228 | exec(Ndl,["actor counters(id",butil:tolist(N),");", 229 | "select * from actors where id='id",butil:tolist(N),"';"])) || N <- lists:seq(1,numactors())], 230 | ReadAll = ["actor counters(*);", 231 | "{{RESULT}}SELECT * FROM actors;"], 232 | All = exec(Ndl,ReadAll), 233 | ?debugFmt("All counters ~p",[All]), 234 | ?debugFmt("Select first 5",[]), 235 | ReadSome = ["actor counters(id1,id2,id3,id4,id5);", 236 | "{{RESULT}}SELECT * FROM actors where id='{{curactor}}';"], 237 | ?assertMatch({ok,[{columns,_}, 238 | {rows,[{<<"id5">>,_,5,<<"id5">>}, 239 | {<<"id4">>,_,4,<<"id4">>}, 240 | {<<"id3">>,_,3,<<"id3">>}, 241 | {<<"id2">>,_,2,<<"id2">>}, 242 | {<<"id1">>,_,1,<<"id1">>}]}]}, 243 | exec(Ndl,ReadSome)), 244 | ?debugFmt("Increment first 5",[]), 245 | ?assertMatch({ok,{changes,_,_}},exec(Ndl,["actor counters(id1,id2,id3,id4,id5);", 246 | "UPDATE actors SET val = val+1 WHERE id='{{curactor}}';"])), 247 | ?debugFmt("Select first 5 again ~p",[exec(Ndl,ReadSome)]), 248 | ?assertMatch({ok,[{columns,_}, 249 | {rows,[{<<"id5">>,_,6,<<"id5">>}, 250 | {<<"id4">>,_,5,<<"id4">>}, 251 | {<<"id3">>,_,4,<<"id3">>}, 252 | {<<"id2">>,_,3,<<"id2">>}, 253 | {<<"id1">>,_,2,<<"id1">>}]}]}, 254 | exec(Ndl,ReadSome)), 255 | ?debugFmt("delete 5 and 4",[]), 256 | % Not the right way to delete but it works (not transactional) 257 | ?assertMatch({ok,{changes,_,_}},exec(Ndl,["actor counters(id5,id4);PRAGMA delete;"])), 258 | 
?assertMatch({ok,[{columns,_}, 259 | {rows,[{<<"id3">>,_,4,<<"id3">>}, 260 | {<<"id2">>,_,3,<<"id2">>}, 261 | {<<"id1">>,_,2,<<"id1">>}]}]}, 262 | exec(Ndl,ReadSome)), 263 | % the right way 264 | ?assertMatch({ok,{changes,_,_}},exec(Ndl,["actor counters(id3,id2);DELETE FROM actors WHERE id='{{curactor}}';"])), 265 | ?assertMatch({ok,[{columns,_}, 266 | {rows,[{<<"id1">>,_,2,<<"id1">>}]}]}, 267 | exec(Ndl,ReadSome)), 268 | ?assertMatch({ok,[{columns,_},{rows,_}]},All), 269 | 270 | 271 | % Multiple tables test 272 | [?assertMatch({ok,_},exec(Ndl,["actor filesystem(id",butil:tolist(N),");", 273 | "insert into actors values ('id",butil:tolist(N),"',{{hash(id",butil:tolist(N),")}},",butil:tolist(N),");", 274 | "insert into users (fileid,uid) values ('id",butil:tolist(N),"',",butil:tolist(N),");"])) 275 | || N <- lists:seq(1,numactors())], 276 | 277 | ok. 278 | 279 | writer(Home,Nd,N,SleepMax,RC) -> 280 | checkhome(Home), 281 | % Sleep a random amount of up to SleepMax ms 282 | SleepFor = random:uniform(SleepMax), 283 | timer:sleep(butil:ceiling(SleepFor)), 284 | checkhome(Home), 285 | Start = os:timestamp(), 286 | case get(bin) of 287 | undefined -> 288 | Bin = base64:encode(crypto:strong_rand_bytes(128)), 289 | put(bin,Bin); 290 | Bin -> 291 | ok 292 | end, 293 | case exec([Nd],<<"actor type1(ac",(integer_to_binary(N))/binary,") create; insert into tab values (", 294 | (integer_to_binary(flatnow()))/binary,",'",(Bin)/binary,"',1);">>) of 295 | {ok,_} -> 296 | ok; 297 | Err -> 298 | exit(Err) 299 | end, 300 | Stop = os:timestamp(), 301 | Diff = timer:now_diff(Stop,Start) div 1000, 302 | % when quitting, the ets table may be gone, so die quietly 303 | case catch ets:update_counter(writecounter,wnum,1) of 304 | X when is_integer(X) -> 305 | ok; 306 | _ -> 307 | exit(normal) 308 | end, 309 | case catch ets:update_counter(writecounter,wnum_sec,1) of 310 | X1 when is_integer(X1) -> 311 | ok; 312 | _ -> 313 | exit(normal) 314 | end, 315 | %lager:info("Write complete for ~p, runcount=~p, 
slept_for=~p, exec_time=~ps ~pms",[N,RC,SleepFor,Diff div 1000, Diff rem 1000]), 316 | writer(Home,Nd,N,SleepMax,RC+1). 317 | 318 | make_actors(N) when N > 100 -> 319 | timer:sleep(60000), 320 | ok; 321 | make_actors(N) -> 322 | case exec(nodes(connected),<<"actor type1(ac",(integer_to_binary(N))/binary,") create; insert into tab values (", 323 | (integer_to_binary(flatnow()))/binary,",'",(base64:encode(crypto:strong_rand_bytes(128)))/binary,"',1);">>) of 324 | {ok,_} -> 325 | ok; 326 | Err -> 327 | ?INF("Creating actor failed ~p",[Err]) 328 | % exit(Err) 329 | end, 330 | timer:sleep(100), 331 | make_actors(N+1). 332 | 333 | % We will keep adding single-node clusters to the network. The cluster name is the same as the node name. 334 | addclusters(Path,Nd1,Nodes) -> 335 | receive 336 | done -> 337 | exit(normal) 338 | after 0 -> 339 | ok 340 | end, 341 | timer:sleep(1000), 342 | Len = length(Nodes)+1, 343 | Port = 50000 + Len, 344 | NI = [{name,butil:toatom("node"++butil:tolist(Len))},{rpcport,Port}], 345 | Nodes1 = [NI|Nodes], 346 | 347 | % Grps = [[{name,butil:ds_val(name,Ndi)},{nodes,[butil:ds_val(name,Ndi)]}] || Ndi <- Nodes1], 348 | % cfg({Nodes1,Grps}) 349 | DistName = detest:add_node(NI), 350 | % rpc:call(Nd1,actordb_cmd,cmd,[updatenodes,commit,Path++"/node1/etc"],3000), 351 | {ok,_} = rpc:call(Nd1,actordb_config,exec,[[grp(Len),nds([DistName],Len)]],3000), 352 | spawn(fun() -> periodic_isolation(DistName,0) end), 353 | ok = wait_modified_tree(DistName,nodes(connected),30000), 354 | addclusters(Path,Nd1,Nodes1). 355 | 356 | periodic_isolation(_Nd,N) when N > 2 -> 357 | ok; 358 | periodic_isolation(Nd,N) -> 359 | timer:sleep(5000), 360 | ?INF("Isolating ~p",[Nd]), 361 | detest:isolate(Nd,Nd), 362 | timer:sleep(2000), 363 | detest:isolate_end(Nd), 364 | ?INF("Isolate end ~p",[Nd]), 365 | periodic_isolation(Nd,N+1). 366 | 367 | wait_crash(L) -> 368 | wait_crash(L,element(2,os:timestamp()),0). 
369 | wait_crash(L,Sec,N) -> 370 | case L -- nodes(connected) of 371 | [] -> 372 | receive 373 | {'DOWN',_Ref,_,_Pid,Reason} when Reason /= normal -> 374 | lager:error("Crash with reason ~p",[Reason]) 375 | after 30 -> 376 | Sec1 = element(2,os:timestamp()), 377 | case Sec of 378 | Sec1 -> 379 | ok; 380 | _ -> 381 | lager:info("Writes so far: ~p, insec ~p",[butil:ds_val(wnum,writecounter),butil:ds_val(wnum_sec,writecounter)]), 382 | butil:ds_add(wnum_sec,0,writecounter) 383 | end, 384 | wait_crash(L,Sec1,N+1) 385 | end; 386 | L1 -> 387 | lager:error("Stopping. Nodes gone: ~p",[L1]) 388 | end. 389 | 390 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 391 | % 392 | % UTILITY FUNCTIONS 393 | % 394 | %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% 395 | checkhome(Home) -> 396 | case erlang:is_process_alive(Home) of 397 | true -> 398 | ok; 399 | false -> 400 | exit(normal) 401 | end. 402 | rseed(N) -> 403 | {A,B,C} = os:timestamp(), 404 | random:seed(A*erlang:phash2(["writer",os:timestamp(),self()]),B+erlang:phash2([1,2,3,N]),C*N). 405 | flatnow() -> 406 | ets:update_counter(?GLOBAL, counter, {2,1}). 407 | % {MS,S,MiS} = now(), 408 | % MS*1000000000000 + S*1000000 + MiS. 409 | ltime() -> 410 | element(2,lager_util:localtime_ms()). 411 | exec(undefined,Bin) -> 412 | Nd = detest_net:majority_node(), 413 | ?INF("Majority node ~p",[Nd]), 414 | rpc:call(Nd,actordb,exec,[iolist_to_binary(Bin)], 10000); 415 | exec(Nodes,Bin) -> 416 | rpc:call(findnd(Nodes),actordb,exec,[iolist_to_binary(Bin)], 10000). 417 | exec(Nodes,Bin,Timeout) -> 418 | rpc:call(findnd(Nodes),actordb,exec,[iolist_to_binary(Bin)],Timeout). 419 | 420 | findnd([H|T]) -> 421 | case lists:member(H,nodes(connected)) of 422 | true -> 423 | H; 424 | _ -> 425 | findnd(T) 426 | end. 
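These helpers assemble SQL as iolists and flatten them only once, at the rpc boundary inside exec/2. A minimal self-contained sketch of that pattern (stdlib integer_to_list/1 stands in for the butil:tolist/1 the tests use):

```erlang
N = 7,
Sql = ["actor counters(id", integer_to_list(N), ");",
       "select * from actors where id='id", integer_to_list(N), "';"],
%% Flattened exactly once, immediately before the rpc:call:
<<"actor counters(id7);select * from actors where id='id7';">> = iolist_to_binary(Sql).
```

Building the statement as a nested iolist avoids repeated binary concatenation while the query is composed piecewise.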
427 | 428 | wait_tree(Nd,X) when X < 0 -> 429 | ?INF("Timeout waiting for shard for ~p",[Nd]), 430 | exit(timeout); 431 | wait_tree(Nd,N) -> 432 | case rpc:call(Nd,actordb_shardtree,all,[]) of 433 | {badrpc,_Err} -> 434 | ?INF("waiting for shard from ~p",[Nd]), 435 | timer:sleep(1000), 436 | wait_tree(Nd,N-1000); 437 | Tree -> 438 | ?INF("Have shard tree ~p~n ~p",[Nd,Tree]), 439 | timer:sleep(1000), 440 | ok 441 | end. 442 | 443 | wait_modified_tree(Nd,All,Milis) when is_integer(Milis) -> 444 | {A,B,C} = os:timestamp(), 445 | wait_modified_tree(Nd,All,{A,B+(Milis div 1000),C+(Milis rem 1000)*1000}); 446 | wait_modified_tree(Nd,All,StopAt) -> 447 | TDiff = timer:now_diff(os:timestamp(),StopAt), 448 | Remain = erlang:abs(TDiff) div 1000, 449 | case TDiff > 0 of 450 | true -> 451 | exit(timeout); 452 | false -> 453 | ?INF("Nodes connected on=~p are=~p",[Nd,rpc:call(Nd,erlang,nodes,[connected])]), 454 | case rpc:call(Nd,gen_server,call,[actordb_shardmngr,get_all_shards]) of 455 | {[_|_] = AllShards1,_Local} -> 456 | AllShards2 = lists:keysort(1,AllShards1), 457 | AllShards = [{From,To,To-From,Ndx} || {From,To,Ndx} <- AllShards2], 458 | ?INF("~p allshards ~p",[time(),AllShards]), 459 | [?INF("~p For nd ~p, beingtaken ~p",[time(),Ndx, 460 | rpc:call(Ndx,gen_server,call,[actordb_shardmngr,being_taken])]) || Ndx <- All], 461 | [?INF("~p For nd ~p, moves ~p",[time(),Ndx, 462 | rpc:call(Ndx,gen_server,call,[actordb_shardmvr,get_moves])]) || Ndx <- All], 463 | case lists:keymember(butil:tobin(dist_to_bkdnm(Nd)),4,AllShards) of 464 | false -> 465 | ?INF("not member of shard tree, timeleft=~p",[Remain]), 466 | timer:sleep(1000), 467 | wait_modified_tree(Nd,All,StopAt); 468 | true -> 469 | case rpc:call(Nd,gen_server,call,[actordb_shardmvr,get_moves]) of 470 | {[],[]} -> 471 | case lists:filter(fun({_,_,_,SNode}) -> SNode == butil:tobin(dist_to_bkdnm(Nd)) end,AllShards) of 472 | [_,_,_|_] -> 473 | ok; 474 | _X -> 475 | ?INF("get_moves empty, should have 3 shards ~p ~p",[Nd,_X]), 
476 | % ?debugFmt("get_moves wrong num shards ~p~n ~p",[Nd,X]), 477 | timer:sleep(1000), 478 | wait_modified_tree(Nd,All,StopAt) 479 | end; 480 | _L -> 481 | ?INF("Still moving processes ~p, timeleft ~p",[Nd,Remain]), 482 | timer:sleep(1000), 483 | wait_modified_tree(Nd,All,StopAt) 484 | end 485 | end; 486 | {_,_Err} -> 487 | ?INF("Waiting for shard data from ~p, time left=~p",[Nd,Remain]), 488 | timer:sleep(1000), 489 | wait_modified_tree(Nd,All,StopAt) 490 | end 491 | end. 492 | 493 | dist_to_bkdnm(Nm) -> 494 | [BN|_] = string:tokens(atom_to_list(Nm),"@"), 495 | butil:tobin(BN). 496 | dist_to_ip(Nm) -> 497 | [_,IP] = string:tokens(atom_to_list(Nm),"@"), 498 | IP. 499 | -------------------------------------------------------------------------------- /winbuild: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env escript 2 | -mode(compile). 3 | 4 | % How to make a Windows build: 5 | % 1. Install Erlang and Git for Windows 6 | % 2. Add c:\Program Files\erl6.x.x to PATH 7 | % 3. Clone actordb 8 | % 4. Build rebar for Windows and copy the rebar.cmd file to the actordb folder 9 | % 5. rebar get-deps 10 | % 6. rebar compile (yamerl is not Windows-compatible; remove its Makefiles and rebar will then be able to compile it) 11 | % 7. winbuild.bat 12 | % 8. Copy the Erlang installation into the win folder. Remove the erl.ini files in the bin and erts/bin folders 13 | % 9. Use the win folder. 
14 | 15 | main(_Args) -> 16 | filelib:ensure_dir("win/ebin/"), 17 | filelib:ensure_dir("win/priv/"), 18 | filelib:ensure_dir("win/data/"), 19 | filelib:ensure_dir("win/etc/"), 20 | filelib:ensure_dir("win/log/"), 21 | filelib:ensure_dir("win/licenses/"), 22 | filelib:ensure_dir("win/extensions/"), 23 | 24 | [file:copy(F,"win/ebin/"++filename:basename(F)) || F <- filelib:wildcard("deps/*/ebin/*")], 25 | [file:copy(F,"win/priv/"++filename:basename(F)) || F <- filelib:wildcard("deps/*/priv/*.dll")], 26 | 27 | file:copy("LICENSE","win/licenses/LICENSE"), 28 | [file:copy(F,"win/licenses/LICENSE-"++hd(tl(filename:split(filename:dirname(F))))) || F <- filelib:wildcard("deps/*/LICENSE")], 29 | 30 | file:write_file("win/installdeps.bat",deps()), 31 | file:write_file("win/actordb.bat",run()), 32 | file:write_file("win/actordb_console.bat",ctrl()), 33 | file:write_file("win/stop-background-processes.bat",epmd()), 34 | 35 | os:cmd("escript.exe priv/mkconsole.escript"), 36 | file:copy("priv/actordb_wxconsole","win/priv/actordb_wxconsole"), 37 | 38 | file:copy("rel/files/vm.args","win/etc/vm.args"), 39 | file:copy("rel/files/sqlite_extensions_here","win/extensions/sqlite_extensions_here"), 40 | file:copy("etc/init.sql","win/etc/init.sql"), 41 | {ok,AppFile} = file:read_file("rel/files/app.config"), 42 | AppFile1 = re:replace(AppFile,"{{platform_data_dir}}","data",[global]), 43 | AppFile2 = re:replace(AppFile1,"{{platform_log_dir}}","log",[global]), 44 | AppFile3 = re:replace(AppFile2,"{{extensions_dir}}","extensions",[global]), 45 | file:write_file("win/etc/app.config",AppFile3), 46 | 47 | ToolCmd = "cl deps\\actordb_driver\\c_src\\tool.c deps\\actordb_driver\\c_src\\mdb.c deps\\actordb_driver\\c_src\\midl.c deps\\actordb_driver\\c_src\\lz4.c /D_TESTAPP_=1 /DSQLITE_DEFAULT_PAGE_SIZE=4096 /DSQLITE_DEFAULT_WAL_AUTOCHECKPOINT=0 /Ic_src Ws2_32.lib Advapi32.lib", 48 | os:cmd(ToolCmd), 49 | file:rename("tool.exe","win/tool.exe"), 50 | [file:delete(Nm) || Nm <- 
filelib:wildcard("*.obj")], 51 | ok. 52 | 53 | 54 | run() -> 55 | check("erl.exe") ++ 56 | "start \"\" \"%CD%/erl7.3/bin/werl\" -pa ebin -config etc/app.config -s actordb_core -args_file etc/vm.args\n"++ 57 | "exit\n". 58 | 59 | deps() -> 60 | check("")++ 61 | "start \"\" \"%CD%/erl7.3/vcredist_x64.exe\" /q /norestart\n"++ 62 | "exit\n". 63 | 64 | epmd() -> 65 | check("")++ 66 | "\"%CD%/erl7.3/erts-7.3/bin/epmd\" -kill\n"++ 67 | "exit\n". 68 | 69 | ctrl() -> 70 | check("escript.exe") ++ 71 | %"%CD%/erl7.2.1/bin/escript.exe priv/actordb_console %*\n"++ 72 | "start /min \"\" \"%CD%/erl7.3/bin/escript.exe\" priv/actordb_wxconsole %*\n". 73 | %"pause". 74 | 75 | check(_Name) -> 76 | "@echo off\n"++ 77 | "setlocal\n". 78 | 79 | % Used to clean up erlang folder. 80 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".erl"]. 81 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".lib"]. 82 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".pdb"]. 83 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".hrl"]. 84 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".html"]. 85 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".gif"]. 86 | %[file:delete(Nm) || Nm <- filelib:wildcard("**"), filename:extension(Nm) == ".pdf"]. 87 | --------------------------------------------------------------------------------
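The {{placeholder}} templating that winbuild's main/1 applies to rel/files/app.config is plain re:replace/4. The same pattern in isolation (the config key shown is illustrative; {return,binary} is added here for readability, while the original relies on file:write_file/2 accepting the iolist that re:replace/4 returns by default):

```erlang
AppFile = <<"{db_path, \"{{platform_data_dir}}\"}.">>,
AppFile1 = re:replace(AppFile, "{{platform_data_dir}}", "data", [global, {return, binary}]),
%% The braces in the pattern are literal: PCRE only treats { as special
%% when it opens a valid repetition quantifier such as {2,3}.
<<"{db_path, \"data\"}.">> = AppFile1.
```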