├── .gitignore ├── .travis.yml ├── CONTRIBUTORS.md ├── LICENSE ├── README.md ├── config └── config.exs ├── lib ├── rethinkdb.ex └── rethinkdb │ ├── connection.ex │ ├── connection │ ├── request.ex │ └── transport.ex │ ├── exception.ex │ ├── lambda.ex │ ├── prepare.ex │ ├── pseudotypes.ex │ ├── q.ex │ ├── query.ex │ ├── query │ ├── macros.ex │ └── term_info.json │ └── response.ex ├── mix.exs ├── mix.lock ├── test ├── cert │ ├── host.crt │ ├── host.key │ └── rootCA.pem ├── changes_test.exs ├── prepare_test.exs ├── query │ ├── administration_query_test.exs │ ├── aggregation_test.exs │ ├── control_structures_adv_test.exs │ ├── control_structures_test.exs │ ├── database_test.exs │ ├── date_time_test.exs │ ├── document_manipulation_test.exs │ ├── geospatial_adv_test.exs │ ├── geospatial_test.exs │ ├── joins_test.exs │ ├── math_logic_test.exs │ ├── selection_test.exs │ ├── string_manipulation_test.exs │ ├── table_db_test.exs │ ├── table_index_test.exs │ ├── table_test.exs │ ├── transformation_test.exs │ └── writing_data_test.exs ├── query_test.exs └── test_helper.exs └── tester.exs /.gitignore: -------------------------------------------------------------------------------- 1 | /_build 2 | /deps 3 | /docs 4 | /doc 5 | erl_crash.dump 6 | *.ez 7 | *.swp 8 | *.beam 9 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: elixir 2 | 3 | matrix: 4 | include: 5 | - elixir: 1.7 6 | otp_release: 21.0 7 | - elixir: 1.7 8 | otp_release: 20.3 9 | - elixir: 1.6 10 | otp_release: 21.0 11 | - elixir: 1.6 12 | otp_release: 20.3 13 | - elixir: 1.6 14 | otp_release: 19.3 15 | - elixir: 1.5 16 | otp_release: 20.3 17 | - elixir: 1.5 18 | otp_release: 19.3 19 | - elixir: 1.5 20 | otp_release: 18.3 21 | - elixir: 1.4 22 | otp_release: 20.3 23 | - elixir: 1.4 24 | otp_release: 19.3 25 | - elixir: 1.4 26 | otp_release: 18.3 27 | - elixir: 1.3 28 | otp_release: 19.3 29 | - elixir: 1.3 30 | otp_release: 18.3 31 | 32 | install: 33 | - mix local.rebar --force 34 | - mix local.hex --force 35 | - mix deps.get --only test 36 | 37 | addons: 38 | rethinkdb: '2.3' 39 | -------------------------------------------------------------------------------- /CONTRIBUTORS.md: -------------------------------------------------------------------------------- 1 | Just a place to list individuals who have helped out. Any contribution counts (even participating in a discussion on an open issue). Open a PR to add your name if you feel so inclined. 2 | 3 | * hamiltop 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 hamiltop 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 
14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | RethinkDB [![Build Status](https://travis-ci.org/hamiltop/rethinkdb-elixir.svg?branch=master)](https://travis-ci.org/hamiltop/rethinkdb-elixir) 2 | =========== 3 | Multiplexed RethinkDB client in pure Elixir. 4 | 5 | If you are coming here from elixir-rethinkdb, welcome! 6 | If you were expecting `Exrethinkdb` you are in the right place. We decided to change the name to just `RethinkDB` and the repo to `rethinkdb-elixir`. Sorry if it has caused confusion. Better now in the early stages than later! 7 | 8 | I just set up a channel on the Elixir slack, so if you are on there join #rethinkdb. 9 | 10 | ### Recent changes 11 | 12 | #### 0.4.0 13 | 14 | * Extract Changefeed out into separate [package](https://github.com/hamiltop/rethinkdb_changefeed) 15 | * Accept keyword options with queries 16 | 17 | ## Getting Started 18 | 19 | See [API documentation](http://hexdocs.pm/rethinkdb/) for more details. 20 | 21 | ### Connection 22 | 23 | Connections are managed by a process. Start the process by calling `start_link/1`. See [documentation for `Connection.start_link/1`](http://hexdocs.pm/rethinkdb/RethinkDB.Connection.html#start_link/1) for supported options. 24 | 25 | #### Basic Remote Connection 26 | 27 | ```elixir 28 | {:ok, conn} = RethinkDB.Connection.start_link([host: "10.0.0.17", port: 28015]) 29 | ``` 30 | 31 | #### Named Connection 32 | 33 | ```elixir 34 | {:ok, conn} = RethinkDB.Connection.start_link([name: :foo]) 35 | ``` 36 | 37 | #### Supervised Connection 38 | 39 | Start the supervisor with: 40 | 41 | ```elixir 42 | worker(RethinkDB.Connection, [[name: :foo]]) 43 | worker(RethinkDB.Connection, [[name: :bar, host: 'localhost', port: 28015]]) 44 | ``` 45 | 46 | #### Default Connection 47 | 48 | An `RethinkDB.Connection` does parallel queries via pipelining. It can and should be shared among multiple processes. Because of this, it is common to have one connection shared in your application. To create a default connection, we create a new module and `use RethinkDB.Connection`. 49 | 50 | ```elixir 51 | defmodule FooDatabase do 52 | use RethinkDB.Connection 53 | end 54 | ``` 55 | 56 | This connection can be supervised without a name (it will assume the module as the name). 57 | 58 | ```elixir 59 | worker(FooDatabase, []) 60 | ``` 61 | 62 | Queries can be run without providing a connection (it will use the name connection). 
63 | 64 | ```elixir 65 | import RethinkDB.Query 66 | table("people") |> FooDatabase.run 67 | ``` 68 | 69 | #### Connection Pooling 70 | 71 | To use a connection pool, add Poolboy to your dependencies: 72 | 73 | ```elixir 74 | {:poolboy, "~> 1.5"} 75 | ``` 76 | 77 | Then, in your supervision tree, add: 78 | 79 | ```elixir 80 | worker(:poolboy, [[name: {:local, :rethinkdb_pool}, worker_module: RethinkDB.Connection, size: 10, max_overflow: 0], []) 81 | ``` 82 | 83 | NOTE: If you want to use changefeeds or any persistent queries, `max_overflow: 0` is required. 84 | 85 | Then use it in your code: 86 | 87 | ```elixir 88 | db = :poolboy.checkout(:rethinkdb_pool) 89 | table("people") |> db 90 | :poolboy.checkin(:rethinkdb_pool, db) 91 | ``` 92 | 93 | ### Query 94 | 95 | `RethinkDB.run/2` accepts a process as the second argument (to facilitate piping). 96 | 97 | #### Insert 98 | 99 | ```elixir 100 | 101 | q = Query.table("people") 102 | |> Query.insert(%{first_name: "John", last_name: "Smith"}) 103 | |> RethinkDB.run conn 104 | ``` 105 | 106 | #### Filter 107 | 108 | ```elixir 109 | q = Query.table("people") 110 | |> Query.filter(%{last_name: "Smith"}) 111 | |> RethinkDB.run conn 112 | ``` 113 | 114 | #### Functions 115 | 116 | RethinkDB supports RethinkDB functions in queries. There are two approaches you can take: 117 | 118 | Use RethinkDB operators 119 | 120 | ```elixir 121 | import RethinkDB.Query 122 | 123 | make_array([1,2,3]) |> map(fn (x) -> add(x, 1) end) 124 | ``` 125 | 126 | Use Elixir operators via the lambda macro 127 | 128 | ```elixir 129 | require RethinkDB.Lambda 130 | import RethinkDB.Lambda 131 | 132 | make_array([1,2,3]) |> map(lambda fn (x) -> x + 1 end) 133 | ``` 134 | 135 | #### Map 136 | 137 | ```elixir 138 | require RethinkDB.Lambda 139 | import Query 140 | import RethinkDB.Lambda 141 | 142 | conn = RethinkDB.connect 143 | 144 | table("people") 145 | |> has_fields(["first_name", "last_name"]) 146 | |> map(lambda fn (person) -> 147 | person[:first_name] + " " + person[:last_name] 148 | end) |> RethinkDB.run conn 149 | ``` 150 | 151 | See [query.ex](lib/rethinkdb/query.ex) for more basic queries. If you don't see something supported, please open an issue. We're moving fast and any guidance on desired features is helpful. 152 | 153 | #### Indexes 154 | 155 | ```elixir 156 | # Simple indexes 157 | # create 158 | result = Query.table("people") 159 | |> Query.index_create("first_name", Lambda.lambda fn(row) -> row["first_name"] end) 160 | |> RethinkDB.run conn 161 | 162 | # retrieve 163 | result = Query.table("people") 164 | |> Query.get_all(["Will"], index: "first_name") 165 | |> RethinkDB.run conn 166 | 167 | 168 | # Compound indexes 169 | # create 170 | result = Query.table("people") 171 | |> Query.index_create("full_name", Lambda.lambda fn(row) -> [row["first_name"], row["last_name"]] end) 172 | |> RethinkDB.run conn 173 | 174 | # retrieve 175 | result = Query.table("people") 176 | |> Query.get_all([["Will", "Smith"], ["James", "Bond"]], index: "full_name") 177 | |> RethinkDB.run conn 178 | ``` 179 | 180 | One limitation we have in Elixir is that we don't support varargs. So in JavaScript you would do `getAll(key1, key2, {index: "uniqueness"})`. In Elixir we have to do `get_all([key1, key2], index: "uniqueness")`. 
With a single key it becomes `get_all([key1], index: "uniqueness")` and when `key1` is `[partA, partB]` you have to do `get_all([[partA, partB]], index: "uniqueness")` 181 | 182 | ### Changes 183 | 184 | Change feeds can be consumed either incrementally (by calling `RethinkDB.next/1`) or via the Enumerable Protocol. 185 | 186 | ```elixir 187 | results = Query.table("people") 188 | |> Query.filter(%{last_name: "Smith"}) 189 | |> Query.changes 190 | |> RethinkDB.run conn 191 | # get one result 192 | first_change = RethinkDB.next results 193 | # get stream, chunked in groups of 5, Inspect 194 | results |> Stream.chunk(5) |> Enum.each &IO.inspect/1 195 | ``` 196 | 197 | ### Supervised Changefeeds 198 | 199 | Supervised Changefeeds (an OTP behavior for running a changefeed as a process) have been moved to their own repo to enable independent release cycles. See https://github.com/hamiltop/rethinkdb_changefeed 200 | 201 | ### Roadmap 202 | 203 | Version 1.0.0 will be limited to individual connections and implement the entire documented ReQL (as of rethinkdb 2.0) 204 | 205 | While not provided by this library, we will also include example code for: 206 | 207 | * Connection Pooling 208 | 209 | The goal for 1.0.0 is to be stable. Issues have been filed for work that needs to be completed before 1.0.0 and tagged with the 1.0.0 milestone. 210 | 211 | 212 | ### Example Apps 213 | 214 | Checkout the wiki page for various [example apps](https://github.com/hamiltop/rethinkdb-elixir/wiki/Example-Apps) 215 | 216 | ### Contributing 217 | 218 | Contributions are welcome. Take a look at the Issues. Anything that is tagged `Help Wanted` or `Feedback Wanted` is a good candidate for contributions. Even if you don't know where to start, respond to an interesting issue and you will be pointed in the right direction. 219 | 220 | #### Testing 221 | 222 | Be intentional. Whether you are writing production code or tests, make sure there is value in the test being written. 223 | -------------------------------------------------------------------------------- /config/config.exs: -------------------------------------------------------------------------------- 1 | # This file is responsible for configuring your application 2 | # and its dependencies with the aid of the Mix.Config module. 3 | use Mix.Config 4 | 5 | # This configuration is loaded before any dependency and is restricted 6 | # to this project. If another project depends on this project, this 7 | # file won't be loaded nor affect the parent project. For this reason, 8 | # if you want to provide default values for your application for third- 9 | # party users, it should be done in your mix.exs file. 10 | 11 | # Sample configuration: 12 | # 13 | # config :logger, :console, 14 | # level: :info, 15 | # format: "$date $time [$level] $metadata$message\n", 16 | # metadata: [:user_id] 17 | 18 | # It is also possible to import configuration files, relative to this 19 | # directory. For example, you can emulate configuration per environment 20 | # by uncommenting the line below and defining dev.exs, test.exs and such. 21 | # Configuration from the imported file will override the ones defined 22 | # here (which is why it is important to import them last). 
23 | # 24 | # import_config "#{Mix.env}.exs" 25 | -------------------------------------------------------------------------------- /lib/rethinkdb.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB do 2 | @moduledoc """ 3 | Some convenience functions for interacting with RethinkDB. 4 | """ 5 | 6 | @doc """ 7 | See `RethinkDB.Connection.run/2` 8 | """ 9 | defdelegate run(query, pid), to: RethinkDB.Connection 10 | 11 | @doc """ 12 | See `RethinkDB.Connection.run/3` 13 | """ 14 | defdelegate run(query, pid, opts), to: RethinkDB.Connection 15 | 16 | @doc """ 17 | See `RethinkDB.Connection.next/1` 18 | """ 19 | defdelegate next(collection), to: RethinkDB.Connection 20 | 21 | @doc """ 22 | See `RethinkDB.Connection.close/1` 23 | """ 24 | defdelegate close(collection), to: RethinkDB.Connection 25 | end 26 | -------------------------------------------------------------------------------- /lib/rethinkdb/connection.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Connection do 2 | @moduledoc """ 3 | A module for managing connections. 4 | 5 | A `Connection` object is a process that can be started in various ways. 6 | 7 | It is recommended to start it as part of a supervision tree with a name: 8 | 9 | worker(RethinkDB.Connection, [[port: 28015, host: 'localhost', name: :rethinkdb_connection]]) 10 | 11 | Connections will by default connect asynchronously. If a connection fails, we retry with 12 | an exponential backoff. All queries will return `%RethinkDB.Exception.ConnectionClosed{}` 13 | until the connection is established. 14 | 15 | If `:sync_connect` is set to `true` then the process will crash if we fail to connect. It's 16 | recommended to only use this if the database is on the same host or if a rethinkdb proxy 17 | is running on the same host. If there's any chance of a network partition, it's recommended 18 | to stick with the default behavior. 19 | """ 20 | use Connection 21 | 22 | require Logger 23 | 24 | alias RethinkDB.Connection.Request 25 | alias RethinkDB.Connection.Transport 26 | 27 | @doc """ 28 | A convenience macro for naming connections. 29 | 30 | For convenience we provide the `use RethinkDB.Connection` macro, which automatically registers 31 | itself under the module name: 32 | 33 | defmodule FooDatabase, do: use RethinkDB.Connection 34 | 35 | Then in the supervision tree: 36 | 37 | worker(FooDatabase, [[port: 28015, host: 'localhost']]) 38 | 39 | When `use RethinkDB.Connection` is called, it will define: 40 | 41 | * `start_link` 42 | * `stop` 43 | * `run` 44 | 45 | All of these only differ from the normal `RethinkDB.Connection` functions in that they don't 46 | accept a connection. They will use the current module as the process name. `start_link` will 47 | start the connection under the module name. 48 | 49 | If you attempt to provide a name to `start_link`, it will raise an `ArgumentError`. 50 | """ 51 | defmacro __using__(_opts) do 52 | quote location: :keep do 53 | def start_link(opts \\ []) do 54 | if Keyword.has_key?(opts, :name) && opts[:name] != __MODULE__ do 55 | # The whole point of this macro is to provide an implicit process 56 | # name, so subverting it is considered an error. 
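# For example, `FooDatabase.start_link(name: :other)` raises, while
# `FooDatabase.start_link()` registers the connection under the `FooDatabase` name.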
57 | raise ArgumentError.exception( 58 | "Process name #{inspect(opts[:name])} conflicts with implicit name #{ 59 | inspect(__MODULE__) 60 | } provided by `use RethinkDB.Connection`" 61 | ) 62 | end 63 | 64 | RethinkDB.Connection.start_link(Keyword.put_new(opts, :name, __MODULE__)) 65 | end 66 | 67 | def run(query, opts \\ []) do 68 | RethinkDB.Connection.run(query, __MODULE__, opts) 69 | end 70 | 71 | def noreply_wait(timeout \\ 5000) do 72 | RethinkDB.Connection.noreply_wait(__MODULE__, timeout) 73 | end 74 | 75 | def stop do 76 | RethinkDB.Connection.stop(__MODULE__) 77 | end 78 | 79 | defoverridable start_link: 1, start_link: 0 80 | end 81 | end 82 | 83 | @doc """ 84 | Stop the connection. 85 | 86 | Stops the given connection. 87 | """ 88 | def stop(pid) do 89 | Connection.cast(pid, :stop) 90 | end 91 | 92 | @doc """ 93 | Run a query on a connection. 94 | 95 | Supports the following options: 96 | 97 | * `timeout` - How long to wait for a response 98 | * `db` - Default database to use for query. Can also be specified as part of the query. 99 | * `durability` - possible values are 'hard' and 'soft'. In soft durability mode RethinkDB will acknowledge the write immediately after receiving it, but before the write has been committed to disk. 100 | * `noreply` - set to true to not receive the result object or cursor and return immediately. 101 | * `profile` - whether or not to return a profile of the query’s execution (default: false). 102 | * `time_format` - what format to return times in (default: :native). Set this to :raw if you want times returned as JSON objects for exporting. 103 | * `binary_format` - what format to return binary data in (default: :native). Set this to :raw if you want the raw pseudotype. 104 | """ 105 | def run(query, conn, opts \\ []) do 106 | timeout = Keyword.get(opts, :timeout, 5000) 107 | conn_opts = Keyword.drop(opts, [:timeout]) 108 | noreply = Keyword.get(opts, :noreply, false) 109 | 110 | conn_opts = 111 | Connection.call(conn, :conn_opts) 112 | |> Map.take([:db]) 113 | |> Enum.to_list() 114 | |> Keyword.merge(conn_opts) 115 | 116 | query = prepare_and_encode(query, conn_opts) 117 | 118 | msg = 119 | case noreply do 120 | true -> {:query_noreply, query} 121 | false -> {:query, query} 122 | end 123 | 124 | case Connection.call(conn, msg, timeout) do 125 | {response, token} -> RethinkDB.Response.parse(response, token, conn, opts) 126 | :noreply -> :ok 127 | result -> result 128 | end 129 | end 130 | 131 | @doc """ 132 | Fetch the next dataset for a feed. 133 | 134 | Since a feed is tied to a particular connection, no connection is needed when calling 135 | `next`. 136 | """ 137 | def next(%{token: token, pid: pid, opts: opts}) do 138 | case Connection.call(pid, {:continue, token}, :infinity) do 139 | {response, token} -> RethinkDB.Response.parse(response, token, pid, opts) 140 | x -> x 141 | end 142 | end 143 | 144 | @doc """ 145 | Closes a feed. 146 | 147 | Since a feed is tied to a particular connection, no connection is needed when calling 148 | `close`. 149 | """ 150 | def close(%{token: token, pid: pid}) do 151 | {response, token} = Connection.call(pid, {:stop, token}, :infinity) 152 | RethinkDB.Response.parse(response, token, pid, []) 153 | end 154 | 155 | @doc """ 156 | `noreply_wait` ensures that previous queries with the noreply flag have been processed by the server. Note that this guarantee only applies to queries run on the given connection. 
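A minimal usage sketch (the `people` table and the `conn` variable are assumptions for illustration):

    import RethinkDB.Query

    # Fire-and-forget write: the server sends no reply for this query.
    table("people")
    |> insert(%{name: "Ada"})
    |> RethinkDB.run(conn, noreply: true)

    # Block until every outstanding noreply query on this connection has been processed.
    :ok = RethinkDB.Connection.noreply_wait(conn)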
157 | """ 158 | def noreply_wait(conn, timeout \\ 5000) do 159 | {response, token} = Connection.call(conn, :noreply_wait, timeout) 160 | 161 | case RethinkDB.Response.parse(response, token, conn, []) do 162 | {:ok, %RethinkDB.Response{data: %{"t" => 4}}} -> :ok 163 | r -> r 164 | end 165 | end 166 | 167 | defp prepare_and_encode(query, opts) do 168 | query = RethinkDB.Prepare.prepare(query) 169 | 170 | # Right now :db can still be nil so we need to remove it 171 | opts = 172 | Enum.into(opts, %{}, fn 173 | {:db, db} -> 174 | {:db, RethinkDB.Prepare.prepare(RethinkDB.Query.db(db))} 175 | 176 | {k, v} -> 177 | {k, v} 178 | end) 179 | 180 | query = [1, query, opts] 181 | Poison.encode!(query) 182 | end 183 | 184 | @doc """ 185 | Start connection as a linked process 186 | 187 | Accepts a `Keyword` of options. Supported options: 188 | 189 | * `:host` - hostname to use to connect to database. Defaults to `'localhost'`. 190 | * `:port` - port on which to connect to database. Defaults to `28015`. 191 | * `:auth_key` - authorization key to use with database. Defaults to `nil`. 192 | * `:db` - default database to use with queries. Defaults to `nil`. 193 | * `:sync_connect` - whether to have `init` block until a connection succeeds. Defaults to `false`. 194 | * `:max_pending` - Hard cap on number of concurrent requests. Defaults to `10000` 195 | * `:ssl` - a dict of options. Support SSL options: 196 | * `:ca_certs` - a list of file paths to cacerts. 197 | """ 198 | def start_link(opts \\ []) do 199 | args = Keyword.take(opts, [:host, :port, :auth_key, :db, :sync_connect, :ssl, :max_pending]) 200 | Connection.start_link(__MODULE__, args, opts) 201 | end 202 | 203 | def init(opts) do 204 | host = 205 | case Keyword.get(opts, :host, 'localhost') do 206 | x when is_binary(x) -> String.to_charlist(x) 207 | x -> x 208 | end 209 | 210 | sync_connect = Keyword.get(opts, :sync_connect, false) 211 | ssl = Keyword.get(opts, :ssl) 212 | 213 | opts = 214 | Keyword.put(opts, :host, host) 215 | |> Keyword.put_new(:port, 28015) 216 | |> Keyword.put_new(:auth_key, "") 217 | |> Keyword.put_new(:max_pending, 10000) 218 | |> Keyword.drop([:sync_connect]) 219 | |> Enum.into(%{}) 220 | 221 | {transport, transport_opts} = 222 | case ssl do 223 | nil -> 224 | {%Transport.TCP{}, []} 225 | 226 | x -> 227 | {%Transport.SSL{}, 228 | Enum.map(Keyword.fetch!(x, :ca_certs), &{:cacertfile, &1}) ++ [verify: :verify_peer]} 229 | end 230 | 231 | state = %{ 232 | pending: %{}, 233 | current: {:start, ""}, 234 | token: 0, 235 | config: Map.put(opts, :transport, {transport, transport_opts}) 236 | } 237 | 238 | case sync_connect do 239 | true -> 240 | case connect(:sync, state) do 241 | {:backoff, _, _} -> {:stop, :econnrefused} 242 | x -> x 243 | end 244 | 245 | false -> 246 | {:connect, :init, state} 247 | end 248 | end 249 | 250 | def connect( 251 | _info, 252 | state = %{ 253 | config: %{ 254 | host: host, 255 | port: port, 256 | auth_key: auth_key, 257 | transport: {transport, transport_opts} 258 | } 259 | } 260 | ) do 261 | case Transport.connect( 262 | transport, 263 | host, 264 | port, 265 | [active: false, mode: :binary] ++ transport_opts 266 | ) do 267 | {:ok, socket} -> 268 | case handshake(socket, auth_key) do 269 | {:error, _} -> 270 | {:stop, :bad_handshake, state} 271 | 272 | :ok -> 273 | :ok = Transport.setopts(socket, active: :once) 274 | # TODO: investigate timeout vs hibernate 275 | {:ok, Map.put(state, :socket, socket)} 276 | end 277 | 278 | {:error, :econnrefused} -> 279 | backoff = min(Map.get(state, :timeout, 1000), 
64000) 280 | {:backoff, backoff, Map.put(state, :timeout, backoff * 2)} 281 | end 282 | end 283 | 284 | def disconnect(info, state = %{pending: pending}) do 285 | pending 286 | |> Enum.each(fn {_token, pid} -> 287 | Connection.reply(pid, %RethinkDB.Exception.ConnectionClosed{}) 288 | end) 289 | 290 | new_state = 291 | state 292 | |> Map.delete(:socket) 293 | |> Map.put(:pending, %{}) 294 | |> Map.put(:current, {:start, ""}) 295 | 296 | # TODO: should we reconnect? 297 | {:stop, info, new_state} 298 | end 299 | 300 | def handle_call(:conn_opts, _from, state = %{config: opts}) do 301 | {:reply, opts, state} 302 | end 303 | 304 | def handle_call(_, _, state = %{pending: pending, config: %{max_pending: max_pending}}) 305 | when map_size(pending) > max_pending do 306 | {:reply, %RethinkDB.Exception.TooManyRequests{}, state} 307 | end 308 | 309 | def handle_call({:query_noreply, query}, _from, state = %{token: token}) do 310 | new_token = token + 1 311 | token = <> 312 | {:noreply, state} = Request.make_request(query, token, :noreply, %{state | token: new_token}) 313 | {:reply, :noreply, state} 314 | end 315 | 316 | def handle_call({:query, query}, from, state = %{token: token}) do 317 | new_token = token + 1 318 | token = <> 319 | Request.make_request(query, token, from, %{state | token: new_token}) 320 | end 321 | 322 | def handle_call({:continue, token}, from, state) do 323 | query = "[2]" 324 | Request.make_request(query, token, from, state) 325 | end 326 | 327 | def handle_call({:stop, token}, from, state) do 328 | query = "[3]" 329 | Request.make_request(query, token, from, state) 330 | end 331 | 332 | def handle_call(:noreply_wait, from, state = %{token: token}) do 333 | query = "[4]" 334 | new_token = token + 1 335 | token = <> 336 | Request.make_request(query, token, from, %{state | token: new_token}) 337 | end 338 | 339 | def handle_cast(:stop, state) do 340 | {:disconnect, :normal, state} 341 | end 342 | 343 | def handle_info({proto, _port, data}, state = %{socket: socket}) when proto in [:tcp, :ssl] do 344 | :ok = Transport.setopts(socket, active: :once) 345 | Request.handle_recv(data, state) 346 | end 347 | 348 | def handle_info({closed_msg, _port}, state) when closed_msg in [:ssl_closed, :tcp_closed] do 349 | {:disconnect, closed_msg, state} 350 | end 351 | 352 | def handle_info(msg, state) do 353 | Logger.debug("Received unhandled info: #{inspect(msg)} with state #{inspect(state)}") 354 | {:noreply, state} 355 | end 356 | 357 | def terminate(_reason, %{socket: socket}) do 358 | Transport.close(socket) 359 | :ok 360 | end 361 | 362 | def terminate(_reason, _state) do 363 | :ok 364 | end 365 | 366 | defp handshake(socket, auth_key) do 367 | :ok = Transport.send(socket, <<0x400C2D20::little-size(32)>>) 368 | :ok = Transport.send(socket, <<:erlang.iolist_size(auth_key)::little-size(32)>>) 369 | :ok = Transport.send(socket, auth_key) 370 | :ok = Transport.send(socket, <<0x7E6970C7::little-size(32)>>) 371 | 372 | case recv_until_null(socket, "") do 373 | "SUCCESS" -> :ok 374 | error = {:error, _} -> error 375 | end 376 | end 377 | 378 | defp recv_until_null(socket, acc) do 379 | case Transport.recv(socket, 1) do 380 | {:ok, "\0"} -> acc 381 | {:ok, a} -> recv_until_null(socket, acc <> a) 382 | x = {:error, _} -> x 383 | end 384 | end 385 | end 386 | -------------------------------------------------------------------------------- /lib/rethinkdb/connection/request.ex: -------------------------------------------------------------------------------- 1 | defmodule 
RethinkDB.Connection.Request do 2 | @moduledoc false 3 | 4 | alias RethinkDB.Connection.Transport 5 | 6 | def make_request(query, token, from, state = %{pending: pending, socket: socket}) do 7 | new_pending = 8 | case from do 9 | :noreply -> pending 10 | _ -> Map.put_new(pending, token, from) 11 | end 12 | 13 | bsize = :erlang.size(query) 14 | payload = token <> <> <> query 15 | 16 | case Transport.send(socket, payload) do 17 | :ok -> 18 | {:noreply, %{state | pending: new_pending}} 19 | 20 | {:error, :closed} -> 21 | {:disconnect, :closed, %RethinkDB.Exception.ConnectionClosed{}, state} 22 | end 23 | end 24 | 25 | def make_request(_query, _token, _from, state) do 26 | {:reply, %RethinkDB.Exception.ConnectionClosed{}, state} 27 | end 28 | 29 | def handle_recv(data, state = %{current: {:start, leftover}}) do 30 | case leftover <> data do 31 | <> -> 32 | handle_recv("", %{state | current: {:token, token, leftover}}) 33 | 34 | new_data -> 35 | {:noreply, %{state | current: {:start, new_data}}} 36 | end 37 | end 38 | 39 | def handle_recv(data, state = %{current: {:token, token, leftover}}) do 40 | case leftover <> data do 41 | <> -> 42 | handle_recv("", %{state | current: {:length, length, token, leftover}}) 43 | 44 | new_data -> 45 | {:noreply, %{state | current: {:token, token, new_data}}} 46 | end 47 | end 48 | 49 | def handle_recv(data, state = %{current: {:length, length, token, leftover}, pending: pending}) do 50 | case leftover <> data do 51 | <> -> 52 | Connection.reply(pending[token], {response, token}) 53 | 54 | handle_recv("", %{ 55 | state 56 | | current: {:start, leftover}, 57 | pending: Map.delete(pending, token) 58 | }) 59 | 60 | new_data -> 61 | {:noreply, %{state | current: {:length, length, token, new_data}}} 62 | end 63 | end 64 | end 65 | -------------------------------------------------------------------------------- /lib/rethinkdb/connection/transport.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Connection.Transport do 2 | defmodule(SSL, do: defstruct([:socket])) 3 | defmodule(TCP, do: defstruct([:socket])) 4 | 5 | def connect(%SSL{}, host, port, opts) do 6 | case :ssl.connect(host, port, opts) do 7 | {:ok, socket} -> {:ok, %SSL{socket: socket}} 8 | x -> x 9 | end 10 | end 11 | 12 | def connect(%TCP{}, host, port, opts) do 13 | case :gen_tcp.connect(host, port, opts) do 14 | {:ok, socket} -> {:ok, %TCP{socket: socket}} 15 | x -> x 16 | end 17 | end 18 | 19 | def send(%SSL{socket: socket}, data) do 20 | :ssl.send(socket, data) 21 | end 22 | 23 | def send(%TCP{socket: socket}, data) do 24 | :gen_tcp.send(socket, data) 25 | end 26 | 27 | def recv(%SSL{socket: socket}, n) do 28 | :ssl.recv(socket, n) 29 | end 30 | 31 | def recv(%TCP{socket: socket}, n) do 32 | :gen_tcp.recv(socket, n) 33 | end 34 | 35 | def setopts(%SSL{socket: socket}, opts) do 36 | :ssl.setopts(socket, opts) 37 | end 38 | 39 | def setopts(%TCP{socket: socket}, opts) do 40 | :inet.setopts(socket, opts) 41 | end 42 | 43 | def close(%SSL{socket: socket}), do: :ssl.close(socket) 44 | def close(%TCP{socket: socket}), do: :gen_tcp.close(socket) 45 | end 46 | -------------------------------------------------------------------------------- /lib/rethinkdb/exception.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Exception do 2 | @moduledoc false 3 | defmodule ConnectionClosed do 4 | @moduledoc false 5 | defstruct [] 6 | end 7 | 8 | defmodule TooManyRequests do 9 | @moduledoc false 10 | 
defstruct [] 11 | end 12 | end 13 | -------------------------------------------------------------------------------- /lib/rethinkdb/lambda.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Lambda do 2 | @moduledoc """ 3 | Macro for using native elixir functions in queries 4 | """ 5 | alias RethinkDB.Query 6 | 7 | @doc """ 8 | Macro for using native elixir functions in queries 9 | 10 | Wrapping an anonymous function in `lambda` will cause it to be converted at compile time 11 | into standard RethinkDB query syntax. Example: 12 | 13 | lambda(fn (x) -> 14 | x + 5 == x/2 15 | end) 16 | 17 | Becomes: 18 | 19 | fn (x) -> 20 | RethinkDB.Query.eq( 21 | RethinkDB.Query.add(x, 5), 22 | RethinkDB.Query.divide(x, 2) 23 | ) 24 | end 25 | 26 | """ 27 | defmacro lambda(block) do 28 | build(block) 29 | end 30 | 31 | defp build(block) do 32 | Macro.prewalk(block, fn 33 | {{:., _, [Access, :get]}, _, [arg1, arg2]} -> 34 | quote do 35 | Query.bracket(unquote(arg1), unquote(arg2)) 36 | end 37 | 38 | {:+, _, args} -> 39 | quote do: Query.add(unquote(args)) 40 | 41 | {:<>, _, args} -> 42 | quote do: Query.add(unquote(args)) 43 | 44 | {:++, _, args} -> 45 | quote do: Query.add(unquote(args)) 46 | 47 | {:-, _, args} -> 48 | quote do: Query.sub(unquote(args)) 49 | 50 | {:*, _, args} -> 51 | quote do: Query.mul(unquote(args)) 52 | 53 | {:/, _, args} -> 54 | quote do: Query.divide(unquote(args)) 55 | 56 | {:rem, _, [a, b]} -> 57 | quote do: Query.mod(unquote(a), unquote(b)) 58 | 59 | {:==, _, args} -> 60 | quote do: Query.eq(unquote(args)) 61 | 62 | {:!=, _, args} -> 63 | quote do: Query.ne(unquote(args)) 64 | 65 | {:<, _, args} -> 66 | quote do: Query.lt(unquote(args)) 67 | 68 | {:<=, _, args} -> 69 | quote do: Query.le(unquote(args)) 70 | 71 | {:>, _, args} -> 72 | quote do: Query.gt(unquote(args)) 73 | 74 | {:>=, _, args} -> 75 | quote do: Query.ge(unquote(args)) 76 | 77 | {:||, _, args} -> 78 | quote do: Query.or_r(unquote(args)) 79 | 80 | {:&&, _, args} -> 81 | quote do: Query.and_r(unquote(args)) 82 | 83 | {:if, _, [expr, [do: truthy, else: falsy]]} -> 84 | quote do 85 | Query.branch(unquote(expr), unquote(truthy), unquote(falsy)) 86 | end 87 | 88 | {:if, _, _} -> 89 | raise "You must include an else condition when using if in a ReQL Lambda" 90 | 91 | x -> 92 | x 93 | end) 94 | end 95 | end 96 | -------------------------------------------------------------------------------- /lib/rethinkdb/prepare.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Prepare do 2 | alias RethinkDB.Q 3 | @moduledoc false 4 | 5 | # This is a bunch of functions that transform the query from our data structures into 6 | # the over the wire format. The main role is to properly create unique function variable ids. 
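# For example, each `make_ref/0` placeholder standing in for a lambda argument is
# replaced with a small integer assigned in order of first appearance (1, 2, ...),
# and repeated occurrences of the same reference resolve to the same variable id.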
7 | 8 | def prepare(query) do 9 | {val, _vars} = prepare(query, {0, %{}}) 10 | val 11 | end 12 | 13 | defp prepare(%Q{query: query}, state) do 14 | prepare(query, state) 15 | end 16 | 17 | defp prepare(list, state) when is_list(list) do 18 | {list, state} = 19 | Enum.reduce(list, {[], state}, fn el, {acc, state} -> 20 | {el, state} = prepare(el, state) 21 | {[el | acc], state} 22 | end) 23 | 24 | {Enum.reverse(list), state} 25 | end 26 | 27 | defp prepare(map, state) when is_map(map) do 28 | {map, state} = 29 | Enum.reduce(map, {[], state}, fn {k, v}, {acc, state} -> 30 | {k, state} = prepare(k, state) 31 | {v, state} = prepare(v, state) 32 | {[{k, v} | acc], state} 33 | end) 34 | 35 | {Enum.into(map, %{}), state} 36 | end 37 | 38 | defp prepare(ref, state = {max, map}) when is_reference(ref) do 39 | case Map.get(map, ref) do 40 | nil -> {max + 1, {max + 1, Map.put_new(map, ref, max + 1)}} 41 | x -> {x, state} 42 | end 43 | end 44 | 45 | defp prepare({k, v}, state) do 46 | {k, state} = prepare(k, state) 47 | {v, state} = prepare(v, state) 48 | {[k, v], state} 49 | end 50 | 51 | defp prepare(el, state) do 52 | if is_binary(el) and not String.valid?(el) do 53 | {RethinkDB.Query.binary(el), state} 54 | else 55 | {el, state} 56 | end 57 | end 58 | end 59 | -------------------------------------------------------------------------------- /lib/rethinkdb/pseudotypes.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Pseudotypes do 2 | @moduledoc false 3 | defmodule Binary do 4 | @moduledoc false 5 | defstruct data: nil 6 | 7 | def parse(%{"$reql_type$" => "BINARY", "data" => data}, opts) do 8 | case Keyword.get(opts, :binary_format) do 9 | :raw -> 10 | %__MODULE__{data: data} 11 | 12 | _ -> 13 | :base64.decode(data) 14 | end 15 | end 16 | end 17 | 18 | defmodule Geometry do 19 | @moduledoc false 20 | defmodule Point do 21 | @moduledoc false 22 | defstruct coordinates: [] 23 | end 24 | 25 | defmodule Line do 26 | @moduledoc false 27 | defstruct coordinates: [] 28 | end 29 | 30 | defmodule Polygon do 31 | @moduledoc false 32 | defstruct coordinates: [] 33 | end 34 | 35 | def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => [x, y], "type" => "Point"}) do 36 | %Point{coordinates: {x, y}} 37 | end 38 | 39 | def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => coords, "type" => "LineString"}) do 40 | %Line{coordinates: Enum.map(coords, &List.to_tuple/1)} 41 | end 42 | 43 | def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => coords, "type" => "Polygon"}) do 44 | %Polygon{coordinates: for(points <- coords, do: Enum.map(points, &List.to_tuple/1))} 45 | end 46 | end 47 | 48 | defmodule Time do 49 | @moduledoc false 50 | defstruct epoch_time: nil, timezone: nil 51 | 52 | def parse( 53 | %{"$reql_type$" => "TIME", "epoch_time" => epoch_time, "timezone" => timezone}, 54 | opts 55 | ) do 56 | case Keyword.get(opts, :time_format) do 57 | :raw -> 58 | %__MODULE__{epoch_time: epoch_time, timezone: timezone} 59 | 60 | _ -> 61 | {seconds, ""} = Calendar.ISO.parse_offset(timezone) 62 | 63 | zone_abbr = 64 | case seconds do 65 | 0 -> "UTC" 66 | _ -> timezone 67 | end 68 | 69 | negative = seconds < 0 70 | seconds = abs(seconds) 71 | 72 | time_zone = 73 | case {div(seconds, 3600), rem(seconds, 3600)} do 74 | {0, 0} -> 75 | "Etc/UTC" 76 | 77 | {hours, 0} -> 78 | "Etc/GMT" <> 79 | if negative do 80 | "+" 81 | else 82 | "-" 83 | end <> Integer.to_string(hours) 84 | 85 | {hours, seconds} -> 86 | "Etc/GMT" <> 87 | if negative do 88 | "+" 89 | else 90 | 
"-" 91 | end <> 92 | Integer.to_string(hours) <> 93 | ":" <> String.pad_leading(Integer.to_string(seconds), 2, "0") 94 | end 95 | 96 | (epoch_time * 1000) 97 | |> trunc() 98 | |> DateTime.from_unix!(:milliseconds) 99 | |> struct(utc_offset: seconds, zone_abbr: zone_abbr, time_zone: time_zone) 100 | end 101 | end 102 | end 103 | 104 | def convert_reql_pseudotypes(nil, _opts), do: nil 105 | 106 | def convert_reql_pseudotypes(%{"$reql_type$" => "BINARY"} = data, opts) do 107 | Binary.parse(data, opts) 108 | end 109 | 110 | def convert_reql_pseudotypes(%{"$reql_type$" => "GEOMETRY"} = data, _opts) do 111 | Geometry.parse(data) 112 | end 113 | 114 | def convert_reql_pseudotypes(%{"$reql_type$" => "GROUPED_DATA"} = data, _opts) do 115 | parse_grouped_data(data) 116 | end 117 | 118 | def convert_reql_pseudotypes(%{"$reql_type$" => "TIME"} = data, opts) do 119 | Time.parse(data, opts) 120 | end 121 | 122 | def convert_reql_pseudotypes(list, opts) when is_list(list) do 123 | Enum.map(list, fn data -> convert_reql_pseudotypes(data, opts) end) 124 | end 125 | 126 | def convert_reql_pseudotypes(map, opts) when is_map(map) do 127 | Enum.map(map, fn {k, v} -> 128 | {k, convert_reql_pseudotypes(v, opts)} 129 | end) 130 | |> Enum.into(%{}) 131 | end 132 | 133 | def convert_reql_pseudotypes(string, _opts), do: string 134 | 135 | def parse_grouped_data(%{"$reql_type$" => "GROUPED_DATA", "data" => data}) do 136 | Enum.map(data, fn [k, data] -> 137 | {k, data} 138 | end) 139 | |> Enum.into(%{}) 140 | end 141 | 142 | def create_grouped_data(data) when is_map(data) do 143 | data = data |> Enum.map(fn {k, v} -> [k, v] end) 144 | %{"$reql_type$" => "GROUPED_DATA", "data" => data} 145 | end 146 | end 147 | -------------------------------------------------------------------------------- /lib/rethinkdb/q.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Q do 2 | @moduledoc false 3 | 4 | defstruct query: nil 5 | end 6 | 7 | defimpl Poison.Encoder, for: RethinkDB.Q do 8 | def encode(%{query: query}, options) do 9 | Poison.Encoder.encode(query, options) 10 | end 11 | end 12 | 13 | defimpl Inspect, for: RethinkDB.Q do 14 | @external_resource term_info = Path.join([__DIR__, "query", "term_info.json"]) 15 | 16 | @apidef term_info 17 | |> File.read!() 18 | |> Poison.decode!() 19 | |> Enum.into(%{}, fn {key, val} -> {val, key} end) 20 | 21 | def inspect(%RethinkDB.Q{query: [69, [[2, refs], lambda]]}, _) do 22 | # Replaces references within lambda functions 23 | # with capture syntax arguments (&1, &2, etc). 24 | refs 25 | |> Enum.map_reduce(1, &{{&1, "&#{&2}"}, &2 + 1}) 26 | |> elem(0) 27 | |> Enum.reduce("&(#{inspect(lambda)})", fn {ref, var}, lambda -> 28 | String.replace(lambda, "var(#{inspect(ref)})", var) 29 | end) 30 | end 31 | 32 | def inspect(%RethinkDB.Q{query: [index, args, opts]}, _) do 33 | # Converts function options (map) to keyword list. 34 | Kernel.inspect(%RethinkDB.Q{query: [index, args ++ [Map.to_list(opts)]]}) 35 | end 36 | 37 | def inspect(%RethinkDB.Q{query: [index, args]}, _options) do 38 | # Resolve index & args and return them as string. 
39 | Map.get(@apidef, index) <> "(#{Enum.join(Enum.map(args, &Kernel.inspect/1), ", ")})" 40 | end 41 | end 42 | -------------------------------------------------------------------------------- /lib/rethinkdb/query.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Query do 2 | @moduledoc """ 3 | Querying API for RethinkDB 4 | """ 5 | 6 | alias RethinkDB.Q 7 | 8 | import RethinkDB.Query.Macros 9 | 10 | @type t :: %Q{} 11 | @type reql_string :: String.t() | t 12 | @type reql_number :: integer | float | t 13 | @type reql_array :: [term] | t 14 | @type reql_bool :: boolean | t 15 | @type reql_obj :: map | t 16 | @type reql_datum :: term 17 | @type reql_func0 :: (() -> term) | t 18 | @type reql_func1 :: (term -> term) | t 19 | @type reql_func2 :: (term, term -> term) | t 20 | @type reql_opts :: map 21 | @type reql_binary :: %RethinkDB.Pseudotypes.Binary{} | binary | t 22 | @type reql_geo_point :: %RethinkDB.Pseudotypes.Geometry.Point{} | {reql_number, reql_number} | t 23 | @type reql_geo_line :: %RethinkDB.Pseudotypes.Geometry.Line{} | t 24 | @type reql_geo_polygon :: %RethinkDB.Pseudotypes.Geometry.Polygon{} | t 25 | @type reql_geo :: reql_geo_point | reql_geo_line | reql_geo_polygon 26 | @type reql_time :: %RethinkDB.Pseudotypes.Time{} | t 27 | 28 | # 29 | # Aggregation Functions 30 | # 31 | 32 | @doc """ 33 | Takes a stream and partitions it into multiple groups based on the fields or 34 | functions provided. 35 | 36 | With the multi flag single documents can be assigned to multiple groups, 37 | similar to the behavior of multi-indexes. When multi is True and the grouping 38 | value is an array, documents will be placed in each group that corresponds to 39 | the elements of the array. If the array is empty the row will be ignored. 40 | """ 41 | @spec group( 42 | Q.reql_array(), 43 | Q.reql_func1() | Q.reql_string() | [Q.reql_func1() | Q.reql_string()] 44 | ) :: Q.t() 45 | operate_on_seq_and_list(:group, 144, opts: true) 46 | operate_on_two_args(:group, 144, opts: true) 47 | 48 | @doc """ 49 | Takes a grouped stream or grouped data and turns it into an array of objects 50 | representing the groups. Any commands chained after ungroup will operate on 51 | this array, rather than operating on each group individually. This is useful if 52 | you want to e.g. order the groups by the value of their reduction. 53 | 54 | The format of the array returned by ungroup is the same as the default native 55 | format of grouped data in the JavaScript driver and data explorer. 56 | end 57 | """ 58 | @spec ungroup(Q.t()) :: Q.t() 59 | operate_on_single_arg(:ungroup, 150) 60 | 61 | @doc """ 62 | Produce a single value from a sequence through repeated application of a 63 | reduction function. 64 | 65 | The reduction function can be called on: 66 | 67 | * two elements of the sequence 68 | * one element of the sequence and one result of a previous reduction 69 | * two results of previous reductions 70 | 71 | The reduction function can be called on the results of two previous 72 | reductions because the reduce command is distributed and parallelized across 73 | shards and CPU cores. A common mistaken when using the reduce command is to 74 | suppose that the reduction is executed from left to right. 75 | """ 76 | @spec reduce(Q.reql_array(), Q.reql_func2()) :: Q.t() 77 | operate_on_two_args(:reduce, 37) 78 | 79 | @doc """ 80 | Counts the number of elements in a sequence. 
If called with a value, counts 81 | the number of times that value occurs in the sequence. If called with a 82 | predicate function, counts the number of elements in the sequence where that 83 | function returns `true`. 84 | 85 | If count is called on a binary object, it will return the size of the object 86 | in bytes. 87 | """ 88 | @spec count(Q.reql_array()) :: Q.t() 89 | operate_on_single_arg(:count, 43) 90 | @spec count(Q.reql_array(), Q.reql_string() | Q.reql_func1()) :: Q.t() 91 | operate_on_two_args(:count, 43) 92 | 93 | @doc """ 94 | Sums all the elements of a sequence. If called with a field name, sums all 95 | the values of that field in the sequence, skipping elements of the sequence 96 | that lack that field. If called with a function, calls that function on every 97 | element of the sequence and sums the results, skipping elements of the sequence 98 | where that function returns `nil` or a non-existence error. 99 | 100 | Returns 0 when called on an empty sequence. 101 | """ 102 | @spec sum(Q.reql_array()) :: Q.t() 103 | operate_on_single_arg(:sum, 145) 104 | @spec sum(Q.reql_array(), Q.reql_string() | Q.reql_func1()) :: Q.t() 105 | operate_on_two_args(:sum, 145) 106 | 107 | @doc """ 108 | Averages all the elements of a sequence. If called with a field name, 109 | averages all the values of that field in the sequence, skipping elements of the 110 | sequence that lack that field. If called with a function, calls that function 111 | on every element of the sequence and averages the results, skipping elements of 112 | the sequence where that function returns None or a non-existence error. 113 | 114 | Produces a non-existence error when called on an empty sequence. You can 115 | handle this case with `default`. 116 | """ 117 | @spec avg(Q.reql_array()) :: Q.t() 118 | operate_on_single_arg(:avg, 146) 119 | @spec avg(Q.reql_array(), Q.reql_string() | Q.reql_func1()) :: Q.t() 120 | operate_on_two_args(:avg, 146) 121 | 122 | @doc """ 123 | Finds the minimum element of a sequence. The min command can be called with: 124 | 125 | * a field name, to return the element of the sequence with the smallest value in 126 | that field; 127 | * an index option, to return the element of the sequence with the smallest value in that 128 | index; 129 | * a function, to apply the function to every element within the sequence and 130 | return the element which returns the smallest value from the function, ignoring 131 | any elements where the function returns None or produces a non-existence error. 132 | 133 | Calling min on an empty sequence will throw a non-existence error; this can be 134 | handled using the `default` command. 135 | """ 136 | @spec min(Q.reql_array(), Q.reql_opts() | Q.reql_string() | Q.reql_func1()) :: Q.t() 137 | operate_on_single_arg(:min, 147) 138 | operate_on_two_args(:min, 147) 139 | 140 | @doc """ 141 | Finds the maximum element of a sequence. The max command can be called with: 142 | 143 | * a field name, to return the element of the sequence with the smallest value in 144 | that field; 145 | * an index, to return the element of the sequence with the smallest value in that 146 | index; 147 | * a function, to apply the function to every element within the sequence and 148 | return the element which returns the smallest value from the function, ignoring 149 | any elements where the function returns None or produces a non-existence error. 150 | 151 | Calling max on an empty sequence will throw a non-existence error; this can be 152 | handled using the `default` command. 
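For example, a sketch that assumes a `posts` table with a numeric `views` field and a running connection `conn`:

    alias RethinkDB.Query

    # Returns the document with the largest value of `views`.
    Query.table("posts")
    |> Query.max("views")
    |> RethinkDB.run(conn)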
153 | """ 154 | @spec max(Q.reql_array(), Q.reql_opts() | Q.reql_string() | Q.reql_func1()) :: Q.t() 155 | operate_on_single_arg(:max, 148) 156 | operate_on_two_args(:max, 148) 157 | 158 | @doc """ 159 | Removes duplicates from elements in a sequence. 160 | 161 | The distinct command can be called on any sequence, a table, or called on a 162 | table with an index. 163 | """ 164 | @spec distinct(Q.reql_array(), Q.reql_opts()) :: Q.t() 165 | operate_on_single_arg(:distinct, 42, opts: true) 166 | 167 | @doc """ 168 | When called with values, returns `true` if a sequence contains all the specified 169 | values. When called with predicate functions, returns `true` if for each 170 | predicate there exists at least one element of the stream where that predicate 171 | returns `true`. 172 | """ 173 | @spec contains(Q.reql_array(), Q.reql_array() | Q.reql_func1() | Q.t()) :: Q.t() 174 | operate_on_seq_and_list(:contains, 93) 175 | operate_on_two_args(:contains, 93) 176 | 177 | # 178 | # Control Strucutres 179 | # 180 | 181 | @doc """ 182 | `args` is a special term that’s used to splice an array of arguments into 183 | another term. This is useful when you want to call a variadic term such as 184 | `get_all` with a set of arguments produced at runtime. 185 | 186 | This is analogous to Elixir's `apply`. 187 | """ 188 | @spec args(Q.reql_array()) :: Q.t() 189 | operate_on_single_arg(:args, 154) 190 | 191 | @doc """ 192 | Encapsulate binary data within a query. 193 | 194 | The type of data binary accepts depends on the client language. In 195 | Elixir, it expects a Binary. Using a Binary object within a query implies 196 | the use of binary and the ReQL driver will automatically perform the coercion. 197 | 198 | Binary objects returned to the client in Elixir will also be 199 | Binary objects. This can be changed with the binary_format option :raw 200 | to run to return “raw” objects. 201 | 202 | Only a limited subset of ReQL commands may be chained after binary: 203 | 204 | * coerce_to can coerce binary objects to string types 205 | * count will return the number of bytes in the object 206 | * slice will treat bytes like array indexes (i.e., slice(10,20) will return bytes 207 | * 10–19) 208 | * type_of returns PTYPE 209 | * info will return information on a binary object. 210 | """ 211 | @spec binary(Q.reql_binary()) :: Q.t() 212 | def binary(%RethinkDB.Pseudotypes.Binary{data: data}), do: do_binary(data) 213 | def binary(data), do: do_binary(:base64.encode(data)) 214 | def do_binary(data), do: %Q{query: [155, [%{"$reql_type$" => "BINARY", "data" => data}]]} 215 | 216 | @doc """ 217 | Call an anonymous function using return values from other ReQL commands or 218 | queries as arguments. 219 | 220 | The last argument to do (or, in some forms, the only argument) is an expression 221 | or an anonymous function which receives values from either the previous 222 | arguments or from prefixed commands chained before do. The do command is 223 | essentially a single-element map, letting you map a function over just one 224 | document. This allows you to bind a query result to a local variable within the 225 | scope of do, letting you compute the result just once and reuse it in a complex 226 | expression or in a series of ReQL commands. 227 | 228 | Arguments passed to the do function must be basic data types, and cannot be 229 | streams or selections. (Read about ReQL data types.) 
While the arguments will 230 | all be evaluated before the function is executed, they may be evaluated in any 231 | order, so their values should not be dependent on one another. The type of do’s 232 | result is the type of the value returned from the function or last expression. 233 | """ 234 | @spec do_r(Q.reql_datum() | Q.reql_func0(), Q.reql_func1()) :: Q.t() 235 | operate_on_single_arg(:do_r, 64) 236 | # Can't do `operate_on_two_args` because we swap the order of args to make it 237 | # Elixir's idiomatic subject first order. 238 | def do_r(data, f) when is_function(f), do: %Q{query: [64, [wrap(f), wrap(data)]]} 239 | 240 | @doc """ 241 | If the `test` expression returns False or None, the false_branch will be 242 | evaluated. Otherwise, the true_branch will be evaluated. 243 | 244 | The branch command is effectively an if renamed due to language constraints. 245 | """ 246 | @spec branch(Q.reql_datum(), Q.reql_datum(), Q.reql_datum()) :: Q.t() 247 | operate_on_three_args(:branch, 65) 248 | 249 | @doc """ 250 | Loop over a sequence, evaluating the given write query for each element. 251 | """ 252 | @spec for_each(Q.reql_array(), Q.reql_func1()) :: Q.t() 253 | operate_on_two_args(:for_each, 68) 254 | 255 | @doc """ 256 | Generate a stream of sequential integers in a specified range. 257 | 258 | `range` takes 0, 1 or 2 arguments: 259 | 260 | * With no arguments, range returns an “infinite” stream from 0 up to and 261 | including the maximum integer value; 262 | * With one argument, range returns a stream from 0 up to but not 263 | including the end value; 264 | * With two arguments, range returns a stream from the start value up to 265 | but not including the end value. 266 | """ 267 | @spec range(Q.reql_number(), Q.req_number()) :: Q.t() 268 | operate_on_zero_args(:range, 173) 269 | operate_on_single_arg(:range, 173) 270 | operate_on_two_args(:range, 173) 271 | 272 | @doc """ 273 | Throw a runtime error. 274 | """ 275 | @spec error(Q.reql_string()) :: Q.t() 276 | operate_on_single_arg(:error, 12) 277 | 278 | @doc """ 279 | Handle non-existence errors. Tries to evaluate and return its first argument. 280 | If an error related to the absence of a value is thrown in the process, or if 281 | its first argument returns nil, returns its second argument. (Alternatively, 282 | the second argument may be a function which will be called with either the text 283 | of the non-existence error or nil.) 284 | """ 285 | @spec default(Q.t(), Q.t()) :: Q.t() 286 | operate_on_two_args(:default, 92) 287 | 288 | @doc """ 289 | Create a javascript expression. 290 | 291 | The only opt allowed is `timeout`. 292 | 293 | `timeout` is the number of seconds before `js` times out. The default value 294 | is 5 seconds. 295 | """ 296 | @spec js(Q.reql_string(), Q.opts()) :: Q.t() 297 | operate_on_single_arg(:js, 11, opts: true) 298 | 299 | @doc """ 300 | Convert a value of one type into another. 301 | 302 | * a sequence, selection or object can be coerced to an array 303 | * an array of key-value pairs can be coerced to an object 304 | * a string can be coerced to a number 305 | * any datum (single value) can be coerced to a string 306 | * a binary object can be coerced to a string and vice-versa 307 | """ 308 | @spec coerce_to(Q.reql_datum(), Q.reql_string()) :: Q.t() 309 | operate_on_two_args(:coerce_to, 51) 310 | 311 | @doc """ 312 | Gets the type of a value. 
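For example (assuming a running connection `conn`):

    import RethinkDB.Query

    # The response data is the type name as a string, e.g. "ARRAY" here.
    make_array([1, 2, 3])
    |> type_of()
    |> RethinkDB.run(conn)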
313 | """ 314 | @spec type_of(Q.reql_datum()) :: Q.t() 315 | operate_on_single_arg(:type_of, 52) 316 | 317 | @doc """ 318 | Get information about a ReQL value. 319 | """ 320 | @spec info(Q.t()) :: Q.t() 321 | operate_on_single_arg(:info, 79) 322 | 323 | @doc """ 324 | Parse a JSON string on the server. 325 | """ 326 | @spec json(Q.reql_string()) :: Q.t() 327 | operate_on_single_arg(:json, 98) 328 | 329 | @doc """ 330 | Serialize to JSON string on the server. 331 | """ 332 | @spec to_json(Q.reql_term()) :: Q.t() 333 | operate_on_single_arg(:to_json, 172) 334 | 335 | @doc """ 336 | Retrieve data from the specified URL over HTTP. The return type depends on 337 | the result_format option, which checks the Content-Type of the response by 338 | default. 339 | """ 340 | @spec http(Q.reql_string(), Q.reql_opts()) :: Q.t() 341 | operate_on_single_arg(:http, 153, opts: true) 342 | 343 | @doc """ 344 | Return a UUID (universally unique identifier), a string that can be used as a unique ID. 345 | 346 | Accepts optionally a string. If given, UUID will be derived from the strings SHA-1 hash. 347 | """ 348 | @spec uuid(Q.reql_string()) :: Q.t() 349 | operate_on_zero_args(:uuid, 169) 350 | operate_on_single_arg(:uuid, 169) 351 | 352 | # 353 | # Database Operations 354 | # 355 | 356 | @doc """ 357 | Create a database. A RethinkDB database is a collection of tables, similar to 358 | relational databases. 359 | 360 | If successful, the command returns an object with two fields: 361 | 362 | * dbs_created: always 1. 363 | * config_changes: a list containing one object with two fields, old_val and 364 | new_val: 365 | * old_val: always null. 366 | * new_val: the database’s new config value. 367 | 368 | If a database with the same name already exists, the command throws 369 | RqlRuntimeError. 370 | 371 | Note: Only alphanumeric characters and underscores are valid for the database 372 | name. 373 | """ 374 | @spec db_create(Q.reql_string()) :: Q.t() 375 | operate_on_single_arg(:db_create, 57) 376 | 377 | @doc """ 378 | Drop a database. The database, all its tables, and corresponding data will be deleted. 379 | 380 | If successful, the command returns an object with two fields: 381 | 382 | * dbs_dropped: always 1. 383 | * tables_dropped: the number of tables in the dropped database. 384 | * config_changes: a list containing one two-field object, old_val and new_val: 385 | * old_val: the database’s original config value. 386 | * new_val: always None. 387 | 388 | If the given database does not exist, the command throws RqlRuntimeError. 389 | """ 390 | @spec db_drop(Q.reql_string()) :: Q.t() 391 | operate_on_single_arg(:db_drop, 58) 392 | 393 | @doc """ 394 | List all database names in the system. The result is a list of strings. 395 | """ 396 | @spec db_list :: Q.t() 397 | operate_on_zero_args(:db_list, 59) 398 | 399 | # 400 | # Geospatial Queries 401 | # 402 | 403 | @doc """ 404 | Construct a circular line or polygon. A circle in RethinkDB is a polygon or 405 | line approximating a circle of a given radius around a given center, consisting 406 | of a specified number of vertices (default 32). 407 | 408 | The center may be specified either by two floating point numbers, the latitude 409 | (−90 to 90) and longitude (−180 to 180) of the point on a perfect sphere (see 410 | Geospatial support for more information on ReQL’s coordinate system), or by a 411 | point object. The radius is a floating point number whose units are meters by 412 | default, although that may be changed with the unit argument. 
413 | 414 | Optional arguments available with circle are: 415 | 416 | - num_vertices: the number of vertices in the polygon or line. Defaults to 32. 417 | - geo_system: the reference ellipsoid to use for geographic coordinates. Possible 418 | values are WGS84 (the default), a common standard for Earth’s geometry, or 419 | unit_sphere, a perfect sphere of 1 meter radius. 420 | - unit: Unit for the radius distance. Possible values are m (meter, the default), 421 | km (kilometer), mi (international mile), nm (nautical mile), ft (international 422 | foot). 423 | - fill: if `true` (the default) the circle is filled, creating a polygon; if `false` 424 | the circle is unfilled (creating a line). 425 | """ 426 | @spec circle(Q.reql_geo(), Q.reql_number(), Q.reql_opts()) :: Q.t() 427 | operate_on_two_args(:circle, 165, opts: true) 428 | 429 | @doc """ 430 | Compute the distance between a point and another geometry object. At least one 431 | of the geometry objects specified must be a point. 432 | 433 | Optional arguments available with distance are: 434 | 435 | - geo_system: the reference ellipsoid to use for geographic coordinates. Possible 436 | values are WGS84 (the default), a common standard for Earth’s geometry, or 437 | unit_sphere, a perfect sphere of 1 meter radius. 438 | - unit: Unit to return the distance in. Possible values are m (meter, the 439 | default), km (kilometer), mi (international mile), nm (nautical mile), ft 440 | (international foot). 441 | 442 | If one of the objects is a polygon or a line, the point will be projected onto 443 | the line or polygon assuming a perfect sphere model before the distance is 444 | computed (using the model specified with geo_system). As a consequence, if the 445 | polygon or line is extremely large compared to Earth’s radius and the distance 446 | is being computed with the default WGS84 model, the results of distance should 447 | be considered approximate due to the deviation between the ellipsoid and 448 | spherical models. 449 | """ 450 | @spec distance(Q.reql_geo(), Q.reql_geo(), Q.reql_opts()) :: Q.t() 451 | operate_on_two_args(:distance, 162, opts: true) 452 | 453 | @doc """ 454 | Convert a Line object into a Polygon object. If the last point does not 455 | specify the same coordinates as the first point, polygon will close the polygon 456 | by connecting them. 457 | """ 458 | @spec fill(Q.reql_line()) :: Q.t() 459 | operate_on_single_arg(:fill, 167) 460 | 461 | @doc """ 462 | Convert a GeoJSON object to a ReQL geometry object. 463 | 464 | RethinkDB only allows conversion of GeoJSON objects which have ReQL 465 | equivalents: Point, LineString, and Polygon. MultiPoint, MultiLineString, and 466 | MultiPolygon are not supported. (You could, however, store multiple points, 467 | lines and polygons in an array and use a geospatial multi index with them.) 468 | 469 | Only longitude/latitude coordinates are supported. GeoJSON objects that use 470 | Cartesian coordinates, specify an altitude, or specify their own coordinate 471 | reference system will be rejected. 472 | """ 473 | @spec geojson(Q.reql_obj()) :: Q.t() 474 | operate_on_single_arg(:geojson, 157) 475 | 476 | @doc """ 477 | Convert a ReQL geometry object to a GeoJSON object. 478 | """ 479 | @spec to_geojson(Q.reql_obj()) :: Q.t() 480 | operate_on_single_arg(:to_geojson, 158) 481 | 482 | @doc """ 483 | Get all documents where the given geometry object intersects the geometry 484 | object of the requested geospatial index. 485 | 486 | The index argument is mandatory. 
This command returns the same results as 487 | `filter(r.row('index')) |> intersects(geometry)`. The total number of results 488 | is limited to the array size limit which defaults to 100,000, but can be 489 | changed with the `array_limit` option to run. 490 | """ 491 | @spec get_intersecting(Q.reql_array(), Q.reql_geo(), Q.reql_opts()) :: Q.t() 492 | operate_on_two_args(:get_intersecting, 166, opts: true) 493 | 494 | @doc """ 495 | Get all documents where the specified geospatial index is within a certain 496 | distance of the specified point (default 100 kilometers). 497 | 498 | The index argument is mandatory. Optional arguments are: 499 | 500 | * max_results: the maximum number of results to return (default 100). 501 | * unit: Unit for the distance. Possible values are m (meter, the default), km 502 | (kilometer), mi (international mile), nm (nautical mile), ft (international 503 | foot). 504 | * max_dist: the maximum distance from an object to the specified point (default 505 | 100 km). 506 | * geo_system: the reference ellipsoid to use for geographic coordinates. Possible 507 | values are WGS84 (the default), a common standard for Earth’s geometry, or 508 | unit_sphere, a perfect sphere of 1 meter radius. 509 | 510 | The return value will be an array of two-item objects with the keys dist and 511 | doc, set to the distance between the specified point and the document (in the 512 | units specified with unit, defaulting to meters) and the document itself, 513 | respectively. 514 | 515 | """ 516 | @spec get_nearest(Q.reql_array(), Q.reql_geo(), Q.reql_opts()) :: Q.t() 517 | operate_on_two_args(:get_nearest, 168, opts: true) 518 | 519 | @doc """ 520 | Tests whether a geometry object is completely contained within another. When 521 | applied to a sequence of geometry objects, includes acts as a filter, returning 522 | a sequence of objects from the sequence that include the argument. 523 | """ 524 | @spec includes(Q.reql_geo(), Q.reql_geo()) :: Q.t() 525 | operate_on_two_args(:includes, 164) 526 | 527 | @doc """ 528 | Tests whether two geometry objects intersect with one another. When applied to 529 | a sequence of geometry objects, intersects acts as a filter, returning a 530 | sequence of objects from the sequence that intersect with the argument. 531 | """ 532 | @spec intersects(Q.reql_geo(), Q.reql_geo()) :: Q.t() 533 | operate_on_two_args(:intersects, 163) 534 | 535 | @doc """ 536 | Construct a geometry object of type Line. The line can be specified in one of 537 | two ways: 538 | 539 | - Two or more two-item arrays, specifying latitude and longitude numbers of the 540 | line’s vertices; 541 | - Two or more Point objects specifying the line’s vertices. 542 | """ 543 | @spec line([Q.reql_geo()]) :: Q.t() 544 | operate_on_list(:line, 160) 545 | 546 | @doc """ 547 | Construct a geometry object of type Point. The point is specified by two 548 | floating point numbers, the longitude (−180 to 180) and latitude (−90 to 90) of 549 | the point on a perfect sphere. 550 | """ 551 | @spec point(Q.reql_geo()) :: Q.t() 552 | def point({la, lo}), do: point(la, lo) 553 | operate_on_two_args(:point, 159) 554 | 555 | @doc """ 556 | Construct a geometry object of type Polygon. The Polygon can be specified in 557 | one of two ways: 558 | 559 | Three or more two-item arrays, specifying latitude and longitude numbers of the 560 | polygon’s vertices; 561 | * Three or more Point objects specifying the polygon’s vertices. 
562 | * Longitude (−180 to 180) and latitude (−90 to 90) of vertices are plotted on a
563 | perfect sphere. See Geospatial support for more information on ReQL’s
564 | coordinate system.
565 | 
566 | If the last point does not specify the same coordinates as the first point,
567 | polygon will close the polygon by connecting them. You cannot directly
568 | construct a polygon with holes in it using polygon, but you can use polygon_sub
569 | to use a second polygon within the interior of the first to define a hole.
570 | """
571 | @spec polygon([Q.reql_geo()]) :: Q.t()
572 | operate_on_list(:polygon, 161)
573 | 
574 | @doc """
575 | Use polygon2 to “punch out” a hole in polygon1. polygon2 must be completely
576 | contained within polygon1 and must have no holes itself (it must not be the
577 | output of polygon_sub itself).
578 | """
579 | @spec polygon_sub(Q.reql_geo(), Q.reql_geo()) :: Q.t()
580 | operate_on_two_args(:polygon_sub, 171)
581 | 
582 | #
583 | # Joins Queries
584 | #
585 | 
586 | @doc """
587 | Returns an inner join of two sequences. The returned sequence represents an
588 | intersection of the left-hand sequence and the right-hand sequence: each row of
589 | the left-hand sequence will be compared with each row of the right-hand
590 | sequence to find all pairs of rows which satisfy the predicate. Each matched
591 | pair of rows of both sequences is combined into a result row. In most cases,
592 | you will want to follow the join with `zip` to combine the left and right results.
593 | 
594 | Note that `inner_join` is slower and much less efficient than using `eq_join` or
595 | `flat_map` with `get_all`. You should avoid using `inner_join` in commands when
596 | possible.
597 | 
598 | iex> table("people") |> inner_join(
599 | table("phone_numbers"), &(eq(&1["id"], &2["person_id"]))
600 | ) |> run
601 | 
602 | """
603 | @spec inner_join(Q.reql_array(), Q.reql_array(), Q.reql_func2()) :: Q.t()
604 | operate_on_three_args(:inner_join, 48)
605 | 
606 | @doc """
607 | Returns a left outer join of two sequences. The returned sequence represents
608 | a union of the left-hand sequence and the right-hand sequence: all documents in
609 | the left-hand sequence will be returned, each matched with a document in the
610 | right-hand sequence if one satisfies the predicate condition. In most cases,
611 | you will want to follow the join with `zip` to combine the left and right results.
612 | 
613 | Note that `outer_join` is slower and much less efficient than using `flat_map`
614 | with `get_all`. You should avoid using `outer_join` in commands when possible.
615 | 
616 | iex> table("people") |> outer_join(
617 | table("phone_numbers"), &(eq(&1["id"], &2["person_id"]))
618 | ) |> run
619 | 
620 | """
621 | @spec outer_join(Q.reql_array(), Q.reql_array(), Q.reql_func2()) :: Q.t()
622 | operate_on_three_args(:outer_join, 49)
623 | 
624 | @doc """
625 | Join tables using a field on the left-hand sequence matching primary keys or
626 | secondary indexes on the right-hand table. `eq_join` is more efficient than other
627 | ReQL join types, and operates much faster. Documents in the result set consist
628 | of pairs of left-hand and right-hand documents, matched when the field on the
629 | left-hand side exists and is non-null and an entry with that field’s value
630 | exists in the specified index on the right-hand side.
631 | 
632 | The result set of `eq_join` is a stream or array of objects.
Each object in the 633 | returned set will be an object of the form `{ left: <left-document>, right: 634 | <right-document> }`, where the values of left and right will be the joined 635 | documents. Use the zip command to merge the left and right fields together. 636 | 637 | iex> table("people") |> eq_join( 638 | "id", table("phone_numbers"), %{index: "person_id"} 639 | ) |> run 640 | 641 | """ 642 | @spec eq_join(Q.reql_array(), Q.reql_string(), Q.reql_array(), Keyword.t()) :: Q.t() 643 | operate_on_three_args(:eq_join, 50, opts: true) 644 | 645 | @doc """ 646 | Used to ‘zip’ up the result of a join by merging the ‘right’ fields into 647 | ‘left’ fields of each member of the sequence. 648 | 649 | iex> table("people") |> eq_join( 650 | "id", table("phone_numbers"), %{index: "person_id"} 651 | ) |> zip |> run 652 | 653 | """ 654 | @spec zip(Q.reql_array()) :: Q.t() 655 | operate_on_single_arg(:zip, 72) 656 | 657 | # 658 | # Math and Logic Queries 659 | # 660 | 661 | @doc """ 662 | Sum two numbers, concatenate two strings, or concatenate 2 arrays. 663 | 664 | iex> add(1, 2) |> run conn 665 | %RethinkDB.Record{data: 3} 666 | 667 | iex> add("hello", " world") |> run conn 668 | %RethinkDB.Record{data: "hello world"} 669 | 670 | iex> add([1,2], [3,4]) |> run conn 671 | %RethinkDB.Record{data: [1,2,3,4]} 672 | 673 | """ 674 | @spec add(Q.reql_number() | Q.reql_string(), Q.reql_number() | Q.reql_string()) :: Q.t() 675 | operate_on_two_args(:add, 24) 676 | 677 | @doc """ 678 | Add multiple values or concatenate multiple strings or arrays. 679 | 680 | iex> add([1, 2]) |> run conn 681 | %RethinkDB.Record{data: 3} 682 | 683 | iex> add(["hello", " world"]) |> run 684 | %RethinkDB.Record{data: "hello world"} 685 | 686 | iex> add(args([10,20,30])) |> run conn 687 | %RethinkDB.Record{data: 60} 688 | 689 | iex> add(args([[1, 2], [3, 4]])) |> run conn 690 | %RethinkDB.Record{data: [1, 2, 3, 4]} 691 | 692 | """ 693 | @spec add([Q.reql_number() | Q.reql_string() | Q.reql_array()] | Q.t()) :: Q.t() 694 | operate_on_list(:add, 24) 695 | operate_on_single_arg(:add, 24) 696 | 697 | @doc """ 698 | Subtract two numbers. 699 | 700 | iex> sub(1, 2) |> run conn 701 | %RethinkDB.Record{data: -1} 702 | 703 | """ 704 | @spec sub(Q.reql_number(), Q.reql_number()) :: Q.t() 705 | operate_on_two_args(:sub, 25) 706 | 707 | @doc """ 708 | Subtract multiple values. Left associative. 709 | 710 | iex> sub([9, 1, 2]) |> run conn 711 | %RethinkDB.Record{data: 6} 712 | 713 | """ 714 | @spec sub([Q.reql_number()]) :: Q.t() 715 | operate_on_list(:sub, 25) 716 | 717 | @doc """ 718 | Multiply two numbers, or make a periodic array. 719 | 720 | iex> mul(2,3) |> run conn 721 | %RethinkDB.Record{data: 6} 722 | 723 | iex> mul([1,2], 2) |> run conn 724 | %RethinkDB.Record{data: [1,2,1,2]} 725 | 726 | """ 727 | @spec mul(Q.reql_number() | Q.reql_array(), Q.reql_number() | Q.reql_array()) :: Q.t() 728 | operate_on_two_args(:mul, 26) 729 | 730 | @doc """ 731 | Multiply multiple values. 732 | 733 | iex> mul([2,3,4]) |> run conn 734 | %RethinkDB.Record{data: 24} 735 | 736 | """ 737 | @spec mul([Q.reql_number() | Q.reql_array()]) :: Q.t() 738 | operate_on_list(:mul, 26) 739 | 740 | @doc """ 741 | Divide two numbers. 742 | 743 | iex> divide(12, 4) |> run conn 744 | %RethinkDB.Record{data: 3} 745 | 746 | """ 747 | @spec divide(Q.reql_number(), Q.reql_number()) :: Q.t() 748 | operate_on_two_args(:divide, 27) 749 | 750 | @doc """ 751 | Divide a list of numbers. Left associative. 
752 | 
753 | iex> divide([12, 2, 3]) |> run conn
754 | %RethinkDB.Record{data: 2}
755 | 
756 | """
757 | @spec divide([Q.reql_number()]) :: Q.t()
758 | operate_on_list(:divide, 27)
759 | 
760 | @doc """
761 | Find the remainder when dividing two numbers.
762 | 
763 | iex> mod(23, 4) |> run conn
764 | %RethinkDB.Record{data: 3}
765 | 
766 | """
767 | @spec mod(Q.reql_number(), Q.reql_number()) :: Q.t()
768 | operate_on_two_args(:mod, 28)
769 | 
770 | @doc """
771 | Compute the logical “and” of two values.
772 | 
773 | iex> and_r(true, true) |> run conn
774 | %RethinkDB.Record{data: true}
775 | 
776 | iex> and_r(false, true) |> run conn
777 | %RethinkDB.Record{data: false}
778 | """
779 | @spec and_r(Q.reql_bool(), Q.reql_bool()) :: Q.t()
780 | operate_on_two_args(:and_r, 67)
781 | 
782 | @doc """
783 | Compute the logical “and” of all values in a list.
784 | 
785 | iex> and_r([true, true, true]) |> run conn
786 | %RethinkDB.Record{data: true}
787 | 
788 | iex> and_r([false, true, true]) |> run conn
789 | %RethinkDB.Record{data: false}
790 | """
791 | @spec and_r([Q.reql_bool()]) :: Q.t()
792 | operate_on_list(:and_r, 67)
793 | 
794 | @doc """
795 | Compute the logical “or” of two values.
796 | 
797 | iex> or_r(true, false) |> run conn
798 | %RethinkDB.Record{data: true}
799 | 
800 | iex> or_r(false, false) |> run conn
801 | %RethinkDB.Record{data: false}
802 | 
803 | """
804 | @spec or_r(Q.reql_bool(), Q.reql_bool()) :: Q.t()
805 | operate_on_two_args(:or_r, 66)
806 | 
807 | @doc """
808 | Compute the logical “or” of all values in a list.
809 | 
810 | iex> or_r([true, true, true]) |> run conn
811 | %RethinkDB.Record{data: true}
812 | 
813 | iex> or_r([false, true, true]) |> run conn
814 | %RethinkDB.Record{data: false}
815 | 
816 | """
817 | @spec or_r([Q.reql_bool()]) :: Q.t()
818 | operate_on_list(:or_r, 66)
819 | 
820 | @doc """
821 | Test if two values are equal.
822 | 
823 | iex> eq(1,1) |> run conn
824 | %RethinkDB.Record{data: true}
825 | 
826 | iex> eq(1, 2) |> run conn
827 | %RethinkDB.Record{data: false}
828 | """
829 | @spec eq(Q.reql_datum(), Q.reql_datum()) :: Q.t()
830 | operate_on_two_args(:eq, 17)
831 | 
832 | @doc """
833 | Test if all values in a list are equal.
834 | 
835 | iex> eq([2, 2, 2]) |> run conn
836 | %RethinkDB.Record{data: true}
837 | 
838 | iex> eq([2, 1, 2]) |> run conn
839 | %RethinkDB.Record{data: false}
840 | """
841 | @spec eq([Q.reql_datum()]) :: Q.t()
842 | operate_on_list(:eq, 17)
843 | 
844 | @doc """
845 | Test if two values are not equal.
846 | 
847 | iex> ne(1,1) |> run conn
848 | %RethinkDB.Record{data: false}
849 | 
850 | iex> ne(1, 2) |> run conn
851 | %RethinkDB.Record{data: true}
852 | """
853 | @spec ne(Q.reql_datum(), Q.reql_datum()) :: Q.t()
854 | operate_on_two_args(:ne, 18)
855 | 
856 | @doc """
857 | Test if all values in a list are not equal.
858 | 
859 | iex> ne([2, 2, 2]) |> run conn
860 | %RethinkDB.Record{data: false}
861 | 
862 | iex> ne([2, 1, 2]) |> run conn
863 | %RethinkDB.Record{data: true}
864 | """
865 | @spec ne([Q.reql_datum()]) :: Q.t()
866 | operate_on_list(:ne, 18)
867 | 
868 | @doc """
869 | Test if one value is less than the other.
870 | 
871 | iex> lt(2,1) |> run conn
872 | %RethinkDB.Record{data: false}
873 | 
874 | iex> lt(1, 2) |> run conn
875 | %RethinkDB.Record{data: true}
876 | """
877 | @spec lt(Q.reql_datum(), Q.reql_datum()) :: Q.t()
878 | operate_on_two_args(:lt, 19)
879 | 
880 | @doc """
881 | Test if all values in a list are less than the next. Left associative.
882 | 
883 | iex> lt([1, 4, 2]) |> run conn
884 | %RethinkDB.Record{data: false}
885 | 
886 | iex> lt([1, 4, 5]) |> run conn
887 | %RethinkDB.Record{data: true}
888 | """
889 | @spec lt([Q.reql_datum()]) :: Q.t()
890 | operate_on_list(:lt, 19)
891 | 
892 | @doc """
893 | Test if one value is less than or equal to the other.
894 | 
895 | iex> le(1,1) |> run conn
896 | %RethinkDB.Record{data: true}
897 | 
898 | iex> le(1, 2) |> run conn
899 | %RethinkDB.Record{data: true}
900 | """
901 | @spec le(Q.reql_datum(), Q.reql_datum()) :: Q.t()
902 | operate_on_two_args(:le, 20)
903 | 
904 | @doc """
905 | Test if all values in a list are less than or equal to the next. Left associative.
906 | 
907 | iex> le([1, 4, 2]) |> run conn
908 | %RethinkDB.Record{data: false}
909 | 
910 | iex> le([1, 4, 4]) |> run conn
911 | %RethinkDB.Record{data: true}
912 | """
913 | @spec le([Q.reql_datum()]) :: Q.t()
914 | operate_on_list(:le, 20)
915 | 
916 | @doc """
917 | Test if one value is greater than the other.
918 | 
919 | iex> gt(1,2) |> run conn
920 | %RethinkDB.Record{data: false}
921 | 
922 | iex> gt(2,1) |> run conn
923 | %RethinkDB.Record{data: true}
924 | """
925 | @spec gt(Q.reql_datum(), Q.reql_datum()) :: Q.t()
926 | operate_on_two_args(:gt, 21)
927 | 
928 | @doc """
929 | Test if all values in a list are greater than the next. Left associative.
930 | 
931 | iex> gt([1, 4, 2]) |> run conn
932 | %RethinkDB.Record{data: false}
933 | 
934 | iex> gt([10, 4, 2]) |> run conn
935 | %RethinkDB.Record{data: true}
936 | """
937 | @spec gt([Q.reql_datum()]) :: Q.t()
938 | operate_on_list(:gt, 21)
939 | 
940 | @doc """
941 | Test if one value is greater than or equal to the other.
942 | 
943 | iex> ge(1,1) |> run conn
944 | %RethinkDB.Record{data: true}
945 | 
946 | iex> ge(2, 1) |> run conn
947 | %RethinkDB.Record{data: true}
948 | """
949 | @spec ge(Q.reql_datum(), Q.reql_datum()) :: Q.t()
950 | operate_on_two_args(:ge, 22)
951 | 
952 | @doc """
953 | Test if all values in a list are greater than or equal to the next. Left associative.
954 | 
955 | iex> ge([1, 4, 2]) |> run conn
956 | %RethinkDB.Record{data: false}
957 | 
958 | iex> ge([10, 4, 4]) |> run conn
959 | %RethinkDB.Record{data: true}
960 | """
961 | @spec ge([Q.reql_datum()]) :: Q.t()
962 | operate_on_list(:ge, 22)
963 | 
964 | @doc """
965 | Compute the logical inverse (not) of an expression.
966 | 
967 | iex> not_r(true) |> run conn
968 | %RethinkDB.Record{data: false}
969 | 
970 | """
971 | @spec not_r(Q.reql_bool()) :: Q.t()
972 | operate_on_single_arg(:not_r, 23)
973 | 
974 | @doc """
975 | Generate a random float between 0 and 1.
976 | 
977 | iex> random() |> run conn
978 | %RethinkDB.Record{data: 0.43}
979 | 
980 | """
981 | @spec random :: Q.t()
982 | operate_on_zero_args(:random, 151)
983 | 
984 | @doc """
985 | Generate a random value in the range [0,upper). If upper is an integer then the
986 | random value will be an integer. If upper is a float it will be a float.
987 | 
988 | iex> random(5) |> run conn
989 | %RethinkDB.Record{data: 3}
990 | 
991 | iex> random(5.0) |> run conn
992 | %RethinkDB.Record{data: 3.7}
993 | 
994 | """
995 | @spec random(Q.reql_number()) :: Q.t()
996 | def random(upper) when is_float(upper), do: random(upper, float: true)
997 | operate_on_single_arg(:random, 151, opts: true)
998 | 
999 | @doc """
1000 | Generate a random value in the range [lower,upper). If both args are integers then the
1001 | random value will be an integer. If either of them is a float it will be a float.
1002 | 1003 | iex> random(5, 10) |> run conn 1004 | %RethinkDB.Record{data: 8} 1005 | 1006 | iex> random(5.0, 15.0,) |> run conn 1007 | %RethinkDB.Record{data: 8.34} 1008 | 1009 | """ 1010 | @spec random(Q.reql_number(), Q.reql_number()) :: Q.t() 1011 | def random(lower, upper) when is_float(lower) or is_float(upper) do 1012 | random(lower, upper, float: true) 1013 | end 1014 | 1015 | operate_on_two_args(:random, 151, opts: true) 1016 | 1017 | @doc """ 1018 | Rounds the given value to the nearest whole integer. 1019 | 1020 | For example, values of 1.0 up to but not including 1.5 will return 1.0, similar to floor; values of 1.5 up to 2.0 will return 2.0, similar to ceil. 1021 | """ 1022 | @spec round_r(Q.reql_number()) :: Q.t() 1023 | operate_on_single_arg(:round_r, 185) 1024 | 1025 | @doc """ 1026 | Rounds the given value up, returning the smallest integer value greater than or equal to the given value (the value’s ceiling). 1027 | """ 1028 | @spec ceil(Q.reql_number()) :: Q.t() 1029 | operate_on_single_arg(:ceil, 184) 1030 | 1031 | @doc """ 1032 | Rounds the given value down, returning the largest integer value less than or equal to the given value (the value’s floor). 1033 | """ 1034 | @spec floor(Q.reql_number()) :: Q.t() 1035 | operate_on_single_arg(:floor, 183) 1036 | 1037 | # 1038 | # Selection Queries 1039 | # 1040 | 1041 | @doc """ 1042 | Reference a database. 1043 | """ 1044 | @spec db(Q.reql_string()) :: Q.t() 1045 | operate_on_single_arg(:db, 14) 1046 | 1047 | @doc """ 1048 | Return all documents in a table. Other commands may be chained after table to 1049 | return a subset of documents (such as get and filter) or perform further 1050 | processing. 1051 | 1052 | There are two optional arguments. 1053 | 1054 | * useOutdated: if true, this allows potentially out-of-date data to be returned, 1055 | with potentially faster reads. It also allows you to perform reads from a 1056 | secondary replica if a primary has failed. Default false. 1057 | * identifierFormat: possible values are name and uuid, with a default of name. If 1058 | set to uuid, then system tables will refer to servers, databases and tables by 1059 | UUID rather than name. (This only has an effect when used with system tables.) 1060 | """ 1061 | @spec table(Q.reql_string(), Q.reql_opts()) :: Q.t() 1062 | @spec table(Q.t(), Q.reql_string(), Q.reql_opts()) :: Q.t() 1063 | operate_on_single_arg(:table, 15, opts: true) 1064 | operate_on_two_args(:table, 15, opts: true) 1065 | 1066 | @doc """ 1067 | Get a document by primary key. 1068 | 1069 | If no document exists with that primary key, get will return nil. 1070 | """ 1071 | @spec get(Q.t(), Q.reql_datum()) :: Q.t() 1072 | operate_on_two_args(:get, 16) 1073 | 1074 | @doc """ 1075 | Get all documents where the given value matches the value of the requested index. 1076 | """ 1077 | @spec get_all(Q.t(), Q.reql_array()) :: Q.t() 1078 | operate_on_seq_and_list(:get_all, 78, opts: true) 1079 | operate_on_two_args(:get_all, 78, opts: true) 1080 | 1081 | @doc """ 1082 | Get all documents between two keys. Accepts three optional arguments: index, 1083 | left_bound, and right_bound. If index is set to the name of a secondary index, 1084 | between will return all documents where that index’s value is in the specified 1085 | range (it uses the primary key by default). left_bound or right_bound may be 1086 | set to open or closed to indicate whether or not to include that endpoint of 1087 | the range (by default, left_bound is closed and right_bound is open). 
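For example, sketches against a hypothetical "users" table, first on the primary key and then on a hypothetical secondary index "name":

    iex> table("users") |> between(10, 20) |> run conn
    iex> table("users") |> between("a", "m", %{index: "name"}) |> run conn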
1088 | """ 1089 | @spec between(Q.reql_array(), Q.t(), Q.t()) :: Q.t() 1090 | operate_on_three_args(:between, 182, opts: true) 1091 | 1092 | @doc """ 1093 | Get all the documents for which the given predicate is true. 1094 | 1095 | filter can be called on a sequence, selection, or a field containing an array 1096 | of elements. The return type is the same as the type on which the function was 1097 | called on. 1098 | 1099 | The body of every filter is wrapped in an implicit .default(False), which means 1100 | that if a non-existence errors is thrown (when you try to access a field that 1101 | does not exist in a document), RethinkDB will just ignore the document. The 1102 | default value can be changed by passing the named argument default. Setting 1103 | this optional argument to r.error() will cause any non-existence errors to 1104 | return a RqlRuntimeError. 1105 | """ 1106 | @spec filter(Q.reql_array(), Q.t()) :: Q.t() 1107 | operate_on_two_args(:filter, 39, opts: true) 1108 | 1109 | # 1110 | # String Manipulation Queries 1111 | # 1112 | 1113 | @doc """ 1114 | Checks a string for matches. 1115 | 1116 | Example: 1117 | 1118 | iex> "hello world" |> match("hello") |> run conn 1119 | iex> "hello world" |> match(~r(hello)) |> run conn 1120 | 1121 | """ 1122 | @spec match(Q.reql_string(), Regex.t() | Q.reql_string()) :: Q.t() 1123 | def match(string, regex = %Regex{}), do: match(string, Regex.source(regex)) 1124 | operate_on_two_args(:match, 97) 1125 | 1126 | @doc """ 1127 | Split a `string` on whitespace. 1128 | 1129 | iex> "abracadabra" |> split |> run conn 1130 | %RethinkDB.Record{data: ["abracadabra"]} 1131 | """ 1132 | @spec split(Q.reql_string()) :: Q.t() 1133 | operate_on_single_arg(:split, 149) 1134 | 1135 | @doc """ 1136 | Split a `string` on `separator`. 1137 | 1138 | iex> "abra-cadabra" |> split("-") |> run conn 1139 | %RethinkDB.Record{data: ["abra", "cadabra"]} 1140 | """ 1141 | @spec split(Q.reql_string(), Q.reql_string()) :: Q.t() 1142 | operate_on_two_args(:split, 149) 1143 | 1144 | @doc """ 1145 | Split a `string` with a given `separator` into `max_result` segments. 1146 | 1147 | iex> "a-bra-ca-da-bra" |> split("-", 2) |> run conn 1148 | %RethinkDB.Record{data: ["a", "bra", "ca-da-bra"]} 1149 | 1150 | """ 1151 | @spec split(Q.reql_string(), Q.reql_string() | nil, integer) :: Q.t() 1152 | operate_on_three_args(:split, 149) 1153 | 1154 | @doc """ 1155 | Convert a string to all upper case. 1156 | 1157 | iex> "hi" |> upcase |> run conn 1158 | %RethinkDB.Record{data: "HI"} 1159 | 1160 | """ 1161 | @spec upcase(Q.reql_string()) :: Q.t() 1162 | operate_on_single_arg(:upcase, 141) 1163 | 1164 | @doc """ 1165 | Convert a string to all down case. 1166 | 1167 | iex> "Hi" |> downcase |> run conn 1168 | %RethinkDB.Record{data: "hi"} 1169 | 1170 | """ 1171 | @spec downcase(Q.reql_string()) :: Q.t() 1172 | operate_on_single_arg(:downcase, 142) 1173 | 1174 | # 1175 | # Table Functions 1176 | # 1177 | 1178 | @doc """ 1179 | Create a table. A RethinkDB table is a collection of JSON documents. 1180 | 1181 | If successful, the command returns an object with two fields: 1182 | 1183 | * tables_created: always 1. 1184 | * config_changes: a list containing one two-field object, old_val and new_val: 1185 | * old_val: always nil. 1186 | * new_val: the table’s new config value. 1187 | 1188 | If a table with the same name already exists, the command throws 1189 | RqlRuntimeError. 1190 | 1191 | Note: Only alphanumeric characters and underscores are valid for the table name. 
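For example (table and database names are hypothetical; the available options are listed below):

    iex> table_create("users") |> run conn
    iex> db("blog") |> table_create("posts", %{primary_key: "slug"}) |> run conn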
1192 | 1193 | When creating a table you can specify the following options: 1194 | 1195 | * primary_key: the name of the primary key. The default primary key is id. 1196 | * durability: if set to soft, writes will be acknowledged by the server 1197 | immediately and flushed to disk in the background. The default is hard: 1198 | acknowledgment of writes happens after data has been written to disk. 1199 | * shards: the number of shards, an integer from 1-32. Defaults to 1. 1200 | * replicas: either an integer or a mapping object. Defaults to 1. 1201 | If replicas is an integer, it specifies the number of replicas per shard. 1202 | Specifying more replicas than there are servers will return an error. 1203 | If replicas is an object, it specifies key-value pairs of server tags and the 1204 | number of replicas to assign to those servers: {:tag1 => 2, :tag2 => 4, :tag3 1205 | => 2, ...}. 1206 | * primary_replica_tag: the primary server specified by its server tag. Required 1207 | if replicas is an object; the tag must be in the object. This must not be 1208 | specified if replicas is an integer. 1209 | The data type of a primary key is usually a string (like a UUID) or a number, 1210 | but it can also be a time, binary object, boolean or an array. It cannot be an 1211 | object. 1212 | """ 1213 | @spec table_create(Q.t(), Q.reql_string(), Q.reql_opts()) :: Q.t() 1214 | operate_on_single_arg(:table_create, 60, opts: true) 1215 | operate_on_two_args(:table_create, 60, opts: true) 1216 | 1217 | @doc """ 1218 | Drop a table. The table and all its data will be deleted. 1219 | 1220 | If successful, the command returns an object with two fields: 1221 | 1222 | * tables_dropped: always 1. 1223 | * config_changes: a list containing one two-field object, old_val and new_val: 1224 | * old_val: the dropped table’s config value. 1225 | * new_val: always nil. 1226 | 1227 | If the given table does not exist in the database, the command throws RqlRuntimeError. 1228 | """ 1229 | @spec table_drop(Q.t(), Q.reql_string()) :: Q.t() 1230 | operate_on_single_arg(:table_drop, 61) 1231 | operate_on_two_args(:table_drop, 61) 1232 | 1233 | @doc """ 1234 | List all table names in a database. The result is a list of strings. 1235 | """ 1236 | @spec table_list(Q.t()) :: Q.t() 1237 | operate_on_zero_args(:table_list, 62) 1238 | operate_on_single_arg(:table_list, 62) 1239 | 1240 | @doc """ 1241 | Create a new secondary index on a table. Secondary indexes improve the speed of 1242 | many read queries at the slight cost of increased storage space and decreased 1243 | write performance. For more information about secondary indexes, read the 1244 | article “Using secondary indexes in RethinkDB.” 1245 | 1246 | RethinkDB supports different types of secondary indexes: 1247 | 1248 | * Simple indexes based on the value of a single field. 1249 | * Compound indexes based on multiple fields. 1250 | * Multi indexes based on arrays of values. 1251 | * Geospatial indexes based on indexes of geometry objects, created when the geo 1252 | optional argument is true. 1253 | * Indexes based on arbitrary expressions. 1254 | 1255 | The index_function can be an anonymous function or a binary representation 1256 | obtained from the function field of index_status. 1257 | 1258 | If successful, create_index will return an object of the form {:created => 1}. 1259 | If an index by that name already exists on the table, a RqlRuntimeError will be 1260 | thrown. 
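For example, a simple index, an index computed from other fields, and a geospatial index (table and field names are hypothetical):

    iex> table("users") |> index_create("email") |> run conn
    iex> table("users") |> index_create("full_name", fn doc -> add(doc["first_name"], doc["last_name"]) end) |> run conn
    iex> table("places") |> index_create("location", %{geo: true}) |> run conn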
1261 | """ 1262 | @spec index_create(Q.t(), Q.reql_string(), Q.reql_func1(), Q.reql_opts()) :: Q.t() 1263 | operate_on_two_args(:index_create, 75, opts: true) 1264 | operate_on_three_args(:index_create, 75, opts: true) 1265 | 1266 | @doc """ 1267 | Delete a previously created secondary index of this table. 1268 | """ 1269 | @spec index_drop(Q.t(), Q.reql_string()) :: Q.t() 1270 | operate_on_two_args(:index_drop, 76) 1271 | 1272 | @doc """ 1273 | List all the secondary indexes of this table. 1274 | """ 1275 | @spec index_list(Q.t()) :: Q.t() 1276 | operate_on_single_arg(:index_list, 77) 1277 | 1278 | @doc """ 1279 | Rename an existing secondary index on a table. If the optional argument 1280 | overwrite is specified as true, a previously existing index with the new name 1281 | will be deleted and the index will be renamed. If overwrite is false (the 1282 | default) an error will be raised if the new index name already exists. 1283 | 1284 | The return value on success will be an object of the format {:renamed => 1}, or 1285 | {:renamed => 0} if the old and new names are the same. 1286 | 1287 | An error will be raised if the old index name does not exist, if the new index 1288 | name is already in use and overwrite is false, or if either the old or new 1289 | index name are the same as the primary key field name. 1290 | """ 1291 | @spec index_rename(Q.t(), Q.reql_string(), Q.reql_string(), Q.reql_opts()) :: Q.t() 1292 | operate_on_three_args(:index_rename, 156, opts: true) 1293 | 1294 | @doc """ 1295 | Get the status of the specified indexes on this table, or the status of all 1296 | indexes on this table if no indexes are specified. 1297 | """ 1298 | @spec index_status(Q.t(), Q.reql_string() | Q.reql_array()) :: Q.t() 1299 | operate_on_single_arg(:index_status, 139) 1300 | operate_on_seq_and_list(:index_status, 139) 1301 | operate_on_two_args(:index_status, 139) 1302 | 1303 | @doc """ 1304 | Wait for the specified indexes on this table to be ready, or for all indexes on 1305 | this table to be ready if no indexes are specified. 1306 | """ 1307 | @spec index_wait(Q.t(), Q.reql_string() | Q.reql_array()) :: Q.t() 1308 | operate_on_single_arg(:index_wait, 140) 1309 | operate_on_seq_and_list(:index_wait, 140) 1310 | operate_on_two_args(:index_wait, 140) 1311 | 1312 | # 1313 | # Writing Data Queries 1314 | # 1315 | 1316 | @doc """ 1317 | Insert documents into a table. Accepts a single document or an array of 1318 | documents. 1319 | 1320 | The optional arguments are: 1321 | 1322 | * durability: possible values are hard and soft. This option will override the 1323 | table or query’s durability setting (set in run). In soft durability mode 1324 | Rethink_dB will acknowledge the write immediately after receiving and caching 1325 | it, but before the write has been committed to disk. 1326 | * return_changes: if set to True, return a changes array consisting of 1327 | old_val/new_val objects describing the changes made. 1328 | * conflict: Determine handling of inserting documents with the same primary key 1329 | as existing entries. Possible values are "error", "replace" or "update". 1330 | * "error": Do not insert the new document and record the conflict as an error. 1331 | This is the default. 1332 | * "replace": Replace the old document in its entirety with the new one. 1333 | * "update": Update fields of the old document with fields from the new one. 
1334 | * `lambda(id, old_doc, new_doc) :: resolved_doc`: a function that receives the 1335 | id, old and new documents as arguments and returns a document which will be 1336 | inserted in place of the conflicted one. 1337 | Insert returns an object that contains the following attributes: 1338 | 1339 | * inserted: the number of documents successfully inserted. 1340 | * replaced: the number of documents updated when conflict is set to "replace" or 1341 | "update". 1342 | * unchanged: the number of documents whose fields are identical to existing 1343 | documents with the same primary key when conflict is set to "replace" or 1344 | "update". 1345 | * errors: the number of errors encountered while performing the insert. 1346 | * first_error: If errors were encountered, contains the text of the first error. 1347 | * deleted and skipped: 0 for an insert operation. 1348 | * generated_keys: a list of generated primary keys for inserted documents whose 1349 | primary keys were not specified (capped to 100,000). 1350 | * warnings: if the field generated_keys is truncated, you will get the warning 1351 | “Too many generated keys (), array truncated to 100000.”. 1352 | * changes: if return_changes is set to True, this will be an array of objects, 1353 | one for each objected affected by the insert operation. Each object will have 1354 | * two keys: {"new_val": , "old_val": None}. 1355 | """ 1356 | @spec insert(Q.t(), Q.reql_obj() | Q.reql_array(), Keyword.t()) :: Q.t() 1357 | operate_on_two_args(:insert, 56, opts: true) 1358 | 1359 | @doc """ 1360 | Update JSON documents in a table. Accepts a JSON document, a ReQL expression, 1361 | or a combination of the two. 1362 | 1363 | The optional arguments are: 1364 | 1365 | * durability: possible values are hard and soft. This option will override the 1366 | table or query’s durability setting (set in run). In soft durability mode 1367 | RethinkDB will acknowledge the write immediately after receiving it, but before 1368 | the write has been committed to disk. 1369 | * return_changes: if set to True, return a changes array consisting of 1370 | old_val/new_val objects describing the changes made. 1371 | * non_atomic: if set to True, executes the update and distributes the result to 1372 | replicas in a non-atomic fashion. This flag is required to perform 1373 | non-deterministic updates, such as those that require reading data from another 1374 | table. 1375 | 1376 | Update returns an object that contains the following attributes: 1377 | 1378 | * replaced: the number of documents that were updated. 1379 | * unchanged: the number of documents that would have been modified except the new 1380 | value was the same as the old value. 1381 | * skipped: the number of documents that were skipped because the document didn’t 1382 | exist. 1383 | * errors: the number of errors encountered while performing the update. 1384 | * first_error: If errors were encountered, contains the text of the first error. 1385 | * deleted and inserted: 0 for an update operation. 1386 | * changes: if return_changes is set to True, this will be an array of objects, 1387 | one for each objected affected by the update operation. Each object will have 1388 | * two keys: {"new_val": , "old_val": }. 1389 | """ 1390 | @spec update(Q.t(), Q.reql_obj(), Keyword.t()) :: Q.t() 1391 | operate_on_two_args(:update, 53, opts: true) 1392 | 1393 | @doc """ 1394 | Replace documents in a table. Accepts a JSON document or a ReQL expression, and 1395 | replaces the original document with the new one. 
The new document must have the 1396 | same primary key as the original document. 1397 | 1398 | The optional arguments are: 1399 | 1400 | * durability: possible values are hard and soft. This option will override the 1401 | table or query’s durability setting (set in run). 1402 | In soft durability mode RethinkDB will acknowledge the write immediately after 1403 | receiving it, but before the write has been committed to disk. 1404 | * return_changes: if set to True, return a changes array consisting of 1405 | old_val/new_val objects describing the changes made. 1406 | * non_atomic: if set to True, executes the replacement and distributes the result 1407 | to replicas in a non-atomic fashion. This flag is required to perform 1408 | non-deterministic updates, such as those that require reading data from another 1409 | table. 1410 | 1411 | Replace returns an object that contains the following attributes: 1412 | 1413 | * replaced: the number of documents that were replaced 1414 | * unchanged: the number of documents that would have been modified, except that 1415 | the new value was the same as the old value 1416 | * inserted: the number of new documents added. You can have new documents 1417 | inserted if you do a point-replace on a key that isn’t in the table or you do a 1418 | replace on a selection and one of the documents you are replacing has been 1419 | deleted 1420 | * deleted: the number of deleted documents when doing a replace with None 1421 | * errors: the number of errors encountered while performing the replace. 1422 | * first_error: If errors were encountered, contains the text of the first error. 1423 | * skipped: 0 for a replace operation 1424 | * changes: if return_changes is set to True, this will be an array of objects, 1425 | one for each objected affected by the replace operation. Each object will have 1426 | * two keys: {"new_val": , "old_val": }. 1427 | """ 1428 | @spec replace(Q.t(), Q.reql_obj(), Keyword.t()) :: Q.t() 1429 | operate_on_two_args(:replace, 55, opts: true) 1430 | 1431 | @doc """ 1432 | Delete one or more documents from a table. 1433 | 1434 | The optional arguments are: 1435 | 1436 | * durability: possible values are hard and soft. This option will override the 1437 | table or query’s durability setting (set in run). 1438 | In soft durability mode RethinkDB will acknowledge the write immediately after 1439 | receiving it, but before the write has been committed to disk. 1440 | * return_changes: if set to True, return a changes array consisting of 1441 | old_val/new_val objects describing the changes made. 1442 | 1443 | Delete returns an object that contains the following attributes: 1444 | 1445 | * deleted: the number of documents that were deleted. 1446 | * skipped: the number of documents that were skipped. 1447 | For example, if you attempt to delete a batch of documents, and another 1448 | concurrent query deletes some of those documents first, they will be counted as 1449 | skipped. 1450 | * errors: the number of errors encountered while performing the delete. 1451 | * first_error: If errors were encountered, contains the text of the first error. 1452 | inserted, replaced, and unchanged: all 0 for a delete operation. 1453 | * changes: if return_changes is set to True, this will be an array of objects, 1454 | one for each objected affected by the delete operation. Each object will have 1455 | * two keys: {"new_val": None, "old_val": }. 
1456 | """ 1457 | @spec delete(Q.t()) :: Q.t() 1458 | operate_on_single_arg(:delete, 54, opts: true) 1459 | 1460 | @doc """ 1461 | sync ensures that writes on a given table are written to permanent storage. 1462 | Queries that specify soft durability (durability='soft') do not give such 1463 | guarantees, so sync can be used to ensure the state of these queries. A call to 1464 | sync does not return until all previous writes to the table are persisted. 1465 | 1466 | If successful, the operation returns an object: {"synced": 1}. 1467 | 1468 | """ 1469 | @spec sync(Q.t()) :: Q.t() 1470 | operate_on_single_arg(:sync, 138) 1471 | 1472 | # 1473 | # Date and Time Queries 1474 | # 1475 | 1476 | @doc """ 1477 | Return a time object representing the current time in UTC. The command now() is 1478 | computed once when the server receives the query, so multiple instances of 1479 | r.now() will always return the same time inside a query. 1480 | """ 1481 | @spec now() :: Q.t() 1482 | operate_on_zero_args(:now, 103) 1483 | 1484 | @doc """ 1485 | Create a time object for a specific time. 1486 | 1487 | A few restrictions exist on the arguments: 1488 | 1489 | * year is an integer between 1400 and 9,999. 1490 | * month is an integer between 1 and 12. 1491 | * day is an integer between 1 and 31. 1492 | * hour is an integer. 1493 | * minutes is an integer. 1494 | * seconds is a double. Its value will be rounded to three decimal places 1495 | (millisecond-precision). 1496 | * timezone can be 'Z' (for UTC) or a string with the format ±[hh]:[mm]. 1497 | """ 1498 | @spec time(reql_number, reql_number, reql_number, reql_string) :: Q.t() 1499 | def time(year, month, day, timezone), do: %Q{query: [136, [year, month, day, timezone]]} 1500 | 1501 | @spec time( 1502 | reql_number, 1503 | reql_number, 1504 | reql_number, 1505 | reql_number, 1506 | reql_number, 1507 | reql_number, 1508 | reql_string 1509 | ) :: Q.t() 1510 | def time(year, month, day, hour, minute, second, timezone) do 1511 | %Q{query: [136, [year, month, day, hour, minute, second, timezone]]} 1512 | end 1513 | 1514 | @doc """ 1515 | Create a time object based on seconds since epoch. The first argument is a 1516 | double and will be rounded to three decimal places (millisecond-precision). 1517 | """ 1518 | @spec epoch_time(reql_number) :: Q.t() 1519 | operate_on_single_arg(:epoch_time, 101) 1520 | 1521 | @doc """ 1522 | Create a time object based on an ISO 8601 date-time string (e.g. 1523 | ‘2013-01-01T01:01:01+00:00’). We support all valid ISO 8601 formats except for 1524 | week dates. If you pass an ISO 8601 date-time without a time zone, you must 1525 | specify the time zone with the default_timezone argument. 1526 | """ 1527 | @spec iso8601(reql_string) :: Q.t() 1528 | operate_on_single_arg(:iso8601, 99, opts: true) 1529 | 1530 | @doc """ 1531 | Return a new time object with a different timezone. While the time stays the 1532 | same, the results returned by methods such as hours() will change since they 1533 | take the timezone into account. The timezone argument has to be of the ISO 8601 1534 | format. 1535 | """ 1536 | @spec in_timezone(Q.reql_time(), Q.reql_string()) :: Q.t() 1537 | operate_on_two_args(:in_timezone, 104) 1538 | 1539 | @doc """ 1540 | Return the timezone of the time object. 1541 | """ 1542 | @spec timezone(Q.reql_time()) :: Q.t() 1543 | operate_on_single_arg(:timezone, 127) 1544 | 1545 | @doc """ 1546 | Return if a time is between two other times (by default, inclusive for the 1547 | start, exclusive for the end). 
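For example, 1,500,000,000 seconds after the epoch falls in July 2017:

    iex> epoch_time(1_500_000_000) |> during(time(2017, 1, 1, "Z"), time(2018, 1, 1, "Z")) |> run conn
    %RethinkDB.Record{data: true}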
1548 | """ 1549 | @spec during(Q.reql_time(), Q.reql_time(), Q.reql_time()) :: Q.t() 1550 | operate_on_three_args(:during, 105, opts: true) 1551 | 1552 | @doc """ 1553 | Return a new time object only based on the day, month and year (ie. the same 1554 | day at 00:00). 1555 | """ 1556 | @spec date(Q.reql_time()) :: Q.t() 1557 | operate_on_single_arg(:date, 106) 1558 | 1559 | @doc """ 1560 | Return the number of seconds elapsed since the beginning of the day stored in 1561 | the time object. 1562 | """ 1563 | @spec time_of_day(Q.reql_time()) :: Q.t() 1564 | operate_on_single_arg(:time_of_day, 126) 1565 | 1566 | @doc """ 1567 | Return the year of a time object. 1568 | """ 1569 | @spec year(Q.reql_time()) :: Q.t() 1570 | operate_on_single_arg(:year, 128) 1571 | 1572 | @doc """ 1573 | Return the month of a time object as a number between 1 and 12. 1574 | """ 1575 | @spec month(Q.reql_time()) :: Q.t() 1576 | operate_on_single_arg(:month, 129) 1577 | 1578 | @doc """ 1579 | Return the day of a time object as a number between 1 and 31. 1580 | """ 1581 | @spec day(Q.reql_time()) :: Q.t() 1582 | operate_on_single_arg(:day, 130) 1583 | 1584 | @doc """ 1585 | Return the day of week of a time object as a number between 1 and 7 (following 1586 | ISO 8601 standard). 1587 | """ 1588 | @spec day_of_week(Q.reql_time()) :: Q.t() 1589 | operate_on_single_arg(:day_of_week, 131) 1590 | 1591 | @doc """ 1592 | Return the day of the year of a time object as a number between 1 and 366 1593 | (following ISO 8601 standard). 1594 | """ 1595 | @spec day_of_year(Q.reql_time()) :: Q.t() 1596 | operate_on_single_arg(:day_of_year, 132) 1597 | 1598 | @doc """ 1599 | Return the hour in a time object as a number between 0 and 23. 1600 | """ 1601 | @spec hours(Q.reql_time()) :: Q.t() 1602 | operate_on_single_arg(:hours, 133) 1603 | 1604 | @doc """ 1605 | Return the minute in a time object as a number between 0 and 59. 1606 | """ 1607 | @spec minutes(Q.reql_time()) :: Q.t() 1608 | operate_on_single_arg(:minutes, 134) 1609 | 1610 | @doc """ 1611 | Return the seconds in a time object as a number between 0 and 59.999 (double precision). 1612 | """ 1613 | @spec seconds(Q.reql_time()) :: Q.t() 1614 | operate_on_single_arg(:seconds, 135) 1615 | 1616 | @doc """ 1617 | Convert a time object to a string in ISO 8601 format. 1618 | """ 1619 | @spec to_iso8601(Q.reql_time()) :: Q.t() 1620 | operate_on_single_arg(:to_iso8601, 100) 1621 | 1622 | @doc """ 1623 | Convert a time object to its epoch time. 1624 | """ 1625 | @spec to_epoch_time(Q.reql_time()) :: Q.t() 1626 | operate_on_single_arg(:to_epoch_time, 102) 1627 | 1628 | # 1629 | # Transformations Queries 1630 | # 1631 | 1632 | @doc """ 1633 | Transform each element of one or more sequences by applying a mapping function 1634 | to them. If map is run with two or more sequences, it will iterate for as many 1635 | items as there are in the shortest sequence. 1636 | 1637 | Note that map can only be applied to sequences, not single values. If you wish 1638 | to apply a function to a single value/selection (including an array), use the 1639 | do command. 1640 | """ 1641 | @spec map(Q.reql_array(), Q.reql_func1()) :: Q.t() 1642 | operate_on_two_args(:map, 38) 1643 | 1644 | @doc """ 1645 | Plucks one or more attributes from a sequence of objects, filtering out any 1646 | objects in the sequence that do not have the specified fields. Functionally, 1647 | this is identical to has_fields followed by pluck on a sequence. 
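For example, keeping only the documents of a hypothetical "users" table that have both fields:

    iex> table("users") |> with_fields(["id", "email"]) |> run conn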
1648 | """ 1649 | @spec with_fields(Q.reql_array(), Q.reql_array()) :: Q.t() 1650 | operate_on_seq_and_list(:with_fields, 96) 1651 | 1652 | @doc """ 1653 | Concatenate one or more elements into a single sequence using a mapping function. 1654 | """ 1655 | @spec flat_map(Q.reql_array(), Q.reql_func1()) :: Q.t() 1656 | operate_on_two_args(:flat_map, 40) 1657 | operate_on_two_args(:concat_map, 40) 1658 | 1659 | @doc """ 1660 | Sort the sequence by document values of the given key(s). To specify the 1661 | ordering, wrap the attribute with either r.asc or r.desc (defaults to 1662 | ascending). 1663 | 1664 | Sorting without an index requires the server to hold the sequence in memory, 1665 | and is limited to 100,000 documents (or the setting of the array_limit option 1666 | for run). Sorting with an index can be done on arbitrarily large tables, or 1667 | after a between command using the same index. 1668 | """ 1669 | @spec order_by(Q.reql_array(), Q.reql_datum()) :: Q.t() 1670 | # XXX this is clunky, revisit this sometime 1671 | operate_on_optional_second_arg(:order_by, 41) 1672 | 1673 | @doc """ 1674 | Skip a number of elements from the head of the sequence. 1675 | """ 1676 | @spec skip(Q.reql_array(), Q.reql_number()) :: Q.t() 1677 | operate_on_two_args(:skip, 70) 1678 | 1679 | @doc """ 1680 | End the sequence after the given number of elements. 1681 | """ 1682 | @spec limit(Q.reql_array(), Q.reql_number()) :: Q.t() 1683 | operate_on_two_args(:limit, 71) 1684 | 1685 | @doc """ 1686 | Return the elements of a sequence within the specified range. 1687 | """ 1688 | @spec slice(Q.reql_array(), Q.reql_number(), Q.reql_number()) :: Q.t() 1689 | operate_on_three_args(:slice, 30, opts: true) 1690 | 1691 | @doc """ 1692 | Get the nth element of a sequence, counting from zero. If the argument is 1693 | negative, count from the last element. 1694 | """ 1695 | @spec nth(Q.reql_array(), Q.reql_number()) :: Q.t() 1696 | operate_on_two_args(:nth, 45) 1697 | 1698 | @doc """ 1699 | Get the indexes of an element in a sequence. If the argument is a predicate, 1700 | get the indexes of all elements matching it. 1701 | """ 1702 | @spec offsets_of(Q.reql_array(), Q.reql_datum()) :: Q.t() 1703 | operate_on_two_args(:offsets_of, 87) 1704 | 1705 | @doc """ 1706 | Test if a sequence is empty. 1707 | """ 1708 | @spec is_empty(Q.reql_array()) :: Q.t() 1709 | operate_on_single_arg(:is_empty, 86) 1710 | 1711 | @doc """ 1712 | Concatenate two or more sequences. 1713 | """ 1714 | @spec union(Q.reql_array(), Q.reql_array()) :: Q.t() 1715 | operate_on_two_args(:union, 44) 1716 | 1717 | @doc """ 1718 | Select a given number of elements from a sequence with uniform random 1719 | distribution. Selection is done without replacement. 1720 | 1721 | If the sequence has less than the requested number of elements (i.e., calling 1722 | sample(10) on a sequence with only five elements), sample will return the 1723 | entire sequence in a random order. 1724 | """ 1725 | @spec sample(Q.reql_array(), Q.reql_number()) :: Q.t() 1726 | operate_on_two_args(:sample, 81) 1727 | 1728 | # 1729 | # Document Manipulation Queries 1730 | # 1731 | 1732 | @doc """ 1733 | Plucks out one or more attributes from either an object or a sequence of 1734 | objects (projection). 1735 | """ 1736 | @spec pluck(Q.reql_array(), Q.reql_array() | Q.reql_string()) :: Q.t() 1737 | operate_on_two_args(:pluck, 33) 1738 | 1739 | @doc """ 1740 | The opposite of pluck; takes an object or a sequence of objects, and returns 1741 | them with the specified paths removed. 
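For example, dropping sensitive fields from a hypothetical "users" table:

    iex> table("users") |> without(["password", "password_hash"]) |> run conn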
1742 | """ 1743 | @spec without(Q.reql_array(), Q.reql_array() | Q.reql_string()) :: Q.t() 1744 | operate_on_two_args(:without, 34) 1745 | 1746 | @doc """ 1747 | Merge two or more objects together to construct a new object with properties 1748 | from all. When there is a conflict between field names, preference is given to 1749 | fields in the rightmost object in the argument list. 1750 | """ 1751 | @spec merge(Q.reql_array(), Q.reql_object() | Q.reql_func1()) :: Q.t() 1752 | operate_on_two_args(:merge, 35) 1753 | operate_on_list(:merge, 35) 1754 | operate_on_single_arg(:merge, 35) 1755 | 1756 | @doc """ 1757 | Append a value to an array. 1758 | """ 1759 | @spec append(Q.reql_array(), Q.reql_datum()) :: Q.t() 1760 | operate_on_two_args(:append, 29) 1761 | 1762 | @doc """ 1763 | Prepend a value to an array. 1764 | """ 1765 | @spec prepend(Q.reql_array(), Q.reql_datum()) :: Q.t() 1766 | operate_on_two_args(:prepend, 80) 1767 | 1768 | @doc """ 1769 | Remove the elements of one array from another array. 1770 | """ 1771 | @spec difference(Q.reql_array(), Q.reql_array()) :: Q.t() 1772 | operate_on_two_args(:difference, 95) 1773 | 1774 | @doc """ 1775 | Add a value to an array and return it as a set (an array with distinct values). 1776 | """ 1777 | @spec set_insert(Q.reql_array(), Q.reql_datum()) :: Q.t() 1778 | operate_on_two_args(:set_insert, 88) 1779 | 1780 | @doc """ 1781 | Intersect two arrays returning values that occur in both of them as a set (an 1782 | array with distinct values). 1783 | """ 1784 | @spec set_intersection(Q.reql_array(), Q.reql_datum()) :: Q.t() 1785 | operate_on_two_args(:set_intersection, 89) 1786 | 1787 | @doc """ 1788 | Add a several values to an array and return it as a set (an array with distinct 1789 | values). 1790 | """ 1791 | @spec set_union(Q.reql_array(), Q.reql_datum()) :: Q.t() 1792 | operate_on_two_args(:set_union, 90) 1793 | 1794 | @doc """ 1795 | Remove the elements of one array from another and return them as a set (an 1796 | array with distinct values). 1797 | """ 1798 | @spec set_difference(Q.reql_array(), Q.reql_datum()) :: Q.t() 1799 | operate_on_two_args(:set_difference, 91) 1800 | 1801 | @doc """ 1802 | Get a single field from an object. If called on a sequence, gets that field 1803 | from every object in the sequence, skipping objects that lack it. 1804 | """ 1805 | @spec get_field(Q.reql_obj() | Q.reql_array(), Q.reql_string()) :: Q.t() 1806 | operate_on_two_args(:get_field, 31) 1807 | 1808 | @doc """ 1809 | Test if an object has one or more fields. An object has a field if it has 1810 | that key and the key has a non-null value. For instance, the object {'a': 1811 | 1,'b': 2,'c': null} has the fields a and b. 1812 | """ 1813 | @spec has_fields(Q.reql_array(), Q.reql_array() | Q.reql_string()) :: Q.t() 1814 | operate_on_two_args(:has_fields, 32) 1815 | 1816 | @doc """ 1817 | Insert a value in to an array at a given index. Returns the modified array. 1818 | """ 1819 | @spec insert_at(Q.reql_array(), Q.reql_number(), Q.reql_datum()) :: Q.t() 1820 | operate_on_three_args(:insert_at, 82) 1821 | 1822 | @doc """ 1823 | Insert several values in to an array at a given index. Returns the modified array. 1824 | """ 1825 | @spec splice_at(Q.reql_array(), Q.reql_number(), Q.reql_datum()) :: Q.t() 1826 | operate_on_three_args(:splice_at, 85) 1827 | 1828 | @doc """ 1829 | Remove one or more elements from an array at a given index. Returns the modified array. 
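For example, removing a single element by index, and then removing the elements in an index range (end index exclusive):

    iex> delete_at([1, 2, 3, 4], 1) |> run conn
    %RethinkDB.Record{data: [1, 3, 4]}

    iex> delete_at([1, 2, 3, 4], 1, 3) |> run conn
    %RethinkDB.Record{data: [1, 4]}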
1830 | """ 1831 | @spec delete_at(Q.reql_array(), Q.reql_number(), Q.reql_number()) :: Q.t() 1832 | operate_on_two_args(:delete_at, 83) 1833 | operate_on_three_args(:delete_at, 83) 1834 | 1835 | @doc """ 1836 | Change a value in an array at a given index. Returns the modified array. 1837 | """ 1838 | @spec change_at(Q.reql_array(), Q.reql_number(), Q.reql_datum()) :: Q.t() 1839 | operate_on_three_args(:change_at, 84) 1840 | 1841 | @doc """ 1842 | Return an array containing all of the object’s keys. 1843 | """ 1844 | @spec keys(Q.reql_obj()) :: Q.t() 1845 | operate_on_single_arg(:keys, 94) 1846 | 1847 | @doc """ 1848 | Return an array containing all of the object’s values. 1849 | """ 1850 | @spec values(Q.reql_obj()) :: Q.t() 1851 | operate_on_single_arg(:values, 186) 1852 | 1853 | @doc """ 1854 | Replace an object in a field instead of merging it with an existing object in a 1855 | merge or update operation. 1856 | """ 1857 | @spec literal(Q.reql_object()) :: Q.t() 1858 | operate_on_single_arg(:literal, 137) 1859 | 1860 | @doc """ 1861 | Creates an object from a list of key-value pairs, where the keys must be 1862 | strings. r.object(A, B, C, D) is equivalent to r.expr([[A, B], [C, 1863 | D]]).coerce_to('OBJECT'). 1864 | """ 1865 | @spec object(Q.reql_array()) :: Q.t() 1866 | operate_on_list(:object, 143) 1867 | 1868 | # 1869 | # Administration 1870 | # 1871 | @spec config(Q.reql_term()) :: Q.t() 1872 | operate_on_single_arg(:config, 174) 1873 | 1874 | @spec rebalance(Q.reql_term()) :: Q.t() 1875 | operate_on_single_arg(:rebalance, 179) 1876 | 1877 | @spec reconfigure(Q.reql_term(), Q.reql_opts()) :: Q.t() 1878 | operate_on_single_arg(:reconfigure, 176, opts: true) 1879 | 1880 | @spec status(Q.reql_term()) :: Q.t() 1881 | operate_on_single_arg(:status, 175) 1882 | 1883 | @spec wait(Q.reql_term()) :: Q.t() 1884 | operate_on_single_arg(:wait, 177, opts: true) 1885 | 1886 | # 1887 | # Miscellaneous functions 1888 | # 1889 | 1890 | def make_array(array), do: %Q{query: [2, array]} 1891 | 1892 | operate_on_single_arg(:changes, 152, opts: true) 1893 | 1894 | def asc(key), do: %Q{query: [73, [key]]} 1895 | def desc(key), do: %Q{query: [74, [key]]} 1896 | 1897 | def func(f) when is_function(f) do 1898 | {_, arity} = :erlang.fun_info(f, :arity) 1899 | 1900 | args = 1901 | case arity do 1902 | 0 -> [] 1903 | _ -> Enum.map(1..arity, fn _ -> make_ref() end) 1904 | end 1905 | 1906 | params = Enum.map(args, &var/1) 1907 | 1908 | res = 1909 | case apply(f, params) do 1910 | x when is_list(x) -> make_array(x) 1911 | x -> x 1912 | end 1913 | 1914 | %Q{query: [69, [[2, args], res]]} 1915 | end 1916 | 1917 | def var(val), do: %Q{query: [10, [val]]} 1918 | def bracket(obj, key), do: %Q{query: [170, [obj, key]]} 1919 | 1920 | operate_on_zero_args(:minval, 180) 1921 | operate_on_zero_args(:maxval, 181) 1922 | end 1923 | -------------------------------------------------------------------------------- /lib/rethinkdb/query/macros.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Query.Macros do 2 | alias RethinkDB.Q 3 | alias RethinkDB.Query 4 | @moduledoc false 5 | 6 | defmacro operate_on_two_args(op, opcode, options \\ []) do 7 | opt_support = Keyword.get(options, :opts, false) 8 | 9 | quote do 10 | def unquote(op)(left, right) do 11 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)]]} 12 | end 13 | 14 | if unquote(opt_support) do 15 | def unquote(op)(left, right, opts) when is_map(opts) or is_list(opts) do 16 | %Q{query: [unquote(opcode), 
[wrap(left), wrap(right)], make_opts(opts)]} 17 | end 18 | end 19 | end 20 | end 21 | 22 | defmacro operate_on_three_args(op, opcode, options \\ []) do 23 | opt_support = Keyword.get(options, :opts, false) 24 | 25 | quote do 26 | def unquote(op)(arg1, arg2, arg3) do 27 | %Q{query: [unquote(opcode), [wrap(arg1), wrap(arg2), wrap(arg3)]]} 28 | end 29 | 30 | if unquote(opt_support) do 31 | def unquote(op)(arg1, arg2, arg3, opts) when is_map(opts) or is_list(opts) do 32 | %Q{query: [unquote(opcode), [wrap(arg1), wrap(arg2), wrap(arg3)], make_opts(opts)]} 33 | end 34 | end 35 | end 36 | end 37 | 38 | defmacro operate_on_list(op, opcode, options \\ []) do 39 | opt_support = Keyword.get(options, :opts, false) 40 | 41 | quote do 42 | def unquote(op)(args) when is_list(args) do 43 | %Q{query: [unquote(opcode), Enum.map(args, &wrap/1)]} 44 | end 45 | 46 | if unquote(opt_support) do 47 | def unquote(op)(args, opts) when is_list(args) and (is_map(opts) or is_list(opts)) do 48 | %Q{query: [unquote(opcode), Enum.map(args, &wrap/1), make_opts(opts)]} 49 | end 50 | end 51 | end 52 | end 53 | 54 | defmacro operate_on_seq_and_list(op, opcode, options \\ []) do 55 | opt_support = Keyword.get(options, :opts, false) 56 | 57 | quote do 58 | def unquote(op)(seq, args) when is_list(args) and args != [] do 59 | %Q{query: [unquote(opcode), [wrap(seq) | Enum.map(args, &wrap/1)]]} 60 | end 61 | 62 | if unquote(opt_support) do 63 | def unquote(op)(seq, args, opts) 64 | when is_list(args) and args != [] and (is_map(opts) or is_list(opts)) do 65 | %Q{query: [unquote(opcode), [wrap(seq) | Enum.map(args, &wrap/1)], make_opts(opts)]} 66 | end 67 | end 68 | end 69 | end 70 | 71 | defmacro operate_on_single_arg(op, opcode, options \\ []) do 72 | opt_support = Keyword.get(options, :opts, false) 73 | 74 | quote do 75 | def unquote(op)(arg) do 76 | %Q{query: [unquote(opcode), [wrap(arg)]]} 77 | end 78 | 79 | if unquote(opt_support) do 80 | def unquote(op)(arg, opts) when is_map(opts) or is_list(opts) do 81 | %Q{query: [unquote(opcode), [wrap(arg)], make_opts(opts)]} 82 | end 83 | end 84 | end 85 | end 86 | 87 | defmacro operate_on_optional_second_arg(op, opcode) do 88 | quote do 89 | def unquote(op)(arg) do 90 | %Q{query: [unquote(opcode), [wrap(arg)]]} 91 | end 92 | 93 | def unquote(op)(left, right = %Q{}) do 94 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)]]} 95 | end 96 | 97 | def unquote(op)(arg, opts) when is_map(opts) do 98 | %Q{query: [unquote(opcode), [wrap(arg)], opts]} 99 | end 100 | 101 | def unquote(op)(left, right, opts) when is_map(opts) do 102 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)], opts]} 103 | end 104 | 105 | def unquote(op)(left, right) do 106 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)]]} 107 | end 108 | end 109 | end 110 | 111 | defmacro operate_on_zero_args(op, opcode, options \\ []) do 112 | opt_support = Keyword.get(options, :opts, false) 113 | 114 | quote do 115 | def unquote(op)(), do: %Q{query: [unquote(opcode)]} 116 | 117 | if unquote(opt_support) do 118 | def unquote(op)(opts) when is_map(opts) or is_list(opts) do 119 | %Q{query: [unquote(opcode), make_opts(opts)]} 120 | end 121 | end 122 | end 123 | end 124 | 125 | def wrap(list) when is_list(list), do: Query.make_array(Enum.map(list, &wrap/1)) 126 | def wrap(q = %Q{}), do: q 127 | 128 | def wrap(t = %RethinkDB.Pseudotypes.Time{}) do 129 | m = Map.from_struct(t) |> Map.put_new("$reql_type$", "TIME") 130 | wrap(m) 131 | end 132 | 133 | def wrap(t = %DateTime{utc_offset: utc_offset, std_offset: std_offset}) do 134 
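# Serialize an Elixir DateTime as RethinkDB's TIME pseudotype: the epoch time in
# seconds plus a "+HH:MM"/"-HH:MM"-style timezone string built from the struct's
# utc_offset and std_offset (both given in seconds).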
| offset = utc_offset + std_offset 135 | offset_negative = offset < 0 136 | offset_hour = div(abs(offset), 3600) 137 | offset_minute = rem(abs(offset), 3600) 138 | 139 | time_zone = 140 | if offset_negative do 141 | "-" 142 | else 143 | "+" 144 | end <> 145 | String.pad_leading(Integer.to_string(offset_hour), 2, "0") <> 146 | ":" <> String.pad_leading(Integer.to_string(offset_minute), 2, "0") 147 | 148 | wrap(%{ 149 | "$reql_type$" => "TIME", 150 | "epoch_time" => DateTime.to_unix(t, :milliseconds) / 1000, 151 | "timezone" => time_zone 152 | }) 153 | end 154 | 155 | def wrap(map) when is_map(map) do 156 | Enum.map(map, fn {k, v} -> 157 | {k, wrap(v)} 158 | end) 159 | |> Enum.into(%{}) 160 | end 161 | 162 | def wrap(f) when is_function(f), do: Query.func(f) 163 | def wrap(t) when is_tuple(t), do: wrap(Tuple.to_list(t)) 164 | def wrap(data), do: data 165 | 166 | def make_opts(opts) when is_map(opts), do: wrap(opts) 167 | def make_opts(opts) when is_list(opts), do: Enum.into(opts, %{}) 168 | end 169 | -------------------------------------------------------------------------------- /lib/rethinkdb/query/term_info.json: -------------------------------------------------------------------------------- 1 | {"has_fields":32,"db_list":59,"funcall":64,"random":151,"map":38,"to_geojson":158,"index_create":75,"ungroup":150,"get_intersecting":166,"get_all":78,"saturday":112,"error":12,"or":66,"reconfigure":176,"gt":21,"day":130,"september":122,"desc":74,"min":147,"nth":45,"splice_at":85,"make_array":2,"info":79,"mod":28,"set_insert":88,"thursday":110,"zip":72,"div":27,"db_drop":58,"skip":70,"insert":56,"may":118,"wait":177,"object":143,"range":173,"http":153,"july":120,"difference":95,"february":115,"outer_join":49,"without":34,"sub":25,"table_drop":61,"geojson":157,"fold":187,"minutes":134,"includes":164,"fill":167,"tuesday":108,"default":92,"date":106,"fn":69,"during":105,"august":121,"index_rename":156,"changes":152,"ceil":184,"binary":155,"limit":71,"offsets_of":87,"reduce":37,"time":136,"var":10,"get_nearest":168,"upcase":141,"type_of":52,"hours":133,"and":67,"polygon":161,"not":23,"asc":73,"match":97,"json":98,"distance":162,"inner_join":48,"filter":39,"minval":180,"set_difference":91,"slice":30,"status":175,"june":119,"round":185,"intersects":163,"sync":138,"monday":107,"make_obj":3,"prepend":80,"pluck":33,"table":15,"between":182,"merge":35,"order_by":41,"is_empty":86,"eq_join":50,"max":148,"day_of_year":132,"floor":183,"avg":146,"march":116,"delete_at":83,"rebalance":179,"seconds":135,"coerce_to":51,"delete":54,"update":53,"index_wait":140,"literal":137,"branch":65,"javascript":11,"sample":81,"epoch_time":101,"january":114,"sunday":113,"get":16,"le":20,"december":125,"db":14,"get_field":31,"april":117,"contains":93,"lt":19,"between_deprecated":36,"datum":1,"index_status":139,"set_intersection":89,"change_at":84,"mul":26,"replace":55,"october":123,"grant":188,"add":24,"table_create":60,"to_iso8601":100,"in_timezone":104,"day_of_week":131,"timezone":127,"maxval":181,"uuid":169,"concat_map":40,"split":149,"eq":17,"year":128,"downcase":142,"friday":111,"db_create":57,"count":43,"union":44,"bracket":170,"to_json_string":172,"insert_at":82,"month":129,"set_union":90,"ge":22,"point":159,"sum":145,"wednesday":109,"line":160,"now":103,"group":144,"november":124,"to_epoch_time":102,"append":29,"table_list":62,"time_of_day":126,"distinct":42,"index_drop":76,"index_list":77,"implicit_var":13,"config":174,"args":154,"values":186,"ne":18,"keys":94,"for_each":68,"circle":165,"iso8601":99,"with_fields":96,"polyg
on_sub":171} -------------------------------------------------------------------------------- /lib/rethinkdb/response.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Record do 2 | @moduledoc false 3 | defstruct data: "", profile: nil 4 | end 5 | 6 | defmodule RethinkDB.Collection do 7 | @moduledoc false 8 | defstruct data: [], profile: nil 9 | 10 | defimpl Enumerable, for: __MODULE__ do 11 | def reduce(%{data: data}, acc, fun) do 12 | Enumerable.reduce(data, acc, fun) 13 | end 14 | 15 | def count(%{data: data}), do: Enumerable.count(data) 16 | def member?(%{data: data}, el), do: Enumerable.member?(data, el) 17 | def slice(%{data: data}), do: Enumerable.slice(data) 18 | end 19 | end 20 | 21 | defmodule RethinkDB.Feed do 22 | @moduledoc false 23 | defstruct token: nil, data: nil, pid: nil, note: nil, profile: nil, opts: nil 24 | 25 | defimpl Enumerable, for: __MODULE__ do 26 | def reduce(changes, acc, fun) do 27 | stream = 28 | Stream.unfold(changes, fn 29 | x = %RethinkDB.Feed{data: []} -> 30 | {:ok, r} = RethinkDB.next(x) 31 | {r, struct(r, data: [])} 32 | 33 | x = %RethinkDB.Feed{} -> 34 | {x, struct(x, data: [])} 35 | 36 | x = %RethinkDB.Collection{} -> 37 | {x, nil} 38 | 39 | nil -> 40 | nil 41 | end) 42 | |> Stream.flat_map(fn el -> 43 | el.data 44 | end) 45 | 46 | stream.(acc, fun) 47 | end 48 | 49 | def count(_changes), do: raise("count/1 not supported for changes") 50 | def member?(_changes, _values), do: raise("member/2 not supported for changes") 51 | def slice(_changes), do: raise("slice/1 is not supported for changes") 52 | end 53 | end 54 | 55 | defmodule RethinkDB.Response do 56 | @moduledoc false 57 | defstruct token: nil, data: "", profile: nil 58 | 59 | def parse(raw_data, token, pid, opts) do 60 | d = Poison.decode!(raw_data) 61 | data = RethinkDB.Pseudotypes.convert_reql_pseudotypes(d["r"], opts) 62 | 63 | {code, resp} = 64 | case d["t"] do 65 | 1 -> {:ok, %RethinkDB.Record{data: hd(data)}} 66 | 2 -> {:ok, %RethinkDB.Collection{data: data}} 67 | 3 -> {:ok, %RethinkDB.Feed{token: token, data: data, pid: pid, note: d["n"], opts: opts}} 68 | 4 -> {:ok, %RethinkDB.Response{token: token, data: d}} 69 | 16 -> {:error, %RethinkDB.Response{token: token, data: d}} 70 | 17 -> {:error, %RethinkDB.Response{token: token, data: d}} 71 | 18 -> {:error, %RethinkDB.Response{token: token, data: d}} 72 | end 73 | 74 | {code, %{resp | :profile => d["p"]}} 75 | end 76 | end 77 | -------------------------------------------------------------------------------- /mix.exs: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Mixfile do 2 | use Mix.Project 3 | 4 | def project do 5 | [ 6 | app: :rethinkdb, 7 | version: "0.4.0", 8 | elixir: "~> 1.0", 9 | description: "RethinkDB driver for Elixir", 10 | package: package(), 11 | deps: deps(), 12 | test_coverage: [tool: ExCoveralls] 13 | ] 14 | end 15 | 16 | def package do 17 | [ 18 | maintainers: ["Peter Hamilton"], 19 | licenses: ["MIT"], 20 | links: %{"GitHub" => "https://github.com/hamiltop/rethinkdb-elixir"} 21 | ] 22 | end 23 | 24 | # Configuration for the OTP application 25 | # 26 | # Type `mix help compile.app` for more information 27 | def application do 28 | env_apps = 29 | case Mix.env() do 30 | :test -> [:flaky_connection] 31 | _ -> [] 32 | end 33 | 34 | [applications: [:logger, :poison, :connection | env_apps]] 35 | end 36 | 37 | # Dependencies can be Hex packages: 38 | # 39 | # {:mydep, "~> 0.3.0"} 40 | # 41 | # Or git/path 
repositories: 42 | # 43 | # {:mydep, git: "https://github.com/elixir-lang/mydep.git", tag: "0.1.0"} 44 | # 45 | # Type `mix help deps` for more examples and options 46 | defp deps do 47 | [ 48 | {:poison, "~> 3.1"}, 49 | {:ex_doc, "~> 0.18", only: :dev}, 50 | {:flaky_connection, github: "norpan/flaky_connection", only: :test}, 51 | {:connection, "~> 1.0"}, 52 | {:excoveralls, "~> 0.8", only: :test}, 53 | {:dialyxir, "~> 0.5", only: :dev} 54 | ] 55 | end 56 | end 57 | -------------------------------------------------------------------------------- /mix.lock: -------------------------------------------------------------------------------- 1 | %{ 2 | "certifi": {:hex, :certifi, "2.3.1", "d0f424232390bf47d82da8478022301c561cf6445b5b5fb6a84d49a9e76d2639", [:rebar3], [{:parse_trans, "3.2.0", [hex: :parse_trans, repo: "hexpm", optional: false]}], "hexpm"}, 3 | "connection": {:hex, :connection, "1.0.4", "a1cae72211f0eef17705aaededacac3eb30e6625b04a6117c1b2db6ace7d5976", [:mix], [], "hexpm"}, 4 | "dialyxir": {:hex, :dialyxir, "0.5.1", "b331b091720fd93e878137add264bac4f644e1ddae07a70bf7062c7862c4b952", [:mix], [], "hexpm"}, 5 | "earmark": {:hex, :earmark, "1.2.5", "4d21980d5d2862a2e13ec3c49ad9ad783ffc7ca5769cf6ff891a4553fbaae761", [:mix], [], "hexpm"}, 6 | "ex_doc": {:hex, :ex_doc, "0.19.1", "519bb9c19526ca51d326c060cb1778d4a9056b190086a8c6c115828eaccea6cf", [:mix], [{:earmark, "~> 1.1", [hex: :earmark, repo: "hexpm", optional: false]}, {:makeup_elixir, "~> 0.7", [hex: :makeup_elixir, repo: "hexpm", optional: false]}], "hexpm"}, 7 | "excoveralls": {:hex, :excoveralls, "0.9.2", "299ea4903be7cb2959af0f919d258af116736ca8d507f86c12ef2184698e21a0", [:mix], [{:hackney, ">= 0.12.0", [hex: :hackney, repo: "hexpm", optional: false]}, {:jason, "~> 1.0", [hex: :jason, repo: "hexpm", optional: false]}], "hexpm"}, 8 | "exjsx": {:hex, :exjsx, "4.0.0", "60548841e0212df401e38e63c0078ec57b33e7ea49b032c796ccad8cde794b5c", [:mix], [{:jsx, "~> 2.8.0", [hex: :jsx, repo: "hexpm", optional: false]}], "hexpm"}, 9 | "flaky_connection": {:git, "https://github.com/norpan/flaky_connection.git", "10c190bf61874ff315191b8c8f2748912b97f45d", []}, 10 | "hackney": {:hex, :hackney, "1.13.0", "24edc8cd2b28e1c652593833862435c80661834f6c9344e84b6a2255e7aeef03", [:rebar3], [{:certifi, "2.3.1", [hex: :certifi, repo: "hexpm", optional: false]}, {:idna, "5.1.2", [hex: :idna, repo: "hexpm", optional: false]}, {:metrics, "1.0.1", [hex: :metrics, repo: "hexpm", optional: false]}, {:mimerl, "1.0.2", [hex: :mimerl, repo: "hexpm", optional: false]}, {:ssl_verify_fun, "1.1.1", [hex: :ssl_verify_fun, repo: "hexpm", optional: false]}], "hexpm"}, 11 | "idna": {:hex, :idna, "5.1.2", "e21cb58a09f0228a9e0b95eaa1217f1bcfc31a1aaa6e1fdf2f53a33f7dbd9494", [:rebar3], [{:unicode_util_compat, "0.3.1", [hex: :unicode_util_compat, repo: "hexpm", optional: false]}], "hexpm"}, 12 | "jason": {:hex, :jason, "1.1.1", "d3ccb840dfb06f2f90a6d335b536dd074db748b3e7f5b11ab61d239506585eb2", [:mix], [{:decimal, "~> 1.0", [hex: :decimal, repo: "hexpm", optional: true]}], "hexpm"}, 13 | "jsx": {:hex, :jsx, "2.8.3", "a05252d381885240744d955fbe3cf810504eb2567164824e19303ea59eef62cf", [:mix, :rebar3], [], "hexpm"}, 14 | "makeup": {:hex, :makeup, "0.5.1", "966c5c2296da272d42f1de178c1d135e432662eca795d6dc12e5e8787514edf7", [:mix], [{:nimble_parsec, "~> 0.2.2", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm"}, 15 | "makeup_elixir": {:hex, :makeup_elixir, "0.8.0", "1204a2f5b4f181775a0e456154830524cf2207cf4f9112215c05e0b76e4eca8b", [:mix], [{:makeup, "~> 0.5.0", [hex: 
:makeup, repo: "hexpm", optional: false]}, {:nimble_parsec, "~> 0.2.2", [hex: :nimble_parsec, repo: "hexpm", optional: false]}], "hexpm"}, 16 | "metrics": {:hex, :metrics, "1.0.1", "25f094dea2cda98213cecc3aeff09e940299d950904393b2a29d191c346a8486", [:rebar3], [], "hexpm"}, 17 | "mimerl": {:hex, :mimerl, "1.0.2", "993f9b0e084083405ed8252b99460c4f0563e41729ab42d9074fd5e52439be88", [:rebar3], [], "hexpm"}, 18 | "nimble_parsec": {:hex, :nimble_parsec, "0.2.2", "d526b23bdceb04c7ad15b33c57c4526bf5f50aaa70c7c141b4b4624555c68259", [:mix], [], "hexpm"}, 19 | "parse_trans": {:hex, :parse_trans, "3.2.0", "2adfa4daf80c14dc36f522cf190eb5c4ee3e28008fc6394397c16f62a26258c2", [:rebar3], [], "hexpm"}, 20 | "poison": {:hex, :poison, "3.1.0", "d9eb636610e096f86f25d9a46f35a9facac35609a7591b3be3326e99a0484665", [:mix], [], "hexpm"}, 21 | "ranch": {:hex, :ranch, "1.6.1", "2609724c9a7e163ca716a46c9087e037db4262935f5d866122ba158ed917c233", [:rebar3], [], "hexpm"}, 22 | "ssl_verify_fun": {:hex, :ssl_verify_fun, "1.1.1", "28a4d65b7f59893bc2c7de786dec1e1555bd742d336043fe644ae956c3497fbe", [:make, :rebar], [], "hexpm"}, 23 | "unicode_util_compat": {:hex, :unicode_util_compat, "0.3.1", "a1f612a7b512638634a603c8f401892afbf99b8ce93a45041f8aaca99cadb85e", [:rebar3], [], "hexpm"}, 24 | } 25 | -------------------------------------------------------------------------------- /test/cert/host.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDeDCCAmACCQCXLb1LngVNuTANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC 3 | VVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQHEwhOZXcgWW9yazEZMBcGA1UEChMQUmV0 4 | aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMPZm9vLmV4YW1wbGUuY29tMRwwGgYJKoZI 5 | hvcNAQkBFg1mb29AZ21haWwuY29tMB4XDTE3MDIxOTIzMzMxNVoXDTE4MDcwNDIz 6 | MzMxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5ZMQswCQYDVQQHEwJOWTEh 7 | MB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMREwDwYDVQQDEwgxMC4w 8 | LjAuMTEcMBoGCSqGSIb3DQEJARYNZm9vQGdtYWlsLmNvbTCCASIwDQYJKoZIhvcN 9 | AQEBBQADggEPADCCAQoCggEBAKGKWJJ3NDXapqJbJTwz9TGkBb4Re1o793BkFbzk 10 | XJpohRcbEMQWK3fzzjFU2NzDV7uDFR5Em6GBP9piGa8SGM4WgFUu6alRXSYRJ/BB 11 | QRX5Qm+MTMDhYIRjZAQiOVCVLHXl/eWMxrItffyUVLe8NQWDIOz8UoUWMrTlpWsi 12 | kSNUVjWOhYZGRHQcyriRxua35S7mCtk6DW0RKU0nG9cB7Nyc9YYKxpHT63Ki+/FH 13 | gmqAF1cJ0OqtN27FMY2aHgT3HvRbbtLGHCnkZa4HErmrj7rlXoacf0bVJYYYB7AG 14 | thfcK7nwUCmUdWNwS829WdV42Tvv12Ww6eZNSkiFgD9KEssCAwEAATANBgkqhkiG 15 | 9w0BAQsFAAOCAQEAhDzL0UsKN6Yxbce3QpWAaHTyFZU+NPckPm66GyEmwBMuvLVr 16 | d2oMzqOXZ3AW+rydh4i0GYZQYK2KXUgTxYfIz3fvylU0g4rlHI/Ej6gnFJ5g2k8v 17 | h2FLY6mp1SULVopxqURWQPIPm+ztz/wQYPmB1W9W8aQYdEBgoIAmKvxRnBmU7SuP 18 | L2sQmoPnh9pCCdS3djXLoj9pCUe7YDJCnxqOe8zpH3FOIykdCfsIphpPs4Mkw+LY 19 | N1+KHBoRwkj0JBqwaNLF3sjkXgi0v06l4DZ7WAy3Q2k3QD8tuiSFEM0g4/2y58Ts 20 | iFSH2inRL4NJIew2kx+IBHEQDxffgA62zhjxVw== 21 | -----END CERTIFICATE----- 22 | -------------------------------------------------------------------------------- /test/cert/host.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEogIBAAKCAQEAoYpYknc0NdqmolslPDP1MaQFvhF7Wjv3cGQVvORcmmiFFxsQ 3 | xBYrd/POMVTY3MNXu4MVHkSboYE/2mIZrxIYzhaAVS7pqVFdJhEn8EFBFflCb4xM 4 | wOFghGNkBCI5UJUsdeX95YzGsi19/JRUt7w1BYMg7PxShRYytOWlayKRI1RWNY6F 5 | hkZEdBzKuJHG5rflLuYK2ToNbREpTScb1wHs3Jz1hgrGkdPrcqL78UeCaoAXVwnQ 6 | 6q03bsUxjZoeBPce9Ftu0sYcKeRlrgcSuauPuuVehpx/RtUlhhgHsAa2F9wrufBQ 7 | KZR1Y3BLzb1Z1XjZO+/XZbDp5k1KSIWAP0oSywIDAQABAoIBAGqOdYp3syrrBgwG 8 | j3M82rpZ9afApFuLPtcWTfiBskvwMgphwhd2gEnputNzonFNMavw9Zc3rmlEdrg5 9 | 
CbQf/djDoveNsHgNwaIAoxWqFaLG/vnR1DdO83mgjjLj2Ga9X8yNX4Nx7wdNVtOr 10 | jI5+SYNPUgLBFjXPxLbq3MjkzlQ8myki26qvXB9GjgF3gWoCz1q6dvjhCDtithNd 11 | NoHQUI5YmjuJas3gZ7lUu21MuOAWi0oPFuMMznjPgaaZ3vJVMxmLc/YsnryqbAoA 12 | ZQhn8ZNeXxUDI2mxiuk/bYgQAqRxHqogvL4u2tmm/L9FSSHVJgB3IjoDMiTYQjbk 13 | hZg+A3ECgYEAzldbDxYv3aj02sDidMFQvCVIYJ/v1iJDBFVNN148Y+5b64xvGbVM 14 | 7XkHmYQctg/jIASGiYjGsyVpRT+fRvRleHjNrO4FyWE317nUjER1EZblpZiqeXW0 15 | c+Lf4rKBN3z5eNJRbO/GWAmSvjs33Xdt68YNeRB4v5NA/b01tXp3NlcCgYEAyGrR 16 | /ybLBh9D86mgW28zBD39TdLdvtcNSTWPhu/Ceh9KqhaUZrc/jb8StFxG/Cx5S/oc 17 | F3BHypGkD1NZDAc5NJoMPQixd1BZB3F9EBmJk219KttimAiM27CKmF+7RbSfhu8P 18 | 3uK1oLSo3sVOhGQFpXQZ2g51B1+ltAboTLBcNq0CgYB/BzRd01DgaxViXoCLVD95 19 | tJIcOhoSf8E2N7VzsqYG90TLfAchkoWrZGkTT0vFoX43xdF1diitPQjTwtkxe1/E 20 | jMpB/b6+PQV93z9EoxhXHch+6793Ssku1qryCuaV3HBQu1m5cNtwc2RNjHNV+iJH 21 | lgPRVhygA+1syED6WkxtvQKBgEjZe0exxC6PgtW5HM7flr2+AqsdMPlDllK8I1W7 22 | JQfbA/rbhknn5jQR9iyVNkBHsjeJzFhAuffKBMaFV2Ll5UdXj4dH96oVDKeF+x21 23 | CqsKK2s+n5H/2aOpgldsxNfLlgkoMK6l3btyr8d6FNZOvTatAxCeHK/3dnX/5MSr 24 | fnlpAoGADad3tXN+vJARd/vXHfsHZp8/nF+hDrteEC8jwv//l+10TErWDZvxxf+8 25 | vgK6zWZogitNjmj49A8/PzNJY45JiCodo1z2D22jP4fqT8okzax7+i+v6noIFVSA 26 | ikQbv1yjcf3CeVsihHs3bvcxAW+fF0yOQ1uNIL97ThS2gIJwAwY= 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /test/cert/rootCA.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIEbjCCA1agAwIBAgIJAOQ/G0GKEPPHMA0GCSqGSIb3DQEBCwUAMIGAMQswCQYD 3 | VQQGEwJVUzELMAkGA1UECBMCTlkxETAPBgNVBAcTCE5ldyBZb3JrMRkwFwYDVQQK 4 | ExBSZXRoaW5rREIgRWxpeGlyMRgwFgYDVQQDEw9mb28uZXhhbXBsZS5jb20xHDAa 5 | BgkqhkiG9w0BCQEWDWZvb0BnbWFpbC5jb20wHhcNMTcwMjE5MjMzMDQzWhcNMTkx 6 | MjEwMjMzMDQzWjCBgDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQH 7 | EwhOZXcgWW9yazEZMBcGA1UEChMQUmV0aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMP 8 | Zm9vLmV4YW1wbGUuY29tMRwwGgYJKoZIhvcNAQkBFg1mb29AZ21haWwuY29tMIIB 9 | IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv6zwu/jO6Y0WNO1jw6zCLdNu 10 | MBy7vjBY0mnF4MUZocP5VSxOE7OI/2KPS1cIbVBfbSQgKVDsl3T17JgZpUC1INlu 11 | u3J1J4UFUSXBRXZeGKTGaePtuLP5FwGabux18m42IgsBn0nA69QeTZCVnlYjsUoC 12 | UoJe+uldOanZQmylZQjO8alz5YjNw2T5YNE4laAutU9tPMJgJWRzG8+mUacY5/Y/ 13 | yVly2uPZ1CxG146MpHcQsApIMgovF9Uuuxjudf9nz4XOrjnUn6E5GxMh9HKBiQd8 14 | XbfH8DrDZf5ZquyytAtJ6T5m92mkje7T+76mOOKmKoDSl8xsDQgkUirOTb5g3wID 15 | AQABo4HoMIHlMB0GA1UdDgQWBBRDJaAOuwlhj7eIeILZFmN8qyVW8DCBtQYDVR0j 16 | BIGtMIGqgBRDJaAOuwlhj7eIeILZFmN8qyVW8KGBhqSBgzCBgDELMAkGA1UEBhMC 17 | VVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQHEwhOZXcgWW9yazEZMBcGA1UEChMQUmV0 18 | aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMPZm9vLmV4YW1wbGUuY29tMRwwGgYJKoZI 19 | hvcNAQkBFg1mb29AZ21haWwuY29tggkA5D8bQYoQ88cwDAYDVR0TBAUwAwEB/zAN 20 | BgkqhkiG9w0BAQsFAAOCAQEAUpJVRQ0Bsy7jyLMTqKmR6qGiYKBM2/AaMRbn5pqi 21 | 0Uz/3Fu9a9POzI18j7ZxDD7HVGZvdjKc/d6+jx6PntReuXYwdkIjW19oBihYp3op 22 | iaA7ZU0nAsefeyVcmfPtM+Kn3OW5/uIgYVIOiSLfWT4HVQxnKOWdVfaYieoz1gRO 23 | wdXipeHwtsfjz3sDjJBoBIWdtysEPYsVCkAEcvlji6ugwWJ8SBqzdZl/NjWvecgW 24 | ppSzO46l6WAZJxLdapAddOucrFSCbGAdf3WmHHRURCVaCbRhyBwDJh+vq76Imkh4 25 | M13jlo5+4K2NF9QCUEzwnC47uUOp1HqGGUoaeW4nTA07wg== 26 | -----END CERTIFICATE----- 27 | -------------------------------------------------------------------------------- /test/changes_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ChangesTest do 2 | use ExUnit.Case, async: false 3 | alias RethinkDB.Feed 4 | use RethinkDB.Connection 5 | import RethinkDB.Query 6 | 7 | @table_name 
"changes_test_table_1" 8 | setup_all do 9 | start_link() 10 | table_create(@table_name) |> run 11 | 12 | on_exit(fn -> 13 | start_link() 14 | table_drop(@table_name) |> run 15 | end) 16 | 17 | :ok 18 | end 19 | 20 | setup do 21 | table(@table_name) |> delete |> run 22 | :ok 23 | end 24 | 25 | test "first change" do 26 | q = table(@table_name) |> changes 27 | {:ok, changes = %Feed{}} = run(q) 28 | 29 | t = 30 | Task.async(fn -> 31 | changes |> Enum.take(1) 32 | end) 33 | 34 | data = %{"test" => "d"} 35 | table(@table_name) |> insert(data) |> run 36 | [h | []] = Task.await(t) 37 | assert %{"new_val" => %{"test" => "d"}} = h 38 | end 39 | 40 | test "changes" do 41 | q = table(@table_name) |> changes 42 | {:ok, changes} = {:ok, %Feed{}} = run(q) 43 | 44 | t = 45 | Task.async(fn -> 46 | RethinkDB.Connection.next(changes) 47 | end) 48 | 49 | data = %{"test" => "data"} 50 | q = table(@table_name) |> insert(data) 51 | {:ok, res} = run(q) 52 | expected = res.data["id"] 53 | {:ok, changes} = Task.await(t) 54 | ^expected = changes.data |> hd |> Map.get("id") 55 | 56 | # test Enumerable 57 | t = 58 | Task.async(fn -> 59 | changes |> Enum.take(5) 60 | end) 61 | 62 | 1..6 63 | |> Enum.each(fn _ -> 64 | q = table(@table_name) |> insert(data) 65 | run(q) 66 | end) 67 | 68 | data = Task.await(t) 69 | 5 = Enum.count(data) 70 | end 71 | 72 | test "point_changes" do 73 | q = table(@table_name) |> get("0") |> changes 74 | {:ok, changes} = {:ok, %Feed{}} = run(q) 75 | 76 | t = 77 | Task.async(fn -> 78 | changes |> Enum.take(1) 79 | end) 80 | 81 | data = %{"id" => "0"} 82 | q = table(@table_name) |> insert(data) 83 | {:ok, _res} = run(q) 84 | [h | []] = Task.await(t) 85 | assert %{"new_val" => %{"id" => "0"}} = h 86 | end 87 | 88 | test "changes opts binary native" do 89 | q = table(@table_name) |> get("0") |> changes 90 | {:ok, changes} = {:ok, %Feed{}} = run(q) 91 | 92 | t = 93 | Task.async(fn -> 94 | changes |> Enum.take(1) 95 | end) 96 | 97 | data = %{"id" => "0", "binary" => binary(<<1>>)} 98 | q = table(@table_name) |> insert(data) 99 | {:ok, _res} = run(q) 100 | [h | []] = Task.await(t) 101 | assert %{"new_val" => %{"id" => "0", "binary" => <<1>>}} = h 102 | end 103 | 104 | test "changes opts binary raw" do 105 | q = table(@table_name) |> get("0") |> changes 106 | {:ok, changes} = {:ok, %Feed{}} = run(q, binary_format: :raw) 107 | 108 | t = 109 | Task.async(fn -> 110 | changes |> Enum.take(1) 111 | end) 112 | 113 | data = %{"id" => "0", "binary" => binary(<<1>>)} 114 | q = table(@table_name) |> insert(data) 115 | {:ok, _res} = run(q) 116 | [h | []] = Task.await(t) 117 | 118 | assert %{"new_val" => %{"id" => "0", "binary" => %RethinkDB.Pseudotypes.Binary{data: "AQ=="}}} = 119 | h 120 | end 121 | end 122 | -------------------------------------------------------------------------------- /test/prepare_test.exs: -------------------------------------------------------------------------------- 1 | defmodule PrepareTest do 2 | use ExUnit.Case, async: false 3 | import RethinkDB.Prepare 4 | 5 | test "single elements" do 6 | assert prepare(1) == 1 7 | assert prepare(make_ref()) == 1 8 | end 9 | 10 | test "list" do 11 | assert prepare([1, 2, 3]) == [1, 2, 3] 12 | assert prepare([1, 2, make_ref(), make_ref()]) == [1, 2, 1, 2] 13 | end 14 | 15 | test "nested list" do 16 | list = [1, [1, 2], [1, [1, 2]]] 17 | assert prepare(list) == list 18 | list = [1, [make_ref(), make_ref()], make_ref(), [1, 2]] 19 | assert prepare(list) == [1, [1, 2], 3, [1, 2]] 20 | end 21 | 22 | test "map" do 23 | map = %{a: 1, b: 2} 24 | assert 
prepare(map) == map 25 | map = %{a: 1, b: make_ref(), c: make_ref()} 26 | assert prepare(map) == %{a: 1, b: 1, c: 2} 27 | end 28 | end 29 | -------------------------------------------------------------------------------- /test/query/administration_query_test.exs: -------------------------------------------------------------------------------- 1 | defmodule AdministrationQueryTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | setup_all do 7 | start_link() 8 | :ok 9 | end 10 | 11 | @table_name "administration_table_1" 12 | setup do 13 | table_create(@table_name) |> run 14 | 15 | on_exit(fn -> 16 | table_drop(@table_name) |> run 17 | end) 18 | 19 | :ok 20 | end 21 | 22 | test "config" do 23 | {:ok, r} = table(@table_name) |> config |> run 24 | assert %RethinkDB.Record{data: %{"db" => "test"}} = r 25 | end 26 | 27 | test "rebalance" do 28 | {:ok, r} = table(@table_name) |> rebalance |> run 29 | assert %RethinkDB.Record{data: %{"rebalanced" => _}} = r 30 | end 31 | 32 | test "reconfigure" do 33 | {:ok, r} = table(@table_name) |> reconfigure(shards: 1, dry_run: true, replicas: 1) |> run 34 | assert %RethinkDB.Record{data: %{"reconfigured" => _}} = r 35 | end 36 | 37 | test "status" do 38 | {:ok, r} = table(@table_name) |> status |> run 39 | assert %RethinkDB.Record{data: %{"name" => @table_name}} = r 40 | end 41 | 42 | test "wait" do 43 | {:ok, r} = table(@table_name) |> wait(wait_for: :ready_for_writes) |> run 44 | assert %RethinkDB.Record{data: %{"ready" => 1}} = r 45 | end 46 | end 47 | -------------------------------------------------------------------------------- /test/query/aggregation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule AggregationTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | alias RethinkDB.Query 6 | 7 | alias RethinkDB.Record 8 | 9 | require RethinkDB.Lambda 10 | import RethinkDB.Lambda 11 | 12 | setup_all do 13 | start_link() 14 | :ok 15 | end 16 | 17 | test "group on key name" do 18 | query = 19 | [ 20 | %{a: "hi", b: 1}, 21 | %{a: "hi", b: [1, 2, 3]}, 22 | %{a: "bye"} 23 | ] 24 | |> group("a") 25 | 26 | {:ok, %Record{data: data}} = query |> run 27 | 28 | assert data == %{ 29 | "bye" => [ 30 | %{"a" => "bye"} 31 | ], 32 | "hi" => [ 33 | %{"a" => "hi", "b" => 1}, 34 | %{"a" => "hi", "b" => [1, 2, 3]} 35 | ] 36 | } 37 | end 38 | 39 | test "group on function" do 40 | query = 41 | [ 42 | %{a: "hi", b: 1}, 43 | %{a: "hi", b: [1, 2, 3]}, 44 | %{a: "bye"}, 45 | %{a: "hello"} 46 | ] 47 | |> group( 48 | lambda(fn x -> 49 | x["a"] == "hi" || x["a"] == "hello" 50 | end) 51 | ) 52 | 53 | {:ok, %Record{data: data}} = query |> run 54 | 55 | assert data == %{ 56 | false: [ 57 | %{"a" => "bye"} 58 | ], 59 | true: [ 60 | %{"a" => "hi", "b" => 1}, 61 | %{"a" => "hi", "b" => [1, 2, 3]}, 62 | %{"a" => "hello"} 63 | ] 64 | } 65 | end 66 | 67 | test "group on multiple keys" do 68 | query = 69 | [ 70 | %{a: "hi", b: 1, c: 2}, 71 | %{a: "hi", b: 1, c: 3}, 72 | %{a: "hi", b: [1, 2, 3]}, 73 | %{a: "bye"}, 74 | %{a: "hello", b: 1} 75 | ] 76 | |> group([ 77 | lambda(fn x -> 78 | x["a"] == "hi" || x["a"] == "hello" 79 | end), 80 | "b" 81 | ]) 82 | 83 | {:ok, %Record{data: data}} = query |> run 84 | 85 | assert data == %{ 86 | [false, nil] => [ 87 | %{"a" => "bye"} 88 | ], 89 | [true, 1] => [ 90 | %{"a" => "hi", "b" => 1, "c" => 2}, 91 | %{"a" => "hi", "b" => 1, "c" => 3}, 92 | %{"a" => "hello", "b" => 1} 93 | ], 94 | [true, [1, 2, 3]] 
=> [ 95 | %{"a" => "hi", "b" => [1, 2, 3]} 96 | ] 97 | } 98 | end 99 | 100 | test "ungroup" do 101 | query = 102 | [ 103 | %{a: "hi", b: 1, c: 2}, 104 | %{a: "hi", b: 1, c: 3}, 105 | %{a: "hi", b: [1, 2, 3]}, 106 | %{a: "bye"}, 107 | %{a: "hello", b: 1} 108 | ] 109 | |> group([ 110 | lambda(fn x -> 111 | x["a"] == "hi" || x["a"] == "hello" 112 | end), 113 | "b" 114 | ]) 115 | |> ungroup 116 | 117 | {:ok, %Record{data: data}} = query |> run 118 | 119 | assert data == [ 120 | %{ 121 | "group" => [false, nil], 122 | "reduction" => [ 123 | %{"a" => "bye"} 124 | ] 125 | }, 126 | %{ 127 | "group" => [true, [1, 2, 3]], 128 | "reduction" => [ 129 | %{"a" => "hi", "b" => [1, 2, 3]} 130 | ] 131 | }, 132 | %{ 133 | "group" => [true, 1], 134 | "reduction" => [ 135 | %{"a" => "hi", "b" => 1, "c" => 2}, 136 | %{"a" => "hi", "b" => 1, "c" => 3}, 137 | %{"a" => "hello", "b" => 1} 138 | ] 139 | } 140 | ] 141 | end 142 | 143 | test "reduce" do 144 | query = 145 | [1, 2, 3, 4] 146 | |> reduce( 147 | lambda(fn el, acc -> 148 | el + acc 149 | end) 150 | ) 151 | 152 | {:ok, %Record{data: data}} = run(query) 153 | assert data == 10 154 | end 155 | 156 | test "count" do 157 | query = [1, 2, 3, 4] |> count 158 | {:ok, %Record{data: data}} = run(query) 159 | assert data == 4 160 | end 161 | 162 | test "count with value" do 163 | query = [1, 2, 2, 3, 4] |> count(2) 164 | {:ok, %Record{data: data}} = run(query) 165 | assert data == 2 166 | end 167 | 168 | test "count with predicate" do 169 | query = 170 | [1, 2, 2, 3, 4] 171 | |> count( 172 | lambda(fn x -> 173 | rem(x, 2) == 0 174 | end) 175 | ) 176 | 177 | {:ok, %Record{data: data}} = run(query) 178 | assert data == 3 179 | end 180 | 181 | test "sum" do 182 | query = [1, 2, 3, 4] |> sum 183 | {:ok, %Record{data: data}} = run(query) 184 | assert data == 10 185 | end 186 | 187 | test "sum with field" do 188 | query = [%{a: 1}, %{a: 2}, %{b: 3}, %{b: 4}] |> sum("a") 189 | {:ok, %Record{data: data}} = run(query) 190 | assert data == 3 191 | end 192 | 193 | test "sum with function" do 194 | query = 195 | [1, 2, 3, 4] 196 | |> sum( 197 | lambda(fn x -> 198 | if x == 1 do 199 | nil 200 | else 201 | x * 2 202 | end 203 | end) 204 | ) 205 | 206 | {:ok, %Record{data: data}} = run(query) 207 | assert data == 18 208 | end 209 | 210 | test "avg" do 211 | query = [1, 2, 3, 4] |> avg 212 | {:ok, %Record{data: data}} = run(query) 213 | assert data == 2.5 214 | end 215 | 216 | test "avg with field" do 217 | query = [%{a: 1}, %{a: 2}, %{b: 3}, %{b: 4}] |> avg("a") 218 | {:ok, %Record{data: data}} = run(query) 219 | assert data == 1.5 220 | end 221 | 222 | test "avg with function" do 223 | query = 224 | [1, 2, 3, 4] 225 | |> avg( 226 | lambda(fn x -> 227 | if x == 1 do 228 | nil 229 | else 230 | x * 2 231 | end 232 | end) 233 | ) 234 | 235 | {:ok, %Record{data: data}} = run(query) 236 | assert data == 6 237 | end 238 | 239 | test "min" do 240 | query = [1, 2, 3, 4] |> Query.min() 241 | {:ok, %Record{data: data}} = run(query) 242 | assert data == 1 243 | end 244 | 245 | test "min with field" do 246 | query = [%{a: 1}, %{a: 2}, %{b: 3}, %{b: 4}] |> Query.min("b") 247 | {:ok, %Record{data: data}} = run(query) 248 | assert data == %{"b" => 3} 249 | end 250 | 251 | test "min with subquery field" do 252 | query = [%{a: 1}, %{a: 2}, %{b: 3}, %{b: 4}] |> Query.min(Query.downcase("B")) 253 | {:ok, %Record{data: data}} = run(query) 254 | assert data == %{"b" => 3} 255 | end 256 | 257 | test "min with function" do 258 | query = 259 | [1, 2, 3, 4] 260 | |> Query.min( 261 | lambda(fn x -> 262 | 
if x == 1 do 263 | # Note, there's a bug in rethinkdb (https://github.com/rethinkdb/rethinkdb/issues/4213) 264 | 100 265 | # which means we can't return null here 266 | else 267 | x * 2 268 | end 269 | end) 270 | ) 271 | 272 | {:ok, %Record{data: data}} = run(query) 273 | assert data == 2 274 | end 275 | 276 | test "max" do 277 | query = [1, 2, 3, 4] |> Query.max() 278 | {:ok, %Record{data: data}} = run(query) 279 | assert data == 4 280 | end 281 | 282 | test "max with field" do 283 | query = [%{a: 1}, %{a: 2}, %{b: 3}, %{b: 4}] |> Query.max("b") 284 | {:ok, %Record{data: data}} = run(query) 285 | assert data == %{"b" => 4} 286 | end 287 | 288 | test "max with subquery field" do 289 | query = [%{a: 1}, %{a: 2}, %{b: 3}, %{b: 4}] |> Query.max(Query.downcase("B")) 290 | {:ok, %Record{data: data}} = run(query) 291 | assert data == %{"b" => 4} 292 | end 293 | 294 | test "max with function" do 295 | query = 296 | [1, 2, 3, 4] 297 | |> Query.max( 298 | lambda(fn x -> 299 | if x == 4 do 300 | nil 301 | else 302 | x * 2 303 | end 304 | end) 305 | ) 306 | 307 | {:ok, %Record{data: data}} = run(query) 308 | assert data == 3 309 | end 310 | 311 | test "distinct" do 312 | query = [1, 2, 3, 3, 4, 4, 5] |> distinct 313 | {:ok, %Record{data: data}} = run(query) 314 | assert data == [1, 2, 3, 4, 5] 315 | end 316 | 317 | test "distinct with opts" do 318 | query = [1, 2, 3, 4] |> distinct(index: "stuff") 319 | assert %RethinkDB.Q{query: [_, _, %{index: "stuff"}]} = query 320 | end 321 | 322 | test "contains" do 323 | query = [1, 2, 3, 4] |> contains(4) 324 | {:ok, %Record{data: data}} = run(query) 325 | assert data == true 326 | end 327 | 328 | test "contains multiple values" do 329 | query = [1, 2, 3, 4] |> contains([4, 3]) 330 | {:ok, %Record{data: data}} = run(query) 331 | assert data == true 332 | end 333 | 334 | test "contains with function" do 335 | query = [1, 2, 3, 4] |> contains(lambda(&(&1 == 3))) 336 | {:ok, %Record{data: data}} = run(query) 337 | assert data == true 338 | end 339 | 340 | test "contains with multiple function" do 341 | query = [1, 2, 3, 4] |> contains([lambda(&(&1 == 3)), lambda(&(&1 == 5))]) 342 | {:ok, %Record{data: data}} = run(query) 343 | assert data == false 344 | end 345 | 346 | test "contains with multiple (mixed)" do 347 | query = [1, 2, 3, 4] |> contains([lambda(&(&1 == 3)), 2]) 348 | {:ok, %Record{data: data}} = run(query) 349 | assert data == true 350 | end 351 | end 352 | -------------------------------------------------------------------------------- /test/query/control_structures_adv_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ControlStructuresAdvTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Collection 7 | 8 | @table_name "control_test_table_1" 9 | setup_all do 10 | start_link() 11 | q = table_create(@table_name) 12 | run(q) 13 | 14 | on_exit(fn -> 15 | start_link() 16 | table_drop(@table_name) |> run 17 | end) 18 | 19 | :ok 20 | end 21 | 22 | setup do 23 | table(@table_name) |> delete |> run 24 | :ok 25 | end 26 | 27 | test "for_each" do 28 | table_query = table(@table_name) 29 | 30 | q = 31 | [1, 2, 3] 32 | |> for_each(fn x -> 33 | table_query |> insert(%{a: x}) 34 | end) 35 | 36 | run(q) 37 | {:ok, %Collection{data: data}} = run(table_query) 38 | assert Enum.count(data) == 3 39 | end 40 | end 41 | -------------------------------------------------------------------------------- /test/query/control_structures_test.exs: 
-------------------------------------------------------------------------------- 1 | defmodule ControlStructuresTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Response 8 | 9 | setup_all do 10 | start_link() 11 | :ok 12 | end 13 | 14 | test "args" do 15 | q = [%{a: 5, b: 6}, %{a: 4, c: 7}] |> pluck(args(["a", "c"])) 16 | {:ok, %Record{data: data}} = run(q) 17 | assert data == [%{"a" => 5}, %{"a" => 4, "c" => 7}] 18 | end 19 | 20 | test "binary raw" do 21 | d = <<220, 2, 3, 4, 5, 192>> 22 | q = binary(d) 23 | {:ok, %Record{data: data}} = run(q, binary_format: :raw) 24 | assert data == %RethinkDB.Pseudotypes.Binary{data: :base64.encode(d)} 25 | q = binary(data) 26 | {:ok, %Record{data: result}} = run(q, binary_format: :raw) 27 | assert data == result 28 | end 29 | 30 | test "binary native" do 31 | d = <<220, 2, 3, 4, 5, 192>> 32 | q = binary(d) 33 | {:ok, %Record{data: data}} = run(q) 34 | assert data == d 35 | q = binary(data) 36 | {:ok, %Record{data: result}} = run(q, binary_format: :native) 37 | assert data == result 38 | end 39 | 40 | test "binary native no wrapper" do 41 | d = <<220, 2, 3, 4, 5, 192>> 42 | q = d 43 | {:ok, %Record{data: data}} = run(q) 44 | assert data == d 45 | q = data 46 | {:ok, %Record{data: result}} = run(q, binary_format: :native) 47 | assert data == result 48 | end 49 | 50 | test "do_r" do 51 | q = do_r(fn -> 5 end) 52 | {:ok, %Record{data: data}} = run(q) 53 | assert data == 5 54 | q = [1, 2, 3] |> do_r(fn x -> x end) 55 | {:ok, %Record{data: data}} = run(q) 56 | assert data == [1, 2, 3] 57 | end 58 | 59 | test "branch" do 60 | q = branch(true, 1, 2) 61 | {:ok, %Record{data: data}} = run(q) 62 | assert data == 1 63 | q = branch(false, 1, 2) 64 | {:ok, %Record{data: data}} = run(q) 65 | assert data == 2 66 | end 67 | 68 | test "error" do 69 | q = do_r(fn -> error("hello") end) 70 | {:error, %Response{data: data}} = run(q) 71 | assert data["r"] == ["hello"] 72 | end 73 | 74 | test "default" do 75 | q = 1 |> default("test") 76 | {:ok, %Record{data: data}} = run(q) 77 | assert data == 1 78 | q = nil |> default("test") 79 | {:ok, %Record{data: data}} = run(q) 80 | assert data == "test" 81 | end 82 | 83 | test "js" do 84 | q = js("[40,100,1,5,25,10].sort()") 85 | {:ok, %Record{data: data}} = run(q) 86 | # couldn't help myself... 
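# JavaScript's default Array.prototype.sort compares elements as strings, so the
# numbers come back in lexicographic rather than numeric order.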
87 | assert data == [1, 10, 100, 25, 40, 5] 88 | end 89 | 90 | test "coerce_to" do 91 | q = "91" |> coerce_to("number") 92 | {:ok, %Record{data: data}} = run(q) 93 | assert data == 91 94 | end 95 | 96 | test "type_of" do 97 | q = "91" |> type_of 98 | {:ok, %Record{data: data}} = run(q) 99 | assert data == "STRING" 100 | q = 91 |> type_of 101 | {:ok, %Record{data: data}} = run(q) 102 | assert data == "NUMBER" 103 | q = [91] |> type_of 104 | {:ok, %Record{data: data}} = run(q) 105 | assert data == "ARRAY" 106 | end 107 | 108 | test "info" do 109 | q = [91] |> info 110 | {:ok, %Record{data: %{"type" => type}}} = run(q) 111 | assert type == "ARRAY" 112 | end 113 | 114 | test "json" do 115 | q = "{\"a\": 5, \"b\": 6}" |> json 116 | {:ok, %Record{data: data}} = run(q) 117 | assert data == %{"a" => 5, "b" => 6} 118 | end 119 | 120 | test "http" do 121 | q = "http://httpbin.org/get" |> http 122 | {:ok, %Record{data: data}} = run(q) 123 | %{"args" => %{}, "headers" => _, "origin" => _, "url" => "http://httpbin.org/get"} = data 124 | end 125 | 126 | test "uuid" do 127 | q = uuid() 128 | {:ok, %Record{data: data}} = run(q) 129 | assert String.length(String.replace(data, "-", "")) == 32 130 | end 131 | end 132 | -------------------------------------------------------------------------------- /test/query/database_test.exs: -------------------------------------------------------------------------------- 1 | defmodule DatabaseTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link() 10 | :ok 11 | end 12 | 13 | @db_name "db_test_db_1" 14 | @table_name "db_test_table_1" 15 | setup do 16 | q = table_drop(@table_name) 17 | run(q) 18 | q = table_create(@table_name) 19 | run(q) 20 | 21 | on_exit(fn -> 22 | q = table_drop(@table_name) 23 | run(q) 24 | db_drop(@db_name) |> run 25 | end) 26 | 27 | :ok 28 | end 29 | 30 | test "databases" do 31 | q = db_create(@db_name) 32 | {:ok, %Record{data: %{"dbs_created" => 1}}} = run(q) 33 | 34 | q = db_list() 35 | {:ok, %Record{data: dbs}} = run(q) 36 | assert Enum.member?(dbs, @db_name) 37 | 38 | q = db_drop(@db_name) 39 | {:ok, %Record{data: %{"dbs_dropped" => 1}}} = run(q) 40 | 41 | q = db_list() 42 | {:ok, %Record{data: dbs}} = run(q) 43 | assert !Enum.member?(dbs, @db_name) 44 | end 45 | end 46 | -------------------------------------------------------------------------------- /test/query/date_time_test.exs: -------------------------------------------------------------------------------- 1 | defmodule DateTimeTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Pseudotypes.Time 8 | 9 | setup_all do 10 | start_link() 11 | :ok 12 | end 13 | 14 | test "now native" do 15 | {:ok, %Record{data: data}} = now() |> run 16 | assert %DateTime{} = data 17 | end 18 | 19 | test "now raw" do 20 | {:ok, %Record{data: data}} = now() |> run(time_format: :raw) 21 | assert %Time{} = data 22 | end 23 | 24 | test "time native" do 25 | {:ok, %Record{data: data}} = time(1970, 1, 1, "Z") |> run 26 | assert data == DateTime.from_unix!(0, :milliseconds) 27 | {:ok, %Record{data: data}} = time(1970, 1, 1, 0, 0, 1, "Z") |> run(binary_format: :native) 28 | assert data == DateTime.from_unix!(1000, :milliseconds) 29 | end 30 | 31 | test "time raw" do 32 | {:ok, %Record{data: data}} = time(1970, 1, 1, "Z") |> run(time_format: :raw) 33 | assert data.epoch_time == 0 34 | {:ok, %Record{data: data}} = 
time(1970, 1, 1, 0, 0, 1, "Z") |> run(time_format: :raw) 35 | assert data.epoch_time == 1 36 | end 37 | 38 | test "epoch_time native" do 39 | {:ok, %Record{data: data}} = epoch_time(1) |> run 40 | assert data == DateTime.from_unix!(1000, :milliseconds) 41 | end 42 | 43 | test "epoch_time raw" do 44 | {:ok, %Record{data: data}} = epoch_time(1) |> run(time_format: :raw) 45 | assert data.epoch_time == 1 46 | assert data.timezone == "+00:00" 47 | end 48 | 49 | test "iso8601 native" do 50 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00+00:00") |> run 51 | assert data == DateTime.from_unix!(0, :milliseconds) 52 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run 53 | 54 | assert data == 55 | DateTime.from_unix!(-3_600_000, :milliseconds) 56 | |> struct(utc_offset: 3600, time_zone: "Etc/GMT-1", zone_abbr: "+01:00") 57 | end 58 | 59 | test "iso8601 raw" do 60 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00+00:00") |> run(time_format: :raw) 61 | assert data.epoch_time == 0 62 | assert data.timezone == "+00:00" 63 | 64 | {:ok, %Record{data: data}} = 65 | iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run(time_format: :raw) 66 | 67 | assert data.epoch_time == -3600 68 | assert data.timezone == "+01:00" 69 | end 70 | 71 | test "in_timezone native" do 72 | {:ok, %Record{data: data}} = epoch_time(0) |> in_timezone("+01:00") |> run 73 | 74 | assert data == 75 | DateTime.from_unix!(0, :milliseconds) 76 | |> struct(utc_offset: 3600, time_zone: "Etc/GMT-1", zone_abbr: "+01:00") 77 | end 78 | 79 | test "in_timezone raw" do 80 | {:ok, %Record{data: data}} = epoch_time(0) |> in_timezone("+01:00") |> run(time_format: :raw) 81 | assert data.timezone == "+01:00" 82 | assert data.epoch_time == 0 83 | end 84 | 85 | test "timezone" do 86 | {:ok, %Record{data: data}} = %Time{epoch_time: 0, timezone: "+01:00"} |> timezone |> run 87 | assert data == "+01:00" 88 | end 89 | 90 | test "during" do 91 | a = epoch_time(5) 92 | b = epoch_time(10) 93 | c = epoch_time(7) 94 | {:ok, %Record{data: data}} = c |> during(a, b) |> run 95 | assert data == true 96 | {:ok, %Record{data: data}} = b |> during(a, c) |> run 97 | assert data == false 98 | end 99 | 100 | test "date native" do 101 | {:ok, %Record{data: data}} = epoch_time(5) |> date |> run 102 | assert data == DateTime.from_unix!(0, :milliseconds) 103 | end 104 | 105 | test "date raw" do 106 | {:ok, %Record{data: data}} = epoch_time(5) |> date |> run(time_format: :raw) 107 | assert data.epoch_time == 0 108 | end 109 | 110 | test "time_of_day" do 111 | {:ok, %Record{data: data}} = epoch_time(60 * 60 * 24 + 15) |> time_of_day |> run 112 | assert data == 15 113 | end 114 | 115 | test "year" do 116 | {:ok, %Record{data: data}} = epoch_time(2 * 365 * 60 * 60 * 24) |> year |> run 117 | assert data == 1972 118 | end 119 | 120 | test "month" do 121 | {:ok, %Record{data: data}} = epoch_time(2 * 30 * 60 * 60 * 24) |> month |> run 122 | assert data == 3 123 | end 124 | 125 | test "day" do 126 | {:ok, %Record{data: data}} = epoch_time(3 * 60 * 60 * 24) |> day |> run 127 | assert data == 4 128 | end 129 | 130 | test "day_of_week" do 131 | {:ok, %Record{data: data}} = epoch_time(3 * 60 * 60 * 24) |> day_of_week |> run 132 | assert data == 7 133 | end 134 | 135 | test "day_of_year" do 136 | {:ok, %Record{data: data}} = epoch_time(3 * 60 * 60 * 24) |> day_of_year |> run 137 | assert data == 4 138 | end 139 | 140 | test "hours" do 141 | {:ok, %Record{data: data}} = epoch_time(3 * 60 * 60) |> hours |> run 142 | 
assert data == 3 143 | end 144 | 145 | test "minutes" do 146 | {:ok, %Record{data: data}} = epoch_time(3 * 60) |> minutes |> run 147 | assert data == 3 148 | end 149 | 150 | test "seconds" do 151 | {:ok, %Record{data: data}} = epoch_time(3) |> seconds |> run 152 | assert data == 3 153 | end 154 | 155 | test "to_iso8601" do 156 | {:ok, %Record{data: data}} = epoch_time(3) |> to_iso8601 |> run 157 | assert data == "1970-01-01T00:00:03+00:00" 158 | end 159 | 160 | test "to_epoch_time" do 161 | {:ok, %Record{data: data}} = epoch_time(3) |> to_epoch_time |> run 162 | assert data == 3 163 | end 164 | end 165 | -------------------------------------------------------------------------------- /test/query/document_manipulation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule DocumentManipulationTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link() 10 | :ok 11 | end 12 | 13 | test "pluck" do 14 | {:ok, %Record{data: data}} = 15 | [ 16 | %{a: 5, b: 6, c: 3}, 17 | %{a: 7, b: 8} 18 | ] 19 | |> pluck(["a", "b"]) 20 | |> run 21 | 22 | assert data == [ 23 | %{"a" => 5, "b" => 6}, 24 | %{"a" => 7, "b" => 8} 25 | ] 26 | end 27 | 28 | test "without" do 29 | {:ok, %Record{data: data}} = 30 | [ 31 | %{a: 5, b: 6, c: 3}, 32 | %{a: 7, b: 8} 33 | ] 34 | |> without("a") 35 | |> run 36 | 37 | assert data == [ 38 | %{"b" => 6, "c" => 3}, 39 | %{"b" => 8} 40 | ] 41 | end 42 | 43 | test "merge" do 44 | {:ok, %Record{data: data}} = %{a: 4} |> merge(%{b: 5}) |> run 45 | assert data == %{"a" => 4, "b" => 5} 46 | end 47 | 48 | test "merge list" do 49 | {:ok, %Record{data: data}} = args([%{a: 4}, %{b: 5}]) |> merge |> run 50 | assert data == %{"a" => 4, "b" => 5} 51 | end 52 | 53 | test "append" do 54 | {:ok, %Record{data: data}} = [1, 2] |> append(3) |> run 55 | assert data == [1, 2, 3] 56 | end 57 | 58 | test "prepend" do 59 | {:ok, %Record{data: data}} = [1, 2] |> prepend(3) |> run 60 | assert data == [3, 1, 2] 61 | end 62 | 63 | test "difference" do 64 | {:ok, %Record{data: data}} = [1, 2] |> difference([2]) |> run 65 | assert data == [1] 66 | end 67 | 68 | test "set_insert" do 69 | {:ok, %Record{data: data}} = [1, 2] |> set_insert(2) |> run 70 | assert data == [1, 2] 71 | {:ok, %Record{data: data}} = [1, 2] |> set_insert(3) |> run 72 | assert data == [1, 2, 3] 73 | end 74 | 75 | test "set_intersection" do 76 | {:ok, %Record{data: data}} = [1, 2] |> set_intersection([2, 3]) |> run 77 | assert data == [2] 78 | end 79 | 80 | test "set_union" do 81 | {:ok, %Record{data: data}} = [1, 2] |> set_union([2, 3]) |> run 82 | assert data == [1, 2, 3] 83 | end 84 | 85 | test "set_difference" do 86 | {:ok, %Record{data: data}} = [1, 2, 4] |> set_difference([2, 3]) |> run 87 | assert data == [1, 4] 88 | end 89 | 90 | test "get_field" do 91 | {:ok, %Record{data: data}} = %{a: 5, b: 6} |> get_field("a") |> run 92 | assert data == 5 93 | end 94 | 95 | test "has_fields" do 96 | {:ok, %Record{data: data}} = 97 | [ 98 | %{"b" => 6, "c" => 3}, 99 | %{"b" => 8} 100 | ] 101 | |> has_fields(["c"]) 102 | |> run 103 | 104 | assert data == [%{"b" => 6, "c" => 3}] 105 | end 106 | 107 | test "insert_at" do 108 | {:ok, %Record{data: data}} = [1, 2, 3] |> insert_at(1, 5) |> run 109 | assert data == [1, 5, 2, 3] 110 | end 111 | 112 | test "splice_at" do 113 | {:ok, %Record{data: data}} = [1, 2, 3] |> splice_at(1, [5, 6]) |> run 114 | assert data == [1, 5, 6, 2, 3] 115 | end 116 | 117 | 
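# delete_at/2 removes the element at a single index; delete_at/3 removes the
# half-open range [start, end) of indices, as asserted below.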
test "delete_at" do 118 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> delete_at(1) |> run 119 | assert data == [1, 3, 4] 120 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> delete_at(1, 3) |> run 121 | assert data == [1, 4] 122 | end 123 | 124 | test "change_at" do 125 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> change_at(1, 7) |> run 126 | assert data == [1, 7, 3, 4] 127 | end 128 | 129 | test "keys" do 130 | {:ok, %Record{data: data}} = %{a: 5, b: 6} |> keys |> run 131 | assert data == ["a", "b"] 132 | end 133 | 134 | test "values" do 135 | {:ok, %Record{data: data}} = %{a: 5, b: 6} |> values |> run 136 | assert data == [5, 6] 137 | end 138 | 139 | test "literal" do 140 | {:ok, %Record{data: data}} = 141 | %{ 142 | a: 5, 143 | b: %{ 144 | c: 6 145 | } 146 | } 147 | |> merge(%{b: literal(%{d: 7})}) 148 | |> run 149 | 150 | assert data == %{ 151 | "a" => 5, 152 | "b" => %{ 153 | "d" => 7 154 | } 155 | } 156 | end 157 | 158 | test "object" do 159 | {:ok, %Record{data: data}} = object(["a", 1, "b", 2]) |> run 160 | assert data == %{"a" => 1, "b" => 2} 161 | end 162 | end 163 | -------------------------------------------------------------------------------- /test/query/geospatial_adv_test.exs: -------------------------------------------------------------------------------- 1 | defmodule GeospatialAdvTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | @table_name "geo_test_table_1" 9 | setup_all do 10 | start_link() 11 | table_create(@table_name) |> run 12 | table(@table_name) |> index_create("location", geo: true) |> run 13 | table(@table_name) |> index_wait("location") |> run 14 | 15 | on_exit(fn -> 16 | start_link() 17 | table_drop(@table_name) |> run 18 | end) 19 | 20 | :ok 21 | end 22 | 23 | setup do 24 | table(@table_name) |> delete |> run 25 | :ok 26 | end 27 | 28 | test "get_intersecting" do 29 | table(@table_name) |> insert(%{location: point(0.001, 0.001)}) |> run 30 | table(@table_name) |> insert(%{location: point(0.001, 0)}) |> run 31 | 32 | {:ok, %{data: data}} = 33 | table(@table_name) 34 | |> get_intersecting( 35 | circle({0, 0}, 5000), 36 | index: "location" 37 | ) 38 | |> run 39 | 40 | points = for x <- data, do: x["location"].coordinates 41 | assert Enum.sort(points) == [{0.001, 0}, {0.001, 0.001}] 42 | end 43 | 44 | test "get_nearest" do 45 | table(@table_name) |> insert(%{location: point(0.001, 0.001)}) |> run 46 | table(@table_name) |> insert(%{location: point(0.001, 0)}) |> run 47 | 48 | {:ok, %Record{data: data}} = 49 | table(@table_name) 50 | |> get_nearest( 51 | point({0, 0}), 52 | index: "location", 53 | max_dist: 5_000_000 54 | ) 55 | |> run 56 | 57 | assert Enum.count(data) == 2 58 | end 59 | end 60 | -------------------------------------------------------------------------------- /test/query/geospatial_test.exs: -------------------------------------------------------------------------------- 1 | defmodule GeospatialTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Pseudotypes.Geometry.Point 8 | alias RethinkDB.Pseudotypes.Geometry.Line 9 | alias RethinkDB.Pseudotypes.Geometry.Polygon 10 | 11 | setup_all do 12 | start_link() 13 | :ok 14 | end 15 | 16 | test "circle" do 17 | {:ok, %Record{data: data}} = circle({1, 1}, 5) |> run 18 | assert %Polygon{coordinates: [_h | []]} = data 19 | end 20 | 21 | test "circle with opts" do 22 | {:ok, %Record{data: data}} = circle({1, 1}, 5, 
num_vertices: 100, fill: true) |> run 23 | assert %Polygon{coordinates: [_h | []]} = data 24 | end 25 | 26 | test "distance" do 27 | {:ok, %Record{data: data}} = distance(point({1, 1}), point({2, 2})) |> run 28 | assert data == 156_876.14940188665 29 | end 30 | 31 | test "fill" do 32 | {:ok, %Record{data: data}} = fill(line([{1, 1}, {4, 5}, {2, 2}, {1, 1}])) |> run 33 | assert data == %Polygon{coordinates: [[{1, 1}, {4, 5}, {2, 2}, {1, 1}]]} 34 | end 35 | 36 | test "geojson" do 37 | {:ok, %Record{data: data}} = geojson(%{coordinates: [1, 1], type: "Point"}) |> run 38 | assert data == %Point{coordinates: {1, 1}} 39 | end 40 | 41 | test "geojson with holes" do 42 | coords = [square(0, 0, 10), square(1, 1, 1), square(4, 4, 1)] 43 | {:ok, %Record{data: data}} = geojson(%{type: "Polygon", coordinates: coords}) |> run 44 | assert data == %Polygon{coordinates: coords} 45 | end 46 | 47 | defp square(x, y, s) do 48 | [{x, y}, {x + s, y}, {x + s, y + s}, {x, y + s}, {x, y}] 49 | end 50 | 51 | test "to_geojson" do 52 | {:ok, %Record{data: data}} = point({1, 1}) |> to_geojson |> run 53 | assert data == %{"type" => "Point", "coordinates" => [1, 1]} 54 | end 55 | 56 | # TODO: get_intersecting, get_nearest, includes, intersects 57 | test "point" do 58 | {:ok, %Record{data: data}} = point({1, 1}) |> run 59 | assert data == %Point{coordinates: {1, 1}} 60 | end 61 | 62 | test "line" do 63 | {:ok, %Record{data: data}} = line([{1, 1}, {4, 5}]) |> run 64 | assert data == %Line{coordinates: [{1, 1}, {4, 5}]} 65 | end 66 | 67 | test "includes" do 68 | {:ok, %Record{data: data}} = 69 | [circle({0, 0}, 1000), circle({0.001, 0}, 1000), circle({100, 100}, 1)] 70 | |> includes(point(0, 0)) 71 | |> run 72 | 73 | assert Enum.count(data) == 2 74 | {:ok, %Record{data: data}} = circle({0, 0}, 1000) |> includes(point(0, 0)) |> run 75 | assert data == true 76 | {:ok, %Record{data: data}} = circle({0, 0}, 1000) |> includes(point(80, 80)) |> run 77 | assert data == false 78 | end 79 | 80 | test "intersects" do 81 | b = 82 | [ 83 | circle({0, 0}, 1000), 84 | circle({0, 0}, 1000), 85 | circle({80, 80}, 1) 86 | ] 87 | |> intersects(circle({0, 0}, 10)) 88 | 89 | {:ok, %Record{data: data}} = b |> run 90 | assert Enum.count(data) == 2 91 | {:ok, %Record{data: data}} = circle({0, 0}, 1000) |> intersects(circle({0, 0}, 1)) |> run 92 | assert data == true 93 | {:ok, %Record{data: data}} = circle({0, 0}, 1000) |> intersects(circle({80, 80}, 1)) |> run 94 | assert data == false 95 | end 96 | 97 | test "polygon" do 98 | {:ok, %Record{data: data}} = polygon([{0, 0}, {0, 1}, {1, 1}, {1, 0}]) |> run 99 | assert data.coordinates == [[{0, 0}, {0, 1}, {1, 1}, {1, 0}, {0, 0}]] 100 | end 101 | 102 | test "polygon_sub" do 103 | p1 = polygon([{0, 0}, {0, 1}, {1, 1}, {1, 0}]) 104 | p2 = polygon([{0.25, 0.25}, {0.25, 0.5}, {0.5, 0.5}, {0.5, 0.25}]) 105 | {:ok, %Record{data: data}} = p1 |> polygon_sub(p2) |> run 106 | 107 | assert data.coordinates == [ 108 | [{0, 0}, {0, 1}, {1, 1}, {1, 0}, {0, 0}], 109 | [{0.25, 0.25}, {0.25, 0.5}, {0.5, 0.5}, {0.5, 0.25}, {0.25, 0.25}] 110 | ] 111 | end 112 | end 113 | -------------------------------------------------------------------------------- /test/query/joins_test.exs: -------------------------------------------------------------------------------- 1 | defmodule JoinsTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Collection 8 | 9 | require RethinkDB.Lambda 10 | import RethinkDB.Lambda 11 | 12 | @table_name 
"joins_test_table_1" 13 | setup_all do 14 | start_link() 15 | table_create(@table_name) |> run 16 | 17 | on_exit(fn -> 18 | start_link() 19 | table_drop(@table_name) |> run 20 | end) 21 | 22 | :ok 23 | end 24 | 25 | setup do 26 | table(@table_name) |> delete |> run 27 | :ok 28 | end 29 | 30 | test "inner join arrays" do 31 | left = [%{a: 1, b: 2}, %{a: 2, b: 3}] 32 | right = [%{a: 1, c: 4}, %{a: 2, c: 6}] 33 | 34 | q = 35 | inner_join( 36 | left, 37 | right, 38 | lambda(fn l, r -> 39 | l[:a] == r[:a] 40 | end) 41 | ) 42 | 43 | {:ok, %Record{data: data}} = run(q) 44 | 45 | assert data == [ 46 | %{"left" => %{"a" => 1, "b" => 2}, "right" => %{"a" => 1, "c" => 4}}, 47 | %{"left" => %{"a" => 2, "b" => 3}, "right" => %{"a" => 2, "c" => 6}} 48 | ] 49 | 50 | {:ok, %Record{data: data}} = q |> zip |> run 51 | assert data == [%{"a" => 1, "b" => 2, "c" => 4}, %{"a" => 2, "b" => 3, "c" => 6}] 52 | end 53 | 54 | test "outer join arrays" do 55 | left = [%{a: 1, b: 2}, %{a: 2, b: 3}] 56 | right = [%{a: 1, c: 4}] 57 | 58 | q = 59 | outer_join( 60 | left, 61 | right, 62 | lambda(fn l, r -> 63 | l[:a] == r[:a] 64 | end) 65 | ) 66 | 67 | {:ok, %Record{data: data}} = run(q) 68 | 69 | assert data == [ 70 | %{"left" => %{"a" => 1, "b" => 2}, "right" => %{"a" => 1, "c" => 4}}, 71 | %{"left" => %{"a" => 2, "b" => 3}} 72 | ] 73 | 74 | {:ok, %Record{data: data}} = q |> zip |> run 75 | assert data == [%{"a" => 1, "b" => 2, "c" => 4}, %{"a" => 2, "b" => 3}] 76 | end 77 | 78 | test "eq join arrays" do 79 | table_create("test_1") |> run 80 | table_create("test_2") |> run 81 | table("test_1") |> insert([%{id: 3, a: 1, b: 2}, %{id: 2, a: 2, b: 3}]) |> run 82 | table("test_2") |> insert([%{id: 1, c: 4}]) |> run 83 | q = eq_join(table("test_1"), :a, table("test_2"), index: :id) 84 | {:ok, %Collection{data: data}} = run(q) 85 | {:ok, %Collection{data: data2}} = q |> zip |> run 86 | table_drop("test_1") |> run 87 | table_drop("test_2") |> run 88 | 89 | assert data == [ 90 | %{"left" => %{"id" => 3, "a" => 1, "b" => 2}, "right" => %{"id" => 1, "c" => 4}} 91 | ] 92 | 93 | assert data2 == [%{"id" => 1, "a" => 1, "b" => 2, "c" => 4}] 94 | end 95 | end 96 | -------------------------------------------------------------------------------- /test/query/math_logic_test.exs: -------------------------------------------------------------------------------- 1 | defmodule MathLogicTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link() 10 | :ok 11 | end 12 | 13 | test "add scalars" do 14 | {:ok, %Record{data: data}} = add(1, 2) |> run 15 | assert data == 3 16 | end 17 | 18 | test "add list of scalars" do 19 | {:ok, %Record{data: data}} = add([1, 2]) |> run 20 | assert data == 3 21 | end 22 | 23 | test "concatenate two strings" do 24 | {:ok, %Record{data: data}} = add("hello ", "world") |> run 25 | assert data == "hello world" 26 | end 27 | 28 | test "concatenate list of strings" do 29 | {:ok, %Record{data: data}} = add(["hello", " ", "world"]) |> run 30 | assert data == "hello world" 31 | end 32 | 33 | test "concatenate two arrays" do 34 | {:ok, %Record{data: data}} = add([1, 2, 3], [3, 4, 5]) |> run 35 | assert data == [1, 2, 3, 3, 4, 5] 36 | end 37 | 38 | test "concatenate list of arrays" do 39 | {:ok, %Record{data: data}} = add([[1, 2, 3], [3, 4, 5], [5, 6, 7]]) |> run 40 | assert data == [1, 2, 3, 3, 4, 5, 5, 6, 7] 41 | end 42 | 43 | test "subtract two numbers" do 44 | {:ok, %Record{data: data}} = sub(5, 2) |> run 45 | assert 
data == 3 46 | end 47 | 48 | test "subtract list of numbers" do 49 | {:ok, %Record{data: data}} = sub([9, 3, 1]) |> run 50 | assert data == 5 51 | end 52 | 53 | test "multiply two numbers" do 54 | {:ok, %Record{data: data}} = mul(5, 2) |> run 55 | assert data == 10 56 | end 57 | 58 | test "multiply list of numbers" do 59 | {:ok, %Record{data: data}} = mul([1, 2, 3, 4, 5]) |> run 60 | assert data == 120 61 | end 62 | 63 | test "create periodic array" do 64 | {:ok, %Record{data: data}} = mul(3, [1, 2]) |> run 65 | assert data == [1, 2, 1, 2, 1, 2] 66 | end 67 | 68 | test "divide two numbers" do 69 | {:ok, %Record{data: data}} = divide(6, 3) |> run 70 | assert data == 2 71 | end 72 | 73 | test "divide list of numbers" do 74 | {:ok, %Record{data: data}} = divide([12, 3, 2]) |> run 75 | assert data == 2 76 | end 77 | 78 | test "find remainder when dividing two numbers" do 79 | {:ok, %Record{data: data}} = mod(23, 4) |> run 80 | assert data == 3 81 | end 82 | 83 | test "logical and of two values" do 84 | {:ok, %Record{data: data}} = and_r(true, true) |> run 85 | assert data == true 86 | end 87 | 88 | test "logical and of list" do 89 | {:ok, %Record{data: data}} = and_r([true, true, false]) |> run 90 | assert data == false 91 | end 92 | 93 | test "logical or of two values" do 94 | {:ok, %Record{data: data}} = or_r(true, false) |> run 95 | assert data == true 96 | end 97 | 98 | test "logical or of list" do 99 | {:ok, %Record{data: data}} = or_r([false, false, false]) |> run 100 | assert data == false 101 | end 102 | 103 | test "two numbers are equal" do 104 | {:ok, %Record{data: data}} = eq(1, 1) |> run 105 | assert data == true 106 | {:ok, %Record{data: data}} = eq(2, 1) |> run 107 | assert data == false 108 | end 109 | 110 | test "values in a list are equal" do 111 | {:ok, %Record{data: data}} = eq([1, 1, 1]) |> run 112 | assert data == true 113 | {:ok, %Record{data: data}} = eq([1, 2, 1]) |> run 114 | assert data == false 115 | end 116 | 117 | test "two numbers are not equal" do 118 | {:ok, %Record{data: data}} = ne(1, 1) |> run 119 | assert data == false 120 | {:ok, %Record{data: data}} = ne(2, 1) |> run 121 | assert data == true 122 | end 123 | 124 | test "values in a list are not equal" do 125 | {:ok, %Record{data: data}} = ne([1, 1, 1]) |> run 126 | assert data == false 127 | {:ok, %Record{data: data}} = ne([1, 2, 1]) |> run 128 | assert data == true 129 | end 130 | 131 | test "a number is less than the other" do 132 | {:ok, %Record{data: data}} = lt(2, 1) |> run 133 | assert data == false 134 | {:ok, %Record{data: data}} = lt(1, 2) |> run 135 | assert data == true 136 | end 137 | 138 | test "values in a list less than the next" do 139 | {:ok, %Record{data: data}} = lt([1, 4, 2]) |> run 140 | assert data == false 141 | {:ok, %Record{data: data}} = lt([1, 4, 5]) |> run 142 | assert data == true 143 | end 144 | 145 | test "a number is less than or equal to the other" do 146 | {:ok, %Record{data: data}} = le(1, 1) |> run 147 | assert data == true 148 | {:ok, %Record{data: data}} = le(1, 2) |> run 149 | assert data == true 150 | end 151 | 152 | test "values in a list less than or equal to the next" do 153 | {:ok, %Record{data: data}} = le([1, 4, 2]) |> run 154 | assert data == false 155 | {:ok, %Record{data: data}} = le([1, 4, 4]) |> run 156 | assert data == true 157 | end 158 | 159 | test "a number is greater than the other" do 160 | {:ok, %Record{data: data}} = gt(1, 1) |> run 161 | assert data == false 162 | {:ok, %Record{data: data}} = gt(2, 1) |> run 163 | assert data == true 164 | end 165 
| 166 | test "values in a list greater than the next" do 167 | {:ok, %Record{data: data}} = gt([1, 4, 2]) |> run 168 | assert data == false 169 | {:ok, %Record{data: data}} = gt([10, 4, 2]) |> run 170 | assert data == true 171 | end 172 | 173 | test "a number is greater than or equal to the other" do 174 | {:ok, %Record{data: data}} = ge(1, 1) |> run 175 | assert data == true 176 | {:ok, %Record{data: data}} = ge(2, 1) |> run 177 | assert data == true 178 | end 179 | 180 | test "values in a list greater than or equal to the next" do 181 | {:ok, %Record{data: data}} = ge([1, 4, 2]) |> run 182 | assert data == false 183 | {:ok, %Record{data: data}} = ge([10, 4, 4]) |> run 184 | assert data == true 185 | end 186 | 187 | test "not operator" do 188 | {:ok, %Record{data: data}} = not_r(true) |> run 189 | assert data == false 190 | end 191 | 192 | test "random operator" do 193 | {:ok, %Record{data: data}} = random() |> run 194 | assert data >= 0.0 && data <= 1.0 195 | {:ok, %Record{data: data}} = random(100) |> run 196 | assert is_integer(data) && data >= 0 && data <= 100 197 | {:ok, %Record{data: data}} = random(100.0) |> run 198 | assert is_float(data) && data >= 0.0 && data <= 100.0 199 | {:ok, %Record{data: data}} = random(50, 100) |> run 200 | assert is_integer(data) && data >= 50 && data <= 100 201 | {:ok, %Record{data: data}} = random(50, 100.0) |> run 202 | assert is_float(data) && data >= 50.0 && data <= 100.0 203 | end 204 | 205 | test "round" do 206 | {:ok, %Record{data: data}} = round_r(0.3) |> run 207 | assert data == 0 208 | {:ok, %Record{data: data}} = round_r(0.6) |> run 209 | assert data == 1 210 | end 211 | 212 | test "ceil" do 213 | {:ok, %Record{data: data}} = ceil(0.3) |> run 214 | assert data == 1 215 | {:ok, %Record{data: data}} = ceil(0.6) |> run 216 | assert data == 1 217 | end 218 | 219 | test "floor" do 220 | {:ok, %Record{data: data}} = floor(0.3) |> run 221 | assert data == 0 222 | {:ok, %Record{data: data}} = floor(0.6) |> run 223 | assert data == 0 224 | end 225 | end 226 | -------------------------------------------------------------------------------- /test/query/selection_test.exs: -------------------------------------------------------------------------------- 1 | defmodule SelectionTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | require RethinkDB.Lambda 9 | import RethinkDB.Lambda 10 | 11 | @table_name "selection_test_table_1" 12 | setup_all do 13 | start_link() 14 | table_create(@table_name) |> run 15 | 16 | on_exit(fn -> 17 | start_link() 18 | table_drop(@table_name) |> run 19 | end) 20 | 21 | :ok 22 | end 23 | 24 | setup do 25 | table(@table_name) |> delete |> run 26 | :ok 27 | end 28 | 29 | test "get" do 30 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 31 | {:ok, %Record{data: data}} = table(@table_name) |> get("a") |> run 32 | assert data == %{"a" => 5, "id" => "a"} 33 | end 34 | 35 | test "get all" do 36 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 37 | table(@table_name) |> insert(%{id: "b", a: 5}) |> run 38 | {:ok, data} = table(@table_name) |> get_all(["a", "b"]) |> run 39 | 40 | assert Enum.sort(Enum.to_list(data)) == [ 41 | %{"a" => 5, "id" => "a"}, 42 | %{"a" => 5, "id" => "b"} 43 | ] 44 | end 45 | 46 | test "get all with index" do 47 | table(@table_name) |> insert(%{id: "a", other_id: "c"}) |> run 48 | table(@table_name) |> insert(%{id: "b", other_id: "d"}) |> run 49 | table(@table_name) |> index_create("other_id") |> run 50 | 
table(@table_name) |> index_wait("other_id") |> run 51 | {:ok, data} = table(@table_name) |> get_all(["c", "d"], index: "other_id") |> run 52 | 53 | assert Enum.sort(Enum.to_list(data)) == [ 54 | %{"id" => "a", "other_id" => "c"}, 55 | %{"id" => "b", "other_id" => "d"} 56 | ] 57 | end 58 | 59 | test "get all should be able to accept an empty list" do 60 | {:ok, result} = table(@table_name) |> get_all([]) |> run 61 | assert result.data == [] 62 | end 63 | 64 | test "between" do 65 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 66 | table(@table_name) |> insert(%{id: "b", a: 5}) |> run 67 | table(@table_name) |> insert(%{id: "c", a: 5}) |> run 68 | {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> between("b", "d") |> run 69 | assert Enum.count(data) == 2 70 | 71 | {:ok, %RethinkDB.Collection{data: data}} = 72 | table(@table_name) |> between(minval(), maxval()) |> run 73 | 74 | assert Enum.count(data) == 3 75 | end 76 | 77 | test "filter" do 78 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 79 | table(@table_name) |> insert(%{id: "b", a: 5}) |> run 80 | table(@table_name) |> insert(%{id: "c", a: 6}) |> run 81 | {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> filter(%{a: 6}) |> run 82 | assert Enum.count(data) == 1 83 | 84 | {:ok, %RethinkDB.Collection{data: data}} = 85 | table(@table_name) 86 | |> filter( 87 | lambda(fn x -> 88 | x["a"] == 5 89 | end) 90 | ) 91 | |> run 92 | 93 | assert Enum.count(data) == 2 94 | end 95 | end 96 | -------------------------------------------------------------------------------- /test/query/string_manipulation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule StringManipulationTest do 2 | use ExUnit.Case, async: false 3 | import RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | {:ok, pid} = RethinkDB.Connection.start_link() 10 | {:ok, %{conn: pid}} 11 | end 12 | 13 | test "match a string", context do 14 | {:ok, %Record{data: data}} = "hello world" |> match("hello") |> run(context.conn) 15 | assert data == %{"end" => 5, "groups" => [], "start" => 0, "str" => "hello"} 16 | end 17 | 18 | test "match a regex", context do 19 | {:ok, %Record{data: data}} = "hello world" |> match(~r(hello)) |> run(context.conn) 20 | assert data == %{"end" => 5, "groups" => [], "start" => 0, "str" => "hello"} 21 | end 22 | 23 | test "split a string", context do 24 | {:ok, %Record{data: data}} = "abracadabra" |> split |> run(context.conn) 25 | assert data == ["abracadabra"] 26 | {:ok, %Record{data: data}} = "abra-cadabra" |> split("-") |> run(context.conn) 27 | assert data == ["abra", "cadabra"] 28 | {:ok, %Record{data: data}} = "a-bra-ca-da-bra" |> split("-", 2) |> run(context.conn) 29 | assert data == ["a", "bra", "ca-da-bra"] 30 | end 31 | 32 | test "upcase", context do 33 | {:ok, %Record{data: data}} = "hi" |> upcase |> run(context.conn) 34 | assert data == "HI" 35 | end 36 | 37 | test "downcase", context do 38 | {:ok, %Record{data: data}} = "Hi" |> downcase |> run(context.conn) 39 | assert data == "hi" 40 | end 41 | end 42 | -------------------------------------------------------------------------------- /test/query/table_db_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TableDBTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link() 10 | :ok 11 | end 12 | 13 
| @db_name "table_db_test_db_1" 14 | @table_name "table_db_test_table_1" 15 | 16 | test "tables with specific database" do 17 | db_create(@db_name) |> run 18 | 19 | on_exit(fn -> 20 | db_drop(@db_name) |> run 21 | end) 22 | 23 | q = db(@db_name) |> table_create(@table_name) 24 | {:ok, %Record{data: %{"tables_created" => 1}}} = run(q) 25 | 26 | q = db(@db_name) |> table_list 27 | {:ok, %Record{data: tables}} = run(q) 28 | assert Enum.member?(tables, @table_name) 29 | 30 | q = db(@db_name) |> table_drop(@table_name) 31 | {:ok, %Record{data: %{"tables_dropped" => 1}}} = run(q) 32 | 33 | q = db(@db_name) |> table_list 34 | {:ok, %Record{data: tables}} = run(q) 35 | assert !Enum.member?(tables, @table_name) 36 | 37 | q = db(@db_name) |> table_create(@table_name, primary_key: "not_id") 38 | {:ok, %Record{data: result}} = run(q) 39 | %{"config_changes" => [%{"new_val" => %{"primary_key" => primary_key}}]} = result 40 | assert primary_key == "not_id" 41 | end 42 | end 43 | -------------------------------------------------------------------------------- /test/query/table_index_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TableIndexTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | alias RethinkDB.Record 6 | 7 | setup_all do 8 | start_link() 9 | :ok 10 | end 11 | 12 | @table_name "table_index_test_table_1" 13 | setup do 14 | table_create(@table_name) |> run 15 | 16 | on_exit(fn -> 17 | table_drop(@table_name) |> run 18 | end) 19 | 20 | :ok 21 | end 22 | 23 | test "indexes" do 24 | {:ok, %Record{data: data}} = table(@table_name) |> index_create("hello") |> run 25 | assert data == %{"created" => 1} 26 | {:ok, %Record{data: data}} = table(@table_name) |> index_wait("hello") |> run 27 | 28 | assert [ 29 | %{ 30 | "function" => _, 31 | "geo" => false, 32 | "index" => "hello", 33 | "multi" => false, 34 | "outdated" => false, 35 | "ready" => true 36 | } 37 | ] = data 38 | 39 | {:ok, %Record{data: data}} = table(@table_name) |> index_status("hello") |> run 40 | 41 | assert [ 42 | %{ 43 | "function" => _, 44 | "geo" => false, 45 | "index" => "hello", 46 | "multi" => false, 47 | "outdated" => false, 48 | "ready" => true 49 | } 50 | ] = data 51 | 52 | {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run 53 | assert data == ["hello"] 54 | table(@table_name) |> index_rename("hello", "goodbye") |> run 55 | {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run 56 | assert data == ["goodbye"] 57 | table(@table_name) |> index_drop("goodbye") |> run 58 | {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run 59 | assert data == [] 60 | end 61 | end 62 | -------------------------------------------------------------------------------- /test/query/table_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TableTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | alias RethinkDB.Record 6 | 7 | setup_all do 8 | start_link() 9 | :ok 10 | end 11 | 12 | @table_name "table_test_table_1" 13 | 14 | test "tables" do 15 | table_drop(@table_name) |> run 16 | 17 | on_exit(fn -> 18 | table_drop(@table_name) |> run 19 | end) 20 | 21 | q = table_create(@table_name) 22 | {:ok, %Record{data: %{"tables_created" => 1}}} = run(q) 23 | 24 | q = table_list() 25 | {:ok, %Record{data: tables}} = run(q) 26 | assert Enum.member?(tables, @table_name) 27 | 28 | q = table_drop(@table_name) 29 | 
{:ok, %Record{data: %{"tables_dropped" => 1}}} = run(q) 30 | 31 | q = table_list() 32 | {:ok, %Record{data: tables}} = run(q) 33 | assert !Enum.member?(tables, @table_name) 34 | 35 | q = table_create(@table_name, primary_key: "not_id") 36 | {:ok, %Record{data: result}} = run(q) 37 | %{"config_changes" => [%{"new_val" => %{"primary_key" => primary_key}}]} = result 38 | assert primary_key == "not_id" 39 | end 40 | end 41 | -------------------------------------------------------------------------------- /test/query/transformation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TransformationTest do 2 | use ExUnit.Case, async: false 3 | 4 | use RethinkDB.Connection 5 | import RethinkDB.Query 6 | 7 | alias RethinkDB.Record 8 | 9 | require RethinkDB.Lambda 10 | import RethinkDB.Lambda 11 | 12 | setup_all do 13 | start_link() 14 | :ok 15 | end 16 | 17 | test "map" do 18 | {:ok, %Record{data: data}} = map([1, 2, 3], lambda(&(&1 + 1))) |> run 19 | assert data == [2, 3, 4] 20 | end 21 | 22 | test "with_fields" do 23 | {:ok, %Record{data: data}} = 24 | [ 25 | %{a: 5}, 26 | %{a: 6}, 27 | %{a: 7, b: 8} 28 | ] 29 | |> with_fields(["a", "b"]) 30 | |> run 31 | 32 | assert data == [%{"a" => 7, "b" => 8}] 33 | end 34 | 35 | test "flat_map" do 36 | {:ok, %Record{data: data}} = 37 | [ 38 | [1, 2, 3], 39 | [4, 5, 6], 40 | [7, 8, 9] 41 | ] 42 | |> flat_map( 43 | lambda(fn x -> 44 | x |> map(&(&1 * 2)) 45 | end) 46 | ) 47 | |> run 48 | 49 | assert data == [2, 4, 6, 8, 10, 12, 14, 16, 18] 50 | end 51 | 52 | test "order_by" do 53 | {:ok, %Record{data: data}} = 54 | [ 55 | %{a: 1}, 56 | %{a: 7}, 57 | %{a: 4}, 58 | %{a: 5}, 59 | %{a: 2} 60 | ] 61 | |> order_by("a") 62 | |> run 63 | 64 | assert data == [ 65 | %{"a" => 1}, 66 | %{"a" => 2}, 67 | %{"a" => 4}, 68 | %{"a" => 5}, 69 | %{"a" => 7} 70 | ] 71 | end 72 | 73 | test "order by descending attr" do 74 | data = [%{"rank" => 1}, %{"rank" => 2}, %{"rank" => 3}] 75 | 76 | q = data |> order_by(desc("rank")) 77 | {:ok, %{data: result}} = run(q) 78 | 79 | assert result == Enum.reverse(data) 80 | end 81 | 82 | test "skip" do 83 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> skip(2) |> run 84 | assert data == [3, 4] 85 | end 86 | 87 | test "limit" do 88 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> limit(2) |> run 89 | assert data == [1, 2] 90 | end 91 | 92 | test "slice" do 93 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> slice(1, 3) |> run 94 | assert data == [2, 3] 95 | end 96 | 97 | test "nth" do 98 | {:ok, %Record{data: data}} = [1, 2, 3, 4] |> nth(2) |> run 99 | assert data == 3 100 | end 101 | 102 | test "offsets_of" do 103 | {:ok, %Record{data: data}} = [1, 2, 3, 1, 4, 1] |> offsets_of(1) |> run 104 | assert data == [0, 3, 5] 105 | end 106 | 107 | test "is_empty" do 108 | {:ok, %Record{data: data}} = [] |> is_empty |> run 109 | assert data == true 110 | {:ok, %Record{data: data}} = [1, 2, 3, 1, 4, 1] |> is_empty |> run 111 | assert data == false 112 | end 113 | 114 | test "sample" do 115 | {:ok, %Record{data: data}} = [1, 2, 3, 1, 4, 1] |> sample(2) |> run 116 | assert Enum.count(data) == 2 117 | end 118 | end 119 | -------------------------------------------------------------------------------- /test/query/writing_data_test.exs: -------------------------------------------------------------------------------- 1 | defmodule WritingDataTest do 2 | use ExUnit.Case, async: false 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Collection 8 | 9 | 
@table_name "writing_data_test_table_1" 10 | setup_all do 11 | start_link() 12 | table_create(@table_name) |> run 13 | 14 | on_exit(fn -> 15 | start_link() 16 | table_drop(@table_name) |> run 17 | end) 18 | 19 | :ok 20 | end 21 | 22 | setup do 23 | table(@table_name) |> delete |> run 24 | :ok 25 | end 26 | 27 | test "insert" do 28 | table_query = table(@table_name) 29 | q = insert(table_query, %{name: "Hello", attr: "World"}) 30 | {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q) 31 | 32 | {:ok, %Collection{data: [%{"id" => ^key, "name" => "Hello", "attr" => "World"}]}} = 33 | run(table_query) 34 | end 35 | 36 | test "insert multiple" do 37 | table_query = table(@table_name) 38 | 39 | q = insert(table_query, [%{name: "Hello"}, %{name: "World"}]) 40 | {:ok, %Record{data: %{"inserted" => 2}}} = run(q) 41 | 42 | {:ok, %Collection{data: data}} = run(table_query) 43 | assert Enum.map(data, & &1["name"]) |> Enum.sort() == ["Hello", "World"] 44 | end 45 | 46 | test "insert conflict options" do 47 | table_query = table(@table_name) 48 | 49 | q = insert(table_query, [%{name: "Hello", value: 1}]) 50 | {:ok, %Record{data: %{"generated_keys" => [id], "inserted" => 1}}} = run(q) 51 | 52 | q = insert(table_query, [%{name: "Hello", id: id, value: 2}]) 53 | {:ok, %Record{data: %{"errors" => 1}}} = run(q) 54 | 55 | q = insert(table_query, [%{name: "World", id: id, value: 2}], %{conflict: "replace"}) 56 | {:ok, %Record{data: %{"replaced" => 1}}} = run(q) 57 | {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 2}]}} = run(table_query) 58 | 59 | q = insert(table_query, [%{id: id, value: 3}], %{conflict: "update"}) 60 | {:ok, %Record{data: %{"replaced" => 1}}} = run(q) 61 | {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 3}]}} = run(table_query) 62 | 63 | q = 64 | insert(table_query, [%{id: id, value: 3}], %{ 65 | conflict: fn _id, old, new -> 66 | merge(old, %{value: add(get_field(old, "value"), get_field(new, "value"))}) 67 | end 68 | }) 69 | 70 | {:ok, %Record{data: %{"replaced" => 1}}} = run(q) 71 | {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 6}]}} = run(table_query) 72 | end 73 | 74 | test "update" do 75 | table_query = table(@table_name) 76 | q = insert(table_query, %{name: "Hello", attr: "World"}) 77 | {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q) 78 | 79 | record_query = table_query |> get(key) 80 | q = record_query |> update(%{name: "Hi"}) 81 | run(q) 82 | q = record_query 83 | {:ok, %Record{data: data}} = run(q) 84 | assert data == %{"id" => key, "name" => "Hi", "attr" => "World"} 85 | end 86 | 87 | test "replace" do 88 | table_query = table(@table_name) 89 | q = insert(table_query, %{name: "Hello", attr: "World"}) 90 | {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q) 91 | 92 | record_query = table_query |> get(key) 93 | q = record_query |> replace(%{id: key, name: "Hi"}) 94 | run(q) 95 | q = record_query 96 | {:ok, %Record{data: data}} = run(q) 97 | assert data == %{"id" => key, "name" => "Hi"} 98 | end 99 | 100 | test "sync" do 101 | table_query = table(@table_name) 102 | q = table_query |> sync 103 | {:ok, %Record{data: data}} = run(q) 104 | assert data == %{"synced" => 1} 105 | end 106 | end 107 | -------------------------------------------------------------------------------- /test/query_test.exs: -------------------------------------------------------------------------------- 1 | defmodule QueryTest do 2 | use ExUnit.Case, async: false 3 | 
alias RethinkDB.Query 4 | alias RethinkDB.Record 5 | alias RethinkDB.Collection 6 | use RethinkDB.Connection 7 | import RethinkDB.Query 8 | require RethinkDB.Lambda 9 | 10 | @table_name "query_test_table_1" 11 | setup_all do 12 | start_link() 13 | table_create(@table_name) |> run 14 | 15 | on_exit(fn -> 16 | start_link() 17 | table_drop(@table_name) |> run 18 | end) 19 | 20 | :ok 21 | end 22 | 23 | setup do 24 | table(@table_name) |> delete |> run 25 | :ok 26 | end 27 | 28 | test "make_array" do 29 | array = [%{"name" => "hello"}, %{"name" => "world"}] 30 | q = Query.make_array(array) 31 | {:ok, %Record{data: data}} = run(q) 32 | assert Enum.sort(data) == Enum.sort(array) 33 | end 34 | 35 | test "map" do 36 | table_query = table(@table_name) 37 | 38 | insert(table_query, [%{name: "Hello"}, %{name: "World"}]) |> run 39 | 40 | {:ok, %Collection{data: data}} = 41 | table(@table_name) 42 | |> map( 43 | RethinkDB.Lambda.lambda(fn el -> 44 | el[:name] + " " + "with map" 45 | end) 46 | ) 47 | |> run 48 | 49 | assert Enum.sort(data) == ["Hello with map", "World with map"] 50 | end 51 | 52 | test "filter by map" do 53 | table_query = table(@table_name) 54 | 55 | insert(table_query, [%{name: "Hello"}, %{name: "World"}]) |> run 56 | 57 | {:ok, %Collection{data: data}} = 58 | table(@table_name) 59 | |> filter(%{name: "Hello"}) 60 | |> run 61 | 62 | assert Enum.map(data, & &1["name"]) == ["Hello"] 63 | end 64 | 65 | test "filter by lambda" do 66 | table_query = table(@table_name) 67 | 68 | insert(table_query, [%{name: "Hello"}, %{name: "World"}]) |> run 69 | 70 | {:ok, %Collection{data: data}} = 71 | table(@table_name) 72 | |> filter( 73 | RethinkDB.Lambda.lambda(fn el -> 74 | el[:name] == "Hello" 75 | end) 76 | ) 77 | |> run 78 | 79 | assert Enum.map(data, & &1["name"]) == ["Hello"] 80 | end 81 | 82 | test "nested functions" do 83 | a = 84 | make_array([1, 2, 3]) 85 | |> map(fn x -> 86 | make_array([4, 5, 6]) 87 | |> map(fn y -> 88 | [x, y] 89 | end) 90 | end) 91 | 92 | {:ok, %{data: data}} = run(a) 93 | 94 | assert data == [ 95 | [[1, 4], [1, 5], [1, 6]], 96 | [[2, 4], [2, 5], [2, 6]], 97 | [[3, 4], [3, 5], [3, 6]] 98 | ] 99 | end 100 | end 101 | -------------------------------------------------------------------------------- /test/test_helper.exs: -------------------------------------------------------------------------------- 1 | ExUnit.start(max_cases: 5) 2 | -------------------------------------------------------------------------------- /tester.exs: -------------------------------------------------------------------------------- 1 | import RethinkDB.Query 2 | 3 | c = RethinkDB.connect 4 | d = RethinkDB.connect 5 | f = RethinkDB.connect 6 | g = RethinkDB.connect 7 | # build a large query payload: "hi" doubled 12 times (roughly 8 KB) 8 | q = 1..12 |> Enum.reduce("hi", fn (_, acc) -> 9 | add(acc, acc) 10 | end) 11 | # run 10 timed batches of 10 concurrent queries each 12 | 1..10 |> Enum.map(fn (_) -> 13 | {r, _} = :timer.tc(fn -> 14 | 1..10 |> Enum.map(fn (_) -> 15 | Task.async fn -> 16 | e = Enum.random([c, d, f, g]) # spread queries across the four open connections 17 | q |> RethinkDB.run(e, :infinity) 18 | end 19 | end) |> Enum.map(&Task.await(&1, :infinity)) 20 | end) 21 | 22 | IO.inspect r # elapsed time for the batch, in microseconds 23 | end) 24 | --------------------------------------------------------------------------------