├── .gitignore ├── .travis.yml ├── CONTRIBUTORS.md ├── LICENSE ├── README.md ├── config └── config.exs ├── lib ├── rethinkdb.ex └── rethinkdb │ ├── connection.ex │ ├── connection │ ├── request.ex │ └── transport.ex │ ├── exception.ex │ ├── lambda.ex │ ├── prepare.ex │ ├── pseudotypes.ex │ ├── q.ex │ ├── query.ex │ ├── query │ ├── macros.ex │ └── term_info.json │ └── response.ex ├── mix.exs ├── mix.lock ├── test ├── cert │ ├── host.crt │ ├── host.key │ └── rootCA.pem ├── changes_test.exs ├── connection_test.exs ├── prepare_test.exs ├── query │ ├── administration_query_test.exs │ ├── aggregation_test.exs │ ├── control_structures_adv_test.exs │ ├── control_structures_test.exs │ ├── database_test.exs │ ├── date_time_test.exs │ ├── document_manipulation_test.exs │ ├── geospatial_adv_test.exs │ ├── geospatial_test.exs │ ├── joins_test.exs │ ├── math_logic_test.exs │ ├── selection_test.exs │ ├── string_manipulation_test.exs │ ├── table_db_test.exs │ ├── table_index_test.exs │ ├── table_test.exs │ ├── transformation_test.exs │ └── writing_data_test.exs ├── query_test.exs └── test_helper.exs └── tester.exs /.gitignore: -------------------------------------------------------------------------------- 1 | /_build 2 | /deps 3 | /docs 4 | /doc 5 | erl_crash.dump 6 | *.ez 7 | *.swp 8 | *.beam 9 | -------------------------------------------------------------------------------- /.travis.yml: -------------------------------------------------------------------------------- 1 | language: elixir 2 | 3 | elixir: 4 | - 1.3 5 | 6 | otp_release: 7 | - 18.3 8 | - 19.1 9 | 10 | install: 11 | - mix local.rebar --force 12 | - mix local.hex --force 13 | - mix deps.get --only test 14 | 15 | addons: 16 | rethinkdb: '2.3' 17 | -------------------------------------------------------------------------------- /CONTRIBUTORS.md: -------------------------------------------------------------------------------- 1 | Just a place to list individuals who have helped out. Any contribution counts (even participating in a discussion on an open issue). Open a PR to add your name if you feel so inclined. 2 | 3 | * hamiltop 4 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2015 hamiltop 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | 23 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | RethinkDB [![Build Status](https://travis-ci.org/hamiltop/rethinkdb-elixir.svg?branch=master)](https://travis-ci.org/hamiltop/rethinkdb-elixir) 2 | =========== 3 | UPDATE: I am not actively developing this. 4 | 5 | Multiplexed RethinkDB client in pure Elixir. 6 | 7 | If you are coming here from elixir-rethinkdb, welcome! 8 | If you were expecting `Exrethinkdb` you are in the right place. We decided to change the name to just `RethinkDB` and the repo to `rethinkdb-elixir`. Sorry if it has caused confusion. Better now in the early stages than later! 9 | 10 | I just set up a channel on the Elixir slack, so if you are on there join #rethinkdb. 11 | 12 | ### Recent changes 13 | 14 | #### 0.4.0 15 | 16 | * Extract Changefeed out into separate [package](https://github.com/hamiltop/rethinkdb_changefeed) 17 | * Accept keyword options with queries 18 | 19 | ## Getting Started 20 | 21 | See [API documentation](http://hexdocs.pm/rethinkdb/) for more details. 22 | 23 | ### Connection 24 | 25 | Connections are managed by a process. Start the process by calling `start_link/1`. See [documentation for `Connection.start_link/1`](http://hexdocs.pm/rethinkdb/RethinkDB.Connection.html#start_link/1) for supported options. 26 | 27 | #### Basic Remote Connection 28 | 29 | ```elixir 30 | {:ok, conn} = RethinkDB.Connection.start_link([host: "10.0.0.17", port: 28015]) 31 | ``` 32 | 33 | #### Named Connection 34 | 35 | ```elixir 36 | {:ok, conn} = RethinkDB.Connection.start_link([name: :foo]) 37 | ``` 38 | 39 | #### Supervised Connection 40 | 41 | Start the supervisor with: 42 | 43 | ```elixir 44 | worker(RethinkDB.Connection, [[name: :foo]]) 45 | worker(RethinkDB.Connection, [[name: :bar, host: 'localhost', port: 28015]]) 46 | ``` 47 | 48 | #### Default Connection 49 | 50 | An `RethinkDB.Connection` does parallel queries via pipelining. It can and should be shared among multiple processes. Because of this, it is common to have one connection shared in your application. To create a default connection, we create a new module and `use RethinkDB.Connection`. 51 | 52 | ```elixir 53 | defmodule FooDatabase do 54 | use RethinkDB.Connection 55 | end 56 | ``` 57 | 58 | This connection can be supervised without a name (it will assume the module as the name). 59 | 60 | ```elixir 61 | worker(FooDatabase, []) 62 | ``` 63 | 64 | Queries can be run without providing a connection (it will use the name connection). 65 | 66 | ```elixir 67 | import RethinkDB.Query 68 | table("people") |> FooDatabase.run 69 | ``` 70 | 71 | #### Connection Pooling 72 | 73 | To use a connection pool, add Poolboy to your dependencies: 74 | 75 | ```elixir 76 | {:poolboy, "~> 1.5"} 77 | ``` 78 | 79 | Then, in your supervision tree, add: 80 | 81 | ```elixir 82 | worker(:poolboy, [[name: {:local, :rethinkdb_pool}, worker_module: RethinkDB.Connection, size: 10, max_overflow: 0], []) 83 | ``` 84 | 85 | NOTE: If you want to use changefeeds or any persistent queries, `max_overflow: 0` is required. 86 | 87 | Then use it in your code: 88 | 89 | ```elixir 90 | db = :poolboy.checkout(:rethinkdb_pool) 91 | table("people") |> db 92 | :poolboy.checkin(:rethinkdb_pool, db) 93 | ``` 94 | 95 | ### Query 96 | 97 | `RethinkDB.run/2` accepts a process as the second argument (to facilitate piping). 
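For example, a minimal end-to-end query might look like this (assuming a local RethinkDB instance and an existing `people` table):

```elixir
alias RethinkDB.Query

{:ok, conn} = RethinkDB.Connection.start_link()

Query.table("people") |> RethinkDB.run(conn)
```
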
98 | 99 | #### Insert 100 | 101 | ```elixir 102 | 103 | q = Query.table("people") 104 | |> Query.insert(%{first_name: "John", last_name: "Smith"}) 105 | |> RethinkDB.run conn 106 | ``` 107 | 108 | #### Filter 109 | 110 | ```elixir 111 | q = Query.table("people") 112 | |> Query.filter(%{last_name: "Smith"}) 113 | |> RethinkDB.run conn 114 | ``` 115 | 116 | #### Functions 117 | 118 | RethinkDB supports RethinkDB functions in queries. There are two approaches you can take: 119 | 120 | Use RethinkDB operators 121 | 122 | ```elixir 123 | import RethinkDB.Query 124 | 125 | make_array([1,2,3]) |> map(fn (x) -> add(x, 1) end) 126 | ``` 127 | 128 | Use Elixir operators via the lambda macro 129 | 130 | ```elixir 131 | require RethinkDB.Lambda 132 | import RethinkDB.Lambda 133 | 134 | make_array([1,2,3]) |> map(lambda fn (x) -> x + 1 end) 135 | ``` 136 | 137 | #### Map 138 | 139 | ```elixir 140 | require RethinkDB.Lambda 141 | import Query 142 | import RethinkDB.Lambda 143 | 144 | conn = RethinkDB.connect 145 | 146 | table("people") 147 | |> has_fields(["first_name", "last_name"]) 148 | |> map(lambda fn (person) -> 149 | person[:first_name] + " " + person[:last_name] 150 | end) |> RethinkDB.run conn 151 | ``` 152 | 153 | See [query.ex](lib/rethinkdb/query.ex) for more basic queries. If you don't see something supported, please open an issue. We're moving fast and any guidance on desired features is helpful. 154 | 155 | #### Indexes 156 | 157 | ```elixir 158 | # Simple indexes 159 | # create 160 | result = Query.table("people") 161 | |> Query.index_create("first_name", Lambda.lambda fn(row) -> row["first_name"] end) 162 | |> RethinkDB.run conn 163 | 164 | # retrieve 165 | result = Query.table("people") 166 | |> Query.get_all(["Will"], index: "first_name") 167 | |> RethinkDB.run conn 168 | 169 | 170 | # Compound indexes 171 | # create 172 | result = Query.table("people") 173 | |> Query.index_create("full_name", Lambda.lambda fn(row) -> [row["first_name"], row["last_name"]] end) 174 | |> RethinkDB.run conn 175 | 176 | # retrieve 177 | result = Query.table("people") 178 | |> Query.get_all([["Will", "Smith"], ["James", "Bond"]], index: "full_name") 179 | |> RethinkDB.run conn 180 | ``` 181 | 182 | One limitation we have in Elixir is that we don't support varargs. So in JavaScript you would do `getAll(key1, key2, {index: "uniqueness"})`. In Elixir we have to do `get_all([key1, key2], index: "uniqueness")`. With a single key it becomes `get_all([key1], index: "uniqueness")` and when `key1` is `[partA, partB]` you have to do `get_all([[partA, partB]], index: "uniqueness")` 183 | 184 | ### Changes 185 | 186 | Change feeds can be consumed either incrementally (by calling `RethinkDB.next/1`) or via the Enumerable Protocol. 187 | 188 | ```elixir 189 | results = Query.table("people") 190 | |> Query.filter(%{last_name: "Smith"}) 191 | |> Query.changes 192 | |> RethinkDB.run conn 193 | # get one result 194 | first_change = RethinkDB.next results 195 | # get stream, chunked in groups of 5, Inspect 196 | results |> Stream.chunk(5) |> Enum.each &IO.inspect/1 197 | ``` 198 | 199 | ### Supervised Changefeeds 200 | 201 | Supervised Changefeeds (an OTP behavior for running a changefeed as a process) have been moved to their own repo to enable independent release cycles. 
See https://github.com/hamiltop/rethinkdb_changefeed 202 | 203 | ### Roadmap 204 | 205 | Version 1.0.0 will be limited to individual connections and implement the entire documented ReQL (as of rethinkdb 2.0) 206 | 207 | While not provided by this library, we will also include example code for: 208 | 209 | * Connection Pooling 210 | 211 | The goal for 1.0.0 is to be stable. Issues have been filed for work that needs to be completed before 1.0.0 and tagged with the 1.0.0 milestone. 212 | 213 | 214 | ### Example Apps 215 | 216 | Checkout the wiki page for various [example apps](https://github.com/hamiltop/rethinkdb-elixir/wiki/Example-Apps) 217 | 218 | ### Contributing 219 | 220 | Contributions are welcome. Take a look at the Issues. Anything that is tagged `Help Wanted` or `Feedback Wanted` is a good candidate for contributions. Even if you don't know where to start, respond to an interesting issue and you will be pointed in the right direction. 221 | 222 | #### Testing 223 | 224 | Be intentional. Whether you are writing production code or tests, make sure there is value in the test being written. 225 | -------------------------------------------------------------------------------- /config/config.exs: -------------------------------------------------------------------------------- 1 | # This file is responsible for configuring your application 2 | # and its dependencies with the aid of the Mix.Config module. 3 | use Mix.Config 4 | 5 | # This configuration is loaded before any dependency and is restricted 6 | # to this project. If another project depends on this project, this 7 | # file won't be loaded nor affect the parent project. For this reason, 8 | # if you want to provide default values for your application for third- 9 | # party users, it should be done in your mix.exs file. 10 | 11 | # Sample configuration: 12 | # 13 | # config :logger, :console, 14 | # level: :info, 15 | # format: "$date $time [$level] $metadata$message\n", 16 | # metadata: [:user_id] 17 | 18 | # It is also possible to import configuration files, relative to this 19 | # directory. For example, you can emulate configuration per environment 20 | # by uncommenting the line below and defining dev.exs, test.exs and such. 21 | # Configuration from the imported file will override the ones defined 22 | # here (which is why it is important to import them last). 23 | # 24 | # import_config "#{Mix.env}.exs" 25 | -------------------------------------------------------------------------------- /lib/rethinkdb.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB do 2 | 3 | @moduledoc """ 4 | Some convenience functions for interacting with RethinkDB. 5 | """ 6 | 7 | @doc """ 8 | See `RethinkDB.Connection.run/2` 9 | """ 10 | defdelegate run(query, pid), to: RethinkDB.Connection 11 | 12 | @doc """ 13 | See `RethinkDB.Connection.run/3` 14 | """ 15 | defdelegate run(query, pid, opts), to: RethinkDB.Connection 16 | 17 | @doc """ 18 | See `RethinkDB.Connection.next/1` 19 | """ 20 | defdelegate next(collection), to: RethinkDB.Connection 21 | 22 | @doc """ 23 | See `RethinkDB.Connection.close/1` 24 | """ 25 | defdelegate close(collection), to: RethinkDB.Connection 26 | 27 | end 28 | -------------------------------------------------------------------------------- /lib/rethinkdb/connection.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Connection do 2 | @moduledoc """ 3 | A module for managing connections. 
4 | 5 | A `Connection` object is a process that can be started in various ways. 6 | 7 | It is recommended to start it as part of a supervision tree with a name: 8 | 9 | worker(RethinkDB.Connection, [[port: 28015, host: 'localhost', name: :rethinkdb_connection]]) 10 | 11 | Connections will by default connect asynchronously. If a connection fails, we retry with 12 | an exponential backoff. All queries will return `%RethinkDB.Exception.ConnectionClosed{}` 13 | until the connection is established. 14 | 15 | If `:sync_connect` is set to `true` then the process will crash if we fail to connect. It's 16 | recommended to only use this if the database is on the same host or if a rethinkdb proxy 17 | is running on the same host. If there's any chance of a network partition, it's recommended 18 | to stick with the default behavior. 19 | """ 20 | use Connection 21 | 22 | require Logger 23 | 24 | alias RethinkDB.Connection.Request 25 | alias RethinkDB.Connection.Transport 26 | 27 | @doc """ 28 | A convenience macro for naming connections. 29 | 30 | For convenience we provide the `use RethinkDB.Connection` macro, which automatically registers 31 | itself under the module name: 32 | 33 | defmodule FooDatabase, do: use RethinkDB.Connection 34 | 35 | Then in the supervision tree: 36 | 37 | worker(FooDatabase, [[port: 28015, host: 'localhost']]) 38 | 39 | When `use RethinkDB.Connection` is called, it will define: 40 | 41 | * `start_link` 42 | * `stop` 43 | * `run` 44 | 45 | All of these only differ from the normal `RethinkDB.Connection` functions in that they don't 46 | accept a connection. They will use the current module as the process name. `start_link` will 47 | start the connection under the module name. 48 | 49 | If you attempt to provide a name to `start_link`, it will raise an `ArgumentError`. 50 | """ 51 | defmacro __using__(_opts) do 52 | quote location: :keep do 53 | def start_link(opts \\ []) do 54 | if Dict.has_key?(opts, :name) && opts[:name] != __MODULE__ do 55 | # The whole point of this macro is to provide an implicit process 56 | # name, so subverting it is considered an error. 57 | raise ArgumentError.exception( 58 | "Process name #{inspect opts[:name]} conflicts with implicit name #{inspect __MODULE__} provided by `use RethinkDB.Connection`" 59 | ) 60 | end 61 | RethinkDB.Connection.start_link(Dict.put_new(opts, :name, __MODULE__)) 62 | end 63 | 64 | def run(query, opts \\ []) do 65 | RethinkDB.Connection.run(query, __MODULE__, opts) 66 | end 67 | 68 | def noreply_wait(timeout \\ 5000) do 69 | RethinkDB.Connection.noreply_wait(__MODULE__, timeout) 70 | end 71 | 72 | def stop do 73 | RethinkDB.Connection.stop(__MODULE__) 74 | end 75 | 76 | defoverridable [ start_link: 1, start_link: 0 ] 77 | end 78 | end 79 | 80 | @doc """ 81 | Stop the connection. 82 | 83 | Stops the given connection. 84 | """ 85 | def stop(pid) do 86 | Connection.cast(pid, :stop) 87 | end 88 | 89 | @doc """ 90 | Run a query on a connection. 91 | 92 | Supports the following options: 93 | 94 | * `timeout` - How long to wait for a response 95 | * `db` - Default database to use for query. Can also be specified as part of the query. 96 | * `durability` - possible values are 'hard' and 'soft'. In soft durability mode RethinkDB will acknowledge the write immediately after receiving it, but before the write has been committed to disk. 97 | * `noreply` - set to true to not receive the result object or cursor and return immediately. 98 | * `profile` - whether or not to return a profile of the query’s execution (default: false). 
99 | * `time_format` - what format to return times in (default: :native). Set this to :raw if you want times returned as JSON objects for exporting. 100 | * `binary_format` - what format to return binary data in (default: :native). Set this to :raw if you want the raw pseudotype. 101 | """ 102 | def run(query, conn, opts \\ []) do 103 | timeout = Dict.get(opts, :timeout, 5000) 104 | conn_opts = Dict.drop(opts, [:timeout]) 105 | noreply = Dict.get(opts, :noreply, false) 106 | conn_opts = Connection.call(conn, :conn_opts) 107 | |> Dict.take([:db]) 108 | |> Dict.merge(conn_opts) 109 | query = prepare_and_encode(query, conn_opts) 110 | msg = case noreply do 111 | true -> {:query_noreply, query} 112 | false -> {:query, query} 113 | end 114 | case Connection.call(conn, msg, timeout) do 115 | {response, token} -> RethinkDB.Response.parse(response, token, conn, opts) 116 | :noreply -> :ok 117 | result -> result 118 | end 119 | end 120 | 121 | @doc """ 122 | Fetch the next dataset for a feed. 123 | 124 | Since a feed is tied to a particular connection, no connection is needed when calling 125 | `next`. 126 | """ 127 | def next(%{token: token, pid: pid, opts: opts}) do 128 | case Connection.call(pid, {:continue, token}, :infinity) do 129 | {response, token} -> RethinkDB.Response.parse(response, token, pid, opts) 130 | x -> x 131 | end 132 | end 133 | 134 | @doc """ 135 | Closes a feed. 136 | 137 | Since a feed is tied to a particular connection, no connection is needed when calling 138 | `close`. 139 | """ 140 | def close(%{token: token, pid: pid}) do 141 | {response, token} = Connection.call(pid, {:stop, token}, :infinity) 142 | RethinkDB.Response.parse(response, token, pid, []) 143 | end 144 | 145 | @doc """ 146 | `noreply_wait` ensures that previous queries with the noreply flag have been processed by the server. Note that this guarantee only applies to queries run on the given connection. 147 | """ 148 | def noreply_wait(conn, timeout \\ 5000) do 149 | {response, token} = Connection.call(conn, :noreply_wait, timeout) 150 | case RethinkDB.Response.parse(response, token, conn, []) do 151 | %RethinkDB.Response{data: %{"t" => 4}} -> :ok 152 | r -> r 153 | end 154 | end 155 | 156 | defp prepare_and_encode(query, opts) do 157 | query = RethinkDB.Prepare.prepare(query) 158 | 159 | # Right now :db can still be nil so we need to remove it 160 | opts = Enum.into(opts, %{}, fn 161 | {:db, db} -> 162 | {:db, RethinkDB.Prepare.prepare(RethinkDB.Query.db(db))} 163 | {k, v} -> 164 | {k, v} 165 | end) 166 | 167 | query = [1, query, opts] 168 | Poison.encode!(query) 169 | end 170 | 171 | 172 | @doc """ 173 | Start connection as a linked process 174 | 175 | Accepts a `Dict` of options. Supported options: 176 | 177 | * `:host` - hostname to use to connect to database. Defaults to `'localhost'`. 178 | * `:port` - port on which to connect to database. Defaults to `28015`. 179 | * `:auth_key` - authorization key to use with database. Defaults to `nil`. 180 | * `:db` - default database to use with queries. Defaults to `nil`. 181 | * `:sync_connect` - whether to have `init` block until a connection succeeds. Defaults to `false`. 182 | * `:max_pending` - Hard cap on number of concurrent requests. Defaults to `10000` 183 | * `:ssl` - a dict of options. Support SSL options: 184 | * `:ca_certs` - a list of file paths to cacerts. 
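  A possible invocation with SSL (the host and certificate path below are
  illustrative, not defaults):

      {:ok, conn} = RethinkDB.Connection.start_link(
        host: "10.0.0.17",
        port: 28015,
        db: "test",
        ssl: [ca_certs: ["/path/to/rootCA.pem"]]
      )
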
185 | """ 186 | def start_link(opts \\ []) do 187 | args = Dict.take(opts, [:host, :port, :auth_key, :db, :sync_connect, :ssl, :max_pending]) 188 | Connection.start_link(__MODULE__, args, opts) 189 | end 190 | 191 | def init(opts) do 192 | host = case Dict.get(opts, :host, 'localhost') do 193 | x when is_binary(x) -> String.to_char_list x 194 | x -> x 195 | end 196 | sync_connect = Dict.get(opts, :sync_connect, false) 197 | ssl = Dict.get(opts, :ssl) 198 | opts = Dict.put(opts, :host, host) 199 | |> Dict.put_new(:port, 28015) 200 | |> Dict.put_new(:auth_key, "") 201 | |> Dict.put_new(:max_pending, 10000) 202 | |> Dict.drop([:sync_connect]) 203 | |> Enum.into(%{}) 204 | {transport, transport_opts} = case ssl do 205 | nil -> {%Transport.TCP{}, []} 206 | x -> {%Transport.SSL{}, Enum.map(Dict.fetch!(x, :ca_certs), &({:cacertfile, &1})) ++ [verify: :verify_peer]} 207 | end 208 | state = %{ 209 | pending: %{}, 210 | current: {:start, ""}, 211 | token: 0, 212 | config: Map.put(opts, :transport, {transport, transport_opts}) 213 | } 214 | case sync_connect do 215 | true -> 216 | case connect(:sync, state) do 217 | {:backoff, _, _} -> {:stop, :econnrefused} 218 | x -> x 219 | end 220 | false -> 221 | {:connect, :init, state} 222 | end 223 | end 224 | 225 | def connect(_info, state = %{config: %{host: host, port: port, auth_key: auth_key, transport: {transport, transport_opts}}}) do 226 | case Transport.connect(transport, host, port, [active: false, mode: :binary] ++ transport_opts) do 227 | {:ok, socket} -> 228 | case handshake(socket, auth_key) do 229 | {:error, _} -> {:stop, :bad_handshake, state} 230 | :ok -> 231 | :ok = Transport.setopts(socket, [active: :once]) 232 | # TODO: investigate timeout vs hibernate 233 | {:ok, Dict.put(state, :socket, socket)} 234 | end 235 | {:error, :econnrefused} -> 236 | backoff = min(Dict.get(state, :timeout, 1000), 64000) 237 | {:backoff, backoff, Dict.put(state, :timeout, backoff*2)} 238 | end 239 | end 240 | 241 | def disconnect(info, state = %{pending: pending}) do 242 | pending |> Enum.each(fn {_token, pid} -> 243 | Connection.reply(pid, %RethinkDB.Exception.ConnectionClosed{}) 244 | end) 245 | new_state = state 246 | |> Map.delete(:socket) 247 | |> Map.put(:pending, %{}) 248 | |> Map.put(:current, {:start, ""}) 249 | # TODO: should we reconnect? 
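    # For now the process simply stops here; when the connection is run under
    # a supervisor (as recommended in the module docs), the supervisor restarts
    # it and connect/2 re-establishes the connection with backoff.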
250 | {:stop, info, new_state} 251 | end 252 | 253 | def handle_call(:conn_opts, _from, state = %{config: opts}) do 254 | {:reply, opts, state} 255 | end 256 | 257 | def handle_call(_, _, 258 | state = %{pending: pending, config: %{max_pending: max_pending}}) when map_size(pending) > max_pending do 259 | {:reply, %RethinkDB.Exception.TooManyRequests{}, state} 260 | end 261 | 262 | def handle_call({:query_noreply, query}, _from, state = %{token: token}) do 263 | new_token = token + 1 264 | token = << token :: little-size(64) >> 265 | {:noreply, state} = Request.make_request(query, token, :noreply, %{state | token: new_token}) 266 | {:reply, :noreply, state} 267 | end 268 | 269 | def handle_call({:query, query}, from, state = %{token: token}) do 270 | new_token = token + 1 271 | token = << token :: little-size(64) >> 272 | Request.make_request(query, token, from, %{state | token: new_token}) 273 | end 274 | 275 | def handle_call({:continue, token}, from, state) do 276 | query = "[2]" 277 | Request.make_request(query, token, from, state) 278 | end 279 | 280 | def handle_call({:stop, token}, from, state) do 281 | query = "[3]" 282 | Request.make_request(query, token, from, state) 283 | end 284 | 285 | def handle_call(:noreply_wait, from, state = %{token: token}) do 286 | query = "[4]" 287 | new_token = token + 1 288 | token = << token :: little-size(64) >> 289 | Request.make_request(query, token, from, %{state | token: new_token}) 290 | end 291 | 292 | def handle_cast(:stop, state) do 293 | {:disconnect, :normal, state}; 294 | end 295 | 296 | def handle_info({proto, _port, data}, state = %{socket: socket}) when proto in [:tcp, :ssl] do 297 | :ok = Transport.setopts(socket, [active: :once]) 298 | Request.handle_recv(data, state) 299 | end 300 | 301 | def handle_info({closed_msg, _port}, state) when closed_msg in [:ssl_closed, :tcp_closed] do 302 | {:disconnect, closed_msg, state} 303 | end 304 | 305 | def handle_info(msg, state) do 306 | Logger.debug("Received unhandled info: #{inspect(msg)} with state #{inspect state}") 307 | {:noreply, state} 308 | end 309 | 310 | def terminate(_reason, %{socket: socket}) do 311 | Transport.close(socket) 312 | :ok 313 | end 314 | 315 | def terminate(_reason, _state) do 316 | :ok 317 | end 318 | 319 | defp handshake(socket, auth_key) do 320 | :ok = Transport.send(socket, << 0x400c2d20 :: little-size(32) >>) 321 | :ok = Transport.send(socket, << :erlang.iolist_size(auth_key) :: little-size(32) >>) 322 | :ok = Transport.send(socket, auth_key) 323 | :ok = Transport.send(socket, << 0x7e6970c7 :: little-size(32) >>) 324 | case recv_until_null(socket, "") do 325 | "SUCCESS" -> :ok 326 | error = {:error, _} -> error 327 | end 328 | end 329 | 330 | defp recv_until_null(socket, acc) do 331 | case Transport.recv(socket, 1) do 332 | {:ok, "\0"} -> acc 333 | {:ok, a} -> recv_until_null(socket, acc <> a) 334 | x = {:error, _} -> x 335 | end 336 | end 337 | end 338 | -------------------------------------------------------------------------------- /lib/rethinkdb/connection/request.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Connection.Request do 2 | @moduledoc false 3 | 4 | alias RethinkDB.Connection.Transport 5 | 6 | def make_request(query, token, from, state = %{pending: pending, socket: socket}) do 7 | new_pending = case from do 8 | :noreply -> pending 9 | _ -> Dict.put_new(pending, token, from) 10 | end 11 | bsize = :erlang.size(query) 12 | payload = token <> << bsize :: little-size(32) >> <> query 13 | 
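    # Wire format: the 8-byte little-endian token, a 4-byte little-endian
    # payload length, then the JSON-encoded query itself.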
case Transport.send(socket, payload) do 14 | :ok -> {:noreply, %{state | pending: new_pending}} 15 | {:error, :closed} -> 16 | {:disconnect, :closed, %RethinkDB.Exception.ConnectionClosed{}, state} 17 | end 18 | end 19 | 20 | def make_request(_query, _token, _from, state) do 21 | {:reply, %RethinkDB.Exception.ConnectionClosed{}, state} 22 | end 23 | 24 | def handle_recv(data, state = %{current: {:start, leftover}}) do 25 | case leftover <> data do 26 | << token :: binary-size(8), leftover :: binary >> -> 27 | handle_recv("", %{state | current: {:token, token, leftover}}) 28 | new_data -> 29 | {:noreply, %{state | current: {:start, new_data}}} 30 | end 31 | end 32 | def handle_recv(data, state = %{current: {:token, token, leftover}}) do 33 | case leftover <> data do 34 | << length :: little-size(32), leftover :: binary >> -> 35 | handle_recv("", %{state | current: {:length, length, token, leftover}}) 36 | new_data -> 37 | {:noreply, %{state | current: {:token, token, new_data}}} 38 | end 39 | end 40 | def handle_recv(data, state = %{current: {:length, length, token, leftover}, pending: pending}) do 41 | case leftover <> data do 42 | << response :: binary-size(length), leftover :: binary >> -> 43 | Connection.reply(pending[token], {response, token}) 44 | handle_recv("", %{state | current: {:start, leftover}, pending: Dict.delete(pending, token)}) 45 | new_data -> 46 | {:noreply, %{state | current: {:length, length, token, new_data}}} 47 | end 48 | end 49 | end 50 | -------------------------------------------------------------------------------- /lib/rethinkdb/connection/transport.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Connection.Transport do 2 | defmodule SSL, do: defstruct [:socket] 3 | defmodule TCP, do: defstruct [:socket] 4 | 5 | def connect(%SSL{}, host, port, opts) do 6 | case :ssl.connect(host, port, opts) do 7 | {:ok, socket} -> {:ok, %SSL{socket: socket}} 8 | x -> x 9 | end 10 | end 11 | 12 | def connect(%TCP{}, host, port, opts) do 13 | case :gen_tcp.connect(host, port, opts) do 14 | {:ok, socket} -> {:ok, %TCP{socket: socket}} 15 | x -> x 16 | end 17 | end 18 | 19 | def send(%SSL{socket: socket}, data) do 20 | :ssl.send(socket, data) 21 | end 22 | 23 | def send(%TCP{socket: socket}, data) do 24 | :gen_tcp.send(socket, data) 25 | end 26 | 27 | def recv(%SSL{socket: socket}, n) do 28 | :ssl.recv(socket, n) 29 | end 30 | 31 | def recv(%TCP{socket: socket}, n) do 32 | :gen_tcp.recv(socket, n) 33 | end 34 | 35 | def setopts(%SSL{socket: socket}, opts) do 36 | :ssl.setopts(socket, opts) 37 | end 38 | 39 | def setopts(%TCP{socket: socket}, opts) do 40 | :inet.setopts(socket, opts) 41 | end 42 | 43 | def close(%SSL{socket: socket}), do: :ssl.close(socket) 44 | def close(%TCP{socket: socket}), do: :gen_tcp.close(socket) 45 | end 46 | -------------------------------------------------------------------------------- /lib/rethinkdb/exception.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Exception do 2 | @moduledoc false 3 | defmodule ConnectionClosed do 4 | @moduledoc false 5 | defstruct [] 6 | end 7 | defmodule TooManyRequests do 8 | @moduledoc false 9 | defstruct [] 10 | end 11 | end 12 | -------------------------------------------------------------------------------- /lib/rethinkdb/lambda.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Lambda do 2 | @moduledoc """ 3 | Macro for using native elixir 
functions in queries 4 | """ 5 | alias RethinkDB.Query 6 | 7 | @doc """ 8 | Macro for using native elixir functions in queries 9 | 10 | Wrapping an anonymous function in `lambda` will cause it to be converted at compile time 11 | into standard RethinkDB query syntax. Example: 12 | 13 | lambda(fn (x) -> 14 | x + 5 == x/2 15 | end) 16 | 17 | Becomes: 18 | 19 | fn (x) -> 20 | RethinkDB.Query.eq( 21 | RethinkDB.Query.add(x, 5), 22 | RethinkDB.Query.divide(x, 2) 23 | ) 24 | end 25 | 26 | """ 27 | defmacro lambda(block) do 28 | build(block) 29 | end 30 | 31 | defp build(block) do 32 | Macro.prewalk block, fn 33 | {{:., _, [Access, :get]}, _, [arg1, arg2]} -> 34 | quote do 35 | Query.bracket(unquote(arg1), unquote(arg2)) 36 | end 37 | {:+, _, args} -> quote do: Query.add(unquote(args)) 38 | {:<>, _, args} -> quote do: Query.add(unquote(args)) 39 | {:++, _, args} -> quote do: Query.add(unquote(args)) 40 | {:-, _, args} -> quote do: Query.sub(unquote(args)) 41 | {:*, _, args} -> quote do: Query.mul(unquote(args)) 42 | {:/, _, args} -> quote do: Query.divide(unquote(args)) 43 | {:rem, _, [a, b]} -> quote do: Query.mod(unquote(a), unquote(b)) 44 | {:==, _, args} -> quote do: Query.eq(unquote(args)) 45 | {:!=, _, args} -> quote do: Query.ne(unquote(args)) 46 | {:<, _, args} -> quote do: Query.lt(unquote(args)) 47 | {:<=, _, args} -> quote do: Query.le(unquote(args)) 48 | {:>, _, args} -> quote do: Query.gt(unquote(args)) 49 | {:>=, _, args} -> quote do: Query.ge(unquote(args)) 50 | {:||, _, args} -> quote do: Query.or_r(unquote(args)) 51 | {:&&, _, args} -> quote do: Query.and_r(unquote(args)) 52 | {:if, _, [expr, [do: truthy, else: falsy]]} -> 53 | quote do 54 | Query.branch(unquote(expr), unquote(truthy), unquote(falsy)) 55 | end 56 | {:if, _, _} -> 57 | raise "You must include an else condition when using if in a ReQL Lambda" 58 | x -> x 59 | end 60 | end 61 | 62 | end 63 | -------------------------------------------------------------------------------- /lib/rethinkdb/prepare.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Prepare do 2 | alias RethinkDB.Q 3 | @moduledoc false 4 | 5 | # This is a bunch of functions that transform the query from our data structures into 6 | # the over the wire format. The main role is to properly create unique function variable ids. 
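  # For example (using the term ids declared in query.ex), preparing
  # RethinkDB.Query.db_create("test") yields the wire form [57, ["test"]].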
7 | 8 | def prepare(query) do 9 | {val, _vars} = prepare(query, {0, %{}}) 10 | val 11 | end 12 | defp prepare(%Q{query: query}, state) do 13 | prepare(query, state) 14 | end 15 | defp prepare(list, state) when is_list(list) do 16 | {list, state} = Enum.reduce(list, {[], state}, fn (el, {acc, state}) -> 17 | {el, state} = prepare(el, state) 18 | {[el | acc], state} 19 | end) 20 | {Enum.reverse(list), state} 21 | end 22 | defp prepare(map, state) when is_map(map) do 23 | {map, state} = Enum.reduce(map, {[], state}, fn({k,v}, {acc, state}) -> 24 | {k, state} = prepare(k, state) 25 | {v, state} = prepare(v, state) 26 | {[{k, v} | acc], state} 27 | end) 28 | {Enum.into(map, %{}), state} 29 | end 30 | defp prepare(ref, state = {max, map}) when is_reference(ref) do 31 | case Dict.get(map,ref) do 32 | nil -> {max + 1, {max + 1, Dict.put_new(map, ref, max + 1)}} 33 | x -> {x, state} 34 | end 35 | end 36 | defp prepare({k,v}, state) do 37 | {k, state} = prepare(k, state) 38 | {v, state} = prepare(v, state) 39 | {[k,v], state} 40 | end 41 | defp prepare(el, state) do 42 | if is_binary(el) and not String.valid?(el) do 43 | {RethinkDB.Query.binary(el), state} 44 | else 45 | {el, state} 46 | end 47 | end 48 | end 49 | -------------------------------------------------------------------------------- /lib/rethinkdb/pseudotypes.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Pseudotypes do 2 | @moduledoc false 3 | defmodule Binary do 4 | @moduledoc false 5 | defstruct data: nil 6 | 7 | def parse(%{"$reql_type$" => "BINARY", "data" => data}, opts) do 8 | case Dict.get(opts, :binary_format) do 9 | :raw -> 10 | %__MODULE__{data: data} 11 | _ -> 12 | :base64.decode(data) 13 | end 14 | end 15 | end 16 | 17 | defmodule Geometry do 18 | @moduledoc false 19 | defmodule Point do 20 | @moduledoc false 21 | defstruct coordinates: [] 22 | end 23 | 24 | defmodule Line do 25 | @moduledoc false 26 | defstruct coordinates: [] 27 | end 28 | 29 | defmodule Polygon do 30 | @moduledoc false 31 | defstruct coordinates: [] 32 | end 33 | 34 | def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => [x,y], "type" => "Point"}) do 35 | %Point{coordinates: {x,y}} 36 | end 37 | def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => coords, "type" => "LineString"}) do 38 | %Line{coordinates: Enum.map(coords, &List.to_tuple/1)} 39 | end 40 | def parse(%{"$reql_type$" => "GEOMETRY", "coordinates" => coords, "type" => "Polygon"}) do 41 | %Polygon{coordinates: (for points <- coords, do: Enum.map points, &List.to_tuple/1)} 42 | end 43 | end 44 | 45 | defmodule Time do 46 | @moduledoc false 47 | defstruct epoch_time: nil, timezone: nil 48 | 49 | def parse(%{"$reql_type$" => "TIME", "epoch_time" => epoch_time, "timezone" => timezone}, opts) do 50 | case Dict.get(opts, :time_format) do 51 | :raw -> 52 | %__MODULE__{epoch_time: epoch_time, timezone: timezone} 53 | _ -> 54 | {seconds, ""} = Calendar.ISO.parse_offset(timezone) 55 | zone_abbr = case seconds do 56 | 0 -> "UTC" 57 | _ -> timezone 58 | end 59 | negative = seconds < 0 60 | seconds = abs(seconds) 61 | time_zone = case {div(seconds,3600),rem(seconds,3600)} do 62 | {0,0} -> "Etc/UTC" 63 | {hours,0} -> 64 | "Etc/GMT" <> if negative do "+" else "-" end <> Integer.to_string(hours) 65 | {hours,seconds} -> 66 | "Etc/GMT" <> if negative do "+" else "-" end <> Integer.to_string(hours) <> ":" <> 67 | String.pad_leading(Integer.to_string(seconds), 2, "0") 68 | end 69 | epoch_time * 1000 70 | |> trunc() 71 | |> 
DateTime.from_unix!(:milliseconds) 72 | |> struct(utc_offset: seconds, zone_abbr: zone_abbr, time_zone: time_zone) 73 | end 74 | end 75 | end 76 | 77 | def convert_reql_pseudotypes(nil, opts), do: nil 78 | def convert_reql_pseudotypes(%{"$reql_type$" => "BINARY"} = data, opts) do 79 | Binary.parse(data, opts) 80 | end 81 | def convert_reql_pseudotypes(%{"$reql_type$" => "GEOMETRY"} = data, opts) do 82 | Geometry.parse(data) 83 | end 84 | def convert_reql_pseudotypes(%{"$reql_type$" => "GROUPED_DATA"} = data, opts) do 85 | parse_grouped_data(data) 86 | end 87 | def convert_reql_pseudotypes(%{"$reql_type$" => "TIME"} = data, opts) do 88 | Time.parse(data, opts) 89 | end 90 | def convert_reql_pseudotypes(list, opts) when is_list(list) do 91 | Enum.map(list, fn data -> convert_reql_pseudotypes(data, opts) end) 92 | end 93 | def convert_reql_pseudotypes(map, opts) when is_map(map) do 94 | Enum.map(map, fn {k, v} -> 95 | {k, convert_reql_pseudotypes(v, opts)} 96 | end) |> Enum.into(%{}) 97 | end 98 | def convert_reql_pseudotypes(string, opts), do: string 99 | 100 | def parse_grouped_data(%{"$reql_type$" => "GROUPED_DATA", "data" => data}) do 101 | Enum.map(data, fn ([k, data]) -> 102 | {k, data} 103 | end) |> Enum.into(%{}) 104 | end 105 | def create_grouped_data(data) when is_map(data) do 106 | data = data |> Enum.map(fn {k,v} -> [k, v] end) 107 | %{"$reql_type$" => "GROUPED_DATA", "data" => data} 108 | end 109 | end 110 | -------------------------------------------------------------------------------- /lib/rethinkdb/q.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Q do 2 | @moduledoc false 3 | 4 | defstruct query: nil 5 | end 6 | 7 | defimpl Poison.Encoder, for: RethinkDB.Q do 8 | def encode(%{query: query}, options) do 9 | Poison.Encoder.encode(query, options) 10 | end 11 | end 12 | 13 | defimpl Inspect, for: RethinkDB.Q do 14 | @external_resource term_info = Path.join([__DIR__, "query", "term_info.json"]) 15 | 16 | @apidef term_info 17 | |> File.read!() 18 | |> Poison.decode!() 19 | |> Enum.into(%{}, fn {key, val} -> {val, key} end) 20 | 21 | def inspect(%RethinkDB.Q{query: [69, [[2, refs], lambda]]}, _) do 22 | # Replaces references within lambda functions 23 | # with capture syntax arguments (&1, &2, etc). 24 | refs 25 | |> Enum.map_reduce(1, &{{&1, "&#{&2}"}, &2 + 1}) 26 | |> elem(0) 27 | |> Enum.reduce("&(#{inspect lambda})", fn {ref, var}, lambda -> String.replace(lambda, "var(#{inspect ref})", var) end) 28 | end 29 | 30 | def inspect(%RethinkDB.Q{query: [index, args, opts]}, _) do 31 | # Converts function options (map) to keyword list. 32 | Kernel.inspect(%RethinkDB.Q{query: [index, args ++ [Map.to_list(opts)]]}) 33 | end 34 | 35 | def inspect(%RethinkDB.Q{query: [index, args]}, _options) do 36 | # Resolve index & args and return them as string. 
37 | Map.get(@apidef, index) <> "(#{Enum.join(Enum.map(args, &Kernel.inspect/1), ", ")})" 38 | end 39 | end 40 | -------------------------------------------------------------------------------- /lib/rethinkdb/query.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Query do 2 | @moduledoc """ 3 | Querying API for RethinkDB 4 | """ 5 | 6 | alias RethinkDB.Q 7 | 8 | import RethinkDB.Query.Macros 9 | 10 | @type t :: %Q{} 11 | @type reql_string :: String.t|t 12 | @type reql_number :: integer|float|t 13 | @type reql_array :: [term]|t 14 | @type reql_bool :: boolean|t 15 | @type reql_obj :: %{}|t 16 | @type reql_datum :: term 17 | @type reql_func0 :: (() -> term)|t 18 | @type reql_func1 :: (term -> term)|t 19 | @type reql_func2 :: (term, term -> term)|t 20 | @type reql_opts :: %{} 21 | @type reql_binary :: %RethinkDB.Pseudotypes.Binary{}|binary|t 22 | @type reql_geo_point :: %RethinkDB.Pseudotypes.Geometry.Point{}|{reql_number,reql_number}|t 23 | @type reql_geo_line :: %RethinkDB.Pseudotypes.Geometry.Line{}|t 24 | @type reql_geo_polygon :: %RethinkDB.Pseudotypes.Geometry.Polygon{}|t 25 | @type reql_geo :: reql_geo_point|reql_geo_line|reql_geo_polygon 26 | @type reql_time :: %RethinkDB.Pseudotypes.Time{}|t 27 | 28 | # 29 | #Aggregation Functions 30 | # 31 | 32 | @doc """ 33 | Takes a stream and partitions it into multiple groups based on the fields or 34 | functions provided. 35 | 36 | With the multi flag single documents can be assigned to multiple groups, 37 | similar to the behavior of multi-indexes. When multi is True and the grouping 38 | value is an array, documents will be placed in each group that corresponds to 39 | the elements of the array. If the array is empty the row will be ignored. 40 | """ 41 | @spec group(Q.reql_array, Q.reql_func1 | Q.reql_string | [Q.reql_func1 | Q.reql_string] ) :: Q.t 42 | operate_on_seq_and_list(:group, 144, opts: true) 43 | operate_on_two_args(:group, 144, opts: true) 44 | 45 | @doc """ 46 | Takes a grouped stream or grouped data and turns it into an array of objects 47 | representing the groups. Any commands chained after ungroup will operate on 48 | this array, rather than operating on each group individually. This is useful if 49 | you want to e.g. order the groups by the value of their reduction. 50 | 51 | The format of the array returned by ungroup is the same as the default native 52 | format of grouped data in the JavaScript driver and data explorer. 53 | end 54 | """ 55 | @spec ungroup(Q.t) :: Q.t 56 | operate_on_single_arg(:ungroup, 150) 57 | 58 | @doc """ 59 | Produce a single value from a sequence through repeated application of a 60 | reduction function. 61 | 62 | The reduction function can be called on: 63 | 64 | * two elements of the sequence 65 | * one element of the sequence and one result of a previous reduction 66 | * two results of previous reductions 67 | 68 | The reduction function can be called on the results of two previous 69 | reductions because the reduce command is distributed and parallelized across 70 | shards and CPU cores. A common mistaken when using the reduce command is to 71 | suppose that the reduction is executed from left to right. 72 | """ 73 | @spec reduce(Q.reql_array, Q.reql_func2) :: Q.t 74 | operate_on_two_args(:reduce, 37) 75 | 76 | @doc """ 77 | Counts the number of elements in a sequence. If called with a value, counts 78 | the number of times that value occurs in the sequence. 
If called with a 79 | predicate function, counts the number of elements in the sequence where that 80 | function returns `true`. 81 | 82 | If count is called on a binary object, it will return the size of the object 83 | in bytes. 84 | """ 85 | @spec count(Q.reql_array) :: Q.t 86 | operate_on_single_arg(:count, 43) 87 | @spec count(Q.reql_array, Q.reql_string | Q.reql_func1) :: Q.t 88 | operate_on_two_args(:count, 43) 89 | 90 | @doc """ 91 | Sums all the elements of a sequence. If called with a field name, sums all 92 | the values of that field in the sequence, skipping elements of the sequence 93 | that lack that field. If called with a function, calls that function on every 94 | element of the sequence and sums the results, skipping elements of the sequence 95 | where that function returns `nil` or a non-existence error. 96 | 97 | Returns 0 when called on an empty sequence. 98 | """ 99 | @spec sum(Q.reql_array) :: Q.t 100 | operate_on_single_arg(:sum, 145) 101 | @spec sum(Q.reql_array, Q.reql_string|Q.reql_func1) :: Q.t 102 | operate_on_two_args(:sum, 145) 103 | 104 | @doc """ 105 | Averages all the elements of a sequence. If called with a field name, 106 | averages all the values of that field in the sequence, skipping elements of the 107 | sequence that lack that field. If called with a function, calls that function 108 | on every element of the sequence and averages the results, skipping elements of 109 | the sequence where that function returns None or a non-existence error. 110 | 111 | Produces a non-existence error when called on an empty sequence. You can 112 | handle this case with `default`. 113 | """ 114 | @spec avg(Q.reql_array) :: Q.t 115 | operate_on_single_arg(:avg, 146) 116 | @spec avg(Q.reql_array, Q.reql_string|Q.reql_func1) :: Q.t 117 | operate_on_two_args(:avg, 146) 118 | 119 | @doc """ 120 | Finds the minimum element of a sequence. The min command can be called with: 121 | 122 | * a field name, to return the element of the sequence with the smallest value in 123 | that field; 124 | * an index option, to return the element of the sequence with the smallest value in that 125 | index; 126 | * a function, to apply the function to every element within the sequence and 127 | return the element which returns the smallest value from the function, ignoring 128 | any elements where the function returns None or produces a non-existence error. 129 | 130 | Calling min on an empty sequence will throw a non-existence error; this can be 131 | handled using the `default` command. 132 | """ 133 | @spec min(Q.reql_array, Q.reql_opts | Q.reql_string | Q.reql_func1) :: Q.t 134 | operate_on_single_arg(:min, 147) 135 | operate_on_two_args(:min, 147) 136 | 137 | @doc """ 138 | Finds the maximum element of a sequence. The max command can be called with: 139 | 140 | * a field name, to return the element of the sequence with the smallest value in 141 | that field; 142 | * an index, to return the element of the sequence with the smallest value in that 143 | index; 144 | * a function, to apply the function to every element within the sequence and 145 | return the element which returns the smallest value from the function, ignoring 146 | any elements where the function returns None or produces a non-existence error. 147 | 148 | Calling max on an empty sequence will throw a non-existence error; this can be 149 | handled using the `default` command. 
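
  For example, to fetch the document with the largest value in a field (the
  table and field names are illustrative):

      iex> table("people") |> max("age") |> run
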
150 | """ 151 | @spec max(Q.reql_array, Q.reql_opts | Q.reql_string | Q.reql_func1) :: Q.t 152 | operate_on_single_arg(:max, 148) 153 | operate_on_two_args(:max, 148) 154 | 155 | @doc """ 156 | Removes duplicates from elements in a sequence. 157 | 158 | The distinct command can be called on any sequence, a table, or called on a 159 | table with an index. 160 | """ 161 | @spec distinct(Q.reql_array, Q.reql_opts) :: Q.t 162 | operate_on_single_arg(:distinct, 42, opts: true) 163 | 164 | @doc """ 165 | When called with values, returns `true` if a sequence contains all the specified 166 | values. When called with predicate functions, returns `true` if for each 167 | predicate there exists at least one element of the stream where that predicate 168 | returns `true`. 169 | """ 170 | @spec contains(Q.reql_array, Q.reql_array | Q.reql_func1 | Q.t) :: Q.t 171 | operate_on_seq_and_list(:contains, 93) 172 | operate_on_two_args(:contains, 93) 173 | 174 | # 175 | #Control Strucutres 176 | # 177 | 178 | @doc """ 179 | `args` is a special term that’s used to splice an array of arguments into 180 | another term. This is useful when you want to call a variadic term such as 181 | `get_all` with a set of arguments produced at runtime. 182 | 183 | This is analogous to Elixir's `apply`. 184 | """ 185 | @spec args(Q.reql_array) :: Q.t 186 | operate_on_single_arg(:args, 154) 187 | 188 | @doc """ 189 | Encapsulate binary data within a query. 190 | 191 | The type of data binary accepts depends on the client language. In 192 | Elixir, it expects a Binary. Using a Binary object within a query implies 193 | the use of binary and the ReQL driver will automatically perform the coercion. 194 | 195 | Binary objects returned to the client in Elixir will also be 196 | Binary objects. This can be changed with the binary_format option :raw 197 | to run to return “raw” objects. 198 | 199 | Only a limited subset of ReQL commands may be chained after binary: 200 | 201 | * coerce_to can coerce binary objects to string types 202 | * count will return the number of bytes in the object 203 | * slice will treat bytes like array indexes (i.e., slice(10,20) will return bytes 204 | * 10–19) 205 | * type_of returns PTYPE 206 | * info will return information on a binary object. 207 | """ 208 | @spec binary(Q.reql_binary) :: Q.t 209 | def binary(%RethinkDB.Pseudotypes.Binary{data: data}), do: do_binary(data) 210 | def binary(data), do: do_binary(:base64.encode(data)) 211 | def do_binary(data), do: %Q{query: [155, [%{"$reql_type$" => "BINARY", "data" => data}]]} 212 | 213 | @doc """ 214 | Call an anonymous function using return values from other ReQL commands or 215 | queries as arguments. 216 | 217 | The last argument to do (or, in some forms, the only argument) is an expression 218 | or an anonymous function which receives values from either the previous 219 | arguments or from prefixed commands chained before do. The do command is 220 | essentially a single-element map, letting you map a function over just one 221 | document. This allows you to bind a query result to a local variable within the 222 | scope of do, letting you compute the result just once and reuse it in a complex 223 | expression or in a series of ReQL commands. 224 | 225 | Arguments passed to the do function must be basic data types, and cannot be 226 | streams or selections. (Read about ReQL data types.) 
While the arguments will 227 | all be evaluated before the function is executed, they may be evaluated in any 228 | order, so their values should not be dependent on one another. The type of do’s 229 | result is the type of the value returned from the function or last expression. 230 | """ 231 | @spec do_r(Q.reql_datum | Q.reql_func0, Q.reql_func1) :: Q.t 232 | operate_on_single_arg(:do_r, 64) 233 | # Can't do `operate_on_two_args` because we swap the order of args to make it 234 | # Elixir's idiomatic subject first order. 235 | def do_r(data, f) when is_function(f), do: %Q{query: [64, [wrap(f), wrap(data)]]} 236 | 237 | @doc """ 238 | If the `test` expression returns False or None, the false_branch will be 239 | evaluated. Otherwise, the true_branch will be evaluated. 240 | 241 | The branch command is effectively an if renamed due to language constraints. 242 | """ 243 | @spec branch(Q.reql_datum, Q.reql_datum, Q.reql_datum) :: Q.t 244 | operate_on_three_args(:branch, 65) 245 | 246 | @doc """ 247 | Loop over a sequence, evaluating the given write query for each element. 248 | """ 249 | @spec for_each(Q.reql_array, Q.reql_func1) :: Q.t 250 | operate_on_two_args(:for_each, 68) 251 | 252 | @doc """ 253 | Generate a stream of sequential integers in a specified range. 254 | 255 | `range` takes 0, 1 or 2 arguments: 256 | 257 | * With no arguments, range returns an “infinite” stream from 0 up to and 258 | including the maximum integer value; 259 | * With one argument, range returns a stream from 0 up to but not 260 | including the end value; 261 | * With two arguments, range returns a stream from the start value up to 262 | but not including the end value. 263 | """ 264 | @spec range(Q.reql_number, Q.req_number) :: Q.t 265 | operate_on_zero_args(:range, 173) 266 | operate_on_single_arg(:range, 173) 267 | operate_on_two_args(:range, 173) 268 | 269 | @doc """ 270 | Throw a runtime error. 271 | """ 272 | @spec error(Q.reql_string) :: Q.t 273 | operate_on_single_arg(:error, 12) 274 | 275 | @doc """ 276 | Handle non-existence errors. Tries to evaluate and return its first argument. 277 | If an error related to the absence of a value is thrown in the process, or if 278 | its first argument returns nil, returns its second argument. (Alternatively, 279 | the second argument may be a function which will be called with either the text 280 | of the non-existence error or nil.) 281 | """ 282 | @spec default(Q.t, Q.t) :: Q.t 283 | operate_on_two_args(:default, 92) 284 | 285 | @doc """ 286 | Create a javascript expression. 287 | 288 | The only opt allowed is `timeout`. 289 | 290 | `timeout` is the number of seconds before `js` times out. The default value 291 | is 5 seconds. 292 | """ 293 | @spec js(Q.reql_string, Q.opts) :: Q.t 294 | operate_on_single_arg(:js, 11, opts: true) 295 | 296 | @doc """ 297 | Convert a value of one type into another. 298 | 299 | * a sequence, selection or object can be coerced to an array 300 | * an array of key-value pairs can be coerced to an object 301 | * a string can be coerced to a number 302 | * any datum (single value) can be coerced to a string 303 | * a binary object can be coerced to a string and vice-versa 304 | """ 305 | @spec coerce_to(Q.reql_datum, Q.reql_string) :: Q.t 306 | operate_on_two_args(:coerce_to, 51) 307 | 308 | @doc """ 309 | Gets the type of a value. 310 | """ 311 | @spec type_of(Q.reql_datum) :: Q.t 312 | operate_on_single_arg(:type_of, 52) 313 | 314 | @doc """ 315 | Get information about a ReQL value. 
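
  For example (the table name is illustrative):

      iex> table("people") |> info |> run
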
316 | """ 317 | @spec info(Q.t) :: Q.t 318 | operate_on_single_arg(:info, 79) 319 | 320 | @doc """ 321 | Parse a JSON string on the server. 322 | """ 323 | @spec json(Q.reql_string) :: Q.t 324 | operate_on_single_arg(:json, 98) 325 | 326 | @doc """ 327 | Serialize to JSON string on the server. 328 | """ 329 | @spec to_json(Q.reql_term) :: Q.t 330 | operate_on_single_arg(:to_json, 172) 331 | 332 | @doc """ 333 | Retrieve data from the specified URL over HTTP. The return type depends on 334 | the result_format option, which checks the Content-Type of the response by 335 | default. 336 | """ 337 | @spec http(Q.reql_string, Q.reql_opts) :: Q.t 338 | operate_on_single_arg(:http, 153, opts: true) 339 | 340 | @doc """ 341 | Return a UUID (universally unique identifier), a string that can be used as a unique ID. 342 | 343 | Accepts optionally a string. If given, UUID will be derived from the strings SHA-1 hash. 344 | """ 345 | @spec uuid(Q.reql_string) :: Q.t 346 | operate_on_zero_args(:uuid, 169) 347 | operate_on_single_arg(:uuid, 169) 348 | 349 | # 350 | #Database Operations 351 | # 352 | 353 | @doc """ 354 | Create a database. A RethinkDB database is a collection of tables, similar to 355 | relational databases. 356 | 357 | If successful, the command returns an object with two fields: 358 | 359 | * dbs_created: always 1. 360 | * config_changes: a list containing one object with two fields, old_val and 361 | new_val: 362 | * old_val: always null. 363 | * new_val: the database’s new config value. 364 | 365 | If a database with the same name already exists, the command throws 366 | RqlRuntimeError. 367 | 368 | Note: Only alphanumeric characters and underscores are valid for the database 369 | name. 370 | """ 371 | @spec db_create(Q.reql_string) :: Q.t 372 | operate_on_single_arg(:db_create, 57) 373 | 374 | @doc """ 375 | Drop a database. The database, all its tables, and corresponding data will be deleted. 376 | 377 | If successful, the command returns an object with two fields: 378 | 379 | * dbs_dropped: always 1. 380 | * tables_dropped: the number of tables in the dropped database. 381 | * config_changes: a list containing one two-field object, old_val and new_val: 382 | * old_val: the database’s original config value. 383 | * new_val: always None. 384 | 385 | If the given database does not exist, the command throws RqlRuntimeError. 386 | """ 387 | @spec db_drop(Q.reql_string) :: Q.t 388 | operate_on_single_arg(:db_drop, 58) 389 | 390 | @doc """ 391 | List all database names in the system. The result is a list of strings. 392 | """ 393 | @spec db_list :: Q.t 394 | operate_on_zero_args(:db_list, 59) 395 | 396 | # 397 | #Geospatial Queries 398 | # 399 | 400 | @doc """ 401 | Construct a circular line or polygon. A circle in RethinkDB is a polygon or 402 | line approximating a circle of a given radius around a given center, consisting 403 | of a specified number of vertices (default 32). 404 | 405 | The center may be specified either by two floating point numbers, the latitude 406 | (−90 to 90) and longitude (−180 to 180) of the point on a perfect sphere (see 407 | Geospatial support for more information on ReQL’s coordinate system), or by a 408 | point object. The radius is a floating point number whose units are meters by 409 | default, although that may be changed with the unit argument. 410 | 411 | Optional arguments available with circle are: 412 | 413 | - num_vertices: the number of vertices in the polygon or line. Defaults to 32. 
414 | - geo_system: the reference ellipsoid to use for geographic coordinates. Possible 415 | values are WGS84 (the default), a common standard for Earth’s geometry, or 416 | unit_sphere, a perfect sphere of 1 meter radius. 417 | - unit: Unit for the radius distance. Possible values are m (meter, the default), 418 | km (kilometer), mi (international mile), nm (nautical mile), ft (international 419 | foot). 420 | - fill: if `true` (the default) the circle is filled, creating a polygon; if `false` 421 | the circle is unfilled (creating a line). 422 | """ 423 | @spec circle(Q.reql_geo, Q.reql_number, Q.reql_opts) :: Q.t 424 | operate_on_two_args(:circle, 165, opts: true) 425 | 426 | @doc """ 427 | Compute the distance between a point and another geometry object. At least one 428 | of the geometry objects specified must be a point. 429 | 430 | Optional arguments available with distance are: 431 | 432 | - geo_system: the reference ellipsoid to use for geographic coordinates. Possible 433 | values are WGS84 (the default), a common standard for Earth’s geometry, or 434 | unit_sphere, a perfect sphere of 1 meter radius. 435 | - unit: Unit to return the distance in. Possible values are m (meter, the 436 | default), km (kilometer), mi (international mile), nm (nautical mile), ft 437 | (international foot). 438 | 439 | If one of the objects is a polygon or a line, the point will be projected onto 440 | the line or polygon assuming a perfect sphere model before the distance is 441 | computed (using the model specified with geo_system). As a consequence, if the 442 | polygon or line is extremely large compared to Earth’s radius and the distance 443 | is being computed with the default WGS84 model, the results of distance should 444 | be considered approximate due to the deviation between the ellipsoid and 445 | spherical models. 446 | """ 447 | @spec distance(Q.reql_geo, Q.reql_geo, Q.reql_opts) :: Q.t 448 | operate_on_two_args(:distance, 162, opts: true) 449 | 450 | @doc """ 451 | Convert a Line object into a Polygon object. If the last point does not 452 | specify the same coordinates as the first point, polygon will close the polygon 453 | by connecting them. 454 | """ 455 | @spec fill(Q.reql_line) :: Q.t 456 | operate_on_single_arg(:fill, 167) 457 | 458 | @doc """ 459 | Convert a GeoJSON object to a ReQL geometry object. 460 | 461 | RethinkDB only allows conversion of GeoJSON objects which have ReQL 462 | equivalents: Point, LineString, and Polygon. MultiPoint, MultiLineString, and 463 | MultiPolygon are not supported. (You could, however, store multiple points, 464 | lines and polygons in an array and use a geospatial multi index with them.) 465 | 466 | Only longitude/latitude coordinates are supported. GeoJSON objects that use 467 | Cartesian coordinates, specify an altitude, or specify their own coordinate 468 | reference system will be rejected. 469 | """ 470 | @spec geojson(Q.reql_obj) :: Q.t 471 | operate_on_single_arg(:geojson, 157) 472 | 473 | @doc """ 474 | Convert a ReQL geometry object to a GeoJSON object. 475 | """ 476 | @spec to_geojson(Q.reql_obj) :: Q.t 477 | operate_on_single_arg(:to_geojson, 158) 478 | 479 | @doc """ 480 | Get all documents where the given geometry object intersects the geometry 481 | object of the requested geospatial index. 482 | 483 | The index argument is mandatory. This command returns the same results as 484 | `filter(r.row('index')) |> intersects(geometry)`. 
The total number of results 485 | is limited to the array size limit which defaults to 100,000, but can be 486 | changed with the `array_limit` option to run. 487 | """ 488 | @spec get_intersecting(Q.reql_array, Q.reql_geo, Q.reql_opts) :: Q.t 489 | operate_on_two_args(:get_intersecting, 166, opts: true) 490 | 491 | @doc """ 492 | Get all documents where the specified geospatial index is within a certain 493 | distance of the specified point (default 100 kilometers). 494 | 495 | The index argument is mandatory. Optional arguments are: 496 | 497 | * max_results: the maximum number of results to return (default 100). 498 | * unit: Unit for the distance. Possible values are m (meter, the default), km 499 | (kilometer), mi (international mile), nm (nautical mile), ft (international 500 | foot). 501 | * max_dist: the maximum distance from an object to the specified point (default 502 | 100 km). 503 | * geo_system: the reference ellipsoid to use for geographic coordinates. Possible 504 | values are WGS84 (the default), a common standard for Earth’s geometry, or 505 | unit_sphere, a perfect sphere of 1 meter radius. 506 | 507 | The return value will be an array of two-item objects with the keys dist and 508 | doc, set to the distance between the specified point and the document (in the 509 | units specified with unit, defaulting to meters) and the document itself, 510 | respectively. 511 | 512 | """ 513 | @spec get_nearest(Q.reql_array, Q.reql_geo, Q.reql_opts) :: Q.t 514 | operate_on_two_args(:get_nearest, 168, opts: true) 515 | 516 | @doc """ 517 | Tests whether a geometry object is completely contained within another. When 518 | applied to a sequence of geometry objects, includes acts as a filter, returning 519 | a sequence of objects from the sequence that include the argument. 520 | """ 521 | @spec includes(Q.reql_geo, Q.reql_geo) :: Q.t 522 | operate_on_two_args(:includes, 164) 523 | 524 | @doc """ 525 | Tests whether two geometry objects intersect with one another. When applied to 526 | a sequence of geometry objects, intersects acts as a filter, returning a 527 | sequence of objects from the sequence that intersect with the argument. 528 | """ 529 | @spec intersects(Q.reql_geo, Q.reql_geo) :: Q.t 530 | operate_on_two_args(:intersects, 163) 531 | 532 | @doc """ 533 | Construct a geometry object of type Line. The line can be specified in one of 534 | two ways: 535 | 536 | - Two or more two-item arrays, specifying latitude and longitude numbers of the 537 | line’s vertices; 538 | - Two or more Point objects specifying the line’s vertices. 539 | """ 540 | @spec line([Q.reql_geo]) :: Q.t 541 | operate_on_list(:line, 160) 542 | 543 | @doc """ 544 | Construct a geometry object of type Point. The point is specified by two 545 | floating point numbers, the longitude (−180 to 180) and latitude (−90 to 90) of 546 | the point on a perfect sphere. 547 | """ 548 | @spec point(Q.reql_geo) :: Q.t 549 | def point({la,lo}), do: point(la, lo) 550 | operate_on_two_args(:point, 159) 551 | 552 | @doc """ 553 | Construct a geometry object of type Polygon. The Polygon can be specified in 554 | one of two ways: 555 | 556 | Three or more two-item arrays, specifying latitude and longitude numbers of the 557 | polygon’s vertices; 558 | * Three or more Point objects specifying the polygon’s vertices. 559 | * Longitude (−180 to 180) and latitude (−90 to 90) of vertices are plotted on a 560 | perfect sphere. See Geospatial support for more information on ReQL’s 561 | coordinate system. 
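For illustration, a minimal sketch assuming a connection bound to `conn` (the coordinates here are hypothetical):

    iex> polygon([point(-122.4, 37.7), point(-122.4, 37.8), point(-122.3, 37.8)]) |> run conn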
562 | 563 | If the last point does not specify the same coordinates as the first point, 564 | polygon will close the polygon by connecting them. You cannot directly 565 | construct a polygon with holes in it using polygon, but you can use polygon_sub 566 | to use a second polygon within the interior of the first to define a hole. 567 | """ 568 | @spec polygon([Q.reql_geo]) :: Q.t 569 | operate_on_list(:polygon, 161) 570 | 571 | @doc """ 572 | Use polygon2 to “punch out” a hole in polygon1. polygon2 must be completely 573 | contained within polygon1 and must have no holes itself (it must not be the 574 | output of polygon_sub itself). 575 | """ 576 | @spec polygon_sub(Q.reql_geo, Q.reql_geo) :: Q.t 577 | operate_on_two_args(:polygon_sub, 171) 578 | 579 | # 580 | #Joins Queries 581 | # 582 | 583 | @doc """ 584 | Returns an inner join of two sequences. The returned sequence represents an 585 | intersection of the left-hand sequence and the right-hand sequence: each row of 586 | the left-hand sequence will be compared with each row of the right-hand 587 | sequence to find all pairs of rows which satisfy the predicate. Each matched 588 | pair of rows of both sequences are combined into a result row. In most cases, 589 | you will want to follow the join with `zip` to combine the left and right results. 590 | 591 | Note that `inner_join` is slower and much less efficient than using `eqJoin` or 592 | `flat_map` with `get_all`. You should avoid using `inner_join` in commands when 593 | possible. 594 | 595 | iex> table("people") |> inner_join( 596 | table("phone_numbers"), &(eq(&1["id"], &2["person_id"]) 597 | ) |> run 598 | 599 | """ 600 | @spec inner_join(Q.reql_array, Q.reql_array, Q.reql_func2) :: Q.t 601 | operate_on_three_args(:inner_join, 48) 602 | 603 | @doc """ 604 | Returns a left outer join of two sequences. The returned sequence represents 605 | a union of the left-hand sequence and the right-hand sequence: all documents in 606 | the left-hand sequence will be returned, each matched with a document in the 607 | right-hand sequence if one satisfies the predicate condition. In most cases, 608 | you will want to follow the join with `zip` to combine the left and right results. 609 | 610 | Note that `outer_join` is slower and much less efficient than using `flat_map` 611 | with `get_all`. You should avoid using `outer_join` in commands when possible. 612 | 613 | iex> table("people") |> outer_join( 614 | table("phone_numbers"), &(eq(&1["id"], &2["person_id"]) 615 | ) |> run 616 | 617 | """ 618 | @spec outer_join(Q.reql_array, Q.reql_array, Q.reql_func2) :: Q.t 619 | operate_on_three_args(:outer_join, 49) 620 | 621 | @doc """ 622 | Join tables using a field on the left-hand sequence matching primary keys or 623 | secondary indexes on the right-hand table. `eq_join` is more efficient than other 624 | ReQL join types, and operates much faster. Documents in the result set consist 625 | of pairs of left-hand and right-hand documents, matched when the field on the 626 | left-hand side exists and is non-null and an entry with that field’s value 627 | exists in the specified index on the right-hand side. 628 | 629 | The result set of `eq_join` is a stream or array of objects. Each object in the 630 | returned set will be an object of the form `{ left: <left-document>, right: 631 | <right-document> }`, where the values of left and right will be the joined 632 | documents. Use the zip command to merge the left and right fields together. 
633 | 634 | iex> table("people") |> eq_join( 635 | "id", table("phone_numbers"), %{index: "person_id"} 636 | ) |> run 637 | 638 | """ 639 | @spec eq_join(Q.reql_array, Q.reql_string, Q.reql_array, Keyword.t) :: Q.t 640 | operate_on_three_args(:eq_join, 50, opts: true) 641 | 642 | @doc """ 643 | Used to ‘zip’ up the result of a join by merging the ‘right’ fields into 644 | ‘left’ fields of each member of the sequence. 645 | 646 | iex> table("people") |> eq_join( 647 | "id", table("phone_numbers"), %{index: "person_id"} 648 | ) |> zip |> run 649 | 650 | """ 651 | @spec zip(Q.reql_array) :: Q.t 652 | operate_on_single_arg(:zip, 72) 653 | 654 | # 655 | #Math and Logic Queries 656 | # 657 | 658 | @doc """ 659 | Sum two numbers, concatenate two strings, or concatenate 2 arrays. 660 | 661 | iex> add(1, 2) |> run conn 662 | %RethinkDB.Record{data: 3} 663 | 664 | iex> add("hello", " world") |> run conn 665 | %RethinkDB.Record{data: "hello world"} 666 | 667 | iex> add([1,2], [3,4]) |> run conn 668 | %RethinkDB.Record{data: [1,2,3,4]} 669 | 670 | """ 671 | @spec add((Q.reql_number | Q.reql_string), (Q.reql_number | Q.reql_string)) :: Q.t 672 | operate_on_two_args(:add, 24) 673 | 674 | @doc """ 675 | Add multiple values. 676 | 677 | iex> add([1, 2]) |> run conn 678 | %RethinkDB.Record{data: 3} 679 | 680 | iex> add(["hello", " world"]) |> run 681 | %RethinkDB.Record{data: "hello world"} 682 | 683 | """ 684 | @spec add([(Q.reql_number | Q.reql_string | Q.reql_array)]) :: Q.t 685 | operate_on_list(:add, 24) 686 | 687 | @doc """ 688 | Subtract two numbers. 689 | 690 | iex> sub(1, 2) |> run conn 691 | %RethinkDB.Record{data: -1} 692 | 693 | """ 694 | @spec sub(Q.reql_number, Q.reql_number) :: Q.t 695 | operate_on_two_args(:sub, 25) 696 | 697 | @doc """ 698 | Subtract multiple values. Left associative. 699 | 700 | iex> sub([9, 1, 2]) |> run conn 701 | %RethinkDB.Record{data: 6} 702 | 703 | """ 704 | @spec sub([Q.reql_number]) :: Q.t 705 | operate_on_list(:sub, 25) 706 | 707 | @doc """ 708 | Multiply two numbers, or make a periodic array. 709 | 710 | iex> mul(2,3) |> run conn 711 | %RethinkDB.Record{data: 6} 712 | 713 | iex> mul([1,2], 2) |> run conn 714 | %RethinkDB.Record{data: [1,2,1,2]} 715 | 716 | """ 717 | @spec mul((Q.reql_number | Q.reql_array), (Q.reql_number | Q.reql_array)) :: Q.t 718 | operate_on_two_args(:mul, 26) 719 | @doc """ 720 | Multiply multiple values. 721 | 722 | iex> mul([2,3,4]) |> run conn 723 | %RethinkDB.Record{data: 24} 724 | 725 | """ 726 | @spec mul([(Q.reql_number | Q.reql_array)]) :: Q.t 727 | operate_on_list(:mul, 26) 728 | 729 | @doc """ 730 | Divide two numbers. 731 | 732 | iex> divide(12, 4) |> run conn 733 | %RethinkDB.Record{data: 3} 734 | 735 | """ 736 | @spec divide(Q.reql_number, Q.reql_number) :: Q.t 737 | operate_on_two_args(:divide, 27) 738 | @doc """ 739 | Divide a list of numbers. Left associative. 740 | 741 | iex> divide([12, 2, 3]) |> run conn 742 | %RethinkDB.Record{data: 2} 743 | 744 | """ 745 | @spec divide([Q.reql_number]) :: Q.t 746 | operate_on_list(:divide, 27) 747 | 748 | @doc """ 749 | Find the remainder when dividing two numbers. 750 | 751 | iex> mod(23, 4) |> run conn 752 | %RethinkDB.Record{data: 3} 753 | 754 | """ 755 | @spec mod(Q.reql_number, Q.reql_number) :: Q.t 756 | operate_on_two_args(:mod, 28) 757 | 758 | @doc """ 759 | Compute the logical “and” of two values. 
760 | 761 | iex> and_r(true, true) |> run conn 762 | %RethinkDB.Record{data: true} 763 | 764 | iex> and_r(false, true) |> run conn 765 | %RethinkDB.Record{data: false} 766 | """ 767 | @spec and_r(Q.reql_bool, Q.reql_bool) :: Q.t 768 | operate_on_two_args(:and_r, 67) 769 | @doc """ 770 | Compute the logical “and” of all values in a list. 771 | 772 | iex> and_r([true, true, true]) |> run conn 773 | %RethinkDB.Record{data: true} 774 | 775 | iex> and_r([false, true, true]) |> run conn 776 | %RethinkDB.Record{data: false} 777 | """ 778 | @spec and_r([Q.reql_bool]) :: Q.t 779 | operate_on_list(:and_r, 67) 780 | 781 | @doc """ 782 | Compute the logical “or” of two values. 783 | 784 | iex> or_r(true, false) |> run conn 785 | %RethinkDB.Record{data: true} 786 | 787 | iex> or_r(false, false) |> run conn 788 | %RethinkDB.Record{data: false} 789 | 790 | """ 791 | @spec or_r(Q.reql_bool, Q.reql_bool) :: Q.t 792 | operate_on_two_args(:or_r, 66) 793 | @doc """ 794 | Compute the logical “or” of all values in a list. 795 | 796 | iex> or_r([true, true, true]) |> run conn 797 | %RethinkDB.Record{data: true} 798 | 799 | iex> or_r([false, true, true]) |> run conn 800 | %RethinkDB.Record{data: false} 801 | 802 | """ 803 | @spec or_r([Q.reql_bool]) :: Q.t 804 | operate_on_list(:or_r, 66) 805 | 806 | @doc """ 807 | Test if two values are equal. 808 | 809 | iex> eq(1,1) |> run conn 810 | %RethinkDB.Record{data: true} 811 | 812 | iex> eq(1, 2) |> run conn 813 | %RethinkDB.Record{data: false} 814 | """ 815 | @spec eq(Q.reql_datum, Q.reql_datum) :: Q.t 816 | operate_on_two_args(:eq, 17) 817 | @doc """ 818 | Test if all values in a list are equal. 819 | 820 | iex> eq([2, 2, 2]) |> run conn 821 | %RethinkDB.Record{data: true} 822 | 823 | iex> eq([2, 1, 2]) |> run conn 824 | %RethinkDB.Record{data: false} 825 | """ 826 | @spec eq([Q.reql_datum]) :: Q.t 827 | operate_on_list(:eq, 17) 828 | 829 | @doc """ 830 | Test if two values are not equal. 831 | 832 | iex> ne(1,1) |> run conn 833 | %RethinkDB.Record{data: false} 834 | 835 | iex> ne(1, 2) |> run conn 836 | %RethinkDB.Record{data: true} 837 | """ 838 | @spec ne(Q.reql_datum, Q.reql_datum) :: Q.t 839 | operate_on_two_args(:ne, 18) 840 | @doc """ 841 | Test if all values in a list are not equal. 842 | 843 | iex> ne([2, 2, 2]) |> run conn 844 | %RethinkDB.Record{data: false} 845 | 846 | iex> ne([2, 1, 2]) |> run conn 847 | %RethinkDB.Record{data: true} 848 | """ 849 | @spec ne([Q.reql_datum]) :: Q.t 850 | operate_on_list(:ne, 18) 851 | 852 | @doc """ 853 | Test if one value is less than the other. 854 | 855 | iex> lt(2,1) |> run conn 856 | %RethinkDB.Record{data: false} 857 | 858 | iex> lt(1, 2) |> run conn 859 | %RethinkDB.Record{data: true} 860 | """ 861 | @spec lt(Q.reql_datum, Q.reql_datum) :: Q.t 862 | operate_on_two_args(:lt, 19) 863 | @doc """ 864 | Test if all values in a list are less than the next. Left associative. 865 | 866 | iex> lt([1, 4, 2]) |> run conn 867 | %RethinkDB.Record{data: false} 868 | 869 | iex> lt([1, 4, 5]) |> run conn 870 | %RethinkDB.Record{data: true} 871 | """ 872 | @spec lt([Q.reql_datum]) :: Q.t 873 | operate_on_list(:lt, 19) 874 | 875 | @doc """ 876 | Test if one value is less than or equal to the other. 877 | 878 | iex> le(1,1) |> run conn 879 | %RethinkDB.Record{data: true} 880 | 881 | iex> le(1, 2) |> run conn 882 | %RethinkDB.Record{data: true} 883 | """ 884 | @spec le(Q.reql_datum, Q.reql_datum) :: Q.t 885 | operate_on_two_args(:le, 20) 886 | @doc """ 887 | Test if all values in a list are less than or equal to the next.
Left associative. 888 | 889 | iex> le([1, 4, 2]) |> run conn 890 | %RethinkDB.Record{data: false} 891 | 892 | iex> le([1, 4, 4]) |> run conn 893 | %RethinkDB.Record{data: true} 894 | """ 895 | @spec le([Q.reql_datum]) :: Q.t 896 | operate_on_list(:le, 20) 897 | 898 | @doc """ 899 | Test if one value is greater than the other. 900 | 901 | iex> gt(1,2) |> run conn 902 | %RethinkDB.Record{data: false} 903 | 904 | iex> gt(2,1) |> run conn 905 | %RethinkDB.Record{data: true} 906 | """ 907 | @spec gt(Q.reql_datum, Q.reql_datum) :: Q.t 908 | operate_on_two_args(:gt, 21) 909 | @doc """ 910 | Test if all values in a list are greater than the next. Left associative. 911 | 912 | iex> gt([1, 4, 2]) |> run conn 913 | %RethinkDB.Record{data: false} 914 | 915 | iex> gt([10, 4, 2]) |> run conn 916 | %RethinkDB.Record{data: true} 917 | """ 918 | @spec gt([Q.reql_datum]) :: Q.t 919 | operate_on_list(:gt, 21) 920 | 921 | @doc """ 922 | Test if one value is greater than or equal to the other. 923 | 924 | iex> ge(1,1) |> run conn 925 | %RethinkDB.Record{data: true} 926 | 927 | iex> ge(2, 1) |> run conn 928 | %RethinkDB.Record{data: true} 929 | """ 930 | @spec ge(Q.reql_datum, Q.reql_datum) :: Q.t 931 | operate_on_two_args(:ge, 22) 932 | @doc """ 933 | Test if all values in a list are greater than or equal to the next. Left associative. 934 | 935 | iex> ge([1, 4, 2]) |> run conn 936 | %RethinkDB.Record{data: false} 937 | 938 | iex> ge([10, 4, 4]) |> run conn 939 | %RethinkDB.Record{data: true} 940 | """ 941 | @spec ge([Q.reql_datum]) :: Q.t 942 | operate_on_list(:ge, 22) 943 | 944 | @doc """ 945 | Compute the logical inverse (not) of an expression. 946 | 947 | iex> not_r(true) |> run conn 948 | %RethinkDB.Record{data: false} 949 | 950 | """ 951 | @spec not_r(Q.reql_bool) :: Q.t 952 | operate_on_single_arg(:not_r, 23) 953 | 954 | @doc """ 955 | Generate a random float between 0 and 1. 956 | 957 | iex> random |> run conn 958 | %RethinkDB.Record{data: 0.43} 959 | 960 | """ 961 | @spec random :: Q.t 962 | operate_on_zero_args(:random, 151) 963 | @doc """ 964 | Generate a random value in the range [0,upper). If upper is an integer then the 965 | random value will be an integer. If upper is a float it will be a float. 966 | 967 | iex> random(5) |> run conn 968 | %RethinkDB.Record{data: 3} 969 | 970 | iex> random(5.0) |> run conn 971 | %RethinkDB.Record{data: 3.7} 972 | 973 | """ 974 | @spec random(Q.reql_number) :: Q.t 975 | def random(upper) when is_float(upper), do: random(upper, float: true) 976 | operate_on_single_arg(:random, 151, opts: true) 977 | 978 | @doc """ 979 | Generate a random value in the range [lower,upper). If both args are integers then the 980 | random value will be an integer. If either of them is a float it will be a float. 981 | 982 | iex> random(5, 10) |> run conn 983 | %RethinkDB.Record{data: 8} 984 | 985 | iex> random(5.0, 15.0) |> run conn 986 | %RethinkDB.Record{data: 8.34} 987 | 988 | """ 989 | @spec random(Q.reql_number, Q.reql_number) :: Q.t 990 | def random(lower, upper) when is_float(lower) or is_float(upper) do 991 | random(lower, upper, float: true) 992 | end 993 | operate_on_two_args(:random, 151, opts: true) 994 | 995 | @doc """ 996 | Rounds the given value to the nearest whole integer. 997 | 998 | For example, values of 1.0 up to but not including 1.5 will return 1.0, similar to floor; values of 1.5 up to 2.0 will return 2.0, similar to ceil.
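A minimal usage sketch, assuming a connection bound to `conn` (the records shown are illustrative):

    iex> round_r(1.4) |> run conn
    %RethinkDB.Record{data: 1.0}

    iex> round_r(1.5) |> run conn
    %RethinkDB.Record{data: 2.0}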
999 | """ 1000 | @spec round_r(Q.reql_number) :: Q.t 1001 | operate_on_single_arg(:round_r, 185) 1002 | 1003 | @doc """ 1004 | Rounds the given value up, returning the smallest integer value greater than or equal to the given value (the value’s ceiling). 1005 | """ 1006 | @spec ceil(Q.reql_number) :: Q.t 1007 | operate_on_single_arg(:ceil, 184) 1008 | 1009 | @doc """ 1010 | Rounds the given value down, returning the largest integer value less than or equal to the given value (the value’s floor). 1011 | """ 1012 | @spec floor(Q.reql_number) :: Q.t 1013 | operate_on_single_arg(:floor, 183) 1014 | 1015 | # 1016 | #Selection Queries 1017 | # 1018 | 1019 | @doc """ 1020 | Reference a database. 1021 | """ 1022 | @spec db(Q.reql_string) :: Q.t 1023 | operate_on_single_arg(:db, 14) 1024 | 1025 | @doc """ 1026 | Return all documents in a table. Other commands may be chained after table to 1027 | return a subset of documents (such as get and filter) or perform further 1028 | processing. 1029 | 1030 | There are two optional arguments. 1031 | 1032 | * useOutdated: if true, this allows potentially out-of-date data to be returned, 1033 | with potentially faster reads. It also allows you to perform reads from a 1034 | secondary replica if a primary has failed. Default false. 1035 | * identifierFormat: possible values are name and uuid, with a default of name. If 1036 | set to uuid, then system tables will refer to servers, databases and tables by 1037 | UUID rather than name. (This only has an effect when used with system tables.) 1038 | """ 1039 | @spec table(Q.reql_string, Q.reql_opts) :: Q.t 1040 | @spec table(Q.t, Q.reql_string, Q.reql_opts) :: Q.t 1041 | operate_on_single_arg(:table, 15, opts: true) 1042 | operate_on_two_args(:table, 15, opts: true) 1043 | 1044 | @doc """ 1045 | Get a document by primary key. 1046 | 1047 | If no document exists with that primary key, get will return nil. 1048 | """ 1049 | @spec get(Q.t, Q.reql_datum) :: Q.t 1050 | operate_on_two_args(:get, 16) 1051 | 1052 | @doc """ 1053 | Get all documents where the given value matches the value of the requested index. 1054 | """ 1055 | @spec get_all(Q.t, Q.reql_array) :: Q.t 1056 | operate_on_seq_and_list(:get_all, 78, opts: true) 1057 | operate_on_two_args(:get_all, 78, opts: true) 1058 | 1059 | @doc """ 1060 | Get all documents between two keys. Accepts three optional arguments: index, 1061 | left_bound, and right_bound. If index is set to the name of a secondary index, 1062 | between will return all documents where that index’s value is in the specified 1063 | range (it uses the primary key by default). left_bound or right_bound may be 1064 | set to open or closed to indicate whether or not to include that endpoint of 1065 | the range (by default, left_bound is closed and right_bound is open). 1066 | """ 1067 | @spec between(Q.reql_array, Q.t, Q.t) :: Q.t 1068 | operate_on_three_args(:between, 182, opts: true) 1069 | 1070 | @doc """ 1071 | Get all the documents for which the given predicate is true. 1072 | 1073 | filter can be called on a sequence, selection, or a field containing an array 1074 | of elements. The return type is the same as the type on which the function was 1075 | called on. 1076 | 1077 | The body of every filter is wrapped in an implicit .default(False), which means 1078 | that if a non-existence errors is thrown (when you try to access a field that 1079 | does not exist in a document), RethinkDB will just ignore the document. The 1080 | default value can be changed by passing the named argument default. 
Setting 1081 | this optional argument to r.error() will cause any non-existence errors to 1082 | return a RqlRuntimeError. 1083 | """ 1084 | @spec filter(Q.reql_array, Q.t) :: Q.t 1085 | operate_on_two_args(:filter, 39, opts: true) 1086 | 1087 | # 1088 | #String Manipulation Queries 1089 | # 1090 | 1091 | @doc """ 1092 | Checks a string for matches. 1093 | 1094 | Example: 1095 | 1096 | iex> "hello world" |> match("hello") |> run conn 1097 | iex> "hello world" |> match(~r(hello)) |> run conn 1098 | 1099 | """ 1100 | @spec match( (Q.reql_string), (Regex.t|Q.reql_string) ) :: Q.t 1101 | def match(string, regex = %Regex{}), do: match(string, Regex.source(regex)) 1102 | operate_on_two_args(:match, 97) 1103 | 1104 | @doc """ 1105 | Split a `string` on whitespace. 1106 | 1107 | iex> "abracadabra" |> split |> run conn 1108 | %RethinkDB.Record{data: ["abracadabra"]} 1109 | """ 1110 | @spec split(Q.reql_string) :: Q.t 1111 | operate_on_single_arg(:split, 149) 1112 | 1113 | @doc """ 1114 | Split a `string` on `separator`. 1115 | 1116 | iex> "abra-cadabra" |> split("-") |> run conn 1117 | %RethinkDB.Record{data: ["abra", "cadabra"]} 1118 | """ 1119 | @spec split(Q.reql_string, Q.reql_string) :: Q.t 1120 | operate_on_two_args(:split, 149) 1121 | 1122 | @doc """ 1123 | Split a `string` with a given `separator` into `max_result` segments. 1124 | 1125 | iex> "a-bra-ca-da-bra" |> split("-", 2) |> run conn 1126 | %RethinkDB.Record{data: ["a", "bra", "ca-da-bra"]} 1127 | 1128 | """ 1129 | @spec split(Q.reql_string, (Q.reql_string|nil), integer) :: Q.t 1130 | operate_on_three_args(:split, 149) 1131 | 1132 | @doc """ 1133 | Convert a string to all upper case. 1134 | 1135 | iex> "hi" |> upcase |> run conn 1136 | %RethinkDB.Record{data: "HI"} 1137 | 1138 | """ 1139 | @spec upcase(Q.reql_string) :: Q.t 1140 | operate_on_single_arg(:upcase, 141) 1141 | 1142 | @doc """ 1143 | Convert a string to all down case. 1144 | 1145 | iex> "Hi" |> downcase |> run conn 1146 | %RethinkDB.Record{data: "hi"} 1147 | 1148 | """ 1149 | @spec downcase(Q.reql_string) :: Q.t 1150 | operate_on_single_arg(:downcase, 142) 1151 | 1152 | # 1153 | #Table Functions 1154 | # 1155 | 1156 | @doc """ 1157 | Create a table. A RethinkDB table is a collection of JSON documents. 1158 | 1159 | If successful, the command returns an object with two fields: 1160 | 1161 | * tables_created: always 1. 1162 | * config_changes: a list containing one two-field object, old_val and new_val: 1163 | * old_val: always nil. 1164 | * new_val: the table’s new config value. 1165 | 1166 | If a table with the same name already exists, the command throws 1167 | RqlRuntimeError. 1168 | 1169 | Note: Only alphanumeric characters and underscores are valid for the table name. 1170 | 1171 | When creating a table you can specify the following options: 1172 | 1173 | * primary_key: the name of the primary key. The default primary key is id. 1174 | * durability: if set to soft, writes will be acknowledged by the server 1175 | immediately and flushed to disk in the background. The default is hard: 1176 | acknowledgment of writes happens after data has been written to disk. 1177 | * shards: the number of shards, an integer from 1-32. Defaults to 1. 1178 | * replicas: either an integer or a mapping object. Defaults to 1. 1179 | If replicas is an integer, it specifies the number of replicas per shard. 1180 | Specifying more replicas than there are servers will return an error. 
1181 | If replicas is an object, it specifies key-value pairs of server tags and the 1182 | number of replicas to assign to those servers: {:tag1 => 2, :tag2 => 4, :tag3 1183 | => 2, ...}. 1184 | * primary_replica_tag: the primary server specified by its server tag. Required 1185 | if replicas is an object; the tag must be in the object. This must not be 1186 | specified if replicas is an integer. 1187 | The data type of a primary key is usually a string (like a UUID) or a number, 1188 | but it can also be a time, binary object, boolean or an array. It cannot be an 1189 | object. 1190 | """ 1191 | @spec table_create(Q.t, Q.reql_string, Q.reql_opts) :: Q.t 1192 | operate_on_single_arg(:table_create, 60, opts: true) 1193 | operate_on_two_args(:table_create, 60, opts: true) 1194 | 1195 | @doc """ 1196 | Drop a table. The table and all its data will be deleted. 1197 | 1198 | If successful, the command returns an object with two fields: 1199 | 1200 | * tables_dropped: always 1. 1201 | * config_changes: a list containing one two-field object, old_val and new_val: 1202 | * old_val: the dropped table’s config value. 1203 | * new_val: always nil. 1204 | 1205 | If the given table does not exist in the database, the command throws RqlRuntimeError. 1206 | """ 1207 | @spec table_drop(Q.t, Q.reql_string) :: Q.t 1208 | operate_on_single_arg(:table_drop, 61) 1209 | operate_on_two_args(:table_drop, 61) 1210 | 1211 | @doc """ 1212 | List all table names in a database. The result is a list of strings. 1213 | """ 1214 | @spec table_list(Q.t) :: Q.t 1215 | operate_on_zero_args(:table_list, 62) 1216 | operate_on_single_arg(:table_list, 62) 1217 | 1218 | @doc """ 1219 | Create a new secondary index on a table. Secondary indexes improve the speed of 1220 | many read queries at the slight cost of increased storage space and decreased 1221 | write performance. For more information about secondary indexes, read the 1222 | article “Using secondary indexes in RethinkDB.” 1223 | 1224 | RethinkDB supports different types of secondary indexes: 1225 | 1226 | * Simple indexes based on the value of a single field. 1227 | * Compound indexes based on multiple fields. 1228 | * Multi indexes based on arrays of values. 1229 | * Geospatial indexes based on indexes of geometry objects, created when the geo 1230 | optional argument is true. 1231 | * Indexes based on arbitrary expressions. 1232 | 1233 | The index_function can be an anonymous function or a binary representation 1234 | obtained from the function field of index_status. 1235 | 1236 | If successful, create_index will return an object of the form {:created => 1}. 1237 | If an index by that name already exists on the table, a RqlRuntimeError will be 1238 | thrown. 1239 | """ 1240 | @spec index_create(Q.t, Q.reql_string, Q.reql_func1, Q.reql_opts) :: Q.t 1241 | operate_on_two_args(:index_create, 75, opts: true) 1242 | operate_on_three_args(:index_create, 75, opts: true) 1243 | 1244 | @doc """ 1245 | Delete a previously created secondary index of this table. 1246 | """ 1247 | @spec index_drop(Q.t, Q.reql_string) :: Q.t 1248 | operate_on_two_args(:index_drop, 76) 1249 | 1250 | @doc """ 1251 | List all the secondary indexes of this table. 1252 | """ 1253 | @spec index_list(Q.t) :: Q.t 1254 | operate_on_single_arg(:index_list, 77) 1255 | 1256 | @doc """ 1257 | Rename an existing secondary index on a table. If the optional argument 1258 | overwrite is specified as true, a previously existing index with the new name 1259 | will be deleted and the index will be renamed. 
If overwrite is false (the 1260 | default) an error will be raised if the new index name already exists. 1261 | 1262 | The return value on success will be an object of the format {:renamed => 1}, or 1263 | {:renamed => 0} if the old and new names are the same. 1264 | 1265 | An error will be raised if the old index name does not exist, if the new index 1266 | name is already in use and overwrite is false, or if either the old or new 1267 | index name are the same as the primary key field name. 1268 | """ 1269 | @spec index_rename(Q.t, Q.reql_string, Q.reql_string, Q.reql_opts) :: Q.t 1270 | operate_on_three_args(:index_rename, 156, opts: true) 1271 | 1272 | @doc """ 1273 | Get the status of the specified indexes on this table, or the status of all 1274 | indexes on this table if no indexes are specified. 1275 | """ 1276 | @spec index_status(Q.t, Q.reql_string|Q.reql_array) :: Q.t 1277 | operate_on_single_arg(:index_status, 139) 1278 | operate_on_seq_and_list(:index_status, 139) 1279 | operate_on_two_args(:index_status, 139) 1280 | 1281 | @doc """ 1282 | Wait for the specified indexes on this table to be ready, or for all indexes on 1283 | this table to be ready if no indexes are specified. 1284 | """ 1285 | @spec index_wait(Q.t, Q.reql_string|Q.reql_array) :: Q.t 1286 | operate_on_single_arg(:index_wait, 140) 1287 | operate_on_seq_and_list(:index_wait, 140) 1288 | operate_on_two_args(:index_wait, 140) 1289 | 1290 | # 1291 | #Writing Data Queries 1292 | # 1293 | 1294 | @doc """ 1295 | Insert documents into a table. Accepts a single document or an array of 1296 | documents. 1297 | 1298 | The optional arguments are: 1299 | 1300 | * durability: possible values are hard and soft. This option will override the 1301 | table or query’s durability setting (set in run). In soft durability mode 1302 | Rethink_dB will acknowledge the write immediately after receiving and caching 1303 | it, but before the write has been committed to disk. 1304 | * return_changes: if set to True, return a changes array consisting of 1305 | old_val/new_val objects describing the changes made. 1306 | * conflict: Determine handling of inserting documents with the same primary key 1307 | as existing entries. Possible values are "error", "replace" or "update". 1308 | * "error": Do not insert the new document and record the conflict as an error. 1309 | This is the default. 1310 | * "replace": Replace the old document in its entirety with the new one. 1311 | * "update": Update fields of the old document with fields from the new one. 1312 | * `lambda(id, old_doc, new_doc) :: resolved_doc`: a function that receives the 1313 | id, old and new documents as arguments and returns a document which will be 1314 | inserted in place of the conflicted one. 1315 | Insert returns an object that contains the following attributes: 1316 | 1317 | * inserted: the number of documents successfully inserted. 1318 | * replaced: the number of documents updated when conflict is set to "replace" or 1319 | "update". 1320 | * unchanged: the number of documents whose fields are identical to existing 1321 | documents with the same primary key when conflict is set to "replace" or 1322 | "update". 1323 | * errors: the number of errors encountered while performing the insert. 1324 | * first_error: If errors were encountered, contains the text of the first error. 1325 | * deleted and skipped: 0 for an insert operation. 
1326 | * generated_keys: a list of generated primary keys for inserted documents whose 1327 | primary keys were not specified (capped to 100,000). 1328 | * warnings: if the field generated_keys is truncated, you will get the warning 1329 | “Too many generated keys (), array truncated to 100000.”. 1330 | * changes: if return_changes is set to True, this will be an array of objects, 1331 | one for each objected affected by the insert operation. Each object will have 1332 | * two keys: {"new_val": , "old_val": None}. 1333 | """ 1334 | @spec insert(Q.t, Q.reql_obj | Q.reql_array, Keyword.t) :: Q.t 1335 | operate_on_two_args(:insert, 56, opts: true) 1336 | 1337 | @doc """ 1338 | Update JSON documents in a table. Accepts a JSON document, a ReQL expression, 1339 | or a combination of the two. 1340 | 1341 | The optional arguments are: 1342 | 1343 | * durability: possible values are hard and soft. This option will override the 1344 | table or query’s durability setting (set in run). In soft durability mode 1345 | RethinkDB will acknowledge the write immediately after receiving it, but before 1346 | the write has been committed to disk. 1347 | * return_changes: if set to True, return a changes array consisting of 1348 | old_val/new_val objects describing the changes made. 1349 | * non_atomic: if set to True, executes the update and distributes the result to 1350 | replicas in a non-atomic fashion. This flag is required to perform 1351 | non-deterministic updates, such as those that require reading data from another 1352 | table. 1353 | 1354 | Update returns an object that contains the following attributes: 1355 | 1356 | * replaced: the number of documents that were updated. 1357 | * unchanged: the number of documents that would have been modified except the new 1358 | value was the same as the old value. 1359 | * skipped: the number of documents that were skipped because the document didn’t 1360 | exist. 1361 | * errors: the number of errors encountered while performing the update. 1362 | * first_error: If errors were encountered, contains the text of the first error. 1363 | * deleted and inserted: 0 for an update operation. 1364 | * changes: if return_changes is set to True, this will be an array of objects, 1365 | one for each objected affected by the update operation. Each object will have 1366 | * two keys: {"new_val": , "old_val": }. 1367 | """ 1368 | @spec update(Q.t, Q.reql_obj, Keyword.t) :: Q.t 1369 | operate_on_two_args(:update, 53, opts: true) 1370 | 1371 | @doc """ 1372 | Replace documents in a table. Accepts a JSON document or a ReQL expression, and 1373 | replaces the original document with the new one. The new document must have the 1374 | same primary key as the original document. 1375 | 1376 | The optional arguments are: 1377 | 1378 | * durability: possible values are hard and soft. This option will override the 1379 | table or query’s durability setting (set in run). 1380 | In soft durability mode RethinkDB will acknowledge the write immediately after 1381 | receiving it, but before the write has been committed to disk. 1382 | * return_changes: if set to True, return a changes array consisting of 1383 | old_val/new_val objects describing the changes made. 1384 | * non_atomic: if set to True, executes the replacement and distributes the result 1385 | to replicas in a non-atomic fashion. This flag is required to perform 1386 | non-deterministic updates, such as those that require reading data from another 1387 | table. 
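For illustration, a minimal sketch assuming a connection bound to `conn` and a hypothetical "people" table whose primary key is `id`:

    iex> table("people") |> get(1) |> replace(%{id: 1, name: "Bob"}) |> run conn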
1388 | 1389 | Replace returns an object that contains the following attributes: 1390 | 1391 | * replaced: the number of documents that were replaced 1392 | * unchanged: the number of documents that would have been modified, except that 1393 | the new value was the same as the old value 1394 | * inserted: the number of new documents added. You can have new documents 1395 | inserted if you do a point-replace on a key that isn’t in the table or you do a 1396 | replace on a selection and one of the documents you are replacing has been 1397 | deleted 1398 | * deleted: the number of deleted documents when doing a replace with None 1399 | * errors: the number of errors encountered while performing the replace. 1400 | * first_error: If errors were encountered, contains the text of the first error. 1401 | * skipped: 0 for a replace operation 1402 | * changes: if return_changes is set to True, this will be an array of objects, 1403 | one for each objected affected by the replace operation. Each object will have 1404 | * two keys: {"new_val": , "old_val": }. 1405 | """ 1406 | @spec replace(Q.t, Q.reql_obj, Keyword.t) :: Q.t 1407 | operate_on_two_args(:replace, 55, opts: true) 1408 | 1409 | @doc """ 1410 | Delete one or more documents from a table. 1411 | 1412 | The optional arguments are: 1413 | 1414 | * durability: possible values are hard and soft. This option will override the 1415 | table or query’s durability setting (set in run). 1416 | In soft durability mode RethinkDB will acknowledge the write immediately after 1417 | receiving it, but before the write has been committed to disk. 1418 | * return_changes: if set to True, return a changes array consisting of 1419 | old_val/new_val objects describing the changes made. 1420 | 1421 | Delete returns an object that contains the following attributes: 1422 | 1423 | * deleted: the number of documents that were deleted. 1424 | * skipped: the number of documents that were skipped. 1425 | For example, if you attempt to delete a batch of documents, and another 1426 | concurrent query deletes some of those documents first, they will be counted as 1427 | skipped. 1428 | * errors: the number of errors encountered while performing the delete. 1429 | * first_error: If errors were encountered, contains the text of the first error. 1430 | inserted, replaced, and unchanged: all 0 for a delete operation. 1431 | * changes: if return_changes is set to True, this will be an array of objects, 1432 | one for each objected affected by the delete operation. Each object will have 1433 | * two keys: {"new_val": None, "old_val": }. 1434 | """ 1435 | @spec delete(Q.t) :: Q.t 1436 | operate_on_single_arg(:delete, 54, opts: true) 1437 | 1438 | @doc """ 1439 | sync ensures that writes on a given table are written to permanent storage. 1440 | Queries that specify soft durability (durability='soft') do not give such 1441 | guarantees, so sync can be used to ensure the state of these queries. A call to 1442 | sync does not return until all previous writes to the table are persisted. 1443 | 1444 | If successful, the operation returns an object: {"synced": 1}. 1445 | 1446 | """ 1447 | @spec sync(Q.t) :: Q.t 1448 | operate_on_single_arg(:sync, 138) 1449 | 1450 | # 1451 | #Date and Time Queries 1452 | # 1453 | 1454 | @doc """ 1455 | Return a time object representing the current time in UTC. The command now() is 1456 | computed once when the server receives the query, so multiple instances of 1457 | r.now() will always return the same time inside a query. 
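A minimal usage sketch, assuming a connection bound to `conn`; the data field of the returned record holds the server's current time:

    iex> now() |> run conn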
1458 | """ 1459 | @spec now() :: Q.t 1460 | operate_on_zero_args(:now, 103) 1461 | 1462 | @doc """ 1463 | Create a time object for a specific time. 1464 | 1465 | A few restrictions exist on the arguments: 1466 | 1467 | * year is an integer between 1400 and 9,999. 1468 | * month is an integer between 1 and 12. 1469 | * day is an integer between 1 and 31. 1470 | * hour is an integer. 1471 | * minutes is an integer. 1472 | * seconds is a double. Its value will be rounded to three decimal places 1473 | (millisecond-precision). 1474 | * timezone can be 'Z' (for UTC) or a string with the format ±[hh]:[mm]. 1475 | """ 1476 | @spec time(reql_number, reql_number, reql_number, reql_string) :: Q.t 1477 | def time(year, month, day, timezone), do: %Q{query: [136, [year, month, day, timezone]]} 1478 | @spec time(reql_number, reql_number, reql_number, reql_number, reql_number, reql_number, reql_string) :: Q.t 1479 | def time(year, month, day, hour, minute, second, timezone) do 1480 | %Q{query: [136, [year, month, day, hour, minute, second, timezone]]} 1481 | end 1482 | 1483 | @doc """ 1484 | Create a time object based on seconds since epoch. The first argument is a 1485 | double and will be rounded to three decimal places (millisecond-precision). 1486 | """ 1487 | @spec epoch_time(reql_number) :: Q.t 1488 | operate_on_single_arg(:epoch_time, 101) 1489 | 1490 | @doc """ 1491 | Create a time object based on an ISO 8601 date-time string (e.g. 1492 | ‘2013-01-01T01:01:01+00:00’). We support all valid ISO 8601 formats except for 1493 | week dates. If you pass an ISO 8601 date-time without a time zone, you must 1494 | specify the time zone with the default_timezone argument. 1495 | """ 1496 | @spec iso8601(reql_string) :: Q.t 1497 | operate_on_single_arg(:iso8601, 99, opts: true) 1498 | 1499 | @doc """ 1500 | Return a new time object with a different timezone. While the time stays the 1501 | same, the results returned by methods such as hours() will change since they 1502 | take the timezone into account. The timezone argument has to be of the ISO 8601 1503 | format. 1504 | """ 1505 | @spec in_timezone(Q.reql_time, Q.reql_string) :: Q.t 1506 | operate_on_two_args(:in_timezone, 104) 1507 | 1508 | @doc """ 1509 | Return the timezone of the time object. 1510 | """ 1511 | @spec timezone(Q.reql_time) :: Q.t 1512 | operate_on_single_arg(:timezone, 127) 1513 | 1514 | @doc """ 1515 | Return if a time is between two other times (by default, inclusive for the 1516 | start, exclusive for the end). 1517 | """ 1518 | @spec during(Q.reql_time, Q.reql_time, Q.reql_time) :: Q.t 1519 | operate_on_three_args(:during, 105, opts: true) 1520 | 1521 | @doc """ 1522 | Return a new time object only based on the day, month and year (ie. the same 1523 | day at 00:00). 1524 | """ 1525 | @spec date(Q.reql_time) :: Q.t 1526 | operate_on_single_arg(:date, 106) 1527 | 1528 | @doc """ 1529 | Return the number of seconds elapsed since the beginning of the day stored in 1530 | the time object. 1531 | """ 1532 | @spec time_of_day(Q.reql_time) :: Q.t 1533 | operate_on_single_arg(:time_of_day, 126) 1534 | 1535 | @doc """ 1536 | Return the year of a time object. 1537 | """ 1538 | @spec year(Q.reql_time) :: Q.t 1539 | operate_on_single_arg(:year, 128) 1540 | 1541 | @doc """ 1542 | Return the month of a time object as a number between 1 and 12. 1543 | """ 1544 | @spec month(Q.reql_time) :: Q.t 1545 | operate_on_single_arg(:month, 129) 1546 | 1547 | @doc """ 1548 | Return the day of a time object as a number between 1 and 31. 
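For illustration, a minimal sketch assuming a connection bound to `conn` (the epoch, 1970-01-01T00:00:00Z, falls on day 1 of its month):

    iex> epoch_time(0) |> day |> run conn
    %RethinkDB.Record{data: 1}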
1549 | """ 1550 | @spec day(Q.reql_time) :: Q.t 1551 | operate_on_single_arg(:day, 130) 1552 | 1553 | @doc """ 1554 | Return the day of week of a time object as a number between 1 and 7 (following 1555 | ISO 8601 standard). 1556 | """ 1557 | @spec day_of_week(Q.reql_time) :: Q.t 1558 | operate_on_single_arg(:day_of_week, 131) 1559 | 1560 | @doc """ 1561 | Return the day of the year of a time object as a number between 1 and 366 1562 | (following ISO 8601 standard). 1563 | """ 1564 | @spec day_of_year(Q.reql_time) :: Q.t 1565 | operate_on_single_arg(:day_of_year, 132) 1566 | 1567 | @doc """ 1568 | Return the hour in a time object as a number between 0 and 23. 1569 | """ 1570 | @spec hours(Q.reql_time) :: Q.t 1571 | operate_on_single_arg(:hours, 133) 1572 | 1573 | @doc """ 1574 | Return the minute in a time object as a number between 0 and 59. 1575 | """ 1576 | @spec minutes(Q.reql_time) :: Q.t 1577 | operate_on_single_arg(:minutes, 134) 1578 | 1579 | @doc """ 1580 | Return the seconds in a time object as a number between 0 and 59.999 (double precision). 1581 | """ 1582 | @spec seconds(Q.reql_time) :: Q.t 1583 | operate_on_single_arg(:seconds, 135) 1584 | 1585 | @doc """ 1586 | Convert a time object to a string in ISO 8601 format. 1587 | """ 1588 | @spec to_iso8601(Q.reql_time) :: Q.t 1589 | operate_on_single_arg(:to_iso8601, 100) 1590 | 1591 | @doc """ 1592 | Convert a time object to its epoch time. 1593 | """ 1594 | @spec to_epoch_time(Q.reql_time) :: Q.t 1595 | operate_on_single_arg(:to_epoch_time, 102) 1596 | 1597 | # 1598 | #Transformations Queries 1599 | # 1600 | 1601 | @doc """ 1602 | Transform each element of one or more sequences by applying a mapping function 1603 | to them. If map is run with two or more sequences, it will iterate for as many 1604 | items as there are in the shortest sequence. 1605 | 1606 | Note that map can only be applied to sequences, not single values. If you wish 1607 | to apply a function to a single value/selection (including an array), use the 1608 | do command. 1609 | """ 1610 | @spec map(Q.reql_array, Q.reql_func1) :: Q.t 1611 | operate_on_two_args(:map, 38) 1612 | 1613 | @doc """ 1614 | Plucks one or more attributes from a sequence of objects, filtering out any 1615 | objects in the sequence that do not have the specified fields. Functionally, 1616 | this is identical to has_fields followed by pluck on a sequence. 1617 | """ 1618 | @spec with_fields(Q.reql_array, Q.reql_array) :: Q.t 1619 | operate_on_seq_and_list(:with_fields, 96) 1620 | 1621 | @doc """ 1622 | Concatenate one or more elements into a single sequence using a mapping function. 1623 | """ 1624 | @spec flat_map(Q.reql_array, Q.reql_func1) :: Q.t 1625 | operate_on_two_args(:flat_map, 40) 1626 | operate_on_two_args(:concat_map, 40) 1627 | 1628 | @doc """ 1629 | Sort the sequence by document values of the given key(s). To specify the 1630 | ordering, wrap the attribute with either r.asc or r.desc (defaults to 1631 | ascending). 1632 | 1633 | Sorting without an index requires the server to hold the sequence in memory, 1634 | and is limited to 100,000 documents (or the setting of the array_limit option 1635 | for run). Sorting with an index can be done on arbitrarily large tables, or 1636 | after a between command using the same index. 1637 | """ 1638 | @spec order_by(Q.reql_array, Q.reql_datum) :: Q.t 1639 | # XXX this is clunky, revisit this sometime 1640 | operate_on_optional_second_arg(:order_by, 41) 1641 | 1642 | @doc """ 1643 | Skip a number of elements from the head of the sequence. 
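A minimal usage sketch, assuming a connection bound to `conn` (the record shown is illustrative):

    iex> [1, 2, 3, 4, 5] |> skip(2) |> run conn
    %RethinkDB.Record{data: [3, 4, 5]}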
1644 | """ 1645 | @spec skip(Q.reql_array, Q.reql_number) :: Q.t 1646 | operate_on_two_args(:skip, 70) 1647 | 1648 | @doc """ 1649 | End the sequence after the given number of elements. 1650 | """ 1651 | @spec limit(Q.reql_array, Q.reql_number) :: Q.t 1652 | operate_on_two_args(:limit, 71) 1653 | 1654 | @doc """ 1655 | Return the elements of a sequence within the specified range. 1656 | """ 1657 | @spec slice(Q.reql_array, Q.reql_number, Q.reql_number) :: Q.t 1658 | operate_on_three_args(:slice, 30, opts: true) 1659 | 1660 | @doc """ 1661 | Get the nth element of a sequence, counting from zero. If the argument is 1662 | negative, count from the last element. 1663 | """ 1664 | @spec nth(Q.reql_array, Q.reql_number) :: Q.t 1665 | operate_on_two_args(:nth, 45) 1666 | 1667 | @doc """ 1668 | Get the indexes of an element in a sequence. If the argument is a predicate, 1669 | get the indexes of all elements matching it. 1670 | """ 1671 | @spec offsets_of(Q.reql_array, Q.reql_datum) :: Q.t 1672 | operate_on_two_args(:offsets_of, 87) 1673 | 1674 | @doc """ 1675 | Test if a sequence is empty. 1676 | """ 1677 | @spec is_empty(Q.reql_array) :: Q.t 1678 | operate_on_single_arg(:is_empty, 86) 1679 | 1680 | @doc """ 1681 | Concatenate two or more sequences. 1682 | """ 1683 | @spec union(Q.reql_array, Q.reql_array) :: Q.t 1684 | operate_on_two_args(:union, 44) 1685 | 1686 | @doc """ 1687 | Select a given number of elements from a sequence with uniform random 1688 | distribution. Selection is done without replacement. 1689 | 1690 | If the sequence has less than the requested number of elements (i.e., calling 1691 | sample(10) on a sequence with only five elements), sample will return the 1692 | entire sequence in a random order. 1693 | """ 1694 | @spec sample(Q.reql_array, Q.reql_number) :: Q.t 1695 | operate_on_two_args(:sample, 81) 1696 | 1697 | # 1698 | #Document Manipulation Queries 1699 | # 1700 | 1701 | @doc """ 1702 | Plucks out one or more attributes from either an object or a sequence of 1703 | objects (projection). 1704 | """ 1705 | @spec pluck(Q.reql_array, Q.reql_array|Q.reql_string) :: Q.t 1706 | operate_on_two_args(:pluck, 33) 1707 | 1708 | @doc """ 1709 | The opposite of pluck; takes an object or a sequence of objects, and returns 1710 | them with the specified paths removed. 1711 | """ 1712 | @spec without(Q.reql_array, Q.reql_array|Q.reql_string) :: Q.t 1713 | operate_on_two_args(:without, 34) 1714 | 1715 | @doc """ 1716 | Merge two or more objects together to construct a new object with properties 1717 | from all. When there is a conflict between field names, preference is given to 1718 | fields in the rightmost object in the argument list. 1719 | """ 1720 | @spec merge(Q.reql_array, Q.reql_object|Q.reql_func1) :: Q.t 1721 | operate_on_two_args(:merge, 35) 1722 | operate_on_list(:merge, 35) 1723 | operate_on_single_arg(:merge, 35) 1724 | 1725 | @doc """ 1726 | Append a value to an array. 1727 | """ 1728 | @spec append(Q.reql_array, Q.reql_datum) :: Q.t 1729 | operate_on_two_args(:append, 29) 1730 | 1731 | @doc """ 1732 | Prepend a value to an array. 1733 | """ 1734 | @spec prepend(Q.reql_array, Q.reql_datum) :: Q.t 1735 | operate_on_two_args(:prepend, 80) 1736 | 1737 | @doc """ 1738 | Remove the elements of one array from another array. 1739 | """ 1740 | @spec difference(Q.reql_array, Q.reql_array) :: Q.t 1741 | operate_on_two_args(:difference, 95) 1742 | 1743 | @doc """ 1744 | Add a value to an array and return it as a set (an array with distinct values). 
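For illustration, a minimal sketch assuming a connection bound to `conn` (the records shown are illustrative):

    iex> [1, 2, 3] |> set_insert(3) |> run conn
    %RethinkDB.Record{data: [1, 2, 3]}

    iex> [1, 2, 3] |> set_insert(4) |> run conn
    %RethinkDB.Record{data: [1, 2, 3, 4]}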
1745 | """ 1746 | @spec set_insert(Q.reql_array, Q.reql_datum) :: Q.t 1747 | operate_on_two_args(:set_insert, 88) 1748 | 1749 | @doc """ 1750 | Intersect two arrays returning values that occur in both of them as a set (an 1751 | array with distinct values). 1752 | """ 1753 | @spec set_intersection(Q.reql_array, Q.reql_datum) :: Q.t 1754 | operate_on_two_args(:set_intersection, 89) 1755 | 1756 | @doc """ 1757 | Add a several values to an array and return it as a set (an array with distinct 1758 | values). 1759 | """ 1760 | @spec set_union(Q.reql_array, Q.reql_datum) :: Q.t 1761 | operate_on_two_args(:set_union, 90) 1762 | 1763 | @doc """ 1764 | Remove the elements of one array from another and return them as a set (an 1765 | array with distinct values). 1766 | """ 1767 | @spec set_difference(Q.reql_array, Q.reql_datum) :: Q.t 1768 | operate_on_two_args(:set_difference, 91) 1769 | 1770 | @doc """ 1771 | Get a single field from an object. If called on a sequence, gets that field 1772 | from every object in the sequence, skipping objects that lack it. 1773 | """ 1774 | @spec get_field(Q.reql_obj|Q.reql_array, Q.reql_string) :: Q.t 1775 | operate_on_two_args(:get_field, 31) 1776 | 1777 | @doc """ 1778 | Test if an object has one or more fields. An object has a field if it has 1779 | that key and the key has a non-null value. For instance, the object {'a': 1780 | 1,'b': 2,'c': null} has the fields a and b. 1781 | """ 1782 | @spec has_fields(Q.reql_array, Q.reql_array|Q.reql_string) :: Q.t 1783 | operate_on_two_args(:has_fields, 32) 1784 | 1785 | @doc """ 1786 | Insert a value in to an array at a given index. Returns the modified array. 1787 | """ 1788 | @spec insert_at(Q.reql_array, Q.reql_number, Q.reql_datum) :: Q.t 1789 | operate_on_three_args(:insert_at, 82) 1790 | 1791 | @doc """ 1792 | Insert several values in to an array at a given index. Returns the modified array. 1793 | """ 1794 | @spec splice_at(Q.reql_array, Q.reql_number, Q.reql_datum) :: Q.t 1795 | operate_on_three_args(:splice_at, 85) 1796 | 1797 | @doc """ 1798 | Remove one or more elements from an array at a given index. Returns the modified array. 1799 | """ 1800 | @spec delete_at(Q.reql_array, Q.reql_number, Q.reql_number) :: Q.t 1801 | operate_on_two_args(:delete_at, 83) 1802 | operate_on_three_args(:delete_at, 83) 1803 | 1804 | @doc """ 1805 | Change a value in an array at a given index. Returns the modified array. 1806 | """ 1807 | @spec change_at(Q.reql_array, Q.reql_number, Q.reql_datum) :: Q.t 1808 | operate_on_three_args(:change_at, 84) 1809 | 1810 | @doc """ 1811 | Return an array containing all of the object’s keys. 1812 | """ 1813 | @spec keys(Q.reql_obj) :: Q.t 1814 | operate_on_single_arg(:keys, 94) 1815 | 1816 | @doc """ 1817 | Return an array containing all of the object’s values. 1818 | """ 1819 | @spec values(Q.reql_obj) :: Q.t 1820 | operate_on_single_arg(:values, 186) 1821 | 1822 | @doc """ 1823 | Replace an object in a field instead of merging it with an existing object in a 1824 | merge or update operation. 1825 | """ 1826 | @spec literal(Q.reql_object) :: Q.t 1827 | operate_on_single_arg(:literal, 137) 1828 | 1829 | @doc """ 1830 | Creates an object from a list of key-value pairs, where the keys must be 1831 | strings. r.object(A, B, C, D) is equivalent to r.expr([[A, B], [C, 1832 | D]]).coerce_to('OBJECT'). 
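A minimal usage sketch, assuming a connection bound to `conn` (the record shown is illustrative):

    iex> object(["id", 1, "name", "Bob"]) |> run conn
    %RethinkDB.Record{data: %{"id" => 1, "name" => "Bob"}}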
1833 | """ 1834 | @spec object(Q.reql_array) :: Q.t 1835 | operate_on_list(:object, 143) 1836 | 1837 | # 1838 | # Administration 1839 | # 1840 | @spec config(Q.reql_term) :: Q.t 1841 | operate_on_single_arg(:config, 174) 1842 | 1843 | @spec rebalance(Q.reql_term) :: Q.t 1844 | operate_on_single_arg(:rebalance, 179) 1845 | 1846 | @spec reconfigure(Q.reql_term, Q.reql_opts) :: Q.t 1847 | operate_on_single_arg(:reconfigure, 176, opts: true) 1848 | 1849 | @spec status(Q.reql_term) :: Q.t 1850 | operate_on_single_arg(:status, 175) 1851 | 1852 | @spec wait(Q.reql_term) :: Q.t 1853 | operate_on_single_arg(:wait, 177, opts: true) 1854 | 1855 | # 1856 | # Miscellaneous functions 1857 | # 1858 | 1859 | def make_array(array), do: %Q{query: [2, array]} 1860 | 1861 | operate_on_single_arg(:changes, 152, opts: true) 1862 | 1863 | def asc(key), do: %Q{query: [73, [key]]} 1864 | def desc(key), do: %Q{query: [74, [key]]} 1865 | 1866 | def func(f) when is_function(f) do 1867 | {_, arity} = :erlang.fun_info(f, :arity) 1868 | 1869 | args = case arity do 1870 | 0 -> [] 1871 | _ -> Enum.map(1..arity, fn _ -> make_ref end) 1872 | end 1873 | params = Enum.map(args, &var/1) 1874 | 1875 | res = case apply(f, params) do 1876 | x when is_list(x) -> make_array(x) 1877 | x -> x 1878 | end 1879 | %Q{query: [69, [[2, args], res]]} 1880 | end 1881 | 1882 | def var(val), do: %Q{query: [10, [val]]} 1883 | def bracket(obj, key), do: %Q{query: [170, [obj, key]]} 1884 | 1885 | operate_on_zero_args(:minval, 180) 1886 | operate_on_zero_args(:maxval, 181) 1887 | end 1888 | -------------------------------------------------------------------------------- /lib/rethinkdb/query/macros.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Query.Macros do 2 | alias RethinkDB.Q 3 | alias RethinkDB.Query 4 | @moduledoc false 5 | 6 | defmacro operate_on_two_args(op, opcode, options \\ []) do 7 | opt_support = Keyword.get(options, :opts, false) 8 | quote do 9 | def unquote(op)(left, right) do 10 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)]]} 11 | end 12 | if unquote(opt_support) do 13 | def unquote(op)(left, right, opts) when is_map(opts) or is_list(opts) do 14 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)], make_opts(opts)]} 15 | end 16 | end 17 | end 18 | end 19 | defmacro operate_on_three_args(op, opcode, options \\ []) do 20 | opt_support = Keyword.get(options, :opts, false) 21 | quote do 22 | def unquote(op)(arg1, arg2, arg3) do 23 | %Q{query: [unquote(opcode), [wrap(arg1), wrap(arg2), wrap(arg3)]]} 24 | end 25 | if unquote(opt_support) do 26 | def unquote(op)(arg1, arg2, arg3, opts) when is_map(opts) or is_list(opts) do 27 | %Q{query: [unquote(opcode), [wrap(arg1), wrap(arg2), wrap(arg3)], make_opts(opts)]} 28 | end 29 | end 30 | end 31 | end 32 | defmacro operate_on_list(op, opcode, options \\ []) do 33 | opt_support = Keyword.get(options, :opts, false) 34 | quote do 35 | def unquote(op)(args) when is_list(args) do 36 | %Q{query: [unquote(opcode), Enum.map(args, &wrap/1)]} 37 | end 38 | if unquote(opt_support) do 39 | def unquote(op)(args, opts) when is_list(args) and (is_map(opts) or is_list(opts)) do 40 | %Q{query: [unquote(opcode), Enum.map(args, &wrap/1), make_opts(opts)]} 41 | end 42 | end 43 | end 44 | end 45 | defmacro operate_on_seq_and_list(op, opcode, options \\ []) do 46 | opt_support = Keyword.get(options, :opts, false) 47 | quote do 48 | def unquote(op)(seq, args) when is_list(args) and args != [] do 49 | %Q{query: [unquote(opcode), [wrap(seq) | 
Enum.map(args, &wrap/1)]]} 50 | end 51 | if unquote(opt_support) do 52 | def unquote(op)(seq, args, opts) when is_list(args) and args != [] and (is_map(opts) or is_list(opts)) do 53 | %Q{query: [unquote(opcode), [wrap(seq) | Enum.map(args, &wrap/1)], make_opts(opts)]} 54 | end 55 | end 56 | end 57 | end 58 | defmacro operate_on_single_arg(op, opcode, options \\ []) do 59 | opt_support = Keyword.get(options, :opts, false) 60 | quote do 61 | def unquote(op)(arg) do 62 | %Q{query: [unquote(opcode), [wrap(arg)]]} 63 | end 64 | if unquote(opt_support) do 65 | def unquote(op)(arg, opts) when is_map(opts) or is_list(opts) do 66 | %Q{query: [unquote(opcode), [wrap(arg)], make_opts(opts)]} 67 | end 68 | end 69 | end 70 | end 71 | defmacro operate_on_optional_second_arg(op, opcode) do 72 | quote do 73 | def unquote(op)(arg) do 74 | %Q{query: [unquote(opcode), [wrap(arg)]]} 75 | end 76 | def unquote(op)(left, right = %Q{}) do 77 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)]]} 78 | end 79 | def unquote(op)(arg, opts) when is_map(opts) do 80 | %Q{query: [unquote(opcode), [wrap(arg)], opts]} 81 | end 82 | def unquote(op)(left, right, opts) when is_map(opts) do 83 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)], opts]} 84 | end 85 | def unquote(op)(left, right) do 86 | %Q{query: [unquote(opcode), [wrap(left), wrap(right)]]} 87 | end 88 | end 89 | end 90 | 91 | defmacro operate_on_zero_args(op, opcode, options \\ []) do 92 | opt_support = Keyword.get(options, :opts, false) 93 | quote do 94 | def unquote(op)(), do: %Q{query: [unquote(opcode)]} 95 | if unquote(opt_support) do 96 | def unquote(op)(opts) when is_map(opts) or is_list(opts) do 97 | %Q{query: [unquote(opcode), make_opts(opts)]} 98 | end 99 | end 100 | end 101 | end 102 | 103 | def wrap(list) when is_list(list), do: Query.make_array(Enum.map(list, &wrap/1)) 104 | def wrap(q = %Q{}), do: q 105 | def wrap(t = %RethinkDB.Pseudotypes.Time{}) do 106 | m = Map.from_struct(t) |> Map.put_new("$reql_type$", "TIME") 107 | wrap(m) 108 | end 109 | def wrap(t = %DateTime{utc_offset: utc_offset, std_offset: std_offset}) do 110 | offset = utc_offset + std_offset 111 | offset_negative = offset < 0 112 | offset_hour = div(abs(offset), 3600) 113 | offset_minute = rem(abs(offset), 3600) 114 | time_zone = 115 | if offset_negative do "-" else "+" end <> 116 | String.pad_leading(Integer.to_string(offset_hour), 2, "0") <> 117 | ":" <> 118 | String.pad_leading(Integer.to_string(offset_minute), 2, "0") 119 | wrap(%{ 120 | "$reql_type$" => "TIME", 121 | "epoch_time" => DateTime.to_unix(t, :milliseconds) / 1000, 122 | "timezone" => time_zone 123 | }) 124 | end 125 | def wrap(map) when is_map(map) do 126 | Enum.map(map, fn {k,v} -> 127 | {k, wrap(v)} 128 | end) |> Enum.into(%{}) 129 | end 130 | def wrap(f) when is_function(f), do: Query.func(f) 131 | def wrap(t) when is_tuple(t), do: wrap(Tuple.to_list(t)) 132 | def wrap(data), do: data 133 | 134 | def make_opts(opts) when is_map(opts), do: wrap(opts) 135 | def make_opts(opts) when is_list(opts), do: Enum.into(opts, %{}) 136 | end 137 | -------------------------------------------------------------------------------- /lib/rethinkdb/query/term_info.json: -------------------------------------------------------------------------------- 1 | 
{"has_fields":32,"db_list":59,"funcall":64,"random":151,"map":38,"to_geojson":158,"index_create":75,"ungroup":150,"get_intersecting":166,"get_all":78,"saturday":112,"error":12,"or":66,"reconfigure":176,"gt":21,"day":130,"september":122,"desc":74,"min":147,"nth":45,"splice_at":85,"make_array":2,"info":79,"mod":28,"set_insert":88,"thursday":110,"zip":72,"div":27,"db_drop":58,"skip":70,"insert":56,"may":118,"wait":177,"object":143,"range":173,"http":153,"july":120,"difference":95,"february":115,"outer_join":49,"without":34,"sub":25,"table_drop":61,"geojson":157,"fold":187,"minutes":134,"includes":164,"fill":167,"tuesday":108,"default":92,"date":106,"fn":69,"during":105,"august":121,"index_rename":156,"changes":152,"ceil":184,"binary":155,"limit":71,"offsets_of":87,"reduce":37,"time":136,"var":10,"get_nearest":168,"upcase":141,"type_of":52,"hours":133,"and":67,"polygon":161,"not":23,"asc":73,"match":97,"json":98,"distance":162,"inner_join":48,"filter":39,"minval":180,"set_difference":91,"slice":30,"status":175,"june":119,"round":185,"intersects":163,"sync":138,"monday":107,"make_obj":3,"prepend":80,"pluck":33,"table":15,"between":182,"merge":35,"order_by":41,"is_empty":86,"eq_join":50,"max":148,"day_of_year":132,"floor":183,"avg":146,"march":116,"delete_at":83,"rebalance":179,"seconds":135,"coerce_to":51,"delete":54,"update":53,"index_wait":140,"literal":137,"branch":65,"javascript":11,"sample":81,"epoch_time":101,"january":114,"sunday":113,"get":16,"le":20,"december":125,"db":14,"get_field":31,"april":117,"contains":93,"lt":19,"between_deprecated":36,"datum":1,"index_status":139,"set_intersection":89,"change_at":84,"mul":26,"replace":55,"october":123,"grant":188,"add":24,"table_create":60,"to_iso8601":100,"in_timezone":104,"day_of_week":131,"timezone":127,"maxval":181,"uuid":169,"concat_map":40,"split":149,"eq":17,"year":128,"downcase":142,"friday":111,"db_create":57,"count":43,"union":44,"bracket":170,"to_json_string":172,"insert_at":82,"month":129,"set_union":90,"ge":22,"point":159,"sum":145,"wednesday":109,"line":160,"now":103,"group":144,"november":124,"to_epoch_time":102,"append":29,"table_list":62,"time_of_day":126,"distinct":42,"index_drop":76,"index_list":77,"implicit_var":13,"config":174,"args":154,"values":186,"ne":18,"keys":94,"for_each":68,"circle":165,"iso8601":99,"with_fields":96,"polygon_sub":171} -------------------------------------------------------------------------------- /lib/rethinkdb/response.ex: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Record do 2 | @moduledoc false 3 | defstruct data: "", profile: nil 4 | end 5 | 6 | defmodule RethinkDB.Collection do 7 | @moduledoc false 8 | defstruct data: [], profile: nil 9 | 10 | defimpl Enumerable, for: __MODULE__ do 11 | def reduce(%{data: data}, acc, fun) do 12 | Enumerable.reduce(data, acc, fun) 13 | end 14 | 15 | def count(%{data: data}), do: Enumerable.count(data) 16 | def member?(%{data: data}, el), do: Enumerable.member?(data, el) 17 | end 18 | end 19 | 20 | defmodule RethinkDB.Feed do 21 | @moduledoc false 22 | defstruct token: nil, data: nil, pid: nil, note: nil, profile: nil, opts: nil 23 | 24 | defimpl Enumerable, for: __MODULE__ do 25 | def reduce(changes, acc, fun) do 26 | stream = Stream.unfold(changes, fn 27 | x = %RethinkDB.Feed{data: []} -> 28 | {:ok, r} = RethinkDB.next(x) 29 | {r, struct(r, data: [])} 30 | x = %RethinkDB.Feed{} -> 31 | {x, struct(x, data: [])} 32 | x = %RethinkDB.Collection{} -> {x, nil} 33 | nil -> nil 34 | end) |> 
Stream.flat_map(fn (el) -> 35 | el.data 36 | end) 37 | stream.(acc, fun) 38 | end 39 | def count(_changes), do: raise "count/1 not supported for changes" 40 | def member?(_changes, _values), do: raise "member/2 not supported for changes" 41 | end 42 | end 43 | 44 | defmodule RethinkDB.Response do 45 | @moduledoc false 46 | defstruct token: nil, data: "", profile: nil 47 | 48 | def parse(raw_data, token, pid, opts) do 49 | d = Poison.decode!(raw_data) 50 | data = RethinkDB.Pseudotypes.convert_reql_pseudotypes(d["r"], opts) 51 | {code, resp} = case d["t"] do 52 | 1 -> {:ok, %RethinkDB.Record{data: hd(data)}} 53 | 2 -> {:ok, %RethinkDB.Collection{data: data}} 54 | 3 -> {:ok, %RethinkDB.Feed{token: token, data: data, pid: pid, note: d["n"], opts: opts}} 55 | 4 -> {:ok, %RethinkDB.Response{token: token, data: d}} 56 | 16 -> {:error, %RethinkDB.Response{token: token, data: d}} 57 | 17 -> {:error, %RethinkDB.Response{token: token, data: d}} 58 | 18 -> {:error, %RethinkDB.Response{token: token, data: d}} 59 | end 60 | {code, %{resp | :profile => d["p"]}} 61 | end 62 | end 63 | -------------------------------------------------------------------------------- /mix.exs: -------------------------------------------------------------------------------- 1 | defmodule RethinkDB.Mixfile do use Mix.Project 2 | 3 | def project do 4 | [app: :rethinkdb, 5 | version: "0.4.0", 6 | elixir: "~> 1.0", 7 | description: "RethinkDB driver for Elixir", 8 | package: package, 9 | deps: deps, 10 | test_coverage: [tool: ExCoveralls]] 11 | end 12 | 13 | def package do 14 | [ 15 | maintainers: ["Peter Hamilton"], 16 | licenses: ["MIT"], 17 | links: %{"GitHub" => "https://github.com/hamiltop/rethinkdb-elixir"} 18 | ] 19 | end 20 | 21 | # Configuration for the OTP application 22 | # 23 | # Type `mix help compile.app` for more information 24 | def application do 25 | env_apps = case Mix.env do 26 | :test -> [:flaky_connection] 27 | _ -> [] 28 | end 29 | [applications: [:logger, :poison, :connection | env_apps]] 30 | end 31 | 32 | # Dependencies can be Hex packages: 33 | # 34 | # {:mydep, "~> 0.3.0"} 35 | # 36 | # Or git/path repositories: 37 | # 38 | # {:mydep, git: "https://github.com/elixir-lang/mydep.git", tag: "0.1.0"} 39 | # 40 | # Type `mix help deps` for more examples and options 41 | defp deps do 42 | [ 43 | {:poison, "~> 3.0"}, 44 | {:earmark, "~> 0.1", only: :dev}, 45 | {:ex_doc, "~> 0.7", only: :dev}, 46 | {:flaky_connection, github: "hamiltop/flaky_connection", only: :test}, 47 | {:connection, "~> 1.0.1"}, 48 | {:excoveralls, "~> 0.3.11", only: :test}, 49 | {:dialyze, "~> 0.2.0", only: :test} 50 | ] 51 | end 52 | end 53 | -------------------------------------------------------------------------------- /mix.lock: -------------------------------------------------------------------------------- 1 | %{"certifi": {:hex, :certifi, "0.3.0", "389d4b126a47895fe96d65fcf8681f4d09eca1153dc2243ed6babad0aac1e763", [:rebar3], []}, 2 | "connection": {:hex, :connection, "1.0.1", "16bf178158088f29513a34a742d4311cd39f2c52425559d679ecb28a568c5c0b", [:mix], []}, 3 | "dialyze": {:hex, :dialyze, "0.2.0", "ecabf292e9f4bd0f7d844981f899a85c0300b30ff2dd1cdfef0c81a6496466f1", [:mix], []}, 4 | "earmark": {:hex, :earmark, "0.1.19", "ffec54f520a11b711532c23d8a52b75a74c09697062d10613fa2dbdf8a9db36e", [:mix], []}, 5 | "ex_doc": {:hex, :ex_doc, "0.10.0", "f49c237250b829df986486b38f043e6f8e19d19b41101987f7214543f75947ec", [:mix], [{:earmark, "~> 0.1.17 or ~> 0.2", [hex: :earmark, optional: true]}]}, 6 | "excoveralls": {:hex, :excoveralls, "0.3.11", 
"cd1abaf07db5bed9cf7891d86470247c8b3c8739d7758679071ce1920bb09dbc", [:mix], [{:exjsx, "~> 3.0", [hex: :exjsx, optional: false]}, {:hackney, ">= 0.12.0", [hex: :hackney, optional: false]}]}, 7 | "exjsx": {:hex, :exjsx, "3.2.0", "7136cc739ace295fc74c378f33699e5145bead4fdc1b4799822d0287489136fb", [:mix], [{:jsx, "~> 2.6.2", [hex: :jsx, optional: false]}]}, 8 | "flaky_connection": {:git, "https://github.com/hamiltop/flaky_connection.git", "e3a09e7198e1b155f35291ffad438966648a8156", []}, 9 | "hackney": {:hex, :hackney, "1.4.8", "c8c6977ed55cc5095e3929f6d94a6f732dd2e31ae42a7b9236d5574ec3f5be13", [:rebar3], [{:certifi, "0.3.0", [hex: :certifi, optional: false]}, {:idna, "1.0.3", [hex: :idna, optional: false]}, {:mimerl, "1.0.2", [hex: :mimerl, optional: false]}, {:ssl_verify_hostname, "1.0.5", [hex: :ssl_verify_hostname, optional: false]}]}, 10 | "idna": {:hex, :idna, "1.0.3", "d456a8761cad91c97e9788c27002eb3b773adaf5c893275fc35ba4e3434bbd9b", [:rebar3], []}, 11 | "inch_ex": {:hex, :inch_ex, "0.2.4"}, 12 | "jsx": {:hex, :jsx, "2.6.2", "213721e058da0587a4bce3cc8a00ff6684ced229c8f9223245c6ff2c88fbaa5a", [:mix, :rebar], []}, 13 | "mimerl": {:hex, :mimerl, "1.0.2", "993f9b0e084083405ed8252b99460c4f0563e41729ab42d9074fd5e52439be88", [:rebar3], []}, 14 | "poison": {:hex, :poison, "3.1.0", "d9eb636610e096f86f25d9a46f35a9facac35609a7591b3be3326e99a0484665", [:mix], [], "hexpm"}, 15 | "ranch": {:hex, :ranch, "1.1.0", "f7ed6d97db8c2a27cca85cacbd543558001fc5a355e93a7bff1e9a9065a8545b", [:make], []}, 16 | "ssl_verify_hostname": {:hex, :ssl_verify_hostname, "1.0.5", "2e73e068cd6393526f9fa6d399353d7c9477d6886ba005f323b592d389fb47be", [:make], []}} 17 | -------------------------------------------------------------------------------- /test/cert/host.crt: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIDeDCCAmACCQCXLb1LngVNuTANBgkqhkiG9w0BAQsFADCBgDELMAkGA1UEBhMC 3 | VVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQHEwhOZXcgWW9yazEZMBcGA1UEChMQUmV0 4 | aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMPZm9vLmV4YW1wbGUuY29tMRwwGgYJKoZI 5 | hvcNAQkBFg1mb29AZ21haWwuY29tMB4XDTE3MDIxOTIzMzMxNVoXDTE4MDcwNDIz 6 | MzMxNVowezELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5ZMQswCQYDVQQHEwJOWTEh 7 | MB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMREwDwYDVQQDEwgxMC4w 8 | LjAuMTEcMBoGCSqGSIb3DQEJARYNZm9vQGdtYWlsLmNvbTCCASIwDQYJKoZIhvcN 9 | AQEBBQADggEPADCCAQoCggEBAKGKWJJ3NDXapqJbJTwz9TGkBb4Re1o793BkFbzk 10 | XJpohRcbEMQWK3fzzjFU2NzDV7uDFR5Em6GBP9piGa8SGM4WgFUu6alRXSYRJ/BB 11 | QRX5Qm+MTMDhYIRjZAQiOVCVLHXl/eWMxrItffyUVLe8NQWDIOz8UoUWMrTlpWsi 12 | kSNUVjWOhYZGRHQcyriRxua35S7mCtk6DW0RKU0nG9cB7Nyc9YYKxpHT63Ki+/FH 13 | gmqAF1cJ0OqtN27FMY2aHgT3HvRbbtLGHCnkZa4HErmrj7rlXoacf0bVJYYYB7AG 14 | thfcK7nwUCmUdWNwS829WdV42Tvv12Ww6eZNSkiFgD9KEssCAwEAATANBgkqhkiG 15 | 9w0BAQsFAAOCAQEAhDzL0UsKN6Yxbce3QpWAaHTyFZU+NPckPm66GyEmwBMuvLVr 16 | d2oMzqOXZ3AW+rydh4i0GYZQYK2KXUgTxYfIz3fvylU0g4rlHI/Ej6gnFJ5g2k8v 17 | h2FLY6mp1SULVopxqURWQPIPm+ztz/wQYPmB1W9W8aQYdEBgoIAmKvxRnBmU7SuP 18 | L2sQmoPnh9pCCdS3djXLoj9pCUe7YDJCnxqOe8zpH3FOIykdCfsIphpPs4Mkw+LY 19 | N1+KHBoRwkj0JBqwaNLF3sjkXgi0v06l4DZ7WAy3Q2k3QD8tuiSFEM0g4/2y58Ts 20 | iFSH2inRL4NJIew2kx+IBHEQDxffgA62zhjxVw== 21 | -----END CERTIFICATE----- 22 | -------------------------------------------------------------------------------- /test/cert/host.key: -------------------------------------------------------------------------------- 1 | -----BEGIN RSA PRIVATE KEY----- 2 | MIIEogIBAAKCAQEAoYpYknc0NdqmolslPDP1MaQFvhF7Wjv3cGQVvORcmmiFFxsQ 3 | 
xBYrd/POMVTY3MNXu4MVHkSboYE/2mIZrxIYzhaAVS7pqVFdJhEn8EFBFflCb4xM 4 | wOFghGNkBCI5UJUsdeX95YzGsi19/JRUt7w1BYMg7PxShRYytOWlayKRI1RWNY6F 5 | hkZEdBzKuJHG5rflLuYK2ToNbREpTScb1wHs3Jz1hgrGkdPrcqL78UeCaoAXVwnQ 6 | 6q03bsUxjZoeBPce9Ftu0sYcKeRlrgcSuauPuuVehpx/RtUlhhgHsAa2F9wrufBQ 7 | KZR1Y3BLzb1Z1XjZO+/XZbDp5k1KSIWAP0oSywIDAQABAoIBAGqOdYp3syrrBgwG 8 | j3M82rpZ9afApFuLPtcWTfiBskvwMgphwhd2gEnputNzonFNMavw9Zc3rmlEdrg5 9 | CbQf/djDoveNsHgNwaIAoxWqFaLG/vnR1DdO83mgjjLj2Ga9X8yNX4Nx7wdNVtOr 10 | jI5+SYNPUgLBFjXPxLbq3MjkzlQ8myki26qvXB9GjgF3gWoCz1q6dvjhCDtithNd 11 | NoHQUI5YmjuJas3gZ7lUu21MuOAWi0oPFuMMznjPgaaZ3vJVMxmLc/YsnryqbAoA 12 | ZQhn8ZNeXxUDI2mxiuk/bYgQAqRxHqogvL4u2tmm/L9FSSHVJgB3IjoDMiTYQjbk 13 | hZg+A3ECgYEAzldbDxYv3aj02sDidMFQvCVIYJ/v1iJDBFVNN148Y+5b64xvGbVM 14 | 7XkHmYQctg/jIASGiYjGsyVpRT+fRvRleHjNrO4FyWE317nUjER1EZblpZiqeXW0 15 | c+Lf4rKBN3z5eNJRbO/GWAmSvjs33Xdt68YNeRB4v5NA/b01tXp3NlcCgYEAyGrR 16 | /ybLBh9D86mgW28zBD39TdLdvtcNSTWPhu/Ceh9KqhaUZrc/jb8StFxG/Cx5S/oc 17 | F3BHypGkD1NZDAc5NJoMPQixd1BZB3F9EBmJk219KttimAiM27CKmF+7RbSfhu8P 18 | 3uK1oLSo3sVOhGQFpXQZ2g51B1+ltAboTLBcNq0CgYB/BzRd01DgaxViXoCLVD95 19 | tJIcOhoSf8E2N7VzsqYG90TLfAchkoWrZGkTT0vFoX43xdF1diitPQjTwtkxe1/E 20 | jMpB/b6+PQV93z9EoxhXHch+6793Ssku1qryCuaV3HBQu1m5cNtwc2RNjHNV+iJH 21 | lgPRVhygA+1syED6WkxtvQKBgEjZe0exxC6PgtW5HM7flr2+AqsdMPlDllK8I1W7 22 | JQfbA/rbhknn5jQR9iyVNkBHsjeJzFhAuffKBMaFV2Ll5UdXj4dH96oVDKeF+x21 23 | CqsKK2s+n5H/2aOpgldsxNfLlgkoMK6l3btyr8d6FNZOvTatAxCeHK/3dnX/5MSr 24 | fnlpAoGADad3tXN+vJARd/vXHfsHZp8/nF+hDrteEC8jwv//l+10TErWDZvxxf+8 25 | vgK6zWZogitNjmj49A8/PzNJY45JiCodo1z2D22jP4fqT8okzax7+i+v6noIFVSA 26 | ikQbv1yjcf3CeVsihHs3bvcxAW+fF0yOQ1uNIL97ThS2gIJwAwY= 27 | -----END RSA PRIVATE KEY----- 28 | -------------------------------------------------------------------------------- /test/cert/rootCA.pem: -------------------------------------------------------------------------------- 1 | -----BEGIN CERTIFICATE----- 2 | MIIEbjCCA1agAwIBAgIJAOQ/G0GKEPPHMA0GCSqGSIb3DQEBCwUAMIGAMQswCQYD 3 | VQQGEwJVUzELMAkGA1UECBMCTlkxETAPBgNVBAcTCE5ldyBZb3JrMRkwFwYDVQQK 4 | ExBSZXRoaW5rREIgRWxpeGlyMRgwFgYDVQQDEw9mb28uZXhhbXBsZS5jb20xHDAa 5 | BgkqhkiG9w0BCQEWDWZvb0BnbWFpbC5jb20wHhcNMTcwMjE5MjMzMDQzWhcNMTkx 6 | MjEwMjMzMDQzWjCBgDELMAkGA1UEBhMCVVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQH 7 | EwhOZXcgWW9yazEZMBcGA1UEChMQUmV0aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMP 8 | Zm9vLmV4YW1wbGUuY29tMRwwGgYJKoZIhvcNAQkBFg1mb29AZ21haWwuY29tMIIB 9 | IjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAv6zwu/jO6Y0WNO1jw6zCLdNu 10 | MBy7vjBY0mnF4MUZocP5VSxOE7OI/2KPS1cIbVBfbSQgKVDsl3T17JgZpUC1INlu 11 | u3J1J4UFUSXBRXZeGKTGaePtuLP5FwGabux18m42IgsBn0nA69QeTZCVnlYjsUoC 12 | UoJe+uldOanZQmylZQjO8alz5YjNw2T5YNE4laAutU9tPMJgJWRzG8+mUacY5/Y/ 13 | yVly2uPZ1CxG146MpHcQsApIMgovF9Uuuxjudf9nz4XOrjnUn6E5GxMh9HKBiQd8 14 | XbfH8DrDZf5ZquyytAtJ6T5m92mkje7T+76mOOKmKoDSl8xsDQgkUirOTb5g3wID 15 | AQABo4HoMIHlMB0GA1UdDgQWBBRDJaAOuwlhj7eIeILZFmN8qyVW8DCBtQYDVR0j 16 | BIGtMIGqgBRDJaAOuwlhj7eIeILZFmN8qyVW8KGBhqSBgzCBgDELMAkGA1UEBhMC 17 | VVMxCzAJBgNVBAgTAk5ZMREwDwYDVQQHEwhOZXcgWW9yazEZMBcGA1UEChMQUmV0 18 | aGlua0RCIEVsaXhpcjEYMBYGA1UEAxMPZm9vLmV4YW1wbGUuY29tMRwwGgYJKoZI 19 | hvcNAQkBFg1mb29AZ21haWwuY29tggkA5D8bQYoQ88cwDAYDVR0TBAUwAwEB/zAN 20 | BgkqhkiG9w0BAQsFAAOCAQEAUpJVRQ0Bsy7jyLMTqKmR6qGiYKBM2/AaMRbn5pqi 21 | 0Uz/3Fu9a9POzI18j7ZxDD7HVGZvdjKc/d6+jx6PntReuXYwdkIjW19oBihYp3op 22 | iaA7ZU0nAsefeyVcmfPtM+Kn3OW5/uIgYVIOiSLfWT4HVQxnKOWdVfaYieoz1gRO 23 | wdXipeHwtsfjz3sDjJBoBIWdtysEPYsVCkAEcvlji6ugwWJ8SBqzdZl/NjWvecgW 24 | ppSzO46l6WAZJxLdapAddOucrFSCbGAdf3WmHHRURCVaCbRhyBwDJh+vq76Imkh4 25 | 
M13jlo5+4K2NF9QCUEzwnC47uUOp1HqGGUoaeW4nTA07wg== 26 | -----END CERTIFICATE----- 27 | -------------------------------------------------------------------------------- /test/changes_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ChangesTest do 2 | use ExUnit.Case, async: true 3 | alias RethinkDB.Feed 4 | use RethinkDB.Connection 5 | import RethinkDB.Query 6 | 7 | @table_name "changes_test_table_1" 8 | setup_all do 9 | start_link 10 | table_create(@table_name) |> run 11 | on_exit fn -> 12 | start_link 13 | table_drop(@table_name) |> run 14 | end 15 | :ok 16 | end 17 | 18 | setup do 19 | table(@table_name) |> delete |> run 20 | :ok 21 | end 22 | 23 | test "first change" do 24 | q = table(@table_name) |> changes 25 | {:ok, changes = %Feed{}} = run(q) 26 | 27 | t = Task.async fn -> 28 | changes |> Enum.take(1) 29 | end 30 | data = %{"test" => "d"} 31 | table(@table_name) |> insert(data) |> run 32 | [h|[]] = Task.await(t) 33 | assert %{"new_val" => %{"test" => "d"}} = h 34 | end 35 | 36 | test "changes" do 37 | q = table(@table_name) |> changes 38 | {:ok, changes} = {:ok, %Feed{}} = run(q) 39 | t = Task.async fn -> 40 | RethinkDB.Connection.next(changes) 41 | end 42 | data = %{"test" => "data"} 43 | q = table(@table_name) |> insert(data) 44 | {:ok, res} = run(q) 45 | expected = res.data["id"] 46 | {:ok, changes} = Task.await(t) 47 | ^expected = changes.data |> hd |> Map.get("id") 48 | 49 | # test Enumerable 50 | t = Task.async fn -> 51 | changes |> Enum.take(5) 52 | end 53 | 1..6 |> Enum.each(fn _ -> 54 | q = table(@table_name) |> insert(data) 55 | run(q) 56 | end) 57 | data = Task.await(t) 58 | 5 = Enum.count(data) 59 | end 60 | 61 | test "point_changes" do 62 | q = table(@table_name) |> get("0") |> changes 63 | {:ok, changes} = {:ok, %Feed{}} = run(q) 64 | t = Task.async fn -> 65 | changes |> Enum.take(1) 66 | end 67 | data = %{"id" => "0"} 68 | q = table(@table_name) |> insert(data) 69 | {:ok, res} = run(q) 70 | expected = res.data["id"] 71 | [h|[]] = Task.await(t) 72 | assert %{"new_val" => %{"id" => "0"}} = h 73 | end 74 | 75 | test "changes opts binary native" do 76 | q = table(@table_name) |> get("0") |> changes 77 | {:ok, changes} = {:ok, %Feed{}} = run(q) 78 | t = Task.async fn -> 79 | changes |> Enum.take(1) 80 | end 81 | data = %{"id" => "0", "binary" => binary(<<1>>)} 82 | q = table(@table_name) |> insert(data) 83 | {:ok, res} = run(q) 84 | expected = res.data["id"] 85 | [h|[]] = Task.await(t) 86 | assert %{"new_val" => %{"id" => "0", "binary" => <<1>>}} = h 87 | end 88 | 89 | test "changes opts binary raw" do 90 | q = table(@table_name) |> get("0") |> changes 91 | {:ok, changes} = {:ok, %Feed{}} = run(q, [binary_format: :raw]) 92 | t = Task.async fn -> 93 | changes |> Enum.take(1) 94 | end 95 | data = %{"id" => "0", "binary" => binary(<<1>>)} 96 | q = table(@table_name) |> insert(data) 97 | {:ok, res} = run(q) 98 | expected = res.data["id"] 99 | [h|[]] = Task.await(t) 100 | assert %{"new_val" => %{"id" => "0", "binary" => %RethinkDB.Pseudotypes.Binary{data: "AQ=="}}} = h 101 | end 102 | end 103 | -------------------------------------------------------------------------------- /test/connection_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ConnectionTest do 2 | use ExUnit.Case, async: true 3 | import Supervisor.Spec 4 | use RethinkDB.Connection 5 | import RethinkDB.Query 6 | 7 | require Logger 8 | 9 | test "Connections can be supervised" do 10 | children = 
[worker(RethinkDB.Connection, [])] 11 | {:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one) 12 | assert Supervisor.count_children(sup) == %{active: 1, specs: 1, supervisors: 0, workers: 1} 13 | Process.exit(sup, :kill) 14 | end 15 | 16 | test "using Connection works with supervision" do 17 | children = [worker(__MODULE__, [])] 18 | {:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one) 19 | assert Supervisor.count_children(sup) == %{active: 1, specs: 1, supervisors: 0, workers: 1} 20 | Process.exit(sup, :kill) 21 | end 22 | 23 | test "using Connection will raise if name is provided" do 24 | assert_raise ArgumentError, fn -> 25 | start_link(name: :test) 26 | end 27 | end 28 | 29 | test "reconnects if initial connect fails" do 30 | {:ok, c} = start_link(port: 28014) 31 | Process.unlink(c) 32 | %RethinkDB.Exception.ConnectionClosed{} = table_list |> run 33 | conn = FlakyConnection.start('localhost', 28015, [local_port: 28014]) 34 | :timer.sleep(1000) 35 | {:ok, %RethinkDB.Record{}} = RethinkDB.Query.table_list |> run 36 | ref = Process.monitor(c) 37 | FlakyConnection.stop(conn) 38 | receive do 39 | {:DOWN, ^ref, _, _, _} -> :ok 40 | end 41 | end 42 | 43 | test "replies to pending queries on disconnect" do 44 | conn = FlakyConnection.start('localhost', 28015) 45 | {:ok, c} = start_link(port: conn.port) 46 | Process.unlink(c) 47 | table = "foo_flaky_test" 48 | RethinkDB.Query.table_create(table)|> run 49 | on_exit fn -> 50 | start_link 51 | :timer.sleep(100) 52 | RethinkDB.Query.table_drop(table) |> run 53 | GenServer.cast(__MODULE__, :stop) 54 | end 55 | table(table) |> index_wait |> run 56 | {:ok, change_feed} = table(table) |> changes |> run 57 | task = Task.async fn -> 58 | RethinkDB.Connection.next change_feed 59 | end 60 | :timer.sleep(100) 61 | ref = Process.monitor(c) 62 | FlakyConnection.stop(conn) 63 | %RethinkDB.Exception.ConnectionClosed{} = Task.await(task) 64 | receive do 65 | {:DOWN, ^ref, _, _, _} -> :ok 66 | end 67 | end 68 | 69 | test "supervised connection restarts on disconnect" do 70 | conn = FlakyConnection.start('localhost', 28015) 71 | children = [worker(__MODULE__, [[port: conn.port]])] 72 | {:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one) 73 | assert Supervisor.count_children(sup) == %{active: 1, specs: 1, supervisors: 0, workers: 1} 74 | 75 | FlakyConnection.stop(conn) 76 | :timer.sleep(100) # this is a band-aid for a race condition in this test 77 | 78 | assert Supervisor.count_children(sup) == %{active: 1, specs: 1, supervisors: 0, workers: 1} 79 | 80 | Process.exit(sup, :normal) 81 | end 82 | 83 | test "connection accepts default db" do 84 | {:ok, c} = RethinkDB.Connection.start_link(db: "new_test") 85 | db_create("new_test") |> RethinkDB.run(c) 86 | db("new_test") |> table_create("new_test_table") |> RethinkDB.run(c) 87 | {:ok, %{data: data}} = table_list |> RethinkDB.run(c) 88 | assert data == ["new_test_table"] 89 | end 90 | 91 | test "connection accepts max_pending" do 92 | {:ok, c} = RethinkDB.Connection.start_link(max_pending: 1) 93 | res = Enum.map(1..100, fn (_) -> 94 | Task.async fn -> 95 | now |> RethinkDB.run(c) 96 | end 97 | end) |> Enum.map(&Task.await/1) 98 | assert Enum.any?(res, &(&1 == %RethinkDB.Exception.TooManyRequests{})) 99 | end 100 | 101 | test "sync connection" do 102 | {:error, :econnrefused} = Connection.start(RethinkDB.Connection, [port: 28014, sync_connect: true]) 103 | conn = FlakyConnection.start('localhost', 28015, [local_port: 28014]) 104 | {:ok, pid} = 
Connection.start(RethinkDB.Connection, [port: 28014, sync_connect: true]) 105 | FlakyConnection.stop(conn) 106 | Process.exit(pid, :shutdown) 107 | end 108 | 109 | test "ssl connection" do 110 | conn = FlakyConnection.start('localhost', 28015, [ssl: [keyfile: "./test/cert/host.key", certfile: "./test/cert/host.crt"]]) 111 | {:ok, c} = RethinkDB.Connection.start_link(port: conn.port, ssl: [ca_certs: ["./test/cert/rootCA.pem"]], sync_connect: true) 112 | {:ok, %{data: _}} = table_list |> RethinkDB.run(c) 113 | end 114 | end 115 | 116 | defmodule ConnectionRunTest do 117 | use ExUnit.Case, async: true 118 | use RethinkDB.Connection 119 | import RethinkDB.Query 120 | 121 | setup_all do 122 | start_link 123 | :ok 124 | end 125 | 126 | test "run(conn, opts) with :db option" do 127 | db_create("db_option_test") |> run 128 | table_create("db_option_test_table") |> run(db: "db_option_test") 129 | 130 | {:ok, %{data: data}} = db("db_option_test") |> table_list |> run 131 | 132 | db_drop("db_option_test") |> run 133 | 134 | assert data == ["db_option_test_table"] 135 | end 136 | 137 | test "run(conn, opts) with :durability option" do 138 | table_drop("durability_test_table") |> run 139 | {:ok, response} = table_create("durability_test_table") |> run(durability: "soft") 140 | durability = response.data["config_changes"] 141 | |> List.first 142 | |> Map.fetch!("new_val") 143 | |> Map.fetch!("durability") 144 | 145 | table_drop("durability_test_table") |> run 146 | 147 | assert durability == "soft" 148 | end 149 | 150 | test "run with :noreply option" do 151 | :ok = make_array([1,2,3]) |> run(noreply: true) 152 | noreply_wait 153 | end 154 | 155 | test "run with :profile options" do 156 | {:ok, resp} = make_array([1,2,3]) |> run(profile: true) 157 | assert [%{"description" => _, "duration(ms)" => _, 158 | "sub_tasks" => _}] = resp.profile 159 | end 160 | end 161 | -------------------------------------------------------------------------------- /test/prepare_test.exs: -------------------------------------------------------------------------------- 1 | defmodule PrepareTest do 2 | use ExUnit.Case, async: true 3 | import RethinkDB.Prepare 4 | 5 | test "single elements" do 6 | assert prepare(1) == 1 7 | assert prepare(make_ref) == 1 8 | end 9 | 10 | test "list" do 11 | assert prepare([1,2,3]) == [1,2,3] 12 | assert prepare([1,2,make_ref,make_ref]) == [1,2,1,2] 13 | end 14 | 15 | test "nested list" do 16 | list = [1, [1,2], [1, [1, 2]]] 17 | assert prepare(list) == list 18 | list = [1, [make_ref, make_ref], make_ref, [1, 2]] 19 | assert prepare(list) == [1, [1, 2], 3, [1, 2]] 20 | end 21 | 22 | test "map" do 23 | map = %{a: 1, b: 2} 24 | assert prepare(map) == map 25 | map = %{a: 1, b: make_ref, c: make_ref} 26 | assert prepare(map) == %{a: 1, b: 1, c: 2} 27 | end 28 | end 29 | -------------------------------------------------------------------------------- /test/query/administration_query_test.exs: -------------------------------------------------------------------------------- 1 | defmodule AdministrationQueryTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | setup_all do 7 | start_link 8 | :ok 9 | end 10 | 11 | @table_name "administration_table_1" 12 | setup do 13 | table_create(@table_name) |> run 14 | on_exit fn -> 15 | table_drop(@table_name) |> run 16 | end 17 | :ok 18 | end 19 | 20 | test "config" do 21 | {:ok, r} = table(@table_name) |> config |> run 22 | assert %RethinkDB.Record{data: %{"db" => "test"}} = r 23 | end 24 | 25 | test "rebalance" do 
26 | {:ok, r} = table(@table_name) |> rebalance |> run 27 | assert %RethinkDB.Record{data: %{"rebalanced" => _}} = r 28 | end 29 | 30 | test "reconfigure" do 31 | {:ok, r} = table(@table_name) |> reconfigure(shards: 1, dry_run: true, replicas: 1) |> run 32 | assert %RethinkDB.Record{data: %{"reconfigured" => _}} = r 33 | end 34 | 35 | test "status" do 36 | {:ok, r} = table(@table_name) |> status |> run 37 | assert %RethinkDB.Record{data: %{"name" => @table_name}} = r 38 | end 39 | 40 | test "wait" do 41 | {:ok, r} = table(@table_name) |> wait(wait_for: :ready_for_writes) |> run 42 | assert %RethinkDB.Record{data: %{"ready" => 1}} = r 43 | end 44 | end 45 | -------------------------------------------------------------------------------- /test/query/aggregation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule AggregationTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | alias RethinkDB.Query 6 | 7 | alias RethinkDB.Record 8 | 9 | require RethinkDB.Lambda 10 | import RethinkDB.Lambda 11 | 12 | setup_all do 13 | start_link 14 | :ok 15 | end 16 | 17 | test "group on key name" do 18 | query = [ 19 | %{a: "hi", b: 1}, 20 | %{a: "hi", b: [1,2,3]}, 21 | %{a: "bye"} 22 | ] 23 | |> group("a") 24 | {:ok, %Record{data: data}} = query |> run 25 | assert data == %{ 26 | "bye" => [ 27 | %{"a" => "bye"} 28 | ], 29 | "hi" => [ 30 | %{"a" => "hi", "b" => 1}, 31 | %{"a" => "hi", "b" => [1,2,3]} 32 | ] 33 | } 34 | end 35 | 36 | test "group on function" do 37 | query = [ 38 | %{a: "hi", b: 1}, 39 | %{a: "hi", b: [1,2,3]}, 40 | %{a: "bye"}, 41 | %{a: "hello"} 42 | ] 43 | |> group(lambda fn (x) -> 44 | (x["a"] == "hi") || (x["a"] == "hello") 45 | end) 46 | {:ok, %Record{data: data}} = query |> run 47 | assert data == %{ 48 | false: [ 49 | %{"a" => "bye"}, 50 | ], 51 | true: [ 52 | %{"a" => "hi", "b" => 1}, 53 | %{"a" => "hi", "b" => [1,2,3]}, 54 | %{"a" => "hello"} 55 | ] 56 | } 57 | end 58 | 59 | test "group on multiple keys" do 60 | query = [ 61 | %{a: "hi", b: 1, c: 2}, 62 | %{a: "hi", b: 1, c: 3}, 63 | %{a: "hi", b: [1,2,3]}, 64 | %{a: "bye"}, 65 | %{a: "hello", b: 1} 66 | ] 67 | |> group([lambda(fn (x) -> 68 | (x["a"] == "hi") || (x["a"] == "hello") 69 | end), "b"]) 70 | {:ok, %Record{data: data}} = query |> run 71 | assert data == %{ 72 | [false, nil] => [ 73 | %{"a" => "bye"}, 74 | ], 75 | [true, 1] => [ 76 | %{"a" => "hi", "b" => 1, "c" => 2}, 77 | %{"a" => "hi", "b" => 1, "c" => 3}, 78 | %{"a" => "hello", "b" => 1}, 79 | ], 80 | [true, [1,2,3]] => [ 81 | %{"a" => "hi", "b" => [1,2,3]} 82 | ] 83 | } 84 | end 85 | 86 | test "ungroup" do 87 | query = [ 88 | %{a: "hi", b: 1, c: 2}, 89 | %{a: "hi", b: 1, c: 3}, 90 | %{a: "hi", b: [1,2,3]}, 91 | %{a: "bye"}, 92 | %{a: "hello", b: 1} 93 | ] 94 | |> group([lambda(fn (x) -> 95 | (x["a"] == "hi") || (x["a"] == "hello") 96 | end), "b"]) 97 | |> ungroup 98 | {:ok, %Record{data: data}} = query |> run 99 | assert data == [ 100 | %{ 101 | "group" => [false, nil], 102 | "reduction" => [ 103 | %{"a" => "bye"}, 104 | ] 105 | }, 106 | %{ 107 | "group" => [true, [1,2,3]], 108 | "reduction" => [ 109 | %{"a" => "hi", "b" => [1,2,3]} 110 | ] 111 | }, 112 | %{ 113 | "group" => [true, 1], 114 | "reduction" => [ 115 | %{"a" => "hi", "b" => 1, "c" => 2}, 116 | %{"a" => "hi", "b" => 1, "c" => 3}, 117 | %{"a" => "hello", "b" => 1}, 118 | ] 119 | } 120 | ] 121 | end 122 | 123 | test "reduce" do 124 | query = [1,2,3,4] |> reduce(lambda fn(el, acc) -> 125 | el + acc 126 | end) 127 
| {:ok, %Record{data: data}} = run query 128 | assert data == 10 129 | end 130 | 131 | test "count" do 132 | query = [1,2,3,4] |> count 133 | {:ok, %Record{data: data}} = run query 134 | assert data == 4 135 | end 136 | 137 | test "count with value" do 138 | query = [1,2,2,3,4] |> count(2) 139 | {:ok, %Record{data: data}} = run query 140 | assert data == 2 141 | end 142 | 143 | test "count with predicate" do 144 | query = [1,2,2,3,4] |> count(lambda fn(x) -> 145 | rem(x, 2) == 0 146 | end) 147 | {:ok, %Record{data: data}} = run query 148 | assert data == 3 149 | end 150 | 151 | test "sum" do 152 | query = [1,2,3,4] |> sum 153 | {:ok, %Record{data: data}} = run query 154 | assert data == 10 155 | end 156 | 157 | test "sum with field" do 158 | query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> sum("a") 159 | {:ok, %Record{data: data}} = run query 160 | assert data == 3 161 | end 162 | 163 | test "sum with function" do 164 | query = [1,2,3,4] |> sum(lambda fn (x) -> 165 | if x == 1 do 166 | nil 167 | else 168 | x * 2 169 | end 170 | end) 171 | {:ok, %Record{data: data}} = run query 172 | assert data == 18 173 | end 174 | 175 | test "avg" do 176 | query = [1,2,3,4] |> avg 177 | {:ok, %Record{data: data}} = run query 178 | assert data == 2.5 179 | end 180 | 181 | test "avg with field" do 182 | query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> avg("a") 183 | {:ok, %Record{data: data}} = run query 184 | assert data == 1.5 185 | end 186 | 187 | test "avg with function" do 188 | query = [1,2,3,4] |> avg(lambda fn (x) -> 189 | if x == 1 do 190 | nil 191 | else 192 | x * 2 193 | end 194 | end) 195 | {:ok, %Record{data: data}} = run query 196 | assert data == 6 197 | end 198 | 199 | test "min" do 200 | query = [1,2,3,4] |> Query.min 201 | {:ok, %Record{data: data}} = run query 202 | assert data == 1 203 | end 204 | 205 | test "min with field" do 206 | query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.min("b") 207 | {:ok, %Record{data: data}} = run query 208 | assert data == %{"b" => 3} 209 | end 210 | 211 | test "min with subquery field" do 212 | query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.min(Query.downcase("B")) 213 | {:ok, %Record{data: data}} = run query 214 | assert data == %{"b" => 3} 215 | end 216 | 217 | test "min with function" do 218 | query = [1,2,3,4] |> Query.min(lambda fn (x) -> 219 | if x == 1 do 220 | 100 # Note, there's a bug in rethinkdb (https://github.com/rethinkdb/rethinkdb/issues/4213) 221 | # which means we can't return null here 222 | else 223 | x * 2 224 | end 225 | end) 226 | {:ok, %Record{data: data}} = run query 227 | assert data == 2 228 | end 229 | 230 | test "max" do 231 | query = [1,2,3,4] |> Query.max 232 | {:ok, %Record{data: data}} = run query 233 | assert data == 4 234 | end 235 | 236 | test "max with field" do 237 | query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.max("b") 238 | {:ok, %Record{data: data}} = run query 239 | assert data == %{"b" => 4} 240 | end 241 | 242 | test "max with subquery field" do 243 | query = [%{a: 1},%{a: 2},%{b: 3},%{b: 4}] |> Query.max(Query.downcase("B")) 244 | {:ok, %Record{data: data}} = run query 245 | assert data == %{"b" => 4} 246 | end 247 | 248 | test "max with function" do 249 | query = [1,2,3,4] |> Query.max(lambda fn (x) -> 250 | if x == 4 do 251 | nil 252 | else 253 | x * 2 254 | end 255 | end) 256 | {:ok, %Record{data: data}} = run query 257 | assert data == 3 258 | end 259 | 260 | test "distinct" do 261 | query = [1,2,3,3,4,4,5] |> distinct 262 | {:ok, %Record{data: data}} = run query 263 | assert data == [1,2,3,4,5] 264 | 
end 265 | 266 | test "distinct with opts" do 267 | query = [1,2,3,4] |> distinct(index: "stuff") 268 | assert %RethinkDB.Q{query: [_, _, %{index: "stuff"}]} = query 269 | end 270 | 271 | test "contains" do 272 | query = [1,2,3,4] |> contains(4) 273 | {:ok, %Record{data: data}} = run query 274 | assert data == true 275 | end 276 | 277 | test "contains multiple values" do 278 | query = [1,2,3,4] |> contains([4, 3]) 279 | {:ok, %Record{data: data}} = run query 280 | assert data == true 281 | end 282 | 283 | test "contains with function" do 284 | query = [1,2,3,4] |> contains(lambda &(&1 == 3)) 285 | {:ok, %Record{data: data}} = run query 286 | assert data == true 287 | end 288 | 289 | test "contains with multiple function" do 290 | query = [1,2,3,4] |> contains([lambda(&(&1 == 3)), lambda(&(&1 == 5))]) 291 | {:ok, %Record{data: data}} = run query 292 | assert data == false 293 | end 294 | 295 | test "contains with multiple (mixed)" do 296 | query = [1,2,3,4] |> contains([lambda(&(&1 == 3)), 2]) 297 | {:ok, %Record{data: data}} = run query 298 | assert data == true 299 | end 300 | end 301 | -------------------------------------------------------------------------------- /test/query/control_structures_adv_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ControlStructuresAdvTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Collection 7 | 8 | @table_name "control_test_table_1" 9 | setup_all do 10 | start_link 11 | q = table_create(@table_name) 12 | run(q) 13 | on_exit fn -> 14 | start_link 15 | table_drop(@table_name) |> run 16 | end 17 | :ok 18 | end 19 | 20 | setup do 21 | table(@table_name) |> delete |> run 22 | :ok 23 | end 24 | 25 | test "for_each" do 26 | table_query = table(@table_name) 27 | q = [1,2,3] |> for_each(fn(x) -> 28 | table_query |> insert(%{a: x}) 29 | end) 30 | run q 31 | {:ok, %Collection{data: data}} = run table_query 32 | assert Enum.count(data) == 3 33 | end 34 | end 35 | -------------------------------------------------------------------------------- /test/query/control_structures_test.exs: -------------------------------------------------------------------------------- 1 | defmodule ControlStructuresTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Response 8 | 9 | setup_all do 10 | start_link 11 | :ok 12 | end 13 | 14 | test "args" do 15 | q = [%{a: 5, b: 6}, %{a: 4, c: 7}] |> pluck(args(["a","c"])) 16 | {:ok, %Record{data: data}} = run q 17 | assert data == [%{"a" => 5}, %{"a" => 4, "c" => 7}] 18 | end 19 | 20 | test "binary raw" do 21 | d = << 220, 2, 3, 4, 5, 192 >> 22 | q = binary d 23 | {:ok, %Record{data: data}} = run q, [binary_format: :raw] 24 | assert data == %RethinkDB.Pseudotypes.Binary{data: :base64.encode(d)} 25 | q = binary data 26 | {:ok, %Record{data: result}} = run q, [binary_format: :raw] 27 | assert data == result 28 | end 29 | 30 | test "binary native" do 31 | d = << 220, 2, 3, 4, 5, 192 >> 32 | q = binary d 33 | {:ok, %Record{data: data}} = run q 34 | assert data == d 35 | q = binary data 36 | {:ok, %Record{data: result}} = run q, [binary_format: :native] 37 | assert data == result 38 | end 39 | 40 | test "binary native no wrapper" do 41 | d = << 220, 2, 3, 4, 5, 192 >> 42 | q = d 43 | {:ok, %Record{data: data}} = run q 44 | assert data == d 45 | q = data 46 | {:ok, %Record{data: result}} = run q, [binary_format: :native] 47 | 
assert data == result 48 | end 49 | 50 | test "do_r" do 51 | q = do_r fn -> 5 end 52 | {:ok, %Record{data: data}} = run q 53 | assert data == 5 54 | q = [1,2,3] |> do_r(fn x -> x end) 55 | {:ok, %Record{data: data}} = run q 56 | assert data == [1,2,3] 57 | end 58 | 59 | test "branch" do 60 | q = branch(true, 1, 2) 61 | {:ok, %Record{data: data}} = run q 62 | assert data == 1 63 | q = branch(false, 1, 2) 64 | {:ok, %Record{data: data}} = run q 65 | assert data == 2 66 | end 67 | 68 | test "error" do 69 | q = do_r(fn -> error("hello") end) 70 | {:error, %Response{data: data}} = run q 71 | assert data["r"] == ["hello"] 72 | end 73 | 74 | test "default" do 75 | q = 1 |> default("test") 76 | {:ok, %Record{data: data}} = run q 77 | assert data == 1 78 | q = nil |> default("test") 79 | {:ok, %Record{data: data}} = run q 80 | assert data == "test" 81 | end 82 | 83 | test "js" do 84 | q = js "[40,100,1,5,25,10].sort()" 85 | {:ok, %Record{data: data}} = run q 86 | assert data == [1,10,100,25,40,5] # couldn't help myself... 87 | end 88 | 89 | test "coerce_to" do 90 | q = "91" |> coerce_to("number") 91 | {:ok, %Record{data: data}} = run q 92 | assert data == 91 93 | end 94 | 95 | test "type_of" do 96 | q = "91" |> type_of 97 | {:ok, %Record{data: data}} = run q 98 | assert data == "STRING" 99 | q = 91 |> type_of 100 | {:ok, %Record{data: data}} = run q 101 | assert data == "NUMBER" 102 | q = [91] |> type_of 103 | {:ok, %Record{data: data}} = run q 104 | assert data == "ARRAY" 105 | end 106 | 107 | test "info" do 108 | q = [91] |> info 109 | {:ok, %Record{data: %{"type" => type}}} = run q 110 | assert type == "ARRAY" 111 | end 112 | 113 | test "json" do 114 | q = "{\"a\": 5, \"b\": 6}" |> json 115 | {:ok, %Record{data: data}} = run q 116 | assert data == %{"a" => 5, "b" => 6} 117 | end 118 | 119 | test "http" do 120 | q = "http://httpbin.org/get" |> http 121 | {:ok, %Record{data: data}} = run q 122 | %{"args" => %{}, 123 | "headers" => _, 124 | "origin" => _, "url" => "http://httpbin.org/get"} = data 125 | end 126 | 127 | test "uuid" do 128 | q = uuid 129 | {:ok, %Record{data: data}} = run q 130 | assert String.length(String.replace(data, "-", "")) == 32 131 | end 132 | end 133 | -------------------------------------------------------------------------------- /test/query/database_test.exs: -------------------------------------------------------------------------------- 1 | defmodule DatabaseTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link 10 | :ok 11 | end 12 | 13 | @db_name "db_test_db_1" 14 | @table_name "db_test_table_1" 15 | setup do 16 | q = table_drop(@table_name) 17 | run(q) 18 | q = table_create(@table_name) 19 | run(q) 20 | on_exit fn -> 21 | q = table_drop(@table_name) 22 | run(q) 23 | db_drop(@db_name) |> run 24 | end 25 | :ok 26 | end 27 | 28 | test "databases" do 29 | q = db_create(@db_name) 30 | {:ok, %Record{data: %{"dbs_created" => 1}}} = run(q) 31 | 32 | q = db_list 33 | {:ok, %Record{data: dbs}} = run(q) 34 | assert Enum.member?(dbs, @db_name) 35 | 36 | q = db_drop(@db_name) 37 | {:ok, %Record{data: %{"dbs_dropped" => 1}}} = run(q) 38 | 39 | q = db_list 40 | {:ok, %Record{data: dbs}} = run(q) 41 | assert !Enum.member?(dbs, @db_name) 42 | end 43 | end 44 | -------------------------------------------------------------------------------- /test/query/date_time_test.exs: -------------------------------------------------------------------------------- 1 | defmodule DateTimeTest 
do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Pseudotypes.Time 8 | 9 | setup_all do 10 | start_link 11 | :ok 12 | end 13 | 14 | test "now native" do 15 | {:ok, %Record{data: data}} = now |> run 16 | assert %DateTime{} = data 17 | end 18 | 19 | test "now raw" do 20 | {:ok, %Record{data: data}} = now |> run [time_format: :raw] 21 | assert %Time{} = data 22 | end 23 | 24 | test "time native" do 25 | {:ok, %Record{data: data}} = time(1970,1,1,"Z") |> run 26 | assert data == DateTime.from_unix!(0, :milliseconds) 27 | {:ok, %Record{data: data}} = time(1970,1,1,0,0,1,"Z") |> run [binary_format: :native] 28 | assert data == DateTime.from_unix!(1000, :milliseconds) 29 | end 30 | 31 | test "time raw" do 32 | {:ok, %Record{data: data}} = time(1970,1,1,"Z") |> run [time_format: :raw] 33 | assert data.epoch_time == 0 34 | {:ok, %Record{data: data}} = time(1970,1,1,0,0,1,"Z") |> run [time_format: :raw] 35 | assert data.epoch_time == 1 36 | end 37 | 38 | test "epoch_time native" do 39 | {:ok, %Record{data: data}} = epoch_time(1) |> run 40 | assert data == DateTime.from_unix!(1000, :milliseconds) 41 | end 42 | 43 | test "epoch_time raw" do 44 | {:ok, %Record{data: data}} = epoch_time(1) |> run [time_format: :raw] 45 | assert data.epoch_time == 1 46 | assert data.timezone == "+00:00" 47 | end 48 | 49 | test "iso8601 native" do 50 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00+00:00") |> run 51 | assert data == DateTime.from_unix!(0, :milliseconds) 52 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run 53 | assert data == DateTime.from_unix!(-3600000, :milliseconds) |> struct(utc_offset: 3600, time_zone: "Etc/GMT-1", zone_abbr: "+01:00") 54 | end 55 | 56 | test "iso8601 raw" do 57 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00+00:00") |> run [time_format: :raw] 58 | assert data.epoch_time == 0 59 | assert data.timezone == "+00:00" 60 | {:ok, %Record{data: data}} = iso8601("1970-01-01T00:00:00", default_timezone: "+01:00") |> run [time_format: :raw] 61 | assert data.epoch_time == -3600 62 | assert data.timezone == "+01:00" 63 | end 64 | 65 | test "in_timezone native" do 66 | {:ok, %Record{data: data}} = epoch_time(0) |> in_timezone("+01:00") |> run 67 | assert data == DateTime.from_unix!(0, :milliseconds) |> struct(utc_offset: 3600, time_zone: "Etc/GMT-1", zone_abbr: "+01:00") 68 | end 69 | 70 | test "in_timezone raw" do 71 | {:ok, %Record{data: data}} = epoch_time(0) |> in_timezone("+01:00") |> run [time_format: :raw] 72 | assert data.timezone == "+01:00" 73 | assert data.epoch_time == 0 74 | end 75 | 76 | test "timezone" do 77 | {:ok, %Record{data: data}} = %Time{epoch_time: 0, timezone: "+01:00"} |> timezone |> run 78 | assert data == "+01:00" 79 | end 80 | 81 | test "during" do 82 | a = epoch_time(5) 83 | b = epoch_time(10) 84 | c = epoch_time(7) 85 | {:ok, %Record{data: data}} = c |> during(a,b) |> run 86 | assert data == true 87 | {:ok, %Record{data: data}} = b |> during(a,c) |> run 88 | assert data == false 89 | end 90 | 91 | test "date native" do 92 | {:ok, %Record{data: data}} = epoch_time(5) |> date |> run 93 | assert data == DateTime.from_unix!(0, :milliseconds) 94 | end 95 | 96 | test "date raw" do 97 | {:ok, %Record{data: data}} = epoch_time(5) |> date |> run [time_format: :raw] 98 | assert data.epoch_time == 0 99 | end 100 | 101 | test "time_of_day" do 102 | {:ok, %Record{data: data}} = epoch_time(60*60*24 + 15) |> time_of_day |> 
run 103 | assert data == 15 104 | end 105 | 106 | test "year" do 107 | {:ok, %Record{data: data}} = epoch_time(2*365*60*60*24) |> year |> run 108 | assert data == 1972 109 | end 110 | 111 | test "month" do 112 | {:ok, %Record{data: data}} = epoch_time(2*30*60*60*24) |> month |> run 113 | assert data == 3 114 | end 115 | 116 | test "day" do 117 | {:ok, %Record{data: data}} = epoch_time(3*60*60*24) |> day |> run 118 | assert data == 4 119 | end 120 | 121 | test "day_of_week" do 122 | {:ok, %Record{data: data}} = epoch_time(3*60*60*24) |> day_of_week |> run 123 | assert data == 7 124 | end 125 | 126 | test "day_of_year" do 127 | {:ok, %Record{data: data}} = epoch_time(3*60*60*24) |> day_of_year |> run 128 | assert data == 4 129 | end 130 | 131 | test "hours" do 132 | {:ok, %Record{data: data}} = epoch_time(3*60*60) |> hours |> run 133 | assert data == 3 134 | end 135 | 136 | test "minutes" do 137 | {:ok, %Record{data: data}} = epoch_time(3*60) |> minutes |> run 138 | assert data == 3 139 | end 140 | 141 | test "seconds" do 142 | {:ok, %Record{data: data}} = epoch_time(3) |> seconds |> run 143 | assert data == 3 144 | end 145 | 146 | test "to_iso8601" do 147 | {:ok, %Record{data: data}} = epoch_time(3) |> to_iso8601 |> run 148 | assert data == "1970-01-01T00:00:03+00:00" 149 | end 150 | 151 | test "to_epoch_time" do 152 | {:ok, %Record{data: data}} = epoch_time(3) |> to_epoch_time |> run 153 | assert data == 3 154 | end 155 | 156 | end 157 | -------------------------------------------------------------------------------- /test/query/document_manipulation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule DocumentManipulationTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link 10 | :ok 11 | end 12 | 13 | test "pluck" do 14 | {:ok, %Record{data: data}} = [ 15 | %{a: 5, b: 6, c: 3}, 16 | %{a: 7, b: 8} 17 | ] |> pluck(["a", "b"]) |> run 18 | assert data == [ 19 | %{"a" => 5, "b" => 6}, 20 | %{"a" => 7, "b" => 8} 21 | ] 22 | end 23 | 24 | test "without" do 25 | {:ok, %Record{data: data}} = [ 26 | %{a: 5, b: 6, c: 3}, 27 | %{a: 7, b: 8} 28 | ] |> without("a") |> run 29 | assert data == [ 30 | %{"b" => 6, "c" => 3}, 31 | %{"b" => 8} 32 | ] 33 | end 34 | 35 | test "merge" do 36 | {:ok, %Record{data: data}} = %{a: 4} |> merge(%{b: 5}) |> run 37 | assert data == %{"a" => 4, "b" => 5} 38 | end 39 | 40 | test "merge list" do 41 | {:ok, %Record{data: data}} = args([%{a: 4}, %{b: 5}]) |> merge |> run 42 | assert data == %{"a" => 4, "b" => 5} 43 | end 44 | 45 | test "append" do 46 | {:ok, %Record{data: data}} = [1,2] |> append(3) |> run 47 | assert data == [1,2,3] 48 | end 49 | 50 | test "prepend" do 51 | {:ok, %Record{data: data}} = [1,2] |> prepend(3) |> run 52 | assert data == [3,1,2] 53 | end 54 | 55 | test "difference" do 56 | {:ok, %Record{data: data}} = [1,2] |> difference([2]) |> run 57 | assert data == [1] 58 | end 59 | 60 | test "set_insert" do 61 | {:ok, %Record{data: data}} = [1,2] |> set_insert(2) |> run 62 | assert data == [1,2] 63 | {:ok, %Record{data: data}} = [1,2] |> set_insert(3) |> run 64 | assert data == [1,2,3] 65 | end 66 | 67 | test "set_intersection" do 68 | {:ok, %Record{data: data}} = [1,2] |> set_intersection([2,3]) |> run 69 | assert data == [2] 70 | end 71 | 72 | test "set_union" do 73 | {:ok, %Record{data: data}} = [1,2] |> set_union([2,3]) |> run 74 | assert data == [1,2,3] 75 | end 76 | 77 | test "set_difference" 
do 78 | {:ok, %Record{data: data}} = [1,2,4] |> set_difference([2,3]) |> run 79 | assert data == [1,4] 80 | end 81 | 82 | test "get_field" do 83 | {:ok, %Record{data: data}} = %{a: 5, b: 6} |> get_field("a") |> run 84 | assert data == 5 85 | end 86 | 87 | test "has_fields" do 88 | {:ok, %Record{data: data}} = [ 89 | %{"b" => 6, "c" => 3}, 90 | %{"b" => 8} 91 | ] |> has_fields(["c"]) |> run 92 | assert data == [%{"b" => 6, "c" => 3}] 93 | end 94 | 95 | test "insert_at" do 96 | {:ok, %Record{data: data}} = [1,2,3] |> insert_at(1, 5) |> run 97 | assert data == [1,5,2,3] 98 | end 99 | 100 | test "splice_at" do 101 | {:ok, %Record{data: data}} = [1,2,3] |> splice_at(1, [5,6]) |> run 102 | assert data == [1,5,6,2,3] 103 | end 104 | 105 | test "delete_at" do 106 | {:ok, %Record{data: data}} = [1,2,3,4] |> delete_at(1) |> run 107 | assert data == [1,3,4] 108 | {:ok, %Record{data: data}} = [1,2,3,4] |> delete_at(1,3) |> run 109 | assert data == [1,4] 110 | end 111 | 112 | test "change_at" do 113 | {:ok, %Record{data: data}} = [1,2,3,4] |> change_at(1,7) |> run 114 | assert data == [1,7,3,4] 115 | end 116 | 117 | test "keys" do 118 | {:ok, %Record{data: data}} = %{a: 5, b: 6} |> keys |> run 119 | assert data == ["a", "b"] 120 | end 121 | 122 | test "values" do 123 | {:ok, %Record{data: data}} = %{a: 5, b: 6} |> values |> run 124 | assert data == [5, 6] 125 | end 126 | 127 | test "literal" do 128 | {:ok, %Record{data: data}} = %{ 129 | a: 5, 130 | b: %{ 131 | c: 6 132 | } 133 | } |> merge(%{b: literal(%{d: 7})}) |> run 134 | assert data == %{ 135 | "a" => 5, 136 | "b" => %{ 137 | "d" => 7 138 | } 139 | } 140 | end 141 | 142 | test "object" do 143 | {:ok, %Record{data: data}} = object(["a", 1, "b", 2]) |> run 144 | assert data == %{"a" => 1, "b" => 2} 145 | end 146 | end 147 | -------------------------------------------------------------------------------- /test/query/geospatial_adv_test.exs: -------------------------------------------------------------------------------- 1 | defmodule GeospatialAdvTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | @table_name "geo_test_table_1" 9 | setup_all do 10 | start_link 11 | table_create(@table_name) |> run 12 | table(@table_name) |> index_create("location", geo: true) |> run 13 | table(@table_name) |> index_wait("location") |> run 14 | on_exit fn -> 15 | start_link 16 | table_drop(@table_name) |> run 17 | end 18 | :ok 19 | end 20 | 21 | setup do 22 | table(@table_name) |> delete |> run 23 | :ok 24 | end 25 | 26 | test "get_intersecting" do 27 | table(@table_name) |> insert( 28 | %{location: point(0.001,0.001)} 29 | ) |> run 30 | table(@table_name) |> insert( 31 | %{location: point(0.001,0)} 32 | ) |> run 33 | {:ok, %{data: data}} = table(@table_name) |> get_intersecting( 34 | circle({0,0}, 5000), index: "location" 35 | ) |> run 36 | points = for x <- data, do: x["location"].coordinates 37 | assert Enum.sort(points) == [{0.001, 0}, {0.001,0.001}] 38 | end 39 | 40 | test "get_nearest" do 41 | table(@table_name) |> insert( 42 | %{location: point(0.001,0.001)} 43 | ) |> run 44 | table(@table_name) |> insert( 45 | %{location: point(0.001,0)} 46 | ) |> run 47 | {:ok, %Record{data: data}} = table(@table_name) |> get_nearest( 48 | point({0,0}), index: "location", max_dist: 5000000 49 | ) |> run 50 | assert Enum.count(data) == 2 51 | end 52 | end 53 | -------------------------------------------------------------------------------- /test/query/geospatial_test.exs: 
-------------------------------------------------------------------------------- 1 | defmodule GeospatialTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Pseudotypes.Geometry.Point 8 | alias RethinkDB.Pseudotypes.Geometry.Line 9 | alias RethinkDB.Pseudotypes.Geometry.Polygon 10 | 11 | setup_all do 12 | start_link 13 | :ok 14 | end 15 | 16 | test "circle" do 17 | {:ok, %Record{data: data}} = circle({1,1}, 5) |> run 18 | assert %Polygon{coordinates: [_h | []]} = data 19 | end 20 | 21 | test "circle with opts" do 22 | {:ok, %Record{data: data}} = circle({1,1}, 5, num_vertices: 100, fill: true) |> run 23 | assert %Polygon{coordinates: [_h |[]]} = data 24 | end 25 | test "distance" do 26 | {:ok, %Record{data: data}} = distance(point({1,1}), point({2,2})) |> run 27 | assert data == 156876.14940188665 28 | end 29 | 30 | test "fill" do 31 | {:ok, %Record{data: data}} = fill(line([{1,1}, {4,5}, {2,2}, {1,1}])) |> run 32 | assert data == %Polygon{coordinates: [[{1,1}, {4,5}, {2,2}, {1,1}]]} 33 | end 34 | 35 | test "geojson" do 36 | {:ok, %Record{data: data}} = geojson(%{coordinates: [1,1], type: "Point"}) |> run 37 | assert data == %Point{coordinates: {1,1}} 38 | end 39 | 40 | test "geojson with holes" do 41 | coords = [ square(0,0,10), square(1,1,1), square(4,4,1) ] 42 | {:ok, %Record{data: data}} = geojson(%{type: "Polygon", coordinates: coords}) |> run 43 | assert data == %Polygon{coordinates: coords} 44 | end 45 | 46 | defp square(x,y,s) do 47 | [{x,y}, {x+s,y}, {x+s,y+s}, {x,y+s}, {x,y}] 48 | end 49 | 50 | test "to_geojson" do 51 | {:ok, %Record{data: data}} = point({1,1}) |> to_geojson |> run 52 | assert data == %{"type" => "Point", "coordinates" => [1,1]} 53 | end 54 | 55 | # TODO: get_intersecting, get_nearest, includes, intersects 56 | test "point" do 57 | {:ok, %Record{data: data}} = point({1,1}) |> run 58 | assert data == %Point{coordinates: {1, 1}} 59 | end 60 | 61 | test "line" do 62 | {:ok, %Record{data: data}} = line([{1,1}, {4,5}]) |> run 63 | assert data == %Line{coordinates: [{1, 1}, {4,5}]} 64 | end 65 | 66 | 67 | test "includes" do 68 | {:ok, %Record{data: data}} = [circle({0,0}, 1000), circle({0.001,0}, 1000), circle({100,100}, 1)] |> includes( 69 | point(0,0) 70 | ) |> run 71 | assert Enum.count(data) == 2 72 | {:ok, %Record{data: data}} = circle({0,0}, 1000) |> includes(point(0,0)) |> run 73 | assert data == true 74 | {:ok, %Record{data: data}} = circle({0,0}, 1000) |> includes(point(80,80)) |> run 75 | assert data == false 76 | end 77 | 78 | test "intersects" do 79 | b = [ 80 | circle({0,0}, 1000), circle({0,0}, 1000), circle({80,80}, 1) 81 | ] |> intersects( 82 | circle({0,0}, 10) 83 | ) 84 | {:ok, %Record{data: data}} = b |> run 85 | assert Enum.count(data) == 2 86 | {:ok, %Record{data: data}} = circle({0,0}, 1000) |> intersects(circle({0,0}, 1)) |> run 87 | assert data == true 88 | {:ok, %Record{data: data}} = circle({0,0}, 1000) |> intersects(circle({80,80}, 1)) |> run 89 | assert data == false 90 | end 91 | 92 | test "polygon" do 93 | {:ok, %Record{data: data}} = polygon([{0,0}, {0,1}, {1,1}, {1,0}]) |> run 94 | assert data.coordinates == [[{0,0}, {0,1}, {1,1}, {1,0}, {0,0}]] 95 | end 96 | 97 | test "polygon_sub" do 98 | p1 = polygon([{0,0}, {0,1}, {1,1}, {1,0}]) 99 | p2 = polygon([{0.25,0.25}, {0.25,0.5}, {0.5,0.5}, {0.5,0.25}]) 100 | {:ok, %Record{data: data}} = p1 |> polygon_sub(p2) |> run 101 | assert data.coordinates == [[{0,0}, {0,1}, {1,1}, {1,0}, {0,0}], 
[{0.25,0.25}, {0.25,0.5}, {0.5,0.5}, {0.5,0.25}, {0.25,0.25}]] 102 | end 103 | end 104 | -------------------------------------------------------------------------------- /test/query/joins_test.exs: -------------------------------------------------------------------------------- 1 | defmodule JoinsTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Collection 8 | 9 | require RethinkDB.Lambda 10 | import RethinkDB.Lambda 11 | 12 | @table_name "joins_test_table_1" 13 | setup_all do 14 | start_link 15 | table_create(@table_name) |> run 16 | on_exit fn -> 17 | start_link 18 | table_drop(@table_name) |> run 19 | end 20 | :ok 21 | end 22 | 23 | setup do 24 | table(@table_name) |> delete |> run 25 | :ok 26 | end 27 | 28 | test "inner join arrays" do 29 | left = [%{a: 1, b: 2}, %{a: 2, b: 3}] 30 | right = [%{a: 1, c: 4}, %{a: 2, c: 6}] 31 | q = inner_join(left, right,lambda fn l, r -> 32 | l[:a] == r[:a] 33 | end) 34 | {:ok, %Record{data: data}} = run q 35 | assert data == [%{"left" => %{"a" => 1, "b" => 2}, "right" => %{"a" => 1, "c" => 4}}, 36 | %{"left" => %{"a" => 2, "b" => 3}, "right" => %{"a" => 2, "c" => 6}}] 37 | {:ok, %Record{data: data}} = q |> zip |> run 38 | assert data == [%{"a" => 1, "b" => 2, "c" => 4}, %{"a" => 2, "b" => 3, "c" => 6}] 39 | end 40 | 41 | test "outer join arrays" do 42 | left = [%{a: 1, b: 2}, %{a: 2, b: 3}] 43 | right = [%{a: 1, c: 4}] 44 | q = outer_join(left, right, lambda fn l, r -> 45 | l[:a] == r[:a] 46 | end) 47 | {:ok, %Record{data: data}} = run q 48 | assert data == [%{"left" => %{"a" => 1, "b" => 2}, "right" => %{"a" => 1, "c" => 4}}, 49 | %{"left" => %{"a" => 2, "b" => 3}}] 50 | {:ok, %Record{data: data}} = q |> zip |> run 51 | assert data == [%{"a" => 1, "b" => 2, "c" => 4}, %{"a" => 2, "b" => 3}] 52 | end 53 | 54 | test "eq join arrays" do 55 | table_create("test_1") |> run 56 | table_create("test_2") |> run 57 | table("test_1") |> insert([%{id: 3, a: 1, b: 2}, %{id: 2, a: 2, b: 3}]) |> run 58 | table("test_2") |> insert([%{id: 1, c: 4}]) |> run 59 | q = eq_join(table("test_1"), :a, table("test_2"), index: :id) 60 | {:ok, %Collection{data: data}} = run q 61 | {:ok, %Collection{data: data2}} = q |> zip |> run 62 | table_drop("test_1") |> run 63 | table_drop("test_2") |> run 64 | assert data == [ 65 | %{"left" => %{"id" => 3, "a" => 1, "b" => 2}, "right" => %{"id" => 1, "c" => 4}} 66 | ] 67 | assert data2 == [%{"id" => 1, "a" => 1, "b" => 2, "c" => 4}] 68 | end 69 | end 70 | -------------------------------------------------------------------------------- /test/query/math_logic_test.exs: -------------------------------------------------------------------------------- 1 | defmodule MathLogicTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link 10 | :ok 11 | end 12 | 13 | test "add scalars" do 14 | {:ok, %Record{data: data}} = add(1,2) |> run 15 | assert data == 3 16 | end 17 | 18 | test "add list of scalars" do 19 | {:ok, %Record{data: data}} = add([1,2]) |> run 20 | assert data == 3 21 | end 22 | 23 | test "concatenate two strings" do 24 | {:ok, %Record{data: data}} = add("hello ","world") |> run 25 | assert data == "hello world" 26 | end 27 | 28 | test "concatenate list of strings" do 29 | {:ok, %Record{data: data}} = add(["hello", " ", "world"]) |> run 30 | assert data == "hello world" 31 | end 32 | 33 | test "concatenate two arrays" do 34 | {:ok, 
%Record{data: data}} = add([1,2,3],[3,4,5]) |> run 35 | assert data == [1,2,3,3,4,5] 36 | end 37 | test "concatenate list of arrays" do 38 | {:ok, %Record{data: data}} = add([[1,2,3],[3,4,5],[5,6,7]]) |> run 39 | assert data == [1,2,3,3,4,5,5,6,7] 40 | end 41 | 42 | test "subtract two numbers" do 43 | {:ok, %Record{data: data}} = sub(5,2) |> run 44 | assert data == 3 45 | end 46 | 47 | test "subtract list of numbers" do 48 | {:ok, %Record{data: data}} = sub([9,3,1]) |> run 49 | assert data == 5 50 | end 51 | 52 | test "multiply two numbers" do 53 | {:ok, %Record{data: data}} = mul(5,2) |> run 54 | assert data == 10 55 | end 56 | 57 | test "multiply list of numbers" do 58 | {:ok, %Record{data: data}} = mul([1,2,3,4,5]) |> run 59 | assert data == 120 60 | end 61 | 62 | test "create periodic array" do 63 | {:ok, %Record{data: data}} = mul(3, [1,2]) |> run 64 | assert data == [1,2,1,2,1,2] 65 | end 66 | 67 | test "divide two numbers" do 68 | {:ok, %Record{data: data}} = divide(6, 3) |> run 69 | assert data == 2 70 | end 71 | 72 | test "divide list of numbers" do 73 | {:ok, %Record{data: data}} = divide([12,3,2]) |> run 74 | assert data == 2 75 | end 76 | 77 | test "find remainder when dividing two numbers" do 78 | {:ok, %Record{data: data}} = mod(23, 4) |> run 79 | assert data == 3 80 | end 81 | 82 | test "logical and of two values" do 83 | {:ok, %Record{data: data}} = and_r(true, true) |> run 84 | assert data == true 85 | end 86 | 87 | test "logical and of list" do 88 | {:ok, %Record{data: data}} = and_r([true, true, false]) |> run 89 | assert data == false 90 | end 91 | 92 | test "logical or of two values" do 93 | {:ok, %Record{data: data}} = or_r(true, false) |> run 94 | assert data == true 95 | end 96 | 97 | test "logical or of list" do 98 | {:ok, %Record{data: data}} = or_r([false, false, false]) |> run 99 | assert data == false 100 | end 101 | 102 | test "two numbers are equal" do 103 | {:ok, %Record{data: data}} = eq(1, 1) |> run 104 | assert data == true 105 | {:ok, %Record{data: data}} = eq(2, 1) |> run 106 | assert data == false 107 | end 108 | 109 | test "values in a list are equal" do 110 | {:ok, %Record{data: data}} = eq([1, 1, 1]) |> run 111 | assert data == true 112 | {:ok, %Record{data: data}} = eq([1, 2, 1]) |> run 113 | assert data == false 114 | end 115 | 116 | test "two numbers are not equal" do 117 | {:ok, %Record{data: data}} = ne(1, 1) |> run 118 | assert data == false 119 | {:ok, %Record{data: data}} = ne(2, 1) |> run 120 | assert data == true 121 | end 122 | 123 | test "values in a list are not equal" do 124 | {:ok, %Record{data: data}} = ne([1, 1, 1]) |> run 125 | assert data == false 126 | {:ok, %Record{data: data}} = ne([1, 2, 1]) |> run 127 | assert data == true 128 | end 129 | 130 | test "a number is less than the other" do 131 | {:ok, %Record{data: data}} = lt(2, 1) |> run 132 | assert data == false 133 | {:ok, %Record{data: data}} = lt(1, 2) |> run 134 | assert data == true 135 | end 136 | 137 | test "values in a list less than the next" do 138 | {:ok, %Record{data: data}} = lt([1, 4, 2]) |> run 139 | assert data == false 140 | {:ok, %Record{data: data}} = lt([1, 4, 5]) |> run 141 | assert data == true 142 | end 143 | 144 | test "a number is less than or equal to the other" do 145 | {:ok, %Record{data: data}} = le(1, 1) |> run 146 | assert data == true 147 | {:ok, %Record{data: data}} = le(1, 2) |> run 148 | assert data == true 149 | end 150 | 151 | test "values in a list less than or equal to the next" do 152 | {:ok, %Record{data: data}} = le([1, 4, 2]) |> run 
153 | assert data == false 154 | {:ok, %Record{data: data}} = le([1, 4, 4]) |> run 155 | assert data == true 156 | end 157 | 158 | test "a number is greater than the other" do 159 | {:ok, %Record{data: data}} = gt(1, 1) |> run 160 | assert data == false 161 | {:ok, %Record{data: data}} = gt(2, 1) |> run 162 | assert data == true 163 | end 164 | 165 | test "values in a list greater than the next" do 166 | {:ok, %Record{data: data}} = gt([1, 4, 2]) |> run 167 | assert data == false 168 | {:ok, %Record{data: data}} = gt([10, 4, 2]) |> run 169 | assert data == true 170 | end 171 | 172 | test "a number is greater than or equal to the other" do 173 | {:ok, %Record{data: data}} = ge(1, 1) |> run 174 | assert data == true 175 | {:ok, %Record{data: data}} = ge(2, 1) |> run 176 | assert data == true 177 | end 178 | 179 | test "values in a list greater than or equal to the next" do 180 | {:ok, %Record{data: data}} = ge([1, 4, 2]) |> run 181 | assert data == false 182 | {:ok, %Record{data: data}} = ge([10, 4, 4]) |> run 183 | assert data == true 184 | end 185 | 186 | test "not operator" do 187 | {:ok, %Record{data: data}} = not_r(true) |> run 188 | assert data == false 189 | end 190 | 191 | test "random operator" do 192 | {:ok, %Record{data: data}} = random |> run 193 | assert data >= 0.0 && data <= 1.0 194 | {:ok, %Record{data: data}} = random(100) |> run 195 | assert is_integer(data) && data >= 0 && data <= 100 196 | {:ok, %Record{data: data}} = random(100.0) |> run 197 | assert is_float(data) && data >= 0.0 && data <= 100.0 198 | {:ok, %Record{data: data}} = random(50, 100) |> run 199 | assert is_integer(data) && data >= 50 && data <= 100 200 | {:ok, %Record{data: data}} = random(50, 100.0) |> run 201 | assert is_float(data) && data >= 50.0 && data <= 100.0 202 | end 203 | 204 | test "round" do 205 | {:ok, %Record{data: data}} = round_r(0.3) |> run 206 | assert data == 0 207 | {:ok, %Record{data: data}} = round_r(0.6) |> run 208 | assert data == 1 209 | end 210 | 211 | test "ceil" do 212 | {:ok, %Record{data: data}} = ceil(0.3) |> run 213 | assert data == 1 214 | {:ok, %Record{data: data}} = ceil(0.6) |> run 215 | assert data == 1 216 | end 217 | 218 | test "floor" do 219 | {:ok, %Record{data: data}} = floor(0.3) |> run 220 | assert data == 0 221 | {:ok, %Record{data: data}} = floor(0.6) |> run 222 | assert data == 0 223 | end 224 | end 225 | -------------------------------------------------------------------------------- /test/query/selection_test.exs: -------------------------------------------------------------------------------- 1 | defmodule SelectionTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | require RethinkDB.Lambda 9 | import RethinkDB.Lambda 10 | 11 | @table_name "selection_test_table_1" 12 | setup_all do 13 | start_link 14 | table_create(@table_name) |> run 15 | on_exit fn -> 16 | start_link 17 | table_drop(@table_name) |> run 18 | end 19 | :ok 20 | end 21 | 22 | setup do 23 | table(@table_name) |> delete |> run 24 | :ok 25 | end 26 | 27 | test "get" do 28 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 29 | {:ok, %Record{data: data}} = table(@table_name) |> get("a") |> run 30 | assert data == %{"a" => 5, "id" => "a"} 31 | end 32 | 33 | test "get all" do 34 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 35 | table(@table_name) |> insert(%{id: "b", a: 5}) |> run 36 | {:ok, data} = table(@table_name) |> get_all(["a", "b"]) |> run 37 | assert Enum.sort(Enum.to_list(data)) == [ 38 | 
%{"a" => 5, "id" => "a"}, 39 | %{"a" => 5, "id" => "b"} 40 | ] 41 | end 42 | 43 | test "get all with index" do 44 | table(@table_name) |> insert(%{id: "a", other_id: "c"}) |> run 45 | table(@table_name) |> insert(%{id: "b", other_id: "d"}) |> run 46 | table(@table_name) |> index_create("other_id") |> run 47 | table(@table_name) |> index_wait("other_id") |> run 48 | {:ok, data} = table(@table_name) |> get_all(["c", "d"], index: "other_id") |> run 49 | assert Enum.sort(Enum.to_list(data)) == [ 50 | %{"id" => "a", "other_id" => "c"}, 51 | %{"id" => "b", "other_id" => "d"} 52 | ] 53 | end 54 | 55 | test "get all should be able to accept an empty list" do 56 | {:ok, result} = table(@table_name) |> get_all([]) |> run 57 | assert result.data == [] 58 | end 59 | 60 | test "between" do 61 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 62 | table(@table_name) |> insert(%{id: "b", a: 5}) |> run 63 | table(@table_name) |> insert(%{id: "c", a: 5}) |> run 64 | {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> between("b", "d") |> run 65 | assert Enum.count(data) == 2 66 | {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> between(minval, maxval) |> run 67 | assert Enum.count(data) == 3 68 | end 69 | 70 | test "filter" do 71 | table(@table_name) |> insert(%{id: "a", a: 5}) |> run 72 | table(@table_name) |> insert(%{id: "b", a: 5}) |> run 73 | table(@table_name) |> insert(%{id: "c", a: 6}) |> run 74 | {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> filter(%{a: 6}) |> run 75 | assert Enum.count(data) == 1 76 | {:ok, %RethinkDB.Collection{data: data}} = table(@table_name) |> filter( 77 | lambda fn (x) -> 78 | x["a"] == 5 79 | end) |> run 80 | assert Enum.count(data) == 2 81 | end 82 | end 83 | -------------------------------------------------------------------------------- /test/query/string_manipulation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule StringManipulationTest do 2 | use ExUnit.Case, async: true 3 | import RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | {:ok, pid} = RethinkDB.Connection.start_link 10 | {:ok, %{conn: pid}} 11 | end 12 | 13 | test "match a string", context do 14 | {:ok, %Record{data: data}} = "hello world" |> match("hello") |> run(context.conn) 15 | assert data == %{"end" => 5, "groups" => [], "start" => 0, "str" => "hello"} 16 | end 17 | 18 | test "match a regex", context do 19 | {:ok, %Record{data: data}} = "hello world" |> match(~r(hello)) |> run(context.conn) 20 | assert data == %{"end" => 5, "groups" => [], "start" => 0, "str" => "hello"} 21 | end 22 | 23 | test "split a string", context do 24 | {:ok, %Record{data: data}} = "abracadabra" |> split |> run(context.conn) 25 | assert data == ["abracadabra"] 26 | {:ok, %Record{data: data}} = "abra-cadabra" |> split("-") |> run(context.conn) 27 | assert data == ["abra", "cadabra"] 28 | {:ok, %Record{data: data}} = "a-bra-ca-da-bra" |> split("-", 2) |> run(context.conn) 29 | assert data == ["a", "bra", "ca-da-bra"] 30 | end 31 | 32 | test "upcase", context do 33 | {:ok, %Record{data: data}} = "hi" |> upcase |> run(context.conn) 34 | assert data == "HI" 35 | end 36 | test "downcase", context do 37 | {:ok, %Record{data: data}} = "Hi" |> downcase |> run(context.conn) 38 | assert data == "hi" 39 | end 40 | end 41 | -------------------------------------------------------------------------------- /test/query/table_db_test.exs: 
-------------------------------------------------------------------------------- 1 | defmodule TableDBTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | 8 | setup_all do 9 | start_link 10 | :ok 11 | end 12 | 13 | @db_name "table_db_test_db_1" 14 | @table_name "table_db_test_table_1" 15 | 16 | test "tables with specific database" do 17 | db_create(@db_name) |> run 18 | on_exit fn -> 19 | db_drop(@db_name) |> run 20 | end 21 | 22 | q = db(@db_name) |> table_create(@table_name) 23 | {:ok, %Record{data: %{"tables_created" => 1}}} = run q 24 | 25 | q = db(@db_name) |> table_list 26 | {:ok, %Record{data: tables}} = run q 27 | assert Enum.member?(tables, @table_name) 28 | 29 | q = db(@db_name) |> table_drop(@table_name) 30 | {:ok, %Record{data: %{"tables_dropped" => 1}}} = run q 31 | 32 | q = db(@db_name) |> table_list 33 | {:ok, %Record{data: tables}} = run q 34 | assert !Enum.member?(tables, @table_name) 35 | 36 | q = db(@db_name) |> table_create(@table_name, primary_key: "not_id") 37 | {:ok, %Record{data: result}} = run q 38 | %{"config_changes" => [%{"new_val" => %{"primary_key" => primary_key}}]} = result 39 | assert primary_key == "not_id" 40 | end 41 | end 42 | -------------------------------------------------------------------------------- /test/query/table_index_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TableIndexTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | alias RethinkDB.Record 6 | 7 | setup_all do 8 | start_link 9 | :ok 10 | end 11 | 12 | @table_name "table_index_test_table_1" 13 | setup do 14 | table_create(@table_name) |> run 15 | on_exit fn -> 16 | table_drop(@table_name) |> run 17 | end 18 | :ok 19 | end 20 | 21 | test "indexes" do 22 | {:ok, %Record{data: data}} = table(@table_name) |> index_create("hello") |> run 23 | assert data == %{"created" => 1} 24 | {:ok, %Record{data: data}} = table(@table_name) |> index_wait("hello") |> run 25 | assert [ 26 | %{"function" => _, "geo" => false, "index" => "hello", 27 | "multi" => false, "outdated" => false,"ready" => true} 28 | ] = data 29 | {:ok, %Record{data: data}} = table(@table_name) |> index_status("hello") |> run 30 | assert [ 31 | %{"function" => _, "geo" => false, "index" => "hello", 32 | "multi" => false, "outdated" => false,"ready" => true} 33 | ] = data 34 | {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run 35 | assert data == ["hello"] 36 | table(@table_name) |> index_rename("hello", "goodbye") |> run 37 | {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run 38 | assert data == ["goodbye"] 39 | table(@table_name) |> index_drop("goodbye") |> run 40 | {:ok, %Record{data: data}} = table(@table_name) |> index_list |> run 41 | assert data == [] 42 | end 43 | end 44 | -------------------------------------------------------------------------------- /test/query/table_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TableTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | alias RethinkDB.Record 6 | 7 | setup_all do 8 | start_link 9 | :ok 10 | end 11 | 12 | @table_name "table_test_table_1" 13 | 14 | test "tables" do 15 | table_drop(@table_name) |> run 16 | on_exit fn -> 17 | table_drop(@table_name) |> run 18 | end 19 | q = table_create(@table_name) 20 | {:ok, %Record{data: %{"tables_created" => 1}}} = 
run q 21 | 22 | q = table_list 23 | {:ok, %Record{data: tables}} = run q 24 | assert Enum.member?(tables, @table_name) 25 | 26 | q = table_drop(@table_name) 27 | {:ok, %Record{data: %{"tables_dropped" => 1}}} = run q 28 | 29 | q = table_list 30 | {:ok, %Record{data: tables}} = run q 31 | assert !Enum.member?(tables, @table_name) 32 | 33 | q = table_create(@table_name, primary_key: "not_id") 34 | {:ok, %Record{data: result}} = run q 35 | %{"config_changes" => [%{"new_val" => %{"primary_key" => primary_key}}]} = result 36 | assert primary_key == "not_id" 37 | end 38 | end 39 | -------------------------------------------------------------------------------- /test/query/transformation_test.exs: -------------------------------------------------------------------------------- 1 | defmodule TransformationTest do 2 | use ExUnit.Case, async: true 3 | 4 | use RethinkDB.Connection 5 | import RethinkDB.Query 6 | 7 | alias RethinkDB.Record 8 | 9 | require RethinkDB.Lambda 10 | import RethinkDB.Lambda 11 | 12 | setup_all do 13 | start_link 14 | :ok 15 | end 16 | 17 | test "map" do 18 | {:ok, %Record{data: data}} = map([1,2,3], lambda &(&1 + 1)) |> run 19 | assert data == [2,3,4] 20 | end 21 | 22 | test "with_fields" do 23 | {:ok, %Record{data: data}} = [ 24 | %{a: 5}, 25 | %{a: 6}, 26 | %{a: 7, b: 8} 27 | ] |> with_fields(["a","b"]) |> run 28 | assert data == [%{"a" => 7, "b" => 8}] 29 | end 30 | 31 | test "flat_map" do 32 | {:ok, %Record{data: data}} = [ 33 | [1,2,3], 34 | [4,5,6], 35 | [7,8,9] 36 | ] |> flat_map(lambda fn (x) -> 37 | x |> map(&(&1*2)) 38 | end) |> run 39 | assert data == [2,4,6,8,10,12,14,16,18] 40 | end 41 | 42 | test "order_by" do 43 | {:ok, %Record{data: data}} = [ 44 | %{a: 1}, 45 | %{a: 7}, 46 | %{a: 4}, 47 | %{a: 5}, 48 | %{a: 2} 49 | ] |> order_by("a") |> run 50 | assert data == [ 51 | %{"a" => 1}, 52 | %{"a" => 2}, 53 | %{"a" => 4}, 54 | %{"a" => 5}, 55 | %{"a" => 7} 56 | ] 57 | end 58 | 59 | test "order by descending attr" do 60 | data = [%{"rank" => 1}, %{"rank" => 2}, %{"rank" => 3}] 61 | 62 | q = data |> order_by(desc("rank")) 63 | {:ok, %{data: result}} = run(q) 64 | 65 | assert result == Enum.reverse(data) 66 | end 67 | 68 | test "skip" do 69 | {:ok, %Record{data: data}} = [1,2,3,4] |> skip(2) |> run 70 | assert data == [3,4] 71 | end 72 | 73 | test "limit" do 74 | {:ok, %Record{data: data}} = [1,2,3,4] |> limit(2) |> run 75 | assert data == [1,2] 76 | end 77 | 78 | test "slice" do 79 | {:ok, %Record{data: data}} = [1,2,3,4] |> slice(1,3) |> run 80 | assert data == [2,3] 81 | end 82 | 83 | test "nth" do 84 | {:ok, %Record{data: data}} = [1,2,3,4] |> nth(2) |> run 85 | assert data == 3 86 | end 87 | 88 | test "offsets_of" do 89 | {:ok, %Record{data: data}} = [1,2,3,1,4,1] |> offsets_of(1) |> run 90 | assert data == [0,3,5] 91 | end 92 | 93 | test "is_empty" do 94 | {:ok, %Record{data: data}} = [] |> is_empty |> run 95 | assert data == true 96 | {:ok, %Record{data: data}} = [1,2,3,1,4,1] |> is_empty |> run 97 | assert data == false 98 | end 99 | 100 | test "sample" do 101 | {:ok, %Record{data: data}} = [1,2,3,1,4,1] |> sample(2) |> run 102 | assert Enum.count(data) == 2 103 | end 104 | end 105 | -------------------------------------------------------------------------------- /test/query/writing_data_test.exs: -------------------------------------------------------------------------------- 1 | defmodule WritingDataTest do 2 | use ExUnit.Case, async: true 3 | use RethinkDB.Connection 4 | import RethinkDB.Query 5 | 6 | alias RethinkDB.Record 7 | alias RethinkDB.Collection 8 | 
9 | @table_name "writing_data_test_table_1" 10 | setup_all do 11 | start_link 12 | table_create(@table_name) |> run 13 | on_exit fn -> 14 | start_link 15 | table_drop(@table_name) |> run 16 | end 17 | :ok 18 | end 19 | 20 | setup do 21 | table(@table_name) |> delete |> run 22 | :ok 23 | end 24 | 25 | test "insert" do 26 | table_query = table(@table_name) 27 | q = insert(table_query, %{name: "Hello", attr: "World"}) 28 | {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q) 29 | 30 | {:ok, %Collection{data: [%{"id" => ^key, "name" => "Hello", "attr" => "World"}]}} = run(table_query) 31 | end 32 | 33 | test "insert multiple" do 34 | table_query = table(@table_name) 35 | 36 | q = insert(table_query, [%{name: "Hello"}, %{name: "World"}]) 37 | {:ok, %Record{data: %{"inserted" => 2}}} = run(q) 38 | 39 | {:ok, %Collection{data: data}} = run(table_query) 40 | assert Enum.map(data, &(&1["name"])) |> Enum.sort == ["Hello", "World"] 41 | end 42 | 43 | test "insert conflict options" do 44 | table_query = table(@table_name) 45 | 46 | q = insert(table_query, [%{name: "Hello", value: 1}]) 47 | {:ok, %Record{data: %{"generated_keys"=> [id], "inserted" => 1}}} = run(q) 48 | 49 | q = insert(table_query, [%{name: "Hello", id: id, value: 2}]) 50 | {:ok, %Record{data: %{"errors" => 1}}} = run(q) 51 | 52 | q = insert(table_query, [%{name: "World", id: id, value: 2}], %{conflict: "replace"}) 53 | {:ok, %Record{data: %{"replaced" => 1}}} = run(q) 54 | {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 2}]}} = run(table_query) 55 | 56 | q = insert(table_query, [%{id: id, value: 3}], %{conflict: "update"}) 57 | {:ok, %Record{data: %{"replaced" => 1}}} = run(q) 58 | {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 3}]}} = run(table_query) 59 | 60 | q = insert(table_query, [%{id: id, value: 3}], %{conflict: fn(_id, old, new) -> 61 | merge(old, %{value: add(get_field(old, "value"), get_field(new, "value"))}) end}) 62 | {:ok, %Record{data: %{"replaced" => 1}}} = run(q) 63 | {:ok, %Collection{data: [%{"id" => ^id, "name" => "World", "value" => 6}]}} = run(table_query) 64 | end 65 | 66 | test "update" do 67 | table_query = table(@table_name) 68 | q = insert(table_query, %{name: "Hello", attr: "World"}) 69 | {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q) 70 | 71 | record_query = table_query |> get(key) 72 | q = record_query |> update(%{name: "Hi"}) 73 | run q 74 | q = record_query 75 | {:ok, %Record{data: data}} = run q 76 | assert data == %{"id" => key, "name" => "Hi", "attr" => "World"} 77 | end 78 | 79 | test "replace" do 80 | table_query = table(@table_name) 81 | q = insert(table_query, %{name: "Hello", attr: "World"}) 82 | {:ok, %Record{data: %{"inserted" => 1, "generated_keys" => [key]}}} = run(q) 83 | 84 | record_query = table_query |> get(key) 85 | q = record_query |> replace(%{id: key, name: "Hi"}) 86 | run q 87 | q = record_query 88 | {:ok, %Record{data: data}} = run q 89 | assert data == %{"id" => key, "name" => "Hi"} 90 | end 91 | 92 | test "sync" do 93 | table_query = table(@table_name) 94 | q = table_query |> sync 95 | {:ok, %Record{data: data}} = run q 96 | assert data == %{"synced" => 1} 97 | end 98 | 99 | end 100 | -------------------------------------------------------------------------------- /test/query_test.exs: -------------------------------------------------------------------------------- 1 | defmodule QueryTest do 2 | use ExUnit.Case, async: true 3 | alias RethinkDB.Query 4 | alias RethinkDB.Record 5 | 
alias RethinkDB.Collection 6 | use RethinkDB.Connection 7 | import RethinkDB.Query 8 | require RethinkDB.Lambda 9 | 10 | @table_name "query_test_table_1" 11 | setup_all do 12 | start_link 13 | table_create(@table_name) |> run 14 | on_exit fn -> 15 | start_link 16 | table_drop(@table_name) |> run 17 | end 18 | :ok 19 | end 20 | 21 | setup do 22 | table(@table_name) |> delete |> run 23 | :ok 24 | end 25 | 26 | 27 | test "make_array" do 28 | array = [%{"name" => "hello"}, %{"name:" => "world"}] 29 | q = Query.make_array(array) 30 | {:ok, %Record{data: data}} = run(q) 31 | assert Enum.sort(data) == Enum.sort(array) 32 | end 33 | 34 | test "map" do 35 | table_query = table(@table_name) 36 | 37 | insert(table_query, [%{name: "Hello"}, %{name: "World"}]) |> run 38 | 39 | {:ok, %Collection{data: data}} = table(@table_name) 40 | |> map( RethinkDB.Lambda.lambda fn (el) -> 41 | el[:name] + " " + "with map" 42 | end) |> run 43 | assert Enum.sort(data) == ["Hello with map", "World with map"] 44 | end 45 | 46 | test "filter by map" do 47 | table_query = table(@table_name) 48 | 49 | insert(table_query, [%{name: "Hello"}, %{name: "World"}]) |> run 50 | 51 | {:ok, %Collection{data: data}} = table(@table_name) 52 | |> filter(%{name: "Hello"}) 53 | |> run 54 | assert Enum.map(data, &(&1["name"])) == ["Hello"] 55 | end 56 | 57 | test "filter by lambda" do 58 | table_query = table(@table_name) 59 | 60 | insert(table_query, [%{name: "Hello"}, %{name: "World"}]) |> run 61 | 62 | {:ok, %Collection{data: data}} = table(@table_name) 63 | |> filter(RethinkDB.Lambda.lambda fn (el) -> 64 | el[:name] == "Hello" 65 | end) 66 | |> run 67 | assert Enum.map(data, &(&1["name"])) == ["Hello"] 68 | end 69 | 70 | test "nested functions" do 71 | a = make_array([1,2,3]) |> map(fn (x) -> 72 | make_array([4,5,6]) |> map(fn (y) -> 73 | [x, y] 74 | end) 75 | end) 76 | {:ok, %{data: data}} = run(a) 77 | assert data == [ 78 | [[1,4], [1,5], [1,6]], 79 | [[2,4], [2,5], [2,6]], 80 | [[3,4], [3,5], [3,6]] 81 | ] 82 | end 83 | end 84 | -------------------------------------------------------------------------------- /test/test_helper.exs: -------------------------------------------------------------------------------- 1 | ExUnit.start(max_cases: 5) 2 | -------------------------------------------------------------------------------- /tester.exs: -------------------------------------------------------------------------------- 1 | import RethinkDB.Query 2 | 3 | c = RethinkDB.connect 4 | d = RethinkDB.connect 5 | f = RethinkDB.connect 6 | g = RethinkDB.connect 7 | 8 | q = 1..12 |> Enum.reduce("hi", fn (_, acc) -> 9 | add(acc, acc) 10 | end) 11 | 12 | 1..10 |> Enum.map(fn (_) -> 13 | {r, _} = :timer.tc(fn -> 14 | 1..10 |> Enum.map(fn (_) -> 15 | Task.async fn -> 16 | e = Enum.random([c,c,c,c]) 17 | q |> RethinkDB.run(e, :infinity) 18 | end 19 | end) |> Enum.map(&Task.await(&1, :infinity)) 20 | end) 21 | 22 | IO.inspect r 23 | end) 24 | --------------------------------------------------------------------------------
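The dump ends with `tester.exs`, a small ad-hoc script that times batches of queries issued concurrently over shared connections. As a reading aid, the sketch below spells out the same pattern with comments; it reuses only calls that already appear in that script (`RethinkDB.connect`, `RethinkDB.run/3` with an `:infinity` timeout, `Task.async`/`Task.await`, `:timer.tc`), while the variable names and the batch size of 50 are illustrative assumptions rather than part of the repository.

```elixir
# Sketch only: the concurrency pattern used in tester.exs, commented step by step.
import RethinkDB.Query

# Open one connection; every Task below issues its query on this same connection.
conn = RethinkDB.connect

# Build a deliberately heavy query by repeatedly concatenating a string server-side.
slow_query = Enum.reduce(1..12, "hi", fn _, acc -> add(acc, acc) end)

# Run a batch of 50 queries from separate Tasks and time the whole batch.
{usec, _results} =
  :timer.tc(fn ->
    1..50
    |> Enum.map(fn _ ->
      Task.async(fn -> RethinkDB.run(slow_query, conn, :infinity) end)
    end)
    |> Enum.map(&Task.await(&1, :infinity))
  end)

IO.puts("50 concurrent queries took #{usec / 1000} ms")
```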