46 | Tip callouts indicate handy summaries, recipes, or best practices.
47 |
48 |
49 |
50 | Advanced callouts provide additional information on corner cases or underlying mechanisms. Feel free to skip these on your first read-through---come back to them later for extra information.
51 |
52 |
53 |
54 | Warning callouts indicate common pitfalls and gotchas. Make sure you read these to avoid problems, and come back to them if you're having trouble getting your code to run.
55 |
56 |
--------------------------------------------------------------------------------
/src/pages/0-preface.md:
--------------------------------------------------------------------------------
1 | # Preface {-}
2 |
3 | ## What is Slick? {-}
4 |
5 | [Slick][link-slick] is a Scala library for working with relational databases.
6 | That means it allows you to model a schema, run queries, insert data, and update data.
7 |
8 | Using Slick, you can write queries in Scala, giving you type-checked database access.
9 | The style of queries makes working with a database similar to working with regular Scala collections.
10 |
11 | We've seen that developers using Slick for the first time often need help getting the most from it.
12 | In particular, you need to know a few key concepts, such as:
13 |
14 | - _queries_: which compose using combinators such as `map`, `flatMap`, and `filter`;
15 |
16 | - _actions_: the things you can run against a database, which themselves compose; and
17 |
18 | - _futures_: which are the result of actions, and also support a set of combinators.
19 |
20 | We've produced _Essential Slick_ as a guide for those who want to get started using Slick.
21 | This material is aimed at beginner-to-intermediate Scala developers. You need:
22 |
23 | * a working knowledge of Scala
24 | (we recommend [Essential Scala][link-essential-scala] or an equivalent book);
25 |
26 | * experience with relational databases
27 | (familiarity with concepts such as rows, columns, joins, indexes, SQL);
28 |
29 | * an installed JDK 8 or later, along with a programmer's text editor or IDE; and
30 |
31 | * the [sbt][link-sbt] build tool.
32 |
33 | The material presented focuses on Slick version 3.3. Examples use [H2][link-h2-home] as the relational database.
34 |
35 | ## How to Contact Us {-}
36 |
37 | You can provide feedback on this text via:
38 |
39 | * [issues][link-book-issues] and [pull requests][link-book-pr] on the [source repository][link-book-repo] for this text;
40 |
41 | * [our Gitter channel][link-underscore-gitter]; or
42 |
43 | * email to [hello@underscore.io][link-email-underscore] using the subject line of "Essential Slick".
44 |
45 | ## Getting help using Slick {-}
46 |
47 | If you have questions about using Slick, ask a question on the [Slick Gitter channel][link-slick-gitter] or use the ["slick" tag on Stack Overflow][link-slick-so].
48 |
49 |
50 |
--------------------------------------------------------------------------------
/src/pages/1-basics.md:
--------------------------------------------------------------------------------
1 | # Basics {#Basics}
2 |
3 | ## Orientation
4 |
5 | Slick is a Scala library for accessing relational databases using an interface similar to the Scala collections library. You can treat queries like collections, transforming and combining them with methods like `map`, `flatMap`, and `filter` before sending them to the database to fetch results. This is how we'll be working with Slick for the majority of this text.
6 |
7 | Standard Slick queries are written in plain Scala. These are _type safe_ expressions that benefit from compile time error checking. They also _compose_, allowing us to build complex queries from simple fragments before running them against the database. If writing queries in Scala isn't your style, you'll be pleased to know that Slick also allows you to write plain SQL queries.
8 |
9 | In addition to querying, Slick helps you with all the usual trappings of relational databases, including connecting to a database, creating a schema, setting up transactions, and so on. You can even drop down below Slick to deal with JDBC (Java Database Connectivity) directly, if that's something you're familiar with and find you need.
10 |
11 | This book provides a compact, no-nonsense guide to everything you need to know to use Slick in a commercial setting:
12 |
13 | - Chapter 1 provides an abbreviated overview of the library as a whole, demonstrating the fundamentals of data modelling, connecting to the database, and running queries.
14 | - Chapter 2 covers basic select queries, introducing Slick's query language and delving into some of the details of type inference and type checking.
15 | - Chapter 3 covers queries for inserting, updating, and deleting data.
16 | - Chapter 4 looks at actions and how you combine multiple actions together.
17 | - Chapter 5 discusses data modelling, including defining custom column and table types.
18 | - Chapter 6 explores advanced select queries, including joins and aggregates.
19 | - Chapter 7 provides a brief overview of _Plain SQL_ queries---a useful tool when you need fine control over the SQL sent to your database.
20 |
21 |
22 | **Slick isn't an ORM**
23 |
24 | If you're familiar with other database libraries such as [Hibernate][link-hibernate] or [Active Record][link-active-record], you might expect Slick to be an _Object-Relational Mapping (ORM)_ tool. It is not, and it's best not to think of Slick in this way.
25 |
26 | ORMs attempt to map object oriented data models onto relational database backends. By contrast, Slick provides a more database-like set of tools such as queries, rows and columns. We're not going to argue the pros and cons of ORMs here, but if this is an area that interests you, take a look at the [Coming from ORM to Slick][link-ref-orm] article in the Slick manual.
27 |
28 | If you aren't familiar with ORMs, congratulations. You already have one less thing to worry about!
29 |
30 |
31 |
32 | ## Running the Examples and Exercises
33 |
34 | The aim of this first chapter is to provide a high-level overview of the core concepts involved in Slick, and get you up and running with a simple end-to-end example. You can grab this example now by cloning the Git repo of exercises for this book:
35 |
36 | ```bash
37 | bash$ git clone git@github.com:underscoreio/essential-slick-code.git
38 | Cloning into 'essential-slick-code'...
39 |
40 | bash$ cd essential-slick-code
41 |
42 | bash$ ls -1
43 | README.md
44 | chapter-01
45 | chapter-02
46 | chapter-03
47 | chapter-04
48 | chapter-05
49 | chapter-06
50 | chapter-07
51 | ```
52 |
53 | Each chapter of the book is associated with a separate sbt project that provides a combination of examples and exercises. We've bundled everything you need to run sbt in the directory for each chapter.
54 |
55 | We'll be using a running example of a chat application similar to _Slack_, _Gitter_, or _IRC_. The app will grow and evolve as we proceed through the book. By the end it will have users, messages, and rooms, all modelled using tables, relationships, and queries.
56 |
57 | For now, we will start with a simple conversation between two famous celebrities. Change to the `chapter-01` directory now, use the `sbt` command to start sbt, and compile and run the example to see what happens:
58 |
59 | ```bash
60 | bash$ cd chapter-01
61 |
62 | bash$ sbt
63 | # sbt log messages...
64 |
65 | > compile
66 | # More sbt log messages...
67 |
68 | > run
69 | Creating database table
70 |
71 | Inserting test data
72 |
73 | Selecting all messages:
74 | Message("Dave","Hello, HAL. Do you read me, HAL?",1)
75 | Message("HAL","Affirmative, Dave. I read you.",2)
76 | Message("Dave","Open the pod bay doors, HAL.",3)
77 | Message("HAL","I'm sorry, Dave. I'm afraid I can't do that.",4)
78 |
79 | Selecting only messages from HAL:
80 | Message("HAL","Affirmative, Dave. I read you.",2)
81 | Message("HAL","I'm sorry, Dave. I'm afraid I can't do that.",4)
82 | ```
83 |
84 | If you get output similar to the above, congratulations! You're all set up and ready to run with the examples and exercises throughout the rest of this book. If you encounter any errors, let us know on our [Gitter channel][link-underscore-gitter] and we'll do what we can to help out.
85 |
86 |
87 | **New to sbt?**
88 |
89 | The first time you run sbt, it will download a lot of library dependencies from the Internet and cache them on your hard drive. This means two things:
90 |
91 | - you need a working Internet connection to get started; and
92 | - the first `compile` command you issue could take a while to complete.
93 |
94 | If you haven't used sbt before, you may find the [sbt Getting Started Guide][link-sbt-tutorial] useful.
95 |
96 |
97 |
98 | ## Working Interactively in the sbt Console
99 |
100 | Slick queries run asynchronously as `Future` values.
101 | These are fiddly to work with in the Scala REPL, but we do want you to be able to explore Slick via the REPL.
102 | So to get you up to speed quickly,
103 | the example projects define an `exec` method and import the base requirements to run examples from the console.
104 |
105 | You can see this by starting `sbt` and then running the `console` command,
106 | which will give output similar to this:
107 |
108 | ```scala
109 | > console
110 | [info] Starting scala interpreter...
111 | [info]
112 | Welcome to Scala 2.12.1 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112).
113 | Type in expressions for evaluation. Or try :help.
114 |
115 | scala> import slick.jdbc.H2Profile.api._
116 | import Example._
117 | import scala.concurrent.duration._
118 | import scala.concurrent.Await
119 | import scala.concurrent.ExecutionContext.Implicits.global
120 | db: slick.jdbc.H2Profile.backend.Database = slick.jdbc.JdbcBackend$DatabaseDef@ac9a820
121 | exec: [T](program: slick.jdbc.H2Profile.api.DBIO[T])T
122 | res0: Option[Int] = Some(4)
123 | scala>
124 | ```
125 |
126 | Our `exec` helper runs an action and waits for the result.
127 | There is a complete explanation of `exec` and these imports later in the chapter.
128 | For now, here's a small example which fetches all the `message` rows:
129 |
130 | ```scala
131 | exec(messages.result)
132 | // res1: Seq[Example.MessageTable#TableElementType] =
133 | // Vector(Message(Dave,Hello, HAL. Do you read me, HAL?,1),
134 | // Message(HAL,Affirmative, Dave. I read you.,2),
135 | // Message(Dave,Open the pod bay doors, HAL.,3),
136 | // Message(HAL,I'm sorry, Dave. I'm afraid I can't do that.,4))
137 | ```
138 |
139 | But we're getting ahead of ourselves.
140 | We'll work through building up queries and running them, and using `exec`, as we work through this chapter.
141 | If the above works for you, great---you have a development environment set up and ready to go.
142 |
143 | ## Example: A Sequel Odyssey
144 |
145 | The test application we saw above creates an in-memory database using [H2][link-h2-home], creates a single table, populates it with test data, and then runs some example queries. The rest of this section will walk you through the code and provide an overview of things to come. We'll reproduce the essential parts of the code in the text, but you can follow along in the codebase for the exercises as well.
146 |
147 |
148 | **Choice of Database**
149 |
150 | All of the examples in this book use the [H2][link-h2-home] database. H2 is written in Java and runs in-process beside our application code. We've picked H2 because it allows us to forgo any system administration and skip to writing Scala.
151 |
152 | You might prefer to use _MySQL_, _PostgreSQL_, or some other database---and you can. In [Appendix A](#altdbs) we point you at the changes you'll need to make to work with other databases. However, we recommend sticking with H2 for at least this first chapter so you can build confidence using Slick without running into database-specific complications.
153 |
154 |
155 |
156 | ### Library Dependencies
157 |
158 | Before diving into Scala code, let's look at the sbt configuration. You'll find this in `build.sbt` in the example:
159 |
160 | ```scala
161 | name := "essential-slick-chapter-01"
162 |
163 | version := "1.0.0"
164 |
165 | scalaVersion := "2.13.3"
166 |
167 | libraryDependencies ++= Seq(
168 | "com.typesafe.slick" %% "slick" % "3.3.3",
169 | "com.h2database" % "h2" % "1.4.200",
170 | "ch.qos.logback" % "logback-classic" % "1.2.3"
171 | )
172 | ```
173 |
174 | This file declares the minimum library dependencies for a Slick project:
175 |
176 | - Slick itself;
177 |
178 | - the H2 database; and
179 |
180 | - a logging library.
181 |
182 | If we were using a different database such as MySQL or PostgreSQL, we would replace the H2 dependency with the JDBC driver for that database.
183 |
184 | ### Importing Library Code
185 |
186 | Database management systems are not created equal. Different systems support different data types, different dialects of SQL, and different querying capabilities. To model these capabilities in a way that can be checked at compile time, Slick provides most of its API via a database-specific _profile_. For example, we access most of the Slick API for H2 via the following `import`:
187 |
188 | ```scala mdoc:silent
189 | import slick.jdbc.H2Profile.api._
190 | ```
191 |
192 | Slick makes heavy use of implicit conversions and extension methods, so we generally need to include this import anywhere we're working with queries or the database. [Chapter 5](#Modelling) looks at how you can keep a specific database profile out of your code until necessary.
193 |
194 | ### Defining our Schema
195 |
196 | Our first job is to tell Slick what tables we have in our database and how to map them onto Scala values and types. The most common representation of data in Scala is a case class, so we start by defining a `Message` class representing a row in our single example table:
197 |
198 | ```scala mdoc
199 | case class Message(
200 | sender: String,
201 | content: String,
202 | id: Long = 0L)
203 | ```
204 |
205 | Next we define a `Table` object, which corresponds to our database table and tells Slick how to map back and forth between database data and instances of our case class:
206 |
207 | ```scala mdoc
208 | class MessageTable(tag: Tag) extends Table[Message](tag, "message") {
209 |
210 | def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
211 | def sender = column[String]("sender")
212 | def content = column[String]("content")
213 |
214 | def * = (sender, content, id).mapTo[Message]
215 | }
216 | ```
217 |
218 | `MessageTable` defines three `column`s: `id`, `sender`, and `content`. It defines the names and types of these columns, and any constraints on them at the database level. For example, `id` is a column of `Long` values, which is also an auto-incrementing primary key.
219 |
220 | The `*` method provides a _default projection_ that maps between columns in the table and instances of our case class.
221 | Slick's `mapTo` macro creates a two-way mapping between the three columns and the three fields in `Message`.
222 |
223 | We'll cover projections and default projections in detail in [Chapter 5](#Modelling).
224 | For now, all we need to know is that this line allows us to query the database and get back `Messages` instead of tuples of `(String, String, Long)`.
225 |
226 | The `tag` on the first line is an implementation detail that allows Slick to manage multiple uses of the table in a single query.
227 | Think of it like a table alias in SQL. We don't need to provide tags in our user code---Slick takes care of them automatically.
228 |
229 | ### Example Queries
230 |
231 | Slick allows us to define and compose queries in advance of running them against the database. We start by defining a `TableQuery` object that represents a simple `SELECT *` style query on our message table:
232 |
233 | ```scala mdoc
234 | val messages = TableQuery[MessageTable]
235 | ```
236 |
237 | Note that we're not _running_ this query at the moment---we're simply defining it as a means to build other queries. For example, we can create a `SELECT * WHERE` style query using a combinator called `filter`:
238 |
239 | ```scala mdoc
240 | val halSays = messages.filter(_.sender === "HAL")
241 | ```
242 |
243 | Again, we haven't run this query yet---we've defined it as a building block for yet more queries. This demonstrates an important part of Slick's query language---it is made from _composable_ elements that permit a lot of valuable code re-use.
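
For example, a hypothetical follow-on query (not part of the example app) might narrow `halSays` even further, still without touching the database:

```scala
// A sketch: building yet another query on top of halSays.
// The "%pod bay doors%" pattern is purely illustrative.
val halSaysAboutDoors = halSays.filter(_.content like "%pod bay doors%")
```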
244 |
245 |
246 | **Lifted Embedding**
247 |
248 | If you're a fan of terminology, know that what we have discussed so far is called the _lifted embedding_ approach in Slick:
249 |
250 | - define data types to store row data (case classes, tuples, or other types);
251 | - define `Table` objects representing mappings between our data types and the database;
252 | - define `TableQueries` and combinators to build useful queries before we run them against the database.
253 |
254 | Lifted embedding is the standard way to work with Slick. We will discuss the other approach, called _Plain SQL querying_, in [Chapter 7](#PlainSQL).
255 |
256 |
257 |
258 | ### Configuring the Database
259 |
260 | We've written all of the code so far without connecting to the database. Now it's time to open a connection and run some SQL. We start by defining a `Database` object which acts as a factory for managing connections and transactions:
261 |
262 | ```scala mdoc
263 | val db = Database.forConfig("chapter01")
264 | ```
265 |
266 | The parameter to `Database.forConfig` determines which configuration to use from the `application.conf` file.
267 | This file is found in `src/main/resources`. It looks like this:
268 |
269 | ```
270 | chapter01 {
271 | driver = "org.h2.Driver"
272 | url = "jdbc:h2:mem:chapter01"
273 | keepAliveConnection = true
274 | connectionPool = disabled
275 | }
276 | ```
277 |
278 | This syntax comes from the [Typesafe Config][link-config] library, which is also used by Akka and the Play framework.
279 |
280 | The parameters we're providing are intended to configure the underlying JDBC layer.
281 | The `driver` parameter is the fully qualified class name of the JDBC driver for our chosen DBMS.
282 |
283 | The `url` parameter is the standard [JDBC connection URL][link-jdbc-connection-url],
284 | and in this case we're creating an in-memory database called `"chapter01"`.
285 |
286 | By default the H2 in-memory database is deleted when the last connection is closed.
287 | As we will be running multiple connections in our examples,
288 | we enable `keepAliveConnection` to keep the data around until our program completes.
289 |
290 | Slick manages database connections and transactions using auto-commit.
291 | We'll look at transactions in [Chapter 4](#combining).
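
As an aside, if you'd rather not use a configuration file, roughly the same settings can be supplied programmatically. Here's a sketch, assuming the H2 settings above:

```scala
// A sketch equivalent to the "chapter01" configuration, built in code
// using Database.forURL from Slick's JDBC backend.
val dbFromUrl = Database.forURL(
  url    = "jdbc:h2:mem:chapter01",
  driver = "org.h2.Driver",
  keepAliveConnection = true
)
```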
292 |
293 |
294 | **JDBC**
295 |
296 | If you don't have a background working with Java, you may not have heard of Java Database Connectivity (JDBC). It's a specification for accessing databases in a
297 | vendor-neutral way. That is, it aims to be independent of the specific database you are connecting to.
298 |
299 | The specification is mirrored by a library implemented for each database you want to connect to. This library is called the _JDBC driver_.
300 |
301 | JDBC works with _connection strings_, which are URLs like the one above that tell the driver where your database is and how to connect to it (e.g. by providing login credentials).
302 |
303 |
304 |
305 | ### Creating the Schema
306 |
307 | Now that we have a database configured as `db`, we can use it.
308 |
309 | Let's start with a `CREATE` statement for `MessageTable`, which we build using methods of our `TableQuery` object, `messages`. The Slick method `schema` gets the schema description. We can see what that would be via the `createStatements` method:
310 |
311 | ```scala mdoc
312 | messages.schema.createStatements.mkString
313 | ```
314 |
315 | But we've not sent this to the database yet. We've just printed the statement, to check it is what we think it should be.
316 |
317 | In Slick, what we run against the database is an _action_. This is how we create an action for the `messages` schema:
318 |
319 | ```scala mdoc
320 | val action: DBIO[Unit] = messages.schema.create
321 | ```
322 |
323 | The result of this `messages.schema.create` expression is a `DBIO[Unit]`. This is an object representing a DB action that, when run, completes with a result of type `Unit`. Anything we run against a database is a `DBIO[T]` (or a `DBIOAction`, more generally). This includes queries, updates, schema alterations, and so on.
324 |
325 |
326 | **DBIO and DBIOAction**
327 |
328 | In this book we will talk about actions as having the type `DBIO[T]`.
329 |
330 | This is a simplification. The more general type is `DBIOAction`, and specifically for this example, it is a `DBIOAction[Unit, NoStream, Effect.Schema]`. The details of all of this we will get to later in the book.
331 |
332 | But `DBIO[T]` is a type alias supplied by Slick, and is perfectly fine to use.
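
As a sketch, here is the same schema action annotated with its fully general type. The `Effect.Schema` parameter marks it as a schema-changing action:

```scala
// The DBIO alias expands to DBIOAction. For the schema action above,
// the fully general type looks like this:
import slick.dbio.{DBIOAction, NoStream, Effect}

val schemaAction: DBIOAction[Unit, NoStream, Effect.Schema] =
  messages.schema.create
```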
333 |
334 |
335 |
336 | Let's run this action:
337 |
338 | ```scala mdoc
339 | import scala.concurrent.Future
340 | val future: Future[Unit] = db.run(action)
341 | ```
342 |
343 | The result of `run` is a `Future[T]`, where `T` is the type of result returned by the database. Creating a schema is a side-effecting operation so the result type is `Future[Unit]`. This matches the type `DBIO[Unit]` of the action we started with.
344 |
345 | `Future`s are asynchronous. That's to say, they are placeholders for values that will eventually appear. We say that a future _completes_ at some point. In production code, futures allow us to chain together computations without blocking to wait for a result. However, in simple examples like this we can block until our action completes:
346 |
347 | ```scala mdoc
348 | import scala.concurrent.Await
349 | import scala.concurrent.duration._
350 | val result = Await.result(future, 2.seconds)
351 | ```
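
As a sketch of the non-blocking style (not part of the example app), we could instead transform the future and let it complete in its own time:

```scala
// A sketch: chaining a computation onto the Future instead of blocking.
// Future#map needs an ExecutionContext in scope.
import scala.concurrent.ExecutionContext.Implicits.global

val schemaCreated: Future[String] =
  db.run(action).map(_ => "schema created")
```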
352 |
353 | ### Inserting Data
354 |
355 | Once our table is set up, we need to insert some test data. We'll define a helper method to create a few test `Messages` for demonstration purposes:
356 |
357 | ```scala mdoc
358 | def freshTestData = Seq(
359 | Message("Dave", "Hello, HAL. Do you read me, HAL?"),
360 | Message("HAL", "Affirmative, Dave. I read you."),
361 | Message("Dave", "Open the pod bay doors, HAL."),
362 | Message("HAL", "I'm sorry, Dave. I'm afraid I can't do that.")
363 | )
364 | ```
365 |
366 | The insert of this test data is an action:
367 |
368 | ```scala mdoc
369 | val insert: DBIO[Option[Int]] = messages ++= freshTestData
370 | ```
371 |
372 | The `++=` method of `messages` accepts a sequence of `Message` objects and translates them to a bulk `INSERT` query (`freshTestData` is a regular Scala `Seq[Message]`).
373 | We run the `insert` via `db.run`, and when the future completes our table is populated with data:
374 |
375 | ```scala mdoc
376 | val insertAction: Future[Option[Int]] = db.run(insert)
377 | ```
378 |
379 | The result of an insert operation is the number of rows inserted.
380 | The `freshTestData` contains four messages, so in this case the result is `Some(4)` when the future completes:
381 |
382 | ```scala mdoc
383 | val rowCount = Await.result(insertAction, 2.seconds)
384 | ```
385 |
386 | The result is optional because the underlying Java APIs do not guarantee a count of rows for batch inserts---some databases simply return `None`.
387 | We discuss single and batch inserts and updates further in [Chapter 3](#Modifying).
388 |
389 | ### Selecting Data
390 |
391 | Now our database has a few rows in it, we can start selecting data. We do this by taking a query, such as `messages` or `halSays`, and turning it into an action via the `result` method:
392 |
393 | ```scala mdoc
394 | val messagesAction: DBIO[Seq[Message]] = messages.result
395 |
396 | val messagesFuture: Future[Seq[Message]] = db.run(messagesAction)
397 |
398 | val messagesResults = Await.result(messagesFuture, 2.seconds)
399 | ```
400 |
401 | ```scala mdoc:invisible
402 | assert(messagesResults.length == 4, "Expected 4 results")
403 | ```
404 |
405 | We can see the SQL issued to H2 using the `statements` method on the action:
406 |
407 | ```scala mdoc
408 | val sql = messages.result.statements.mkString
409 | ```
410 |
411 | ```scala mdoc:invisible
412 | assert(sql == """select "sender", "content", "id" from "message"""", s"Expected: $sql")
413 | ```
414 |
415 |
416 | **The `exec` Helper Method**
417 |
418 | In our applications we should avoid blocking on `Future`s whenever possible.
419 | However, in the examples in this book we'll be making heavy use of `Await.result`.
420 | We will introduce a helper method called `exec` to make the examples easier to read:
421 |
422 | ```scala mdoc
423 | def exec[T](action: DBIO[T]): T =
424 | Await.result(db.run(action), 2.seconds)
425 | ```
426 |
427 | All `exec` does is run the supplied action and wait for the result.
428 | For example, to run a select query we can write:
429 |
430 | ```scala
431 | exec(messages.result)
432 | ```
433 |
434 | Use of `Await.result` is strongly discouraged in production code.
435 | Many web frameworks provide direct means of working with `Future`s without blocking.
436 | In these cases, the best approach is simply to transform the `Future` query result
437 | to a `Future` of an HTTP response and send that to the client.
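
For example, here is a rough sketch of that non-blocking style, where the "response" is just a hypothetical string:

```scala
// Hypothetical sketch: map the query result to a response body
// instead of blocking on it.
import scala.concurrent.ExecutionContext.Implicits.global

val responseBody: Future[String] =
  db.run(messages.result).map(msgs => msgs.map(_.content).mkString("\n"))
```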
438 |
439 |
440 |
441 | If we want to retrieve a subset of the messages in our table,
442 | we can run a modified version of our query.
443 | For example, calling `filter` on `messages` creates a modified query with
444 | a `WHERE` expression that retrieves the expected rows:
445 |
446 | ```scala mdoc
447 | messages.filter(_.sender === "HAL").result.statements.mkString
448 | ```
449 |
450 | To run this query, we convert it to an action using `result` and pass it to `exec`,
451 | which runs it against the database via `db.run` and awaits the final result:
452 |
453 | ```scala mdoc
454 | exec(messages.filter(_.sender === "HAL").result)
455 | ```
456 |
457 | We actually generated this query earlier and stored it in the variable `halSays`.
458 | We can get exactly the same results from the database by running this variable instead:
459 |
460 | ```scala mdoc
461 | exec(halSays.result)
462 | ```
463 |
464 | Notice that we created our original `halSays` before connecting to the database.
465 | This demonstrates perfectly the notion of composing a query from small parts and running it later on.
466 |
467 | We can even stack modifiers to create queries with multiple additional clauses.
468 | For example, we can `map` over the query to retrieve a subset of the columns.
469 | This modifies the `SELECT` clause in the SQL and the return type of the `result`:
470 |
471 | ```scala mdoc
472 | halSays.map(_.id).result.statements.mkString
473 |
474 | exec(halSays.map(_.id).result)
475 | ```
476 |
477 | ### Combining Queries with For Comprehensions
478 |
479 | `Query` is a _monad_. It implements the methods `map`, `flatMap`, `filter`, and `withFilter`, making it compatible with Scala for comprehensions.
480 | For example, you will often see Slick queries written in this style:
481 |
482 | ```scala mdoc
483 | val halSays2 = for {
484 | message <- messages if message.sender === "HAL"
485 | } yield message
486 | ```
487 |
488 | Remember that for comprehensions are aliases for chains of method calls.
489 | All we are doing here is building a query with a `WHERE` clause on it.
490 | We don't touch the database until we execute the query:
491 |
492 | ```scala mdoc
493 | exec(halSays2.result)
494 | ```
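
For reference, here is a rough desugaring of the for comprehension above into the equivalent method-call chain:

```scala
// A sketch of what the compiler produces for halSays2:
val halSays2Desugared =
  messages.withFilter(_.sender === "HAL").map(m => m)
```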
495 |
496 | ### Actions Combine
497 |
498 | Like `Query`, `DBIOAction` is also a monad. It implements the same methods described above, and shares the same compatibility with for comprehensions.
499 |
500 | We can combine the actions that create the schema, insert the data, and query the results into a single action. We can do this before we have a database connection, and we can run the combined action like any other.
501 | To do this, Slick provides a number of useful action combinators. We can use `andThen`, for example:
502 |
503 | ```scala mdoc
504 | val actions: DBIO[Seq[Message]] = (
505 | messages.schema.create andThen
506 | (messages ++= freshTestData) andThen
507 | halSays.result
508 | )
509 | ```
510 |
511 | What `andThen` does is combine two actions so that the result of the first action is thrown away.
512 | The end result of the above `actions` is the last action in the `andThen` chain.
513 |
514 | If you want to get funky, `>>` is another name for `andThen`:
515 |
516 | ```scala mdoc
517 | val sameActions: DBIO[Seq[Message]] = (
518 | messages.schema.create >>
519 | (messages ++= freshTestData) >>
520 | halSays.result
521 | )
522 | ```
523 |
524 | Combining actions is an important feature of Slick.
525 | For example, one reason for combining actions is to wrap them inside a transaction.
526 | In [Chapter 4](#combining) we'll see this, and also that actions can be composed with for comprehensions, just like queries.
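
As a taste of what's to come, here's a hedged sketch of composing actions with a for comprehension (it needs an `ExecutionContext` in scope for `flatMap`):

```scala
// A sketch only, covered properly in Chapter 4.
import scala.concurrent.ExecutionContext.Implicits.global

val populateThenCount: DBIO[Int] = for {
  _     <- messages ++= freshTestData
  count <- messages.length.result
} yield count
```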
527 |
528 |
529 | **Queries, Actions, Futures... Oh My!**
530 |
531 | The difference between queries, actions, and futures is a big point of confusion for newcomers to Slick 3. The three types share many properties: they all have methods like `map`, `flatMap`, and `filter`, they are all compatible with for comprehensions, and they all flow seamlessly into one another through methods in the Slick API. However, their semantics are quite different:
532 |
533 | - `Query` is used to build SQL for a single query. Calls to `map` and `filter` add or modify clauses in the SQL, but only one query is created.
534 |
535 | - `DBIOAction` is used to build sequences of SQL queries. Calls to `map` and `filter` chain queries together and transform their results once they are retrieved from the database. `DBIOAction` is also used to delineate transactions.
536 |
537 | - `Future` is used to transform the asynchronous result of running a `DBIOAction`. Transformations on `Future`s happen after we have finished speaking to the database.
538 |
539 | In many cases (for example select queries) we create a `Query` first and convert it to a `DBIOAction` using the `result` method. In other cases (for example insert queries), the Slick API gives us a `DBIOAction` immediately, bypassing `Query`. In all cases, we _run_ a `DBIOAction` using `db.run(...)`, turning it into a `Future` of the result.
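
To make that flow concrete, here is a sketch of the three stages using the definitions above:

```scala
// Query builds SQL, DBIOAction is runnable, Future holds the eventual result.
val halQuery:  Query[MessageTable, Message, Seq] = messages.filter(_.sender === "HAL")
val halAction: DBIO[Seq[Message]]                = halQuery.result
val halFuture: Future[Seq[Message]]              = db.run(halAction)
```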
540 |
541 | We recommend taking the time to thoroughly understand `Query`, `DBIOAction`, and `Future`. Learn how they are used, how they are similar, how they differ, what their type parameters represent, and how they flow into one another. This is perhaps the single biggest step you can take towards demystifying Slick 3.
542 |
543 |
544 |
545 | ## Take Home Points
546 |
547 | In this chapter we've seen a broad overview of the main aspects of Slick, including defining a schema, connecting to the database, and issuing queries to retrieve data.
548 |
549 | We typically model data from the database as case classes and tuples that map to rows from a table. We define the mappings between these types and the database using `Table` classes such as `MessageTable`.
550 |
551 | We define queries by creating `TableQuery` objects such as `messages` and transforming them with combinators such as `map` and `filter`.
552 | These transformations look like transformations on collections, but they are used to build SQL code rather than manipulate the results returned.
553 |
554 | We execute a query by creating an action object via its `result` method. Actions are used to build sequences of related queries and wrap them in transactions.
555 |
556 | Finally, we run the action against the database by passing it to the `run` method of the database object. We are given back a `Future` of the result. When the future completes, the result is available.
557 |
558 | The query language is one of the richest and most significant parts of Slick. We will spend the entire next chapter discussing the various queries and transformations available.
559 |
560 | ## Exercise: Bring Your Own Data
561 |
562 | Let's get some experience with Slick by running queries against the example database.
563 | Start sbt using the `sbt` command and type `console` to enter the interactive Scala console.
564 | We've configured sbt to run the example application before giving you control,
565 | so you should start off with the test database set up and ready to go:
566 |
567 | ```bash
568 | bash$ sbt
569 | # sbt logging...
570 |
571 | > console
572 | # More sbt logging...
573 | # Application runs...
574 |
575 | scala>
576 | ```
577 |
578 | Start by inserting an extra line of dialog into the database.
579 | This line hit the cutting room floor late in the development of the film 2001,
580 | but we're happy to reinstate it here:
581 |
582 | ```scala mdoc
583 | Message("Dave","What if I say 'Pretty please'?")
584 | ```
585 |
586 | You'll need to insert the row using the `+=` method on `messages`.
587 | Alternatively you could put the message in a `Seq` and use `++=`.
588 | We've included some common pitfalls in the solution in case you get stuck.
589 |
590 |
591 | Here's the solution:
592 |
593 | ```scala mdoc
594 | exec(messages += Message("Dave","What if I say 'Pretty please'?"))
595 | ```
596 |
597 | The return value indicates that `1` row was inserted.
598 | Because we're using an auto-incrementing primary key, Slick ignores the `id` field for our `Message` and asks the database to allocate an `id` for the new row.
599 | It is possible to get the insert query to return the new `id` instead of the row count, as we shall see next chapter.
600 |
601 | Here are some things that might go wrong:
602 |
603 | If you don't pass the action created by `+=` to `db` to be run, you'll get back the unexecuted action object instead.
604 |
605 | ```scala mdoc
606 | messages += Message("Dave","What if I say 'Pretty please'?")
607 | ```
608 |
609 | If you don't wait for the future to complete, you'll see just the future itself:
610 |
611 | ```scala mdoc
612 | val f = db.run(messages += Message("Dave","What if I say 'Pretty please'?"))
613 | ```
614 |
615 | ```scala mdoc:invisible
616 |
617 | // Post-exercise clean up
618 | // We inserted a new message for Dave twice in the last solution.
619 | // We need to fix this so the next exercise doesn't contain confusing duplicates
620 |
621 | // NB: this block is not inside {}s because doing that triggered:
622 | // Could not initialize class $line41.$read$$iw$$iw$$iw$$iw$$iw$$iw$
623 |
624 | import scala.concurrent.ExecutionContext.Implicits.global
625 | val ex1cleanup: DBIO[Int] = for {
626 | _ <- messages.filter(_.content === "What if I say 'Pretty please'?").delete
627 | m = Message("Dave","What if I say 'Pretty please'?", 5L)
628 | _ <- messages.forceInsert(m)
629 | count <- messages.filter(_.content === "What if I say 'Pretty please'?").length.result
630 | } yield count
631 | val rowCountCheck = exec(ex1cleanup)
632 | assert(rowCountCheck == 1, s"Wrong number of rows after cleaning up ex1: $rowCountCheck")
633 | ```
634 |
635 |
636 |
637 | Now retrieve the new dialog by selecting all messages sent by Dave.
638 | You'll need to build the appropriate query using `messages.filter`, and create the action to be run by using its `result` method.
639 | Don't forget to run the query by using the `exec` helper method we provided.
640 |
641 | Again, we've included some common pitfalls in the solution.
642 |
643 |
644 | Here's the code:
645 |
646 | ```scala mdoc
647 | exec(messages.filter(_.sender === "Dave").result)
648 | ```
649 |
650 | If that's hard to read, we can print each message in turn.
651 | As the `Future` will evaluate to a collection of `Message`, we can `foreach` over that with a function of `Message => Unit`, such as `println`:
652 |
653 | ```scala mdoc
654 | val sentByDave: Seq[Message] = exec(messages.filter(_.sender === "Dave").result)
655 | sentByDave.foreach(println)
656 | ```
657 |
658 | Here are some things that might go wrong:
659 |
660 | Note that the parameter to `filter` is built using a triple-equals operator, `===`, not a regular `==`.
661 | If you use `==` you'll get an interesting compile error:
662 |
663 | ```scala mdoc:fail
664 | exec(messages.filter(_.sender == "Dave").result)
665 | ```
666 |
667 | The trick here is to notice that we're not actually trying to compare `_.sender` and `"Dave"`.
668 | A regular equality expression evaluates to a `Boolean`, whereas `===` builds an SQL expression of type `Rep[Boolean]`
669 | (Slick uses the `Rep` type to represent expressions over `Column`s as well as `Column`s themselves).
670 | The error message is baffling when you first see it but makes sense once you understand what's going on.
671 |
672 | Finally, if you forget to call `result`,
673 | you'll end up with a compilation error, because `exec` (and the `db.run` call it wraps) expects an action:
674 |
675 | ```scala mdoc:fail
676 | exec(messages.filter(_.sender === "Dave"))
677 | ```
678 |
679 | `Query` types tend to be verbose, which can distract from the actual cause of the problem
680 | (which is that we're not expecting a `Query` object at all).
681 | We will discuss `Query` types in more detail next chapter.
682 |
683 |
684 |
--------------------------------------------------------------------------------
/src/pages/3-modifying.md:
--------------------------------------------------------------------------------
1 | ```scala mdoc:invisible
2 | import slick.jdbc.H2Profile.api._
3 |
4 | case class Message(
5 | sender: String,
6 | content: String,
7 | id: Long = 0L)
8 |
9 | class MessageTable(tag: Tag) extends Table[Message](tag, "message") {
10 |
11 | def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
12 | def sender = column[String]("sender")
13 | def content = column[String]("content")
14 |
15 | def * = (sender, content, id).mapTo[Message]
16 | }
17 |
18 | lazy val messages = TableQuery[MessageTable]
19 |
20 | import scala.concurrent.{Await,Future}
21 | import scala.concurrent.duration._
22 |
23 | val db = Database.forConfig("chapter03")
24 |
25 | def exec[T](action: DBIO[T]): T = Await.result(db.run(action), 2.seconds)
26 |
27 | def freshTestData = Seq(
28 | Message("Dave", "Hello, HAL. Do you read me, HAL?"),
29 | Message("HAL", "Affirmative, Dave. I read you."),
30 | Message("Dave", "Open the pod bay doors, HAL."),
31 | Message("HAL", "I'm sorry, Dave. I'm afraid I can't do that.")
32 | )
33 |
34 | exec(messages.schema.create andThen (messages ++= freshTestData))
35 | ```
36 |
37 | # Creating and Modifying Data {#Modifying}
38 |
39 | In the last chapter we saw how to retrieve data from the database using select queries. In this chapter we will look at modifying stored data using insert, update, and delete queries.
40 |
41 | SQL veterans will know that update and delete queries share many similarities with select queries. The same is true in Slick, where we use the `Query` monad and combinators to build the different kinds of query. Ensure you are familiar with the content of [Chapter 2](#selecting) before proceeding.
42 |
43 | ## Inserting Rows
44 |
45 | As we saw in [Chapter 1](#Basics), adding new data looks like an append operation on a mutable collection. We can use the `+=` method to insert a single row into a table, and `++=` to insert multiple rows. We'll discuss both of these operations below.
46 |
47 | ### Inserting Single Rows
48 |
49 | To insert a single row into a table we use the `+=` method. Note that, unlike the select queries we've seen, this creates a `DBIOAction` immediately without an intermediate `Query`:
50 |
51 | ```scala mdoc
52 | val insertAction =
53 | messages += Message("HAL", "No. Seriously, Dave, I can't let you in.")
54 |
55 | exec(insertAction)
56 | ```
57 |
58 | We've left the `DBIO[Int]` type annotation off of `insertAction`, so you'll see the specific type Slick is using.
59 | It's not important for this discussion, but worth knowing that Slick has a number of different kinds of `DBIOAction` classes in use under the hood.
60 |
61 | The result of the action is the number of rows inserted. However, it is often useful to return something else, such as the primary key generated for the new row. We can get this information using a method called `returning`. Before we get to that, we first need to understand where the primary key comes from.
62 |
63 | ### Primary Key Allocation
64 |
65 | When inserting data, we need to tell the database whether or not to allocate primary keys for the new rows. It is common practice to declare auto-incrementing primary keys, allowing the database to allocate values automatically if we don't manually specify them in the SQL.
66 |
67 | Slick allows us to allocate auto-incrementing primary keys via an option on the column definition. Recall the definition of `MessageTable` from Chapter 1, which looked like this:
68 |
69 | ```scala
70 | class MessageTable(tag: Tag) extends Table[Message](tag, "message") {
71 |
72 | def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
73 | def sender = column[String]("sender")
74 | def content = column[String]("content")
75 |
76 | def * = (sender, content, id).mapTo[Message]
77 | }
78 | ```
79 |
80 | The `O.AutoInc` option specifies that the `id` column is auto-incrementing, meaning that Slick can omit the column in the corresponding SQL:
81 |
82 | ```scala mdoc
83 | insertAction.statements.head
84 | ```
85 |
86 | As a convenience, in our example code we put the `id` field at the end of the case class and gave it a default value of `0L`.
87 | This allows us to skip the field when creating new objects of type `Message`:
88 |
89 | ```scala
90 | case class Message(
91 | sender: String,
92 | content: String,
93 | id: Long = 0L
94 | )
95 | ```
96 |
97 | ```scala mdoc
98 | Message("Dave", "You're off my Christmas card list.")
99 | ```
100 |
101 | There is nothing special about our default value of `0L`---it doesn't signify anything in particular.
102 | It is the `O.AutoInc` option that determines the behaviour of `+=`.
103 |
104 | Sometimes we want to override the database's default auto-incrementing behaviour and specify our own primary key. Slick provides a `forceInsert` method that does just this:
105 |
106 | ```scala mdoc:silent
107 | val forceInsertAction = messages forceInsert Message(
108 | "HAL",
109 | "I'm a computer, what would I do with a Christmas card anyway?",
110 | 1000L)
111 | ```
112 |
113 | Notice that the SQL generated for this action includes a manually specified ID,
114 | and that running the action results in a record with the ID being inserted:
115 |
116 | ```scala mdoc
117 | forceInsertAction.statements.head
118 |
119 | exec(forceInsertAction)
120 |
121 | exec(messages.filter(_.id === 1000L).result)
122 | ```
123 |
124 | ### Retrieving Primary Keys on Insert
125 |
126 | When the database allocates primary keys for us it's often the case that we want to get the key back after an insert.
127 | Slick supports this via the `returning` method:
128 |
129 | ```scala mdoc
130 | val insertDave: DBIO[Long] =
131 | messages returning messages.map(_.id) += Message("Dave", "Point taken.")
132 |
133 | val pk: Long = exec(insertDave)
134 | ```
135 |
136 | ```scala mdoc:invisible
137 | assert(pk == 1001L, "Text below expects PK of 1001L")
138 | ```
139 |
140 | The argument to `messages returning` is a `Query` over the same table, which is why `messages.map(_.id)` makes sense here.
141 | The query specifies what data we'd like the database to return once the insert has finished.
142 |
143 | We can demonstrate that the return value is a primary key by looking up the record we just inserted:
144 |
145 | ```scala mdoc
146 | exec(messages.filter(_.id === 1001L).result.headOption)
147 | ```
148 |
149 | For convenience, we can save a few keystrokes and define an insert query that always returns the primary key:
150 |
151 | ```scala mdoc
152 | lazy val messagesReturningId = messages returning messages.map(_.id)
153 |
154 | exec(messagesReturningId += Message("HAL", "Humans, eh."))
155 | ```
156 |
157 | Using `messagesReturningId` will return the `id` value, rather than the count of the number of rows inserted.
158 |
159 | ### Retrieving Rows on Insert {#retrievingRowsOnInsert}
160 |
161 | Some databases allow us to retrieve the complete inserted record, not just the primary key.
162 | For example, we could ask for the whole `Message` back:
163 |
164 | ```scala
165 | exec(messages returning messages +=
166 | Message("Dave", "So... what do we do now?"))
167 | ```
168 |
169 | Not all databases provide complete support for the `returning` method.
170 | H2 only allows us to retrieve the primary key from an insert.
171 |
172 | If we try this with H2, we get a runtime error:
173 |
174 | ```scala mdoc:crash
175 | exec(messages returning messages +=
176 | Message("Dave", "So... what do we do now?"))
177 | ```
178 |
179 | This is a shame, but getting the primary key is often all we need.
180 |
181 |
182 | **Profile Capabilities**
183 |
184 | The Slick manual contains a comprehensive table of the [capabilities for each database profile][link-ref-dbs]. The ability to return complete records from an insert query is referenced as the `jdbc.returnInsertOther` capability.
185 |
186 | The API documentation for each profile also lists the capabilities that the profile *doesn't* have. For an example, the top of the [H2 Profile Scaladoc][link-ref-h2driver] page points out several of its shortcomings.
187 |
188 |
189 | If we want to get a complete populated `Message` back from a database without `jdbc.returnInsertOther` support, we retrieve the primary key and manually add it to the inserted record. Slick simplifies this with another method, `into`:
190 |
191 | ```scala mdoc
192 | val messagesReturningRow =
193 | messages returning messages.map(_.id) into { (message, id) =>
194 | message.copy(id = id)
195 | }
196 |
197 | val insertMessage: DBIO[Message] =
198 | messagesReturningRow += Message("Dave", "You're such a jerk.")
199 |
200 | exec(insertMessage)
201 | ```
202 |
203 | The `into` method allows us to specify a function to combine the record and the new primary key. It's perfect for emulating the `jdbc.returnInsertOther` capability, although we can use it for any post-processing we care to imagine on the inserted data.
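
For instance, a hypothetical variation might keep the generated key alongside the row rather than copying it in:

```scala
// A hypothetical sketch: pair the new id with the inserted row.
val messagesReturningPair =
  messages returning messages.map(_.id) into { (message, id) => (id, message) }

val insertPair: DBIO[(Long, Message)] =
  messagesReturningPair += Message("Dave", "Just checking the keys.")
```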
204 |
205 | ### Inserting Specific Columns {#insertingSpecificColumns}
206 |
207 | If our database table contains a lot of columns with default values,
208 | it is sometimes useful to specify a subset of columns in our insert queries.
209 | We can do this by calling `map` on a query before the insert:
210 |
211 | ```scala mdoc
212 | messages.map(_.sender).insertStatement
213 | ```
214 |
215 | The parameter type of the `+=` method is matched to the *unpacked* type of the query:
216 |
217 | ```scala mdoc
218 | messages.map(_.sender)
219 | ```
220 |
221 | ... so we execute this query by passing it a `String` for the `sender`:
222 |
223 | ```scala mdoc:silent:crash
224 | exec(messages.map(_.sender) += "HAL")
225 | ```
226 |
227 | The query fails at runtime because the `content` column is non-nullable in our schema.
228 | No matter. We'll cover nullable columns when discussing schemas in [Chapter 5](#Modelling).
229 |
230 |
231 | ### Inserting Multiple Rows
232 |
233 | Suppose we want to insert several `Message`s at the same time. We could just use `+=` to insert each one in turn. However, this would result in a separate query being issued to the database for each record, which could be slow for large numbers of inserts.
234 |
235 | As an alternative, Slick supports *batch inserts*, where all the inserts are sent to the database in one go. We've seen this already in the first chapter:
236 |
237 | ```scala mdoc
238 | val testMessages = Seq(
239 | Message("Dave", "Hello, HAL. Do you read me, HAL?"),
240 | Message("HAL", "Affirmative, Dave. I read you."),
241 | Message("Dave", "Open the pod bay doors, HAL."),
242 | Message("HAL", "I'm sorry, Dave. I'm afraid I can't do that.")
243 | )
244 |
245 | exec(messages ++= testMessages)
246 | ```
247 |
248 | This code prepares one SQL statement and uses it for each row in the `Seq`.
249 | In principle Slick could optimize this insert further using database-specific features.
250 | This can result in a significant boost in performance when inserting many records.
251 |
252 | As we saw earlier this chapter, the default return value of a single insert is the number of rows inserted. The multi-row insert above is also returning the number of rows, except this time the type is `Option[Int]`. The reason for this is that the JDBC specification permits the underlying database driver to indicate that the number of rows inserted is unknown.
253 |
254 | Slick also provides a batch version of `messages returning...`, including the `into` method. We can use the `messagesReturningRow` query we defined last section and write:
255 |
256 | ```scala mdoc
257 | exec(messagesReturningRow ++= testMessages)
258 | ```
259 |
260 | ### More Control over Inserts {#moreControlOverInserts}
261 |
262 | At this point we've inserted fixed data into the database.
263 | Sometimes you need more flexibility, including inserting data based on another query.
264 | Slick supports this via `forceInsertQuery`.
265 |
266 |
267 | The argument to `forceInsertQuery` is a query. So the form is:
268 |
269 | ```scala
270 | insertExpression.forceInsertQuery(selectExpression)
271 | ```
272 |
273 | Our `selectExpression` can be pretty much anything, but it needs to match the columns required by our `insertExpression`.
274 |
275 | As an example, our query could check to see if a particular row of data already exists, and insert it if it doesn't.
276 | That is, an "insert if not exists" function.
277 |
278 | Let's say we only want the director to be able to say "Cut!" once. The SQL would end up like this:
279 |
280 | ~~~ sql
281 | insert into "messages" ("sender", "content")
282 | select 'Stanley', 'Cut!'
283 | where
284 | not exists(
285 | select
286 | "id", "sender", "content"
287 | from
288 | "messages" where "sender" = 'Stanley'
289 | and "content" = 'Cut!')
290 | ~~~
291 |
292 | That looks quite involved, but we can build it up gradually.
293 |
294 | The tricky part of this is the `select 'Stanley', 'Cut!'` part, as there is no `FROM` clause there.
295 | We saw an example of how to create that in [Chapter 2](#constantQueries), with `Query.apply`. For this situation it would be:
296 |
297 | ```scala mdoc
298 | val data = Query(("Stanley", "Cut!"))
299 | ```
300 |
301 | `data` is a constant query that returns a fixed value---a tuple of two columns. It's the equivalent of running `SELECT 'Stanley', 'Cut!';` against the database, which is one part of the query we need.
302 |
303 | We also need to be able to test to see if the data already exists. That's straightforward:
304 |
305 | ```scala mdoc:silent
306 | val exists =
307 | messages.
308 | filter(m => m.sender === "Stanley" && m.content === "Cut!").
309 | exists
310 | ```
311 |
312 | We want to use the `data` when the row _doesn't_ exist, so combine the `data` and `exists` with `filterNot` rather than `filter`:
313 |
314 | ```scala mdoc:silent
315 | val selectExpression = data.filterNot(_ => exists)
316 | ```
317 |
318 | Finally, we need to apply this query with `forceInsertQuery`.
319 | But remember the column types for the insert and select need to match up.
320 | So we `map` on `messages` to make sure that's the case:
321 |
322 | ```scala mdoc
323 | val forceAction =
324 | messages.
325 | map(m => m.sender -> m.content).
326 | forceInsertQuery(selectExpression)
327 |
328 | exec(forceAction)
329 |
330 | exec(forceAction)
331 | ```
332 |
333 | The first time we run the query, the message is inserted.
334 | The second time, no rows are affected.
335 |
336 | In summary, `forceInsertQuery` provides a way to build up more complicated inserts.
337 | If you find situations beyond the power of this method,
338 | you can always make use of Plain SQL inserts, described in [Chapter 7](#PlainSQL).
339 |
340 |
341 | ## Deleting Rows
342 |
343 | Slick lets us delete rows using the same `Query` objects we saw in [Chapter 2](#selecting).
344 | That is, we specify which rows to delete using the `filter` method, and then call `delete`:
345 |
346 | ```scala mdoc
347 | val removeHal: DBIO[Int] =
348 | messages.filter(_.sender === "HAL").delete
349 |
350 | exec(removeHal)
351 | ```
352 |
353 | The return value is the number of rows affected.
354 |
355 | The SQL generated for the action can be seen by calling `delete.statements`:
356 |
357 | ```scala mdoc
358 | messages.filter(_.sender === "HAL").delete.statements.head
359 | ```
360 |
361 | Note that it is an error to use `delete` in combination with `map`. We can only call `delete` on a `TableQuery`:
362 |
363 | ```scala mdoc:fail
364 | messages.map(_.content).delete
365 | ```
366 |
367 |
368 | ## Updating Rows {#UpdatingRows}
369 |
370 | So far we've only looked at inserting new data and deleting existing data. But what if we want to update existing data without deleting it first? Slick lets us create SQL `UPDATE` actions via the kinds of `Query` values we've been using for selecting and deleting rows.
371 |
372 |
373 |
374 | **Restoring Data**
375 |
376 | In the last section we removed all the rows for HAL. Before continuing with updating rows, we should put them back:
377 |
378 | ```scala mdoc
379 | exec(messages.delete andThen (messages ++= freshTestData) andThen messages.result)
380 | ```
381 |
382 | _Action combinators_, such as `andThen`, are the subject of the next chapter.
383 |
384 |
385 | ### Updating a Single Field
386 |
387 | In the `Messages` we've created so far we've referred to the computer from *2001: A Space Odyssey* as "`HAL`", but the correct name is "`HAL 9000`".
388 | Let's fix that.
389 |
390 | We start by creating a query to select the rows to modify, and the columns to change:
391 |
392 | ```scala mdoc
393 | val updateQuery =
394 | messages.filter(_.sender === "HAL").map(_.sender)
395 | ```
396 |
397 | We can use `update` to turn this into an action to run.
398 | It requires the new value for the column we want to change:
399 |
400 | ```scala mdoc
401 | exec(updateQuery.update("HAL 9000"))
402 | ```
403 |
404 | We can retrieve the SQL for this query by calling `updateStatement` instead of `update`:
405 |
406 | ```scala mdoc
407 | updateQuery.updateStatement
408 | ```
409 |
410 | Let's break down the code in the Scala expression.
411 | By building our update query from the `messages` `TableQuery`, we specify that we want to update records in the `message` table in the database:
412 |
413 | ```scala mdoc
414 | val messagesByHal = messages.filter(_.sender === "HAL")
415 | ```
416 |
417 | We only want to update the `sender` column, so we use `map` to reduce the query to just that column:
418 |
419 | ```scala mdoc
420 | val halSenderCol = messagesByHal.map(_.sender)
421 | ```
422 |
423 | Finally we call the `update` method, which takes a parameter of the *unpacked* type (in this case `String`):
424 |
425 | ```scala mdoc
426 | val action: DBIO[Int] = halSenderCol.update("HAL 9000")
427 | ```
428 |
429 | Running that action would return the number of rows changed.
430 |
431 | ### Updating Multiple Fields
432 |
433 | We can update more than one field at the same time by mapping the query to a tuple of the columns we care about...
434 |
435 | ```scala mdoc:invisible
436 | val assurance_10167 = exec(messages.filter(_.content like "I'm sorry, Dave%").result)
437 | assert(assurance_10167.map(_.id) == Seq(1016L), s"Text below assumes ID 1016 exists: found $assurance_10167")
438 | ```
439 |
440 | ```scala mdoc
441 | // 1016 is "I'm sorry, Dave...."
442 | val query = messages.
443 | filter(_.id === 1016L).
444 | map(message => (message.sender, message.content))
445 | ```
446 |
447 | ...and then supplying the tuple values we want to use in the update:
448 |
449 | ```scala mdoc
450 | val updateAction: DBIO[Int] =
451 | query.update(("HAL 9000", "Sure, Dave. Come right in."))
452 |
453 | exec(updateAction)
454 |
455 | exec(messages.filter(_.sender === "HAL 9000").result)
456 | ```
457 |
458 | Again, we can see the SQL we're running using the `updateStatement` method. The returned SQL contains two `?` placeholders, one for each field as expected:
459 |
460 | ```scala mdoc
461 | messages.
462 | filter(_.id === 1016L).
463 | map(message => (message.sender, message.content)).
464 | updateStatement
465 | ```
466 |
467 | We can even use `mapTo` to use case classes as the parameter to `update`:
468 |
469 | ```scala mdoc
470 | case class NameText(name: String, text: String)
471 |
472 | val newValue = NameText("Dave", "Now I totally don't trust you.")
473 |
474 | messages.
475 | filter(_.id === 1016L).
476 | map(m => (m.sender, m.content).mapTo[NameText]).
477 | update(newValue)
478 | ```
479 |
480 | ### Updating with a Computed Value
481 |
482 | Let's now turn to more interesting updates. How about converting every message to be all capitals? Or adding an exclamation mark to the end of each message? Both of these queries involve expressing the desired result in terms of the current value in the database. In SQL we might write something like:
483 |
484 | ~~~ sql
485 | update "message" set "content" = CONCAT("content", '!')
486 | ~~~
487 |
488 | This is not currently supported by `update` in Slick, but there are ways to achieve the same result.
489 | One such way is to use Plain SQL queries, which we cover in [Chapter 7](#PlainSQL).
490 | Another is to perform a *client-side update* by defining a Scala function to capture the change to each row:
491 |
492 | ```scala mdoc
493 | def exclaim(msg: Message): Message =
494 | msg.copy(content = msg.content + "!")
495 | ```
496 |
497 | We can update rows by selecting the relevant data from the database, applying this function, and writing the results back individually. Note that this approach can be quite inefficient for large datasets---it takes `N + 1` queries to apply an update to `N` results.
498 |
499 | You may be tempted to write something like this:
500 |
501 | ```scala mdoc
502 | def modify(msg: Message): DBIO[Int] =
503 | messages.filter(_.id === msg.id).update(exclaim(msg))
504 |
505 | // Don't do it this way:
506 | for {
507 | msg <- exec(messages.result)
508 | } yield exec(modify(msg))
509 | ```
510 |
511 | This will have the desired effect, but at some cost.
512 | What we have done there is use our own `exec` method which will wait for results.
513 | We use it to fetch all rows, and then we use it on each row to modify the row.
514 | That's a lot of waiting.
515 | There is also no support for transactions as we `db.run` each action separately.
516 |
517 | A better approach is to turn our logic into a single `DBIO` action using _action combinators_.
518 | This, together with transactions, is the topic of the next chapter.
519 |
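As a preview, here's a minimal sketch of that approach, assuming the `flatMap` and `DBIO.sequence` combinators covered there:

```scala
import scala.concurrent.ExecutionContext.Implicits.global

// Fetch all rows, build one update action per row, and sequence them
// into a single DBIO value that can be run (and transacted) as a whole:
val exclaimAll: DBIO[Seq[Int]] =
  messages.result.flatMap(msgs => DBIO.sequence(msgs.map(modify)))
```

This still issues `N + 1` statements, but it composes into one action we can hand to `db.run`.
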
520 | However, for this particular example, we recommend using Plain SQL ([Chapter 7](#PlainSQL)) instead of client-side updates.
521 |
522 |
523 | ## Take Home Points
524 |
525 | For modifying the rows in the database we have seen that:
526 |
527 | * inserts are via a `+=` or `++=` call on a table;
528 |
529 | * updates are via an `update` call on a query, but are somewhat limited when you need to update using the existing row value; and
530 |
531 | * deletes are via a `delete` call to a query.
532 |
533 | Auto-incrementing values are not inserted by Slick, unless forced. The auto-incremented values can be returned from the insert by using `returning`.
534 |
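For example, a minimal sketch of an insert that returns the generated key:

```scala
// Insert a row and get back the database-generated id:
val returnId = messages returning messages.map(_.id)
val insertOne: DBIO[Long] = returnId += Message("Dave", "Point taken, HAL.")
```
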
535 | Databases have different capabilities. The limitations of each profile are listed in the profile's Scaladoc page.
536 |
537 |
538 | ## Exercises
539 |
540 | The code for this chapter is in the [GitHub repository][link-example] in the _chapter-03_ folder. As with chapters 1 and 2, you can use the `run` command in SBT to execute the code against an H2 database.
541 |
542 |
543 |
544 | **Where Did My Data Go?**
545 |
546 | Several of the exercises in this chapter require you to delete or update content from the database.
547 | We've shown you above how to restore your data,
548 | but if you want to explore and change the schema you might want to completely reset it.
549 |
550 | In the example code we provide a `populate` method you can use:
551 |
552 | ``` scala
553 | exec(populate)
554 | ```
555 |
556 | This will drop, create, and populate the `messages` table with known values.
557 |
558 | Populate is defined as:
559 |
560 | ```scala mdoc
561 | import scala.concurrent.ExecutionContext.Implicits.global
562 |
563 | def populate: DBIOAction[Option[Int], NoStream, Effect.All] =
564 | for {
565 | // Drop table if it already exists, then create the table:
566 | _ <- messages.schema.drop.asTry andThen messages.schema.create
567 | // Add some data:
568 | count <- messages ++= freshTestData
569 | } yield count
570 | ```
571 |
572 | We'll meet `asTry` and `andThen` in the next chapter.
573 |
574 |
575 |
576 | ### Get to the Specifics
577 |
578 | In [Inserting Specific Columns](#insertingSpecificColumns) we looked at only inserting the sender column:
579 |
580 | ```scala mdoc:silent
581 | messages.map(_.sender) += "HAL"
582 | ```
583 |
584 | This failed when we tried to use it as we didn't meet the requirements of the `message` table schema.
585 | For this to succeed we need to include `content` as well as `sender`.
586 |
587 | Rewrite the above query to include the `content` column.
588 |
589 |
590 | The requirements of the `message` table are that `sender` and `content` cannot be null.
591 | Given this, we can correct our query:
592 |
593 | ```scala mdoc
594 | val senderAndContent = messages.map { m => (m.sender, m.content) }
595 | val insertSenderContent = senderAndContent += ( ("HAL","Helllllo Dave") )
596 | exec(insertSenderContent)
597 | ```
598 |
599 | We have used `map` to create a query that works on the two columns we care about.
600 | To insert using that query, we supply the two field values.
601 |
602 | In case you're wondering, we've put the extra parentheses around the column values
603 | to make it clear we're supplying a single value which is a tuple of two values.
604 |
605 |
606 | ### Bulk All the Inserts
607 |
608 | Insert the conversation below between Alice and Bob, returning the messages populated with `id`s.
609 |
610 | ```scala mdoc:silent
611 | val conversation = List(
612 | Message("Bob", "Hi Alice"),
613 | Message("Alice","Hi Bob"),
614 | Message("Bob", "Are you sure this is secure?"),
615 | Message("Alice","Totally, why do you ask?"),
616 | Message("Bob", "Oh, nothing, just wondering."),
617 | Message("Alice","Ten was too many messages"),
618 | Message("Bob", "I could do with a sleep"),
619 | Message("Alice","Let's just get to the point"),
620 | Message("Bob", "Okay okay, no need to be tetchy."),
621 | Message("Alice","Humph!"))
622 | ```
623 |
624 |
625 | For this we need to use a batch insert (`++=`) and `into`:
626 |
627 | ```scala mdoc
628 | val messageRows =
629 | messages returning messages.map(_.id) into { (message, id) =>
630 | message.copy(id = id)
631 | }
632 |
633 | exec(messageRows ++= conversation).foreach(println)
634 | ```
635 |
636 |
637 | ### No Apologies
638 |
639 | Write a query to delete messages that contain "sorry".
640 |
641 |
642 | The pattern is to define a query to select the data, and then use it with `delete`:
643 |
644 | ```scala mdoc
645 | messages.filter(_.content like "%sorry%").delete
646 | ```
647 |
648 |
649 |
650 | ### Update Using a For Comprehension
651 |
652 | Rewrite the update statement below to use a for comprehension.
653 |
654 | ```scala mdoc
655 | val rebootLoop = messages.
656 | filter(_.sender === "HAL").
657 | map(msg => (msg.sender, msg.content)).
658 | update(("HAL 9000", "Rebooting, please wait..."))
659 | ```
660 |
661 | Which style do you prefer?
662 |
663 |
664 | We've split this into a `query` and then an `update`:
665 |
666 | ```scala mdoc
667 | val halMessages = for {
668 | message <- messages if message.sender === "HAL"
669 | } yield (message.sender, message.content)
670 |
671 | val rebootLoopUpdate = halMessages.update(("HAL 9000", "Rebooting, please wait..."))
672 | ```
673 |
674 |
675 | ### Selective Memory
676 |
677 | Delete `HAL`'s first two messages. This is a more difficult exercise.
678 |
679 | You don't know the IDs of the messages, or the content of them.
680 | But you do know the IDs increase.
681 |
682 | Hints:
683 |
684 | - First write a query to select the two messages. Then see if you can find a way to use it as a subquery.
685 |
686 | - You can use `in` in a query to see if a value is in a set of values returned from a query.
687 |
688 |
689 | We've selected HAL's message IDs, sorted by the ID, and used this query inside a filter:
690 |
691 | ```scala mdoc
692 | val selectiveMemory =
693 | messages.filter{
694 | _.id in messages.
695 | filter { _.sender === "HAL" }.
696 | sortBy { _.id.asc }.
697 | map {_.id}.
698 | take(2)
699 | }.delete
700 |
701 | selectiveMemory.statements.head
702 | ```
703 |
704 |
705 |
--------------------------------------------------------------------------------
/src/pages/7-plain_sql.md:
--------------------------------------------------------------------------------
1 | ```scala mdoc:invisible
2 | import slick.jdbc.H2Profile.api._
3 | import scala.concurrent.{Await,Future}
4 | import scala.concurrent.duration._
5 | import scala.concurrent.ExecutionContext.Implicits.global
6 |
7 | val db = Database.forConfig("chapter07")
8 |
9 | def exec[T](action: DBIO[T]): T = Await.result(db.run(action), 4.seconds)
10 | ```
11 | # Plain SQL {#PlainSQL}
12 |
13 | Slick supports Plain SQL queries in addition to the lifted embedded style we've seen up to this point. Plain queries don't compose as nicely as lifted queries, nor do they offer quite the same type safety. But they enable you to execute essentially arbitrary SQL when you need to. If you're unhappy with a particular query produced by Slick, dropping into Plain SQL is the way to go.
14 |
15 | In this section we will see that:
16 |
17 | - the [interpolators][link-scala-interpolation] `sql` (for select) and `sqlu` (for updates) are used to create Plain SQL queries;
18 |
19 | - values can be safely substituted into queries using a `${expression}` syntax;
20 |
21 | - custom types can be used in Plain SQL, as long as there is a converter in scope; and
22 |
23 | - the `tsql` interpolator can be used to check the syntax and types of a query via a database at compile time.
24 |
25 |
26 | **A Table to Work With**
27 |
28 | For the examples that follow, we'll set up a table for rooms.
29 | For now we'll do this as we have in other chapters using the lifted embedded style:
30 |
31 | ```scala mdoc
32 | case class Room(title: String, id: Long = 0L)
33 |
34 | class RoomTable(tag: Tag) extends Table[Room](tag, "room") {
35 | def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
36 | def title = column[String]("title")
37 | def * = (title, id).mapTo[Room]
38 | }
39 |
40 | lazy val rooms = TableQuery[RoomTable]
41 |
42 | val roomSetup = DBIO.seq(
43 | rooms.schema.create,
44 | rooms ++= Seq(Room("Air Lock"), Room("Pod"), Room("Brain Room"))
45 | )
46 |
47 | val setupResult = exec(roomSetup)
48 | ```
49 |
50 |
51 | ## Selects
52 |
53 | Let's start with a simple example of returning a list of room IDs.
54 |
55 | ```scala mdoc
56 | val action = sql""" select "id" from "room" """.as[Long]
57 |
58 | Await.result(db.run(action), 2.seconds)
59 | ```
60 |
61 | Running a Plain SQL query looks similar to other queries we've seen in this book: call `db.run` as usual.
62 |
63 | The big difference is with the construction of the query. We supply both the SQL we want to run and specify the expected result type using `as[T]`.
64 | And the result we get back is an action to run, rather than a `Query`.
65 |
66 | The `as[T]` method is pretty flexible. Let's get back the room ID and room title:
67 |
68 | ```scala mdoc
69 | val roomInfo = sql""" select "id", "title" from "room" """.as[(Long,String)]
70 |
71 | exec(roomInfo)
72 | ```
73 |
74 | Notice we specified a tuple of `(Long, String)` as the result type. This matches the columns in our SQL `SELECT` statement.
75 |
76 | Using `as[T]` we can build up arbitrary result types. Later we'll see how we can use our own application case classes too.
77 |
78 | One of the most useful features of the SQL interpolators is being able to reference Scala values in a query:
79 |
80 | ```scala mdoc
81 | val roomName = "Pod"
82 |
83 | val podRoomAction = sql"""
84 | select
85 | "id", "title"
86 | from
87 | "room"
88 | where
89 | "title" = $roomName """.as[(Long,String)].headOption
90 |
91 | exec(podRoomAction)
92 | ```
93 |
94 | Notice how `$roomName` is used to reference a Scala value `roomName`.
95 | This value is incorporated safely into the query.
96 | That is, you don't have to worry about SQL injection attacks when you use the SQL interpolators in this way.
97 |
98 |
99 | **The Danger of Strings**
100 |
101 | The SQL interpolators are essential for situations where you need full control over the SQL to be run. Be aware there is some loss of compile-time safety. For example:
102 |
103 | ```scala mdoc
104 | val t = 42
105 |
106 | val badAction =
107 | sql""" select "id" from "room" where "title" = $t """.as[Long]
108 | ```
109 |
110 | This compiles, but fails at runtime as the type of the `title` column is a `String` and we've provided an `Int`:
111 |
112 | ```scala mdoc
113 | exec(badAction.asTry)
114 | ```
115 |
116 | The equivalent query using the lifted embedded style would have caught the problem at compile time.
117 | The `tsql` interpolator, described later in this chapter, helps here by connecting to a database at compile time to check the query and types.
118 |
119 | Another danger is with the `#$` style of substitution. This is called _splicing_, and is used when you _don't_ want SQL escaping to apply. For example, perhaps the name of the table you want to use may change:
120 |
121 | ```scala mdoc
122 | val table = "room"
123 | val splicedAction = sql""" select "id" from "#$table" """.as[Long]
124 | ```
125 |
126 | In this situation we do not want the value of `table` to be treated as a `String`. If we did, it'd be an invalid query: `select "id" from "'room'"` (notice the double quotes and single quotes around the table name, which is not valid SQL).
127 |
128 | This means you can produce unsafe SQL with splicing. The golden rule is to never use `#$` with input supplied by users.
129 |
130 | To be sure you remember it, say it again with us: never use `#$` with input supplied by users.
131 |
132 |
133 |
134 | ### Select with Custom Types
135 |
136 | Out of the box Slick knows how to convert many data types to and from SQL data types. The examples we've seen so far include turning a Scala `String` into a SQL string, and a SQL BIGINT to a Scala `Long`. These conversions are available via `as[T]`.
137 |
138 | If we want to work with a type that Slick doesn't know about, we need to provide a conversion. That's the role of the `GetResult` type class.
139 |
140 | For an example, let's set up a table for messages with some interesting structure:
141 |
142 | ```scala mdoc
143 | import org.joda.time.DateTime
144 |
145 | case class Message(
146 | sender : String,
147 | content : String,
148 | created : DateTime,
149 | updated : Option[DateTime],
150 | id : Long = 0L
151 | )
152 | ```
153 |
154 | The point of interest for the moment is that we have a `created` field of type `DateTime`.
155 | This is from Joda Time, and Slick does not ship with built-in support for this type.
156 |
157 | This is the query we want to run:
158 |
159 | ```scala mdoc:fail
160 | sql""" select "created" from "message" """.as[DateTime]
161 | ```
162 |
163 | OK, that won't compile as Slick doesn't know anything about `DateTime`.
164 | For this to compile we need to provide an instance of `GetResult[DateTime]`:
165 |
166 | ```scala mdoc:silent
167 | import slick.jdbc.GetResult
168 | import java.sql.Timestamp
169 | import org.joda.time.DateTimeZone.UTC
170 | ```
171 | ```scala mdoc
172 | implicit val GetDateTime =
173 | GetResult[DateTime](r => new DateTime(r.nextTimestamp(), UTC))
174 | ```
175 |
176 | `GetResult` is wrapping up a function from `r` (a `PositionedResult`) to `DateTime`. The `PositionedResult` provides access to the database value (via `nextTimestamp`, `nextLong`, `nextBigDecimal` and so on). We use the value from `nextTimestamp` to feed into the constructor for `DateTime`.
177 |
178 | The name of this value doesn't matter.
179 | What's important is that the value is implicit and the type is `GetResult[DateTime]`.
180 | This allows the compiler to lookup our conversion function when we mention a `DateTime`.
181 |
182 | Now we can construct our action:
183 |
184 | ```scala mdoc
185 | sql""" select "created" from "message" """.as[DateTime]
186 | ```
187 |
188 | ### Case Classes
189 |
190 | As you've probably guessed, returning a case class from a Plain SQL query means providing a `GetResult` for the case class. Let's work through an example for the messages table.
191 |
192 | Recall that a message contains: a sender, some content, a created timestamp, an optional updated timestamp, and an ID.
193 |
194 | To provide a `GetResult[Message]` we need all the types inside the `Message` to have `GetResult` instances.
195 | We've already tackled `DateTime`.
196 | And Slick knows how to handle `Long` and `String`.
197 | So that leaves us with `Option[DateTime]` and the `Message` itself.
198 |
199 | For optional values, Slick provides `nextXXXOption` methods, such as `nextLongOption`.
200 | For the optional date time we read the database value using `nextTimestampOption()` and then `map` to the right type:
201 |
202 | ```scala mdoc
203 | implicit val GetOptionalDateTime = GetResult[Option[DateTime]](r =>
204 | r.nextTimestampOption().map(ts => new DateTime(ts, UTC))
205 | )
206 | ```
207 |
208 | With all the individual columns mapped we can pull them together in a `GetResult` for `Message`.
209 | There are two helper methods which make it easier to construct these instances:
210 |
211 | - `<<` for calling the appropriate _nextXXX_ method; and
212 |
213 | - `<<?` when the value is optional.
214 |
215 | We can use them like this:
216 |
217 | ```scala mdoc
218 | implicit val GetMessage = GetResult(r =>
219 | Message(sender = r.<<,
220 | content = r.<<,
221 | created = r.<<,
222 |           updated = r.<<?,
223 | id = r.<<)
224 | )
225 | ```
226 |
227 | This works because we've provided implicits for the components of the case class.
228 | As the types of the fields are known, `<<` and `<<?` can use the implicit `GetResult[T]` for each field's type.
229 |
230 | Now we can select into `Message` values:
231 |
232 | ```scala mdoc
233 | val messageAction: DBIO[Seq[Message]] =
234 | sql""" select * from "message" """.as[Message]
235 | ```
236 |
237 | In all likelihood you'll prefer the lifted embedded style over Plain SQL in this specific example.
238 | But if you do find yourself using Plain SQL, for performance reasons perhaps, it's useful to know how to convert database values up into meaningful domain types.
239 |
240 |
241 |
242 | **`SELECT *`**
243 |
244 | We sometimes use `SELECT *` in this chapter to fit our code examples onto the page.
245 | You should avoid this in your code base as it leads to brittle code.
246 |
247 | An example: if, outside of Slick, a table is modified to add a column, the results from the query will unexpectedly change. Your code may no longer be able to map results.
248 |
249 |
250 |
251 |
252 |
253 | ## Updates
254 |
255 | Back in [Chapter 3](#UpdatingRows) we saw how to modify rows with the `update` method.
256 | We noted that batch updates were challenging when we wanted to use the row's current value.
257 | The example we used was appending an exclamation mark to a message's content:
258 |
259 | ```sql
260 | UPDATE "message" SET "content" = CONCAT("content", '!')
261 | ```
262 |
263 | Plain SQL updates will allow us to do this. The interpolator is `sqlu`:
264 |
265 | ```scala mdoc
266 | val updateAction =
267 | sqlu"""UPDATE "message" SET "content" = CONCAT("content", '!')"""
268 | ```
269 |
270 | The `updateAction` we have constructed, just like other actions, is not run until we evaluate it via `db.run`. But when it is run, it will append the exclamation mark to each row value, which is what we couldn't do as efficiently with the lifted embedded style.
271 |
272 | Just like the `sql` interpolator, we also have access to `$` for binding to variables:
273 |
274 | ```scala mdoc
275 | val char = "!"
276 | val interpolatorAction =
277 | sqlu"""UPDATE "message" SET "content" = CONCAT("content", $char)"""
278 | ```
279 |
280 | This gives us two benefits: the compiler will point out typos in variable names,
281 | and the input is sanitized against [SQL injection attacks][link-wikipedia-injection].
282 |
283 | In this case, the statement that Slick generates will be:
284 |
285 | ```scala mdoc
286 | interpolatorAction.statements.head
287 | ```
288 |
289 |
290 | ### Updating with Custom Types
291 |
292 | Working with basic types like `String` and `Int` is fine, but sometimes you want to update using a richer type.
293 | We saw the `GetResult` type class for mapping select results, and for updates this is mirrored with the `SetParameter` type class.
294 |
295 | We can teach Slick how to set `DateTime` parameters like this:
296 |
297 | ```scala mdoc
298 | import slick.jdbc.SetParameter
299 |
300 | implicit val SetDateTime = SetParameter[DateTime](
301 | (dt, pp) => pp.setTimestamp(new Timestamp(dt.getMillis))
302 | )
303 | ```
304 |
305 | The value `pp` is a `PositionedParameters`. This is an implementation detail of Slick, wrapping a SQL statement and a placeholder for a value.
306 | Effectively we're saying how to treat a `DateTime` regardless of where it appears in the update statement.
307 |
308 | In addition to a `Timestamp` (via `setTimestamp`), you can set: `Boolean`, `Byte`, `Short`, `Int`, `Long`, `Float`, `Double`, `BigDecimal`, `Array[Byte]`, `Blob`, `Clob`, `Date`, `Time`, as well as `Object` and `null`. There are _setXXX_ methods on `PositionedParameters` for `Option` types, too.
309 |
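For example, here's a sketch of a `SetParameter` for an optional `DateTime`, assuming the `setTimestampOption` method on `PositionedParameters`:

```scala
// Handle Option[DateTime] by delegating to setTimestampOption:
implicit val SetOptionalDateTime = SetParameter[Option[DateTime]](
  (odt, pp) => pp.setTimestampOption(odt.map(dt => new Timestamp(dt.getMillis)))
)
```
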
310 | There's further symmetry with `GetResult` in that we could have used `>>` in our `SetParameter`:
311 |
312 | ```scala mdoc:nest
313 | implicit val SetDateTime = SetParameter[DateTime](
314 | (dt, pp) => pp >> new Timestamp(dt.getMillis))
315 | ```
316 |
317 | With this in place we can construct Plain SQL updates using `DateTime` instances:
318 |
319 | ```scala mdoc
320 | val now =
321 | sqlu"""UPDATE "message" SET "created" = ${DateTime.now}"""
322 | ```
323 |
324 | Without the `SetParameter[DateTime]` instance the compiler would tell you:
325 |
326 | ```scala
327 | could not find implicit SetParameter[DateTime]
328 | ```
329 |
330 |
331 |
332 | ## Type Checked Plain SQL
333 |
334 | We've mentioned the risks of Plain SQL, which can be summarized as not discovering a problem with your query until runtime. The `tsql` interpolator removes some of this risk, but at the cost of requiring a connection to a database at compile time.
335 |
336 |
337 | **Run the Code**
338 |
339 | These examples won't run in the REPL.
340 | To try these out, use the `tsql.scala` file inside the `chapter-07` folder.
341 | This is all in the [example code base on GitHub][link-example].
342 |
343 |
344 | ### Compile Time Database Connections
345 |
346 | To get started with `tsql` we provide database configuration information on a class:
347 |
348 | ```scala
349 | import slick.backend.StaticDatabaseConfig
350 |
351 | @StaticDatabaseConfig("file:src/main/resources/application.conf#tsql")
352 | object TsqlExample {
353 | // queries go here
354 | }
355 | ```
356 |
357 | The `@StaticDatabaseConfig` syntax is called an _annotation_. This particular `StaticDatabaseConfig` annotation is telling Slick to use the connection called "tsql" in our configuration file. That entry will look like this:
358 |
359 | ```scala
360 | tsql {
361 | profile = "slick.jdbc.H2Profile$"
362 | db {
363 | connectionPool = disabled
364 | url = "jdbc:h2:mem:chapter06; INIT=
365 | runscript from 'src/main/resources/integration-schema.sql'"
366 | driver = "org.h2.Driver"
367 | keepAliveConnection = false
368 | }
369 | }
370 | ```
371 |
372 | Note the `$` in the profile class name is not a typo. The class name is being passed to Java's `Class.forName`, but of course Java doesn't have a singleton as such. The Slick configuration does the right thing to load `$MODULE` when it sees `$`. This interoperability with Java is described in [Chapter 29 of _Programming in Scala_][link-pins-interop].
373 |
374 | You won't have seen this when we introduced the database configuration in Chapter 1. That's because this `tsql` configuration has a different format, and combines the Slick profile (`slick.jdbc.H2Profile`) and the JDBC driver (`org.h2.Driver`) in one entry.
375 |
376 | A consequence of supplying a `@StaticDatabaseConfig` is that you can define one database configuration for your application and a different one for the compiler to use. That is, perhaps you are running an application, or test suite, against an in-memory database, but validating the queries at compile time against a fully-populated, production-like integration database.
377 |
378 | In the example above, and the accompanying example code, we use an in-memory database to make Slick easy to get started with. However, an in-memory database is empty by default, and that would be no use for checking queries against. To work around that we provide an `INIT` script to populate the in-memory database.
379 | For our purposes, the `integration-schema.sql` file only needs to contain one line:
380 |
381 | ```sql
382 | create table "message" (
383 | "content" VARCHAR NOT NULL,
384 | "id" BIGSERIAL NOT NULL PRIMARY KEY
385 | );
386 | ```
387 |
388 |
389 | ### Type Checked Plain SQL
390 |
391 | With the `@StaticDatabaseConfig` in place we can use `tsql`:
392 |
393 | ```scala
394 | val action: DBIO[Seq[String]] = tsql""" select "content" from "message" """
395 | ```
396 |
397 | You can run that query as you would a `sql` or `sqlu` query.
398 | You can also use custom types via the `SetParameter` type class. However, `GetResult` type classes are not supported for `tsql`.
399 |
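For example, binding a Scala value works much as it does with `sql`. Here's a sketch against the `message` table defined in `integration-schema.sql`:

```scala
val minId = 1L
val newerMessages: DBIO[Seq[String]] =
  tsql"""select "content" from "message" where "id" > $minId"""
```
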
400 | Let's get the query wrong and see what happens:
401 |
402 | ```scala
403 | val action: DBIO[Seq[String]] =
404 | tsql"""select "content", "id" from "message""""
405 | ```
406 |
407 | Do you see what's wrong? If not, don't worry because the compiler will find the problem:
408 |
409 | ```scala
410 | type mismatch;
411 | [error] found : SqlStreamingAction[
412 | Vector[(String, Int)],
413 | (String, Int),Effect ]
414 | [error] required : DBIO[Seq[String]]
415 | ```
416 |
417 | The compiler wants a `String` for each row, because that's what we've declared the result to be.
418 | However it has found, via the database, that the query will return `(String,Int)` rows.
419 |
420 | If we had omitted the type declaration, the action would have the inferred type of `DBIO[Seq[(String,Int)]]`.
421 | So if you want to catch these kinds of mismatches, it's good practice to declare the type you expect when using `tsql`.
422 |
423 | Let's see other kinds of errors the compiler will find.
424 |
425 | How about if the SQL is just wrong:
426 |
427 | ```scala
428 | val action: DBIO[Seq[String]] =
429 | tsql"""select "content" from "message" where"""
430 | ```
431 |
432 | This is incomplete SQL, and the compiler tells us:
433 |
434 | ```scala
435 | exception during macro expansion: ERROR: syntax error at end of input
436 | [error] Position: 38
437 | [error] tsql"""select "content" from "message" WHERE"""
438 | [error] ^
439 | ```
440 |
441 | And if we get a column name wrong...
442 |
443 | ```scala
444 | val action: DBIO[Seq[String]] =
445 |   tsql"""select "text" from "message""""
446 | ```
447 |
448 | ...that's also a compile error:
449 |
450 | ```scala
451 | Exception during macro expansion: ERROR: column "text" does not exist
452 | [error] Position: 8
453 | [error] tsql"""select "text" from "message""""
454 | [error] ^
455 | ```
456 |
457 | Of course, in addition to selecting rows, you can insert:
458 |
459 | ```scala
460 | val greeting = "Hello"
461 | val action: DBIO[Seq[Int]] =
462 | tsql"""insert into "message" ("content") values ($greeting)"""
463 | ```
464 |
465 | Note that at run time, when we execute the query, a new row will be inserted.
466 | At compile time, Slick uses a facility in JDBC to compile the query and retrieve the meta data without having to run the query.
467 | In other words, at compile time the database is not mutated.
468 |
469 |
470 | ## Take Home Points
471 |
472 | Plain SQL allows you a way out of any limitations you find with Slick's lifted embedded style of querying.
473 |
474 | Two main string interpolators for SQL are provided: `sql` and `sqlu`:
475 |
476 | - Values can be safely substituted into Plain SQL queries using `${expression}`.
477 |
478 | - Custom types can be used with the interpolators providing an implicit `GetResult` (select) or `SetParameter` (update) is in scope for the type.
479 |
480 | - Raw values can be spliced into a query with `#$`. Use this with care: end-user supplied information should never be spliced into a query.
481 |
482 | The `tsql` interpolator will check Plain SQL queries against a database at compile time. The database connection is used to validate the query syntax, and also discover the types of the columns being selected. To make best use of this, always declare the type of the query you expect from `tsql`.
483 |
484 |
485 | ## Exercises
486 |
487 | For these exercises we will use a combination of messages and users.
488 | We'll set this up using the lifted embedded style:
489 |
490 | ```scala mdoc:reset:invisible
491 | import slick.jdbc.H2Profile.api._
492 | import scala.concurrent.{Await,Future}
493 | import scala.concurrent.duration._
494 | import scala.concurrent.ExecutionContext.Implicits.global
495 |
496 | val db = Database.forConfig("chapter07")
497 |
498 | def exec[T](action: DBIO[T]): T = Await.result(db.run(action), 4.seconds)
499 | ```
500 |
501 | ```scala mdoc:silent
502 | case class User(
503 | name : String,
504 | email : Option[String] = None,
505 | id : Long = 0L
506 | )
507 |
508 | class UserTable(tag: Tag) extends Table[User](tag, "user") {
509 | def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
510 | def name = column[String]("name")
511 | def email = column[Option[String]]("email")
512 | def * = (name, email, id).mapTo[User]
513 | }
514 |
515 | lazy val users = TableQuery[UserTable]
516 | lazy val insertUsers = users returning users.map(_.id)
517 |
518 | case class Message(senderId: Long, content: String, id: Long = 0L)
519 |
520 | class MessageTable(tag: Tag) extends Table[Message](tag, "message") {
521 | def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
522 | def senderId = column[Long]("sender_id")
523 | def content = column[String]("content")
524 | def * = (senderId, content, id).mapTo[Message]
525 | }
526 |
527 | lazy val messages = TableQuery[MessageTable]
528 |
529 | val setup = for {
530 | _ <- (users.schema ++ messages.schema).create
531 | daveId <- insertUsers += User("Dave")
532 | halId <- insertUsers += User("HAL")
533 | rowsAdded <- messages ++= Seq(
534 | Message(daveId, "Hello, HAL. Do you read me, HAL?"),
535 | Message(halId, "Affirmative, Dave. I read you."),
536 | Message(daveId, "Open the pod bay doors, HAL."),
537 | Message(halId, "I'm sorry, Dave. I'm afraid I can't do that.")
538 | )
539 | } yield rowsAdded
540 |
541 | exec(setup)
542 | ```
543 |
544 | ### Plain Selects
545 |
546 | Let's get warmed up with some simple exercises.
547 |
548 | Write the following four queries as Plain SQL queries:
549 |
550 | - Count the number of rows in the message table.
551 |
552 | - Select the content from the messages table.
553 |
554 | - Select the length of each message ("content") in the messages table.
555 |
556 | - Select the content and length of each message.
557 |
558 | Tips:
559 |
560 | - Remember that you need to use double quotes around table and column names in the SQL.
561 |
562 | - We gave the database tables names which are singular: `message`, `user`, etc.
563 |
564 |
565 | The SQL statements are relatively simple. You need to take care to make the `as[T]` align to the result of the query.
566 |
567 | ```scala mdoc
568 | val q1 = sql""" select count(*) from "message" """.as[Int]
569 | val a1 = exec(q1)
570 |
571 | val q2 = sql""" select "content" from "message" """.as[String]
572 | val a2 = exec(q2)
573 | a2.foreach(println)
574 |
575 | val q3 = sql""" select length("content") from "message" """.as[Int]
576 | val a3 = exec(q3)
577 |
578 | val q4 = sql""" select "content", length("content") from "message" """.as[(String,Int)]
579 | val a4 = exec(q4)
580 | a4.foreach(println)
581 | ```
582 |
583 | ```scala mdoc:invisible
584 | assert(a1.head == 4, s"Expected 4 results for a1, not $a1")
585 | assert(a2.length == 4, s"Expected 4 results for a2, not $a2")
586 | assert(a3 == Seq(32,30,28,44), s"Expected specific lengths, not $a3")
587 | assert(a4.length == 4, s"Expected 4 results for a4, not $a4")
588 | ```
589 |
590 |
591 | ### Conversion
592 |
593 | Convert the following lifted embedded query to a Plain SQL query.
594 |
595 | ```scala mdoc
596 | val whoSaidThat =
597 | messages.join(users).on(_.senderId === _.id).
598 | filter{ case (message,user) =>
599 | message.content === "Open the pod bay doors, HAL."}.
600 | map{ case (message,user) => user.name }
601 |
602 | exec(whoSaidThat.result)
603 | ```
604 |
605 | Tips:
606 |
607 | - If you're not familiar with SQL syntax, peek at the statement generated for `whoSaidThat` given above.
608 |
609 | - Remember that strings in SQL are wrapped in single quotes, not double quotes.
610 |
611 | - In the database, the sender's ID is `sender_id`.
612 |
613 |
614 |
615 | There are various ways to implement this query in SQL. Here's one of them...
616 |
617 | ```scala mdoc
618 | val whoSaidThatPlain = sql"""
619 | select
620 | "name" from "user" u
621 | join
622 | "message" m on u."id" = m."sender_id"
623 | where
624 | m."content" = 'Open the pod bay doors, HAL.'
625 | """.as[String]
626 |
627 | exec(whoSaidThatPlain)
628 | ```
629 |
630 |
631 |
632 | ### Substitution
633 |
634 | Complete the implementation of this method using a Plain SQL query:
635 |
636 | ```scala
637 | def whoSaid(content: String): DBIO[Seq[String]] =
638 | ???
639 | ```
640 |
641 | Running `whoSaid("Open the pod bay doors, HAL.")` should return a list of the people who said that, which should be just Dave.
642 |
643 | This should be a small change to your solution to the last exercise.
644 |
645 |
646 | The solution requires the use of a `$` substitution:
647 |
648 | ```scala mdoc
649 | def whoSaid(content: String): DBIO[Seq[String]] =
650 | sql"""
651 | select
652 | "name" from "user" u
653 | join
654 | "message" m on u."id" = m."sender_id"
655 | where
656 | m."content" = $content
657 | """.as[String]
658 |
659 | exec(whoSaid("Open the pod bay doors, HAL."))
660 |
661 | exec(whoSaid("Affirmative, Dave. I read you."))
662 | ```
663 |
664 |
665 |
666 | ### First and Last
667 |
668 | This H2 query returns the alphabetically first and last messages:
669 |
670 | ```scala mdoc
671 | exec(sql"""
672 | select min("content"), max("content")
673 | from "message" """.as[(String,String)]
674 | )
675 | ```
676 |
677 | In this exercise we want you to write a `GetResult` type class instance so that the result of the query is one of these:
678 |
679 | ```scala mdoc:silent
680 | case class FirstAndLast(first: String, last: String)
681 | ```
682 |
683 | The steps are:
684 |
685 | 1. Remember to `import slick.jdbc.GetResult`.
686 |
687 | 2. Provide an implicit value for `GetResult[FirstAndLast]`
688 |
689 | 3. Make the query use `as[FirstAndLast]`
690 |
691 |
705 |
706 |
707 | ### Plain Change
708 |
709 | We can use Plain SQL to modify the database.
710 | That means inserting rows, updating rows, deleting rows, and also modifying the schema.
711 |
712 | Go ahead and create a new table, using Plain SQL, to store the crew's jukebox playlist.
713 | Just store a song title. Insert a row into the table.
714 |
715 |
716 | For modifications we use `sqlu`, not `sql`:
717 |
718 | ```scala mdoc
719 | exec(sqlu""" create table "jukebox" ("title" text) """)
720 |
721 | exec(sqlu""" insert into "jukebox"("title")
722 | values ('Bicycle Built for Two') """)
723 |
724 | exec(sql""" select "title" from "jukebox" """.as[String])
725 | ```
726 |
727 |
728 |
729 | ### Robert Tables
730 |
731 | We're building a web site that allows searching for users by their email address:
732 |
733 | ```scala mdoc
734 | def lookup(email: String) =
735 | sql"""select "id" from "user" where "email" = '#${email}'"""
736 |
737 | // Example use:
738 | exec(lookup("dave@example.org").as[Long].headOption)
739 | ```
740 |
741 | What's the problem with this code?
742 |
743 |
744 | If you are familiar with [xkcd's Little Bobby Tables](http://xkcd.com/327/),
745 | the title of the exercise has probably tipped you off: `#$` does not escape input.
746 |
747 | This means a user could use a carefully crafted email address to do evil:
748 |
749 | ```scala mdoc
750 | val evilAction = lookup("""';DROP TABLE "user";--- """).as[Long]
751 | exec(evilAction)
752 | ```
753 |
754 | This "email address" turns into two queries:
755 |
756 | ~~~ sql
757 | SELECT "id" FROM "user" WHERE "email" = '';
758 | ~~~
759 |
760 | and
761 |
762 | ~~~ sql
763 | DROP TABLE "user";
764 | ~~~
765 |
766 | Trying to access the users table after this will produce:
767 |
768 | ```scala mdoc
769 | exec(users.result.asTry)
770 | ```
771 |
772 | Yes, the table was dropped by the query.
773 |
774 | Never use `#$` with user supplied input.
775 |
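A safer version, as a sketch, uses `$` so the interpolator escapes the input:

```scala
// The value of email is bound as a parameter, not spliced into the SQL:
def lookupSafe(email: String): DBIO[Seq[Long]] =
  sql"""select "id" from "user" where "email" = $email""".as[Long]
```
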
776 |
--------------------------------------------------------------------------------
/src/pages/8-databases.md:
--------------------------------------------------------------------------------
1 | # Using Different Database Products {#altdbs}
2 |
3 | As mentioned during the introduction, H2 is used throughout the book for examples.
4 | However Slick also supports PostgreSQL, MySQL, Derby, SQLite, Oracle, and Microsoft Access.
5 |
6 | There was a time when you needed a commercial license from Lightbend to use Slick in production with Oracle, SQL Server, or DB2.
7 | This restriction was removed in early 2016[^slick-blog-open].
8 | Before that change, there was an effort to build free and open profiles, resulting in the FreeSlick project.
9 | These profiles continue to be available, and you can find out more about this from the [FreeSlick GitHub page](https://github.com/smootoo/freeslick).
10 |
11 | [^slick-blog-open]: [https://scala-slick.org/news/2016/02/01/slick-extensions-licensing-change.html](https://scala-slick.org/news/2016/02/01/slick-extensions-licensing-change.html).
12 |
13 | ## Changes
14 |
15 | If you want to use a different database for the exercises in the book,
16 | you will need to make the changes detailed below.
17 |
18 | In summary you will need to ensure that:
19 |
20 | * you have installed the database (details beyond the scope of this book);
21 | * a database is available with the correct name;
22 | * the `build.sbt` file has the correct dependency;
23 | * the correct JDBC driver is referenced in the code; and
24 | * the correct Slick profile is used.
25 |
26 | Each chapter uses its own database---so these steps will need to be applied for each chapter.
27 |
28 | We've given detailed instructions for two popular databases below.
29 |
30 | ## PostgreSQL
31 |
32 | ### Create a Database
33 |
34 | Create a database named `chapter-01` with user `essential`. This will be used for all examples and can be created with the following:
35 |
36 | ~~~ sql
37 | CREATE DATABASE "chapter-01" WITH ENCODING 'UTF8';
38 | CREATE USER "essential" WITH PASSWORD 'trustno1';
39 | GRANT ALL ON DATABASE "chapter-01" TO essential;
40 | ~~~
41 |
42 | Confirm the database has been created and can be accessed:
43 |
44 | ~~~ bash
45 | $ psql -d chapter-01 essential
46 | ~~~
47 |
48 | ### Update `build.sbt` Dependencies
49 |
50 | Replace
51 |
52 | ~~~ scala
53 | "com.h2database" % "h2" % "1.4.185"
54 | ~~~
55 |
56 | with
57 |
58 | ~~~ scala
59 | "org.postgresql" % "postgresql" % "9.3-1100-jdbc41"
60 | ~~~
61 |
62 | If you are already in SBT, type `reload` to load this changed build file.
63 | If you are using an IDE, don't forget to regenerate any IDE project files.
64 |
65 | ### Update JDBC References
66 |
67 | Replace `application.conf` parameters with:
68 |
69 | ~~~ json
70 | chapter01 = {
71 | connectionPool = disabled
72 | url = jdbc:postgresql:chapter-01
73 | driver = org.postgresql.Driver
74 | keepAliveConnection = true
75 |   user = essential
76 | password = trustno1
77 | }
78 | ~~~
79 |
80 | ### Update Slick Profile
81 |
82 | Change the import from
83 |
84 | ```scala
85 | slick.jdbc.H2Profile.api._
86 | ```
87 |
88 | to
89 |
90 | ```scala
91 | slick.jdbc.PostgresProfile.api._
92 | ```
93 |
94 |
95 | ## MySQL
96 |
97 | ### Create a Database
98 |
99 | Create a database named `chapter-01` with user `essential`. This will be used for all examples and can be created with the following:
100 |
101 | ~~~ sql
102 | CREATE USER 'essential'@'localhost' IDENTIFIED BY 'trustno1';
103 | CREATE DATABASE `chapter-01` CHARACTER SET utf8 COLLATE utf8_bin;
104 | GRANT ALL ON `chapter-01`.* TO 'essential'@'localhost';
105 | FLUSH PRIVILEGES;
106 | ~~~
107 |
108 | Confirm the database has been created and can be accessed:
109 |
110 | ~~~ bash
111 | $ mysql -u essential chapter-01 -p
112 | ~~~
113 |
114 | ### Update `build.sbt` Dependencies
115 |
116 | Replace
117 |
118 | ~~~ scala
119 | "com.h2database" % "h2" % "1.4.185"
120 | ~~~
121 |
122 | with
123 |
124 | ~~~ scala
125 | "mysql" % "mysql-connector-java" % "5.1.34"
126 | ~~~
127 |
128 | If you are already in SBT, type `reload` to load this changed build file.
129 | If you are using an IDE, don't forget to regenerate any IDE project files.
130 |
131 | ### Update JDBC References
132 |
133 | Replace `application.conf` parameters with:
134 |
135 | ~~~ json
136 | chapter01 = {
137 | connectionPool = disabled
138 | url = jdbc:mysql://localhost:3306/chapter-01
139 | &useUnicode=true
140 | &characterEncoding=UTF-8
141 | &autoReconnect=true
142 | driver = com.mysql.jdbc.Driver
143 | keepAliveConnection = true
144 |   user = essential
145 | password = trustno1
146 | }
147 | ~~~
148 |
149 | Note that we've formatted the `url` line to make it legible.
150 | In reality all those `&` parameters will be on the same line.
151 |
152 |
153 | ### Update Slick Profile
154 |
155 | Change the import from
156 |
157 | ```scala
158 | slick.jdbc.H2Profile.api._
159 | ```
160 |
161 | to
162 |
163 | ```scala
164 | slick.jdbc.MySQLProfile.api._
165 | ```
166 |
--------------------------------------------------------------------------------
/src/pages/end-of-preview.md:
--------------------------------------------------------------------------------
1 |
2 | ---
3 |
4 | # You've Reached the End of the Preview {-}
5 |
6 | Thanks for taking an interest in _Essential Slick_.
7 |
8 | We hope we've made a useful introduction for teams who want to be productive quickly with Slick.
9 |
10 | Underscore is a [consultancy and software development](https://underscore.io/consulting/) company,
11 | and the text is based on our experience working with teams making their first steps with Slick.
12 |
13 | The full version of the book includes chapters on:
14 |
15 | - Basics
16 | - Selecting Data
17 | - Creating and Modifying Data
18 | - Action Combinators and Transactions
19 | - Data Modelling
20 | - Joins and Aggregates
21 | - Plain SQL
22 | - Appendix A: Using Different Database Products
23 | - Appendix B: Solutions to Exercises
24 |
25 | Customers receive free lifetime updates of the book for Slick 2.1 and 3.
26 |
27 | Purchase a copy from: [https://underscore.io/books/essential-slick/](https://underscore.io/books/essential-slick/)
28 |
29 | ## Related Titles from Underscore {-}
30 |
31 | - [Creative Scala](https://underscore.io/books/creative-scala/), aimed at developers who have no prior experience in Scala.
32 |
33 | - [Essential Scala](https://underscore.io/books/essential-scala/), the book and course that teaches Scala from the basics of its syntax to advanced problem solving techniques.
34 |
35 | - [Essential Play](https://underscore.io/books/essential-play/), covering a comprehensive set of topics required to create web sites and web services in Play.
36 |
37 | - [Advanced Scala with Cats](https://underscore.io/books/advanced-scala/), for experienced Scala developers who want to take the next step in engineering using modern functional programming.
38 |
39 | We also offer [training courses](https://underscore.io/training/) delivered online and on site.
40 |
41 |
--------------------------------------------------------------------------------
/src/pages/links.md:
--------------------------------------------------------------------------------
1 | [link-example]: https://github.com/underscoreio/essential-slick-code/tree/3.3
2 | [link-book-issues]: https://github.com/underscoreio/essential-slick/issues
3 | [link-book-pr]: https://github.com/underscoreio/essential-slick/pulls
4 | [link-book-repo]: https://github.com/underscoreio/essential-slick/
5 | [link-slick]: https://scala-slick.org/
6 | [link-slick-gitter]:https://gitter.im/slick/slick
7 | [link-slick-so]: https://stackoverflow.com/questions/tagged/slick
8 | [link-ref-dbs]: https://scala-slick.org/doc/3.3.3/supported-databases.html
9 | [link-ref-gen]: https://scala-slick.org/doc/3.3.3/code-generation.html
10 | [link-ref-h2driver]: https://scala-slick.org/doc/3.3.3/api/#slick.jdbc.H2Profile
11 | [link-ref-orm]: https://scala-slick.org/doc/3.3.3/orm-to-slick.html
12 | [link-ref-actions]: https://scala-slick.org/doc/3.3.3/dbio.html
13 | [link-slick-column-options]: https://scala-slick.org/doc/3.3.3/api/index.html#slick.ast.ColumnOption
14 | [link-slick-hlist]: https://scala-slick.org/doc/3.3.3/api/#slick.collection.heterogeneous.HList
15 | [link-source-dbpublisher]: https://scala-slick.org/doc/3.3.3/api/index.html#slick.basic.DatabasePublisher
16 | [link-slick-streaming]: https://scala-slick.org/doc/3.3.3/dbio.html#streaming
17 | [link-slick-ug-time]: https://scala-slick.org/doc/3.3.3/upgrade.html
18 | [link-source-extmeth]: https://github.com/slick/slick/blob/v3.3.3/slick/src/main/scala/slick/lifted/ExtensionMethods.scala
19 | [link-hikari]: https://brettwooldridge.github.io/HikariCP/
20 | [link-config]: https://github.com/typesafehub/config
21 | [link-jodatime]: https://www.joda.org/joda-time/
22 | [link-logback]: https://logback.qos.ch/
23 | [link-hibernate]: https://hibernate.org
24 | [link-active-record]: https://guides.rubyonrails.org/active_record_basics.html
25 | [link-jdbc-connection-url]: https://docs.oracle.com/javase/tutorial/jdbc/basics/connecting.html
26 | [link-slf4j]: https://www.slf4j.org/
27 | [link-wartremover]: https://github.com/puffnfresh/wartremover
28 | [link-shapeless]: https://github.com/milessabin/shapeless
29 | [link-slickless]: https://github.com/underscoreio/slickless
30 | [link-visual-joins]: https://blog.codinghorror.com/a-visual-explanation-of-sql-joins/
31 | [link-akka-streams]: https://akka.io/docs/
32 | [link-reactive-streams]: https://www.reactive-streams.org/
33 | [link-wikipedia-joins]: https://en.wikipedia.org/wiki/Join_(SQL)
34 | [link-wikipedia-injection]: https://en.wikipedia.org/wiki/SQL_injection
35 | [link-dw-effect-blog]: https://danielwestheide.com/blog/2015/06/28/put-your-writes-where-your-master-is-compile-time-restriction-of-slick-effect-types.html
36 | [link-fkri]: https://en.wikipedia.org/wiki/Foreign_key#Referential_actions
37 | [link-pins-interop]: https://www.artima.com/pins1ed/combining-scala-and-java.html#i-855208314-1
38 | [link-scala-interpolation]: https://docs.scala-lang.org/overviews/core/string-interpolation.html
39 | [link-scala-option]: https://www.scala-lang.org/api/current/scala/Option.html
40 | [link-scala-value-classes]: https://docs.scala-lang.org/overviews/core/value-classes.html
41 | [link-sip-named-default]: https://docs.scala-lang.org/sips/completed/named-and-default-arguments.html
42 | [link-h2-home]: https://www.h2database.com
43 | [link-h2-jdbc-url]: https://www.h2database.com/html/tutorial.html#connecting_using_jdbc
44 | [link-mysql-download]: https://dev.mysql.com/downloads/mysql/
45 | [link-mysql]: https://www.mysql.com/
46 | [link-postgres-download]: https://www.postgresql.org/download/
47 | [link-postgres]: https://www.postgresql.org/
48 | [link-twitter-jono]: https://twitter.com/jonoabroad
49 | [link-twitter-noel]: https://twitter.com/noelwelsh
50 | [link-twitter-dave]: https://twitter.com/davegurnell
51 | [link-twitter-richard]: https://twitter.com/d6y
52 | [link-twitter-underscore]: https://twitter.com/underscoreio
53 | [link-underscore-gitter]: https://gitter.im/underscoreio/scala
54 | [link-underscore]: https://underscore.io
55 | [link-email-underscore]: mailto:hello@underscore.io?subject=Essential%20Slick
56 | [link-newsletter]: https://underscore.io/blog/newsletters/
57 | [link-essential-scala]: https://underscore.io/training/courses/essential-scala
58 | [link-sbt]: https://scala-sbt.org
59 | [link-sbt-tutorial]: https://www.scala-sbt.org/1.x/docs/Getting-Started.html
60 | [link-essential-play]: https://underscore.io/training/courses/essential-play/
61 | [link-mdoc]: https://scalameta.org/mdoc/
62 | [link-renato]: https://twitter.com/renatocaval
63 | [link-ottinger]: https://github.com/jottinger
64 | [link-meredith]: https://twitter.com/Gentmen
65 | [link-simon]: https://github.com/yanns
66 | [link-trevor]: https://github.com/trevorsibanda
67 |
--------------------------------------------------------------------------------
/src/pages/solutions.md:
--------------------------------------------------------------------------------
1 | # Solutions to Exercises {#solutions}
2 |
3 |
--------------------------------------------------------------------------------
/src/templates/cover-notes.tex:
--------------------------------------------------------------------------------
1 | \begin{center}
2 |
3 | Copies of this, and related topics,
4 | can be found at \href{https://underscore.io/training}{https://underscore.io/training}.
5 | Team discounts, when available, may also be found at that address.
6 | Contact the authors at \href{mailto:hello@underscore.io}{hello@underscore.io}.
7 |
8 | \vspace{3em}
9 |
10 | Underscore provides consulting, software development, and training in Scala and functional programming.
11 | You can find us on the web at \href{https://underscore.io}{https://underscore.io}
12 | and on Twitter at \href{https://twitter.com/underscoreio}{@underscoreio}.
13 |
14 | In addition to writing software,
15 | we provide other training courses, workshops, books, and mentoring
16 | to help you and your team create better software and have more fun.
17 | For more information please visit
18 | \href{https://underscore.io/training}{https://underscore.io/training}.
19 |
20 | \end{center}
21 |
--------------------------------------------------------------------------------
/src/templates/images/brand.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/underscoreio/essential-slick/f951634930d11dbfb19f52f4e3a4f2dae70c201d/src/templates/images/brand.pdf
--------------------------------------------------------------------------------
/src/templates/images/brand.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
108 |
--------------------------------------------------------------------------------
/src/templates/images/hero-arrow-overlay-white.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
6 |
--------------------------------------------------------------------------------
/src/templates/images/hero-left-overlay-white.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/underscoreio/essential-slick/f951634930d11dbfb19f52f4e3a4f2dae70c201d/src/templates/images/hero-left-overlay-white.pdf
--------------------------------------------------------------------------------
/src/templates/images/hero-left-overlay-white.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
7 |
--------------------------------------------------------------------------------
/src/templates/images/hero-right-overlay-white.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/underscoreio/essential-slick/f951634930d11dbfb19f52f4e3a4f2dae70c201d/src/templates/images/hero-right-overlay-white.pdf
--------------------------------------------------------------------------------
/src/templates/images/hero-right-overlay-white.svg:
--------------------------------------------------------------------------------
1 |
2 |
3 |
7 |
--------------------------------------------------------------------------------
/src/templates/template.epub.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 | $pagetitle$
9 | $if(highlighting-css)$
10 |
13 | $endif$
14 | $for(css)$
15 |
16 | $endfor$
17 |
18 |
19 | $if(titlepage)$
20 |