Anorm, simple SQL data access
Play includes a simple data access layer called Anorm that uses plain SQL to interact with the database and provides an API to parse and transform the resulting datasets.
Anorm is Not an Object Relational Mapper
In the following documentation, we will use the MySQL world sample database.
If you want to enable it for your application, follow the MySQL website instructions, and configure it as explained on the Scala database page.
Overview
It can feel strange to return to plain old SQL to access an SQL database these days, especially for Java developers accustomed to using a high-level Object Relational Mapper like Hibernate to completely hide this aspect.
Although we agree that these tools are almost required in Java, we think that they are not needed at all when you have the power of a higher-level programming language like Scala. On the contrary, they will quickly become counter-productive.
Using JDBC is a pain, but we provide a better API
We agree that using the JDBC API directly is tedious, particularly in Java. You have to deal with checked exceptions everywhere and iterate over and over around the ResultSet to transform this raw dataset into your own data structure.
We provide a simpler API for JDBC; using Scala you don’t need to bother with exceptions, and transforming data is really easy with a functional language. In fact, the goal of the Play Scala SQL access layer is to provide several APIs to effectively transform JDBC data into other Scala structures.
You don’t need another DSL to access relational databases
SQL is already the best DSL for accessing relational databases. We don’t need to invent something new. Moreover the SQL syntax and features can differ from one database vendor to another.
If you try to abstract this point with another proprietary SQL-like DSL, you will have to deal with several ‘dialects’ dedicated to each vendor (like the Hibernate ones), and limit yourself by not being able to use a particular database’s interesting features.
Play will sometimes provide you with pre-filled SQL statements, but the idea is not to hide the fact that we use SQL under the hood. Play just saves typing a bunch of characters for trivial queries, and you can always fall back to plain old SQL.
A type safe DSL to generate SQL is a mistake
Some argue that a type safe DSL is better since all your queries are checked by the compiler. Unfortunately the compiler checks your queries based on a meta-model definition that you often write yourself by ‘mapping’ your data structure to the database schema.
There are no guarantees that this meta-model is correct. Even if the compiler says that your code and your queries are correctly typed, it can still miserably fail at runtime because of a mismatch in your actual database definition.
Take Control of your SQL code
Object Relational Mapping works well for trivial cases, but when you have to deal with complex schemas or existing databases, you will spend most of your time fighting with your ORM to make it generate the SQL queries you want.
Writing SQL queries yourself can be tedious for a simple ‘Hello World’ application, but for any real-life application, you will eventually save time and simplify your code by taking full control of your SQL code.
Add Anorm to your project
You will need to add Anorm and the JDBC plugin to your dependencies:
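A minimal sketch of the build configuration (the group id and version shown are illustrative; use the ones matching your Play/Anorm setup):

```scala
// build.sbt: the JDBC plugin plus the Anorm library
libraryDependencies ++= Seq(
  jdbc,
  "org.playframework.anorm" %% "anorm" % "2.6.10" // illustrative version
)
```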
See release notes
Executing SQL queries
To start you need to learn how to execute SQL queries.
First, `import anorm._`, and then simply use the `SQL` object to create queries. You need a `Connection` to run a query, and you can retrieve one from the injected `play.api.db.Database` with the help of DI:
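For instance, a sketch of a DAO getting a `Connection` from an injected `Database` (the `WorldDAO` class name is illustrative):

```scala
import javax.inject.Inject
import anorm._
import play.api.db.Database

class WorldDAO @Inject() (db: Database) {
  def executeTestQuery(): Boolean = db.withConnection { implicit connection =>
    // The implicit Connection is required to run the query
    SQL("SELECT 1 FROM Country").execute()
  }
}
```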
The `execute()` method returns a Boolean value indicating whether the execution was successful.

To execute an update, you can use `executeUpdate()`, which returns the number of rows updated.

If you are inserting data that has an auto-generated `Long` primary key, you can call `executeInsert()`.

When the key generated on insertion is not a single `Long`, `executeInsert` can be passed a `ResultSetParser` to return the correct key.
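For example, a sketch of the different execution styles (an implicit `java.sql.Connection` is assumed in scope; the `tag` table is illustrative):

```scala
// Update: number of affected rows
val updated: Int =
  SQL("UPDATE City SET population = {pop} WHERE name = {name}")
    .on("pop" -> 10000000, "name" -> "Tokyo").executeUpdate()

// Insert with an auto-generated Long primary key
val id: Option[Long] =
  SQL("INSERT INTO City(name, countryCode, population) VALUES ({n}, {c}, {p})")
    .on("n" -> "Test", "c" -> "JPN", "p" -> 0).executeInsert()

// Insert whose generated key is not a Long: pass a ResultSetParser
val generatedCode: Option[String] =
  SQL("INSERT INTO tag(name) VALUES ({name})").on("name" -> "beta")
    .executeInsert(SqlParser.scalar[String].singleOpt)
```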
Since Scala supports multi-line strings, feel free to use them for complex SQL statements:
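For example (a sketch against the world sample database):

```scala
SQL(
  """
    SELECT c.name, l.language
    FROM Country c
    JOIN CountryLanguage l ON l.CountryCode = c.Code
    WHERE c.continent = {continent}
  """
).on("continent" -> "Europe")
```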
If your SQL query needs dynamic parameters, you can declare placeholders like `{name}` in the query string, and later assign a value to them:
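For instance (using the `Country` table):

```scala
SQL("SELECT * FROM Country WHERE code = {code}").on("code" -> "FRA")
```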
The curly braces can be escaped using `\`: `SQL("SELECT * FROM test WHERE code = '\{foo\}'")`.
You can also use string interpolation to pass parameters (see details thereafter).
In case several columns are found with the same name in the query result, for example a column named `code` in both the `Country` and `CountryLanguage` tables, there can be ambiguity. By default a mapping like the following one will use the last column:
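A sketch of such a mapping (the join and the parsed columns are illustrative; `str` comes from `anorm.SqlParser`):

```scala
import anorm._, anorm.SqlParser.str

// The un-qualified "code" resolves to the last column with that name in the row
val parser = str("name") ~ str("code") map SqlParser.flatten

SQL("""
  SELECT c.name, c.code, l.code FROM Country c
  JOIN CountryLanguage l ON l.code = c.code
""").as(parser.*)
```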
If `Country.Code` is ‘First’ and `CountryLanguage.Code` is ‘Second’, then in the previous example the `code` value will be ‘Second’. The ambiguity can be resolved using the qualified column name, with the table name:
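For example (illustrative):

```scala
// Qualified names disambiguate which table's column is read
val qualifiedParser =
  SqlParser.str("Country.name") ~ SqlParser.str("CountryLanguage.code") map SqlParser.flatten
```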
When a column is aliased, typically using SQL `AS`, its value can also be resolved. The following example parses the column with the `country_lang` alias.
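A sketch (the `country_lang` alias comes from the SQL query itself):

```scala
val lang: String = SQL(
  """
    SELECT l.language AS country_lang FROM CountryLanguage l
    WHERE l.countryCode = {code} AND l.isOfficial = 'T'
  """
).on("code" -> "FRA").as(SqlParser.str("country_lang").single)
```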
The columns can also be specified by position, rather than name:
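For instance, using positions instead of names:

```scala
// Parse the 1st and 3rd columns of each row, whatever their names
val byPosition = SqlParser.str(1) ~ SqlParser.int(3) map SqlParser.flatten
```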
If the columns are not strictly defined (e.g. with types that can vary), `SqlParser.folder` can be used to fold over each row in a custom way.
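A sketch of such a fold, collecting every column of a row into a `Map` (assuming the `folder(initialValue)((acc, value, meta) => ...)` signature):

```scala
import anorm.{ RowParser, SqlParser }

// Accumulates each (column name -> value) pair, whatever the column types are
val dynamicRow: RowParser[Map[String, Any]] =
  SqlParser.folder(Map.empty[String, Any]) { (row, value, meta) =>
    Right(row + (meta.column.qualified -> value))
  }
```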
Table alias:
With some databases, it’s possible to define aliases for tables (or for sub-queries), as in the following example. Unfortunately, such aliases are not supported by JDBC, so Anorm introduces the `ColumnAliaser` to be able to define user aliases over columns.
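A rough sketch (assuming the `ColumnAliaser.withPattern` factory and the `asTry` variant accepting an aliaser; the self-join on a `test` table is illustrative):

```scala
import scala.util.Try
import anorm._

def child(id: String)(implicit con: java.sql.Connection): Try[(String, String)] =
  SQL"SELECT * FROM test t1 JOIN test t2 ON t2.parent_id = t1.id WHERE t1.id = $id".asTry(
    (SqlParser.str("t1.name") ~ SqlParser.str("name")).map(SqlParser.flatten).single,
    // Columns 1 and 2 (the first table's columns) are given the "t1." prefix
    ColumnAliaser.withPattern((1 to 2).toSet, "t1.")
  )
```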
Passing parameters
Values can be easily bound as query parameters using Anorm.
With placeholders in the query statement, the `.on(..)` binding can be used. Otherwise, the Anorm interpolation is available.
String Interpolation
Since Scala 2.10 supports custom String Interpolation, there is also a one-step alternative to the `SQL(queryString).on(params)` seen before. You can abbreviate the code as:
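For example (using the `Country` table):

```scala
val code = "FRA"
val name: Option[String] =
  SQL"SELECT name FROM Country WHERE code = $code".as(SqlParser.scalar[String].singleOpt)
```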
It also supports multi-line strings and inline expressions:
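For instance:

```scala
val minPopulation = 1000000

SQL"""
  SELECT name FROM Country
  WHERE continent = ${"Europe"} AND population >= $minPopulation
"""
```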
This feature aims to make the way you retrieve data with Anorm faster, more concise and easier to read. Feel free to use it wherever you see a combination of `SQL().on()` functions (or even a lone `SQL()` without parameters).
By using `#$value` instead of `$value`, the interpolated value will be part of the prepared statement, rather than being passed as a parameter when executing this SQL statement (e.g. `#$cmd` and `#$table` in the example below).
Multi-value parameter
An Anorm parameter can be multi-value, like a sequence of strings. In such a case, the values will be prepared to be passed appropriately to JDBC.
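For instance (a sketch):

```scala
val languages = Seq("French", "Spanish")

// Expanded to ... WHERE language IN (?, ?) with the two values bound
SQL"SELECT * FROM CountryLanguage WHERE language IN ($languages)"
```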
On purpose, a multi-value parameter must strictly be declared with one of the supported types (`List`, `Seq`, `Set`, `SortedSet`, `Stream`, `Vector` and `SeqParameter`). A value of a subtype must be passed as a parameter typed with one of these supported types:
In case the parameter type is a JDBC array (`java.sql.Array`), its value can be passed as `Array[T]`, as long as the element type `T` is a supported one.
A column can also be multi-value if its type is a JDBC array (`java.sql.Array`); it can then be mapped to either an array or a list (`Array[T]` or `List[T]`), provided the element type (`T`) is also supported in column mapping.
Convenient parsing functions are also provided for arrays, with `SqlParser.array[T](...)` and `SqlParser.list[T](...)`.
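For example (a sketch assuming an illustrative `acl` table with a SQL array column named `roles`):

```scala
val userId = 1L

// Reads the JDBC array column as a List of strings
val roles: List[String] =
  SQL"SELECT roles FROM acl WHERE user_id = $userId"
    .as(SqlParser.list[String]("roles").single)
```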
Batch update
When you need to execute the same update several times with different arguments, a batch query can be used (e.g. to execute a batch of insertions).
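A sketch of a batch of insertions (the `Book` table is illustrative; an implicit connection is assumed for `execute()`):

```scala
import anorm.{ BatchSql, NamedParameter }

val batch = BatchSql(
  "INSERT INTO Book(title, author) VALUES({title}, {author})",
  Seq[NamedParameter]("title" -> "Play 2 for Scala", "author" -> "Peter Hilton"),
  Seq[NamedParameter]("title" -> "Learning Play!", "author" -> "Andy Petrella")
)

val res: Array[Int] = batch.execute() // number of rows updated per statement
```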
A batch update must be called with at least one list of parameters. If a batch is executed with the mandatory first list of parameters being empty (e.g. `Nil`), only one statement will be executed (without parameters), which is equivalent to `SQL(statement).executeUpdate()`.
Custom parameter conversions
It’s possible to define custom or database-specific conversions for parameters.
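A minimal sketch for a hypothetical wrapper type, providing both the `ToStatement` and the `ParameterMetaData` that Anorm expects for a parameter type:

```scala
import java.sql.PreparedStatement
import anorm.{ ParameterMetaData, ToStatement }

// Hypothetical wrapper around a plain string value
final case class Tag(value: String)

implicit val tagToStatement: ToStatement[Tag] = new ToStatement[Tag] {
  def set(s: PreparedStatement, index: Int, v: Tag): Unit =
    s.setString(index, v.value)
}

implicit val tagParamMeta: ParameterMetaData[Tag] = new ParameterMetaData[Tag] {
  val sqlType = "VARCHAR"
  val jdbcType = java.sql.Types.VARCHAR
}
```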
If the involved type accepts `null` values, this must be appropriately handled in the conversion. The `NotNullGuard` trait can be used to explicitly refuse `null` values in a parameter conversion: `new ToStatement[T] with NotNullGuard { /* ... */ }`.
A DB-specific parameter can be explicitly passed as an opaque value. In this case, at your own risk, `setObject` will be used on the statement.
Generated parameter conversions
Anorm also provides a utility to generate parameter conversions for case classes.
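For example (a sketch with an illustrative `Bar` case class):

```scala
import anorm.{ Macro, ToParameterList }

case class Bar(v: Int)

// Generates a conversion from Bar to a list of named parameters
implicit val barToParams: ToParameterList[Bar] = Macro.toParameters[Bar]
```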
Using `Macro.ParameterProjection`, it is possible to customize the parameter names, instead of using the names of the class properties by default.
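A sketch (assuming the `Macro.ParameterProjection(property, parameterName)` factory, and reusing the `Bar` class above):

```scala
// Property "v" is bound under the parameter name "w"
val customNames: ToParameterList[Bar] =
  Macro.toParameters[Bar](Macro.ParameterProjection("v", "w"))
```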
For a property `bar` of a case class `Foo` whose type is itself a case class `Bar`, the appropriate instance of `ToParameterList[Bar]` will be resolved from the implicit scope to be able to generate `ToParameterList[Foo]`.
By default, the nested properties (e.g. `Bar.w`) are represented using `_` as separator, as for `bar_w` in the previous example. A custom separator can be specified.
A sealed trait with some known subclasses can also be supported.
Value classes can be supported, if the underlying value itself is `<: Any`.
The `anorm.macro.debug` system property can be set to `true` (e.g. `sbt -Danorm.macro.debug=true ...`) to debug the generated parsers.
A type for which a `ToParameterList` instance is provided can be used to bind a value as parameters.
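For instance, a sketch reusing the `Bar` class above (assuming `ToParameterList` can be applied as a function from `Bar` to `List[NamedParameter]`):

```scala
import anorm._

val bar = Bar(1)
val params: List[NamedParameter] = implicitly[ToParameterList[Bar]].apply(bar)

// The "bar" table name is illustrative
SQL("INSERT INTO bar(v) VALUES ({v})").on(params: _*)
```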
Also see the additional module for Enumeratum.
Parsing rows
Anorm provides several ways to handle and parse the rows retrieved by database queries.
Generated row parsers
The macro `namedParser[T]` can be used to create a `RowParser[T]` at compile-time, for any case class `T`.
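For example (an illustrative case class):

```scala
import anorm.{ Macro, RowParser }

case class CountryInfo(name: String, population: Int)

// Columns are resolved by the property names: "name" and "population"
val parser: RowParser[CountryInfo] = Macro.namedParser[CountryInfo]
```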
The similar macros `indexedParser[T]` and `offsetParser[T]` are available to get column values by position instead of by name.
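For instance (reusing `CountryInfo`):

```scala
// Columns are read by position, in the declaration order of the properties
val byPositionParser: RowParser[CountryInfo] = Macro.indexedParser[CountryInfo]

// Same, but skipping the first 2 columns of each row
val withOffset: RowParser[CountryInfo] = Macro.offsetParser[CountryInfo](2)
```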
To indicate custom names for the columns to be parsed, the macro `parser[T](names)` can be used.
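For example (reusing `CountryInfo`; the column names are illustrative):

```scala
// Maps the properties, in order, to the given column names
val aliasedParser: RowParser[CountryInfo] =
  Macro.parser[CountryInfo]("country_name", "country_pop")
```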
It’s also possible to configure the named parsers using a naming strategy for the corresponding columns.
A custom column naming can be defined using `ColumnNaming(String => String)`.
A `RowParser` exposed in the implicit scope can be used as a nested parser by those generated by the macros.
A row parser for a sealed trait can be generated by the macro `sealedParser`.
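A sketch (an illustrative sealed family; this assumes a `RowParser` for each subtype is available in the implicit scope):

```scala
import anorm.{ Macro, RowParser, Success }

sealed trait Family
case class Child(v: Int) extends Family
case object Leaf extends Family

// Sub-parsers, resolved implicitly by the generated sealed parser
implicit val childParser: RowParser[Child] = Macro.namedParser[Child]
implicit val leafParser: RowParser[Leaf.type] = RowParser[Leaf.type] { _ => Success(Leaf) }

val familyParser: RowParser[Family] = Macro.sealedParser[Family]
```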
As can be seen in the previous example, with the generated code working with a discriminator value, a column named “classname” is expected to specify the fully qualified name of a subtype (e.g. `scalaguide.sql.MacroFixtures.Bar` for the child case class `Bar`).
The discriminator strategy can be customized.
Value classes are also supported, by generating instances of `anorm.Column`.
The `anorm.macro.debug` system property can be set to `true` (e.g. `sbt -Danorm.macro.debug=true ...`) to debug the generated parsers.
Parser API
You can use the parser API to create custom parsers that can handle the result of the queries.
Note: This is really useful, since most queries in a web application will return similar data sets. For example, if you have defined a parser able to parse a `Country` from a result set, and another `Language` parser, you can then easily compose them to parse both Country and Language from a join query.

First you need to `import anorm.SqlParser._`.
Parsing a single result
First you need a `RowParser`, i.e. a parser able to parse one row to a Scala value. For example we can define a parser to transform a single column result set row to a Scala `Long`:
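For example:

```scala
import anorm.RowParser
import anorm.SqlParser.scalar

val rowParser: RowParser[Long] = scalar[Long]
```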
Then we have to transform it into a `ResultSetParser`. Here we will create a parser that parses a single row:
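For instance:

```scala
val rsParser = scalar[Long].single
```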
So this parser will parse a result set to return a `Long`. It is useful for parsing the result produced by a simple SQL `select count` query:
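For example:

```scala
val count: Long = SQL("SELECT count(*) FROM Country").as(scalar[Long].single)
```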
If the expected single result is optional (0 or 1 row), then the `scalar` parser can be combined with `singleOpt`:
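For example:

```scala
val name: Option[String] =
  SQL("SELECT name FROM Country WHERE code = {code}")
    .on("code" -> "FRA").as(scalar[String].singleOpt)
```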
Parsing a single optional result
Let’s say you want to retrieve the country_id from the country name, but the query might return null. We’ll use the `singleOpt` parser:
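A sketch (the `country_id` column follows the text above):

```scala
val countryId: Option[Long] =
  SQL("SELECT country_id FROM Country WHERE name = {name}")
    .on("name" -> "France").as(scalar[Long].singleOpt)
```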
Parsing a more complex result
Let’s write a more complete parser:
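For instance:

```scala
import anorm.SqlParser.{ int, str }

val populationParser = str("name") ~ int("population")
```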
It will create a `RowParser` able to parse a row containing a String `name` column and an Integer `population` column. Then we can create a `ResultSetParser` that will parse as many rows of this kind as it can, using `*`:
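For example:

```scala
val populations: List[String ~ Int] =
  SQL("SELECT * FROM Country").as(populationParser.*)
```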
As you can see, this query’s result type is `List[String ~ Int]` - a list of country name and population items.
You can also rewrite the same code as:
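For instance:

```scala
val result: List[String ~ Int] =
  SQL("SELECT * FROM Country").as((str("name") ~ int("population")).*)
```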
Now what about the `String ~ Int` type? This is an Anorm type that is not really convenient to use outside of your database access code. You would rather have a simple tuple `(String, Int)` instead. You can use the `map` function on a `RowParser` to transform its result to a more convenient type:
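For example:

```scala
val tupleParser = str("name") ~ int("population") map { case n ~ p => (n, p) }
```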
Note: We created a tuple `(String, Int)` here, but there is nothing stopping you from transforming the `RowParser` result to any other type, such as a custom case class.
Now, because transforming `A ~ B ~ C` types to `(A, B, C)` is a common task, we provide a `flatten` function that does exactly that. So you finally write:
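For example:

```scala
import anorm.SqlParser.flatten

val parser = str("name") ~ int("population") map flatten

val result: List[(String, Int)] = SQL("SELECT * FROM Country").as(parser.*)
```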
A `RowParser` can be combined with any function, to apply it to the extracted columns.
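A sketch using `SqlParser.to`, applying a plain function to the extracted columns (assuming `to` is available in your Anorm version):

```scala
import anorm.SqlParser.{ int, str, to }

case class CountryRow(name: String, population: Int)

val rowParser = str("name") ~ int("population") map (to(CountryRow.apply _))
```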
Note: The mapping function must be partially applied (syntax `fn _`) when given to the parser (see SLS 6.26.2, 6.26.5 - Eta expansion).
If the list should not be empty, `parser.+` can be used instead of `parser.*`.
Anorm provides parser combinators other than the most common `~` one: `~>`, `<~`.
Complete example
Now let’s try with a complete example. How to parse the result of the following query to retrieve the country name and all spoken languages for a country code?
Let’s start by parsing all rows as a `List[(String, String)]` (a list of name and language tuples):
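A sketch of such a query and parser (the join follows the world sample schema):

```scala
import java.sql.Connection
import anorm._
import anorm.SqlParser.{ flatten, str }

def languages(countryCode: String)(implicit c: Connection): List[(String, String)] =
  SQL"""
    SELECT c.name, l.language FROM Country c
    JOIN CountryLanguage l ON l.CountryCode = c.Code
    WHERE c.Code = $countryCode
  """.as((str("name") ~ str("language")).map(flatten).*)
```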
Now we get this kind of result:
We can then use the Scala collection API to transform it to the expected result:
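For instance (a sketch; `SpokenLanguages` is an illustrative case class, reusing the `languages` function from the previous sketch):

```scala
case class SpokenLanguages(country: String, languages: Seq[String])

def spokenLanguages(countryCode: String)(implicit c: java.sql.Connection): Option[SpokenLanguages] = {
  val rows = languages(countryCode) // List[(String, String)]
  rows.headOption.map { case (country, _) =>
    SpokenLanguages(country, rows.map(_._2))
  }
}
```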
Finally, we get this convenient function:
To continue, let’s complicate our example to separate the official language from the others:
If you try this on the MySQL world sample database, you will get:
Handling optional/nullable values
If a column in the database can contain `Null` values, you need to parse it as an `Option` type.
For example, the `indepYear` of the `Country` table is nullable, so you need to match it as `Option[Int]`:
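For example:

```scala
import anorm.SqlParser.{ get, str }

case class Info(name: String, year: Option[Int])

val infoParser = str("name") ~ get[Option[Int]]("indepYear") map {
  case name ~ year => Info(name, year)
}
```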
If you try to match this column as `Int` it won’t be able to parse `Null` values. Suppose you try to retrieve the column content as `Int` directly from the dictionary:
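For instance:

```scala
// Will fail as soon as indepYear is NULL
val brokenParser = str("name") ~ int("indepYear") map { case n ~ y => (n, y) }
```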
This will produce an `UnexpectedNullableFound(COUNTRY.INDEPYEAR)` exception if it encounters a null value, so you need to map it properly to an `Option[Int]`.
A nullable parameter is also passed as `Option[T]`, `T` being the parameter base type (see the Parameters section thereafter).

Passing `None` directly for a NULL value is not supported, as it is inferred as `Option[Nothing]` (`Nothing` being unsafe for a parameter value). In this case, `Option.empty[T]` must be used.
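For example:

```scala
// Bind a NULL indepYear: use Option.empty[Int], not None
SQL("UPDATE Country SET indepYear = {year} WHERE code = {code}")
  .on("year" -> Option.empty[Int], "code" -> "FRA").executeUpdate()
```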
Using Pattern Matching
You can also use Pattern Matching to match and extract the `Row` content. In this case the column names don’t matter; only the order and the types of the parameters are used to match. The following example transforms each row to the correct Scala type:
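A sketch of such row pattern matching (illustrative; only positions and types matter here, and an implicit connection is assumed):

```scala
import anorm._

val langParser = RowParser[(String, Option[String])] {
  case Row(country: String, Some(language: String)) => Success(country -> Some(language))
  case Row(country: String, _)                      => Success(country -> None)
}

val result: List[(String, Option[String])] =
  SQL"""
    SELECT c.name, l.language FROM Country c
    LEFT JOIN CountryLanguage l ON l.CountryCode = c.Code
  """.as(langParser.*)
```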
Using for-comprehension
A row parser can be defined as a for-comprehension, working with the SQL result type. It can be useful when working with lots of columns, possibly to work around the case class arity limit.
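For example:

```scala
import anorm.RowParser
import anorm.SqlParser.{ int, str }

val forParser: RowParser[(String, Int)] = for {
  name <- str("name")
  population <- int("population")
} yield (name, population)
```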
Streaming results
Query results can be processed row by row, rather than all being loaded in memory.
In the following example we will count the number of country rows.
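A sketch (assuming the `fold` overload taking an initial value and a `ColumnAliaser`, with an implicit connection in scope):

```scala
val countryCount: Either[List[Throwable], Long] =
  SQL"SELECT * FROM Country".fold(0L, ColumnAliaser.empty) { (count, _) => count + 1 }
```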
In the previous example, the result is either the successful `Long` count (right), or the list of errors (left).
Result can also be partially processed:
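For instance, stopping after the first 100 rows (assuming the `foldWhile` variant):

```scala
val first100: Either[List[Throwable], List[String]] =
  SQL"SELECT name FROM Country".foldWhile(List.empty[String], ColumnAliaser.empty) { (names, row) =>
    if (names.size == 100) (names, false) // stop processing
    else (names :+ row[String]("name"), true)
  }
```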
It’s possible to use a custom streaming:
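A sketch using `withResult` and the `Cursor` type:

```scala
import anorm._

@annotation.tailrec
def go(c: Option[Cursor], names: List[String]): List[String] = c match {
  case Some(cursor) => go(cursor.next, names :+ cursor.row[String]("name"))
  case _            => names
}

val names: Either[List[Throwable], List[String]] =
  SQL"SELECT name FROM Country".withResult(go(_, List.empty))
```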
The parsing API can be used with streaming, by applying a `RowParser` on each cursor `.row`. The previous example can be updated with a row parser.
Akka Stream
The query result from Anorm can be processed as a `Source` with Akka Stream.
To do so, the Anorm Akka module must be used.
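A sketch of the build configuration (the artifact and version values are illustrative; use the ones matching your Anorm and Akka setup):

```scala
val anormVersion = "2.6.10" // illustrative
val akkaVersion = "2.5.32"  // illustrative

libraryDependencies ++= Seq(
  "org.playframework.anorm" %% "anorm-akka" % anormVersion,
  "com.typesafe.akka" %% "akka-stream" % akkaVersion
)
```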
This module is tested with Akka Stream 2.4.12.
Once this library is available, the query can be used as a streaming source.
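A sketch (assuming the module's `AkkaStream.source` factory):

```scala
import java.sql.Connection
import scala.concurrent.Future
import akka.stream.scaladsl.Source
import anorm._

def countryNames(implicit con: Connection): Source[String, Future[Int]] =
  AkkaStream.source(SQL"SELECT name FROM Country", SqlParser.scalar[String], ColumnAliaser.empty)
```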
It materializes a `Future` containing either the number of rows read from the source if successful, or the exception if row parsing failed. This could be useful to actually close the connection afterwards.
Iteratee
It’s possible to use Anorm along with Play Iteratees, using the following dependencies.
For a Play application, as `play-iteratees` is provided, there is no need to add this dependency.

Since Scala 2.13, `play-iteratees` is no longer available.
Then the parsed results from Anorm can be turned into an `Enumerator`.
Retrieving data along with execution context
Beyond the data themselves, query execution involves context information, like SQL warnings that may be raised (and may be fatal or not), especially when working with stored SQL procedures. The way to get such context information along with the query data is to use `executeQuery()`:
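A sketch (the stored procedure call is illustrative; `statementWarning` gives access to any SQL warning raised):

```scala
import anorm._

val res: SqlQueryResult =
  SQL("EXEC stored_proc {code}").on("code" -> "FRA").executeQuery()

// Check for a SQL warning raised by the execution
val warning: Option[java.sql.SQLWarning] = res.statementWarning
warning.foreach(w => println(s"Warning: ${w.getMessage}"))
```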
JDBC mappings
As already seen in this documentation, Anorm provides built-in converters between JDBC and JVM types.
Also see the additional modules:
Column parsers
The following table describes which JDBC numeric types (getters on `java.sql.ResultSet`, first column) can be parsed to which Java/Scala types (e.g. an integer column can be read as a double value).
↓JDBC / JVM➞ | BigDecimal1 | BigInteger2 | Boolean | Byte | Double | Float | Int | Long | Short |
---|---|---|---|---|---|---|---|---|---|
BigDecimal1 | Yes | Yes | No | No | Yes | No | Yes | Yes | No |
BigInteger2 | Yes | Yes | No | No | Yes | Yes | Yes | Yes | No |
Boolean | No | No | Yes | Yes | No | No | Yes | Yes | Yes |
Byte | Yes | Yes | No | Yes | Yes | Yes | Yes | Yes | Yes |
Double | Yes | No | No | No | Yes | No | No | No | No |
Float | Yes | No | No | No | Yes | Yes | No | No | No |
Int | Yes | Yes | No | No | Yes | Yes | Yes | Yes | No |
Long | Yes | Yes | No | No | No | No | Yes | Yes | No |
Short | Yes | Yes | No | Yes | Yes | Yes | Yes | Yes | Yes |
1. Types `java.math.BigDecimal` and `scala.math.BigDecimal`.
2. Types `java.math.BigInteger` and `scala.math.BigInt`.
The second table shows mappings for the other supported types.
↓JDBC / JVM➞ | Array[T]3 | Char | List3 | String | UUID4 |
---|---|---|---|---|---|
Array5 | Yes | No | Yes | No | No |
Clob | No | Yes | No | Yes | No |
Iterable6 | Yes | No | Yes | No | No |
Long | No | No | No | No | No |
String | No | Yes | No | Yes | Yes |
UUID | No | No | No | No | Yes |
3. Array whose element type `T` is supported.
4. Type `java.util.UUID`.
5. Type `java.sql.Array`.
6. Type `java.lang.Iterable[_]`.
An optional column can be parsed as `Option[T]`, as soon as `T` is supported.
Binary data types are also supported.
↓JDBC / JVM➞ | Array[Byte] | InputStream1 |
---|---|---|
Array[Byte] | Yes | Yes |
Blob2 | Yes | Yes |
Clob3 | No | No |
InputStream1 | Yes | Yes |
Reader4 | No | No |

1. Type `java.io.InputStream`.
2. Type `java.sql.Blob`.
3. Type `java.sql.Clob`.
4. Type `java.io.Reader`.
CLOBs/TEXTs can be extracted as follows:
Here we specifically chose to use `map`, as we want an exception if the row isn’t in the format we expect.
Extracting binary data is similarly possible:
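For example (a sketch assuming an illustrative `files` table with a binary `data` column):

```scala
val bytes: Array[Byte] =
  SQL("SELECT data FROM files WHERE id = {id}").on("id" -> 1)
    .as(SqlParser.byteArray("data").single)
```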
For types where column support is provided by Anorm, convenient functions are available to ease writing custom parsers. Each of these functions parses a column either by name or by index (1-based).
Type | Function |
---|---|
Array[Byte] | byteArray |
Boolean | bool |
Byte | byte |
Date | date |
Double | double |
Float | float |
InputStream1 | binaryStream |
Int | int |
Long | long |
Short | short |
String | str |
1. Type `java.io.InputStream`.
The Joda and Java 8 temporal types are also supported.
↓JDBC / JVM➞ | Date1 | DateTime2 | Instant3 | Long |
---|---|---|---|---|
Date | Yes | Yes | Yes | Yes |
Long | Yes | Yes | Yes | Yes |
Timestamp | Yes | Yes | Yes | Yes |
Timestamp wrapper4 | Yes | Yes | Yes | Yes |
1. Types `java.util.Date`, `org.joda.time.LocalDate` and `java.time.LocalDate`.
2. Types `org.joda.time.DateTime`, `org.joda.time.LocalDateTime`, `java.time.LocalDateTime` and `java.time.ZonedDateTime`.
3. Types `org.joda.time.Instant` and `java.time.Instant` (see Java 8).
4. Any type with a getter `getTimestamp` returning a `java.sql.Timestamp`.
It’s possible to add a custom mapping, for example if the underlying DB doesn’t support the boolean datatype and returns an integer instead. To do so, you have to provide a new implicit conversion for `Column[T]`, where `T` is the target Scala type:
Parameters bindings
The following table indicates how JVM types are mapped to JDBC parameter types:
JVM | JDBC | Nullable |
---|---|---|
Array[T]1 | Array2 with T mapping for each element | Yes |
BigDecimal3 | BigDecimal | Yes |
BigInteger4 | BigDecimal | Yes |
Boolean5 | Boolean | Yes |
Byte6 | Byte | Yes |
Char7/String | String | Yes |
Date/Timestamp8 | Timestamp | Yes |
Double9 | Double | Yes |
Float10 | Float | Yes |
Int11 | Int | Yes |
List[T] | Multi-value14, with T mapping for each element | No |
Long12 | Long | Yes |
Object13 | Object | Yes |
Option[T] | T being type if some defined value | No |
Seq[T] | Multi-value, with T mapping for each element | No |
Set[T]15 | Multi-value, with T mapping for each element | No |
Short16 | Short | Yes |
SortedSet[T]17 | Multi-value, with T mapping for each element | No |
Stream[T] | Multi-value, with T mapping for each element | No |
UUID | String18 | No |
Vector | Multi-value, with T mapping for each element | No |
1. Scala type `Array[T]`.
2. Type `java.sql.Array`.
3. Types `java.math.BigDecimal` and `scala.math.BigDecimal`.
4. Types `java.math.BigInteger` and `scala.math.BigInt`.
5. Types `Boolean` and `java.lang.Boolean`.
6. Types `Byte` and `java.lang.Byte`.
7. Types `Char` and `java.lang.Character`.
8. Types compatible with `java.util.Date`, and any wrapper type with `getTimestamp: java.sql.Timestamp`.
9. Types `Double` and `java.lang.Double`.
10. Types `Float` and `java.lang.Float`.
11. Types `Int` and `java.lang.Integer`.
12. Types `Long` and `java.lang.Long`.
13. Type `anorm.Object`, wrapping an opaque object.
14. Multi-value parameter, with one JDBC placeholder (`?`) added for each element.
15. Type `scala.collection.immutable.Set`.
16. Types `Short` and `java.lang.Short`.
17. Type `scala.collection.immutable.SortedSet`.
18. Not-null value extracted using `.toString`.
Passing `None` for a nullable parameter is deprecated, and the typesafe `Option.empty[T]` must be used instead.
Large and stream parameters are also supported.
JVM | JDBC |
---|---|
Array[Byte] | Long varbinary |
Blob1 | Blob |
InputStream2 | Long varbinary |
Reader3 | Long varchar |
1. Type `java.sql.Blob`.
2. Type `java.io.InputStream`.
3. Type `java.io.Reader`.
Joda and Java 8 temporal types are supported as parameters:
JVM | JDBC |
---|---|
DateTime1 | Timestamp |
Instant2 | Timestamp |
LocalDate3 | Timestamp |
LocalDateTime4 | Timestamp |
ZonedDateTime5 | Timestamp |
1. Type `org.joda.time.DateTime`.
2. Types `org.joda.time.Instant` and `java.time.Instant`.
3. Types `org.joda.time.LocalDate` and `java.time.LocalDate`.
4. Types `org.joda.time.LocalDateTime`, `org.joda.time.LocalDate` and `java.time.LocalDateTime`.
5. Type `java.time.ZonedDateTime`.
To enable Joda types as parameters, the import `anorm.JodaParameterMetaData._` must be used.
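For instance (a sketch with an illustrative table):

```scala
import org.joda.time.DateTime
import anorm._
import anorm.JodaParameterMetaData._ // enables DateTime as a parameter

SQL("UPDATE Event SET updated = {at} WHERE id = {id}")
  .on("at" -> DateTime.now(), "id" -> 1).executeUpdate()
```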
Troubleshooting
This section gathers some errors/warnings you can encounter when using Anorm.
`value SQL is not a member of StringContext`; this compilation error is raised when using the Anorm interpolation without the appropriate import. It can be fixed by adding the package import: `import anorm._`.
`type mismatch; found : T; required : anorm.ParameterValue`; this compilation error occurs when a value of type `T` is passed as a parameter, whereas this `T` type is not supported. You need to ensure that an `anorm.ToStatement[T]` and an `anorm.ParameterMetaData[T]` can be found in the implicit scope (see parameter conversions).
On `.executeInsert()`, you can get the error `TypeDoesNotMatch(Cannot convert <value>: class <T> to Long for column ColumnName(<C>))`. This occurs when the key returned by the database on insertion is not compatible with `Long` (the default key parser). It can be fixed by providing the appropriate key parser; e.g. if the database returns a text key: `SQL"...".executeInsert(scalar[String].singleOpt)` (to get an `Option[String]` as the insertion key).
Edge cases
The type of a parameter should be visible, so it can be properly set on the SQL statement. Using a value as `Any`, explicitly or due to erasure, leads to the compilation error `No implicit view available from Any => anorm.ParameterValue`.
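For instance:

```scala
val untyped: Any = "strAsAny"
// Does not compile: No implicit view available from Any => anorm.ParameterValue
// SQL("SELECT * FROM Country WHERE code = {c}").on("c" -> untyped)

val typed: String = "FRA"
SQL("SELECT * FROM Country WHERE code = {c}").on("c" -> typed) // OK
```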
In some cases, some JDBC drivers return a result set positioned on the first row rather than before it (e.g. stored procedures with the Oracle JDBC driver). To handle such an edge case, `.withResultSetOnFirstRow(true)` can be used as follows.