Database access with ScalikeJDBC
2016-02-29
In the previous part of this series, we added Scaldi to our sample application. This time we are going to start using a database access library called ScalikeJDBC.
Introduction
My examples are based on the blog-ecosystem-intro
branch of the main book repo, so if you want to follow, just grab it here:
$ git clone https://github.com/denisftw/modern-web-scala .
$ git checkout blog-ecosystem-intro
Generally, if you need to access your database from Scala, there are many libraries available, with Slick and ScalikeJDBC among the most popular. I personally found Slick overly sophisticated, but it is a good library with many fans and you should check it out. As for ScalikeJDBC, I use it by default in all my projects, because it offers a great balance between flexibility and reliability while staying extremely lightweight.
Installation
The ScalikeJDBC official website says that in order to start you need the following:
- the library itself
- the logback-classic implementation for slf4j
- a JDBC driver for your database
I would also suggest adding scalikejdbc-config, because it makes configuration so much easier. Let's add all these dependencies to the build.sbt file:
libraryDependencies ++= Seq(
  // ...
  "org.scalikejdbc" %% "scalikejdbc" % "2.3.5",
  "org.scalikejdbc" %% "scalikejdbc-config" % "2.3.5",
  "com.h2database" % "h2" % "1.4.191",
  "ch.qos.logback" % "logback-classic" % "1.1.3"
)
We should also specify connection properties in the application.conf file:
db.default.driver="org.h2.Driver"
db.default.url="jdbc:h2:file:./db/default"
db.default.user="sa"
db.default.password=""
db.default.poolInitialSize=10
db.default.poolMaxSize=20
db.default.connectionTimeoutMillis=1000
Finally, we need to configure ScalikeJDBC. Since we're using scalikejdbc-config, the initialization is extremely simple. We only need to invoke DBs.setupAll() when the app starts and DBs.closeAll() when it stops. Because our application life cycle is already managed by Scaldi, we just need to add the following to the Initializer class:
import scalikejdbc.config.DBs

class Initializer {
  def init() = {
    DBs.setupAll()
  }

  def stop() = {
    DBs.closeAll()
  }
}
It looks very simple, but that's really all there is to it, and now we are ready to work with the database.
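For completeness, the Scaldi binding that drives this life cycle might look roughly like the sketch below. The module name AppModule is my own placeholder, not code from the book; the important part is the initWith/destroyWith hooks, which Scaldi invokes on startup and shutdown respectively.

import scaldi.Module

// A sketch of the wiring (AppModule is a made-up name): Scaldi calls initWith
// when the binding is created and destroyWith when the injector shuts down,
// so DBs.setupAll() and DBs.closeAll() run at the right moments.
class AppModule extends Module {
  bind [Initializer] toNonLazy new Initializer initWith (_.init()) destroyWith (_.stop())
}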
Working with data
We need some real data, so let's create a table called users and add one record. It's better to let Play handle schema evolutions than to do it ourselves, so let's put the schema changes into the 1.sql file inside the conf/evolutions/default directory:
# --- !Ups
create table users(
  user_id uuid primary key,
  user_code varchar(255) not null,
  first_name varchar(255) null,
  last_name varchar(255) null,
  created timestamp not null
);
insert into users values(RANDOM_UUID(), 'joeb', 'Joe', 'Black', CURRENT_TIMESTAMP());
# --- !Downs
drop table users;
Since we want to use Play evolutions, we need to activate the corresponding module in build.sbt:
libraryDependencies ++= Seq(
  jdbc,
  cache,
  evolutions,
  ws,
  // ...
)
And if we want to make Play apply the evolutions automatically, we need to add this to application.conf:
play.evolutions.autoApply=true
Now, in our app we want to deal with Scala objects rather than tabular data, so let's create a case class User that will represent a database record:
import java.util.UUID
import org.joda.time.DateTime

case class User(userId: UUID, userCode: String, firstName: Option[String],
                lastName: Option[String], created: DateTime)
Note that since we declared both first_name and last_name as nullable columns, we need to reflect this by making the corresponding fields Options. ScalikeJDBC will map database NULLs to Scala Nones automatically; we just need to use the right method. Let's also create a companion object and define a method that will be responsible for the actual mapping:
import scalikejdbc.WrappedResultSet

object User {
  def fromDb(rs: WrappedResultSet): User = {
    val idStr = rs.string("user_id")
    val userCode = rs.string("user_code")
    val firstName = rs.stringOpt("first_name")
    val lastName = rs.stringOpt("last_name")
    val created = rs.jodaDateTime("created")
    User(UUID.fromString(idStr), userCode, firstName, lastName, created)
  }
}
Note that we're using rs.string() for getting non-nullable strings and rs.stringOpt() for getting nullable ones. ScalikeJDBC also supports JodaTime out of the box and provides corresponding methods for convenient mappings.
Finally, we need to define a service that we can use to retrieve user records from the database:
import scalikejdbc._

class UserService {
  def allUsers: Seq[User] = DB.readOnly { implicit session =>
    sql"select * from users".map(User.fromDb).list().apply()
  }
}
The allUsers method demonstrates a typical usage of ScalikeJDBC's SQLInterpolation. We write our query and then map the result to the domain class. After that, we use list() to reflect the fact that we expect a sequence of records, and finally we call apply(), which does the actual work.
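To see it in action, a throwaway snippet like the one below is enough (in the real application the service would be injected via Scaldi rather than instantiated by hand):

// Illustrative usage only; assumes DBs.setupAll() has already been called
val userService = new UserService
val users: Seq[User] = userService.allUsers
users.foreach { user =>
  println(s"${user.userCode}: ${user.firstName.getOrElse("")} ${user.lastName.getOrElse("")}")
}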
Note that the select query looks like a string, but behind the scenes it is converted into a PreparedStatement, and the interpolated values are passed as bound parameters. This means that we can pass string arguments to the query without introducing any security risks:
class UserService {
  // ...
  def findUser(userCode: String): Option[User] = DB.readOnly { implicit session =>
    sql"select * from users where user_code = $userCode limit 1".
      map(User.fromDb).headOption().apply()
  }
}
Here we're using headOption() instead of list() to reflect that we expect at most one result.
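As a quick illustration, looking up the record inserted by the evolution could look like this (again, just a sketch):

// 'joeb' is the user_code inserted by the 1.sql evolution
val maybeJoe: Option[User] = new UserService().findUser("joeb")
maybeJoe match {
  case Some(user) => println(s"Found ${user.userCode}, created at ${user.created}")
  case None       => println("No such user")
}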
Conclusion
I hope now you understand why ScalikeJDBC is such a great library. In addition to SQLInterpolation, it also provides a query DSL, which is more type-safe. If you want to know more about using ScalikeJDBC as well as many other tools for building real Web applications, check out my book here.
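To give you a taste of that query DSL, here is what the findUser query could look like when written with it. This is only a sketch based on the standard SQLSyntaxSupport pattern; UserRecord and findByCode are names I made up for the example, not code from the book.

import java.util.UUID
import scalikejdbc._

// Hypothetical QueryDSL variant of findUser (UserRecord/findByCode are illustrative names)
object UserRecord extends SQLSyntaxSupport[User] {
  override val tableName = "users"

  // Map a row using the generated result-name aliases instead of raw column names
  def apply(u: ResultName[User])(rs: WrappedResultSet): User = User(
    userId = UUID.fromString(rs.get(u.userId)),
    userCode = rs.get(u.userCode),
    firstName = rs.get(u.firstName),
    lastName = rs.get(u.lastName),
    created = rs.get(u.created)
  )

  def findByCode(userCode: String): Option[User] = DB.readOnly { implicit session =>
    val u = UserRecord.syntax("u")
    withSQL {
      select.from(UserRecord as u).where.eq(u.userCode, userCode)
    }.map(UserRecord(u.resultName)).headOption().apply()
  }
}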