Latest release: May 5, 2017 • version 0.107.0 • CHANGELOG
Requirements: iOS 8.0+ / OSX 10.9+ / watchOS 2.0+ • Xcode 8.1+ • Swift 3
- Swift 2.2: use version 0.80.2
- Swift 2.3: use version 0.81.2
Follow @groue on Twitter for release announcements and usage tips.
GRDB provides raw access to SQL and advanced SQLite features, because one sometimes enjoys a sharp tool. It has robust concurrency primitives, so that multi-threaded applications can efficiently use their databases. It provides your application models with persistence and fetching methods, so that you don't have to deal with SQL and raw database rows when you don't want to.
Compared to SQLite.swift or FMDB, GRDB can spare you a lot of glue code. Compared to Core Data or Realm, it can simplify your multi-threaded applications.
It comes with up-to-date documentation, general articles, sample code, and a lot of interesting resolved issues that may answer your questions and foster best practices.
Features • Usage • Installation • Documentation • FAQ
GRDB ships with a low-level SQLite API, and high-level tools that make working with databases easier:
- Records: fetching and persistence methods for your custom structs and class hierarchies
- Query Interface: a Swift way to avoid the SQL language
- WAL Mode Support: that means extra performance for multi-threaded applications
- Migrations: transform your database as your application evolves
- Database Changes Observation: perform post-commit and post-rollback actions
- Fetched Records Controller: automated tracking of changes in query results, and UITableView animations
- Full-Text Search: Perform efficient and customizable full-text searches.
- Encryption with SQLCipher
- Support for custom SQLite builds
- Reactive extensions for RxSwift
More than a set of tools that leverage SQLite abilities, GRDB is also:
- Safer: read the blog post Four different ways to handle SQLite concurrency
- Faster: see Comparing the Performances of Swift SQLite libraries
- Well documented & tested
For a general overview of how a protocol-oriented library impacts database accesses, have a look at How to build an iOS application with SQLite and GRDB.swift.
Open a connection to the database:
import GRDB
// Simple database connection
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")
// Enhanced multithreading based on SQLite's WAL mode
let dbPool = try DatabasePool(path: "/path/to/database.sqlite")
try dbQueue.inDatabase { db in
try db.execute(
"CREATE TABLE pointOfInterests (" +
"id INTEGER PRIMARY KEY, " +
"title TEXT NOT NULL, " +
"favorite BOOLEAN NOT NULL DEFAULT 0, " +
"latitude DOUBLE NOT NULL, " +
"longitude DOUBLE NOT NULL" +
")")
try db.execute(
"INSERT INTO pointOfInterests (title, favorite, latitude, longitude) " +
"VALUES (?, ?, ?, ?)",
arguments: ["Paris", true, 48.85341, 2.3488])
let parisId = db.lastInsertedRowID
}
Fetch database rows and values:
try dbQueue.inDatabase { db in
let rows = try Row.fetchCursor(db, "SELECT * FROM pointOfInterests")
while let row = try rows.next() {
let title: String = row.value(named: "title")
let isFavorite: Bool = row.value(named: "favorite")
let coordinate = CLLocationCoordinate2DMake(
row.value(named: "latitude"),
row.value(named: "longitude"))
}
let poiCount = try Int.fetchOne(db, "SELECT COUNT(*) FROM pointOfInterests")! // Int
let poiTitles = try String.fetchAll(db, "SELECT title FROM pointOfInterests") // [String]
}
// Extraction
let poiCount = try dbQueue.inDatabase { db in
try Int.fetchOne(db, "SELECT COUNT(*) FROM pointOfInterests")!
}
Insert and fetch records:
struct PointOfInterest {
var id: Int64?
var title: String
var isFavorite: Bool
var coordinate: CLLocationCoordinate2D
}
// snip: turn PointOfInterest into a "record" by adopting the protocols that
// provide fetching and persistence methods.
try dbQueue.inDatabase { db in
var berlin = PointOfInterest(
id: nil,
title: "Berlin",
isFavorite: false,
coordinate: CLLocationCoordinate2DMake(52.52437, 13.41053))
try berlin.insert(db)
berlin.id // some value
berlin.isFavorite = true
try berlin.update(db)
// Fetch [PointOfInterest] from SQL
let pois = try PointOfInterest.fetchAll(db, "SELECT * FROM pointOfInterests")
}
Avoid SQL with the query interface:
try dbQueue.inDatabase { db in
try db.create(table: "pointOfInterests") { t in
t.column("id", .integer).primaryKey()
t.column("title", .text).notNull()
t.column("favorite", .boolean).notNull().defaults(to: false)
t.column("longitude", .double).notNull()
t.column("latitude", .double).notNull()
}
// PointOfInterest?
let paris = try PointOfInterest.fetchOne(db, key: 1)
// PointOfInterest?
let titleColumn = Column("title")
let berlin = try PointOfInterest.filter(titleColumn == "Berlin").fetchOne(db)
// [PointOfInterest]
let favoriteColumn = Column("favorite")
let favoritePois = try PointOfInterest
.filter(favoriteColumn)
.order(titleColumn)
.fetchAll(db)
}
GRDB runs on top of SQLite: you should get familiar with the SQLite FAQ. For general and detailed information, jump to the SQLite Documentation.
Reference
- GRDB Reference (generated by Jazzy)
Getting Started
- Installation
- Database Connections: Connect to SQLite databases
SQLite and SQL
- SQLite API: The low-level SQLite API • executing updates • fetch queries
Records and the Query Interface
- Records: Fetching and persistence methods for your custom structs and class hierarchies.
- Query Interface: A Swift way to generate SQL • table creation • requests
Application Tools
- Migrations: Transform your database as your application evolves.
- Full-Text Search: Perform efficient and customizable full-text searches.
- Database Changes Observation: Perform post-commit and post-rollback actions.
- FetchedRecordsController: Automatic database changes tracking, plus UITableView animations.
- Encryption: Encrypt your database with SQLCipher.
- Backup: Dump the content of a database to another.
- GRDB Extension Guide: When a feature is lacking, extend GRDB right from your application.
Good to Know
- Avoiding SQL Injection
- Error Handling
- Unicode
- Memory Management
- Data Protection
- Concurrency
- Performance
The installation procedures below have GRDB use the version of SQLite that ships with the target operating system.
See Encryption for the installation procedure of GRDB with SQLCipher.
See Custom SQLite builds for the installation procedure of GRDB with a customized build of SQLite 3.18.0.
CocoaPods is a dependency manager for Xcode projects. To use GRDB.swift with CocoaPods (version 1.1 or higher), specify in your Podfile:
use_frameworks!
pod 'GRDB.swift'
The Swift Package Manager automates the distribution of Swift code. To use GRDB.swift with SPM, add a dependency to your Package.swift file:
let package = Package(
...
dependencies: [
.Package(url: "https://github.com/groue/GRDB.swift.git", majorVersion: 0)
]
)
Note that Linux is not currently supported.
Carthage does not support the variety of frameworks built by GRDB (standard SQLite, custom SQLite, SQLCipher).
Any pull request that has the make test_CarthageBuild command successfully complete will be greatly appreciated, though. Bring your local Xcode guru!
1. Download a copy of GRDB.swift, or clone its repository and make sure you use the latest tagged version with the git checkout v0.107.0 command.
2. Embed the GRDB.xcodeproj project in your own project.
3. Add the GRDBOSX, GRDBiOS, or GRDBWatchOS target in the Target Dependencies section of the Build Phases tab of your application target (extension target for WatchOS).
4. Add the GRDB.framework from the targeted platform to the Embedded Binaries section of the General tab of your application target (extension target for WatchOS).
5. (WatchOS only) Add libsqlite3.tbd to the Linked Frameworks and Libraries section of the General tab of your extension target.
See GRDBDemoiOS for an example of such integration.
GRDB provides two classes for accessing SQLite databases: DatabaseQueue and DatabasePool:
import GRDB
// Pick one:
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")
let dbPool = try DatabasePool(path: "/path/to/database.sqlite")
The differences are:
- Database pools allow concurrent database accesses (this can improve the performance of multithreaded applications).
- Unless read-only, database pools open your SQLite database in the WAL mode.
- Database queues support in-memory databases.
If you are not sure, choose DatabaseQueue. You will always be able to switch to DatabasePool later.
Open a database queue with the path to a database file:
import GRDB
let dbQueue = try DatabaseQueue(path: "/path/to/database.sqlite")
let inMemoryDBQueue = DatabaseQueue()
SQLite creates the database file if it does not already exist. The connection is closed when the database queue gets deallocated.
A database queue can be used from any thread. The inDatabase and inTransaction methods are synchronous, and block the current thread until your database statements are executed in a protected dispatch queue. They safely serialize the database accesses:
// Execute database statements:
try dbQueue.inDatabase { db in
try db.create(table: "pointOfInterests") { ... }
try PointOfInterest(...).insert(db)
}
// Wrap database statements in a transaction:
try dbQueue.inTransaction { db in
if let poi = try PointOfInterest.fetchOne(db, key: 1) {
try poi.delete(db)
}
return .commit
}
// Read values:
try dbQueue.inDatabase { db in
let pois = try PointOfInterest.fetchAll(db)
let poiCount = try PointOfInterest.fetchCount(db)
}
// Extract a value from the database:
let poiCount = try dbQueue.inDatabase { db in
try PointOfInterest.fetchCount(db)
}
A database queue needs your application to follow rules in order to deliver its safety guarantees. Please refer to the Concurrency chapter.
See DemoApps/GRDBDemoiOS/Database.swift for sample code that sets up a database queue on iOS.
var config = Configuration()
config.readonly = true
config.foreignKeysEnabled = true // Default is already true
config.trace = { print($0) } // Prints all SQL statements
let dbQueue = try DatabaseQueue(
path: "/path/to/database.sqlite",
configuration: config)
See Configuration for more details.
A Database Pool allows concurrent database accesses.
While more efficient than database queues, database pools also require a good mastery of database transactions. Details follow. If you don't feel comfortable with transactions, use a database queue instead.
import GRDB
let dbPool = try DatabasePool(path: "/path/to/database.sqlite")
SQLite creates the database file if it does not already exist. The connection is closed when the database pool gets deallocated.
☝️ Note: unless read-only, a database pool opens your database in the SQLite "WAL mode". The WAL mode does not fit all situations. Please have a look at https://www.sqlite.org/wal.html.
A database pool can be used from any thread. The read, write and writeInTransaction methods are synchronous, and block the current thread until your database statements are executed in a protected dispatch queue. They safely isolate the database accesses:
// Execute database statements:
try dbPool.write { db in
try db.create(table: "pointOfInterests") { ... }
try PointOfInterest(...).insert(db)
}
// Wrap database statements in a transaction:
try dbPool.writeInTransaction { db in
if let poi = try PointOfInterest.fetchOne(db, key: 1) {
try poi.delete(db)
}
return .commit
}
// Read values:
try dbPool.read { db in
let pois = try PointOfInterest.fetchAll(db)
let poiCount = try PointOfInterest.fetchCount(db)
}
// Extract a value from the database:
let poiCount = try dbPool.read { db in
try PointOfInterest.fetchCount(db)
}
Database pools allow several threads to access the database at the same time:
- When you don't need to modify the database, prefer the read method, because several threads can perform reads in parallel.
  The total number of concurrent reads is limited. When the maximum number has been reached, a read waits for another read to complete. That maximum number can be configured.
- Conversely, writes are serialized. They don't block reads, but GRDB guarantees read closures an immutable view of the last committed state of the database.
  ☝️ To provide read closures an immutable view of the last executed writing block as a whole, use writeInTransaction instead of write.
A database pool needs your application to follow rules in order to deliver its safety guarantees. Please refer to the Concurrency chapter.
See Advanced DatabasePool for more DatabasePool hotness.
For sample code that sets up a database pool on iOS, see DemoApps/GRDBDemoiOS/Database.swift, and replace DatabaseQueue with DatabasePool.
var config = Configuration()
config.readonly = true
config.foreignKeysEnabled = true // Default is already true
config.trace = { print($0) } // Prints all SQL statements
config.maximumReaderCount = 10 // The default is 5
let dbPool = try DatabasePool(
path: "/path/to/database.sqlite",
configuration: config)
See Configuration for more details.
Database pools are more memory-hungry than database queues. See Memory Management for more information.
In this section of the documentation, we will talk SQL. Jump to the query interface if SQL is not your cup of tea.
Advanced topics:
- Custom Value Types
- Prepared Statements
- Custom SQL Functions
- Database Schema Introspection
- Row Adapters
- Raw SQLite Pointers
Once granted with a database connection, the execute method executes the SQL statements that do not return any database row, such as CREATE TABLE, INSERT, DELETE, ALTER, etc.
For example:
try dbQueue.inDatabase { db in
try db.execute(
"CREATE TABLE persons (" +
"id INTEGER PRIMARY KEY," +
"name TEXT NOT NULL," +
"age INT" +
")")
try db.execute(
"INSERT INTO persons (name, age) VALUES (:name, :age)",
arguments: ["name": "Barbara", "age": 39])
// Join multiple statements with a semicolon:
try db.execute(
"INSERT INTO persons (name, age) VALUES (?, ?); " +
"INSERT INTO persons (name, age) VALUES (?, ?)",
arguments: ["Arthur", 36, "Barbara", 39])
}
The ? and colon-prefixed keys like :name in the SQL query are the statement arguments. You pass arguments with arrays or dictionaries, as in the example above. See Values for more information on supported argument types (Bool, Int, String, Date, Swift enums, etc.).
Never ever embed values directly in your SQL strings, and always use arguments instead. See Avoiding SQL Injection for more information.
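For example, here is a minimal sketch, reusing the persons table from the example above: the interpolated version breaks as soon as the value contains a quote, while the argument-based version lets SQLite handle the escaping.
// BAD: the value becomes part of the SQL string itself
let name = "O'Brien"
try db.execute("INSERT INTO persons (name) VALUES ('\(name)')") // malformed SQL, or worse
// GOOD: the value is passed as a statement argument
try db.execute("INSERT INTO persons (name) VALUES (?)", arguments: [name])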
After an INSERT statement, you can get the row ID of the inserted row:
try db.execute(
"INSERT INTO persons (name, age) VALUES (?, ?)",
arguments: ["Arthur", 36])
let personId = db.lastInsertedRowID
Don't miss Records, that provide classic persistence methods:
let person = Person(name: "Arthur", age: 36)
try person.insert(db)
let personId = person.id
Database connections let you fetch database rows, plain values, and custom models aka "records".
Rows are the raw results of SQL queries:
try dbQueue.inDatabase { db in
if let row = try Row.fetchOne(db, "SELECT * FROM wines WHERE id = ?", arguments: [1]) {
let name: String = row.value(named: "name")
let color: Color = row.value(named: "color")
print(name, color)
}
}
Values are the Bool, Int, String, Date, Swift enums, etc. stored in row columns:
try dbQueue.inDatabase { db in
let urls = try URL.fetchCursor(db, "SELECT url FROM wines")
while let url = try urls.next() {
print(url)
}
}
Records are your application objects that can initialize themselves from rows:
let wines = try dbQueue.inDatabase { db in
try Wine.fetchAll(db, "SELECT * FROM wines")
}
Throughout GRDB, you can always fetch cursors, arrays, or single values of any fetchable type (database row, simple value, or custom record):
try Type.fetchCursor(...) // DatabaseCursor<Type>
try Type.fetchAll(...) // [Type]
try Type.fetchOne(...) // Type?
- fetchCursor returns a cursor over fetched values:
  let rows = try Row.fetchCursor(db, "SELECT ...") // DatabaseCursor<Row>
- fetchAll returns an array:
  let persons = try Person.fetchAll(db, "SELECT ...") // [Person]
- fetchOne returns a single optional value, and consumes a single database row (if any):
  let count = try Int.fetchOne(db, "SELECT COUNT(*) ...") // Int?
Whenever you consume several rows from the database, you can fetch a Cursor, or an Array.
Arrays contain copies of database values and may be consumed on any thread. But they can take a lot of memory. Conversely, cursors iterate over database results in a lazy fashion, don't consume much memory, and are generally more efficient. But they must be consumed in a protected dispatch queue:
let rows = try Row.fetchAll(db, "SELECT ...") // [Row]
let rows = try Row.fetchCursor(db, "SELECT ...") // DatabaseCursor<Row>
A common way to iterate over the elements of a cursor is to use a while loop:
let rows = try Row.fetchCursor(db, "SELECT ...")
while let row = try rows.next() {
let url: URL = row.value(named: "url")
print(url)
}
You can also use the forEach method:
try rows.forEach { row in
let url: URL = row.value(named: "url")
print(url)
}
Don't modify the fetched results during a cursor iteration:
// Undefined behavior
while let row = try rows.next() {
try db.execute("DELETE ...")
}
Cursors come with default implementations for many operations similar to those defined by lazy sequences of the Swift Standard Library: contains, enumerated, filter, first, flatMap, forEach, joined, map, reduce:
// Enumerate all Github links
try URL.fetchCursor(db, "SELECT url FROM links")
.filter { url in url.host == "github.com" }
.enumerated()
.forEach { (index, url) in ... }
Fetch cursors of rows, arrays, or single rows (see fetching methods):
try dbQueue.inDatabase { db in
try Row.fetchCursor(db, "SELECT ...", arguments: ...) // DatabaseCursor<Row>
try Row.fetchAll(db, "SELECT ...", arguments: ...) // [Row]
try Row.fetchOne(db, "SELECT ...", arguments: ...) // Row?
let rows = try Row.fetchCursor(db, "SELECT * FROM wines")
while let row = try rows.next() {
let name: String = row.value(named: "name")
let color: Color = row.value(named: "color")
print(name, color)
}
}
let rows = try dbQueue.inDatabase { db in
try Row.fetchAll(db, "SELECT * FROM persons")
}
Arguments are optional arrays or dictionaries that fill the positional ? and colon-prefixed keys like :name in the query:
let rows = try Row.fetchAll(db,
"SELECT * FROM persons WHERE name = ?",
arguments: ["Arthur"])
let rows = try Row.fetchAll(db,
"SELECT * FROM persons WHERE name = :name",
arguments: ["name": "Arthur"])
See Values for more information on supported argument types (Bool, Int, String, Date, Swift enums, etc.), and StatementArguments for detailed documentation of SQLite arguments.
Unlike row arrays that contain copies of the database rows, row cursors are close to the SQLite metal, and require a little care:
☝️ Don't turn a row cursor into an array, with Array(rowCursor) for example: you would not get the distinct rows you expect. To get a row array, use Row.fetchAll(...). Generally speaking, make sure you copy a row whenever you extract it from a cursor for later use: row.copy().
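For example, here is a minimal sketch that keeps fetched rows around by copying them out of the cursor (the favorite pointOfInterests query is just an illustration):
var favoriteRows: [Row] = []
let rows = try Row.fetchCursor(db, "SELECT * FROM pointOfInterests WHERE favorite")
while let row = try rows.next() {
    // The cursor reuses the same row: store an independent copy.
    favoriteRows.append(row.copy())
}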
Read column values by index or column name:
let name: String = row.value(atIndex: 0) // 0 is the leftmost column
let name: String = row.value(named: "name") // Leftmost matching column - lookup is case-insensitive
let name: String = row.value(Column("name")) // Using query interface's Column
Make sure to ask for an optional when the value may be NULL:
let name: String? = row.value(named: "name")
The value function returns the type you ask for. See Values for more information on supported value types:
let bookCount: Int = row.value(named: "bookCount")
let bookCount64: Int64 = row.value(named: "bookCount")
let hasBooks: Bool = row.value(named: "bookCount") // false when 0
let string: String = row.value(named: "date") // "2015-09-11 18:14:15.123"
let date: Date = row.value(named: "date") // Date
self.date = row.value(named: "date") // Depends on the type of the property.
You can also use the as type casting operator:
row.value(...) as Int
row.value(...) as Int?
⚠️ Warning: avoid the as! and as? operators, because they misbehave in the context of type inference (see rdar://21676393):
if let int = row.value(...) as? Int { ... } // BAD - doesn't work
if let int = row.value(...) as Int? { ... } // GOOD
Generally speaking, you can extract the type you need, provided it can be converted from the underlying SQLite value:
- Successful conversions include:
  - All numeric SQLite values to all numeric Swift types, and Bool (zero is the only false boolean).
  - Text SQLite values to Swift String.
  - Blob SQLite values to Foundation Data.
  See Values for more information on supported types (Bool, Int, String, Date, Swift enums, etc.)
- NULL returns nil.
  let row = try Row.fetchOne(db, "SELECT NULL")!
  row.value(atIndex: 0) as Int? // nil
  row.value(atIndex: 0) as Int // fatal error: could not convert NULL to Int.
  There is one exception, though: the DatabaseValue type:
  row.value(atIndex: 0) as DatabaseValue // DatabaseValue.null
- Missing columns return nil.
  let row = try Row.fetchOne(db, "SELECT 'foo' AS foo")!
  row.value(named: "missing") as String? // nil
  row.value(named: "missing") as String // fatal error: no such column: missing
  You can explicitly check for a column presence with the hasColumn method.
- Invalid conversions throw a fatal error.
  let row = try Row.fetchOne(db, "SELECT 'Mom’s birthday'")!
  row.value(atIndex: 0) as String // "Mom’s birthday"
  row.value(atIndex: 0) as Date? // fatal error: could not convert "Mom’s birthday" to Date.
  row.value(atIndex: 0) as Date // fatal error: could not convert "Mom’s birthday" to Date.
  This fatal error can be avoided with the DatabaseValueConvertible.fromDatabaseValue() method.
- SQLite has a weak type system, and provides convenience conversions that can turn Blob to String, String to Int, etc. GRDB will sometimes let those conversions go through:
  let rows = try Row.fetchCursor(db, "SELECT '20 small cigars'")
  while let row = try rows.next() {
      row.value(atIndex: 0) as Int // 20
  }
Don't freak out: those conversions did not prevent SQLite from becoming the immensely successful database engine you want to use. And GRDB adds safety checks described just above. You can also prevent those convenience conversions altogether by using the DatabaseValue type.
DatabaseValue is an intermediate type between SQLite and your values, which gives information about the raw value stored in the database.
You get DatabaseValue just like other value types:
let dbv: DatabaseValue = row.value(atIndex: 0)
let dbv: DatabaseValue = row.value(named: "name")
// Check for NULL:
dbv.isNull // Bool
// All the five storage classes supported by SQLite:
switch dbv.storage {
case .null: print("NULL")
case .int64(let int64): print("Int64: \(int64)")
case .double(let double): print("Double: \(double)")
case .string(let string): print("String: \(string)")
case .blob(let data): print("Data: \(data)")
}
You can extract regular values (Bool, Int, String, Date, Swift enums, etc.) from DatabaseValue with the DatabaseValueConvertible.fromDatabaseValue() method:
let dbv: DatabaseValue = row.value(named: "bookCount")
let bookCount = Int.fromDatabaseValue(dbv) // Int?
let bookCount64 = Int64.fromDatabaseValue(dbv) // Int64?
let hasBooks = Bool.fromDatabaseValue(dbv) // Bool?, false when 0
let dbv: DatabaseValue = row.value(named: "date")
let string = String.fromDatabaseValue(dbv) // "2015-09-11 18:14:15.123"
let date = Date.fromDatabaseValue(dbv) // Date?
fromDatabaseValue returns nil for invalid conversions:
let row = try Row.fetchOne(db, "SELECT 'Mom’s birthday'")!
let dbv: DatabaseValue = row.value(atIndex: 0)
let string = String.fromDatabaseValue(dbv) // "Mom’s birthday"
let int = Int.fromDatabaseValue(dbv) // nil
let date = Date.fromDatabaseValue(dbv) // nil
This is useful when you process untrusted databases. Compare:
let date: Date? = row.value(atIndex: 0) // fatal error: could not convert "Mom’s birthday" to Date.
let date = Date.fromDatabaseValue(row.value(atIndex: 0)) // nil
Row adopts the standard Collection protocol, and can be seen as a dictionary of DatabaseValue:
// All the (columnName, databaseValue) tuples, from left to right:
for (columnName, databaseValue) in row {
...
}
You can build rows from dictionaries (standard Swift dictionaries and NSDictionary). See Values for more information on supported types:
let row: Row = ["name": "foo", "date": nil]
let row = Row(["name": "foo", "date": nil])
let row = Row(/* [AnyHashable: Any] */) // nil if invalid dictionary
Yet rows are not real dictionaries: they are ordered, and may contain duplicate keys:
let row = try Row.fetchOne(db, "SELECT 1 AS foo, 2 AS foo")!
row.columnNames // ["foo", "foo"]
row.databaseValues // [1, 2]
row.value(named: "foo") // 1 (leftmost matching column)
for (columnName, databaseValue) in row { ... } // ("foo", 1), ("foo", 2)
Instead of rows, you can directly fetch values. Like rows, fetch them as cursors, arrays, or single values (see fetching methods). Values are extracted from the leftmost column of the SQL queries:
try dbQueue.inDatabase { db in
try Int.fetchCursor(db, "SELECT ...", arguments: ...) // DatabaseCursor<Int>
try Int.fetchAll(db, "SELECT ...", arguments: ...) // [Int]
try Int.fetchOne(db, "SELECT ...", arguments: ...) // Int?
// When database may contain NULL:
try Optional<Int>.fetchCursor(db, "SELECT ...", arguments: ...) // DatabaseCursor<Int?>
try Optional<Int>.fetchAll(db, "SELECT ...", arguments: ...) // [Int?]
}
let personCount = try dbQueue.inDatabase { db in
try Int.fetchOne(db, "SELECT COUNT(*) FROM persons")!
}
fetchOne returns an optional value which is nil in two cases: either the SELECT statement yielded no row, or one row with a NULL value.
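For example, assuming a persons table with a nullable email column:
// No row at all:
try String.fetchOne(db, "SELECT email FROM persons WHERE 0") // nil
// One row, whose value is NULL:
try String.fetchOne(db, "SELECT NULL") // nil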
There are many supported value types (Bool, Int, String, Date, Swift enums, etc.). See Values for more information:
let count = try Int.fetchOne(db, "SELECT COUNT(*) FROM persons")! // Int
let urls = try URL.fetchAll(db, "SELECT url FROM links") // [URL]
GRDB ships with built-in support for the following value types:
- Swift Standard Library: Bool, Double, Float, Int, Int32, Int64, String, Swift enums.
- Foundation: Data, Date, DateComponents, NSNull, NSNumber, NSString, URL, UUID.
- CoreGraphics: CGFloat.
- DatabaseValue, the type which gives information about the raw value stored in the database.
- Full-Text Patterns: FTS3Pattern and FTS5Pattern.
- Generally speaking, all types that adopt the DatabaseValueConvertible protocol.
Values can be used as statement arguments:
let url: URL = ...
let verified: Bool = ...
try db.execute(
"INSERT INTO links (url, verified) VALUES (?, ?)",
arguments: [url, verified])
Values can be extracted from rows:
let rows = try Row.fetchCursor(db, "SELECT * FROM links")
while let row = try rows.next() {
let url: URL = row.value(named: "url")
let verified: Bool = row.value(named: "verified")
}
Values can be directly fetched:
let urls = try URL.fetchAll(db, "SELECT url FROM links") // [URL]
Use values in Records:
class Link : Record {
var url: URL
var isVerified: Bool
required init(row: Row) {
url = row.value(named: "url")
isVerified = row.value(named: "verified")
super.init(row: row)
}
override var persistentDictionary: [String: DatabaseValueConvertible?] {
return ["url": url, "verified": isVerified]
}
}
Use values in the query interface:
let url: URL = ...
let link = try Link.filter(urlColumn == url).fetchOne(db)
Data suits the BLOB SQLite columns. It can be stored and fetched from the database just like other values:
let rows = try Row.fetchCursor(db, "SELECT data, ...")
while let row = try rows.next() {
let data: Data = row.value(named: "data")
}
At each step of the request iteration, the row.value method creates two copies of the database bytes: one fetched by SQLite, and another, stored in the Swift Data value.
You have the opportunity to save memory by not copying the data fetched by SQLite:
while let row = try rows.next() {
let data = row.dataNoCopy(named: "data") // Data?
}
The non-copied data does not live longer than the iteration step: make sure that you do not use it past this point.
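If you do need to keep a blob around after the iteration, take your own copy instead. A minimal sketch, relying on row.value which returns a Data value that owns its bytes:
var datas: [Data] = []
let rows = try Row.fetchCursor(db, "SELECT data, ...")
while let row = try rows.next() {
    let data: Data = row.value(named: "data") // copied, safe to keep
    datas.append(data)
}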
Date and DateComponents can be stored and fetched from the database.
Here is the support provided by GRDB for the various date formats supported by SQLite:
| SQLite format | Date | DateComponents |
|:--- |:---:|:---:|
| YYYY-MM-DD | Read ¹ | Read/Write |
| YYYY-MM-DD HH:MM | Read ¹ | Read/Write |
| YYYY-MM-DD HH:MM:SS | Read ¹ | Read/Write |
| YYYY-MM-DD HH:MM:SS.SSS | Read/Write ¹ | Read/Write |
| YYYY-MM-DDTHH:MM | Read ¹ | Read |
| YYYY-MM-DDTHH:MM:SS | Read ¹ | Read |
| YYYY-MM-DDTHH:MM:SS.SSS | Read ¹ | Read |
| HH:MM | | Read/Write |
| HH:MM:SS | | Read/Write |
| HH:MM:SS.SSS | | Read/Write |
| Julian Day Number | Read ² | |
| now | | |
¹ Dates are stored and read in the UTC time zone. Missing components are assumed to be zero.
² See https://en.wikipedia.org/wiki/Julian_day
Date can be stored and fetched from the database just like other values:
try db.execute(
"INSERT INTO persons (creationDate, ...) VALUES (?, ...)",
arguments: [Date(), ...])
let creationDate: Date = row.value(named: "creationDate")
Dates are stored using the format "YYYY-MM-DD HH:MM:SS.SSS" in the UTC time zone. It is precise to the millisecond.
☝️ Note: this format was chosen because it is the only format that is:
- Comparable (ORDER BY date works)
- Comparable with the SQLite keyword CURRENT_TIMESTAMP (WHERE date > CURRENT_TIMESTAMP works)
- Able to feed SQLite date & time functions
- Precise enough
Yet this format may not fit your needs. For example, you may want to store dates as timestamps. In this case, store and load Doubles instead of Date, and perform the required conversions.
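For example, here is a minimal sketch that stores Unix timestamps in a hypothetical creationTimestamp column:
// Store the number of seconds since 00:00:00 UTC on 1 January 1970:
try db.execute(
    "INSERT INTO persons (creationTimestamp) VALUES (?)",
    arguments: [Date().timeIntervalSince1970])
// Read it back and rebuild a Date:
if let timestamp = try Double.fetchOne(db, "SELECT creationTimestamp FROM persons") {
    let creationDate = Date(timeIntervalSince1970: timestamp)
    print(creationDate)
}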
DateComponents is indirectly supported, through the DatabaseDateComponents helper type.
DatabaseDateComponents reads date components from all date formats supported by SQLite, and stores them in the format of your choice, from HH:MM to YYYY-MM-DD HH:MM:SS.SSS.
DatabaseDateComponents can be stored and fetched from the database just like other values:
var components = DateComponents()
components.year = 1973
components.month = 9
components.day = 18
// Store "1973-09-18"
let dbComponents = DatabaseDateComponents(components, format: .YMD)
try db.execute(
"INSERT INTO persons (birthDate, ...) VALUES (?, ...)",
arguments: [dbComponents, ...])
// Read "1973-09-18"
let row = try Row.fetchOne(db, "SELECT birthDate ...")!
let dbComponents: DatabaseDateComponents = row.value(named: "birthDate")
dbComponents.format // .YMD (the actual format found in the database)
dbComponents.dateComponents // DateComponents
NSNumber can be stored and fetched from the database just like other values. Floating point NSNumbers are stored as Double; integer and boolean ones, as Int64. Integers that don't fit Int64 won't be stored: you'll get a fatal error instead. Be cautious when an NSNumber contains a UInt64, for example.
NSDecimalNumber deserves a longer discussion:
SQLite has no support for decimal numbers. Given the table below, SQLite will actually store integers or doubles:
CREATE TABLE transfers (
amount DECIMAL(10,5) -- will store integer or double, actually
)
This means that computations will not be exact:
try db.execute("INSERT INTO transfers (amount) VALUES (0.1)")
try db.execute("INSERT INTO transfers (amount) VALUES (0.2)")
let sum = try NSDecimalNumber.fetchOne(db, "SELECT SUM(amount) FROM transfers")!
// Yikes! 0.3000000000000000512
print(sum)
Don't blame SQLite or GRDB, and instead store your decimal numbers differently.
A classic technique is to store integers instead, since SQLite performs exact computations of integers. For example, don't store Euros, but store cents instead:
// Store
let amount = NSDecimalNumber(string: "0.1") // 0.1
let integerAmount = amount.multiplying(byPowerOf10: 2).int64Value // 100
try db.execute("INSERT INTO transfers (amount) VALUES (?)", arguments: [integerAmount])
// Read
let integerAmount = try Int64.fetchOne(db, "SELECT SUM(amount) FROM transfers")! // 100
let amount = NSDecimalNumber(value: integerAmount).multiplying(byPowerOf10: -2) // 0.1
UUID can be stored and fetched from the database just like other values. GRDB stores uuids as 16-bytes data blobs.
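For example, a minimal sketch (the uuid column and the persons table are just an illustration):
let identifier = UUID()
try db.execute(
    "INSERT INTO persons (uuid, name) VALUES (?, ?)",
    arguments: [identifier, "Arthur"])
let row = try Row.fetchOne(db, "SELECT * FROM persons WHERE name = ?", arguments: ["Arthur"])!
let uuid: UUID = row.value(named: "uuid") // UUID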
Swift enums and generally all types that adopt the RawRepresentable protocol can be stored and fetched from the database just like their raw values:
enum Color : Int {
case red, white, rose
}
enum Grape : String {
case chardonnay, merlot, riesling
}
// Declare empty DatabaseValueConvertible adoption
extension Color : DatabaseValueConvertible { }
extension Grape : DatabaseValueConvertible { }
// Store
try db.execute(
"INSERT INTO wines (grape, color) VALUES (?, ?)",
arguments: [Grape.merlot, Color.red])
// Read
let rows = try Row.fetchCursor(db, "SELECT * FROM wines")
while let row = try rows.next() {
let grape: Grape = row.value(named: "grape")
let color: Color = row.value(named: "color")
}
When a database value does not match any enum case, you get a fatal error. This fatal error can be avoided with the DatabaseValueConvertible.fromDatabaseValue() method:
let row = try Row.fetchOne(db, "SELECT 'syrah'")!
row.value(atIndex: 0) as String // "syrah"
row.value(atIndex: 0) as Grape? // fatal error: could not convert "syrah" to Grape.
row.value(atIndex: 0) as Grape // fatal error: could not convert "syrah" to Grape.
Grape.fromDatabaseValue(row.value(atIndex: 0)) // nil
The DatabaseQueue.inTransaction() and DatabasePool.writeInTransaction() methods open an SQLite transaction and run their closure argument in a protected dispatch queue. They block the current thread until your database statements are executed:
try dbQueue.inTransaction { db in
let wine = Wine(color: .red, name: "Pomerol")
try wine.insert(db)
return .commit
}
If an error is thrown within the transaction body, the transaction is rolled back and the error is rethrown by the inTransaction method. If you return .rollback from your closure, the transaction is also rolled back, but no error is thrown.
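For example, a sketch that cancels an insertion when a validation fails, without throwing (the Wine record comes from the example above and is assumed to support fetchCount; the 10000 limit is arbitrary):
try dbQueue.inTransaction { db in
    let wine = Wine(color: .red, name: "Pomerol")
    try wine.insert(db)
    guard try Wine.fetchCount(db) <= 10000 else {
        // Cancels the insertion above; inTransaction does not throw in this case.
        return .rollback
    }
    return .commit
}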
If you want to insert a transaction between other database statements, you can use the Database.inTransaction() function, or even raw SQL statements:
try dbQueue.inDatabase { db in // or dbPool.write { db in
...
try db.inTransaction {
...
return .commit
}
...
try db.execute("BEGIN TRANSACTION")
...
try db.execute("COMMIT")
}
You can ask a database if a transaction is currently opened:
func myCriticalMethod(_ db: Database) throws {
precondition(db.isInsideTransaction, "This method requires a transaction")
try ...
}
Yet, you have a better option than checking for transactions: critical sections of your application should use savepoints, described below:
func myCriticalMethod(_ db: Database) throws {
try db.inSavepoint {
// Here the database is guaranteed to be inside a transaction.
try ...
}
}
Statements grouped in a savepoint can be rollbacked without invalidating a whole transaction:
try dbQueue.inTransaction { db in
try db.inSavepoint {
try db.execute("DELETE ...")
try db.execute("INSERT ...") // need to rollback the delete above if this fails
return .commit
}
// Other savepoints, etc...
return .commit
}
If an error is thrown within the savepoint body, the savepoint is rolled back and the error is rethrown by the inSavepoint method. If you return .rollback from your closure, the body is also rolled back, but no error is thrown.
Unlike transactions, savepoints can be nested. They implicitly open a transaction if none was opened when the savepoint begins. As such, they behave just like nested transactions. Yet the database changes are only committed to disk when the outermost savepoint is committed:
try dbQueue.inDatabase { db in
try db.inSavepoint {
...
try db.inSavepoint {
...
return .commit
}
...
return .commit // writes changes to disk
}
}
SQLite savepoints are more than nested transactions, though. For advanced savepoint uses, use SQL queries.
SQLite supports three kinds of transactions: deferred, immediate, and exclusive. GRDB defaults to immediate.
The transaction kind can be changed in the database configuration, or for each transaction:
// A connection with default DEFERRED transactions:
var config = Configuration()
config.defaultTransactionKind = .deferred
let dbQueue = try DatabaseQueue(path: "...", configuration: config)
// Opens a DEFERRED transaction:
try dbQueue.inTransaction { db in ... }
// Opens an EXCLUSIVE transaction:
try dbQueue.inTransaction(.exclusive) { db in ... }
Conversion to and from the database is based on the DatabaseValueConvertible protocol:
protocol DatabaseValueConvertible {
/// Returns a value that can be stored in the database.
var databaseValue: DatabaseValue { get }
/// Returns a value initialized from databaseValue, if possible.
static func fromDatabaseValue(_ databaseValue: DatabaseValue) -> Self?
}
All types that adopt this protocol can be used like all other values (Bool, Int, String, Date, Swift enums, etc.)
The databaseValue property returns DatabaseValue, a type that wraps the five values supported by SQLite: NULL, Int64, Double, String and Data. Since DatabaseValue has no public initializer, use DatabaseValue.null, or another type that already adopts the protocol: 1.databaseValue, "foo".databaseValue, etc. Conversion to DatabaseValue must not fail.
The fromDatabaseValue() factory method returns an instance of your custom type if the databaseValue contains a suitable value. If the databaseValue does not contain a suitable value, such as "foo" for Date, fromDatabaseValue must return nil (GRDB will interpret this nil result as a conversion error, and react accordingly).
The GRDB Extension Guide contains sample code that has UIColor adopt DatabaseValueConvertible.
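As a minimal sketch, here is a hypothetical EmailAddress wrapper that stores itself as a plain string (the UIColor sample in the guide follows the same pattern, with more involved conversions):
struct EmailAddress {
    let rawValue: String
}
extension EmailAddress : DatabaseValueConvertible {
    /// Stored as a TEXT value:
    var databaseValue: DatabaseValue {
        return rawValue.databaseValue
    }
    /// Returns nil unless the database value contains a string:
    static func fromDatabaseValue(_ databaseValue: DatabaseValue) -> EmailAddress? {
        guard let string = String.fromDatabaseValue(databaseValue) else {
            return nil
        }
        return EmailAddress(rawValue: string)
    }
}
Once the protocol is adopted, EmailAddress values can be used as statement arguments, extracted from rows, and fetched directly, just like the built-in value types.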
Prepared Statements let you prepare an SQL query and execute it later, several times if you need, with different arguments.
There are two kinds of prepared statements: select statements, and update statements:
try dbQueue.inDatabase { db in
let updateSQL = "INSERT INTO persons (name, age) VALUES (:name, :age)"
let updateStatement = try db.makeUpdateStatement(updateSQL)
let selectSQL = "SELECT * FROM persons WHERE name = ?"
let selectStatement = try db.makeSelectStatement(selectSQL)
}
The ? and colon-prefixed keys like :name in the SQL query are the statement arguments. You set them with arrays or dictionaries (arguments are actually of type StatementArguments, which happens to adopt the ExpressibleByArrayLiteral and ExpressibleByDictionaryLiteral protocols).
updateStatement.arguments = ["name": "Arthur", "age": 41]
selectStatement.arguments = ["Arthur"]
After arguments are set, you can execute the prepared statement:
try updateStatement.execute()
Select statements can be used wherever a raw SQL query string would fit (see fetch queries):
let rows = try Row.fetchCursor(selectStatement) // DatabaseCursor<Row>
let persons = try Person.fetchAll(selectStatement) // [Person]
let person = try Person.fetchOne(selectStatement) // Person?
You can set the arguments at the moment of the statement execution:
try updateStatement.execute(arguments: ["name": "Arthur", "age": 41])
let person = try Person.fetchOne(selectStatement, arguments: ["Arthur"])
☝️ Note: it is a programmer error to reuse a prepared statement that has failed: GRDB may crash if you do so.
See row queries, value queries, and Records for more information.
When the same query will be used several times in the lifetime of your application, you may feel a natural desire to cache prepared statements.
Don't cache statements yourself.
☝️ Note: This is because you don't have the necessary tools. Statements are tied to specific SQLite connections and dispatch queues which you don't manage yourself, especially when you use database pools. A change in the database schema may, or may not invalidate a statement. On systems earlier than iOS 8.2 and OSX 10.10 that don't have the sqlite3_close_v2 function, SQLite connections won't close properly if statements have been kept alive.
Instead, use the cachedUpdateStatement and cachedSelectStatement methods. GRDB does all the hard caching and memory management stuff for you:
let updateStatement = try db.cachedUpdateStatement(sql)
let selectStatement = try db.cachedSelectStatement(sql)
Should a cached prepared statement throw an error, don't reuse it (it is a programmer error). Instead, reload it from the cache.
SQLite lets you define SQL functions.
A custom SQL function extends SQLite. It can be used in raw SQL queries. And when SQLite needs to evaluate it, it calls your custom code.
let reverseString = DatabaseFunction("reverseString", argumentCount: 1, pure: true) { (values: [DatabaseValue]) in
// Extract string value, if any...
guard let string = String.fromDatabaseValue(values[0]) else {
return nil
}
// ... and return reversed string:
return String(string.characters.reversed())
}
dbQueue.add(function: reverseString) // Or dbPool.add(function: ...)
try dbQueue.inDatabase { db in
// "oof"
try String.fetchOne(db, "SELECT reverseString('foo')")!
}
The function argument takes an array of DatabaseValue, and returns any valid value (Bool, Int, String, Date, Swift enums, etc.) The number of database values is guaranteed to be argumentCount.
SQLite has the opportunity to perform additional optimizations when functions are "pure", which means that their result only depends on their arguments. So make sure to set the pure argument to true when possible.
Functions can take a variable number of arguments:
When you don't provide any explicit argumentCount, the function can take any number of arguments:
let averageOf = DatabaseFunction("averageOf", pure: true) { (values: [DatabaseValue]) in
let doubles = values.flatMap { Double.fromDatabaseValue($0) }
return doubles.reduce(0, +) / Double(doubles.count)
}
dbQueue.add(function: averageOf)
try dbQueue.inDatabase { db in
// 2.0
try Double.fetchOne(db, "SELECT averageOf(1, 2, 3)")!
}
Functions can throw:
let sqrt = DatabaseFunction("sqrt", argumentCount: 1, pure: true) { (values: [DatabaseValue]) in
guard let double = Double.fromDatabaseValue(values[0]) else {
return nil
}
guard double >= 0 else {
throw DatabaseError(message: "invalid negative number")
}
return sqrt(double)
}
dbQueue.add(function: sqrt)
// SQLite error 1 with statement `SELECT sqrt(-1)`: invalid negative number
try dbQueue.inDatabase { db in
try Double.fetchOne(db, "SELECT sqrt(-1)")!
}
Use custom functions in the query interface:
// SELECT reverseString("name") FROM persons
Person.select(reverseString.apply(nameColumn))
GRDB ships with built-in SQL functions that perform unicode-aware string transformations. See Unicode.
SQLite provides database schema introspection tools, such as the sqlite_master table, and the pragma table_info:
try db.create(table: "persons") { t in
t.column("id", .integer).primaryKey()
t.column("name", .text)
}
// <Row type:"table" name:"persons" tbl_name:"persons" rootpage:2
// sql:"CREATE TABLE persons(id INTEGER PRIMARY KEY, name TEXT)">
for row in try Row.fetchAll(db, "SELECT * FROM sqlite_master") {
print(row)
}
// <Row cid:0 name:"id" type:"INTEGER" notnull:0 dflt_value:NULL pk:1>
// <Row cid:1 name:"name" type:"TEXT" notnull:0 dflt_value:NULL pk:0>
for row in try Row.fetchAll(db, "PRAGMA table_info('persons')") {
print(row)
}
GRDB provides five high-level methods as well:
try db.tableExists("persons") // Bool, true if the table exists
try db.columnCount(in: "persons") // Int, the number of columns in table
try db.indexes(on: "persons") // [IndexInfo], the indexes defined on the table
try db.table("persons", hasUniqueKey: ["email"]) // Bool, true if column(s) is a unique key
try db.primaryKey("persons") // PrimaryKeyInfo?
The primary key is nil when the table has no primary key:
// CREATE TABLE items (name TEXT)
let itemPk = try db.primaryKey("items") // nil
Primary keys have one or several columns. Single-column primary keys may contain the auto-incremented row id:
// CREATE TABLE persons (
// id INTEGER PRIMARY KEY,
// name TEXT
// )
let personPk = try db.primaryKey("persons")!
personPk.columns // ["id"]
personPk.rowIDColumn // "id"
// CREATE TABLE countries (
// isoCode TEXT NOT NULL PRIMARY KEY,
// name TEXT
// )
let countryPk = try db.primaryKey("countries")!
countryPk.columns // ["isoCode"]
countryPk.rowIDColumn // nil
// CREATE TABLE citizenships (
// personID INTEGER NOT NULL REFERENCES persons(id),
// countryIsoCode TEXT NOT NULL REFERENCES countries(isoCode),
// PRIMARY KEY (personID, countryIsoCode)
// )
let citizenshipsPk = try db.primaryKey("citizenships")!
citizenshipsPk.columns // ["personID", "countryIsoCode"]
citizenshipsPk.rowIDColumn // nil
Row adapters let you present database rows in the way expected by the row consumers.
They basically help two incompatible row interfaces to work together. For example, a row consumer expects a column named "consumed", but the produced row has a column named "produced".
In this case, the ColumnMapping row adapter comes in handy:
// Fetch a 'produced' column, and consume a 'consumed' column:
let adapter = ColumnMapping(["consumed": "produced"])
let row = try Row.fetchOne(db, "SELECT 'Hello' AS produced", adapter: adapter)!
row.value(named: "consumed") // "Hello"
row.value(named: "produced") // nil
Row adapters are values that adopt the RowAdapter protocol. You can implement your own custom adapters, or use one of the four built-in adapters:
ColumnMapping renames columns. Build one with a dictionary whose keys are adapted column names, and values the column names in the raw row:
// <Row newName:"Hello">
let adapter = ColumnMapping(["newName": "oldName"])
let row = try Row.fetchOne(db, "SELECT 'Hello' AS oldName", adapter: adapter)!
SuffixRowAdapter hides the first columns in a row:
// <Row b:1 c:2>
let adapter = SuffixRowAdapter(fromIndex: 1)
let row = try Row.fetchOne(db, "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!
RangeRowAdapter only exposes a range of columns:
// <Row b:1>
let adapter = RangeRowAdapter(1..<2)
let row = try Row.fetchOne(db, "SELECT 0 AS a, 1 AS b, 2 AS c", adapter: adapter)!
ScopeAdapter defines row scopes:
let adapter = ScopeAdapter([
"left": RangeRowAdapter(0..<2),
"right": RangeRowAdapter(2..<4)])
let row = try Row.fetchOne(db, "SELECT 0 AS a, 1 AS b, 2 AS c, 3 AS d", adapter: adapter)!
ScopeAdapter does not change the columns and values of the fetched row. Instead, it defines scopes, which you access with the scoped(on:) method. It returns an optional Row, which is nil if the scope is missing.
row // <Row a:0 b:1 c:2 d:3>
row.scoped(on: "left") // <Row a:0 b:1>
row.scoped(on: "right") // <Row c:2 d:3>
row.scoped(on: "missing") // nil
Scopes can be nested:
let adapter = ScopeAdapter([
"left": ScopeAdapter([
"left": RangeRowAdapter(0..<1),
"right": RangeRowAdapter(1..<2)]),
"right": ScopeAdapter([
"left": RangeRowAdapter(2..<3),
"right": RangeRowAdapter(3..<4)])
])
let row = try Row.fetchOne(db, "SELECT 0 AS a, 1 AS b, 2 AS c, 3 AS d", adapter: adapter)!
let leftRow = row.scoped(on: "left")!
leftRow.scoped(on: "left") // <Row a:0>
leftRow.scoped(on: "right") // <Row b:1>
let rightRow = row.scoped(on: "right")!
rightRow.scoped(on: "left") // <Row c:2>
rightRow.scoped(on: "right") // <Row d:3>
Any adapter can be extended with scopes:
let adapter = RangeRowAdapter(0..<2)
.addingScopes(["remainder": SuffixRowAdapter(fromIndex: 2)])
let row = try Row.fetchOne(db, "SELECT 0 AS a, 1 AS b, 2 AS c, 3 AS d", adapter: adapter)!
row // <Row a:0 b:1>
row.scoped(on: "remainder") // <Row c:2 d:3>
If not all SQLite APIs are exposed in GRDB, you can still use the SQLite C Interface and call SQLite C functions.
let sqliteVersion = String(cString: sqlite3_libversion())
Those functions are embedded right into the GRDB framework, unless you use GRDB through the Swift Package Manager. In this case, you need to import the CSQLite module:
// When you use the Swift Package Manager
import CSQLite
let sqliteVersion = String(cString: sqlite3_libversion())
Raw pointers to database connections and statements are available through the Database.sqliteConnection and Statement.sqliteStatement properties:
try dbQueue.inDatabase { db in
// The raw pointer to a database connection:
let sqliteConnection = db.sqliteConnection
// The raw pointer to a statement:
let statement = try db.makeSelectStatement("SELECT ...")
let sqliteStatement = statement.sqliteStatement
}
☝️ Notes
- Those pointers are owned by GRDB: don't close connections or finalize statements created by GRDB.
- GRDB opens SQLite connections in the "multi-thread mode", which (oddly) means that they are not thread-safe. Make sure you touch raw databases and statements inside their dedicated dispatch queues.
- Use the raw SQLite C Interface at your own risk. GRDB won't prevent you from shooting yourself in the foot.
Before jumping in the low-level wagon, here is the list of all SQLite APIs used by GRDB:
- sqlite3_backup_finish, sqlite3_backup_init, sqlite3_backup_step: see Backup
- sqlite3_bind_blob, sqlite3_bind_double, sqlite3_bind_int64, sqlite3_bind_null, sqlite3_bind_parameter_count, sqlite3_bind_parameter_name, sqlite3_bind_text, sqlite3_clear_bindings, sqlite3_column_blob, sqlite3_column_bytes, sqlite3_column_count, sqlite3_column_double, sqlite3_column_int64, sqlite3_column_name, sqlite3_column_text, sqlite3_column_type, sqlite3_exec, sqlite3_finalize, sqlite3_prepare_v2, sqlite3_reset, sqlite3_step: see Executing Updates, Fetch Queries, Prepared Statements, Values
- sqlite3_busy_handler, sqlite3_busy_timeout: see Configuration.busyMode
- sqlite3_changes, sqlite3_total_changes: see Database.changesCount and Database.totalChangesCount
- sqlite3_close, sqlite3_close_v2, sqlite3_next_stmt, sqlite3_open_v2: see Database Connections
- sqlite3_commit_hook, sqlite3_rollback_hook, sqlite3_update_hook: see Database Changes Observation, FetchedRecordsController
- sqlite3_config: see Error Log
- sqlite3_create_collation_v2: see String Comparison
- sqlite3_create_function_v2, sqlite3_result_blob, sqlite3_result_double, sqlite3_result_error, sqlite3_result_error_code, sqlite3_result_int64, sqlite3_result_null, sqlite3_result_text, sqlite3_user_data, sqlite3_value_blob, sqlite3_value_bytes, sqlite3_value_double, sqlite3_value_int64, sqlite3_value_text, sqlite3_value_type: see Custom SQL Functions
- sqlite3_db_release_memory: see Memory Management
- sqlite3_errcode, sqlite3_errmsg, sqlite3_errstr, sqlite3_extended_result_codes: see Error Handling
- sqlite3_key, sqlite3_rekey: see Encryption
- sqlite3_last_insert_rowid: see Executing Updates
- sqlite3_preupdate_count, sqlite3_preupdate_depth, sqlite3_preupdate_hook, sqlite3_preupdate_new, sqlite3_preupdate_old: see Support for SQLite Pre-Update Hooks
- sqlite3_set_authorizer: reserved by GRDB
- sqlite3_sql: see Statement.sql
- sqlite3_trace: see Configuration.trace
- sqlite3_wal_checkpoint_v2: see DatabasePool.checkpoint
On top of the SQLite API, GRDB provides protocols and a class that help manipulating database rows as regular objects named "records":
try dbQueue.inDatabase { db in
if let poi = try PointOfInterest.fetchOne(db, key: 1) {
poi.isFavorite = true
try poi.update(db)
}
}
Of course, you need to open a database connection, and create a database table first.
Your custom structs and classes can adopt each protocol individually, and opt in to focused sets of features. Or you can subclass the Record class, and get the full toolkit in one go: fetching methods, persistence methods, and changes tracking. See the list of record methods for an overview.
☝️ Note: if you are familiar with Core Data's NSManagedObject or Realm's Object, you may experience a cultural shock: GRDB records are not uniqued, and do not auto-update. This is both a purpose, and a consequence of protocol-oriented programming. You should read How to build an iOS application with SQLite and GRDB.swift for a general introduction.
Overview
Protocols and the Record class
- RowConvertible Protocol
- TableMapping Protocol
- Persistable Protocol
- Record Class
- The Implicit RowID Primary Key
- List of Record Methods
To insert a record in the database, subclass the Record class or adopt the Persistable protocol, and call the insert method:
class Person : Record { ... }
let person = Person(name: "Arthur", email: "[email protected]")
try person.insert(db)
Record subclasses and types that adopt the RowConvertible protocol can be fetched from the database:
class Person : Record { ... }
let persons = try Person.fetchAll(db, "SELECT ...", arguments: ...) // [Person]
Add the TableMapping protocol and you can stop writing SQL:
let persons = try Person.filter(emailColumn != nil).order(nameColumn).fetchAll(db) // [Person]
let person = try Person.fetchOne(db, key: 1) // Person?
let person = try Person.fetchOne(db, key: ["email": "[email protected]"]) // Person?
let countries = try Country.fetchAll(db, keys: ["FR", "US"]) // [Country]
See fetching methods, and the query interface.
Record subclasses and types that adopt the Persistable protocol can be updated in the database:
let person = try Person.fetchOne(db, key: 1)!
person.name = "Arthur"
try person.update(db)
Record subclasses track changes, so that you can avoid useless updates:
let person = try Person.fetchOne(db, key: 1)!
person.name = "Arthur"
if person.hasPersistentChangedValues {
try person.update(db)
}
For batch updates, execute an SQL query:
try db.execute("UPDATE persons SET synchronized = 1")
Record subclasses and types that adopt the Persistable protocol can be deleted from the database:
let person = try Person.fetchOne(db, key: 1)!
try person.delete(db)
The TableMapping protocol gives you methods that delete according to primary key or any unique index:
try Person.deleteOne(db, key: 1)
try Person.deleteOne(db, key: ["email": "[email protected]"])
try Country.deleteAll(db, keys: ["FR", "US"])
For batch deletes, see the query interface:
try Person.filter(emailColumn == nil).deleteAll(db)
Record subclasses and types that adopt the TableMapping protocol can be counted:
let personWithEmailCount = try Person.filter(emailColumn != nil).fetchCount(db) // Int
You can now jump to the detailed documentation of each protocol and of the Record class below.
The RowConvertible protocol grants fetching methods to any type that can be built from a database row:
protocol RowConvertible {
/// Row initializer
init(row: Row)
}
To use RowConvertible, subclass the Record class, or adopt it explicitly. For example:
struct PointOfInterest {
var id: Int64?
var title: String
var coordinate: CLLocationCoordinate2D
}
extension PointOfInterest : RowConvertible {
init(row: Row) {
id = row.value(named: "id")
title = row.value(named: "title")
coordinate = CLLocationCoordinate2DMake(
row.value(named: "latitude"),
row.value(named: "longitude"))
}
}
See column values for more information about the row.value() method.
☝️ Note: for performance reasons, the same row argument to init(row:) is reused during the iteration of a fetch query. If you want to keep the row for later use, make sure to store a copy: self.row = row.copy().
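For example, a hypothetical record that wants to keep its whole source row would store a copy:
struct PersonSnapshot : RowConvertible {
    let row: Row
    init(row: Row) {
        // The row argument is reused by the cursor during iteration:
        // keep an independent copy for later use.
        self.row = row.copy()
    }
}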
RowConvertible allows adopting types to be fetched from SQL queries:
try PointOfInterest.fetchCursor(db, "SELECT ...", arguments:...) // DatabaseCursor<PointOfInterest>
try PointOfInterest.fetchAll(db, "SELECT ...", arguments:...) // [PointOfInterest]
try PointOfInterest.fetchOne(db, "SELECT ...", arguments:...) // PointOfInterest?
See fetching methods for information about the fetchCursor, fetchAll and fetchOne methods. See StatementArguments for more information about the query arguments.
RowConvertible types usually consume rows by column name:
extension PointOfInterest : RowConvertible {
init(row: Row) {
id = row.value(named: "id") // "id"
title = row.value(named: "title") // "title"
coordinate = CLLocationCoordinate2DMake(
row.value(named: "latitude"), // "latitude"
row.value(named: "longitude")) // "longitude"
}
}
Occasionally, you'll want to write a complex SQL query that uses different column names. In this case, row adapters are there to help you map raw column names to the names expected by your RowConvertible types.
Adopt the TableMapping protocol on top of RowConvertible, and you are granted the full query interface.
protocol TableMapping {
static var databaseTableName: String { get }
static var selectsRowID: Bool { get }
}
The databaseTableName type property is the name of a database table. selectsRowID is optional, and documented in The Implicit RowID Primary Key chapter.
To use TableMapping, subclass the Record class, or adopt it explicitly. For example:
extension PointOfInterest : TableMapping {
static let databaseTableName = "pointOfInterests"
}
Adopting types can be fetched without SQL, using the query interface:
let paris = try PointOfInterest.filter(nameColumn == "Paris").fetchOne(db)
TableMapping can also fetch and delete records by primary key:
// Fetch
try Person.fetchOne(db, key: 1) // Person?
try Person.fetchAll(db, keys: [1, 2, 3]) // [Person]
try Country.fetchOne(db, key: "FR") // Country?
try Country.fetchAll(db, keys: ["FR", "US"]) // [Country]
// Delete
try Person.deleteOne(db, key: 1)
try Country.deleteAll(db, keys: ["FR", "US"])
When the table has no explicit primary key, GRDB uses the hidden "rowid" column:
// SELECT * FROM documents WHERE rowid = 1
try Document.fetchOne(db, key: 1) // Document?
// DELETE FROM documents WHERE rowid = 1
try Document.deleteOne(db, key: 1)
For multiple-column primary keys and unique keys defined by unique indexes, provide a dictionary:
// SELECT * FROM citizenships WHERE personID = 1 AND countryISOCode = 'FR'
try Citizenship.fetchOne(db, key: ["personID": 1, "countryISOCode": "FR"]) // Citizenship?
// DELETE FROM persons WHERE email = '[email protected]'
try Person.deleteOne(db, key: ["email": "[email protected]"])
GRDB provides two protocols that let adopting types store themselves in the database:
protocol MutablePersistable : TableMapping {
/// The name of the database table (from TableMapping)
static var databaseTableName: String { get }
/// The values persisted in the database
var persistentDictionary: [String: DatabaseValueConvertible?] { get }
/// Optional method that lets your adopting type store its rowID upon
/// successful insertion. Don't call it directly: it is called for you.
mutating func didInsert(with rowID: Int64, for column: String?)
}
protocol Persistable : MutablePersistable {
/// Non-mutating version of the optional didInsert(with:for:)
func didInsert(with rowID: Int64, for column: String?)
}
Yes, two protocols instead of one. Both grant exactly the same advantages. Here is how you pick one or the other:
- If your type is a struct that mutates on insertion, choose MutablePersistable. For example, your table has an INTEGER PRIMARY KEY and you want to store the inserted id on successful insertion. Or your table has a UUID primary key, and you want to automatically generate one on insertion.
- Otherwise, stick with Persistable, particularly if your type is a class.
The persistentDictionary property returns a dictionary whose keys are column names, and whose values are any DatabaseValueConvertible values (Bool, Int, String, Date, Swift enums, etc.). See Values for more information.
The optional didInsert method lets the adopting type store its rowID after successful insertion. If your table has an INTEGER PRIMARY KEY column, you are likely to define this method. Otherwise, you can safely ignore it. It is called from a protected dispatch queue, and serialized with all database updates.
To use those protocols, subclass the Record class, or adopt one of them explicitly. For example:
extension PointOfInterest : MutablePersistable {
/// The values persisted in the database
var persistentDictionary: [String: DatabaseValueConvertible?] {
return [
"id": id,
"title": title,
"latitude": coordinate.latitude,
"longitude": coordinate.longitude]
}
// Update id upon successful insertion:
mutating func didInsert(with rowID: Int64, for column: String?) {
id = rowID
}
}
var paris = PointOfInterest(
id: nil,
title: "Paris",
coordinate: CLLocationCoordinate2DMake(48.8534100, 2.3488000))
try paris.insert(db)
paris.id // some value
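For contrast, here is a minimal sketch of a class adopting the non-mutating Persistable protocol (the Preference type and its preferences table are hypothetical). Since the table has no auto-incremented id to store back, didInsert is not needed:
class Preference : Persistable {
    static let databaseTableName = "preferences"
    let key: String
    var value: String

    init(key: String, value: String) {
        self.key = key
        self.value = value
    }

    /// The values persisted in the database
    var persistentDictionary: [String: DatabaseValueConvertible?] {
        return ["key": key, "value": value]
    }
}

try Preference(key: "theme", value: "dark").insert(db)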
When your record has many columns, the Swift compiler may take a long time compiling the persistentDictionary property:
// May be long to compile
var persistentDictionary: [String: DatabaseValueConvertible?] {
return [
"id": id,
"title": title,
// many other columns
]
}
If this happens, check the "Compilation takes a long time" FAQ.
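One possible workaround, sketched below (the FAQ may describe a different technique), is to help the type checker by building the dictionary one small statement at a time:
var persistentDictionary: [String: DatabaseValueConvertible?] {
    // One small statement per column is much cheaper to type-check
    // than a single big dictionary literal:
    var dict: [String: DatabaseValueConvertible?] = [:]
    dict.updateValue(id, forKey: "id")
    dict.updateValue(title, forKey: "title")
    // ... many other columns
    return dict
}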
Record subclasses and types that adopt Persistable are given default implementations for methods that insert, update, and delete:
try pointOfInterest.insert(db) // INSERT
try pointOfInterest.update(db) // UPDATE
try pointOfInterest.update(db, columns: ...) // UPDATE
try pointOfInterest.save(db) // Inserts or updates
try pointOfInterest.delete(db) // DELETE
pointOfInterest.exists(db) // Bool
- insert, update, save and delete can throw a DatabaseError whenever an SQLite integrity check fails.
- update can also throw a PersistenceError of type recordNotFound, should the update fail because there is no matching row in the database. When saving an object that may or may not already exist in the database, prefer the save method.
- save makes sure your values are stored in the database. It performs an UPDATE if the record has a non-null primary key, and then, if no row was modified, an INSERT. It directly performs an INSERT if the record has no primary key, or a null primary key. Despite the fact that it may execute two SQL statements, save behaves as an atomic operation: GRDB won't allow any concurrent thread to sneak in (see concurrency).
- delete returns whether a database row was deleted or not.
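For example, here is a hedged sketch of catching those errors (the savePerson function is illustrative; note that the save method already implements this update-then-insert logic):
func savePerson(_ person: Person, in db: Database) throws {
    do {
        try person.update(db)
    } catch PersistenceError.recordNotFound {
        // No row matched the primary key: perform an insert instead.
        try person.insert(db)
    } catch let error as DatabaseError {
        // An SQLite integrity check failed (unique index, foreign key, etc.).
        print("SQLite error: \(error)")
        throw error
    }
}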
All primary keys are supported, including composite primary keys that span several columns, and the implicit rowid primary key.
Your custom type may want to perform extra work when the persistence methods are invoked.
For example, it may want to have its UUID automatically set before inserting. Or it may want to validate its values before saving.
When you subclass Record, you simply have to override the customized method, and call super:
class Person : Record {
var uuid: UUID?
override func insert(_ db: Database) throws {
if uuid == nil {
uuid = UUID()
}
try super.insert(db)
}
}
If you use the raw Persistable protocol, use one of the special methods performInsert, performUpdate, performSave, performDelete, or performExists:
struct Link : Persistable {
var url: URL
func insert(_ db: Database) throws {
try validate()
try performInsert(db)
}
func update(_ db: Database, columns: Set<String>) throws {
try validate()
try performUpdate(db, columns: columns)
}
func validate() throws {
if url.host == nil {
throw ValidationError("url must be absolute.")
}
}
}
☝️ Note: the special methods performInsert, performUpdate, etc. are reserved for your custom implementations. Do not use them elsewhere. Do not provide another implementation for those methods.
☝️ Note: it is recommended that you do not implement your own version of the save method. Its default implementation forwards the job to update or insert: these are the methods that may need customization, not save.
Insertions and updates can create conflicts: for example, a query may attempt to insert a duplicate row that violates a unique index.
Those conflicts normally end with an error. Yet SQLite lets you alter the default behavior, and handle conflicts with specific policies. For example, the INSERT OR REPLACE statement handles conflicts with the "replace" policy, which replaces the conflicting row instead of throwing an error.
The five different policies are: abort (the default), replace, rollback, fail, and ignore.
SQLite lets you specify conflict policies at two different places:
- At the table level:
// CREATE TABLE persons (
//   id INTEGER PRIMARY KEY,
//   email TEXT UNIQUE ON CONFLICT REPLACE
// )
try db.create(table: "persons") { t in
    t.column("id", .integer).primaryKey()
    t.column("email", .text).unique(onConflict: .replace) // <--
}
// Despite the unique index on email, both inserts succeed.
// The second insert replaces the first row:
try db.execute("INSERT INTO persons (email) VALUES (?)", arguments: ["[email protected]"])
try db.execute("INSERT INTO persons (email) VALUES (?)", arguments: ["[email protected]"])
- At the query level:
// CREATE TABLE persons (
//   id INTEGER PRIMARY KEY,
//   email TEXT UNIQUE
// )
try db.create(table: "persons") { t in
    t.column("id", .integer).primaryKey()
    t.column("email", .text).unique()
}
// Again, despite the unique index on email, both inserts succeed.
try db.execute("INSERT OR REPLACE INTO persons (email) VALUES (?)", arguments: ["[email protected]"])
try db.execute("INSERT OR REPLACE INTO persons (email) VALUES (?)", arguments: ["[email protected]"])
When you want to handle conflicts at the query level, specify a custom persistenceConflictPolicy in your type that adopts the MutablePersistable or Persistable protocol. It will alter the INSERT and UPDATE queries run by the insert, update and save persistence methods:
protocol MutablePersistable {
/// The policy that handles SQLite conflicts when records are inserted
/// or updated.
///
/// This property is optional: its default value uses the ABORT policy
/// for both insertions and updates, and has GRDB generate regular
/// INSERT and UPDATE queries.
static var persistenceConflictPolicy: PersistenceConflictPolicy { get }
}
struct Person : MutablePersistable {
static let persistenceConflictPolicy = PersistenceConflictPolicy(
insert: .replace,
update: .replace)
}
// INSERT OR REPLACE INTO persons (...) VALUES (...)
try person.insert(db)
☝️ Note: the ignore policy does not play well at all with the didInsert method, which notifies the rowID of inserted records. Choose your poison:
- if you specify the ignore policy at the table level, don't implement the didInsert method: it will be called with some random id in case of failed insert.
- if you specify the ignore policy at the query level, the didInsert method is never called.
⚠️ Warning: ON CONFLICT REPLACE may delete rows so that inserts and updates can succeed. Those deletions are not reported to transaction observers (this might change in a future release of SQLite).
Record is a class that is designed to be subclassed, and provides the full toolkit in one go: fetching and persistence methods, as well as changes tracking (see the list of record methods for an overview).
Record subclasses inherit their features from the RowConvertible, TableMapping, and Persistable protocols. Check their documentation for more information.
For example, here is a fully functional Record subclass:
class PointOfInterest : Record {
var id: Int64?
var title: String
var coordinate: CLLocationCoordinate2D
/// The table name
override class var databaseTableName: String {
return "pointOfInterests"
}
/// Initialize from a database row
required init(row: Row) {
id = row.value(named: "id")
title = row.value(named: "title")
coordinate = CLLocationCoordinate2DMake(
row.value(named: "latitude"),
row.value(named: "longitude"))
super.init(row: row)
}
/// The values persisted in the database
override var persistentDictionary: [String: DatabaseValueConvertible?] {
return [
"id": id,
"title": title,
"latitude": coordinate.latitude,
"longitude": coordinate.longitude]
}
/// Update record ID after a successful insertion
override func didInsert(with rowID: Int64, for column: String?) {
id = rowID
}
}
The Record class provides changes tracking.
The update() method always executes an UPDATE statement. When the record has not been edited, this costly database access is generally useless.
Avoid it with the hasPersistentChangedValues property, which returns whether the record has changes that have not been saved:
// Saves the person if it has changes that have not been saved:
if person.hasPersistentChangedValues {
try person.save(db)
}
The hasPersistentChangedValues flag is false after a record has been fetched or saved into the database. Subsequent modifications may set it, or not: hasPersistentChangedValues is based on value comparison. Setting a property to the same value does not set the changed flag:
let person = Person(name: "Barbara", age: 35)
person.hasPersistentChangedValues // true
try person.insert(db)
person.hasPersistentChangedValues // false
person.name = "Barbara"
person.hasPersistentChangedValues // false
person.age = 36
person.hasPersistentChangedValues // true
person.persistentChangedValues // ["age": 35]
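As a sketch, the keys of persistentChangedValues can also feed update(_:columns:), so that only modified columns are sent to the database:
if person.hasPersistentChangedValues {
    // Only send the columns whose values actually changed:
    let changedColumns = Set(person.persistentChangedValues.keys)
    try person.update(db, columns: changedColumns)
}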
For an efficient algorithm which synchronizes the content of a database table with a JSON payload, check JSONSynchronization.playground.
All SQLite tables have a primary key. Even when the primary key is not explicit:
// No explicit primary key
try db.create(table: "events") { t in
t.column("message", .text)
t.column("date", .datetime)
}
// No way to define an explicit primary key
try db.create(virtualTable: "books", using: FTS4()) { t in
t.column("title")
t.column("author")
t.column("body")
}
The implicit primary key is stored in the hidden column rowid. Hidden means that SELECT * does not select it, and yet it can be selected and queried: SELECT *, rowid ... WHERE rowid = 1.
Some GRDB methods will automatically use this hidden column when a table has no explicit primary key:
// SELECT * FROM events WHERE rowid = 1
let event = try Event.fetchOne(db, key: 1)
// DELETE FROM books WHERE rowid = 1
try Book.deleteOne(db, key: 1)
By default, a record type that wraps a table without any explicit primary key doesn't know about the hidden rowid column.
Without a primary key, records don't have any identity, and the persistence methods can behave in an undesired fashion: update() throws errors, save() always performs insertions and may break constraints, and exists() is always false.
When SQLite won't let you provide an explicit primary key (as in full-text tables, for example), you may want to make your record type fully aware of the hidden rowid column:
- Have the selectsRowID static property from the TableMapping protocol be true:
struct Event : TableMapping {
    static let selectsRowID = true
}
// When you subclass Record, you need an override:
class Book : Record {
    override class var selectsRowID: Bool { return true }
}
GRDB will then select the rowid column by default:
// SELECT *, rowid FROM events
let events = try Event.fetchAll(db)
- Have init(row:) from the RowConvertible protocol consume the "rowid" column:
struct Event : RowConvertible {
    var id: Int64?
    init(row: Row) {
        id = row.value(named: "rowid")
    }
}
If you prefer using the Column type from the query interface, use the Column.rowID constant:
init(row: Row) {
    id = row.value(Column.rowID)
}
Your fetched records will then know their ids:
let event = try Event.fetchOne(db)!
event.id // some value
- Include the rowid in your persistentDictionary, and keep it in the didInsert(with:for:) method (both from the Persistable and MutablePersistable protocols):
struct Event : MutablePersistable {
    var id: Int64?
    var persistentDictionary: [String: DatabaseValueConvertible?] {
        return [
            "rowid": id,
            "message": message,
            "date": date,
        ]
    }
    mutating func didInsert(with rowID: Int64, for column: String?) {
        id = rowID
    }
}
You will then be able to track your record ids, update them, or check for their existence:
var event = Event(message: "foo", date: Date())
// Insertion sets the record id:
try event.insert(db)
event.id // some value
// Record can be updated:
event.message = "bar"
try event.update(db)
// Record knows if it exists:
event.exists(db) // true
This is the list of record methods, along with their required protocols. The Record Class adopts all these protocols.
Method | Protocols | Notes |
---|---|---|
Inserting and Updating Records | | |
try record.insert(db) | Persistable | |
try record.save(db) | Persistable | |
try record.update(db) | Persistable | |
try record.update(db, columns: ...) | Persistable | |
Checking Record Existence | | |
record.exists(db) | Persistable | |
Deleting Records | | |
try record.delete(db) | Persistable | |
try Type.deleteOne(db, key: ...) | TableMapping | ¹ |
try Type.deleteAll(db) | TableMapping | |
try Type.deleteAll(db, keys: ...) | TableMapping | ¹ |
try Type.filter(...).deleteAll(db) | TableMapping | ² |
Counting Records | | |
Type.fetchCount(db) | TableMapping | |
Type.filter(...).fetchCount(db) | TableMapping | ² |
Fetching Record Cursors | | |
Type.fetchCursor(db) | RowConvertible & TableMapping | |
Type.fetchCursor(db, keys: ...) | RowConvertible & TableMapping | ¹ |
Type.fetchCursor(db, sql) | RowConvertible | ³ |
Type.fetchCursor(statement) | RowConvertible | ⁴ |
Type.filter(...).fetchCursor(db) | RowConvertible & TableMapping | ² |
Fetching Record Arrays | | |
Type.fetchAll(db) | RowConvertible & TableMapping | |
Type.fetchAll(db, keys: ...) | RowConvertible & TableMapping | ¹ |
Type.fetchAll(db, sql) | RowConvertible | ³ |
Type.fetchAll(statement) | RowConvertible | ⁴ |
Type.filter(...).fetchAll(db) | RowConvertible & TableMapping | ² |
Fetching Individual Records | | |
Type.fetchOne(db) | RowConvertible & TableMapping | |
Type.fetchOne(db, key: ...) | RowConvertible & TableMapping | ¹ |
Type.fetchOne(db, sql) | RowConvertible | ³ |
Type.fetchOne(statement) | RowConvertible | ⁴ |
Type.filter(...).fetchOne(db) | RowConvertible & TableMapping | ² |
Changes Tracking | | |
record.hasPersistentChangedValues | Record | |
record.persistentChangedValues | Record | |
¹ All unique keys are supported: primary keys (single-column, composite, implicit RowID) and unique indexes:
try Person.fetchOne(db, key: 1) // Person?
try Person.fetchOne(db, key: ["email": "[email protected]"]) // Person?
try Country.fetchAll(db, keys: ["FR", "US"]) // [Country]
² See Fetch Requests:
let request = Person.filter(emailColumn != nil).order(nameColumn)
let persons = try request.fetchAll(db) // [Person]
let count = try request.fetchCount(db) // Int
³ See SQL queries:
let persons = try Person.fetchAll("SELECT * FROM persons WHERE id = ?", arguments: [1]) // [Person]
⁴ See Prepared Statements:
let statement = try db.makeSelectStatement("SELECT * FROM persons WHERE id = ?")
let persons = try Person.fetchAll(statement, arguments: [1]) // [Person]
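As a small sketch of that pattern (assuming the same persons table as above), a prepared statement can be reused with different arguments:
let statement = try db.makeSelectStatement("SELECT * FROM persons WHERE id = ?")
for id in [1, 2, 3] {
    // The same statement is executed with a new argument each time:
    if let person = try Person.fetchOne(statement, arguments: [id]) {
        print(person)
    }
}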
The query interface lets you write pure Swift instead of SQL:
try dbQueue.inDatabase { db in
// Update database schema
try db.create(table: "wines") { t in ... }
// Fetch records
let wines = try Wine.filter(origin == "Burgundy").order(price).fetchAll(db)
// Count
let count = try Wine.filter(color == Color.red).fetchCount(db)
// Delete
try Wine.filter(corked == true).deleteAll(db)
}
You need to open a database connection before you can query the database.
Please bear in mind that the query interface can not generate all possible SQL queries. You may also prefer writing SQL, and this is just OK. From little snippets to full queries, your SQL skills are welcome:
try dbQueue.inDatabase { db in
// Update database schema (with SQL)
try db.execute("CREATE TABLE wines (...)")
// Fetch records (with SQL)
let wines = try Wine.fetchAll(db,
"SELECT * FROM wines WHERE origin = ? ORDER BY price",
arguments: ["Burgundy"])
// Count (with an SQL snippet)
let count = try Wine
.filter(sql: "color = ?", arguments: [Color.red])
.fetchCount(db)
// Delete (with SQL)
try db.execute("DELETE FROM wines WHERE corked")
}
So don't miss the SQL API.
- Database Schema
- Requests
- Expressions
- Fetching from Requests
- Fetching by Key
- Fetching Aggregated Values
- Delete Requests
- Custom Requests
- GRDB Extension Guide
Once granted with a database connection, you can setup your database schema without writing SQL:
// CREATE TABLE pointOfInterests (
// id INTEGER PRIMARY KEY,
// title TEXT,
// favorite BOOLEAN NOT NULL DEFAULT 0,
// latitude DOUBLE NOT NULL,
// longitude DOUBLE NOT NULL
// )
try db.create(table: "pointOfInterests") { t in
t.column("id", .integer).primaryKey()
t.column("title", .text)
t.column("favorite", .boolean).notNull().defaults(to: false)
t.column("longitude", .double).notNull()
t.column("latitude", .double).notNull()
}
The create(table:) method covers nearly all SQLite table creation features. For virtual tables, see Full-Text Search, or use raw SQL.
SQLite has many reference documents about table creation:
- CREATE TABLE
- Datatypes In SQLite Version 3
- SQLite Foreign Key Support
- ON CONFLICT
- The WITHOUT ROWID Optimization
Configure table creation:
// CREATE TABLE example ( ... )
try db.create(table: "example") { t in ... }
// CREATE TEMPORARY TABLE example IF NOT EXISTS (
try db.create(table: "example", temporary: true, ifNotExists: true) { t in
Add regular columns with their name and type (text, integer, double, numeric, boolean, blob, date and datetime) - see SQLite data types:
// name TEXT,
// creationDate DATETIME,
t.column("name", .text)
t.column("creationDate", .datetime)
Define not null columns, and set default values:
// email TEXT NOT NULL,
t.column("email", .text).notNull()
// name TEXT NOT NULL DEFAULT 'Anonymous',
t.column("name", .text).notNull().defaults(to: "Anonymous")
Use an individual column as primary, unique, or foreign key. When defining a foreign key, the referenced column is the primary key of the referenced table (unless you specify otherwise):
// id INTEGER PRIMARY KEY,
t.column("id", .integer).primaryKey()
// email TEXT UNIQUE,
t.column("email", .text).unique()
// countryCode TEXT REFERENCES countries(code) ON DELETE CASCADE,
t.column("countryCode", .text).references("countries", onDelete: .cascade)
Perform integrity checks on individual columns, and SQLite will only let conforming rows in. In the example below, the $0 closure variable is a column which lets you build any SQL expression.
// name TEXT CHECK (LENGTH(name) > 0)
// age INTEGER CHECK (age > 0)
t.column("name", .text).check { length($0) > 0 }
t.column("age", .integer).check(sql: "age > 0")
Other table constraints can involve several columns:
// PRIMARY KEY (a, b),
t.primaryKey(["a", "b"])
// UNIQUE (a, b) ON CONFLICT REPLACE,
t.uniqueKey(["a", "b"], onConfict: .replace)
// FOREIGN KEY (a, b) REFERENCES parents(c, d),
t.foreignKey(["a", "b"], references: "parent")
// CHECK (a + b < 10),
t.check(Column("a") + Column("b") < 10)
// CHECK (a + b < 10)
t.check(sql: "a + b < 10")
}
SQLite lets you rename tables, and add columns to existing tables:
// ALTER TABLE referers RENAME TO referrers
try db.rename(table: "referers", to: "referrers")
// ALTER TABLE persons ADD COLUMN url TEXT
try db.alter(table: "persons") { t in
t.add(column: "url", .text)
}
☝️ Note: SQLite restricts the possible table alterations, and may require you to recreate dependent triggers or views. See the documentation of the ALTER TABLE for details. See Advanced Database Schema Changes for a way to lift restrictions.
Drop tables with the drop(table:) method:
try db.drop(table: "obsolete")
Create indexes with the create(index:) method:
// CREATE UNIQUE INDEX byEmail ON users(email)
try db.create(index: "byEmail", on: "users", columns: ["email"], unique: true)
The query interface requests let you fetch values from the database:
let request = Person.filter(emailColumn != nil).order(nameColumn)
let persons = try request.fetchAll(db) // [Person]
let count = try request.fetchCount(db) // Int
All requests start from a type that adopts the TableMapping protocol, such as a Record subclass (see Records):
class Person : Record { ... }
Declare the table columns that you want to use for filtering, or sorting:
let idColumn = Column("id")
let nameColumn = Column("name")
You can now build requests with the following methods: all, none, select, distinct, filter, matching, group, having, order, reversed, limit. All those methods return another request, which you can further refine by applying another method: Person.select(...).filter(...).order(...).
- all(), none(): the requests for all rows, or no row.
// SELECT * FROM persons
Person.all()
The hidden rowid column can be selected as well when you need it.
- select(expression, ...) defines the selected columns.
// SELECT id, name FROM persons
Person.select(idColumn, nameColumn)
// SELECT MAX(age) AS maxAge FROM persons
Person.select(max(ageColumn).aliased("maxAge"))
- distinct() performs uniquing.
// SELECT DISTINCT name FROM persons
Person.select(nameColumn).distinct()
- filter(expression) applies conditions.
// SELECT * FROM persons WHERE id IN (1, 2, 3)
Person.filter([1,2,3].contains(idColumn))
// SELECT * FROM persons WHERE (name IS NOT NULL) AND (height > 1.75)
Person.filter(nameColumn != nil && heightColumn > 1.75)
- matching(pattern) performs full-text search.
// SELECT * FROM documents WHERE documents MATCH 'sqlite database'
let pattern = FTS3Pattern(matchingAllTokensIn: "SQLite database")
Document.matching(pattern)
When the pattern is nil, no row will match.
- group(expression, ...) groups rows.
// SELECT name, MAX(age) FROM persons GROUP BY name
Person
    .select(nameColumn, max(ageColumn))
    .group(nameColumn)
- having(expression) applies conditions on grouped rows.
// SELECT name, MAX(age) FROM persons GROUP BY name HAVING MIN(age) >= 18
Person
    .select(nameColumn, max(ageColumn))
    .group(nameColumn)
    .having(min(ageColumn) >= 18)
- order(ordering, ...) sorts.
// SELECT * FROM persons ORDER BY name
Person.order(nameColumn)
// SELECT * FROM persons ORDER BY score DESC, name
Person.order(scoreColumn.desc, nameColumn)
Each order call clears any previous ordering:
// SELECT * FROM persons ORDER BY name
Person.order(scoreColumn).order(nameColumn)
- reversed() reverses the eventual orderings.
// SELECT * FROM persons ORDER BY score ASC, name DESC
Person.order(scoreColumn.desc, nameColumn).reversed()
If no ordering was specified, the result is ordered by rowID in reverse order.
// SELECT * FROM persons ORDER BY _rowid_ DESC
Person.all().reversed()
- limit(limit, offset: offset) limits and pages results.
// SELECT * FROM persons LIMIT 5
Person.limit(5)
// SELECT * FROM persons LIMIT 5 OFFSET 10
Person.limit(5, offset: 10)
You can refine requests by chaining those methods:
// SELECT * FROM persons WHERE (email IS NOT NULL) ORDER BY name
Person.order(nameColumn).filter(emailColumn != nil)
The select, order, group, and limit methods ignore and replace previously applied selection, orderings, grouping, and limits. In contrast, the filter, matching, and having methods extend the query:
Person // SELECT * FROM persons
.filter(nameColumn != nil) // WHERE (name IS NOT NULL)
.filter(emailColumn != nil) // AND (email IS NOT NULL)
.order(nameColumn) // - ignored -
.order(ageColumn) // ORDER BY age
.limit(20, offset: 40) // - ignored -
.limit(10) // LIMIT 10
Raw SQL snippets are also accepted, with eventual arguments:
// SELECT DATE(creationDate), COUNT(*) FROM persons WHERE name = 'Arthur' GROUP BY date(creationDate)
Person
.select(sql: "DATE(creationDate), COUNT(*)")
.filter(sql: "name = ?", arguments: ["Arthur"])
.group(sql: "DATE(creationDate)")
Feed requests with SQL expressions built from your Swift code:
- =, <>, <, <=, >, >=, IS, IS NOT
Comparison operators are based on the Swift operators ==, !=, ===, !==, <, <=, >, >=:
// SELECT * FROM persons WHERE (name = 'Arthur')
Person.filter(nameColumn == "Arthur")
// SELECT * FROM persons WHERE (name IS NULL)
Person.filter(nameColumn == nil)
// SELECT * FROM persons WHERE (age IS 18)
Person.filter(ageColumn === 18)
// SELECT * FROM rectangles WHERE width < height
Rectangle.filter(widthColumn < heightColumn)
☝️ Note: SQLite string comparison, by default, is case-sensitive and not Unicode-aware. See string comparison if you need more control.
- *, /, +, -
SQLite arithmetic operators are derived from their Swift equivalents:
// SELECT ((temperature * 1.8) + 32) AS farenheit FROM planets
Planet.select((temperatureColumn * 1.8 + 32).aliased("farenheit"))
☝️ Note: an expression like nameColumn + "rrr" will be interpreted by SQLite as a numerical addition (with funny results), not as a string concatenation.
AND
,OR
,NOT
The SQL logical operators are derived from the Swift
&&
,||
and!
:// SELECT * FROM persons WHERE ((NOT verified) OR (age < 18)) Person.filter(!verifiedColumn || ageColumn < 18)
- BETWEEN, IN, IN (subquery), NOT IN, NOT IN (subquery)
To check inclusion in a collection, call the contains method on any Swift sequence:
// SELECT * FROM persons WHERE id IN (1, 2, 3)
Person.filter([1, 2, 3].contains(idColumn))
// SELECT * FROM persons WHERE id NOT IN (1, 2, 3)
Person.filter(![1, 2, 3].contains(idColumn))
// SELECT * FROM persons WHERE age BETWEEN 0 AND 17
Person.filter((0...17).contains(ageColumn))
// SELECT * FROM persons WHERE (age >= 0) AND (age < 18)
Person.filter((0..<18).contains(ageColumn))
// SELECT * FROM persons WHERE name BETWEEN 'A' AND 'z'
Person.filter(("A"..."z").contains(nameColumn))
// SELECT * FROM persons WHERE (name >= 'A') AND (name < 'z')
Person.filter(("A"..<"z").contains(nameColumn))
☝️ Note: SQLite string comparison, by default, is case-sensitive and not Unicode-aware. See string comparison if you need more control.
To check inclusion in a subquery, call the contains method on another request:
// SELECT * FROM events
// WHERE userId IN (SELECT id FROM persons WHERE verified)
let verifiedUserIds = User.select(idColumn).filter(verifiedColumn)
Event.filter(verifiedUserIds.contains(userIdColumn))
- EXISTS (subquery), NOT EXISTS (subquery)
To check whether a subquery would return any row, use the exists property on another request:
// SELECT * FROM persons
// WHERE EXISTS (SELECT * FROM books
//               WHERE books.ownerId = persons.id)
Person.filter(Book.filter(sql: "books.ownerId = persons.id").exists)
- LIKE
The SQLite LIKE operator is available as the like method:
// SELECT * FROM persons WHERE (email LIKE '%@example.com')
Person.filter(emailColumn.like("%@example.com"))
☝️ Note: the SQLite LIKE operator is case-insensitive but not Unicode-aware. For example, the expression 'a' LIKE 'A' is true but 'æ' LIKE 'Æ' is false.
MATCH
The full-text MATCH operator is available through FTS3Pattern (for FTS3 and FTS4 tables) and FTS5Pattern (for FTS5):
FTS3 and FTS4:
let pattern = FTS3Pattern(matchingAllTokensIn: "SQLite database") // SELECT * FROM documents WHERE documents MATCH 'sqlite database' Document.matching(pattern) // SELECT * FROM documents WHERE content MATCH 'sqlite database' Document.filter(contentColumn.match(pattern))
FTS5:
let pattern = FTS5Pattern(matchingAllTokensIn: "SQLite database") // SELECT * FROM documents WHERE documents MATCH 'sqlite database' Document.matching(pattern)
- ABS, AVG, COUNT, LENGTH, MAX, MIN, SUM
Those are based on the abs, average, count, length, max, min and sum Swift functions:
// SELECT MIN(age), MAX(age) FROM persons
Person.select(min(ageColumn), max(ageColumn))
// SELECT COUNT(name) FROM persons
Person.select(count(nameColumn))
// SELECT COUNT(DISTINCT name) FROM persons
Person.select(count(distinct: nameColumn))
- IFNULL
Use the Swift ?? operator:
// SELECT IFNULL(name, 'Anonymous') FROM persons
Person.select(nameColumn ?? "Anonymous")
// SELECT IFNULL(name, email) FROM persons
Person.select(nameColumn ?? emailColumn)
- LOWER, UPPER
The query interface does not give access to those SQLite functions. Nothing against them, but they are not Unicode-aware.
Instead, GRDB extends SQLite with SQL functions that call the Swift built-in string functions capitalized, lowercased, uppercased, localizedCapitalized, localizedLowercased and localizedUppercased:
Person.select(nameColumn.uppercased())
☝️ Note: When comparing strings, you'd rather use a collation:
let name: String = ...
// Not recommended
nameColumn.uppercased() == name.uppercased()
// Better
nameColumn.collating(.caseInsensitiveCompare) == name
- Custom SQL functions
You can apply your own custom SQL functions (a fuller sketch follows right after this list):
let f = DatabaseFunction("f", ...)
// SELECT f(name) FROM persons
Person.select(f.apply(nameColumn))
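Custom functions must be added to the database connection before they can be used. Here is a hedged sketch (the reverseString function and the exact DatabaseFunction initializer arguments are assumptions based on GRDB's custom SQL functions API):
// Define a function that reverses a string (hypothetical example):
let reverseString = DatabaseFunction("reverseString", argumentCount: 1, pure: true) { databaseValues in
    guard let string = String.fromDatabaseValue(databaseValues[0]) else { return nil }
    return String(string.characters.reversed())
}

// Register it on the database connection before using it:
dbQueue.add(function: reverseString)

// SELECT reverseString(name) FROM persons
Person.select(reverseString.apply(nameColumn))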
Once you have a request, you can fetch the records at the origin of the request:
// Some request based on `Person`
let request = Person.filter(...)... // QueryInterfaceRequest<Person>
// Fetch persons:
try request.fetchCursor(db) // DatabaseCursor<Person>
try request.fetchAll(db) // [Person]
try request.fetchOne(db) // Person?
See fetching methods for information about the fetchCursor, fetchAll and fetchOne methods.
For example:
let allPersons = try Person.fetchAll(db) // [Person]
let arthur = try Person.filter(nameColumn == "Arthur").fetchOne(db) // Person?
When the selected columns don't fit the source type, change your target: any other type that adopts the RowConvertible protocol, plain database rows, and even values:
let request = Player.select(max(scoreColumn))
let maxScore = try Int.fetchOne(db, request) // Int?
let request = Player.select(min(scoreColumn), max(scoreColumn))
let row = try Row.fetchOne(db, request)!
let minScore = row.value(atIndex: 0) as Int
let maxScore = row.value(atIndex: 1) as Int
Fetching records according to their primary key is a very common task. It has a shortcut which accepts any single-column primary key:
// SELECT * FROM persons WHERE id = 1
try Person.fetchOne(db, key: 1) // Person?
// SELECT * FROM persons WHERE id IN (1, 2, 3)
try Person.fetchAll(db, keys: [1, 2, 3]) // [Person]
// SELECT * FROM countries WHERE isoCode = 'FR'
try Country.fetchOne(db, key: "FR") // Country?
// SELECT * FROM countries WHERE isoCode IN ('FR', 'US')
try Country.fetchAll(db, keys: ["FR", "US"]) // [Country]
When the table has no explicit primary key, GRDB uses the hidden "rowid" column:
// SELECT * FROM documents WHERE rowid = 1
try Document.fetchOne(db, key: 1) // Document?
For multiple-column primary keys and unique keys defined by unique indexes, provide a dictionary:
// SELECT * FROM citizenships WHERE personID = 1 AND countryISOCode = 'FR'
try Citizenship.fetchOne(db, key: ["personID": 1, "countryISOCode": "FR"]) // Citizenship?
// SELECT * FROM persons WHERE email = '[email protected]'
try Person.fetchOne(db, key: ["email": "[email protected]"]) // Person?
Requests can count. The fetchCount() method returns the number of rows that would be returned by a fetch request:
// SELECT COUNT(*) FROM persons
let count = try Person.fetchCount(db) // Int
// SELECT COUNT(*) FROM persons WHERE email IS NOT NULL
let count = try Person.filter(emailColumn != nil).fetchCount(db)
// SELECT COUNT(DISTINCT name) FROM persons
let count = try Person.select(nameColumn).distinct().fetchCount(db)
// SELECT COUNT(*) FROM (SELECT DISTINCT name, age FROM persons)
let count = try Person.select(nameColumn, ageColumn).distinct().fetchCount(db)
Other aggregated values can also be selected and fetched (see SQL Functions):
let request = Player.select(max(scoreColumn))
let maxScore = try Int.fetchOne(db, request) // Int?
let request = Player.select(min(scoreColumn), max(scoreColumn))
let row = try Row.fetchOne(db, request)!
let minScore = row.value(atIndex: 0) as Int
let maxScore = row.value(atIndex: 1) as Int
Requests can delete records, with the deleteAll() method:
// DELETE FROM persons WHERE email IS NULL
let request = Person.filter(emailColumn == nil)
try request.deleteAll(db)
Deleting records according to their primary key is also quite common. It has a shortcut which accepts any single-column primary key:
// DELETE FROM persons WHERE id = 1
try Person.deleteOne(db, key: 1)
// DELETE FROM persons WHERE id IN (1, 2, 3)
try Person.deleteAll(db, keys: [1, 2, 3])
// DELETE FROM countries WHERE isoCode = 'FR'
try Country.deleteOne(db, key: "FR")
// DELETE FROM countries WHERE isoCode IN ('FR', 'US')
try Country.deleteAll(db, keys: ["FR", "US"])
When the table has no explicit primary key, GRDB uses the hidden "rowid" column:
// DELETE FROM documents WHERE rowid = 1
try Document.deleteOne(db, key: 1)
For multiple-column primary keys and unique keys defined by unique indexes, provide a dictionary:
// DELETE FROM citizenships WHERE personID = 1 AND countryISOCode = 'FR'
try Citizenship.deleteOne(db, key: ["personID": 1, "countryISOCode": "FR"])
// DELETE FROM persons WHERE email = '[email protected]'
try Person.deleteOne(db, key: ["email": "[email protected]"])
Until now, we have seen requests created from any type that adopts the TableMapping protocol:
let request = Person.all() // QueryInterfaceRequest<Person>
Those requests of type QueryInterfaceRequest can fetch, count, and delete records:
try request.fetchCursor(db) // DatabaseCursor<Person>
try request.fetchAll(db) // [Person]
try request.fetchOne(db) // Person?
try request.fetchCount(db) // Int
try request.deleteAll(db)
When the query interface can not generate the SQL you need, you can still fallback to raw SQL:
// Custom SQL is always welcome
try Person.fetchAll(db, "SELECT ...") // [Person]
But you may prefer to bring some elegance back in, and build custom requests on top of the Request and TypedRequest protocols:
// No custom SQL in sight
try Person.customRequest().fetchAll(db) // [Person]
Unlike QueryInterfaceRequest, these protocols can't delete. But they can fetch and count:
/// The protocol for all types that define a way to fetch values from
/// a database.
protocol Request {
/// A tuple that contains a prepared statement that is ready to be
/// executed, and an eventual row adapter.
func prepare(_ db: Database) throws -> (SelectStatement, RowAdapter?)
/// The number of rows fetched by the request.
func fetchCount(_ db: Database) throws -> Int
}
/// The protocol for typed fetch requests.
protocol TypedRequest : Request {
/// The fetched type
associatedtype Fetched
}
The prepare method returns a tuple made of a prepared statement and an optional row adapter. The prepared statement tells which SQL query should be executed. The row adapter can help presenting the fetched rows in the way expected by the row consumers (we'll see an example below).
The fetchCount method has a default implementation that builds a correct but naive SQL query from the statement returned by prepare: SELECT COUNT(*) FROM (...). Adopting types can refine the counting SQL by customizing their fetchCount implementation.
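For example, here is a hedged sketch of a hand-written request with a refined count (the AllPersons type is hypothetical, not part of GRDB):
struct AllPersons : Request {
    func prepare(_ db: Database) throws -> (SelectStatement, RowAdapter?) {
        // No row adapter is needed for this simple query:
        return (try db.makeSelectStatement("SELECT * FROM persons"), nil)
    }

    func fetchCount(_ db: Database) throws -> Int {
        // Cheaper than the default SELECT COUNT(*) FROM (SELECT * FROM persons):
        return try Int.fetchOne(db, "SELECT COUNT(*) FROM persons")!
    }
}

let persons = try Person.fetchAll(db, AllPersons()) // [Person]
let count = try AllPersons().fetchCount(db)         // Int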
A Request doesn't know what to fetch, but it can feed the fetching methods of any fetchable type (Row, value, or record):
let request: Request = ...
try Row.fetchCursor(db, request) // DatabaseCursor<Row>
try String.fetchAll(db, request) // [String]
try Person.fetchOne(db, request) // Person?
A TypedRequest knows exactly what it has to do when its Fetched associated type is fetchable (Row, value, or record):
let request = ... // Some TypedRequest that fetches Person
try request.fetchCursor(db) // DatabaseCursor<Person>
try request.fetchAll(db) // [Person]
try request.fetchOne(db) // Person?
To build custom requests, you can create your own type that adopts the protocols, or derive requests from other requests, or use one of the built-in concrete types:
- Request: the protocol for all requests
- TypedRequest: the protocol for all typed requests
- SQLRequest: a Request built from raw SQL
- AnyRequest: a type-erased Request
- AnyTypedRequest: a type-erased TypedRequest
Rebind the fetched type of requests:
let maxScore = try Player.select(max(scoreColumn))
    .bound(to: Int.self)
    .fetchOne(db)
Bind custom SQL requests to your record types:
extension Person {
static func customRequest(...) -> AnyTypedRequest<Person> {
let request = SQLRequest("SELECT ...", arguments: ...)
return request.bound(to: Person.self)
}
}
try Person.customRequest(...).fetchAll(db) // [Person]
try Person.customRequest(...).fetchCount(db) // Int
Use row adapters to ease the consumption of complex rows:
struct BookAuthorPair : RowConvertible {
let book: Book
let author: Author
init(row: Row) {
book = Book(row: row.scoped(on: "books")!)
author = Author(row: row.scoped(on: "authors")!)
}
static func all() -> AnyTypedRequest<BookAuthorPair> {
return SQLRequest(
"SELECT books.*, authors.* " +
"FROM books " +
"JOIN authors ON authors.id = books.authorID")
.bound(to: BookAuthorPair.self)
.adapted { db in
// The scopes are used in init(row:)
try ScopeAdapter([
"books": SuffixRowAdapter(fromIndex: 0),
"authors": SuffixRowAdapter(fromIndex: db.columnCount(in: "books"))])
}
}
static func fetchAll(_ db: Database) throws -> [BookAuthorPair] {
return try all().fetchAll(db)
}
}
for pair in try BookAuthorPair.fetchAll(db) {
print("\(pair.book.title) by \(pair.author.name)")
}
On top of the APIs described above, GRDB provides a toolkit for applications. While none of those are mandatory, all of them help dealing with the database:
- Migrations: Transform your database as your application evolves.
- Full-Text Search: Perform efficient and customizable full-text searches.
- Database Changes Observation: Perform post-commit and post-rollback actions.
- FetchedRecordsController: Automatic database changes tracking, plus UITableView animations.
- Encryption: Encrypt your database with SQLCipher.
- Backup: Dump the content of a database to another.
Migrations are a convenient way to alter your database schema over time in a consistent and easy way.
Migrations run in order, once and only once. When a user upgrades your application, only non-applied migrations are run.
Inside each migration, you typically define and update your database tables according to your evolving application needs:
var migrator = DatabaseMigrator()
// v1 database
migrator.registerMigration("v1") { db in
try db.create(table: "persons") { t in ... }
try db.create(table: "books") { t in ... }
try db.create(index: ...)
}
// v2 database
migrator.registerMigration("v2") { db in
try db.alter(table: "persons") { t in ... }
}
// Migrations for future versions will be inserted here:
//
// // v3 database
// migrator.registerMigration("v3") { db in
// ...
// }
try migrator.migrate(dbQueue) // or migrator.migrate(dbPool)
Each migration runs in a separate transaction. Should one throw an error, its transaction is rollbacked, subsequent migrations do not run, and the error is eventually thrown by migrator.migrate(dbQueue)
.
The memory of applied migrations is stored in the database itself (in a reserved table).
SQLite does not support many schema changes, and won't let you drop a table column with "ALTER TABLE ... DROP COLUMN ...", for example.
Yet any kind of schema change is still possible. The SQLite documentation explains in detail how to do so: https://www.sqlite.org/lang_altertable.html#otheralter. This technique requires the temporary disabling of foreign key checks, and is supported by the registerMigrationWithDeferredForeignKeyCheck function:
// Add a NOT NULL constraint on persons.name:
migrator.registerMigrationWithDeferredForeignKeyCheck("AddNotNullCheckOnName") { db in
try db.create(table: "new_persons") { t in
t.column("id", .integer).primaryKey()
t.column("name", .text).notNull()
}
try db.execute("INSERT INTO new_persons SELECT * FROM persons")
try db.drop(table: "persons")
try db.rename(table: "new_persons", to: "persons")
}
While your migration code runs with disabled foreign key checks, those are re-enabled and checked at the end of the migration, regardless of eventual errors.
Full-Text Search is an efficient way to search a corpus of textual documents.
// Create full-text tables
try db.create(virtualTable: "books", using: FTS4()) { t in // or FTS3(), or FTS5()
t.column("author")
t.column("title")
t.column("body")
}
// Populate full-text table with records or SQL
try Book(...).insert(db)
try db.execute(
"INSERT INTO books (author, title, body) VALUES (?, ?, ?)",
arguments: [...])
// Build search patterns
let pattern = FTS3Pattern(matchingPhrase: "Moby-Dick")
// Search with the query interface or SQL
let books = try Book.matching(pattern).fetchAll(db)
let books = try Book.fetchAll(db,
"SELECT * FROM books WHERE books MATCH ?",
arguments: [pattern])
- Choosing the Full-Text Engine
- Create Full-Text Virtual Tables: FTS3/4, FTS5
- Choosing a Tokenizer: FTS3/4, FTS5
- Search Patterns: FTS3/4, FTS5
- Sorting by Relevance: FTS5
- External Content Full-Text Tables: FTS4/5
- Full-Text Records: FTS3/4/5
- Unicode Full-Text Gotchas: FTS3/4/5. Unicorns don't exist.
- Custom Tokenizers: FTS5. Leverage extra full-text features such as synonyms or stop words. Avoid unicode gotchas.
- Sample Code: WWDC Companion, an iOS app that stores, displays, and lets the user search the WWDC 2016 sessions.
SQLite supports three full-text engines: FTS3, FTS4 and FTS5.
Generally speaking, FTS5 is better than FTS4 which improves on FTS3. But this does not really tell which engine to choose for your application. Instead, make your choice depend on:
- The full-text features needed by the application:
Full-Text Needs | FTS3 | FTS4 | FTS5 |
---|---|---|---|
❓ Queries | | | |
Words searches (documents that contain "database") | X | X | X |
Prefix searches (documents that contain a word starting with "data") | X | X | X |
Phrases searches (documents that contain the phrase "SQLite database") | X | X | X |
Boolean searches (documents that contain "SQLite" or "database") | X | X | X |
Proximity search (documents that contain "SQLite" near "database") | X | X | X |
✂️ Tokenization | | | |
Ascii case insensitivity (have "DATABASE" match "database") | X | X | X |
Unicode case insensitivity (have "ÉLÉGANCE" match "élégance") | X | X | X |
Latin diacritics insensitivity (have "elegance" match "élégance") | X | X | X |
English Stemming (have "frustration" match "frustrated") | X | X | X |
English Stemming and Ascii case insensitivity | X | X | X |
English Stemming and Unicode case insensitivity | | | X |
English Stemming and Latin diacritics insensitivity | | | X |
Synonyms (have "1st" match "first") | ¹ | ¹ | X ² |
Pinyin and Romaji (have "romaji" match "ローマ字") | ¹ | ¹ | X ² |
Stop words (don't index, and don't match words like "and" and "the") | ¹ | ¹ | X ² |
Spell checking (have "alamaba" match "alabama") | ¹ | ¹ | ¹ |
Other Features | | | |
Ranking (sort results by relevance) | ¹ | ¹ | X |
Snippets (display a few words around a match) | X | X | X |
¹ Requires extra setup, possibly hard to implement.
² Requires a custom tokenizer.
For a full feature list, read the SQLite documentation. Some missing features can be achieved with extra application code.
- The speed versus disk space constraints. Roughly speaking, FTS4 and FTS5 are faster than FTS3, but use more space. Only FTS4 supports content compression.
- The location of the indexed text in your database schema. Only FTS4 and FTS5 support "contentless" and "external content" tables.
- The SQLite library integrated in your application. The version of SQLite that ships with iOS, macOS and watchOS supports FTS3 and FTS4 out of the box, but not FTS5. To use FTS5, you'll need a custom SQLite build that activates the SQLITE_ENABLE_FTS5 compilation option.
- See FTS3 vs. FTS4 and FTS5 vs. FTS3/4 for more differences.
☝️ Note: In case you were still wondering, it is recommended to read the SQLite documentation: FTS3 & FTS4 and FTS5.
FTS3 and FTS4 full-text tables store and index textual content.
Create tables with the create(virtualTable:using:) method:
// CREATE VIRTUAL TABLE documents USING fts3(content)
try db.create(virtualTable: "documents", using: FTS3()) { t in
t.column("content")
}
// CREATE VIRTUAL TABLE documents USING fts4(content)
try db.create(virtualTable: "documents", using: FTS4()) { t in
t.column("content")
}
All columns in a full-text table contain text. If you need to index a table that contains other kinds of values, you need an "external content" full-text table.
You can specify a tokenizer:
// CREATE VIRTUAL TABLE books USING fts4(
// tokenize=porter,
// author,
// title,
// body
// )
try db.create(virtualTable: "books", using: FTS4()) { t in
t.tokenizer = .porter
t.column("author")
t.column("title")
t.column("body")
}
FTS4 supports options:
// CREATE VIRTUAL TABLE books USING fts4(
// content,
// uuid,
// content="",
// compress=zip,
// uncompress=unzip,
// prefix="2,4",
// notindexed=uuid,
// languageid=lid
// )
try db.create(virtualTable: "documents", using: FTS4()) { t in
t.content = ""
t.compress = "zip"
t.uncompress = "unzip"
t.prefixes = [2, 4]
t.column("content")
t.column("uuid").notIndexed()
t.column("lid").asLanguageId()
}
The content option is involved in "contentless" and "external content" full-text tables. GRDB can help you define full-text tables that automatically synchronize with their content table. See External Content Full-Text Tables.
See SQLite documentation for more information.
A tokenizer defines what "matching" means. Depending on the tokenizer you choose, full-text searches won't return the same results.
SQLite ships with three built-in FTS3/4 tokenizers: simple, porter and unicode61, that use different algorithms to match queries with indexed content:
try db.create(virtualTable: "books", using: FTS4()) { t in
// Pick one:
t.tokenizer = .simple // default
t.tokenizer = .porter
t.tokenizer = .unicode61(...)
}
See below some examples of matches:
content | query | simple | porter | unicode61 |
---|---|---|---|---|
Foo | Foo | X | X | X |
Foo | FOO | X | X | X |
Jérôme | Jérôme | X ¹ | X ¹ | X ¹ |
Jérôme | JÉRÔME | | | X ¹ |
Jérôme | Jerome | | | X ¹ |
Database | Databases | | X | |
Frustration | Frustrated | | X | |
¹ Don't miss Unicode Full-Text Gotchas
- simple
try db.create(virtualTable: "books", using: FTS4()) { t in
    t.tokenizer = .simple // default
}
The default "simple" tokenizer is case-insensitive for ASCII characters. It matches "foo" with "FOO", but not "Jérôme" with "JÉRÔME".
It does not provide stemming, and won't match "databases" with "database".
It does not strip diacritics from latin script characters, and won't match "jérôme" with "jerome".
- porter
try db.create(virtualTable: "books", using: FTS4()) { t in
    t.tokenizer = .porter
}
The "porter" tokenizer compares English words according to their roots: it matches "database" with "databases", and "frustration" with "frustrated".
It does not strip diacritics from latin script characters, and won't match "jérôme" with "jerome".
- unicode61
try db.create(virtualTable: "books", using: FTS4()) { t in
    t.tokenizer = .unicode61()
    t.tokenizer = .unicode61(removeDiacritics: false)
}
The "unicode61" tokenizer is case-insensitive for unicode characters. It matches "Jérôme" with "JÉRÔME".
It strips diacritics from latin script characters by default, and matches "jérôme" with "jerome". This behavior can be disabled, as in the example above.
It does not provide stemming, and won't match "databases" with "database".
See SQLite tokenizers for more information.
Full-text search in FTS3 and FTS4 tables is performed with search patterns:
- database matches all documents that contain "database"
- data* matches all documents that contain a word starting with "data"
- SQLite database matches all documents that contain both "SQLite" and "database"
- SQLite OR database matches all documents that contain "SQLite" or "database"
- "SQLite database" matches all documents that contain the "SQLite database" phrase
Not all search patterns are valid: they must follow the Full-Text Index Queries Grammar.
The FTS3Pattern type helps you validating patterns, and building valid patterns from untrusted strings (such as strings typed by users):
struct FTS3Pattern {
init(rawPattern: String) throws
init?(matchingAnyTokenIn string: String)
init?(matchingAllTokensIn string: String)
init?(matchingPhrase string: String)
}
The first initializer validates your raw patterns against the query grammar, and may throw a DatabaseError:
// OK: FTS3Pattern
let pattern = try FTS3Pattern(rawPattern: "sqlite AND database")
// DatabaseError: malformed MATCH expression: [AND]
let pattern = try FTS3Pattern(rawPattern: "AND")
The three other initializers don't throw. They build a valid pattern from any string, including strings provided by users of your application. They let you find documents that match all given words, any given word, or a full phrase, depending on the needs of your application:
let query = "SQLite database"
// Matches documents that contain "SQLite" or "database"
let pattern = FTS3Pattern(matchingAnyTokenIn: query)
// Matches documents that contain both "SQLite" and "database"
let pattern = FTS3Pattern(matchingAllTokensIn: query)
// Matches documents that contain "SQLite database"
let pattern = FTS3Pattern(matchingPhrase: query)
They return nil when no pattern could be built from the input string:
let pattern = FTS3Pattern(matchingAnyTokenIn: "") // nil
let pattern = FTS3Pattern(matchingAnyTokenIn: "*") // nil
FTS3Pattern are regular values. You can use them as query arguments:
let documents = try Document.fetchAll(db,
"SELECT * FROM documents WHERE content MATCH ?",
arguments: [pattern])
Use them in the query interface:
// Search in all columns
let documents = try Document.matching(pattern).fetchAll(db)
// Search in a specific column:
let documents = try Document.filter(Column("content").match(pattern)).fetchAll(db)
FTS5 full-text tables store and index textual content.
To use FTS5, you'll need a custom SQLite build that activates the SQLITE_ENABLE_FTS5 compilation option.
Create FTS5 tables with the create(virtualTable:using:) method:
// CREATE VIRTUAL TABLE documents USING fts5(content)
try db.create(virtualTable: "documents", using: FTS5()) { t in
t.column("content")
}
All columns in a full-text table contain text. If you need to index a table that contains other kinds of values, you need an "external content" full-text table.
You can specify a tokenizer:
// CREATE VIRTUAL TABLE books USING fts5(
// tokenize='porter',
// author,
// title,
// body
// )
try db.create(virtualTable: "books", using: FTS5()) { t in
t.tokenizer = .porter()
t.column("author")
t.column("title")
t.column("body")
}
FTS5 supports options:
// CREATE VIRTUAL TABLE books USING fts5(
// content,
// uuid UNINDEXED,
// content='table',
// content_rowid='id',
// prefix='2 4',
// columnsize=0,
// detail=column
// )
try db.create(virtualTable: "documents", using: FTS5()) { t in
t.column("content")
t.column("uuid").notIndexed()
t.content = "table"
t.contentRowID = "id"
t.prefixes = [2, 4]
t.columnSize = 0
t.detail = "column"
}
The content and contentRowID options are involved in "contentless" and "external content" full-text tables. GRDB can help you define full-text tables that automatically synchronize with their content table. See External Content Full-Text Tables.
See SQLite documentation for more information.
A tokenizer defines what "matching" means. Depending on the tokenizer you choose, full-text searches won't return the same results.
SQLite ships with three built-in FTS5 tokenizers: ascii, porter and unicode61, that use different algorithms to match queries with indexed content.
try db.create(virtualTable: "books", using: FTS5()) { t in
// Pick one:
t.tokenizer = .unicode61() // default
t.tokenizer = .unicode61(...)
t.tokenizer = .ascii
t.tokenizer = .porter(...)
}
See below some examples of matches:
content | query | ascii | unicode61 | porter on ascii | porter on unicode61 |
---|---|---|---|---|---|
Foo | Foo | X | X | X | X |
Foo | FOO | X | X | X | X |
Jérôme | Jérôme | X ¹ | X ¹ | X ¹ | X ¹ |
Jérôme | JÉRÔME | | X ¹ | | X ¹ |
Jérôme | Jerome | | X ¹ | | X ¹ |
Database | Databases | | | X | X |
Frustration | Frustrated | | | X | X |
¹ Don't miss Unicode Full-Text Gotchas
- unicode61
try db.create(virtualTable: "books", using: FTS5()) { t in
    t.tokenizer = .unicode61()
    t.tokenizer = .unicode61(removeDiacritics: false)
}
The default "unicode61" tokenizer is case-insensitive for unicode characters. It matches "Jérôme" with "JÉRÔME".
It strips diacritics from latin script characters by default, and matches "jérôme" with "jerome". This behavior can be disabled, as in the example above.
It does not provide stemming, and won't match "databases" with "database".
- ascii
try db.create(virtualTable: "books", using: FTS5()) { t in
    t.tokenizer = .ascii
}
The "ascii" tokenizer is case-insensitive for ASCII characters. It matches "foo" with "FOO", but not "Jérôme" with "JÉRÔME".
It does not provide stemming, and won't match "databases" with "database".
It does not strip diacritics from latin script characters, and won't match "jérôme" with "jerome".
- porter
try db.create(virtualTable: "books", using: FTS5()) { t in
    t.tokenizer = .porter()       // porter wrapping unicode61 (the default)
    t.tokenizer = .porter(.ascii) // porter wrapping ascii
    t.tokenizer = .porter(.unicode61(removeDiacritics: false)) // porter wrapping unicode61 without diacritics stripping
}
The porter tokenizer is a wrapper tokenizer which compares English words according to their roots: it matches "database" with "databases", and "frustration" with "frustrated".
It strips diacritics from latin script characters if it wraps unicode61, and does not if it wraps ascii (see the example above).
See SQLite tokenizers for more information, and custom FTS5 tokenizers in order to add your own tokenizers.
Full-text search in FTS5 tables is performed with search patterns:
- database matches all documents that contain "database"
- data* matches all documents that contain a word starting with "data"
- SQLite database matches all documents that contain both "SQLite" and "database"
- SQLite OR database matches all documents that contain "SQLite" or "database"
- "SQLite database" matches all documents that contain the "SQLite database" phrase
Not all search patterns are valid: they must follow the Full-Text Query Syntax.
The FTS5Pattern type helps you validating patterns, and building valid patterns from untrusted strings (such as strings typed by users):
extension Database {
func makeFTS5Pattern(rawPattern: String, forTable table: String) throws -> FTS5Pattern
}
struct FTS5Pattern {
init?(matchingAnyTokenIn string: String)
init?(matchingAllTokensIn string: String)
init?(matchingPhrase string: String)
}
The Database.makeFTS5Pattern(rawPattern:forTable:) method validates your raw patterns against the query grammar and the columns of the targeted table, and may throw a DatabaseError:
// OK: FTS5Pattern
try db.makeFTS5Pattern(rawPattern: "sqlite", forTable: "books")
// DatabaseError: syntax error near \"AND\"
try db.makeFTS5Pattern(rawPattern: "AND", forTable: "books")
// DatabaseError: no such column: missing
try db.makeFTS5Pattern(rawPattern: "missing: sqlite", forTable: "books")
The FTS5Pattern initializers don't throw. They build a valid pattern from any string, including strings provided by users of your application. They let you find documents that match all given words, any given word, or a full phrase, depending on the needs of your application:
let query = "SQLite database"
// Matches documents that contain "SQLite" or "database"
let pattern = FTS5Pattern(matchingAnyTokenIn: query)
// Matches documents that contain both "SQLite" and "database"
let pattern = FTS5Pattern(matchingAllTokensIn: query)
// Matches documents that contain "SQLite database"
let pattern = FTS5Pattern(matchingPhrase: query)
They return nil when no pattern could be built from the input string:
let pattern = FTS5Pattern(matchingAnyTokenIn: "") // nil
let pattern = FTS5Pattern(matchingAnyTokenIn: "*") // nil
FTS5Pattern values are regular values. You can use them as query arguments:
let documents = try Document.fetchAll(db,
"SELECT * FROM documents WHERE documents MATCH ?",
arguments: [pattern])
Use them in the query interface:
let documents = try Document.matching(pattern).fetchAll(db)
FTS5 can sort results by relevance (most to least relevant):
// SQL
let documents = try Document.fetchAll(db,
"SELECT * FROM documents WHERE documents MATCH ? ORDER BY rank",
arguments: [pattern])
// Query Interface
let documents = try Document.matching(pattern).order(Column.rank).fetchAll(db)
For more information about the ranking algorithm, as well as extra options, read Sorting by Auxiliary Function Results
GRDB does not provide any ranking for FTS3 and FTS4. See SQLite's Search Application Tips if you really need it.
An external content table does not store the indexed text. Instead, it indexes the text stored in another table.
This is very handy when you want to index a table that can not be declared as a full-text table (because it contains non-textual values, for example). You just have to define an external content full-text table that refers to the regular table.
The two tables must be kept up-to-date, so that the full-text index matches the content of the regular table. This synchronization happens automatically if you use the synchronize(withTable:)
method in your full-text table definition:
// A regular table
try db.create(table: "books") { t in
t.column("author", .text)
t.column("title", .text)
t.column("content", .text)
...
}
// A full-text table synchronized with the regular table
try db.create(virtualTable: "books_ft", using: FTS4()) { t in // or FTS5()
t.synchronize(withTable: "books")
t.column("author")
t.column("title")
t.column("content")
}
Any content already present in the regular table is indexed, and every insert, update, or delete performed on the regular table is automatically applied to the full-text index by means of SQL triggers.
For more information, see the SQLite documentation about external content tables: FTS4, FTS5.
See also WWDC Companion, a sample app that uses external content tables to store, display, and let the user search the WWDC 2016 sessions.
When you need to perform a full-text search, and the external content table contains all the data you need, you can simply query the full-text table.
But if you need to load columns from the regular table and, at the same time, perform a full-text search, then you will need to query both tables at once.
That is because SQLite will throw an error when you try to perform a full-text search on a regular table:
// SQLite error 1: unable to use function MATCH in the requested context
// SELECT * FROM books WHERE books MATCH '...'
let books = try Book.matching(pattern).fetchAll(db)
The solution is to perform a joined request, using raw SQL:
let sql = "SELECT books.* " +
"FROM books " +
"JOIN books_ft ON " +
"books_ft.rowid = books.rowid AND " +
"books_ft MATCH ?"
let books = try Book.fetchAll(db, sql, arguments: [pattern])
You can define record types around the full-text virtual tables.
However, these tables don't have any explicit primary key. Instead, they use the implicit rowid primary key: a special hidden column named rowid.
You will have to expose this hidden column in order to fetch, delete, and update full-text records by primary key.
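For example, here is a minimal sketch that addresses full-text rows through the hidden rowid column with plain SQL. The books_ft table and the sample key value are assumptions carried over from the example above:

try dbQueue.inDatabase { db in
    // Fetch a full-text row by primary key
    if let row = try Row.fetchOne(db, "SELECT rowid, * FROM books_ft WHERE rowid = ?", arguments: [1]) {
        print(row)
    }
    // Delete a full-text row by primary key
    try db.execute("DELETE FROM books_ft WHERE rowid = ?", arguments: [1])
}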
The SQLite built-in tokenizers for FTS3, FTS4 and FTS5 are generally unicode-aware, with a few caveats, and limitations.
Generally speaking, matches may fail when content and query don't use the same unicode normalization. SQLite actually exhibits inconsistent behavior in this regard.
For example, for "aimé" to match "aimé", both strings had better use the same normalization: the NFC "aim\u{00E9}" form may not match its NFD "aime\u{0301}" equivalent. Most strings that you get from Swift, UIKit and Cocoa use NFC, so be careful with NFD inputs (such as strings from the HFS+ file system, or untrusted strings such as network inputs). Use String.precomposedStringWithCanonicalMapping to turn a string into NFC.
Besides, if you want "fi" to match the ligature "fi" (U+FB01), then you need to normalize your indexed contents and inputs to NFKC or NFKD. Use String.precomposedStringWithCompatibilityMapping to turn a string into NFKC.
Unicode normalization is not the end of the story, because it won't help "Encyclopaedia" match "Encyclopædia", "Mueller", "Müller", "Grossmann", "Großmann", or "Diyarbakır", "DIYARBAKIR". The String.applyingTransform method can help.
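For illustration, here is a minimal sketch (plain Foundation code, not a GRDB API) that combines those two tools before feeding text to the full-text engine; the sample string is an assumption:

import Foundation

let query = "ﬁancé"  // contains the "ﬁ" ligature (U+FB01) and a diacritic
// NFKC normalization folds compatibility characters such as the "ﬁ" ligature:
let nfkc = query.precomposedStringWithCompatibilityMapping
// Strip diacritics ("é" becomes "e") for accent-insensitive matching:
let folded = nfkc.applyingTransform(.stripDiacritics, reverse: false) ?? nfkc
// folded == "fiance"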
GRDB lets you write custom FTS5 tokenizers that can transparently deal with all these issues. For FTS3 and FTS4, you'll need to pre-process your strings before injecting them in the full-text engine.
Happy indexing!
The TransactionObserver
protocol lets you observe database changes:
protocol TransactionObserver : class {
/// Filters database changes that should be notified to the
/// `databaseDidChange(with:)` method.
func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool
/// Notifies a database change:
/// - event.kind (insert, update, or delete)
/// - event.tableName
/// - event.rowID
///
/// For performance reasons, the event is only valid for the duration of
/// this method call. If you need to keep it longer, store a copy:
/// event.copy().
func databaseDidChange(with event: DatabaseEvent)
/// An opportunity to rollback pending changes by throwing an error.
func databaseWillCommit() throws
/// Database changes have been committed.
func databaseDidCommit(_ db: Database)
/// Database changes have been rolled back.
func databaseDidRollback(_ db: Database)
}
To activate a transaction observer, add it to the database queue or pool:
let observer = MyObserver()
dbQueue.add(transactionObserver: observer)
Database holds weak references to its transaction observers: they are not retained, and stop getting notifications after they are deallocated.
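For example, a minimal sketch of keeping the observer alive (DatabaseManager is a hypothetical name; MyObserver is the observer from the snippet above):

class DatabaseManager {
    let observer = MyObserver()  // strong reference: notifications keep coming

    init(dbQueue: DatabaseQueue) {
        dbQueue.add(transactionObserver: observer)
    }
}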
A transaction observer is notified of all database changes: inserts, updates and deletes. This includes indirect changes triggered by ON DELETE and ON UPDATE actions associated to foreign keys.
☝️ Note: the changes that are not notified are changes to internal system tables (such as sqlite_master), changes to WITHOUT ROWID tables, and the deletion of duplicate rows triggered by ON CONFLICT REPLACE clauses (this last exception might change in a future release of SQLite).
Notified changes are not actually written to disk until databaseDidCommit
is called. On the other side, databaseDidRollback
confirms their invalidation:
try dbQueue.inTransaction { db in
try db.execute("INSERT ...") // 1. didChange
try db.execute("UPDATE ...") // 2. didChange
return .commit // 3. willCommit, 4. didCommit
}
try dbQueue.inTransaction { db in
try db.execute("INSERT ...") // 1. didChange
try db.execute("UPDATE ...") // 2. didChange
return .rollback // 3. didRollback
}
Database statements that are executed outside of an explicit transaction do not drop off the radar:
try dbQueue.inDatabase { db in
try db.execute("INSERT ...") // 1. didChange, 2. willCommit, 3. didCommit
try db.execute("UPDATE ...") // 4. didChange, 5. willCommit, 6. didCommit
}
Changes that are on hold because of a savepoint are only notified after the savepoint has been released. This makes sure that notified events are only events that have an opportunity to be committed:
try dbQueue.inTransaction { db in
try db.execute("INSERT ...") // 1. didChange
try db.execute("SAVEPOINT foo")
try db.execute("UPDATE ...") // delayed
try db.execute("UPDATE ...") // delayed
try db.execute("RELEASE SAVEPOINT foo") // 2. didChange, 3. didChange
try db.execute("SAVEPOINT foo")
try db.execute("UPDATE ...") // not notified
try db.execute("ROLLBACK TO SAVEPOINT foo")
return .commit // 4. willCommit, 5. didCommit
}
Errors thrown from databaseWillCommit are exposed to the application code:
do {
try dbQueue.inTransaction { db in
...
return .commit // 1. willCommit (throws), 2. didRollback
}
} catch {
// 3. The error thrown by the transaction observer.
}
☝️ Note: all callbacks are called in a protected dispatch queue, and serialized with all database updates.
☝️ Note: the databaseDidChange(with:) and databaseWillCommit() callbacks must not touch the SQLite database. This limitation does not apply to databaseDidCommit and databaseDidRollback which can use their database argument.
FetchedRecordsController is based on the TransactionObserver protocol.
See also TableChangeObserver.swift, which shows a transaction observer that notifies of modified database tables with NSNotificationCenter.
Transaction observers can avoid being notified of some database changes they are not interested in.
At first sight, this looks somewhat redundant with the checks that observers can perform in their databaseDidChange
method. But the code below is inefficient:
// BAD: An inefficient way to track the "persons" table:
class PersonObserver: TransactionObserver {
var personsTableModified = false
func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
// Observe all events
return true
}
func databaseDidChange(with event: DatabaseEvent) {
if event.tableName == "persons" {
personsTableModified = true
}
}
func databaseDidRollback(_ db: Database) {
// Get ready for next transaction
personsTableModified = false
}
func databaseDidCommit(_ db: Database) {
if personsTableModified {
// Process committed changes to the persons table
}
// Get ready for next transaction
personsTableModified = false
}
}
The databaseDidChange
method is invoked for each insertion, deletion, and update of individual rows, in any table. When there are many changed rows, the observer will spend a lot of time performing the same check again and again.
Instead, filter events in the observes(eventsOfKind:)
method. This will prevent databaseDidChange
from being called for changes you're not interested in, and is much more efficient:
// GOOD: An efficient way to track the "persons" table:
class PersonObserver: TransactionObserver {
var personsTableModified = false
func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
// Only observe changes to the "persons" table.
return eventKind.tableName == "persons"
}
func databaseDidChange(with event: DatabaseEvent) {
personsTableModified = true // Guaranteed
}
func databaseDidRollback(_ db: Database) {
// Get ready for next transaction
personsTableModified = false
}
func databaseDidCommit(_ db: Database) {
if personsTableModified {
// Process committed changes to the persons table
}
// Get ready for next transaction
personsTableModified = false
}
}
The observes(eventsOfKind:)
method is also able to inspect the columns that are about to be changed:
class PersonObserver: TransactionObserver {
func observes(eventsOfKind eventKind: DatabaseEventKind) -> Bool {
// Only observe changes to the "name" column of the "persons" table.
switch eventKind {
case .insert(let tableName):
return tableName == "persons"
case .delete(let tableName):
return tableName == "persons"
case .update(let tableName, let columnNames):
return tableName == "persons" && columnNames.contains("name")
}
}
}
A custom SQLite build can activate SQLite "preupdate hooks". In this case, TransactionObserver gets an extra callback which lets you observe individual column values in the rows modified by a transaction:
protocol TransactionObserver : class {
#if SQLITE_ENABLE_PREUPDATE_HOOK
/// Notifies before a database change (insert, update, or delete)
/// with change information (initial / final values for the row's
/// columns).
///
/// The event is only valid for the duration of this method call. If you
/// need to keep it longer, store a copy: event.copy().
func databaseWillChange(with event: DatabasePreUpdateEvent)
#endif
}
You use FetchedRecordsController to track changes in the results of an SQLite request.
FetchedRecordsController can also feed table views, collection views, and animate cells when the results of the request change.
It looks and behaves very much like Core Data's NSFetchedResultsController.
Given a fetch request, and a type that adopts the RowConvertible protocol, such as a subclass of the Record class, a FetchedRecordsController is able to track changes in the results of the fetch request, notify of those changes, and return the results of the request in a form that is suitable for a table view or a collection view, with one cell per fetched record.
See GRDBDemoiOS for a sample app that uses FetchedRecordsController.
- Creating the Fetched Records Controller
- Responding to Changes
- The Changes Notifications
- Modifying the Fetch Request
- Table and Collection Views
- Implementing the Table View Datasource Methods
- Implementing Table View Updates
- FetchedRecordsController Concurrency
When you initialize a fetched records controller, you provide the following mandatory information:
- A database connection
- The type of the fetched records. It must be a type that adopts the RowConvertible protocol, such as a subclass of the Record class
- A fetch request
class Person : Record { ... }
let dbQueue = DatabaseQueue(...) // Or DatabasePool
// Using a Request from the Query Interface:
let controller = FetchedRecordsController(
dbQueue,
request: Person.order(Column("name")))
// Using SQL, and eventual arguments:
let controller = FetchedRecordsController<Person>(
dbQueue,
sql: "SELECT * FROM persons ORDER BY name WHERE countryIsoCode = ?",
arguments: ["FR"])
The fetch request can involve several database tables. The fetched records controller will only track changes in the columns and tables used by the fetch request.
let controller = FetchedRecordsController<Person>(
dbQueue,
sql: "SELECT persons.name, COUNT(books.id) AS bookCount " +
"FROM persons " +
"LEFT JOIN books ON books.authorId = persons.id " +
"GROUP BY persons.id " +
"ORDER BY persons.name")
After creating an instance, you invoke performFetch()
to actually execute
the fetch.
try controller.performFetch()
In general, FetchedRecordsController is designed to respond to changes at the database layer, by notifying when database rows change location or values.
Changes are not reflected until they are applied in the database by a successful transaction. Transactions can be explicit, or implicit:
try dbQueue.inTransaction { db in
try person1.insert(db)
try person2.insert(db)
return .commit // Explicit transaction
}
try dbQueue.inDatabase { db in
try person1.insert(db) // Implicit transaction
try person2.insert(db) // Implicit transaction
}
When you apply several changes to the database, you should group them in a single explicit transaction. The controller will then notify of all changes together.
An instance of FetchedRecordsController notifies that the controller's fetched records have been changed by means of callbacks:
let controller = try FetchedRecordsController(...)
controller.trackChanges(
// controller's records are about to change:
willChange: { controller in ... },
// notification of individual record changes:
onChange: { (controller, record, change) in ... },
// controller's records have changed:
didChange: { controller in ... })
try controller.performFetch()
See Implementing Table View Updates for more detail on table view updates.
All callbacks are optional. When you only need to grab the latest results, you can omit the didChange
argument name:
controller.trackChanges { controller in
let newPersons = controller.fetchedRecords // [Person]
}
Callbacks have the fetched records controller itself as an argument: use it in order to avoid memory leaks:
// BAD: memory leak
controller.trackChanges { _ in
let newPersons = controller.fetchedRecords
}
// GOOD
controller.trackChanges { controller in
let newPersons = controller.fetchedRecords
}
Callbacks are invoked asynchronously. See FetchedRecordsController Concurrency for more information.
Values fetched from inside callbacks may be inconsistent with the controller's records. This is because other database changes can happen after the database has changed, but before the controller has had the opportunity to invoke callbacks on the main thread.
To avoid inconsistencies, provide a fetchAlongside
argument to the trackChanges
method, as below:
controller.trackChanges(
fetchAlongside: { db in
// Fetch any extra value, for example the number of fetched records:
return try Person.fetchCount(db)
},
didChange: { (controller, count) in
// The extra value is the second argument.
let recordsCount = controller.fetchedRecords.count
assert(count == recordsCount) // guaranteed
})
Whenever the fetched records controller can not look for changes after a transaction has potentially modified the tracked request, an error handler is called. The request observation is not stopped, though: future transactions may successfully be handled, and the notified changes will then be based on the last successful fetch.
controller.trackErrors { (controller, error) in
print("Missed a transaction because \(error)")
}
You can change a fetched records controller's fetch request or SQL query.
controller.setRequest(Person.order(Column("name")))
controller.setRequest(sql: "SELECT ...", arguments: ...)
The notification callbacks are invoked if the new request fetches a different set of records.
☝️ Note: This behavior differs from Core Data's NSFetchedResultsController, which does not notify of record changes when the fetch request is replaced.
Change callbacks are invoked asynchronously. This means that modifying the request from the main thread does not immediately trigger callbacks. When you need to take immediate action, force the controller to refresh immediately with its performFetch method. In this case, change callbacks are not called:
// Change request on the main thread:
controller.setRequest(Person.order(Column("name")))
// Here callbacks have not been called yet.
// You can cancel them, and refresh records immediately:
try controller.performFetch()
FetchedRecordsController lets you feed table and collection views, and keep them up-to-date with the database content.
For nice animated updates, a fetched records controller needs to recognize identical records between two different result sets. When records adopt the TableMapping protocol, they are automatically compared according to their primary key:
class Person : TableMapping { ... }
let controller = FetchedRecordsController(
dbQueue,
request: Person.all())
For other types, the fetched records controller needs you to be more explicit:
let controller = FetchedRecordsController(
dbQueue,
request: ...,
isSameRecord: { (person1, person2) in person1.id == person2.id })
The table view data source asks the fetched records controller to provide relevant information:
func numberOfSections(in tableView: UITableView) -> Int {
return fetchedRecordsController.sections.count
}
func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
return fetchedRecordsController.sections[section].numberOfRecords
}
func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
let cell = ...
let record = fetchedRecordsController.record(at: indexPath)
// Configure the cell
return cell
}
☝️ Note: In its current state, FetchedRecordsController does not support grouping table view rows into custom sections: it generates a unique section.
When changes in the fetched records should reload the whole table view, you can simply tell so:
controller.trackChanges { [unowned self] _ in
self.tableView.reloadData()
}
Yet, FetchedRecordsController can notify that the controller’s fetched records have been changed due to some add, remove, move, or update operations, and help applying animated changes to a UITableView.
For animated table view updates, use the willChange
and didChange
callbacks to bracket events provided by the fetched records controller, as illustrated in the following example:
// Assume self has a tableView property, and a cell configuration
// method named configure(_:at:).
controller.trackChanges(
// controller's records are about to change:
willChange: { [unowned self] _ in
self.tableView.beginUpdates()
},
// notification of individual record changes:
onChange: { [unowned self] (controller, record, change) in
switch change {
case .insertion(let indexPath):
self.tableView.insertRows(at: [indexPath], with: .fade)
case .deletion(let indexPath):
self.tableView.deleteRows(at: [indexPath], with: .fade)
case .update(let indexPath, _):
if let cell = self.tableView.cellForRow(at: indexPath) {
self.configure(cell, at: indexPath)
}
case .move(let indexPath, let newIndexPath, _):
self.tableView.deleteRows(at: [indexPath], with: .fade)
self.tableView.insertRows(at: [newIndexPath], with: .fade)
// // Alternate technique which actually moves cells around:
// let cell = self.tableView.cellForRow(at: indexPath)
// self.tableView.moveRow(at: indexPath, to: newIndexPath)
// if let cell = cell {
// self.configure(cell, at: newIndexPath)
// }
}
},
// controller's records have changed:
didChange: { [unowned self] _ in
self.tableView.endUpdates()
})
See GRDBDemoiOS for a sample app that uses FetchedRecordsController.
☝️ Note: our sample code above uses
unowned
references to the table view controller. This is a safe pattern as long as the table view controller owns the fetched records controller, and is deallocated from the main thread (this is usually the case). In other situations, prefer weak references.
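For example, a minimal sketch of the weak variant, reloading the table view as in the earlier snippet:

controller.trackChanges { [weak self] _ in
    self?.tableView.reloadData()
}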
A fetched records controller cannot be used from just any thread.
When the database itself can be read and modified from any thread, fetched records controllers must be used from the main thread. Record changes are also notified on the main thread.
Change callbacks are invoked asynchronously. This means that changes made from the main thread are not immediately notified. When you need to take immediate action, force the controller to refresh immediately with its performFetch method. In this case, change callbacks are not called:
// Change database on the main thread:
try dbQueue.inDatabase { db in
try Person(...).insert(db)
}
// Here callbacks have not been called yet.
// You can cancel them, and refresh records immediately:
try controller.performFetch()
☝️ Note: when the main thread does not fit your needs, give a serial dispatch queue to the controller initializer: the controller must then be used from this queue, and record changes are notified on this queue as well.
let queue = DispatchQueue(label: "...")
queue.async {
    let controller = try! FetchedRecordsController(..., queue: queue)
    controller.trackChanges { /* in queue */ }
    try! controller.performFetch()
}
GRDB can encrypt your database with SQLCipher v3.4.0.
This requires a manual installation of GRDB:
- Clone the GRDB.swift git repository, checkout the latest tagged version, and download SQLCipher sources:

  cd [GRDB.swift directory]
  git checkout v0.107.0
  git submodule update --init SQLCipher/src
- Embed the GRDB.xcodeproj project in your own project.

- Add the GRDBCipherOSX or GRDBCipheriOS target in the Target Dependencies section of the Build Phases tab of your application target.

- Add the GRDBCipher.framework from the targeted platform to the Embedded Binaries section of the General tab of your target.
You create and open an encrypted database by providing a passphrase to your database connection:
import GRDBCipher
var configuration = Configuration()
configuration.passphrase = "secret"
let dbQueue = try DatabaseQueue(path: "...", configuration: configuration)
You can change the passphrase of an already encrypted database:
try dbQueue.change(passphrase: "newSecret")
Providing a passphrase won't encrypt a clear-text database that already exists, though. SQLCipher can't do that, and you will get an error instead: SQLite error 26: file is encrypted or is not a database.
To encrypt an existing clear-text database, create a new and empty encrypted database, and copy the content of the clear-text database into it. The technique is documented by SQLCipher. With GRDB, it looks like this:
// The clear-text database
let clearDBQueue = try DatabaseQueue(path: "/path/to/clear.db")
// The encrypted database, at some distinct location:
var configuration = Configuration()
configuration.passphrase = "secret"
let encryptedDBQueue = try DatabaseQueue(path: "/path/to/encrypted.db", configuration: configuration)
try clearDBQueue.inDatabase { db in
try db.execute("ATTACH DATABASE ? AS encrypted KEY ?", arguments: [encryptedDBQueue.path, "secret"])
try db.execute("SELECT sqlcipher_export('encrypted')")
try db.execute("DETACH DATABASE encrypted")
}
// Now the copy is done, and the clear-text database can be deleted.
You can backup (copy) a database into another.
Backups can for example help you copying an in-memory database to and from a database file when you implement NSDocument subclasses.
let source: DatabaseQueue = ... // or DatabasePool
let destination: DatabaseQueue = ... // or DatabasePool
try source.backup(to: destination)
The backup
method blocks the current thread until the destination database contains the same contents as the source database.
When the source is a database pool, concurrent writes can happen during the backup. Those writes may, or may not, be reflected in the backup, but they won't trigger any error.
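For example, here is a minimal sketch of saving an in-memory database into a file, as mentioned above; it assumes an in-memory DatabaseQueue created with its parameterless initializer, and the file path is an assumption:

// An in-memory database, and a file database to save it into:
let inMemoryDBQueue = DatabaseQueue()
let fileDBQueue = try DatabaseQueue(path: "/path/to/saved.sqlite")
// Copy the in-memory contents to disk:
try inMemoryDBQueue.backup(to: fileDBQueue)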
This chapter covers general topics that you should be aware of.
- Avoiding SQL Injection
- Error Handling
- Unicode
- Memory Management
- Data Protection
- Concurrency
- Performance
SQL injection is a technique that lets an attacker nuke your database.
Here is an example of code that is vulnerable to SQL injection:
// BAD BAD BAD
let name = textField.text
try dbQueue.inDatabase { db in
try db.execute("UPDATE students SET name = '\(name)' WHERE id = \(id)")
}
If the user enters a funny string like Robert'; DROP TABLE students; --
, SQLite will see the following SQL, and drop your database table instead of updating a name as intended:
UPDATE students SET name = 'Robert';
DROP TABLE students;
--' WHERE id = 1
To avoid those problems, never embed raw values in your SQL queries. The only correct technique is to provide arguments to your SQL queries:
// Good
let name = textField.text
try dbQueue.inDatabase { db in
try db.execute(
"UPDATE students SET name = ? WHERE id = ?",
arguments: [name, id])
}
See Executing Updates for more information on statement arguments.
GRDB can throw DatabaseError, PersistenceError, or crash your program with a fatal error.
Considering that a local database is not some JSON loaded from a remote server, GRDB focuses on trusted databases. Dealing with untrusted databases requires extra care.
DatabaseError are thrown on SQLite errors:
do {
try db.execute(
"INSERT INTO pets (masterId, name) VALUES (?, ?)",
arguments: [1, "Bobby"])
} catch let error as DatabaseError {
// The SQLite error code: 19 (SQLITE_CONSTRAINT)
error.resultCode
// The extended error code: 787 (SQLITE_CONSTRAINT_FOREIGNKEY)
error.extendedResultCode
// The eventual SQLite message: FOREIGN KEY constraint failed
error.message
// The eventual erroneous SQL query
// "INSERT INTO pets (masterId, name) VALUES (?, ?)"
error.sql
// Full error description:
// "SQLite error 787 with statement `INSERT INTO pets (masterId, name)
// VALUES (?, ?)` arguments [1, "Bobby"]: FOREIGN KEY constraint failed"
error.description
}
SQLite uses codes to distinguish between various errors:
do {
try ...
} catch let error as DatabaseError where error.extendedResultCode == .SQLITE_CONSTRAINT_FOREIGNKEY {
// foreign key constraint error
} catch let error as DatabaseError where error.resultCode == .SQLITE_CONSTRAINT {
// any other constraint error
} catch let error as DatabaseError {
// any other database error
}
In the example above, error.extendedResultCode
is a precise extended result code, and error.resultCode
is a less precise primary result code. Extended result codes are refinements of primary result codes, as SQLITE_CONSTRAINT_FOREIGNKEY
is to SQLITE_CONSTRAINT
, for example. See SQLite result codes for more information.
As a convenience, extended result codes match their primary result code in a switch statement:
do {
try ...
} catch let error as DatabaseError {
switch error.extendedResultCode {
case ResultCode.SQLITE_CONSTRAINT_FOREIGNKEY:
// foreign key constraint error
case ResultCode.SQLITE_CONSTRAINT:
// any other constraint error
default:
// any other database error
}
}
⚠️ Warning: SQLite has progressively introduced extended result codes across its versions. For example, SQLITE_CONSTRAINT_FOREIGNKEY was not available yet on iOS 8.1. The SQLite release notes are unfortunately not quite clear about that: write your handling of extended result codes with care.
PersistenceError is thrown by the Persistable protocol, in a single case: when the update
method could not find any row to update:
do {
try person.update(db)
} catch PersistenceError.recordNotFound {
// There was nothing to update
}
Fatal errors notify that the program, or the database, has to be changed.
They uncover programmer errors, false assumptions, and prevent misuses. Here are a few examples:
- The code asks for a non-optional value, when the database contains NULL:

  // fatal error: could not convert NULL to String.
  let name: String = row.value(named: "name")
Solution: fix the contents of the database, use NOT NULL constraints, or load an optional:
let name: String? = row.value(named: "name")
- The code asks for a Date, when the database contains garbage:

  // fatal error: could not convert "Mom’s birthday" to Date.
  let date: Date? = row.value(named: "date")
Solution: fix the contents of the database, or use DatabaseValue to handle all possible cases:
let dbv: DatabaseValue = row.value(named: "date")
if dbv.isNull {
    // Handle NULL
} else if let date = Date.fromDatabaseValue(dbv) {
    // Handle valid date
} else {
    // Handle invalid date
}
- The database can't guarantee that the code does what it says:

  // fatal error: table persons has no unique index on column email
  try Person.deleteOne(db, key: ["email": "[email protected]"])

  Solution: add a unique index to the persons.email column, or use the deleteAll method to make it clear that you may delete more than one row:

  try Person.filter(Column("email") == "[email protected]").deleteAll(db)
- Database connections are not reentrant:

  // fatal error: Database methods are not reentrant.
  dbQueue.inDatabase { db in
      dbQueue.inDatabase { db in
          ...
      }
  }
Solution: avoid reentrancy, and instead pass a database connection along.
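For example, a minimal sketch of passing the connection along (insertPerson is a hypothetical helper):

func insertPerson(_ db: Database, name: String) throws {
    try db.execute("INSERT INTO persons (name) VALUES (?)", arguments: [name])
}

// Pass the connection instead of re-entering dbQueue.inDatabase:
try dbQueue.inDatabase { db in
    try insertPerson(db, name: "Arthur")
}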
Let's consider the code below:
let sql = "SELECT ..."
// Some untrusted arguments for the query
let arguments: [String: Any] = ...
let rows = try Row.fetchCursor(db, sql, arguments: StatementArguments(arguments))
while let row = try rows.next() {
// Some untrusted database value:
let date: Date? = row.value(atIndex: 0)
}
It has two opportunities to throw fatal errors:
- Untrusted arguments: The dictionary may contain values that do not conform to the DatabaseValueConvertible protocol, or may miss keys required by the statement.
- Untrusted database content: The row may contain a non-null value that can't be turned into a date.
In such a situation, you can still avoid fatal errors by exposing and handling each failure point, one level down in the GRDB API:
// Untrusted arguments
if let arguments = StatementArguments(arguments) {
    let statement = try db.makeSelectStatement(sql)
    try statement.validate(arguments: arguments)
    statement.unsafeSetArguments(arguments)

    let cursor = try Row.fetchCursor(statement)
    while let row = try cursor.next() {
        // Untrusted database content
        let dbv: DatabaseValue = row.value(atIndex: 0)
        if dbv.isNull {
            // Handle NULL
        } else if let date = Date.fromDatabaseValue(dbv) {
            // Handle valid date
        } else {
            // Handle invalid date
        }
    }
}
See prepared statements and DatabaseValue for more information.
SQLite can be configured to invoke a callback function containing an error code and a terse error message whenever anomalies occur.
It is recommended that you set up this error logging callback early in the lifetime of your application:
Database.logError = { (resultCode, message) in
NSLog("%@", "SQLite error \(resultCode): \(message)")
}
See The Error And Warning Log for more information.
SQLite lets you store unicode strings in the database.
However, SQLite does not provide any unicode-aware string transformations or comparisons.
The UPPER
and LOWER
built-in SQLite functions are not unicode-aware:
// "JéRôME"
try String.fetchOne(db, "SELECT UPPER('Jérôme')")
GRDB extends SQLite with SQL functions that call the Swift built-in string functions capitalized, lowercased, uppercased, localizedCapitalized, localizedLowercased and localizedUppercased:
// "JÉRÔME"
let uppercase = DatabaseFunction.uppercase
try String.fetchOne(db, "SELECT \(uppercase.name)('Jérôme')")
Those unicode-aware string functions are also readily available in the query interface:
Person.select(nameColumn.uppercased)
SQLite compares strings on many occasions: when you sort rows according to a string column, or when you use a comparison operator such as = or <=.
The comparison result comes from a collating function, or collation. SQLite comes with three built-in collations that do not support Unicode: binary, nocase, and rtrim.
GRDB comes with five extra collations that leverage unicode-aware comparisons based on the standard Swift String comparison functions and operators:
- unicodeCompare (uses the built-in <= and == Swift operators)
- caseInsensitiveCompare
- localizedCaseInsensitiveCompare
- localizedCompare
- localizedStandardCompare
A collation can be applied to a table column. All comparisons involving this column will then automatically trigger the comparison function:
try db.create(table: "persons") { t in
// Guarantees case-insensitive email unicity
t.column("email", .text).unique().collate(.nocase)
// Sort names in a localized case insensitive way
t.column("name", .text).collate(.localizedCaseInsensitiveCompare)
}
// Persons are sorted in a localized case insensitive way:
let persons = try Person.order(nameColumn).fetchAll(db)
⚠️ Warning: SQLite requires host applications to provide the definition of any collation other than binary, nocase and rtrim. When a database file has to be shared with or migrated to another SQLite library or platform (such as the Android version of your application), make sure you provide a compatible collation.
If you can't or don't want to define the comparison behavior of a column (see warning above), you can still use an explicit collation in SQL requests and in the query interface:
let collation = DatabaseCollation.localizedCaseInsensitiveCompare
let persons = try Person.fetchAll(db,
"SELECT * FROM persons ORDER BY name COLLATE \(collation.name))")
let persons = try Person.order(nameColumn.collating(collation)).fetchAll(db)
You can also define your own collations:
let collation = DatabaseCollation("customCollation") { (lhs, rhs) -> ComparisonResult in
// return the comparison of lhs and rhs strings.
}
dbQueue.add(collation: collation) // Or dbPool.add(collation: ...)
Both SQLite and GRDB use non-essential memory that helps them perform better.
You can reclaim this memory with the releaseMemory
method:
// Release as much memory as possible.
dbQueue.releaseMemory()
dbPool.releaseMemory()
This method blocks the current thread until all current database accesses are completed, and the memory collected.
The iOS operating system likes applications that do not consume much memory.
Database queues and pools can call the releaseMemory
method for you, when application receives memory warnings, and when application enters background: call the setupMemoryManagement
method after creating the queue or pool instance:
let dbQueue = try DatabaseQueue(...)
dbQueue.setupMemoryManagement(in: UIApplication.shared)
Data Protection lets you protect files so that they are encrypted and unavailable until the device is unlocked.
Data protection can be enabled globally for all files created by an application.
You can also explicitly protect a database, by configuring its enclosing directory. This will not only protect the database file, but also all temporary files created by SQLite (including the persistent .shm
and .wal
files created by database pools).
For example, to explicitly use complete protection:
// Paths
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
let directoryPath = (documentsPath as NSString).appendingPathComponent("database")
let databasePath = (directoryPath as NSString).appendingPathComponent("db.sqlite")
// Create directory if needed
let fm = FileManager.default
var isDirectory: ObjCBool = false
if !fm.fileExists(atPath: directoryPath, isDirectory: &isDirectory) {
try fm.createDirectory(atPath: directoryPath, withIntermediateDirectories: false)
} else if !isDirectory.boolValue {
throw NSError(domain: NSCocoaErrorDomain, code: NSFileWriteFileExistsError, userInfo: nil)
}
// Enable data protection
try fm.setAttributes([.protectionKey : FileProtectionType.complete], ofItemAtPath: directoryPath)
// Open database
let dbQueue = try DatabaseQueue(path: databasePath)
When a database is protected, an application that runs in the background on a locked device won't be able to read or write from it. Instead, it will get DatabaseError with code SQLITE_IOERR
(10) "disk I/O error", or SQLITE_AUTH
(23) "not authorized".
You can catch those errors and wait for UIApplicationDelegate.applicationProtectedDataDidBecomeAvailable(_:) or UIApplicationProtectedDataDidBecomeAvailable notification in order to retry the failed database operation.
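Here is a minimal, hedged sketch of such a retry strategy (performWhenUnlocked is a hypothetical helper, and the error codes checked are the ones listed above):

import UIKit
import GRDB

func performWhenUnlocked(_ dbQueue: DatabaseQueue, _ updates: @escaping (Database) throws -> Void) {
    do {
        try dbQueue.inDatabase(updates)
    } catch let error as DatabaseError where error.resultCode == .SQLITE_IOERR || error.resultCode == .SQLITE_AUTH {
        // The device is probably locked: retry once protected data becomes available.
        var token: NSObjectProtocol? = nil
        token = NotificationCenter.default.addObserver(
            forName: .UIApplicationProtectedDataDidBecomeAvailable,
            object: nil,
            queue: .main) { _ in
                if let token = token { NotificationCenter.default.removeObserver(token) }
                performWhenUnlocked(dbQueue, updates)
        }
    } catch {
        // Handle other errors
    }
}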
- Guarantees and Rules
- Advanced DatabasePool
- DatabaseWriter and DatabaseReader Protocols
- Dealing with External Connections
GRDB ships with two concurrency modes:
- DatabaseQueue opens a single database connection, and serializes all database accesses.
- DatabasePool manages a pool of several database connections, and allows concurrent reads and writes.
Both foster application safety: regardless of the concurrency mode you choose, GRDB provides you with the same guarantees, as long as you follow three rules.
- Guarantee 1: writes are always serialized. At every moment, there is no more than a single thread that is writing into the database.

- Guarantee 2: reads are always isolated. This means that they are guaranteed an immutable view of the last committed state of the database, and that you can perform subsequent fetches without fearing concurrent writes that could mess with your application logic:

  try dbPool.read { db in  // or dbQueue.inDatabase { ... }
      // Guaranteed to be equal
      let count1 = try Person.fetchCount(db)
      let count2 = try Person.fetchCount(db)
  }
- Guarantee 3: requests don't fail, unless there is a database constraint violation, a programmer mistake, or a very low-level issue such as a disk error or an unreadable database file. GRDB guarantees correct use of SQLite, and particularly avoids locking errors and other SQLite misuses.
Those guarantees hold as long as you follow three rules:
- ☝️ Rule 1: Have a unique instance of DatabaseQueue or DatabasePool connected to any database file.
This means that opening a new connection each time you access the database is probably a very bad idea. Do share a single connection instead.
See, for example, DemoApps/GRDBDemoiOS/Database.swift for a sample code that properly sets up a single database queue that is available throughout the application.
If there are several instances of database queues or pools that access the same database, a multi-threaded application will eventually face "database is locked" errors. See Dealing with External Connections.
// SAFE CONCURRENCY
func currentUser(_ db: Database) throws -> User? {
    return try User.fetchOne(db)
}
// dbQueue is a singleton defined somewhere in your app
let user = try dbQueue.inDatabase { db in  // or dbPool.read { ... }
    try currentUser(db)
}

// UNSAFE CONCURRENCY
// This method fails when some other thread is currently writing into
// the database.
func currentUser() throws -> User? {
    let dbQueue = try DatabaseQueue(...)
    return try dbQueue.inDatabase { db in
        try User.fetchOne(db)
    }
}
let user = try currentUser()
- ☝️ Rule 2: Group related statements within a single call to a DatabaseQueue or DatabasePool database access method.
Those methods isolate your groups of related statements against eventual database updates performed by other threads, and guarantee a consistent view of the database. This isolation is only guaranteed inside the closure argument of those methods. Two consecutive calls do not guarantee isolation:
// SAFE CONCURRENCY
try dbPool.read { db in  // or dbQueue.inDatabase { ... }
    // Guaranteed to be equal:
    let count1 = try PointOfInterest.fetchCount(db)
    let count2 = try PointOfInterest.fetchCount(db)
}

// UNSAFE CONCURRENCY
// Those two values may be different because some other thread may have
// modified the database between the two blocks:
let count1 = try dbPool.read { db in try PointOfInterest.fetchCount(db) }
let count2 = try dbPool.read { db in try PointOfInterest.fetchCount(db) }
In the same vein, when you fetch values that depend on some database updates, group them:
// SAFE CONCURRENCY
try dbPool.write { db in
    // The count is guaranteed to be non-zero
    try PointOfInterest(...).insert(db)
    let count = try PointOfInterest.fetchCount(db)
}

// UNSAFE CONCURRENCY
// The count may be zero because some other thread may have performed
// a deletion between the two blocks:
try dbPool.write { db in
    try PointOfInterest(...).insert(db)
}
let count = try dbPool.read { db in try PointOfInterest.fetchCount(db) }
Regarding that last example, see Advanced DatabasePool if you are looking for extra performance.
- ☝️ Rule 3: When you perform several modifications of the database that temporarily put the database in an inconsistent state, group those modifications within a transaction:

  // SAFE CONCURRENCY
  try dbPool.writeInTransaction { db in  // or dbQueue.inTransaction { ... }
      try Credit(destinationAccount, amount).insert(db)
      try Debit(sourceAccount, amount).insert(db)
      return .commit
  }

  // UNSAFE CONCURRENCY
  try dbPool.write { db in  // or dbQueue.inDatabase { ... }
      try Credit(destinationAccount, amount).insert(db)
      try Debit(sourceAccount, amount).insert(db)
  }
Without a transaction, DatabasePool.read { ... } may see the first statement, but not the second, and access a database where the balance of accounts is not zero: a highly bug-prone situation. So do use transactions in order to guarantee database consistency across your application threads: that's what they are made for.
Database pools are very concurrent, since all reads can run in parallel, and can even run during write operations. But writes are still serialized: at any given point in time, there is no more than a single thread that is writing into the database.
When your application modifies the database, and then reads some value that depends on those modifications, you may want to avoid locking the writer queue longer than necessary:
try dbPool.write { db in
// Increment the number of persons
try Person(...).insert(db)
// Read the number of persons. The writer queue is still locked :-(
let count = try Person.fetchCount(db)
}
A wrong solution is to chain a write then a read, as below. Don't do that, because another thread may modify the database in between, and make the read unreliable:
// WRONG
try dbPool.write { db in
// Increment the number of persons
try Person(...).insert(db)
}
try dbPool.read { db in
// Read some random value :-(
let count = try Person.fetchCount(db)
}
The correct solution is the readFromCurrentState
method, which must be called from within a write block:
// CORRECT
try dbPool.write { db in
// Increment the number of persons
try Person(...).insert(db)
try dbPool.readFromCurrentState { db in
// Read the number of persons. The writer queue has been unlocked :-)
let count = try Person.fetchCount(db)
}
}
readFromCurrentState
blocks until it can guarantee its closure argument an isolated access to the last committed state of the database. It then asynchronously executes the closure. If the isolated access can't be established, readFromCurrentState
throws an error, and the closure is not executed.
The closure can run concurrently with updates performed after readFromCurrentState: those updates won't be visible from within the closure. In the example below, the number of persons is guaranteed to be non-zero, even though it is fetched concurrently with the person deletion:
try dbPool.write { db in
// Increment the number of persons
try Person(...).insert(db)
try dbPool.readFromCurrentState { db in
// Guaranteed to be non-zero
let count = try Person.fetchCount(db)
}
try Person.deleteAll(db)
}
Transaction Observers can also use readFromCurrentState
in their databaseDidCommit
method in order to process database changes without blocking other threads that want to write into the database.
Both DatabaseQueue and DatabasePool adopt the DatabaseReader and DatabaseWriter protocols.
These protocols provide a unified API that lets you write safe concurrent code that targets both classes.
However, database queues are not database pools, and DatabaseReader and DatabaseWriter provide the smallest common guarantees. They require more discipline:
- Pools are less forgiving than queues when one overlooks a transaction (see concurrency rule 3).
- DatabaseWriter.readFromCurrentState is synchronous, or asynchronous, depending on whether it is run by a queue or a pool (see advanced DatabasePool). It thus requires higher libDispatch skills, and more complex synchronization code.
- The definition of "current state" in DatabaseWriter.readFromCurrentState is delicate.
DatabaseReader and DatabaseWriter are not a tool for applications that hesitate between DatabaseQueue and DatabasePool, and look for a common API. As seen above, the protocols actually make applications harder to write correctly. Instead, they target reusable agnostic code that has both queues and pools in mind. For example, GRDB uses those protocols for migrations and FetchedRecordsController, two tools that accept both queues and pools.
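For example, a minimal sketch of such agnostic code, assuming a Person record type and the read method of DatabaseReader:

func personCount(in reader: DatabaseReader) throws -> Int {
    return try reader.read { db in
        try Person.fetchCount(db)
    }
}

// The same function accepts both concurrency modes:
let queueCount = try personCount(in: dbQueue)
let poolCount = try personCount(in: dbPool)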
The first rule of GRDB is:
- Rule 1: Have a unique instance of DatabaseQueue or DatabasePool connected to any database file.
This means that dealing with external connections is not a focus of GRDB. Guarantees of GRDB may or may not hold as soon as some external connection modifies a database.
If you absolutely need multiple connections, then:
- Reconsider your position
- Read about isolation in SQLite
- Learn about locks and transactions
- Become a master of the WAL mode
- Prepare to setup a busy handler with Configuration.busyMode
- Ask questions
GRDB is a reasonably fast library, and can deliver quite efficient SQLite access. See Comparing the Performances of Swift SQLite libraries for an overview.
You'll find below general advice when you do look after performance:
- Focus
- Know your platform
- Use transactions
- Don't do useless work
- Learn about SQL strengths and weaknesses
- Avoid strings & dictionaries
You don't know which part of your program needs improvement until you have run a benchmarking tool.
Don't make any assumption, avoid optimizing code too early, and use Instruments.
If your application processes a huge JSON file and inserts thousands of rows in the database right from the main thread, it will quite likely become unresponsive, and provide a sub-quality user experience.
If not done yet, read the Concurrency Programming Guide and learn how to perform heavy computations without blocking your application.
Most GRDB APIs are synchronous. Spawning them into parallel queues is as easy as:
DispatchQueue.global().async {
dbQueue.inDatabase { db in
// Perform database work
}
DispatchQueue.main.async {
// update your user interface
}
}
Performing multiple updates to the database is much faster when executed inside a transaction. This is because a transaction allows SQLite to postpone writing changes to disk until the final commit:
// Inefficient
try dbQueue.inDatabase { db in
for person in persons {
try person.insert(db)
}
}
// Efficient
try dbQueue.inTransaction { db in
for person in persons {
try person.insert(db)
}
return .commit
}
Obviously, no code is faster than any code.
Don't fetch columns you don't use
// SELECT * FROM persons
try Person.fetchAll(db)
// SELECT id, name FROM persons
try Person.select(idColumn, nameColumn).fetchAll(db)
If your Person type can't be built without other columns (it has non-optional properties for other columns), do define and use a different type.
Don't fetch rows you don't use
Use fetchOne when you need a single value, and otherwise limit your queries at the database level:
// Wrong way: this code may discard hundreds of useless database rows
let persons = try Person.order(scoreColumn.desc).fetchAll(db)
let hallOfFame = persons.prefix(5)
// Better way
let hallOfFame = try Person.order(scoreColumn.desc).limit(5).fetchAll(db)
Don't copy values unless necessary
Particularly: the Array returned by the fetchAll method and the cursor returned by fetchCursor aren't the same: fetchAll copies all values from the database into memory, whereas fetchCursor iterates database results as they are generated by SQLite, taking advantage of SQLite's efficiency.
You should only load arrays if you need to keep them for later use (such as iterating their contents in the main thread). Otherwise, use fetchCursor.
See fetching methods for more information about fetchAll and fetchCursor. See also the Row.dataNoCopy method.
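For example, a minimal sketch contrasting the two, assuming the Person record type used throughout this chapter:

try dbQueue.inDatabase { db in
    // fetchAll copies all persons into an Array:
    let persons = try Person.fetchAll(db)   // [Person]
    print(persons.count)

    // fetchCursor iterates persons as SQLite produces them:
    let cursor = try Person.fetchCursor(db)
    while let person = try cursor.next() {
        print(person)
    }
}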
Don't update rows unless necessary
An UPDATE statement is costly: SQLite has to look for the updated row, update values, and write changes to disk.
When the overwritten values are the same as the existing ones, it's thus better to avoid performing the UPDATE statement.
The Record class can help you: it provides change tracking:
if person.hasPersistentChangedValues {
try person.update(db)
}
Consider a simple use case: your store application has to display a list of authors with the number of available books:
- J. M. Coetzee (6)
- Herman Melville (1)
- Alice Munro (3)
- Kim Stanley Robinson (7)
- Oliver Sacks (4)
The following code is inefficient. It is an example of the N+1 problem, because it performs one query to load the authors, and then N queries, as many as there are authors. This turns very inefficient as the number of authors grows:
// SELECT * FROM authors
let authors = try Author.fetchAll(db)
for author in authors {
// SELECT COUNT(*) FROM books WHERE authorId = ...
author.bookCount = try Book.filter(authorIdColumn == author.id).fetchCount(db)
}
Instead, perform a single query:
let sql = "SELECT authors.*, COUNT(books.id) AS bookCount " +
"FROM authors " +
"LEFT JOIN books ON books.authorId = authors.id " +
"GROUP BY authors.id"
let authors = try Author.fetchAll(db, sql)
In the example above, consider extending your Author with an extra bookCount property, or define and use a different type.
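For example, a minimal sketch of such a dedicated type (AuthorBookCount is a hypothetical name, and the name column is assumed to exist in the authors table):

struct AuthorBookCount : RowConvertible {
    let name: String
    let bookCount: Int

    init(row: Row) {
        name = row.value(named: "name")
        bookCount = row.value(named: "bookCount")
    }
}

// Reuses the single-query SQL above:
let authorBookCounts = try AuthorBookCount.fetchAll(db, sql)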
Generally, define indexes on your database tables, and use SQLite's efficient query planning.
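For example, the join above benefits from an index on the books.authorId column; this minimal sketch uses raw SQL, and the index name is an assumption:

try db.execute("CREATE INDEX booksOnAuthorId ON books(authorId)")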
The String and Dictionary Swift types are better avoided when you look for the best performance.
Now GRDB records, for your convenience, do use strings and dictionaries:
class Person : Record {
var id: Int64?
var name: String
var email: String
required init(_ row: Row) {
id = row.value(named: "id") // String
name = row.value(named: "name") // String
email = row.value(named: "email") // String
super.init()
}
override var persistentDictionary: [String: DatabaseValueConvertible?] {
return ["id": id, "name": name, "email": email] // Dictionary
}
}
When convenience hurts performance, you can still use records, but you had better avoid their string- and dictionary-based methods.
For example, when fetching values, prefer loading columns by index:
// Strings & dictionaries
let persons = try Person.fetchAll(db)
// Column indexes
// SELECT id, name, email FROM persons
let request = Person.select(idColumn, nameColumn, emailColumn)
let rows = try Row.fetchCursor(db, request)
while let row = try rows.next() {
let id: Int64 = row.value(atIndex: 0)
let name: String = row.value(atIndex: 1)
let email: String = row.value(atIndex: 2)
let person = Person(id: id, name: name, email: email)
...
}
When inserting values, use reusable prepared statements, and set statements values with an array:
// Strings & dictionaries
for person in persons {
try person.insert(db)
}
// Prepared statement
let insertStatement = try db.makeUpdateStatement("INSERT INTO persons (name, email) VALUES (?, ?)")
for person in persons {
// Only use the unsafe arguments setter if you are sure that you provide
// all statement arguments. A mistake can store unexpected values in
// the database.
insertStatement.unsafeSetArguments([person.name, person.email])
try insertStatement.execute()
}
- How do I close a database connection?
- How do I open a database stored as a resource of my application?
- Generic parameter 'T' could not be inferred
- Compilation takes a long time
- SQLite error 10 "disk I/O error", SQLite error 23 "not authorized"
Database connections are managed by database queues and pools. A connection is closed when its database queue or pool is deallocated, and all usages of this connection are completed.
Database accesses that run in background threads postpone the closing of connections.
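For example, a minimal sketch of letting a connection close by releasing its queue (the path is an assumption):

var dbQueue: DatabaseQueue? = try DatabaseQueue(path: "/path/to/database.sqlite")
// ... use the database ...
dbQueue = nil  // The connection closes once all pending database accesses complete.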
If your application does not need to modify the database, open a read-only connection to your resource:
var configuration = Configuration()
configuration.readonly = true
let dbPath = Bundle.main.path(forResource: "db", ofType: "sqlite")!
let dbQueue = try DatabaseQueue(path: dbPath, configuration: configuration)
If the application should modify the database, you need to copy it to a place where it can be modified. For example, in the Documents folder. Only then, open a connection:
let fm = FileManager.default
let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
let dbPath = (documentsPath as NSString).appendingPathComponent("db.sqlite")
if !fm.fileExists(atPath: dbPath) {
let dbResourcePath = Bundle.main.path(forResource: "db", ofType: "sqlite")!
try fm.copyItem(atPath: dbResourcePath, toPath: dbPath)
}
let dbQueue = try DatabaseQueue(path: dbPath)
You may get this error when using DatabaseQueue.inDatabase, DatabasePool.read, or DatabasePool.write:
// Generic parameter 'T' could not be inferred
let x = try dbQueue.inDatabase { db in
let result = try String.fetchOne(db, ...)
return result
}
This is a Swift compiler issue (see SR-1570).
The general workaround is to explicitly declare the type of the closure result:
// General Workaround
let string = try dbQueue.inDatabase { db -> String? in
let result = try String.fetchOne(db, ...)
return result
}
You can also, when possible, write a single-line closure:
// Single-line closure workaround:
let string = try dbQueue.inDatabase { db in
try String.fetchOne(db, ...)
}
When your record type is very slow to compile, it is usually because its persistentDictionary
property builds a long dictionary literal:
var persistentDictionary: [String: DatabaseValueConvertible?] {
    // Long dictionary literals are slow to compile
    return [
        "a": a,
        "b": b,
        ...
    ]
}
That's annoying, but the Swift compiler finds it difficult to compile such a dictionary. We can only hope that the compiler improves over time.
To speed up compilation, build your dictionary step by step:
var persistentDictionary: [String: DatabaseValueConvertible?] {
var dict: [String: DatabaseValueConvertible?] = [:]
dict.updateValue(a, forKey: "a")
dict.updateValue(b, forKey: "b")
...
return dict
}
☝️ Note: it is important that you use the updateValue method, and not the subscript setter:

  // GOOD
  dict.updateValue(a, forKey: "a")
  // BAD: when the value is nil, this erases the key instead of setting it to nil.
  dict["a"] = a
Those errors may be the sign that SQLite can't access the database due to data protection.
When your application should be able to run in the background on a locked device, it has to catch this error, and, for example, wait for UIApplicationDelegate.applicationProtectedDataDidBecomeAvailable(_:) or UIApplicationProtectedDataDidBecomeAvailable notification and retry the failed database operation.
This error can also be prevented altogether by using a more relaxed file protection.
- The Documentation is full of GRDB snippets.
- GRDBDemoiOS: A sample iOS application.
- WWDC Companion: A sample iOS application.
- Check GRDB.xcworkspace: it contains GRDB-enabled playgrounds to play with.
- How to synchronize a database table with a JSON payload: JSONSynchronization.playground
Thanks
- Pierlis, where we write great software.
- Vladimir Babin, Pascal Edmond, Andrey Fidrya, Cristian Filipov, David Hart, @peter-ss, Pierre-Loïc Raynaud, Steven Schveighoffer, @swiftlyfalling, and Kevin Wooten for their contributions, help, and feedback on GRDB.
- @aymerick and Mathieu "Kali" Poumeyrol because SQL.
- ccgus/fmdb for its excellency.