JSON driven testing of REST APIs.
This is a test framework targeted at JSON REST APIs. It comes in the form of a Node.js package called jsonapitest
that is available on the command line to run your tests. Tests are specified in JSON or JavaScript files and grouped into
test suites. Each test contains a list of API calls with assertions. The framework supports using JSON Schema
to validate the structure of responses. All HTTP traffic is logged extensively
by the test runner to help debug test failures. Any data (database records, user credentials etc.) that the tests
need is specified in JSON format and easily interpolated into API calls. You can extend and customize
the framework with your own assertion functions, HTTP client, or logger.
- Motivation
- Installation
- Usage
- Test File Structure
- JavaScript instead of JSON
- The Anatomy of Test Files
- Config
- Data
- Suite
- API Call
- HTTP Clients
- Request
- Response
- Select
- Assert
- Custom Assert Functions
- Status Assertions
- Assert: schema
- Assert: equal
- Assert: equal_keys
- Assert: contains
- Assert: contains_keys
- Assert: size
- Saving Data
- Data Interpolation
- Merging Objects
- Logging
- Callbacks
I had a REST API implemented in Node.js and I started out writing my API tests with Mocha and Supertest. Although this approach worked,
I ended up with test code that was time-consuming to write and complex. Also, I didn't like the fact that my
tests were coupled to the implementation of the API. I tried doing some semi-automated testing with curl, and although I appreciate the simplicity
of curl, the approach wasn't sufficiently structured and automated. What I was looking for was a declarative and black-box
way to do API testing. Here are a few selling points for jsonapitest:
- Black box testing of APIs means the tests are not tied to the implementation behind the API (i.e. programming language, database etc.)
- Black box testing will encourage you to design more complete and user-friendly APIs (since you cannot easily get at the implementation behind the API)
- By having test definitions be simple and pure data structures you are not tied to any particular test framework implementation. This means the test runner, the HTTP client and the assertion engine could all be re-implemented and swapped out fairly easily.
- The fact that you are constrained to a simple JSON structure for tests will help keep your tests dumb and devoid of complicated logic. This makes test maintenance easier.
- Since test specifications are pure data they lend themselves well to building, for example, a testing UI or API documentation.
- Debugging is helped by the verbose logging of all HTTP requests and responses (this log is also in JSON format)
- It's easy to point the test runner at different environments (i.e. a test, development or staging server)
npm install jsonapitest -g
Specify your test in a JSON file:
{
"config": {
"defaults": {
"api_call": {
"request": {
"base_url": "https://api.some-hostname.com"
}
}
}
},
"data": {
"schema": {
"user": {
"type": "object",
"properties": {
"id": {"type": "integer"},
"name": {"type": "string"},
"email": {"type": "string", "format": "email"}
}
}
},
"users": {
"member": {
"id":1,
"email":"[email protected]"
}
}
},
"suites": [
{
"name": "users",
"tests": [
{
"name": "get_user_success",
"description": "Fetch info about a user",
"api_calls": [
{
"request": "/v1/users/{{users.member.id}}",
"status": 200,
"assert": {
"select": "body",
"schema": "{{schema.user}}",
"equal_keys": {
"id": "{{users.member.id}}",
"email": "{{users.member.email}}"
}
}
}
]
},
{
"name": "get_user_missing",
"description": "Trying to fetch info about a user that doesn't exist",
"api_calls": [
{
"request": "/v1/users/99999999",
"status": 404
}
]
}
]
}
]
}
Run it:
jsonapitest path-to-your-test-file.json
Check out the Parse CRUD example for more sample code.
Tests are written in one or more JSON or JavaScript files and you may choose any file structure you like. With a small test case you may want to put all test code in a single file. A more typical structure is to divide your test code into three sets of files: configuration, data, and test suites. Here is an example:
test/integration/config.json
test/integration/data.json
test/integration/users_test.json
test/integration/articles_test.json
Here config.json will contain the configuration, data.json the data, users_test.json the test suite for users, and articles_test.json the test suite for articles. To run the tests:
jsonapitest test/integration/config.json test/integration/data.json test/integration/users_test.json test/integration/articles_test.json
Or more conveniently:
jsonapitest test/integration/*.json
The order in which test files are given to the test runner determines the execution order of tests. Since test suites are supposed to be independent of each other this typically won't affect the outcome. The order does matter if you have files with overlapping config or data properties: later files take precedence over earlier ones through a deep merge of the config and data properties. One use for this is overriding the default configuration or data. Suppose you usually run your tests against a local test or development server but at times would like to run them against a remote staging server. You could then have a configuration file at test/integration/env/staging.json:
{
"config": {
"defaults": {
"api_call": {
"request": {
"base_url": "https://my.staging.api.example"
}
}
}
}
}
And run your tests against staging like so:
jsonapitest test/integration/*.json test/integration/env/staging.json
As a more flexible and slightly more readable alternative to JSON you have the option of specifying your
test files in JavaScript instead of JSON. JavaScript files should be Node.js modules and jsonapitest
will simply invoke require on them. Here is an example config.js file:
'use strict';
module.exports = {
config: {
defaults: {
api_call: {
request: {
base_url: "https://my.staging.api.example"
}
}
}
}
};
The JSON in test files may contain one or more of the following top-level properties: config, data, suite, and suites.
The config property is optional. Use it to specify the path to a log file where HTTP requests are logged, the base_url of your server, and any default headers and response status for your API calls:
"config": {
"log_path": "log/integration-test-results.json",
"defaults": {
"api_call": {
"request": {
"base_url": "http://localhost:3001",
"headers": {
"X-API-CALL-ID": "{{$api_call_id}}",
"X-Token": "secret-api-token-goes-here",
"Accept": "application/json",
"Content-Type": "application/json"
}
},
"status": 200
}
}
}
For the X-API-CALL-ID header above we are interpolating a built-in variable called $api_call_id that will be set to a unique hex
digest for each API call (see more under Data Interpolation). This is a technique you can use to make it easier
to find test requests in your server logs.
The data property is a free-form container for any kind of data that you need for your tests, e.g. database data, user credentials etc. Data is interpolated into API calls with the double curly syntax (i.e. {{my_data}}, see Data Interpolation).
You will most likely need to populate your database with test data before running your tests. If so, any script that you write for this
should probably use the JSON data defined by the data property (i.e. database records or documents). In general it's a good idea
to write your API tests so that they make as few assumptions about the state of the system as possible. However,
in some test scenarios you really need to know exactly what the state of the system is in order to make assertions and achieve high test coverage.
One approach that works in some projects is to run tests against a copy of the production data with a small amount of known test data added to it. Data population is currently outside the scope of this framework.
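For illustration, here is a minimal sketch of what such a population script could look like. It assumes the file layout from the Test File Structure section and the users data from the example at the top of this README; the actual database write is left as a comment since that part depends entirely on your stack.
'use strict';

// Sketch of a data-population script (not part of jsonapitest). It reads the same
// data file the tests use so that seeded records and test assertions stay in sync.
var fs = require('fs');

var data = JSON.parse(fs.readFileSync('test/integration/data.json', 'utf8')).data;

Object.keys(data.users).forEach(function(key) {
  var user = data.users[key];
  // Replace this with an insert via your own database driver or ORM.
  console.log('would seed user', user.id, user.email);
});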
There are two pre-defined variables that you can interpolate into your API calls to generate unique data:
- $run_id - a 32 character long hex digest unique to the test run
- $api_call_id - a 32 character long hex digest that is unique to each API call
Here is an example request that creates a new user with a unique email address:
{
"request": {
"method": "POST",
"path": "/v1/users",
"params": {
"name": "Mr New User",
"email": "new-user-{{$run_id}}@example.com"
}
}
}
You can interpolate environment variables into your config and data by using $env.SOME_ENVIRONMENT_VARIABLE.
Use the suite property to define a single test suite:
{
"suite": {
"name": "users",
"description": "CRUD operations on the user resource",
"tests": [
{
"name": "get_user_success",
"description": "Fetch info about a user",
"api_calls": [
{
"request": {
"path": "/v1/users/{{users.member.id}}"
},
"assert": [{
"select": "body",
"schema": "{{schema.user}}",
"equal_keys": {
"id": "{{users.member.id}}",
"email": "{{users.member.email}}"
}
}]
}
]
}
]
}
}
A test suite is made up of a name, an optional description, and an array of tests. Each test in turn has a name, an optional description, and an array of API calls. To define many test suites in a single file, use the suites (plural) property and have it point to an array of suite objects.
The API call lies at the heart of API testing and it is made up of an HTTP request and one or more assertions against the response. An API call can also save data from the HTTP response for use in later API calls.
To make the intention of API calls more obvious and improve readability of tests you can use the optional it property (or the description property) like this:
{
"it": "can GET a user of type member"
"request": "/v1/users/{{users.member.id}}",
"status": 200
}
The framework ships with adapters for two popular HTTP clients - superagent (default) and request. Here is how to configure the request
HTTP client:
"config": {
"modules": {
"http_client": "./http_clients/request"
}
}
In order to support a different HTTP client, all you have to do is write a simple adapter for it. See the
superagent and request adapters for examples of how to do this. You can either install your adapter globally as an npm package or set config.modules.http_client
to the absolute file path of your adapter.
The request property of each API call is an object with the following properties:
- method - the HTTP verb (i.e. GET, PUT, POST, DELETE etc.). Defaults to "GET".
- path - the path to make the request to. If a base_url has been configured then the url property will be set to the base_url joined with the path.
- url - specify the full URL here instead of the path if you need a URL different from the base_url.
- headers - custom HTTP headers.
- params - query or post parameters.
- files - an array with paths to files that will be uploaded with content type "multipart/form-data".
You can also let the request
property be a string for simple requests:
{
"request": "DELETE /v1/users/{{users.member.id}"
}
Notice that you can also append query parameters to the path instead of using the params
property:
{
"request": "/v1/users?limit=10&offset=10"
}
The following properties are available in the HTTP response object:
- status - the response status code (an integer)
- headers - a hash with HTTP headers
- body - the parsed JSON body
- response_time - elapsed number of milliseconds from request to response (integer)
Selections on the response data are used to make assertions and to save data. Selections can be made
on any property of the response. A selection is made up of a nested key
and the following optional
properties:
- pattern - a regular expression
- limit - limit a selected array to a number of items
- sort - sort a selected array either ascending (asc) or descending (desc)
A selector that consists of only a key can be provided as a plain string:
"api_calls": [
{
"request": "/v1/users/1",
"assert": {
"select": "body.user.name",
"equal": "Joe User"
}
}
]
The above expands to:
"api_calls": [
{
"request": "/v1/users/1",
"assert": {
"select": {"key": "body.user.name"},
"equal": "Joe User"
}
}
]
Here is the same example with a regexp pattern added:
"api_calls": [
{
"request": "/v1/users/1",
"assert": {
"select": {"key": "body.user.name", "pattern": "\w+$"},
"equal": "User"
}
}
]
If the regexp contains a capturing group then that group will be the selected value:
"api_calls": [
{
"request": "/v1/users/1",
"assert": {
"select": {"key": "body.user.name", "pattern": "^\w+ (\w+)$"},
"equal": "User"
}
}
]
A nested key also works on arrays:
"api_calls": [
{
"request": "/v1/users",
"assert": {
"select": "body.users.name",
"equal": ["First User", "Second User"]
}
}
]
You can use an array index to select a single item from an array:
"api_calls": [
{
"request": "/v1/users",
"assert": {
"select": "body.users.name.1",
"equal": "Second User"
}
}
]
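The limit property caps how many items are selected from an array. Here is a sketch, assuming limit keeps the first items of the selection:
"api_calls": [
  {
    "request": "/v1/users",
    "assert": {
      "select": {"key": "body.users.name", "limit": 1},
      "equal": ["First User"]
    }
  }
]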
You can apply sorting to an array:
"api_calls": [
{
"request": "/v1/users",
"assert": {
"select": {key: "body.users.name", sort: "desc"},
"equal": ["Second User", "First User"]
}
}
]
You can sort an array of objects by a property:
"api_calls": [
{
"request": "/v1/users",
"assert": {
"select": {key: "body.users", sort: {order: "desc", by: "name"}},
"equal": [{name: "Second User"}, {name: "First User"}]
}
}
]
The sort object also supports a type
property that you can set to "time" to sort by a datetime property.
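Here is a sketch of a time sort, assuming a hypothetical articles resource with a published_at timestamp:
"api_calls": [
  {
    "request": "/v1/articles",
    "assert": {
      "select": {"key": "body.articles", "sort": {"order": "desc", "by": "published_at", "type": "time"}},
      "equal": [
        {"title": "Second Post", "published_at": "2014-06-02T10:00:00Z"},
        {"title": "First Post", "published_at": "2014-06-01T10:00:00Z"}
      ]
    }
  }
]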
An assert object is made up of a selection on the response object and one or more assertions against that selection.
If no selection is specified then the assertion will be made against the response body. The built-in assert functions are schema, equal, equal_keys, contains, contains_keys, and size; each is described below.
Each assertion type has a logically inverted counterpart with a not_ prefix, i.e. not_equal, not_contains etc.
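For example, to assert that a selected value does not equal something:
"api_calls": [
  {
    "request": "/v1/users/1",
    "assert": {
      "select": "body.user.name",
      "not_equal": "Another User"
    }
  }
]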
You can provide your own assert functions to fit the needs of your application. Take a look at the built-in
assert functions to see what the code should look like. Each assert function takes two arguments - the selected response value and the value
given to the assert function property. The assert function should return true, false, or an object with an
error_messages property. Custom assert functions take precedence over built-in ones so that you can override them.
Put your assert functions in a globally installed npm package or provide an absolute file path in the config:
"config": {
"modules": {
"assert_functions": "/absolute/path/to/your/assert/functions/file"
}
}
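Here is a minimal sketch of what such a module could export. The starts_with name is made up for illustration, and the module shape (an object keyed by assertion name) is an assumption based on the description above - check the built-in assert functions for the exact format:
'use strict';

module.exports = {
  // Hypothetical custom assertion: passes if the selected value starts with the expected prefix.
  starts_with: function(selectedValue, expectedPrefix) {
    if (typeof selectedValue === 'string' && selectedValue.indexOf(expectedPrefix) === 0) {
      return true;
    }
    return {
      error_messages: ['expected ' + JSON.stringify(selectedValue) + ' to start with ' + JSON.stringify(expectedPrefix)]
    };
  }
};
With that in place an assert object could use "starts_with": "Mr" just like any built-in assertion.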
Since making assertions about the response status code is so common some syntactic sugar is provided:
"api_calls": [
{
"request": "/v1/users",
"status": 200
}
]
The above expands to:
"api_calls": [
{
"request": "/v1/users",
"assert": {
select: "status",
equal: 200
}
}
]
Use the schema
property of an assert object to validate the response against a JSON schema:
"api_calls": [
{
"request": "/v1/users/1",
"assert": {
"schema": {
"type": "object",
"properties": {
"id": {"type": "integer"},
"name": {"type": "string"},
"email": {"type": "string"}
},
"required": ["id", "name", "email"],
"additionalProperties": false
}
}
}
]
The equal assertion does a deep value equality check on arrays and objects. The values null and undefined
are treated as equal. For other primitive values, i.e. numbers, strings and booleans, the types are not required to match and two values are regarded as equal if their string representations are equal.
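For example, the following assertion passes even if the id in the response body is an integer, since "1" and 1 have the same string representation:
{
  "request": "/v1/users/1",
  "assert": {
    "select": "body.id",
    "equal": "1"
  }
}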
If you would like to make assertions against only a subset of keys in the response object you can use equal_keys
instead of equal
. Suppose your user
record has a large number of columns but you would only like to make assertions about the id and the email:
{
"request": "/v1/users/{{users.member.id}}",
"assert": {
"equal_keys": {
"id": "{{users.member.id}}",
"email": "{{users.member.email}}"
}
}
}
The contains
assertion checks if a value is included in an array or a string.
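For example, assuming a hypothetical roles array on the user:
{
  "request": "/v1/users/1",
  "assert": {
    "select": "body.user.roles",
    "contains": "admin"
  }
}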
The contains_keys assertion checks if an array includes an object that matches the specified key-value pairs. It is thus the logical combination of contains and equal_keys.
{
"request": "/v1/users",
"assert": {
"select": "body.users",
"contains_keys": {
"name": "Peter"
}
}
}
The size
assertion checks the length of an array or a string:
{
"request": "/v1/users?limit=2",
"assert": {
"select": "body.users",
"size": 2
}
}
Sometimes it's useful to save data from a response for use in later API calls. In this case you can use the save property, which takes
an object where the keys indicate where you would like to save the data and the values are selectors
(these work the same way as in assertions).
{
"request": {
"method": "PUT",
"path": "/v1/profile",
"params": {
"name": "Some new cool name {{$api_call_id}}"
}
},
"save": {
"saved.update_user.name": "body.name"
}
}
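The saved value can then be interpolated into later API calls just like any other data, for example in a hypothetical follow-up request that verifies the update:
{
  "request": "/v1/profile",
  "assert": {
    "select": "body.name",
    "equal": "{{saved.update_user.name}}"
  }
}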
Data interpolation is done by embedding nested data keys in double curly braces in strings. The interpolation happens right before an API call is executed. Example:
{
"request": "/v1/news?organization_id={{organizations.test.id}}"
}
If the embedded variable encompasses the entire string then the string will be replaced by a value with the same type as the data (any of the JSON data types, i.e. object, array, number, string, boolean, or null). Here is an example where a string with an interpolation is replaced with an object:
{
"request": "/v1/news?organization_id={{organizations.test.id}}",
"assert": {
"equal": "{{organizations.test}}"
}
}
You can use the $merge
special object property to merge (extend) data objects. Here is an example where an authentication header is extended with a content type:
"request": {
"method": "PUT",
"path": "/v1/profile",
"headers": {"$merge": ["{{headers.member_auth}}", {"Content-Type": "multipart/form-data"}]},
"params": {
"name": "Some new cool name"
},
"files": {
"portrait_image": "./test/integration/files/portrait_image.jpg"
}
}
The default logger prints basic request info and test results to standard output. Details about all API calls are
logged in JSON format to a file configured by the config.log_path
property.
If you don't like the default logger you can plug in your own. Take a look at callbacks/console.js to see what the interface looks like:
"config": {
"modules": {
"callbacks": "my_logger_module"
}
}
There is an experimental logger that prints curl command line equivalents of all requests. You can enable it alongside the default logger like so:
"config": {
"modules": {
"callbacks": ['./loggers/console', './loggers/curl']
}
}
Logging is implemented via a generic callback mechanism that allows you to instrument jsonapitest
with the following events:
module.exports = {
suite: {
start: function(suite) {},
end: function(suite) {}
},
test: {
start: function(suite, test) {},
end: function(suite, test) {}
},
api_call: {
start: function(suite, test, apiCall) {},
end: function(suite, test, apiCall, err, result) {}
},
all: {
start: function() {},
end: function(success, results) {}
}
};
The signatures of your callback functions should match those above. All callback functions are invoked with this
set to the context of the test run. This means you can access/modify
test data via this.data in a callback function (i.e. for setup/teardown).
Callback functions are synchronous by default. To get asynchronous invocation, add a callback argument to the function signature.
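Here is a minimal sketch of an asynchronous callbacks module; the extra done argument is what makes the invocation asynchronous, and the setTimeout merely stands in for real async setup work such as seeding a database:
'use strict';

module.exports = {
  test: {
    start: function(suite, test, done) {
      // `this` is the test run context, so test data is available via this.data.
      console.log('starting test', test.name);
      setTimeout(done, 0);
    },
    end: function(suite, test) {}
  }
};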
Configure your custom callbacks module by putting its path in config.modules.callbacks
. Either make sure your
modules are installed globally with npm and provide a relative path or use an absolute path via an environment variable
like this:
"config": {
"modules": {
"callbacks": ['$env.MODULES_PATH/my_first_callback', '$env.MODULES_PATH/my_second_callback']
}
}
Make sure your module exports an object with some or all of the functions listed above (check out callbacks/console.js for an example).