SciPy uses the Nose testing system, with some minor convenience features added. Nose is an extension of the unit testing framework offered by unittest.py. Our goal is that every module and package in SciPy should have a thorough set of unit tests. These tests should exercise the full functionality of a given routine as well as its robustness to erroneous or unexpected input arguments.

Long experience has shown that by far the best time to write the tests is before you write or change the code - this is test-driven development. The arguments for this can sound rather abstract, but we can assure you that you will find that writing the tests first leads to more robust and better designed code. Well-designed tests with good coverage make an enormous difference to the ease of refactoring. Whenever a new bug is found in a routine, you should write a new test for that specific case and add it to the test suite to prevent that bug from creeping back in unnoticed.
To run SciPy's full test suite, use the following:
>>> import scipy
>>> scipy.test()
SciPy uses the testing framework from NumPy (specifically numpy.testing), so all of the SciPy examples shown here are also applicable to NumPy. NumPy's full test suite can be run as follows:
>>> import numpy
>>> numpy.test()
The test method may take two or more arguments; the first, label, is a string specifying what should be tested and the second, verbose, is an integer giving the level of output verbosity. See the docstring for numpy.test for details. The default value for label is 'fast', which will run the standard tests. The string 'full' will run the full battery of tests, including those identified as being slow to run. If verbose is 1 or less, the tests will just show information messages about the tests that are run; but if it is greater than 1, then the tests will also provide warnings on missing tests. So if you want to run every test and get messages about which modules don't have tests:
>>> scipy.test(label='full', verbose=2) # or scipy.test('full', 2)
Finally, if you are only interested in testing a subset of SciPy, for example, the integrate module, use the following:
>>> scipy.integrate.test()
The rest of this page will give you a basic idea of how to add unit tests to modules in SciPy. It is extremely important for us to have extensive unit testing, since this code is going to be used by scientists and researchers and is being developed by a large number of people spread across the world. So, if you are writing a package that you'd like to become part of SciPy, please write the tests as you develop the package. Also, since much of SciPy is legacy code that was originally written without unit tests, there are still several modules that don't have tests yet. Please feel free to choose one of these modules and develop tests for it as you read through this introduction.
Every Python module, extension module, or subpackage in the SciPy package directory should have a corresponding test_<name>.py file. Nose examines these files for test methods (named test*) and test classes (named Test*).
Suppose you have a SciPy module scipy/xxx/yyy.py containing a function zzz(). To test this function you would create a test module called test_yyy.py. If you only need to test one aspect of zzz, you can simply add a test function:
def test_zzz():
    assert_(zzz() == 'Hello from zzz')
More often, we need to group a number of tests together, so we create a test class:
from numpy.testing import assert_, assert_raises

# import xxx symbols
from scipy.xxx.yyy import zzz

class TestZzz:
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')

    def test_invalid_parameter(self):
        assert_raises(...)
Within these test methods, assert_() and related functions are used to test whether a certain assumption is valid. If the assertion fails, the test fails. Note that the Python builtin assert should not be used, because it is stripped out when Python is run with the -O optimization flag.
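For example, the helpers in numpy.testing include assert_, assert_equal, assert_almost_equal, and assert_raises. A minimal sketch (the values checked here are purely illustrative):

import numpy as np
from numpy.testing import assert_, assert_equal, assert_almost_equal, assert_raises

def test_assertion_helpers():
    assert_(1 + 1 == 2)                              # generic truth check
    assert_equal(np.arange(3).sum(), 3)              # exact equality
    assert_almost_equal(np.pi, 3.14159, decimal=5)   # floating-point comparison
    assert_raises(ValueError, np.arange(3).reshape, (2, 2))  # expected exception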
Note that test_ functions or methods should not have a docstring, because that makes it hard to identify the test from the output of running the test suite with verbose=2 (or similar verbosity setting). Use plain comments (#) if necessary.
Sometimes it is convenient to run test_yyy.py by itself, so we add

if __name__ == "__main__":
    run_module_suite()

at the bottom (run_module_suite is importable from numpy.testing).
Unlabeled tests like the ones above are run in the default scipy.test() run. If you want to label your test as slow, and therefore reserved for a full scipy.test(label='full') run, you can label it with a nose decorator:
# numpy.testing module includes 'import decorators as dec'
from numpy.testing import dec, assert_

@dec.slow
def test_big():
    print('Big, slow test')
Similarly for methods:

class TestZzz:
    @dec.slow
    def test_simple(self):
        assert_(zzz() == 'Hello from zzz')
Nose looks for module-level setup and teardown functions by name; thus:

def setup():
    """Module-level setup"""
    print('doing setup')

def teardown():
    """Module-level teardown"""
    print('doing teardown')
You can add setup and teardown functions to functions and methods with nose decorators:
import nose
# import all functions from numpy.testing that are needed
from numpy.testing import assert_, assert_array_almost_equal

def setup_func():
    """A trivial setup function."""
    global helpful_variable
    helpful_variable = 'pleasant'
    print("In setup_func")

def teardown_func():
    """A trivial teardown function."""
    global helpful_variable
    del helpful_variable
    print("In teardown_func")

@nose.with_setup(setup_func, teardown_func)
def test_with_extras():
    # This test uses the setup/teardown functions.
    global helpful_variable
    print(" In test_with_extras")
    print(" Helpful is %s" % helpful_variable)
One very nice feature of nose is allowing easy testing across a range of parameters - a nasty problem for standard unit tests. It does this with test generators:
def check_even(n, nn):
    """A check function to be used in a test generator."""
    assert_(n % 2 == 0 or nn % 2 == 0)

def test_evens():
    for i in range(0, 4, 2):
        yield check_even, i, i*3
Note that check_even is not itself a test (no 'test' in the name), but test_evens is a generator that returns a series of tests, using check_even, across a range of inputs.
A problem with generator tests can be that if a test is failing, it's hard to see for which parameters. To avoid this problem, ensure that:
- No computation related to the features tested is done in the test_* generator function, but delegated to a corresponding check_* function (which can be inside the generator, to share the namespace).
- The generators are used solely for loops over parameters.
- Those parameters are not arrays.
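A minimal sketch following these guidelines (the function being exercised is hypothetical):

from numpy.testing import assert_equal

def test_squares():
    # The check function carries the parameters in its arguments and does
    # all the real work, so a failure reports which parameters were used.
    def check_square(n, expected):
        assert_equal(n * n, expected)
    # The generator itself only loops over (scalar) parameters.
    for n, expected in [(0, 0), (2, 4), (3, 9)]:
        yield check_square, n, expected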
Warning: Parametric tests cannot be implemented on classes derived from TestCase.
Doctests are a convenient way of documenting the behavior of a function and allowing that behavior to be tested at the same time. The output of an interactive Python session can be included in the docstring of a function, and the test framework can run the example and compare the actual output to the expected output.
The doctests can be run by adding the doctests argument to the test() call; for example, to run all tests (including doctests) for numpy.lib:
>>> import numpy as np
>>> np.lib.test(doctests=True)
The doctests are run as if they are in a fresh Python instance which has executed import numpy as np. Tests that are part of a SciPy subpackage will have that subpackage already imported. E.g. for a test in scipy/linalg/tests/, the namespace will be created such that from scipy import linalg has already executed.
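As an illustration, a doctest in a (hypothetical) function's docstring might look like this; note that np can be used without an explicit import:

def add_one(x):
    """Add one to the input.

    Examples
    --------
    >>> add_one(2)
    3
    >>> add_one(np.array([1, 2]))
    array([2, 3])
    """
    return x + 1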
Rather than keeping the code and the tests in the same directory, we put all the tests for a given subpackage in a tests/ subdirectory. For our example, if it doesn't already exist you will need to create a tests/ directory in scipy/xxx/. So the path for test_yyy.py is scipy/xxx/tests/test_yyy.py.
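For the running example, the resulting layout looks like this:

scipy/xxx/
    __init__.py
    yyy.py
    tests/
        test_yyy.py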
Once scipy/xxx/tests/test_yyy.py is written, it's possible to run the tests by going to the tests/ directory and typing:
python test_yyy.py
Or if you add scipy/xxx/tests/ to the Python path, you could run the tests interactively in the interpreter like this:
>>> import test_yyy
>>> test_yyy.test()
Usually, however, adding the tests/ directory to the Python path isn't desirable. Instead, it would be better to invoke the test straight from the module xxx. To this end, simply place the following lines at the end of your package's __init__.py file:
...
def test(level=1, verbosity=1):
    from numpy.testing import Tester
    return Tester().test(level, verbosity)
You will also need to add the tests directory to the configuration section of your setup.py:

...
def configuration(parent_package='', top_path=None):
    ...
    config.add_data_dir('tests')
    return config
...
Now you can do the following to test your module:
>>> import scipy
>>> scipy.xxx.test()
Also, when invoking the entire SciPy test suite, your tests will be found and run:
>>> import scipy
>>> scipy.test()
# your tests are included and run automatically!
If you have a collection of tests that must be run multiple times with minor variations, it can be helpful to create a base class containing all the common tests, and then create a subclass for each variation. Several examples of this technique exist in NumPy; below are excerpts from one in numpy/linalg/tests/test_linalg.py:
class LinalgTestCase:
    def test_single(self):
        a = array([[1., 2.], [3., 4.]], dtype=single)
        b = array([2., 1.], dtype=single)
        self.do(a, b)

    def test_double(self):
        a = array([[1., 2.], [3., 4.]], dtype=double)
        b = array([2., 1.], dtype=double)
        self.do(a, b)

    ...

class TestSolve(LinalgTestCase):
    def do(self, a, b):
        x = linalg.solve(a, b)
        assert_almost_equal(b, dot(a, x))
        assert_(imply(isinstance(b, matrix), isinstance(x, matrix)))

class TestInv(LinalgTestCase):
    def do(self, a, b):
        a_inv = linalg.inv(a)
        assert_almost_equal(dot(a, a_inv), identity(asarray(a).shape[0]))
        assert_(imply(isinstance(a, matrix), isinstance(a_inv, matrix)))
In this case, we wanted to test solving a linear algebra problem using matrices of several data types, using linalg.solve and linalg.inv. The common test cases (for single-precision, double-precision, etc. matrices) are collected in LinalgTestCase.
Sometimes you might want to skip a test or mark it as a known failure, such as when the test suite is being written before the code it's meant to test, or if a test only fails on a particular architecture. The decorators from numpy.testing.dec can be used to do this.
To skip a test, simply use skipif:
from numpy.testing import dec

@dec.skipif(SkipMyTest, "Skipping this test because...")
def test_something(foo):
    ...
The test is marked as skipped if SkipMyTest evaluates to nonzero, and the message in verbose test output is the second argument given to skipif. Similarly, a test can be marked as a known failure by using knownfailureif:
from numpy.testing import dec

@dec.knownfailureif(MyTestFails, "This test is known to fail because...")
def test_something_else(foo):
    ...
Of course, a test can be unconditionally skipped or marked as a known failure by passing True as the first argument to skipif or knownfailureif, respectively.
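For instance (a minimal sketch; the test names and messages are made up):

from numpy.testing import dec

@dec.skipif(True, "Skipped until the feature is implemented")
def test_not_yet_implemented():
    ...

@dec.knownfailureif(True, "Fails everywhere; see the relevant ticket")
def test_known_broken():
    ...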
The total number of skipped and known failing tests is displayed at the end of the test run. Skipped tests are marked as 'S' in the test results (or 'SKIPPED' for verbose > 1), and known failing tests are marked as 'K' (or 'KNOWN' if verbose > 1).