Turing is a Julia library for (universal) probabilistic programming. It was originally created and is currently maintained by Hong Ge. The full list of contributors is Hong Ge, Adam Scibior, Matej Balog, Zoubin Ghahramani, Kai Xu and Emma Smith. Turing is an open-source project, so if you have relevant skills and are interested in contributing, please do get in touch.
To use Turing, you need to install Julia first and then install the Turing package.
You will need Julia 0.5 (0.4 also works, but 0.5 is recommended), which you can get from the official Julia website.
The website provides three options for users:
- a command line version, available from the Julia downloads page
- Juno, a community-maintained IDE
- JuliaBox.com, a Jupyter notebook running in the browser
For the command line version, we recommend installing the binaries downloaded from Julia's official website, as Turing may not work correctly with Julia obtained from other sources (e.g. Turing does not work with Julia installed via apt-get, due to missing header files).
Juno also requires the command line version to be installed. This IDE is recommended for heavy users who need features such as debugging and quick documentation look-up.
JuliaBox provides a pre-installed Jupyter notebook for Julia, so you can try out Turing within seconds, without installing anything on your machine.
Turing is an officially registered Julia package, so the following should install a stable version of Turing:
Pkg.update()
Pkg.add("Turing")
Pkg.build("Turing")
Pkg.test("Turing")
If you want to use the latest version of Turing with some experimental features, you can try the following instead:
Pkg.update()
Pkg.clone("Turing")
Pkg.build("Turing")
Pkg.test("Turing")
If all tests pass, you're ready to start using Turing.
A Turing probabilistic program is just a normal Julia program, wrapped in a @model
macro, that uses some of the special macros illustrated below. Available inference methods include Importance Sampling (IS), Sequential Monte Carlo (SMC), Particle Gibbs (PG) and Hamiltonian Monte Carlo (HMC).
# Define a simple Normal model with unknown mean and variance.
@model gdemo(x) = begin
  # Priors on the variance and the mean.
  s ~ InverseGamma(2, 3)
  m ~ Normal(0, sqrt(s))
  # Observations.
  x[1] ~ Normal(m, sqrt(s))
  x[2] ~ Normal(m, sqrt(s))
  return s, m
end
Inference methods are functions which take the probabilistic program as one of the arguments.
# Run sampler, collect results
chain = @sample(gdemo([1.5, 2]), SMC(500))
chain = @sample(gdemo([1.5, 2]), PG(10,500))
chain = @sample(gdemo([1.5, 2]), HMC(1000, 0.1, 5))
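IS is also listed among the available inference methods. Assuming it takes a single argument, the number of samples (an assumption; check the IS docstring), the call would look like:

```julia
# Draw 500 importance samples (argument meaning assumed, not confirmed above).
chain = @sample(gdemo([1.5, 2]), IS(500))
```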
The arguments for each sampler are:
- SMC: number of particles
- PG: number of particles, number of iterations
- HMC: number of samples, leapfrog step size, number of leapfrog steps
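Each call above returns a chain holding the simulated values of the model's random variables. As a minimal sketch of inspecting the result (indexing the chain by variable name, as in `chain[:m]`, is an assumption here; consult the Turing documentation for the exact chain interface):

```julia
# Run HMC: 1000 samples, step size 0.1, 5 leapfrog steps per sample.
chain = @sample(gdemo([1.5, 2]), HMC(1000, 0.1, 5))

# Extract the sampled values of the mean parameter m and summarise them.
# (chain[:m] indexing is an assumed API, not confirmed by the text above.)
m_samples = chain[:m]
println(mean(m_samples))
```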
To cite Turing, please refer to the technical report. A sample BibTeX entry is given below:
@ARTICLE{Turing2016,
  author = {Ge, Hong and {\'S}cibior, Adam and Xu, Kai and Ghahramani, Zoubin},
  title  = "{Turing: A fast imperative probabilistic programming language.}",
  year   = 2016,
  month  = jun
}