The Metropolis-Hastings implementation seems to work reasonably well.
- 20201221: HMC and NUTS implementations are removed (they did not work well).
- 20201213: MH and HMC work well with simple examples.
- 20201125: Metropolis-Hastings and Hamiltonian Monte Carlo.
- 20200927: Want to build probabilistic programming support.
- 20200823: Done with the Grokking Deep Reinforcement Learning book.
- 20200430: New simple foreign memory management (triggers GC; tested with SBCL).
- 20200329: New RNN layer based APIs.
- 20200120: I think the current state of TH is generally usable, but it needs more examples.
- 20191226: Clozure CL runs TH code very well; so far, CCL has not shown the memory thrashing problems.
- 20191216: Version 1.44 of TH runs all the code under examples without problems, including dlfs and gdl. The code runs on Clozure CL as well as SBCL; however, SBCL shows much better performance.
A Common Lisp deep learning library which supports automatic backpropagation. I'd like to learn how neural networks and automatic backpropagation work, and this is my personal journey on the subject. From an API design point of view, I prefer mathematical-style operators to layer-like abstractions; operators expose more of the real operations behind a neural network, at the cost of slightly more tedious typing. However, you can always write helper functions to reduce that tedium if you want to; a sketch of the operator style follows.
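Here is a minimal, hypothetical sketch of the operator style, assuming the constructors and $-prefixed operators from the API summary below behave as their names suggest; exact signatures may differ.

```lisp
;; A 2-3-1 XOR network forward pass in the operator style. TENSOR, RNDN,
;; $PARAMETER, $@, $SIGMOID, $-, $*, $SUM and $GD! are named elsewhere in
;; this README; the argument conventions used here are assumptions.
(let* ((x (tensor '((0 0) (0 1) (1 0) (1 1)))) ; XOR inputs
       (y (tensor '((0) (1) (1) (0))))         ; XOR targets
       (w1 ($parameter (rndn 2 3)))            ; differentiable weights
       (w2 ($parameter (rndn 3 1)))
       (h ($sigmoid ($@ x w1)))                ; hidden layer
       (y* ($sigmoid ($@ h w2)))               ; output layer
       (d ($- y* y))
       (loss ($sum ($* d d))))                 ; sum of squared errors
  ;; one gradient-descent step; that $GD! backpropagates from the loss
  ;; node is an assumption here, not a confirmed API
  ($gd! loss 0.1))
```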
There should be a tensor/neural network library in Common Lisp which is easy to use (byte me!). I'd like to learn deep learning, and I think building a library from scratch is the best way to do it. I hope this library can also be applied to problems in differentiable programming. You can see what the library can do from the examples, which are mostly neural network applications. Performance-wise, I think it is rather good; however, I have not yet found a better, automated way of keeping memory usage low, so you have to insert full GC instructions at appropriate points (see the memory management notes below).
At first, I used libATen from pytorch, but that project abandoned all the previous C interfaces in TH and libTHNN, so I reverted to torch. But this created another problem with indexing. And to build the lib files I would have to install cmake and other dependencies which TH does not use. So I forked the code into https://bitbucket.org/chunsj/LibTH (there is no makefile for an automated build yet; I'll write one). After building, copy libTHTensor.0.dylib and libTHNeural.0.dylib, then symlink each file as libTHTensor.dylib and libTHNeural.dylib respectively. Though the current version of th does not support CUDA, I plan to support it; for this you will need libTHCTensor and libTHCNeural under the torch installation directory. Because of these recent library changes, there might still be some problems from function signature changes between aten and TH/THNN; these are under investigation. You'd better use the MKL version of libTH on macOS; the eigenvalue/eigenvector routines emit errors if libTH uses Accelerate.framework.
- Build https://bitbucket.org/chunsj/libth/src/master/ and install the two libraries.
- You'll need my utility library mu.
- Link or clone this repository and mu into quicklisp's local-projects directory.
- Check the library paths in the load.lisp file.
- Load with quicklisp: (ql:quickload :th)
- If there's an error, recheck the previous steps.
- Additionally, there are th.images and th.text support libraries for the examples; loading is shown below.
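From the REPL, loading looks like this; the :th system name comes from the steps above, while :th.images and :th.text are assumed to be the system names of the support libraries.

```lisp
;; TH and MU must be visible under ~/quicklisp/local-projects
;; for QL:QUICKLOAD to find them.
(ql:quickload :th)

;; Optional support libraries used by some examples (system names assumed).
(ql:quickload :th.images)
(ql:quickload :th.text)
```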
- Basic tensor operations: 1
- Some examples on auto-backpropagation: 2
- XOR neural network: 3
- MNIST convolutional neural network: 4
- Cats and Dogs CNN: 5
- IMDB sentiment analysis: 6 (cl-ppcre is required)
- Binary number addition using vanilla RNN: 7
- Simple RNN examples based on the layers API: 8-1 8-2
- Karpathy's character generation using RNN/LSTM: 9-1 9-2 9-3
- Autoencoder: 10-1 10-2 10-3
- Restricted Boltzmann Machine: 11
- Simple GAN (Fitting normal distribution): 12
- Generative Adversarial Network: 13-1 13-2 13-3 13-4 13-5 (opticl is required)
- Deep Convolutional GAN: 14-1 14-2
- Neural Arithmetic Logic Unit or NALU: 15
- Sequence-to-sequence with attention: 16
- VGG16, pretrained model: 17 (refer to torch-vgg16.py under scratch/python)
- VGG19, pretrained model: 18 (refer to torch-vgg16.py under scratch/python)
- ResNet50/101/152, pretrained models: 19 (refer to torch-resnet50.py)
- DenseNet161, pretrained model: 20 (refer to torch-densenet161.py)
- SqueezeNet1.1, pretrained model: 21 (refer to torch-squeezenet11.py)
- Fully convolutional network: 22
- Hidden Markov model: 23 (from the Machine Learning with TensorFlow book)
- Reinforcement learning example: 24 (from the same book)
- Neural Fitted Q-Iteration example: 25 (refer to github.com/seungjaeryanlee)
- Deep Q-Network/Double DQN: 26-1 26-2
- Simple Metropolis-Hastings: 27
- Mining Disaster example with MCMC/MH (MCMC/HMC version removed): 28
- Linear Regression with MCMC/MH: 29
- Variational Inference Examples: 30
- Examples from Bayesian Methods for Hackers: 31-1 31-2
- More Estimation Problems: 32
Currently five models are supported: VGG16, VGG19, ResNet50, DenseNet161, and SqueezeNet1.1. I'll add more models if time permits. Refer to the corresponding weight-file generation scripts written in Python (using pytorch). Generated weight files should go under the home directory as ~/.th/models/[modelname] (for the exact path, refer to the vgg16.lisp code).
- Differentiable parameter creation: $parameter
- State (recurrent) creation/accessing: $state, $prev
- Operators: $+, $-, $*, $/, $@, ...
- Functions: $sigmoid, $tanh, $softmax, ...
- Gradient descent or parameter update: $gd!, $mgd!, $agd!, $amgd!, $rmgd!, $adgd!
- Weight initialization: $rn!, $ru!, $rnt!, $xavieru!, $xaviern!, $heu!, $hen!, $lecunu!, ...
- Weight creation utilities: vrn, vru, vrnt, vxavier, vhe, vlecun
- For easy construction: the th.layers API, e.g., sequential-layer, affine-layer, ... (see the sketch below)
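A hedged sketch of the layers style, for contrast with the operator sketch earlier; sequential-layer and affine-layer are named above, but the :activation keyword argument is an assumption about their signatures.

```lisp
;; The same kind of 2-3-1 network, built with the th.layers API instead
;; of raw operators. The :ACTIVATION keyword is an assumed constructor
;; argument and may differ from the actual AFFINE-LAYER signature.
(defparameter *net*
  (sequential-layer
   (affine-layer 2 3 :activation :sigmoid)
   (affine-layer 3 1 :activation :sigmoid)))
```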
- Deep Learning from Scratch: examples/books/dlfs
- Grokking Deep Learning: examples/books/gdl
- Grokking Deep Reinforcement Learning: examples/books/gdrl
- MNIST: db/mnist.lisp; you need to download the original MNIST data, unpack it, and generate the dataset. Refer to the generate-mnist-data function in db/mnist.lisp.
- Fashion MNIST: db/fashion.lisp; same procedure as the MNIST data above.
- CIFAR-10/CIFAR-100: db/cifar.lisp; same procedure as the MNIST data above.
- CelebA: db/celeba.lisp; resized dataset for faster loading.
- Cats and Dogs: db/cats-and-dogs.lisp, resized dataset for faster loading.
- IMDB: db/imdb.lisp
- Misc CSV Files: data
- Most of the code in this folder is just for testing, teasing, or random trashing.
- It may not work at all.
SBCL and CCL do not know about the memory pressure from foreign-allocated memory, so TH counts foreign allocations and triggers a full garbage collection when the total exceeds a predefined size. The current implementation is tested with SBCL only. If you have a better idea, please let me know. The default maximum is 4GB; you can change it with the th-set-maximum-allowed-heap-size function. Note that the function's argument is a size in MB.
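For example (the function name and MB unit come from the paragraph above):

```lisp
;; Raise the foreign-heap threshold from the default 4GB to 8GB;
;; TH-SET-MAXIMUM-ALLOWED-HEAP-SIZE takes the size in megabytes.
(th-set-maximum-allowed-heap-size 8192)
```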
- Apply the new layer-based API; though I don't like it, I cannot yet find a better alternative.
- More application examples, especially machine learning algorithms other than neural networks.
- Find out why using Accelerate.framework makes geev emit a floating-point overflow error.