TensorLayer is a deep learning and reinforcement learning library built on top of TensorFlow. It provides rich modules for data processing, model training and model serving, helping both researchers and engineers build practical machine learning workflows.
- [18 Jan] The Chinese book 《深度学习:一起玩转TensorLayer》 (Deep Learning: Play with TensorLayer) is published.
- [17 Dec] Distributed training is now officially supported, contributed by TensorPort; see the tiny example.
- [17 Nov] Released many data augmentation APIs for object detection, see tl.prepro.
- [17 Nov] Support Convolutional LSTM, see ConvLSTMLayer.
- [17 Nov] Support Deformable Convolution, see DeformableConv2dLayer.
- [17 Nov] Download VOC dataset in one line, see load_voc_dataset.
- [17 Oct] We won the Best Open Source Software Award @ACM MM 2017, here is the slide for the presentation [click].
- [17 Sep] New example Chatbot in 200 lines of code for Seq2Seq.
- [17 Sep] Release ROI layer for Object Detection.
- [17 Jun] Release SpatialTransformer2dAffineLayer for Spatial Transformer Networks, see the example code.
- [17 Jun] Release Sub-pixel Convolution 2D for Super-resolution, see the SRGAN code.
- [17 May] You can now use TensorLayer with TFSlim and Keras together!
TensorLayer has installation prerequisites including TensorFlow, numpy, matplotlib and nltk (optional). For GPU support, CUDA and cuDNN are required. Please check the documentation for detailed instructions. The simplest way to install TensorLayer is:
[for master version] pip install git+https://github.com/zsdonghao/tensorlayer.git (Highly Recommended)
[for stable version] pip install tensorlayer
Examples can be found in this folder and the Github topic.
- Multi-layer perceptron (MNIST) - Classification task, see tutorial_mnist_simple.py.
- Multi-layer perceptron (MNIST) - Classification using Iterator, see method1 and method2.
- Denoising Autoencoder (MNIST). Classification task, see tutorial_mnist.py.
- Stacked Denoising Autoencoder and Fine-Tuning (MNIST). Classification task, see tutorial_mnist.py.
- Convolutional Network (MNIST). Classification task, see tutorial_mnist.py.
- Convolutional Network (CIFAR-10). Classification task, see tutorial_cifar10.py and tutorial_cifar10_tfrecord.py.
- VGG 16 (ImageNet). Classification task, see tutorial_vgg16.py.
- VGG 19 (ImageNet). Classification task, see tutorial_vgg19.py.
- InceptionV3 (ImageNet). Classification task, see tutorial_inceptionV3_tfslim.py.
- Wide ResNet (CIFAR) by ritchieng.
- More CNN implementations from TF-Slim can be connected to TensorLayer via SlimNetsLayer.
- Spatial Transformer Networks by zsdonghao.
- U-Net for brain tumor segmentation by zsdonghao.
- Variational Autoencoder (VAE) for CelebA by yzwxx.
- Variational Autoencoder (VAE) for MNIST by BUPTLdy.
- Image Captioning - Reimplementation of Google's im2txt by zsdonghao.
- Recurrent Neural Network (LSTM). Apply multiple LSTM to PTB dataset for language modeling, see tutorial_ptb_lstm.py and tutorial_ptb_lstm_state_is_tuple.py.
- Word Embedding (Word2vec). Train a word embedding matrix, see tutorial_word2vec_basic.py.
- Restore Embedding matrix. Restore a pre-trained embedding matrix, see tutorial_generate_text.py.
- Text Generation. Generate new text scripts using an LSTM network, see tutorial_generate_text.py.
- Chinese Text Anti-Spam by pakrchen.
- Chatbot in 200 lines of code for Seq2Seq.
- FastText Sentence Classification (IMDB), see tutorial_imdb_fasttext.py by tomtung.
- DCGAN (CelebA). Generating images by Deep Convolutional Generative Adversarial Networks by zsdonghao.
- Generative Adversarial Text to Image Synthesis by zsdonghao.
- Unsupervised Image to Image Translation with Generative Adversarial Networks by zsdonghao.
- Improved CycleGAN with resize-convolution by luoxier.
- Super Resolution GAN by zsdonghao.
- DAGAN: Fast Compressed Sensing MRI Reconstruction by nebulaV.
- Policy Gradient / Network (Atari Ping Pong), see tutorial_atari_pong.py.
- Deep Q-Network (Frozen lake), see tutorial_frozenlake_dqn.py.
- Q-Table learning algorithm (Frozen lake), see tutorial_frozenlake_q_table.py.
- Asynchronous Policy Gradient using TensorDB (Atari Ping Pong) by nebulaV.
- AC for discrete action space (Cartpole), see tutorial_cartpole_ac.py.
- A3C for continuous action space (Bipedal Walker), see tutorial_bipedalwalker_a3c*.py.
- DAGGER for Gym Torcs by zsdonghao.
- TRPO for continuous and discrete action space by jjkke88.
- Distributed Training. tutorial_mnist_distributed.py by jorgemf.
- Merge TF-Slim into TensorLayer. tutorial_inceptionV3_tfslim.py.
- Merge Keras into TensorLayer. tutorial_keras.py.
- Data augmentation with TFRecord. Effective way to load and pre-process data, see tutorial_tfrecord*.py and tutorial_cifar10_tfrecord.py.
- Data augmentation with TensorLayer, see tutorial_image_preprocess.py.
- TensorDB by fangde see here.
- A simple web service - TensorFlask by JoelKronander.
- Float 16 half-precision model, see tutorial_mnist_float16.py.
- TensorLayer provides two sets of Convolutional layer APIs, see (Professional) and (Simplified) on the readthedocs website.
- If you get into trouble, you can start a discussion on Slack, Gitter, Help Wanted Issues, the QQ group or the WeChat group.
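Many of the tutorials above train by looping over the dataset in fixed-size minibatches. The following is a pure-Python stand-in that illustrates this pattern (TensorLayer ships its own `tl.iterate.minibatches`; this sketch is an illustration only, not the library code):

```python
# A minimal minibatch generator: yields successive (inputs, targets)
# pairs of length batch_size, dropping any incomplete final batch
# (the same convention tl.iterate.minibatches uses by default).
def minibatches(inputs, targets, batch_size):
    assert len(inputs) == len(targets)
    for start in range(0, len(inputs) - batch_size + 1, batch_size):
        end = start + batch_size
        yield inputs[start:end], targets[start:end]

# Toy data: 10 samples, so batch_size=4 gives two full batches
X = list(range(10))
Y = [i * i for i in X]
for x_batch, y_batch in minibatches(X, Y, batch_size=4):
    print(x_batch, y_batch)
# → [0, 1, 2, 3] [0, 1, 4, 9]
# → [4, 5, 6, 7] [16, 25, 36, 49]
```

In a real training loop each batch would be fed to the TensorFlow session via `feed_dict`.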
As deep learning practitioners, we have been looking for a library that can serve across the various phases of development. Such a library should be easy for beginners, providing rich reference implementations of neural networks. Later, it should let users control the training backend to exhibit low-level cognitive behaviours, so it can be extended to address real-world problems. Eventually, it should be able to serve in challenging production environments.
TensorLayer is designed for beginner, intermediate and professional deep learning users, with the following goals:
- Simplicity : TensorLayer lifts the low-level dataflow abstraction of TensorFlow to high-level deep learning modules. Users often find it easy to bootstrap with TensorLayer, and then dive into the low-level implementation only if needed.
- Transparency : TensorLayer provides access to the native APIs of TensorFlow. This helps users achieve flexible controls within the training engine.
- Composability : If possible, deep learning modules are composed, not built. TensorLayer can glue existing pieces together (e.g., connected with TF-Slim and Keras).
- Performance : TensorLayer provides zero-cost abstraction (see Benchmark below). It can run on distributed and heterogeneous TensorFlow platforms with full power.
A common concern about TensorLayer is its performance overhead. We investigated this by running classic models with TensorLayer and with native TensorFlow implementations on a Titan X Pascal GPU. The following are the training throughputs for the respective tasks:
| | CIFAR-10 | PTB LSTM | Word2Vec |
|---|---|---|---|
| TensorLayer | 2528 images/s | 18063 words/s | 58167 words/s |
| TensorFlow | 2530 images/s | 18075 words/s | 58181 words/s |
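To put the gap in perspective, the CIFAR-10 column implies a relative slowdown of well under 0.1%. A quick back-of-the-envelope check:

```python
# Relative overhead implied by the CIFAR-10 throughput figures above
tl_speed = 2528.0   # images/s with TensorLayer
tf_speed = 2530.0   # images/s with native TensorFlow
overhead = (tf_speed - tl_speed) / tf_speed
print("overhead: %.3f%%" % (overhead * 100))  # → overhead: 0.079%
```

The PTB LSTM and Word2Vec columns give similarly negligible ratios, which is what "zero-cost abstraction" refers to.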
A frequent question regarding TensorLayer is how it differs from other libraries such as Keras, TF-Slim and TFLearn. These libraries are comfortable to start with: they provide imperative abstractions that lower the adoption barrier, but in turn they mask the underlying engine from users. Though good for bootstrapping, this makes it hard to tune and modify a model from the bottom up, which is often necessary when tackling real-world problems.
Without compromising simplicity, TensorLayer advocates a more flexible and composable paradigm: neural network libraries should be usable interchangeably with the native engine. This lets users enjoy the ease of pre-built modules without losing visibility into the internals. This non-intrusive nature also makes it viable to consolidate with other TensorFlow wrappers.
The documentation [Online] [PDF] [Epub] [HTML] describes the usages of TensorLayer APIs. It is also a self-contained document that walks through different types of deep neural networks, reinforcement learning and their applications in Natural Language Processing (NLP) problems.
We have included the corresponding modularized implementations of Google TensorFlow Deep Learning tutorial, so you can read the TensorFlow tutorial [en] [cn] along with our document. Chinese documentation is also available.
TensorLayer is maintained by numerous Github contributors here. The project is under active development and has received numerous contributions from an open community. It has been widely used by researchers from Imperial College London, Carnegie Mellon University, Stanford University, Tsinghua University, UCLA and Linköping University, among others, as well as engineers from Google, Microsoft, Alibaba, Tencent, Penguins Innovate, ReFULE4, Bloomberg, GoodAILab and many others.
- 🇬🇧 If you have questions, drop us an email: [email protected].
- 🇨🇳 We provide official Chinese documentation, and we have also set up several discussion channels, such as the QQ group and the WeChat group.
If you find TensorLayer useful, please cite our paper in your projects and publications:
@article{haoTL2017,
author = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},
journal = {ACM Multimedia},
title = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},
url = {http://tensorlayer.org},
year = {2017}
}
TensorLayer is released under the Apache 2.0 license.