
PaddlePaddle


Welcome to the PaddlePaddle GitHub repository.

PaddlePaddle (PArallel Distributed Deep LEarning) is an easy-to-use, efficient, flexible, and scalable deep learning platform. It was originally developed by Baidu scientists and engineers to apply deep learning to many products at Baidu.

Our vision is to enable deep learning for everyone via PaddlePaddle. Please refer to our release announcement to track the latest features of PaddlePaddle.

Features

  • Flexibility

    PaddlePaddle supports a wide range of neural network architectures and optimization algorithms. It is easy to configure complex models such as a neural machine translation model with an attention mechanism or complex memory connections.

  • Efficiency

    To unleash the power of heterogeneous computing resources, optimization occurs at different levels of PaddlePaddle, including computing, memory, architecture, and communication. Some examples:

    • Optimized math operations through SSE/AVX intrinsics, BLAS libraries (e.g. MKL, ATLAS, cuBLAS) or customized CPU/GPU kernels.
    • Highly optimized recurrent networks that can handle variable-length sequences without padding.
    • Optimized local and distributed training for models with high dimensional sparse data.
  • Scalability

    With PaddlePaddle, it is easy to use many CPUs/GPUs and machines to speed up your training. PaddlePaddle can achieve high throughput and performance via optimized communication.

  • Connected to Products

    In addition, PaddlePaddle is designed to be easily deployable. At Baidu, it has been deployed in products and services with vast numbers of users, including ad click-through rate (CTR) prediction, large-scale image classification, optical character recognition (OCR), search ranking, computer virus detection, and recommendation. It is widely used in products across Baidu and has had a significant impact. We hope you can also exploit PaddlePaddle to make an impact with your own products.
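The Efficiency point above about handling variable-length sequences without padding can be illustrated with a small NumPy sketch. This is a generic offsets-based batching scheme for illustration only; the function names here are invented, and PaddlePaddle's internal sequence representation may differ.

```python
import numpy as np

# Illustrative only: batch variable-length sequences without padding by
# concatenating them and keeping the start offset of each sequence.

def batch_without_padding(sequences):
    """Concatenate sequences and record where each one starts."""
    flat = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    offsets = np.cumsum([0] + lengths)  # flat[offsets[i]:offsets[i+1]] is sequence i
    return flat, offsets

def per_sequence_mean(flat, offsets):
    """Reduce each sequence independently; no wasted work on pad tokens."""
    return np.array([flat[offsets[i]:offsets[i + 1]].mean()
                     for i in range(len(offsets) - 1)])

seqs = [np.array([1.0, 2.0, 3.0]), np.array([10.0]), np.array([4.0, 6.0])]
flat, offsets = batch_without_padding(seqs)
means = per_sequence_mean(flat, offsets)
# means -> [2.0, 10.0, 5.0]
```

With padding, the batch above would occupy a 3x3 matrix with three wasted slots; the offsets representation stores exactly six values and never touches a pad token.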

Installation

Check out the Install Guide to install from pre-built packages (Docker image, deb package) or to build directly from source on Linux and Mac OS X.

Documentation

Both English Docs and Chinese Docs are provided for our users and developers.

  • Quick Start
    You can follow the quick start tutorial to learn how to use PaddlePaddle step by step.

  • Example and Demo
    We provide five demos: image classification, sentiment analysis, sequence-to-sequence modeling, recommendation, and semantic role labeling.

  • Distributed Training
    This system supports training deep learning models on multiple machines with data parallelism.

  • Python API
    PaddlePaddle supports building your system with either the Python interface or C++. We use SWIG to wrap the C++ source code into a user-friendly Python interface. You can also use SWIG to create an interface for your favorite programming language.

  • How to Contribute
    We sincerely appreciate your interest and contributions. If you would like to contribute, please read the contribution guide.

  • Source Code Documents
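The data parallelism described under Distributed Training above can be sketched in a few lines of NumPy: split the minibatch across workers, compute a gradient per worker, average the gradients, and apply one shared update. This toy single-process version only illustrates the math on a linear least-squares model; it is not PaddlePaddle's actual multi-machine implementation.

```python
import numpy as np

def gradient(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, num_workers, lr=0.1):
    """One synchronous data-parallel SGD step (simulated on one process)."""
    shards = zip(np.array_split(X, num_workers), np.array_split(y, num_workers))
    grads = [gradient(w, Xs, ys) for Xs, ys in shards]   # per-worker gradients
    return w - lr * np.mean(grads, axis=0)               # average, then update

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, X, y, num_workers=4)
# w converges toward w_true
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, so synchronous data parallelism changes where the work happens, not what is computed; in a real cluster the averaging step becomes network communication, which is what PaddlePaddle optimizes.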

Ask Questions

You are welcome to submit questions and bug reports as GitHub issues.

Copyright and License

PaddlePaddle is provided under the Apache-2.0 license.
