
ChainerMN: Distributed Deep Learning with Chainer

Documentation | Installation | Examples | Release Notes

ChainerMN is an additional package for Chainer, a flexible deep learning framework. ChainerMN enables multi-node distributed deep learning with the following features:

  • Scalable --- it makes full use of the latest technologies such as NVIDIA NCCL and CUDA-Aware MPI,
  • Flexible --- even dynamic neural networks can be trained in parallel thanks to Chainer's flexibility, and
  • Easy --- minimal changes to existing user code are required.

This blog post provides our benchmark results using up to 128 GPUs.

Installation

ChainerMN can be used in both intra-node settings (i.e., multiple GPUs within a node) and inter-node settings. For inter-node settings, we highly recommend using a high-speed interconnect such as InfiniBand.

In addition to Chainer, ChainerMN depends on the following software libraries: CUDA-Aware MPI, NVIDIA NCCL, and a few Python packages. After setting them up, ChainerMN can be installed via PyPI:

pip install chainermn

Please refer to the installation guide for more information.

Getting Started

You can invoke the MNIST example with four workers by running the following command:

mpiexec -n 4 python examples/mnist/train_mnist.py

  • Chainer Tutorial --- If you are new to Chainer, we recommend starting here.
  • ChainerMN Tutorial --- This tutorial explains, step by step, how to modify your existing Chainer code to enable distributed training with ChainerMN.
  • Examples --- The examples are based on the official Chainer examples, with the differences highlighted.
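To give a feel for the "minimal changes" mentioned above, here is a rough sketch of how an existing Chainer training script is typically adapted, based on ChainerMN's documented API (`create_communicator`, `create_multi_node_optimizer`, `scatter_dataset`). The `make_model` and `make_dataset` helpers are placeholders for your own code, and the optimizer choice is arbitrary; see the ChainerMN tutorial for the authoritative walkthrough.

```python
def train():
    # Imports are local so the sketch can be read in isolation; in a real
    # script they would sit at the top of the file. The script must be
    # launched via MPI, e.g. `mpiexec -n 4 python train.py`.
    import chainer
    import chainermn

    # One communicator per MPI process; intra_rank gives the GPU id
    # within the node, so each process drives its own GPU.
    comm = chainermn.create_communicator()
    device = comm.intra_rank

    model = make_model()  # placeholder for your existing Chainer model
    chainer.cuda.get_device_from_id(device).use()
    model.to_gpu()

    # Wrap an ordinary Chainer optimizer; gradients are then
    # all-reduced across workers on each update.
    optimizer = chainermn.create_multi_node_optimizer(
        chainer.optimizers.Adam(), comm)
    optimizer.setup(model)

    # Only rank 0 loads the dataset; scatter_dataset splits it
    # evenly across all workers.
    train_data = make_dataset() if comm.rank == 0 else None
    train_data = chainermn.scatter_dataset(train_data, comm)

    # From here on, the usual Chainer iterator / updater / Trainer
    # loop works unchanged.
```

The key point the tutorial makes is that the training loop itself stays the same; only the communicator setup, the optimizer wrapper, and the dataset scattering are new.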

Contributing

Any contribution to ChainerMN would be highly appreciated. Please refer to the Chainer Contribution Guide.

License

MIT License

Reference

Akiba, T., Fukuda, K. and Suzuki, S.: ChainerMN: Scalable Distributed Deep Learning Framework. In Proceedings of the Workshop on ML Systems at the Thirty-First Annual Conference on Neural Information Processing Systems (NIPS), 2017. URL, BibTex
