[Doc] update working with multiple backend section (dmlc#1128)
* update work with different backend section

* fix some warnings

* Update backend.rst

* Update index.rst

Co-authored-by: VoVAllen <[email protected]>
jermainewang and VoVAllen authored Dec 24, 2019
1 parent e4ef8d1 commit 17aab81
Showing 7 changed files with 55 additions and 41 deletions.
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -47,6 +47,7 @@ Get Started
:glob:

install/index
install/backend

Follow the :doc:`instructions<install/index>` to install DGL. The :doc:`DGL at a glance<tutorials/basics/1_first>`
is the most common place to get started with. Each tutorial is accompanied with a runnable
42 changes: 42 additions & 0 deletions docs/source/install/backend.rst
@@ -0,0 +1,42 @@
Working with different backends
===============================

DGL supports PyTorch, MXNet, and TensorFlow backends. To change the backend, set the ``DGLBACKEND``
environment variable. The default backend is PyTorch.

PyTorch backend
---------------

Export ``DGLBACKEND`` as ``pytorch`` to specify the PyTorch backend. The required PyTorch
version is 0.4.1 or later. See `pytorch.org <https://pytorch.org>`_ for installation instructions.
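
For example, a minimal sketch of selecting the PyTorch backend (the install command below
assumes the CPU wheel; pick the exact command recommended on pytorch.org for your platform
and CUDA version):

.. code:: bash

   pip install torch           # assumed CPU wheel; see pytorch.org for the exact command
   export DGLBACKEND=pytorch   # optional, since PyTorch is already the default backend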

MXNet backend
-------------

Export ``DGLBACKEND`` as ``mxnet`` to specify the MXNet backend. The required MXNet version is
1.5 or later. See `mxnet.apache.org <https://mxnet.apache.org/get_started>`_ for installation
instructions.
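
For example, a minimal sketch of selecting the MXNet backend (``mxnet-cu90`` below is just
one example of a CUDA-specific package; pick the one matching your CUDA version):

.. code:: bash

   pip install mxnet           # CPU build; use e.g. pip install mxnet-cu90 for CUDA 9.0
   export DGLBACKEND=mxnet     # add this to your .bashrc/.zshrc file if needed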

MXNet uses uint32 as the default data type for integer tensors, which only supports graphs of
size smaller than 2^32. To enable large graph training, *build* MXNet with the ``USE_INT64_TENSOR_SIZE=1``
flag. See `this FAQ <https://mxnet.apache.org/api/faq/large_tensor_support>`_ for more information.
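
One way this might look with MXNet's Makefile-based build is sketched below; only the
``USE_INT64_TENSOR_SIZE=1`` flag comes from the sentence above, while the repository URL and
the remaining steps are assumptions, so consult the linked FAQ for the authoritative
instructions:

.. code:: bash

   git clone --recursive https://github.com/apache/incubator-mxnet.git  # assumed source checkout
   cd incubator-mxnet
   make -j"$(nproc)" USE_INT64_TENSOR_SIZE=1   # build with 64-bit tensor size support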

TensorFlow backend
------------------

Export ``DGLBACKEND`` as ``tensorflow`` to specify the TensorFlow backend. The required TensorFlow
version is 2.0 or later. See `tensorflow.org <https://www.tensorflow.org/install>`_ for installation
instructions. In addition, the TensorFlow backend requires the ``tfdlpack`` package, installed as follows, and the ``TF_FORCE_GPU_ALLOW_GROWTH`` environment variable set to ``true`` to prevent TensorFlow from taking over the whole GPU memory:

.. code:: bash

   pip install tfdlpack # when using tensorflow cpu version
or

.. code:: bash

   pip install tfdlpack-gpu # when using tensorflow gpu version
   export TF_FORCE_GPU_ALLOW_GROWTH=true # and add this to your .bashrc/.zshrc file if needed
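
Regardless of the backend, one possible way to double-check which backend DGL picked up is to
print the backend name after import; ``dgl.backend.backend_name`` is an internal attribute and
is assumed here, so it may differ between DGL versions:

.. code:: bash

   DGLBACKEND=tensorflow python -c "import dgl; print(dgl.backend.backend_name)"  # expects 'tensorflow'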
41 changes: 6 additions & 35 deletions docs/source/install/index.rst
@@ -1,5 +1,5 @@
Install DGL
============
===========

This topic explains how to install DGL. We recommend installing DGL by using ``conda`` or ``pip``.

@@ -36,6 +36,8 @@ After the ``conda`` environment is activated, run one of the following commands.
conda install -c dglteam dgl # For CPU Build
conda install -c dglteam dgl-cuda9.0 # For CUDA 9.0 Build
conda install -c dglteam dgl-cuda10.0 # For CUDA 10.0 Build
conda install -c dglteam dgl-cuda10.1 # For CUDA 10.1 Build
Install from pip
----------------
@@ -52,7 +54,8 @@ For CUDA builds, run one of the following commands and specify the CUDA version.
pip install dgl # For CPU Build
pip install dgl-cu90 # For CUDA 9.0 Build
pip install dgl-cu92 # For CUDA 9.2 Build
pip install dgl-cu100 # For CUDA 10.0 Build
pip install dgl-cu100 # For CUDA 10.0 Build
pip install dgl-cu101 # For CUDA 10.1 Build
For the most current nightly build from master branch, run one of the following commands.

@@ -62,41 +65,9 @@ For the most current nightly build from master branch, run one of the following
pip install --pre dgl-cu90 # For CUDA 9.0 Build
pip install --pre dgl-cu92 # For CUDA 9.2 Build
pip install --pre dgl-cu100 # For CUDA 10.0 Build
pip install --pre dgl-cu101 # For CUDA 10.1 Build
Working with different backends
-------------------------------

DGL supports PyTorch and MXNet. Here's how to change them.

Switching backend
`````````````````

The backend is controlled by ``DGLBACKEND`` environment variable, which defaults to
``pytorch``. The following values are supported.

+---------+---------+--------------------------------------------------+
| Value | Backend | Constraints |
+=========+=========+==================================================+
| pytorch | PyTorch | Requires 0.4.1 or later. See |
| | | `pytorch.org <https://pytorch.org>`_ |
+---------+---------+--------------------------------------------------+
| mxnet | MXNet | Requires either MXNet 1.5 for CPU |
| | | |
| | | .. code:: bash |
| | | |
| | | pip install mxnet |
| | | |
| | | or MXNet for GPU with CUDA version, e.g. for CUDA 9.2 |
| | | |
| | | .. code:: bash |
| | | |
| | | pip install mxnet-cu90 |
| | | |
+---------+---------+--------------------------------------------------+
| numpy | NumPy | Does not support gradient computation |
+---------+---------+--------------------------------------------------+

.. _install-from-source:

Install from source
2 changes: 1 addition & 1 deletion tutorials/basics/1_first.py
@@ -23,7 +23,7 @@

###############################################################################
# Tutorial problem description
# ---------------------------
# ----------------------------
#
# The tutorial is based on the "Zachary's karate club" problem. The karate club
# is a social network that includes 34 members and documents pairwise links
6 changes: 3 additions & 3 deletions tutorials/basics/2_basics.py
@@ -12,7 +12,7 @@

###############################################################################
# Creating a graph
# --------------
# ----------------
# The design of :class:`DGLGraph` was influenced by other graph libraries. You
# can create a graph from networkx and convert it into a :class:`DGLGraph` and
# vice versa.
@@ -71,7 +71,7 @@

###############################################################################
# Assigning a feature
# ------------------
# -------------------
# You can also assign features to nodes and edges of a :class:`DGLGraph`. The
# features are represented as dictionary of names (strings) and tensors,
# called **fields**.
@@ -138,7 +138,7 @@

###############################################################################
# Working with multigraphs
# ~~~~~~~~~~~
# ~~~~~~~~~~~~~~~~~~~~~~~~
# Many graph applications need parallel edges. To enable this, construct :class:`DGLGraph`
# with ``multigraph=True``.

2 changes: 1 addition & 1 deletion tutorials/basics/3_pagerank.py
@@ -118,7 +118,7 @@ def pagerank_naive(g):

###############################################################################
# Batching semantics for a large graph
# -----------------------------------
# ------------------------------------
# The above code does not scale to a large graph because it iterates over all
# the nodes. DGL solves this by allowing you to compute on a *batch* of nodes or
# edges. For example, the following codes trigger message and reduce functions
2 changes: 1 addition & 1 deletion tutorials/basics/4_batch.py
@@ -2,7 +2,7 @@
.. currentmodule:: dgl
Tutorial: Batched graph classification with DGL
=====================================
================================================
**Author**: `Mufei Li <https://github.com/mufeili>`_,
`Minjie Wang <https://jermainewang.github.io/>`_,
