.. _tutorials1-index:

Graph neural networks and their variants
========================================

* **Graph convolutional network (GCN)** `[research paper] <https://arxiv.org/abs/1609.02907>`__ `[tutorial]
<1_gnn/1_gcn.html>`__ `[Pytorch code]
<https://github.com/dmlc/dgl/blob/master/examples/pytorch/gcn>`__
`[MXNet code]
<https://github.com/dmlc/dgl/tree/master/examples/mxnet/gcn>`__:
This is the most basic GCN. The tutorial covers the basic uses of DGL APIs.
A minimal layer sketch follows this item.
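
The core of a GCN layer fits in a few lines with DGL's message-passing API.
Below is a minimal sketch, assuming DGL's built-in message functions and a
PyTorch backend; the class name and the omitted degree normalization are
simplifications for illustration, not the tutorial's exact code.

.. code:: python

    import torch
    import torch.nn as nn
    import dgl.function as fn

    class GCNLayer(nn.Module):
        """One GCN layer: aggregate neighbor features, then apply a linear map."""
        def __init__(self, in_feats, out_feats):
            super().__init__()
            self.linear = nn.Linear(in_feats, out_feats)

        def forward(self, g, h):
            with g.local_scope():
                g.ndata['h'] = h
                # Sum incoming neighbor features (degree normalization omitted).
                g.update_all(fn.copy_u('h', 'm'), fn.sum('m', 'h'))
                return torch.relu(self.linear(g.ndata['h']))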

* **Graph attention network (GAT)** `[research paper] <https://arxiv.org/abs/1710.10903>`__ `[tutorial]
<1_gnn/9_gat.html>`__ `[Pytorch code]
<https://github.com/dmlc/dgl/blob/master/examples/pytorch/gat>`__
`[MXNet code]
<https://github.com/dmlc/dgl/tree/master/examples/mxnet/gat>`__:
GAT extends GCN by deploying multi-head attention over the neighborhood of
a node. This greatly enhances the capacity and expressiveness of the model.
A short usage sketch follows this item.
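
As an illustration, here is a hedged sketch of applying a multi-head
attention layer, assuming DGL's built-in ``GATConv`` module; the toy
four-node graph is made up for this example.

.. code:: python

    import torch
    import dgl
    from dgl.nn.pytorch import GATConv

    # Hypothetical toy graph: a directed 4-cycle, so every node has a neighbor.
    g = dgl.graph(([0, 1, 2, 3], [1, 2, 3, 0]))
    feat = torch.randn(4, 16)

    # Four attention heads, each producing an 8-dimensional output.
    gat = GATConv(in_feats=16, out_feats=8, num_heads=4)
    out = gat(g, feat)   # shape: (4 nodes, 4 heads, 8 features)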

* **Relational-GCN** `[research paper] <https://arxiv.org/abs/1703.06103>`__ `[tutorial]
<1_gnn/4_rgcn.html>`__ `[Pytorch code]
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/rgcn>`__
`[MXNet code]
<https://github.com/dmlc/dgl/tree/master/examples/mxnet/rgcn>`__:
Relational-GCN allows multiple edges between two entities of a graph. Edges
with distinct relationships are encoded differently. A short usage sketch
follows this item.
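
To make the idea concrete, here is a hedged sketch using DGL's built-in
``RelGraphConv`` module; the toy graph and its two relation types are made
up for this example.

.. code:: python

    import torch
    import dgl
    from dgl.nn.pytorch import RelGraphConv

    # Hypothetical toy graph whose edges carry one of two relation types.
    g = dgl.graph(([0, 1, 2, 0], [1, 2, 0, 2]))
    etypes = torch.tensor([0, 1, 0, 1])   # one relation id per edge
    feat = torch.randn(3, 16)

    # Each relation type gets its own weight matrix inside the layer.
    conv = RelGraphConv(16, 8, num_rels=2)
    out = conv(g, feat, etypes)           # shape: (3 nodes, 8 features)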

* **Line graph neural network (LGNN)** `[research paper] <https://arxiv.org/abs/1705.08415>`__ `[tutorial]
<1_gnn/6_line_graph.html>`__ `[Pytorch code]
<https://github.com/dmlc/dgl/tree/master/examples/pytorch/line_graph>`__:
This network focuses on community detection by inspecting graph structures. It
uses representations of both the original graph and its line-graph
companion. In addition to demonstrating how an algorithm can harness multiple
graphs, this implementation shows how you can judiciously mix simple tensor
operations and sparse-matrix tensor operations with message passing in DGL.
A small line-graph sketch follows this item.
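
The line-graph construction itself is a single call in DGL. A minimal
sketch, assuming DGL's ``line_graph`` transform; the toy triangle graph is
made up for this example.

.. code:: python

    import dgl

    # Hypothetical toy graph: a directed triangle.
    g = dgl.graph(([0, 1, 2], [1, 2, 0]))

    # Each node of the line graph corresponds to one edge of the original.
    lg = dgl.line_graph(g)
    print(g.num_edges(), lg.num_nodes())   # both print 3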

* **Stochastic steady-state embedding (SSE)** `[research paper] <http://proceedings.mlr.press/v80/dai18a/dai18a.pdf>`__ `[tutorial]
<1_gnn/8_sse_mx.html>`__ `[MXNet code]
<https://github.com/dmlc/dgl/blob/master/examples/mxnet/sse>`__:
SSE is an example that illustrates the co-design of algorithm and system:
sampling guarantees asymptotic convergence while lowering complexity, and
batching across samples maximizes parallelism. The emphasis here is on
*giant* graphs that cannot fit comfortably on one GPU card. A hedged sketch
of the sampled update loop follows this item.
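
A hedged sketch of that idea in plain PyTorch: ``update_fn`` is a
hypothetical stand-in for one step of the steady-state operator, not the
tutorial's actual code.

.. code:: python

    import torch

    def train_epoch(h, update_fn, batch_size=1024, alpha=0.1):
        """Refresh steady-state embeddings h on random node batches."""
        perm = torch.randperm(h.shape[0])
        for i in range(0, h.shape[0], batch_size):
            batch = perm[i:i + batch_size]    # sampled nodes only
            target = update_fn(h, batch)      # recompute their steady state
            # Moving-average update drives h toward the fixed point
            # without ever touching the whole graph at once.
            h[batch] = (1 - alpha) * h[batch] + alpha * target
        return h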
