[DGL-Go][Doc] Update DGL-Go version to 0.0.2 and misc fix from bug bash (dmlc#4236)

* Update

* Update

* Update

* Update

Co-authored-by: Ubuntu <[email protected]>
Co-authored-by: Xin Yao <[email protected]>
3 people authored Jul 14, 2022
1 parent 79b0a50 commit fdbf5a0
Showing 29 changed files with 45 additions and 48 deletions.
6 changes: 3 additions & 3 deletions dglgo/README.md
@@ -61,7 +61,7 @@ Let's use one of the most classical setups -- training a GraphSAGE model for node
classification on the Cora citation graph dataset as an
example.

-### Step one: `dgl configure`
+### Step 1: `dgl configure`

First step, use `dgl configure` to generate a YAML configuration file.
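As a sketch of what that generated configuration contains, here is its top-level structure as a plain Python dict (field names and values copied from the YAML snippet shown later in this diff; the `to_yaml` helper is an illustration, not DGL-Go's own serializer):

```python
# Sketch of the nodepred configuration that `dgl configure` writes out.
# Field names/values come from the YAML snippet in this diff; `to_yaml`
# is an illustrative helper, not DGL-Go's own code.
config = {
    "version": "0.0.2",
    "pipeline_name": "nodepred",
    "pipeline_mode": "train",
    "device": "cpu",
}

def to_yaml(cfg):
    """Render a flat dict as simple 'key: value' YAML lines."""
    return "\n".join(f"{k}: {v}" for k, v in cfg.items())
```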

@@ -85,7 +85,7 @@ At this point you can also change options to explore optimization potentials.
The snippet below shows the configuration file generated by the command above.

```yaml
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cpu
@@ -181,7 +181,7 @@ That's all! Basically you only need two commands to train a graph neural network

### Step 3: `dgl export` for more advanced customization

-That's not everything yet. You may want to open the hood and and invoke deeper
+That's not everything yet. You may want to open the hood and invoke deeper
customization. DGL-Go can export a **self-contained, reproducible** Python
script for you to do anything you like.

2 changes: 1 addition & 1 deletion dglgo/dglgo/utils/enter_config.py
@@ -23,7 +23,7 @@ class PipelineConfig(DGLBaseModel):
loss: str = "CrossEntropyLoss"

class UserConfig(DGLBaseModel):
-version: Optional[str] = "0.0.1"
+version: Optional[str] = "0.0.2"
pipeline_name: PipelineFactory.get_pipeline_enum()
pipeline_mode: str
device: str = "cpu"
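A minimal stdlib sketch of this config schema, using a dataclass in place of the pydantic `DGLBaseModel` base (so validation of `pipeline_name` against the pipelines registered in `PipelineFactory` is omitted; the class name is hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

# Stdlib sketch of the UserConfig schema above. The real class subclasses
# pydantic's DGLBaseModel and validates pipeline_name against registered
# pipelines; a dataclass keeps only the fields and defaults.
@dataclass
class UserConfigSketch:
    pipeline_name: str
    pipeline_mode: str
    version: Optional[str] = "0.0.2"
    device: str = "cpu"
```

For example, `UserConfigSketch(pipeline_name="nodepred", pipeline_mode="train")` picks up the bumped default version.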
2 changes: 1 addition & 1 deletion dglgo/recipes/graphpred_hiv_gin.yaml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: graphpred
pipeline_mode: train
device: cuda:0 # Torch device name, e.g. cpu or cuda or cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/graphpred_hiv_pna.yaml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: graphpred
pipeline_mode: train
device: cuda:0 # Torch device name, e.g. cpu or cuda or cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/graphpred_pcba_gin.yaml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: graphpred
pipeline_mode: train
device: cuda:0 # Torch device name, e.g. cpu or cuda or cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/linkpred_citation2_sage.yaml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: linkpred
pipeline_mode: train
device: cpu
2 changes: 1 addition & 1 deletion dglgo/recipes/linkpred_collab_sage.yaml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: linkpred
pipeline_mode: train
device: cpu
2 changes: 1 addition & 1 deletion dglgo/recipes/linkpred_cora_sage.yaml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: linkpred
pipeline_mode: train
device: cuda
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred-ns_arxiv_gcn.yaml
@@ -1,5 +1,5 @@
# Accuracy across 5 runs: 0.593288 ± 0.006103
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred-ns
pipeline_mode: train
device: 'cuda:0'
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred-ns_product_sage.yaml
@@ -1,5 +1,5 @@
# Accuracy across 1 runs: 0.796911
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred-ns
pipeline_mode: train
device: cuda
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_citeseer_gat.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.7097 ± 0.006914
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_citeseer_gcn.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.6852 ± 0.008875
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_citeseer_sage.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.6994 ± 0.004005
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_cora_gat.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.8208 ± 0.00663
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_cora_gcn.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.802 ± 0.005329
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_cora_sage.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.8163 ± 0.006856
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_pubmed_gat.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.7788 ± 0.002227
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_pubmed_gcn.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.7826 ± 0.004317
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/recipes/nodepred_pubmed_sage.yaml
@@ -1,5 +1,5 @@
# Accuracy across 10 runs: 0.7819 ± 0.003176
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cuda:0
2 changes: 1 addition & 1 deletion dglgo/setup.py
@@ -4,7 +4,7 @@
from distutils.core import setup

setup(name='dglgo',
-version='0.0.1',
+version='0.0.2',
description='DGL',
author='DGL Team',
author_email='[email protected]',
2 changes: 1 addition & 1 deletion dglgo/tests/cfg.yml
@@ -1,4 +1,4 @@
-version: 0.0.1
+version: 0.0.2
pipeline_name: nodepred
pipeline_mode: train
device: cpu
6 changes: 3 additions & 3 deletions python/dgl/data/citation_graph.py
@@ -439,7 +439,7 @@ def __getitem__(self, idx):
graph structure, node features and labels.
-- ``ndata['train_mask']`` mask for training node set
+- ``ndata['train_mask']``: mask for training node set
- ``ndata['val_mask']``: mask for validation node set
- ``ndata['test_mask']``: mask for test node set
- ``ndata['feat']``: node feature
@@ -590,7 +590,7 @@ def __getitem__(self, idx):
graph structure, node features and labels.
-- ``ndata['train_mask']`` mask for training node set
+- ``ndata['train_mask']``: mask for training node set
- ``ndata['val_mask']``: mask for validation node set
- ``ndata['test_mask']``: mask for test node set
- ``ndata['feat']``: node feature
@@ -738,7 +738,7 @@ def __getitem__(self, idx):
graph structure, node features and labels.
-- ``ndata['train_mask']`` mask for training node set
+- ``ndata['train_mask']``: mask for training node set
- ``ndata['val_mask']``: mask for validation node set
- ``ndata['test_mask']``: mask for test node set
- ``ndata['feat']``: node feature
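The masks listed above are boolean vectors over the graph's nodes; a pure-Python sketch of how such masks select node subsets (all values below are invented for illustration):

```python
# The ndata masks are boolean vectors over nodes; indexing with a mask
# selects that node subset. All values here are made up for illustration.
feat = [[0.1], [0.2], [0.3], [0.4], [0.5]]      # one feature per node
train_mask = [True, True, False, False, False]
val_mask = [False, False, True, False, False]
test_mask = [False, False, False, True, True]

def select_by_mask(values, mask):
    """Keep entries of `values` where `mask` is True."""
    return [v for v, m in zip(values, mask) if m]

train_feat = select_by_mask(feat, train_mask)   # features of training nodes
```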
9 changes: 2 additions & 7 deletions python/dgl/data/dgl_dataset.py
@@ -8,7 +8,6 @@
import abc
from .utils import download, extract_archive, get_download_dir, makedirs
from ..utils import retry_method_with_fix
-from .._ffi.base import __version__

class DGLDataset(object):
r"""The basic DGL dataset for creating graph datasets.
@@ -238,17 +237,13 @@ def raw_path(self):
def save_dir(self):
r"""Directory to save the processed dataset.
"""
-return self._save_dir + "_v{}".format(__version__)
+return self._save_dir

@property
def save_path(self):
r"""Path to save the processed dataset.
"""
-if hasattr(self, '_reorder'):
-    path = 'reordered' if self._reorder else 'un_reordered'
-    return os.path.join(self._save_dir, self.name, path)
-else:
-    return os.path.join(self._save_dir, self.name)
+return os.path.join(self._save_dir, self.name)

@property
def verbose(self):
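A self-contained sketch of the simplified `save_path` behavior in this file: after the change, the processed-dataset path is just `save_dir/name`, with no DGL-version suffix and no `_reorder` special case (the class name below is hypothetical):

```python
import os

# Sketch of the simplified DGLDataset.save_path: the path depends only on
# the save directory and dataset name. Class name is hypothetical.
class DatasetPathsSketch:
    def __init__(self, save_dir, name):
        self._save_dir = save_dir
        self.name = name

    @property
    def save_path(self):
        return os.path.join(self._save_dir, self.name)
```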
5 changes: 3 additions & 2 deletions python/dgl/data/flickr.py
@@ -50,6 +50,7 @@ class FlickrDataset(DGLBuiltinDataset):
Examples
--------
>>> from dgl.data import FlickrDataset
+>>> dataset = FlickrDataset()
>>> dataset.num_classes
7
@@ -151,9 +152,9 @@ def __getitem__(self, idx):
- ``ndata['label']``: node label
- ``ndata['feat']``: node feature
-- ``ndata['train_mask']`` mask for training node set
+- ``ndata['train_mask']``: mask for training node set
- ``ndata['val_mask']``: mask for validation node set
-- ``ndata['test_mask']:`` mask for test node set
+- ``ndata['test_mask']``: mask for test node set
"""
assert idx == 0, "This dataset has only one graph"
3 changes: 1 addition & 2 deletions python/dgl/data/utils.py
@@ -17,7 +17,6 @@
from .tensor_serialize import save_tensors, load_tensors

from .. import backend as F
-from .._ffi.base import __version__

__all__ = ['loadtxt','download', 'check_sha1', 'extract_archive',
'get_download_dir', 'Subset', 'split_dataset', 'save_graphs',
@@ -241,7 +240,7 @@ def get_download_dir():
dirname : str
Path to the download directory
"""
-default_dir = os.path.join(os.path.expanduser('~'), '.dgl_v{}'.format(__version__))
+default_dir = os.path.join(os.path.expanduser('~'), '.dgl')
dirname = os.environ.get('DGL_DOWNLOAD_DIR', default_dir)
if not os.path.exists(dirname):
os.makedirs(dirname)
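The updated logic can be sketched as a pure function (directory creation omitted; the `env` parameter is an illustrative addition for testability, not part of DGL's API):

```python
import os

# Sketch of the updated get_download_dir: the default is now a fixed
# ~/.dgl (no version suffix), overridable via DGL_DOWNLOAD_DIR.
def get_download_dir_sketch(env=None):
    env = os.environ if env is None else env
    default_dir = os.path.join(os.path.expanduser('~'), '.dgl')
    return env.get('DGL_DOWNLOAD_DIR', default_dir)
```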
1 change: 1 addition & 0 deletions python/dgl/data/wikics.py
@@ -50,6 +50,7 @@ class WikiCSDataset(DGLBuiltinDataset):
Examples
--------
>>> from dgl.data import WikiCSDataset
+>>> dataset = WikiCSDataset()
>>> dataset.num_classes
10
4 changes: 2 additions & 2 deletions python/dgl/data/yelp.py
@@ -151,9 +151,9 @@ def __getitem__(self, idx):
- ``ndata['label']``: node label
- ``ndata['feat']``: node feature
-- ``ndata['train_mask']`` mask for training node set
+- ``ndata['train_mask']``: mask for training node set
- ``ndata['val_mask']``: mask for validation node set
-- ``ndata['test_mask']:`` mask for test node set
+- ``ndata['test_mask']``: mask for test node set
"""
assert idx == 0, "This dataset has only one graph"
10 changes: 5 additions & 5 deletions python/dgl/nn/pytorch/conv/egatconv.py
@@ -45,7 +45,7 @@ class EGATConv(nn.Module):
num_heads : int
Number of attention heads.
bias : bool, optional
-If True, add bias term to :math: `f_{ij}^{\prime}`. Defaults: ``True``.
+If True, add bias term to :math:`f_{ij}^{\prime}`. Defaults: ``True``.
Examples
----------
@@ -170,16 +170,16 @@ def forward(self, graph, nfeats, efeats, get_attention=False):
Returns
-------
pair of torch.Tensor
-node output features followed by edge output features
-The node output feature of shape :math:`(N, H, D_{out})`
-The edge output feature of shape :math:`(F, H, F_{out})`
+node output features followed by edge output features.
+The node output feature is of shape :math:`(N, H, D_{out})`
+The edge output feature is of shape :math:`(F, H, F_{out})`
where:
:math:`H` is the number of heads,
:math:`D_{out}` is size of output node feature,
:math:`F_{out}` is size of output edge feature.
torch.Tensor, optional
The attention values of shape :math:`(E, H, 1)`.
-This is returned only when :attr: `get_attention` is ``True``.
+This is returned only when :attr:`get_attention` is ``True``.
"""

with graph.local_scope():
9 changes: 5 additions & 4 deletions python/dgl/transforms/functional.py
@@ -2872,6 +2872,8 @@ def sort_csr_by_tag(g, tag, tag_offset_name='_TAG_OFFSET', tag_type='node'):
``tag_type`` is ``node``.
>>> import dgl
+>>> import torch
+>>> g = dgl.graph(([0,0,0,0,0,1,1,1],[0,1,2,3,4,0,1,2]))
>>> g.adjacency_matrix(scipy_fmt='csr').nonzero()
(array([0, 0, 0, 0, 0, 1, 1, 1], dtype=int32),
@@ -2890,11 +2892,10 @@ def sort_csr_by_tag(g, tag, tag_offset_name='_TAG_OFFSET', tag_type='node'):
``tag_type`` is ``edge``.
->>> from dgl import backend as F
>>> g = dgl.graph(([0,0,0,0,0,1,1,1],[0,1,2,3,4,0,1,2]))
>>> g.edges()
(tensor([0, 0, 0, 0, 0, 1, 1, 1]), tensor([0, 1, 2, 3, 4, 0, 1, 2]))
->>> tag = F.tensor([1, 1, 0, 2, 0, 1, 1, 0])
+>>> tag = torch.tensor([1, 1, 0, 2, 0, 1, 1, 0])
>>> g_sorted = dgl.sort_csr_by_tag(g, tag, tag_type='edge')
>>> g_sorted.adj(scipy_fmt='csr').nonzero()
(array([0, 0, 0, 0, 0, 1, 1, 1], dtype=int32), array([2, 4, 0, 1, 3, 2, 0, 1], dtype=int32))
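The reordering in the doctest above can be mirrored by a pure-Python sketch: within each source node's edge list, destinations are stably sorted by edge tag. This reproduces only the doctest's column order, not the tag-offset bookkeeping the real `sort_csr_by_tag` performs:

```python
from collections import defaultdict

# Pure-Python sketch of the column reordering in the doctest: group edges
# by source node, then stably sort each group's destinations by edge tag.
def sort_rows_by_tag(src, dst, tag):
    rows = defaultdict(list)
    for s, d, t in zip(src, dst, tag):
        rows[s].append((t, d))
    out_src, out_dst = [], []
    for s in sorted(rows):
        for t, d in sorted(rows[s], key=lambda td: td[0]):  # stable within a row
            out_src.append(s)
            out_dst.append(d)
    return out_src, out_dst

# Numbers from the sort_csr_by_tag edge-tag example above:
src = [0, 0, 0, 0, 0, 1, 1, 1]
dst = [0, 1, 2, 3, 4, 0, 1, 2]
tag = [1, 1, 0, 2, 0, 1, 1, 0]
new_src, new_dst = sort_rows_by_tag(src, dst, tag)
# new_dst == [2, 4, 0, 1, 3, 2, 0, 1], matching the doctest output
```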
@@ -2995,6 +2996,7 @@ def sort_csc_by_tag(g, tag, tag_offset_name='_TAG_OFFSET', tag_type='node'):
``tag_type`` is ``node``.
>>> import dgl
+>>> import torch
>>> g = dgl.graph(([0,1,2,3,4,0,1,2],[0,0,0,0,0,1,1,1]))
>>> g.adjacency_matrix(scipy_fmt='csr', transpose=True).nonzero()
(array([0, 0, 0, 0, 0, 1, 1, 1], dtype=int32),
@@ -3013,9 +3015,8 @@ def sort_csc_by_tag(g, tag, tag_offset_name='_TAG_OFFSET', tag_type='node'):
``tag_type`` is ``edge``.
->>> from dgl import backend as F
>>> g = dgl.graph(([0,1,2,3,4,0,1,2],[0,0,0,0,0,1,1,1]))
->>> tag = F.tensor([1, 1, 0, 2, 0, 1, 1, 0])
+>>> tag = torch.tensor([1, 1, 0, 2, 0, 1, 1, 0])
>>> g_sorted = dgl.sort_csc_by_tag(g, tag, tag_type='edge')
>>> g_sorted.adj(scipy_fmt='csr', transpose=True).nonzero()
(array([0, 0, 0, 0, 0, 1, 1, 1], dtype=int32), array([2, 4, 0, 1, 3, 2, 0, 1], dtype=int32))