all demo use python-3 (dmlc#555)
aksnzhy authored May 23, 2019
1 parent 605b518 commit f99725a
Showing 17 changed files with 45 additions and 45 deletions.
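The change is mechanical: every `python` invocation in the example READMEs becomes `python3`, so the demos no longer depend on what `python` happens to resolve to. A bulk edit of this shape can be sketched with standard Unix tools (GNU `sed`, the temp paths, and the word-boundary pattern are illustrative assumptions, not part of the commit):

```shell
#!/bin/sh
# Sketch of the bulk substitution behind this commit (illustrative paths,
# GNU sed assumed; not the actual script used by the author).
set -e
demo=/tmp/py3_demo/examples
mkdir -p "$demo"
# stand-in for one of the 17 changed READMEs
printf 'DGLBACKEND=mxnet python train.py --gpu 0\n' > "$demo/README.md"
# \b word boundaries keep an already-converted `python3` stable on re-runs
grep -rl --include='README.md' 'python' "$demo" \
  | xargs sed -i 's/\bpython\b/python3/g'
cat "$demo/README.md"   # -> DGLBACKEND=mxnet python3 train.py --gpu 0
```

Running it twice is safe: `\bpython\b` does not match inside `python3`, so the edit is idempotent.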
2 changes: 1 addition & 1 deletion examples/mxnet/gat/README.md
@@ -19,5 +19,5 @@ pip install requests

### Usage (make sure DGLBACKEND is set to mxnet)
```bash
-DGLBACKEND=mxnet python gat_batch.py --dataset cora --gpu 0 --num-heads 8
+DGLBACKEND=mxnet python3 gat_batch.py --dataset cora --gpu 0 --num-heads 8
```
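The `DGLBACKEND=mxnet` prefix above sets the variable for that single command only. A minimal sketch of the two usual patterns, using `sh -c 'echo …'` as a stand-in for the training script (this is plain POSIX shell behavior, nothing DGL-specific):

```shell
#!/bin/sh
# How the DGLBACKEND prefix behaves; `sh -c` stands in for gat_batch.py.
unset DGLBACKEND
# Per-command: the assignment is visible only to that one invocation
DGLBACKEND=mxnet sh -c 'echo "during: ${DGLBACKEND:-unset}"'   # -> during: mxnet
sh -c 'echo "after:  ${DGLBACKEND:-unset}"'                    # -> after:  unset
# Exported: every later command in the session inherits it
export DGLBACKEND=mxnet
sh -c 'echo "export: ${DGLBACKEND:-unset}"'                    # -> export: mxnet
```

With `export DGLBACKEND=mxnet`, the commands in these READMEs can be run without retyping the prefix.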
6 changes: 3 additions & 3 deletions examples/mxnet/rgcn/README.md
@@ -22,15 +22,15 @@ Example code was tested with rdflib 4.2.2 and pandas 0.23.4
### Entity Classification
AIFB: accuracy 97.22% (DGL), 95.83% (paper)
```
-DGLBACKEND=mxnet python entity_classify.py -d aifb --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d aifb --testing --gpu 0
```

MUTAG: accuracy 76.47% (DGL), 73.23% (paper)
```
-DGLBACKEND=mxnet python entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
```

BGS: accuracy 79.31% (DGL, n-bases=20, OOM when >20), 83.10% (paper)
```
-DGLBACKEND=mxnet python entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
+DGLBACKEND=mxnet python3 entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
```
18 changes: 9 additions & 9 deletions examples/mxnet/sampling/README.md
@@ -15,44 +15,44 @@ pip install mxnet --pre
### Neighbor Sampling & Skip Connection
cora: test accuracy ~83% with `--num-neighbors 2`, ~84% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
```

citeseer: test accuracy ~69% with `--num-neighbors 2`, ~70% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
```

pubmed: test accuracy ~78% with `--num-neighbors 3`, ~77% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
```

reddit: test accuracy ~91% with `--num-neighbors 3` and `--batch-size 1000`, ~93% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
```


### Control Variate & Skip Connection
cora: test accuracy ~84% with `--num-neighbors 1`, ~84% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
```

citeseer: test accuracy ~69% with `--num-neighbors 1`, ~70% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
```

pubmed: test accuracy ~79% with `--num-neighbors 1`, ~77% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
```

reddit: test accuracy ~93% with `--num-neighbors 1` and `--batch-size 1000`, ~93% by training on the full graph
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
```

### Control Variate & GraphSAGE-mean
@@ -61,7 +61,7 @@ Following [Control Variate](https://arxiv.org/abs/1710.10568), we use the mean p

reddit: test accuracy 96.1% with `--num-neighbors 1` and `--batch-size 1000`, ~96.2% in [Control Variate](https://arxiv.org/abs/1710.10568) with `--num-neighbors 2` and `--batch-size 1000`
```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
```

### Run multi-processing training
2 changes: 1 addition & 1 deletion examples/mxnet/tree_lstm/README.md
@@ -21,7 +21,7 @@ The script will download the [SST dataset] (http://nlp.stanford.edu/sentiment/in

## Usage
```
-python train.py --gpu 0
+python3 train.py --gpu 0
```

## Speed Test
2 changes: 1 addition & 1 deletion examples/pytorch/appnp/README.md
@@ -22,7 +22,7 @@ Results

Run with following (available dataset: "cora", "citeseer", "pubmed")
```bash
-python train.py --dataset cora --gpu 0
+python3 train.py --dataset cora --gpu 0
```

* cora: 0.8370 (paper: 0.850)
4 changes: 2 additions & 2 deletions examples/pytorch/capsule/README.md
@@ -17,7 +17,7 @@ Training & Evaluation
----------------------
```bash
# Run with default config
-python main.py
+python3 main.py
# Run with train and test batch size 128, and for 50 epochs
-python main.py --batch-size 128 --test-batch-size 128 --epochs 50
+python3 main.py --batch-size 128 --test-batch-size 128 --epochs 50
```
6 changes: 3 additions & 3 deletions examples/pytorch/dgi/README.md
@@ -20,15 +20,15 @@ How to run
Run with following:

```bash
-python train.py --dataset=cora --gpu=0 --self-loop
+python3 train.py --dataset=cora --gpu=0 --self-loop
```

```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
```

```bash
-python train.py --dataset=pubmed --gpu=0
+python3 train.py --dataset=pubmed --gpu=0
```

Results
4 changes: 2 additions & 2 deletions examples/pytorch/dgmg/README.md
@@ -10,8 +10,8 @@ Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia.

## Usage

-- Train with batch size 1: `python main.py`
-- Train with batch size larger than 1: `python main_batch.py`.
+- Train with batch size 1: `python3 main.py`
+- Train with batch size larger than 1: `python3 main_batch.py`.

## Performance

8 changes: 4 additions & 4 deletions examples/pytorch/gat/README.md
@@ -23,19 +23,19 @@ How to run
Run with following:

```bash
-python train.py --dataset=cora --gpu=0
+python3 train.py --dataset=cora --gpu=0
```

```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
```

```bash
-python train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
+python3 train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
```

```bash
-python train_ppi.py --gpu=0
+python3 train_ppi.py --gpu=0
```

Results
2 changes: 1 addition & 1 deletion examples/pytorch/gcn/README.md
@@ -28,7 +28,7 @@ Results

Run with following (available dataset: "cora", "citeseer", "pubmed")
```bash
-python train.py --dataset cora --gpu 0 --self-loop
+python3 train.py --dataset cora --gpu 0 --self-loop
```

* cora: ~0.810 (0.79-0.83) (paper: 0.815)
6 changes: 3 additions & 3 deletions examples/pytorch/gin/README.md
@@ -20,12 +20,12 @@ How to run
An experiment on the GIN in default settings can be run with

```bash
-python main.py
+python3 main.py
```

An experiment on the GIN in customized settings can be run with
```bash
-python main.py [--device 0 | --disable-cuda] --dataset COLLAB \
+python3 main.py [--device 0 | --disable-cuda] --dataset COLLAB \
--graph_pooling_type max --neighbor_pooling_type sum
```

@@ -35,7 +35,7 @@ Results
Run with following with the double SUM pooling way:
(tested dataset: "MUTAG"(default), "COLLAB", "IMDBBINARY", "IMDBMULTI")
```bash
-python train.py --dataset MUTAB --device 0 \
+python3 train.py --dataset MUTAB --device 0 \
--graph_pooling_type sum --neighbor_pooling_type sum
```

2 changes: 1 addition & 1 deletion examples/pytorch/graphsage/README.md
@@ -19,7 +19,7 @@ Results

Run with following (available dataset: "cora", "citeseer", "pubmed")
```bash
-python graphsage.py --dataset cora --gpu 0
+python3 graphsage.py --dataset cora --gpu 0
```

* cora: ~0.8470
4 changes: 2 additions & 2 deletions examples/pytorch/line_graph/README.md
@@ -22,12 +22,12 @@ How to run
An experiment on the Stochastic Block Model in default settings can be run with

```bash
-python train.py
+python3 train.py
```

An experiment on the Stochastic Block Model in customized settings can be run with
```bash
-python train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
+python3 train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
--n-features N_FEATURES --n-graphs N_GRAPH --n-iterations N_ITERATIONS \
--n-layers N_LAYER --n-nodes N_NODE --model-path MODEL_PATH --radius RADIUS
```
12 changes: 6 additions & 6 deletions examples/pytorch/sampling/README.md
@@ -16,32 +16,32 @@ pip install torch requests
### Neighbor Sampling & Skip Connection
cora: test accuracy ~83% with --num-neighbors 2, ~84% by training on the full graph
```
-python gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```

citeseer: test accuracy ~69% with --num-neighbors 2, ~70% by training on the full graph
```
-python gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```

pubmed: test accuracy ~76% with --num-neighbors 3, ~77% by training on the full graph
```
-python gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```

### Control Variate & Skip Connection
cora: test accuracy ~84% with --num-neighbors 1, ~84% by training on the full graph
```
-python gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```

citeseer: test accuracy ~69% with --num-neighbors 1, ~70% by training on the full graph
```
-python gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```

pubmed: test accuracy ~77% with --num-neighbors 1, ~77% by training on the full graph
```
-python gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
```

6 changes: 3 additions & 3 deletions examples/pytorch/sgc/README.md
@@ -22,9 +22,9 @@ Results

Run with following (available dataset: "cora", "citeseer", "pubmed")
```bash
-python sgc.py --dataset cora --gpu 0
-python sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
-python sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
+python3 sgc.py --dataset cora --gpu 0
+python3 sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
+python3 sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
```

On NVIDIA V100
4 changes: 2 additions & 2 deletions examples/pytorch/transformer/README.md
@@ -15,13 +15,13 @@ The folder contains training module and inferencing module (beam decoder) for Tr
- For training:

```
-python translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
+python3 translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
```
- For evaluating BLEU score on the test set (enable `--print` to see translated text):
```
-python translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
+python3 translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
```
Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).
Expand Down
2 changes: 1 addition & 1 deletion examples/pytorch/tree_lstm/README.md
@@ -24,7 +24,7 @@ pip install torch requests nltk

## Usage
```
-python train.py --gpu 0
+python3 train.py --gpu 0
```

## Speed
