diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 07a50b46011f..5fc6294d9e79 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,15 +1,27 @@
 ## Contributing to DGL

-If you are interested in contributing to DGL, your contributions will fall
-into two categories:
-1. You want to propose a new Feature and implement it
-    - post about your intended feature, and we shall discuss the design and
-      implementation. Once we agree that the plan looks good, go ahead and implement it.
-2. You want to implement a feature or bug-fix for an outstanding issue
-    - Look at the outstanding issues
-    - Especially look at the Low Priority and Medium Priority issues
-    - Pick an issue and comment on the task that you want to work on this feature
-    - If you need more context on a particular issue, please ask and we shall provide.
-
-Once you finish implementing a feature or bugfix, please send a Pull Request.
+Contributions are always welcome. A good starting place is the roadmap issue, where
+you can find our current milestones. All contributions must go through pull requests
+and be reviewed by the committers.
+For documentation improvements, simply PR the change and prepend the title with `[Doc]`.
+
+For new features, we suggest first creating an issue using the feature request template.
+Follow the template to describe the feature you want to implement and your plan.
+We also suggest picking features from the roadmap issue because they are more likely
+to be incorporated in the next release.
+
+For bug fixes, we suggest first creating an issue using the bug report template if the
+bug has not been reported yet. Please reply to the issue saying that you'd like to help.
+Once the task is assigned, make the change in your fork and PR the code. Remember to
+also refer to the issue where the bug is reported.
+
+Once your PR is merged, congratulations, you are now a contributor to the DGL project.
+We will put your name in the list below and also on our [website](https://www.dgl.ai/ack).
+
+Contributors
+------------
+[Yizhi Liu](https://github.com/yzhliu)
+[Yifei Ma](https://github.com/yifeim)
+Hao Jin
+[Sheng Zha](https://github.com/szha)

diff --git a/NEWS.md b/NEWS.md
new file mode 100644
index 000000000000..1ab3f15b2087
--- /dev/null
+++ b/NEWS.md
@@ -0,0 +1,34 @@
+DGL release and change logs
+==========
+
+Refer to the roadmap issue for the ongoing versions and features.
+
+0.1.3
+-----
+Bug fixes
+* Compatible with PyTorch v1.0
+* Bug fix in networkx graph conversion.
+
+0.1.2
+-----
+First open release.
+* Basic graph APIs.
+* Basic message passing APIs.
+* PyTorch backend.
+* MXNet backend.
+* Optimization using SPMV.
+* Model examples w/ PyTorch:
+  - GCN
+  - GAT
+  - JTNN
+  - DGMG
+  - Capsule
+  - LGNN
+  - RGCN
+  - Transformer
+  - TreeLSTM
+* Model examples w/ MXNet:
+  - GCN
+  - GAT
+  - RGCN
+  - SSE
diff --git a/conda/dgl/meta.yaml b/conda/dgl/meta.yaml
index 9956f2cf8857..219e053afb62 100644
--- a/conda/dgl/meta.yaml
+++ b/conda/dgl/meta.yaml
@@ -1,10 +1,10 @@
 package:
   name: dgl
-  version: "0.1.2"
+  version: "0.1.3"

 source:
-  git_rev: 0.1.2
-  git_url: https://github.com/jermainewang/dgl.git
+  git_rev: 0.1.x
+  git_url: https://github.com/dmlc/dgl.git

 requirements:
   build:
@@ -21,5 +21,5 @@ requirements:
     - networkx

 about:
-  home: https://github.com/jermainewang/dgl.git
+  home: https://github.com/dmlc/dgl.git
   license_file: ../../LICENSE
diff --git a/include/dgl/runtime/c_runtime_api.h b/include/dgl/runtime/c_runtime_api.h
index 0dd5dd22b4db..ca11fb4bed76 100644
--- a/include/dgl/runtime/c_runtime_api.h
+++ b/include/dgl/runtime/c_runtime_api.h
@@ -33,7 +33,7 @@
 #endif

 // DGL version
-#define DGL_VERSION "0.1.2"
+#define DGL_VERSION "0.1.3"

 // DGL Runtime is DLPack compatible.
diff --git a/python/dgl/_ffi/libinfo.py b/python/dgl/_ffi/libinfo.py
index b64c6757cbba..b12b46d54638 100644
--- a/python/dgl/_ffi/libinfo.py
+++ b/python/dgl/_ffi/libinfo.py
@@ -87,4 +87,4 @@ def find_lib_path(name=None, search_path=None, optional=False):
 # We use the version of the incoming release for code
 # that is under development.
 # The following line is set by dgl/python/update_version.py
-__version__ = "0.1.2"
+__version__ = "0.1.3"
diff --git a/python/setup.py b/python/setup.py
index 27a3d5affe99..b7391ec3cbe9 100644
--- a/python/setup.py
+++ b/python/setup.py
@@ -72,7 +72,7 @@ def get_lib_path():
         'scipy>=1.1.0',
         'networkx>=2.1',
     ],
-    url='https://github.com/jermainewang/dgl',
+    url='https://github.com/dmlc/dgl',
     distclass=BinaryDistribution,
     classifiers=[
         'Development Status :: 3 - Alpha',
diff --git a/python/update_version.py b/python/update_version.py
index edbd2b28396c..0a29108a7938 100644
--- a/python/update_version.py
+++ b/python/update_version.py
@@ -11,7 +11,7 @@
 # current version
 # We use the version of the incoming release for code
 # that is under development
-__version__ = "0.1.2"
+__version__ = "0.1.3"

 # Implementations
 def update(file_name, pattern, repl):
diff --git a/tutorials/models/1_gnn/4_rgcn.py b/tutorials/models/1_gnn/4_rgcn.py
index b824bcf15932..44cb88ee9700 100644
--- a/tutorials/models/1_gnn/4_rgcn.py
+++ b/tutorials/models/1_gnn/4_rgcn.py
@@ -56,7 +56,7 @@
 #
 # This tutorial will focus on the first task to show how to generate entity
 # representation. `Complete
-# code `_
+# code `_
 # for both tasks can be found in DGL's github repository.
 #
 # Key ideas of R-GCN
 #
 #
@@ -356,4 +356,4 @@ def forward(self, g):
 # The implementation is similar to the above but with an extra DistMult layer
 # stacked on top of the R-GCN layers. You may find the complete
 # implementation of link prediction with R-GCN in our `example
-# code `_.
+# code `_.
diff --git a/tutorials/models/1_gnn/6_line_graph.py b/tutorials/models/1_gnn/6_line_graph.py
index 2208ec4540e3..1468f02a3972 100644
--- a/tutorials/models/1_gnn/6_line_graph.py
+++ b/tutorials/models/1_gnn/6_line_graph.py
@@ -610,7 +610,7 @@ def collate_fn(batch):

 ######################################################################################
 # You can check out the complete code
-# `here `_.
+# `here `_.
 #
 # What's the business with :math:`\{Pm, Pd\}`?
 # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/tutorials/models/1_gnn/8_sse_mx.py b/tutorials/models/1_gnn/8_sse_mx.py
index 2a884816a8b6..1f277641e291 100644
--- a/tutorials/models/1_gnn/8_sse_mx.py
+++ b/tutorials/models/1_gnn/8_sse_mx.py
@@ -540,7 +540,7 @@ def test(g, test_nodes, steady_state_operator, predictor):
 # scaled SSE to a graph with 50 million nodes and 150 million edges in a
 # single P3.8x large instance and one epoch only takes about 160 seconds.
 #
-# See full examples `here `_.
+# See full examples `here `_.
 #
 # .. |image0| image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/img/floodfill-paths.gif
 # .. |image1| image:: https://s3.us-east-2.amazonaws.com/dgl.ai/tutorial/img/neighbor-sampling.gif
diff --git a/tutorials/models/1_gnn/README.txt b/tutorials/models/1_gnn/README.txt
index 4dba976d6c7e..bb1d03d753eb 100644
--- a/tutorials/models/1_gnn/README.txt
+++ b/tutorials/models/1_gnn/README.txt
@@ -5,18 +5,18 @@ Graph Neural Network and its variant
 * **GCN** `[paper] `__ `[tutorial] <1_gnn/1_gcn.html>`__ `[code]
-  `__:
+  `__:
   this is the vanilla GCN. The tutorial covers the basic uses of DGL APIs.

 * **GAT** `[paper] `__ `[code]
-  `__:
+  `__:
   the key extension of GAT w.r.t vanilla GCN is deploying multi-head attention
   among neighborhood of a node, thus greatly enhances the capacity and
   expressiveness of the model.

 * **R-GCN** `[paper] `__ `[tutorial] <1_gnn/4_rgcn.html>`__ `[code]
-  `__:
+  `__:
   the key difference of RGNN is to allow multi-edges among two entities of a
   graph, and edges with distinct relationships are encoded differently. This
   is an interesting extension of GCN that can have a lot of applications of
@@ -24,7 +24,7 @@ Graph Neural Network and its variant
 * **LGNN** `[paper] `__ `[tutorial] <1_gnn/6_line_graph.html>`__ `[code]
-  `__:
+  `__:
   this model focuses on community detection by inspecting graph structures. It
   uses representations of both the original graph and its line-graph
   companion. In addition to demonstrate how an algorithm can harness multiple
@@ -34,7 +34,7 @@ Graph Neural Network and its variant
 * **SSE** `[paper] `__ `[tutorial] <1_gnn/8_sse_mx.html>`__ `[code]
-  `__:
+  `__:
   the emphasize here is *giant* graph that cannot fit comfortably on one GPU
   card. SSE is an example to illustrate the co-design of both algorithm and
   system: sampling to guarantee asymptotic convergence while lowering the
diff --git a/tutorials/models/2_small_graph/3_tree-lstm.py b/tutorials/models/2_small_graph/3_tree-lstm.py
index 7a2950d9f0e1..912cb48deb22 100644
--- a/tutorials/models/2_small_graph/3_tree-lstm.py
+++ b/tutorials/models/2_small_graph/3_tree-lstm.py
@@ -372,5 +372,5 @@ def batcher_dev(batch):
 ##############################################################################
 # To train the model on full dataset with different settings(CPU/GPU,
 # etc.), please refer to our repo's
-# `example `__.
+# `example `__.
 # Besides, we also provide an implementation of the Child-Sum Tree LSTM.
diff --git a/tutorials/models/2_small_graph/README.txt b/tutorials/models/2_small_graph/README.txt
index 5b672342312a..128e5bf082d6 100644
--- a/tutorials/models/2_small_graph/README.txt
+++ b/tutorials/models/2_small_graph/README.txt
@@ -6,7 +6,7 @@ Dealing with many small graphs
 * **Tree-LSTM** `[paper] `__ `[tutorial] <2_small_graph/3_tree-lstm.html>`__ `[code]
-  `__:
+  `__:
   sentences of natural languages have inherent structures, which are thrown
   away by treating them simply as sequences.
   Tree-LSTM is a powerful model that learns the representation by leveraging
   prior syntactic structures
diff --git a/tutorials/models/3_generative_model/5_dgmg.py b/tutorials/models/3_generative_model/5_dgmg.py
index 586806d74c71..962b0a487c6b 100644
--- a/tutorials/models/3_generative_model/5_dgmg.py
+++ b/tutorials/models/3_generative_model/5_dgmg.py
@@ -762,7 +762,7 @@ def _get_next(i, v_max):

 #######################################################################################
 # For the complete implementation, see `dgl DGMG example
-# `__.
+# `__.
 #
 # Batched Graph Generation
 # ---------------------------
diff --git a/tutorials/models/3_generative_model/README.txt b/tutorials/models/3_generative_model/README.txt
index 23f3a682e147..4e0e33feeb0a 100644
--- a/tutorials/models/3_generative_model/README.txt
+++ b/tutorials/models/3_generative_model/README.txt
@@ -5,7 +5,7 @@ Generative models
 * **DGMG** `[paper] `__ `[tutorial] <3_generative_model/5_dgmg.html>`__ `[code]
-  `__:
+  `__:
   this model belongs to the important family that deals with structural
   generation. DGMG is interesting because its state-machine approach is the
   most general. It is also very challenging because, unlike Tree-LSTM, every
@@ -14,7 +14,7 @@ Generative models
   inter-graph parallelism to steadily improve the performance.

 * **JTNN** `[paper] `__ `[code]
-  `__:
+  `__:
   unlike DGMG, this paper generates molecular graphs using the framework of
   variational auto-encoder. Perhaps more interesting is its approach to build
   structure hierarchically, in the case of molecular, with junction tree as
diff --git a/tutorials/models/4_old_wines/2_capsule.py b/tutorials/models/4_old_wines/2_capsule.py
index c5cc7590e866..1f2df97a3b77 100644
--- a/tutorials/models/4_old_wines/2_capsule.py
+++ b/tutorials/models/4_old_wines/2_capsule.py
@@ -257,8 +257,8 @@ def weight_animate(i):
 # |image5|
 #
 # The full code of this visualization is provided at
-# `link `__; the complete
-# code that trains on MNIST is at `link `__.
+# `link `__; the complete
+# code that trains on MNIST is at `link `__.
 #
 # .. |image0| image:: https://i.imgur.com/55Ovkdh.png
 # .. |image1| image:: https://i.imgur.com/9tc6GLl.png
diff --git a/tutorials/models/4_old_wines/7_transformer.py b/tutorials/models/4_old_wines/7_transformer.py
index c172377ced54..c9ce85a8655e 100644
--- a/tutorials/models/4_old_wines/7_transformer.py
+++ b/tutorials/models/4_old_wines/7_transformer.py
@@ -120,7 +120,7 @@
 # In this tutorial, we show a simplified version of the implementation in
 # order to highlight the most important design points (for instance we
 # only show single-head attention); the complete code can be found
-# `here `__.
+# `here `__.
 # The overall structure is similar to the one from `The Annotated
 # Transformer `__.
 #
@@ -576,7 +576,7 @@
 #
 # Note that we do not involve inference module in this tutorial (which
 # requires beam search), please refer to the `Github
-# Repo `__
+# Repo `__
 # for full implementation.
 #
 # .. code:: python
@@ -851,7 +851,7 @@
 # that satisfy the given predicate.
 #
 # for the full implementation, please refer to our `Github
-# Repo `__.
+# Repo `__.
 #
 # The figure below shows the effect of Adaptive Computational
 # Time(different positions of a sentence were revised different times):
diff --git a/tutorials/models/4_old_wines/README.txt b/tutorials/models/4_old_wines/README.txt
index 4eba91db66b6..a29a310c3340 100644
--- a/tutorials/models/4_old_wines/README.txt
+++ b/tutorials/models/4_old_wines/README.txt
@@ -5,7 +5,7 @@ Old (new) wines in new bottle
 -----------------------------
 * **Capsule** `[paper] `__ `[tutorial] <4_old_wines/2_capsule.html>`__ `[code]
-  `__:
+  `__:
   this new computer vision model has two key ideas -- enhancing the feature
   representation in a vector form (instead of a scalar) called *capsule*, and
   replacing max-pooling with dynamic routing. The idea of dynamic routing is to
@@ -15,9 +15,9 @@ Old (new) wines in new bottle
 * **Transformer** `[paper] `__ `[tutorial] <4_old_wines/7_transformer.html>`__
-  `[code] `__ and **Universal Transformer**
+  `[code] `__ and **Universal Transformer**
   `[paper] `__ `[tutorial] <4_old_wines/7_transformer.html>`__
-  `[code] `__:
+  `[code] `__:
   these two models replace RNN with several layers of multi-head attention to
   encode and discover structures among tokens of a sentence. These attention
   mechanisms can similarly formulated as graph operations with
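The last hunk above mentions that multi-head attention can be formulated as graph operations. As a rough, hypothetical sketch of that idea (not the code from the DGL Transformer example), the snippet below scores each edge of a graph with a scaled dot product, softmax-normalizes the scores over the incoming edges of every destination node, and aggregates the weighted source messages; all names (`edge_attention_aggregate`, `w_q`, `w_k`, `w_v`) are illustrative.

```python
# Hypothetical sketch only -- not the DGL tutorial implementation.
# Single-head dot-product attention written as edge-wise graph operations:
# score each edge, softmax over each destination's incoming edges, aggregate.
import torch

def edge_attention_aggregate(h, src, dst, w_q, w_k, w_v):
    """h: (N, d) node features; src, dst: (E,) LongTensors of edge endpoints."""
    q, k, v = h @ w_q, h @ w_k, h @ w_v                        # per-node projections
    scores = (q[dst] * k[src]).sum(-1) / k.shape[-1] ** 0.5    # one score per edge
    scores = scores - scores.max()                             # shared shift keeps softmax stable
    num = torch.exp(scores)
    denom = torch.zeros(h.shape[0]).index_add_(0, dst, num)    # per-destination sum
    alpha = num / denom[dst]                                   # edge-wise attention weights
    out = torch.zeros(h.shape[0], v.shape[-1])
    return out.index_add_(0, dst, alpha.unsqueeze(-1) * v[src])  # weighted sum of messages

# Toy usage: 3 nodes, edges 0->2, 1->2, 2->0.
h = torch.randn(3, 4)
src, dst = torch.tensor([0, 1, 2]), torch.tensor([2, 2, 0])
w_q, w_k, w_v = (torch.randn(4, 4) for _ in range(3))
print(edge_attention_aggregate(h, src, dst, w_q, w_k, w_v).shape)  # torch.Size([3, 4])
```

The same score-per-edge, softmax-per-node, aggregate pattern is what graph libraries batch efficiently across edges, which is why attention fits naturally into a message-passing framework.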