Fix link to pytorch documents (pytorch#1294)
* Fix link to pytorch documents

* Fix too long lines

Co-authored-by: vfdev <[email protected]>
kamahori and vfdev-5 authored Sep 15, 2020
1 parent 766167e commit 564e541
Showing 4 changed files with 14 additions and 12 deletions.
8 changes: 4 additions & 4 deletions examples/notebooks/FashionMNIST.ipynb
@@ -234,10 +234,10 @@
"source": [
"Explanation of Model Architecture\n",
"\n",
"* [Convolutional layers](https://pytorch.org/docs/stable/nn.html#conv2d), the Convolutional layer is used to create a convolution kernel that is convolved with the layer input to produce a tensor of outputs.\n",
"* [Maxpooling layers](https://pytorch.org/docs/stable/nn.html#maxpool2d), the Maxpooling layer is used to downsample an input representation keeping the most active pixels from the previous layer.\n",
"* The usual [Linear](https://pytorch.org/docs/stable/nn.html#linear) + [Dropout](https://pytorch.org/docs/stable/nn.html#dropout2d) layers to avoid overfitting and produce a 10-dim output.\n",
"* We had used [Relu](https://pytorch.org/docs/stable/nn.html#id27) Non Linearity for the model and [logsoftmax](https://pytorch.org/docs/stable/nn.html#log-softmax) at the last layer because we are going to use the [NLLL loss](https://pytorch.org/docs/stable/nn.html#nllloss).\n"
"* [Convolutional layers](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html), the Convolutional layer is used to create a convolution kernel that is convolved with the layer input to produce a tensor of outputs.\n",
"* [Maxpooling layers](https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html), the Maxpooling layer is used to downsample an input representation keeping the most active pixels from the previous layer.\n",
"* The usual [Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html) + [Dropout](https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html) layers to avoid overfitting and produce a 10-dim output.\n",
"* We had used [Relu](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html) Non Linearity for the model and [logsoftmax](https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html) at the last layer because we are going to use the [NLLL loss](https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html).\n"
]
},
{
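For reference, a minimal sketch of the network described in the notebook cell above: Conv2d + MaxPool2d feature extractor, Linear + Dropout2d head, ReLU activations, and LogSoftmax paired with NLLLoss. The class name, channel counts, and layer sizes are assumptions for illustration, not taken from the notebook's actual code.

```python
import torch.nn as nn

class FashionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),            # 14x14 -> 7x7
        )
        self.classifier = nn.Sequential(
            nn.Dropout2d(0.25),         # drop whole feature maps to reduce overfitting
            nn.Flatten(),
            nn.Linear(32 * 7 * 7, 10),  # 10-dim output, one score per class
            nn.LogSoftmax(dim=1),       # log-probabilities expected by NLLLoss
        )

    def forward(self, x):
        return self.classifier(self.features(x))

criterion = nn.NLLLoss()  # pairs with the LogSoftmax output above
```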
6 changes: 3 additions & 3 deletions ignite/contrib/handlers/tensorboard_logger.py
@@ -384,16 +384,16 @@ class TensorboardLogger(BaseLogger):
otherwise, it falls back to using
`PyTorch's SummaryWriter
-<https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter>`_
+<https://pytorch.org/docs/stable/tensorboard.html>`_
(>=v1.2.0).
Args:
*args: Positional arguments accepted from
`SummaryWriter
-<https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter>`_.
+<https://pytorch.org/docs/stable/tensorboard.html>`_.
**kwargs: Keyword arguments accepted from
`SummaryWriter
-<https://pytorch.org/docs/stable/tensorboard.html#torch.utils.tensorboard.writer.SummaryWriter>`_.
+<https://pytorch.org/docs/stable/tensorboard.html>`_.
For example, `log_dir` to setup path to the directory where to log.
Examples:
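For reference, a minimal usage sketch of the logger whose docstring is updated above. The dummy trainer and log directory are placeholders; `log_dir` is simply forwarded to SummaryWriter as the docstring describes.

```python
from ignite.contrib.handlers.tensorboard_logger import TensorboardLogger, OutputHandler
from ignite.engine import Engine, Events

# Dummy trainer for illustration only; in practice this is your training engine.
trainer = Engine(lambda engine, batch: batch)

# log_dir (and any other positional/keyword argument) is forwarded to SummaryWriter.
tb_logger = TensorboardLogger(log_dir="/tmp/tb_logs")

# Log the trainer's output under the "training" tag after every iteration.
tb_logger.attach(
    trainer,
    log_handler=OutputHandler(tag="training", output_transform=lambda loss: {"loss": loss}),
    event_name=Events.ITERATION_COMPLETED,
)

tb_logger.close()
```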
2 changes: 1 addition & 1 deletion ignite/distributed/utils.py
@@ -293,7 +293,7 @@ def train_fn(local_rank, a, b, c, d=12):
| and `node_rank=0` are tolerated and ignored, otherwise an exception is raised.
.. _dist.init_process_group: https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group
-.. _mp.start_processes: https://pytorch.org/docs/stable/_modules/torch/multiprocessing/spawn.html#spawn
+.. _mp.start_processes: https://pytorch.org/docs/stable/multiprocessing.html#torch.multiprocessing.spawn
.. _xmp.spawn: http://pytorch.org/xla/release/1.6/index.html#torch_xla.distributed.xla_multiprocessing.spawn
.. _hvd_run: https://horovod.readthedocs.io/en/latest/api.html#module-horovod.run
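For reference, a minimal sketch of calling the spawn utility whose docstring is touched above, following the `train_fn(local_rank, a, b, c, d=12)` signature it shows. The backend, process count, and argument values are illustrative.

```python
import ignite.distributed as idist

def train_fn(local_rank, a, b, c, d=12):
    # Each spawned process receives its own local_rank; the remaining
    # arguments come from args / kwargs_dict passed to idist.spawn below.
    print(f"rank {idist.get_rank()} / {idist.get_world_size()}: {a}, {b}, {c}, {d}")

if __name__ == "__main__":
    # Start 4 processes on this node using the native "gloo" backend.
    idist.spawn("gloo", train_fn, args=(1, 2, 3), kwargs_dict={"d": 23}, nproc_per_node=4)
```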
10 changes: 6 additions & 4 deletions ignite/handlers/checkpoint.py
@@ -99,8 +99,9 @@ class Checkpoint(Serializable):
include_self (bool): Whether to include the `state_dict` of this object in the checkpoint. If `True`, then
there must not be another object in ``to_save`` with key ``checkpointer``.
-.. _DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel
-.. _DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel
+.. _DistributedDataParallel: https://pytorch.org/docs/stable/generated/
+    torch.nn.parallel.DistributedDataParallel.html
+.. _DataParallel: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html
Note:
This class stores a single file as a dictionary of provided objects to save.
@@ -475,8 +476,9 @@ def load_objects(to_load: Mapping, checkpoint: Mapping, **kwargs) -> None:
**kwargs: Keyword arguments accepted for `nn.Module.load_state_dict()`. Passing `strict=False` enables
the user to load part of the pretrained model (useful for example, in Transfer Learning)
-.. _DistributedDataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel
-.. _DataParallel: https://pytorch.org/docs/stable/nn.html#torch.nn.DataParallel
+.. _DistributedDataParallel: https://pytorch.org/docs/stable/generated/
+    torch.nn.parallel.DistributedDataParallel.html
+.. _DataParallel: https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html
"""
Checkpoint._check_objects(to_load, "load_state_dict")
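For reference, a minimal sketch of `Checkpoint.load_objects` as described in the docstring above. The model and checkpoint path are placeholders; per that docstring, extra keyword arguments such as `strict=False` are forwarded to `nn.Module.load_state_dict()` to allow a partial load.

```python
import torch
from ignite.handlers import Checkpoint

# Placeholder model and checkpoint path, used only for illustration.
model = torch.nn.Linear(10, 2)
to_load = {"model": model}

checkpoint = torch.load("/tmp/checkpoint_1000.pt", map_location="cpu")

# strict=False lets the model load only the matching part of the checkpoint
# (useful, for example, in transfer learning).
Checkpoint.load_objects(to_load=to_load, checkpoint=checkpoint, strict=False)
```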
