http to https for pytorch.org in the tutorial documentation
JoelMarcey committed Nov 19, 2018
1 parent 400e21b commit 2441373
Showing 27 changed files with 59 additions and 59 deletions.
8 changes: 4 additions & 4 deletions advanced_source/README.txt
@@ -3,16 +3,16 @@ Advanced Tutorials

1. neural_style_tutorial.py
Neural Transfer with PyTorch
http://pytorch.org/tutorials/advanced/neural_style_tutorial.html
https://pytorch.org/tutorials/advanced/neural_style_tutorial.html

2. numpy_extensions_tutorial.py
Creating Extensions Using numpy and scipy
http://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html
https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html

3. c_extension.rst
Custom C Extensions for PyTorch
http://pytorch.org/tutorials/advanced/c_extension.html
https://pytorch.org/tutorials/advanced/c_extension.html

4. super_resolution_with_caffe2.py
Transfering a Model from PyTorch to Caffe2 and Mobile using ONNX
http://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html
https://pytorch.org/tutorials/advanced/super_resolution_with_caffe2.html
2 changes: 1 addition & 1 deletion advanced_source/cpp_export.rst
@@ -2,7 +2,7 @@ Loading a PyTorch Model in C++
==============================

.. attention:: This tutorial requires PyTorch 1.0 (preview) or later.
For installation information visit http://pytorch.org/get-started.
For installation information visit https://pytorch.org/get-started.

As its name suggests, the primary interface to PyTorch is the Python
programming language. While Python is a suitable and preferred language for
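A minimal sketch of the Python half of that workflow (model choice and input size are illustrative): trace a module and serialize it so that it can later be loaded from C++.

    import torch
    import torchvision

    model = torchvision.models.resnet18(pretrained=True)
    model.eval()

    example = torch.rand(1, 3, 224, 224)      # dummy input used for tracing
    traced = torch.jit.trace(model, example)  # records the operations executed
    traced.save("traced_resnet18.pt")         # serialized module for the C++ API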
2 changes: 1 addition & 1 deletion advanced_source/cpp_extension.rst
@@ -11,7 +11,7 @@ you developed as part of your research.

The easiest way of integrating such a custom operation in PyTorch is to write it
in Python by extending :class:`Function` and :class:`Module` as outlined `here
<http://pytorch.org/docs/master/notes/extending.html>`_. This gives you the full
<https://pytorch.org/docs/master/notes/extending.html>`_. This gives you the full
power of automatic differentiation (spares you from writing derivative
functions) as well as the usual expressiveness of Python. However, there may be
times when your operation is better implemented in C++. For example, your code
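The pure-Python route mentioned above, subclassing ``torch.autograd.Function``, looks roughly like this; the exponential op is only an illustration.

    import torch

    class Exp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            result = x.exp()
            ctx.save_for_backward(result)   # stash for the backward pass
            return result

        @staticmethod
        def backward(ctx, grad_output):
            result, = ctx.saved_tensors
            return grad_output * result     # d/dx exp(x) = exp(x)

    x = torch.randn(3, requires_grad=True)
    Exp.apply(x).sum().backward()           # autograd calls the custom backward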
4 changes: 2 additions & 2 deletions advanced_source/neural_style_tutorial.py
@@ -90,8 +90,8 @@
#
# .. Note::
# Here are links to download the images required to run the tutorial:
# `picasso.jpg <http://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg>`__ and
# `dancing.jpg <http://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg>`__.
# `picasso.jpg <https://pytorch.org/tutorials/_static/img/neural-style/picasso.jpg>`__ and
# `dancing.jpg <https://pytorch.org/tutorials/_static/img/neural-style/dancing.jpg>`__.
# Download these two images and add them to a directory
# with name ``images`` in your current working directory.

2 changes: 1 addition & 1 deletion advanced_source/super_resolution_with_caffe2.py
@@ -103,7 +103,7 @@ def _initialize_weights(self):
# or a random tensor as long as it is the right size.
#
# To learn more details about PyTorch's export interface, check out the
# `torch.onnx documentation <http://pytorch.org/docs/master/onnx.html>`__.
# `torch.onnx documentation <https://pytorch.org/docs/master/onnx.html>`__.
#

# Input to the model
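A sketch of the export call that hunk refers to; the model and input shape are placeholders.

    import torch
    import torchvision

    model = torchvision.models.alexnet(pretrained=True)
    dummy_input = torch.randn(1, 3, 224, 224)   # any tensor of the right size
    torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True)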
10 changes: 5 additions & 5 deletions beginner_source/README.txt
@@ -3,20 +3,20 @@ Beginner Tutorials

1. blitz/* and deep_learning_60min_blitz.rst
Deep Learning with PyTorch: A 60 Minute Blitz
http://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html
https://pytorch.org/tutorials/beginner/deep_learning_60min_blitz.html

2. former_torches/* and former_torchies_tutorial.rst
PyTorch for Former Torch Users
http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html
https://pytorch.org/tutorials/beginner/former_torchies_tutorial.html

3. examples_*/* and pytorch_with_examples.rst
Learning PyTorch with Examples
http://pytorch.org/tutorials/beginner/pytorch_with_examples.html
https://pytorch.org/tutorials/beginner/pytorch_with_examples.html

4. transfer_learning_tutorial.py
Transfer Learning Tutorial
http://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html

5. nlp/* and deep_learning_nlp_tutorial.rst
Deep Learning for NLP with Pytorch
http://pytorch.org/tutorials/beginner/deep_learning_nlp_tutorial.html
https://pytorch.org/tutorials/beginner/deep_learning_nlp_tutorial.html
8 changes: 4 additions & 4 deletions beginner_source/blitz/README.txt
@@ -3,22 +3,22 @@ Deep Learning with PyTorch: A 60 Minute Blitz

1. tensor_tutorial.py
What is PyTorch?
http://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html
https://pytorch.org/tutorials/beginner/blitz/tensor_tutorial.html

2. autograd_tutorial.py
Autograd: Automatic Differentiation
http://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html
https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html

3. neural_networks_tutorial.py
Neural Networks
http://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#
https://pytorch.org/tutorials/beginner/blitz/neural_networks_tutorial.html#

4. autograd_tutorial.py
Automatic Differentiation
https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html

5. cifar10_tutorial.py
Training a Classifier
http://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html
https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html


2 changes: 1 addition & 1 deletion beginner_source/blitz/autograd_tutorial.py
@@ -140,4 +140,4 @@
# **Read Later:**
#
# Documentation of ``autograd`` and ``Function`` is at
# http://pytorch.org/docs/autograd
# https://pytorch.org/docs/autograd
2 changes: 1 addition & 1 deletion beginner_source/blitz/cifar10_tutorial.py
@@ -331,4 +331,4 @@ def forward(self, x):
# .. _More examples: https://github.com/pytorch/examples
# .. _More tutorials: https://github.com/pytorch/tutorials
# .. _Discuss PyTorch on the Forums: https://discuss.pytorch.org/
# .. _Chat with other users on Slack: http://pytorch.slack.com/messages/beginner/
# .. _Chat with other users on Slack: https://pytorch.slack.com/messages/beginner/
2 changes: 1 addition & 1 deletion beginner_source/blitz/data_parallel_tutorial.py
@@ -251,5 +251,5 @@ def forward(self, input):
# collects and merges the results before returning it to you.
#
# For more information, please check out
# http://pytorch.org/tutorials/beginner/former\_torchies/parallelism\_tutorial.html.
# https://pytorch.org/tutorials/beginner/former\_torchies/parallelism\_tutorial.html.
#
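Wrapping a model in ``DataParallel`` as that tutorial describes is essentially a one-liner; the module below is just a stand-in.

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 5)                  # stand-in for any module
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)        # splits each batch across GPUs
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)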
4 changes: 2 additions & 2 deletions beginner_source/blitz/neural_networks_tutorial.py
@@ -148,7 +148,7 @@ def num_flat_features(self, x):
# value that estimates how far away the output is from the target.
#
# There are several different
# `loss functions <http://pytorch.org/docs/nn.html#loss-functions>`_ under the
# `loss functions <https://pytorch.org/docs/nn.html#loss-functions>`_ under the
# nn package .
# A simple loss is: ``nn.MSELoss`` which computes the mean-squared error
# between the input and the target.
@@ -214,7 +214,7 @@ def num_flat_features(self, x):
#
# The neural network package contains various modules and loss functions
# that form the building blocks of deep neural networks. A full list with
# documentation is `here <http://pytorch.org/docs/nn>`_.
# documentation is `here <https://pytorch.org/docs/nn>`_.
#
# **The only thing left to learn is:**
#
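The ``nn.MSELoss`` mentioned above is used like this; the tensors are random stand-ins for the network output and target.

    import torch
    import torch.nn as nn

    output = torch.randn(1, 10)       # stand-in for net(input)
    target = torch.randn(1, 10)       # dummy target of the same shape
    criterion = nn.MSELoss()
    loss = criterion(output, target)  # mean-squared error between the two
    print(loss.item())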
2 changes: 1 addition & 1 deletion beginner_source/blitz/tensor_tutorial.py
@@ -124,7 +124,7 @@
# 100+ Tensor operations, including transposing, indexing, slicing,
# mathematical operations, linear algebra, random numbers, etc.,
# are described
# `here <http://pytorch.org/docs/torch>`_.
# `here <https://pytorch.org/docs/torch>`_.
#
# NumPy Bridge
# ------------
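A few of the tensor operations and the NumPy bridge covered in that section, as a minimal sketch:

    import torch

    x = torch.rand(3, 4)
    print(x.t())        # transpose
    print(x[:, 1])      # slice out the second column
    a = x.numpy()       # NumPy bridge: the array shares memory with x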
2 changes: 1 addition & 1 deletion beginner_source/chatbot_tutorial.py
@@ -14,7 +14,7 @@
# Corpus <https://www.cs.cornell.edu/~cristian/Cornell_Movie-Dialogs_Corpus.html>`__.
#
# .. attention:: This example requires PyTorch 1.0 (preview) or later.
# For installation information visit http://pytorch.org/get-started.
# For installation information visit https://pytorch.org/get-started.
#
# Conversational models are a hot topic in artificial intelligence
# research. Chatbots can be found in a variety of settings, including
2 changes: 1 addition & 1 deletion beginner_source/deploy_seq2seq_hybrid_frontend_tutorial.py
@@ -19,7 +19,7 @@
# training.
#
# .. attention:: This example requires PyTorch 1.0 (preview) or later.
# For installation information visit http://pytorch.org/get-started.
# For installation information visit https://pytorch.org/get-started.
#
# What is the Hybrid Frontend?
# ----------------------------
8 changes: 4 additions & 4 deletions beginner_source/former_torchies/README.txt
@@ -3,16 +3,16 @@

1. tensor_tutorial.py
Tensors
http://pytorch.org/tutorials/beginner/former_torchies/tensor_tutorial.html
https://pytorch.org/tutorials/beginner/former_torchies/tensor_tutorial.html

2. autograd.py
Autograd
http://pytorch.org/tutorials/beginner/former_torchies/autograd_tutorial.html
https://pytorch.org/tutorials/beginner/former_torchies/autograd_tutorial.html

3. nn_tutorial.py
nn package
http://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html
https://pytorch.org/tutorials/beginner/former_torchies/nn_tutorial.html

4. parallelism_tutorial.py
Multi-GPU examples
http://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html
https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html
4 changes: 2 additions & 2 deletions beginner_source/former_torchies/parallelism_tutorial.py
@@ -40,7 +40,7 @@ def forward(self, x):
# The code does not need to be changed in CPU-mode.
#
# The documentation for DataParallel can be found
# `here <http://pytorch.org/docs/nn.html#dataparallel>`_.
# `here <https://pytorch.org/docs/nn.html#dataparallel>`_.
#
# **Primitives on which DataParallel is implemented upon:**
#
@@ -125,4 +125,4 @@ def forward(self, x):
# .. _More examples: https://github.com/pytorch/examples
# .. _More tutorials: https://github.com/pytorch/tutorials
# .. _Discuss PyTorch on the Forums: https://discuss.pytorch.org/
# .. _Chat with other users on Slack: http://pytorch.slack.com/messages/beginner/
# .. _Chat with other users on Slack: https://pytorch.slack.com/messages/beginner/
4 changes: 2 additions & 2 deletions beginner_source/hybrid_frontend/README.txt
@@ -3,8 +3,8 @@

1. learning_hybrid_frontend_through_example_tutorial.py
Learning Hybrid Frontend Through Example
http://pytorch.org/tutorials/beginner/hybrid_frontend/learning_hybrid_frontend_through_example_tutorial.html
https://pytorch.org/tutorials/beginner/hybrid_frontend/learning_hybrid_frontend_through_example_tutorial.html

2. introduction_to_hybrid_frontend_tutorial.py
Introduction to Hybrid Frontend
http://pytorch.org/tutorials/beginner/hybrid_frontend/introduction_to_hybrid_frontend_tutorial.html
https://pytorch.org/tutorials/beginner/hybrid_frontend/introduction_to_hybrid_frontend_tutorial.html
@@ -5,7 +5,7 @@
**Author:** `Nathan Inkawhich <https://github.com/inkawhich>`_
This tutorial requires PyTorch 1.0 (preview) or later. For installation
information visit http://pytorch.org/get-started.
information visit https://pytorch.org/get-started.
This document is meant to highlight the syntax of the Hybrid Frontend
through a non-code intensive example. The Hybrid Frontend is one of the
10 changes: 5 additions & 5 deletions beginner_source/nlp/README.txt
@@ -3,20 +3,20 @@ Deep Learning for NLP with Pytorch

1. pytorch_tutorial.py
Introduction to PyTorch
http://pytorch.org/tutorials/beginner/nlp/pytorch_tutorial.html
https://pytorch.org/tutorials/beginner/nlp/pytorch_tutorial.html

2. deep_learning_tutorial.py
Deep Learning with PyTorch
http://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html
https://pytorch.org/tutorials/beginner/nlp/deep_learning_tutorial.html

3. word_embeddings_tutorial.py
Word Embeddings: Encoding Lexical Semantics
http://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html
https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html

4. sequence_models_tutorial.py
Sequence Models and Long-Short Term Memory Networks
http://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html
https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html

5. advanced_tutorial.py
Advanced: Making Dynamic Decisions and the Bi-LSTM CRF
http://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html
2 changes: 1 addition & 1 deletion beginner_source/nlp/pytorch_tutorial.py
@@ -102,7 +102,7 @@


######################################################################
# See `the documentation <http://pytorch.org/docs/torch.html>`__ for a
# See `the documentation <https://pytorch.org/docs/torch.html>`__ for a
# complete list of the massive number of operations available to you. They
# expand beyond just mathematical operations.
#
2 changes: 1 addition & 1 deletion beginner_source/transfer_learning_tutorial.py
@@ -289,7 +289,7 @@ def visualize_model(model, num_images=6):
# gradients are not computed in ``backward()``.
#
# You can read more about this in the documentation
# `here <http://pytorch.org/docs/notes/autograd.html#excluding-subgraphs-from-backward>`__.
# `here <https://pytorch.org/docs/notes/autograd.html#excluding-subgraphs-from-backward>`__.
#

model_conv = torchvision.models.resnet18(pretrained=True)
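Excluding the pretrained backbone from backward as described above amounts to the following; the two-class head is illustrative.

    import torch.nn as nn
    import torchvision

    model_conv = torchvision.models.resnet18(pretrained=True)
    for param in model_conv.parameters():
        param.requires_grad = False            # frozen: no gradients computed

    num_ftrs = model_conv.fc.in_features
    model_conv.fc = nn.Linear(num_ftrs, 2)     # new head; its params do require grad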
2 changes: 1 addition & 1 deletion index.rst
@@ -18,7 +18,7 @@ Some considerations:
* If you would like the tutorials section improved, please open a github issue
`here <https://github.com/pytorch/tutorials>`_ with your feedback.

Lastly, some of the tutorials are marked as requiring the *Preview release*. These are tutorials that use the new functionality from the PyTorch 1.0 Preview. Please visit the `Get Started <http://pytorch.org/get-started>`_ section of the PyTorch website for instructions on how to install the latest Preview build before trying these tutorials.
Lastly, some of the tutorials are marked as requiring the *Preview release*. These are tutorials that use the new functionality from the PyTorch 1.0 Preview. Please visit the `Get Started <https://pytorch.org/get-started>`_ section of the PyTorch website for instructions on how to install the latest Preview build before trying these tutorials.

Getting Started
------------------
12 changes: 6 additions & 6 deletions intermediate_source/README.txt
@@ -3,24 +3,24 @@ Intermediate tutorials

1. char_rnn_classification_tutorial.py
Classifying Names with a Character-Level RNN
http://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html
https://pytorch.org/tutorials/intermediate/char_rnn_classification_tutorial.html

2. char_rnn_generation_tutorial.py
Generating Names with a Character-Level RNN
http://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html
https://pytorch.org/tutorials/intermediate/char_rnn_generation_tutorial.html

3. seq2seq_translation_tutorial.py
Translation with a Sequence to Sequence Network and Attention
http://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html
https://pytorch.org/tutorials/intermediate/seq2seq_translation_tutorial.html

4. reinforcement_q_learning.py
Reinforcement Learning (DQN) Tutorial
http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html

5. dist_tuto.rst
Writing Distributed Applications with PyTorch
http://pytorch.org/tutorials/intermediate/dist_tuto.html
https://pytorch.org/tutorials/intermediate/dist_tuto.html

6. spatial_transformer_tutorial
Spatial Transformer Networks Tutorial
http://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
https://pytorch.org/tutorials/intermediate/spatial_transformer_tutorial.html
4 changes: 2 additions & 2 deletions intermediate_source/char_rnn_classification_tutorial.py
@@ -32,7 +32,7 @@
I assume you have at least installed PyTorch, know Python, and
understand Tensors:
- http://pytorch.org/ For installation instructions
- https://pytorch.org/ For installation instructions
- :doc:`/beginner/deep_learning_60min_blitz` to get started with PyTorch in general
- :doc:`/beginner/pytorch_with_examples` for a wide and deep overview
- :doc:`/beginner/former_torchies_tutorial` if you are former Lua Torch user
@@ -171,7 +171,7 @@ def lineToTensor(line):
# as regular feed-forward layers.
#
# This RNN module (mostly copied from `the PyTorch for Torch users
# tutorial <http://pytorch.org/tutorials/beginner/former_torchies/
# tutorial <https://pytorch.org/tutorials/beginner/former_torchies/
# nn_tutorial.html#example-2-recurrent-net>`__)
# is just 2 linear layers which operate on an input and hidden state, with
# a LogSoftmax layer after the output.
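The two-linear-layer RNN described above is roughly the following; sizes are left as parameters.

    import torch
    import torch.nn as nn

    class RNN(nn.Module):
        def __init__(self, input_size, hidden_size, output_size):
            super(RNN, self).__init__()
            self.hidden_size = hidden_size
            self.i2h = nn.Linear(input_size + hidden_size, hidden_size)
            self.i2o = nn.Linear(input_size + hidden_size, output_size)
            self.softmax = nn.LogSoftmax(dim=1)

        def forward(self, input, hidden):
            combined = torch.cat((input, hidden), 1)   # join input and hidden state
            hidden = self.i2h(combined)
            output = self.softmax(self.i2o(combined))
            return output, hidden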
2 changes: 1 addition & 1 deletion intermediate_source/char_rnn_generation_tutorial.py
@@ -42,7 +42,7 @@
I assume you have at least installed PyTorch, know Python, and
understand Tensors:
- http://pytorch.org/ For installation instructions
- https://pytorch.org/ For installation instructions
- :doc:`/beginner/deep_learning_60min_blitz` to get started with PyTorch in general
- :doc:`/beginner/pytorch_with_examples` for a wide and deep overview
- :doc:`/beginner/former_torchies_tutorial` if you are former Lua Torch user
12 changes: 6 additions & 6 deletions intermediate_source/dist_tuto.rst
@@ -255,7 +255,7 @@ GitHub repository <https://github.com/seba-1511/dist_tuto.pth/>`__.
Now that we understand how the distributed module works, let us write
something useful with it. Our goal will be to replicate the
functionality of
`DistributedDataParallel <http://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__.
`DistributedDataParallel <https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__.
Of course, this will be a didactic example and in a real-world
situtation you should use the official, well-tested and well-optimized
version linked above.
@@ -380,7 +380,7 @@ could train any model on a large computer cluster.
lot more tricks <http://seba-1511.github.io/dist_blog>`__ required to
implement a production-level implementation of synchronous SGD. Again,
use what `has been tested and
optimized <http://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__.
optimized <https://pytorch.org/docs/stable/nn.html#torch.nn.parallel.DistributedDataParallel>`__.
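The core of the didactic DistributedDataParallel replica is averaging gradients with an all-reduce after ``backward()``, roughly as below; note the op constant's name may differ across PyTorch versions.

    import torch.distributed as dist

    def average_gradients(model):
        """Average gradients across all processes after backward()."""
        world_size = float(dist.get_world_size())
        for param in model.parameters():
            dist.all_reduce(param.grad.data, op=dist.reduce_op.SUM)
            param.grad.data /= world_size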

Our Own Ring-Allreduce
~~~~~~~~~~~~~~~~~~~~~~
@@ -424,7 +424,7 @@ an exercise left to the reader, there is still one difference between
our version and the one in DeepSpeech: their implementation divide the
gradient tensor into *chunks*, so as to optimally utilize the
communication bandwidth. (Hint:
`torch.chunk <http://pytorch.org/docs/stable/torch.html#torch.chunk>`__)
`torch.chunk <https://pytorch.org/docs/stable/torch.html#torch.chunk>`__)
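The hinted ``torch.chunk`` splits a tensor into roughly equal pieces, for example:

    import torch

    grad = torch.arange(10.)              # stand-in for a flattened gradient
    for piece in torch.chunk(grad, 4):    # 4 pieces; the last one may be shorter
        print(piece.size())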

Advanced Topics
---------------
@@ -447,7 +447,7 @@ there are currently three backends implemented in PyTorch: TCP, MPI, and
Gloo. They each have different specifications and tradeoffs, depending
on the desired use-case. A comparative table of supported functions can
be found
`here <http://pytorch.org/docs/stable/distributed.html#module-torch.distributed>`__. Note that a fourth backend, NCCL, has been added since the creation of this tutorial. See `this section <https://pytorch.org/docs/stable/distributed.html#multi-gpu-collective-functions>`__ of the ``torch.distributed`` docs for more information about its use and value.
`here <https://pytorch.org/docs/stable/distributed.html#module-torch.distributed>`__. Note that a fourth backend, NCCL, has been added since the creation of this tutorial. See `this section <https://pytorch.org/docs/stable/distributed.html#multi-gpu-collective-functions>`__ of the ``torch.distributed`` docs for more information about its use and value.

**TCP Backend**

@@ -552,7 +552,7 @@ Those methods allow you to define how this coordination is done.
Depending on your hardware setup, one of these methods should be
naturally more suitable than the others. In addition to the following
sections, you should also have a look at the `official
documentation <http://pytorch.org/docs/stable/distributed.html#initialization>`__.
documentation <https://pytorch.org/docs/stable/distributed.html#initialization>`__.
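As a rough illustration of the environment-variable method, initialization can look like this; address, port, rank, and world size are placeholders.

    import os
    import torch.distributed as dist

    os.environ['MASTER_ADDR'] = '127.0.0.1'
    os.environ['MASTER_PORT'] = '29500'
    dist.init_process_group(backend='gloo', rank=0, world_size=1)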

Before diving into the initialization methods, let's have a quick look
at what happens behind ``init_process_group`` from the C/C++
@@ -673,7 +673,7 @@ multiple jobs to be scheduled on the same cluster.
I'd like to thank the PyTorch developers for doing such a good job on
their implementation, documentation, and tests. When the code was
unclear, I could always count on the
`docs <http://pytorch.org/docs/stable/distributed.html>`__ or the
`docs <https://pytorch.org/docs/stable/distributed.html>`__ or the
`tests <https://github.com/pytorch/pytorch/blob/master/test/test_distributed.py>`__
to find an answer. In particular, I'd like to thank Soumith Chintala,
Adam Paszke, and Natalia Gimelshein for providing insightful comments

