Commit 9b7176d
[DOC] Fix broken links
remyleone authored and nelson-liu committed Mar 23, 2016
1 parent eed5fc5 commit 9b7176d
Showing 77 changed files with 253 additions and 262 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -12,10 +12,10 @@ How to contribute
-----------------

The preferred way to contribute to scikit-learn is to fork the
-[main repository](http://github.com/scikit-learn/scikit-learn/) on
+[main repository](https://github.com/scikit-learn/scikit-learn) on
GitHub:

-1. Fork the [project repository](http://github.com/scikit-learn/scikit-learn):
+1. Fork the [project repository](https://github.com/scikit-learn/scikit-learn):
click on the 'Fork' button near the top of the page. This creates
a copy of the code under your account on the GitHub server.

2 changes: 1 addition & 1 deletion doc/README
@@ -35,5 +35,5 @@ to update the http://scikit-learn.org/dev tree of the website.

The configuration of this server is managed at:

-http://github.com/scikit-learn/sklearn-docbuilder
+https://github.com/scikit-learn/sklearn-docbuilder

18 changes: 9 additions & 9 deletions doc/about.rst
@@ -63,7 +63,7 @@ High quality PNG and SVG logos are available in the `doc/logos/ <https://github.
Funding
-------

-`INRIA <http://www.inria.fr>`_ actively supports this project. It has
+`INRIA <https://www.inria.fr>`_ actively supports this project. It has
provided funding for Fabian Pedregosa (2010-2012), Jaques Grobler
(2012-2013) and Olivier Grisel (2013-2015) to work on this project
full-time. It also hosts coding sprints and other events.
@@ -88,9 +88,9 @@ Environment also funds several students to work on the project part-time.
:width: 200pt
:align: center

-The following students were sponsored by `Google <http://code.google.com/opensource/>`_
+The following students were sponsored by `Google <https://developers.google.com/open-source/>`_
to work on scikit-learn through the
-`Google Summer of Code <http://en.wikipedia.org/wiki/Google_Summer_of_Code>`_
+`Google Summer of Code <https://en.wikipedia.org/wiki/Google_Summer_of_Code>`_
program.

- 2007 - David Cournapeau
@@ -102,14 +102,14 @@ program.
It also provided funding for sprints and events around scikit-learn. If
you would like to participate in the next Google Summer of code
program, please see `this page
-<http://github.com/scikit-learn/scikit-learn/wiki/SummerOfCode>`_
+<https://github.com/scikit-learn/scikit-learn/wiki/SummerOfCode>`_

The `NeuroDebian <http://neuro.debian.net>`_ project providing `Debian
<http://www.debian.org>`_ packaging and contributions is supported by
`Dr. James V. Haxby <http://haxbylab.dartmouth.edu/>`_ (`Dartmouth
-College <http://www.dartmouth.edu/~psych/>`_).
+College <http://pbs.dartmouth.edu>`_).

-The `PSF <http://www.python.org/psf/>`_ helped find and manage funding for our
+The `PSF <https://www.python.org/psf/>`_ helped find and manage funding for our
2011 Granada sprint. More information can be found `here
<https://github.com/scikit-learn/scikit-learn/wiki/Past-sprints#granada-19th-21th-dec-2011>`__

@@ -121,12 +121,12 @@ Donating to the project
~~~~~~~~~~~~~~~~~~~~~~~

If you are interested in donating to the project or to one of our code-sprints, you can use
-the *Paypal* button below or the `NumFOCUS Donations Page <http://numfocus.org/donatejoin/>`_ (if you use the latter, please indicate that you are donating for the scikit-learn project).
+the *Paypal* button below or the `NumFOCUS Donations Page <http://www.numfocus.org/support-numfocus.html>`_ (if you use the latter, please indicate that you are donating for the scikit-learn project).

All donations will be handled by `NumFOCUS
-<http://numfocus.org/donations>`_, a non-profit-organization which is
+<http://www.numfocus.org>`_, a non-profit-organization which is
managed by a board of `Scipy community members
-<http://numfocus.org/board>`_. NumFOCUS's mission is to foster
+<http://www.numfocus.org/board>`_. NumFOCUS's mission is to foster
scientific computing software, in particular in Python. As a fiscal home
of scikit-learn, it ensures that money is available when needed to keep
the project funded and available while in compliance with tax regulations.
2 changes: 1 addition & 1 deletion doc/datasets/twenty_newsgroups.rst
@@ -111,7 +111,7 @@ components by sample in a more than 30000-dimensional space
ready-to-use tfidf features instead of file names.

.. _`20 newsgroups website`: http://people.csail.mit.edu/jrennie/20Newsgroups/
-.. _`TF-IDF`: http://en.wikipedia.org/wiki/Tf-idf
+.. _`TF-IDF`: https://en.wikipedia.org/wiki/Tf-idf


Filtering text for more realistic training
10 changes: 4 additions & 6 deletions doc/developers/advanced_installation.rst
@@ -140,7 +140,7 @@ from source package
~~~~~~~~~~~~~~~~~~~

download the source package from
-`pypi <http://pypi.python.org/pypi/scikit-learn/>`_,
+`pypi <https://pypi.python.org/pypi/scikit-learn>`_,
, unpack the sources and cd into the source directory.

this packages uses distutils, which is the default way of installing
@@ -163,12 +163,12 @@ or alternatively (also from within the scikit-learn source folder)::
windows
-------

-first, you need to install `numpy <http://numpy.scipy.org/>`_ and `scipy
+first, you need to install `numpy <http://www.numpy.org/>`_ and `scipy
<http://www.scipy.org/>`_ from their own official installers.

wheel packages (.whl files) for scikit-learn from `pypi
<https://pypi.python.org/pypi/scikit-learn/>`_ can be installed with the `pip
-<http://pip.readthedocs.org/en/latest/installing.html>`_ utility.
+<https://pip.readthedocs.org/en/stable/installing/>`_ utility.
open a console and type the following to install or upgrade scikit-learn to the
latest stable release::

@@ -280,9 +280,7 @@ path environment variable.

for 32-bit python it is possible use the standalone installers for
`microsoft visual c++ express 2008 <http://go.microsoft.com/?linkid=7729279>`_
-for python 2 or
-`microsoft visual c++ express 2010 <http://go.microsoft.com/?linkid=9709949>`_
-or python 3.
+for python 2 or microsoft visual c++ express 2010 for python 3.

once installed you should be able to build scikit-learn without any
particular configuration by running the following command in the scikit-learn
24 changes: 12 additions & 12 deletions doc/developers/contributing.rst
@@ -7,7 +7,7 @@ Contributing
This project is a community effort, and everyone is welcome to
contribute.

-The project is hosted on http://github.com/scikit-learn/scikit-learn
+The project is hosted on https://github.com/scikit-learn/scikit-learn

Scikit-learn is somewhat :ref:`selective <selectiveness>` when it comes to
adding new algorithms, and the best way to contribute and to help the project
@@ -19,7 +19,7 @@ Submitting a bug report

In case you experience issues using this package, do not hesitate to submit a
ticket to the
-`Bug Tracker <http://github.com/scikit-learn/scikit-learn/issues>`_. You are
+`Bug Tracker <https://github.com/scikit-learn/scikit-learn/issues>`_. You are
also welcome to post feature requests or pull requests.


@@ -29,7 +29,7 @@ Retrieving the latest code
==========================

We use `Git <http://git-scm.com/>`_ for version control and
-`GitHub <http://github.com/>`_ for hosting our main repository.
+`GitHub <https://github.com/>`_ for hosting our main repository.

You can check out the latest sources with the command::

@@ -82,14 +82,14 @@ How to contribute
-----------------

The preferred way to contribute to scikit-learn is to fork the `main
-repository <http://github.com/scikit-learn/scikit-learn/>`__ on GitHub,
+repository <https://github.com/scikit-learn/scikit-learn/>`__ on GitHub,
then submit a "pull request" (PR):

-1. `Create an account <https://github.com/signup/free>`_ on
+1. `Create an account <https://github.com/join>`_ on
GitHub if you do not already have one.

2. Fork the `project repository
-<http://github.com/scikit-learn/scikit-learn>`__: click on the 'Fork'
+<https://github.com/scikit-learn/scikit-learn>`__: click on the 'Fork'
button near the top of the page. This creates a copy of the code under your
account on the GitHub server.

@@ -237,8 +237,8 @@ and are viewable in a web browser. See the README file in the doc/ directory
for more information.

For building the documentation, you will need `sphinx
-<http://sphinx.pocoo.org/>`_,
-`matplotlib <http://matplotlib.sourceforge.net/>`_ and
+<http://sphinx-doc.org/>`_,
+`matplotlib <http://matplotlib.org>`_ and
`pillow <http://pillow.readthedocs.org/en/latest/>`_.

**When you are writing documentation**, it is important to keep a good
@@ -297,7 +297,7 @@ Finally, follow the formatting rules below to make it consistently good:
Testing and improving test coverage
------------------------------------

-High-quality `unit testing <http://en.wikipedia.org/wiki/Unit_testing>`_
+High-quality `unit testing <https://en.wikipedia.org/wiki/Unit_testing>`_
is a corner-stone of the scikit-learn development process. For this
purpose, we use the `nose <http://nose.readthedocs.org/en/latest/>`_
package. The tests are functions appropriately named, located in `tests`
@@ -313,7 +313,7 @@ We expect code coverage of new features to be at least around 90%.
.. note:: **Workflow to improve test coverage**

To test code coverage, you need to install the `coverage
-<http://pypi.python.org/pypi/coverage>`_ package in addition to nose.
+<https://pypi.python.org/pypi/coverage>`_ package in addition to nose.

1. Run 'make test-coverage'. The output lists for each file the line
numbers that are not tested.
@@ -392,7 +392,7 @@ the review easier so new code can be integrated in less time.

Uniformly formatted code makes it easier to share code ownership. The
scikit-learn project tries to closely follow the official Python guidelines
-detailed in `PEP8 <http://www.python.org/dev/peps/pep-0008/>`_ that
+detailed in `PEP8 <https://www.python.org/dev/peps/pep-0008>`_ that
detail how code should be formatted and indented. Please read it and
follow it.

@@ -414,7 +414,7 @@ In addition, we add the following guidelines:

* **Please don't use** ``import *`` **in any case**. It is considered harmful
by the `official Python recommendations
-<http://docs.python.org/howto/doanddont.html#from-module-import>`_.
+<https://docs.python.org/2/howto/doanddont.html#from-module-import>`_.
It makes the code harder to read as the origin of symbols is no
longer explicitly referenced, but most important, it prevents
using a static analysis tool like `pyflakes
14 changes: 7 additions & 7 deletions doc/developers/performance.rst
@@ -40,7 +40,7 @@ this means trying to **replace any nested for loops by calls to equivalent
Numpy array methods**. The goal is to avoid the CPU wasting time in the
Python interpreter rather than crunching numbers to fit your statistical
model. It's generally a good idea to consider NumPy and SciPy performance tips:
-http://wiki.scipy.org/PerformanceTips
+http://scipy.github.io/old-wiki/pages/PerformanceTips

Sometimes however an algorithm cannot be expressed efficiently in simple
vectorized Numpy code. In this case, the recommended strategy is the
@@ -304,7 +304,7 @@ Memory usage profiling
======================

You can analyze in detail the memory usage of any Python code with the help of
-`memory_profiler <http://pypi.python.org/pypi/memory_profiler>`_. First,
+`memory_profiler <https://pypi.python.org/pypi/memory_profiler>`_. First,
install the latest version::

$ pip install -U memory_profiler
@@ -401,7 +401,7 @@ project.
TODO: html report, type declarations, bound checks, division by zero checks,
memory alignment, direct blas calls...

-- http://www.euroscipy.org/file/3696?vid=download
+- https://www.youtube.com/watch?v=gMvkiQ-gOW8
- http://conference.scipy.org/proceedings/SciPy2009/paper_1/
- http://conference.scipy.org/proceedings/SciPy2009/paper_2/

@@ -421,16 +421,16 @@ Using yep and google-perftools

Easy profiling without special compilation options use yep:

-- http://pypi.python.org/pypi/yep
-- http://fseoane.net/blog/2011/a-profiler-for-python-extensions/
+- https://pypi.python.org/pypi/yep
+- http://fa.bianp.net/blog/2011/a-profiler-for-python-extensions

.. note::

google-perftools provides a nice 'line by line' report mode that
can be triggered with the ``--lines`` option. However this
does not seem to work correctly at the time of writing. This
issue can be tracked on the `project issue tracker
-<https://code.google.com/p/google-perftools/issues/detail?id=326>`_.
+<https://github.com/gperftools/gperftools>`_.



@@ -460,7 +460,7 @@ TODO: give a simple teaser example here.

Checkout the official joblib documentation:

-- http://packages.python.org/joblib/
+- https://pythonhosted.org/joblib


.. _warm-restarts:
2 changes: 1 addition & 1 deletion doc/developers/utilities.rst
@@ -93,7 +93,7 @@ Efficient Linear Algebra & Array Operations
by directly calling the BLAS
``nrm2`` function. This is more stable than ``scipy.linalg.norm``. See
`Fabian's blog post
-<http://fseoane.net/blog/2011/computing-the-vector-norm/>`_ for a discussion.
+<http://fa.bianp.net/blog/2011/computing-the-vector-norm>`_ for a discussion.

- :func:`extmath.fast_logdet`: efficiently compute the log of the determinant
of a matrix.
9 changes: 4 additions & 5 deletions doc/install.rst
@@ -51,8 +51,8 @@ Canopy and Anaconda for all supported platforms
-----------------------------------------------

`Canopy
-<http://www.enthought.com/products/canopy>`_ and `Anaconda
-<https://store.continuum.io/cshop/anaconda/>`_ both ship a recent
+<https://www.enthought.com/products/canopy>`_ and `Anaconda
+<https://www.continuum.io/downloads>`_ both ship a recent
version of scikit-learn, in addition to a large set of scientific python
library for Windows, Mac OSX and Linux.

@@ -83,9 +83,8 @@ Anaconda offers scikit-learn as part of its free distribution.
Python(x,y) for Windows
-----------------------

-The `Python(x,y) <https://code.google.com/p/pythonxy/>`_ project distributes
-scikit-learn as an additional plugin, which can be found in the `Additional
-plugins <http://code.google.com/p/pythonxy/wiki/AdditionalPlugins>`_ page.
+The `Python(x,y) <https://python-xy.github.io>`_ project distributes
+scikit-learn as an additional plugin.


For installation instructions for particular operating systems or for compiling
4 changes: 2 additions & 2 deletions doc/modules/clustering.rst
@@ -1010,7 +1010,7 @@ random labelings by defining the adjusted Rand index as follows:
.. topic:: References

* `Comparing Partitions
-<http://www.springerlink.com/content/x64124718341j1j0/>`_
+<http://link.springer.com/article/10.1007%2FBF01908075>`_
L. Hubert and P. Arabie, Journal of Classification 1985

* `Wikipedia entry for the adjusted Rand index
@@ -1170,7 +1170,7 @@ calculated using a similar form to that of the adjusted Rand index:
* Vinh, Epps, and Bailey, (2009). "Information theoretic measures
for clusterings comparison". Proceedings of the 26th Annual International
Conference on Machine Learning - ICML '09.
-`doi:10.1145/1553374.1553511 <http://dx.doi.org/10.1145/1553374.1553511>`_.
+`doi:10.1145/1553374.1553511 <https://dl.acm.org/citation.cfm?doid=1553374.1553511>`_.
ISBN 9781605585161.

* Vinh, Epps, and Bailey, (2010). Information Theoretic Measures for
6 changes: 3 additions & 3 deletions doc/modules/computational_performance.rst
@@ -241,8 +241,8 @@ Linear algebra libraries
As scikit-learn relies heavily on Numpy/Scipy and linear algebra in general it
makes sense to take explicit care of the versions of these libraries.
Basically, you ought to make sure that Numpy is built using an optimized `BLAS
-<http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms>`_ /
-`LAPACK <http://en.wikipedia.org/wiki/LAPACK>`_ library.
+<https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms>`_ /
+`LAPACK <https://en.wikipedia.org/wiki/LAPACK>`_ library.

Not all models benefit from optimized BLAS and Lapack implementations. For
instance models based on (randomized) decision trees typically do not rely on
@@ -308,7 +308,7 @@ compromise between model compactness and prediction power. One can also
further tune the ``l1_ratio`` parameter (in combination with the
regularization strength ``alpha``) to control this tradeoff.

-A typical `benchmark <https://github.com/scikit-learn/scikit-learn/tree/master/benchmarks/bench_sparsify.py>`_
+A typical `benchmark <https://github.com/scikit-learn/scikit-learn/blob/master/benchmarks/bench_sparsify.py>`_
on synthetic data yields a >30% decrease in latency when both the model and
input are sparse (with 0.000024 and 0.027400 non-zero coefficients ratio
respectively). Your mileage may vary depending on the sparsity and size of
6 changes: 3 additions & 3 deletions doc/modules/cross_validation.rst
@@ -66,7 +66,7 @@ and the results can depend on a particular random choice for the pair of
(train, validation) sets.

A solution to this problem is a procedure called
-`cross-validation <http://en.wikipedia.org/wiki/Cross-validation_(statistics)>`_
+`cross-validation <https://en.wikipedia.org/wiki/Cross-validation_(statistics)>`_
(CV for short).
A test set should still be held out for final evaluation,
but the validation set is no longer needed when doing CV.
@@ -337,11 +337,11 @@ fold cross validation should be preferred to LOO.

* `<http://www.faqs.org/faqs/ai-faq/neural-nets/part3/section-12.html>`_;
* T. Hastie, R. Tibshirani, J. Friedman, `The Elements of Statistical Learning
-<http://www-stat.stanford.edu/~tibs/ElemStatLearn>`_, Springer 2009;
+<http://statweb.stanford.edu/~tibs/ElemStatLearn>`_, Springer 2009
* L. Breiman, P. Spector `Submodel selection and evaluation in regression: The X-random case
<http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/197.pdf>`_, International Statistical Review 1992;
* R. Kohavi, `A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection
-<http://www.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf>`_, Intl. Jnt. Conf. AI;
+<http://web.cs.iastate.edu/~jtian/cs573/Papers/Kohavi-IJCAI-95.pdf>`_, Intl. Jnt. Conf. AI
* R. Bharat Rao, G. Fung, R. Rosales, `On the Dangers of Cross-Validation. An Experimental Evaluation
<http://www.siam.org/proceedings/datamining/2008/dm08_54_Rao.pdf>`_, SIAM 2008;
* G. James, D. Witten, T. Hastie, R Tibshirani, `An Introduction to
2 changes: 1 addition & 1 deletion doc/modules/decomposition.rst
@@ -732,7 +732,7 @@ and the regularized objective function is:
.. topic:: References:

* `"Learning the parts of objects by non-negative matrix factorization"
-<http://hebb.mit.edu/people/seung/papers/ls-lponm-99.pdf>`_
+<http://www.columbia.edu/~jwp2128/Teaching/W4721/papers/nmf_nature.pdf>`_
D. Lee, S. Seung, 1999

* `"Non-negative Matrix Factorization with Sparseness Constraints"
2 changes: 1 addition & 1 deletion doc/modules/density.rst
@@ -139,7 +139,7 @@ The kernel density estimator can be used with any of the valid distance
metrics (see :class:`sklearn.neighbors.DistanceMetric` for a list of available metrics), though
the results are properly normalized only for the Euclidean metric. One
particularly useful metric is the
-`Haversine distance <http://en.wikipedia.org/wiki/Haversine_formula>`_
+`Haversine distance <https://en.wikipedia.org/wiki/Haversine_formula>`_
which measures the angular distance between points on a sphere. Here
is an example of using a kernel density estimate for a visualization
of geospatial data, in this case the distribution of observations of two
2 changes: 1 addition & 1 deletion doc/modules/ensemble.rst
@@ -414,7 +414,7 @@ decision trees).
Gradient Tree Boosting
======================

-`Gradient Tree Boosting <http://en.wikipedia.org/wiki/Gradient_boosting>`_
+`Gradient Tree Boosting <https://en.wikipedia.org/wiki/Gradient_boosting>`_
or Gradient Boosted Regression Trees (GBRT) is a generalization
of boosting to arbitrary
differentiable loss functions. GBRT is an accurate and effective