Commit
FIX: Fix docs (mne-tools#3899)
larsoner authored and agramfort committed Jan 12, 2017
1 parent e0faad3 commit 10f6960
Showing 55 changed files with 517 additions and 414 deletions.
4 changes: 4 additions & 0 deletions doc/conf.py
@@ -78,6 +78,10 @@
copyright = u'2012-%s, MNE Developers. Last updated on %s' % (td.year,
td.isoformat())

nitpicky = True
needs_sphinx = '1.5'
suppress_warnings = ['image.nonlocal_uri'] # we intentionally link outside

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
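
For context, a hedged sketch of how these options might sit in a Sphinx ``conf.py``; only the three settings above come from this commit, and the ``nitpick_ignore`` line is a standard Sphinx option shown purely for illustration::

    needs_sphinx = '1.5'    # refuse to build with an older Sphinx
    nitpicky = True         # warn about every cross-reference that fails to resolve
    suppress_warnings = ['image.nonlocal_uri']  # external image links are intentional

    # Known-unresolvable targets could be whitelisted like this (illustrative only):
    nitpick_ignore = [('py:class', 'optional')]
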
8 changes: 4 additions & 4 deletions doc/configure_git.rst
@@ -2,7 +2,7 @@

.. include:: links.inc

.. _using_github::
.. _using_github:

Using GitHub to make a Pull Request
-----------------------------------
@@ -41,7 +41,7 @@ Creating a fork

You need to do this only once for each package you want to contribute to. The
instructions here are very similar to the instructions at
https://help.github.com/fork-a-repo/ |emdash| please see that page for more
https://help.github.com/fork-a-repo/ -- please see that page for more
details. We're repeating some of it here just to give the specifics for the
mne-python_ project, and to suggest some default names.

@@ -52,7 +52,7 @@ Set up and configure a GitHub account

If you don't have a GitHub account, go to the GitHub page, and make one.

You then need to configure your account to allow write access |emdash| see
You then need to configure your account to allow write access -- see
the *Generating SSH keys* help on `GitHub Help`_.

Create your own fork of a repository
@@ -172,7 +172,7 @@ sections.
* When you are starting a new set of changes, fetch any changes from the
trunk, and start a new *feature branch* from that.

* Make a new branch for each separable set of changes |emdash| "one task, one
* Make a new branch for each separable set of changes -- "one task, one
branch" (`ipython git workflow`_).

* Name your branch for the purpose of the changes - e.g.
10 changes: 5 additions & 5 deletions doc/contributing.rst
@@ -2,7 +2,7 @@

.. include:: links.inc

.. _contributie_to_mne:
.. _contribute_to_mne:

Contribute to MNE
=================
@@ -36,9 +36,9 @@ Code guidelines
* Use `numpy style`_ for docstrings. Follow existing examples for simplest
guidance.

* New functionality must be covered by tests. For example, a :class:`Evoked`
method in ``mne/evoked.py`` should have a corresponding test in
``mne/tests/test_evoked.py``.
* New functionality must be covered by tests. For example, a
:class:`mne.Evoked` method in ``mne/evoked.py`` should have a corresponding
test in ``mne/tests/test_evoked.py``.

* Changes must be accompanied by updated documentation, including
:doc:`doc/whats_new.rst <whats_new>` and
@@ -102,7 +102,7 @@ Style
^^^^^
* Use single quotes whenever possible.
* Prefer generator or list comprehensions over ``filter``, ``map`` and other functional idioms.
* Use explicit functional constructors for builtin containers to improve readability (e.g., :class:`list()`, :class:`dict()`).
* Use explicit functional constructors for builtin containers to improve readability (e.g., ``list()``, ``dict``).
* Avoid nested functions or class methods if possible -- use private functions instead.
* Avoid ``**kwargs`` and ``*args`` in function signatures.
* Add brief docstrings to simple private functions and complete docstrings for complex ones.
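
A small, purely illustrative sketch (hypothetical helper, not from the codebase) touching a few of these points: single quotes, a comprehension instead of ``map``, and a brief docstring on a private function::

    def _scale_channels(data, factor=1e6):
        """Scale each channel of a list-of-lists data array."""
        # comprehension preferred over map(); strings use single quotes
        return [[factor * value for value in channel] for channel in data]

    scaled = _scale_channels([[1e-6, 2e-6], [3e-6, 4e-6]])
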
2 changes: 1 addition & 1 deletion doc/faq.rst
@@ -100,7 +100,7 @@ able to read your data in the not-too-distant future. For details, see:

MNE-Python is designed to provide its own file saving formats
(often based on the FIF standard) for its objects usually via a
``save`` method or ``write_*`` method, e.g. :func:`mne.Raw.save`,
``save`` method or ``write_*`` method, e.g. :func:`mne.io.Raw.save`,
:func:`mne.Epochs.save`, :func:`mne.write_evokeds`,
:func:`mne.SourceEstimate.save`. If you have some data that you
want to save but can't figure out how, shoot an email to the
4 changes: 2 additions & 2 deletions doc/getting_started.rst
@@ -7,6 +7,8 @@
Get started
============

.. _what_can_you_do:

Installation
------------

@@ -46,8 +48,6 @@ the :ref:`MNE<install_python_and_mne_python>` and

.. container:: col-md-8

.. _what_can_you_do:

.. raw:: html

<h2>What can you do with MNE?</h2>
4 changes: 2 additions & 2 deletions doc/index.rst
@@ -90,8 +90,8 @@

<h2>More help</h2>

- `MNE mailing list`_ for analysis talk
- `GitHub <https://github.com/mne-tools/mne-python/issues/>`_ for
- `Mailing list <MNE mailing list>`_ for analysis talk
- `GitHub issues <https://github.com/mne-tools/mne-python/issues/>`_ for
requests and bug reports
- `Gitter <https://gitter.im/mne-tools/mne-python>`_ to chat with devs

60 changes: 37 additions & 23 deletions doc/manual/datasets_index.rst
@@ -12,17 +12,16 @@ use the ``data_path`` (fetches full dataset) or the ``load_data`` (fetches datas

Sample
======
:ref:`ch_sample_data` is recorded using a 306-channel Neuromag vectorview system.
:func:`mne.datasets.sample.data_path()`

:ref:`ch_sample_data` is recorded using a 306-channel Neuromag vectorview system.

In this experiment, checkerboard patterns were presented to the subject
into the left and right visual field, interspersed by tones to the
left or right ear. The interval between the stimuli was 750 ms. Occasionally
a smiley face was presented at the center of the visual field.
The subject was asked to press a key with the right index finger
as soon as possible after the appearance of the face. To fetch this dataset, do::

from mne.datasets import sample
data_path = sample.data_path() # returns the folder in which the data is locally stored.
as soon as possible after the appearance of the face.

Once the ``data_path`` is known, its contents can be examined using :ref:`IO functions <ch_convert>`.
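
For reference, the fetching step looks like the snippet removed from the prose above (a sketch; the exact file layout below is an assumption about the sample dataset)::

    from mne.datasets import sample

    data_path = sample.data_path()  # downloads the data if needed, returns the local folder
    raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'  # assumed path within the dataset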

@@ -35,37 +34,30 @@ IO for the `ctf` format as well in addition to the C converter utilities. Please

Auditory
^^^^^^^^
To access the data, use the following Python commands::
from mne.datasets.brainstorm import bst_raw
data_path = bst_raw.data_path()
:func:`mne.datasets.brainstorm.bst_raw.data_path()`.

Further details about the data can be found at the `auditory dataset tutorial`_ on the Brainstorm website.
Details about the data can be found at the Brainstorm `auditory dataset tutorial`_.
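
In code, that is the same pattern shown in the removed lines above; a sketch (the first download may ask you to accept the Brainstorm license terms)::

    from mne.datasets.brainstorm import bst_raw

    data_path = bst_raw.data_path()  # fetches the auditory dataset and returns its local folder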

.. topic:: Examples

* :ref:`Brainstorm auditory dataset tutorial<sphx_glr_auto_examples_datasets_plot_brainstorm_data.py>`: Partially replicates the original Brainstorm tutorial.

Resting state
^^^^^^^^^^^^^
To access the data, use the Python command::
:func:`mne.datasets.brainstorm.bst_resting.data_path()`

from mne.datasets.brainstorm import bst_resting
data_path = bst_resting.data_path()

Further details can be found at the `resting state dataset tutorial`_ on the Brainstorm website.
Details can be found at the Brainstorm `resting state dataset tutorial`_.

Median nerve
^^^^^^^^^^^^
To access the data, use the Python command::

from mne.datasets.brainstorm import bst_raw
data_path = bst_raw.data_path()
:func:`mne.datasets.brainstorm.bst_raw.data_path()`

Further details can be found at the `median nerve dataset tutorial`_ on the Brainstorm website.
Details can be found at the Brainstorm `median nerve dataset tutorial`_.

MEGSIM
======
:func:`mne.datasets.megsim.load_data()`

This dataset contains experimental and simulated MEG data. To load data from this dataset, do::

from mne.io import Raw
@@ -81,17 +73,17 @@ Detailed description of the dataset can be found in the related publication [1]_

SPM faces
=========
The `SPM faces dataset`_ contains EEG, MEG and fMRI recordings on face perception. To access this dataset, do::
:func:`mne.datasets.spm_face.data_path()`

from mne.datasets import spm_face
data_path = spm_face.data_path()
The `SPM faces dataset`_ contains EEG, MEG and fMRI recordings on face perception.

.. topic:: Examples

* :ref:`sphx_glr_auto_examples_datasets_plot_spm_faces_dataset.py` Full pipeline including artifact removal, epochs averaging, forward model computation and source reconstruction using dSPM on the contrast: "faces - scrambled".

EEGBCI motor imagery
====================
:func:`mne.datasets.eegbci.load_data()`

The EEGBCI dataset is documented in [2]_. The data set is available at PhysioNet [3]_.
The dataset contains 64-channel EEG recordings from 109 subjects and 14 runs on each subject in EDF+ format.
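
A minimal sketch of fetching and reading a few runs for one subject, assuming the usual ``load_data(subject, runs)`` call and the run numbering used in the MNE motor-imagery examples::

    from mne.io import concatenate_raws, read_raw_edf
    from mne.datasets import eegbci

    subject, runs = 1, [6, 10, 14]            # assumed: motor imagery runs (hands vs feet)
    fnames = eegbci.load_data(subject, runs)  # downloads the EDF+ files, returns their paths
    raw = concatenate_raws([read_raw_edf(f, preload=True) for f in fnames])
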
@@ -116,6 +108,28 @@ to discuss the possibility to add more publicly available datasets.
.. _median nerve dataset tutorial: http://neuroimage.usc.edu/brainstorm/DatasetMedianNerveCtf
.. _SPM faces dataset: http://www.fil.ion.ucl.ac.uk/spm/data/mmfaces/

Somatosensory
=============
:func:`mne.datasets.somato.data_path()`

This dataset contains somatosensory data with event-related synchronizations
(ERS) and desynchronizations (ERD).

.. topic:: Examples

* :ref:`sphx_glr_auto_tutorials_plot_sensors_time_frequency.py`

Multimodal
==========
:func:`mne.datasets.multimodal.data_path()`

This dataset contains a single subject recorded at Otaniemi (Aalto University)
with auditory, visual, and somatosensory stimuli.

.. topic:: Examples

* :ref:`sphx_glr_auto_examples_io_plot_elekta_epochs.py`

References
==========

5 changes: 3 additions & 2 deletions doc/manual/decoding.rst
@@ -48,7 +48,7 @@ This is a technique to analyze multichannel data based on recordings from two cl
.. math:: x_{CSP}(t) = W^{T}x(t)
:label: csp

where each column of :math:`W \in R^{C\times C}` is a spatial filter and each row of :math:`x_{CSP}` is a CSP component. The matrix :math:`W` is also called the de-mixing matrix in other contexts. Let :math:`\Sigma^{+} \in R^{C\times C}` and :math:`\Sigma^{-} \in R^{C\times C}` be the estimates of the covariance matrices of the two conditions.
CSP analysis is given by the simultaneous diagonalization of the two covariance matrices

.. math:: W^{T}\Sigma^{+}W = \lambda^{+}
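
In MNE this decomposition is exposed as :class:`mne.decoding.CSP`; a minimal sketch on toy two-class data (array shapes and parameter values are illustrative)::

    import numpy as np
    from mne.decoding import CSP

    rng = np.random.RandomState(42)
    X = rng.randn(20, 10, 50)        # 20 epochs, 10 channels, 50 time points
    y = np.array([0, 1] * 10)        # two conditions

    csp = CSP(n_components=4)        # keep the 4 most discriminative spatial filters
    X_csp = csp.fit_transform(X, y)  # (20, 4): one power feature per component and epoch
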
@@ -153,7 +153,8 @@ To generate this plot, you need to initialize a GAT object and then use the meth

.. topic:: Examples:

* :ref:`sphx_glr_auto_examples_decoding_plot_decoding_time_generalization.py`
* :ref:`sphx_glr_auto_tutorials_plot_sensors_decoding.py`
* :ref:`sphx_glr_auto_examples_decoding_plot_decoding_time_generalization_conditions.py`
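
The workflow described above (initialize a GAT object, then fit, score, and plot) looked roughly like this in the GAT API of that era; a sketch that assumes ``epochs`` is an existing two-condition :class:`mne.Epochs` object::

    from mne.decoding import GeneralizationAcrossTime

    gat = GeneralizationAcrossTime(predict_mode='cross-validation', n_jobs=1)
    gat.fit(epochs)      # one classifier per training time point
    gat.score(epochs)    # evaluate every classifier at every testing time point
    gat.plot()           # temporal generalization matrix
    gat.plot_diagonal()  # decoding performance over time (train time == test time)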

Source-space decoding
=====================