STY: Giant whitespace cleanup.
Now is as good a time as any with open PR's at a low.
charris committed Aug 18, 2013
1 parent 13b0b27 commit 8ddb0ce
Showing 153 changed files with 1,464 additions and 1,545 deletions.
2 changes: 1 addition & 1 deletion BENTO_BUILD.txt
Original file line number Diff line number Diff line change
@@ -3,7 +3,7 @@ No-frill version:
* Clone bento::

git clone git://github.com/cournape/Bento.git bento-git

* Bootstrap bento::

cd bento-git && python bootstrap.py
11 changes: 5 additions & 6 deletions DEV_README.txt
@@ -1,19 +1,18 @@
Thank you for your willingness to help make NumPy the best array system
available.

We have a few simple rules:

* try hard to keep the Git repository in a buildable state and to not
indiscriminately muck with what others have contributed.

* Simple changes (including bug fixes) and obvious improvements are
always welcome. Changes that fundamentally change behavior need
discussion on [email protected] before anything is
done.

* Please add meaningful comments when you check changes in. These
comments form the basis of the change-log.

* Add unit tests to exercise new code, and regression tests
whenever you fix a bug.

4 changes: 2 additions & 2 deletions README.txt
@@ -1,10 +1,10 @@
NumPy is the fundamental package needed for scientific computing with Python.
This package contains:

* a powerful N-dimensional array object
* sophisticated (broadcasting) functions
* tools for integrating C/C++ and Fortran code
* useful linear algebra, Fourier transform, and random number capabilities.

It derives from the old Numeric code base and can be used as a replacement for Numeric. It also adds the features introduced by numarray and can be used to replace numarray.
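As a quick, illustrative sketch of the array object and broadcasting features listed above (the values are arbitrary):

```python
import numpy as np

# N-dimensional array object
a = np.arange(6).reshape(2, 3)

# Broadcasting: the scalar 10 is stretched across every element
b = a + 10

# Broadcasting a 1-d array across each row of the 2-d array
c = a * np.array([1, 0, 1])

print(b)
print(c)
```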

6 changes: 3 additions & 3 deletions doc/CAPI.rst.txt
@@ -15,7 +15,7 @@ of the API) that will need to be changed:

* If you used any of the function pointers in the ``PyArray_Descr``
structure you will have to modify your usage of those. First,
the pointers are all under the member named ``f``. So ``descr->cast``
is now ``descr->f->cast``. In addition, the
casting functions have eliminated the strides argument (use
``PyArray_CastTo`` if you need strided casting). All functions have
@@ -238,7 +238,7 @@ segfaults may result.
There are 6 (binary) flags that describe the memory area used by the
data buffer. These constants are defined in ``arrayobject.h`` and
determine the bit-position of the flag. Python exposes a nice attribute-
based interface as well as a dictionary-like interface for getting
(and, if appropriate, setting) these flags.

Memory areas of all kinds can be pointed to by an ndarray, necessitating
@@ -254,7 +254,7 @@ PyArray_FromAny function.
``NPY_FORTRAN``
True if the array is (Fortran-style) contiguous in memory.

Notice that contiguous 1-d arrays are always both ``NPY_FORTRAN`` contiguous
and C contiguous. Both of these flags can be checked and are convenience
flags only, since whether or not an array is ``NPY_CONTIGUOUS`` or ``NPY_FORTRAN``
can be determined by the ``strides``, ``dimensions``, and ``itemsize``
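From Python, the same information is exposed through the ``ndarray.flags`` attribute; a small sketch of the 1-d case described above:

```python
import numpy as np

# A contiguous 1-d array is both C- and Fortran-contiguous.
a = np.arange(5)
print(a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS'])

# For a 2-d C-ordered array the two flags differ.
b = np.ones((3, 4))
print(b.flags['C_CONTIGUOUS'], b.flags['F_CONTIGUOUS'])
```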
44 changes: 22 additions & 22 deletions doc/DISTUTILS.rst.txt
@@ -29,8 +29,8 @@ Requirements for SciPy packages

SciPy consists of Python packages, called SciPy packages, that are
available to Python users via the ``scipy`` namespace. Each SciPy package
may contain other SciPy packages. And so on. Therefore, the SciPy
directory tree is a tree of packages with arbitrary depth and width.
Any SciPy package may depend on NumPy packages but the dependence on other
SciPy packages should be kept minimal or zero.

@@ -46,12 +46,12 @@ Their contents are described below.
The ``setup.py`` file
'''''''''''''''''''''

In order to add a Python package to SciPy, its build script (``setup.py``)
must meet certain requirements. The most important requirement is that the
package define a ``configuration(parent_package='',top_path=None)`` function
which returns a dictionary suitable for passing to
``numpy.distutils.core.setup(..)``. To simplify the construction of
this dictionary, ``numpy.distutils.misc_util`` provides the
``Configuration`` class, described below.

SciPy pure Python package example
@@ -72,13 +72,13 @@ Below is an example of a minimal ``setup.py`` file for a pure SciPy package::

The arguments of the ``configuration`` function specify the name of the
parent SciPy package (``parent_package``) and the directory location
of the main ``setup.py`` script (``top_path``). These arguments,
along with the name of the current package, should be passed to the
``Configuration`` constructor.

The ``Configuration`` constructor has a fourth optional argument,
``package_path``, that can be used when package files are located in
a different location than the directory of the ``setup.py`` file.
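Putting the preceding requirements together, a minimal ``setup.py`` might look like the following sketch (the package name ``mysub`` is an invented placeholder, not the canonical template):

```python
# Hypothetical minimal setup.py for a pure Python SciPy subpackage.
def configuration(parent_package='', top_path=None):
    from numpy.distutils.misc_util import Configuration
    # 'mysub' is a placeholder package name.
    config = Configuration('mysub', parent_package, top_path)
    return config

if __name__ == '__main__':
    from numpy.distutils.core import setup
    # todict() yields the dictionary expected by numpy.distutils.core.setup
    setup(**configuration(top_path='').todict())
```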

Remaining ``Configuration`` arguments are all keyword arguments that will
be used to initialize attributes of ``Configuration``
@@ -159,12 +159,12 @@ in writing setup scripts:
sun.dat
bar/
car.dat
can.dat

Path to data files can be a function taking no arguments and
returning path(s) to data files -- this is useful when data files
are generated while building the package. (XXX: explain exactly
when this function is called)

+ ``config.add_data_dir(data_path)`` --- add directory ``data_path``
recursively to ``data_files``. The whole directory tree starting at
@@ -174,14 +174,14 @@ in writing setup scripts:
directory and the second element specifies the path to data directory.
By default, the data directory is copied under the package installation
directory under the basename of ``data_path``. For example,

::

config.add_data_dir('fun') # fun/ contains foo.dat bar/car.dat
config.add_data_dir(('sun','fun'))
config.add_data_dir(('gun','/full/path/to/fun'))

will install data files to the following locations

::

@@ -204,7 +204,7 @@ in writing setup scripts:
modules of the current package.

+ ``config.add_headers(*files)`` --- prepend ``files`` to ``headers``
list. By default, headers will be installed under
``<prefix>/include/pythonX.X/<config.name.replace('.','/')>/``
directory. If a ``files`` item is a tuple then its first element
specifies the installation suffix relative to
@@ -216,7 +216,7 @@ in writing setup scripts:
list. Scripts will be installed under ``<prefix>/bin/`` directory.

+ ``config.add_extension(name,sources,*kw)`` --- create and add an
``Extension`` instance to ``ext_modules`` list. The first argument
``name`` defines the name of the extension module that will be
installed under ``config.name`` package. The second argument is
a list of sources. The ``add_extension`` method also takes keyword
@@ -269,10 +269,10 @@ in writing setup scripts:
more information on arguments.

+ ``config.have_f77c()`` --- return True if Fortran 77 compiler is
available (read: a simple Fortran 77 code compiled successfully).

+ ``config.have_f90c()`` --- return True if Fortran 90 compiler is
available (read: a simple Fortran 90 code compiled successfully).

+ ``config.get_version()`` --- return version string of the current package,
``None`` if version information could not be detected. This method
@@ -405,7 +405,7 @@ The header of a typical SciPy ``__init__.py`` is::
"""
Package docstring, typically with a brief description and function listing.
"""

# py3k related imports
from __future__ import division, print_function, absolute_import

@@ -414,7 +414,7 @@ The header of a typical SciPy ``__init__.py`` is::
...

__all__ = [s for s in dir() if not s.startswith('_')]

from numpy.testing import Tester
test = Tester().test
bench = Tester().bench
@@ -441,7 +441,7 @@ will compile the ``library`` sources without optimization flags.
It's recommended to specify only those config_fc options in such a way
that are compiler independent.

Getting extra Fortran 77 compiler options from source
-----------------------------------------------------

Some old Fortran codes need special compiler options in order to
@@ -452,7 +452,7 @@ pattern::
CF77FLAGS(<fcompiler type>) = <fcompiler f77flags>

in the first 20 lines of the source and use the ``f77flags`` for
specified type of the fcompiler (the first character ``C`` is optional).

TODO: This feature can be easily extended for Fortran 90 codes as
well. Let us know if you would need such a feature.
32 changes: 16 additions & 16 deletions doc/HOWTO_DOCUMENT.rst.txt
@@ -142,15 +142,15 @@ The sections of the docstring are:
2. **Deprecation warning**

A section (use if applicable) to warn users that the object is deprecated.
Section contents should include:

* In what Numpy version the object was deprecated, and when it will be
removed.

* Reason for deprecation if this is useful information (e.g., object
is superseded, duplicates functionality found elsewhere, etc.).

* New recommended way of obtaining the same functionality.

This section should use the note Sphinx directive instead of an
underlined section header.
@@ -182,7 +182,7 @@ The sections of the docstring are:
x : type
Description of parameter `x`.

Enclose variables in single backticks. The colon must be preceded
by a space, or omitted if the type is absent.

For the parameter types, be as precise as possible. Below are a
@@ -195,7 +195,7 @@ The sections of the docstring are:
filename : str
copy : bool
dtype : data-type
iterable : iterable object
shape : int or tuple of int
files : list of str
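For illustration, a short hypothetical docstring following these conventions (the function and its parameters are invented):

```python
def scale(x, factor=2.0):
    """
    Multiply the input by a constant factor.

    Parameters
    ----------
    x : ndarray or float
        Value(s) to scale.
    factor : float, optional
        Scale factor applied to `x`.

    Returns
    -------
    scaled : ndarray or float
        `x` multiplied by `factor`.
    """
    return x * factor

print(scale(3.0))
```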

@@ -370,7 +370,7 @@ The sections of the docstring are:
Referencing sources of a temporary nature, like web pages, is
discouraged. References are meant to augment the docstring, but
should not be required to understand it. References are numbered, starting
from one, in the order in which they are cited.

11. **Examples**

@@ -397,7 +397,7 @@ The sections of the docstring are:

>>> import numpy.random
>>> np.random.rand(2)
array([ 0.35773152, 0.38568979]) #random

You can run examples as doctests using::

@@ -427,7 +427,7 @@ The sections of the docstring are:
*matplotlib* for plotting, but should import it explicitly, e.g.,
``import matplotlib.pyplot as plt``.


Documenting classes
-------------------

@@ -498,7 +498,7 @@ Document these as you would any other function. Do not include
``self`` in the list of parameters. If a method has an equivalent function
(which is the case for many ndarray methods for example), the function
docstring should contain the detailed documentation, and the method docstring
should refer to it. Only put a brief summary and **See Also** sections in the
method docstring.


@@ -514,7 +514,7 @@ instances a useful docstring, we do the following:
* Multiple instances: If multiple instances are exposed, docstrings
for each instance are written and assigned to the instances'
``__doc__`` attributes at run time. The class is documented as usual, and
the exposed instances can be mentioned in the **Notes** and **See Also**
sections.


@@ -553,16 +553,16 @@ hard to get a good overview of all functionality provided by looking at the
source file(s) or the ``__all__`` dict.

Note that license and author info, while often included in source files, do not
belong in docstrings.


Other points to keep in mind
----------------------------
* Equations : as discussed in the **Notes** section above, LaTeX formatting
should be kept to a minimum. Often it's possible to show equations as
Python code or pseudo-code instead, which is much more readable in a
terminal. For inline display use double backticks (like ``y = np.sin(x)``).
For display with blank lines above and below, use a double colon and indent
the code, like::

end of previous sentence::
@@ -597,7 +597,7 @@ output. New paragraphs are marked with a blank line.

Use *italics*, **bold**, and ``monospace`` if needed in any explanations
(but not for variable names and doctest code or multi-line code).
Variable, module, function, and class names should be written between
single back-ticks (```numpy```).

A more extensive example of reST markup can be found in `this example
8 changes: 4 additions & 4 deletions doc/Py3K.rst.txt
@@ -483,21 +483,21 @@ So what is done in ``PyArray_FromAny`` currently is that:
3118 buffers, so that::

array([some_3118_object])

will treat the object similarly as it would handle an `ndarray`.

However, again, bytes (and unicode) have priority and will not be
handled as buffer objects.

This amounts to possible semantic changes:

- ``array(buffer)`` will no longer create an object array
``array([buffer], dtype='O')``, but will instead expand to a view
on the buffer.

.. todo::

Take a second look at places that used PyBuffer_FromMemory and
PyBuffer_FromReadWriteMemory -- what can be done with these?

.. todo::
@@ -633,7 +633,7 @@ Currently, the following is done:

1) Numpy's integer types no longer inherit from Python integer.
2) int is taken dtype-equivalent to NPY_LONG
3) ints are converted to NPY_LONG

PyInt methods are currently replaced by PyLong, via macros in npy_3kcompat.h.
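The user-visible side of point 1 can be checked from Python; a small sketch (behavior of Python 3 builds of NumPy):

```python
import numpy as np

# On Python 3, NumPy integer scalars do not inherit from the builtin int.
x = np.int64(1)
print(isinstance(x, int))

# They still interoperate with Python ints arithmetically.
print(x + 1)
```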

4 changes: 2 additions & 2 deletions doc/TESTS.rst.txt
@@ -206,7 +206,7 @@ but ``test_evens`` is a generator that returns a series of tests, using
A problem with generator tests can be that if a test is failing, it's
hard to see for which parameters. To avoid this problem, ensure that:

- No computation related to the features tested is done in the
``test_*`` generator function, but delegated to a corresponding
``check_*`` function (can be inside the generator, to share namespace).
- The generators are used *solely* for loops over parameters.
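A minimal sketch of that pattern (the names are illustrative; the test runner collects the yielded pairs):

```python
def check_even(n):
    # All computation lives in the check_* function...
    assert n % 2 == 0

def test_evens():
    # ...while the generator only loops over the parameters.
    for n in range(0, 8, 2):
        yield check_even, n
```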
@@ -236,7 +236,7 @@ for numpy.lib::
The doctests are run as if they are in a fresh Python instance which
has executed ``import numpy as np``. Tests that are part of a SciPy
subpackage will have that subpackage already imported. E.g. for a test
in ``scipy/linalg/tests/``, the namespace will be created such that
``from scipy import linalg`` has already executed.


2 changes: 1 addition & 1 deletion doc/cython/README.txt
@@ -17,4 +17,4 @@ To run it locally, simply type::
make help

which shows you the currently available targets (these are just handy
shorthands for common commands.