MAINT: Misc. typo fixes (numpy#13664)
* DOC, MAINT: Misc. typo fixes

Found via `codespell`
luzpaz authored and mattip committed May 31, 2019
1 parent 43465f7 commit 0c70787
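The commit message credits `codespell`, a dictionary-based spell checker for source trees, with finding these typos. As a rough sketch of that workflow — the exact paths and options used for this commit are not recorded, so the targets, `--skip` patterns, and ignore list below are assumptions — a pass over a NumPy checkout might look like:

```python
# Hypothetical reconstruction of a codespell pass; the target paths,
# --skip patterns, and ignore list are illustrative, not the ones
# actually used for this commit.
import subprocess

# Dry run: report suspected typos without modifying any files.
subprocess.run(
    ["codespell", "numpy", "doc", "benchmarks",
     "--skip=*.pyc,./build,./.git",
     "--ignore-words-list=nd,ans"],
    check=False,  # codespell exits non-zero whenever it finds typos
)

# After reviewing the report, --write-changes applies the accepted
# corrections in place; the diff below is the reviewed result.
subprocess.run(
    ["codespell", "--write-changes", "numpy", "doc", "benchmarks"],
    check=False,
)
```

Automated suggestions still need review: as several hunks below show, `codespell` fixes one word per line and leaves neighboring misspellings (e.g. ``reodering``) untouched when they are apparently absent from its dictionary.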
Showing 38 changed files with 52 additions and 51 deletions.
2 changes: 1 addition & 1 deletion benchmarks/benchmarks/bench_function_base.py
@@ -269,7 +269,7 @@ def setup(self):
    def time_sort_worst(self):
        np.sort(self.worst)

-    # Retain old benchmark name for backward compatability
+    # Retain old benchmark name for backward compatibility
    time_sort_worst.benchmark_name = "bench_function_base.Sort.time_sort_worst"
2 changes: 1 addition & 1 deletion doc/DISTUTILS.rst.txt
@@ -302,7 +302,7 @@ Template files

NumPy Distutils preprocesses C source files (extension: :file:`.c.src`) written
in a custom templating language to generate C code. The :c:data:`@` symbol is
-used to wrap macro-style variables to empower a string substitution mechansim
+used to wrap macro-style variables to empower a string substitution mechanism
that might describe (for instance) a set of data types.

As a more detailed scenario, a loop in the NumPy C source code may
2 changes: 1 addition & 1 deletion doc/Makefile
@@ -5,7 +5,7 @@
# issues with the amendments to PYTHONPATH and install paths (see DIST_VARS).

# Use explicit "version_info" indexing since make cannot handle colon characters, and
-# evaluate it now to allow easier debugging when printing the varaible
+# evaluate it now to allow easier debugging when printing the variable

PYVER:=$(shell python3 -c 'from sys import version_info as v; print("{0}.{1}".format(v[0], v[1]))')
PYTHON = python$(PYVER)
2 changes: 1 addition & 1 deletion doc/source/reference/c-api.array.rst
@@ -219,7 +219,7 @@ From scratch
If *data* is ``NULL``, then new unitinialized memory will be allocated and
*flags* can be non-zero to indicate a Fortran-style contiguous array. Use
-:c:func:`PyArray_FILLWBYTE` to initialze the memory.
+:c:func:`PyArray_FILLWBYTE` to initialize the memory.

If *data* is not ``NULL``, then it is assumed to point to the memory
to be used for the array and the *flags* argument is used as the
8 changes: 4 additions & 4 deletions doc/source/reference/c-api.coremath.rst
@@ -185,15 +185,15 @@ Those can be useful for precise floating point comparison.
* NPY_FPE_INVALID

Note that :c:func:`npy_get_floatstatus_barrier` is preferable as it prevents
-agressive compiler optimizations reordering the call relative to
+aggressive compiler optimizations reordering the call relative to
the code setting the status, which could lead to incorrect results.

.. versionadded:: 1.9.0

.. c:function:: int npy_get_floatstatus_barrier(char*)

Get floating point status. A pointer to a local variable is passed in to
-prevent aggresive compiler optimizations from reodering this function call
+prevent aggressive compiler optimizations from reodering this function call
relative to the code setting the status, which could lead to incorrect
results.

@@ -211,15 +211,15 @@ Those can be useful for precise floating point comparison.
Clears the floating point status. Returns the previous status mask.

Note that :c:func:`npy_clear_floatstatus_barrier` is preferable as it
-prevents agressive compiler optimizations reordering the call relative to
+prevents aggressive compiler optimizations reordering the call relative to
the code setting the status, which could lead to incorrect results.

.. versionadded:: 1.9.0

.. c:function:: int npy_clear_floatstatus_barrier(char*)

Clears the floating point status. A pointer to a local variable is passed in to
-prevent aggresive compiler optimizations from reodering this function call.
+prevent aggressive compiler optimizations from reodering this function call.
Returns the previous status mask.

.. versionadded:: 1.15.0
6 changes: 3 additions & 3 deletions doc/source/reference/random/index.rst
@@ -5,7 +5,7 @@
numpy.random
============

-Numpy's random number routines produce psuedo random numbers using
+Numpy's random number routines produce pseudo random numbers using
combinations of a `BitGenerator` to create sequences and a `Generator`
to use those sequences to sample from different statistical distributions:

@@ -41,7 +41,7 @@ which will be faster than the legacy methods in `RandomState`
`Generator` can be used as a direct replacement for `~RandomState`, although
the random values are generated by `~xoshiro256.Xoshiro256`. The
-`Generator` holds an instance of a BitGenerator. It is accessable as
+`Generator` holds an instance of a BitGenerator. It is accessible as
``gen.bit_generator``.

.. code-block:: python

@@ -127,7 +127,7 @@ What's New or Different
:ref:`Cython <randomgen_cython>`.
* `~.Generator.integers` is now the canonical way to generate integer
random numbers from a discrete uniform distribution. The ``rand`` and
-``randn`` methods are only availabe through the legacy `~.RandomState`.
+``randn`` methods are only available through the legacy `~.RandomState`.
The ``endpoint`` keyword can be used to specify open or closed intervals.
This replaces both ``randint`` and the deprecated ``random_integers``.
* `~.Generator.random` is now the canonical way to generate floating-point
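For readers who want to try the `Generator` API discussed in the hunks above: the snippet below is a sketch against a released NumPy (1.17+), where the default bit generator is PCG64 rather than the Xoshiro256 mentioned in this development-era text; the seed value is arbitrary.

```python
import numpy as np

# default_rng() returns a Generator; in released NumPy it wraps PCG64,
# not the Xoshiro256 referenced in the 1.17 development docs above.
rng = np.random.default_rng(12345)

# The underlying BitGenerator is exposed as the docs describe.
print(rng.bit_generator)  # e.g. <numpy.random._pcg64.PCG64 object at ...>

# `integers` supersedes randint/random_integers; endpoint=True makes the
# upper bound inclusive (a closed interval).
draws = rng.integers(1, 6, size=10, endpoint=True)  # values in 1..6
print(draws)
```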
2 changes: 1 addition & 1 deletion doc/source/reference/random/new-or-different.rst
@@ -54,7 +54,7 @@ And in more detail:
  `~.Generator.standard_gamma`.
* `~.Generator.integers` is now the canonical way to generate integer
  random numbers from a discrete uniform distribution. The ``rand`` and
-  ``randn`` methods are only availabe through the legacy `~.RandomState`.
+  ``randn`` methods are only available through the legacy `~.RandomState`.
  This replaces both ``randint`` and the deprecated ``random_integers``.
* The Box-Muller used to produce NumPy's normals is no longer available.
* All bit generators can produce doubles, uint64s and
2 changes: 1 addition & 1 deletion numpy/core/_dtype_ctypes.py
@@ -1,7 +1,7 @@
"""
Conversion from ctypes to dtype.

-In an ideal world, we could acheive this through the PEP3118 buffer protocol,
+In an ideal world, we could achieve this through the PEP3118 buffer protocol,
something like::

    def dtype_from_ctypes_type(t):
2 changes: 1 addition & 1 deletion numpy/core/einsumfunc.py
@@ -287,7 +287,7 @@ def _update_other_results(results, best):
    Returns
    -------
    mod_results : list
-        The list of modifed results, updated with outcome of ``best`` contraction.
+        The list of modified results, updated with outcome of ``best`` contraction.
    """

    best_con = best[1]
2 changes: 1 addition & 1 deletion numpy/core/numerictypes.py
@@ -275,7 +275,7 @@ def obj2sctype(rep, default=None):
    <class 'list'>

    """
-    # prevent abtract classes being upcast
+    # prevent abstract classes being upcast
    if isinstance(rep, type) and issubclass(rep, generic):
        return rep
    # extract dtype from arrays
8 changes: 4 additions & 4 deletions numpy/core/shape_base.py
@@ -555,10 +555,10 @@ def _concatenate_shapes(shapes, axis):
    ret[(slice(None),) * axis + sl_c] == c
    ```

-    Thses are called slice prefixes since they are used in the recursive
+    These are called slice prefixes since they are used in the recursive
    blocking algorithm to compute the left-most slices during the
    recursion. Therefore, they must be prepended to rest of the slice
-    that was computed deeper in the recusion.
+    that was computed deeper in the recursion.

    These are returned as tuples to ensure that they can quickly be added
    to existing slice tuple without creating a new tuple everytime.

@@ -841,9 +841,9 @@ def block(arrays):
        return _block_concatenate(arrays, list_ndim, result_ndim)


-# Theses helper functions are mostly used for testing.
+# These helper functions are mostly used for testing.
# They allow us to write tests that directly call `_block_slicing`
-# or `_block_concatenate` without blocking large arrays to forse the wisdom
+# or `_block_concatenate` without blocking large arrays to force the wisdom
# to trigger the desired path.
def _block_setup(arrays):
    """
4 changes: 2 additions & 2 deletions numpy/core/src/multiarray/arraytypes.c.src
@@ -2260,7 +2260,7 @@ VOID_copyswapn (char *dst, npy_intp dstride, char *src, npy_intp sstride,
    char *dstptr, *srcptr;
    /*
     * In certain cases subarray copy can be optimized. This is when
-     * swapping is unecessary and the subarrays data type can certainly
+     * swapping is unnecessary and the subarrays data type can certainly
     * be simply copied (no object, fields, subarray, and not a user dtype).
     */
    npy_bool can_optimize_subarray = (!swap &&

@@ -2347,7 +2347,7 @@ VOID_copyswap (char *dst, char *src, int swap, PyArrayObject *arr)
    int subitemsize;
    /*
     * In certain cases subarray copy can be optimized. This is when
-     * swapping is unecessary and the subarrays data type can certainly
+     * swapping is unnecessary and the subarrays data type can certainly
     * be simply copied (no object, fields, subarray, and not a user dtype).
     */
    npy_bool can_optimize_subarray = (!swap &&
2 changes: 1 addition & 1 deletion numpy/core/src/multiarray/ctors.c
@@ -3170,7 +3170,7 @@ PyArray_Zeros(int nd, npy_intp *dims, PyArray_Descr *type, int is_f_order)
 * Empty
 *
 * accepts NULL type
- * steals referenct to type
+ * steals a reference to type
 */
NPY_NO_EXPORT PyObject *
PyArray_Empty(int nd, npy_intp *dims, PyArray_Descr *type, int is_f_order)
2 changes: 1 addition & 1 deletion numpy/core/src/multiarray/methods.c
@@ -1687,7 +1687,7 @@ array_reduce(PyArrayObject *self, PyObject *NPY_UNUSED(args))
    Notice because Python does not describe a mechanism to write
    raw data to the pickle, this performs a copy to a string first

-    This issue is now adressed in protocol 5, where a buffer is serialized
+    This issue is now addressed in protocol 5, where a buffer is serialized
    instead of a string,
    */
2 changes: 1 addition & 1 deletion numpy/core/src/npymath/npy_math_complex.c.src
@@ -1246,7 +1246,7 @@ _clog_for_large_values@c@(@type@ x, @type@ y,
     * Divide x and y by E, and then add 1 to the logarithm. This depends
     * on E being larger than sqrt(2).
     * Dividing by E causes an insignificant loss of accuracy; however
-     * this method is still poor since it is uneccessarily slow.
+     * this method is still poor since it is unnecessarily slow.
     */
    if (ax > @TMAX@ / 2) {
        *rr = npy_log@c@(npy_hypot@c@(x / NPY_E@c@, y / NPY_E@c@)) + 1;
2 changes: 1 addition & 1 deletion numpy/core/src/npysort/selection.c.src
@@ -40,7 +40,7 @@ static NPY_INLINE void store_pivot(npy_intp pivot, npy_intp kth,
}

/*
- * If pivot is the requested kth store it, overwritting other pivots if
+ * If pivot is the requested kth store it, overwriting other pivots if
 * required. This must be done so iterative partition can work without
 * manually shifting lower data offset by kth each time
 */
2 changes: 1 addition & 1 deletion numpy/core/src/umath/ufunc_type_resolution.c
@@ -1737,7 +1737,7 @@ set_ufunc_loop_data_types(PyUFuncObject *self, PyArrayObject **op,
    }
    /*
     * For outputs, copy the dtype from op[0] if the type_num
-     * matches, similarly to preserve metdata.
+     * matches, similarly to preserve metadata.
     */
    else if (i >= nin && op[0] != NULL &&
        PyArray_DESCR(op[0])->type_num == type_nums[i]) {
2 changes: 1 addition & 1 deletion numpy/core/tests/test_dtype.py
@@ -603,7 +603,7 @@ class TestStructuredDtypeSparseFields(object):
        'offsets':[4]}, (2, 3))])

    @pytest.mark.xfail(reason="inaccessible data is changed see gh-12686.")
-    @pytest.mark.valgrind_error(reason="reads from unitialized buffers.")
+    @pytest.mark.valgrind_error(reason="reads from uninitialized buffers.")
    def test_sparse_field_assignment(self):
        arr = np.zeros(3, self.dtype)
        sparse_arr = arr.view(self.sparse_dtype)
2 changes: 1 addition & 1 deletion numpy/core/tests/test_half.py
@@ -104,7 +104,7 @@ def test_half_conversion_rounding(self, float_t, shift, offset):
        # logic will be necessary, an arbitrarily small offset should cause
        # normal up/down rounding always.

-        # Calculate the expecte pattern:
+        # Calculate the expected pattern:
        cmp_patterns = f16s_patterns[1:-1].copy()

        if shift == "down" and offset != "up":
2 changes: 1 addition & 1 deletion numpy/core/tests/test_nditer.py
@@ -2292,7 +2292,7 @@ def test_dtype_copy(self):
        assert_equal(vals, [[0, 1, 2], [3, 4, 5]])
        vals = None

-        # writebackifcopy - using conext manager
+        # writebackifcopy - using context manager
        a = arange(6, dtype='f4').reshape(2, 3)
        i, j = np.nested_iters(a, [[0], [1]],
            op_flags=['readwrite', 'updateifcopy'],
2 changes: 1 addition & 1 deletion numpy/core/tests/test_scalar_methods.py
@@ -64,7 +64,7 @@ def test_against_known_values(self):
            R(*np.double(2.1).as_integer_ratio()))
        assert_equal(R(-4728779608739021, 2251799813685248),
            R(*np.double(-2.1).as_integer_ratio()))
-        # longdouble is platform depedent
+        # longdouble is platform dependent

    @pytest.mark.parametrize("ftype, frac_vals, exp_vals", [
        # dtype test cases generated using hypothesis
2 changes: 1 addition & 1 deletion numpy/core/tests/test_scalarprint.py
@@ -51,7 +51,7 @@ def check(v):

    def test_py2_float_print(self):
        # gh-10753
-        # In python2, the python float type implements an obsolte method
+        # In python2, the python float type implements an obsolete method
        # tp_print, which overrides tp_repr and tp_str when using "print" to
        # output to a "real file" (ie, not a StringIO). Make sure we don't
        # inherit it.
4 changes: 2 additions & 2 deletions numpy/distutils/fcompiler/__init__.py
@@ -484,11 +484,11 @@ def customize(self, dist = None):
        # XXX Assuming that free format is default for f90 compiler.
        fix = self.command_vars.compiler_fix
        # NOTE: this and similar examples are probably just
-        # exluding --coverage flag when F90 = gfortran --coverage
+        # excluding --coverage flag when F90 = gfortran --coverage
        # instead of putting that flag somewhere more appropriate
        # this and similar examples where a Fortran compiler
        # environment variable has been customized by CI or a user
-        # should perhaps eventually be more throughly tested and more
+        # should perhaps eventually be more thoroughly tested and more
        # robustly handled
        if fix:
            fix = _shell_utils.NativeParser.split(fix)
2 changes: 1 addition & 1 deletion numpy/doc/basics.py
@@ -276,7 +276,7 @@
The behaviour of NumPy and Python integer types differs significantly for
integer overflows and may confuse users expecting NumPy integers to behave
similar to Python's ``int``. Unlike NumPy, the size of Python's ``int`` is
-flexible. This means Python integers may expand to accomodate any integer and
+flexible. This means Python integers may expand to accommodate any integer and
will not overflow.

NumPy provides `numpy.iinfo` and `numpy.finfo` to verify the
5 changes: 3 additions & 2 deletions numpy/doc/indexing.py
@@ -1,4 +1,5 @@
-"""==============
+"""
+==============
Array indexing
==============

@@ -107,7 +108,7 @@
It is possible to use special features to effectively increase the
number of dimensions in an array through indexing so the resulting
-array aquires the shape needed for use in an expression or with a
+array acquires the shape needed for use in an expression or with a
specific function.

Index arrays
4 changes: 2 additions & 2 deletions numpy/lib/function_base.py
@@ -1,7 +1,7 @@
from __future__ import division, absolute_import, print_function

try:
-    # Accessing collections abstact classes from collections
+    # Accessing collections abstract classes from collections
    # has been deprecated since Python 3.3
    import collections.abc as collections_abc
except ImportError:

@@ -4341,7 +4341,7 @@ def delete(arr, obj, axis=None):
    else:
        slobj[axis] = slice(None, start)
        new[tuple(slobj)] = arr[tuple(slobj)]
-    # copy end chunck
+    # copy end chunk
    if stop == N:
        pass
    else:
2 changes: 1 addition & 1 deletion numpy/lib/recfunctions.py
@@ -146,7 +146,7 @@ def get_names(adtype):
def get_names_flat(adtype):
    """
    Returns the field names of the input datatype as a tuple. Nested structure
-    are flattend beforehand.
+    are flattened beforehand.

    Parameters
    ----------
2 changes: 1 addition & 1 deletion numpy/lib/tests/test_histograms.py
@@ -798,7 +798,7 @@ def test_density_non_uniform_2d(self):
        hist, edges = histogramdd((y, x), bins=(y_edges, x_edges))
        assert_equal(hist, relative_areas)

-        # resulting histogram should be uniform, since counts and areas are propotional
+        # resulting histogram should be uniform, since counts and areas are proportional
        hist, edges = histogramdd((y, x), bins=(y_edges, x_edges), density=True)
        assert_equal(hist, 1 / (8*8))
2 changes: 1 addition & 1 deletion numpy/random/generator.pyx
@@ -440,7 +440,7 @@ cdef class Generator:
            'when required.')

        # Implementation detail: the old API used a masked method to generate
-        # bounded uniform integers. Lemire's method is preferrable since it is
+        # bounded uniform integers. Lemire's method is preferable since it is
        # faster. randomgen allows a choice, we will always use the faster one.
        cdef bint _masked = True
2 changes: 1 addition & 1 deletion numpy/random/mtrand.pyx
@@ -621,7 +621,7 @@ cdef class RandomState:
            'ValueError', DeprecationWarning)

        # Implementation detail: the use a masked method to generate
-        # bounded uniform integers. Lemire's method is preferrable since it is
+        # bounded uniform integers. Lemire's method is preferable since it is
        # faster. randomgen allows a choice, we will always use the slower but
        # backward compatible one.
        cdef bint _masked = True
2 changes: 1 addition & 1 deletion numpy/random/src/philox/philox-benchmark.c
@@ -5,7 +5,7 @@
 *
 * gcc philox-benchmark.c -O3 -o philox-benchmark
 *
- * Requres the Random123 directory containing header files to be located in the
+ * Requires the Random123 directory containing header files to be located in the
 * same directory (not included).
 */
#include "Random123/philox.h"
2 changes: 1 addition & 1 deletion numpy/random/src/philox/philox-test-data-gen.c
@@ -7,7 +7,7 @@
 * gcc philox-test-data-gen.c -o philox-test-data-gen
 * ./philox-test-data-gen
 *
- * Requres the Random123 directory containing header files to be located in the
+ * Requires the Random123 directory containing header files to be located in the
 * same directory (not included).
 *
 */
2 changes: 1 addition & 1 deletion numpy/random/src/threefry/threefry-benchmark.c
@@ -5,7 +5,7 @@
 *
 * gcc threefry-benchmark.c -O3 -o threefry-benchmark
 *
- * Requres the Random123 directory containing header files to be located in the
+ * Requires the Random123 directory containing header files to be located in the
 * same directory (not included).
 */
#include "Random123/threefry.h"
2 changes: 1 addition & 1 deletion numpy/random/src/threefry/threefry-test-data-gen.c
@@ -8,7 +8,7 @@
 * threefry-test-data-gen
 * ./threefry-test-data-gen
 *
- * Requres the Random123 directory containing header files to be located in the
+ * Requires the Random123 directory containing header files to be located in the
 * same directory (not included).
 *
 */
(4 more changed files not shown)
