Merge pull request scipy#11157 from mkg33/docs-GSoD
DOC: stylistic revision, punctuation, consistency
rgommers authored Dec 2, 2019
2 parents 90fea58 + 7520183 commit 2ba80dc
Showing 102 changed files with 1,371 additions and 1,376 deletions.
4 changes: 2 additions & 2 deletions scipy/optimize/_constraints.py
@@ -278,7 +278,7 @@ def new_bounds_to_old(lb, ub, n):
"""Convert the new bounds representation to the old one.
The new representation is a tuple (lb, ub) and the old one is a list
- containing n tuples, i-th containing lower and upper bound on a i-th
+ containing n tuples, ith containing lower and upper bound on a ith
variable.
"""
lb = np.asarray(lb)
@@ -298,7 +298,7 @@ def old_bound_to_new(bounds):
"""Convert the old bounds representation to the new one.
The new representation is a tuple (lb, ub) and the old one is a list
- containing n tuples, i-th containing lower and upper bound on a i-th
+ containing n tuples, ith containing lower and upper bound on a ith
variable.
"""
lb, ub = zip(*bounds)
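
A minimal sketch of the two bounds representations these docstrings describe, in plain NumPy (illustrative only, not the private SciPy helpers themselves):

    import numpy as np

    # New representation: a single (lb, ub) pair of 1-D arrays.
    lb = np.array([0.0, -1.0])
    ub = np.array([10.0, np.inf])

    # Old representation: one (lb_i, ub_i) tuple per variable,
    # as new_bounds_to_old produces.
    old = list(zip(lb, ub))                 # [(0.0, 10.0), (-1.0, inf)]

    # And back again, as in old_bound_to_new.
    new_lb, new_ub = map(np.asarray, zip(*old))
    assert np.all(new_lb == lb) and np.all(new_ub == ub)
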
2 changes: 1 addition & 1 deletion scipy/optimize/_differentiable_functions.py
@@ -464,7 +464,7 @@ def hess(self, x, v):
class LinearVectorFunction(object):
"""Linear vector function and its derivatives.
- Defines a linear function F = A x, where x is n-dimensional vector and
+ Defines a linear function F = A x, where x is N-D vector and
A is m-by-n matrix. The Jacobian is constant and equals to A. The Hessian
is identically zero and it is returned as a csr matrix.
"""
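
The behavior described above is easy to check with plain NumPy and SciPy; the snippet below is an illustrative stand-in, not the class implementation:

    import numpy as np
    from scipy.sparse import csr_matrix

    A = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, 3.0]])         # m-by-n matrix (here 2-by-3)
    x = np.array([1.0, 1.0, 2.0])           # n-dimensional point

    F = A @ x                               # the linear map F = A x -> [3., 7.]
    J = A                                   # Jacobian: constant, equal to A
    H = csr_matrix((3, 3))                  # Hessian: identically zero, kept sparse
    assert H.nnz == 0
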
32 changes: 16 additions & 16 deletions scipy/optimize/_differentialevolution.py
@@ -31,19 +31,19 @@ def differential_evolution(func, bounds, args=(), strategy='best1bin',
Differential Evolution is stochastic in nature (does not use gradient
methods) to find the minimum, and can search large areas of candidate
space, but often requires larger numbers of function evaluations than
- conventional gradient based techniques.
+ conventional gradient-based techniques.
The algorithm is due to Storn and Price [1]_.
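
A small usage sketch of the function documented below, minimizing the built-in Rosenbrock function over a 2-D box (the seed is arbitrary):

    from scipy.optimize import differential_evolution, rosen

    bounds = [(0, 2), (0, 2)]               # one (min, max) pair per parameter
    result = differential_evolution(rosen, bounds, seed=1)
    print(result.x, result.fun)             # x near [1, 1], fun near 0
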
Parameters
----------
func : callable
- The objective function to be minimized.  Must be in the form
+ The objective function to be minimized. Must be in the form
``f(x, *args)``, where ``x`` is the argument in the form of a 1-D array
and ``args`` is a tuple of any additional fixed parameters needed to
completely specify the function.
bounds : sequence or `Bounds`, optional
- Bounds for variables.  There are two ways to specify the bounds:
+ Bounds for variables. There are two ways to specify the bounds:
1. Instance of `Bounds` class.
2. ``(min, max)`` pairs for each element in ``x``, defining the finite
lower and upper bounds for the optimizing argument of `func`. It is
@@ -74,7 +74,7 @@ def differential_evolution(func, bounds, args=(), strategy='best1bin',
evolved. The maximum number of function evaluations (with no polishing)
is: ``(maxiter + 1) * popsize * len(x)``
popsize : int, optional
- A multiplier for setting the total population size.  The population has
+ A multiplier for setting the total population size. The population has
``popsize * len(x)`` individuals (unless the initial population is
supplied via the `init` keyword).
tol : float, optional
@@ -179,7 +179,7 @@ def differential_evolution(func, bounds, args=(), strategy='best1bin',
Important attributes are: ``x`` the solution array, ``success`` a
Boolean flag indicating if the optimizer exited successfully and
``message`` which describes the cause of the termination. See
- `OptimizeResult` for a description of other attributes.  If `polish`
+ `OptimizeResult` for a description of other attributes. If `polish`
was employed, and a lower minimum was obtained by the polishing, then
OptimizeResult also contains the ``jac`` attribute.
If the eventual solution does not satisfy the applied constraints
@@ -201,14 +201,14 @@ def differential_evolution(func, bounds, args=(), strategy='best1bin',
b' = b_0 + mutation * (population[rand0] - population[rand1])
- A trial vector is then constructed. Starting with a randomly chosen 'i'th
+ A trial vector is then constructed. Starting with a randomly chosen ith
parameter the trial is sequentially filled (in modulo) with parameters from
``b'`` or the original candidate. The choice of whether to use ``b'`` or the
original candidate is made with a binomial distribution (the 'bin' in
- 'best1bin') - a random number in [0, 1) is generated.  If this number is
+ 'best1bin') - a random number in [0, 1) is generated. If this number is
less than the `recombination` constant then the parameter is loaded from
- ``b'``, otherwise it is loaded from the original candidate.  The final
- parameter is always loaded from ``b'``.  Once the trial candidate is built
+ ``b'``, otherwise it is loaded from the original candidate. The final
+ parameter is always loaded from ``b'``. Once the trial candidate is built
its fitness is assessed. If the trial is better than the original candidate
then it takes its place. If it is also better than the best overall
candidate it also replaces that.
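
The construction described in the two paragraphs above can be sketched in a few lines of NumPy; the names are illustrative and the solver's internals differ in detail:

    import numpy as np

    rng = np.random.default_rng(0)
    pop = rng.random((10, 4))       # 10 candidates with 4 parameters each
    best = pop[0]                   # b_0, the best-so-far candidate
    mutation, recombination = 0.8, 0.7

    # b' = b_0 + mutation * (population[rand0] - population[rand1])
    r0, r1 = rng.choice(np.arange(1, 10), size=2, replace=False)
    b_prime = best + mutation * (pop[r0] - pop[r1])

    # Binomial crossover: a parameter comes from b' when a uniform draw
    # falls below `recombination`; one position is forced to come from b'.
    candidate = pop[3].copy()
    fill_point = rng.integers(4)
    crossovers = rng.random(4) < recombination
    crossovers[fill_point] = True
    trial = np.where(crossovers, b_prime, candidate)
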
@@ -352,7 +352,7 @@ class DifferentialEvolutionSolver(object):
evolved. The maximum number of function evaluations (with no polishing)
is: ``(maxiter + 1) * popsize * len(x)``
popsize : int, optional
- A multiplier for setting the total population size.  The population has
+ A multiplier for setting the total population size. The population has
``popsize * len(x)`` individuals (unless the initial population is
supplied via the `init` keyword).
tol : float, optional
@@ -389,7 +389,7 @@ class DifferentialEvolutionSolver(object):
callback : callable, `callback(xk, convergence=val)`, optional
A function to follow the progress of the minimization. ``xk`` is
the current value of ``x0``. ``val`` represents the fractional
- value of the population convergence.  When ``val`` is greater than one
+ value of the population convergence. When ``val`` is greater than one
the function halts. If callback returns `True`, then the minimization
is halted (any polishing is still carried out).
polish : bool, optional
@@ -638,7 +638,7 @@ def init_population_lhs(self):

def init_population_random(self):
"""
- Initialises the population at random. This type of initialization
+ Initializes the population at random. This type of initialization
can possess clustering, Latin Hypercube sampling is generally better.
"""
rng = self.random_number_generator
@@ -653,7 +653,7 @@ def init_population_random(self):

def init_population_array(self, init):
"""
- Initialises the population with a user specified population.
+ Initializes the population with a user specified population.
Parameters
----------
@@ -746,7 +746,7 @@ def solve(self):

self._promote_lowest_energy()

- # do the optimisation.
+ # do the optimization.
for nit in xrange(1, self.maxiter + 1):
# evolve the population by a generation
try:
@@ -820,7 +820,7 @@ def solve(self):
self._nfev += result.nfev
DE_result.nfev = self._nfev

- # polishing solution is only accepted if there is an improvement in
+ # Polishing solution is only accepted if there is an improvement in
# cost function, the polishing was successful and the solution lies
# within the bounds.
if (result.fun < DE_result.fun and
@@ -1244,7 +1244,7 @@ def _rand2(self, samples):
def _select_samples(self, candidate, number_samples):
"""
obtain random integers from range(self.num_population_members),
- without replacement.  You can't have the original candidate either.
+ without replacement. You can't have the original candidate either.
"""
idxs = list(range(self.num_population_members))
idxs.remove(candidate)
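
The method body is cut off by the diff; a self-contained sketch of the selection it describes (shuffle, then slice), assuming the remainder follows the obvious pattern:

    import numpy as np

    def select_samples(candidate, num_members, number_samples, rng):
        # Distinct indices from range(num_members), never the candidate itself.
        idxs = list(range(num_members))
        idxs.remove(candidate)
        rng.shuffle(idxs)
        return idxs[:number_samples]

    rng = np.random.default_rng(42)
    print(select_samples(3, 10, 5, rng))    # five distinct indices, 3 excluded
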
14 changes: 7 additions & 7 deletions scipy/optimize/_dual_annealing.py
@@ -29,10 +29,10 @@ class VisitingDistribution(object):
Parameters
----------
lb : array_like
- A 1-D numpy ndarray containing lower bounds of the generated
+ A 1-D NumPy ndarray containing lower bounds of the generated
components. Neither NaN or inf are allowed.
ub : array_like
- A 1-D numpy ndarray containing upper bounds for the generated
+ A 1-D NumPy ndarray containing upper bounds for the generated
components. Neither NaN or inf are allowed.
visiting_param : float
Parameter for visiting distribution. Default value is 2.62.
@@ -135,10 +135,10 @@ class EnergyState(object):
Parameters
----------
lower : array_like
- A 1-D numpy ndarray containing lower bounds for generating an initial
+ A 1-D NumPy ndarray containing lower bounds for generating an initial
random components in the `reset` method.
upper : array_like
- A 1-D numpy ndarray containing upper bounds for generating an initial
+ A 1-D NumPy ndarray containing upper bounds for generating an initial
random components in the `reset` method
components. Neither NaN or inf are allowed.
callback : callable, ``callback(x, f, context)``, optional
@@ -436,7 +436,7 @@ def dual_annealing(func, bounds, args=(), maxiter=1000,
Parameters
----------
func : callable
- The objective function to be minimized.  Must be in the form
+ The objective function to be minimized. Must be in the form
``f(x, *args)``, where ``x`` is the argument in the form of a 1-D array
and ``args`` is a tuple of any additional fixed parameters needed to
completely specify the function.
@@ -503,7 +503,7 @@ def dual_annealing(func, bounds, args=(), maxiter=1000,
If the callback implementation returns True, the algorithm will stop.
x0 : ndarray, shape(n,), optional
- Coordinates of a single n-dimensional starting point.
+ Coordinates of a single N-D starting point.
Returns
-------
@@ -583,7 +583,7 @@ def dual_annealing(func, bounds, args=(), maxiter=1000,
Examples
--------
- The following example is a 10-dimensional problem, with many local minima.
+ The following example is a 10-D problem, with many local minima.
The function involved is called Rastrigin
(https://en.wikipedia.org/wiki/Rastrigin_function)
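
A runnable version of that example, mirroring the snippet in the SciPy documentation (the exact output depends on the seed; the global minimum is f(0, ..., 0) = 0):

    import numpy as np
    from scipy.optimize import dual_annealing

    # 10-D Rastrigin function.
    func = lambda x: np.sum(x * x - 10 * np.cos(2 * np.pi * x)) + 10 * np.size(x)

    lw, up = [-5.12] * 10, [5.12] * 10
    ret = dual_annealing(func, bounds=list(zip(lw, up)), seed=1234)
    print(ret.x, ret.fun)                   # coordinates near zero, fun close to 0
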
8 changes: 4 additions & 4 deletions scipy/optimize/_hessian_update_strategy.py
@@ -76,12 +76,12 @@ def dot(self, p):
Parameters
----------
p : array_like
- 1-d array representing a vector.
+ 1-D array representing a vector.
Returns
-------
Hp : array
- 1-d represents the result of multiplying the approximation matrix
+ 1-D represents the result of multiplying the approximation matrix
by vector p.
"""
raise NotImplementedError("The method ``dot(p)``"
@@ -206,12 +206,12 @@ def dot(self, p):
Parameters
----------
p : array_like
- 1-d array representing a vector.
+ 1-D array representing a vector.
Returns
-------
Hp : array
- 1-d represents the result of multiplying the approximation matrix
+ 1-D represents the result of multiplying the approximation matrix
by vector p.
"""
if self.approx_type == 'hess':
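
For context, the public update strategies expose this interface; a brief usage sketch with scipy.optimize.BFGS (values chosen arbitrarily, with positive curvature so the update is accepted):

    import numpy as np
    from scipy.optimize import BFGS

    H = BFGS()
    H.initialize(n=3, approx_type='hess')
    H.update(delta_x=np.array([0.1, 0.0, 0.2]),
             delta_grad=np.array([0.3, -0.1, 0.4]))

    p = np.array([1.0, 2.0, 3.0])
    Hp = H.dot(p)                           # 1-D result: (approximate Hessian) @ p
    print(Hp)
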
42 changes: 21 additions & 21 deletions scipy/optimize/_linprog.py
@@ -43,18 +43,18 @@ def linprog_verbose_callback(res):
----------
res : A `scipy.optimize.OptimizeResult` consisting of the following fields:
- x : 1D array
+ x : 1-D array
The independent variable vector which optimizes the linear
programming problem.
fun : float
Value of the objective function.
success : bool
True if the algorithm succeeded in finding an optimal solution.
- slack : 1D array
+ slack : 1-D array
The values of the slack variables. Each slack variable corresponds
to an inequality constraint. If the slack is zero, then the
corresponding constraint is active.
- con : 1D array
+ con : 1-D array
The (nominally zero) residuals of the equality constraints, that is,
``b - A_eq @ x``
phase : int
@@ -120,18 +120,18 @@ def linprog_terse_callback(res):
----------
res : A `scipy.optimize.OptimizeResult` consisting of the following fields:
- x : 1D array
+ x : 1-D array
The independent variable vector which optimizes the linear
programming problem.
fun : float
Value of the objective function.
success : bool
True if the algorithm succeeded in finding an optimal solution.
- slack : 1D array
+ slack : 1-D array
The values of the slack variables. Each slack variable corresponds
to an inequality constraint. If the slack is zero, then the
corresponding constraint is active.
- con : 1D array
+ con : 1-D array
The (nominally zero) residuals of the equality constraints, that is,
``b - A_eq @ x``.
phase : int
@@ -198,18 +198,18 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
Parameters
----------
- c : 1D array
+ c : 1-D array
The coefficients of the linear objective function to be minimized.
- A_ub : 2D array, optional
+ A_ub : 2-D array, optional
The inequality constraint matrix. Each row of ``A_ub`` specifies the
coefficients of a linear inequality constraint on ``x``.
- b_ub : 1D array, optional
+ b_ub : 1-D array, optional
The inequality constraint vector. Each element represents an
upper bound on the corresponding value of ``A_ub @ x``.
- A_eq : 2D array, optional
+ A_eq : 2-D array, optional
The equality constraint matrix. Each row of ``A_eq`` specifies the
coefficients of a linear equality constraint on ``x``.
- b_eq : 1D array, optional
+ b_eq : 1-D array, optional
The equality constraint vector. Each element of ``A_eq @ x`` must equal
the corresponding element of ``b_eq``.
bounds : sequence, optional
@@ -230,16 +230,16 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
iteration of the algorithm. The callback function must accept a single
`scipy.optimize.OptimizeResult` consisting of the following fields:
- x : 1D array
+ x : 1-D array
The current solution vector.
fun : float
The current value of the objective function ``c @ x``.
success : bool
``True`` when the algorithm has completed successfully.
- slack : 1D array
+ slack : 1-D array
The (nominally positive) values of the slack,
``b_ub - A_ub @ x``.
- con : 1D array
+ con : 1-D array
The (nominally zero) residuals of the equality constraints,
``b_eq - A_eq @ x``.
phase : int
@@ -287,7 +287,7 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
For method-specific options, see
:func:`show_options('linprog') <show_options>`.
- x0 : 1D array, optional
+ x0 : 1-D array, optional
Guess values of the decision variables, which will be refined by
the optimization algorithm. This argument is currently used only by the
'revised simplex' method, and can only be used if `x0` represents a
@@ -299,15 +299,15 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
res : OptimizeResult
A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
- x : 1D array
+ x : 1-D array
The values of the decision variables that minimizes the
objective function while satisfying the constraints.
fun : float
The optimal value of the objective function ``c @ x``.
- slack : 1D array
+ slack : 1-D array
The (nominally positive) values of the slack variables,
``b_ub - A_ub @ x``.
- con : 1D array
+ con : 1-D array
The (nominally zero) residuals of the equality constraints,
``b_eq - A_eq @ x``.
success : bool
@@ -381,7 +381,7 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
- column singletons in ``A_ub``, representing simple bounds.
If presolve reveals that the problem is unbounded (e.g. an unconstrained
- and unbounded variable has negative cost) or infeasible (e.g. a row of
+ and unbounded variable has negative cost) or infeasible (e.g., a row of
zeros in ``A_eq`` corresponds with a nonzero in ``b_eq``), the solver
terminates with the appropriate status code. Note that presolve terminates
as soon as any sign of unboundedness is detected; consequently, a problem
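
For instance, a zero row in ``A_eq`` paired with a nonzero entry in ``b_eq`` is reported as infeasible (``status`` code 2) without running the main solver; a small sketch:

    from scipy.optimize import linprog

    c = [1.0, 1.0]
    A_eq = [[0.0, 0.0],                     # a row of zeros in A_eq ...
            [1.0, 1.0]]
    b_eq = [1.0, 2.0]                       # ... paired with a nonzero in b_eq

    res = linprog(c, A_eq=A_eq, b_eq=b_eq)
    print(res.status, res.message)          # status 2: the problem is infeasible
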
@@ -526,7 +526,7 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
c_o, A_ub_o, b_ub_o, A_eq_o, b_eq_o = c.copy(
), A_ub.copy(), b_ub.copy(), A_eq.copy(), b_eq.copy()

- # Solve trivial problem, eliminate variables, tighten bounds, etc...
+ # Solve trivial problem, eliminate variables, tighten bounds, etc.
c0 = 0 # we might get a constant term in the objective
if solver_options.pop('presolve', True):
rr = solver_options.pop('rr', True)
@@ -559,7 +559,7 @@ def linprog(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
else:
raise ValueError('Unknown solver %s' % method)

- # Eliminate artificial variables, re-introduce presolved variables, etc...
+ # Eliminate artificial variables, re-introduce presolved variables, etc.
# need modified bounds here to translate variables appropriately
disp = solver_options.get('disp', False)
