# Gradient Descent
:label:`sec_gd`
In this section we are going to introduce the basic concepts underlying gradient descent. This is brief by necessity. See e.g., :cite:`Boyd.Vandenberghe.2004` for an in-depth introduction to convex optimization. Although the latter is rarely used directly in deep learning, an understanding of gradient descent is key to understanding stochastic gradient descent algorithms. For instance, the optimization problem might diverge due to an overly large learning rate. This phenomenon can already be seen in gradient descent. Likewise, preconditioning is a common technique in gradient descent and carries over to more advanced algorithms. Let's start with a simple special case.
## Gradient Descent in One Dimension

Gradient descent in one dimension is an excellent example to explain why the gradient descent algorithm may reduce the value of the objective function. Consider some continuously differentiable real-valued function $f: \mathbb{R} \rightarrow \mathbb{R}$. Using a Taylor expansion (:numref:`sec_single_variable_calculus`) we obtain that

$$f(x + \epsilon) = f(x) + \epsilon f'(x) + \mathcal{O}(\epsilon^2).$$
:eqlabel:`gd-taylor`

That is, in first approximation $f(x+\epsilon)$ is given by the function value $f(x)$ and the first derivative $f'(x)$ at $x$. It is not unreasonable to assume that for small $\epsilon$ moving in the direction of the negative gradient will decrease $f$. To keep things simple we pick a fixed step size $\eta > 0$ and choose $\epsilon = -\eta f'(x)$. Plugging this into the Taylor expansion above we get

$$f(x - \eta f'(x)) = f(x) - \eta f'^2(x) + \mathcal{O}(\eta^2 f'^2(x)).$$

If the derivative $f'(x) \neq 0$ does not vanish we make progress since $\eta f'^2(x) > 0$. Moreover, we can always choose $\eta$ small enough for the higher-order terms to become irrelevant. Hence we arrive at

$$f(x - \eta f'(x)) \lessapprox f(x).$$

This means that, if we use

$$x \leftarrow x - \eta f'(x)$$

to iterate $x$, the value of the function $f(x)$ might decline. Therefore, in gradient descent we first choose an initial value $x$ and a constant $\eta > 0$ and then use them to iterate $x$ until a stopping condition is reached, for example, when the magnitude of the gradient $|f'(x)|$ is small enough or the number of iterations has reached a certain value.

For simplicity we choose the objective function $f(x)=x^2$ to illustrate how to implement gradient descent. Although we know that $x=0$ is the solution that minimizes $f(x)$, we still use this simple function to observe how $x$ changes.
```python
%matplotlib inline
import d2l
from mxnet import np, npx
npx.set_np()
```
```python
def f(x):
    return x**2  # Objective function

def gradf(x):
    return 2 * x  # Its derivative
```
Next, we use $x=10$ as the initial value and assume $\eta=0.2$. Using gradient descent to iterate $x$ for 10 times, we can see that, eventually, the value of $x$ approaches the optimal solution.
```python
def gd(eta):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * gradf(x)
        results.append(x)
    print('epoch 10, x:', x)
    return results

res = gd(0.2)
```
The progress of optimizing over $x$ can be plotted as follows.
```python
def show_trace(res):
    n = max(abs(min(res)), abs(max(res)))
    f_line = np.arange(-n, n, 0.01)
    d2l.set_figsize((3.5, 2.5))
    d2l.plot([f_line, res], [[f(x) for x in f_line], [f(x) for x in res]],
             'x', 'f(x)', fmts=['-', '-o'])

show_trace(res)
```
### Learning Rate
:label:`section_gd-learningrate`
The learning rate $\eta$ can be set by the algorithm designer. If we use a learning rate that is too small, it will cause $x$ to update very slowly, requiring more iterations to get a better solution. To show what happens in such a case, consider the progress in the same optimization problem for $\eta = 0.05$. As we can see, even after 10 steps we are still very far from the optimal solution.
```python
show_trace(gd(0.05))
```
Conversely, if we use an excessively high learning rate, $\left|\eta f'(x)\right|$ might be too large for the first-order Taylor expansion to be accurate. That is, the term $\mathcal{O}(\eta^2 f'^2(x))$ in :eqref:`gd-taylor` might become significant. In this case, we cannot guarantee that the iteration of $x$ will be able to lower the value of $f(x)$. For example, when we set the learning rate to $\eta=1.1$, $x$ overshoots the optimal solution $x=0$ and gradually diverges.
```python
show_trace(gd(1.1))
```
### Local Minima

To illustrate what happens for nonconvex functions, consider the case of $f(x) = x \cdot \cos(cx)$ for some constant $c$. This function has infinitely many local minima. Depending on our choice of the learning rate and on how well conditioned the problem is, we may end up with one of many solutions. The example below illustrates how an (unrealistically) high learning rate will lead to a poor local minimum.
```python
c = 0.15 * np.pi

def f(x):
    return x * np.cos(c * x)

def gradf(x):
    return np.cos(c * x) - c * x * np.sin(c * x)

show_trace(gd(2))
```
## Multivariate Gradient Descent

Now that we have a better intuition of the univariate case, let's consider the situation where $\mathbf{x} \in \mathbb{R}^d$. That is, the objective function $f: \mathbb{R}^d \to \mathbb{R}$ maps vectors into scalars. Correspondingly its gradient is multivariate, too. It is a vector consisting of $d$ partial derivatives:

$$\nabla f(\mathbf{x}) = \bigg[\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \ldots, \frac{\partial f(\mathbf{x})}{\partial x_d}\bigg]^\top.$$

Each partial derivative element $\partial f(\mathbf{x})/\partial x_i$ in the gradient indicates the rate of change of $f$ at $\mathbf{x}$ with respect to the input $x_i$. As in the univariate case we can use the corresponding Taylor approximation for multivariate functions to get some idea of what we should do. In particular, we have that

$$f(\mathbf{x} + \boldsymbol{\epsilon}) = f(\mathbf{x}) + \boldsymbol{\epsilon}^\top \nabla f(\mathbf{x}) + \mathcal{O}(\|\boldsymbol{\epsilon}\|^2).$$
:eqlabel:`gd-multi-taylor`

In other words, up to second-order terms in $\boldsymbol{\epsilon}$ the direction of steepest descent is given by the negative gradient $-\nabla f(\mathbf{x})$. Choosing a suitable learning rate $\eta > 0$ yields the prototypical gradient descent algorithm:

$$\mathbf{x} \leftarrow \mathbf{x} - \eta \nabla f(\mathbf{x}).$$
To see how the algorithm behaves in practice, let's construct an objective function $f(\mathbf{x})=x_1^2+2x_2^2$ with a two-dimensional vector $\mathbf{x} = [x_1, x_2]^\top$ as input and a scalar as output. The gradient is given by $\nabla f(\mathbf{x}) = [2x_1, 4x_2]^\top$. We will observe the trajectory of $\mathbf{x}$ obtained by gradient descent from the initial position $[-5, -2]$. We need two more helper functions: the first applies an update function 20 times to the initial value; the second visualizes the trajectory of $\mathbf{x}$.
```python
# Saved in the d2l package for later use
def train_2d(trainer, steps=20):
    """Optimize a 2-dim objective function with a customized trainer."""
    # s1 and s2 are internal state variables and will
    # be used later in the chapter
    x1, x2, s1, s2 = -5, -2, 0, 0
    results = [(x1, x2)]
    for i in range(steps):
        x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
        results.append((x1, x2))
    print('epoch %d, x1 %f, x2 %f' % (i + 1, x1, x2))
    return results
```
```python
# Saved in the d2l package for later use
def show_trace_2d(f, results):
    """Show the trace of 2D variables during optimization."""
    d2l.set_figsize((3.5, 2.5))
    d2l.plt.plot(*zip(*results), '-o', color='#ff7f0e')
    x1, x2 = np.meshgrid(np.arange(-5.5, 1.0, 0.1), np.arange(-3.0, 1.0, 0.1))
    d2l.plt.contour(x1, x2, f(x1, x2), colors='#1f77b4')
    d2l.plt.xlabel('x1')
    d2l.plt.ylabel('x2')
```
Next, we observe the trajectory of the optimization variable $\mathbf{x}$ for learning rate $\eta = 0.1$. We can see that after 20 steps the value of $\mathbf{x}$ approaches its minimum at $[0, 0]$. Progress is fairly well behaved, albeit rather slow.
```python
def f(x1, x2):
    return x1 ** 2 + 2 * x2 ** 2  # Objective

def gradf(x1, x2):
    return (2 * x1, 4 * x2)  # Gradient

def gd(x1, x2, s1, s2):
    (g1, g2) = gradf(x1, x2)  # Compute gradient
    return (x1 - eta * g1, x2 - eta * g2, 0, 0)  # Update variables

eta = 0.1
show_trace_2d(f, train_2d(gd))
```
## Adaptive Methods

As we could see in :numref:`section_gd-learningrate`, getting the learning rate $\eta$ "just right" is tricky. If we pick it too small, we make little progress. If we pick it too large, the solution oscillates and in the worst case it might even diverge. What if we could determine $\eta$ automatically, or get rid of having to select a step size at all? Second-order methods that look not only at the value and gradient of the objective but also at its curvature can help in this case. While these methods cannot be applied to deep learning directly due to the computational cost, they provide useful intuition for designing advanced optimization algorithms that mimic many of their desirable properties.
### Newton's Method

Reviewing the Taylor expansion of $f$, there is no need to stop after the first term. In fact, we can write it as

$$f(\mathbf{x} + \boldsymbol{\epsilon}) = f(\mathbf{x}) + \boldsymbol{\epsilon}^\top \nabla f(\mathbf{x}) + \frac{1}{2} \boldsymbol{\epsilon}^\top \nabla \nabla^\top f(\mathbf{x}) \boldsymbol{\epsilon} + \mathcal{O}(\|\boldsymbol{\epsilon}\|^3).$$
:eqlabel:`gd-hot-taylor`
To avoid cumbersome notation we define $H_f := \nabla \nabla^\top f(\mathbf{x})$ to be the Hessian of $f$. This is a $d \times d$ matrix. For small $d$ and simple problems $H_f$ is easy to compute. For deep networks, on the other hand, $H_f$ may be prohibitively large, due to the cost of storing $\mathcal{O}(d^2)$ entries.
After all, the minimum of $f$ satisfies $\nabla f(\mathbf{x}) = 0$. Taking derivatives of :eqref:`gd-hot-taylor` with regard to $\boldsymbol{\epsilon}$ and ignoring higher-order terms we arrive at

$$\nabla f(\mathbf{x}) + H_f \boldsymbol{\epsilon} = 0 \text{ and hence } \boldsymbol{\epsilon} = -H_f^{-1} \nabla f(\mathbf{x}).$$

That is, we need to invert the Hessian $H_f$ as part of the optimization problem.
For $f(x) = \frac{1}{2} x^2$ we have $\nabla f(x) = x$ and $H_f = 1$. Hence for any $x$ we obtain $\epsilon = -x$, i.e., a single step is sufficient to converge perfectly without the need for any adjustment! Alas, we got a bit lucky here, since the Taylor expansion was exact. Let's see what happens on another convex problem, $f(x) = \cosh(cx)$ for some constant $c$.
```python
c = 0.5

def f(x):
    return np.cosh(c * x)  # Objective

def gradf(x):
    return c * np.sinh(c * x)  # Derivative

def hessf(x):
    return c**2 * np.cosh(c * x)  # Hessian

# Hide learning rate for now
def newton(eta=1):
    x = 10
    results = [x]
    for i in range(10):
        x -= eta * gradf(x) / hessf(x)
        results.append(x)
    print('epoch 10, x:', x)
    return results

show_trace(newton())
```
Now let's see what happens when we have a nonconvex function, such as $f(x) = x \cos(cx)$. Note that in Newton's method we end up dividing by the Hessian. This means that if the second derivative is negative we may walk in the direction of *increasing* $f$. That is a fatal flaw of the algorithm. Let's see what happens in practice.
```python
c = 0.15 * np.pi

def f(x):
    return x * np.cos(c * x)  # Objective

def gradf(x):
    return np.cos(c * x) - c * x * np.sin(c * x)  # Derivative

def hessf(x):
    return - 2 * c * np.sin(c * x) - x * c**2 * np.cos(c * x)  # Hessian

show_trace(newton())
```
This went spectacularly wrong. How can we fix it? One way would be to "fix" the Hessian by taking its absolute value instead. Another strategy is to bring back the learning rate. This seems to defeat the purpose, but not quite. Having second-order information allows us to be cautious whenever the curvature is large and to take longer steps whenever the objective is flat. Let's see how this works with a slightly smaller learning rate, say $\eta = 0.5$.
```python
show_trace(newton(0.5))
```
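The first fix mentioned above, dividing by the absolute value of the Hessian, is just as easy to try. Below is a minimal sketch of such a variant (the name `newton_abs` is ours, chosen for illustration); it reuses `gradf` and `hessf` from the nonconvex example, so the step never points uphill due to negative curvature.

```python
def newton_abs(eta=1):
    x = 10
    results = [x]
    for i in range(10):
        # Divide by the absolute curvature so that the update always
        # follows the descent direction, even where f''(x) < 0
        x -= eta * gradf(x) / abs(hessf(x))
        results.append(x)
    print('epoch 10, x:', x)
    return results

show_trace(newton_abs())
```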
### Convergence Analysis

We only analyze the convergence rate for a convex and three times differentiable $f$, where at its minimum $x^*$ the second derivative is nonzero, i.e., where $f''(x^*) > 0$. The multivariate proof is a straightforward extension of the argument below and is omitted, since it does not help us much in terms of intuition.

Denote by $x_k$ the value of $x$ at the $k$-th iteration and let $e_k := x_k - x^*$ be the distance from optimality. By Taylor series expansion the condition $f'(x^*) = 0$ can be written as

$$0 = f'(x_k - e_k) = f'(x_k) - e_k f''(x_k) + \frac{1}{2} e_k^2 f'''(\xi_k).$$

This holds for some $\xi_k \in [x_k - e_k, x_k]$. Recall that Newton's method uses the update $x_{k+1} = x_k - f'(x_k)/f''(x_k)$. Dividing the above expansion by $f''(x_k)$ yields

$$e_k - \frac{f'(x_k)}{f''(x_k)} = \frac{1}{2} e_k^2 \frac{f'''(\xi_k)}{f''(x_k)}.$$

Plugging in the update equation leads to the following bound:

$$e_{k+1} = \frac{1}{2} e_k^2 \frac{f'''(\xi_k)}{f''(x_k)}.$$

Consequently, whenever we are in a region where $\frac{1}{2} f'''(\xi_k)/f''(x_k) \leq c$ is bounded, we have a quadratically decreasing error $e_{k+1} \leq c e_k^2$.

As an aside, optimization researchers call this quadratic convergence, whereas a condition such as $e_{k+1} \leq \alpha e_k$ for some $\alpha < 1$ is called linear convergence. Note that this analysis comes with a number of caveats. First, we do not really have much of a guarantee of when we will reach the region of rapid convergence; we only know that once we reach it, convergence will be very quick. Second, it requires that $f$ is well behaved up to its higher-order derivatives, i.e., that $f$ does not have any "surprising" properties in terms of how it might change its values.
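To make the quadratic rate tangible, here is a small numerical sanity check (ours, not part of the implementation above). It uses $f(x) = e^x - x$, a convex function with minimum at $x^* = 0$ and nonvanishing third derivative there, chosen purely for illustration; the number of correct digits roughly doubles with each Newton step.

```python
import math

# f(x) = exp(x) - x attains its minimum at x* = 0,
# with f'(x) = exp(x) - 1 and f''(x) = exp(x)
x = 1.0
for k in range(5):
    x -= (math.exp(x) - 1) / math.exp(x)  # Newton step
    print('iteration %d, error %e' % (k + 1, abs(x)))
```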
### Preconditioning

Quite unsurprisingly, computing and storing the full Hessian is very expensive. It is thus desirable to find alternatives. One way to improve matters is by avoiding the computation of the Hessian in its entirety and only computing its diagonal entries. While this is not quite as good as the full Newton method, it is still much better than not using it. Moreover, estimates for the main diagonal elements are what drives some of the innovation in stochastic gradient descent optimization algorithms. This leads to update algorithms of the form

$$\mathbf{x} \leftarrow \mathbf{x} - \eta \, \mathrm{diag}(H_f)^{-1} \nabla f(\mathbf{x}).$$
To see why this might be a good idea, consider a situation where one variable denotes height in millimeters and another one denotes height in kilometers. Assuming that for both the natural scale is in meters, we have a terrible mismatch in parameterizations. Using preconditioning removes this. Effectively, preconditioning with gradient descent amounts to selecting a different learning rate for each coordinate.
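As a sketch of how this plays out, consider minimizing the badly scaled quadratic $f(\mathbf{x}) = 0.1 x_1^2 + 10 x_2^2$ (an illustrative choice of ours, not one of the objectives above). Its diagonal Hessian entries are the constants $0.2$ and $20$, so preconditioning amounts to dividing each gradient coordinate by the matching entry; the `train_2d` and `show_trace_2d` helpers from above are reused.

```python
def f_scaled(x1, x2):
    return 0.1 * x1 ** 2 + 10 * x2 ** 2  # Badly scaled objective

def gradf_scaled(x1, x2):
    return (0.2 * x1, 20 * x2)  # Gradient

h1, h2 = 0.2, 20  # Diagonal Hessian entries (constant for a quadratic)

def precond_gd(x1, x2, s1, s2):
    g1, g2 = gradf_scaled(x1, x2)
    # Scale each coordinate's step by the inverse diagonal Hessian entry,
    # which equalizes progress despite the 100-fold curvature mismatch
    return (x1 - eta * g1 / h1, x2 - eta * g2 / h2, 0, 0)

eta = 0.5
show_trace_2d(f_scaled, train_2d(precond_gd))
```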
### Gradient Descent with Line Search

One of the key problems in gradient descent was that we might overshoot the goal or make insufficient progress. A simple fix for the problem is to use line search in conjunction with gradient descent. That is, we use the direction given by $\nabla f(\mathbf{x})$ and then perform a binary search as to which step length $\eta$ minimizes $f(\mathbf{x} - \eta \nabla f(\mathbf{x}))$.
This algorithm converges rapidly (for an analysis and proof see e.g., :cite:`Boyd.Vandenberghe.2004`). However, for the purpose of deep learning this is not quite so feasible, since each step of the line search would require us to evaluate the objective function on the entire dataset. This is way too costly to accomplish.
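For small problems where function evaluations are cheap, though, the idea is easy to sketch. The helper below is our own illustration for the one-dimensional case, assuming $f$ is convex along the search direction and the optimal step lies in $[0, \eta_{\max}]$; it binary searches on the sign of the directional derivative to pick each step size.

```python
def line_search_gd(gradf, x, steps=10, eta_max=1.0):
    for k in range(steps):
        g = gradf(x)
        # phi(eta) = f(x - eta * g) has derivative
        # phi'(eta) = -g * gradf(x - eta * g); for convex f we can
        # binary search on the sign of phi' for the best step size
        lo, hi = 0.0, eta_max
        for _ in range(30):
            mid = (lo + hi) / 2
            if g * gradf(x - mid * g) > 0:  # phi' < 0: keep descending
                lo = mid
            else:
                hi = mid
        x -= 0.5 * (lo + hi) * g
    return x

# Example: minimize f(x) = x**2 starting from x = 10
print(line_search_gd(lambda x: 2 * x, 10.0))
```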
## Summary

- Learning rates matter. Too large and we diverge, too small and we do not make progress.
- Gradient descent can get stuck in local minima.
- In high dimensions adjusting the learning rate is complicated.
- Preconditioning can help with scale adjustment.
- Newton's method is a lot faster once it has started working properly in convex problems.
- Beware of using Newton's method without any adjustments for nonconvex problems.
## Exercises

- Experiment with different learning rates and objective functions for gradient descent.
- Implement line search to minimize a convex function in the interval $[a, b]$.
    - Do you need derivatives for binary search, i.e., to decide whether to pick $[a, (a+b)/2]$ or $[(a+b)/2, b]$?
    - How rapid is the rate of convergence of the algorithm?
    - Implement the algorithm and apply it to minimizing $\log (\exp(x) + \exp(-2x - 3))$.
- Design an objective function defined on $\mathbb{R}^2$ where gradient descent is exceedingly slow. Hint: scale different coordinates differently.
- Implement a lightweight version of Newton's method using preconditioning:
    - Use the diagonal Hessian as the preconditioner.
    - Use the absolute values of its entries rather than the actual (possibly signed) values.
    - Apply this to the problem above.
- Apply the algorithm above to a number of objective functions (convex or not). What happens if you rotate the coordinates by $45$ degrees?