Commit 32483d7: add opt first 4 sections
1 parent 888e837

10 files changed: +2161, -83 lines

chapter_optimization/convexity.md: +268 (large diff not rendered by default)
chapter_optimization/convexity_origin.md: +377 (large diff not rendered by default)
chapter_optimization/gd.md: +323 (large diff not rendered by default)
chapter_optimization/gd_origin.md: +348 (large diff not rendered by default)

chapter_optimization/index.md: +4, -11
@@ -1,18 +1,11 @@
 # Optimization Algorithms
 :label:`chap_optimization`
 
-So far, if you have read this book in sequence, you have already learned to use many optimization algorithms to train deep learning models.
-They are the tools that allow us to keep updating model parameters and minimizing the value of the loss function.
-Indeed, many people are happy to treat optimization as a "black box": with a little knowledge of deep learning optimization "magic", they can minimize an objective function in a simple setting.
+If you have read this book in sequence up to this point, you have already used a number of optimization algorithms to train deep learning models. These tools allow us to keep updating model parameters and minimizing the value of the loss function, as evaluated on the training set. Indeed, anyone content to treat optimization as a black-box device for minimizing objective functions in a simple setting might settle for knowing that an array of incantations of such procedures exists (with names such as "SGD" and "Adam").
 
-However, optimization algorithms are important for deep learning, so learning them in greater depth enables better optimization.
-On the one hand, training a complex deep learning model can take hours, days, or even weeks, and the performance of the optimization algorithm directly affects the model's training efficiency.
-On the other hand, understanding the principles of different optimization algorithms and the role of their hyperparameters makes it possible to tune hyperparameters in a targeted manner and improve the performance of deep learning models.
+To do well, however, deeper knowledge is required. Optimization algorithms are very important for deep learning. On the one hand, training a complex deep learning model can take hours, days, or even weeks, and the performance of the optimization algorithm directly affects the model's training efficiency. On the other hand, understanding the principles of different optimization algorithms and the role of their hyperparameters enables us to tune hyperparameters in a targeted manner to improve the performance of deep learning models.
 
-In this chapter, we will explore common deep learning optimization algorithms in depth.
-In deep learning, almost all optimization problems are *nonconvex*.
-Nonetheless, designing and analyzing algorithms in the context of *convex* problems has proven very instructive.
-For that reason, this chapter includes a primer on convex optimization and a proof for a very simple stochastic gradient descent algorithm on a convex objective function.
+In this chapter, we explore common deep learning optimization algorithms in depth. Almost all optimization problems arising in deep learning are *nonconvex*. Nonetheless, the design and analysis of algorithms in the context of *convex* problems have proven very instructive. It is for that reason that this chapter includes a primer on convex optimization and a proof for a very simple stochastic gradient descent algorithm on a convex objective function.
 
 ```toc
 :maxdepth: 2
@@ -28,4 +21,4 @@ rmsprop
 adadelta
 adam
 lr-scheduler
-```
+```

chapter_optimization/index_origin.md: +34
@@ -0,0 +1,34 @@
# Optimization Algorithms
:label:`chap_optimization`

If you read the book in sequence up to this point you already used a number of optimization algorithms to train deep learning models. They were the tools that allowed us to continue updating model parameters and to minimize the value of the loss function, as evaluated on the training set. Indeed, anyone content with treating optimization as a black box device to minimize objective functions in a simple setting might well content oneself with the knowledge that there exists an array of incantations of such a procedure (with names such as "SGD" and "Adam").

To do well, however, some deeper knowledge is required. Optimization algorithms are important for deep learning. On one hand, training a complex deep learning model can take hours, days, or even weeks. The performance of the optimization algorithm directly affects the model's training efficiency. On the other hand, understanding the principles of different optimization algorithms and the role of their hyperparameters will enable us to tune the hyperparameters in a targeted manner to improve the performance of deep learning models.

In this chapter, we explore common deep learning optimization algorithms in depth. Almost all optimization problems arising in deep learning are *nonconvex*. Nonetheless, the design and analysis of algorithms in the context of *convex* problems have proven to be very instructive. It is for that reason that this chapter includes a primer on convex optimization and the proof for a very simple stochastic gradient descent algorithm on a convex objective function.

```toc
:maxdepth: 2

optimization-intro
convexity
gd
sgd
minibatch-sgd
momentum
adagrad
rmsprop
adadelta
adam
lr-scheduler
```

chapter_optimization/optimization-intro.md: +38, -72

@@ -0,0 +1,231 @@
# Optimization and Deep Learning

In this section, we will discuss the relationship between optimization and deep learning as well as the challenges of using optimization in deep learning. For a deep learning problem, we will usually define a *loss function* first. Once we have the loss function, we can use an optimization algorithm in an attempt to minimize the loss. In optimization, a loss function is often referred to as the *objective function* of the optimization problem. By tradition and convention, most optimization algorithms are concerned with *minimization*. If we ever need to maximize an objective, there is a simple solution: just flip the sign on the objective.
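
For instance, maximizing a toy objective by minimizing its negation looks like this (a minimal sketch with plain NumPy and a made-up objective, not code from the book):

```python
import numpy as np

def objective(x):                        # hypothetical objective to *maximize*
    return -(x - 2.0) ** 2

xs = np.linspace(0.0, 4.0, 401)
x_best = xs[np.argmin(-objective(xs))]   # minimize the sign-flipped objective
print(x_best)                            # 2.0, the maximizer of the original
```
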
## Goal of Optimization

Although optimization provides a way to minimize the loss function for deep learning, in essence, the goals of optimization and deep learning are fundamentally different. The former is primarily concerned with minimizing an objective whereas the latter is concerned with finding a suitable model, given a finite amount of data. In :numref:`sec_model_selection`, we discussed the difference between these two goals in detail. For instance, training error and generalization error generally differ: since the objective function of the optimization algorithm is usually a loss function based on the training dataset, the goal of optimization is to reduce the training error. However, the goal of deep learning (or more broadly, statistical inference) is to reduce the generalization error. To accomplish the latter we need to pay attention to overfitting in addition to using the optimization algorithm to reduce the training error.

```{.python .input}
%matplotlib inline
from d2l import mxnet as d2l
from mpl_toolkits import mplot3d
from mxnet import np, npx
npx.set_np()
```

```{.python .input}
#@tab pytorch
%matplotlib inline
from d2l import torch as d2l
import numpy as np
from mpl_toolkits import mplot3d
import torch
```

```{.python .input}
#@tab tensorflow
%matplotlib inline
from d2l import tensorflow as d2l
import numpy as np
from mpl_toolkits import mplot3d
import tensorflow as tf
```

To illustrate the aforementioned different goals, let us consider the empirical risk and the risk. As described in :numref:`subsec_empirical-risk-and-risk`, the empirical risk is an average loss on the training dataset while the risk is the expected loss on the entire population of data. Below we define two functions: the risk function `f` and the empirical risk function `g`. Suppose that we have only a finite amount of training data. As a result, here `g` is less smooth than `f`.

```{.python .input}
#@tab all
def f(x):
    return x * d2l.cos(np.pi * x)

def g(x):
    return f(x) + 0.2 * d2l.cos(5 * np.pi * x)
```

The graph below illustrates that the minimum of the empirical risk on a training dataset may be at a different location from the minimum of the risk (generalization error).

```{.python .input}
#@tab all
def annotate(text, xy, xytext):  #@save
    d2l.plt.gca().annotate(text, xy=xy, xytext=xytext,
                           arrowprops=dict(arrowstyle='->'))

x = d2l.arange(0.5, 1.5, 0.01)
d2l.set_figsize((4.5, 2.5))
d2l.plot(x, [f(x), g(x)], 'x', 'risk')
annotate('min of\nempirical risk', (1.0, -1.2), (0.5, -1.1))
annotate('min of risk', (1.1, -1.05), (0.95, -0.5))
```
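
To make the mismatch concrete, a quick grid search over the same interval reproduces the two annotated minima (a plain-NumPy sketch mirroring `f` and `g` above, outside the book's tabbed setup):

```python
import numpy as np

def f(x):                                      # risk
    return x * np.cos(np.pi * x)

def g(x):                                      # empirical risk
    return f(x) + 0.2 * np.cos(5 * np.pi * x)

x = np.arange(0.5, 1.5, 0.01)
print('argmin of risk:          ', x[np.argmin(f(x))])  # near 1.1
print('argmin of empirical risk:', x[np.argmin(g(x))])  # near 1.0
```
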
## Optimization Challenges in Deep Learning

In this chapter, we are going to focus specifically on the performance of optimization algorithms in minimizing the objective function, rather than a model's generalization error. In :numref:`sec_linear_regression` we distinguished between analytical solutions and numerical solutions in optimization problems. In deep learning, most objective functions are complicated and do not have analytical solutions. Instead, we must use numerical optimization algorithms. The optimization algorithms in this chapter all fall into this category.

There are many challenges in deep learning optimization. Some of the most vexing ones are local minima, saddle points, and vanishing gradients. Let us have a look at them.

### Local Minima

For any objective function $f(x)$, if the value of $f(x)$ at $x$ is smaller than the values of $f(x)$ at any other points in the vicinity of $x$, then $f(x)$ could be a local minimum. If the value of $f(x)$ at $x$ is the minimum of the objective function over the entire domain, then $f(x)$ is the global minimum.

For example, given the function

$$f(x) = x \cdot \cos(\pi x) \text{ for } -1.0 \leq x \leq 2.0,$$

we can approximate the local minimum and global minimum of this function.

```{.python .input}
#@tab all
x = d2l.arange(-1.0, 2.0, 0.01)
d2l.plot(x, [f(x), ], 'x', 'f(x)')
annotate('local minimum', (-0.3, -0.25), (-0.77, -1.0))
annotate('global minimum', (1.1, -0.95), (0.6, 0.8))
```

The objective function of deep learning models usually has many local optima. When the numerical solution of an optimization problem is near the local optimum, the numerical solution obtained by the final iteration may only minimize the objective function *locally*, rather than *globally*, because the gradient of the objective function approaches or becomes zero at such solutions. Only some degree of noise might knock the parameter out of the local minimum. In fact, this is one of the beneficial properties of minibatch stochastic gradient descent, where the natural variation of gradients over minibatches is able to dislodge the parameters from local minima.
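
The trapping effect can be sketched directly with noise-free gradient descent (plain NumPy, using the hand-derived derivative $f'(x) = \cos(\pi x) - \pi x \sin(\pi x)$; the two starting points are arbitrary choices):

```python
import numpy as np

def df(x):  # derivative of f(x) = x * cos(pi * x)
    return np.cos(np.pi * x) - np.pi * x * np.sin(np.pi * x)

for x0 in (-0.5, 1.5):            # two arbitrary initializations
    x = x0
    for _ in range(1000):
        x -= 0.01 * df(x)         # plain gradient descent, no noise
    print(f'start {x0:+.1f} -> end {x:+.2f}')
# the first run stalls in the local minimum near x = -0.3,
# while the second reaches the global minimum near x = 1.1
```
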
### Saddle Points

Besides local minima, saddle points are another reason for gradients to vanish. A *saddle point* is any location where all gradients of a function vanish but which is neither a global nor a local minimum. Consider the function $f(x) = x^3$. Its first and second derivative vanish for $x=0$. Optimization might stall at this point, even though it is not a minimum.

```{.python .input}
#@tab all
x = d2l.arange(-2.0, 2.0, 0.01)
d2l.plot(x, [x**3], 'x', 'f(x)')
annotate('saddle point', (0, -0.2), (-0.52, -5.0))
```
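
A few steps of gradient descent show the stall (a toy sketch; the learning rate and starting point are arbitrary):

```python
x, lr = 0.5, 0.1
for t in range(50):
    x -= lr * 3 * x ** 2   # f'(x) = 3x^2 for f(x) = x^3
print(round(x, 3))         # crawls toward 0 and all but stops there,
                           # even though x = 0 is not a minimum
```
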
Saddle points in higher dimensions are even more insidious, as the example below shows. Consider the function $f(x, y) = x^2 - y^2$. It has its saddle point at $(0, 0)$. This is a maximum with respect to $y$ and a minimum with respect to $x$. Moreover, it *looks* like a saddle, which is where this mathematical property got its name.

```{.python .input}
#@tab all
x, y = d2l.meshgrid(
    d2l.linspace(-1.0, 1.0, 101), d2l.linspace(-1.0, 1.0, 101))
z = x**2 - y**2

ax = d2l.plt.figure().add_subplot(111, projection='3d')
ax.plot_wireframe(x, y, z, **{'rstride': 10, 'cstride': 10})
ax.plot([0], [0], [0], 'rx')
ticks = [-1, 0, 1]
d2l.plt.xticks(ticks)
d2l.plt.yticks(ticks)
ax.set_zticks(ticks)
d2l.plt.xlabel('x')
d2l.plt.ylabel('y');
```

We assume that the input of a function is a $k$-dimensional vector and its output is a scalar, so its Hessian matrix will have $k$ eigenvalues (refer to the [online appendix on eigendecompositions](https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/eigendecomposition.html)). The solution of the function could be a local minimum, a local maximum, or a saddle point at a position where the function gradient is zero:

* When the eigenvalues of the function's Hessian matrix at the zero-gradient position are all positive, we have a local minimum for the function.
* When the eigenvalues of the function's Hessian matrix at the zero-gradient position are all negative, we have a local maximum for the function.
* When the eigenvalues of the function's Hessian matrix at the zero-gradient position are both negative and positive, we have a saddle point for the function, as the sketch after this list verifies for the example above.

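For the saddle example above this classification is easy to check, since the Hessian of $f(x, y) = x^2 - y^2$ is constant (a minimal NumPy sketch):

```python
import numpy as np

# Hessian of f(x, y) = x**2 - y**2; here it is the same at every point
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])
print(np.linalg.eigvalsh(H))  # [-2.  2.]: mixed signs, so (0, 0) is a saddle point
```
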
For high-dimensional problems the likelihood that at least *some* of the eigenvalues are negative is quite high. This makes saddle points more likely than local minima. We will discuss some exceptions to this situation in the next section when introducing convexity. In short, convex functions are those where the eigenvalues of the Hessian are never negative. Sadly, though, most deep learning problems do not fall into this category. Nonetheless convexity is a great tool for studying optimization algorithms.

### Vanishing Gradients

Probably the most insidious problem to encounter is the vanishing gradient. Recall our commonly-used activation functions and their derivatives in :numref:`subsec_activation-functions`. For instance, assume that we want to minimize the function $f(x) = \tanh(x)$ and we happen to get started at $x = 4$. As we can see, the gradient of $f$ is close to nil. More specifically, $f'(x) = 1 - \tanh^2(x)$ and thus $f'(4) = 0.0013$. Consequently, optimization will get stuck for a long time before we make progress. This turns out to be one of the reasons that training deep learning models was quite tricky prior to the introduction of the ReLU activation function.

```{.python .input}
#@tab all
x = d2l.arange(-2.0, 5.0, 0.01)
d2l.plot(x, [d2l.tanh(x)], 'x', 'f(x)')
annotate('vanishing gradient', (4, 1), (2, 0.0))
```
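
That tiny derivative is easy to verify numerically (plain NumPy):

```python
import numpy as np

grad = 1 - np.tanh(4.0) ** 2  # f'(x) = 1 - tanh(x)^2
print(grad)                   # roughly 0.0013: each update lr * grad barely moves x
```
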
As we saw, optimization for deep learning is full of challenges. Fortunately there exists a robust range of algorithms that perform well and that are easy to use even for beginners. Furthermore, it is not really necessary to find *the* best solution. Local optima or even approximate solutions thereof are still very useful.

## Summary

* Minimizing the training error does *not* guarantee that we find the best set of parameters to minimize the generalization error.
* The optimization problems may have many local minima.
* These problems may have even more saddle points, as generally the problems are not convex.
* Vanishing gradients can cause optimization to stall. Often a reparameterization of the problem helps. Good initialization of the parameters can be beneficial, too.

## Exercises

1. Consider a simple MLP with a single hidden layer of, say, $d$ dimensions in the hidden layer and a single output. Show that for any local minimum there are at least $d!$ equivalent solutions that behave identically.
1. Assume that we have a symmetric random matrix $\mathbf{M}$ where the entries $M_{ij} = M_{ji}$ are each drawn from some probability distribution $p_{ij}$. Furthermore assume that $p_{ij}(x) = p_{ij}(-x)$, i.e., that the distribution is symmetric (see e.g., :cite:`Wigner.1958` for details).
    1. Prove that the distribution over eigenvalues is also symmetric. That is, for any eigenvector $\mathbf{v}$ the associated eigenvalue $\lambda$ satisfies $P(\lambda > 0) = P(\lambda < 0)$.
    1. Why does the above *not* imply $P(\lambda > 0) = 0.5$?
1. What other challenges involved in deep learning optimization can you think of?
1. Assume that you want to balance a (real) ball on a (real) saddle.
    1. Why is this hard?
    1. Can you exploit this effect also for optimization algorithms?

:begin_tab:`mxnet`
[Discussions](https://discuss.d2l.ai/t/349)
:end_tab:

:begin_tab:`pytorch`
[Discussions](https://discuss.d2l.ai/t/487)
:end_tab:

:begin_tab:`tensorflow`
[Discussions](https://discuss.d2l.ai/t/489)
:end_tab:
