:label:`sec_calculus`
Finding the area of a polygon had remained mysterious
until at least 2,500 years ago, when ancient Greeks divided a polygon into triangles and summed their areas.
To find the area of curved shapes, such as a circle,
ancient Greeks inscribed polygons in such shapes.
As shown in :numref:`fig_circle_area`,
an inscribed polygon with more sides of equal length better approximates
the circle. This process is also known as the method of exhaustion.
In fact, the method of exhaustion is where *integral calculus* (to be described in :numref:`sec_integral_calculus`) originates from.
More than 2,000 years later,
the other branch of calculus, differential calculus,
was invented.
Among the most critical applications of differential calculus,
optimization problems consider how to do something the best.
As discussed in :numref:`subsec_norms_and_objectives`,
such problems are ubiquitous in deep learning.
In deep learning, we train models, updating them successively so that they get better and better as they see more and more data. Usually, getting better means minimizing a *loss function*, a score that answers the question "how bad is our model?" This question is more subtle than it appears. Ultimately, what we really care about is producing a model that performs well on data that we have never seen before. But we can only fit the model to data that we can actually see. Thus we can decompose the task of fitting models into two key concerns: (i) *optimization*: the process of fitting our models to observed data; (ii) *generalization*: the mathematical principles and practitioners' wisdom that guide us as to how to produce models whose validity extends beyond the exact set of data examples used to train them.
To help you understand optimization problems and methods in later chapters, here we give a very brief primer on differential calculus that is commonly used in deep learning.
We begin by addressing the calculation of derivatives, a crucial step in nearly all deep learning optimization algorithms. In deep learning, we typically choose loss functions that are differentiable with respect to our model's parameters. Put simply, this means that for each parameter, we can determine how rapidly the loss would increase or decrease, were we to increase or decrease that parameter by an infinitesimally small amount.
Suppose that we have a function $f: \mathbb{R} \rightarrow \mathbb{R}$,
whose input and output are both scalars.
The *derivative* of $f$ is defined as

$$f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h},$$
:eqlabel:`eq_derivative`

if this limit exists.
If $f'(a)$ exists,
$f$ is said to be *differentiable* at $a$.
We can interpret the derivative $f'(x)$ in :eqref:`eq_derivative`
as the *instantaneous* rate of change of $f(x)$ with respect to $x$.
To illustrate derivatives,
let us experiment with an example.
Define $u = f(x) = 3x^2-4x$.
```{.python .input}
%matplotlib inline
from d2l import mxnet as d2l
from IPython import display
from mxnet import np, npx
npx.set_np()

def f(x):
    return 3 * x ** 2 - 4 * x
```
```{.python .input}
#@tab pytorch
%matplotlib inline
from d2l import torch as d2l
from IPython import display
import numpy as np

def f(x):
    return 3 * x ** 2 - 4 * x
```
```{.python .input}
#@tab tensorflow
%matplotlib inline
from d2l import tensorflow as d2l
from IPython import display
import numpy as np

def f(x):
    return 3 * x ** 2 - 4 * x
```
By setting $x=1$ and letting $h$ approach $0$,
the numerical result of $\frac{f(x+h) - f(x)}{h}$ in :eqref:`eq_derivative`
approaches $2$.
Though this experiment is not a mathematical proof,
we will see later that the derivative $u'$ is indeed $2$ when $x=1$.
```{.python .input}
#@tab all
def numerical_lim(f, x, h):
    return (f(x + h) - f(x)) / h

h = 0.1
for i in range(5):
    print(f'h={h:.5f}, numerical limit={numerical_lim(f, 1, h):.5f}')
    h *= 0.1
```
Let us familiarize ourselves with a few equivalent notations for derivatives.
Given $y = f(x)$, where $x$ and $y$ are the independent variable and the dependent variable of the function $f$, respectively, the following expressions are equivalent:

$$f'(x) = y' = \frac{dy}{dx} = \frac{df}{dx} = \frac{d}{dx} f(x) = Df(x) = D_x f(x),$$

where symbols $\frac{d}{dx}$ and $D$ are *differentiation operators* that indicate the operation of *differentiation*. We can use the following rules to differentiate common functions:
* $DC = 0$ ($C$ is a constant),
* $Dx^n = nx^{n-1}$ (the *power rule*, $n$ is any real number),
* $De^x = e^x$,
* $D\ln(x) = 1/x.$
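As a quick sanity check, each of these rules can be compared against a finite-difference estimate. The following standalone sketch (the `central_diff` helper is ours, not part of the book's `d2l` package) approximates each derivative with a symmetric difference quotient:

```python
import math

def central_diff(f, x, h=1e-6):
    # Symmetric difference quotient: approximates f'(x) with O(h^2) error
    return (f(x + h) - f(x - h)) / (2 * h)

# DC = 0 for the constant function C = 7
assert abs(central_diff(lambda x: 7.0, 2.0)) < 1e-8
# Dx^n = n x^(n-1) with n = 3 at x = 2: expect 3 * 2^2 = 12
assert abs(central_diff(lambda x: x ** 3, 2.0) - 12.0) < 1e-4
# De^x = e^x at x = 1
assert abs(central_diff(math.exp, 1.0) - math.e) < 1e-4
# D ln(x) = 1/x at x = 2
assert abs(central_diff(math.log, 2.0) - 0.5) < 1e-4
print('all four rules agree with their finite-difference estimates')
```

The symmetric quotient is used rather than the one-sided quotient from :eqref:`eq_derivative` because its error shrinks quadratically in $h$, so a single small $h$ suffices.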
To differentiate a function that is formed from a few simpler functions such as the above common functions,
the following rules can be handy.
Suppose that functions $f$ and $g$ are both differentiable and $C$ is a constant,
we have the *constant multiple rule*

$$\frac{d}{dx} [Cf(x)] = C \frac{d}{dx} f(x),$$

the *sum rule*

$$\frac{d}{dx} [f(x) + g(x)] = \frac{d}{dx} f(x) + \frac{d}{dx} g(x),$$

the *product rule*

$$\frac{d}{dx} [f(x)g(x)] = f(x) \frac{d}{dx} [g(x)] + g(x) \frac{d}{dx} [f(x)],$$

and the *quotient rule*

$$\frac{d}{dx} \left[\frac{f(x)}{g(x)}\right] = \frac{g(x) \frac{d}{dx} [f(x)] - f(x) \frac{d}{dx} [g(x)]}{[g(x)]^2}.$$
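The product and quotient rules can likewise be checked numerically. In this illustrative sketch (the `deriv` helper is our own), both sides of each rule are evaluated for $f = \sin$ and $g = e^x$ at an arbitrary point:

```python
import math

def deriv(f, x, h=1e-6):
    # Central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

f = math.sin
g = math.exp
x = 0.7

# Product rule: d/dx [f(x) g(x)] = f(x) g'(x) + g(x) f'(x)
lhs = deriv(lambda t: f(t) * g(t), x)
rhs = f(x) * deriv(g, x) + g(x) * deriv(f, x)
assert abs(lhs - rhs) < 1e-5

# Quotient rule: d/dx [f(x) / g(x)] = (g(x) f'(x) - f(x) g'(x)) / g(x)^2
lhs = deriv(lambda t: f(t) / g(t), x)
rhs = (g(x) * deriv(f, x) - f(x) * deriv(g, x)) / g(x) ** 2
assert abs(lhs - rhs) < 1e-5
print('product and quotient rules agree')
```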
Now we can apply a few of the above rules to find $u' = f'(x) = 3 \frac{d}{dx} x^2-4\frac{d}{dx}x = 6x-4$. Thus, by setting $x = 1$, we have $u' = 2$: this is supported by our earlier experiment, where the numerical result approaches $2$. This derivative is also the slope of the tangent line to the curve $u = f(x)$ when $x = 1$.
To visualize such an interpretation of derivatives,
we will use `matplotlib`, a popular plotting library in Python.
To configure properties of the figures produced by `matplotlib`,
we need to define a few functions.
In the following,
the `use_svg_display` function specifies the `matplotlib` package to output the SVG figures for sharper images.
Note that the comment `#@save` is a special mark where the following function,
class, or statements are saved in the `d2l` package
so later they can be directly invoked (e.g., `d2l.use_svg_display()`) without being redefined.
```{.python .input}
#@tab all
def use_svg_display():  #@save
    """Use the svg format to display a plot in Jupyter."""
    display.set_matplotlib_formats('svg')
```
We define the `set_figsize` function to specify the figure sizes. Note that here we directly use `d2l.plt` since the import statement `from matplotlib import pyplot as plt` has been marked for being saved in the `d2l` package in the preface.
```{.python .input}
#@tab all
def set_figsize(figsize=(3.5, 2.5)):  #@save
    """Set the figure size for matplotlib."""
    use_svg_display()
    d2l.plt.rcParams['figure.figsize'] = figsize
```
The following `set_axes` function sets properties of axes of figures produced by `matplotlib`.
```{.python .input}
#@tab all
#@save
def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):
    """Set the axes for matplotlib."""
    axes.set_xlabel(xlabel)
    axes.set_ylabel(ylabel)
    axes.set_xscale(xscale)
    axes.set_yscale(yscale)
    axes.set_xlim(xlim)
    axes.set_ylim(ylim)
    if legend:
        axes.legend(legend)
    axes.grid()
```
With these three functions for figure configurations,
we define the `plot` function
to plot multiple curves succinctly
since we will need to visualize many curves throughout the book.
```{.python .input}
#@tab all
#@save
def plot(X, Y=None, xlabel=None, ylabel=None, legend=None, xlim=None,
         ylim=None, xscale='linear', yscale='linear',
         fmts=('-', 'm--', 'g-.', 'r:'), figsize=(3.5, 2.5), axes=None):
    """Plot data points."""
    if legend is None:
        legend = []

    set_figsize(figsize)
    axes = axes if axes else d2l.plt.gca()

    # Return True if `X` (tensor or list) has 1 axis
    def has_one_axis(X):
        return (hasattr(X, "ndim") and X.ndim == 1 or isinstance(X, list)
                and not hasattr(X[0], "__len__"))

    if has_one_axis(X):
        X = [X]
    if Y is None:
        X, Y = [[]] * len(X), X
    elif has_one_axis(Y):
        Y = [Y]
    if len(X) != len(Y):
        X = X * len(Y)
    axes.cla()
    for x, y, fmt in zip(X, Y, fmts):
        if len(x):
            axes.plot(x, y, fmt)
        else:
            axes.plot(y, fmt)
    set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
```
Now we can plot the function $u = f(x)$ and its tangent line $y = 2x - 3$ at $x=1$, where the coefficient $2$ is the slope of the tangent line.
```{.python .input}
#@tab all
x = np.arange(0, 3, 0.1)
plot(x, [f(x), 2 * x - 3], 'x', 'f(x)', legend=['f(x)', 'Tangent line (x=1)'])
```
So far we have dealt with the differentiation of functions of just one variable. In deep learning, functions often depend on many variables. Thus, we need to extend the ideas of differentiation to these multivariate functions.
Let $y = f(x_1, x_2, \ldots, x_n)$ be a function with $n$ variables. The *partial derivative* of $y$ with respect to its $i^\mathrm{th}$ parameter $x_i$ is

$$\frac{\partial y}{\partial x_i} = \lim_{h \rightarrow 0} \frac{f(x_1, \ldots, x_{i-1}, x_i+h, x_{i+1}, \ldots, x_n) - f(x_1, \ldots, x_i, \ldots, x_n)}{h}.$$

To calculate $\frac{\partial y}{\partial x_i}$, we can simply treat $x_1, \ldots, x_{i-1}, x_{i+1}, \ldots, x_n$ as constants and calculate the derivative of $y$ with respect to $x_i$. Then, for notation of partial derivatives, the following are equivalent:

$$\frac{\partial y}{\partial x_i} = \frac{\partial f}{\partial x_i} = f_{x_i} = f_i = D_i f = D_{x_i} f.$$
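This "perturb one coordinate, hold the rest fixed" definition translates directly into code. The following is an illustrative sketch (the `partial_diff` helper is our own, not part of the `d2l` package), checked against a function whose partial derivatives we can compute by hand:

```python
def partial_diff(f, x, i, h=1e-6):
    # Perturb only coordinate i of x, holding the other coordinates fixed
    x_fwd = list(x); x_fwd[i] += h
    x_bwd = list(x); x_bwd[i] -= h
    return (f(x_fwd) - f(x_bwd)) / (2 * h)

# f(x1, x2) = x1^2 + 3 x1 x2, so df/dx1 = 2 x1 + 3 x2 and df/dx2 = 3 x1
f = lambda x: x[0] ** 2 + 3 * x[0] * x[1]
x = [1.0, 2.0]
assert abs(partial_diff(f, x, 0) - (2 * 1.0 + 3 * 2.0)) < 1e-4  # expect 8
assert abs(partial_diff(f, x, 1) - 3 * 1.0) < 1e-4              # expect 3
```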
:label:`subsec_calculus-grad`
We can concatenate partial derivatives of a multivariate function with respect to all its variables to obtain the gradient vector of the function.
Suppose that the input of function $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is an $n$-dimensional vector $\mathbf{x} = [x_1, x_2, \ldots, x_n]^\top$ and the output is a scalar. The gradient of the function $f(\mathbf{x})$ with respect to $\mathbf{x}$ is a vector of $n$ partial derivatives:

$$\nabla_{\mathbf{x}} f(\mathbf{x}) = \bigg[\frac{\partial f(\mathbf{x})}{\partial x_1}, \frac{\partial f(\mathbf{x})}{\partial x_2}, \ldots, \frac{\partial f(\mathbf{x})}{\partial x_n}\bigg]^\top,$$

where $\nabla_{\mathbf{x}} f(\mathbf{x})$ is often replaced by $\nabla f(\mathbf{x})$ when there is no ambiguity.
Let $\mathbf{x}$ be an $n$-dimensional vector. The following rules are often used when differentiating multivariate functions:
* For all $\mathbf{A} \in \mathbb{R}^{m \times n}$, $\nabla_{\mathbf{x}} \mathbf{A} \mathbf{x} = \mathbf{A}^\top$,
* For all $\mathbf{A} \in \mathbb{R}^{n \times m}$, $\nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{A} = \mathbf{A}$,
* For all $\mathbf{A} \in \mathbb{R}^{n \times n}$, $\nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{A} \mathbf{x} = (\mathbf{A} + \mathbf{A}^\top)\mathbf{x}$,
* $\nabla_{\mathbf{x}} \|\mathbf{x}\|^2 = \nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{x} = 2\mathbf{x}$.
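To see one of these identities in action, the rule $\nabla_{\mathbf{x}} \mathbf{x}^\top \mathbf{A} \mathbf{x} = (\mathbf{A} + \mathbf{A}^\top)\mathbf{x}$ can be verified numerically for a small non-symmetric matrix. The helper functions below (`quad_form`, `grad_numeric`) are our own illustrative sketch in plain Python, not part of the `d2l` package:

```python
def quad_form(A, x):
    # Compute the scalar x^T A x for a list-of-lists matrix A and vector x
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

def grad_numeric(f, x, h=1e-6):
    # Estimate each component of the gradient by a central difference
    g = []
    for i in range(len(x)):
        x_fwd = list(x); x_fwd[i] += h
        x_bwd = list(x); x_bwd[i] -= h
        g.append((f(x_fwd) - f(x_bwd)) / (2 * h))
    return g

A = [[1.0, 2.0], [3.0, 4.0]]  # deliberately non-symmetric
x = [0.5, -1.0]

# Closed form from the rule: (A + A^T) x
expected = [sum((A[i][j] + A[j][i]) * x[j] for j in range(2)) for i in range(2)]
numeric = grad_numeric(lambda v: quad_form(A, v), x)
assert all(abs(e - n) < 1e-4 for e, n in zip(expected, numeric))
print('gradient of the quadratic form matches (A + A^T) x')
```

Note that the non-symmetry of $\mathbf{A}$ matters here: had the rule been the naive $2\mathbf{A}\mathbf{x}$, this check would fail.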
Similarly, for any matrix $\mathbf{X}$, we have $\nabla_{\mathbf{X}} \|\mathbf{X}\|_F^2 = 2\mathbf{X}$. As we will see later, gradients are useful for designing optimization algorithms in deep learning.
However, such gradients can be hard to find. This is because multivariate functions in deep learning are often composite, so we may not apply any of the aforementioned rules to differentiate these functions. Fortunately, the chain rule enables us to differentiate composite functions.
Let us first consider functions of a single variable.
Suppose that functions $y=f(u)$ and $u=g(x)$ are both differentiable, then the chain rule states that

$$\frac{dy}{dx} = \frac{dy}{du} \frac{du}{dx}.$$
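For a concrete check, take $y = \sin(u)$ and $u = x^2$, so the chain rule predicts $\frac{dy}{dx} = \cos(x^2) \cdot 2x$. The sketch below (our own `deriv` helper, not from the book's code) compares this against a direct numerical derivative of the composite:

```python
import math

def deriv(f, x, h=1e-6):
    # Central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.8
composite = deriv(lambda t: math.sin(t ** 2), x)  # differentiate sin(x^2) directly
chain = math.cos(x ** 2) * 2 * x                  # dy/du * du/dx
assert abs(composite - chain) < 1e-5
print('chain rule agrees with direct differentiation')
```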
Now let us turn our attention to a more general scenario
where functions have an arbitrary number of variables.
Suppose that the differentiable function $y$ has variables $u_1, u_2, \ldots, u_m$, where each differentiable function $u_i$ has variables $x_1, x_2, \ldots, x_n$. Note that $y$ is a function of $x_1, x_2, \ldots, x_n$. Then the chain rule gives

$$\frac{dy}{dx_i} = \frac{dy}{du_1} \frac{du_1}{dx_i} + \frac{dy}{du_2} \frac{du_2}{dx_i} + \cdots + \frac{dy}{du_m} \frac{du_m}{dx_i}$$

for any $i = 1, 2, \ldots, n$.
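As a small worked instance of this multivariable form, take $y = u_1^2 + u_2$ with $u_1 = x_1 + x_2$ and $u_2 = x_1 x_2$ (an example of our own choosing). Summing over the two intermediate variables gives $\frac{dy}{dx_1} = 2u_1 \cdot 1 + 1 \cdot x_2$, which we can compare against direct numerical differentiation of the composite:

```python
def deriv(f, x, h=1e-6):
    # Central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# y = u1^2 + u2, with u1 = x1 + x2 and u2 = x1 * x2
x1, x2 = 1.5, -0.5
u1 = x1 + x2

# Chain rule: dy/dx1 = (dy/du1)(du1/dx1) + (dy/du2)(du2/dx1)
#                    = (2 * u1) * 1 + 1 * x2
dy_dx1_chain = 2 * u1 * 1 + 1 * x2

# Direct numerical differentiation of the composite in x1, for comparison
dy_dx1_direct = deriv(lambda t: (t + x2) ** 2 + t * x2, x1)
assert abs(dy_dx1_chain - dy_dx1_direct) < 1e-6
```

This summing-over-paths structure is exactly what automatic differentiation exploits when backpropagating through a computational graph.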
- Differential calculus and integral calculus are two branches of calculus, where the former can be applied to the ubiquitous optimization problems in deep learning.
- A derivative can be interpreted as the instantaneous rate of change of a function with respect to its variable. It is also the slope of the tangent line to the curve of the function.
- A gradient is a vector whose components are the partial derivatives of a multivariate function with respect to all its variables.
- The chain rule enables us to differentiate composite functions.
1. Plot the function $y = f(x) = x^3 - \frac{1}{x}$ and its tangent line when $x = 1$.
1. Find the gradient of the function $f(\mathbf{x}) = 3x_1^2 + 5e^{x_2}$.
1. What is the gradient of the function $f(\mathbf{x}) = \|\mathbf{x}\|_2$?
1. Can you write out the chain rule for the case where $u = f(x, y, z)$ and $x = x(a, b)$, $y = y(a, b)$, and $z = z(a, b)$?
:begin_tab:mxnet
Discussions
:end_tab:
:begin_tab:pytorch
Discussions
:end_tab:
:begin_tab:tensorflow
Discussions
:end_tab: