Commit
Update the docs of nn.loss.L1Loss and nn.functional.l1_loss, test=develop (PaddlePaddle#2343)
Showing 7 changed files with 136 additions and 43 deletions.
@@ -0,0 +1,8 @@
==========
functional
==========

.. toctree::
    :maxdepth: 1

    functional/l1_loss.rst
@@ -0,0 +1,10 @@
.. _api_nn_functional_l1_loss:

l1_loss
-------

.. autoclass:: paddle.nn.functional.l1_loss
    :members:
    :inherited-members:
    :noindex:
@@ -0,0 +1,11 @@
=======================
functional
=======================




.. toctree::
    :maxdepth: 1

    functional_cn/l1_loss_cn.rst
@@ -0,0 +1,63 @@
l1_loss
-------------------------------

.. py:function:: paddle.nn.functional.l1_loss(x, label, reduction='mean', name=None)

This API computes the `L1 loss` between the input ``x`` and the label ``label``.

The loss is computed as follows:

When `reduction` is set to ``'none'``:

.. math::
    Out = \lvert x - label \rvert

When `reduction` is set to ``'mean'``:

.. math::
    Out = MEAN(\lvert x - label \rvert)

When `reduction` is set to ``'sum'``:

.. math::
    Out = SUM(\lvert x - label \rvert)

Parameters
::::::::::
- **x** (Tensor): The input Tensor with shape [N, *], where N is the batch size and `*` means any number of additional dimensions. Data type: float32, float64, int32 or int64.
- **label** (Tensor): The label with shape [N, *], the same as ``x``. Data type: float32, float64, int32 or int64.
- **reduction** (str, optional): The reduction applied to the output. Possible values are ``'none'``, ``'mean'`` and ``'sum'``. Defaults to ``'mean'``, which returns the mean of the `L1Loss`; ``'sum'`` returns the sum of the `L1Loss`; ``'none'`` returns the unreduced `L1Loss`.
- **name** (str, optional): Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

Returns
:::::::::
``Tensor``, the `L1 loss` between the input ``x`` and the label ``label``. If :attr:`reduction` is ``'none'``, the shape of the output loss is [N, *], the same as ``x``. If :attr:`reduction` is ``'mean'`` or ``'sum'``, the shape of the output loss is [1].

Code Example
::::::::::::

.. code-block:: python

    import paddle
    import numpy as np

    paddle.disable_static()
    x_data = np.array([[1.5, 0.8], [0.2, 1.3]]).astype("float32")
    label_data = np.array([[1.7, 1], [0.4, 0.5]]).astype("float32")
    x = paddle.to_variable(x_data)
    label = paddle.to_variable(label_data)

    l1_loss = paddle.nn.functional.l1_loss(x, label)
    print(l1_loss.numpy())
    # [0.35]

    l1_loss = paddle.nn.functional.l1_loss(x, label, reduction='none')
    print(l1_loss.numpy())
    # [[0.20000005 0.19999999]
    #  [0.2        0.79999995]]

    l1_loss = paddle.nn.functional.l1_loss(x, label, reduction='sum')
    print(l1_loss.numpy())
    # [1.4]
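The example outputs above can be checked by hand against the formula Out = |x - label|: the element-wise differences are 0.2, 0.2, 0.2 and 0.8, so the sum is 1.4 and the mean is 0.35. A minimal NumPy sketch of that check (illustrative only, not part of the committed file):

.. code-block:: python

    import numpy as np

    # Same example data as the documentation snippet above.
    x = np.array([[1.5, 0.8], [0.2, 1.3]], dtype="float32")
    label = np.array([[1.7, 1.0], [0.4, 0.5]], dtype="float32")

    elementwise = np.abs(x - label)   # reduction='none': |x - label|
    print(elementwise)                # approx. [[0.2 0.2] [0.2 0.8]]
    print(elementwise.mean())         # approx. 0.35 -> reduction='mean'
    print(elementwise.sum())          # approx. 1.4  -> reduction='sum'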
@@ -1,67 +1,66 @@
L1Loss
-------------------------------

-.. py:function:: paddle.nn.loss.L1Loss(reduction='mean')
+.. py:class:: paddle.nn.loss.L1Loss(reduction='mean', name=None)
-This API creates a callable L1Loss class, which computes the `L1 loss` between the input input and the label label.
+This API creates a callable L1Loss class, which computes the `L1 loss` between the input x and the label label.

The loss is computed as follows:

When `reduction` is set to ``'none'``:

.. math::
-    Out = |input - label|
+    Out = \lvert x - label\rvert
When `reduction` is set to ``'mean'``:

.. math::
-    Out = MEAN(|input - label|)
+    Out = MEAN(\lvert x - label\rvert)
When `reduction` is set to ``'sum'``:

.. math::
-    Out = SUM(|input - label|)
+    Out = SUM(\lvert x - label\rvert)
-The input input and the label label have shape [N, *], where N is batch_size and `*` means any number of other dimensions.
-If :attr:`reduction` is ``'none'``, the shape of the output loss is [N, *], the same as the input input.
-If :attr:`reduction` is ``'mean'`` or ``'sum'``, the shape of the output loss is [1].
-Parameters:
-    - **reduction** (string, optional): The reduction applied to the output. Possible values are ``'none'``, ``'mean'`` and ``'sum'``. Defaults to ``'mean'``, which returns the mean of the `L1Loss`; ``'sum'`` returns the sum of the `L1Loss`; ``'none'`` returns the unreduced L1Loss. Data type: string.
+Parameters
+::::::::::
+- **reduction** (str, optional): The reduction applied to the output. Possible values are ``'none'``, ``'mean'`` and ``'sum'``. Defaults to ``'mean'``, which returns the mean of the `L1Loss`; ``'sum'`` returns the sum of the `L1Loss`; ``'none'`` returns the unreduced `L1Loss`.
+- **name** (str, optional): Name of the operation (optional, default is None). For more information, please refer to :ref:`api_guide_Name`.

-Returns: a callable object that computes the L1Loss.
+Shape
+:::::::::
+- **x** (Tensor): The input Tensor with shape [N, *], where N is the batch size and `*` means any number of additional dimensions. Data type: float32, float64, int32 or int64.
+- **label** (Tensor): The label with shape [N, *], the same as ``x``. Data type: float32, float64, int32 or int64.
+- **output** (Tensor): The `L1 loss` between the input ``x`` and the label ``label``. If :attr:`reduction` is ``'none'``, the shape of the output loss is [N, *], the same as ``x``. If :attr:`reduction` is ``'mean'`` or ``'sum'``, the shape of the output loss is [1].
-**Code Example**
+Code Example
+::::::::::::

.. code-block:: python
-    # declarative mode
-    import paddle.fluid as fluid
-    import numpy as np
    import paddle
-    input = fluid.data(name="input", shape=[1])
-    label = fluid.data(name="label", shape=[1])
-    l1_loss = paddle.nn.loss.L1Loss(reduction='mean')
-    output = l1_loss(input,label)
-    place = fluid.CPUPlace()
-    exe = fluid.Executor(place)
-    exe.run(fluid.default_startup_program())
-    input_data = np.array([1.5]).astype("float32")
-    label_data = np.array([1.7]).astype("float32")
-    output_data = exe.run(fluid.default_main_program(),
-        feed={"input":input_data, "label":label_data},
-        fetch_list=[output],
-        return_numpy=True)
-    print(output_data)  # [array([0.2], dtype=float32)]
-    # imperative mode
-    import paddle.fluid.dygraph as dg
-    with dg.guard(place) as g:
-        input = dg.to_variable(input_data)
-        label = dg.to_variable(label_data)
-        l1_loss = paddle.nn.loss.L1Loss(reduction='mean')
-        output = l1_loss(input,label)
-        print(output.numpy())  # [0.2]
+    import numpy as np
+    paddle.disable_static()
+    x_data = np.array([[1.5, 0.8], [0.2, 1.3]]).astype("float32")
+    label_data = np.array([[1.7, 1], [0.4, 0.5]]).astype("float32")
+    x = paddle.to_variable(x_data)
+    label = paddle.to_variable(label_data)
+    l1_loss = paddle.nn.loss.L1Loss()
+    output = l1_loss(x, label)
+    print(output.numpy())
+    # [0.35]
+    l1_loss = paddle.nn.loss.L1Loss(reduction='sum')
+    output = l1_loss(x, label)
+    print(output.numpy())
+    # [1.4]
+    l1_loss = paddle.nn.loss.L1Loss(reduction='none')
+    output = l1_loss(x, label)
+    print(output.numpy())
+    # [[0.20000005 0.19999999]
+    #  [0.2        0.79999995]]
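Since the class form paddle.nn.loss.L1Loss and the functional form paddle.nn.functional.l1_loss document the same computation, a quick sketch like the one below can confirm they agree on the example data. It is an illustrative check, not part of the committed files, and assumes the Paddle release used by the snippets above (one where paddle.disable_static and paddle.to_variable are available):

.. code-block:: python

    import numpy as np
    import paddle

    paddle.disable_static()
    x = paddle.to_variable(np.array([[1.5, 0.8], [0.2, 1.3]], dtype="float32"))
    label = paddle.to_variable(np.array([[1.7, 1.0], [0.4, 0.5]], dtype="float32"))

    # The callable class and the functional API should produce identical
    # results for every reduction mode.
    for reduction in ("mean", "sum", "none"):
        loss_layer = paddle.nn.loss.L1Loss(reduction=reduction)
        out_class = loss_layer(x, label)
        out_func = paddle.nn.functional.l1_loss(x, label, reduction=reduction)
        print(reduction, np.allclose(out_class.numpy(), out_func.numpy()))  # expected: True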