
fix 2.0 api docs (PaddlePaddle#2849)
zhupengyang authored Nov 17, 2020
1 parent 36c5a65 commit 4f60551
Showing 25 changed files with 103 additions and 173 deletions.
36 changes: 14 additions & 22 deletions doc/paddle/api/paddle/fluid/layers/prelu_cn.rst
@@ -3,16 +3,12 @@
 prelu
 -------------------------------
 
-.. py:function:: paddle.fluid.layers.prelu(x, mode, param_attr=None, name=None)
+.. py:function:: paddle.static.nn.prelu(x, mode, param_attr=None, name=None)
 
-Equation:
+prelu activation function
 
 .. math::
 
-    y = max(0, x) + \alpha min(0, x)
+    prelu(x) = max(0, x) + \alpha * min(0, x)
 
 Three activation modes are provided:
@@ -24,27 +20,23 @@ prelu
 Parameters:
-    - **x** (Variable) - A multi-dimensional Tensor or LoDTensor with data type float32.
-    - **mode** (str) - The weight-sharing mode.
-    - **param_attr** (ParamAttr, optional) - Parameter attribute for the learnable weight :math:`[\alpha]`, which can be created with ParamAttr. Defaults to None, which means the default weight parameter attribute is used. See :ref:`cn_api_fluid_ParamAttr` for details.
-    - **name** (str, optional) – See :ref:`api_guide_Name` for details. Usually there is no need to set it. Defaults to None.
+    - **x** (Tensor) - A multi-dimensional Tensor or LoDTensor with data type float32.
+    - **mode** (str) - The weight-sharing mode.
+    - **param_attr** (ParamAttr, optional) - Parameter attribute for the learnable weight :math:`[\alpha]`, which can be created with ParamAttr. Defaults to None, which means the default weight parameter attribute is used. See :ref:`cn_api_fluid_ParamAttr` for details.
+    - **name** (str, optional) – See :ref:`api_guide_Name` for details. Usually there is no need to set it. Defaults to None.
 
 
-Returns: the activation output Tensor or LoDTensor, with data type float32 and the same shape as the input.
-
-Return type: Variable
+Returns: the activation output Tensor, with the same data type and shape as the input.

 **Code example:**
 
 .. code-block:: python
 
-    import paddle.fluid as fluid
-    from paddle.fluid.param_attr import ParamAttr
-
-    x = fluid.data(name="x", shape=[None, 5, 10, 10], dtype="float32")
-    mode = 'channel'
-    output = fluid.layers.prelu(
-        x, mode, param_attr=ParamAttr(name='alpha'))
+    import paddle
+
+    x = paddle.to_tensor([-1., 2., 3.])
+    param = paddle.ParamAttr(initializer=paddle.nn.initializer.Constant(0.2))
+    out = paddle.static.nn.prelu(x, 'all', param)
+    # [-0.2, 2., 3.]
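The new example is easy to verify by hand against the formula above. The plain-Python sketch below (ours, not part of the patch) applies prelu(x) = max(0, x) + alpha * min(0, x) in 'all' mode, where a single alpha = 0.2 is shared by every element.

.. code-block:: python

    # prelu in 'all' mode: one shared alpha for every element
    alpha = 0.2
    x = [-1., 2., 3.]
    print([max(0., v) + alpha * min(0., v) for v in x])  # [-0.2, 2.0, 3.0]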
7 changes: 2 additions & 5 deletions doc/paddle/api/paddle/nn/functional/activation/elu_cn.rst
@@ -32,11 +32,8 @@ elu activation layer (ELU Activation Operator)
     import paddle
     import paddle.nn.functional as F
-    import numpy as np
 
-    x = paddle.to_tensor(np.array([[-1, 6], [1, 15.6]]))
-    out = F.elu(x, alpha=0.2)
+    x = paddle.to_tensor([[-1., 6.], [1., 15.6]])
+    out = F.elu(x, alpha=0.2)
     # [[-0.12642411  6.        ]
     #  [ 1.         15.6      ]]
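For the negative branch, elu(x) = alpha * (e^x - 1), so the first entry of the output can be checked directly (a one-line sketch of ours, not part of the patch):

.. code-block:: python

    import math

    # negative branch of elu with alpha = 0.2, evaluated at x = -1
    print(0.2 * (math.exp(-1.0) - 1.0))  # -0.12642411...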
11 changes: 7 additions & 4 deletions doc/paddle/api/paddle/nn/functional/activation/gelu_cn.rst
@@ -38,9 +38,12 @@ gelu activation layer (GELU Activation Operator)
     import paddle
     import paddle.nn.functional as F
-    import numpy as np
 
-    x = paddle.to_tensor(np.array([[-1, 0.5], [1, 1.5]]))
-    out1 = F.gelu(x) # [-0.158655 0.345731 0.841345 1.39979]
-    out2 = F.gelu(x, True) # [-0.158808 0.345714 0.841192 1.39957]
+    x = paddle.to_tensor([[-1, 0.5], [1, 1.5]])
+    out1 = F.gelu(x)
+    # [[-0.15865529,  0.34573123],
+    #  [ 0.84134471,  1.39978933]]
+    out2 = F.gelu(x, True)
+    # [[-0.15880799,  0.34571400],
+    #  [ 0.84119201,  1.39957154]]
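Here out1 uses the exact definition gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2))), while passing True selects the tanh approximation. Checking the exact form at x = -1 (our sketch, not part of the patch):

.. code-block:: python

    import math

    # exact gelu at x = -1: 0.5 * x * (1 + erf(x / sqrt(2)))
    print(0.5 * -1.0 * (1.0 + math.erf(-1.0 / math.sqrt(2.0))))  # -0.15865529...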
doc/paddle/api/paddle/nn/functional/activation/leaky_relu_cn.rst
@@ -35,9 +35,6 @@ leaky_relu activation layer. The formula is as follows:
     import paddle
     import paddle.nn.functional as F
-    import numpy as np
 
-    paddle.disable_static()
-    x = paddle.to_tensor(np.array([-2, 0, 1], 'float32'))
+    x = paddle.to_tensor([-2., 0., 1.])
     out = F.leaky_relu(x) # [-0.02, 0., 1.]
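leaky_relu keeps non-negative inputs and scales negative ones by negative_slope; assuming the default slope of 0.01, the documented output falls out directly (our sketch, not part of the patch):

.. code-block:: python

    # leaky_relu with the (assumed) default negative_slope = 0.01
    print([v if v >= 0 else 0.01 * v for v in [-2.0, 0.0, 1.0]])  # [-0.02, 0.0, 1.0]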
20 changes: 10 additions & 10 deletions doc/paddle/api/paddle/nn/functional/activation/log_softmax_cn.rst
@@ -8,7 +8,10 @@ log_softmax

 .. math::
 
-    Out[i, j] = log(softmax(x)) = log(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])})
+    \begin{aligned}
+    log\_softmax[i, j] &= log(softmax(x)) \\
+    &= log(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])})
+    \end{aligned}
 Parameters
 ::::::::::
@@ -28,16 +31,13 @@ log_softmax
     import paddle
     import paddle.nn.functional as F
-    import numpy as np
 
-    paddle.disable_static()
-    x = np.array([[[-2.0, 3.0, -4.0, 5.0],
-                   [3.0, -4.0, 5.0, -6.0],
-                   [-7.0, -8.0, 8.0, 9.0]],
-                  [[1.0, -2.0, -3.0, 4.0],
-                   [-5.0, 6.0, 7.0, -8.0],
-                   [6.0, 7.0, 8.0, 9.0]]]).astype('float32')
+    x = [[[-2.0, 3.0, -4.0, 5.0],
+          [3.0, -4.0, 5.0, -6.0],
+          [-7.0, -8.0, 8.0, 9.0]],
+         [[1.0, -2.0, -3.0, 4.0],
+          [-5.0, 6.0, 7.0, -8.0],
+          [6.0, 7.0, 8.0, 9.0]]]
     x = paddle.to_tensor(x)
     out1 = F.log_softmax(x)
     out2 = F.log_softmax(x, dtype='float64')
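The formula above is applied along the last axis. The sketch below (ours, not part of the patch) computes it for the first row of x, using the numerically stable form v - log(sum(exp(row))) that real implementations prefer:

.. code-block:: python

    import math

    # log_softmax of one row: v - log(sum(exp(row)))
    row = [-2.0, 3.0, -4.0, 5.0]
    log_denom = math.log(sum(math.exp(v) for v in row))
    print([v - log_denom for v in row])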
4 changes: 2 additions & 2 deletions doc/paddle/api/paddle/nn/functional/loss/hsigmoid_loss_cn.rst
@@ -30,8 +30,8 @@ hsigmoid_loss
     - **num_classes** (int) - Total number of classes (dictionary size); must be at least 2. With the default tree (both ``path_table`` and ``path_code`` are None), this parameter is required. With a custom tree (both are not None), it should be the number of non-leaf nodes of the custom tree, which specifies the number of classes for the binary classifiers.
     - **weight** (Tensor) - Weight parameter of this OP, with shape ``[num_classes - 1, D]`` and the same data type as ``input``.
     - **bias** (Tensor, optional) - Bias parameter of this OP, with shape ``[num_classes - 1, 1]`` and the same data type as ``input``. If set to None, no bias is applied. Defaults to None.
-    - **path_table** (Variable, optional) – Stores, for each sample in the batch, the path from its class (word) to the root node, in leaf-to-root order. Data type int64, shape ``[N, L]``, where L is the path length. ``path_table`` and ``path_code`` must have the same shape; for each sample i, path_table[i] is an np.ndarray-like structure whose elements index the parent nodes' rows of the weight matrix. Defaults to None.
-    - **path_code** (Variable, optional) – Stores, for each sample in the batch, the code along the path from its class (word) to the root node, in leaf-to-root order. Data type int64, shape ``[N, L]``. Defaults to None.
+    - **path_table** (Tensor, optional) – Stores, for each sample in the batch, the path from its class (word) to the root node, in leaf-to-root order. Data type int64, shape ``[N, L]``, where L is the path length. ``path_table`` and ``path_code`` must have the same shape; for each sample i, path_table[i] is an np.ndarray-like structure whose elements index the parent nodes' rows of the weight matrix. Defaults to None.
+    - **path_code** (Tensor, optional) – Stores, for each sample in the batch, the code along the path from its class (word) to the root node, in leaf-to-root order. Data type int64, shape ``[N, L]``. Defaults to None.
     - **is_sparse** (bool, optional) – Whether to use sparse updates. If set to True, the gradients of W and of the input become sparse. Defaults to False.
     - **name** (str, optional) – See :ref:`api_guide_Name` for details. Usually there is no need to set it. Defaults to None.
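This hunk only renames the parameter types, so for context here is a minimal usage sketch under the default tree (ours, not part of the patch; names and shapes follow the parameter list above, and the exact loss values depend on the seed):

.. code-block:: python

    import paddle
    import paddle.nn.functional as F

    paddle.seed(2020)
    input = paddle.uniform([4, 3])                         # N = 4 samples, D = 3 features
    label = paddle.to_tensor([0, 1, 4, 2], dtype='int64')  # class ids < num_classes
    weight = paddle.uniform([4, 3])                        # shape [num_classes - 1, D]
    loss = F.hsigmoid_loss(input, label, num_classes=5, weight=weight)
    print(loss.shape)  # expected [4, 1]: one loss value per sample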

3 changes: 1 addition & 2 deletions doc/paddle/api/paddle/nn/layer/activation/ELU_cn.rst
@@ -30,9 +30,8 @@ ELU activation layer (ELU Activation Operator)
 .. code-block:: python
 
     import paddle
-    import numpy as np
 
-    x = paddle.to_tensor(np.array([[-1, 6], [1, 15.6]]))
+    x = paddle.to_tensor([[-1., 6.], [1., 15.6]])
     m = paddle.nn.ELU(0.2)
     out = m(x)
     # [[-0.12642411  6.        ]
doc/paddle/api/paddle/nn/layer/activation/Hardshrink_cn.rst
@@ -34,7 +34,7 @@ Hardshrink activation layer
 .. code-block:: python
 
     import paddle
-    paddle.disable_static()
 
     x = paddle.to_tensor([-1, 0.3, 2.5])
     m = paddle.nn.Hardshrink()
     out = m(x) # [-1., 0., 2.5]
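Hardshrink zeroes any input whose magnitude does not exceed the threshold; assuming the default threshold of 0.5, the documented output checks out (our sketch, not part of the patch):

.. code-block:: python

    # hardshrink with the (assumed) default threshold = 0.5
    print([v if abs(v) > 0.5 else 0.0 for v in [-1.0, 0.3, 2.5]])  # [-1.0, 0.0, 2.5]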
doc/paddle/api/paddle/nn/layer/activation/Hardsigmoid_cn.rst
@@ -36,6 +36,6 @@ Hardsigmoid activation layer. A piecewise-linear approximation of sigmoid that is faster than sigmoid
     import paddle
 
-    m = paddle.nn.Sigmoid()
+    m = paddle.nn.Hardsigmoid()
     x = paddle.to_tensor([-4., 5., 1.])
     out = m(x) # [0., 1., 0.666667]
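This hunk fixes the example to actually instantiate Hardsigmoid rather than Sigmoid. Assuming Paddle's piecewise form hardsigmoid(x) = max(0, min(1, x/6 + 0.5)), the documented output checks out (our sketch, not part of the patch):

.. code-block:: python

    # hardsigmoid as max(0, min(1, x/6 + 0.5)) -- our reading of the formula
    print([max(0.0, min(1.0, v / 6.0 + 0.5)) for v in [-4.0, 5.0, 1.0]])
    # [0.0, 1.0, 0.6666666666666666]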
5 changes: 2 additions & 3 deletions doc/paddle/api/paddle/nn/layer/activation/Hardtanh_cn.rst
@@ -36,8 +36,7 @@ Hardtanh activation layer (Hardtanh Activation Operator). The formula is as follows:
 .. code-block:: python
 
     import paddle
-    import numpy as np
 
-    x = paddle.to_tensor(np.array([-1.5, 0.3, 2.5]))
+    x = paddle.to_tensor([-1.5, 0.3, 2.5])
     m = paddle.nn.Hardtanh()
-    out = m(x) # # [-1., 0.3, 1.]
+    out = m(x) # [-1., 0.3, 1.]
21 changes: 10 additions & 11 deletions doc/paddle/api/paddle/nn/layer/activation/LogSoftmax_cn.rst
@@ -8,8 +8,10 @@ LogSoftmax activation layer. The formula is as follows:

 .. math::
 
-    Out[i, j] = log(softmax(x))
-              = log(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])})
+    \begin{aligned}
+    Out[i, j] &= log(softmax(x)) \\
+    &= log(\frac{\exp(X[i, j])}{\sum_j \exp(X[i, j])})
+    \end{aligned}
 Parameters
 ::::::::::
@@ -26,16 +28,13 @@ LogSoftmax activation layer. The formula is as follows:
 .. code-block:: python
 
     import paddle
-    import numpy as np
 
-    paddle.disable_static()
-    x = np.array([[[-2.0, 3.0, -4.0, 5.0],
-                   [3.0, -4.0, 5.0, -6.0],
-                   [-7.0, -8.0, 8.0, 9.0]],
-                  [[1.0, -2.0, -3.0, 4.0],
-                   [-5.0, 6.0, 7.0, -8.0],
-                   [6.0, 7.0, 8.0, 9.0]]], 'float32')
+    x = [[[-2.0, 3.0, -4.0, 5.0],
+          [3.0, -4.0, 5.0, -6.0],
+          [-7.0, -8.0, 8.0, 9.0]],
+         [[1.0, -2.0, -3.0, 4.0],
+          [-5.0, 6.0, 7.0, -8.0],
+          [6.0, 7.0, 8.0, 9.0]]]
     m = paddle.nn.LogSoftmax()
     x = paddle.to_tensor(x)
     out = m(x)
3 changes: 1 addition & 2 deletions doc/paddle/api/paddle/nn/layer/activation/ReLU_cn.rst
@@ -27,8 +27,7 @@ ReLU activation layer (Rectified Linear Unit). The formula is as follows:
 .. code-block:: python
 
     import paddle
-    import numpy as np
 
-    x = paddle.to_tensor(np.array([-2, 0, 1]).astype('float32'))
+    x = paddle.to_tensor([-2., 0., 1.])
     m = paddle.nn.ReLU()
     out = m(x) # [0., 0., 1.]
28 changes: 11 additions & 17 deletions doc/paddle/api/paddle/tensor/creation/arange_cn.rst
@@ -5,9 +5,6 @@ arange

 .. py:function:: paddle.arange(start=0, end=None, step=1, dtype=None, name=None)
 
 This OP returns a 1-D Tensor with data type ``dtype``, holding values evenly spaced with step ``step`` over the interval [``start``, ``end``).
 
 When ``dtype`` is a floating-point type, it is recommended to add a small epsilon to ``end`` to avoid floating-point rounding errors and make the boundary unambiguous.
@@ -33,21 +30,18 @@ arange

 .. code-block:: python
 
-    import paddle
-    import numpy as np
-
-    paddle.enable_imperative()
+    import paddle
 
     out1 = paddle.arange(5)
     # [0, 1, 2, 3, 4]
 
     out2 = paddle.arange(3, 9, 2.0)
     # [3, 5, 7]
 
     # use 4.999 instead of 5.0 to avoid floating point rounding errors
     out3 = paddle.arange(4.999, dtype='float32')
     # [0., 1., 2., 3., 4.]
 
-    start_var = paddle.imperative.to_variable(np.array([3]))
-    out4 = paddle.arange(start_var, 7)
-    # [3, 4, 5, 6]
+    start_var = paddle.to_tensor([3])
+    out4 = paddle.arange(start_var, 7)
+    # [3, 4, 5, 6]
5 changes: 1 addition & 4 deletions doc/paddle/api/paddle/tensor/creation/ones_like_cn.rst
@@ -28,10 +28,7 @@ ones_like
 .. code-block:: python
 
     import paddle
-    import numpy as np
-
-    paddle.enable_imperative()
-    x = paddle.imperative.to_variable(np.array([1, 2, 3], dtype='float32'))
+    x = paddle.to_tensor([1., 2., 3.])
 
     out1 = paddle.ones_like(x) # [1., 1., 1.]
     out2 = paddle.ones_like(x, dtype='int32') # [1, 1, 1]
5 changes: 1 addition & 4 deletions doc/paddle/api/paddle/tensor/creation/zeros_like_cn.rst
@@ -28,10 +28,7 @@ zeros_like
 .. code-block:: python
 
     import paddle
-    import numpy as np
-
-    paddle.enable_imperative()
-    x = paddle.imperative.to_variable(np.array([1, 2, 3], dtype='float32'))
+    x = paddle.to_tensor([1., 2., 3.])
 
     out1 = paddle.zeros_like(x) # [0., 0., 0.]
     out2 = paddle.zeros_like(x, dtype='int32') # [0, 0, 0]
7 changes: 2 additions & 5 deletions doc/paddle/api/paddle/tensor/random/normal_cn.rst
@@ -31,18 +31,15 @@ normal
 .. code-block:: python
 
     import paddle
-    import numpy as np
-
-    paddle.disable_static()
 
     out1 = paddle.normal(shape=[2, 3])
     # [[ 0.17501129  0.32364586  1.561118  ]  # random
     #  [-1.7232178   1.1545963  -0.76156676]] # random
 
-    mean_tensor = paddle.to_tensor(np.array([1.0, 2.0, 3.0]))
+    mean_tensor = paddle.to_tensor([1.0, 2.0, 3.0])
     out2 = paddle.normal(mean=mean_tensor)
     # [ 0.18644847 -1.19434458  3.93694787]  # random
 
-    std_tensor = paddle.to_tensor(np.array([1.0, 2.0, 3.0]))
+    std_tensor = paddle.to_tensor([1.0, 2.0, 3.0])
     out3 = paddle.normal(mean=mean_tensor, std=std_tensor)
     # [1.00780561 3.78457445 5.81058198]  # random
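When both mean and std are 1-D tensors of the same length, they are matched elementwise, so entry i of the result is drawn from N(mean[i], std[i]). A minimal sketch of ours, assuming that elementwise pairing:

.. code-block:: python

    import paddle

    mean = paddle.to_tensor([0.0, 10.0])
    std = paddle.to_tensor([0.1, 5.0])
    sample = paddle.normal(mean=mean, std=std)  # shape [2]; entry i ~ N(mean[i], std[i])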
10 changes: 4 additions & 6 deletions doc/paddle/api/paddle/tensor/random/rand_cn.rst
@@ -23,17 +23,15 @@ rand
 .. code-block:: python
 
     import paddle
-    import numpy as np
-
-    paddle.disable_static()
 
     # example 1: attr shape is a list which doesn't contain Tensor.
     out1 = paddle.rand(shape=[2, 3])
     # [[0.451152  , 0.55825245, 0.403311  ],  # random
     #  [0.22550228, 0.22106001, 0.7877319 ]]  # random
 
     # example 2: attr shape is a list which contains Tensor.
-    dim1 = paddle.full([1], 2, "int64")
-    dim2 = paddle.full([1], 3, "int32")
+    dim1 = paddle.to_tensor([2], 'int64')
+    dim2 = paddle.to_tensor([3], 'int32')
     out2 = paddle.rand(shape=[dim1, dim2, 2])
     # [[[0.8879919 , 0.25788337],  # random
     #   [0.28826773, 0.9712097 ],  # random
@@ -43,7 +41,7 @@ rand
     #   [0.870881  , 0.2984597 ]]]  # random
 
     # example 3: attr shape is a Tensor, the data type must be int64 or int32.
-    shape_tensor = paddle.to_tensor(np.array([2, 3]))
-    out2 = paddle.rand(shape_tensor)
+    shape_tensor = paddle.to_tensor([2, 3])
+    out3 = paddle.rand(shape_tensor)
     # [[0.22920267, 0.841956  , 0.05981819],  # random
     #  [0.4836288 , 0.24573246, 0.7516129 ]]  # random
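The outputs above are marked random; fixing the global seed makes them reproducible between runs (our sketch, not part of the patch; paddle.seed is assumed to control the default generator):

.. code-block:: python

    import paddle

    paddle.seed(42)
    a = paddle.rand([2, 3])
    paddle.seed(42)
    b = paddle.rand([2, 3])
    print(bool((a == b).all()))  # True: same seed, same "random" values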
11 changes: 4 additions & 7 deletions doc/paddle/api/paddle/tensor/random/randint_cn.rst
@@ -25,9 +25,6 @@ randint
 .. code-block:: python
 
     import paddle
-    import numpy as np
-
-    paddle.disable_static()
 
     # example 1:
     # attr shape is a list which doesn't contain Tensor.
@@ -36,15 +33,15 @@ randint
     # example 2:
     # attr shape is a list which contains Tensor.
-    dim1 = paddle.full([1], 2, "int64")
-    dim2 = paddle.full([1], 3, "int32")
-    out2 = paddle.randint(low=-5, high=5, shape=[dim1, dim2], dtype="int32")
+    dim1 = paddle.to_tensor([2], 'int64')
+    dim2 = paddle.to_tensor([3], 'int32')
+    out2 = paddle.randint(low=-5, high=5, shape=[dim1, dim2])
     # [[0, -1, -3],  # random
     #  [4, -2, 0]]   # random
 
     # example 3:
     # attr shape is a Tensor
-    shape_tensor = paddle.to_tensor(np.array([3]))
+    shape_tensor = paddle.to_tensor(3)
     out3 = paddle.randint(low=-5, high=5, shape=shape_tensor)
     # [-2, 2, 3]  # random
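randint draws from the half-open interval [low, high), so high itself never appears; the sketch below (ours, not part of the patch) makes that visible with a large sample:

.. code-block:: python

    import paddle

    out = paddle.randint(low=0, high=2, shape=[1000])
    # only 0 and 1 can be drawn; with 1000 samples both almost surely appear
    print(int(out.min()), int(out.max()))  # 0 1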
