
Commit

Fix the structure of cn docs paddle.t & fix the annotation display on 5 cn docs (PaddlePaddle#4402)

* Update t_cn.rst

* Update basic_usage_cn.md

* Update cumsum_cn.rst

* Update InputSpec_cn.rst

* Update Program_cn.rst

* Update gradients_cn.rst

* Update t_cn.rst

* Update t_cn.rst

* Update t_cn.rst

* Update t_cn.rst

Co-authored-by: Ligoml <[email protected]>
Liyulingyue and Ligoml authored Apr 20, 2022
1 parent caad954 commit 23a6e2d
Showing 6 changed files with 25 additions and 57 deletions.
2 changes: 1 addition & 1 deletion docs/api/paddle/cumsum_cn.rst
@@ -47,6 +47,6 @@ cumsum
y = paddle.cumsum(data, dtype='float64')
print(y.dtype)
# VarType.FP64
# paddle.float64
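The hunk above only changes how the dtype is displayed (`paddle.float64` instead of `VarType.FP64`); the dtype-promotion behavior of the `dtype` argument is unchanged. As a quick analogy (using numpy for illustration, not paddle itself), a cumulative sum can likewise be asked to accumulate in a wider type:

```python
import numpy as np

# Analogy for paddle.cumsum(data, dtype='float64'): numpy's cumsum also
# takes a dtype argument controlling the accumulator/output type.
data = np.arange(6, dtype=np.int32).reshape(2, 3)
y = np.cumsum(data, dtype=np.float64)  # flattens by default, like axis=None
print(y.dtype)  # float64
print(y)        # [ 0.  1.  3.  6. 10. 15.]
```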
12 changes: 6 additions & 6 deletions docs/api/paddle/static/InputSpec_cn.rst
@@ -30,8 +30,8 @@ InputSpec
input = InputSpec([None, 784], 'float32', 'x')
label = InputSpec([None, 1], 'int64', 'label')
print(input) # InputSpec(shape=(-1, 784), dtype=VarType.FP32, name=x)
print(label) # InputSpec(shape=(-1, 1), dtype=VarType.INT64, name=label)
print(input) # InputSpec(shape=(-1, 784), dtype=paddle.float32, name=x)
print(label) # InputSpec(shape=(-1, 1), dtype=paddle.int64, name=label)
Methods
@@ -61,7 +61,7 @@ from_tensor(tensor, name=None)
x = paddle.to_tensor(np.ones([2, 2], np.float32))
x_spec = InputSpec.from_tensor(x, name='x')
print(x_spec) # InputSpec(shape=(2, 2), dtype=VarType.FP32, name=x)
print(x_spec) # InputSpec(shape=(2, 2), dtype=paddle.float32, name=x)
from_numpy(ndarray, name=None)
@@ -88,7 +88,7 @@ from_numpy(ndarray, name=None)
x = np.ones([2, 2], np.float32)
x_spec = InputSpec.from_numpy(x, name='x')
print(x_spec) # InputSpec(shape=(2, 2), dtype=VarType.FP32, name=x)
print(x_spec) # InputSpec(shape=(2, 2), dtype=paddle.float32, name=x)
batch(batch_size)
@@ -112,7 +112,7 @@ batch(batch_size)
x_spec = InputSpec(shape=[64], dtype='float32', name='x')
x_spec.batch(4)
print(x_spec) # InputSpec(shape=(4, 64), dtype=VarType.FP32, name=x)
print(x_spec) # InputSpec(shape=(4, 64), dtype=paddle.float32, name=x)
unbatch()
@@ -133,4 +133,4 @@ unbatch()
x_spec = InputSpec(shape=[4, 64], dtype='float32', name='x')
x_spec.unbatch()
print(x_spec) # InputSpec(shape=(64,), dtype=VarType.FP32, name=x)
print(x_spec) # InputSpec(shape=(64,), dtype=paddle.float32, name=x)
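The `batch`/`unbatch` examples above boil down to adding or removing a leading batch dimension in the spec's shape. A minimal sketch of that shape arithmetic (the `MiniSpec` class is hypothetical, inferred from the printed outputs above, and is not paddle's implementation):

```python
# Hypothetical mini-version of InputSpec's batch/unbatch shape logic.
class MiniSpec:
    def __init__(self, shape, dtype, name):
        self.shape, self.dtype, self.name = tuple(shape), dtype, name

    def batch(self, batch_size):
        # prepend a batch dimension: (64,) -> (4, 64)
        self.shape = (batch_size,) + self.shape
        return self

    def unbatch(self):
        # drop the leading batch dimension: (4, 64) -> (64,)
        self.shape = self.shape[1:]
        return self

spec = MiniSpec([64], 'float32', 'x')
spec.batch(4)
print(spec.shape)   # (4, 64)
spec.unbatch()
print(spec.shape)   # (64,)
```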
8 changes: 4 additions & 4 deletions docs/api/paddle/static/Program_cn.rst
@@ -434,8 +434,8 @@ Generator, which yields every variable in the Program.
for var in prog.list_vars():
print(var)
# var img : paddle.VarType.LOD_TENSOR.shape(-1, 1, 28, 28).astype(VarType.FP32)
# var label : paddle.VarType.LOD_TENSOR.shape(-1, 1).astype(VarType.INT64)
# var img : LOD_TENSOR.shape(-1, 1, 28, 28).dtype(float32).stop_gradient(True)
# var label : LOD_TENSOR.shape(-1, 1).dtype(int64).stop_gradient(True)
all_parameters()
'''''''''
@@ -467,8 +467,8 @@ list[ :ref:`api_guide_parameter` ], a list containing all parameters in the current Program
# Here will print all parameters in current program, in this example,
# the result is like:
#
# persist trainable param fc_0.w_0 : paddle.VarType.LOD_TENSOR.shape(13, 10).astype(VarType.FP32)
# persist trainable param fc_0.b_0 : paddle.VarType.LOD_TENSOR.shape(10,).astype(VarType.FP32)
# persist trainable param fc_0.w_0 : LOD_TENSOR.shape(13, 10).dtype(float32).stop_gradient(False)
# persist trainable param fc_0.b_0 : LOD_TENSOR.shape(10,).dtype(float32).stop_gradient(False)
#
# Here print(param) will print out all the properties of a parameter,
# including name, type and persistable, you can access to specific
3 changes: 2 additions & 1 deletion docs/api/paddle/static/gradients_cn.rst
@@ -40,4 +40,5 @@ list[Tensor], the gradients corresponding to the inputs; if an input does not affect the …
y = paddle.static.nn.conv2d(x, 4, 1, bias_attr=False)
y = F.relu(y)
z = paddle.static.gradients([y], x)
print(z) # [var x@GRAD : fluid.VarType.LOD_TENSOR.shape(-1L, 2L, 8L, 8L).astype(VarType.FP32)]
print(z)
# [var x@GRAD : LOD_TENSOR.shape(-1, 2, 8, 8).dtype(float32).stop_gradient(False)]
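`paddle.static.gradients` builds the gradient graph symbolically; the hunk above only cleans up how the resulting gradient variable is printed. As a rough numeric sanity check of what such a gradient means, here is a central-difference sketch (the `sum(relu(x))` loss is an assumption standing in for the conv+relu network above, and uses numpy, not paddle):

```python
import numpy as np

def loss(x):
    # stand-in scalar loss: sum of relu(x)
    return np.maximum(x, 0.0).sum()

def numeric_grad(f, x, eps=1e-5):
    # central finite differences, one coordinate at a time
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.array([[1.0, -2.0], [3.0, -0.5]])
print(numeric_grad(loss, x))  # ~[[1, 0], [1, 0]]: d relu/dx is 1 where x > 0
```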
49 changes: 8 additions & 41 deletions docs/api/paddle/t_cn.rst
@@ -5,51 +5,18 @@ t

.. py:function:: paddle.t(input, name=None)
This OP transposes the data of a Tensor with at most 2 dimensions. 0-D and 1-D Tensors are returned as-is; for a 2-D Tensor this is equivalent to the :ref:`cn_api_fluid_layers_transpose` function with perm set to 0, 1.
Transposes the data of a Tensor with at most 2 dimensions. 0-D and 1-D Tensors are returned as-is; for a 2-D Tensor this is equivalent to the :ref:`cn_api_fluid_layers_transpose` function with perm set to 0, 1.

Parameters
:::::::::

- **input** (Tensor) - Input: an N-dimensional (N<=2) Tensor; supported data types are float16, float32, float64, int32, int64.
- **name** (str, optional) - Reserved for developers to print debugging information; see :ref:`api_guide_Name` for usage. Default: None
::::::::
- **input** (Tensor) - Input: an N-dimensional (N<=2) Tensor; supported data types are float16, float32, float64, int32, int64. Default: None.
- **name** (str, optional) - Reserved for developers to print debugging information; see :ref:`api_guide_Name` for usage. Default: None.

Returns
:::::::::
N-D Tensor


Code Example
:::::::::

.. code-block:: text
# Example 1 (0-D tensor)
x = tensor([0.79])
paddle.t(x) = tensor([0.79])

# Example 2 (1-D tensor)
x = tensor([0.79, 0.84, 0.32])
paddle.t(x) = tensor([0.79, 0.84, 0.32])

# Example 3 (2-D tensor)
x = tensor([0.79, 0.84, 0.32],
           [0.64, 0.14, 0.57])
paddle.t(x) = tensor([0.79, 0.64],
                     [0.84, 0.14],
                     [0.32, 0.57])
::::::::
Tensor; 0-D and 1-D Tensors are returned as-is, and a 2-D Tensor returns its transpose.

Code Example
::::::::::::

.. code-block:: python
import paddle
x = paddle.ones(shape=[2, 3], dtype='int32')
x_transposed = paddle.t(x)
print(x_transposed.shape)
# [3, 2]
::::::::

COPY-FROM: <paddle.t>:<code-example>
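The behavior documented above (0-D and 1-D Tensors returned as-is, 2-D Tensors transposed) can be mimicked in a few lines of numpy; this `t` helper is an illustrative assumption for readers without paddle installed, not paddle's implementation:

```python
import numpy as np

def t(a):
    # mimic paddle.t's documented contract: ndim <= 2 only;
    # 0-D and 1-D arrays come back unchanged, 2-D arrays are transposed
    arr = np.asarray(a)
    assert arr.ndim <= 2, "only tensors with ndim <= 2 are supported"
    return arr.T if arr.ndim == 2 else arr

print(t(np.array([0.79, 0.84, 0.32])))       # 1-D: unchanged
x = np.ones((2, 3), dtype=np.int32)
print(t(x).shape)                            # (3, 2)
```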
8 changes: 4 additions & 4 deletions docs/guides/04_dygraph_to_static/basic_usage_cn.md
@@ -150,8 +150,8 @@ from paddle.static import InputSpec
x = InputSpec([None, 784], 'float32', 'x')
label = InputSpec([None, 1], 'int64', 'label')

print(x) # InputSpec(shape=(-1, 784), dtype=VarType.FP32, name=x)
print(label) # InputSpec(shape=(-1, 1), dtype=VarType.INT64, name=label)
print(x) # InputSpec(shape=(-1, 784), dtype=paddle.float32, name=x)
print(label) # InputSpec(shape=(-1, 1), dtype=paddle.int64, name=label)
```


@@ -166,7 +166,7 @@ from paddle.static import InputSpec

x = paddle.to_tensor(np.ones([2, 2], np.float32))
x_spec = InputSpec.from_tensor(x, name='x')
print(x_spec) # InputSpec(shape=(2, 2), dtype=VarType.FP32, name=x)
print(x_spec) # InputSpec(shape=(2, 2), dtype=paddle.float32, name=x)
```

> Note: if no new ``name`` is specified in ``from_tensor``, the same ``name`` as the source Tensor is used by default
@@ -182,7 +182,7 @@ from paddle.static import InputSpec

x = np.ones([2, 2], np.float32)
x_spec = InputSpec.from_numpy(x, name='x')
print(x_spec) # InputSpec(shape=(2, 2), dtype=VarType.FP32, name=x)
print(x_spec) # InputSpec(shape=(2, 2), dtype=paddle.float32, name=x)
```

> Note: if no new ``name`` is specified in ``from_numpy``, ``None`` is used by default
