In Section 3.3 ("Concise Implementation of Linear Regression"), we used initializers to set up all of the model's parameters, and we also saw a simple way to access them. This section takes a closer look at how to access and initialize model parameters, and at how to share the same set of parameters across multiple layers.
We start by defining a multilayer perceptron with a single hidden layer, the same as in the previous section. As before, we initialize its parameters in the default way and run one forward computation.
import tensorflow as tf
import numpy as np
print(tf.__version__)
2.0.0
net = tf.keras.models.Sequential()
net.add(tf.keras.layers.Flatten())
net.add(tf.keras.layers.Dense(256, activation=tf.nn.relu))
net.add(tf.keras.layers.Dense(10))
X = tf.random.uniform((2, 20))
Y = net(X)
Y
<tf.Tensor: id=62, shape=(2, 10), dtype=float32, numpy=
array([[ 0.15294254, 0.0355227 , 0.05113338, 0.06625789, 0.12223213,
-0.5954561 , 0.38035268, -0.17244355, 0.6725004 , 0.00750941],
[ 0.12288147, -0.2162356 , -0.02103446, 0.14871466, 0.10256162,
-0.57710034, 0.22278625, -0.21283135, 0.52407515, -0.1426214 ]],
dtype=float32)>
For a network constructed with the Sequential class, we can access the weights of any of its layers through the weights attribute. Recall from the previous section that Sequential inherits from tf.keras.Model: for every parameterized layer in a Sequential instance, the weights attribute inherited from tf.keras.Model exposes that layer's parameters. Below, we access the weight of the hidden layer of the multilayer perceptron net. Index 0 picks out this weight because the hidden layer is the first parameterized layer added to the Sequential instance (the Flatten layer added before it has no parameters).
net.weights[0], type(net.weights[0])
(<tf.Variable 'sequential/dense/kernel:0' shape=(20, 256) dtype=float32, numpy=
array([[-0.07852519, -0.03260126, 0.12601742, ..., 0.11949158,
0.10042094, -0.10598273],
[ 0.03567271, -0.11624913, 0.04699135, ..., -0.12115637,
0.07733515, 0.13183317],
[ 0.03837337, -0.11566538, -0.03314627, ..., -0.10877015,
0.09273799, -0.07031895],
...,
[-0.03430544, -0.00946991, -0.02949082, ..., -0.0956497 ,
-0.13907745, 0.10703176],
[ 0.00447187, -0.07251608, 0.08081181, ..., 0.02697623,
0.05394638, -0.01623751],
[-0.01946831, -0.00950103, -0.14190955, ..., -0.09374787,
0.08714674, 0.12475103]], dtype=float32)>,
tensorflow.python.ops.resource_variable_ops.ResourceVariable)
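Besides indexing net.weights directly, we can enumerate every parameter by name and shape, or reach a layer's parameters through the layer object itself. The snippet below is a minimal sketch of both approaches for the net defined above (the exact variable names printed depend on how many models have already been built in the current session):

# List every parameter of the network together with its name and shape.
for v in net.weights:
    print(v.name, v.shape)

# The same parameters are reachable through the layer objects: net.layers[0]
# is the Flatten layer (no parameters), so the hidden layer sits at index 1,
# and its bias is its second variable.
hidden = net.layers[1]
print(hidden.weights[1].name, hidden.weights[1].shape)

# get_weights() returns plain NumPy arrays rather than tf.Variable objects.
params = net.get_weights()
print(len(params), [p.shape for p in params])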
The section ["Numerical Stability and Model Initialization"] discussed default initialization schemes. By default, Keras's Dense layers draw their weight parameters from the Glorot (Xavier) uniform distribution and set their bias parameters to zero. We often need other initialization methods, however. In the example below, the first layer's weights are initialized with random numbers drawn from a normal distribution with mean 0 and standard deviation 0.01, while its biases stay at zero; the second layer's weights and biases are all initialized to one.
class Linear(tf.keras.Model):
    def __init__(self):
        super().__init__()
        # First layer: weights ~ N(0, 0.01), biases initialized to zero
        self.d1 = tf.keras.layers.Dense(
            units=10,
            activation=None,
            kernel_initializer=tf.random_normal_initializer(mean=0, stddev=0.01),
            bias_initializer=tf.zeros_initializer()
        )
        # Second layer: both weights and biases initialized to one
        self.d2 = tf.keras.layers.Dense(
            units=1,
            activation=None,
            kernel_initializer=tf.ones_initializer(),
            bias_initializer=tf.ones_initializer()
        )

    def call(self, inputs):
        output = self.d1(inputs)
        output = self.d2(output)
        return output
net = Linear()
net(X)
net.get_weights()
[array([[-0.00306494, 0.01149799, 0.00900665, -0.00952527, -0.00651997,
0.00010531, 0.00802666, -0.01102469, 0.01838934, 0.00915548],
[ 0.00401672, 0.01788972, -0.00245794, -0.01051202, 0.02268461,
-0.00271502, -0.00447782, 0.00636486, 0.00408998, -0.01373187],
[-0.00468962, -0.00180526, -0.0117501 , 0.01840584, 0.00044537,
-0.00745311, 0.01155732, -0.00615015, -0.00942082, -0.00023081],
[-0.01116156, -0.00614527, -0.00119119, -0.00843481, 0.01192368,
0.00889105, -0.01000126, -0.0017869 , -0.00833272, 0.0019026 ],
[ 0.0183291 , -0.00640716, 0.00936602, 0.01040828, -0.00140882,
-0.00143817, 0.00126366, 0.01094474, 0.0132029 , 0.00405393],
[-0.00548183, -0.00489746, -0.01264372, -0.00501967, 0.00602909,
0.00439432, 0.02449438, 0.00426046, -0.0017243 , -0.00319188],
[-0.00034199, -0.00648715, -0.00694025, -0.00984227, 0.02798587,
-0.01283635, -0.01735584, -0.00181439, 0.01585936, 0.00348289],
[ 0.00181157, -0.00343991, 0.01415697, -0.00160312, 0.0018713 ,
-0.00968461, -0.00268579, 0.01320006, -0.00041133, -0.01282531],
[-0.0145638 , 0.0096653 , -0.00787722, -0.00073892, -0.00222261,
0.0031008 , -0.01858314, 0.00559973, 0.00439452, -0.02467434],
[-0.00303086, 0.0015006 , -0.00920389, 0.01035136, -0.00040001,
-0.00945453, -0.00506378, 0.00816534, 0.00347233, 0.01201165],
[ 0.01979353, 0.00881971, -0.00060045, -0.00671935, 0.02482731,
-0.0039808 , 0.01195751, -0.00499541, -0.01421177, 0.00125722],
[-0.00206965, 0.00737946, 0.02711954, -0.00566722, -0.01916223,
0.00635906, -0.00112362, 0.00351852, 0.0027598 , 0.00804986],
[ 0.00190901, 0.00799948, -0.01007551, -0.00751526, 0.0027352 ,
-0.00126002, 0.00079498, -0.00190032, -0.00912007, 0.00432031],
[-0.00574654, 0.00703932, 0.00375365, 0.01700558, -0.00392553,
0.00246399, 0.00686003, -0.00327425, -0.00158563, 0.01139532],
[-0.010441 , -0.01566261, 0.01807244, -0.01265192, -0.00422926,
-0.00729915, -0.00717674, -0.00036729, 0.00728995, 0.0034066 ],
[-0.00497032, -0.01395558, -0.00276683, 0.0114197 , -0.01044411,
-0.01518542, 0.00793149, -0.00169621, -0.008745 , -0.00825851],
[-0.00098009, -0.00765272, -0.01993775, 0.0207908 , -0.0088134 ,
0.01211826, 0.0033179 , 0.0064116 , 0.00399073, 0.00067746],
[ 0.00282402, 0.00589997, 0.00674444, -0.01209166, -0.00875635,
0.01789016, -0.00037993, 0.00392861, 0.02248183, -0.00427692],
[-0.00629026, -0.01388059, 0.0160582 , 0.00855581, 0.00170209,
0.00430258, 0.0092911 , 0.00232163, 0.00591121, 0.02038265],
[-0.00792203, -0.00259904, -0.00109487, -0.00959524, -0.00030968,
-0.01322429, 0.00489308, 0.00503101, 0.01801165, 0.00972504]],
dtype=float32),
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32),
array([[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.],
[1.]], dtype=float32),
array([1.], dtype=float32)]
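Initializers only take effect when a layer's variables are first created. To reset the parameters of a model that has already been built, we can overwrite the existing variables with assign. The snippet below is a small sketch of this idea for the Linear instance net defined above; reset_first_layer is a hypothetical helper of our own, not a Keras API:

# Re-initialize the first layer of an already-built model in place.
# tf.Variable.assign overwrites a variable's value without replacing it.
def reset_first_layer(model, stddev=0.01):
    kernel, bias = model.d1.kernel, model.d1.bias
    kernel.assign(tf.random.normal(kernel.shape, mean=0.0, stddev=stddev))
    bias.assign(tf.zeros(bias.shape))

reset_first_layer(net)
print(net.d1.kernel.numpy().std())  # roughly 0.01 after the reset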
We can also use the classes in tf.keras.initializers to implement customized initialization schemes.
def my_init():
    return tf.keras.initializers.Ones()
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(64, kernel_initializer=my_init()))
Y = model(X)
model.weights[0]
<tf.Variable 'sequential_1/dense_4/kernel:0' shape=(20, 64) dtype=float32, numpy=
array([[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
...,
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.]], dtype=float32)>
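The my_init above simply returns a built-in initializer. For genuinely custom behavior, Keras also accepts any callable that maps a shape (and dtype) to a tensor, or a subclass of tf.keras.initializers.Initializer. The sketch below uses such a callable; the name half_zero_uniform and the particular distribution are our own choices for illustration:

# A custom initializer: draw weights uniformly from [-10, 10], then zero out
# roughly half of them so that the layer starts out sparse.
def half_zero_uniform(shape, dtype=None):
    dtype = dtype or tf.float32
    values = tf.random.uniform(shape, minval=-10, maxval=10, dtype=dtype)
    mask = tf.cast(tf.random.uniform(shape) > 0.5, dtype)
    return values * mask

layer = tf.keras.layers.Dense(64, kernel_initializer=half_zero_uniform)
layer(X)  # building the layer on an input triggers the initializer
print(layer.weights[0])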
Note: Except for the code, this section is essentially the same as the corresponding section in the original book (link to the original book).