It seems that tf.train.init_from_checkpoint initializes variables created via tf.get_variable, but not those created via tf.Variable.
For example, let's create two variables and save them:
import tensorflow as tf

# One variable created directly, one registered through the variable store.
tf.Variable(1.0, name='foo')
tf.get_variable('bar', initializer=1.0)
saver = tf.train.Saver()

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # Writes checkpoint files with prefix ./model-0.
    saver.save(sess, './model', global_step=0)
If I load them again via a tf.train.Saver, everything works fine: both variables are loaded back to 1 even though they are initialized at zero here:
import tensorflow as tf

foo = tf.Variable(0.0, name='foo')
bar = tf.get_variable('bar', initializer=0.0)
saver = tf.train.Saver()

with tf.Session() as sess:
    saver.restore(sess, './model-0')
    print(f'foo: {foo.eval()} bar: {bar.eval()}')
    # foo: 1.0 bar: 1.0
However, if I use tf.train.init_from_checkpoint, I get:
import tensorflow as tf

foo = tf.Variable(0.0, name='foo')
bar = tf.get_variable('bar', initializer=0.0)
# Map everything in the checkpoint onto the current (root) scope.
tf.train.init_from_checkpoint('./model-0', {'/': '/'})

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    print(f'foo: {foo.eval()} bar: {bar.eval()}')
    # foo: 0.0 bar: 1.0
bar is set back to 1 as expected, but foo remains at 0.
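Both variables really are in the checkpoint (the Saver example above restores foo just fine); this can also be double-checked by listing the checkpoint's contents, as in this quick sketch against the ./model-0 checkpoint written above:

import tensorflow as tf

# Prints a (name, shape) pair for every variable stored in the checkpoint;
# both 'foo' and 'bar' should show up.
for name, shape in tf.train.list_variables('./model-0'):
    print(name, shape)

So the checkpoint contains foo; it is only init_from_checkpoint that skips it.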
Is this the intended behavior? If so, why?
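For now I can work around it by reading the saved value and assigning it to the tf.Variable by hand. This is only a sketch of a workaround, not necessarily the intended fix; it relies on tf.train.load_variable, which returns the raw value stored under a given name, and Variable.load, which assigns a value within a session:

import tensorflow as tf

foo = tf.Variable(0.0, name='foo')
bar = tf.get_variable('bar', initializer=0.0)
# init_from_checkpoint still handles bar; foo is patched manually below.
tf.train.init_from_checkpoint('./model-0', {'/': '/'})

with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # Read the saved value of 'foo' and push it into the tf.Variable.
    foo.load(tf.train.load_variable('./model-0', 'foo'), sess)
    print(f'foo: {foo.eval()} bar: {bar.eval()}')
    # expected: foo: 1.0 bar: 1.0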