Tensorflow – How to re-initialize layer weights of an existing model in Keras

conv-neural-network keras tensorflow

The actual problem is generating random layer weights for an existing (already built) model in Keras. There are some solutions using Numpy [2], but they are not a good choice, because Keras uses dedicated initializers with a different distribution for each layer type. When Numpy is used instead of these initializers, the generated weights follow a different distribution than the original ones. Let's give an example:

The second layer of my model is a convolutional (1D) layer and its initializer is GlorotUniform [1]. If you generate random weights using Numpy, the distribution of the generated weights will not be GlorotUniform.
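For illustration, a Keras initializer can be called directly to sample from the correct distribution. This is a minimal sketch using tf.keras; the Conv1D kernel shape below is a made-up example:

import numpy as np
import tensorflow as tf

# Sample a Conv1D-kernel-shaped tensor from GlorotUniform, as Keras itself would.
# The shape (kernel_size, input_channels, filters) is an arbitrary example.
init = tf.keras.initializers.GlorotUniform()
glorot_sample = init(shape=(3, 64, 128))

# A plain Numpy draw covers a different range (here uniform over [0, 1)),
# so the resulting weights do not follow the Glorot distribution.
numpy_sample = np.random.uniform(size=(3, 64, 128))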

I have a workaround for this problem, but it has its own drawbacks. Here is what I have:

def set_random_weights(self, tokenizer, config):
    # Build a fresh model so its layers get newly initialized weights,
    # then copy those weights into the existing model.
    temp_model = build_model(tokenizer, config)
    self.model.set_weights(temp_model.get_weights())

I build the existing model a second time. During the building process, the weights are re-initialized by the layers' own initializers; I then read the re-initialized weights and set them on the original model. Rebuilding a whole model just to generate new weights is redundant, so I need a solution that requires neither building a model nor Numpy.

  1. https://keras.io/initializers/
  2. https://www.codementor.io/nitinsurya/how-to-re-initialize-keras-model-weights-et41zre2g

Best Answer

See the previous answers to this question. Specifically, if you want to re-initialize a Keras layer's weights using its original initializers, you can do the following:

import tensorflow as tf
import keras.backend as K

def init_layer(layer):
    # Re-run the layer's variable initializers inside the current Keras session.
    # Each variable keeps the initializer it was created with (e.g. GlorotUniform),
    # so the weights are re-sampled from their original distributions.
    session = K.get_session()
    weights_initializer = tf.variables_initializer(layer.weights)
    session.run(weights_initializer)


layer = model.get_layer('conv2d_1')
init_layer(layer)
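
Note that K.get_session and tf.variables_initializer only exist in the TF1-style graph/session API. Under TensorFlow 2 (eager execution), a similar effect can be sketched by re-sampling each weight from the initializer the layer recorded at construction time; this assumes standard layers that expose kernel/bias and the matching *_initializer attributes:

import tensorflow as tf

def reinit_layer(layer):
    # Re-sample weights from the initializers the layer was configured with.
    # Assumes standard layers (Dense, Conv, ...) exposing kernel/bias and
    # the matching *_initializer attributes; other layer types are skipped.
    if hasattr(layer, 'kernel_initializer') and hasattr(layer, 'kernel'):
        layer.kernel.assign(layer.kernel_initializer(layer.kernel.shape))
    if hasattr(layer, 'bias_initializer') and getattr(layer, 'bias', None) is not None:
        layer.bias.assign(layer.bias_initializer(layer.bias.shape))

reinit_layer(model.get_layer('conv2d_1'))

Layers with other weight names (e.g. BatchNormalization, which uses gamma and beta) would need their own handling along the same lines.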