Is it possible to start TensorFlow from an intermediate layer?

Hello,

I was wondering:

Let's say I get the output of an intermediate layer. Would it be possible to feed that data back in and resume processing only from that layer onward?

Just wondering…

Hi @Lolcocks, it is possible to feed data to an intermediate layer. For example, if you want to pass input starting from the 3rd layer of a model, you can use tf.keras.models.Sequential(model.layers[3:]), which returns a model consisting of layer 3 onwards. You can then build it with the .build(input_shape) method, where input_shape is the shape of the input you want to pass to that intermediate layer, and then feed your input to it.
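The approach above can be sketched on a toy model. The layer sizes and shapes here are made up purely for illustration, and this only works because the model is a plain linear stack of layers:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model; layer counts and shapes are illustrative only.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
])
model.build((None, 32, 32, 3))

# Tail model from layer 3 onwards (here, the last Conv2D).
tail = tf.keras.models.Sequential(model.layers[3:])
# Build with the shape of the tensor you intend to feed in:
# after the pooling layer that shape is (None, 14, 14, 16).
tail.build((None, 14, 14, 16))

# Feed an intermediate-shaped tensor directly into the tail.
intermediate = np.random.rand(1, 14, 14, 16).astype("float32")
out = tail.predict(intermediate)
print(out.shape)  # (1, 12, 12, 32)
```

Note that this Sequential trick assumes each layer feeds only into the next one; it cannot represent branches or merge layers.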

1 Like

Thank you @Kiran_Sai_Ramineni ! I am extremely new to machine learning and TensorFlow so kindly bear with me here.

I got the output of my 31st layer using:

conv2d = Model(inputs = self.model_ori.input, outputs= self.model_ori.layers[31].output)
intermediateResult = conv2d.predict(img)

So I would then run:

newmodel = keras.Sequential(self.model_ori.layers[32:])
newmodel = newmodel.build(intermediateResult.shape)

Is my understanding correct?

EDIT:
I did the above, but I got this error:

A merge layer should be called on a list of inputs. Received: inputs=Tensor("up_sampling2d_2/resize/ResizeNearestNeighbor:0", shape=(1, 26, 26, 128), dtype=float32) (not a list of tensors)

Here is my model summary:

Model: "model"
__________________________________________________________________________________________________
 Layer (type)                   Output Shape         Param #     Connected to
==================================================================================================
 input_1 (InputLayer)           [(None, None, None,  0           []
                                 3)]

 conv2d (Conv2D)                (None, None, None,   432         ['input_1[0][0]']
                                16)

 batch_normalization (BatchNorm  (None, None, None,   64         ['conv2d[0][0]']
 alization)                     16)

 leaky_re_lu (LeakyReLU)        (None, None, None,   0           ['batch_normalization[0][0]']
                                16)

 max_pooling2d (MaxPooling2D)   (None, None, None,   0           ['leaky_re_lu[0][0]']
                                16)

 conv2d_1 (Conv2D)              (None, None, None,   4608        ['max_pooling2d[0][0]']
                                32)

 batch_normalization_1 (BatchNo  (None, None, None,   128        ['conv2d_1[0][0]']
 rmalization)                   32)

 leaky_re_lu_1 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_1[0][0]']
                                32)

 max_pooling2d_1 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_1[0][0]']
                                32)

 conv2d_2 (Conv2D)              (None, None, None,   18432       ['max_pooling2d_1[0][0]']
                                64)

 batch_normalization_2 (BatchNo  (None, None, None,   256        ['conv2d_2[0][0]']
 rmalization)                   64)

 leaky_re_lu_2 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_2[0][0]']
                                64)

 max_pooling2d_2 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_2[0][0]']
                                64)

 conv2d_3 (Conv2D)              (None, None, None,   73728       ['max_pooling2d_2[0][0]']
                                128)

 batch_normalization_3 (BatchNo  (None, None, None,   512        ['conv2d_3[0][0]']
 rmalization)                   128)

 leaky_re_lu_3 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_3[0][0]']
                                128)

 max_pooling2d_3 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_3[0][0]']
                                128)

 conv2d_4 (Conv2D)              (None, None, None,   294912      ['max_pooling2d_3[0][0]']
                                256)

 batch_normalization_4 (BatchNo  (None, None, None,   1024       ['conv2d_4[0][0]']
 rmalization)                   256)

 leaky_re_lu_4 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_4[0][0]']
                                256)

 max_pooling2d_4 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_4[0][0]']
                                256)

 conv2d_5 (Conv2D)              (None, None, None,   1179648     ['max_pooling2d_4[0][0]']
                                512)

 batch_normalization_5 (BatchNo  (None, None, None,   2048       ['conv2d_5[0][0]']
 rmalization)                   512)

 leaky_re_lu_5 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_5[0][0]']
                                512)

 max_pooling2d_5 (MaxPooling2D)  (None, None, None,   0          ['leaky_re_lu_5[0][0]']
                                512)

 conv2d_6 (Conv2D)              (None, None, None,   4718592     ['max_pooling2d_5[0][0]']
                                1024)

 batch_normalization_6 (BatchNo  (None, None, None,   4096       ['conv2d_6[0][0]']
 rmalization)                   1024)

 leaky_re_lu_6 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_6[0][0]']
                                1024)

 conv2d_7 (Conv2D)              (None, None, None,   262144      ['leaky_re_lu_6[0][0]']
                                256)

 batch_normalization_7 (BatchNo  (None, None, None,   1024       ['conv2d_7[0][0]']
 rmalization)                   256)

 leaky_re_lu_7 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_7[0][0]']
                                256)

 conv2d_10 (Conv2D)             (None, None, None,   32768       ['leaky_re_lu_7[0][0]']
                                128)

 batch_normalization_9 (BatchNo  (None, None, None,   512        ['conv2d_10[0][0]']
 rmalization)                   128)

 leaky_re_lu_9 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_9[0][0]']
                                128)

 up_sampling2d (UpSampling2D)   (None, None, None,   0           ['leaky_re_lu_9[0][0]']
                                128)

 concatenate (Concatenate)      (None, None, None,   0           ['up_sampling2d[0][0]',
                                384)                              'leaky_re_lu_4[0][0]']

 conv2d_8 (Conv2D)              (None, None, None,   1179648     ['leaky_re_lu_7[0][0]']
                                512)

 conv2d_11 (Conv2D)             (None, None, None,   884736      ['concatenate[0][0]']
                                256)

 batch_normalization_8 (BatchNo  (None, None, None,   2048       ['conv2d_8[0][0]']
 rmalization)                   512)

 batch_normalization_10 (BatchN  (None, None, None,   1024       ['conv2d_11[0][0]']
 ormalization)                  256)

 leaky_re_lu_8 (LeakyReLU)      (None, None, None,   0           ['batch_normalization_8[0][0]']
                                512)

 leaky_re_lu_10 (LeakyReLU)     (None, None, None,   0           ['batch_normalization_10[0][0]']
                                256)

 conv2d_9 (Conv2D)              (None, None, None,   130815      ['leaky_re_lu_8[0][0]']
                                255)

 conv2d_12 (Conv2D)             (None, None, None,   65535       ['leaky_re_lu_10[0][0]']
                                255)

==================================================================================================
Total params: 8,858,734
Trainable params: 8,852,366
Non-trainable params: 6,368
__________________________________________________________________________________________________
None

Can someone kindly help me out?

Sincerely,
Lolcocks.

Hi @Lolcocks, when you are building a model using .build(input_shape), please provide the input shape as a list; for example, model.build([None,56,56,24]). Thanks!
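A minimal sketch of building with a list-form shape (the layer sizes here are illustrative, not taken from the model above):

```python
import tensorflow as tf

# Illustrative tail model with unbuilt layers.
tail = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, padding="same"),
    tf.keras.layers.BatchNormalization(),
])

# build() accepts the input shape as a list; None marks unknown dimensions.
# It modifies the model in place and returns None, so do not assign its
# result back to the model variable.
tail.build([None, 56, 56, 24])
print(tail.built)  # True
```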

1 Like

Thanks!

I already tried that and I am getting the same error. I am confused as to what exactly input_shape expects.

conv2d = Model(inputs = self.model_ori.input, outputs= self.model_ori.layers[31].output)
intermediateResult = conv2d.predict(img)

print(intermediateResult.shape)  #Prints tuple: (1, 13, 13, 128)

newmodel = keras.Sequential(self.model_ori.layers[32:])
newmodel = newmodel.build([None,13,13,128])

Error:

A merge layer should be called on a list of inputs. Received: inputs=Tensor("up_sampling2d_2/resize/ResizeNearestNeighbor:0", shape=(None, 26, 26, 128), dtype=float32) (not a list of tensors)

Could you kindly help me one more time?

Sincerely,
Lolcocks.

Hi @Lolcocks, may I know the output shape of layer 32? Thanks!

Thank you for the quick response.

print(intermediateResult.shape)  #Prints tuple: (1, 13, 13, 128)

From the model.summary():

conv2d_10 (Conv2D)             (None, None, None, 128)

The output shapes of layer 31 and layer 32 are the same.

__________________________________________________________________________________________________
 Layer (type)                                 Output Shape         Param #     Connected to
==================================================================================================

 leaky_re_lu_7 (LeakyReLU) (Layer 30)         (None, None, None,   0           ['batch_normalization_7[0][0]']
                                               256)

 conv2d_10 (Conv2D) (Layer 31)                (None, None, None,   32768       ['leaky_re_lu_7[0][0]']
                                               128)

 batch_normalization_9 (BatchNo               (None, None, None,   512        ['conv2d_10[0][0]']
 rmalization) (Layer 32)                       128)

 leaky_re_lu_9 (LeakyReLU) (Layer 33)         (None, None, None,   0           ['batch_normalization_9[0][0]']
                                                128)

Can you please try the shape [None,None,None,128] while building the model? Thanks!

Tried that as well, same issue. It keeps throwing the same error that it needs a list of inputs.

conv2d = Model(inputs = self.model_ori.input, outputs= self.model_ori.layers[31].output)
intermediateResult = conv2d.predict(img)

print(intermediateResult.shape)  #Prints tuple: (1, 13, 13, 128)

newmodel = keras.Sequential(self.model_ori.layers[32:])
newmodel = newmodel.build([None,None,None,128])

Error:

A merge layer should be called on a list of inputs. Received: inputs=Tensor("up_sampling2d_2/resize/ResizeNearestNeighbor:0", shape=(None, None, None, 128), dtype=float32) (not a list of tensors)

I am really confused at this point.

Sincerely,
Lolcocks.