What is the correct way to perform tensorflow operations in a custom Keras Layer? (Avoiding TFOpLambda layers)

Hi all,

I have some code with custom Keras layers that I’m trying to run on newer versions of TensorFlow (>=2.5). TensorFlow introduced behavior whereby, when building a Keras model, TensorFlow operations in the model layers are automatically wrapped in new TFOpLambda or SlicingOpLambda layers. Unfortunately this behavior is breaking for me, for a few reasons (a minimal reproduction follows the list below):

  • the hundreds of resulting op layers make it impossible to find specific layers for later use, and the model summary (model.summary()) is illegible
  • the model cannot be serialized, since the inputs to the TFOpLambda layers are tensors or other non-serializable data types
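
To illustrate, even a trivial functional model reproduces the wrapping (a minimal sketch, not my actual code; tf.math.sin here is just a stand-in for any raw TF op):

import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
# applying a raw TF op to a symbolic Keras tensor gets it wrapped
# automatically as its own layer in newer TF versions
x = tf.math.sin(inputs)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)
model.summary()  # lists "tf.math.sin (TFOpLambda)" as a separate layer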

The code itself was implemented entirely with Keras symbolic tensors and operations. For example, weights are added in the build method like this:

self.equatorial_kernel = self.add_weight(shape=kernel_shape,
                                         initializer=self.kernel_initializer,
                                         name='equatorial_kernel',
                                         regularizer=self.kernel_regularizer,
                                         constraint=self.kernel_constraint)

and the operations within the call methods use the Keras backend (imported via import tensorflow.python.keras.backend as K), for instance:

K.conv2d(
    # select face f of the input, respecting the data_format
    inputs[:, :, f, :, :] if channels_first else inputs[:, f],
    self.equatorial_kernel,
    strides=self.strides,
    padding=self.padding,
    data_format=self.data_format,
    dilation_rate=self.dilation_rate
)

In other words, given consistent use of the Keras API, I do not understand why my custom layers are being transformed into hundreds of unusable TFOpLambda layers.
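
Putting the pieces together, a stripped-down version of one of these layers looks roughly like this (a minimal sketch: the base class, shapes, defaults, and the per-face loop are simplified relative to my real code):

import tensorflow as tf
from tensorflow.python.keras import backend as K

class EquatorialConv2D(tf.keras.layers.Layer):
    """Stripped-down stand-in for one of my custom layers."""

    def __init__(self, filters, kernel_size=(3, 3), **kwargs):
        super().__init__(**kwargs)
        self.filters = filters
        self.kernel_size = kernel_size

    def build(self, input_shape):
        # kernel shape: (kh, kw, in_channels, out_channels)
        kernel_shape = self.kernel_size + (int(input_shape[-1]), self.filters)
        self.equatorial_kernel = self.add_weight(shape=kernel_shape,
                                                 initializer='glorot_uniform',
                                                 name='equatorial_kernel')
        super().build(input_shape)

    def call(self, inputs):
        return K.conv2d(inputs, self.equatorial_kernel, padding='same')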

On a (possibly related) side note, despite adding weights with the add_weight method, the model summary also returns zero trainable parameters:

Total params: 0
Trainable params: 0
Non-trainable params: 0

I can confirm that while the model runs, the weights are not being updated.
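
For reference, the registered variables can be inspected directly with a check along these lines (model being the built model), independent of what model.summary() reports:

for layer in model.layers:
    print(layer.name, [w.shape for w in layer.trainable_weights])
print('trainable variables:', len(model.trainable_variables))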

Is there any way to disable this behavior and make my custom layers work as they did in TensorFlow 2.3?

Hi @jweyn

Welcome to the TensorFlow Forum!

Could you please tell us if this issue still persists? If so, please share some more details along with minimal reproducible code so we can replicate and understand the issue. Thank you.