What is the warning "The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_1), but are not present in its tracked objects"?

WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_1), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_1/gamma:0' shape=(32,) dtype=float32>
  <tf.Variable 'batch_normalization_1/beta:0' shape=(32,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_2), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_2/gamma:0' shape=(64,) dtype=float32>
  <tf.Variable 'batch_normalization_2/beta:0' shape=(64,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_3), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_3/gamma:0' shape=(32,) dtype=float32>
  <tf.Variable 'batch_normalization_3/beta:0' shape=(32,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_4), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_4/gamma:0' shape=(64,) dtype=float32>
  <tf.Variable 'batch_normalization_4/beta:0' shape=(64,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_5), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_5/gamma:0' shape=(32,) dtype=float32>
  <tf.Variable 'batch_normalization_5/beta:0' shape=(32,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_6), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_6/gamma:0' shape=(64,) dtype=float32>
  <tf.Variable 'batch_normalization_6/beta:0' shape=(64,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_7), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_7/gamma:0' shape=(32,) dtype=float32>
  <tf.Variable 'batch_normalization_7/beta:0' shape=(32,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_8), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_8/gamma:0' shape=(32,) dtype=float32>
  <tf.Variable 'batch_normalization_8/beta:0' shape=(32,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_9), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_9/gamma:0' shape=(32,) dtype=float32>
  <tf.Variable 'batch_normalization_9/beta:0' shape=(32,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_10), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_10/gamma:0' shape=(48,) dtype=float32>
  <tf.Variable 'batch_normalization_10/beta:0' shape=(48,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_11), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_11/gamma:0' shape=(24,) dtype=float32>
  <tf.Variable 'batch_normalization_11/beta:0' shape=(24,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_12), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_12/gamma:0' shape=(8,) dtype=float32>
  <tf.Variable 'batch_normalization_12/beta:0' shape=(8,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_13), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_13/gamma:0' shape=(16,) dtype=float32>
  <tf.Variable 'batch_normalization_13/beta:0' shape=(16,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_14), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_14/gamma:0' shape=(16,) dtype=float32>
  <tf.Variable 'batch_normalization_14/beta:0' shape=(16,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_15), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_15/gamma:0' shape=(25,) dtype=float32>
  <tf.Variable 'batch_normalization_15/beta:0' shape=(25,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_16), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_16/gamma:0' shape=(50,) dtype=float32>
  <tf.Variable 'batch_normalization_16/beta:0' shape=(50,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.
WARNING:tensorflow:
The following Variables were used a Lambda layer's call (tf.compat.v1.nn.fused_batch_norm_17), but
are not present in its tracked objects:
  <tf.Variable 'batch_normalization_17/gamma:0' shape=(100,) dtype=float32>
  <tf.Variable 'batch_normalization_17/beta:0' shape=(100,) dtype=float32>
It is possible that this is intended behavior, but it is more likely
an omission. This is a strong indication that this layer should be
formulated as a subclassed Layer rather than a Lambda layer.

The warning above occurs when running the following code.

# NOTE: the original post does not show its imports; the ones below are
# assumed from the names used in the function body (standalone `keras`
# layers mixed with `tf.keras` BatchNormalization and the Adam optimizer).
import tensorflow as tf
import keras
from keras.layers import (Input, Conv2D, DepthwiseConv2D, Activation,
                          Dropout, AveragePooling2D, Flatten, Dense)
from keras.constraints import max_norm


def EEGInception(input_time=1000, fs=125, ncha=14, filters_per_branch=8,
                 scales_time=(500, 250, 125), dropout_rate=0.9,
                 activation='elu', n_classes=2, learning_rate=0.001):
    """Keras implementation of EEG-Inception. All hyperparameters and
    architectural choices are explained in the original article:
    https://doi.org/10.1109/TNSRE.2020.3048106
    Parameters
    ----------
    input_time : int
        EEG epoch time in milliseconds
    fs : int
        Sample rate of the EEG
    ncha : int
        Number of input channels
    filters_per_branch : int
        Number of filters in each Inception branch
    scales_time : list
        Temporal scale (ms) of the convolutions on each Inception module.
        This parameter determines the kernel sizes of the filters
    dropout_rate : float
        Dropout rate
    activation : str
        Activation function (e.g., 'elu')
    n_classes : int
        Number of output classes
    learning_rate : float
        Learning rate
    Returns
    -------
    model : keras.models.Model
        Keras model already compiled and ready to work
    """

    # ============================= CALCULATIONS ============================= #
    input_samples = int(input_time * fs / 1000)
    scales_samples = [int(s * fs / 1000) for s in scales_time]

    # ================================ INPUT ================================= #
    input_layer = Input((input_samples, ncha, 1))

    # ========================== BLOCK 1: INCEPTION ========================== #
    b1_units = list()
    for i in range(len(scales_samples)):
        unit = Conv2D(filters=filters_per_branch,
                      kernel_size=(scales_samples[i], 1),
                      kernel_initializer='he_normal',
                      padding='same')(input_layer)
        unit = tf.keras.layers.BatchNormalization()(unit)
        unit = Activation(activation)(unit)
        unit = Dropout(dropout_rate)(unit)

        unit = DepthwiseConv2D((1, ncha),
                               use_bias=False,
                               depth_multiplier=2,
                               depthwise_constraint=max_norm(1.))(unit)
        unit = tf.keras.layers.BatchNormalization()(unit)
        unit = Activation(activation)(unit)
        unit = Dropout(dropout_rate)(unit)

        b1_units.append(unit)

    # Concatenation
    b1_out = keras.layers.concatenate(b1_units, axis=3)
    b1_out = AveragePooling2D((4, 1))(b1_out)

    # ========================== BLOCK 2: INCEPTION ========================== #
    b2_units = list()
    for i in range(len(scales_samples)):
        unit = Conv2D(filters=filters_per_branch,
                      kernel_size=(int(scales_samples[i]/4), 1),
                      kernel_initializer='he_normal',
                      use_bias=False,
                      padding='same')(b1_out)
        unit = tf.keras.layers.BatchNormalization()(unit)
        unit = Activation(activation)(unit)
        unit = Dropout(dropout_rate)(unit)

        b2_units.append(unit)

    # Concatenate + Average pooling
    b2_out = keras.layers.concatenate(b2_units, axis=3)
    b2_out = AveragePooling2D((2, 1))(b2_out)

    # ============================ BLOCK 3: OUTPUT =========================== #
    b3_u1 = Conv2D(filters=int(filters_per_branch*len(scales_samples)/2),
                   kernel_size=(8, 1),
                   kernel_initializer='he_normal',
                   use_bias=False,
                   padding='same')(b2_out)
    b3_u1 = tf.keras.layers.BatchNormalization()(b3_u1)
    b3_u1 = Activation(activation)(b3_u1)
    b3_u1 = AveragePooling2D((2, 1))(b3_u1)
    b3_u1 = Dropout(dropout_rate)(b3_u1)

    b3_u2 = Conv2D(filters=int(filters_per_branch*len(scales_samples)/4),
                   kernel_size=(4, 1),
                   kernel_initializer='he_normal',
                   use_bias=False,
                   padding='same')(b3_u1)
    b3_u2 = tf.keras.layers.BatchNormalization()(b3_u2)
    b3_u2 = Activation(activation)(b3_u2)
    b3_u2 = AveragePooling2D((2, 1))(b3_u2)
    b3_out = Dropout(dropout_rate)(b3_u2)

    # Output layer
    output_layer = Flatten()(b3_out)
    output_layer = Dense(n_classes, activation='softmax')(output_layer)

    # ================================ MODEL ================================= #
    model = keras.models.Model(inputs=input_layer, outputs=output_layer)
    optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate, beta_1=0.9,
                                         beta_2=0.999, amsgrad=False)
    model.compile(loss='binary_crossentropy', optimizer=optimizer,
                  metrics=['accuracy'])
    return model
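
For reference, the model can be built and run on dummy data like this (a minimal sketch; the call below just uses the defaults from the signature above, and the dummy input shape follows from input_samples = 125 and ncha = 14):

import numpy as np

# Build the model with its default hyperparameters.
model = EEGInception()   # input_time=1000, fs=125, ncha=14, n_classes=2
model.summary()

# Dummy batch of 4 EEG epochs: 125 time samples, 14 channels, 1 feature map.
x = np.random.randn(4, 125, 14, 1).astype('float32')
preds = model.predict(x)
print(preds.shape)  # (4, 2) softmax probabilities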

This warning occurs with the code above.
The TensorFlow version is 2.11.0.
I suspect that tf.keras.layers.BatchNormalization causes this warning,
but I don't know how to resolve it.

Hi @ruorch

Welcome to the TensorFlow Forum!

The given code works fine and does not show any warning when I tried replicating it using TensorFlow 2.11 and TensorFlow 2.12 in Google Colab.

However, you can put the code below at the top of your script, before importing TensorFlow, to avoid getting these warning messages.

import os
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "2"

Please try again and let us know if the issue still persists. Thank you.
