TypeError: Input ‘y’ of ‘Sub’ Op has type float16 that does not match type float32 of argument ‘x’

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS
  • TensorFlow installed from (source or binary): Colab
  • TensorFlow version (use command below): 2.5.0
  • Python version: 3.7
  • GPU model and memory: Tesla T4

Error
TypeError: Input 'y' of 'Sub' Op has type float16 that does not match type float32 of argument 'x'

Current behaviour

While using mixed precision and building a Keras Functional API model (EfficientNetB0), the error above is raised.


Describe the expected behaviour
The global policy I set in the previous cell was mixed_float16, so the model should build without a dtype mismatch. The same code works fine on TensorFlow 2.4.1, so the bug appears to be in TensorFlow 2.5.0.

You can reproduce the same error using the notebook below:


Can you post the code from the screenshot? The Colab notebook seems like a more complex and slower way to reproduce this.


For some reason, I can’t include images or links in this reply.

The above screenshots are enough to get the gist of the problem, but if it's still unclear, please check TensorFlow's GitHub issues; I've reported the same issue there too.


You can include inline code in the reply. From the screenshot I see that there are just a few lines.


Code used when I set the global policy:

from tensorflow.keras import mixed_precision
mixed_precision.set_global_policy(policy='mixed_float16')

And the dataset I used was in float32.
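
To double-check the policy took effect, a quick sketch like this (the names are just illustrative) shows the dtypes involved; under mixed_float16, Keras layers autocast float32 inputs to float16, so a float32 dataset by itself should be fine:

import tensorflow as tf
from tensorflow.keras import mixed_precision

mixed_precision.set_global_policy('mixed_float16')
policy = mixed_precision.global_policy()
print(policy.compute_dtype)   # float16 -> computations run in half precision
print(policy.variable_dtype)  # float32 -> weights are kept in full precision

# Layers autocast float32 inputs to the compute dtype:
dense = tf.keras.layers.Dense(4)
print(dense(tf.zeros((1, 8), dtype=tf.float32)).dtype)  # float16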


Code used for building the model:

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Create base model
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False # freeze base model layers

# Create Functional model
inputs = layers.Input(shape=input_shape, name="input_layer")
# Note: EfficientNetBX models have rescaling built in, but if your model didn't you could add a layer like below
# x = preprocessing.Rescaling(1./255)(inputs)
x = base_model(inputs, training=False) # set base_model to inference mode only
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
x = layers.Dense(len(class_names))(x) # one output neuron per class (class_names comes from the dataset)
# Separate activation of the output layer so we can output float32 activations
outputs = layers.Activation("softmax", dtype="float32")(x)
model = tf.keras.Model(inputs, outputs)

# Compile the model
model.compile(loss="sparse_categorical_crossentropy", # use sparse_categorical_crossentropy when labels are *not* one-hot
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

model.summary()

You can simply reproduce this with 3 lines:

import tensorflow as tf 
tf.keras.mixed_precision.set_global_policy('mixed_float16')
model = tf.keras.applications.EfficientNetB0()

I think there is an issue with the autocasting in the preprocessing normalization layer.
You could try to open a bug on GitHub.
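
You can even trigger the same error message by hand, since TF ops never auto-promote dtypes; the mismatch just has to happen during graph construction, which is what Keras does when building a model. A minimal sketch:

import tensorflow as tf

@tf.function
def sub(a, b):
    # Sub requires both inputs to have the same dtype; TF does not auto-promote
    return a - b

sub(tf.constant(1.0, tf.float32), tf.constant(1.0, tf.float16))
# TypeError: Input 'y' of 'Sub' Op has type float16 that does not match type float32 of argument 'x'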


I was working with mixed_precision earlier today and things seemed to run smoothly, but when I tried to run the same block of code again it threw an error.


The TensorFlow version I am running is 2.5.0; downgrading to version 2.4.1 works fine. Any help with this?
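
For reference, downgrading in Colab looks something like this:

!pip install tensorflow==2.4.1
# Restart the Colab runtime after installing, then verify:
import tensorflow as tf
print(tf.__version__)  # should print 2.4.1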

Also, reading some of the old threads, it seems a similar issue had been fixed in a previous version update.


The ticket is:


So does that mean there is an issue with the EfficientNetB0 model? Just now I built a ResNet101 model with mixed_precision and it works fine.
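
In other words, something like the following builds fine, while the EfficientNetB0 line raises (based on the reports above; I haven't tested every version):

import tensorflow as tf

tf.keras.mixed_precision.set_global_policy('mixed_float16')
resnet = tf.keras.applications.ResNet101()  # builds fine on 2.5.0
# model = tf.keras.applications.EfficientNetB0()  # raises the Sub TypeError on 2.5.0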


I am facing the same issue with my custom model while using mixed precision. I am using TF 2.5.0, and the issue shows up on both Windows and Ubuntu.

x = self.conv4(y1) + self.conv3(y2)

Here x, y1, and y2 have dtype float16, and self.conv3 and self.conv4 are mixed-precision layers.
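
For context, a self-contained sketch of the pattern (the layer sizes are made up; my real model is bigger):

import tensorflow as tf
from tensorflow.keras import layers, mixed_precision

mixed_precision.set_global_policy('mixed_float16')

class TwoBranchBlock(layers.Layer):
    # Hypothetical reconstruction of the two-branch add described above
    def __init__(self):
        super().__init__()
        self.conv3 = layers.Conv2D(16, 3, padding='same')
        self.conv4 = layers.Conv2D(16, 3, padding='same')

    def call(self, y1, y2):
        # Under mixed_float16 both conv outputs are float16,
        # so the add itself is dtype-consistent
        return self.conv4(y1) + self.conv3(y2)

block = TwoBranchBlock()
out = block(tf.zeros((1, 8, 8, 16)), tf.zeros((1, 8, 8, 16)))
print(out.dtype)  # float16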