Data type error when model is saved

Hello,

My end goal is to convert a network I trained in TF2 to TF-Lite, which will later be compiled for the Edge TPU since I would like to deploy it on a Coral board. The network consists of convolution, pooling, and dense layers (all supported by TF-Lite).

The code I am using to convert the model to TF-Lite (post-training quantization) is the following:

import tensorflow as tf

def convert_to_TFLite(tf_path, tf_lite_path, data_path):
    # Convert the model
    converter = tf.lite.TFLiteConverter.from_saved_model(tf_path)  # path to the SavedModel directory
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS_INT8,  # enable TF-Lite builtin int8 ops
        tf.lite.OpsSet.SELECT_TF_OPS,  # enable select TensorFlow ops
    ]

    converter.allow_custom_ops = True
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    # BatchGenerator is my own calibration-data generator
    get_representative_dataset_gen = BatchGenerator(1000, data_path)
    converter.representative_dataset = get_representative_dataset_gen

    # Coral Dev Board: full integer quantization
    converter.target_spec.supported_types = [tf.int8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8

    tflite_model = converter.convert()

    # Save the model
    with open(tf_lite_path, 'wb') as f:
        f.write(tflite_model)
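For context, BatchGenerator is my own helper class; the converter just needs representative_dataset to be a callable that yields lists of float32 input arrays. A simplified stand-in (not my actual class, and the loading logic is a placeholder) looks like this:

import numpy as np

class BatchGenerator:
    # Simplified stand-in: yields single-sample batches of float32
    # inputs for post-training quantization calibration.
    def __init__(self, num_samples, data_path):
        self.num_samples = num_samples
        self.data = np.load(data_path)  # placeholder loading logic

    def __call__(self):
        for i in range(self.num_samples):
            yield [self.data[i:i + 1].astype(np.float32)]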

The SavedModel at 'tf_path' was created with the following line after training ended (pb_model_path points to the same directory):
model.save(pb_model_path, save_format='tf')

When I try to convert the model to TF-Lite, I get the following error message: "error: 'tf.cast' op is neither a custom op nor a flex op".

After some troubleshooting, I noticed that three of the four input arguments were defined as float64 in the pb file; I can see this when I load the saved model in TensorBoard. The odd thing is that all the data used for training was float32, and tf.keras.backend.floatx() returns 'float32'.

How is this possible?
Does it explain the 'tf.cast' issue?
My main guess is that TF-Lite does not support casting from the float64 data type…
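For reference, the signature dtypes can also be checked without TensorBoard. A quick sanity check (assuming the default 'serving_default' signature key; the path is a placeholder):

import tensorflow as tf

# Print the dtype of every input in the serving signature.
loaded = tf.saved_model.load('path/to/saved_model')  # placeholder path
sig = loaded.signatures['serving_default']
for name, spec in sig.structured_input_signature[1].items():
    print(name, spec.dtype)  # three of the four show float64 in my case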

I am working on macOS (Intel) with Python 3.9.16, protobuf 4.22.1, and TensorFlow 2.12.

Hope somebody can help me!
Thanks!

You are correct: TF-Lite does not support casting from the float64 data type. TF-Lite is designed to be a lightweight inference engine, so its builtin kernels target float32 and quantized integer types, and float64 tensors would take up too much memory on edge devices.

You need to make sure the inputs are float32 before converting the model to TF-Lite, for example by re-exporting the SavedModel with an explicit float32 input signature, as sketched below.
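Here is a minimal sketch of that re-export. The four input names and shapes are placeholders (I don't know your model's actual inputs), so substitute your real ones:

import tensorflow as tf

# Re-export the model with an explicit float32 serving signature so
# no float64 tf.cast ends up in the exported graph.
model = tf.keras.models.load_model('path/to/saved_model')  # placeholder path

@tf.function(input_signature=[
    tf.TensorSpec([None, 64, 64, 3], tf.float32, name='input_0'),  # placeholder shape
    tf.TensorSpec([None, 8], tf.float32, name='input_1'),          # placeholder shape
    tf.TensorSpec([None, 8], tf.float32, name='input_2'),          # placeholder shape
    tf.TensorSpec([None, 1], tf.float32, name='input_3'),          # placeholder shape
])
def serve(input_0, input_1, input_2, input_3):
    # Tracing through a float32-typed signature keeps the exported
    # graph float32 end to end.
    return model([input_0, input_1, input_2, input_3])

tf.saved_model.save(model, 'saved_model_float32',
                    signatures={'serving_default': serve})

Then point the converter at 'saved_model_float32' and make sure your representative dataset also yields float32 arrays.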

Thanks.