TFLiteConverter input error at LSTM layer

Hello, I’ve been trying to convert a Keras model that includes an LSTM layer into an int8 TFLite model (post-training quantization).

First, I built a model [Dense - LSTM - FC] and tried to convert it to TFLite,
but I got the error below.

loc("tfl.unidirectional_sequence_lstm"): error: Input 0 should be from DequantizeCast, Statistics,  or ops with same scale requirement.

I don’t exactly understand what the error means, but I assumed it has something to do with the scale of the LSTM layer’s input.
So I tried a second model [Dense - BatchNorm - ReLU6 - LSTM - FC]. In this case, the model converted without the error.
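
For reference, here is a minimal sketch of the two topologies I mean. The input shape and layer sizes are placeholders, not the ones from my actual model.

import tensorflow as tf

# Placeholder shapes/sizes for illustration only
TIME_STEPS, FEATURES, UNITS, NUM_CLASSES = 20, 10, 32, 5

# First model [Dense - LSTM - FC]: this one fails to convert
model_a = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIME_STEPS, FEATURES)),
    tf.keras.layers.Dense(UNITS),
    tf.keras.layers.LSTM(UNITS),
    tf.keras.layers.Dense(NUM_CLASSES),
])

# Second model [Dense - BatchNorm - ReLU6 - LSTM - FC]: this one converts
model_b = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIME_STEPS, FEATURES)),
    tf.keras.layers.Dense(UNITS),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.ReLU(max_value=6.0),
    tf.keras.layers.LSTM(UNITS),
    tf.keras.layers.Dense(NUM_CLASSES),
])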

But for my further research, I would like to know the input requirements of the tfl.unidirectional_sequence_lstm op and the exact meaning of this error.

Thank you for your help!

For reference, this is the TFLiteConverter code I used.

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This sets the representative dataset for quantization
converter.representative_dataset = representative_data_gen
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
#converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS, tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
#converter._experimental_lower_tensor_list_ops = False
# For full-integer quantization; supported types defaults to int8 only, but we declare it explicitly for clarity
converter.target_spec.supported_types = [tf.int8]
# These set the input and output tensors to uint8
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model_quant = converter.convert()
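
representative_data_gen is the usual calibration generator for post-training quantization. A minimal sketch with random placeholder data and an assumed input shape would look like the following; the real generator should yield samples from the actual input distribution.

import numpy as np

def representative_data_gen():
    # Placeholder calibration data; shape (1, TIME_STEPS, FEATURES) is assumed
    for _ in range(100):
        yield [np.random.rand(1, 20, 10).astype(np.float32)]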

Hello @dals2539,
Were you able to find a solution to this issue?
I am facing the same problem.

Cheers