Performance drop with float tflite conversion

I was testing a custom tflite model and noticed a significant drop in accuracy after tflite conversion. I understand some drop can be expected, but this was a plain float conversion with no optimization applied whatsoever, so I am not sure what caused it.

Keras model accuracy: 85%
Float tflite model accuracy: 68%

The tflite conversion config is as follows.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(fixed_batch_model)
tfmodel = converter.convert()

Unfortunately, I cannot upload the model due to company policy.

When I checked the model weights, they differ between the two models even though they are in the same precision. As I understand it, this should not happen since no optimization is applied.
Clearly, I am missing some under-the-hood conversion step.
Please let me know what I am missing here.
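For anyone wanting to reproduce the weight comparison: the tensors stored in the converted flatbuffer can be listed and read back through the interpreter. A minimal sketch, using a hypothetical toy model as a stand-in since the real one cannot be shared:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the original model (the real one can't be shared).
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3),
])

tflite_model = tf.lite.TFLiteConverter.from_keras_model(keras_model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# List every tensor the converted model contains; constant tensors
# (weights/biases) can then be read with interpreter.get_tensor(index)
# and compared against the Keras weights.
for detail in interpreter.get_tensor_details():
    print(detail["index"], detail["name"], detail["shape"], detail["dtype"])

kernel = keras_model.layers[0].kernel.numpy()
print("Keras kernel shape:", kernel.shape)
```

Note that the converter may transpose or fuse weight tensors during graph transformation, so an element-wise mismatch does not by itself prove a precision problem.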
Thanks a lot in advance.



The conversion steps seem to be fine. Can you please re-verify the interpreter steps?

You can test inference one example at a time instead of fixed-batch inference.

Also, can you check what kind of examples it is failing on?
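The per-example check suggested above can be sketched like this, assuming a hypothetical toy Keras model in place of the real one: run the TFLite interpreter on one sample at a time and compare each output against the Keras prediction.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the original model.
keras_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(keras_model)
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Run inference one sample at a time and compare against Keras.
x = np.random.rand(10, 4).astype(np.float32)
keras_out = keras_model.predict(x, verbose=0)
for i in range(len(x)):
    interpreter.set_tensor(input_details["index"], x[i:i + 1])
    interpreter.invoke()
    tflite_out = interpreter.get_tensor(output_details["index"])
    # For a pure float conversion the two outputs should agree closely;
    # samples where they diverge are the ones worth inspecting.
    np.testing.assert_allclose(keras_out[i:i + 1], tflite_out, atol=1e-4)
```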

Thank you!

To my knowledge, there are various problems with the op implementations in the TFLite runtime, which can significantly degrade accuracy even for a Float32 model.

Below is an example from my repository showing a workaround for a problem in PReLU that can be catastrophic for accuracy. This is just one example; many other problems are inherent to the runtime.
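The usual workaround pattern for a problematic op is to rebuild it from primitives that the runtime handles reliably. For PReLU specifically, the identity prelu(x) = relu(x) - alpha * relu(-x) lets you express it with only ReLU, Neg, and Mul. A minimal NumPy sketch of the equivalence (the function names here are illustrative, not from any library):

```python
import numpy as np

def prelu(x, alpha):
    # Reference PReLU: x if x > 0 else alpha * x
    return np.where(x > 0, x, alpha * x)

def prelu_decomposed(x, alpha):
    # The same function built only from primitive ops (ReLU, Neg, Mul):
    # prelu(x) = relu(x) - alpha * relu(-x)
    relu = lambda v: np.maximum(v, 0.0)
    return relu(x) - alpha * relu(-x)

x = np.linspace(-2.0, 2.0, 9).astype(np.float32)
alpha = np.float32(0.25)
assert np.allclose(prelu(x, alpha), prelu_decomposed(x, alpha))
```

In a real graph the decomposition would be applied at the layer/op level before or during conversion, so the flaky fused kernel is never emitted.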

Since you are unable to share your model, this level of general guidance is all I can offer.