QAT accuracy is always the same after TFLite evaluation


I am trying to evaluate my QAT TFLite model and noticed something suspicious: whether I fine-tune the model for 1, 20, or even 150 epochs, the evaluation score of the converted TFLite model is always the same. However, during fine-tuning I can see the TensorFlow metrics increasing.

I am wondering what the problem could be.
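For reference, here is a minimal sketch of how I convert and evaluate. A common pitfall with a stable score is converting a stale saved model instead of the freshly fine-tuned one, so the sketch converts the in-memory model object directly. The `qat_model` here is just a hypothetical stand-in for the model returned by `tfmot.quantization.keras.quantize_model()` after fine-tuning; with a plain Keras model, `Optimize.DEFAULT` only applies dynamic-range quantization:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the fine-tuned QAT model. In the real workflow
# this would be the tfmot quantize-aware model after fit() has run.
qat_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert the *fine-tuned* model object, not a checkpoint on disk that may
# not have been updated between training runs.
converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

# Evaluate the converted model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def tflite_predict(x):
    """Run one batch through the TFLite model and return its output."""
    interpreter.set_tensor(inp["index"], x.astype(np.float32))
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

sample = np.random.rand(1, 4).astype(np.float32)
print(tflite_predict(sample))
```

In the real evaluation loop, `tflite_predict` would be called over the test set and the predictions compared against labels, the same way the Keras metrics are computed during fine-tuning.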


Hi @MLEnthusiastic, could you please share the accuracy difference between the trained models (trained for different numbers of epochs) and their corresponding TFLite models? Also, please let me know which type of QAT you are performing. Thank you.