Drop in accuracy after evaluating the model

Hi,

I trained my model, then applied QAT (quantization-aware training) to it and retrained it. While training the QAT model, the metrics look very good. But when I evaluate the model after the training phase has finished, I see a drop in the metrics.
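
For reference, here is a minimal sketch of the workflow I am describing, using the TensorFlow Model Optimization toolkit. The data and model below are just placeholders for my real ones:

```python
import numpy as np
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy stand-ins for my real data and model (placeholders).
x_train = np.random.rand(1000, 28, 28).astype("float32")
y_train = np.random.randint(0, 10, size=(1000,))

base_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10),
])
base_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
base_model.fit(x_train, y_train, epochs=2)  # initial float training

# Apply QAT: wrap the trained model with fake-quant ops and retrain.
qat_model = tfmot.quantization.keras.quantize_model(base_model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
qat_model.fit(x_train, y_train, epochs=2)  # metrics look good here

# This is where I see lower metrics than during training:
qat_model.evaluate(x_train, y_train)
```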

I have 2 questions:

  1. As far as I understand, my QAT model itself is fine and the problem lies in the evaluation, right?
  2. Why is there this drop when I evaluate my model? Is it related to the evaluation dataset?
    NB: I am using the same dataset for both training and testing.

Thanks,

Hi @MLEnthusiastic,

Since you are using the same dataset for training and testing, your model might be overfitting, so the accuracy looks good during training but drops at evaluation time. It is always recommended to evaluate on unseen data.
Quantization-Aware Training (QAT) calibrates the model's quantization ranges during training. If that calibration does not accurately reflect the distribution of the evaluation data, it can lead to an accuracy drop.
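
A minimal sketch of the recommended setup, assuming a Keras `qat_model` like the one in your post and placeholder NumPy arrays `x` / `y` standing in for your full dataset: hold out a split that fine-tuning never sees, and evaluate on that.

```python
import numpy as np

# Placeholder dataset; substitute your real arrays.
x = np.random.rand(1000, 28, 28).astype("float32")
y = np.random.randint(0, 10, size=(1000,))

# Shuffle, then hold out 20% as unseen evaluation data.
idx = np.random.permutation(len(x))
x, y = x[idx], y[idx]
split = int(0.8 * len(x))
x_train, x_test = x[:split], x[split:]
y_train, y_test = y[:split], y[split:]

# Fine-tune the QAT model only on the training split...
qat_model.fit(x_train, y_train, epochs=2, validation_data=(x_test, y_test))

# ...and report metrics on data the model has never seen.
qat_model.evaluate(x_test, y_test)
```

If the metrics on the held-out split track the validation metrics you see during `fit`, the QAT model itself is behaving consistently and the earlier gap was coming from how it was evaluated.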

Thank You