Validation loss increasing despite higher accuracy: overfitting?

I have trained a ResNet-based model for radio wave modulation classification, and it was working pretty well until training finished.

The accuracy on the validation dataset continues to improve; however, the validation loss increases.

So the question is: is the model overfitting, or is it normal for the loss to increase while the accuracy gets higher? I don't even know whether to train it more or what else to do right now.

(attached screenshot: photo_2022-11-06_11-30-18)

Are you absolutely sure that this accuracy is on the validation data and not the training data?
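One quick way to rule out a mix-up is to compute the loss and the accuracy in the same evaluation pass over the same split. A minimal sketch, assuming a PyTorch setup where `model` and `val_loader` are your network and validation DataLoader (adapt to your framework):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate(model, loader, device="cpu"):
    """Return (mean loss, accuracy) computed in one pass over `loader`."""
    model.eval()
    total_loss, total_correct, total_seen = 0.0, 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        logits = model(x)
        # sum per-sample losses so the final mean is exact even when the
        # last batch is smaller than the others
        total_loss += F.cross_entropy(logits, y, reduction="sum").item()
        total_correct += (logits.argmax(dim=1) == y).sum().item()
        total_seen += y.size(0)
    return total_loss / total_seen, total_correct / total_seen

# e.g. val_loss, val_acc = evaluate(model, val_loader)
#      train_loss, train_acc = evaluate(model, train_loader)
```

If the accuracy you plotted really comes from the validation loader, then both numbers above should match your curves.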

But you could have constant (or even improving) accuracy with the loss getting worse if the confidence of the correct predictions decreases while the confidence of the incorrect predictions increases. I say confidence, but I mean the softmax or sigmoid outputs, not statistical confidence.
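Here is a tiny made-up example (plain NumPy, three samples, two classes) showing the effect numerically: the predicted classes, and therefore the accuracy, are identical in both "epochs", yet the cross-entropy loss more than doubles because the correct predictions lose confidence and the wrong one gains it.

```python
import numpy as np

def cross_entropy(probs, labels):
    # mean negative log-probability assigned to the true class
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

def accuracy(probs, labels):
    return np.mean(probs.argmax(axis=1) == labels)

labels = np.array([0, 0, 1])

# Epoch A: two correct predictions at p=0.8, one wrong prediction at p=0.6
probs_a = np.array([[0.8, 0.2],
                    [0.8, 0.2],
                    [0.6, 0.4]])

# Epoch B: same predicted classes, but the correct ones are now less
# confident (p=0.6) and the wrong one is more confident (p=0.9)
probs_b = np.array([[0.6, 0.4],
                    [0.6, 0.4],
                    [0.9, 0.1]])

for name, p in [("A", probs_a), ("B", probs_b)]:
    print(name, f"acc={accuracy(p, labels):.2f}", f"loss={cross_entropy(p, labels):.3f}")
# A acc=0.67 loss=0.454
# B acc=0.67 loss=1.108  -> same accuracy, much higher loss
```

So an increasing validation loss with flat or rising validation accuracy usually means the model is becoming over-confident on the examples it gets wrong, which is an early sign of overfitting even before the accuracy itself degrades.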