Monitor val_loss or val_meanIoU to detect overfitting?

I am training a U-Net for semantic segmentation and monitoring mean IoU. When I look at the TensorBoard graphs, the validation loss starts to increase while the validation mean IoU stays constant. Is the model overfitting at this point, or is it not?
Here’s the TensorBoard: [screenshot of the validation loss and validation mean IoU curves]

Which one should I use for callbacks such as early stopping or checkpoint saving?
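
For context, this is roughly the choice in Keras terms. The snippet below is only a sketch of the two options; the metric key `val_mean_io_u` assumes the `tf.keras.metrics.MeanIoU` metric is logged under its default name `mean_io_u`.

```python
import tensorflow as tf

# Option A: monitor validation loss (lower is better).
stop_on_loss = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", mode="min", patience=10, restore_best_weights=True)
ckpt_on_loss = tf.keras.callbacks.ModelCheckpoint(
    "best_by_val_loss.keras", monitor="val_loss", mode="min",
    save_best_only=True)

# Option B: monitor validation mean IoU (higher is better).
stop_on_iou = tf.keras.callbacks.EarlyStopping(
    monitor="val_mean_io_u", mode="max", patience=10,
    restore_best_weights=True)
ckpt_on_iou = tf.keras.callbacks.ModelCheckpoint(
    "best_by_val_iou.keras", monitor="val_mean_io_u", mode="max",
    save_best_only=True)

# model.fit(..., callbacks=[stop_on_iou, ckpt_on_iou])  # pick one option
```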

@Manuel_Popp’s question is whether he should use the MeanIoU metric or the loss on the validation data to determine whether the model is overfitting. His graphs show that, over the training epochs, performance on unseen (validation) data flattens out as measured by MeanIoU but worsens significantly as measured by loss.

So the question remains: should he use MeanIoU, loss, or something else to determine when overfitting is occurring and when to stop training?

To answer this question:
I trained a model and saved a checkpoint after every epoch.
I then determined the epoch with the lowest validation loss and the epoch with the highest validation mean IoU (see the sketch below).
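
A rough sketch of that procedure, assuming a compiled Keras `model`, `train_ds`/`val_ds` datasets, and the default metric name `mean_io_u` (all placeholder names, not the exact setup used here):

```python
import numpy as np
import tensorflow as tf

# Save a checkpoint after every epoch (the file pattern is just an example).
per_epoch_ckpt = tf.keras.callbacks.ModelCheckpoint(
    filepath="checkpoints/epoch_{epoch:03d}.keras",
    save_freq="epoch",
)

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=100,
    callbacks=[per_epoch_ckpt],
)

# Epoch (1-based) with the lowest validation loss ...
best_loss_epoch = int(np.argmin(history.history["val_loss"])) + 1
# ... and epoch with the highest validation mean IoU.
best_iou_epoch = int(np.argmax(history.history["val_mean_io_u"])) + 1
print(best_loss_epoch, best_iou_epoch)
```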

I downloaded the respective checkpoints and used both models to predict previously unseen data. Finally, I calculated the F1-score, Kappa, overall accuracy, precision, and recall for both sets of predictions. Overall, it appears the model from the epoch with the highest validation mean IoU slightly outperforms the model from the epoch with the lowest validation loss (categorical crossentropy).
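
The comparison metrics can be computed with scikit-learn; a minimal sketch, assuming `y_true` and `y_pred` are flattened per-pixel integer class labels for the held-out data (placeholder names):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             precision_score, recall_score)


def report(y_true, y_pred):
    """Print comparison metrics for flattened per-pixel class labels."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    print("Overall accuracy :", accuracy_score(y_true, y_pred))
    print("Cohen's kappa    :", cohen_kappa_score(y_true, y_pred))
    print("F1 (macro)       :", f1_score(y_true, y_pred, average="macro"))
    print("Precision (macro):", precision_score(y_true, y_pred, average="macro"))
    print("Recall (macro)   :", recall_score(y_true, y_pred, average="macro"))
```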
