In keras.callbacks.EarlyStopping there is the argument mode, which determines whether the monitored value has to increase or decrease. The documentation shows that this is set to ‘auto’ by default, in which case the “direction is automatically inferred from the name of the monitored quantity”.
You can see here that the “automatically inferred direction” is just a check whether the name of the monitored variable ends in ‘acc’, ‘accuracy’ or ‘auc’. I think this is overly simplistic and can lead to confusion. For example, the user in this Stack Overflow post does not understand why the direction of EarlyStopping is wrong even though their monitor ends in ‘gain’, which semantically clearly defines a direction. This was just one example I found in a matter of seconds; I am sure there are more, and more severe ones.
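To illustrate the point, here is a rough sketch of what the ‘auto’ check boils down to (simplified; the exact Keras implementation may differ in details):

```python
def infer_mode(monitor: str) -> str:
    """Roughly what mode='auto' does: pick 'max' only when the
    monitor name looks like an accuracy/AUC metric, else 'min'."""
    if monitor.endswith(("acc", "accuracy", "auc")):
        return "max"
    return "min"

print(infer_mode("val_accuracy"))  # 'max', as expected
print(infer_mode("val_gain"))     # 'min', the wrong direction for a gain metric
```

Anything not matching the suffix list, including a metric like ‘val_gain’, silently falls back to ‘min’.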
I propose a similarly simple approach to determining whether ‘mode’ should be ‘min’ or ‘max’: watch the monitored variable for ‘patience’ - 1 epochs and derive the direction from the difference between the start and end value. Some edge cases would have to be addressed, for example when patience is 0.
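A hypothetical sketch of that proposal (the function name, the fallback window size, and the None return for “not decided yet” are my own assumptions, not anything in Keras):

```python
def infer_direction(history, patience):
    """Hypothetical: watch the monitored value for patience - 1 epochs,
    then compare start vs. end to pick a direction.
    history: list of observed metric values, one per epoch."""
    # Edge case: with patience 0 or 1 there is no window to observe,
    # so fall back to a minimum window of 2 epochs (an assumption).
    window = max(patience - 1, 2)
    if len(history) < window:
        return None  # not enough observations yet to decide
    # If the value fell over the window, assume it should be minimized.
    return "min" if history[window - 1] < history[0] else "max"

# A loss-like metric decreasing over the first epochs:
print(infer_direction([0.9, 0.7, 0.6], patience=4))  # 'min'
```

During the observation window early stopping would have to be inactive, which is another trade-off this approach would introduce.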
The other problem is: would changing this cause more harm than good? Maybe there are projects that rely on the current behaviour. What do you think?