How to find the eval metric in a compiled model

How do I monitor the evaluation metric in my callbacks, regardless of which metric I'm using, by extracting the metric information from the compiled model?
I tried `model.compiled_metrics` and `model.metrics[1]`, but neither worked.

# Some callbacks
early_stopping = tf.keras.callbacks.EarlyStopping(monitor=model.compiled_metrics, patience=6, restore_best_weights=False)
reduce_lr_callback = ReduceLROnPlateau(monitor=model.compiled_metrics, factor=0.1, patience=2, min_lr=1.0e-15)
checkpoint = ModelCheckpoint('model{model.compiled_metrics:.2f}.h5', monitor=model.compiled_metrics, verbose=0, save_best_only=True, mode='max')

Hi @Automata,

`model.compiled_metrics` is a metrics container object, not a metric name, so it can't be passed to `monitor`. Instead, you should pass the name of the metric you're interested in as a string.
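If you don't know the exact strings in advance, one way to discover them is to look at the keys of the `History` object after a short fit: those keys are exactly the names a callback's `monitor` argument expects. A minimal sketch (the model, data, and `'mae'` metric here are just placeholders for illustration):

```python
import numpy as np
import tensorflow as tf

# Tiny throwaway model, purely to inspect the metric names
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])

x, y = np.random.rand(16, 4), np.random.rand(16, 1)
history = model.fit(x, y, validation_split=0.25, epochs=1, verbose=0)

# These strings are what `monitor` accepts,
# e.g. ['loss', 'mae', 'val_loss', 'val_mae']
print(sorted(history.history.keys()))
```

The `val_`-prefixed keys only appear when you train with validation data, which is also what you normally want to monitor.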

The `monitor` parameter of each callback should be set to a specific metric name string (`'val_loss'`, `'val_accuracy'`, etc.).
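For example, assuming the model was compiled with `metrics=['accuracy']`, your three callbacks could be rewritten like this (the choice of `'val_accuracy'` vs. `'val_loss'` per callback is just one reasonable setup):

```python
from tensorflow.keras.callbacks import (EarlyStopping, ReduceLROnPlateau,
                                        ModelCheckpoint)

# Stop when validation accuracy stops improving; mode='max'
# because higher accuracy is better
early_stopping = EarlyStopping(monitor='val_accuracy', mode='max',
                               patience=6, restore_best_weights=False)

# Reduce the learning rate when validation loss plateaus
reduce_lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.1,
                                       patience=2, min_lr=1.0e-15)

# {val_accuracy:.2f} in the filepath is filled in by ModelCheckpoint
# from the logs at save time
checkpoint = ModelCheckpoint('model{val_accuracy:.2f}.h5',
                             monitor='val_accuracy', verbose=0,
                             save_best_only=True, mode='max')
```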

Note that with `monitor='val_accuracy'` and `mode='max'`, the `ModelCheckpoint` callback saves the model with the best validation accuracy.

I hope this helps.