What is the approach for training and validation metrics evaluation while writing code from scratch?

I am learning to write code for custom models using TensorFlow and Keras.

I am a bit confused about when to evaluate metrics (like accuracy, loss, etc.) during training and validation.

Should I evaluate metrics after each training iteration or after each epoch?

Do I have to average the metrics over all batches at the end of each epoch?

Please share the preferred approach for training and validation metrics evaluation.

Thanks in advance.

Hi @Rajesh_Nakka

When you train a model with the built-in training loop on given training and validation datasets, metric evaluation already happens at every epoch. That is when the model optimizes the loss function, aiming for lower error and higher accuracy at each epoch of the training (learning) process.
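If you are writing the training loop from scratch rather than using `model.fit()`, the usual pattern is to let stateful `tf.keras.metrics` objects do the aggregation for you: call `update_state()` after every batch, read `result()` once per epoch, and `reset_state()` before the next epoch, so you never average batch values by hand. A minimal sketch (the model, dataset shapes, and hyperparameters here are placeholders for illustration):

```python
import numpy as np
import tensorflow as tf

# Tiny synthetic dataset -- shapes are arbitrary, just for illustration
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 3, size=(64,))
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(16)

model = tf.keras.Sequential([tf.keras.layers.Dense(3)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()

# Stateful metrics accumulate across batches; result() returns the
# running aggregate over everything seen since the last reset_state().
train_loss = tf.keras.metrics.Mean()
train_acc = tf.keras.metrics.SparseCategoricalAccuracy()

for epoch in range(2):
    train_loss.reset_state()
    train_acc.reset_state()
    for xb, yb in train_ds:
        with tf.GradientTape() as tape:
            logits = model(xb, training=True)
            loss = loss_fn(yb, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        # Update metric state after every batch...
        train_loss.update_state(loss)
        train_acc.update_state(yb, logits)
    # ...and read the aggregated value once per epoch.
    print(f"epoch {epoch}: loss={train_loss.result():.4f} "
          f"acc={train_acc.result():.4f}")
```

The same pattern applies to a validation pass at the end of each epoch, with separate metric objects and `training=False`.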

You can also further evaluate the model on a test dataset after training completes, using model.evaluate().
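As a quick sketch of that `model.evaluate()` step (the model, data, and metric choice below are placeholders, not from the original post), evaluation returns the loss plus each compiled metric, aggregated over the whole test set:

```python
import numpy as np
import tensorflow as tf

# Placeholder model and test data, just to show the evaluate() call
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x_test = np.random.rand(32, 4).astype("float32")
y_test = np.random.randint(0, 3, size=(32,))

# Returns [loss, accuracy] computed over the entire test set
loss, acc = model.evaluate(x_test, y_test, verbose=0)
```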

Please refer to the Model training APIs documentation for a better understanding of each API's functionality and usage. Thank you.
