What is the preferred approach for evaluating training and validation metrics in a custom training loop written from scratch?

I am learning to write code for custom models using TensorFlow and Keras.

I am a bit confused about when to evaluate metrics (like accuracy, loss, etc.) during training and validation.

Should I evaluate metrics after each training iteration (batch) or only at the end of each epoch?

At the end of each epoch, should I average the metrics across all batches?
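To make the question concrete, here is a minimal pure-Python sketch (with placeholder numbers, no actual model) of the per-batch averaging I have in mind, weighting each batch's loss by its size so a smaller final batch does not skew the epoch average:

```python
# Placeholder per-batch results; in a real loop these would come from
# the loss computed on each batch during one epoch.
batch_losses = [0.9, 0.7, 0.5]   # loss value for each batch
batch_sizes = [32, 32, 16]       # the last batch may be smaller

# Size-weighted average over the epoch.
total_loss = sum(loss * size for loss, size in zip(batch_losses, batch_sizes))
epoch_loss = total_loss / sum(batch_sizes)

print(epoch_loss)  # 0.74
```

Is this size-weighted averaging the right idea, or do the built-in `tf.keras.metrics` objects already handle this accumulation for me?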

Please share the preferred approach for evaluating training and validation metrics.

Thanks in advance.