Evaluate models while training?

Hi guys, I've got a small problem and I wonder if there is a solution to it.

I often run into the problem that when I train my models, I overfit them. My accuracy rises and rises, but only on my training data set, not on the test data.

For example, I run 10 epochs and get 0.7 on my test data set.
Then I run 25 epochs and get 0.5 on my test data set, even though my accuracy on the training data keeps going up and up.

So I wonder if there is a way to evaluate my model on the test data set while training, or to get a graph or something (maybe with TensorBoard), so I know at which epoch count I hit peak performance.

When you call model.fit, you can pass a validation_data set and get its statistics at the same time as the training, e.g.

history = model.fit(
    x_train,
    y_train,
    batch_size=64,
    epochs=2,
    # We pass some validation for
    # monitoring validation loss and metrics
    # at the end of each epoch
    validation_data=(x_val, y_val),
)
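
If you also want the graph and the "best epoch" part of your question, you can read the returned history object or add callbacks. Here is a minimal sketch, assuming the same x_train/y_train/x_val/y_val arrays as above and a model compiled with metrics=["accuracy"] (the metric names below depend on that):

import matplotlib.pyplot as plt
import numpy as np
from tensorflow import keras

callbacks = [
    # Stop once validation accuracy stops improving and keep the best weights,
    # so you don't have to guess the right epoch count up front.
    keras.callbacks.EarlyStopping(
        monitor="val_accuracy", patience=5, restore_best_weights=True
    ),
    # Write per-epoch logs you can view with: tensorboard --logdir logs
    keras.callbacks.TensorBoard(log_dir="logs"),
]

history = model.fit(
    x_train,
    y_train,
    batch_size=64,
    epochs=50,
    validation_data=(x_val, y_val),
    callbacks=callbacks,
)

# history.history holds one value per epoch for each metric.
val_acc = history.history["val_accuracy"]
best_epoch = int(np.argmax(val_acc)) + 1
print(f"Best validation accuracy {max(val_acc):.3f} at epoch {best_epoch}")

# Quick plot of training vs. validation accuracy to spot where overfitting starts.
plt.plot(history.history["accuracy"], label="train")
plt.plot(val_acc, label="validation")
plt.xlabel("epoch")
plt.ylabel("accuracy")
plt.legend()
plt.show()

The point where the validation curve flattens or starts dropping while the training curve keeps climbing is where overfitting begins, which is exactly the peak you're looking for.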