GPU usage dips after each epoch to 0%

Hi everyone,
When training my model with fit() and using tf.data datasets for my training and validation data, the GPU usage dips to 0% after each epoch, even though I am calling the prefetch() method on the dataset.

Have you experienced something similar?
Sadly I cannot provide any code.

Thank you in advance.

My first two guesses would be:

  • The dataset needs to refill the shuffle buffer after each epoch. You could try repeating the dataset and giving Keras an explicit epoch length, i.e. model.fit(dataset.repeat(), steps_per_epoch=N) instead of model.fit(dataset).

  • Maybe something with the evaluation logic?
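The shuffle-buffer guess above can be illustrated without TensorFlow. Below is a pure-Python mimic of how a `shuffle(buffer_size=N)` stage buffers elements (the names `shuffled` and `demo` are made up for this sketch, and it only approximates `tf.data`'s real behavior): the buffer must be filled before the first element comes out, so a buffer spanning the whole dataset forces a full pass over the input pipeline at the start of every epoch.

```python
import random

def shuffled(stream, buffer_size, rng):
    """Mimic of a shuffle-buffer stage: fill a buffer of up to
    `buffer_size` elements before yielding anything, then swap each
    yielded element for the next one from the upstream `stream`."""
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= buffer_size:
            # Buffer full: emit one random element, making room
            # for the next upstream item.
            yield buf.pop(rng.randrange(len(buf)))
    # Upstream exhausted: drain the remaining buffered elements.
    rng.shuffle(buf)
    yield from buf

def demo(n, buffer_size):
    """Count how many upstream reads happen before the first yield."""
    reads = 0
    def stream():
        nonlocal reads
        for i in range(n):
            reads += 1
            yield i
    gen = shuffled(stream(), buffer_size, random.Random(0))
    next(gen)  # pull the first shuffled element
    return reads

print(demo(1000, 1000))  # full-dataset buffer: 1000 reads before the first item
print(demo(1000, 10))    # small buffer: only 10 reads before the first item
```

With `buffer_size` equal to the dataset size, the accelerator sits idle while the buffer refills each epoch; a smaller buffer (or a repeated dataset) shortens that gap.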

Thank you for your reply.
Currently I am using self.model.fit(train_data, epochs=self.epochs, validation_data=val_data, verbose=1)

Where train_data is a tf.data.Dataset built with

train_data = tf.data.Dataset.from_tensor_slices((train_ivs, train_logr, train_metric))
train_data = train_data.shuffle(buffer_size=train_ivs.shape[0], seed=self.seed)
train_data = train_data.batch(self.batch_size)
train_data = train_data.prefetch(tf.data.AUTOTUNE)

The evaluation step does not seem to cause any problem either.
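Since the shuffle buffer here spans the entire training set, the repeat()-plus-steps_per_epoch workaround suggested above is worth a try: a repeated dataset never drains its buffer between epochs, but Keras then needs to be told how many batches make up one epoch. A minimal sketch of the bookkeeping (the sample count and batch size are hypothetical placeholders):

```python
import math

# Hypothetical sizes; substitute your own dataset and batch size.
num_samples = 50_000
batch_size = 128

# With an infinitely repeated dataset, one "epoch" is simply this
# many batches, rounding up so the final partial batch is counted.
steps_per_epoch = math.ceil(num_samples / batch_size)
print(steps_per_epoch)  # 391

# Sketch only (not executed here), assuming a tf.data pipeline:
#   train_data = train_data.repeat()
#   self.model.fit(train_data,
#                  epochs=self.epochs,
#                  steps_per_epoch=steps_per_epoch,
#                  validation_data=val_data,
#                  verbose=1)
```

A smaller shuffle buffer (e.g. a few thousand elements instead of the full dataset) would likewise cut the refill time at each epoch boundary, at the cost of weaker shuffling.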