Semantic segmentation model overfitting to train data

I’m currently training a 2D U-Net model for semantic segmentation. During training, the model reaches a Dice score close to 0.8, while the test Dice score only reaches 0.42. Does anyone have advice on how to avoid overfitting? Right now I’m using batches of 32 images, AdamW with a cosine scheduler, and dropout, but nothing seems to help.
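For reference, here is a minimal sketch of the setup described above, assuming TensorFlow/Keras (2.11+ for `AdamW`); the learning rate, step count, and the `unet` model name are placeholders, and dropout is assumed to live inside the model definition:

```python
import tensorflow as tf

# Cosine learning-rate schedule; initial LR and step count are placeholder values.
lr_schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=1e-3, decay_steps=10_000
)
optimizer = tf.keras.optimizers.AdamW(learning_rate=lr_schedule, weight_decay=1e-4)

def dice_loss(y_true, y_pred, smooth=1e-6):
    # Soft Dice loss, the training counterpart of the Dice score being monitored.
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred)
    total = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * intersection + smooth) / (total + smooth)

# `unet` stands in for the 2D U-Net mentioned above (not defined here):
# unet.compile(optimizer=optimizer, loss=dice_loss)
# unet.fit(train_batches, validation_data=val_batches, epochs=...)
```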

Other than enriching your training set, you can also explore some augmentation:

We are adding many augmentations in Keras-CV:
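For example, here is a sketch of applying such layers to a segmentation dataset, assuming the dict-based input convention of Keras-CV preprocessing layers (keys `"images"` and `"segmentation_masks"`, so image and mask get the same geometric transform); the specific layers and parameters are illustrative, not a fixed recipe:

```python
import tensorflow as tf
import keras_cv

# Keras-CV preprocessing layers accept a dict and transform the images and
# their segmentation masks consistently; layer choice/params are illustrative.
augmenter = keras_cv.layers.Augmenter([
    keras_cv.layers.RandomFlip(mode="horizontal"),
    keras_cv.layers.RandomRotation(factor=0.1),
])

def augment(images, masks):
    out = augmenter({"images": images, "segmentation_masks": masks})
    return out["images"], out["segmentation_masks"]

# `train_ds` is assumed to yield (image, mask) pairs; augment only the
# training split, never the validation/test data:
# train_ds = train_ds.map(augment, num_parallel_calls=tf.data.AUTOTUNE)
```

Since the augmentation only runs on the training split, the train/test Dice gap you see should narrow if the overfitting is data-driven.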