DeepLabV3+ model produces polarized (overconfident) predictions

I can train the DeepLabV3+ model on my dataset and it gives good results. My problem is that the model infers pixels with very high confidence: it treats pixels as either black (does not belong to a class) or white (definitely belongs to a class). For example, when it assigns a probability to a pixel it is sure belongs to a person, it outputs 0.99; but when it encounters a pixel it is unsure about, say a blurred pixel from a person's hand, it assigns a probability of 0.04 or lower of being human, where my expectation is a value around 0.4 to 0.6. It is important for me to get such intermediate values. I've tried up-weighting the critical parts of my dataset (like hands, …) to make it harder for the model to memorize shapes, but the problem persists. I know that my model is not over-fitted in the official sense (the mIoU on my test dataset is in line with expectations), but it behaves in the polarized way described earlier.
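
For context, the weighting I mentioned follows roughly the idea in the sketch below (a minimal illustration, not my exact training code; the `weight_map` tensor that up-weights hard regions such as hands is hypothetical):

```python
import tensorflow as tf

# Minimal sketch of the pixel-weighting idea (illustrative only).
# `weight_map` is a hypothetical per-pixel weight tensor that
# up-weights hard regions such as hands or blurred edges.
def weighted_pixel_loss(y_true, y_pred, weight_map):
    """Sparse categorical crossentropy scaled per pixel.

    y_true:     (batch, H, W) integer class labels
    y_pred:     (batch, H, W, num_classes) softmax probabilities
    weight_map: (batch, H, W) float weights, > 1 on hard regions
    """
    ce = tf.keras.losses.sparse_categorical_crossentropy(y_true, y_pred)
    return tf.reduce_mean(ce * weight_map)

# Toy check with random tensors:
num_classes = 2
labels = tf.random.uniform((1, 4, 4), maxval=num_classes, dtype=tf.int32)
probs = tf.nn.softmax(tf.random.normal((1, 4, 4, num_classes)), axis=-1)
weights = tf.ones((1, 4, 4))  # replace with larger values on hand pixels
print(weighted_pixel_loss(labels, probs, weights))
```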

I hope I was able to describe my problem clearly.

Any ideas would be appreciated.

Hi @Mahdi_Khoursha

Welcome to the TensorFlow Forum!

Please provide a minimal reproducible code example so we can replicate the behavior and understand the issue. Thank you.