Validation loss always equals zero for Instance / Semantic Segmentation with Model Garden

I’m running the Instance Segmentation with Model Garden notebook from the TensorFlow Model Garden documentation. During training, the ‘Train and Evaluate’ step periodically prints the metrics, but the validation loss is always zero.
For example:

...
eval | step:   1200 | steps/sec:    3.3 | eval time:   60.0 sec | output: 
{'AP': 0.0814979,
 'AP50': 0.16509584,
 'AP75': 0.07145937,
  ...
 'mask_ARs': 0.0015765766,
 'steps_per_second': 3.3340747415649394,
 'validation_loss': 0.0}
...

I noticed the same behavior in the official Semantic Segmentation tutorial, where the outputs are shown directly on the tutorial page: Semantic Segmentation with Model Garden  |  TensorFlow Core

Is this a TensorFlow Models bug? Do I need to set a training configuration variable to have the validation loss computed?

Hi @Luiz_Felipe,

Could you please check this gist, in which exp_config.task.validation_data.resize_eval_groundtruth = True is set to enable loss computation on the validation data for semantic segmentation. You also need to pass eval_summary_manager=summary_manager.maybe_build_eval_summary_manager(params=exp_config, model_dir=model_dir) to the training call, since the evaluation results contain image data. For instance segmentation, the validation loss is intentionally kept at zero.
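As a rough illustration (not the exact gist), the two changes fit into the tutorial's training setup roughly as sketched below. The import path official.vision.utils.summary_manager and the eval_summary_manager argument of tfm.core.train_lib.run_experiment are assumptions based on recent Model Garden versions, and distribution_strategy, task, exp_config and model_dir are assumed to be defined earlier as in the tutorial notebook:

```python
# Minimal sketch for the semantic segmentation tutorial, assuming the
# tutorial's existing exp_config, task, distribution_strategy and model_dir.
import tensorflow_models as tfm
from official.vision.utils import summary_manager  # assumed import path

# Resize the ground-truth masks during evaluation so the loss can be
# computed against the model output on the validation set.
exp_config.task.validation_data.resize_eval_groundtruth = True

# Pass an eval summary manager because the evaluation results include
# image data.
model, eval_logs = tfm.core.train_lib.run_experiment(
    distribution_strategy=distribution_strategy,
    task=task,
    mode='train_and_eval',
    params=exp_config,
    model_dir=model_dir,
    run_post_eval=True,
    eval_summary_manager=summary_manager.maybe_build_eval_summary_manager(
        params=exp_config, model_dir=model_dir),
)
```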

Thanks.