Identical values for different performance metrics during both training and evaluation

Hey All,

I am training a binary classifier from scratch with TensorFlow v2.10.0, using the tf.keras.losses.CategoricalCrossentropy() loss. The model is a MobileNetV3 backbone followed by several Dense layers, the last of which is a two-neuron Dense layer with softmax activation. I am also monitoring several performance metrics (precision, recall, TP, FP, TN, FN, AUC, etc.) using the built-in Keras metric objects, and I have noticed that the reported TP and TN counts are identical (as are FP and FN) both during training (via the fit method) and evaluation (via the evaluate method).
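For context, my setup looks roughly like the following sketch. The layer sizes, optimizer, and input shape here are placeholders, not my exact configuration:

```python
import tensorflow as tf

# Hypothetical sketch of the setup: MobileNetV3 backbone -> Dense head
# -> two-neuron softmax output, trained with categorical cross-entropy.
base = tf.keras.applications.MobileNetV3Small(
    input_shape=(224, 224, 3), include_top=False, pooling="avg", weights=None
)
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=[
        tf.keras.metrics.TruePositives(name="tp"),
        tf.keras.metrics.FalsePositives(name="fp"),
        tf.keras.metrics.TrueNegatives(name="tn"),
        tf.keras.metrics.FalseNegatives(name="fn"),
        tf.keras.metrics.Precision(name="precision"),
        tf.keras.metrics.Recall(name="recall"),
        tf.keras.metrics.AUC(name="auc"),
        tf.keras.metrics.AUC(curve="PR", name="prc"),
    ],
)
```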

Below is output captured during training which illustrates the equal metric values.

Epoch 22/60
116/116 [==============================] - ETA: 0s - loss: 1.1220 - accuracy: 0.6231 - prc: 0.7403 - recall: 0.6231 - precision: 0.6231 - **tp: 4591.0000** - fp: 2777.0000 - **tn: 4591.0000** - fn: 2777.0000 - auc: 0.7275
Epoch 22: saving model to scratch_training_weighted_2/cp.ckpt
116/116 [==============================] - 98s 836ms/step - loss: 1.1220 - accuracy: 0.6231 - prc: 0.7403 - recall: 0.6231 - precision: 0.6231 - **tp: 4591.0000** - fp: 2777.0000 - **tn: 4591.0000** - fn: 2777.0000 - auc: 0.7275 - val_loss: 0.5135 - val_accuracy: 0.7603 - val_prc: 0.8439 - val_recall: 0.7603 - val_precision: 0.7603 - val_tp: 625.0000 - val_fp: 197.0000 - val_tn: 625.0000 - val_fn: 197.0000 - val_auc: 0.8469

Epoch 23/60
116/116 [==============================] - ETA: 0s - loss: 1.0479 - accuracy: 0.7022 - prc: 0.8025 - recall: 0.7022 - precision: 0.7022 - tp: 5174.0000 - **fp: 2194.0000** - tn: 5174.0000 - **fn: 2194.0000** - auc: 0.7955
116/116 [==============================] - 97s 831ms/step - loss: 1.0479 - accuracy: 0.7022 - prc: 0.8025 - recall: 0.7022 - precision: 0.7022 - tp: 5174.0000 - fp: 2194.0000 - tn: 5174.0000 - fn: 2194.0000 - auc: 0.7955 - val_loss: 0.6448 - val_accuracy: 0.7360 - val_prc: 0.8286 - val_recall: 0.7360 - val_precision: 0.7360 - val_tp: 605.0000 - val_fp: 217.0000 - val_tn: 605.0000 - val_fn: 217.0000 - val_auc: 0.8291
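To make the pattern easier to reproduce outside my pipeline, here is a minimal NumPy sketch with made-up numbers (hypothetical data, not from my dataset). My understanding is that the built-in counting metrics threshold every element of the prediction tensor at 0.5, so with two-neuron softmax outputs and one-hot labels, every correctly classified sample would contribute one TP (in its true-class column) and one TN (in the other column), and every misclassified sample one FP and one FN:

```python
import numpy as np

# Hypothetical one-hot labels and softmax outputs for 5 samples (made-up numbers)
y_true = np.array([[1, 0], [0, 1], [1, 0], [0, 1], [1, 0]])
y_pred = np.array([[0.9, 0.1], [0.2, 0.8], [0.4, 0.6], [0.7, 0.3], [0.8, 0.2]])

# Element-wise thresholding at 0.5, as I believe the Keras counting metrics do
pred = (y_pred > 0.5).astype(int)

tp = int(np.sum((pred == 1) & (y_true == 1)))  # correct samples, true-class column
tn = int(np.sum((pred == 0) & (y_true == 0)))  # correct samples, other column
fp = int(np.sum((pred == 1) & (y_true == 0)))  # wrong samples, predicted column
fn = int(np.sum((pred == 0) & (y_true == 1)))  # wrong samples, true-class column

print(tp, fp, tn, fn)  # tp equals tn and fp equals fn, same pattern as my logs
```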

Has anyone encountered a similar problem?

Thanks!