Problem defining my own metrics

Hello!
I'm trying to use my own metric to monitor my TensorFlow model, but I'm encountering an unexpected bug.

Let me show you the metric: it's simply an accuracy metric, but computed on the first 10 outputs only, since my other outputs are not related to classification. Here is the code:

import tensorflow as tf
from tensorflow.keras import backend as K

def my_metrics_fn(y_true, y_pred_aug):
    # Keep only the first 10 outputs (the classification logits)
    y_pred = y_pred_aug[:, 0:10]
    pred = tf.math.argmax(y_pred, axis=-1)
    y_true = tf.cast(y_true, tf.int64)
    # Fraction of samples whose predicted class matches the label
    return K.mean(tf.equal(pred, y_true))

test_net.compile(loss=my_loss_fn, metrics=[my_metrics_fn], optimizer=opt, run_eagerly=False)
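
One way to sanity-check the metric in isolation is to call it on a small hand-made batch (the shapes below are made up just for illustration: 10 classification logits plus two dummy extra outputs per sample, and plain integer labels):

import numpy as np

# Two samples: the first one predicts class 3 correctly, the second
# predicts class 0 while its label is 5
y_pred_aug = np.zeros((2, 12), dtype=np.float32)  # 10 logits + 2 extra outputs
y_pred_aug[0, 3] = 1.0
y_true = np.array([3, 5])

print(my_metrics_fn(y_true, y_pred_aug).numpy())  # expected: 0.5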

During training, my metric reports an accuracy of 44%, which is really poor. But if I compute the accuracy myself, using:

s = 0
for i in range(239):
    a, b = PC_train_generator[i]      # one batch: (inputs, labels)
    c = test_model.predict(a)
    s += my_metrics_fn(b, c).numpy()
    print(i, s / (i + 1), end='\r')   # running mean over the batches seen so far
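
To narrow down whether the discrepancy comes from the metric itself or from how Keras feeds it, one can also compare the two paths on a single batch (a sketch, assuming the generator yields `(inputs, labels)` tuples and `test_model` is the compiled network):

a, b = PC_train_generator[0]

# Manual path: predict, then apply the metric directly
manual = my_metrics_fn(b, test_model.predict(a)).numpy()

# Keras path: evaluate on that same single batch
keras_val = test_model.evaluate(a, b, verbose=0)[1]  # [0] is the loss, [1] the first metric

print(manual, keras_val)  # if these differ, the problem is in how Keras passes y_true/y_pred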

then I get an accuracy of 80%, so approximately double… But if I try test_model.evaluate on the same PC_train_generator, my accuracy is again 44%.
Do you have any idea where this could come from?

By the way, I've checked the predictions by hand, and I know that 80% is the correct fraction of correctly labelled samples.