My custom loss function is not working

Hi,
I have tried to implement a custom loss function, but it is not working properly. To make sure I am doing it the right way, I re-implemented the MSE loss, but it does not behave like the built-in MSE, and I'm not sure what the problem is. Any help would be appreciated, thanks.
Here is the code:

import tensorflow as tf
from tensorflow import keras

# Running average of the loss, reported from train_step
loss_tracker = keras.metrics.Mean(name="loss")


class CustomModel(keras.Model):

    def train_step(self, data):
        x = data  # data has only inputs, no targets

        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)  # Forward pass

            # Compute our own loss
            # ----- loss 1: manual MSE against the target 2*x -----
            loss = tf.math.reduce_mean(tf.math.square(y_pred - 2 * x))

            # ----- loss 2: re-implemented MSE (overwrites loss 1) -----
            # mse = tf.keras.losses.MeanSquaredError()
            loss = custom_mse(2 * x, y_pred)  # custom_mse is my own MSE re-implementation

        # Compute gradients
        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)

        # Update weights
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))

        # Track our own metrics
        loss_tracker.update_state(loss)
        return {"loss": loss_tracker.result()}


I would check the dimensions of y_pred and x to make sure they match. I had this same problem, and it turned out that the two tensors I was taking the difference of had different shapes and were being broadcast in a way I didn't expect inside the loss expression.
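
To illustrate that point with made-up numbers: if y_pred has shape (batch, 1) and the target has shape (batch,), the subtraction silently broadcasts to a (batch, batch) matrix, so the manual reduce_mean averages the wrong set of values:

    import tensorflow as tf

    y_pred = tf.constant([[1.0], [2.0], [3.0]])   # shape (3, 1)
    target = tf.constant([1.0, 2.0, 3.0])         # shape (3,)

    diff = y_pred - target
    print(diff.shape)  # (3, 3) -- every prediction is compared with every target

    wrong = tf.math.reduce_mean(tf.math.square(diff))
    right = tf.math.reduce_mean(tf.math.square(y_pred - tf.reshape(target, (-1, 1))))
    print(wrong.numpy(), right.numpy())  # 1.3333334 vs 0.0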

In my case, they have the same dimensions.
The problem is with re-implementing the MSE loss function itself: the built-in MSE works correctly, but my re-implementation does not.
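
One way to narrow this down (a debugging sketch under the same 2*x setup as above, not the original code) is to compute the manual loss and the built-in MeanSquaredError on the exact same tensors inside train_step and report both; if the two numbers disagree during training, the discrepancy is in the loss expression itself rather than in the gradient or optimizer steps:

    import tensorflow as tf
    from tensorflow import keras

    class DebugModel(keras.Model):
        # Hypothetical variant of CustomModel that logs both losses side by side.

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.builtin_mse = keras.losses.MeanSquaredError()

        def train_step(self, data):
            x = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                y_true = 2 * x
                manual_loss = tf.math.reduce_mean(tf.math.square(y_pred - y_true))
                builtin_loss = self.builtin_mse(y_true, y_pred)
                loss = manual_loss  # train on the manual version

            gradients = tape.gradient(loss, self.trainable_variables)
            self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))

            # If these two values diverge, the problem is in the loss expression.
            return {"manual_mse": manual_loss, "builtin_mse": builtin_loss}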