Does mixed_precision affect custom layers that add a metric tensor?

Here is the code I used to create a custom layer that adds a metric tensor:

class MonitorStd(tf.keras.layers.Layer):
    def __init__(self, name):
        super().__init__(name=name)

    def call(self, embedding):
        # Per-dimension std of the l2-normalized embedding, across the batch
        e = tf.math.l2_normalize(embedding, axis=-1)
        e = tf.math.reduce_std(e, axis=0)
        # Register the tensor so it is tracked and shown in fit() logs
        self.add_metric(e, name=self.name)
        return embedding

The layer receives an embedding from an upstream layer and creates a metric tensor containing the standard deviation of the l2-normalized inputs. During training, this value is computed and displayed when I call fit(). With float16 mixed precision the metric reads 0.000e+00 throughout training, but without mixed_precision it displays fine (expected range 0.0 to 0.0222). Am I doing something wrong? I am using the T4 x 2 GPU variant on Kaggle.

I have solved it. I believe setting the global policy to float16 caused numerical instability, since the output of the MonitorStd layer is added to my metrics and was being computed in float16. By explicitly setting the dtype of this one layer to float32, the correct values are displayed without any numeric issues.
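A minimal sketch of the fix, assuming the global policy is set with tf.keras.mixed_precision.set_global_policy and the TF 2.x / Keras 2 add_metric API (the metric name embedding_std is illustrative):

```python
import tensorflow as tf

# Assumption: mixed precision was enabled globally like this.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

class MonitorStd(tf.keras.layers.Layer):
    def __init__(self, name):
        # Run this one layer in float32 even under mixed precision,
        # so the std computation is not done in float16.
        super().__init__(name=name, dtype="float32")

    def call(self, embedding):
        # Keras auto-casts the incoming embedding to this layer's
        # float32 compute dtype before call() runs.
        e = tf.math.l2_normalize(embedding, axis=-1)
        e = tf.math.reduce_std(e, axis=0)
        # add_metric exists on Layer in TF 2.x / Keras 2; guard for
        # Keras 3, where per-layer metrics are wired up differently.
        if hasattr(self, "add_metric"):
            self.add_metric(e, name="embedding_std")
        return embedding
```

Because only this layer is forced to float32, the rest of the model still gets the speed/memory benefit of mixed_float16; the layer passes the embedding through, so only the metric path changes.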