Here is the code I used to create a custom layer that reports a metric tensor:
```python
import tensorflow as tf

class MonitorStd(tf.keras.layers.Layer):
    def __init__(self, name):
        super(MonitorStd, self).__init__(name=name)

    def call(self, embedding):
        # L2-normalize the embeddings, then take the per-dimension std across the batch
        e = tf.math.l2_normalize(embedding, axis=-1)
        e = tf.math.reduce_std(e, axis=0)
        # Log it as a metric; the layer passes its input through unchanged
        self.add_metric(e, name=self.name, aggregation="mean")
        return embedding
```
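For context, the layer is just wired in between two other layers and passes its input through; a toy setup looks roughly like this (the Dense layers, shapes, and metric name here are placeholders, not my actual model):

```python
inputs = tf.keras.Input(shape=(128,))
x = tf.keras.layers.Dense(64)(inputs)        # placeholder embedding layer
x = MonitorStd(name="embedding_std")(x)      # logs the std metric, output unchanged
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")  # optimizer/loss are placeholders
```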
The layer receives an embedding from another layer and adds a metric containing the standard deviation of the L2-normalized inputs. During training this value is computed and displayed when I call fit(). With float16 mixed precision enabled, the metric reads 0.000e+00 throughout training; without mixed precision it displays fine (expected values between 0.0 and 0.0222). I am using the T4 x 2 GPU variant on Kaggle. Am I doing something wrong?
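For reference, mixed precision is enabled with the standard Keras global policy before the model is built, something along these lines (assuming that is the relevant difference between the two runs):

```python
from tensorflow.keras import mixed_precision

# Compute dtype becomes float16, variables stay float32
mixed_precision.set_global_policy("mixed_float16")
```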