I am trying to implement the Huber loss myself, as shown below.
Instantiating it with `loss = HuberLoss(delta=0.5)` and calling it with `loss(a, b)`,
the code seems to work as I intend, but I am not sure whether the way I wrote it
is correct. What I want to know is how to pass `delta` to the class's
`call` method. Is this correct?
Here are some small changes to your code:
```python
import tensorflow as tf

class HuberLoss(tf.keras.losses.Loss):
    def __init__(self, delta):
        super(HuberLoss, self).__init__()
        self.delta = delta

    def call(self, y_true, y_pred):
        # Quadratic penalty for small errors, linear penalty for large ones.
        a = tf.where(tf.abs(y_pred - y_true) < self.delta,
                     0.5 * tf.square(y_pred - y_true),
                     self.delta * tf.abs(y_pred - y_true)
                         - 0.5 * tf.square(self.delta))
        return tf.reduce_mean(a)
```
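If helpful, here is a quick sanity check (a sketch; the class is reproduced inline so the snippet runs on its own) comparing this custom loss against TensorFlow's built-in `tf.keras.losses.Huber` with the same `delta`:

```python
import tensorflow as tf

# Same class as in the answer, repeated so this snippet is self-contained.
class HuberLoss(tf.keras.losses.Loss):
    def __init__(self, delta):
        super().__init__()
        self.delta = delta

    def call(self, y_true, y_pred):
        error = tf.abs(y_pred - y_true)
        return tf.reduce_mean(
            tf.where(error < self.delta,
                     0.5 * tf.square(y_pred - y_true),
                     self.delta * error - 0.5 * tf.square(self.delta)))

y_true = tf.constant([0.0, 1.0, 2.0, 3.0])
y_pred = tf.constant([0.2, 1.0, 4.0, 2.5])

custom = HuberLoss(delta=0.5)(y_true, y_pred)
# Built-in reference implementation with the same transition point.
builtin = tf.keras.losses.Huber(delta=0.5)(y_true, y_pred)
```

The two should agree to floating-point precision; at an error exactly equal to `delta` the quadratic and linear branches give the same value, so the strict `<` in the custom class does not matter.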
- In the `__init__` method, `super()` is called with the `HuberLoss` class itself as the first argument, followed by `self`, to initialize the parent class.
- The `call` method squares the difference between `y_pred` and `y_true` with `tf.square()` (equivalent to `** 2`).
- `tf.math.reduce_mean()` is replaced with `tf.reduce_mean()` (the two are aliases) for computing the mean of the per-element losses.
Finally, your approach to passing the `delta` value to the class is correct: you instantiate `HuberLoss` with a `delta` value, store it on `self` in `__init__`, and then use `self.delta` inside the `call` method.
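As a small illustration of that pattern (a sketch, with the class reproduced inline so it runs standalone), two instances built with different `delta` values give different losses on the same inputs, confirming each instance uses the `delta` it was constructed with:

```python
import tensorflow as tf

# Same class as above, repeated so this snippet is self-contained.
class HuberLoss(tf.keras.losses.Loss):
    def __init__(self, delta):
        super().__init__()
        self.delta = delta

    def call(self, y_true, y_pred):
        error = tf.abs(y_pred - y_true)
        return tf.reduce_mean(
            tf.where(error < self.delta,
                     0.5 * tf.square(y_pred - y_true),
                     self.delta * error - 0.5 * tf.square(self.delta)))

y_true = tf.constant([0.0])
y_pred = tf.constant([3.0])
# The error is 3.0, so both instances fall in the linear regime:
#   delta=0.5: 0.5 * 3 - 0.5 * 0.25 = 1.375
#   delta=2.0: 2.0 * 3 - 0.5 * 4.0  = 4.0
small = float(HuberLoss(delta=0.5)(y_true, y_pred))
large = float(HuberLoss(delta=2.0)(y_true, y_pred))
```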
I hope this helps!
Some of the characters I pasted above were ignored, and it looks strange.
Nevertheless, thank you for your feedback.