Using add_loss method in my call function

Hello everybody,

I want to use the add_loss method in my custom layer. I have a calculation that uses the output of an intermediate layer, and I want to minimize a tensor, e.g. diss = output1 / dot * 100.

Is it enough if I add the following to my call function:

def call(self, input):
    output_overall = self.dense(input)
    diss = output1 / dot * 100
    self.add_loss(tf.reduce_mean(diss))
    return output_overall

Hi @Betim_Bahtiri,

Yes, that is essentially the right pattern. In your custom layer’s call method, the loss term (diss) is computed from the output of an intermediate layer (output1), and self.add_loss registers it so that it is added to the model’s overall loss during training. A few points to keep in mind:
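As a minimal self-contained sketch of that pattern (the layer name PenalizedDense and the way output1 and dot are produced here are my own assumptions, since your snippet does not show where they come from):

```python
import tensorflow as tf

class PenalizedDense(tf.keras.layers.Layer):
    """Hypothetical layer: a dense output plus an add_loss penalty
    built from an intermediate activation."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.intermediate = tf.keras.layers.Dense(8, activation="relu")
        self.dense = tf.keras.layers.Dense(units)

    def call(self, inputs):
        output1 = self.intermediate(inputs)        # intermediate activation
        output_overall = self.dense(output1)
        # Placeholder for your "dot" tensor; epsilon avoids division by zero.
        dot = tf.reduce_sum(output1 * output1, axis=-1, keepdims=True) + 1e-8
        diss = output1 / dot * 100.0               # the tensor to minimize
        self.add_loss(tf.reduce_mean(diss))        # registered as an extra loss
        return output_overall
```

After calling the layer (or a model containing it) on some data, the registered scalar appears in layer.losses and is picked up automatically at training time.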

  1. Tensor shapes: Ensure that the tensors involved in the loss calculation (output1, dot, and diss) have compatible shapes for the division and multiplication operations.
  2. Loss weighting: If you have multiple losses in your model, you may want to adjust their relative contributions to the overall training objective. add_loss itself takes no weight argument, so scale the tensor before registering it, e.g. self.add_loss(0.1 * tf.reduce_mean(diss)).
  3. Model compilation: Losses added with add_loss are tracked automatically and are added to whatever loss you pass to the compile method; you do not need to list them in the loss argument. If the add_loss terms cover your entire objective, you can even compile without a loss.
  4. Gradient tracking: When you train with model.fit, Keras handles gradient computation for you. Only in a custom training loop do you need to run the forward pass inside a tf.GradientTape context and add the sum of model.losses to your primary loss before computing gradients.
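For point 4, a custom training loop might look like the sketch below (the tiny model, the regression loss, and the train_step name are illustrative assumptions, not your setup):

```python
import tensorflow as tf

# A layer that registers an extra loss internally, as in your question.
class AddLossLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # Illustrative penalty on the activations themselves.
        self.add_loss(0.01 * tf.reduce_mean(tf.square(inputs)))
        return inputs

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu"),
    AddLossLayer(),
    tf.keras.layers.Dense(1),
])

optimizer = tf.keras.optimizers.SGD(0.1)
mse = tf.keras.losses.MeanSquaredError()

def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x, training=True)                 # forward pass inside the tape
        loss = mse(y, pred) + tf.add_n(model.losses)   # include the add_loss terms
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.random.normal((8, 3))
y = tf.random.normal((8, 1))
loss = train_step(x, y)
```

The key line is adding tf.add_n(model.losses) to the primary loss inside the tape; with model.fit this bookkeeping happens for you.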

I hope these details help you.