Custom error signal for back-propagation rather than a loss function

I wonder how I can use a custom error signal for back-propagation. I’ve seen how to use a custom loss function, but my problem is that while I know the error signal I want to back-propagate (the difference between the current output and the desired output), I do not know the integral form of it needed to define a loss function.
The use case is a PDE solver, where I want the back-propagated error to equal the residual.
Is there a way to do this?

Using a custom error signal directly for back-propagation in a neural network, as opposed to deriving it from a loss function, is an unconventional approach because most deep learning frameworks, including TensorFlow, are designed to work with gradients computed from a loss function. The loss function’s gradient with respect to the network’s parameters is what is typically used for back-propagation.

However, if you have a specific error signal (like the residual in a PDE solver) that you want to use for back-propagation, you might consider a few approaches to integrate this into a training process:

  1. Custom Loss Function:

The most straightforward approach would be to encapsulate your error signal within a custom loss function. Even if you cannot express an integral form of the error, you can often define a loss function that computes the error for each data point and then averages these errors across the batch. This is essentially what happens in standard practice, where the loss function measures the discrepancy between predictions and targets.

```python
import tensorflow as tf

def custom_loss(y_true, y_pred):
    # y_true: true values
    # y_pred: model's predictions
    # Compute your custom error signal here, for example:
    error_signal = compute_residual(y_true, y_pred)
    # Return the mean error as the loss
    return tf.reduce_mean(error_signal)
```

When compiling the model, specify the custom loss function:

```python
model.compile(optimizer='adam', loss=custom_loss)
```

  2. Gradient Tape:

TensorFlow’s tf.GradientTape API allows for more flexibility and might enable you to use your custom error signal directly for gradient computation. You can manually compute the gradients of the weights with respect to your custom error signal and then apply these gradients using an optimizer. This approach gives you direct control over the gradient computation and application process.

```python
import tensorflow as tf

optimizer = tf.optimizers.Adam()
loss_history = []

for x, y_true in data:
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        # Compute your custom error signal
        error_signal = compute_residual(y_true, y_pred)
        loss = tf.reduce_mean(error_signal)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    loss_history.append(loss.numpy())
```

  3. Custom Training Loop:

Implementing a custom training loop gives you complete control over the training process. You can use tf.GradientTape as shown above to compute gradients based on your custom error signal and then apply these gradients using your chosen optimizer. This approach is flexible and allows for sophisticated training procedures that might not be possible with the standard method.
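To see the underlying mechanics without any framework, here is a minimal NumPy sketch of such a training loop that back-propagates a hand-specified error signal directly, never writing down a scalar loss. The toy model, data, and error signal are all assumptions for illustration; for a one-parameter linear model `y_pred = w * x` with error signal `e = y_pred - y_true`, the chain rule gives the parameter gradient as `e * x`.

```python
import numpy as np

# Toy model: y_pred = w * x, with a single trainable parameter w.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y_true = 3.0 * x                # data generated with the "true" w = 3

w = 0.0                          # initial parameter
lr = 0.5                         # learning rate
for _ in range(50):
    y_pred = w * x
    error_signal = y_pred - y_true      # the signal we choose to back-propagate
    grad_w = np.mean(error_signal * x)  # chain rule: d(y_pred)/dw = x, averaged over the batch
    w -= lr * grad_w                    # gradient-descent update

# w converges toward the true value of 3.0
```

This is exactly what the `tf.GradientTape` version does for you automatically when the "loss" is defined as the mean of the error signal, which is why the custom-loss route and the custom-error route coincide in practice.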

  4. Physics-Informed Neural Networks (PINNs):

For solving PDEs specifically, you might explore the domain of Physics-Informed Neural Networks, where the network is designed to respect physical laws represented by differential equations. In such networks, the residual of the PDE can be directly incorporated into the loss function, guiding the network to learn solutions that adhere to the underlying physics.
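To make "the residual" concrete, here is a NumPy sketch (the function name and equation are hypothetical examples, not from any PINN library) that evaluates the finite-difference residual of the ODE u'' + u = 0 for two candidate solutions. In a PINN, this residual, evaluated at the network's output (usually via automatic differentiation rather than finite differences), is exactly the quantity fed into the loss.

```python
import numpy as np

def residual(u, x):
    """Finite-difference residual of u'' + u = 0 at interior grid points."""
    h = x[1] - x[0]
    u_xx = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / h**2  # central second difference
    return u_xx + u[1:-1]                            # left-hand side of the ODE

x = np.linspace(0.0, np.pi, 201)
good = np.sin(x)        # exact solution: residual is near zero everywhere
bad = x * (np.pi - x)   # wrong candidate: residual is large

print(np.max(np.abs(residual(good, x))) < 1e-3)  # True
print(np.max(np.abs(residual(bad, x))) > 1.0)    # True
```

Driving the mean (or mean square) of this residual to zero over a set of collocation points is what trains the network toward a solution of the PDE.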


While directly using a custom error signal for back-propagation is not straightforward in high-level APIs like TensorFlow’s Keras, you can achieve a similar effect by defining a custom loss function that encapsulates your error signal, using TensorFlow’s lower-level APIs like tf.GradientTape for more control, or adopting a more tailored approach like PINNs for PDE solving. Each of these approaches provides a pathway to integrate your specific error requirements into the learning process of the model.


Thank you very much for your detailed explanation and for giving me all these possible starting points to look into. As I’m very new to TF, it will take some time to dive deeper into all of this and get a feel for how to apply it to my setting.