Extracting KL & NLL separately from compiler

Dear all,

I am currently building Bayesian neural networks and I have managed to train them quite well using the negative log likelihood (NLL) as the loss passed to model.compile(). Since I am using DenseVariational layers instead of regular Dense layers, the KL divergence is added as an extra loss term to the compiled model (at least that is how I understand it).
The total loss (KL + NLL) clearly converges to a minimum, but I would also like to plot the KL and the NLL separately, so I need to either extract them from the compiled model or use a different approach.
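One idea I had is that the DenseVariational layers register their KL term through add_loss(), so it should show up in model.losses after a forward pass, while the compile loss only covers the NLL. Below is a rough sketch of a callback along those lines (the name KLTracker and the x_sample argument are just placeholders I made up), but I am not sure this is the right approach:

import tensorflow as tf

class KLTracker(tf.keras.callbacks.Callback):
    """Logs the summed KL terms added by the DenseVariational layers
    and derives the NLL as (total loss - KL)."""

    def __init__(self, x_sample):
        super().__init__()
        self.x_sample = x_sample   # small batch of inputs (placeholder)
        self.kl_history = []
        self.nll_history = []

    def on_epoch_end(self, epoch, logs=None):
        # A forward pass refreshes the per-call losses (the KL terms).
        _ = self.model(self.x_sample, training=True)
        kl = float(tf.add_n(self.model.losses))
        self.kl_history.append(kl)
        # logs["loss"] is KL + NLL, so the NLL part is the remainder.
        # Only approximate, since the variational layers sample new
        # weights on every forward pass.
        self.nll_history.append(logs["loss"] - kl)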

Does anyone have experience with this / have any tips?

Thanks in advance!
Benjamin


Model code (MWE):

Define the model

import tensorflow as tf
import tensorflow_probability as tfp

tfpl = tfp.layers


def get_full_model(x_train_shape):

    model = tf.keras.Sequential([
        # Epistemic uncertainty
        tfpl.DenseVariational(...),
        tfpl.DenseVariational(...),

        # Aleatoric uncertainty
        tfpl.IndependentNormal(1)
    ])

    def neg_loglik(y_true, y_pred):
        return -y_pred.log_prob(y_true)

    model.compile(loss=neg_loglik,
                  optimizer='rmsprop',
                  metrics=[tf.keras.metrics.MeanAbsoluteError()])

    return model
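For completeness, this is roughly how I imagined wiring the callback sketch from above into training and plotting the two curves (x_train and y_train stand for my own data, the batch size and epoch count are arbitrary):

import matplotlib.pyplot as plt

model = get_full_model(x_train.shape)
kl_tracker = KLTracker(x_train[:32])

model.fit(x_train, y_train, epochs=200, callbacks=[kl_tracker])

plt.plot(kl_tracker.kl_history, label="KL")
plt.plot(kl_tracker.nll_history, label="NLL")
plt.xlabel("epoch")
plt.legend()
plt.show()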