Help understanding log_prob and defining the right loss function with TensorFlow Probability

Hi everyone, I have (very) recently started using TensorFlow and I am facing some issues with TensorFlow Probability; I hope you can give me a hand.
Basically, I have set up a very small fully connected Bayesian network for regression:

import tensorflow as tf
import tensorflow_probability as tfp

from tensorflow.keras.layers import Dropout, Input, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

tfd = tfp.distributions

# Negative log-likelihood loss: the model's output is a distribution,
# so the loss is -log p(y) under that distribution.
def NLL(y, distr):
    return -distr.log_prob(y)

# I forgot to include normal_exp in my snippet; this is how I define it:
# the first Dense output is the mean, the second (exponentiated) is the scale.
def normal_exp(params):
    return tfd.Normal(loc=params[..., :1], scale=tf.math.exp(params[..., 1:]))

inputs = Input(shape=(1,))
hidden = Dense(200, activation="relu")(inputs)
hidden = Dropout(0.1)(hidden, training=True)  # dropout active at inference (MC dropout)
hidden = Dense(200, activation="relu")(hidden)
hidden = Dropout(0.1)(hidden, training=True)
hidden = Dense(200, activation="relu")(hidden)
hidden = Dropout(0.1)(hidden, training=True)
params_mc = Dense(2)(hidden)  # two outputs: mean and log-scale

dist_mc = tfp.layers.DistributionLambda(normal_exp, name='normal_exp')(params_mc)

model_mc = Model(inputs=inputs, outputs=dist_mc)
model_mc.compile(Adam(learning_rate=0.004), loss=NLL)

The network works and the results are good, but the problem is that my data are not independent of each other; they are correlated, while the network ties the output distribution to the loss function through `-distr.log_prob`. Is it possible to set a loss function that does not depend on the output distribution? I have a covariance matrix, and I would like to use the log-likelihood of a multivariate Gaussian as the loss function while leaving the output layer untouched (for predicting mean and variance). If that is not possible, how can I predict the entire covariance matrix?