KL Divergence in PixelCNN

Hi, I have been trying to use the KL divergence loss function in a Gated PixelCNN, but so far without success. KL divergence requires both y_true and y_pred, and I am sampling my input from the neural network itself. The sampling code is below (the network output gives me a probability):
```python
I_Par = torch.zeros(shape, dtype=torch.float64, device=self.device)
Start1 = time.time()
for h in range(shape[2]):
    for w in range(shape[3]):
        for c in range(shape[1]):
            with torch.no_grad():
                # forward pass of the network on the rows generated so far
                Net_1 = self.forward(I_Par[:, :, :h + 1, :])
                # one uniform random number per sample in the batch
                rand = torch.rand((bs,), dtype=torch.float64, device=self.device)
                # threshold the predicted probability to choose between -1 and 1
                I_Par[:, c, h, w] = torch.where(Net_1[:, c, h, w] > rand, -1.0, 1.0)
```

My idea was that after sampling, I could pass the final sampled input through the network again and use the two outputs as y_true and y_pred, but that does not seem correct. Please help me with this if you have any idea; I can post more of my code. I implemented this in PyTorch.
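
To make the idea concrete, here is a rough sketch of what I mean (not my exact code; `I_Par` and `self.forward` are from the sampling snippet above). It uses `torch.nn.functional.kl_div`, which expects log-probabilities as its first argument and probabilities as its second:

```python
import torch
import torch.nn.functional as F

# Rough sketch of the idea, reusing I_Par and self.forward from the snippet above.
# One forward pass (without gradients) is treated as the "true" distribution ...
with torch.no_grad():
    p_true = self.forward(I_Par)      # probabilities used as y_true

# ... and a second forward pass (with gradients) as the prediction.
p_pred = self.forward(I_Par)          # probabilities used as y_pred

# F.kl_div wants log-probabilities first and probabilities second;
# the small eps avoids log(0).
eps = 1e-8
loss = F.kl_div((p_pred + eps).log(), p_true, reduction="batchmean")
loss.backward()
```

I suspect this is why it is not correct: y_true and y_pred come from the same network and the same input, so the KL divergence is essentially zero.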