Check/visualize the gradients/losses getting applied to each layer

My network makes use of 3 losses that are applied to different layers. This is what I am doing currently:
```python
# inside a custom train_step, with the forward pass recorded on `tape`
trainable_vars = self.trainable_variables
gradients = tape.gradient([loss1, loss2, loss3], trainable_vars)
```
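
For context, this sits inside a custom `train_step` roughly like the sketch below (the three output heads, the MSE losses, and the target structure are placeholders for my actual setup):

```python
import tensorflow as tf

class MultiLossModel(tf.keras.Model):
    def train_step(self, data):
        x, (y1, y2, y3) = data  # placeholder: one target per loss
        with tf.GradientTape() as tape:
            # placeholder: the model returns three output heads
            out1, out2, out3 = self(x, training=True)
            loss1 = tf.reduce_mean(tf.keras.losses.mse(y1, out1))
            loss2 = tf.reduce_mean(tf.keras.losses.mse(y2, out2))
            loss3 = tf.reduce_mean(tf.keras.losses.mse(y3, out3))
        trainable_vars = self.trainable_variables
        # passing a list of losses makes tape.gradient differentiate their sum
        gradients = tape.gradient([loss1, loss2, loss3], trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        return {"loss1": loss1, "loss2": loss2, "loss3": loss3}
```

As far as I understand, passing a list of losses to `tape.gradient` computes the gradient of their sum with respect to each variable.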
I am hoping that each loss only produces gradients for the layers whose outputs were involved in its calculation. However, how do I verify this?
Is there any way to check which loss/gradient each layer in the network is affected by during backpropagation, without the need for custom layers?
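
One idea I had is to record the forward pass on a persistent tape and ask for each loss's gradient separately; variables that are not on a loss's computation path should come back as `None`. Something like the sketch below, where `report_per_loss_gradients` and its `loss_fns` argument are names I made up for illustration:

```python
import tensorflow as tf

def report_per_loss_gradients(model, x, loss_fns):
    """Print which trainable variables each loss actually reaches.

    loss_fns maps a loss name to a callable that takes the model
    outputs and returns a scalar loss (signature is my assumption).
    """
    with tf.GradientTape(persistent=True) as tape:
        outputs = model(x, training=True)
        losses = {name: fn(outputs) for name, fn in loss_fns.items()}
    for name, loss in losses.items():
        grads = tape.gradient(loss, model.trainable_variables)
        for var, g in zip(model.trainable_variables, grads):
            # g is None when the variable is not on this loss's path;
            # otherwise the norm shows how strongly the loss affects it
            status = "unconnected" if g is None else f"|grad| = {tf.norm(g).numpy():.4g}"
            print(f"{name} -> {var.name}: {status}")
    del tape  # persistent tapes should be released explicitly
```

Would inspecting these per-loss gradients (`None` vs. nonzero norm) be a reliable way to confirm which layers each loss affects?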