Mixed Precision Backward Pass Type

Hello all,
I was wondering whether mixed precision does the backward pass in float32 or float16. I looked around and couldn’t find evidence either way.
I saw that in PyTorch you can configure this. Can you do the same in TensorFlow?
If someone could point me to resources on how to figure this out or configure it, that would be great.

Thanks

@g-w1,

Welcome to the TensorFlow Forum!

TensorFlow provides a LossScaleOptimizer that applies loss scaling to prevent numeric underflow in the intermediate gradients when float16 is used.
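
For reference, here is a minimal sketch of how the LossScaleOptimizer is typically used in a custom training loop, following the TF 2.x mixed precision guide (the layer sizes, Adam optimizer, and MSE loss are just placeholders, and method names may differ in newer Keras versions): the loss is scaled up before the backward pass and the gradients are unscaled before being applied.

```python
import tensorflow as tf

# Enable mixed precision: float16 compute dtype, float32 variable dtype.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16,)),          # placeholder input size
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, dtype="float32"),   # keep the final outputs in float32
])

# Wrap a regular optimizer so the loss is scaled up before the backward pass
# and the gradients are unscaled again before they are applied.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())
loss_fn = tf.keras.losses.MeanSquaredError()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        loss = loss_fn(y, predictions)
        # Scale the loss so small float16 gradients do not underflow to zero.
        scaled_loss = optimizer.get_scaled_loss(loss)
    scaled_gradients = tape.gradient(scaled_loss, model.trainable_variables)
    # Undo the scaling before the optimizer update.
    gradients = optimizer.get_unscaled_gradients(scaled_gradients)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    return loss
```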

Please find the reference here and let us know if this is what you are looking for.

Thank you!

This is not what I was looking for.
Sorry, I must not have been clear enough.

When using mixed precision, does TensorFlow compute gradients in float32 or float16? I couldn’t find which one it uses.

@g-w1,

As far as I know, it uses float32 precision.

Thank you!

Thanks so much for the quick reply!
Would you be able to provide some evidence for why you think that? I also think it computes them in float32, but for the task I am doing, I need to be absolutely sure that is the case.
I tried looking through the code, but I couldn’t find anything.
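
In case it helps, one way I could try to check this empirically is something like the sketch below, which just prints the dtypes that come out of a tf.GradientTape under a mixed_float16 policy (I realize this only shows the dtype of the final gradients with respect to the variables, not the precision of the intermediate backward ops):

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),   # arbitrary toy shape
    tf.keras.layers.Dense(4),
])

x = tf.random.normal([2, 8])
with tf.GradientTape() as tape:
    y = model(x, training=True)          # forward pass runs under the float16 compute dtype
    loss = tf.reduce_mean(tf.square(y))
grads = tape.gradient(loss, model.trainable_variables)

print("model output dtype:", y.dtype)
for var, grad in zip(model.trainable_variables, grads):
    print("variable dtype:", var.dtype, "-> gradient dtype:", grad.dtype)
```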