I was wondering whether mixed precision runs the backward pass in float32 or float16? I looked around and couldn't find evidence either way.
I saw that you can configure this in PyTorch. Can you do the same in TensorFlow?
If someone could point me to resources for how to figure this out or configure it, that would be great.
Thanks so much for the quick reply!
Would you be able to provide some evidence for why you think that? I also believe it happens in float32, but for the task I am working on, I need to be absolutely sure that this is the case.
I tried looking through the code, but I couldn’t find anything.
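In case it helps, one way I can think of to check this empirically (rather than reading the source) is to inspect the dtypes that a mixed-precision layer actually produces, both for the forward activations and for the gradients. This is just a sketch assuming TF 2.x and the `tf.keras.mixed_precision` API; the layer and shapes are arbitrary:

```python
import tensorflow as tf

# Enable mixed precision globally: compute dtype float16, variable dtype float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

layer = tf.keras.layers.Dense(4)
x = tf.random.normal([2, 3])  # float32 input; the layer casts it to float16

with tf.GradientTape() as tape:
    y = layer(x)  # under mixed_float16 the layer computes (and outputs) in float16
    # Cast up before the reduction so the loss itself is float32.
    loss = tf.reduce_mean(tf.cast(y, tf.float32) ** 2)

grads = tape.gradient(loss, layer.trainable_variables)

print("activation dtype:", y.dtype)
print("kernel variable dtype:", layer.kernel.dtype)
print("gradient dtypes:", [g.dtype for g in grads])
```

If the activations come out as float16 while the kernel variable and its gradients are float32, that would be consistent with the backward ops mirroring the forward dtypes (i.e. running in float16 where the forward ran in float16), with the gradient cast back to float32 at the variable because the forward pass inserted a float32-to-float16 cast there.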