Is it possible to use 8 bit integers instead of floating point numbers?

Hello,

Here is a weird question.

For tensors, we use floating point numbers (FP16 or FP32).

I was wondering if it is possible to use integers to improve performance, since I want to reduce the size of the intermediate results and don't need such high-resolution accuracy for my data.

Would that be possible?
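To make the question concrete, here is a rough sketch of what I mean (the shapes and values are just illustrative; I'm not sure plain int8 tensors are the right approach for a whole model, which is why I'm asking):

```python
import torch

# What I have in mind: storing intermediate results as 8-bit integers
# instead of FP16/FP32 to save memory.
x = torch.randint(-128, 128, (1024, 1024), dtype=torch.int8)
w = torch.randint(-128, 128, (1024, 1024), dtype=torch.int8)

print(x.element_size())  # 1 byte per element instead of 2 (FP16) or 4 (FP32)

# Element-wise ops work on int8 tensors (note they wrap around on overflow),
# but I don't know whether this is practical for matmul/conv layers.
y = x + w
```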

EDIT:
I'm also wondering if this is possible with my existing model.
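To make this part concrete too: is something like PyTorch's dynamic quantization the right direction for an existing model? A minimal sketch of what I mean, with a toy model standing in for mine:

```python
import torch
import torch.nn as nn

# A tiny stand-in for "my existing model" (hypothetical, just for illustration).
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
model.eval()

# Post-training dynamic quantization: weights of the listed module types are
# converted to int8, and activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 64)
print(quantized(x).shape)  # inference still takes and returns float tensors
```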

You can find integer quantization at: