Quantizing activations

Hey,
from a lot of sources I went through now, I do not understand what exactly is quantized in the activations. Assume a sigmoid activation function. I can observe (using a representative dataset) a bunch of activation outputs, using the unquantized float32 weights. Then I determine the min,max range, but what do I do with that determined scale factor afterwards? My intuition says I need to quantize the entire activation function somehow (or dequantize the activation input), because e.g. a regular sigmoid cant deal with quantized (e.g. int8) input values and would always end in the saturated area. In other words, I dont really get the quantization of activations. Please clear up my confusion.
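To make the question concrete, here is a minimal sketch of what I think the observed min/max might be used for, assuming standard affine (asymmetric) int8 quantization; the helper names (`compute_qparams`, `quantize`, `dequantize`) are just made up for illustration, not any particular framework's API:

```python
import numpy as np

def compute_qparams(x_min, x_max, num_bits=8):
    # Affine quantization parameters for an observed activation range
    # [x_min, x_max], mapped onto the signed int range [-128, 127].
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    # Ensure 0.0 is exactly representable (a common convention).
    x_min, x_max = min(x_min, 0.0), max(x_max, 0.0)
    scale = (x_max - x_min) / (qmax - qmin)
    zero_point = int(round(qmin - x_min / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, num_bits=8):
    # Map float values to integer codes: q = round(x / scale) + zero_point.
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    q = np.round(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize(q, scale, zero_point):
    # Map integer codes back to approximate float values.
    return scale * (q.astype(np.float32) - zero_point)

# Sigmoid outputs live in (0, 1), so calibration would give roughly [0, 1].
scale, zp = compute_qparams(0.0, 1.0)

x = np.array([-2.0, 0.0, 3.0], dtype=np.float32)  # float pre-activations
y = 1.0 / (1.0 + np.exp(-x))                      # sigmoid computed on float values
q = quantize(y, scale, zp)                        # int8 codes for the activation outputs
y_hat = dequantize(q, scale, zp)                  # approximate floats seen by the next layer
print(q, y_hat)
```

In this picture the scale/zero-point only convert the sigmoid's *outputs* to int8 codes and back; the sigmoid is never fed raw int8 codes directly. Is that roughly what happens, or does the activation function itself get replaced by an integer-domain version?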
