Post-training quantization of AveragePool2D layer

Hello,

I'm struggling to understand why there is no calibrated output quantization for the AveragePool2D layer in my TFLite model.

I have the following AveragePool2D layer in my quantized model:

As you can see, the layer is quantized to int8 and the quantization parameters are the same for both the input and output tensors. I'm able to measure that the values coming from the Add/Relu layer fall within the range [-128, 125], which indicates that quantization performs nicely on that layer. On the other hand, the values produced by AveragePool2D are always in the range [-128, -83], so less than a quarter of the available int8 range ([-128, 127]) is utilized. I would expect different quantization parameters on the output tensor so that the full int8 range is used.
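For reference, here is a minimal sketch of how I inspect the per-tensor value ranges and quantization parameters. The model path `model.tflite` and the random int8 sample are placeholders for my actual setup; in practice I feed representative data.

```python
import numpy as np
import tensorflow as tf

# Assumption: the quantized model is stored as "model.tflite" and takes a
# single int8 input; adjust the path and the sample batch to your setup.
interpreter = tf.lite.Interpreter(
    model_path="model.tflite",
    experimental_preserve_all_tensors=True,  # keep intermediate tensors readable
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
sample = np.random.randint(-128, 128, size=input_details["shape"], dtype=np.int8)
interpreter.set_tensor(input_details["index"], sample)
interpreter.invoke()

# Print the observed int8 range next to the stored (scale, zero_point)
# for every tensor, including the AveragePool2D input and output.
for detail in interpreter.get_tensor_details():
    try:
        values = interpreter.get_tensor(detail["index"])
    except ValueError:
        continue  # some tensors have no readable data (e.g. folded constants)
    scale, zero_point = detail["quantization"]
    print(f'{detail["name"]}: min={values.min()}, max={values.max()}, '
          f'scale={scale}, zero_point={zero_point}')
```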

Could you please give me hints on where the gaps in my thought process are? I would also be grateful for links to additional sources or code that could give me more clues.

Have a nice day,
Lukas