TFLite gives different output when running inference with a batch

Hi everybody,

I have an fp16 TFLite model running on an ARM CPU. When I run inference on a batch of N (N > 1) identical input vectors, I get N output vectors that are all identical, as expected. However, when I compare this output vector with the one obtained by running inference on a single input vector (batch of 1), the two are slightly different.
For example, in the worst case in my tests, one component of the output vector is 0.00019744 (batch of 1) vs. 0.0001975 (N > 1), a relative difference of about 0.03%.
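Not an answer to my own question, but for context: a difference of 6e-8 at a magnitude of ~2e-4 is within one fp16 ulp, so my guess is that the batched kernel simply accumulates partial sums in a different order than the single-vector kernel. Since floating-point addition is not associative, that alone can shift the last bits. A minimal NumPy sketch of the effect (the values here are arbitrary, chosen only to make the reordering visible in fp16):

```python
import numpy as np

# Floating-point addition is not associative: the same three fp16
# numbers summed in two different orders give slightly different
# results. Batched kernels often tile/accumulate in a different order
# than single-vector kernels, which can produce this kind of tiny drift.
a, b, c = np.float16(1e-3), np.float16(1.0), np.float16(-1.0)

left = np.float16(np.float16(a + b) + c)   # (a + b) + c
right = np.float16(a + np.float16(b + c))  # a + (b + c)

print(left, right)  # two different values, both close to 1e-3
```

Both results are "correct" to within fp16 rounding; they just round differently along the two accumulation paths.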

Has anyone experienced the same thing?