For reference, the TensorFlow version is 2.10.0.
I am running inference with a model that I trained and saved to file using Keras. The input is about 120k samples, each a 2d vector, and the output is a probability (binary classification).
I have noticed that running inference with the same model, on the same data, produces slightly different predictions for some of the samples within this dataset.
The number of differing samples varies. In some runs all samples had identical predicted probabilities, while in the worst case 97 of the 120k samples (about 0.08%) had different predicted values. The differences are tiny: in the 97-sample case the maximum absolute difference was about 0.0015, so I am not worried about it affecting the final binarized results.
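For context on why I suspect floating point rather than the model itself: my understanding is that parallel kernels may accumulate sums in a different order from run to run, and floating-point addition is not associative, so the last bits of a result can change. A minimal illustration in plain Python (not TensorFlow-specific):

```python
# Floating-point addition is not associative, so the order in which a
# reduction accumulates its terms changes the low-order bits of the result.
# This is the usual source of tiny run-to-run differences in parallel code.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one accumulation order
right = a + (b + c)  # another accumulation order

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
print(abs(left - right))  # on the order of 1e-16
```

The per-sample differences I see (up to ~0.0015) are of course much larger than one ulp, but small rounding differences early in a deep network can be amplified through subsequent layers.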
However, I am curious whether such differences are expected and, if so, whether there is a way to prevent them (something like calling
tf.config.experimental.enable_op_determinism() during inference).
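To make the question concrete, this is roughly the setup I have in mind; a sketch, assuming the model and input shapes from above (the `Sequential` model here is a stand-in, not my actual model), and assuming that calling `enable_op_determinism()` before any inference is the correct placement:

```python
import tensorflow as tf

# Seed Python, NumPy and TF RNGs. This matters mostly for training /
# weight init, but is cheap to do up front.
tf.keras.utils.set_random_seed(42)

# Ask TF to select only deterministic op implementations (available
# since TF 2.9). Note: deterministic kernels can be slower, and ops
# without a deterministic implementation will raise an error.
tf.config.experimental.enable_op_determinism()

# Stand-in for the real saved model: a tiny binary classifier taking
# 2d vectors, matching the shapes described above.
model = tf.keras.Sequential(
    [tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(2,))]
)

x = tf.random.uniform((128, 2))  # stand-in for the ~120k-sample input

# The question: with op determinism enabled, should repeated inference
# on identical input be bitwise identical?
p1 = model.predict(x)
p2 = model.predict(x)
print((p1 == p2).all())
```

Is this the intended use of `enable_op_determinism()`, or is it only meant to make training reproducible?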