Model.predict returns different results than a TensorFlow Serving instance running the same model

Hi,
I’ve been testing and searching high and low, but keep running into this:

  • trained an image classifier on about 1300 images, for 2 classes
  • I use this function to create the training and validation sets:
train_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

(same call for the validation set, except subset="validation")

  • I train the model, then run predict on the validation set:
predictions = probability_model.predict(val_ds)

and then save the validation set filenames and the two class probabilities to a file.
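For context, the save step looks roughly like this. A minimal sketch, assuming the filenames come from `val_ds.file_paths` and `predictions` is an (N, 2) array from `probability_model.predict(val_ds)`; dummy values stand in here:

```python
import csv
import io

# Hypothetical stand-ins: in the real script, `filenames` would be
# val_ds.file_paths and `predictions` the output of probability_model.predict.
filenames = ["img_001.jpg", "img_002.jpg"]
predictions = [[0.91, 0.09], [0.12, 0.88]]

# Write one row per validation image: filename plus both class probabilities.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["filename", "prob_class_0", "prob_class_1"])
for name, (p0, p1) in zip(filenames, predictions):
    writer.writerow([name, f"{p0:.6f}", f"{p1:.6f}"])

print(buf.getvalue())
```

Note that the row order only matches the prediction order if the dataset is not reshuffled between listing the filenames and calling predict.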

  • I then save the model, and run tensorflow_model_server with it (installed from tensorflow-model-serving-1.14-1.0-1.x86_64.rpm)

A separate script then reads the validation-set filenames and sends the corresponding images to the TensorFlow server. I expected this server to return the same probabilities, but it doesn’t.
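For reference, the client side sends a request shaped roughly like the sketch below. TF Serving’s REST predict endpoint expects an `{"instances": [...]}` JSON body; the model name (`classifier`) and the exact preprocessing are assumptions here, and the preprocessing has to match what the Keras model received at predict time or the probabilities will differ:

```python
import json

def build_predict_request(pixels):
    """Build a TF Serving REST predict body for one preprocessed image.

    `pixels` is a nested list (height x width x channels), already resized
    and scaled the same way as in the training script.
    """
    return json.dumps({"instances": [pixels]})

# A real client would POST this body to e.g.
#   http://localhost:8501/v1/models/classifier:predict
body = build_predict_request([[[0.0, 0.0, 0.0]]])  # dummy 1x1 RGB "image"
parsed = json.loads(body)
print(len(parsed["instances"]))
```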

What could be the reason?

Many thanks in advance :slight_smile:

The only “solution” I have found so far is to re-implement the model in PyTorch,
where everything works as expected (i.e. the same probabilities are returned)…

Hi @tensordor, could you please try using the latest version of TF Serving, i.e. 2.14? If you still see a difference in the results, could you share them so we can see the difference? Thank you.

Ok, thanks, I’ll try that.
I ended up using the rpm because it’s an easy install,
but it’s an obvious candidate for the mismatch of course…