How to expand the dims of a tensor for on-device prediction?

Hello,

I’ve trained a custom image classification model using Keras, and I’m now trying to use it for prediction on an Android device (Kotlin). The model expects a batch of tensors as input, so when predicting on a single image in Python, we need tf.expand_dims(image_tensor, 0) to add the batch dimension, like so:

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path=TFLITE_FILE_PATH)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# decode_image handles JPEG and PNG; channels=3 drops any alpha channel
image = tf.io.decode_image(tf.io.read_file(image_path), channels=3)
gray_scale_image = tf.image.rgb_to_grayscale(image)
resized_image = tf.image.resize(gray_scale_image, size=(224, 224))
input_data = tf.expand_dims(resized_image, 0)

# get_input_details() / get_output_details() return one dict per tensor
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(np.argmax(output_data))

On Android, however, it’s not clear how to achieve this with TensorImage. I know it’s possible with TensorFlow Java, but that doesn’t seem to be available on Android.

Or am I supposed to add an extra step when converting the model to TFLite?

Apparently ImageProcessor takes care of this, so you don’t need to add the batch dimension manually. It would be helpful if the docs explained that.
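
In case it helps anyone else, here’s a minimal Kotlin sketch of how this looks with the TFLite Support Library. NUM_CLASSES and the "model.tflite" asset name are placeholders for your own model, and I’ve mirrored the Python preprocessing (resize to 224x224, then grayscale; ResizeOp comes first because it operates on the RGB bitmap). As far as I can tell, no explicit expand_dims is needed: the ByteBuffer behind a [224, 224, 1] image is byte-for-byte the same as a [1, 224, 224, 1] batch of one, so the interpreter accepts it directly.

import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.DataType
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.support.common.FileUtil
import org.tensorflow.lite.support.image.ImageProcessor
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.support.image.ops.ResizeOp
import org.tensorflow.lite.support.image.ops.TransformToGrayscaleOp
import org.tensorflow.lite.support.tensorbuffer.TensorBuffer

// Placeholder: set this to your model's actual number of output classes.
const val NUM_CLASSES = 10

fun classify(context: Context, bitmap: Bitmap): Int {
    // "model.tflite" is a placeholder asset name; in real code, create the
    // interpreter once and reuse it instead of per call.
    val interpreter = Interpreter(FileUtil.loadMappedFile(context, "model.tflite"))

    // Resize while the image is still RGB, then convert to grayscale,
    // mirroring the Python preprocessing (224x224, single channel).
    val processor = ImageProcessor.Builder()
        .add(ResizeOp(224, 224, ResizeOp.ResizeMethod.BILINEAR))
        .add(TransformToGrayscaleOp())
        .build()

    // FLOAT32 assumes a float input model; adjust to your model's dtype.
    var tensorImage = TensorImage(DataType.FLOAT32)
    tensorImage.load(bitmap)
    tensorImage = processor.process(tensorImage)

    // No explicit expand_dims: the buffer for a [224, 224, 1] image already
    // has exactly the bytes the model's [1, 224, 224, 1] input expects.
    val output = TensorBuffer.createFixedSize(intArrayOf(1, NUM_CLASSES), DataType.FLOAT32)
    interpreter.run(tensorImage.buffer, output.buffer)

    // Argmax over the class scores, like np.argmax in the Python snippet.
    return output.floatArray.withIndex().maxByOrNull { it.value }!!.index
}

You’d call classify(context, bitmap) with the Bitmap you want to label; if your model was trained on normalized inputs, add a NormalizeOp to the ImageProcessor to match.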