Change batch size (statically) for inference TF2

I have a trained Keras .h5 model and want to change the batch size to allow processing multiple samples at inference (using a .tflite model). How is that possible in TF2? Unfortunately, I didn't find anything.

Hi @Horst_G !

I had done something similar using the resize_tensor_input method on .tflite models, which lets you change the input to a specific shape, like this:

  # Build the interpreter from the .tflite file and fetch tensor details
  interpreter = tf.lite.Interpreter(model_path="model.tflite")
  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()
  # Resize the input tensor so one call handles 1453 samples
  interpreter.resize_tensor_input(input_details[0]['index'], [1453, 102, 1])
  interpreter.allocate_tensors()
  interpreter.set_tensor(input_details[0]['index'], data)
  interpreter.invoke()
  tflite_model_predictions = interpreter.get_tensor(output_details[0]['index'])

In my case this snippet runs inference on all 1453 samples in the data array at once. You can combine it with a for loop if you want to segment the work into more predictions, like this:

pred = []
# Resize and allocate once; reuse the interpreter for every chunk
interpreter.resize_tensor_input(input_details[0]['index'], [1453, 102, 1])
interpreter.allocate_tensors()
for i in range(27):
  interpreter.set_tensor(input_details[0]['index'], split_array[i])
  interpreter.invoke()
  tflite_model_predictions = interpreter.get_tensor(output_details[0]['index'])
  pred.append(tflite_model_predictions)
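For reference, a `split_array` like the one used above can be built by splitting the full data array into equal chunks along the batch axis; a minimal NumPy sketch (the shapes are assumptions taken from the numbers in this thread):

```python
import numpy as np

# Placeholder data: 27 chunks of 1453 samples, each sample of shape (102, 1)
data = np.random.rand(27 * 1453, 102, 1).astype(np.float32)

# Split along the first (batch) axis into 27 equal chunks of 1453 samples
split_array = np.split(data, 27)
print(len(split_array), split_array[0].shape)  # 27 (1453, 102, 1)
```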

I hope it helps you!


Sorry, my bad. I am using a .tflite model, but not the TFLite API for inference. So I would need to change the batch size either during the conversion process or beforehand, I guess. Do you have another idea?

I believe that doing it beforehand should be easier, because I don’t know if it is possible to change it during the conversion.
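For what it’s worth, fixing the batch size during conversion also seems possible in TF2, by tracing the Keras model as a concrete function whose input signature has a fixed batch dimension. A sketch with a tiny placeholder model (substitute your loaded .h5 model and real shapes):

```python
import tensorflow as tf

# Placeholder model; substitute tf.keras.models.load_model("model.h5")
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3),
])

# Trace the model with a fixed batch size of 2
concrete_fn = tf.function(model).get_concrete_function(
    tf.TensorSpec([2, 4], tf.float32))

converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
tflite_model = converter.convert()

# The resulting .tflite model now expects exactly 2 samples per invocation
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])
```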

What worked for me:

import tensorflow as tf

model = tf.keras.models.load_model("model.h5")

model_config = model.get_config()
# Replace the original input layer with one that has a fixed batch size of 2
model_config['layers'][0] = {
                      'name': 'new_input',
                      'class_name': 'InputLayer',
                      'config': {
                          'batch_input_shape': (2, 512, 768, 3),
                          'dtype': 'float32',
                          'sparse': False,
                          'name': 'modified_input'
                      },
                      'inbound_nodes': []
                  }
# Rewire the first hidden layer and the model inputs to the new input layer
model_config['layers'][1]['inbound_nodes'] = [[['new_input', 0, 0, {}]]]
model_config['input_layers'] = [['new_input', 0, 0]]
new_model = model.__class__.from_config(model_config, custom_objects={})  # change custom objects if necessary

# Copy the trained weights into the rebuilt model
new_model.set_weights(model.get_weights())

Found on Stack Overflow: python - Keras replacing input layer - Stack Overflow
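Once the batch size is baked into the Keras model like this, converting to .tflite keeps the fixed batch dimension. A sketch with a stand-in model (the layer shapes and filename are assumptions, not from the thread):

```python
import tensorflow as tf

# Stand-in for new_model above: any Keras model whose input has a fixed batch size
inp = tf.keras.Input(shape=(4,), batch_size=2)
new_model = tf.keras.Model(inp, tf.keras.layers.Dense(3)(inp))

converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
tflite_model = converter.convert()
with open("model_fixed_batch.tflite", "wb") as f:
    f.write(tflite_model)

# The converted input keeps the fixed batch dimension
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]['shape'])
```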
Solved. But thanks anyway.