TFLite with signatures issue


I have created a .tflite file and it works. No problem so far, but what it doesn't have is "signatures", and I need a .tflite file with signatures. Is there an additional parameter to add them, or how can I train a model that exports a .tflite file with signatures? What I used for the .tflite creation is TFLite Model Maker.


Hi @murkoc

Usually when you convert to TFLite there is a 'serving_default' signature.
Use the snippet below to check:

import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path="/diffusion_model.tflite")
interpreter.allocate_tensors()

# Get input details, output details, and signatures.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
signature_lists = interpreter.get_signature_list()
print(signature_lists)

You can read more details here.
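Once a signature is present, you can also run inference through it directly with `get_signature_runner`, which is part of the `tf.lite.Interpreter` API. A hedged sketch (the model path, input name `images`, shape, and dtype below are placeholders, not necessarily your model's):

```python
import numpy as np
import tensorflow as tf

# Placeholder path; reuse the interpreter from the snippet above if you like.
interpreter = tf.lite.Interpreter(model_path="/diffusion_model.tflite")

# Get a callable bound to the 'serving_default' signature.
runner = interpreter.get_signature_runner('serving_default')

# The keyword argument must match the signature's input name
# (e.g. 'images' or 'input_2', as shown by get_signature_list()).
outputs = runner(images=np.zeros((1, 320, 320, 3), dtype=np.uint8))
print(outputs.keys())  # the signature's named outputs
```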


Thank you @George_Soloupis ,

At the moment I am still trying to create the model, so the script you provided didn't address my issue. I only had training images; Model Maker asked for them and produced the .tflite file.

I have already checked the linked source, by the way.

I am trying to run the .tflite model on an Android device. Another, working model has the output below:

[{'name': 'serving_default_input_2:0', 'index': 0, 'shape': array([  1, 384, 384,   3]), 'shape_signature': array([ -1, 384, 384,   3]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
[{'name': 'StatefulPartitionedCall:1', 'index': 245, 'shape': array([ 1, 16]), 'shape_signature': array([-1, 16]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:0', 'index': 238, 'shape': array([  1, 100]), 'shape_signature': array([ -1, 100]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
{'serving_default': {'inputs': ['input_2'], 'outputs': ['classifier', 'locator']}}

but my model produces the output below, and there are a lot of differences:

[{'name': 'serving_default_images:0', 'index': 0, 'shape': array([  1, 320, 320,   3]), 'shape_signature': array([  1, 320, 320,   3]), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.0078125, 127), 'quantization_parameters': {'scales': array([0.0078125], dtype=float32), 'zero_points': array([127]), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
[{'name': 'StatefulPartitionedCall:1', 'index': 600, 'shape': array([ 1, 25]), 'shape_signature': array([ 1, 25]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:3', 'index': 598, 'shape': array([ 1, 25,  4]), 'shape_signature': array([ 1, 25,  4]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:0', 'index': 601, 'shape': array([1]), 'shape_signature': array([1]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}, {'name': 'StatefulPartitionedCall:2', 'index': 599, 'shape': array([ 1, 25]), 'shape_signature': array([ 1, 25]), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
{'serving_default': {'inputs': ['images'], 'outputs': ['output_0', 'output_1', 'output_2', 'output_3']}}

As I understand it, I need a model with different parameters. Could anyone help at this point?

I can see that the serving_default signature parameters differ between the two models. The working model has

'inputs': ['input_2'], 'outputs': ['classifier', 'locator']

while mine has

'inputs': ['images'], 'outputs': ['output_0', 'output_1', 'output_2', 'output_3'].

How can I solve it?


From the above I see a lot of differences between your model and the one you are comparing it with. The question is: what exactly are you trying to do? What is the task? Is it an object detection task, and are you trying to use an out-of-the-box project with a different .tflite model?
There is no right or wrong answer here. The model can perform fine, but you have to feed it the correct data and read the result from the right output.
Can you give more details? If possible, can you share a notebook showing how you are training the model and how you convert it?
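For reference, object detection training with TFLite Model Maker usually looks roughly like the sketch below. The paths, label map, and hyperparameters here are hypothetical placeholders, not the poster's actual setup:

```python
# Sketch of a typical TFLite Model Maker object detection workflow.
# Paths, the label map, and hyperparameters are illustrative placeholders.
from tflite_model_maker import model_spec, object_detector

# Pick a model spec; EfficientDet-Lite0 is a common default.
spec = model_spec.get('efficientdet_lite0')

# Load training data from Pascal VOC style images + XML annotations.
train_data = object_detector.DataLoader.from_pascal_voc(
    'images/', 'annotations/', label_map={1: 'my_label'})

# Train and export. export() writes model.tflite, and the exported
# model carries a 'serving_default' signature by default.
model = object_detector.create(train_data, model_spec=spec,
                               epochs=20, batch_size=8)
model.export(export_dir='.')
```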


Hi @George_Soloupis ,

Thank you for your interest,

Since this was my first attempt at TFLite on Android, I have a lot of gaps in my understanding. What I am trying to do is train an object detection model that works on an Android device.

The script I used is the TFLite Model Maker script:

That source only asked for training images. I pointed it at my training images and it worked; in the end I had the .tflite model. I used it directly in the Android device script. That's all. It throws an error that asks for some parameters in the signatures output, which is why I asked about signature availability, but the answer was not what I was hoping for. The script you recommended does show the signature information. Now I need a .tflite model that works for object detection on an Android device. The other model is a model that already works on Android, so I compared the two and checked the differences in their outputs. I was thinking I needed to retrain my dataset with some different parameters.

I hope it is enough to explain the issue.

Additionally, I haven't tried any conversion. Should I convert the .tflite I generated into another one?

Thanks in advance.

Did you use an out-of-the-box Android app? Which one was it?

I used PyCharm for this.

Can you elaborate more, e.g. with a link to it?

Also check this tutorial, where you can find an implementation of Model Maker and a suggested Android app that uses the created model.

After checking again carefully, the error I get is:

java.lang.IllegalArgumentException: Cannot copy to a TensorFlowLite tensor (serving_default_images:0) with 1228800 bytes from a Java Buffer with 1769472 bytes.

I am not sure whether this can be solved on the Android side or whether I must retrain the model with some specific parameters.

You are feeding the Interpreter the wrong buffer. Check the input again.
It is expecting a buffer of 1x320x320x3x4 = 1228800 bytes and you are feeding it 1x384x384x3x4 = 1769472 bytes.
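The arithmetic here is just the product of the tensor dimensions times the bytes per element. A quick sketch (the shapes come from the error message above; the helper name is made up for illustration):

```python
def tflite_buffer_bytes(shape, bytes_per_element=4):
    """Bytes a TFLite input tensor expects: product of dims x element size.

    bytes_per_element is 4 for float32, 1 for uint8.
    """
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

# The model in this thread expects a 1x320x320x3 float32 input ...
print(tflite_buffer_bytes((1, 320, 320, 3)))  # 1228800
# ... while a 1x384x384x3 float32 buffer was being fed.
print(tflite_buffer_bytes((1, 384, 384, 3)))  # 1769472
```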


So, should I set this in the interpreter script, or should I create the feeding data with the expected 1228800 bytes? Setting 1x320x320 is fine, but I don't know how to set the last factor, 4, in the interpreter.
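The trailing 4 is not a tensor dimension you set anywhere; it is the size of one float32 element in bytes. A stdlib check:

```python
import struct

# A float32 occupies 4 bytes, a uint8 occupies 1 byte; the last factor
# in 1x320x320x3x4 is this element size, not an extra dimension.
float32_size = struct.calcsize('f')  # 4
uint8_size = struct.calcsize('B')    # 1
print(float32_size, uint8_size)

# So a 1x320x320x3 float32 tensor needs this many bytes:
print(1 * 320 * 320 * 3 * float32_size)  # 1228800
```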

Hi, I recreated the data with the desired buffer size. That error is gone, but now I get the next one:

DataType error: cannot resolve DataType of java.lang.Object

Do you have an idea on this one?

Thank you for your interest again.

It is expecting, e.g., a Float, but you are feeding it an Object, which means you are not feeding it the correct type. Check what your input is.
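The mismatch can be illustrated in Python with the stdlib array module: the interpreter wants a correctly typed buffer, not a generic object. On Android the fix is analogous, i.e. passing a correctly typed buffer or array rather than a plain Object (the shape below is the model's 1x320x320x3 input; float32 is an assumption for illustration):

```python
from array import array

# A typed float32 buffer (typecode 'f') sized for a 1x320x320x3 input.
# bytes(...) zero-fills it; 4 is the float32 element size.
buf = array('f', bytes(1 * 320 * 320 * 3 * 4))

print(len(buf))                  # 307200 float32 elements
print(buf.itemsize)              # 4 bytes each
print(len(buf) * buf.itemsize)   # 1228800 bytes total
```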

Do you mean making some adaptation on the Android interpreter side? I think there is no way to do it on the Model Maker side. Is it possible to change some parameters when training the model?

On the Android side.
You have to post some code, or a link to your code, so we can see better.