ISSUE with DataType when executing tflite model

I’m getting this error when executing a tflite model in an Android app.

Cannot copy from a TensorFlowLite tensor (StatefulPartitionedCall:3) with 4 bytes to a Java Buffer with 1 bytes.

Here is my code:
Object[] inputs = {byteBuffer};
Map<Integer, Object> outputs = new HashMap<>();

tflite = new Interpreter(loadModel(MainActivity.this));
int[] index0 = new int[]{};
int[] index1 = new int[]{1, 1};
int[] index2 = new int[]{1, 1};
int[] index3 = new int[]{};
int[] index4 = new int[]{1, 1, 4};
Tensor ts0 = tflite.getOutputTensor(0);
Tensor ts1 = tflite.getOutputTensor(1);
Tensor ts2 = tflite.getOutputTensor(2);
Tensor ts3 = tflite.getOutputTensor(3);
Tensor ts4 = tflite.getOutputTensor(4);
TensorBuffer out0 = TensorBuffer.createFixedSize(index0, DataType.UINT8);
TensorBuffer out1 = TensorBuffer.createFixedSize(index1, DataType.UINT8);
TensorBuffer out2 = TensorBuffer.createFixedSize(index2, DataType.UINT8);
TensorBuffer out3 = TensorBuffer.createFixedSize(index3, DataType.UINT8);
TensorBuffer out4 = TensorBuffer.createFixedSize(index4, DataType.UINT8);
outputs.put(0, out0.getBuffer());
outputs.put(1, out1.getBuffer());
outputs.put(2, out2.getBuffer());
outputs.put(3, out3.getBuffer());
outputs.put(4, out4.getBuffer());
tflite.runForMultipleInputsOutputs(inputs, outputs);

Can anybody tell me what I'm doing wrong?

Thank you very much.

Hi @uridium

Can you check your .tflite file with netron.app and report what your inputs and outputs are? Number of inputs and outputs, data types, etc.

Thanks

Hi @George_Soloupis

This is the Netron output:
input float32[1,64,256,3]
output 0 float32[1,1,4]
output 1 float32[1,1]
output 2 int32[1,1]
output 3 int32[1]
output 4 int32[1]
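
A quick sanity check on the buffer sizes explains the original error. float32 and int32 elements are both 4 bytes wide, so a UINT8 (1-byte-per-element) TensorBuffer is always four times too small for these outputs, which matches the "4 bytes to a Java Buffer with 1 bytes" message. A minimal, self-contained sketch using the shapes from the Netron listing above:

```java
// Compute how many bytes each output buffer must hold.
// Shapes are taken from the Netron listing; float32 and int32
// are both 4 bytes per element, UINT8 is 1 byte per element.
public class OutputSizes {
    static int byteSize(int bytesPerElement, int... shape) {
        int elements = 1;
        for (int dim : shape) {
            elements *= dim;
        }
        return elements * bytesPerElement;
    }

    public static void main(String[] args) {
        System.out.println(byteSize(4, 1, 1, 4)); // output 0: float32[1,1,4] -> 16 bytes
        System.out.println(byteSize(4, 1, 1));    // output 1: float32[1,1]   -> 4 bytes
        System.out.println(byteSize(4, 1, 1));    // output 2: int32[1,1]     -> 4 bytes
        System.out.println(byteSize(4, 1));       // output 3: int32[1]       -> 4 bytes
        System.out.println(byteSize(4, 1));       // output 4: int32[1]       -> 4 bytes
    }
}
```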

I've (almost) solved the problem. I've created a float array for output 0:
new float[1][xx][4];
and so on.

New code that works OK, but only if I know in advance that there will be 7 objects detected:
Object[] inputs = {byteBuffer};
Map<Integer, Object> outputs = new HashMap<>();

tflite = new Interpreter(loadModel(MainActivity.this));
int[] index0 = new int[1];
int[][] index1 = new int[1][7];
float[][] index2 = new float[1][7];
int[] index3 = new int[1];
float[][][] index4 = new float[1][7][4];

outputs.put(0, index0);
outputs.put(1, index1);
outputs.put(2, index2);
outputs.put(3, index3);
outputs.put(4, index4);
tflite.runForMultipleInputsOutputs(inputs, outputs);

The problem now is: how can I determine the number of detected objects before running the inference?

Any idea?

Thank you very much.

I do not think the .tflite file was created in the right way.
Unfortunately, you will have to prepare a Python notebook with the code you used to convert your model to .tflite, so I can take a look and suggest changes.

Check a little bit the example app from TF Lite documentation for Object detection that uses the Task Library

Hello,

The model was supplied by another company; it was not developed by me.
I will ask them, if possible, to send the script.

I checked the example you suggested some days ago. It is not quite what I need, because I have to use an InterpreterApi object to run the inference.
Do you perhaps have an example of how to create the options object for the InterpreterApi?

Thank you very much

The error is due to a data type mismatch: your model’s output tensor expects a different data type than the one you’re providing through your Java Buffer. Ensure your TensorBuffer data types match the model’s output tensor data types (e.g., use DataType.FLOAT32 instead of DataType.UINT8 if required). Adjust the TensorBuffer creation in your code accordingly to resolve the issue.
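
To make that concrete, here is a self-contained sketch of allocating output buffers sized for the model's declared types (taken from the Netron listing earlier in the thread) instead of hard-coding UINT8. The `allocate` helper is hypothetical, not part of the TFLite API; with a live Interpreter the shape and type would come from `getOutputTensor(i).shape()` and `getOutputTensor(i).dataType()`:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Allocate a direct, native-ordered ByteBuffer sized for a given
// dtype and shape, as the TFLite Java API expects for raw outputs.
public class BufferAlloc {
    // Bytes per element for the types this model uses.
    static int elementBytes(String dtype) {
        switch (dtype) {
            case "float32":
            case "int32":
                return 4;
            case "uint8":
                return 1;
            default:
                throw new IllegalArgumentException("Unknown dtype: " + dtype);
        }
    }

    static ByteBuffer allocate(String dtype, int... shape) {
        int elements = 1;
        for (int dim : shape) {
            elements *= dim;
        }
        return ByteBuffer.allocateDirect(elements * elementBytes(dtype))
                         .order(ByteOrder.nativeOrder());
    }

    public static void main(String[] args) {
        // output 0 is float32[1,1,4]: 16 bytes, not 4 as a UINT8 buffer would give.
        ByteBuffer out0 = allocate("float32", 1, 1, 4);
        System.out.println(out0.capacity());
    }
}
```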

Hi @Tim_wolfe,

I did that, but the problem remained the same. I've worked around the issue as follows:

I run the model and only read:
output 3 int32[1]
output 4 int32[1]
One of these outputs tells me the number of elements (N) the model has detected.
Then I run the model again with:
output 0 float32[1,N,4]
output 1 float32[1,N]
output 2 int32[1,N]

Then everything works OK, but I believe this is not the right way to do it.
Anyway, can anybody provide me with an example of how to create the options instance needed for the InterpreterApi.create(fileNameModel, options) method?

Regards.
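
For the InterpreterApi question asked above, here is a minimal, hedged sketch. It assumes the standard `org.tensorflow.lite` Java artifact; the model path and thread count are placeholders, not values from this thread:

```java
import java.io.File;
import org.tensorflow.lite.InterpreterApi;

// Sketch: build an InterpreterApi.Options instance and pass it to
// InterpreterApi.create(). Options uses a builder style, so setters
// can be chained.
public class InterpreterOptionsExample {
    public static void main(String[] args) {
        InterpreterApi.Options options = new InterpreterApi.Options()
                .setNumThreads(2); // illustrative value

        // InterpreterApi.create accepts a File (or a MappedByteBuffer).
        File modelFile = new File("model.tflite"); // placeholder path
        try (InterpreterApi interpreter = InterpreterApi.create(modelFile, options)) {
            // Run inference here, e.g.:
            // interpreter.runForMultipleInputsOutputs(inputs, outputs);
        }
    }
}
```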