Object Detection Android App throws an error with a model from tflite-model-maker (it had worked for many weeks until a few weeks ago)

I viewed the two files in Netron. They look exactly the same:

BUT when you print the details of the output tensors, you get:
Your file:

[{'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:02', 'index': 600, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:0', 'index': 598, 'shape': array([ 1, 25, 4], dtype=int32), 'shape_signature': array([ 1, 25, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:03', 'index': 601, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:3;StatefulPartitionedCall:2;StatefulPartitionedCall:1;StatefulPartitionedCall:01', 'index': 599, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

and old working file:

[{'name': 'StatefulPartitionedCall:31', 'index': 598, 'shape': array([ 1, 25, 4], dtype=int32), 'shape_signature': array([ 1, 25, 4], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:32', 'index': 599, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:33', 'index': 600, 'shape': array([ 1, 25], dtype=int32), 'shape_signature': array([ 1, 25], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}},
 {'name': 'StatefulPartitionedCall:34', 'index': 601, 'shape': array([1], dtype=int32), 'shape_signature': array([1], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0), 'quantization_parameters': {'scales': array([], dtype=float32), 'zero_points': array([], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]
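Since the four outputs above are distinguishable by shape, a small helper (my own sketch, not from these posts; the `num_boxes=25` default just matches the dumps above) can map each output to its role regardless of the order the converter emitted them in:

```python
def identify_detection_outputs(output_details, num_boxes=25):
    """Map TFLite detection output positions to roles by shape.

    boxes -> [1, N, 4] and count -> [1]; the two [1, N] tensors are
    scores and classes, which shape alone cannot tell apart, so both
    positions are returned for manual inspection (e.g. in Netron).
    """
    roles = {'scores_or_classes': []}
    for position, detail in enumerate(output_details):
        shape = tuple(detail['shape'])
        if shape == (1, num_boxes, 4):
            roles['boxes'] = position
        elif shape == (1,):
            roles['count'] = position
        elif shape == (1, num_boxes):
            roles['scores_or_classes'].append(position)
    return roles
```

Running this over `interpreter.get_output_details()` for both models lets you compare positions instead of hard-coding indices.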

I think the order of the output arrays has been changed. You can go here

and change the order to:

outputMap.put(0, outputScores);
outputMap.put(1, outputLocations);
outputMap.put(2, numDetections);
outputMap.put(3, outputClasses);

and I think your project will work again!

I do not know why this change happened, but I feel like we have to tag @khanhlvg and @Yuqi_Li to shed some light or just to inform them.

If you need more help tag me.

Thank you so much!! :slight_smile: It finally worked with TensorFlow 2.5.0 and PyYAML 5.1.
We changed the lines inside lib_interpreter, but it did not work. I believe more changes need to be made inside the Android app.
I wish you all the best.

Greetings,
Daniel Hauser

We are aware of a breaking change in TF 2.6 regarding the model signature def, which resulted in a change in the output tensor order of object detection models created by Model Maker. That is the root cause of the issue you raised in the first comment. We're actively working on a fix. For the time being, please stick to TF 2.5 when training and running Model Maker for object detection.

Hey, I am using the salad object detection Colab ( Google Colab ) and am running into this exact same error. I followed the instructions to downgrade TensorFlow to 2.5 with PyYAML 5.1 and am still getting the error. What am I missing?

Hi @Winton_Cape

Are you getting the error "Output tensor at index 0 is expected to have 3 dimensions, found 2"?
If so, try to change the order of the outputs in the Android app. Check my answer above.

Best

Just in case anyone having issues with tflite-model-maker in Colab recently stumbles upon this thread: you might want to look at @Winton_Cape’s issue if you have the same one.

Yes, I am getting that error. I did see your answer, but I am using the Salad Detector object demo. When I import the project into Android Studio, there is no file called TFLiteObjectDetectionAPIModel.java, so I don’t know how to change the order of the outputs in this project. This is what worked for me with the Salad Detector demo.

  1. Changed the notebook to CPU by editing the notebook settings

  2. Changed the TensorFlow version and Model Maker:
    !pip install -q tensorflow==2.5.0
    !pip install -q --use-deprecated=legacy-resolver tflite-model-maker
    !pip install -q pycocotools

  3. Ran all the other cells as is and it worked. I was able to generate a model file (model.tflite) and evaluate that file as well as test the performance of that file on a URL image.

I tried this and the app runs for me, with lib_interpreter.
Is this the only difference between 2.6 and 2.5, though?
If not, then the detection results might not be accurate.

Update… as stated above, the code is all wrong: the order of the outputs in the last two optional steps is wrong. Here’s what I did:

  1. Changed the order of the last two outputs of the model. This is in the detect_objects function, where the assignment of count and scores is reversed. I changed it to:
    count = int(get_output_tensor(interpreter, 2))
    scores = get_output_tensor(interpreter, 3)
    *** these two are reversed in the tutorial ***

  2. When the results are being assigned, just after the above tensor assignments, the bounding_box and class_id assignments should be reversed:
    result = {
        'class_id': boxes[i],
        'bounding_box': classes[i],
        # (remaining fields unchanged)
    }
This was the quick fix I did to get it to work… I am sure the code could be optimized.
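Put together, the corrected helper might look like the sketch below. It is my own assembly of the two steps above, not the tutorial verbatim: I rename the variables at the point of extraction (treating position 0 as classes and position 1 as boxes), which is equivalent to swapping the assignments in the result dict as described in step 2. Verify the positions against your own model with `get_output_details()` first.

```python
import numpy as np

def get_output_tensor(interpreter, index):
    """Return the squeezed output tensor at the given list position."""
    output_details = interpreter.get_output_details()[index]
    return np.squeeze(interpreter.get_tensor(output_details['index']))

def detect_objects(interpreter, threshold=0.5):
    # Output order observed for TF 2.6-era Model Maker models:
    # positions 0-3 hold classes, boxes, count, scores.
    classes = get_output_tensor(interpreter, 0)
    boxes = get_output_tensor(interpreter, 1)
    count = int(get_output_tensor(interpreter, 2))
    scores = get_output_tensor(interpreter, 3)

    results = []
    for i in range(count):
        if scores[i] >= threshold:
            results.append({
                'bounding_box': boxes[i],
                'class_id': classes[i],
                'score': scores[i],
            })
    return results
```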

Hi @Winton_Cape

Is there any way you can explain this process a little more so I can follow along with my Colab/models?

The TF upgrade to 2.6 seems to be causing a lot of issues with my output orders, and it keeps crashing my Android app. The only models I can get to work in it were created before the upgrade to 2.6.

Any help is greatly appreciated.

Cheers,
Will

Hi Khanh,

Any update on a fix for the change in output tensor order for Model Maker models? Or even just a sneaky workaround? My app works great with models built on 2.5 and I’m trying to avoid rebuilding the app.

Any tips on how to continue building models on 2.5 without running into compatibility errors with dependent packages etc.?

(trying my best to stick with the app from the salad detection Colab)

@wwfisher The issue was fixed in the latest version of Model Maker (0.3.4). You can use it with the latest version of TensorFlow. Please note that the output TFLite models only work correctly with Task Library version 0.3.1.

You can see the object detection tutorial for details.

Hi Will,

Yes, I have faced the same issues. I gave up on using that Colab because the output tensor order of the model it creates is different from the model used in the sample app. So I attacked the problem from a different angle. Here’s what I did.

I still used the same object detection phone app example but used a different strategy to create the custom model. I used the Google Cloud Vision API to create the custom object detection model. I ended up relabeling all of my images because I couldn’t figure out a way to convert my label data into the Google CSV format, but other than that their process worked smoothly. The link even uses the same salad example.

Once the Vision API has generated the model, you can download the model from the cloud. That model will have the correct output tensor order and will work with the object detection phone app example. If you need more help let me know.

***** If you can figure out a way to easily transfer label data (VOC), let me know.
***** Looks like they solved the issue.

I can verify that the changes to the Object Detection with TensorFlow Model Maker do work.

  • Able to train using VOC label files
  • Able to evaluate the resultant model
  • Able to convert model to TFLite
  • Able to evaluate TFLite model

However, when I tried to test the model using a URL, I got an error about cv2 being unable to import, so I added an install:

  • !pip install “opencv-python-headless<4.3”

Then I ran the last 2 cells and it worked.

Thanks @Winton_Cape and @khanhlvg for replying to help clear up my confusion.

I think I’ve got it all sussed in my head now and might lay it out here for anyone that stumbles across this thread later on trying to fix the Salad Detector example Android app crashing with custom trained models. This information is relevant for anyone doing this after Nov 2021.

This is the tutorial to follow if you are looking to build an Android app using a transfer-learned model, known as the Salad Detector tutorial.

The Android example apps have been updated in Jan 2022 to work successfully with models trained on TensorFlow 2.6 and Model Maker 0.3.4.

The only problem with this tutorial is that it doesn’t link to the correct Colab file for transfer learning on Step 7: Train a custom object detection model. DO NOT follow the links to the Colab on this step.

The correct Colab to train a working tflite model is the updated one located here.

They look very similar but have a few changes that make it work and not crash the app. There are also a few variations of the Android example app floating around out there as well. You want to use the one in this step of the tutorial here.

Hopefully all this information is correct as of Feb 2022.

Tutorial to follow for building Android App with Salad Detector

Step with the updated Android example app

Step with non-working Colab for training your own model

Correct Colab/Tutorial for training your own model

It’s 2022, and I get the same problem. Deploying with Android leads to this error:

java.lang.IllegalArgumentException: Error occurred when initializing ObjectDetector: Output tensor at index 0 is expected to have 3 dimensions, found 2.

with the model that is quantized using

config = QuantizationConfig.for_int8(test_data, inference_input_type = tf.uint8, inference_output_type = tf.uint8)

here is the notebook Google Colab

I hope I don’t have to go back to TF 2.5!

Same here… were you able to resolve it?
float16 has no issue, the issue is only with uint8 quantization.
Thanks!
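If uint8 is not a hard requirement, a possible interim workaround (my suggestion, based only on the report above that float16 is unaffected) is to quantize to float16 with the same Model Maker config API:

```python
from tflite_model_maker.config import QuantizationConfig

# float16 quantization stores weights as float16 while inference
# still runs in float32, so it avoids the uint8 input/output types.
config = QuantizationConfig.for_float16()
# Then export as in the notebook, e.g.:
# model.export(export_dir='.', quantization_config=config)
```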

If you’re using Model Maker to train your tflite model, I’d use the nightly build.

pip install -q tflite-model-maker-nightly

The android dependency for the example apps was updated to incorporate output changes from TF 2.6 onwards.

This post here clarifies it all a bit better.

Basically, use nightly to build your model and update your dependencies to 0.3.1 or above (I’m using 0.4.0):

    implementation 'org.tensorflow:tensorflow-lite-task-vision:0.4.0'

Greetings, everyone. I am new to TensorFlow and am currently developing a real-time object detection application. I have trained an object detection model on my custom dataset using tflite-model-maker, but when I try to use the model in my app, the following error occurs:

Caused by: java.lang.IllegalArgumentException: Cannot copy from a TensorFlowLite tensor (StatefulPartitionedCall:1) with shape [1, 25] to a Java object with shape [1, 25, 4].

May I know if it is caused by the output order? I used Netron to visualize my model and found that the bounding box output order of my model is different from the sample model’s (which works fine in my app).

The screenshot of my model’s outputs and the sample model’s outputs are in this google drive:
https://drive.google.com/drive/folders/1pUR5WVkB2G5AP9op_b09qD9wv9Y7dmS0?usp=sharing

Please help me, thank you!!! :pray:

Greetings, everyone. I am also working on object detection. I trained my model with Teachable Machine and I am getting the same error. Did anyone get their error resolved? I have been trying for days but nothing is working.