This worked without any errors for many weeks. Now I get the following error in Android Studio:
Output tensor at index 0 is expected to have 3 dimensions, found 2.
My dataset is exactly the same and I train on Google Colab. I am sure that I didn't change anything in the Android app.
I look forward to your answers.
In which build variant are you getting the error? lib_task_api or lib_interpreter?
Upload the tflite file somewhere and give us a link so we can verify the output shape.
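If it helps, you can also list the output shapes locally before uploading. A minimal sketch: the helper below is duck-typed, so it works on anything that exposes `get_output_details()`; the commented usage assumes TensorFlow is installed and `model.tflite` is the exported file.

```python
def describe_outputs(interpreter):
    """Return (index, shape) pairs for each output tensor of a
    TFLite-style interpreter (anything with get_output_details())."""
    return [(d['index'], tuple(d['shape']))
            for d in interpreter.get_output_details()]

# Hypothetical usage with an exported Model Maker model:
#
#   import tensorflow as tf
#   interpreter = tf.lite.Interpreter(model_path='model.tflite')
#   interpreter.allocate_tensors()
#   print(describe_outputs(interpreter))
```

A detection model's boxes output should show 3 dimensions (batch, detections, 4); a 2-dimensional shape at index 0 is the symptom discussed in this thread.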
Recently the Colab TensorFlow version changed from 2.5.0 to 2.6.0. Check whether you can get what you want with the previous version.
I will get back to you with info on the .tflite files.
Okay thank you. This could be the issue.
I tried the following:
!pip install --ignore-installed --upgrade tensorflow==2.5.0
But I got problems with software dependencies. I will have to spend some time later to get it working with version 2.5.0.
Do you know how to switch to lib_task_api instead of lib_interpreter?
Thank you so much!! It finally worked with TensorFlow 2.5.0 and PyYAML 5.1.
We changed the lines inside lib_interpreter, but it did not work. I believe more changes need to be made inside the Android app.
I wish you all the best.
We are aware of a breaking change in TF 2.6 regarding the model signature def, which resulted in a change in the output tensor order of object detection models created by Model Maker. That is the root cause of the issue you raised in the first comment. We're actively working on fixing it. For the time being, please stick to TF 2.5 when training and running Model Maker for object detection.
Hey, I am using the salad object detection colab (Google Colab) and am running into this exact same error. I followed the instructions to downgrade TensorFlow to 2.5 with PyYAML 5.1 and am still getting the error. What am I missing?
Are you getting the error “Output tensor at index 0 is expected to have 3 dimensions, found 2”??
If so…try to change the order of the outputs at the android app. Check my answer above
In case anyone having issues with tflite model maker in Colab recently stumbles upon this thread: you might want to look at @Winton_Cape's issue if you have the same one.
Yes, I am getting that error. I did see your answer, but I am using the salad detector demo. When I import the project into Android Studio, there is no file called TFLiteObjectDetectionAPIModel.java, so I don't know how to change the order of the outputs in this project. This is what worked for me with the salad object detection demo:
Changed the notebook to CPU by editing the notebook settings.
Changed the TensorFlow version and Model Maker:
!pip install -q tensorflow==2.5.0
!pip install -q --use-deprecated=legacy-resolver tflite-model-maker
!pip install -q pycocotools
Ran all the other cells as is and it worked. I was able to generate a model file (model.tflite) and evaluate that file as well as test the performance of that file on a URL image.
I tried this and the app runs for me, with lib_interpreter.
Is this the only difference between 2.6 and 2.5, though?
If not, the detection results might not be accurate.
Update… as stated above, the code is all wrong: the order of the outputs in the last two optional steps is wrong. Here's what I did:
Changed the order of the last two outputs of the model. This is in the detect_objects function; the assignment of count and scores is reversed. I changed it to
count = int(get_output_tensor(interpreter, 2))
scores = get_output_tensor(interpreter, 3)
*** these two are reversed in the tutorial ***
When the results are being assigned just after the above tensor assignments, the bounding_box and class assignments should also be reversed:
result = {'class_id': boxes[i],
          'bounding_box': classes[i], ...}
This was the quick fix I did to get it to work… I am sure the code could be optimized.
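For anyone following along, the swaps described above can be sketched as a full detect_objects. This is only a sketch of the tutorial-style function with the reordering applied: set_input_tensor and get_output_tensor are reproduced here as commonly written helpers, and the tensor indices are the ones reported in this thread for TF 2.6 models, so verify them against your own model before relying on this.

```python
import numpy as np

def set_input_tensor(interpreter, image):
    """Copy the image into the interpreter's input tensor (common helper)."""
    tensor_index = interpreter.get_input_details()[0]['index']
    interpreter.tensor(tensor_index)()[0][:] = image

def get_output_tensor(interpreter, index):
    """Return the de-batched output tensor at the given output index."""
    output_details = interpreter.get_output_details()[index]
    return np.squeeze(interpreter.get_tensor(output_details['index']))

def detect_objects(interpreter, image, threshold=0.5):
    set_input_tensor(interpreter, image)
    interpreter.invoke()

    boxes = get_output_tensor(interpreter, 0)
    classes = get_output_tensor(interpreter, 1)
    count = int(get_output_tensor(interpreter, 2))  # swapped with scores,
    scores = get_output_tensor(interpreter, 3)      # vs. the original tutorial

    results = []
    for i in range(count):
        if scores[i] >= threshold:
            results.append({
                'class_id': boxes[i],        # keys swapped relative to the
                'bounding_box': classes[i],  # tutorial, per the fix above
                'score': scores[i],
            })
    return results
```

If your model was exported with a different TF or Model Maker version, the indices may differ again; listing the output details first (as suggested earlier in the thread) is the safest way to confirm the order.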
Is there any way you can explain this process a little more so I could follow along with my Colab/models?
The TF upgrade to 2.6 seems to be causing a lot of issues with my output orders, and it keeps crashing my Android app. The only models I can get to work were created before the upgrade to 2.6.
Any update on a fix for the change in output tensor order for Model Maker models? Or even just a sneaky workaround; my app works great with models built on 2.5 and I'm trying to avoid rebuilding the app.
Any tips on how to continue building models on 2.5 without running into compatibility errors with dependent packages, etc.?
(Trying my best to stick with the app from the salad detection colab.)
@wwfisher The issue was fixed in the latest version of Model Maker (0.3.4). You can use it with the latest version of TensorFlow. Please note that the output TFLite models only work correctly with Task Library version 0.3.1.
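For reference, pinning those two versions might look like this. This is only a sketch: the Gradle artifact name in the comment is an assumption based on the standard Task Library vision artifact, so double-check it against your own build.

```shell
# Colab / training side: the Model Maker release with the fix
pip install -q tflite-model-maker==0.3.4

# Android side (app/build.gradle) -- artifact name assumed, verify for your setup:
#   implementation 'org.tensorflow:tensorflow-lite-task-vision:0.3.1'
```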
You can see the object detection tutorial for details.
Yes, I have faced the same issues. I gave up on using that colab because the output order of the model tensors that is created is different from the model used in the sample app. So I attacked the problem from a different angle. Here’s what I did.
I still used the same object detection phone app example but used a different strategy to create the custom model. I used the Google Cloud Vision API to create the custom object detection model. I ended up relabeling all of my images because I couldn’t figure out a way to convert my label data into the Google CSV format, but other than that their process worked smoothly. The link even uses the same salad example.
Once the Vision API has generated the model, you can download the model from the cloud. That model will have the correct output tensor order and will work with the object detection phone app example. If you need more help let me know.
***** If you can figure out a way to easily transfer label data (VOC), let me know.
***** Looks like they solved the issue.