InvalidArgumentError: required broadcastable shapes [Op:Mul]

When running model.evaluate_tflite('model.tflite', test_data) after training an object detection model with the TensorFlow Lite Model Maker, I get the following error: InvalidArgumentError: required broadcastable shapes [Op:Mul].

This can also be observed in the Google Codelab example.
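
For reference, here is a minimal sketch of the flow that triggers it, modeled on that Codelab's salad-detection example (the CSV path, model spec, and hyperparameters below are the Codelab's placeholders, not my exact setup):

from tflite_model_maker import model_spec
from tflite_model_maker import object_detector

# Placeholder dataset and spec taken from the Codelab example.
spec = model_spec.get('efficientdet_lite0')
train_data, validation_data, test_data = object_detector.DataLoader.from_csv(
    'gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')

model = object_detector.create(train_data, model_spec=spec, batch_size=8,
                               train_whole_model=True,
                               validation_data=validation_data)
model.export(export_dir='.')          # writes model.tflite

model.evaluate(test_data)                           # works
model.evaluate_tflite('model.tflite', test_data)    # raises InvalidArgumentError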

Any help is greatly appreciated.

Adding @Yuqi_Li for visibility.

Please pin tensorflow==2.5.0. The output order of the TFLite model changed as of tensorflow==2.6.0, which causes this error.

It works. Thanks a lot.

How would you fix this in Colab? I tried to set the version to 2.5.0 but Colab was happy to tell me it would interpret that as 2.x.

Thank you

For me it worked by running:

!pip install tensorflow==2.5.0

For more information, check out the notebook.

Hi, did you also recreate the whole model again?
Because I am still getting the same error after calling
!pip install tensorflow==2.5.0 at the beginning of my file.

Hi, I found another issue, possibly related to Google Colab. When I switched the TF version to 2.5.0 with
!pip install tensorflow==2.5.0
the epochs became incredibly slow, basically unusable. One epoch now takes more than 25 minutes, whereas before it took about 100 s. I stopped training because of that. If I factory-reset my Google Colab Pro runtime and don't use tensorflow 2.5.0, it runs as fast as before, but when I try to convert the model I get the same InvalidArgumentError. Is there any workaround for this?

Could you please try pip install tflite-model-maker-nightly instead of tflite-model-maker, without running !pip install tensorflow==2.5.0?

This should resolve the evaluate_tflite issue. However, the TFLite model exported by the model-maker-nightly version is not compatible with ObjectDetector in the TFLite Task Library, since the output order changed. We are working on supporting this case.
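
In Colab that would be something like the following (the uninstall step is just a precaution to avoid mixing the stable and nightly packages):

!pip uninstall -y tflite-model-maker
!pip install -q tflite-model-maker-nightly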

The TFLite Task Library is unfortunately used in our project, so there is no possibility of using the nightly version.
We are doing native mobile development with Kotlin and Android Studio, but as you already pointed out, the output order changed, so we are getting an error from the TFLite Task Library. We have been trying to solve it since yesterday, without any success.

So as I already said, we tried the workaround of downgrading TensorFlow to 2.5.0, but after that the training process is so slow that it is unusable.

Unfortunately, we still need some time to support this in the TFLite Task Library.

As for tensorflow==2.5.0 running slowly: it may be due to an incompatible CUDA/cuDNN version inside Colab, so training is not running on the GPU, which we can't control. Could you please run on your local machine instead?

Our dataset is quite big, which is why we work in Colab. Is there any possible workaround directly in the Task Library?
We are getting this kind of error:
Error occurred when initializing ObjectDetector: Output tensor at index 0 is expected to have 3 dimensions, found 2.

Hi @civoMT,

I do not know if you want to change the Task Library and use a custom .aar file. If you do, you can make changes directly to the TensorFlow Lite Support library and build the artifacts with this tool:

I just tried it out, and I can confirm that GPU training isn’t working with version 2.5.0 or 2.5.1 anymore.

import tensorflow as tf
tf.config.list_physical_devices('GPU')

Output: []

I’m not quite sure why this is, though, since according to this document, TensorFlow 2.5.0 supports the same versions of CUDA and cuDNN as TensorFlow 2.6 (CUDA 11.2 and cuDNN 8.1).
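
A quick way to cross-check which CUDA/cuDNN versions the installed wheel was built against (assuming tf.sysconfig.get_build_info and the keys below are exposed in this TF version):

import tensorflow as tf

info = tf.sysconfig.get_build_info()
print(tf.__version__)
print(info.get('cuda_version'), info.get('cudnn_version'))  # build-time CUDA/cuDNN
print(tf.config.list_physical_devices('GPU'))               # [] means no GPU is visible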

I tried this in Colab and it fixed the evaluate_tflite issue. But when I try to test the model in the next step and use it, I get: TypeError: only size-1 arrays can be converted to Python scalars

Do you think this is related to the TFLite Task Library issue you mention?

Thank you.

Yes. This should be due to the changed output order. We’re working on fixing it.

I’m also having the same issue, and when my partner tries to deploy the TFLite model in an Android app, we get this error:

java.lang.AssertionError: Error occurred when initializing ObjectDetector: Output tensor at index 0 is expected to have 3 dimensions, found 2.
at org.tensorflow.lite.task.vision.detector.ObjectDetector.initJniWithByteBuffer(Native Method)

You wouldn’t happen to have a mapping from the old output order to the new one? Something like 0 -> 4, 1 is now 2.

Is there any update on this?

We are working on fixing this. We still need ~1-2 weeks to finalize it. The current workaround is still pinning tensorflow==2.5.0.
