InvalidArgumentError: required broadcastable shapes [Op:Mul]

Thanks for the quick reply. I’m looking forward to the fix.

Find object_detector_spec.py in anaconda3/envs/tf2.5/lib/python3.6/site-packages/tensorflow_examples/lite/model_maker/core/task/model_spec, then change

nms_boxes, nms_classes, nms_scores, _ = lite_runner.run(images)

to

nms_scores, nms_boxes, nms_count, nms_classes = lite_runner.run(images)

This should address the error on tf2.6-gpu.
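The unpacking change can be illustrated with a standalone stub (the shapes below are illustrative assumptions; the actual runner returns real detection tensors):

```python
import numpy as np

# Stub standing in for lite_runner.run(images). The newer export returns
# (scores, boxes, count, classes) instead of (boxes, classes, scores, count).
def run_stub():
    scores = np.array([[0.9, 0.4]], dtype=np.float32)    # [batch, N]
    boxes = np.zeros((1, 2, 4), dtype=np.float32)        # [batch, N, 4]
    count = np.array([2.0], dtype=np.float32)            # [batch]
    classes = np.array([[0.0, 17.0]], dtype=np.float32)  # [batch, N]
    return scores, boxes, count, classes

# The corrected unpacking order from the post:
nms_scores, nms_boxes, nms_count, nms_classes = run_stub()
print(nms_boxes.shape)  # (1, 2, 4)
```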

Is there any update on this issue, please? Could you let us know when exactly it is going to be fixed?

Sorry that it’s taking longer than we expected. We will notify you here once it’s released.

Any new update on this issue?

We have released a new version of Model Maker.

Thanks a lot for the hard work. I’ll try it out as soon as possible.

My quick test with the salad maker Colab demo failed at “Step 3. Train the TensorFlow model with the training data.”

I also tried with the Model Maker nightly build.

UnknownError: 2 root error(s) found.
(0) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node keras_layer/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/efficientnet-lite2/StatefulPartitionedCall/stem/conv2d/Conv2D}}]]
(1) Unknown: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node keras_layer/StatefulPartitionedCall/StatefulPartitionedCall/StatefulPartitionedCall/efficientnet-lite2/StatefulPartitionedCall/stem/conv2d/Conv2D}}]]
[[Func/batch_cls_loss/write_summary/summary_cond/then/_4448/input/_8959/_96]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_126022]

Fixed this by switching to a high-memory instance.
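For completeness: this cuDNN initialization failure is often an out-of-GPU-memory symptom. Besides switching to a high-memory instance, enabling GPU memory growth sometimes helps. This is a general TensorFlow configuration sketch, not a fix confirmed in this thread:

```python
import tensorflow as tf

# Ask TensorFlow to allocate GPU memory on demand instead of grabbing it all
# at startup, which can avoid "Failed to get convolution algorithm" errors
# when cuDNN cannot get enough memory to initialize.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```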

Well, it ran almost all the way through the Colab example. But at the end, when it tries to test the model, it fails.

#@title Run object detection and show the detection results

gives me:

0.3
<class 'float'>

IndexError                                Traceback (most recent call last)
<ipython-input> in <module>()
     20     TEMP_FILE,
     21     interpreter,
---> 22     threshold=DETECTION_THRESHOLD
     23 )
     24

1 frames
<ipython-input> in detect_objects(interpreter, image, threshold)
     61   results = []
     62   for i in range(count):
---> 63     if scores[i] >= threshold:
     64       result = {
     65           'bounding_box': boxes[i],

IndexError: too many indices for array: array is 0-dimensional, but 1 were indexed
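The 0-dimensional IndexError can be reproduced in isolation: when the outputs are unpacked in the wrong order, the scalar detection-count tensor lands where the 1-D scores array was expected. A minimal illustration (not the notebook's actual code):

```python
import numpy as np

count_tensor = np.array(25.0)         # 0-d array, like the model's count output
scores_tensor = np.array([0.9, 0.4])  # 1-d array, what the loop expects

try:
    count_tensor[0]                   # indexing a 0-d array raises IndexError
except IndexError as err:
    print(type(err).__name__)         # IndexError

print(scores_tensor[0] >= 0.3)        # True once the right tensor is used
```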

This error occurs because the model’s output order has changed. Replace

boxes = get_output_tensor(interpreter, 0)
classes = get_output_tensor(interpreter, 1)
scores = get_output_tensor(interpreter, 2)
count = int(get_output_tensor(interpreter, 3))

with:

scores = get_output_tensor(interpreter, 0)
boxes = get_output_tensor(interpreter, 1)
count = int(get_output_tensor(interpreter, 2))
classes = get_output_tensor(interpreter, 3)
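End to end, the reordered detection loop looks like this. It is a runnable sketch: get_output_tensor is stubbed here with plain arrays, whereas the notebook's helper reads the tensors from the TFLite interpreter:

```python
import numpy as np

def get_output_tensor(outputs, index):
    # Stub for the notebook helper, which (roughly) squeezes
    # interpreter.get_tensor(interpreter.get_output_details()[index]['index']).
    return outputs[index]

def detect_objects(outputs, threshold):
    # New output order: scores, boxes, count, classes.
    scores = get_output_tensor(outputs, 0)
    boxes = get_output_tensor(outputs, 1)
    count = int(get_output_tensor(outputs, 2))
    classes = get_output_tensor(outputs, 3)

    results = []
    for i in range(count):
        if scores[i] >= threshold:
            results.append({
                'bounding_box': boxes[i],
                'class_id': classes[i],
                'score': scores[i],
            })
    return results

fake_outputs = [
    np.array([0.9, 0.2]),  # scores
    np.zeros((2, 4)),      # boxes
    np.array(2.0),         # count (scalar)
    np.array([0.0, 1.0]),  # classes
]
print(len(detect_objects(fake_outputs, 0.3)))  # 1
```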

I’ll create a PR to fix this in the Train_a_salad_detector_with_TFLite_Model_Maker.ipynb notebook.

Kind regards,
Gilbert Tanner

Thank you, I’m trying to get this to work now.

I updated the Train_a_salad_detector_with_TFLite_Model_Maker.ipynb notebook in this PR. It seems to work with the Android example as well, though that needs further testing.

Oh that’s great, that was my next move. I was able to get it to work on Colab. Oddly, it failed most of the time on a GPU instance but ran fine on a TPU instance.

I’ve been following this thread and it finally worked for me. Here is what I did: in the first cell I did NOT pin TensorFlow 2.5:

!pip install -q tflite-model-maker
!pip install -q pycocotools
!pip install -q tflite-support

And in the second-to-last cell there was a missing import for image processing; after the tflite metadata import I added the PIL image import:

from tflite_support import metadata
from PIL import Image

That’s it, and training and evaluation were pretty fast.

I tried to use a copy of the TFLite model from the updated notebook in the Android object detection app, and it did not work. Does anyone know how to change the order of the model outputs in the Android object detection app?

Has anyone gotten the salad Colab .tflite file to work with the Android object detection app?

This worked for me. I was so confused for so long about what I was doing wrong with the sample code!