TFlite batch inference bug

Hi-

I’m trying to get batch inference running for this EfficientDetLite0 model.

Here is my code and error at the bottom:

import cv2
import glob
import numpy as np
import tflite_runtime.interpreter as tflite
TFLITE_FILENAME = 'models/lite-model_efficientdet_lite0_detection_default_1.tflite'

# -------------- SET INTERPRETER-----------------#

interpreter = tflite.Interpreter(TFLITE_FILENAME)

# ---------------BATCH WORK-----------------------#

input_details = interpreter.get_input_details()
print(input_details)
tensor_index = input_details[0]['index']
interpreter.resize_tensor_input(tensor_index, [4, 320, 320, 3])  # batch of 4
interpreter.allocate_tensors()

#------------------IMAGE READ IN ----------------#

files = glob.glob("images/*.jpg")
images = np.zeros([len(files), 320, 320, 3], dtype=np.uint8)
for i in range(len(files)):
    img = cv2.imread(files[i])
    img = cv2.resize(img, (320, 320))  # imread returns arbitrary sizes; the model expects 320x320
    images[i] = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    
#--------------SET INPUT TENSOR ------------------#

interpreter.set_tensor(tensor_index, images)  # must match the resized input shape [4, 320, 320, 3]

#--------------- INVOKE ---------------------#

interpreter.invoke()

-------------------------------------------------------------------------------------------
RuntimeError: /workspace/tensorflow/lite/kernels/detection_postprocess.cc:447 ValidateBoxes(decoded_boxes, num_boxes) was not true.Node number 266 (TFLite_Detection_PostProcess) failed to invoke.

Thanks in advance!

@khanhlvg might be able to help

I’ve worked with this some more and found some pretty interesting behavior. I played around with the number of images in the batch, and discovered that no errors are thrown when the batch size is <= 3. This seems to be some sort of limit to prevent batch processing, or it could just be a bug.
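In case it helps anyone hitting the same wall, the obvious fallback is to keep the default batch size of 1 and loop over the images. One thing that bit me while trying this (a sketch, not specific to this model): slicing with images[i:i+1] keeps the leading batch dimension the interpreter expects, while images[i] drops it.

```python
import numpy as np

# Batch of 4 preprocessed frames, shaped as in the code above.
images = np.zeros([4, 320, 320, 3], dtype=np.uint8)

# images[i] drops the batch axis; images[i:i+1] keeps it.
single = images[0]      # shape (320, 320, 3) -- set_tensor would reject this
batched = images[0:1]   # shape (1, 320, 320, 3) -- matches the default input

print(single.shape, batched.shape)

# The per-image loop would then look like (interpreter calls as in the post):
#     for i in range(len(images)):
#         interpreter.set_tensor(tensor_index, images[i:i+1])
#         interpreter.invoke()
```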

Understanding the output is another story. I've been using the pycoral detect module to process outputs.

objs = detect.get_objects(interpreter, score_threshold=0.5)

This is what objs looks like:

[Object(id=43, score=0.5859375, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5859375, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5859375, bbox=BBox(xmin=-12030157278092323633233920, ymin=19330061968751681494078602608640, xmax=10419190147768341663580160, ymax=19330204017535486213006630584320)),
 Object(id=43, score=0.56640625, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.56640625, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.56640625, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.54296875, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.54296875, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.54296875, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.54296875, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.54296875, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.54296875, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5234375, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5234375, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5234375, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0)),
 Object(id=43, score=0.5, bbox=BBox(xmin=0, ymin=0, xmax=0, ymax=0))]

So it appears that it is correctly identifying the object in the image (a bottle in my case), but doesn’t know where it is located.
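My guess is that pycoral’s detect.get_objects is part of the problem here: as far as I can tell it assumes the output tensors come from a batch of one, so with a batch of 3 it reads box coordinates at the wrong offsets (hence the zeros and the garbage numbers above). A sketch of unpacking batched outputs manually with plain numpy instead, assuming the post-processing op emits boxes shaped [batch, N, 4] and classes/scores shaped [batch, N] (which is an assumption on my part, not something I’ve confirmed for this model):

```python
import numpy as np

def unpack_batched_detections(boxes, classes, scores, score_threshold=0.5):
    """Split batched detection outputs into per-image lists.

    boxes:   [batch, N, 4] in normalized [ymin, xmin, ymax, xmax] order
    classes: [batch, N] class ids
    scores:  [batch, N] confidence scores
    """
    results = []
    for b in range(boxes.shape[0]):
        keep = scores[b] >= score_threshold
        results.append(list(zip(classes[b][keep], scores[b][keep], boxes[b][keep])))
    return results

# Toy outputs: batch of 2, N=3 candidate detections each.
boxes = np.zeros([2, 3, 4], dtype=np.float32)
boxes[0, 0] = [0.1, 0.2, 0.5, 0.6]
classes = np.array([[43, 43, 0], [43, 0, 0]], dtype=np.float32)
scores = np.array([[0.9, 0.3, 0.1], [0.6, 0.2, 0.1]], dtype=np.float32)

per_image = unpack_batched_detections(boxes, classes, scores)
print(len(per_image[0]), len(per_image[1]))  # detections kept per image
```

The real tensors would come from interpreter.get_tensor on the output details rather than the toy arrays here.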

Isn’t this just classification?