Expanding TFLite object detector functionality

I am trying to extend the TFLite Model Maker object detector with OCR. I have exported the underlying serving model, but I'm having trouble understanding the output I'm getting from the Keras model: it is two arrays of pyramid-like outputs.

I'm trying to understand how to get bounding boxes and predictions (scores and classes) out of it. I've been looking for clues here, but I can't wrap my head around it.
I'd like to stay with the TFLite implementation of the model, since it gives me very convenient training, and with just some augmentation added on top of what TFLite already provides, the results are satisfying. The plan is to take the underlying model, combine it with OCR, and later export it back to TFLite for mobile use.

Can you try something like this?

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path=tflite_model_path)  # model_path takes a file path; model_content expects raw bytes
fn = interpreter.get_signature_runner()
output = fn(images=image)

I do have working inference with the model, both in Python and on Android. The problem is that I want to use the underlying Keras model and combine it with OCR to build a single model that detects objects and reads text out of them (necessary for my use case). The idea is a Keras model with the EfficientDet model as the first stage, detection postprocessing as the second, and OCR as the third. My issue is that the Keras model output is different from the TFLite model output, so I'm trying to replicate the output postprocessing.
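To make the intended composition concrete, here is a minimal sketch with plain-Python stand-ins for the three stages. All function names, shapes, and return values here are illustrative assumptions only, not the real EfficientDet or OCR APIs:

```python
import numpy as np

def detector(images):
    # Stand-in for the EfficientDet serving model: returns pyramid-like
    # class and box outputs (shapes illustrative, 9 anchors per cell assumed).
    batch = images.shape[0]
    cls_outputs = [np.zeros((batch, s, s, 9 * 2)) for s in (40, 20, 10, 5, 3)]
    box_outputs = [np.zeros((batch, s, s, 9 * 4)) for s in (40, 20, 10, 5, 3)]
    return cls_outputs, box_outputs

def postprocess(cls_outputs, box_outputs):
    # Stand-in for anchor decoding + NMS; just returns one dummy box.
    return np.array([[0.1, 0.1, 0.5, 0.5]])

def ocr(images, boxes):
    # Stand-in for cropping each detected box and running text recognition.
    return ["dummy text" for _ in boxes]

def pipeline(images):
    cls_outputs, box_outputs = detector(images)
    boxes = postprocess(cls_outputs, box_outputs)
    return boxes, ocr(images, boxes)

boxes, texts = pipeline(np.zeros((1, 320, 320, 3)))
```

The real implementation would wrap these stages in a single exportable model, but the dataflow is the same: pyramid outputs in, decoded boxes out, text per box.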

The difference is probably the NMS in the TFLite model.
Maybe use this after your Keras model?


Passing the output from the model into generate_detections, with params being a dict from the EfficientDetLite0Spec config and scales [0.25] (single-image input for now), I get

InvalidArgumentError: indices[0,19206] = 19206 is not in [0, 19206) [Op:GatherV2]

in line 146 of automl/efficientdet/keras/postprocess.py (I can’t post git links yet)

Not sure what I’m doing wrong at this point.
For reference, here are snippets of my code:

keras_model = model.create_serving_model()
res = keras_model(np.expand_dims(npimages[0], 0))  # res[0]: class outputs, res[1]: box outputs
detections = generate_detections(params, res[0], res[1], [0.25], [1])
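For reference, the 19206 in that error is not arbitrary: it matches the total anchor count of EfficientDet-Lite0 at its default 320x320 input, assuming feature levels 3 to 7 and 9 anchors per cell (3 scales x 3 aspect ratios). This suggests the gather is trying to read one index past the end of the anchor table:

```python
import math

image_size = 320          # EfficientDet-Lite0 default input size
min_level, max_level = 3, 7
anchors_per_cell = 9      # 3 scales x 3 aspect ratios

# Feature map sizes come from repeatedly halving the input (with ceiling).
size = image_size
feat_sizes = []
for level in range(1, max_level + 1):
    size = math.ceil(size / 2)
    if level >= min_level:
        feat_sizes.append(size)

total_anchors = anchors_per_cell * sum(s * s for s in feat_sizes)
print(feat_sizes, total_anchors)  # [40, 20, 10, 5, 3] 19206
```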

I seem to have fixed the issue: I needed to add 1 to the number of classes in the parameters to accommodate the background class. Thank you @Kzyh for the help!
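For anyone hitting the same error, here is a minimal NumPy reproduction of the failure mode (an out-of-range gather) together with the parameter fix; the num_classes value below is illustrative, not from my actual config:

```python
import numpy as np

num_anchors = 19206
table = np.zeros(num_anchors)

# Gathering index 19206 from a table of length 19206 is out of range,
# which is the same failure mode as the InvalidArgumentError above.
try:
    np.take(table, [num_anchors])
    out_of_range = False
except IndexError:
    out_of_range = True

# The fix on the params side: one extra class slot for the implicit background.
num_classes = 1                             # classes in the dataset (illustrative)
params = {'num_classes': num_classes + 1}   # +1 for the background class
```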