Problem with TensorFlow Lite code using Python and Flask on a Google Coral Dev Board for object detection

Dear community.
I’m currently using a Google Coral Dev Board, and have started deploying the Raspberry Pi examples from the following repository:

To do this I followed the steps recommended by @khanhlvg in the following link:

Update Mendel OS and pip, create and activate a virtual environment, clone the “object_detection” example, and install the requirements:

numpy>=1.20.0 # To ensure compatibility with OpenCV on Raspberry Pi.

I also installed Flask to build a small video streaming application with Python, and it worked very well with a USB webcam.

After a few bugs with Flask and additional installations, my virtual environment has the following:

(tflite) mendel@elusive-jet:~/DevBoard$ pip3 freeze

The next step was to modify the script, adapting it to work with Flask.

"""Main script to run the object detection routine."""
from flask import Flask
from flask import render_template
from flask import Response

import argparse
import sys
import time

import cv2
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision
import utils

app = Flask(__name__)

def run(model: str, camera_id: int, width: int, height: int, num_threads: int,
        enable_edgetpu: bool) -> None:
  """Continuously run inference on images acquired from the camera.

  Args:
    model: Name of the TFLite object detection model.
    camera_id: The camera id to be passed to OpenCV.
    width: The width of the frame captured from the camera.
    height: The height of the frame captured from the camera.
    num_threads: The number of CPU threads to run the model.
    enable_edgetpu: True/False whether the model is an EdgeTPU model.
  """
  # Variables to calculate FPS
  counter, fps = 0, 0
  start_time = time.time()

  # Start capturing video input from the camera
  cap = cv2.VideoCapture(camera_id)
  cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
  cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

  # Visualization parameters
  row_size = 20  # pixels
  left_margin = 24  # pixels
  text_color = (0, 0, 255)  # red
  font_size = 1
  font_thickness = 1
  fps_avg_frame_count = 10

  # Initialize the object detection model
  base_options = core.BaseOptions(
      file_name=model, use_coral=enable_edgetpu, num_threads=num_threads)
  detection_options = processor.DetectionOptions(
      max_results=3, score_threshold=0.3)
  options = vision.ObjectDetectorOptions(
      base_options=base_options, detection_options=detection_options)
  detector = vision.ObjectDetector.create_from_options(options)

  # Continuously capture images from the camera and run inference
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      sys.exit(
          'ERROR: Unable to read from webcam. Please verify your webcam settings.'
      )

    counter += 1
    image = cv2.flip(image, 1)

    # Convert the image from BGR to RGB as required by the TFLite model.
    rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Create a TensorImage object from the RGB image.
    input_tensor = vision.TensorImage.create_from_array(rgb_image)

    # Run object detection using the model.
    detection_result = detector.detect(input_tensor)

    # Draw keypoints and edges on the input image
    image = utils.visualize(image, detection_result)

    # Calculate the FPS
    if counter % fps_avg_frame_count == 0:
      end_time = time.time()
      fps = fps_avg_frame_count / (end_time - start_time)
      start_time = time.time()

    # Show the FPS
    fps_text = 'FPS = {:.1f}'.format(fps)
    text_location = (left_margin, row_size)
    image=cv2.putText(image, fps_text, text_location, cv2.FONT_HERSHEY_PLAIN,
                      font_size, text_color, font_thickness)
    # Stream the result as JPEG frames through the Flask web app
    (flag, encodedImage) = cv2.imencode(".jpg", image)
    if not flag:
      continue
    yield(b'--image\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
        bytearray(encodedImage) + b'\r\n')

    # The original cv2.imshow / ESC-key handling is disabled, because the
    # Dev Board has no graphical environment; frames are streamed instead.

  cap.release()

@app.route("/")
def index():
  return render_template("index.html")

@app.route("/video_feed")
def video_feed():
  return Response(run('efficientdet_lite0_edgetpu.tflite', 1, 640, 480, 4, True),
                  mimetype="multipart/x-mixed-replace; boundary=image")

if __name__ == '__main__':
  app.run(host='0.0.0.0', port=5000, debug=True)

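As a side note, the multipart frame format produced by the yield in the script can be exercised on its own with a tiny helper (the function name and the sample bytes below are illustrative; real frames come from cv2.imencode):

```python
def mjpeg_part(jpeg_bytes: bytes, boundary: bytes = b"image") -> bytes:
    """Wrap one JPEG frame as a multipart/x-mixed-replace part,
    matching the boundary declared in the Flask Response mimetype."""
    return (b"--" + boundary + b"\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" +
            jpeg_bytes + b"\r\n")

# Fake JPEG payload just to show the framing.
part = mjpeg_part(b"\xff\xd8...\xff\xd9")
print(part[:9])  # b'--image\r\n'
```

The browser keeps the `/video_feed` connection open and replaces the displayed image each time a new boundary-delimited part arrives, which is why no graphical environment is needed on the board itself.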
But after several attempts, I get the following error: - - [20/Jun/2022 20:28:29] "GET /video_feed HTTP/1.1" 200 -
 * Detected change in '/home/mendel/DevBoard/', reloading
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 135-396-847 - - [20/Jun/2022 20:52:48] "GET / HTTP/1.1" 200 -
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
  File "/home/mendel/tflite/lib/python3.7/site-packages/werkzeug/", line 462, in __next__
    return self._next()
  File "/home/mendel/tflite/lib/python3.7/site-packages/werkzeug/wrappers/", line 50, in _iter_encoded
    for item in iterable:
  File "/home/mendel/DevBoard/", line 68, in run
    detector = vision.ObjectDetector.create_from_options(options)
  File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/vision/", line 83, in create_from_options
    options.base_options.to_pb2(), options.detection_options.to_pb2())
TypeError: create_from_options(): incompatible function arguments. The following argument types are supported:
    1. (arg0: tflite::python::task::core::BaseOptions, arg1: tflite::task::processor::DetectionOptions) ->

Invoked with: <MagicMock name='mock.do_not_generate_docs()()' id='281473195160968'>, <MagicMock name='mock.do_not_generate_docs()()' id='281473195160968'> - - [20/Jun/2022 20:52:48] "GET /video_feed HTTP/1.1" 200 -

Could you please tell me which options I should try to correct this error?

My Coral Dev Board version is:

(tflite) mendel@elusive-jet:~/DevBoard$ cat /etc/os-release
PRETTY_NAME="Mendel GNU/Linux 5 (Eagle)"
NAME="Mendel GNU/Linux"

Thank you very much, greetings from Peru

Hi @MikiVera

There’s a bug in tflite-support v0.4.1 that caused the issue. Please run pip install tflite-support==0.4.0, or update your requirements.txt file accordingly, to install the older version of the library for the time being.

We’ll release v0.4.2, which will fix the bug, in the coming weeks.


Dear @khanhlvg very grateful for your support.

I did what you recommended, but my script then issued the following error.

(tflite) mendel@elusive-jet:~/tflite$ python3
python3: can't open file '': [Errno 2] No such file or directory
(tflite) mendel@elusive-jet:~/tflite$ cd
(tflite) mendel@elusive-jet:~$ cd DevBoard
(tflite) mendel@elusive-jet:~/DevBoard$ python3
Traceback (most recent call last):
  File "", line 24, in <module>
    from tflite_support.task import core
  File "/home/mendel/tflite/lib/python3.7/site-packages/tflite_support/", line 53, in <module>
    from tflite_support import task
  File "/home/mendel/tflite/lib/python3.7/site-packages/tflite_support/task/", line 28, in <module>
    from . import audio
  File "/home/mendel/tflite/lib/python3.7/site-packages/tflite_support/task/audio/", line 20, in <module>
    from import audio_classifier
  File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/audio/", line 18, in <module>
    from import audio_record
  File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/audio/core/", line 17, in <module>
    import sounddevice as sd
  File "/home/mendel/tflite/lib/python3.7/site-packages/", line 71, in <module>
    raise OSError('PortAudio library not found')
OSError: PortAudio library not found

I proceeded to install the missing audio library:

(tflite) mendel@elusive-jet:~/DevBoard$ sudo apt-get install libportaudio2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 61.2 kB of archives.
After this operation, 211 kB of additional disk space will be used.
Get:1 eagle/main arm64 libportaudio2 arm64 19.6.0-1+deb10u1 [61.2 kB]
Fetched 61.2 kB in 1s (66.0 kB/s)
Selecting previously unselected package libportaudio2:arm64.
(Reading database ... 50015 files and directories currently installed.)
Preparing to unpack .../libportaudio2_19.6.0-1+deb10u1_arm64.deb ...
Unpacking libportaudio2:arm64 (19.6.0-1+deb10u1) ...
Setting up libportaudio2:arm64 (19.6.0-1+deb10u1) ...
Processing triggers for libc-bin (2.28-10) ...

I ran my script again with no problem:

(tflite) mendel@elusive-jet:~/DevBoard$ python
 * Serving Flask app 'detect' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses (
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on
 * Running on (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 301-658-436 - - [22/Jun/2022 18:33:39] "GET / HTTP/1.1" 200 -
INFO: Created TensorFlow Lite XNNPACK delegate for CPU. - - [22/Jun/2022 18:33:42] "GET /video_feed HTTP/1.1" 200 -

I attach the results with the ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite and efficientdet_lite0_edgetpu.tflite models in the Flask web viewer, since the Coral Dev Board has no graphical environment (and installing one is not recommended, so as not to impact its performance).

Now I will move on to the next level of the tutorial.

I confess that I have a Coral USB Accelerator but I didn’t know how to use it; the Coral site is a bit complex. As soon as I saw your videos and tried them out, I was encouraged to work with the Dev Board, since my Coral USB is worn out and I don’t want to damage it. Very grateful for your contribution and for motivating us to work with TensorFlow Lite (it’s better than the libraries on the Coral site); you are a great person and an excellent teacher.


Awesome! I’m glad that you were able to make the code work. Using Flask to stream the detection result via a web server is a very nice workaround for Coral DevBoard :slight_smile:


Is a monitor required to run object detection? What if I wanted to run it to recognize a pet and perform an action? I don’t need to see it on a monitor.

Hello @MarcusA, you don’t need a monitor. I recommend you apply “Object Classification”; you can see the following link:

In that code you can check whether the score for your pet is greater than a threshold (0.6, for example) and then execute an action, such as turning on an LED through the GPIO port.
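A minimal sketch of that idea in Python (the label, threshold, and sample detections below are illustrative; real results would come from the TFLite detector, and the print would be replaced by a GPIO call):

```python
PET_LABEL = "dog"   # label to watch for (example value)
THRESHOLD = 0.6     # score threshold suggested above

def should_trigger(detections, label=PET_LABEL, threshold=THRESHOLD):
    """Return True if any (label, score) detection exceeds the threshold."""
    return any(name == label and score > threshold
               for name, score in detections)

# Example detections from one hypothetical frame.
frame = [("cat", 0.45), ("dog", 0.82)]
if should_trigger(frame):
    print("pet detected")  # replace with a GPIO helper to turn on an LED
```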



How does this change with the new Bullseye update and libcamera?

libcamera is an application that displays the Raspberry Pi camera; it can be run with arguments to select either the USB camera or the ribbon-cable camera.

I don’t know if you can change it.

I recommend that you review videos on how to install and use the camera with OpenCV on your Raspberry Pi.
After learning the OpenCV code and building applications with it, it will be very easy for you to work with TensorFlow Lite.


The TFLite samples are using OpenCV so you’ll need to enable the legacy camera stack on Bullseye to make them work.

We’ll update the samples to use libcamera when its Python API becomes stable.