Problem with TensorFlow Lite code using Python and Flask on a Google Coral Dev Board for object detection

Dear community.
I’m currently using a Google Coral Dev Board and have started deploying the Raspberry Pi examples from the following Git repository:

To do this I followed the steps recommended by @khanhlvg in the following link:

I updated Mendel OS and pip, created a virtual environment, activated it, cloned the “object_detection” example, and installed the requirements:

argparse
numpy>=1.20.0 # To ensure compatibility with OpenCV on Raspberry Pi.
opencv-python~=4.5.3.56
tflite-support>=0.4.0

I also installed Flask to make a small video streaming application with Python, and it worked very well with a USB webcam.
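For anyone curious, the heart of such a stream is just wrapping each encoded JPEG frame in a multipart/x-mixed-replace part. Here is a minimal sketch of that framing (the helper name mjpeg_part is my own, not part of the example):

```python
def mjpeg_part(jpeg_bytes: bytes, boundary: bytes = b"image") -> bytes:
    """Wrap one encoded JPEG frame as a multipart/x-mixed-replace part."""
    return (b"--" + boundary + b"\r\n"
            b"Content-Type: image/jpeg\r\n\r\n"
            + jpeg_bytes + b"\r\n")
```

A Flask Response with mimetype "multipart/x-mixed-replace; boundary=image" can then yield such parts from a generator, which is exactly what the adapted detect.py below does.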

After fixing a few Flask issues and making some additional installations, my virtual environment now contains the following:

(tflite) mendel@elusive-jet:~/DevBoard$ pip3 freeze
absl-py==1.1.0
cffi==1.15.0
click==8.1.3
Flask==2.1.2
flatbuffers==1.12
importlib-metadata==4.11.4
itsdangerous==2.1.2
Jinja2==3.1.2
MarkupSafe==2.1.1
numpy==1.21.6
opencv-python==4.5.3.56
pkg_resources==0.0.0
protobuf==3.20.1
pybind11==2.9.2
pycparser==2.21
sounddevice==0.4.4
tflite-support==0.4.1
typing_extensions==4.2.0
Werkzeug==2.1.2
zipp==3.8.0

The next step was to modify the detect.py script, adapting it to work with Flask.

"""Main script to run the object detection routine."""
from flask import Flask
from flask import render_template
from flask import Response

import argparse
import sys
import time

import cv2
from tflite_support.task import core
from tflite_support.task import processor
from tflite_support.task import vision
import utils

app = Flask(__name__)

def run(model: str, camera_id: int, width: int, height: int, num_threads: int,
        enable_edgetpu: bool) -> None:
  """Continuously run inference on images acquired from the camera.

  Args:
    model: Name of the TFLite object detection model.
    camera_id: The camera id to be passed to OpenCV.
    width: The width of the frame captured from the camera.
    height: The height of the frame captured from the camera.
    num_threads: The number of CPU threads to run the model.
    enable_edgetpu: True/False whether the model is an EdgeTPU model.
  """

  # Variables to calculate FPS
  counter, fps = 0, 0
  start_time = time.time()

  # Start capturing video input from the camera
  cap = cv2.VideoCapture(camera_id)
  cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
  cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)

  # Visualization parameters
  row_size = 20  # pixels
  left_margin = 24  # pixels
  text_color = (0, 0, 255)  # red
  font_size = 1
  font_thickness = 1
  fps_avg_frame_count = 10

  # Initialize the object detection model
  base_options = core.BaseOptions(
      file_name=model, use_coral=enable_edgetpu, num_threads=num_threads)
  detection_options = processor.DetectionOptions(
      max_results=3, score_threshold=0.3)
  options = vision.ObjectDetectorOptions(
      base_options=base_options, detection_options=detection_options)
  detector = vision.ObjectDetector.create_from_options(options)

  # Continuously capture images from the camera and run inference
  while cap.isOpened():
    success, image = cap.read()
    if not success:
      sys.exit(
          'ERROR: Unable to read from webcam. Please verify your webcam settings.'
      )

    counter += 1
    image = cv2.flip(image, 1)

    # Convert the image from BGR to RGB as required by the TFLite model.
    rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

    # Create a TensorImage object from the RGB image.
    input_tensor = vision.TensorImage.create_from_array(rgb_image)

    # Run object detection estimation using the model.
    detection_result = detector.detect(input_tensor)

    # Draw keypoints and edges on the input image
    image = utils.visualize(image, detection_result)

    # Calculate the FPS
    if counter % fps_avg_frame_count == 0:
      end_time = time.time()
      fps = fps_avg_frame_count / (end_time - start_time)
      start_time = time.time()

    # Show the FPS
    fps_text = 'FPS = {:.1f}'.format(fps)
    text_location = (left_margin, row_size)
    image = cv2.putText(image, fps_text, text_location, cv2.FONT_HERSHEY_PLAIN,
                        font_size, text_color, font_thickness)

    # Yield the result as a JPEG frame for the Flask web stream
    (flag, encodedImage) = cv2.imencode(".jpg", image)
    if not flag:
        continue
    yield (b'--image\r\n' b'Content-Type: image/jpeg\r\n\r\n' +
           bytearray(encodedImage) + b'\r\n')

    """# Stop the program if the ESC key is pressed.
    if cv2.waitKey(1) == 27:
      break
    cv2.imshow('object_detector', image)
    """
  cap.release()
  #cv2.destroyAllWindows()
     
# FLASK CODE
@app.route("/")
def index():
     return render_template("index.html")
     
@app.route("/video_feed")
def video_feed():
     return Response(run('efficientdet_lite0_edgetpu.tflite',1,640,480,4,True),
          mimetype = "multipart/x-mixed-replace; boundary=image")
          

if __name__ == '__main__':
  app.debug = True
  app.run(host="0.0.0.0")  # accessible from all addresses

But after several attempts, I got the following error:

192.168.1.85 - - [20/Jun/2022 20:28:29] "GET /video_feed HTTP/1.1" 200 -
 * Detected change in '/home/mendel/DevBoard/detect.py', reloading
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 135-396-847
192.168.1.85 - - [20/Jun/2022 20:52:48] "GET / HTTP/1.1" 200 -
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
  File "/home/mendel/tflite/lib/python3.7/site-packages/werkzeug/wsgi.py", line 462, in __next__
    return self._next()
  File "/home/mendel/tflite/lib/python3.7/site-packages/werkzeug/wrappers/response.py", line 50, in _iter_encoded
    for item in iterable:
  File "/home/mendel/DevBoard/detect.py", line 68, in run
    detector = vision.ObjectDetector.create_from_options(options)
  File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/vision/object_detector.py", line 83, in create_from_options
    options.base_options.to_pb2(), options.detection_options.to_pb2())
TypeError: create_from_options(): incompatible function arguments. The following argument types are supported:
    1. (arg0: tflite::python::task::core::BaseOptions, arg1: tflite::task::processor::DetectionOptions) -> tensorflow_lite_support.python.task.vision.pybinds._pywrap_object_detector.ObjectDetector

Invoked with: <MagicMock name='mock.do_not_generate_docs()()' id='281473195160968'>, <MagicMock name='mock.do_not_generate_docs()()' id='281473195160968'>
192.168.1.85 - - [20/Jun/2022 20:52:48] "GET /video_feed HTTP/1.1" 200 -

Could you please help me figure out what I should try in order to correct this error?

My Coral Dev Board version is:

(tflite) mendel@elusive-jet:~/DevBoard$ cat /etc/os-release
PRETTY_NAME="Mendel GNU/Linux 5 (Eagle)"
NAME="Mendel GNU/Linux"
ID=mendel
ID_LIKE=debian
HOME_URL="https://coral.ai/"
SUPPORT_URL="https://coral.ai/"
BUG_REPORT_URL="https://coral.ai/"
VERSION_CODENAME="eagle"

Thank you very much, greetings from Peru

Hi @MikiVera

There’s a bug in tflite-support v0.4.1 that causes this issue. Please run pip install tflite-support==0.4.0, or update your requirements.txt file accordingly, to install the older version of the library for the time being.
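Just as an illustration (this helper is my own sketch, not part of the library), a small startup guard can fail fast when the known-bad release is installed:

```python
def is_known_bad_release(version: str) -> bool:
    """Return True for the tflite-support release with broken ObjectDetector bindings."""
    return tuple(int(p) for p in version.split(".")[:3]) == (0, 4, 1)
```

You could check the value returned by importlib.metadata.version("tflite-support") with this helper at startup and print a hint to pin tflite-support==0.4.0 before the Task library is used.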

We’ll release v0.4.2, which will fix the bug, in the coming weeks.



Dear @khanhlvg, I’m very grateful for your support.

I did as recommended, but my script then raised the following error:

(tflite) mendel@elusive-jet:~/tflite$ python3 detect.py
python3: can't open file 'detect.py': [Errno 2] No such file or directory
(tflite) mendel@elusive-jet:~/tflite$ cd
(tflite) mendel@elusive-jet:~$ cd DevBoard
(tflite) mendel@elusive-jet:~/DevBoard$ python3 detect.py
Traceback (most recent call last):
  File "detect.py", line 24, in <module>
    from tflite_support.task import core
  File "/home/mendel/tflite/lib/python3.7/site-packages/tflite_support/__init__.py", line 53, in <module>
    from tflite_support import task
  File "/home/mendel/tflite/lib/python3.7/site-packages/tflite_support/task/__init__.py", line 28, in <module>
    from . import audio
  File "/home/mendel/tflite/lib/python3.7/site-packages/tflite_support/task/audio/__init__.py", line 20, in <module>
    from tensorflow_lite_support.python.task.audio import audio_classifier
  File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/audio/audio_classifier.py", line 18, in <module>
    from tensorflow_lite_support.python.task.audio.core import audio_record
  File "/home/mendel/tflite/lib/python3.7/site-packages/tensorflow_lite_support/python/task/audio/core/audio_record.py", line 17, in <module>
    import sounddevice as sd
  File "/home/mendel/tflite/lib/python3.7/site-packages/sounddevice.py", line 71, in <module>
    raise OSError('PortAudio library not found')
OSError: PortAudio library not found

I proceeded to install the missing audio library:

(tflite) mendel@elusive-jet:~/DevBoard$ sudo apt-get install libportaudio2
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  libportaudio2
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 61.2 kB of archives.
After this operation, 211 kB of additional disk space will be used.
Get:1 https://mendel-linux.org/apt/eagle eagle/main arm64 libportaudio2 arm64 19.6.0-1+deb10u1 [61.2 kB]
Fetched 61.2 kB in 1s (66.0 kB/s)
Selecting previously unselected package libportaudio2:arm64.
(Reading database ... 50015 files and directories currently installed.)
Preparing to unpack .../libportaudio2_19.6.0-1+deb10u1_arm64.deb ...
Unpacking libportaudio2:arm64 (19.6.0-1+deb10u1) ...
Setting up libportaudio2:arm64 (19.6.0-1+deb10u1) ...
Processing triggers for libc-bin (2.28-10) ...

I ran my script again, and it started with no problems:

(tflite) mendel@elusive-jet:~/DevBoard$ python detect.py
 * Serving Flask app 'detect' (lazy loading)
 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on all addresses (0.0.0.0)
   WARNING: This is a development server. Do not use it in a production deployment.
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.1.82:5000 (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger PIN: 301-658-436
192.168.1.85 - - [22/Jun/2022 18:33:39] "GET / HTTP/1.1" 200 -
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
192.168.1.85 - - [22/Jun/2022 18:33:42] "GET /video_feed HTTP/1.1" 200 -

I attach results with the ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite and efficientdet_lite0_edgetpu.tflite models shown in the Flask web viewer, since the Coral Dev Board does not have a graphical environment (and installing one is not recommended, so as not to impact its performance).


Now I will move on to the next level of the tutorial.

I confess that I have a Coral USB Accelerator but didn’t know how to use it; the Coral page is a bit complex. As soon as I saw your videos and tried things out, I was encouraged to work with the Dev Board, since the Coral USB is sold out and I don’t want to damage mine. I’m very grateful for your contribution and for motivating us to work with TensorFlow Lite (it’s better than the libraries on the Coral site). You are a great person and an excellent teacher.

Awesome! I’m glad that you were able to make the code work. Using Flask to stream the detection result via a web server is a very nice workaround for the Coral Dev Board. :)
