Raspberry Pi Zero suitable for TensorFlow Lite? “Illegal instruction” error

I finally managed to install TensorFlow Lite, including OpenCV, on my Raspberry Pi Zero W (v1.1) with Bullseye OS and Python 3. But when I try to run the example recommended on this site for the Raspberry Pi (classify.py), and also when I try to run the example label_image.py, I get the error message “illegal instruction”, with no further details.

Through print statements in the Python files, I managed to narrow it down to the statement
“from tflite_runtime.interpreter import Interpreter” (in the file image_classifier.py). That is where the error occurs.

When I type this import statement manually at a Python prompt, I get the same error message.

I have a suspicion that the Pi Zero is just not suitable for running TF Lite due to its limited processor capabilities. Is that correct? Do I need a more powerful Raspberry Pi?

Or is there another explanation (or even better: a solution!)? How can I debug this further?
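For anyone trying to reproduce this: Python’s built-in faulthandler module prints a traceback when the process receives SIGILL, which gives at least a little more detail than the bare “illegal instruction” message. A minimal sketch, nothing tflite-specific beyond the failing import itself:

import faulthandler
faulthandler.enable()  # dump a traceback on SIGILL/SIGSEGV instead of a silent crash

# this is the statement that kills the process on my setup:
from tflite_runtime.interpreter import Interpreter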

In the end, my goal is to do some object recognition to control the direction of a small robot.

Thanks

Some additional info:

“import tflite_runtime” does not trigger an error.
“import tflite_runtime.interpreter” does trigger the same error as reported in the original post above.

So, after some further research, it looks to be an error related to namespaces (but I am not an expert in how to use import statements).
Could it be because I installed OpenCV separately before running the TensorFlow Lite installation instructions (which also seem to install OpenCV in the setup.sh shell script)?
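One way to separate a namespace problem from a native-code crash is to locate the compiled wrapper without executing it; importlib.util.find_spec only imports the parent package, which we already know loads fine. A small sketch:

import importlib.util
spec = importlib.util.find_spec("tflite_runtime._pywrap_tensorflow_interpreter_wrapper")
print(spec)  # None would suggest a namespace/path problem;
             # a real file path means the crash happens while loading the .so itself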

I’ll start from scratch (again; I’ve already spent 3 days …), but I hope someone can shed some light on this issue.

I narrowed it down further, and now I cannot make any more progress:

The statement causing the problem is:
from tflite_runtime import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper

in the file interpreter.py, within the folder
/usr/lib/python3/dist-packages/tflite_runtime

Within that same folder I see 4 files starting with _pywrap as follows:
-rw-r--r-- 1 root root 2870104 Jul 26 20:14 _pywrap_tensorflow_interpreter_wrapper.cpython-36m-arm-linux-gnueabihf.so
-rw-r--r-- 1 root root 2862660 Jul 26 20:14 _pywrap_tensorflow_interpreter_wrapper.cpython-37m-arm-linux-gnueabihf.so
-rw-r--r-- 1 root root 2928464 Jul 26 20:14 _pywrap_tensorflow_interpreter_wrapper.cpython-38-arm-linux-gnueabihf.so
-rw-r--r-- 1 root root 2859152 Jul 26 20:14 _pywrap_tensorflow_interpreter_wrapper.cpython-39-arm-linux-gnueabihf.so
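Since there are four ABI-tagged copies of the wrapper, one sanity check (a sketch, not specific to tflite) is to confirm which suffix the running interpreter will actually load:

import sys
import importlib.machinery
print(sys.version.split()[0])                  # Bullseye ships Python 3.9.x
print(importlib.machinery.EXTENSION_SUFFIXES)  # should include '.cpython-39-arm-linux-gnueabihf.so',
                                               # i.e. the last of the four files listed above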

Does anyone know why this is failing?

Have you tried to follow the official build guide?

I’ll give it a try. So far I have followed these instructions, which are supposed to be “great for embedded devices like the Raspberry Pi”.

Is that build guide a replacement for the quickstart instructions, or is it something that needs to be done in addition to the quickstart setup?

I don’t know if the pip wheel is compiled for the Pi Zero,
so it is better that you try to build from sources.
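One quick way to check for that mismatch (the Zero’s BCM2835 is an ARMv6 core, and a wheel compiled for ARMv7 contains instructions that ARMv6 cannot execute, which is exactly what “Illegal instruction” means):

import platform
print(platform.machine())  # a Pi Zero W reports 'armv6l'; most prebuilt
                           # tflite_runtime wheels target 'armv7l'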

PTAL @Thai_Nguyen @yyoon @xhae :+1:

The thread is also at:

https://tensorflow-prod.ospodiscourse.com/t/instructions-for-cmake-on-raspberry-pi-zero-are-inaccurate/6610

I have the same problem with a Raspberry Pi Zero W running the Buster OS.

This allows for running TF Lite models on a Raspberry Pi Zero using the TensorFlow Lite Micro (TFLM) interpreter.

This provides the Python package tflite_micro_runtime, which uses the same API as tflite_runtime. The main difference is that tflite_micro_runtime uses the TensorFlow Lite Micro interpreter instead of the TensorFlow Lite interpreter.

Using the TensorFlow Lite Micro (TFLM) interpreter provides a ~8x improvement in inference time.
TFLM provides a speedup because it uses the ARM CMSIS-NN library, which is optimized for the ARMv6 processor that the RPI0 uses. The RPI0’s ARMv6 core lacks the NEON SIMD unit and the other hardware acceleration that tflite_runtime’s optimized kernels rely on, so it cannot leverage any of the features that come with the tflite_runtime library. Thus the tflite_micro_runtime library is faster on the RPI0, but not on the other Raspberry Pis that do feature that hardware.
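Given the stated API parity, swapping it in should look roughly like this. A sketch only: it assumes the import path tflite_micro_runtime.interpreter really mirrors tflite_runtime, and model.tflite is a placeholder for your own model file.

import numpy as np
from tflite_micro_runtime.interpreter import Interpreter  # assumed drop-in for tflite_runtime

interpreter = Interpreter(model_path="model.tflite")  # placeholder model path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# run one inference on a dummy input of the right shape and dtype
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))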