Instructions for Cmake on Raspberry Pi Zero are inaccurate

The instructions for Tensorflow Lite cross compilation for ARM are inaccurate (contain errors/omissions, as far as I can see), as follows:

  1. In the PREREQUISITES section it states "You need CMake installed and downloaded TensorFlow source code. Please check Build TensorFlow Lite with CMake page for the details."
    To be more accurate, you need to execute steps 1 and 2 shown on that referenced page. Step 3 also needs to be done, but not yet.

  2. In the section "Build for Raspberry Pi Zero (ARMv6)", under the instructions for "download toolchain", a second line should be added that reads "mkdir -p ${HOME}/toolchains". This line is shown in the instructions for the other builds but not in those for the Raspberry Pi Zero. Without creating this directory, the next statements will fail.
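For reference, a minimal sketch of the corrected sequence (the toolchain download URL is whatever the official page lists; it is deliberately left out here):

```shell
# The missing step: create the target directory first, otherwise the
# download/extract statements that follow will fail.
mkdir -p ${HOME}/toolchains
ls -d ${HOME}/toolchains    # confirms the directory now exists
# then download/extract the toolchain into it, e.g.:
# curl -L <toolchain-tarball-url> | tar xz -C ${HOME}/toolchains
```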

  3. After installing the toolchains, but before running cmake, you need to execute step 3 of the PREREQUISITES, namely:
    mkdir tflite_build
    cd tflite_build
    since the cmake statement needs to be run from that directory.

  4. If you then execute the cmake statement as shown it will fail, throwing errors as follows:

4a) The first error is "CMake Error: The source directory '/home/tensorflow/lite' does not exist." This is because the directory should be /home/tensorflow_src/tensorflow/lite instead. You should add "/tensorflow_src" to the source path in the cmake statement, changing ../tensorflow/lite to ../tensorflow_src/tensorflow/lite.

4b) Running it again then throws the following error (not the complete text shown):
-- The C compiler identification is unknown
-- The CXX compiler identification is unknown
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - failed
-- Check for working C compiler: /home/pi/toolchains/arm-rpi-linux-gnueabihf/x64-gcc-6.5.0/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc
-- Check for working C compiler: /home/pi/toolchains/arm-rpi-linux-gnueabihf/x64-gcc-6.5.0/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc - broken
CMake Error at /usr/share/cmake-3.18/Modules/CMakeTestCCompiler.cmake:66 (message):
The C compiler
"/home/pi/toolchains/arm-rpi-linux-gnueabihf/x64-gcc-6.5.0/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc"
is not able to compile a simple test program. (etc., rest of text not copied.)

I was not sure how to correct this, but I removed the ${ARMCC_PREFIX} for the gcc and g++ compilers in the cmake statement, just using the compilers already installed. At least it then finds the compilers and starts the process. (If I change it to use 8.3.0 instead of 6.5.0 it also fails.)
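One quick way to see whether the downloaded toolchain is even usable on the machine doing the build is to try running it directly (path copied from the CMake error output above). The "x64" in the directory name suggests these are compiler binaries built to run on an x86-64 host, which would make CMake report the compiler as broken when the build is attempted on the Pi Zero itself:

```shell
# Sanity check: can the cross compiler execute on this machine?
# (path taken from the error messages in this thread)
CC=${HOME}/toolchains/arm-rpi-linux-gnueabihf/x64-gcc-6.5.0/arm-rpi-linux-gnueabihf/bin/arm-rpi-linux-gnueabihf-gcc
"$CC" --version || echo "toolchain cannot run on this host"
```

If it prints a version string, the toolchain runs; if not, CMake's "compiler is broken" error is about the host architecture, not about your cmake invocation.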

4c) Cmake gives a lot of positive messages about progress but then at the end reports these errors:
-- Looking for a Fortran compiler
-- Looking for a Fortran compiler - NOTFOUND
-- Could NOT find CLANG_FORMAT: Found unsuitable version "0.0", but required is exact version "9" (found CLANG_FORMAT_EXECUTABLE-NOTFOUND)

So I installed a Fortran compiler with "sudo apt-get install gfortran". This is mentioned nowhere as a prerequisite.

It still reports failures on the following points; I am not sure whether this is due to changing the compiler path or anything else, or how to solve it. All other messages are positive.
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Performing Test COMPILER_SUPPORT_Wshorten64to32 - Failed
-- Performing Test COMPILER_SUPPORT_Wenumconversion - Failed
-- Performing Test COMPILER_SUPPORT_Wcpp11extensions - Failed
-- Performing Test COMPILER_SUPPORT_wd981 - Failed
-- Performing Test COMPILER_SUPPORT_wd2304 - Failed

4d) Running cmake again now still reports:
-- Could NOT find CLANG_FORMAT: Found unsuitable version "0.0", but required is exact version "9" (found CLANG_FORMAT_EXECUTABLE-NOTFOUND)
while the other messages all look positive. (except for the tests shown at 4c above)

I don’t know whether this message about clang-format is critical and I don’t know what to do to solve it.

Also I have not tested running tensorflow yet after these steps.

I am running a Raspberry Pi Zero W v1.1 with updated and upgraded Bullseye OS (Dec 17, 2021).

So to summarise, the final cmake statement I used was as follows (running from the tflite_build directory):
cmake -DCMAKE_C_COMPILER=gcc -DCMAKE_CXX_COMPILER=g++ -DCMAKE_C_FLAGS="${ARMCC_FLAGS}" -DCMAKE_CXX_FLAGS="${ARMCC_FLAGS}" -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_SYSTEM_PROCESSOR=armv6 -DTFLITE_ENABLE_XNNPACK=OFF ../tensorflow_src/tensorflow/lite/

  5. For clarity, it could be added that step 5 or 6 of the page referenced in the PREREQUISITES should then be executed.

Thanks for the feedback @Willem, looping in @xhae

Also looping in @yyoon and @Thai_Nguyen

Just to let you know I interrupted the build process. After approx. 12 hours it did not get past 17%, and it showed no more progress in the last 10 hours. So I think I have tried all kinds of ways to get Tensorflow working. I'll probably have to make do with the object recognition capabilities of OpenCV and skip using Tensorflow.

I just realized something… these instructions are made for cross compilation for a Raspberry Pi Zero, not for compiling on a Raspberry Pi Zero? Oops.

So better use a Linux installation on my PC to do this? And move it over to the Raspberry afterwards using docker?

Yes, it is cross-compilation on the Linux host.

You can prepare the installable wheel on the Linux host for the rpi0:

O.K., please check and comment on my plan of approach before I spend another few days getting nowhere…

So far, I have a PC just running Windows 10 or 11, and a few Raspberry Pi Zeros running Buster or Bullseye. My goal is to play around with the tflite_runtime on the Raspberry Pi Zero to control a robot. (Alternatively I will do the same just using OpenCV.)

Having tried and failed with the Python quickstart (probably because it is for other Pis, but not for the Zero) and having tried the compilation on the Zero itself (cancelled because it got stuck), I now want to try the Python wheel approach.

But I am new to using Docker. Did some reading…

The plan:

  1. install WSL2 on my PC with Ubuntu distribution.
    Q1: or should I use a multi-boot setup Windows/Ubuntu instead of WSL2?
    Q2: or should I use Debian instead of Ubuntu within WSL2?
  2. Load and start the tensorflow:devel docker image within the Ubuntu WSL environment (is this step required?)
  3. Install cmake and download the Tensorflow source code (i.e. step 1 and 2 of the Cmake overview explanation) within the docker image
  4. Run the command:
    tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh rpi0
    (since I have Python 3.9 within Bullseye on a Raspberry Pi Zero with ARMv6)

Or, instead of using Piwheels, should I still do the ARM cross compilation using Cmake instead of step 4 above?

And what happens after the above? How do I then move the result onto the Pi Zero?

Thanks for your assistance.

You will have an output wheel package that could be installed on your Raspberry with pip install
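As a hedged sketch of that last step (the exact wheel filename depends on the build; the one below follows the armv6/Python 3.9 pattern produced later in this thread):

```shell
# On the Raspberry Pi Zero, after copying the wheel over:
WHEEL=tflite_runtime-2.8.0-cp39-cp39-linux_armv6l.whl   # illustrative name
echo "pip install ${WHEEL}"
# afterwards, in Python: from tflite_runtime.interpreter import Interpreter
```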

First thing I tried now is run the cmake again but now within the docker container on Ubuntu within WSL2.

It still required the corrections as mentioned in my first post above:

  1. As indeed mentioned on the site, cmake needs to be installed, but now step 2 can be skipped since the tensorflow_src is already part of the image (as mentioned on the site as well)
  2. the mkdir -p ${HOME}/toolchains command needs to be added to the instructions for the toolchain
  3. step 3 needs to be done
    4a) the tensorflow_src directory needs to be added to the path in the cmake command
    4b) this time there are no errors on the gcc and g++ compiler (so the ARMCC_PREFIX in the cmake command can be left as-is)
    4c) the fortran compiler needs to be installed as a prerequisite
    4d) the cmake runs with similar positive and negative messages as before
    Here is the output (with some of the positive messages replaced by etc etc)
    cmake -DCMAKE_C_COMPILER=${ARMCC_PREFIX}gcc -DCMAKE_CXX_COMPILER=${ARMCC_PREFIX}g++ -DCMAKE_C_FLAGS="${ARMCC_FLAGS}" -DCMAKE_CXX_FLAGS="${ARMCC_FLAGS}" -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON -DCMAKE_SYSTEM_NAME=Linux -DCMAKE_SYSTEM_PROCESSOR=armv6 -DTFLITE_ENABLE_XNNPACK=OFF ../tensorflow_src/tensorflow/lite
    -- Setting build type to Release, for debug builds use '-DCMAKE_BUILD_TYPE=Debug'.
    -- The C compiler identification is GNU 6.5.0
    -- The CXX compiler identification is GNU 6.5.0
    -- etc etc
    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
    -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
    -- Looking for pthread_create in pthreads
    -- Looking for pthread_create in pthreads - not found
    -- etc etc
    -- Performing Test COMPILER_SUPPORT_Wshorten64to32 - Failed
    -- etc etc
    -- Performing Test COMPILER_SUPPORT_Wenumconversion - Failed
    -- Performing Test COMPILER_SUPPORT_Wcpp11extensions
    -- Performing Test COMPILER_SUPPORT_Wcpp11extensions - Failed
    -- etc etc
    -- Performing Test COMPILER_SUPPORT_wd981
    -- Performing Test COMPILER_SUPPORT_wd981 - Failed
    -- Performing Test COMPILER_SUPPORT_wd2304
    -- Performing Test COMPILER_SUPPORT_wd2304 - Failed
    -- Performing Test COMPILER_SUPPORT_OPENMP
    -- etc etc
    -- Could NOT find CLANG_FORMAT: Found unsuitable version "10.0.0", but required is exact version "9" (found /usr/bin/clang-format)
    -- etc etc
    -- Build files have been written to: /tflite_build

Running step 6 (the cmake build command) after this goes well until 79%. Then it starts throwing a lot of internal compiler errors. Here is a small sample:

make[3]: *** [CMakeFiles/tensorflow-lite.dir/build.make:872: CMakeFiles/tensorflow-lite.dir/kernels/div.cc.o] Error 4
arm-rpi-linux-gnueabihf-g++: internal compiler error: Killed (program cc1plus)
Please submit a full bug report,
with preprocessed source if appropriate.
See http://gcc.gnu.org/bugs.html for instructions.
make[3]: *** [CMakeFiles/tensorflow-lite.dir/build.make:1041: CMakeFiles/tensorflow-lite.dir/kernels/gather.cc.o] Error 4
make[3]: Leaving directory '/tflite_build'
make[2]: *** [CMakeFiles/Makefile2:1258: CMakeFiles/tensorflow-lite.dir/all] Error 2
make[2]: Leaving directory '/tflite_build'
make[1]: *** [CMakeFiles/Makefile2:5203: examples/label_image/CMakeFiles/label_image.dir/rule] Error 2
make[1]: Leaving directory '/tflite_build'
make: *** [Makefile:1710: label_image] Error 2
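For what it's worth, "internal compiler error: Killed (program cc1plus)" usually means the kernel's OOM killer terminated the compiler for lack of memory, not that the compiler itself is buggy; reducing build parallelism (and/or adding swap) is the common workaround. A sketch, assuming the build is driven from the tflite_build directory:

```shell
# Limit parallel compile jobs so multiple g++ instances don't exhaust RAM.
JOBS=1                                  # conservative on low-memory hosts
echo "building with ${JOBS} parallel job(s)"
# cmake --build . -j "${JOBS}"          # instead of an unbounded parallel build
```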

I realise this feedback is a mix of “site feedback” and “user experience”. I hope it is still useful.

Now I will move on to the python wheel approach…

Now I did the python wheel approach, as follows:

  1. start Ubuntu within WSL2
  2. docker run --name wdtest -it tensorflow/tensorflow:devel
  3. sudo apt-get install cmake
  4. sudo apt-get install gfortran (just to be sure…)
  5. cd tensorflow_src
  6. tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh rpi0

and get the following:

WORKSPACE: /tensorflow_src
CI_DOCKER_BUILD_EXTRA_PARAMS:
CI_DOCKER_EXTRA_PARAMS:
COMMAND: tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh rpi0
CI_COMMAND_PREFIX: ./tensorflow/tools/ci_build/builds/with_the_same_user ./tensorflow/tools/ci_build/builds/configured pi-python39
CONTAINER_TYPE: pi-python39
BUILD_TAG: tf_ci
(docker container name will be tf_ci.pi-python39)

Building container (tf_ci.pi-python39)…
tensorflow/tools/ci_build/ci_build.sh: line 145: docker: command not found
ERROR: docker build failed. Dockerfile is at /tensorflow_src/tensorflow/tools/ci_build/Dockerfile.pi-python39

It is trying to build a docker container within a docker container? I will now try it without step 2 (docker run) above and adding the tensorflow_src download (clone).

So now I tried it without running the docker first, as follows:

  1. start Ubuntu within WSL2
  2. sudo apt-get install cmake
  3. sudo apt-get install gfortran (just to be sure…)
  4. git clone https://github.com/tensorflow/tensorflow.git tensorflow_src (i.e. step 2 of the prerequisite instructions, "Clone TensorFlow repository")
  5. cd tensorflow_src
  6. tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh rpi0

Now the process starts running with a lot of progress messages, but:

  1. the cmake command still throws the same warnings/errors as reported in point 4c of my first post above. Nevertheless the build after that still continues
  2. the build command progresses up to 98% (Building CXX object CMakeFiles/tensorflow-lite.dir/simple_planner.cc.o) and then after 4 more long g++ commands waits some time, before throwing the error message:

ERRO[0169] error waiting for container: invalid character 'u' looking for beginning of value

Note it did create a container called tf_ci.pi-python39

I think I have exhausted all options now and am giving up until there is some feedback to solve this.

I managed to transfer the docker container that was created in the above process to the raspberry pi zero (using save to tar, ftp and load from tar) and get the following message when I try to run it on the raspberry:

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm/v6) and no specific platform was requested
standard_init_linux.go:228: exec user process caused: exec format error

and when I inspect the docker either on the Raspberry Pi or in the Ubuntu environment on the PC it shows the linux OS and amd64 architecture.
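The transfer described above can be sketched as follows (image and file names are the ones used in this thread). Note that, as the warning says, an image built for linux/amd64 cannot execute on the Pi's linux/arm/v6 CPU no matter how it is transferred; only an image actually built for arm/v6 would run there:

```shell
IMAGE=tf_ci.pi-python39
TAR=tf_ci.tar
# On the build host:
echo "docker save -o ${TAR} ${IMAGE}"
# Copy the tar file to the Pi (scp/ftp), then on the Pi:
echo "docker load -i ${TAR}"
```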

Are you running tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh rpi0 inside a container or on the host?

First I tried running it within a container (step 2 was "docker run --name wdtest -it tensorflow/tensorflow:devel"; steps 3 to 6 were within the container). After that had failed (probably because it tried creating a container within a container and the docker command was not installed in the container), I ran it directly on the host (Ubuntu within WSL), after first downloading the required source code. It then created a container as a result.

Since this process should have resulted in a pip wheel, I also inspected the contents of the container that was created (tf_ci.pi-python39) for any wheel files. This is what was found:
/root/.cache/pip/wheels/98/23/68/efe259aaca055e93b08e74fbe512819c69a2155c11ba3c0f10/wrapt-1.12.1-cp39-cp39-linux_x86_64.whl
/root/.cache/pip/wheels/a9/33/c2/bcf6550ff9c95f699d7b2f261c8520b42b7f7c33b6e6920e29/py_cpuinfo-8.0.0-py3-none-any.whl
/root/.cache/pip/wheels/b6/0d/90/0d1bbd99855f99cb2f6c2e5ff96f8023fad8ec367695f7d72d/termcolor-1.1.0-py3-none-any.whl
/root/.cache/pip/wheels/2f/a0/d3/4030d9f80e6b3be787f19fc911b8e7aa462986a40ab1e4bb94/future-0.18.2-py3-none-any.whl
/root/.cache/pip/wheels/fa/17/1f/332799f975d1b2d7f9b3f33bbccf65031e794717d24432caee/typing-3.7.4.3-py3-none-any.whl
/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl
/usr/share/python-wheels/pip-8.1.1-py2.py3-none-any.whl
/usr/share/python-wheels/colorama-0.3.7-py2.py3-none-any.whl
/usr/share/python-wheels/setuptools-20.7.0-py2.py3-none-any.whl
/usr/share/python-wheels/lockfile-0.12.2-py2.py3-none-any.whl
/usr/share/python-wheels/wheel-0.29.0-py2.py3-none-any.whl
/usr/share/python-wheels/pyparsing-2.0.3-py2.py3-none-any.whl
/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl
/usr/share/python-wheels/retrying-1.3.3-py2.py3-none-any.whl
/usr/share/python-wheels/html5lib-0.999-py2.py3-none-any.whl
/usr/share/python-wheels/distlib-0.2.2-py2.py3-none-any.whl
/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl
/usr/share/python-wheels/six-1.10.0-py2.py3-none-any.whl
/usr/share/python-wheels/packaging-16.6-py2.py3-none-any.whl
/usr/share/python-wheels/progress-1.2-py2.py3-none-any.whl
/usr/share/python-wheels/ipaddress-0.0.0-py2.py3-none-any.whl
/usr/share/python-wheels/chardet-2.3.0-py2.py3-none-any.whl
/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl
/usr/local/lib/python3.9/dist-packages/virtualenv/seed/wheels/embed/pip-20.3.4-py2.py3-none-any.whl
/usr/local/lib/python3.9/dist-packages/virtualenv/seed/wheels/embed/pip-21.3.1-py3-none-any.whl
/usr/local/lib/python3.9/dist-packages/virtualenv/seed/wheels/embed/setuptools-44.1.1-py2.py3-none-any.whl
/usr/local/lib/python3.9/dist-packages/virtualenv/seed/wheels/embed/setuptools-50.3.2-py3-none-any.whl
/usr/local/lib/python3.9/dist-packages/virtualenv/seed/wheels/embed/setuptools-58.3.0-py3-none-any.whl
/usr/local/lib/python3.9/dist-packages/virtualenv/seed/wheels/embed/wheel-0.37.0-py2.py3-none-any.whl

Have you tried to install docker in the wsl2 Linux os?

P.s.
What I meant is that build_pip_package_with_cmake.sh is for a Linux host with docker.

Yes

I have tried:

  1. the Python quickstart guide for Raspberry PI in different ways as described here and here, both with pre-installing OpenCV and without.
    After installation this is failing on the statement "from tflite_runtime import _pywrap_tensorflow_interpreter_wrapper as _interpreter_wrapper" as explained in my other post.
  2. the Cmake compilation on the Raspberry PI. I interrupted the build process after 12 hours because it got stuck.
  3. the Cmake cross compilation on Ubuntu within WSL2, within the tensorflow/tensorflow:devel container. It failed after 79% as described, throwing a lot of compiler errors.
  4. the Python wheel approach within the tensorflow/tensorflow:devel container. This failed because it can’t find the docker command within that container (it tries to create a container within a container)
  5. the Python wheel approach on Ubuntu within WSL2, but without using the tensorflow/tensorflow:devel container. This gets up to 98% and then fails with the error "ERRO[0169] error waiting for container: invalid character 'u' looking for beginning of value"

So the answer to your latest question is yes. This is point 5 above. That process does create a new container called tf_ci.pi-python39 and tries to run build_pip_package_with_cmake.sh within that container. The build_pip_package starts and runs up to 98% but does not completely finish, as explained.

Are you able to successfully run docker run hello-world in your Ubuntu/WSL2?

Yes, no problem to run the hello-world docker in Ubuntu on WSL2.

And I have already described that it is no problem to install and run commands within the tensorflow/tensorflow:devel docker image, or to create a new docker image called tf_ci.pi-python39 and then run commands within it. There are just specific commands related to the tensorflow installation script that fail.

I don’t know if it is a specific WSL2 issue but with a standard linux host + docker with:

tensorflow/tools/ci_build/ci_build.sh PI-PYTHON39 tensorflow/lite/tools/pip_package/build_pip_package_with_cmake.sh rpi0 

I’ve produced the Raspberry Pi Zero wheel:

/workspace/tensorflow/lite/tools/pip_package/gen/tflite_pip/python3.9/dist/tflite_runtime-2.8.0-cp39-cp39-linux_armv6l.whl

The only problem I see is:

${BUILD_NUM_JOBS} in

As it is probably a CI-only env variable, it is not going to limit the number of parallel jobs in the compilation, exhausting host resources.

I suggest as a temporary workaround to substitute that variable with the number of available cores on your host.
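A minimal sketch of that substitution, assuming the script reads the variable as-is from the environment:

```shell
# Default BUILD_NUM_JOBS to the host's core count when the CI-only
# variable is not set, so compile parallelism is actually bounded.
BUILD_NUM_JOBS="${BUILD_NUM_JOBS:-$(nproc)}"
echo "BUILD_NUM_JOBS=${BUILD_NUM_JOBS}"
```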