CMake build Tensorflow Lite with Flex

Hi, I want to run simple conv2d layer training using the C++ API of TF Lite. I’ve used the examples/minimal project as a reference. Calling trainer->invoke() leads to the following error:

ERROR: TensorFlow Lite Error: Select TensorFlow op(s), included in the given model is(are) not supported by this interpreter. Make sure you apply/link Flex delegate before inference. For the Android, it can be resolved by adding “org.tensorflow:tensorflow-lite-select-tf-ops” dependency…

ERROR: Node number 48 (FlexConv2DBackpropFilter) failed to prepare.

I didn’t find any good tutorial on the Flex delegate. Please tell me how to build TF Lite with Flex using CMake, and how to make the interpreter use Flex’s implementation of this node. Thanks

@mymind1919 I have been trying to achieve the same thing. On macOS, I am able to link and generate a correct binary/executable using the sample CMake snippet below.

To build the required libraries:

bazel build -c opt --config=monolithic tensorflow/lite:libtensorflowlite
bazel build -c opt --config=monolithic tensorflow/lite:libtensorflowlite_flex

Assuming the source file name is inference.cpp, you then need to link these two libraries:

  1. tensorflowlite
  2. tensorflowlite_flex
add_executable(inference inference.cpp)
target_link_libraries(inference tensorflowlite tensorflowlite_flex)

That enables Flex delegate support on macOS, but the same thing doesn’t work when I try to build it for ARM.
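For reference, here is roughly what the full CMakeLists.txt looks like on my side. This is only a sketch: the TF_SRC path and the bazel-bin location are from my checkout, and the exact shared-library file names depend on the platform, so adjust them for your setup.

cmake_minimum_required(VERSION 3.16)
project(inference_example CXX)

set(CMAKE_CXX_STANDARD 17)

# Path to the TensorFlow source checkout (adjust for your machine).
set(TF_SRC "/path/to/tensorflow_src")

add_executable(inference inference.cpp)

# Headers for the TF Lite C++ API live in the source tree.
target_include_directories(inference PRIVATE ${TF_SRC})

# The two bazel commands above put their shared libraries under bazel-bin;
# both targets are in the tensorflow/lite package in this example.
target_link_directories(inference PRIVATE ${TF_SRC}/bazel-bin/tensorflow/lite)

# Link the core runtime and the Flex delegate library.
target_link_libraries(inference tensorflowlite tensorflowlite_flex)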

Can you please try the --no-as-needed linker option and let us know?
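The idea, as I understand it, is that --as-needed (the default on many Linux/ARM toolchains) drops libtensorflowlite_flex because the executable never references its symbols directly. With a GNU toolchain it could be passed through CMake roughly like this (a sketch only; the inference target name comes from the snippet above):

# --no-as-needed must appear before the libraries on the link line so the
# linker keeps libtensorflowlite_flex even though its symbols are only
# reached indirectly when the Flex delegate registers itself.
target_link_libraries(inference "-Wl,--no-as-needed" tensorflowlite tensorflowlite_flex)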

Step 1. Install CMake tool
It requires CMake 3.16 or higher. On Ubuntu, you can simply run the following command.
sudo apt-get install cmake
Or you can follow the official CMake installation guide.
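You can check that the installed version meets the 3.16 requirement with:

cmake --version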
Step 2. Clone TensorFlow repository
git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
Note: If you’re using the TensorFlow Docker image, the repo is already provided in /tensorflow_src/.
Step 3. Create CMake build directory

mkdir tflite_build
cd tflite_build
Step 4. Run CMake tool with configurations
Release build
It generates an optimized release binary by default. If you want to build for your workstation, simply run the following command.

cmake ../tensorflow_src/tensorflow/lite
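Once the configuration step finishes, you would typically build in the same directory with:

cmake --build . -j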