Add "FlexVarHandleOp" operator to tensorflow-lite interpreter (C++/Arm)

Hello TF experts!
I’m trying to run a TF-Lite model (Armv7, Linux, C++) converted from an ONNX/TF model.
I already did this for a simple DNN, but now that I’m trying an LSTM I get errors at runtime.

I have been able to convert the model to TF-Lite by following these instructions: Select TensorFlow operators | TensorFlow Lite

I have also updated the TF-Lite library build to compile the contents of the “tensorflow/lite/delegates/flex” folder, but I still get this error at runtime:
“ERROR: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
ERROR: Node number 1 (FlexVarHandleOp) failed to prepare.”

I’m not building TF-Lite with Bazel (I have a specific build environment); maybe there are some additional steps required?

Many thanks for any advice on my issue,

Kind regards,
Nicolas

Hi @nicolas_vassal

Have you specifically followed the instructions here?

Hi George,
thanks for the link.
The “tensorflow/lite/delegates/flex” source folder is added to the build via “/tensorflow_src/tensorflow/lite/tools/make/Makefile”

But I have not set the “--config=monolithic” flag. What is its exact purpose? The comment in the source is not quite clear to me.

Kind regards,
Nicolas

Could you write here the Bazel command that you use to build the TensorFlow Lite libraries via the Bazel pipeline?
If I don’t know the answer I can tag a specific person… but we have to show them what you have done already.
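For reference, the select-ops documentation builds the Flex delegate as a standalone shared library with commands along these lines (exact target names and config flags can differ between TF versions, so treat this as a sketch rather than a guaranteed recipe):

```shell
# Build libtensorflowlite_flex.so, the shared library that registers the
# Flex delegate and bundles the TF kernels needed by select TF ops.
# --config=monolithic links everything into a single self-contained
# library instead of several smaller shared objects.
bazel build -c opt --config=monolithic \
  //tensorflow/lite/delegates/flex:tensorflowlite_flex

# Cross-compile variant for 32-bit Arm (Armv7) targets:
bazel build -c opt --config=elinux_armhf --config=monolithic \
  //tensorflow/lite/delegates/flex:tensorflowlite_flex
```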

Thanks

I’m not using Bazel to build tensorflow-lite lib.
I’m running the “build_rpi_lib.sh” script from “tensorflow/lite/tools/make” folder.

I do not see that kind of instruction in the documentation. I think you should try what the link is suggesting.
(Or modify the script to additionally use “tensorflow/lite/delegates/flex:delegate”.)

Thanks for your advice, but unfortunately what I’m trying to do seems to be unsupported.
Building the tflite lib with TF operators is not supported when using the CMake build system.

It’s explained here: Build TensorFlow Lite for ARM boards

I didn’t find it at first because of my search keywords :-\ … but the only answer to my problem is to use Bazel; there are no other options.


Hi @nicolas_vassal. Have you solved this with Bazel? I also want to use TF ops in TFLite, but the build failed when using Bazel.

Hello @Leroy, I have tried to build with Bazel (without adding new ops) just to check feasibility. But it generates shared libs of over 350 MB, which is incompatible with my target.
Moreover, Bazel uses a toolchain and glibc dependency that are too “new” for my legacy environment.
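As a side note on the size issue: the TF docs describe a “selective build” that packages only the TF kernels your model actually references, which can shrink the Flex library considerably. A rough sketch, assuming the tflite_flex_shared_library macro and paths from the binary-size reduction guide (names may vary by TF version, and “tmp” is just a placeholder package):

```shell
# Hypothetical BUILD file next to your model: tflite_flex_shared_library
# generates a Flex library containing only the TF kernels referenced by
# the listed model(s), instead of the full ~350 MB kernel set.
cat > tmp/BUILD <<'EOF'
load("//tensorflow/lite:build_def.bzl", "tflite_flex_shared_library")

tflite_flex_shared_library(
    name = "tensorflowlite_flex",
    models = [":model.tflite"],
)
EOF

bazel build -c opt --config=monolithic //tmp:tensorflowlite_flex
```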

Regards,
Nicolas

@nicolas_vassal Thank you for your reply. So how did you end up handling it? Did you change the model implementation to avoid using select TF ops?