Error in reducing TensorFlow Lite binary size

Hi, I was recently trying to selectively reduce the TF delegates for a model (link here) by following Reduce TensorFlow Lite binary size. I have set up Docker using the guide here (Build TensorFlow Lite for Android), and I have no problem building fat binaries; the only issue is the TF-delegate reduction. I have tried both the tensorflow:devel and tensorflow:devel-latest containers, both resulting in the same error behavior, resembling (Server terminated abruptly (error code: 14, error message: ‘Socket closed’) · Issue #41480 · tensorflow/tensorflow · GitHub).

The steps after setting up Docker are as follows:

  • Copy my models into a folder in the container using docker cp
  • Run:
bash tensorflow/lite/tools/build_aar.sh \
  --input_models=/host_dir/smallbert_L6_H128,/host_dir/smallbert_L12_H128_mean.tflite \
  --target_archs=arm64-v8a,armeabi-v7a

The error occurs during the exact same phase, and on almost the same files if I remember correctly, over the couple of tries I did. Also, the compilation stops while the timer keeps running for another 100-200 seconds (it varies each time), with the disk being read at about 2 GB per second for the minute or so that the timer continues.

One possibility I found was low memory, but I have plenty of RAM left for the process … so :man_shrugging: .

Just one more thing I would like to mention: I have tried compiling custom binaries by editing the BUILD file. Maybe adding a specific build target for only the TF delegates I need is possible, but I have no experience with how to go about approaching it either:

tflite_jni_binary(
    name = "libtensorflowlite_jni_normal_with_gpu.so",
    linkscript = ":tflite_version_script.lds",
    deps = [
        "//tensorflow/lite/c:c_api",
        "//tensorflow/lite/c:c_api_experimental",
        # NNAPI delegate JNI bindings
        "//tensorflow/lite/delegates/nnapi/java/src/main/native",
        # XNNPACK (CPU) delegate
        "//tensorflow/lite/delegates/xnnpack:xnnpack_delegate",
        "//tensorflow/lite/java/src/main/native",
        # GPU delegate JNI bindings
        "//tensorflow/lite/delegates/gpu/java/src/main/native",
    ],
)
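For reference, my understanding (which may well be wrong) is that such a target would be built with something like the command below; the //tensorflow/lite/java package and the android_arm64 config are assumptions on my part, not something I have verified:

# Assumed invocation: the package path and the --config name are my guesses.
bazel build -c opt --config=android_arm64 \
  //tensorflow/lite/java:libtensorflowlite_jni_normal_with_gpu.so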

Any help would be appreciated.

Hi,

Maybe @Thai_Nguyen can shed some light here.


The Docker image used to build TFLite is actually tflite-builder, not tensorflow:devel or tensorflow:devel-latest.
For using Docker, I think following this section of Reduce TensorFlow Lite binary size and using the build_aar_with_docker.sh script is more convenient than build_aar.sh. Note that you need to run build_aar_with_docker.sh on the host machine (not inside a Docker container).
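For example, roughly following that doc (the model file names below are just placeholders for your own models):

# Run from the TensorFlow source root on the host; the script builds the
# tflite-builder image and runs the selective build inside it.
sh tensorflow/lite/tools/build_aar_with_docker.sh \
  --input_models=model1.tflite,model2.tflite \
  --target_archs=arm64-v8a,armeabi-v7a \
  --checkpoint=master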


Thanks for the directions. I don’t currently have a Linux OS ready to go, but I will update you today on how it goes.

Oh, I was talking about the image which the Dockerfile uses as the source to build a container locally. I changed it from tensorflow:devel and rebuilt the image to try the tensorflow:devel-latest source image. Sorry for the miscommunication.

Just an update: I thought I should mention the cause, which might help others who view this thread at a later time. The Bazel server exiting had nothing to do with the scripts I was running.

I originally created this issue on my laptop, which was running Windows with Docker for this purpose. The Bazel server exiting was not caused by a bad command but by memory constraints. My observation while doing it on Linux was that memory usage spikes while compiling the MKL library, with a delta above 1.5 GB. WSL2, which Docker uses to manage memory, outright refuses to provide it because of my limited 8 GB of RAM, causing Bazel to error out. On Linux it basically just crashed.
The solution was pretty easy: I had to compile on my desktop, which had more RAM available. Docker Hub: Sid911/tflite-builder. This is the image I uploaded for my own DevOps use; anyone who doesn’t want to build it themselves can use it instead. I will update it monthly for my own project, but that’s not guaranteed.
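For anyone who hits the same WSL2 memory ceiling and can’t switch machines, I believe (I haven’t tried this myself) you can give the WSL2 VM more memory and swap through a %UserProfile%\.wslconfig file on Windows, along these lines (the values are just examples):

[wsl2]
# Raise the RAM the WSL2 VM (and therefore Docker) is allowed to use
memory=12GB
# Extra swap gives Bazel headroom for the spike while compiling MKL
swap=16GB

Then run wsl --shutdown and restart Docker before building again.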