Build TensorFlow with TensorRT

Hi,

I am trying to build TensorFlow to use it with TensorRT. The build completes successfully, but when I try to use it I get this error: RuntimeError: Tensorflow has not been built with TensorRT support

I followed these instructions: Build from source  |  TensorFlow

I have an NVIDIA Jetson AGX Orin, with CUDA 11.4, TensorRT 8.4.1, and cuDNN 8.4.1.
I used Bazel 5.3.0.
I tried both the master branch (v2.11) and the r2.10 branch.
In ./configure I answered:

You have bazel 5.3.0- (@non-git) installed.
Please specify the location of python. [Default is /usr/bin/python3]:


Found possible Python library paths:
  /usr/lib/python3.8/dist-packages
  /usr/lib/python3/dist-packages
  /usr/local/lib/python3.8/dist-packages
Please input the desired Python library path to use.  Default is [/usr/lib/python3.8/dist-packages]

Do you wish to build TensorFlow with ROCm support? [y/N]: N
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Do you wish to build TensorFlow with TensorRT support? [y/N]: y
TensorRT support will be enabled for TensorFlow.

Found CUDA 11.4 in:
    /usr/local/cuda-11.4/targets/aarch64-linux/lib
    /usr/local/cuda-11.4/targets/aarch64-linux/include
Found cuDNN 8 in:
    /usr/lib/aarch64-linux-gnu
    /usr/include
Found TensorRT 8 in:
    /usr/lib/aarch64-linux-gnu
    /usr/include/aarch64-linux-gnu


Please specify a list of comma-separated CUDA compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus. Each capability can be specified as "x.y" or "compute_xy" to include both virtual and binary GPU code, or as "sm_xy" to only include the binary code.
Please note that each additional compute capability significantly increases your build time and binary size, and that TensorFlow only supports compute capabilities >= 3.5 [Default is: 3.5,7.0]: 8.7


Do you want to use clang as CUDA compiler? [y/N]: N
nvcc will be used as CUDA compiler.

Please specify which gcc should be used by nvcc as the host compiler. [Default is /usr/bin/gcc]:


Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]:


Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: N
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
    --config=mkl             # Build with MKL support.
    --config=mkl_aarch64     # Build with oneDNN and Compute Library for the Arm Architecture (ACL).
    --config=monolithic      # Config for mostly static monolithic build.
    --config=numa            # Build with NUMA support.
    --config=dynamic_kernels    # (Experimental) Build kernels into separate shared objects.
    --config=v1              # Build with TensorFlow 1 API instead of TF 2 API.
Preconfigured Bazel build configs to DISABLE default on features:
    --config=nogcp           # Disable GCP support.
    --config=nonccl          # Disable NVIDIA NCCL support.
Configuration finished
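
I also checked the generated .tf_configure.bazelrc to confirm the TensorRT answer was actually recorded (I'm assuming the TF_NEED_TENSORRT action_env line is what the build picks up; the exact contents may vary by TF version):

grep -i tensorrt .tf_configure.bazelrc
# on my setup I expected to see something like:
# build --action_env TF_NEED_TENSORRT="1"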

Then I ran:

bazel build --config=cuda //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
python3 -m pip install /tmp/tensorflow_pkg/tensorflow-2.10.0-cp38-cp38-linux_aarch64.whl
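
After installing the wheel I ran this quick sanity check (is_tensorrt_enabled is an internal TF helper, so treat this as a diagnostic sketch; the module path may differ between TF versions):

python3 - <<'EOF'
import tensorflow as tf
# make sure this is the freshly built wheel, not an older install shadowing it
print(tf.__version__, tf.__file__)
# internal TF-TRT helper used by trt_convert itself
from tensorflow.compiler.tf2tensorrt import _pywrap_py_utils
print(_pywrap_py_utils.is_tensorrt_enabled())
EOF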

When I try to use the function tensorflow.python.compiler.tensorrt.trt_convert.TrtGraphConverterV2, I get this error:

ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate.
ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate.
Traceback (most recent call last):
  File "convert_FP16.py", line 45, in <module>
    converter = trt.TrtGraphConverterV2(input_saved_model_dir='model/',
  File "/home/nvidia/.local/lib/python3.8/site-packages/tensorflow/python/util/deprecation.py", line 561, in new_func
    return func(*args, **kwargs)
  File "/home/nvidia/.local/lib/python3.8/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1209, in __init__
    _check_trt_version_compatibility()
  File "/home/nvidia/.local/lib/python3.8/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 223, in _check_trt_version_compatibility
    raise RuntimeError("Tensorflow has not been built with TensorRT support.")
RuntimeError: Tensorflow has not been built with TensorRT support.
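
For reference, the relevant part of my convert_FP16.py looks roughly like this (FP16 is the precision I'm targeting, hence the script name; the output directory name is just illustrative):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# convert a SavedModel with TF-TRT, targeting FP16
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='model/',
    precision_mode=trt.TrtPrecisionMode.FP16)
converter.convert()
converter.save('model_FP16/')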

Thank you for your help


Hi @Celia_MARTIN, as mentioned in this document, TensorFlow nightly builds currently include TF-TRT by default, which means you don't need to install TF-TRT separately. You can pull the latest TF containers from Docker Hub or install the latest TF pip package to get access to the latest TF-TRT. Thank you
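
For example (the exact image or wheel to use on a Jetson may differ, since NVIDIA publishes Jetson-specific l4t-tensorflow containers on NGC):

docker pull tensorflow/tensorflow:nightly-gpu
# or:
python3 -m pip install tf-nightly
# then verify that the TF-TRT converter is importable:
python3 -c "from tensorflow.python.compiler.tensorrt import trt_convert as trt; print(trt.TrtGraphConverterV2)"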