Need help understanding messages from importing TensorFlow in WSL

Hi, can someone please explain what the following messages mean and what I need to do? I'm running TensorFlow 2.15 on WSL. When I do "import tensorflow as tf", I get the following messages:

2024-03-07 23:19:05.468868: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2024-03-07 23:19:05.492451: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-03-07 23:19:05.492472: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-03-07 23:19:05.493026: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-03-07 23:19:05.497376: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-03-07 23:19:05.988818: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

Hi @kdlin1,

Take a look at this discussion: Tensorflow 2.15 Install is driving me CRAZY! - #15 by Igor_Lessio. It seems this is a very common issue. At the very least you need to install TensorRT at the system level.

I haven't figured out the three "E" (error?) lines regarding cuDNN, cuFFT, and cuBLAS, but they don't seem to make any difference to performance. My code runs anyway.

You can ignore the "cpu_feature_guard" message; it's informational only and won't go away unless you recompile TensorFlow per the instructions. As long as your GPU is working, and your code is using the GPU, you won't see much difference. That's not to say that the parts of your code that don't use the GPU couldn't run faster if your TensorFlow were optimized for your specific system. There is an environment variable you can set to suppress that informational message: look up TF_CPP_MIN_LOG_LEVEL to see if it helps you.
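For example, you can set it in the shell before launching Python (1 hides the INFO lines, 2 also hides warnings, 3 also hides errors). The oneDNN variable is the one the first log line itself suggests:

# Hide TensorFlow's INFO-level (I) messages; set before TensorFlow is imported.
export TF_CPP_MIN_LOG_LEVEL=1

# Optional: disable the oneDNN custom ops mentioned in the first message.
export TF_ENABLE_ONEDNN_OPTS=0

python3 -c "import tensorflow as tf; print(tf.__version__)"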

I found that my code would definitely crash, especially during training, if I didn't have TensorRT installed. You need to get this part of the issue resolved. Check out the [Installation Guide :: NVIDIA Deep Learning TensorRT Documentation].

Hello mate.
That list of warnings is normal for TF, especially on WSL, because of how the binary was compiled.

As for TensorRT and the new cuDNN, they are pretty easy to set up now: just a few commands in the terminal and you're done.

I install CUDA, cuDNN, and TensorRT at the system level; then, if you also need a different cuDNN or TF, you can set up extra versions inside the environment.
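For reference, on WSL Ubuntu the system-level install looks roughly like this, assuming NVIDIA's CUDA apt repository is already configured per their guide (package names are illustrative and vary by release; older cuDNN versions ship as libcudnn8 rather than cudnn):

sudo apt-get update
sudo apt-get install cuda-toolkit   # CUDA at the system level
sudo apt-get install cudnn          # cuDNN from the same repository
sudo apt-get install tensorrt       # TensorRT; pulls in matching CUDA libraries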

If your code crashes, it can be due to memory allocation, for example.
If you need help, we are here.

Thanks, Igor and delad. My code seems to work okay. Should I just ignore these messages and install TensorRT? Is the following the right command to install TensorRT?

python3 -m pip install --upgrade tensorrt

That's only part of it. Install TensorRT at the system level with "sudo apt install …" per NVIDIA's instructions.

I tried: sudo apt-get install tensorrt, but received the following error message:

E: Unable to locate package tensorrt

Is TensorRT only needed for faster inferencing?

Likely(?). My experience was that the code I was running continued to fail until I finally installed it at the system level. Here's what I ended up with:

sudo apt list --installed tensorrt
Listing… Done
tensorrt/unknown,now 8.6.1.6-1+cuda12.0 amd64 [installed]
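As an aside, your earlier "E: Unable to locate package tensorrt" usually means apt doesn't know about NVIDIA's repository yet. On WSL-Ubuntu, NVIDIA's guide adds it with a keyring package, roughly like this (the keyring version number changes over time, so check their repo for the current one):

wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install tensorrt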