TensorFlow Install error

Hey, I'm trying to use TensorFlow 2.14.0 on Ubuntu 22.04, but it doesn't recognize my GPU (NVIDIA 4070 Ti). My CUDA version is 11.5, Python 3.10.12, pip 22.0.2, and TensorRT 8.5.3.1.

When I try to run it, I receive this:

E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:894] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
W tensorflow/core/common_runtime/gpu/gpu_device.cc:2211] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at GPU support | TensorFlow for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…

Also, when I try to run this:

# Verify available GPUs
import tensorflow as tf

physical_devices = tf.config.list_physical_devices('GPU')
if len(physical_devices) == 0:
    print("Error")
else:
    # TensorFlow config to use the GPU
    tf.config.experimental.set_memory_growth(physical_devices[0], True)

It prints the error message.

Hi @min_guell

Welcome to the TensorFlow Forum!

The installed CUDA version is not compatible with TensorFlow 2.14. You may need to install CUDA 11.8 and cuDNN 8.7 for TensorFlow 2.14, as mentioned in this tested build configuration.

Please refer to this TF install page, check all the hardware/software requirements for GPU support, and follow the step-by-step instructions there to install TensorFlow with GPU support.
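
As a quick sanity check after installing them, you can compare the CUDA/cuDNN versions your installed TensorFlow wheel was built against with what it actually sees at runtime (a minimal sketch; the build-info keys can vary between builds):

import tensorflow as tf

# Versions the pip wheel was compiled against (keys may differ per build)
info = tf.sysconfig.get_build_info()
print("built with CUDA :", info.get("cuda_version"))
print("built with cuDNN:", info.get("cudnn_version"))

# What TensorFlow can actually use at runtime
print("visible GPUs:", tf.config.list_physical_devices("GPU"))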

Please let us know if the issue still persists. Thank you.


I think I have the same problem.
GPU: A800, Driver Version: 525.105.17, CUDA Version: 12.0
I've installed the following versions:
tensorflow 2.9.0 / 2.12 / 2.13.1 / 2.14.0

But it seems like TensorFlow could not detect the GPU, because:
available = tf.test.is_gpu_available()
is_cuda_gpu_available = tf.test.is_gpu_available(cuda_only=True)
is_cuda_gpu_min_3 = tf.test.is_gpu_available(True, (3, 0))

All of these were "False".

When I tried pip install tensorflow[and-cuda]==2.15.0, it does return "True".

But still received the error message:

E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
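
For what it's worth, a quick way to check whether the GPU is actually usable despite those "Unable to register ... factory" messages is to run a small op on it and look at the device placement (a minimal sketch, assuming the tensorflow[and-cuda]==2.15.0 install above):

import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))

# Force a small computation onto the first GPU and confirm its placement
with tf.device('/GPU:0'):
    a = tf.random.normal((1024, 1024))
    b = tf.random.normal((1024, 1024))
    c = tf.matmul(a, b)

print(c.device)  # expect something like .../device:GPU:0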

Hello, I had the same problem and solved it (though not yet the -1 NUMA node warning) by following this YouTube tutorial. First I wiped the CUDA toolkit from my system with:

sudo apt-get --purge remove "*cuda*" "*cublas*" "*cufft*" "*cufile*" "*curand*" "*cusolver*" "*cusparse*" "*gds-tools*" "*npp*" "*nvjpeg*" "nsight*" "*nvvm*"

and then followed the cuDNN installation guide, but pointing to the specific folder of the new CUDA 11.8 installed on my system, like this:

$ sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda-11.8/include
$ sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda-11.8/lib64
$ sudo chmod a+r /usr/local/cuda-11.8/include/cudnn*.h /usr/local/cuda-11.8/lib64/libcudnn*
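
To double-check which cuDNN version those copied headers actually declare, something like this works for cuDNN 8.x, where the version macros live in cudnn_version.h (a small sketch assuming the cuda-11.8 paths above):

import re
from pathlib import Path

# cuDNN 8.x keeps its version macros in cudnn_version.h
header = Path("/usr/local/cuda-11.8/include/cudnn_version.h").read_text()
major, minor, patch = (re.search(rf"#define CUDNN_{name}\s+(\d+)", header).group(1)
                       for name in ("MAJOR", "MINOR", "PATCHLEVEL"))
print(f"cuDNN {major}.{minor}.{patch}")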

By the way, for the venv I used conda:

$ conda create -n tf python=3.10
$ conda activate tf
$ pip install --upgrade pip
$ pip install -U setuptools wheel
$ pip install tensorflow==2.12.0
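
After activating the environment, a quick check along these lines should confirm the GPU is visible (enabling memory growth is optional; it just stops TensorFlow from grabbing all GPU memory up front):

import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
print("GPUs found:", gpus)

# Optional: allocate GPU memory on demand instead of all at once
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)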

If you find any solution for the NUMA node warning, please share it :slight_smile:
Have a nice one!

Hi, I have installed tensorflow-gpu 2.10.1 and I have CUDA version 11.4.
When I try to import TensorFlow I get the following errors. Can anyone please guide me to resolve them?
Surprisingly, I can still see the physical GPU using the TensorFlow command.
I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
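
Not a fix, but note that those are W-level TF-TRT messages: tensorflow-gpu 2.10 looks for the TensorRT 7 libraries (libnvinfer.so.7), and when they are missing only TF-TRT is disabled, which is consistent with the GPU still being visible. A small diagnostic sketch to see which libnvinfer major version (if any) the dynamic loader can find:

import ctypes

# Try the TensorRT runtime library under the two common major versions
for name in ("libnvinfer.so.7", "libnvinfer.so.8"):
    try:
        ctypes.CDLL(name)
        print(name, "found")
    except OSError:
        print(name, "not found")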