GPU with CUDA 11.8 not detected, could not find CUDA drivers

I am trying to run a simple TensorFlow test and it cannot detect my GPU. I followed the installation instructions exactly, but so far I have not found a solution. The check I'm running boils down to the standard GPU query shown below my setup details.

OS: Ubuntu 22.04.2 LTS
Editor: VS Code
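
For reference, the check is essentially the standard device query (as far as I can tell, this is what produces the empty list in the output further down):

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"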


pip show nvidia-cudnn-cu11 

Name: nvidia-cudnn-cu11
Version: 8.6.0.163
Summary: cuDNN runtime libraries
Home-page: https://developer.nvidia.com/cuda-zone
Author: Nvidia CUDA Installer Team
Author-email: cuda_installer@nvidia.com
License: NVIDIA Proprietary Software
Location: /home/myPath/.venv/lib/python3.10/site-packages
Requires: nvidia-cublas-cu11
Required-by: 

pip show tensorflow


Name: tensorflow
Version: 2.12.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: packages@tensorflow.org
License: Apache 2.0
Location: /home/myPath/.venv/lib/python3.10/site-packages
Requires: absl-py, astunparse, flatbuffers, gast, google-pasta, grpcio, h5py, jax, keras, libclang, numpy, opt-einsum, packaging, protobuf, setuptools, six, tensorboard, tensorflow-estimator, tensorflow-io-gcs-filesystem, termcolor, typing-extensions, wrapt
Required-by:


Error:

2023-04-09 19:16:25.164907: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-04-09 19:16:25.218184: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-04-09 19:16:25.218592: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-04-09 19:16:26.022157: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-04-09 19:16:26.919740: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-04-09 19:16:26.920248: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...

[]
nvidia-smi

NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0

@Apisteutos84,

Welcome to the TensorFlow Forum!

Please install CUDA 11.8 as shown below and let us know if it helps.

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.8.0/local_installers/cuda-repo-ubuntu2204-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-11-8-local_11.8.0-520.61.05-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-11-8-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda
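
Note: if apt resolves the generic cuda metapackage to a newer toolkit from the network repository, pinning the release explicitly should keep it at 11.8 (the versioned metapackage name below follows NVIDIA's usual repo naming):

# install the 11.8 metapackage specifically instead of the latest release
sudo apt-get -y install cuda-11-8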

Thank you!

I'm having similar problems to the OP: Ubuntu 22.04, and I've tried everything I can find to make TensorFlow work with my NVIDIA GTX 1060. I followed your instructions and the commands seemed to complete successfully, but when I test it afterwards I still get:

~$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-04-22 20:19:44.842303: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-04-22 20:19:45.643359: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at Install TensorFlow with pip for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
[]

Any suggestions gratefully received.

It looks like several folks are experiencing similar configuration problems getting TensorFlow installed on Ubuntu 22.04. The above recommendation of installing CUDA 11.8 doesn't really work, because following the NVIDIA guidelines installs CUDA 12.1.1_1, which is newer than 11.8. My configuration is an NVIDIA T1000 running the 530.41.03 driver and CUDA version 12.1. NVIDIA's tools report that the CUDA drivers are installed, but TensorFlow cannot find them. Is there an environment variable that can be set to point TensorFlow at the appropriate directory? This appears to be related to tensorflow/tsl/cuda/cudart_stub.cc:28.

In addition, my configuration shows TensorRT 8.6.1 installed in python3.10/site-packages, yet TensorFlow cannot find it and issues: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT.

My hunch is that these are compatibility problems that may require rebuilding TensorFlow from source for GPU support. One approach might be building from source (Build from source | TensorFlow) using Bazel. If that's not the case, I and others would appreciate knowing which configurations do work and how to resolve these issues.

I would agree with Phillip Schmidt: NVIDIA's packages pull in a later CUDA version, even though TensorFlow requires 11.8. Per the TensorFlow installation instructions, I created a conda environment named 'tf' (roughly as sketched below), and while validating it in the CLI there is no issue identifying the GPU or the TensorRT version.
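
For completeness, the environment was set up along the lines of the install guide; treat the commands below as a sketch (the versions are the ones listed for TensorFlow 2.12) rather than exact steps:

# create and activate the environment
conda create --name tf python=3.10
conda activate tf
# CUDA toolkit from conda-forge, cuDNN and TensorFlow from pip
conda install -c conda-forge cudatoolkit=11.8.0
pip install nvidia-cudnn-cu11==8.6.0.163 tensorflow==2.12.*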

(tf) whereismygpu@ubuntu:~/testplatform$ python
Python 3.10.11 (main, Apr 20 2023, 19:02:41) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2023-05-16 16:50:35.366302: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-05-16 16:50:36.079860: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
>>> import tensorrt as trf
>>> print(tf.__version__)
2.12.0
>>> 
>>> print(trf.__version__)
8.6.1
>>> print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
2023-05-16 16:53:01.043803: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-05-16 16:53:01.073270: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-05-16 16:53:01.073464: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:996] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
Num GPUs Available:  1
>>> 

On another note, I've emailed the author, Prarit Bhargava (email removed by moderator), about the NUMA (Non-Uniform Memory Access) node message that says I have no NUMA nodes (-1).

@seven, @Phillip_Schmidt,

Apologies for the delayed reply.

I followed the instructions in Install TensorFlow with pip and was able to detect the GPU using TensorFlow 2.12 and CUDA 11.8 on Ubuntu 22.04.
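
In particular, the step that is easiest to miss is exporting the library paths so TensorFlow can dlopen cuDNN from the pip package. Roughly, assuming a conda environment as in the guide:

# locate the pip-installed cuDNN and add it (plus the conda libs) to the loader path
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn; print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH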

Could you please try again and let us know?

Thank you!
