TensorFlow 2.13.0 does not find GPU with CUDA 12.1

I have a new computer running Windows 11, with CUDA 12.1 and an RTX 4080 GPU. I installed TensorFlow 2.13.0 with Python 3.11.4, but TensorFlow does not find the GPU. Is TensorFlow 2.13.0 compatible with CUDA 12.1?

P.S. I also tried installing the nightly build, which didn’t detect the GPU either.
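For reference, this is a minimal sketch of how to check whether TensorFlow can see the GPU; it is guarded so it also runs on a machine where TensorFlow is not installed:

```python
def gpu_check():
    """Return the list of GPUs TensorFlow can see, or None if TF is absent."""
    try:
        import tensorflow as tf
    except ImportError:
        # TensorFlow not installed in this environment
        return None
    # An empty list here means TF loaded, but found no usable GPU
    return tf.config.list_physical_devices("GPU")

print("GPUs visible:", gpu_check())
```

If this prints an empty list, TensorFlow itself loaded fine but could not locate a compatible CUDA/cuDNN stack.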

I had the same problem and solved it by following this guide: Install TensorFlow with pip (“Zainstaluj TensorFlow z pip”).
Do the following steps:

  1. conda create --name tf python=3.9
  2. conda activate tf
  3. conda install -c conda-forge cudatoolkit=11.8.0
  4. pip install nvidia-cudnn-cu11==8.6.0.163
  5. pip install --upgrade pip
  6. pip install tensorflow==2.13.*
  7. test the GPU
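For step 7, a sketch of a GPU test that checks the device is usable and not merely visible, by running a small matrix multiplication on it (guarded so the snippet degrades gracefully where TensorFlow is absent):

```python
def test_gpu():
    """Run a tiny matmul on the first GPU; return a status string."""
    try:
        import tensorflow as tf
    except ImportError:
        return "tensorflow not installed"
    gpus = tf.config.list_physical_devices("GPU")
    if not gpus:
        return "no GPU visible"
    # Pin a small computation to the GPU to confirm the CUDA/cuDNN
    # libraries actually load and execute
    with tf.device("/GPU:0"):
        x = tf.random.normal((256, 256))
        y = tf.matmul(x, x)
    return f"ok: {len(gpus)} GPU(s), result shape {y.shape}"

print(test_gpu())
```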
Hi @Steve_F, per the tested build configurations, TensorFlow 2.13 supports cuDNN 8.6 and CUDA 11.8. Also note that TensorFlow 2.10 was the last TensorFlow release to support GPU on native Windows. Starting with TensorFlow 2.11, you need to install TensorFlow in WSL2 to use the GPU. Thank you.
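You can also check which CUDA/cuDNN versions your installed TensorFlow wheel was built against via `tf.sysconfig.get_build_info()`. A small guarded sketch (on a CPU-only wheel the CUDA keys may simply be absent):

```python
def build_versions():
    """Return the CUDA/cuDNN versions the installed TF wheel was built with."""
    try:
        import tensorflow as tf
    except ImportError:
        return {}
    info = tf.sysconfig.get_build_info()
    # Keys are missing on CPU-only builds, hence .get()
    return {k: info.get(k) for k in ("cuda_version", "cudnn_version")}

print(build_versions())
```

If this reports CUDA 11.8 while your system has CUDA 12.1 installed, that mismatch alone can explain the GPU not being found.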

I’ve run aground during the WSL2 installation: when attempting to install the cuDNN libraries with

sudo apt-get install libcudnn8

I get the following:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
E: Unable to locate package libcudnn8

As a consequence, TensorFlow doesn’t find the GPU. I’ve researched this and haven’t found a solution that works.

Thanks, but that did not work for me.

Finally got it to work by installing the cuDNN library locally, following the Debian installation instructions found here.

Sorry for asking, but I couldn’t find where the doc mentions that TensorFlow 2.10 was the last TF release that supported GPU on native Windows. Would you mind providing the link?
Thanks in advance!

Hi @Chieh-Sheng_Chen, You can see it here. Thank You!

Thanks for replying.
I think I now know why I couldn’t find the related information before: the doc’s default language was set to zh-TW for me.
After I changed it to English, there was a lot more information.

Here is a way to encapsulate the whole working environment in a Python virtual environment:

Base Config: Windows10 + RTX3060ti

Anaconda installed at D:\Anaconda3\

conda create -n tf39 python=3.9.*
conda activate tf39
conda install -c conda-forge cudatoolkit=11.8.*
pip install nvidia-cudnn-cu11
pip install tensorflow==2.10.*

Set the PATH to include the directory where cudnn64_8.dll (installed in the previous step) is located:

conda env config vars set PATH=D:\Anaconda3\envs\tf39\Lib\site-packages\nvidia\cudnn\bin;%PATH%

In my environment, the GPU is found and runs the calculations.
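Instead of hardcoding `D:\Anaconda3\envs\tf39`, the cuDNN bin directory can be derived from the active environment's prefix. A sketch using only the standard library; the relative layout below matches the pip-installed `nvidia-cudnn-cu11` package on Windows, and is an assumption, not a guarantee:

```python
import os
import sys

def cudnn_bin_dir(prefix=None):
    """Build the path to the pip-installed cuDNN DLL directory.

    Defaults to the currently active environment (sys.prefix).
    """
    prefix = prefix or sys.prefix
    # Windows layout of the nvidia-cudnn-cu11 wheel (assumed)
    return os.path.join(prefix, "Lib", "site-packages", "nvidia", "cudnn", "bin")

print(cudnn_bin_dir(r"D:\Anaconda3\envs\tf39"))
```

The printed path is what you would prepend to PATH via `conda env config vars set`.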

It worked for me, thanks, yet when I train a model there is an issue:

ValueError: Calling Model.fit in graph mode is not supported when the Model instance was constructed with eager mode enabled. Please construct your Model instance in graph mode or call Model.fit with eager mode enabled.

Yet when I run it on the CPU instead of the GPU, I don’t receive that error.
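Not a confirmed fix for the error above, but the usual cause of that message is that the model is built while eager execution is on and something later (often `tf.compat.v1.disable_eager_execution()` in imported code) switches to graph mode before `Model.fit` runs. A guarded sketch for checking which mode is active right before fitting:

```python
def eager_mode_status():
    """Return True/False for eager execution, or None if TF is absent."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    # If this is False right before Model.fit on a model built eagerly,
    # that mismatch produces the ValueError quoted above
    return tf.executing_eagerly()

print("eager execution enabled:", eager_mode_status())
```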

Download the CUDA toolkit from the NVIDIA official website and install it; do not use pip to install it.