Unable to see GPU in Jupyter Notebook

I’ve installed the CUDA toolkit and cuDNN on my Ubuntu machine to run my models on the GPU. My LD_LIBRARY_PATH is set to “/usr/local/cuda/include:/usr/local/cuda/lib64”. If I run the following script as a .py file, TensorFlow can see my GPU.

import tensorflow as tf

# List the GPUs visible to TensorFlow; the list is non-empty when a GPU is detected
print(tf.config.list_physical_devices('GPU'))

The result is [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')].

However, if I run the same code in a Jupyter Notebook, the returned list is empty. I am unsure why the notebook cannot see my GPU. I have tried setting the notebook’s environment variable to the same value as my $LD_LIBRARY_PATH, but no luck.
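One way to narrow this down is to check whether the Jupyter kernel is even running the same interpreter and environment as the terminal session where the GPU is visible. The minimal diagnostic sketch below assumes you run it once in a plain python session and once in the notebook and compare the output:

import os
import sys

# A different sys.executable means the kernel runs under another
# Python environment, where LD_LIBRARY_PATH may not be set at all.
print(sys.executable)
print(os.environ.get('LD_LIBRARY_PATH'))

If LD_LIBRARY_PATH prints as None inside the notebook, the variable was not in the environment when the kernel process started; setting it after the kernel is already running generally has no effect on how the CUDA libraries are located.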

Could you try testing your notebook in a prepared Docker environment with the official Jupyter GPU image?


Hi @jay1643

Welcome to the TensorFlow Forum!

Could you follow the step-by-step instructions on the TF install page to install TensorFlow using conda on your system? Also make sure the paths for the installed software are set properly so that TF GPU support is enabled.
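After finishing the install steps, a short check like the sketch below, run inside the notebook, can confirm whether the kernel sees a CUDA-enabled build; it assumes a recent TF 2.x release where tf.sysconfig.get_build_info() is available:

import tensorflow as tf

# Confirm the installed wheel was built with CUDA support
print(tf.__version__)
print(tf.test.is_built_with_cuda())

# CUDA/cuDNN versions the build expects (keys assumed present in GPU builds)
build_info = tf.sysconfig.get_build_info()
print(build_info.get('cuda_version'), build_info.get('cudnn_version'))

# The GPU should appear here once the libraries are found
print(tf.config.list_physical_devices('GPU'))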

Please let us know if the issue still persists. Thank you.