Help with TensorFlow and CUDA GPU processing

I’m trying to build my first machine learning program in Visual Studio, and I’m trying to get it to process my model using my GPU instead of my CPU. This is what I’ve got so far; I just can’t figure out what I’m doing wrong…

Windows 10 Pro (fresh install)
Visual Studio 2019 and 2022 Community Edition
CUDA 11.8
cuDNN 8.6.0.163
copied the cuDNN DLLs and created the cuDNN directory
set the system environment variables
Anaconda (Python 3.9.12)

What else could i be missing?

I run this in Anaconda but still get False for available GPUs, even though I have two RTX 3060 Tis installed.

python
import tensorflow as tf
tf.__version__
len(tf.config.list_physical_devices('GPU')) > 0

@Michael_Duhon,

Welcome to the TensorFlow Forum!

According to Install TensorFlow with pip on Windows:

TensorFlow 2.10 was the last TensorFlow release that supported GPU on native Windows. Starting with TensorFlow 2.11, you will need to install TensorFlow in WSL2, or install tensorflow-cpu and, optionally, try the TensorFlow-DirectML-Plugin.

Are you using the latest version of TensorFlow, 2.12?

If yes, could you please try as suggested above and let us know?

Thank you!

OK, this is what I ended up having to do in order to get it running with GPU on my machine, using an older version of tensorflow-gpu. Would you recommend using 2.11 or above with your suggestion to handle the new error I’m getting when trying to use both my GPUs? With the setup below I can only use one GPU.

downgraded to Python 3.8.5, then:
pip install tensorflow-gpu==2.7.1
pip install protobuf==3.20.1
pip install grpcio==1.48.2
pip install pandas --user
pip install scikit-learn
pip install pyodbc
pip install sqlalchemy
pip install numpy==1.21

I’m now seeing an issue where it will only process using one of my GPUs; if I include both of my RTX 3060 Tis, then I get the error below:

No OpKernel was registered to support Op 'NcclAllReduce' used by {{node Adam/NcclAllReduce}} with these attrs: [reduction="sum", shared_name="c1", T=DT_FLOAT, num_devices=2]
Registered devices: [CPU, GPU]

@Michael_Duhon,

tf.distribute.MirroredStrategy() uses NCCL by default.

Can you try calling tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.HierarchicalCopyAllReduce()) and let us know?
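In case it helps, here is a minimal sketch of that workaround; the Dense model is just an illustrative placeholder, not your actual model:

```python
import tensorflow as tf

# NCCL is not available on native Windows, so swap the default
# all-reduce for one that copies gradients between devices instead.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.HierarchicalCopyAllReduce())

# Anything that creates variables (model, optimizer) must be built
# inside the strategy scope so it is mirrored across both GPUs.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer='adam', loss='mse')
```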

Would you recommend using 2.11 or above with your suggestion to handle the new error I’m getting when trying to use both my GPUs? With the setup below I can only use one GPU.

Please share the error

Thank you!

Yes, I have already tried this.

@Michael_Duhon,

Are you still seeing the same error? What is the current version of CUDA?

Also, can you please try the following and let us know?

strategy = tf.distribute.MirroredStrategy(
     cross_device_ops=tf.distribute.ReductionToOneDevice())
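For reference, a quick way to check whether that strategy actually sees both cards (the print is only a sanity check):

```python
import tensorflow as tf

# ReductionToOneDevice gathers gradients onto a single device (CPU by
# default) instead of using NCCL, which is unavailable on native Windows.
strategy = tf.distribute.MirroredStrategy(
    cross_device_ops=tf.distribute.ReductionToOneDevice())

# Should report 2 once both RTX 3060 Tis are visible to TensorFlow;
# it falls back to 1 replica on a CPU-only machine.
print("replicas in sync:", strategy.num_replicas_in_sync)
```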

Thank you!