I followed the instructions for installing TensorFlow with CUDA on the "Install TensorFlow with pip" guide and get the following error when running the GPU test:
2023-10-23 14:54:25.841187: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable
2023-10-23 14:54:25.865431: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-23 14:54:26.361517: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-10-23 14:54:27.259433: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:982] could not open file to read NUMA node: /sys/bus/pci/devices/0000:01:00.0/numa_node
Your kernel may have been built without NUMA support.
2023-10-23 14:54:27.296509: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1956] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at "Install TensorFlow with pip" for how to download and setup the required libraries for your platform.
Skipping registering GPU devices…
nvidia-smi detects the 3070 Ti GPU and reports CUDA 12.2, while nvcc --version reports release 11.3, so the driver and the installed CUDA toolkit disagree.
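For context on the version mismatch: nvidia-smi reports the highest CUDA version the installed driver supports, while nvcc reports the locally installed toolkit, so the two can legitimately differ. A minimal sketch of extracting the toolkit release from captured nvcc output (the helper name and the sample string are my own, for illustration only):

```python
import re
from typing import Optional

def cuda_version_from_nvcc(output: str) -> Optional[str]:
    """Extract the toolkit release (e.g. '11.3') from `nvcc --version` output."""
    # nvcc prints a line like: "Cuda compilation tools, release 11.3, V11.3.109"
    m = re.search(r"release (\d+\.\d+)", output)
    return m.group(1) if m else None

# Sample output in the format nvcc prints (hypothetical capture):
sample = """nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 11.3, V11.3.109"""
print(cuda_version_from_nvcc(sample))  # → 11.3
```

Comparing this value against the CUDA version your TensorFlow build expects (listed in the pip install guide's tested configurations) is one way to confirm whether the toolkit, rather than the driver, is what the dlopen failure is complaining about.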