WSL2 TF 2.11 Keras optimizer fail

For WSL2 with tf==2.11, optimizer=tf.keras.optimizers.legacy.Adam() works,


but NOT optimizer="adam"

NOR optimizer=tf.keras.optimizers.Adam()

Is there a new way to call the new optimizers, or do the paths to CUDA in the new Keras optimizers need correction?

Hi @Steven_Cohen, in v2.11 and later, tf.keras.optimizers.Optimizer points to a new base class implementation, which is why your code still works with tf.keras.optimizers.legacy.Optimizer. For more details please refer to this documentation. Thank you.
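Until the underlying CUDA issue is resolved, one stopgap is to pick the optimizer class based on the installed TF version. The sketch below is purely illustrative (the helper name and the version-gating logic are my own, not a TF API), assuming the 2.11 cutover described above:

```python
def pick_adam_path(tf_version: str) -> str:
    """Illustrative helper (not a TF API): choose which Adam class to use.

    TF >= 2.11 made tf.keras.optimizers.Adam point to the new implementation,
    so on setups where the new optimizer fails, the legacy class is the
    workaround; on older versions the default class is already the old one.
    """
    major, minor = (int(x) for x in tf_version.split(".")[:2])
    if (major, minor) >= (2, 11):
        return "tf.keras.optimizers.legacy.Adam"
    return "tf.keras.optimizers.Adam"


print(pick_adam_path("2.11.0"))  # → tf.keras.optimizers.legacy.Adam
print(pick_adam_path("2.10.1"))  # → tf.keras.optimizers.Adam
```

In a real script you would use the string tf.__version__ rather than a hard-coded one.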

@Steven_Cohen - Can you provide steps to reproduce? Does any call to the Adam optimizer fail in WSL2? Have you tried the TF 2.12 release?

@chenmoney - FYI.

Hi, thanks for getting back. I just followed the installation instructions for TF 2.12 (Install TensorFlow with pip), and when I run a simple optimizers.Adam() I get:

2023-04-09 18:02:22.272960: W tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.

Also, when I look in the directory there is no nvcc file downloaded by NVIDIA.

I've tried looking through "\wsl.localhost\Ubuntu\home\steve\anaconda3.23.03tf12\lib\python3.10\site-packages\keras\optimizers\" for a solution, but it's a bit above my pay grade.

I have also noticed that on Windows native, with tf.keras.optimizers.experimental.Adam(), the same error occurs:

InternalError: Graph execution error: ... Node: 'StatefulPartitionedCall_2'
libdevice not found at ./libdevice.10.bc
[[{{node StatefulPartitionedCall_2}}]] [Op:__inference_train_function_739]
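A commonly suggested workaround for this libdevice error is to point XLA at the directory that contains nvvm/libdevice before TensorFlow initializes. A minimal sketch, assuming the CUDA files live under /usr/lib/cuda (that path is an example; adjust it to your install):

```python
import os

# Must be set before TensorFlow is imported, otherwise XLA has already
# chosen its CUDA search path. The directory below is an assumed example:
# it should be the parent directory of nvvm/libdevice/libdevice.10.bc
# on your machine.
os.environ["XLA_FLAGS"] = "--xla_gpu_cuda_data_dir=/usr/lib/cuda"
print(os.environ["XLA_FLAGS"])
```

The same flag can instead be exported in the shell (export XLA_FLAGS=...) before launching Python, which is what the conda activation-script fix further down this thread automates.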

But at least I can find "C:\Users\sjc52\anaconda3.2022.10\pkgs\cuda-nvcc-11.7.99-0"

I am assuming you used the steps here - Install TensorFlow with pip

And also the Verify Install steps there to confirm that the GPU is enabled. I don't have access to WSL2 now, but are you able to run nvidia-smi to verify the drivers in WSL?
Also, if you are familiar with Docker, you can give the steps to use TF inside Docker a try - Docker | TensorFlow

Please post here if you still can’t get it to work with GPU. I hope other community members who are familiar with WSL can comment and offer additional assistance.
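For reference, the Verify Install step from the pip guide boils down to listing the GPUs TensorFlow can see. Here is that check, with a guard added by me (not part of the guide) so it degrades gracefully when TensorFlow is not importable:

```python
import importlib.util

# Sketch of the "verify install" check: list the GPUs TensorFlow can see.
# An empty list means TF is installed but the GPU is not visible to it.
if importlib.util.find_spec("tensorflow") is not None:
    import tensorflow as tf
    print(tf.config.list_physical_devices("GPU"))
else:
    print("tensorflow is not installed in this environment")
```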

Perhaps, if you're running TF 2.10 on Windows, you could confirm whether this code works?

import tensorflow as tf

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),  # 10 output logits, one per MNIST class
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=tf.keras.optimizers.experimental.Adam(),
              loss=loss_fn, metrics=["accuracy"]), y_train, epochs=5)

I don't have a Windows OS to confirm this, but I would recommend using either tf.keras.optimizers.Adam or tf.keras.optimizers.legacy.Adam. I don't think you need the experimental module for Adam.

I've installed TF 2.13 and now it is working, maybe because I installed the latest CUDA drivers as instructed here: NVIDIA GPU Accelerated Computing on WSL 2 — wsl-user-guide 12.2 documentation.

But I also noticed the following in Install TensorFlow with pip, though it is only tacked on at the end of the Linux install section, not the WSL2 one:

Ubuntu 22.04

In Ubuntu 22.04, you may encounter the following error:

Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice
...
Couldn't invoke ptxas --version
...
InternalError: libdevice not found at ./libdevice.10.bc [Op:__some_op]

To fix this error, you will need to run the following commands.

# Install NVCC
conda install -c nvidia cuda-nvcc=11.3.58
# Configure the XLA cuda directory
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
printf 'export XLA_FLAGS=--xla_gpu_cuda_data_dir=$CONDA_PREFIX/lib/\n' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# Copy libdevice file to the required path
mkdir -p $CONDA_PREFIX/lib/nvvm/libdevice
cp $CONDA_PREFIX/lib/libdevice.10.bc $CONDA_PREFIX/lib/nvvm/libdevice/
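After running those commands, a quick way to confirm the copy step landed where XLA expects it is to check for the file from Python. This is a sketch of my own, not part of the guide; CONDA_PREFIX is only set inside an active conda environment:

```python
import os
from pathlib import Path

# Check that libdevice.10.bc ended up under $CONDA_PREFIX/lib/nvvm/libdevice,
# which is where the XLA_FLAGS setting above tells XLA to look.
prefix = os.environ.get("CONDA_PREFIX")
if prefix:
    libdevice = Path(prefix) / "lib" / "nvvm" / "libdevice" / "libdevice.10.bc"
    print("libdevice present:", libdevice.is_file())
else:
    print("CONDA_PREFIX is not set; activate the conda environment first")
```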