Unable to load model using tensorflow hub.KerasLayer

I am getting this error when I try to load the BERT models:

Code:
bert_preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
bert_encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4")

InvalidArgumentError: Multiple OpKernel registrations match NodeDef at the same priority '{{node AssignVariableOp}}': 'op: "AssignVariableOp" device_type: "GPU" constraint { name: "dtype" allowed_values { list { type: DT_INT64 } } } host_memory_arg: "resource"' and 'op: "AssignVariableOp" device_type: "GPU" constraint { name: "dtype" allowed_values { list { type: DT_INT64 } } } host_memory_arg: "resource"'
[[AssignVariableOp]] [Op:AssignVariableOp]

Hi @Abdul_Rahmaan, before loading the BERT model from TF Hub you have to import tensorflow_text as text to register the custom ops:

import tensorflow_text as text
import tensorflow_hub as hub

preprocessor = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
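
Calling the layer on a sample sentence is a quick way to confirm the ops are registered (a sketch continuing the snippet above; the output keys follow the standard TF Hub BERT preprocessing signature):

tokens = preprocessor(["hello world"])  # runs the ops registered by tensorflow_text
print(list(tokens.keys()))  # expected: ['input_word_ids', 'input_mask', 'input_type_ids']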

Thank You.

Yes, I have done that, but I am still getting the same error.

Hi Abdul,

I’ve just tested exactly the same code here on Colab:

!pip install tensorflow_text

import tensorflow_text as text
import tensorflow_hub as hub

preprocessor = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
bert_encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4")

and everything works fine.
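
A quick end-to-end check along these lines (a sketch continuing the snippet above; the output keys follow the standard TF Hub BERT encoder signature) also confirms both layers run:

encoder_inputs = preprocessor(["this is a test sentence"])  # dict of input_word_ids / input_mask / input_type_ids
outputs = bert_encoder(encoder_inputs)
print(outputs["pooled_output"].shape)  # expected: (1, 768) for this encoder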
Maybe there’s something wrong with the environment you are running it in?

Hey, I am not sure if it is an environment issue, as I have created a new environment. I am adding the logs from the Anaconda prompt; hopefully that helps.

To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-18 19:31:57.154641: I tensorflow/c/logging.cc:34] DirectML: creating device on adapter 0 (NVIDIA GeForce GTX 1650)
2023-08-18 19:31:57.336691: I tensorflow/c/logging.cc:34] Successfully opened dynamic library Kernel32.dll
2023-08-18 19:31:57.340141: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:306] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2023-08-18 19:31:57.340511: W tensorflow/core/common_runtime/pluggable_device/pluggable_device_bfc_allocator.cc:28] Overriding allow_growth setting because force_memory_growth was requested by the device.
2023-08-18 19:31:57.341053: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:272] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10190 MB memory) -> physical PluggableDevice (device: 0, name: DML, pci bus id: )
2023-08-18 19:31:59.144319: E tensorflow/core/grappler/clusters/utils.cc:87] Failed to get device properties, error code: 302
[I 2023-08-18 19:32:57.550 ServerApp] Saving file at /NNCodeBasics/Untitled1.ipynb

Hi @Abdul_Rahmaan, could you please share the details of the environment, such as the OS, the TensorFlow version you are using, the platform on which you are trying to execute the code, etc. (for example, with the snippet below)? Thank You.
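
A small script like this (a sketch; run it in the environment where the error occurs) prints the usual details:

import platform, sys
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text

print("Python:", sys.version)
print("OS:", platform.platform())
print("TensorFlow:", tf.__version__)
print("TF Hub:", hub.__version__)
print("TF Text:", text.__version__)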

Hey, I was able to solve the issue by deleting the environment and creating a new one.
Thank you.