Operator device placement for models from TensorFlow Hub

Hi,

I would like to know the device placement of the operators in a model retrieved from TensorFlow Hub.
I think the code will explain it better:
test.py

import tensorflow as tf
import tensorflow_hub as hub

tf.debugging.set_log_device_placement(True)

layer = hub.KerasLayer('https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1')
model = tf.keras.Sequential([layer])

When I run the script, it does print some device placements, but nothing related to any relevant operations.
I can get a list of all unique operators on a Unix system using:

python test.py 2>&1|awk '{print $7,$10}'|sort|uniq

And I get the following:

AssignVariableOp /job:localhost/replica:0/task:0/device:CPU:0
DestroyResourceOp /job:localhost/replica:0/task:0/device:CPU:0
Identity /job:localhost/replica:0/task:0/device:CPU:0
RestoreV2 /job:localhost/replica:0/task:0/device:CPU:0
VarHandleOp /job:localhost/replica:0/task:0/device:CPU:0
_EagerConst /job:localhost/replica:0/task:0/device:CPU:0

Is it possible to retrieve operator placements when a model comes from TensorFlow Hub?

I don’t understand your question

As far as I’m aware, you can place the model on any device you want (CPU, GPU, or TPU), but I don’t think you can do it per operator.
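
For whole-model placement, a minimal sketch (assuming a GPU is available; the Hub URL is the one from your question):

import tensorflow as tf
import tensorflow_hub as hub

# Pin everything executed inside this block to the first GPU.
with tf.device('/GPU:0'):
    layer = hub.KerasLayer(
        'https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1')
    model = tf.keras.Sequential([layer])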

Hello @lgusm, thanks for taking the time to read my question.

So I am working on a TensorFlow plugin for a device, and I need real-world models to drive the development.
Here is what my workflow has looked like so far:

  1. My first focus was on supporting the ResNet50 operators, since ResNet50 is a widely available model and I have an implementation that doesn’t come from the Hub.
  2. I made a simple Python script that loads the model with the tf.debugging.set_log_device_placement(True) option enabled; this gives me the list of kernels I need to implement in order to support ResNet50 (a minimal sketch follows this list).
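
For reference, a minimal sketch of that workflow, assuming the ResNet50 from tf.keras.applications (not a Hub model) and a dummy input batch:

import numpy as np
import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# ResNet50 from Keras applications; weights=None skips the weight download.
model = tf.keras.applications.ResNet50(weights=None)

# Executing the model is what triggers the per-kernel placement logs.
model(np.zeros((1, 224, 224, 3), dtype=np.float32))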

Now I would like to support other real-world TensorFlow models, and TensorFlow Hub is a good place to find them.
However, when I load a model from TensorFlow Hub, like the RetinaNet in the example above, I don’t see any relevant kernel (I would expect to see Conv2D, for example).
I understand that a TensorFlow Hub model encapsulates the model as a KerasLayer, but at a lower level I assume that Conv2D and the other operators/kernels will still be executed. I would like to retrieve the kernels/operators that a model from TensorFlow Hub is going to execute.

Let me know if my question is more understandable now.

IDK how device placement works with SavedModel (Hub just wraps SavedModel).

But try having a look at:

# Load the SavedModel.
mod = hub.load('https://tfhub.dev/tensorflow/retinanet/resnet152_v1_fpn_1024x1024/1')
# Get the graph out of the default signature and convert it to a GraphDef proto.
mod.signatures['serving_default'].graph.as_graph_def()

There are some limits related to hard-coding the device placement with tf.function.

Something can be controlled at save time with:

https://www.tensorflow.org/api_docs/python/tf/saved_model/experimental/VariablePolicy

But it is still not fully supported on load.
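
As a rough sketch of the save-time control (SAVE_VARIABLE_DEVICES records each variable’s device in the SavedModel; `model` below is a placeholder for whatever tf.Module or Keras model you are saving):

import tensorflow as tf

# Keep each variable's device in the SavedModel instead of clearing it.
options = tf.saved_model.SaveOptions(
    experimental_variable_policy=(
        tf.saved_model.experimental.VariablePolicy.SAVE_VARIABLE_DEVICES))

# `model` is a placeholder for the object being saved.
tf.saved_model.save(model, '/tmp/model_with_devices', options=options)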

Awesome, I can extract the list of operators from this. Thank you!

By the way, is there any documentation about “graph_def”?
I am currently able to extract the list of ops by converting the graph_def to a string and parsing the info out of it, but I am sure I could extract the relevant information directly from the proto.

Is that an explanation of why the device placement logging “doesn’t work” for operators within Hub models? (If so, I deduce that Hub models are loaded inside a tf.function.)

No, it was just related to the cases where a device could be hard-coded in the SavedModel.

Okay, thanks for clarifying.

It’s a proto, defined here: tensorflow/graph.proto at master · tensorflow/tensorflow · GitHub

Hub loads SavedModels. SavedModels contain a collection of tf.functions.
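
If it helps, a minimal sketch of pulling the op types straight out of the proto instead of string-parsing (reusing `mod` from the earlier snippet; ops nested inside tf.functions live in the graph’s function library):

graph_def = mod.signatures['serving_default'].graph.as_graph_def()

# Top-level nodes carry their op type in node.op.
op_types = {node.op for node in graph_def.node}

# Ops wrapped inside tf.functions are stored in the function library.
for func in graph_def.library.function:
    op_types.update(node.op for node in func.node_def)

print(sorted(op_types))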


Thank you, I found it in the meantime :blush:

I am going to close the topic, as I have all the information I needed. Thanks again to both of you for taking the time to answer!
