Accessing an object's attribute in tf.function

Hi, I’m trying to manipulate a tensor that is stored as an object’s attribute, using only tf.function’s return value. All assignments happen outside the function, so I don’t think I’m violating any rules here, but the result is clearly not what I expected.

import tensorflow as tf


class Holder:

  def __init__(self):
    self.v = tf.zeros([])


@tf.function
def add(holder):
  return holder.v + 1

h = Holder()
l = []
for _ in range(10):
  h.v = add(h)
  l.append(h.v)

print(l)

Result:

[<tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>]

How can I achieve this correctly without explicitly passing the tensors themselves as arguments to the function, which would be quite a long list? I’m only using the object to pack arguments.
Thank you.

Are you looking for something like:

https://www.tensorflow.org/api_docs/python/tf/function#variables_may_only_be_created_once_2

Well, kind of. In that example the variables are bound to self. I want them in a separate object.

You need to use something like:

import tensorflow as tf


class Holder:
  def __init__(self):
    self._v = tf.Variable(tf.zeros([]), name="Holder.v")

  @property
  def v(self):
    return self._v

  @v.setter
  def v(self, value):
    self._v.assign(value)

@tf.function
def add(holder):
  return holder.v + 1.0

h = Holder()
l = []
for _ in range(3):
  res = add(h)
  h.v = res
  l.append(res)

print(l)
print(h.v)

[<tf.Tensor: shape=(), dtype=float32, numpy=1.0>, <tf.Tensor: shape=(), dtype=float32, numpy=2.0>, <tf.Tensor: shape=(), dtype=float32, numpy=3.0>]

<tf.Variable 'Holder.v:0' shape=() dtype=float32, numpy=3.0>

Thank you so much. But what if the function returns a differently shaped tensor every time? From what I know, you can’t create a TensorArray outside the function and modify it inside. By the way, where can I learn practical details like this? Any resource or book recommendations?

There are a few examples of tf.function with TensorArray in:

https://www.tensorflow.org/api_docs/python/tf/TensorArray

I read it before; there the TensorArrays are used within a tf.function.

Check out Extension Types; they might be of use here: Extension types | TensorFlow Core

In particular, if you make Holder an extension type, then you’ll be able to pass it as an argument and return it from the tf.function (as long as you return the whole object). That is, something like this:

@tf.function
def add(holder):
  # ExtensionType instances are immutable, so build a new Holder
  # with the updated field rather than mutating one.
  return Holder(v=holder.v + 1.0)
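
For context, here is a minimal end-to-end sketch of that approach, assuming Holder is declared as a tf.experimental.ExtensionType with a single scalar tensor field (the field name and values are just illustrative):

import tensorflow as tf


class Holder(tf.experimental.ExtensionType):
  v: tf.Tensor


@tf.function
def add(holder):
  # Return a new Holder with the updated value.
  return Holder(v=holder.v + 1.0)


h = Holder(v=tf.zeros([]))
for _ in range(3):
  h = add(h)

print(h.v)  # tf.Tensor(3.0, shape=(), dtype=float32)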

Note that mutation of the object will still not work as you expected; for that, we’d need some finer-grained APIs that aren’t yet ready. The tracing protocol might be of more use, if you create a custom tracing type for Holder: tensorflow/trace.py at 0179633763322a176156dd8f03c8eb5fbfa33049 · tensorflow/tensorflow · GitHub. This is a recent, experimental API so it might have some rough edges but would definitely be interesting to try.


Before digging into it, I wonder whether it works when holder.v is a TensorArray with dynamic size. How can I apply the same function, which includes some size-changing ops, to a tensor every time and keep the result?
The exact thing I want is like this:

class Holder:

  def __init__(self):
    self.v = tf.TensorArray(tf.int32, size=10)


@tf.function
def add(i, holder):
  return holder.v.write(i, i)

h = Holder()
for i in range(10):
  h.v = add(i, h)

print(h.v.stack())

Apparently this is not supported by tf.

class Holder:

  def __init__(self):
    self.v = tf.zeros([0], tf.int32)


@tf.function
def add(i, holder):
  return tf.concat([holder.v, tf.convert_to_tensor([i])], 0)


h = Holder()
for i in range(10):
  h.v = add(i, h)

print(h.v)

Or this, but it will cause retracing. The only solution I can think of is to allocate a long-enough tensor with some flag values indicating whether each slot has been assigned.

Unfortunately, TensorArray is semantically an immutable data type (that’s something that greatly simplifies autodiff). So you still need a place to put the updated array - you can’t update it “in-place” like you would a tf.Variable. Which puts us back in the original pattern.
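
To illustrate, here is a tiny sketch of that functional style inside a single tf.function, where each write’s result has to be rebound (the size and values are arbitrary):

import tensorflow as tf


@tf.function
def fill():
  ta = tf.TensorArray(tf.int32, size=10)
  for i in tf.range(10):
    # write() returns a new TensorArray; the result must be rebound.
    ta = ta.write(i, i)
  return ta.stack()

print(fill())  # tf.Tensor([0 1 2 3 4 5 6 7 8 9], shape=(10,), dtype=int32)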

Note, there is a separate bug with TensorArray (it’s a bug, not intended behavior, so hopefully it will work in the future) - right now returning TensorArrays doesn’t work properly. I think that’s why the last snippet didn’t work.

A question: do you need to update it randomly, that is, do you call add with the values of i in random order? Also, do you need the entire operation to be differentiable? The best approach depends on these answers. For example, I wonder if something like tf.data.Dataset.range(10).map(add).batch(10) might work.
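
For illustration only, a rough sketch of that Dataset idea, using a standalone add(i) rather than the Holder version, just to show the shape of the pipeline:

import tensorflow as tf


def add(i):
  # stand-in for the real per-element computation
  return i + 1

ds = tf.data.Dataset.range(10).map(add).batch(10)
print(next(iter(ds)))  # [ 1  2  3  4  5  6  7  8  9 10]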

Update: the latter solution, using tf.concat, should work. To avoid retracing, you can add an input_signature to @tf.function to tell it that the first dimension is dynamic.

Something like input_signature=[tf.TensorSpec(shape=[None], dtype=tf.int32)], matching the dtype of the tensor above.

Or just set experimental_relax_shapes=True and then it will only retrace 2-3 times before it switches to dynamic shapes.
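
A minimal sketch of the input_signature approach, assuming the growing tensor is passed to the function directly (an input_signature describes tensor arguments, so this sketch skips the plain Holder object) and using int32 to match the snippet above:

import tensorflow as tf


@tf.function(input_signature=[
    tf.TensorSpec(shape=[], dtype=tf.int32),      # i
    tf.TensorSpec(shape=[None], dtype=tf.int32),  # values accumulated so far
])
def add(i, values):
  return tf.concat([values, tf.expand_dims(i, 0)], 0)

v = tf.zeros([0], tf.int32)
for i in range(10):
  v = add(tf.constant(i), v)

print(v)  # [0 1 2 3 4 5 6 7 8 9], traced only once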


And this will work with the ExtensionType you mentioned?
In fact, I’m just using TF to speed up calculations; is this a bad choice?

As for the question you asked: I don’t need it to be differentiable, and I do update it in random order. Is there a better solution?

And this will work with the ExtensionType you mentioned?

It should, so long as you take care to always take and return whole objects.

In fact, I’m just using TF to speed up calculations; is this a bad choice?

It depends a lot on the nature of the calculations. Normally I’d say it’s fine, but we know the optimizer still has much room for improvement. If the computation or data is small in size, something like numpy might be fast enough. If the computation is simple enough, I’d try both alternatives.

For random updates, use either a tf.Variable pre-allocated to the right size together with scatter_nd_update, or a regular tensor with tensor_scatter_nd_update, which should avoid excessive memory use as well.
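
For illustration, a tiny sketch of both options, with made-up indices and values and ten int32 slots:

import tensorflow as tf


# Option 1: pre-allocated tf.Variable, updated in place.
v = tf.Variable(tf.zeros([10], tf.int32))
v.scatter_nd_update(indices=[[3], [7]], updates=[30, 70])

# Option 2: regular tensor; the update is functional and returns a new tensor.
t = tf.zeros([10], tf.int32)
t = tf.tensor_scatter_nd_update(t, [[3], [7]], [30, 70])

print(v.numpy())  # [ 0  0  0 30  0  0  0 70  0  0]
print(t.numpy())  # same values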


I’ll try the ExtensionType solution later. I just saw the nested TypeSpec section in the docs; maybe I can specify a nested dynamically sized tensor in Holder. Hope everything works as expected in this experimental API.

Thank you so much for helping.