I’m building a TensorFlow/Keras model based on Mask R-CNN with a custom bilinear interpolation layer (part of a multi-scale ROI Align), and I’ve hit something that I can’t solve (yet).

I have a true_fn, where ainput is a tensor of shape (1, 512, 32, 32); it generates four index tensors and returns a sliced tensor of shape (1, 512, 7, 7, 2, 2). This indexing/slicing can’t possibly work with Keras symbolic tensors, so I also have a false_fn, where ainput comes in as (None, 512, 32, 32) and is reshaped and sliced into an output tensor of shape (None, 512, 7, 7, 2, 2), which can be passed on through the rest of the layers.
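For concreteness, here is a simplified sketch of the kind of fixed-shape gather true_fn performs. The sample coordinates are stand-ins; the real layer computes them from the ROI geometry:

```python
import tensorflow as tf

# Simplified stand-in for true_fn: gather a 2x2 neighborhood for each of
# 7x7 sample points from a (1, 512, 32, 32) feature map, yielding
# (1, 512, 7, 7, 2, 2). The sample coordinates below are placeholders.
ainput = tf.random.uniform((1, 512, 32, 32))

y0 = tf.cast(tf.floor(tf.linspace(0.0, 30.0, 7)), tf.int32)  # (7,) top rows
x0 = tf.cast(tf.floor(tf.linspace(0.0, 30.0, 7)), tf.int32)  # (7,) left cols

# Index tensors for the y/x coordinates of each 2x2 corner.
yy = tf.stack([y0, y0 + 1], axis=-1)          # (7, 2)
xx = tf.stack([x0, x0 + 1], axis=-1)          # (7, 2)

rows = tf.gather(ainput, yy, axis=2)          # (1, 512, 7, 2, 32)
out = tf.gather(rows, xx, axis=4)             # (1, 512, 7, 2, 7, 2)
out = tf.transpose(out, [0, 1, 2, 4, 3, 5])   # (1, 512, 7, 7, 2, 2)
```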

I have also tried various cond_fn; the most promising one is:

cond_fn = lambda: ainput.shape.is_fully_defined()

Putting it all together:

return tf.cond(cond_fn(), true_fn, false_fn)

The model compiles and runs, except it always takes the false_fn path, and the math results when running with real data are always zero. I suspect that the branch chosen during construction is being baked into the compute graph.
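Here is a minimal repro of what I think is happening, using tf.function in place of a full Keras layer (branch_taken and the trivial branch bodies are illustrative only):

```python
import tensorflow as tf

# Minimal repro of the suspicion: shape.is_fully_defined() is a plain
# Python bool, evaluated once while the graph is traced, not per call.
branch_taken = []

@tf.function(input_signature=[tf.TensorSpec([None, 4], tf.float32)])
def f(x):
    pred = x.shape.is_fully_defined()  # False for the symbolic (None, 4) spec
    branch_taken.append(pred)          # recorded once, at trace time
    # tf.cond(pred, true_fn, false_fn) with this pred therefore selects
    # false_fn while the graph is built, and that choice is frozen in.
    return x + 1.0 if pred else x * 0.0

out = f(tf.ones((2, 4)))  # real data with a fully known shape...
# ...but branch_taken is [False] and out is all zeros anyway.
```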

All attempts to write a cond_fn that checks for None, computes the tensor size, asks whether the tensor is symbolic, etc. fail the same way. How do I construct a layer that gets the geometry correct during model creation with symbolic tensors, and both the geometry and the math correct when it runs on real data?
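The other predicates I tried look roughly like this (a sketch, again with tf.function standing in for the Keras trace). Each one is resolved while the layer is being traced, so none of them can tell "building the graph" apart from "running real data through the built graph":

```python
import tensorflow as tf

# Sketches of the other failed predicates: each is answered at trace time.
@tf.function(input_signature=[tf.TensorSpec([None, 512, 32, 32], tf.float32)])
def probe(ainput):
    looks_for_none = ainput.shape[0] is None     # True while tracing
    is_symbolic = isinstance(ainput, tf.Tensor)  # True for traced AND real tensors
    dyn_size = tf.size(ainput)                   # a tensor: no Python value yet
    del is_symbolic, dyn_size
    return tf.constant(looks_for_none)

# Called with real data, the trace-time answer is still baked in:
result = probe(tf.ones((1, 512, 32, 32)))  # -> True, every time
```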

The goal here is to make a model that eventually gets converted into a TFLite model to run on an NPU-accelerated embedded device. This layer and all the others work fine outside of a Keras model, but that doesn’t get me to TFLite.

What is the TensorFlow 2.13+ way to build models that have layers with calculations that can’t be modeled/graphed symbolically?

There’s an open-source Mask R-CNN TensorFlow/TFLite example which uses a simplified tf.raw_ops.CropAndResize, which generates quite different results. Might I need to construct an intrinsic layer to handle this? There seems to be zero documentation on how to approach this.
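For reference, my sketch of the pattern that example uses, via the public wrapper tf.image.crop_and_resize. It wants NHWC input and normalized boxes, and its sampling scheme differs from a hand-rolled 2x2 gather, which is presumably why the numbers diverge (the feature map and box values here are placeholders):

```python
import tensorflow as tf

# Sketch of the simplified alternative: one op does the crop plus bilinear
# resize. Input is NHWC (not the NCHW layout my layer uses) and boxes are
# normalized [y1, x1, y2, x2].
feat = tf.random.uniform((1, 32, 32, 512))
boxes = tf.constant([[0.1, 0.1, 0.6, 0.6]])
crops = tf.image.crop_and_resize(
    feat, boxes,
    box_indices=tf.zeros((1,), tf.int32),
    crop_size=(7, 7),
    method="bilinear")  # -> shape (1, 7, 7, 512)
```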

Thanks in advance for any suggestions!