Lowering for tf raw_ops ImageProjectiveTransform

I wanted to ask about the policy/approach for supporting lowering of certain useful TF raw ops: these currently aren't in the MLIR TF dialect, and several higher-level abstractions are de-abstracted through them. As an example, the TensorFlow Addons package's tfa.image.translate lowers through the general "projective transformation" op (tf.raw_ops.ImageProjectiveTransformV3), which can model combinations of rotation, scaling, skewing, translation, etc., and is extremely useful to support for further optimization and code generation. I've added a lowering for this op from TF to lower-level TF ops for a typical case (commit link below):
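For context, here is a minimal NumPy sketch of the semantics such a lowering has to reproduce, assuming TF's documented transform convention: the 8-element row [a0, a1, a2, b0, b1, b2, c0, c1] maps each output point (x, y) to the input point ((a0*x + a1*y + a2)/k, (b0*x + b1*y + b2)/k) with k = c0*x + c1*y + 1. The function name and nearest-neighbor sampling are my own simplifications, not the actual op implementation:

```python
import numpy as np

def projective_transform(image, transform, fill_value=0.0):
    """Nearest-neighbor sketch of the projective-transform semantics.

    `transform` is the flattened projective matrix
    [a0, a1, a2, b0, b1, b2, c0, c1]; the implicit last entry is 1.
    """
    a0, a1, a2, b0, b1, b2, c0, c1 = transform
    h, w = image.shape[:2]
    out = np.full_like(image, fill_value)  # CONSTANT fill mode
    for y in range(h):
        for x in range(w):
            # Map the output pixel back to an input sample point.
            k = c0 * x + c1 * y + 1.0
            sx = int(round((a0 * x + a1 * y + a2) / k))
            sy = int(round((b0 * x + b1 * y + b2) / k))
            if 0 <= sx < w and 0 <= sy < h:
                out[y, x] = image[sy, sx]
    return out

# A translation by (dx, dy) corresponds to [1, 0, -dx, 0, 1, -dy, 0, 0],
# which is how tfa.image.translate builds its transform.
img = np.arange(9, dtype=np.float32).reshape(3, 3)
shifted = projective_transform(img, [1, 0, -1, 0, 1, 0, 0, 0])
```

Because pure translations keep c0 = c1 = 0 and unit diagonal entries, they are exactly the "typical case" that lowers to simple pad/slice-style TF ops rather than a general gather.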


Without such a lowering, the op fails conversion beyond the MLIR TF dialect. I'm assuming TF/MLIR is open to contributions that lower such ops?


Is this also related to:

We are also talking about this in some augmentation/preprocessing layers performance tickets:

It's the same op, but my post isn't about XLA proper or the TF → XLA support (and yes, this op isn't supported on the TF → XLA path either).

Yes, sorry. It is genuinely hard for the average contributor or end user to keep track of when and where MLIR is involved in a given compiler path: