I wanted to ask what the policy/approach is for supporting lowerings of certain useful TF raw ops: these currently aren't in the MLIR TF dialect, yet several higher-level abstractions are de-abstracted through them. As an example, the TensorFlow Addons package's tfa.image.translate lowers through the general "projective transformation" op (tf.raw_ops.ImageProjectiveTransformV3), which can model combinations of rotation, scaling, skewing, translation, etc., and is extremely useful to support for further optimization and code generation. I've added a lowering for this op from TF to lower-level TF ops for a typical case (commit link below). The op is documented here:
https://www.tensorflow.org/api_docs/python/tf/raw_ops/ImageProjectiveTransformV3
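For context, here is a minimal sketch of how a pure translation maps onto this op's 8-parameter projective transform; the dx/dy values and the random input image are just illustrative, and the call mirrors what tfa.image.translate effectively dispatches to:

```python
import tensorflow as tf

# A translation by (dx, dy) as the 8-parameter transform
# [a0, a1, a2, b0, b1, b2, c0, c1], which maps an output pixel (x, y)
# to the input point ((a0*x + a1*y + a2)/k, (b0*x + b1*y + b2)/k)
# with k = c0*x + c1*y + 1. For translation: identity plus offsets.
dx, dy = 2.0, 3.0
transform = tf.constant([[1.0, 0.0, -dx, 0.0, 1.0, -dy, 0.0, 0.0]])

images = tf.random.uniform([1, 8, 8, 1])  # NHWC batch of one image

translated = tf.raw_ops.ImageProjectiveTransformV3(
    images=images,
    transforms=transform,
    output_shape=tf.shape(images)[1:3],   # keep the same spatial size
    fill_value=tf.constant(0.0),          # value used outside the image
    interpolation="NEAREST",
    fill_mode="CONSTANT",
)
```

This translation-only case (no perspective terms, i.e. c0 = c1 = 0) is representative of the "typical case" a lowering to lower-level TF ops can target first.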
Without such a lowering, the op otherwise fails conversion beyond the MLIR TF dialect. I'm assuming TF/MLIR is open to contributions of lowerings for such ops?