uday
March 22, 2022, 6:09pm
#1
I wanted to know what the policy/approach is for supporting lowerings of certain useful TF raw ops: these currently aren’t in the MLIR TF dialect, and several higher-level abstractions are de-abstracted through them. As an example, the tensorflow addons package’s tfa.image.translate lowers through the general “projective transformation” op (tf.ImageProjectiveTransformV3, which can model combinations of rotation, scaling, skewing, translation, etc.) and is extremely useful to support for further optimization and code generation. I’ve added a lowering for this op from TF to lower-level TF ops for a typical case (commit link below):
tensorflow:master ← polymage-labs:uday/projective_transformation_lowering
opened 06:07PM - 22 Mar 22 UTC
Add TF-to-TF lowering for projective image transformations modeled by the tf.ImageProjectiveTransformV3 op, and add this op to the TF dialect. Lower projective transformations in the "translations" case to pad + slice ops.
https://www.tensorflow.org/api_docs/python/tf/raw_ops/ImageProjectiveTransformV3
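For intuition, here is a rough standalone sketch of the pad + slice idea for the pure-translation case (illustrative only; `translate_pad_slice` is a hypothetical helper with Python-int offsets and zero fill, not the code from the commit):
```
import tensorflow as tf

def translate_pad_slice(image, dx, dy):
    """Shift an HxWxC image by integer offsets (dx, dy) via pad + slice.

    Positive dx moves content right, positive dy moves it down; vacated
    pixels are zero-filled (fill_value=0, fill_mode="CONSTANT").
    """
    h, w = image.shape[0], image.shape[1]
    # Pad zeros on the side the content vacates...
    padded = tf.pad(image, [[max(dy, 0), max(-dy, 0)],   # rows
                            [max(dx, 0), max(-dx, 0)],   # cols
                            [0, 0]])                     # channels
    # ...then slice an HxW window back out, offset toward the extra padding.
    return padded[max(-dy, 0):max(-dy, 0) + h,
                  max(-dx, 0):max(-dx, 0) + w, :]

img = tf.reshape(tf.range(9.0), [3, 3, 1])
print(translate_pad_slice(img, 1, 0)[..., 0])  # content shifted one pixel right
```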
Without such a lowering, the op fails conversion beyond the MLIR TF dialect. I’m assuming TF/MLIR is open to contributions lowering such ops?
Bhack
March 22, 2022, 6:18pm
#3
We are also discussing this in some augmentation/preprocessing layer performance tickets:
opened 04:37PM - 10 Mar 22 UTC · type:feature
**System information**
TensorFlow version (you are using): master
Are you willing to contribute it (Yes/No): I need more detail
**Describe the feature and the current behavior/state**
I think we need to cover core image-processing transformations with TF native ops. Currently, a core transformation in preprocessing still relies on a numpy/scipy implementation:
https://github.com/keras-team/keras/blob/master/keras/preprocessing/image.py#L2622
keras-team:master ← bhack:patch-2
opened 02:31PM - 22 Feb 22 UTC
As we have discussed in https://github.com/keras-team/keras-cv/pull/143#issuecomment-1047215737, this is just a canary (failing) test (check the CI):
```
ValueError: Input "maxval" of op 'RandomUniformInt' expected to be loop invariant.
```
As I've mentioned in the thread, we really need to understand whether we want randomness within a batch or between batches, and what the trade-offs are between computing overhead, contribution speed/code readability, and network convergence.
Also, I don't know if @joker-eph or @qlzh727 could walk us through the pros and cons of `jit_compile` on a function vs. using `vectorized_map`, or whether they are orthogonal.
With many CV transformations we cannot compile the function, as the underlying `tf.raw_ops.ImageProjectiveTransformV3` op isn't supported by XLA.
/cc @chjort
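For what it's worth, that loop-invariance error is a `tf.vectorized_map` (pfor) limitation: stateful random ops such as `RandomUniformInt` need loop-invariant bounds. A minimal sketch of that class of failure (a hypothetical repro, not the actual Keras code; exact behavior may vary across TF versions):
```
import tensorflow as tf

def per_example(maxval):
    # `maxval` differs per mapped element, so pfor cannot hoist the
    # stateful RandomUniformInt op out of the loop and raises the
    # "expected to be loop invariant" ValueError quoted above.
    return tf.random.uniform([], minval=0, maxval=maxval, dtype=tf.int32)

tf.vectorized_map(per_example, tf.constant([3, 5, 7]))  # ValueError
```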
uday
March 22, 2022, 6:31pm
#4
It’s the same op, but my post isn’t about XLA proper or the TF → XLA support (yes, this isn’t supported on the TF → XLA path either).
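For completeness, this is the kind of snippet that exercises that path; with `jit_compile=True`, compilation fails because XLA has no lowering for the op (a minimal sketch; the exact error text varies across versions, and the transform-row layout follows what tfa.image.translate emits):
```
import tensorflow as tf

@tf.function(jit_compile=True)
def translate(images, dx, dy):
    # Row layout [a0, a1, a2, b0, b1, b2, c0, c1]; a pure translation by
    # (dx, dy) uses a2 = -dx and b2 = -dy.
    transforms = tf.reshape(
        tf.stack([1.0, 0.0, -dx, 0.0, 1.0, -dy, 0.0, 0.0]), [1, 8])
    return tf.raw_ops.ImageProjectiveTransformV3(
        images=images,
        transforms=transforms,
        output_shape=tf.shape(images)[1:3],
        fill_value=tf.constant(0.0),
        interpolation="NEAREST")

# Raises a compilation error: no XLA kernel/lowering for the op.
translate(tf.zeros([1, 8, 8, 1]), tf.constant(2.0), tf.constant(1.0))
```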
Bhack
March 22, 2022, 6:44pm
#5
Yes, sorry. It is genuinely hard, for the average contributor or end user, to keep track of when and where MLIR is involved in a given compilation path:
opened 02:36AM - 04 Dec 21 UTC · stat:awaiting tensorflower · type:others · comp:xla
Hello, may I ask a few simple questions on XLA?
1. Is there an open-sourced XLA MLIR backend that one can try "out of the box"?
2. Is there a way to enable/disable various optimizations that are used in XLA?
3. What is the best way to figure out what optimizations are available in XLA?