I am currently using the `RandAugment` class from `official.vision.beta.ops` (`from official.vision.beta.ops import augment`). `RandAugment().distort()`, however, does not accept batched inputs, and it is computationally expensive as well (especially when you have more than two augmentation operations).
So, following suggestions from this guide, I wanted to be able to map the augmentation after my dataset is batched. Is there any workaround for that?
Here’s how I am building my input pipeline for now:

```python
import tensorflow as tf
from official.vision.beta.ops import augment

AUTO = tf.data.AUTOTUNE

# Recommended is m=2, n=9
augmenter = augment.RandAugment(num_layers=3, magnitude=10)
dataset = load_dataset(filenames)
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(augmenter.distort, num_parallel_calls=AUTO)
```
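For context, what I’d like is roughly the following sketch. Since `distort()` expects a single image, one workaround I’ve considered is wrapping it with `tf.map_fn` so the map can run after `.batch()`; here `single_image_op` is just a self-contained stand-in for `augmenter.distort`:

```python
import tensorflow as tf

# Sketch of a possible workaround (not an official API): wrap a
# per-image op with tf.map_fn so it can be mapped after .batch().
# single_image_op is a stand-in for augmenter.distort here.
def single_image_op(image):
    return tf.image.random_flip_left_right(image)

def batched_distort(images):
    # images: [batch, height, width, channels] -> same shape
    return tf.map_fn(single_image_op, images)

batch_size = 4
dataset = tf.data.Dataset.from_tensor_slices(
    tf.zeros([8, 32, 32, 3], tf.float32))
dataset = dataset.batch(batch_size)
dataset = dataset.map(batched_distort, num_parallel_calls=tf.data.AUTOTUNE)

print(next(iter(dataset)).shape)  # (4, 32, 32, 3)
```

Note that `tf.map_fn` still applies the op image by image, so this restructures the pipeline rather than truly vectorizing the computation; `tf.vectorized_map` might help for some ops.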
March 24, 2021, 11:36am
Yes, the issue is that, it seems to me, we also have duplicated ops: e.g. `cutout` is not batched in the `official.vision` namespace but is batched in TFA.
These are the origins of the current status:
**Describe the feature and the current behavior/state.**
RandAugment and AutoAugment are both policies for enhanced image preprocessing that are included in EfficientNet, but are still using `tf.contrib`.
The only `tf.contrib` image operations that they use, however, are [rotate](https://www.tensorflow.org/addons/api_docs/python/tfa/image/rotate), [translate](https://www.tensorflow.org/addons/api_docs/python/tfa/image/translate) and [transform](https://www.tensorflow.org/addons/api_docs/python/tfa/image/transform) - all of which have been included in TensorFlow Addons.
- Are you willing to contribute it (yes/no):
No, but I am hoping that someone from the community will pick it up (potentially a Google Summer of Code student).
- Are you willing to maintain it going forward? (yes/no):
- Is there a relevant academic paper? (if so, where):
AutoAugment Reference: https://arxiv.org/abs/1805.09501
RandAugment Reference: https://arxiv.org/abs/1909.13719
- Is there already an implementation in another framework? (if so, where):
See link above; this would be a standard migration from `tf.contrib`.
- Was it part of tf.contrib? (if so, where):
**Which API type would this fall under (layer, metric, optimizer, etc.)**
**Who will benefit with this feature?**
Anyone doing image preprocessing, especially for EfficientNet.
As we have just refreshed the model repo as Model Garden, I would enforce the contribution policies: general-use utils, losses, layers, and ops (or ones already established in the literature) should be contributed more systematically to tensorflow/addons instead of being embedded or duplicated in the model repos.
/cc @ewilderj @facaiy @seanpmorgan
# Policies to be enforced with a PR
So, currently, no workaround, right?
March 24, 2021, 12:40pm
My opinion is that we just need to see how we want to standardize our image-processing ops in the ecosystem. I think these duplicates are going to create confusion.