innat
#1
Is it possible to decode image files on the GPU while training a model? Resizing, rescaling, etc. can already be done as part of the model:
with tf.device('/GPU:0'):
    tf.io.decode_*

model = Sequential(
    [
        ImageReader(),
        ImageResizer(),
        ImageNetModel(),
        ...
    ]
)
Reference: https://developer.nvidia.com/dali
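For the resize/rescale half of the idea, Keras already ships preprocessing layers that run inside the model. A minimal sketch (shapes and sizes are illustrative; the decode step itself is not covered here):

```python
import numpy as np
import tensorflow as tf

# Resizing and rescaling as ordinary model layers; the input spatial
# dimensions are left variable so any decoded image size is accepted.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, None, 3)),
    tf.keras.layers.Resizing(224, 224),
    tf.keras.layers.Rescaling(1.0 / 255),
])

# A fake "decoded image" batch standing in for real JPEG output.
img = np.random.randint(0, 256, size=(1, 300, 400, 3)).astype("float32")
out = model(img)
print(out.shape)
```

Because these are layers, they run on whatever device the model runs on; only the file reading and decoding remain outside.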
Bhack
#2
I don’t think we currently have GPU decoding in TensorFlow.
We had a thread about preprocessing + decoding at:
We have also discussed something similar for video:
Another emerging approach is:
RGB no more: Minimally-decoded JPEG Vision Transformers
Does NVIDIA DALI suit your use case? I have not used it myself, but it could be an option.
Also, if you are using distribution strategies, see these experimental options:
@tf_export("distribute.InputOptions", v1=[])
class InputOptions(
    collections.namedtuple("InputOptions", [
        "experimental_fetch_to_device",
        "experimental_replication_mode",
        "experimental_place_dataset_on_device",
        "experimental_per_replica_buffer_size",
    ])):
    ...
Perhaps experimental_place_dataset_on_device does what you are looking for.
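As a rough sketch of how those options are wired up (my understanding is that experimental_place_dataset_on_device requires PER_REPLICA replication and experimental_fetch_to_device=False; the dataset itself is a toy placeholder):

```python
import tensorflow as tf

# Keep each replica's dataset on its own device rather than fetching
# from the host. Values here follow the constraints noted above.
options = tf.distribute.InputOptions(
    experimental_fetch_to_device=False,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_REPLICA,
    experimental_place_dataset_on_device=True,
)

strategy = tf.distribute.MirroredStrategy()

def dataset_fn(input_context):
    # In PER_REPLICA mode this is called once per replica, and each
    # replica builds (and keeps) its own dataset.
    return tf.data.Dataset.range(8).batch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn, options)
```

Whether this actually moves the decode work to the GPU depends on where the ops in the dataset can be placed, so it may not fully answer the original question.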