Is it possible to decode image files on the GPU while training a model? Resizing, rescaling, etc. can be done as part of the model.
model = Sequential([
    Resizing(224, 224),
    Rescaling(1.0 / 255),
    # ...
])
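For context, here is the usual split this question is about, as a minimal sketch (assuming TF 2.x; as far as I know `tf.io.decode_jpeg` only has a CPU kernel, so decoding happens in the `tf.data` pipeline on the host, while the preprocessing layers run on the GPU with the model):

```python
import tensorflow as tf

def load_image(path):
    # Decoding runs on the CPU inside the tf.data pipeline.
    raw = tf.io.read_file(path)
    return tf.io.decode_jpeg(raw, channels=3)

# Resizing/rescaling as model layers, so they execute on the GPU
# together with the rest of the model.
model = tf.keras.Sequential([
    tf.keras.layers.Resizing(224, 224),
    tf.keras.layers.Rescaling(1.0 / 255),
    # ... rest of the model (Conv2D, Dense, etc.)
])
```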
I don’t think we currently have GPU decoding.
We had a thread about preprocessing + decoding at:
We have also discussed something for Video:
Another emerging approach is:
RGB no more: Minimally-decoded JPEG Vision Transformers
Does NVIDIA DALI suit your use case? I have not used it myself, but it could be worth a look.
Also, if you are using a distribution strategy, see these experimental options:
experimental_place_dataset_on_device does what you are looking for.
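To make that concrete, a sketch of how that option is wired up (assuming TF 2.x; placing the dataset on device requires per-replica input and disabling the host-to-device fetch, and the `dataset_fn` here is a hypothetical stand-in for your real pipeline):

```python
import tensorflow as tf

input_options = tf.distribute.InputOptions(
    experimental_place_dataset_on_device=True,
    # Required when placing the dataset directly on the device:
    experimental_fetch_to_device=False,
    experimental_replication_mode=tf.distribute.InputReplicationMode.PER_REPLICA,
)

def dataset_fn(input_context):
    # Hypothetical per-replica dataset; replace with your own pipeline.
    return tf.data.Dataset.from_tensor_slices(tf.zeros([8, 4])).batch(2)

strategy = tf.distribute.MirroredStrategy()
dist_dataset = strategy.distribute_datasets_from_function(dataset_fn, input_options)
```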