.png and .gif errors - 'Load and preprocess images' tutorial

I was wondering if anyone had insight into the source of the errors related to .png and .gif file types raised by the ‘Using tf.data for finer control’ portion of the ‘Load and preprocess images’ tutorial.

The tutorial ran smoothly when I used the original image data, but I ran into errors when I used my own data.

Thanks!

What is your error with .png and .gif?

‘animated gifs can only be decoded by tf.io.decode_gif or tf.io.decode_image’
‘PNG warning: iCCP: known incorrect sRGB profile’
‘PNG warning: cHRM: inconsistent chromaticities’

The tutorial works without errors using my own data if I remove all .png and all .gif files and just use the .jpg files.
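That jpg-only workaround can also be done without deleting files, by restricting the glob pattern used to list images. A minimal sketch (the directory tree below is synthetic, built only so the snippet runs standalone; it mimics the tutorial's `<root>/<class_name>/<image>` layout):

```python
import pathlib
import tempfile

# Build a throwaway directory tree that mimics the tutorial layout:
# <root>/<class_name>/<image>.<ext>
root = pathlib.Path(tempfile.mkdtemp())
for name in ["roses/a.jpg", "roses/b.png", "tulips/c.gif", "tulips/d.jpg"]:
    p = root / name
    p.parent.mkdir(parents=True, exist_ok=True)
    p.touch()

# Keep only the .jpg files -- the same effect as passing a "*/*.jpg"
# pattern to tf.data.Dataset.list_files instead of "*/*"
jpg_paths = sorted(str(p) for p in root.glob("*/*.jpg"))
print(len(jpg_paths))  # 2
```

With `tf.data.Dataset.list_files(str(data_dir/'*/*.jpg'))` the .png and .gif files never enter the pipeline, so `decode_jpeg` is never handed a format it cannot read.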

What is your TF version?

is the error inside the function

decode_img

?

If so, you might need to use a different function internally: instead of decode_jpeg, use decode_image, which detects the actual image type and decodes it accordingly.

print(tf.__version__) returns 2.6.2

I’m not quite sure (yet). I will look into decode_image. In the meantime, here is the full error message:

---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/tmp/ipykernel_34/1576256020.py in <module>
      6   train_ds,
      7   validation_data=val_ds,
----> 8   epochs=3
      9 )

/opt/conda/lib/python3.7/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
   1182                 _r=1):
   1183               callbacks.on_train_batch_begin(step)
-> 1184               tmp_logs = self.train_function(iterator)
   1185               if data_handler.should_sync:
   1186                 context.async_wait()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
    883 
    884       with OptionalXlaContext(self._jit_compile):
--> 885         result = self._call(*args, **kwds)
    886 
    887       new_tracing_count = self.experimental_get_tracing_count()

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
    915       # In this case we have created variables on the first call, so we run the
    916       # defunned version which is guaranteed to never create variables.
--> 917       return self._stateless_fn(*args, **kwds)  # pylint: disable=not-callable
    918     elif self._stateful_fn is not None:
    919       # Release the lock early so that multiple threads can perform the call

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in __call__(self, *args, **kwargs)
   3038        filtered_flat_args) = self._maybe_define_function(args, kwargs)
   3039     return graph_function._call_flat(
-> 3040         filtered_flat_args, captured_inputs=graph_function.captured_inputs)  # pylint: disable=protected-access
   3041 
   3042   @property

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _call_flat(self, args, captured_inputs, cancellation_manager)
   1962       # No tape is watching; skip to running the function.
   1963       return self._build_call_outputs(self._inference_function.call(
-> 1964           ctx, args, cancellation_manager=cancellation_manager))
   1965     forward_backward = self._select_forward_and_backward_functions(
   1966         args,

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/function.py in call(self, ctx, args, cancellation_manager)
    594               inputs=args,
    595               attrs=attrs,
--> 596               ctx=ctx)
    597         else:
    598           outputs = execute.execute_with_cancellation(

/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     58     ctx.ensure_initialized()
     59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:
     62     if name is not None:

InvalidArgumentError:  Got 53 frames, but animated gifs can only be decoded by tf.io.decode_gif or tf.io.decode_image
	 [[{{node DecodeJpeg}}]]
	 [[IteratorGetNext]] [Op:__inference_train_function_1903]

Function call stack:
train_function

I think my suggestion will help you given the text in the exception:

animated gifs can only be decoded by tf.io.decode_gif or tf.io.decode_image

@lgusm

decode_image is called internally from:

But in this case we end up in:

I don’t know if, with the current API, we could instead enter this branch:

def decode_img(img):
  img = tf.io.decode_image(img, expand_animations=False, channels=3)
  return tf.image.resize(img, [img_height, img_width])

The above seems to prevent error messages. I appreciate your help!
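For completeness, one way to plug that decode_img into a tf.data pipeline (this is a sketch, not the tutorial's exact code: the dataset of encoded bytes below is synthetic so the snippet runs standalone, standing in for `list_files(...).map(tf.io.read_file)`):

```python
import tensorflow as tf

img_height, img_width = 180, 180

def decode_img(img):
    # Format-agnostic decode; expand_animations=False drops all but the
    # first frame of animated GIFs, keeping the output rank-3.
    img = tf.io.decode_image(img, channels=3, expand_animations=False)
    return tf.image.resize(img, [img_height, img_width])

# Stand-in for reading mixed-format files from disk: one PNG, one JPEG.
pixels = tf.zeros([16, 16, 3], dtype=tf.uint8)
encoded = [tf.io.encode_png(pixels), tf.io.encode_jpeg(pixels)]
ds = tf.data.Dataset.from_tensor_slices(encoded).map(decode_img).batch(2)

batch = next(iter(ds))
print(batch.shape)  # (2, 180, 180, 3)
```

Because decode_image handles JPEG, PNG, GIF, and BMP uniformly, the same mapped function works regardless of which formats appear in the file listing.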