Is it possible to use a custom input shape with EfficientDet?

Hi,

I want to use TFLite Model Maker to train custom object detection models, but I run into a problem when I try to change the hparam image_size.

from tflite_model_maker import object_detector
# One possible import path: EfficientDetSpec is the public alias of EfficientDetModelSpec.
from tflite_model_maker.object_detector import EfficientDetSpec as EfficientDetModelSpec

spec = EfficientDetModelSpec(
    model_name="efficientdet-lite0",
    uri="https://tfhub.dev/tensorflow/efficientdet/lite0/feature-vector/1",
    hparams={
        "image_size": "420x420"
    }
)
train_data, validation_data, test_data = object_detector.DataLoader.from_csv("...")
model = object_detector.create( train_data,
                                validation_data=validation_data,
                                model_spec=spec,
                                epochs=1,
                                batch_size=4,
                                train_whole_model=True)

The following error occurs:

Error
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-5-4ed9b6a821a2> in <module>
----> 1 model = object_detector.create( train_data,
    2                                 validation_data=validation_data,
    3                                 model_spec=spec,
    4                                 epochs=1,
    5                                 batch_size=4,

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow_examples\lite\model_maker\core\task\object_detector.py in create(cls, train_data, model_spec, validation_data, epochs, batch_size, train_whole_model, do_train)
  285     if do_train:
  286       tf.compat.v1.logging.info('Retraining the models...')
--> 287       object_detector.train(train_data, validation_data, epochs, batch_size)
  288     else:
  289       object_detector.create_model()

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow_examples\lite\model_maker\core\task\object_detector.py in train(self, train_data, validation_data, epochs, batch_size)
  154       validation_ds, validation_steps, val_json_file = self._get_dataset_and_steps(
  155           validation_data, batch_size, is_training=False)
--> 156       return self.model_spec.train(self.model, train_ds, steps_per_epoch,
  157                                    validation_ds, validation_steps, epochs,
  158                                    batch_size, val_json_file)

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow_examples\lite\model_maker\core\task\model_spec\object_detector_spec.py in train(self, model, train_dataset, steps_per_epoch, val_dataset, validation_steps, epochs, batch_size, val_json_file)
  262             val_json_file=val_json_file,
  263             batch_size=batch_size))
--> 264     train.setup_model(model, config)
  265     train.init_experimental(config)
  266     model.fit(

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow_examples\lite\model_maker\third_party\efficientdet\keras\train.py in setup_model(model, config)
  111 def setup_model(model, config):
  112   """Build and compile model."""
--> 113   model.build((None, *config.image_size, 3))
  114   model.compile(
  115       steps_per_execution=config.steps_per_execution,

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\keras\engine\training.py in build(self, input_shape)
  417                            'method accepts an `inputs` argument.')
  418         try:
--> 419           self.call(x, **kwargs)
  420         except (errors.InvalidArgumentError, TypeError):
  421           raise ValueError('You cannot build your model by calling `build` '

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow_examples\lite\model_maker\third_party\efficientdet\keras\train_lib.py in call(self, inputs, training)
  883 
  884   def call(self, inputs, training):
--> 885     cls_outputs, box_outputs = self.base_model(inputs, training=training)
  886     for i in range(self.config.max_level - self.config.min_level + 1):
  887       cls_outputs[i] = self.classes(cls_outputs[i])

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
 1010         with autocast_variable.enable_auto_cast_variables(
 1011             self._compute_dtype_object):
-> 1012           outputs = call_fn(inputs, *args, **kwargs)
 1013 
 1014         if self._activity_regularizer:

c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\autograph\impl\api.py in wrapper(*args, **kwargs)
  668       except Exception as e:  # pylint:disable=broad-except
  669         if hasattr(e, 'ag_error_metadata'):
--> 670           raise e.ag_error_metadata.to_exception(e)
  671         else:
  672           raise

ValueError: in user code:

  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow_hub\keras_layer.py:243 call  *
      result = smart_cond.smart_cond(training,
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\saved_model\load.py:668 _call_attribute  **
      return instance.__call__(*args, **kwargs)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\def_function.py:828 __call__
      result = self._call(*args, **kwds)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\def_function.py:871 _call
      self._initialize(args, kwds, add_initializers_to=initializers)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\def_function.py:725 _initialize
      self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\function.py:2969 _get_concrete_function_internal_garbage_collected
      graph_function, _ = self._maybe_define_function(args, kwargs)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\function.py:3361 _maybe_define_function
      graph_function = self._create_graph_function(args, kwargs)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\function.py:3196 _create_graph_function
      func_graph_module.func_graph_from_py_func(
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\framework\func_graph.py:990 func_graph_from_py_func
      func_outputs = python_func(*func_args, **func_kwargs)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\eager\def_function.py:634 wrapped_fn
      out = weak_wrapped_fn().__wrapped__(*args, **kwds)
  c:\users\felix\appdata\local\programs\python\python38\lib\site-packages\tensorflow\python\saved_model\function_deserialization.py:267 restored_function_body
      raise ValueError(

  ValueError: Could not find matching function to call loaded from the SavedModel. Got:
    Positional arguments (2 total):
      * Tensor("inputs:0", shape=(None, 420, 420, 3), dtype=float32)
      * False
    Keyword arguments: {}
  
  Expected these arguments to match one of the following 4 option(s):
  
  Option 1:
    Positional arguments (2 total):
      * TensorSpec(shape=(None, 320, 320, 3), dtype=tf.float32, name='inputs')
      * True
    Keyword arguments: {}
  
  Option 2:
    Positional arguments (2 total):
      * TensorSpec(shape=(None, 320, 320, 3), dtype=tf.float32, name='input_1')
      * False
    Keyword arguments: {}
  
  Option 3:
    Positional arguments (2 total):
      * TensorSpec(shape=(None, 320, 320, 3), dtype=tf.float32, name='input_1')
      * True
    Keyword arguments: {}
  
  Option 4:
    Positional arguments (2 total):
      * TensorSpec(shape=(None, 320, 320, 3), dtype=tf.float32, name='inputs')
      * False
    Keyword arguments: {}

Does this mean that a custom input shape is not supported?


Hi @felithium

Where did you find this code snippet? Please post the link here so we can check it.
I read the documentation here and I cannot find EfficientDetModelSpec or the dictionary for the hparams.


Here is some documentation that can help: Object Detection with TensorFlow Lite Model Maker

If you want to change the input image size, you might need to change the base model spec to one that has a bigger input from here: TensorFlow Hub
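For example, with Model Maker's built-in specs you could pick one of the larger lite variants, which default to bigger (square) inputs. A minimal sketch:

from tflite_model_maker import model_spec

# efficientdet_lite4 defaults to a larger square input than efficientdet_lite0
spec = model_spec.get('efficientdet_lite4')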

hope it helps


I stepped into the source and found out that …

from tflite_model_maker import model_spec

# ... this is the same as ...
spec = model_spec.get('efficientdet_lite0')
#
# ... this or ...
# efficientdet_lite0_spec = functools.partial(
#    EfficientDetModelSpec,
#    model_name='efficientdet-lite0',
#    uri='https://tfhub.dev/tensorflow/efficientdet/lite0/feature-vector/1',
#)
#
# ... this.
# spec = EfficientDetModelSpec(
#     model_name="efficientdet-lite0",
#     uri="https://tfhub.dev/tensorflow/efficientdet/lite0/feature-vector/1",
#     hparams={...} # here I can add hparams, which I think is convenient
# )

Sadly, I am not allowed to post links. But you can find the documentation for EfficientDetModelSpec (exported as EfficientDetSpec) if you search for tflite_model_maker.object_detector.EfficientDetSpec on the page you linked and click on it.

@mm_export('object_detector.EfficientDetSpec')
class EfficientDetModelSpec(object):
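By the way, you can check which input size the spec will actually use by inspecting its config (the attribute name comes from the Model Maker source above; the exact value format may vary by version):

print(spec.config.image_size)  # e.g. 320 for efficientdet-lite0, or a (height, width) tuple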

I read the documentation you linked, and it says that I can change hparams, which contains image_size, but doing so results in the error I posted. Sadly, changing the model won't fix my problem: I need a special aspect ratio of about 1.67, because all my images have that aspect ratio, while every model in the hub has an aspect ratio of 1.


@Yuqi_Li might have an insight to help here.


@felithium Yes, unfortunately, a custom input shape is not supported in TFLite Model Maker for now. If you'd like to use a custom input shape, you need to use the automl/efficientdet repo (google/automl on GitHub).
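A rough sketch of how setting a custom size could look with that code base (the module path and config fields are assumptions to verify against the repo, and may differ between versions):

# Assumes the google/automl "efficientdet" directory is on PYTHONPATH.
import hparams_config

config = hparams_config.get_efficientdet_config('efficientdet-lite0')
# Non-square strings should also work; check utils.parse_image_size for the expected 'WxH' order.
config.image_size = '420x420'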


Typically we think of Convolutional Neural Networks as accepting fixed size inputs (i.e., 224×224, 227×227, 299×299, etc.).

But what if you wanted to:

  1. Utilize a pre-trained network for transfer learning…
  2. …and then update the input shape dimensions to accept images with different dimensions than what the original network was trained on?

Why might you want to utilize different image dimensions?

There are two common reasons:

  • Your input image dimensions are considerably smaller than what the CNN was trained on and increasing their size introduces too many artifacts and dramatically hurts loss/accuracy.
  • Your images are high resolution and contain small objects that are hard to detect. Resizing to the original input dimensions of the CNN hurts accuracy and you postulate increasing resolution will help improve your model.

In these scenarios, you would wish to update the input shape dimensions of the CNN and then be able to perform transfer learning.

The question then becomes, is such an update possible?

Yes, in fact, it is.
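As a minimal sketch of that idea with a plain Keras backbone (the shape here is just an example; it works because a fully-convolutional backbone without its classification head accepts other spatial dimensions):

import tensorflow as tf

# Re-instantiate a pre-trained backbone with a non-default input shape.
base = tf.keras.applications.EfficientNetB0(
    include_top=False,          # drop the fixed-size classification head
    weights='imagenet',
    input_shape=(420, 700, 3),  # example non-square shape
)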



Thank you! I will look into it.


Hello,
Is the default size 448×448×3?

I checked the size with

input_details = interpreter.get_input_details()
input_shape = input_details[0]["shape"]

and I get
[ 1 448 448 3]

Can we set the input shape right now?
If we can't, should I train EfficientDet the original way and then convert it to a TFLite model with
tf.lite.TFLiteConverter.from_saved_model()?
Is that right?
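Roughly like this, I assume (a minimal sketch; 'saved_model_dir' is a placeholder for wherever the trained model gets exported):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model_dir')  # placeholder path
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)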