Object Detection with Model Zoo

I am starting with the topic of object detection, and I am following a tutorial (Installation — TensorFlow 2 Object Detection API tutorial documentation). I have followed it step by step, but no matter how hard I try, I keep running into library incompatibilities. Since it is an example from a couple of years ago, has it become impractical? Or is there some way to make it work? I already went through the Model Garden with ResNet-50, but I want to gain experience with multiple models. How should I proceed? Thank you very much, everyone.

P.S. I have tried it from Google Colab and from virtual environments, trying to install the specific library versions, but some libraries update other libraries, and I think it becomes impractical. Or am I going about it the wrong way?


Hi @David_Vahos,
Does your setup match the requirements?
If yes, I would report a new issue on the project's GitHub.
And remember to share/post the error messages you get; that is always helpful.


Hi @David_Vahos,

You can go through the Model Garden notebooks, which cover

  1. Object Detection
  2. Instance Segmentation
  3. Semantic Segmentation

Also, each task can be achieved with different model architectures, and Model Garden supports different architectures with various backbones.

Please check all the default configurations available in Model Garden here.

For example, if you want to try RetinaNet with a MobileNet backbone, you can use the following config and change the MobileNet configuration in exp_config.

The same approach works for RetinaNet with a ResNet backbone, where you can switch between ResNet-50, ResNet-101, etc.

exp_config = tfm.core.exp_factory.get_exp_config('retinanet_mobile_coco')
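For instance, here is a minimal sketch of loading a default experiment config and swapping the ResNet backbone depth. The `retinanet_resnetfpn_coco` experiment name and the `model_id` field are taken from the Model Garden default configs; adjust them to whichever config you picked:

```python
import tensorflow_models as tfm

# Load the default RetinaNet + ResNet-FPN COCO experiment config.
exp_config = tfm.core.exp_factory.get_exp_config('retinanet_resnetfpn_coco')

# Swap the ResNet backbone depth: 50 (default) -> 101.
exp_config.task.model.backbone.resnet.model_id = 101
```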

Hope this helps.



Hi @Siva_Sravana_Kumar_N,

The three tutorials you mentioned are very useful. They show fine-tuning using the Model Garden training-experiment framework, which can display metrics for the training and validation sets.

What if, after training, I want to evaluate the model using a third split of the dataset (the test set)? I just need to get the same type of metrics (AP) displayed during training. Can I do this using the Model Garden?


Hello, try to see if this tutorial guide from Krish will help you. (https://youtu.be/XoMiveY_1Z4?list=PLZoTAELRMXVNvTfHyJxPRcQkpV8ubBwHo)

Hi @Luiz_Felipe,

I will take a look to see if that is possible and let you know.


Thanks for your reply @Japheth_Mumo. The video you linked is about the TensorFlow Object Detection API, but I am actually using the TF-Vision Model Garden.
According to the README, TensorFlow Object Detection API is deprecated:


@Siva_Sravana_Kumar_N, I found a workaround: after training, I run the experiment a second time. This time I use the test set in place of the validation set and set the experiment mode to 'eval' instead of 'train_and_eval'. For model_dir I use a copy of the original model_dir directory, so as not to mix the actual validation logs with the test logs.

exp_config.task.validation_data.input_path = TEST_DATA_INPUT_PATH
exp_config.trainer.validation_steps = TEST_STEPS
model_dir = MODEL_DIR_COPY

# Completing the truncated call with the argument names used in the
# Model Garden tutorials (assumed to match your earlier training run):
model, eval_logs = tfm.core.train_lib.run_experiment(
    distribution_strategy=distribution_strategy,
    task=task,
    mode='eval',
    params=exp_config,
    model_dir=model_dir,
    run_post_eval=True)
It doesn't seem to be the best way to do this, but it's the only one I have found so far…


Hi, thanks for sharing the workaround on evaluating with the test set; it is very useful.

Training and validation went well for me, but then I wanted to export the model to test the pretrained performance (before I train). I export the model using the snippet from the tutorial:

    # Reconstructed from the Model Garden export snippet
    # (surrounding call and argument names assumed):
    export_saved_model_lib.export_inference_graph(
        input_type='image_tensor',
        batch_size=1,
        input_image_size=[640, 640],
        params=exp_config,
        checkpoint_path=tf.train.latest_checkpoint(model_dir),
        export_dir='./export/',
        log_model_flops_and_params=True)

But after getting the saved_model.pb, I loaded it up to infer, and it returned blank outputs (as in the picture in this issue comment). I don't know what to look at next, since I just pull the config and checkpoint from the source.

I appreciate any suggestions, thanks a lot!