Object Detection - Tutorial Example using Model Garden

Want to know more about how object detection works with Model Garden? This tutorial fine-tunes a RetinaNet with a ResNet-50 backbone from the TensorFlow Model Garden package (tensorflow-models) to detect three different blood cell types in the BCCD dataset. The RetinaNet is pretrained on COCO train2017 and evaluated on COCO val2017.
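
If you want a quick feel for the setup before opening the notebook, the training configuration boils down to something like this sketch (the experiment name retinanet_resnetfpn_coco comes from Model Garden’s experiment factory; the BCCD paths, class count, and sizes are illustrative placeholders, so check the tutorial for the exact values):

import tensorflow_models as tfm

# Load the predefined RetinaNet + ResNet-50 FPN experiment, pretrained on COCO.
exp_config = tfm.core.exp_factory.get_exp_config('retinanet_resnetfpn_coco')

# Illustrative overrides for the 3-class BCCD dataset (paths are placeholders).
exp_config.task.model.num_classes = 3                      # RBC, WBC, Platelets
exp_config.task.model.input_size = [256, 256, 3]
exp_config.task.train_data.input_path = 'bccd_tfrecords/train-*.tfrecord'
exp_config.task.validation_data.input_path = 'bccd_tfrecords/valid-*.tfrecord'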

Check it out and share your feedback.


Hi, thanks for the tutorial. How do I export a trained model with a different input signature, float32 instead of uint8? I also tried to convert the model to the latest TensorRT 8.4, but it didn’t work. Is there a working example of a TensorRT export?
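
To be concrete, what I have in mind is roughly this kind of wrapper (the paths, input shape, and the 'inputs' signature key are placeholders from my setup):

import tensorflow as tf

# Re-expose the uint8 model behind a float32 serving signature.
loaded = tf.saved_model.load('exported_model')             # placeholder path
infer = loaded.signatures['serving_default']

@tf.function(input_signature=[tf.TensorSpec([1, 256, 256, 3], tf.float32)])
def serve_float32(images):
  # 'inputs' is the signature key in my export; check
  # infer.structured_input_signature if yours differs.
  return infer(inputs=tf.cast(images, tf.uint8))

tf.saved_model.save(loaded, 'exported_model_float32',
                    signatures={'serving_default': serve_float32})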

Hi @alekseisolovev,

Please first go through this tutorial and execute it. Once the model is trained and exported, save it to your local machine or, preferably, Google Drive. Then go through this gist, which uses INT8 precision to optimize the model with TensorRT. In the gist I used the same dataset as in the tutorial above; you can try different batch sizes and image sizes. Make sure the model is exported with the proper batch_size. The model is trained on TFRecords in the tutorial, so I presume the datatype is tf.uint8 during training, which is why I used INT8 as my precision.
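
For reference, the TF-TRT part follows roughly this pattern (a sketch rather than the exact gist code; paths, input shape, and the calibration loop are placeholders):

import numpy as np
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='exported_model',        # placeholder path
    precision_mode=trt.TrtPrecisionMode.INT8,
    use_calibration=True)

def calibration_input_fn():
  # INT8 calibration needs representative inputs; random uint8 batches stand
  # in here for real images from the dataset.
  for _ in range(10):
    yield (np.random.randint(0, 256, size=(1, 256, 256, 3), dtype=np.uint8),)

converter.convert(calibration_input_fn=calibration_input_fn)
converter.save('tftrt_int8_model')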

Currently, TensorFlow nightly builds include TF-TRT by default, which means you don’t need to install TF-TRT separately. You can pull the latest TF containers from Docker Hub or install the latest TF pip package to get access to the latest TF-TRT.

Please go through the tensorflow/tensorrt repo for more details.

NVIDIA provides different ways to install TensorRT here. I used the Python installation, which doesn’t require anything extra. Please make sure you have the proper NVIDIA TensorRT and CUDA versions as specified in the documentation.
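
If you take the pip route, installing and sanity-checking is just (the version pin is only an example):

# pip install tensorrt          # optionally pin a version, e.g. tensorrt==8.5.*
import tensorrt
print(tensorrt.__version__)     # confirm the version Python actually sees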

Colab has the latest CUDA driver, 11.8.


I hope the explanation is clear and helps you resolve the issue.

Thanks.

Hi! Thanks for the info! The problem with TF-TRT is that it creates a TF graph, which requires installing TF on a Jetson device, and I want to avoid that. In other words, I’m looking for a way to generate a standalone TRT engine, not a TF graph.
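
To be clear, the standalone path I’m after is the usual ONNX → TensorRT one, roughly this sketch with the TensorRT Python API (file names are placeholders):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open('model.onnx', 'rb') as f:            # placeholder ONNX file
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError('ONNX parsing failed')

config = builder.create_builder_config()
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

# Serialize the engine so it can be deployed without TensorFlow installed.
engine_bytes = builder.build_serialized_network(network, config)
with open('model.engine', 'wb') as f:
    f.write(engine_bytes)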

@alekseisolovev did you find a way to convert the saved_model to a proper TRT engine yet?

I’m facing the same issue. I’m also attempting to convert the saved_model → .onnx → TRT, but I get:

[03/20/2023-09:42:32] [W] [TRT] onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[03/20/2023-09:42:32] [W] [TRT] onnx2trt_utils.cpp:403: One or more weights outside the range of INT32 was clamped
[03/20/2023-09:42:33] [E] Error[3]: [topKLayer.h::setK::22] Error Code 3: API Usage Error (Parameter check failed at: /_src/build/aarch64-gnu/release/optimizer/api/layers/topKLayer.h::setK::22, condition: k > 0 && k <= kMAX_TOPK_K
)
[03/20/2023-09:42:33] [E] Error[2]: [topKLayer.cpp::TopKLayer::20] Error Code 2: Internal Error (Assertion ThreadContext::getThreadResources().getErrorRecorder().getNbErrors() == prevNbErrors failed. )
[03/20/2023-09:42:33] [E] [TRT] ModelImporter.cpp:726: While parsing node number 310 [TopK -> "StatefulPartitionedCall/generate_detections/TopKV2:0"]:
[03/20/2023-09:42:33] [E] [TRT] ModelImporter.cpp:727: --- Begin node ---
[03/20/2023-09:42:33] [E] [TRT] ModelImporter.cpp:728: input: "StatefulPartitionedCall/generate_detections/Reshape:0"
input: "const_fold_opt__1545"
output: "StatefulPartitionedCall/generate_detections/TopKV2:0"
output: "StatefulPartitionedCall/generate_detections/TopKV2:1"
name: "StatefulPartitionedCall/generate_detections/TopKV2"
op_type: "TopK"
attribute {
  name: "sorted"
  i: 1
  type: INT
}

[03/20/2023-09:42:33] [E] [TRT] ModelImporter.cpp:729: --- End node ---
[03/20/2023-09:42:33] [E] [TRT] ModelImporter.cpp:731: ERROR: builtin_op_importers.cpp:4931 In function importTopK:
[8] Assertion failed: layer && "Failed to add TopK layer."
[03/20/2023-09:42:33] [E] Failed to parse onnx file
[03/20/2023-09:42:33] [E] Parsing model failed
[03/20/2023-09:42:33] [E] Failed to create engine from model or file.
[03/20/2023-09:42:33] [E] Engine set up failed

Hi, @Gling_K. With the latest TensorRT 8.5 I’m having the same TopKLayer issue.

Hi @Gling_K, @alekseisolovev,

Can you provide a code snippet to reproduce the issue, so that I can take a look and, if possible, work on resolving it?

Thanks

Hi @Siva_Sravana_Kumar_N,
Here are the steps I took:

  1. Ran the object detection colab up to the “Saving and exporting the trained model” section, then zipped and downloaded the exported model to my local machine.

  2. On my local machine, which has tf2onnx installed, ran the following command in a terminal:

python -m tf2onnx.convert --saved-model tensorflow-model-path --output test.onnx

  3. On the target machine (Orin AGX), using the nvcr.io/nvidia/l4t-tensorrt:r8.5.2.2-devel Docker image, ran the following via bash inside the container:
/usr/src/tensorrt/bin/trtexec --onnx='/workspaces/scratch_ai/onnx/test.onnx' --saveEngine='/workspaces/scratch_ai/trt/test.engine' --exportProfile='/workspaces/scratch_ai/trt/test.json' --allowGPUFallback --useSpinWait --separateProfileRun > '/workspaces/scratch_ai/trt/test.log'

which led to the error reported above.

It would be nice if there were a way to make the models in the TF Model Garden play nicely with TensorRT without needing TF-TRT, since in my tests plain TensorRT engines run much faster than TF-TRT models.

Any help is appreciated