Inference time with Inception Resnet V2

I’m using code similar to this: Object Detection  |  TensorFlow Hub
I get inference times of about 0.4 seconds per image on an RTX 3080 GPU. Not sure why it is so slow; published benchmarks suggest I should be getting roughly 10× faster (~30 ms).
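One thing worth ruling out first: the very first call to a TF model usually includes graph tracing and GPU kernel setup, so timing a single image can massively overstate per-image latency. Here is a minimal, framework-agnostic timing sketch; `time_inference`, the warm-up counts, and the commented-out detector call are all illustrative, not from the tutorial:

```python
import time


def time_inference(infer_fn, inputs, warmup=3, runs=10):
    """Return mean seconds per call of infer_fn(inputs),
    excluding warm-up calls (tracing / kernel compilation)."""
    for _ in range(warmup):
        infer_fn(inputs)
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(inputs)
    return (time.perf_counter() - start) / runs


# With the TF Hub detector from the tutorial you would do something like:
#   detector = hub.load(module_handle).signatures['default']
#   mean_s = time_inference(detector, image_tensor)
```

If the steady-state mean is still ~0.4 s, the slowdown is real and not just first-call overhead.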



Hi Richard, welcome to the TF Forum!

Which benchmark are you referring to?
Which model are you using?
What’s the input image size?

I’ve just tested the tutorial, and for the inception_resnet_v2 model I’m getting ~1.4 seconds per inference on a public Colab T4.

For the ssd_mobilenet_v2 model I’m getting around 0.2 seconds per inference (similar to your result).

BTW, the tutorial you mentioned is for TF1; the TF2 version is here: TensorFlow Hub Object Detection Colab
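One practical difference when switching: the TF2 detection SavedModels on TF Hub generally take a batched uint8 image tensor of shape [1, H, W, 3], while the TF1 modules take normalized floats. A small preprocessing sketch (the helper name and the 640×480 size are just examples, not part of either tutorial):

```python
import numpy as np


def to_detector_input(image):
    """Wrap an HxWx3 uint8 RGB image into the [1, H, W, 3]
    batch shape that the TF2 detection SavedModels expect."""
    arr = np.asarray(image, dtype=np.uint8)
    if arr.ndim != 3 or arr.shape[-1] != 3:
        raise ValueError("expected an HxWx3 RGB image")
    return arr[np.newaxis, ...]


# With TensorFlow you would then run, e.g.:
#   results = detector(tf.convert_to_tensor(batch))
```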