EfficientDet-Lite0 very slow on Windows (Intel)


I’m running TF Lite with EfficientDet-Lite0 models on an Intel-based Windows computer. I’m seeing inference times of around 8 seconds, whereas I get inference times of under 100 ms on mobile platforms.

Is this expected?


Yeah, the default kernels are not optimized for Intel CPUs.

1 Like

You can try building TF Lite with XNNPACK:

1 Like
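For reference, a build along those lines might look like this with Bazel. This is a sketch only: the `tflite_with_xnnpack` define comes from the TensorFlow repo's TF Lite build configuration and may differ across TF versions, so check the build docs for your checkout.

```shell
# From a TensorFlow source checkout: build the TF Lite shared library
# with XNNPACK kernels enabled (define name may vary by TF version).
bazel build -c opt \
  --define tflite_with_xnnpack=true \
  //tensorflow/lite:libtensorflowlite.so
```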

Thanks @sx_f @Bhack, I fixed it by switching on RUY. Inference time went from 8 seconds to 30 ms. Surprised it’s not on by default.
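For anyone landing here later: if you use the TF Lite CMake build instead of Bazel, the corresponding switches are the `TFLITE_ENABLE_RUY` and `TFLITE_ENABLE_XNNPACK` options. A sketch, assuming a TensorFlow source checkout; option names should be verified against your TF version:

```shell
# Configure TF Lite with the RUY matrix-multiplication backend
# (and XNNPACK) enabled, then build. Paths are placeholders.
cmake ../tensorflow/tensorflow/lite \
  -DTFLITE_ENABLE_RUY=ON \
  -DTFLITE_ENABLE_XNNPACK=ON
cmake --build . -j
```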