Hi,
I’m running TF lite with EfficientDet-Lite0 models on an intel-based windows computer. I’m receiving inference times of around 8 seconds whereas I get inference times of <100ms on mobile platforms.
Is this expected?
Thanks
Yeah, that can happen: the default TF Lite kernels are not optimized for Intel CPUs.
You can try building TF Lite with XNNPACK enabled:
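If you build the runtime from source, passing `--define tflite_with_xnnpack=true` to the Bazel build (or `-DTFLITE_ENABLE_XNNPACK=ON` with CMake) should pull XNNPACK in; check the docs for your exact TF version. Below is a minimal C++ sketch of applying the XNNPACK delegate explicitly from the interpreter API. The model filename and thread count are placeholders, not from your setup:

```cpp
// Sketch: run a TF Lite model with the XNNPACK delegate applied.
// "efficientdet_lite0.tflite" and num_threads = 4 are placeholder values.
#include <memory>

#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model and build an interpreter with the builtin op resolver.
  auto model =
      tflite::FlatBufferModel::BuildFromFile("efficientdet_lite0.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);

  // Create the XNNPACK delegate and hand supported ops over to it.
  TfLiteXNNPackDelegateOptions opts = TfLiteXNNPackDelegateOptionsDefault();
  opts.num_threads = 4;  // tune for your CPU core count
  TfLiteDelegate* xnnpack = TfLiteXNNPackDelegateCreate(&opts);
  interpreter->ModifyGraphWithDelegate(xnnpack);

  interpreter->AllocateTensors();
  // ... fill the input tensor(s) with preprocessed image data here ...
  interpreter->Invoke();
  // ... read detections from the output tensor(s) here ...

  TfLiteXNNPackDelegateDelete(xnnpack);
  return 0;
}
```

It is also worth timing a second `Invoke()` call separately, since the first run can include one-time setup cost.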