Is there a way to get insight into a model's performance pre-inference?


Given a TFLite model, is there a way to get information about how it will perform? For example, how can I know how long the model will take to run on a given device, and how it will consume CPU/GPU resources, before actually running inference?

The closest thing I can think of is getting the number of operations that will be run, but that alone is far from enough to predict the exact time. Another option is the TFLite benchmark tool, which generates random inputs and runs inference on them; that way you at least don't have to generate inputs and feed them to the model yourself.
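As a sketch, the benchmark tool (`benchmark_model`, built from the TensorFlow source tree or downloaded as a prebuilt binary) can be invoked from the command line roughly like this; the model path below is hypothetical, and flag availability can vary by build:

```shell
# Run the TFLite benchmark tool against a model. Random inputs are
# generated automatically; the tool reports initialization time and
# average inference latency over the timed runs.
#   --num_runs:    number of timed inference runs
#   --warmup_runs: untimed warmup runs before measurement
#   --use_gpu:     try the GPU delegate, if the build supports it
./benchmark_model \
  --graph=/path/to/model.tflite \
  --num_threads=4 \
  --num_runs=50 \
  --warmup_runs=1 \
  --use_gpu=true
```

Running this directly on the target device gives a much better latency estimate than any static property of the model, since operator count alone ignores delegate support, memory bandwidth, and per-op cost on that hardware.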