Asynchronous Inference Execution with TFLite Model

Hi, I’m currently using the TFLite benchmark tool to measure the performance of my model, and I’d like to execute some of the graph’s operations asynchronously. I noticed that TensorFlow Lite appears to support asynchronous execution under `tensorflow/tensorflow/lite/core/async/`, but I’m unsure how to make use of this in my benchmarking process.
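For context, the closest I’ve gotten so far is a coarse workaround on the Python side: overlapping blocking inference calls by submitting them to a thread pool. Here is a minimal sketch of that pattern, where `run_inference` is a hypothetical stand-in for the interpreter’s blocking `invoke()` call, not the actual async API under `lite/core/async/`:

```python
# Workaround sketch: overlap blocking inference calls with a thread pool.
# `run_inference` is a hypothetical placeholder for a real TFLite call
# (set_tensor -> invoke -> get_tensor); it just doubles each input here.
from concurrent.futures import ThreadPoolExecutor


def run_inference(batch):
    # Placeholder for the blocking interpreter.invoke() round trip.
    return [x * 2 for x in batch]


def run_batches_async(batches, max_workers=2):
    # Submit every batch, then collect results in submission order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(run_inference, b) for b in batches]
        return [f.result() for f in futures]


if __name__ == "__main__":
    print(run_batches_async([[1, 2], [3, 4]]))  # → [[2, 4], [6, 8]]
```

This overlaps host-side work across requests, but it doesn’t actually make the graph’s ops run asynchronously inside the runtime, which is what I’m really after.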

Could anyone provide guidance or an example of how to implement asynchronous inference execution with TFLite? Any help or pointers would be greatly appreciated. Thank you!