Does TensorFlow Lite support layer-level splitting?

Hi folks,
I am trying to use TensorFlow Lite to run networks on heterogeneous cores.
The platform I am using is the HiKey 970, which contains 4 small cores, 4 big cores, a Mali-G72 GPU, and an NPU.

I have used `taskset` together with the thread-count setting to run my benchmark on either the big or the small cores.
For example, to run on the 4 small cores, I use the following command:

taskset -c 0-3 ./benchmark_model --graph={source_dir}/{latency_file} --num_threads=4
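For reference, here is a sketch of how I pin a run to each cluster. It assumes the usual HiKey 970 CPU numbering (small A53 cores as CPUs 0-3, big A73 cores as CPUs 4-7) and a placeholder `model.tflite`; adjust for your board and model.

```shell
# Small cores (assumed CPUs 0-3 on HiKey 970):
taskset -c 0-3 ./benchmark_model --graph=model.tflite --num_threads=4

# Big cores (assumed CPUs 4-7):
taskset -c 4-7 ./benchmark_model --graph=model.tflite --num_threads=4

# taskset pins whatever command follows it to the given CPU set, e.g.:
taskset -c 0 echo pinned
```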

But this setting runs the entire network on whatever cores I assign.

What I am trying to do now is split the work by layer and assign different layers to different cores. For instance, for a 4-layer network, I would assign the first 2 layers to 3 big cores, the third layer to another core, and the last layer to the 4 small cores. I am wondering whether TensorFlow Lite supports such a layer-level splitting setting?

The only method I have found online is to use the Arm Compute Library: a paper uses it to achieve layer-level CPU assignment. But does TensorFlow Lite support such a layer-level splitting setting?

Thanks in advance :slight_smile: