TensorFlow Lite inference is too slow on RISC-V platform

Hi, does anyone use TensorFlow Lite on a RISC-V platform?
I use both the Python tflite_runtime and the C++ TensorFlow Lite environment on a RISC-V platform to do image detection, but the performance is really bad.
As the table below shows, I ran the benchmark tool (tensorflow-r2.5\tensorflow\lite\tools\benchmark) with default parameters.
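For reference, here is a minimal sketch of how per-inference latency can be measured from Python in the same spirit as the benchmark tool (warmup runs, then an average over timed runs). `time_invoke` and the dummy workload are hypothetical names of mine; on the board, the timed callable would be the `invoke()` method of a `tflite_runtime.interpreter.Interpreter`:

```python
import time

def time_invoke(fn, warmup_runs=1, num_runs=50):
    """Time a callable roughly the way the TFLite benchmark tool does:
    a few warmup runs first, then the average over num_runs."""
    for _ in range(warmup_runs):
        fn()
    start = time.perf_counter()
    for _ in range(num_runs):
        fn()
    elapsed = time.perf_counter() - start
    return elapsed / num_runs  # average seconds per inference

# Stand-in workload; in the real setup this would be interpreter.invoke()
# after interpreter.allocate_tensors() and interpreter.set_tensor(...).
def dummy_inference():
    sum(i * i for i in range(10_000))

avg_s = time_invoke(dummy_inference)
print(f"avg latency: {avg_s * 1e3:.2f} ms")
```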

To check whether software matters more to the result, I also compared CoreMark scores on the U540 and a Raspberry Pi 3B; it turns out the CoreMark score gap is nowhere near as big as the gap in inference benchmark time (both CPUs at 1 GHz).

> Does the above test prove that "TensorFlow Lite is a bad fit for the RISC-V platform"?
>
> Or does low-level software support, such as the math libraries, simply not perform as well as it does on the ARM platform?

What RISC-V board do you have?

ARM has a NEON delegate:

They are also working on this in the new MLIR compiler stack:

I use a SiFive U740 (RISC-V).

You can probably upvote, comment, and subscribe:


A recent update on the compiler stack:

ARM has a NEON implementation in TFLite to accelerate inference.
SiFive also has an RVV implementation for TFLite, but it is not open-sourced yet.
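The size of the gap is consistent with TFLite's inner kernels running as scalar code when no vector implementation is available. A toy illustration of the scalar-vs-vectorized difference, using NumPy as a stand-in for a vector ISA such as NEON or RVV (the sizes and tolerance here are my own choices, not from the thread):

```python
import time
import numpy as np

# The same dot product run as a scalar Python loop versus one
# vectorized NumPy call. A vector ISA (NEON on ARM, RVV on RISC-V)
# plays an analogous role for TFLite's multiply-accumulate kernels:
# without it, each operation executes one element at a time.
n = 100_000
rng = np.random.default_rng(0)
a = rng.random(n)
b = rng.random(n)

t0 = time.perf_counter()
scalar = 0.0
for i in range(n):
    scalar += a[i] * b[i]
t_scalar = time.perf_counter() - t0

t0 = time.perf_counter()
vectorized = float(np.dot(a, b))
t_vector = time.perf_counter() - t0

# Same result up to floating-point accumulation order.
assert abs(scalar - vectorized) < 1e-6 * abs(vectorized)
print(f"scalar loop: {t_scalar * 1e3:.1f} ms, np.dot: {t_vector * 1e3:.3f} ms")
```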