Is there a way to invoke a tflite model without emptying the input layer?

Hi!
I am using a tflite model on an edge device with limited RAM. My program reads continuous sensor data, and the model's input layer takes 150 data points. I want to invoke the model every 50 data points, so that the data overlaps between consecutive invocations. Currently I save the data to a buffer and copy it to the input tensor when I want to invoke the model, but this requires too much RAM.
Is there a way to keep the data in the input layer after invoking, so that I don't have to hold the data in two separate places?

Hi @Kvenn,

I am going through the backlog. You may have already worked out how to place the data in the input layer, but some more details are added here.

There is no direct way to keep data in the TFLite interpreter’s input tensor after running the model. The input buffer lives in the interpreter’s memory arena and may be reused for intermediate results during inference, so TFLite expects fresh input for each invoke() call. However, there are a couple of approaches you can use to work around your edge device’s RAM constraints when processing sensor data with overlapping windows:

  • Implement a fixed-size circular (ring) buffer that holds only the latest 150 data points. Each new sample overwrites the oldest one, and you unwrap the buffer into the input tensor only at invoke time (see the sketch after this list).
  • If possible, pre-process your sensor data into smaller chunks (e.g., 25 data points) with some overlap, so that each copy into the input tensor is smaller.
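Here is a minimal sketch of the ring-buffer approach in Python, assuming the standard tf.lite.Interpreter, a model file named "model.tflite" (placeholder), and an input tensor of shape [1, 150] with float32 data. To avoid building a second 150-sample array, it writes directly into the interpreter’s own input buffer via interpreter.tensor(); that view must be released before invoke(), so the data still cannot survive across invocations and the ring buffer stays the single source of truth.

```python
# Minimal sketch, under the assumptions above; push() and run_inference()
# are hypothetical helper names, not TFLite API.
import numpy as np
import tensorflow as tf

WINDOW = 150  # size of the model's input layer
HOP = 50      # new samples between invocations

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

ring = np.zeros(WINDOW, dtype=np.float32)  # the only 150-sample buffer we allocate
write = 0          # next write position (also the oldest sample once full)
filled = 0         # how many valid samples the ring holds
new_samples = 0    # samples received since the last invoke

def push(sample):
    """Feed one sensor reading; run inference every HOP samples."""
    global write, filled, new_samples
    ring[write] = sample
    write = (write + 1) % WINDOW
    filled = min(filled + 1, WINDOW)
    new_samples += 1
    if filled == WINDOW and new_samples >= HOP:
        new_samples = 0
        run_inference()

def run_inference():
    # Unwrap the ring in chronological order straight into the
    # interpreter's input buffer instead of a temporary array.
    buf = interpreter.tensor(input_index)()  # numpy view of the input tensor
    tail = WINDOW - write                    # samples from oldest to end of ring
    buf[0, :tail] = ring[write:]
    buf[0, tail:] = ring[:write]
    del buf  # the view must be released before invoke()
    interpreter.invoke()
    print(interpreter.get_tensor(output_index))
```

With this layout, the window exists in two places only for the brief copy inside run_inference(); between invocations, the ring is the only buffer your program itself holds, which is typically the best you can do given that the interpreter may reclaim the input tensor’s memory during inference.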

Thank You