Is there a way to invoke the model and still maintain the input layer?

Hi,
I am building an MCU application with very limited RAM, and I want consecutive inputs to my TFLite model to overlap. Invoking the model empties the input layer, so I have to maintain a separate copy of it, and that copy costs RAM I would rather spend on a better model. Is there a way to keep the input layer intact when invoking the model?

@Kvenn Your objective is not entirely clear from the description, but I will try to answer based on my understanding of it.

Invoking any TensorFlow Lite model requires the input tensor buffer to be filled before calling the inference API. Once the data has been copied into the input tensor, any local copy of that data you keep (e.g. on your stack) is not touched by inference, so you still have access to that buffer afterwards and can reuse it to build the next, overlapping input.
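
As a minimal sketch of that pattern (shown with the standard C++ interpreter; the same idea applies to TFLite Micro, and kWindow, kHop, and window_buf are hypothetical names, not from any API): keep the sliding window in your own buffer, copy it into the input tensor, and invoke. Only the fresh kHop samples need to be copied in each round.

```cpp
// Sketch only: overlapping windows via a local buffer that Invoke() never touches.
#include <cstring>

#include "tensorflow/lite/interpreter.h"

constexpr int kWindow = 256;  // model input length (assumption)
constexpr int kHop = 128;     // new samples per inference, i.e. 50% overlap

static float window_buf[kWindow];  // local copy, survives Invoke()

void RunOverlapped(tflite::Interpreter* interpreter, const float* new_samples) {
  // Shift the retained tail forward, then append the fresh samples.
  std::memmove(window_buf, window_buf + kHop, (kWindow - kHop) * sizeof(float));
  std::memcpy(window_buf + kWindow - kHop, new_samples, kHop * sizeof(float));

  // Refill the input tensor from the local copy and run inference.
  float* in = interpreter->typed_input_tensor<float>(0);
  std::memcpy(in, window_buf, sizeof(window_buf));
  if (interpreter->Invoke() != kTfLiteOk) {
    // handle the error
  }
  // window_buf still holds the full window for the next overlapping call.
}
```

Note that this keeps exactly one window-sized staging buffer: the overlap lives in window_buf, so no second full-size copy of the input tensor is needed.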

If you’re using the C++ API (auto in = interpreter->typed_input_tensor<int>(input_tensor_idx);), you get a pointer directly into the input tensor at that index and can copy or fill data in place. With the C API you instead copy data from a local buffer into the input tensor via TfLiteTensorCopyFromBuffer().
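
For illustration, here is a hedged sketch of both access styles; input_tensor_idx, local_buf, and the element count n are placeholders, and the two styles are shown in one file only for brevity (a real project would typically pick one API):

```cpp
#include <cstddef>
#include <cstring>

#include "tensorflow/lite/c/c_api.h"
#include "tensorflow/lite/interpreter.h"

// C++ API: get a typed pointer into the input tensor and write in place.
void FillViaCppApi(tflite::Interpreter* interpreter, int input_tensor_idx,
                   const int* local_buf, size_t n) {
  int* in = interpreter->typed_input_tensor<int>(input_tensor_idx);
  std::memcpy(in, local_buf, n * sizeof(int));  // or fill element by element
}

// C API: copy a whole local buffer into the input tensor in one call.
void FillViaCApi(TfLiteInterpreter* interpreter, int input_tensor_idx,
                 const int* local_buf, size_t n) {
  TfLiteTensor* input =
      TfLiteInterpreterGetInputTensor(interpreter, input_tensor_idx);
  TfLiteTensorCopyFromBuffer(input, local_buf, n * sizeof(int));
}
```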

Both of the above assume that tensor space has already been allocated (TfLiteInterpreterAllocateTensors()) and that enough memory is available for the tensors.
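
For completeness, a minimal C-API setup sketch for that prerequisite (model_path is a placeholder, and error handling is abbreviated); per the C API's documented usage, the model and options can be released right after the interpreter is created:

```cpp
#include "tensorflow/lite/c/c_api.h"

// Build an interpreter and allocate tensor memory once, up front.
// Inputs may only be filled after AllocateTensors succeeds.
TfLiteInterpreter* BuildInterpreter(const char* model_path) {
  TfLiteModel* model = TfLiteModelCreateFromFile(model_path);
  TfLiteInterpreterOptions* options = TfLiteInterpreterOptionsCreate();
  TfLiteInterpreter* interpreter = TfLiteInterpreterCreate(model, options);
  TfLiteInterpreterOptionsDelete(options);
  TfLiteModelDelete(model);
  if (interpreter == nullptr) return nullptr;
  if (TfLiteInterpreterAllocateTensors(interpreter) != kTfLiteOk) {
    TfLiteInterpreterDelete(interpreter);
    return nullptr;
  }
  return interpreter;  // safe to fill input tensors from here on
}
```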