How to deploy a TensorFlow Lite model on a low-resource edge device

Hi, I am reaching out for guidance (or any good resources/examples) on deploying a model on a low-resource edge device (essentially a microcontroller). I am a junior engineer (and the only ML engineer on my team), and I've been tasked with deploying a custom model on a device with roughly 128 KB of storage. I have already reduced the model's size by quantizing it (following the examples provided here: Model optimization | TensorFlow Lite), and now I need to convert the TensorFlow Lite model file into a C header file. I also think I need to be mindful of which other header files I include; given the limited space, I may not be able to pull in some of the TensorFlow packages commonly used during deployment.
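In case it helps anyone with the same question: the conversion step can be done with `xxd -i model.tflite > model_data.h` on Linux, or with a small script. Below is a minimal Python sketch of that conversion; the file names (`model.tflite`, `model_data.h`) and the array name `g_model_data` are placeholders, not anything mandated by TensorFlow.

```python
def tflite_to_c_header(tflite_path, header_path, array_name="g_model_data"):
    """Emit a C header embedding the .tflite flatbuffer as a byte array,
    similar to what `xxd -i` produces."""
    with open(tflite_path, "rb") as f:
        data = f.read()

    lines = [
        "// Auto-generated from %s; do not edit by hand." % tflite_path,
        "",
        # 16-byte alignment is commonly recommended for the model buffer
        # on microcontroller targets.
        "alignas(16) const unsigned char %s[] = {" % array_name,
    ]
    # Write 12 bytes per line as 0xNN literals.
    for i in range(0, len(data), 12):
        chunk = data[i:i + 12]
        lines.append("  " + ", ".join("0x%02x" % b for b in chunk) + ",")
    lines.append("};")
    lines.append("const unsigned int %s_len = %d;" % (array_name, len(data)))

    with open(header_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```

The resulting header can then be compiled into the firmware directly, so the model lives in flash rather than being loaded from a filesystem the device doesn't have.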

Hi @r8a ,

Could you please check the TensorFlow Tutorials page, which has documentation and tutorials for deploying TensorFlow models on microcontrollers? It might help.