I’m trying to integrate TensorFlow into my C++ project using TensorFlow Lite (TFLite), and I’m not sure if I’m doing it properly.
Due to the application design, it’s hard to build and use the model within a single thread. So basically, I’m building the interpreter and allocating tensors in the main thread, but then using it (calling interpreter->Invoke()) from one of the worker threads (always the same single thread, so there’s no concurrent access to that resource). I’m not sure whether this is safe, since there could be some thread_local variables set up during the build process.
The documentation states that interpreters are not thread-safe, but does that mean only that there must be no concurrent access, or that they are also thread-affine (tied to the thread that created them)?
Thanks in advance!
Your approach of using TensorFlow Lite (TFLite) in a C++ project, where the interpreter is initialized in the main thread and then invoked in a worker thread without concurrent access, is generally safe. Key considerations include:
Thread-Local Variables: The core CPU interpreter doesn’t rely on thread-local storage, so it isn’t bound to the thread that built it. The caveat is hardware delegates: the GPU delegate, for instance, can be thread-affine because it may hold a thread-bound OpenGL/EGL context, so a delegate should be created and invoked on the same thread.
Interpreter Initialization: Initializing the interpreter in one thread and using it in another is typically fine as long as there’s no concurrent access.
Memory Management: Ensure resources used by the interpreter (like tensors) remain valid across thread boundaries.
No Concurrent Access: It’s crucial to avoid concurrent access to the interpreter, as TFLite interpreters are not thread-safe.
Testing and Monitoring: Thoroughly test your setup to ensure it functions correctly under various conditions.
Consult Documentation: Keep up-to-date with TensorFlow’s official documentation for any specific guidance on thread usage.
Fallback Plan: If issues arise, consider keeping all TensorFlow Lite operations within a single thread.
In summary, your method should work under the conditions you’ve described, provided you manage resources carefully and avoid concurrent access to the interpreter.