Dear all,
I want to better understand which strategy for structuring models benefits most from XLA compilation in native TensorFlow (without using Keras):
- “All-in-one” approach: one tf.function on a single tf.Module that calls plain Python helper functions (but no other tf.Modules).
- “Modular” approach: splitting logical parts into separate tf.Module(s) that are composed by an outer module.
Would the TF/XLA compilers see any difference when analyzing the same model implemented in both ways?
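To make the question concrete, here is a minimal sketch of the two structures I mean (the class and helper names are just placeholders I made up for illustration); in both cases the outer call is decorated with tf.function(jit_compile=True):

```python
import tensorflow as tf

# Approach 1 ("all-in-one"): a single tf.Module whose one tf.function
# calls plain Python helper functions, not other tf.Modules.
def _dense_relu(x, w, b):
    # Plain Python helper; gets inlined into the traced graph.
    return tf.nn.relu(tf.matmul(x, w) + b)

class AllInOne(tf.Module):
    def __init__(self):
        self.w1 = tf.Variable(tf.random.normal([4, 8]))
        self.b1 = tf.Variable(tf.zeros([8]))
        self.w2 = tf.Variable(tf.random.normal([8, 2]))
        self.b2 = tf.Variable(tf.zeros([2]))

    @tf.function(jit_compile=True)
    def __call__(self, x):
        h = _dense_relu(x, self.w1, self.b1)
        return tf.matmul(h, self.w2) + self.b2

# Approach 2 ("modular"): logical parts live in their own tf.Modules,
# composed by an outer tf.Module that owns the compiled entry point.
class Dense(tf.Module):
    def __init__(self, n_in, n_out):
        self.w = tf.Variable(tf.random.normal([n_in, n_out]))
        self.b = tf.Variable(tf.zeros([n_out]))

    def __call__(self, x):
        return tf.matmul(x, self.w) + self.b

class Modular(tf.Module):
    def __init__(self):
        self.layer1 = Dense(4, 8)
        self.layer2 = Dense(8, 2)

    @tf.function(jit_compile=True)
    def __call__(self, x):
        return self.layer2(tf.nn.relu(self.layer1(x)))

# Both variants compute the same two-layer MLP on the same input shape.
x = tf.random.normal([3, 4])
y1 = AllInOne()(x)
y2 = Modular()(x)
```

In other words: is the graph that reaches XLA after tracing effectively the same in both cases, or does the module structure survive in some way the compiler cares about?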
Thank you very much in advance!