What's best practice for using XLA?

Hi,

Quick question I hope!

We’ve been using the following in our codebase for the longest time:

tf.config.optimizer.set_jit(True)

I recently also re-discovered the jit_compile=True argument for model.compile()

What’s the difference between these approaches? Should we use the former, the latter, or both?

Cheers,
Liam

Hi @loneil

Welcome to the TensorFlow Forum!

tf.config.optimizer.set_jit(True) and the jit_compile=True argument to model.compile() both enable XLA JIT compilation for better performance, but they enable it in different ways.

tf.config.optimizer.set_jit(True) sets a global flag that turns on XLA auto-clustering: the TensorFlow graph optimizer looks at every graph it runs, groups compatible ops into clusters, and compiles those clusters with XLA on a best-effort basis. jit_compile=True in model.compile() is a per-model setting: it causes the model’s train, test, and predict steps to be compiled with XLA (equivalent to wrapping them in tf.function(jit_compile=True)), so each whole step is compiled as a single program, and an error is raised if a step contains an op XLA cannot compile.

tf.config.optimizer.set_jit(True) can be convenient if you have multiple models in your codebase and want all of them to benefit without touching each one, though because auto-clustering is heuristic it may compile only parts of a graph. jit_compile=True in model.compile() is more granular: you opt in per model, and you avoid unnecessary JIT compilation elsewhere in your code. Which one gives better performance depends on the model and hardware, so it may come down to your specific needs and preferences. Thank you.
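For reference, the two ways of opting in look like this side by side. This is a minimal sketch: the one-layer model and random data are placeholders, not anything from your codebase.

```python
import numpy as np
import tensorflow as tf

# Option 1: global flag. Turns on XLA auto-clustering for every
# graph TensorFlow runs from this point on (best-effort, heuristic).
tf.config.optimizer.set_jit(True)

# Option 2: per-model flag. The model's train/test/predict steps are
# compiled as whole XLA programs, as if wrapped in
# tf.function(jit_compile=True).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse", jit_compile=True)

# Placeholder data just to exercise the compiled train step.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```

Nothing stops you from setting both, as in your codebase; the per-model flag governs the Keras step functions, while the global flag affects any other graphs you run.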

Thanks Renu!

As I understand your answer, setting jit_compile=True in model.compile() shouldn’t give any additional speed-up over just setting tf.config.optimizer.set_jit(True). This is what I expected too :slight_smile:

However, in practice I am seeing a 2-3x speed-up when applying both. Setting just tf.config.optimizer.set_jit(True) is much slower than setting both in my case, so I was curious whether there is an explicit difference!