TFJS - parallelise model inference

Hey!

I am wondering how to run inference for more than one ML model in parallel in tfjs. If I wrap each model's inference in an async function (predict_1, predict_2) and then call them with sequential awaits (await predict_1, await predict_2), it does not work — the two predictions still don't seem to run in parallel.
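For concreteness, here is a minimal sketch of the pattern I mean (the dummy models and the `predict` helper are just placeholders for my real setup):

```js
import * as tf from '@tensorflow/tfjs';

// Tiny stand-in models (the real ones would come from
// tf.loadLayersModel / tf.loadGraphModel).
function makeDummyModel() {
  const m = tf.sequential();
  m.add(tf.layers.dense({units: 4, inputShape: [8]}));
  return m;
}

// Async inference wrapper, like predict_1 / predict_2 above.
// Note: model.predict() itself is synchronous in tfjs — it enqueues the
// ops and returns a tensor handle — so the awaitable part is reading the
// result back with tensor.data().
async function predict(model, input) {
  const out = model.predict(input);
  const data = await out.data();
  out.dispose();
  return data;
}

async function main() {
  const model1 = makeDummyModel();
  const model2 = makeDummyModel();
  const x1 = tf.randomNormal([1, 8]);
  const x2 = tf.randomNormal([1, 8]);

  // Sequential: the second prediction doesn't start until the
  // first has fully resolved.
  const a = await predict(model1, x1);
  const b = await predict(model2, x2);

  // Concurrent scheduling: both predictions are kicked off before
  // either is awaited, so their backend work can overlap.
  const [c, d] = await Promise.all([
    predict(model1, x1),
    predict(model2, x2),
  ]);
  console.log(a.length, b.length, c.length, d.length);
}

main();
```

My understanding is that even with Promise.all, a single tfjs backend executes ops on one queue, so this only overlaps scheduling — which is why I'm asking whether truly parallel inference is possible (separate backends in Web Workers?).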

Are you seeing any errors, or can you share more information?