Improving Dataflow Pipelines for Text Data Processing

This is something we (@nilabhra and I) worked on at Carted: improving text data processing at scale with Apache Beam and Cloud Dataflow.

Blog post:


We use some tools from the TensorFlow ecosystem, such as a BERT model from TensorFlow Hub and TFRecords for serializing the preprocessed data. I hope this will be really beneficial for the community: with these techniques we were able to reduce the total wall-clock time from more than 3 days to under 3 hours.

We further optimized the BERT model used in the blog post with ONNX (since we run on CPUs), and the full pipeline now takes around 1 hour 45 minutes.
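For anyone curious, the TensorFlow-to-ONNX conversion can be done with the `tf2onnx` CLI. This is a sketch, not our exact command: the SavedModel path, output name, and opset below are placeholders.

```shell
# Convert a TensorFlow SavedModel to ONNX (requires the tf2onnx package).
# Paths and opset are illustrative, not the ones from our pipeline.
python -m tf2onnx.convert \
  --saved-model bert_savedmodel \
  --opset 13 \
  --output bert.onnx
```

The resulting `bert.onnx` can then be served with ONNX Runtime on CPU.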


This is super cool! Congrats!

Question: why does the last step make the model better? What changed in the model? Does it replace ops with ones optimized for CPU?

Do you mean the ONNX conversion step? If so, it's because ONNX performs graph optimizations such as layer fusion and constant folding (replacing subgraphs that produce constant values with the precomputed values). This simplifies the model graph, which reduces latency.
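To make the constant-folding part concrete, here is a toy illustration in plain Python (this is the idea in spirit, not ONNX's actual implementation): any op whose inputs are all constants is evaluated once at conversion time, so only ops depending on runtime inputs survive.

```python
# Toy graph: each node is (op, inputs); constants are ("const", value).
# Constant folding evaluates any op whose inputs are all constants and
# replaces it with a single constant node, mimicking one of the graph
# simplifications an ONNX optimizer applies.

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def fold_constants(graph):
    """Repeatedly replace ops with all-constant inputs by their value."""
    changed = True
    while changed:
        changed = False
        for name, (op, args) in list(graph.items()):
            if op not in OPS:  # skip constants and runtime inputs
                continue
            inputs = [graph[a] for a in args]
            if all(kind == "const" for kind, _ in inputs):
                value = OPS[op](*[v for _, v in inputs])
                graph[name] = ("const", value)
                changed = True
    return graph

graph = {
    "c1": ("const", 2.0),
    "c2": ("const", 3.0),
    "scale": ("mul", ["c1", "c2"]),  # all-constant: folds to ("const", 6.0)
    "x": ("input", []),              # runtime input, never folded
    "out": ("mul", ["x", "scale"]),  # kept: depends on the runtime input
}
folded = fold_constants(graph)
```

After folding, `scale` is a plain constant and no longer costs anything at inference; only `out`, which touches the runtime input `x`, remains as a real op.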


Yes, it was the ONNX conversion step, thanks!