When my model in TensorFlow Serving receives a request from a client, the text first needs to be cleaned with 13 regexes, then converted to token ids with `tf.keras.preprocessing.text.Tokenizer`, and then passed through `tf.keras.preprocessing.sequence.pad_sequences`, which appends 0s to the end of each array (for sentences whose length doesn't match the input length the model expects). The resulting tokens (a single sentence or a batch of sentences) are fed to a `tf.keras` model, which outputs probabilities. Finally, I need to map these probabilities to texts (using different thresholds for different units) and return that to the client.
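For reference, the current client-side pipeline looks roughly like this. This is a simplified stand-in: the patterns, the toy `WORD_INDEX`, and `MAX_LEN` are illustrative only, and the real version uses the Keras `Tokenizer` and `pad_sequences` rather than plain Python:

```python
import re

# Illustrative stand-ins for the real pipeline (13 regexes, Tokenizer, pad_sequences).
REGEXES = [
    (re.compile(r"\s+"), " "),      # collapse whitespace
    (re.compile(r"[^a-z ]"), ""),   # strip punctuation/digits
]
WORD_INDEX = {"hello": 1, "world": 2}  # toy version of tokenizer.word_index
MAX_LEN = 8                             # input length the model expects

def preprocess(sentences):
    batch = []
    for text in sentences:
        text = text.lower()
        for pattern, repl in REGEXES:
            text = pattern.sub(repl, text)
        ids = [WORD_INDEX.get(w, 0) for w in text.split()]  # 0 = OOV
        # Truncate, then zero-pad at the end, like pad_sequences(padding="post").
        ids = ids[:MAX_LEN] + [0] * max(0, MAX_LEN - len(ids))
        batch.append(ids)
    return batch
```

All of this currently runs in Python outside the graph, which is exactly what I want to move inside the SavedModel.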
While trying to put all of that together to serve the model with TensorFlow Serving, I learned that some parts can be expressed as TensorFlow ops, but I couldn't work out all of them:
- regexes: I still can't figure out where and how to apply my regexes to the text inside the graph.
- tokenizer: I learned from some blogs and SO questions that `tf.lookup.StaticHashTable` can be used for this purpose.
- pad_sequences: I couldn't find any help with this either.
- post-processing: I could find very little information on how to do this.
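To make this concrete, here is the part I think I understand: a `tf.lookup.StaticHashTable` standing in for the Tokenizer's `word_index`, with `RaggedTensor.to_tensor` as my guess at an in-graph `pad_sequences`. The vocabulary and `MAX_LEN` here are toy values for illustration:

```python
import tensorflow as tf

MAX_LEN = 6  # length the model expects; illustrative value

# Toy stand-in for tokenizer.word_index; index 0 is reserved for OOV/padding.
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(
        tf.constant(["hello", "world"]),
        tf.constant([1, 2], dtype=tf.int64)),
    default_value=0)

def tokenize_and_pad(texts):
    tokens = tf.strings.split(texts)                       # ragged words
    ids = tf.ragged.map_flat_values(table.lookup, tokens)  # ragged token ids
    # Truncate/zero-pad at the end, like pad_sequences(padding="post").
    return ids.to_tensor(default_value=0, shape=[None, MAX_LEN])
```

What I don't know is where the regexes fit in before this, and how to bundle it with the model at save time.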
I read the beginner and advanced tutorials on the tensorflow-transform tutorials page, but neither of them mentions how to attach those tft functions to the tf.keras model when saving it. I also found some information about adding pre-processing for serving, but it all involved custom TensorFlow code and workarounds, and none of it covered what I am trying to achieve, even indirectly.
I can provide more information as required.
How do I add these steps to the graph, while saving the model?
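In other words, I imagine the end result looking something like the sketch below: a single serving signature that goes text in, text out, with the regex, lookup, padding, and thresholding steps all inside the exported graph. Everything here is hypothetical — the toy variables stand in for my real `tf.keras` model, and the patterns, labels, and thresholds are illustrative only:

```python
import os
import tempfile
import tensorflow as tf

MAX_LEN = 4  # illustrative input length

class ServingModel(tf.Module):
    """Hypothetical sketch: toy weights replace the real tf.keras model; the
    point is that all pre/post-processing lives in the exported graph."""

    def __init__(self):
        super().__init__()
        # Toy stand-in for tokenizer.word_index (0 = OOV/padding).
        self.table = tf.lookup.StaticHashTable(
            tf.lookup.KeyValueTensorInitializer(
                tf.constant(["good", "bad"]),
                tf.constant([1, 2], dtype=tf.int64)),
            default_value=0)
        self.emb = tf.Variable(tf.random.normal([3, 4]))  # toy embedding
        self.out = tf.Variable(tf.random.normal([4, 2]))  # toy output layer
        self.labels = tf.constant(["unit_a", "unit_b"])
        self.thresholds = tf.constant([0.5, 0.7])  # per-unit thresholds

    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def serve(self, texts):
        # Regex step (placeholder pattern for one of the 13 regexes).
        cleaned = tf.strings.regex_replace(texts, r"\s+", " ")
        tokens = tf.strings.split(cleaned)
        ids = tf.ragged.map_flat_values(self.table.lookup, tokens)
        # Zero-pad at the end, like pad_sequences(padding="post").
        dense = ids.to_tensor(default_value=0, shape=[None, MAX_LEN])
        # Toy model forward pass (embedding -> mean pool -> dense -> sigmoid).
        feats = tf.reduce_mean(tf.nn.embedding_lookup(self.emb, dense), axis=1)
        probs = tf.sigmoid(tf.matmul(feats, self.out))
        # Post-processing: probability >= per-unit threshold -> label string.
        return tf.where(probs >= self.thresholds, self.labels, tf.constant(""))

m = ServingModel()
export_dir = tempfile.mkdtemp()
tf.saved_model.save(m, export_dir, signatures={"serving_default": m.serve})
```

Is something along these lines the right approach, or is there a more standard way to wire these steps into the SavedModel for TensorFlow Serving?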