Deployment of TensorFlow Serving


Is there any way to deploy just the preprocessing part without a trained model?

The reason I ask is that I am trying to minimize the number of services I use to put a model in production.

I currently have a service that gathers data using FastAPI, but I want to deploy a “model” with TensorFlow Serving that only gathers data; once it has enough data, I would switch to the end-to-end TFX solution.

I don’t know whether TensorFlow Serving has a way to keep the data it receives for inference or preprocessing.
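To make the idea concrete, here is a rough sketch of what I mean by a "model" that is only preprocessing: a SavedModel with no trained weights, just a `tf.function` signature, which (as far as I understand) TensorFlow Serving can serve like any other model. The class name, path, and the scaling step are placeholders, not my actual pipeline:

```python
import tensorflow as tf

class Preprocessor(tf.Module):
    """A weight-free module exposing a preprocessing-only signature."""

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def serving_fn(self, x):
        # Placeholder preprocessing: scale raw pixel-like values into [0, 1].
        return {"scaled": x / 255.0}

module = Preprocessor()
# Version directory "1" follows the layout TensorFlow Serving expects.
tf.saved_model.save(
    module,
    "/tmp/preproc_model/1",
    signatures={"serving_default": module.serving_fn},
)
```

If something like this works, I could point TensorFlow Serving at `/tmp/preproc_model` now and later drop in a real trained model as version `2` without changing the client side.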

Thank you,