Serving Stable Diffusion in TF Serving

Hi folks,

I am happy to share a project I worked on with @Sayak_Paul. Our main focus is serving Stable Diffusion (both v1 and v2) with TF Serving. To this end, we built three SavedModels, one each for the text_encoder, the diffusion_model, and the decoder, with the pre/post-processing operations included.

In the project repository, you can find the following information:

  • how each SavedModel is constructed
  • Docker images of TF Serving for each SavedModel
  • how to deploy each TF Serving instance on GKE (Google Kubernetes Engine)
  • how to run inference by interacting with the three TF Serving instances
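As a rough illustration of the last point: each SavedModel sits behind TF Serving's standard REST predict endpoint, so a client chains the three services by POSTing to one and feeding its output to the next. Here is a minimal sketch of building the endpoint URL and request body with only the standard library; the host names, model names, and input keys below are placeholders, not necessarily those used in the repo:

```python
import json


def predict_url(host: str, model: str, port: int = 8501) -> str:
    """TF Serving's REST predict endpoint for a given model name."""
    return f"http://{host}:{port}/v1/models/{model}:predict"


def predict_payload(instances) -> str:
    """JSON body in the row ("instances") format the predict API accepts."""
    return json.dumps({"instances": instances})


# Hypothetical service hosts; the real endpoints come from the
# Kubernetes services defined in the repository.
text_encoder_url = predict_url("text-encoder", "text_encoder")
payload = predict_payload([{"prompt": "a photo of an astronaut"}])
# A client would POST `payload` to `text_encoder_url`, pass the returned
# embeddings to the diffusion_model service, and decode the latents with
# the decoder service in the same way.
```

The same URL/payload shape works for all three services, which is what makes splitting the pipeline into independent TF Serving deployments convenient.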


Additionally, the repository covers two other target environments:

  • Hugging Face Inference Endpoint
  • FastAPI on GKE

I hope this project is helpful for some of you!

Repo link: GitHub - deep-diver/keras-sd-serving: showing various ways to serve Keras based stable diffusion


This project looks great!

Congrats to both of you!
