Reduce size of tensorflow/tensorflow-nightly-gpu container

My worker microservice uses TFJS to predict video frames, running in a container on a cluster of VMs on Google Kubernetes Engine (GKE). I'm using a GPU-enabled container built on top of the tensorflow/tensorflow-nightly-gpu image. That image is 2.67 GB, and it takes several minutes to start up after my worker VM is ready. It looks like the NVIDIA CUDA libraries are the bulk of that, at 1.78 GB + 624 MB.

Can I minimize the CUDA installation in any way, given that I'm only using TFJS for prediction/inference, not training, via the tfjs-node-gpu backend? Are there any smaller base images that will support TFJS prediction?

Aha! Since I'm using only TFJS, there's no need to use the tensorflow/tensorflow-nightly-gpu image at all: I don't run Python or use the TF version installed in that base image. tfjs-node-gpu bundles the native TensorFlow binding it needs and just requires a container with a valid NVIDIA CUDA installation that includes cuDNN 8. The official nvidia/cuda Docker images provide exactly that, and one of them works with my Express app. Switching to this image saved me about 0.5 GB, which, unfortunately, is less than I hoped. I used the 11.2/cudnn8 variant because that's what the tensorflow/tensorflow-nightly-gpu image uses.
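For anyone wanting to try this, here's a minimal sketch of such a Dockerfile. The specific tag (`nvidia/cuda:11.2.2-cudnn8-runtime-ubuntu20.04`), the Node version, and the `server.js` entry point are my assumptions; adjust them to your app and driver:

```dockerfile
# Assumed tag: a CUDA 11.2 + cuDNN 8 *runtime* image (no dev toolchain),
# matching the CUDA version tensorflow/tensorflow-nightly-gpu ships with.
FROM nvidia/cuda:11.2.2-cudnn8-runtime-ubuntu20.04

# Install Node.js via the NodeSource setup script (version is an assumption).
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates \
    && curl -fsSL https://deb.nodesource.com/setup_16.x | bash - \
    && apt-get install -y --no-install-recommends nodejs \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY package*.json ./
# @tensorflow/tfjs-node-gpu downloads its prebuilt libtensorflow binding
# during install, so no Python or pip-installed TensorFlow is needed.
RUN npm ci --omit=dev
COPY . .

CMD ["node", "server.js"]
```

Note that the `-runtime` variant matters for size: the `-devel` images include the full CUDA compiler toolchain, while tfjs-node-gpu only needs the runtime libraries plus cuDNN at inference time.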