Build a Serving Image with Batched Inference Requests and How to Check That It Worked

How do you test whether batched requests actually work? The steps below build a serving image with batching enabled; the concurrent-request burst at the end of the page is the check.

Start the Serving Base Container

docker run -d --name serving_base tensorflow/serving

Create a batching_parameters.txt file

max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
pad_variable_length_inputs: true
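The three fields above are enough for a padding-aware batching setup. As a sketch of a fuller file, two more fields from TensorFlow Serving's batching parameters text proto can bound the number of batching threads and the request queue (the thread and queue values below are placeholders to tune for your model and hardware):

max_batch_size { value: 32 }
batch_timeout_micros { value: 5000 }
num_batch_threads { value: 4 }
max_enqueued_batches { value: 100 }
pad_variable_length_inputs: true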

Copy the SavedModel and the batching config into the base container

docker cp /home/Desktop/tf/models/my_model serving_base:/models/my_model

docker cp /home/Desktop/tf/batch_config/batching_parameters.txt serving_base:/server_config
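As an optional sanity check (using the same paths as above), confirm that both the model and the config landed inside the running base container:

docker exec serving_base ls /models/my_model
docker exec serving_base ls /server_config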

Commit

docker commit --change "ENV MODEL_NAME my_model" serving_base acnet
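The commit above only bakes in the model name, so the batching flags still have to be passed when the container is started (see the run command below). As an alternative sketch, assuming the stock tensorflow/serving entrypoint (which forwards extra arguments to tensorflow_model_server), the flags can be baked into the image as its CMD instead:

docker commit --change "ENV MODEL_NAME my_model" --change 'CMD ["--enable_batching=true", "--batching_parameters_file=/server_config/batching_parameters.txt"]' serving_base acnet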

Stop Serving Base

docker kill serving_base

Check the Docker image and run it
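To confirm that the committed image exists before starting it:

docker images acnet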

Run the image, passing the batching flags through to tensorflow_model_server, then query the model status endpoint (the model name in the URL must match MODEL_NAME):

docker run --rm --name serve -p 8500:8500 -p 8501:8501 -d acnet --enable_batching=true --batching_parameters_file=/server_config/batching_parameters.txt
curl http://localhost:8501/v1/models/my_model

{
 "model_version_status": [
  {
   "version": "1",
   "state": "AVAILABLE",
   "status": {
    "error_code": "OK",
    "error_message": ""
   }
  }
 ]
}
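Finally, to answer the question at the top: a quick way to see the effect of batching is to fire a burst of concurrent predict requests. This is only a sketch; request.json is a hypothetical file that you must fill with a valid request body for your model's signature (e.g. {"instances": [ ... ]}), and my_model must match MODEL_NAME.

# request.json must hold a valid predict payload for your model, e.g. {"instances": [ ... ]}
time (
  for i in $(seq 1 32); do
    curl -s -o /dev/null -X POST -d @request.json \
      http://localhost:8501/v1/models/my_model:predict &
  done
  wait
)

With batching enabled, requests arriving within batch_timeout_micros of each other are merged into a single model invocation of up to max_batch_size, so for any non-trivial model the burst should finish noticeably faster than against an image started without --enable_batching (individual latencies include the batching delay, but total wall-clock time and throughput improve).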