TensorFlow memory leak during inference in a loop

I’m running the following code and seeing a never-ending increase in RAM usage; eventually the script terminates with an out-of-memory error. I can’t figure out what the issue is. I also tried calling tf.keras.backend.clear_session() once every 10,000 iterations, but it didn’t help. I monitor the RAM usage of the script’s specific PID. The TensorFlow version is 2.13.1. I would appreciate any insights.
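For reference, I watch the RAM usage from outside the script, using the PID it prints at startup. A minimal sketch of such a monitor (assuming psutil, which is not part of the script itself; the PID value is hypothetical):

import time
import psutil

# Hypothetical PID: replace with the value printed by the main script.
target_pid = 12345
proc = psutil.Process(target_pid)

while True:
    # Resident set size (RSS) of the target process, in megabytes.
    print("RSS (MB):", proc.memory_info().rss / 1024 ** 2)
    time.sleep(1.0)

The script: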

import os
import tensorflow as tf
import numpy as np
import cv2
import time

main_script_pid = os.getpid()
print("PID of the main script's process:", main_script_pid)

model_path = '.../Models/model_Ch_0_trt'

dummy_frame = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)  # dummy BGR frame

img = cv2.cvtColor(dummy_frame, cv2.COLOR_BGR2GRAY)
img = np.expand_dims(img, axis=0)   # add batch dimension
img = np.expand_dims(img, axis=-1)  # add channel dimension
img = img / 255.0  # normalize pixel values to [0, 1]

# Load the TF-TRT SavedModel and look up its serving signature's I/O names.
trt_saved_model = tf.saved_model.load(model_path)
inference_function = trt_saved_model.signatures["serving_default"]
input_tensor_name = list(inference_function.structured_input_signature[1].keys())[0]
output_tensor_name = list(inference_function.structured_outputs.keys())[0]

while True:

    # RAM usage grows steadily while this loop runs.
    predictions = inference_function(**{input_tensor_name: tf.constant(img, dtype=tf.float32)})[output_tensor_name].numpy()
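The clear_session() attempt mentioned above looked roughly like this (same loop, flushing Keras's global state every 10,000 iterations; it made no difference):

iteration = 0
while True:

    predictions = inference_function(**{input_tensor_name: tf.constant(img, dtype=tf.float32)})[output_tensor_name].numpy()

    iteration += 1
    if iteration % 10_000 == 0:
        # Attempted mitigation: reset Keras's global graph state.
        tf.keras.backend.clear_session()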

An even simpler way to reproduce the problem:

import os
import tensorflow as tf
import numpy as np

main_script_pid = os.getpid()
print("PID of the main script's process:", main_script_pid)

dummy_frame = np.random.randint(0, 255, size=(128, 128, 3), dtype=np.uint8)

while True:

    # Merely creating a new EagerTensor on every iteration is enough to reproduce the growth.
    input_tensor = tf.constant(dummy_frame, dtype=tf.float32)
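For comparison, here is a variation that creates the tensor once and overwrites it in place, as a way to check whether the per-iteration tf.constant() call is what triggers the growth (a sketch; I have not verified its memory behavior):

# Create the tensor once, wrapped in a Variable so it can be updated in place.
input_var = tf.Variable(tf.constant(dummy_frame, dtype=tf.float32))

while True:
    # assign() writes into the existing variable storage rather than binding a
    # new Python-level EagerTensor each iteration (a temporary tensor is still
    # created internally for the assigned value).
    input_var.assign(dummy_frame.astype(np.float32))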

I upgraded to Python 3.10 and TF 2.15, and the memory leak no longer seems to occur. However, with Python 3.11 and TF 2.15, the leak still happens.
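For completeness, the interpreter and TensorFlow versions in each environment can be confirmed with:

import sys
import tensorflow as tf

# Print the Python interpreter and TensorFlow versions for the current run.
print(sys.version)
print(tf.__version__)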