I’m training multiple models sequentially, which consumes more and more GPU memory if I keep every model around without any cleanup. However, I am not aware of any way to clear the graph and free the GPU memory in TensorFlow 2.x. Is there a way to do so?
What I’ve tried (without success)
tf.keras.backend.clear_session()
does not work in my case, as I’ve defined some custom layers
tf.compat.v1.reset_default_graph()
does not work either.
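For context, here is a minimal sketch of the kind of loop I mean (the model architecture is just a placeholder; my real models use custom layers):

```python
import gc
import tensorflow as tf

def build_model():
    # Placeholder model; in my real code this contains custom layers.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu"),
        tf.keras.layers.Dense(1),
    ])

for i in range(3):
    model = build_model()
    # ... model.fit(...) would run here ...
    del model                         # drop the Python reference
    tf.keras.backend.clear_session()  # supposed to reset Keras global state
    gc.collect()                      # force garbage collection
```

Even with del, clear_session(), and gc.collect() combined as above, the GPU memory does not appear to be released between iterations.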