Hi, I’m training a model with `model.fitDataset`. The input dimensions are `[480, 640, 3]`, with just 4 outputs of size `[1, 4]` and a batch size of 3.
Before the first `onBatchEnd` is called, I’m getting a “High memory usage in GPU, most likely due to a memory leak” warning. However, `numTensors` is only ~38 after every yield of the generator function, and the same after each `onBatchEnd`, so I don’t think I’m leaking undisposed tensors.
While debugging the TF.js internals, I noticed that `numBytesInGPU` goes above 2.2 GB, which is what triggers the warning.
Is this normal behavior for images of that size? It means I can’t increase my batch size, because anything greater than 3 runs out of memory.
Is there anything I can do to reduce the GPU memory usage?
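For reference, here’s my back-of-envelope estimate of the raw input memory (assuming `float32` tensors, 4 bytes per element). The input batch itself is tiny, so the 2.2 GB presumably comes from intermediate layer activations rather than the input data:

```javascript
// Rough input-memory estimate, assuming float32 (4 bytes per element).
const [height, width, channels] = [480, 640, 3];
const batchSize = 3;

const bytesPerImage = height * width * channels * 4; // 3,686,400 bytes (~3.5 MiB)
const bytesPerBatch = bytesPerImage * batchSize;     // 11,059,200 bytes (~10.5 MiB)

console.log(`per image: ${(bytesPerImage / 2 ** 20).toFixed(1)} MiB`);
console.log(`per batch: ${(bytesPerBatch / 2 ** 20).toFixed(1)} MiB`);
```

So the whole batch is only ~10.5 MiB; if the conv layers keep full-resolution feature maps (e.g. 64 channels at 480×640 is already ~75 MiB per image, per layer), activations dominate the GPU memory.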