TensorboardCallback not logging batch metrics

According to the docs, setting update_freq to "batch" should result in the losses being written to TensorBoard at the end of each batch:

> 'batch' or 'epoch' or integer. When using 'batch', writes the losses and metrics to TensorBoard after each batch. The same applies for 'epoch'. If using an integer, let's say 1000, the callback will write the metrics and losses to TensorBoard every 1000 batches. Note that writing too frequently to TensorBoard can slow down your training.

However, when using the TensorBoard callback with update_freq="batch", the only per-batch metric I see is batch_steps_per_second; no loss or any other batch metric is written.
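For reference, a minimal sketch of the setup that reproduces this (model, data, and log directory are arbitrary placeholders; only the update_freq="batch" argument matters):

```python
import numpy as np
import tensorflow as tf

# Tiny model and random data, just enough to run one epoch of training.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")

x = np.random.rand(64, 4).astype("float32")
y = np.random.rand(64, 1).astype("float32")

# Per the docs, update_freq="batch" should log losses after every batch.
tb = tf.keras.callbacks.TensorBoard(log_dir="./logs", update_freq="batch")
model.fit(x, y, epochs=1, batch_size=8, callbacks=[tb], verbose=0)
```

Inspecting the resulting event files (e.g. with TensorBoard or tf.data.TFRecordDataset over ./logs) is how I observed that batch_steps_per_second is the only per-batch scalar present.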

Looking at the code, the on_train_batch_end() hook now only handles steps_per_second and profiling. It seems this commit from March 17th, 2020 in the tensorflow repo removed the writing of batch metrics. Was that the intended effect of the commit and the docs just weren't updated? Or is batch_loss (among other metrics) still meant to be written to TensorBoard?