EventAccumulator not showing scalar data while TensorBoard UI displays graphs correctly

Hello TensorFlow Community,

I am facing an issue with TensorBoard’s EventAccumulator that seems to be a common problem, as I have noticed several unanswered questions on both Stack Overflow and the TensorFlow Community regarding this topic. I have written some code to read the contents of the event files generated by TensorFlow and display the scalar data. The code is able to list all the tags correctly but doesn’t show the contents of those tags.

import os
import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

def process_event_file(event_file):
    print(f"Processing event file: {event_file}")
    
    event_acc = EventAccumulator(event_file)
    event_acc.Reload()
    
    print(event_acc.Tags())
    scalar_tags = event_acc.Tags()['scalars']
    for tag in scalar_tags:
        scalar_events = event_acc.Scalars(tag)
        print(f"\nData for tag '{tag}':")
        for event in scalar_events:
            print(f"Step {event.step}, Wall time {event.wall_time}: {event.value}")

def find_and_process_event_files(directory):
    for root, _, files in os.walk(directory):
        for file in files:
            if file.startswith("events.out.tfevents"):
                event_file_path = os.path.join(root, file)
                process_event_file(event_file_path)
                print("\n" + "=" * 80 + "\n")

find_and_process_event_files("/content/drive/MyDrive/logs")

However, when I run TensorBoard in my browser, all the graphs are plotted correctly. Why is the code unable to print the contents of the scalar data, even though TensorBoard displays the graphs correctly? What am I missing, or how can I fix this issue?
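For what it's worth, iterating the raw event protos (independently of EventAccumulator) shows what is actually stored in the file. This is only a diagnostic sketch; the file path is assumed to point at one of the event files above:

```python
import tensorflow as tf

def dump_raw_events(event_file):
    """Print the step, tag, and storage kind of every summary value in an event file."""
    # tf.compat.v1.train.summary_iterator yields one Event proto at a time.
    for event in tf.compat.v1.train.summary_iterator(event_file):
        for value in event.summary.value:
            # WhichOneof("value") reveals whether the data was written as
            # 'simple_value' (TF1-style scalar) or 'tensor' (TF2 tf.summary).
            print(event.step, value.tag, value.WhichOneof("value"))
```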

I previously asked this question on Stack Overflow but haven’t received any response yet. Here’s the link to the [original question].

The print output looks like this:

Processing event file: /content/drive/some-folder/logs/2023-04-13 13:52:36.450081/tensorboard/trainer/UNIQUE_ID_HERE/events.out.tfevents.xxxxxx.localhost.localdomain.xxxx.x.v2

{'images': [], 'audio': [], 'histograms': [], 'scalars': [], 'distributions': [], 'tensors': ........}

I believe this is a recurring issue that many developers are experiencing, but I haven’t found any clear solutions in the existing questions on both platforms. Any help would be greatly appreciated, and I am sure it will benefit others facing the same problem.

Thank you in advance for your assistance!

I tried to resolve the issue by modifying the size_guidance parameter when initializing the EventAccumulator to load all available scalar data, as shown in the following code snippet:


def process_event_file(event_file):
    print(f"Processing event file: {event_file}")
    
    size_guidance = {
        'scalars': 0,
        'images': 0,
        'audio': 0,
        'histograms': 0,
        'compressedHistograms': 0,
        'tensors': 0,
    }
    event_acc = EventAccumulator(event_file, size_guidance=size_guidance)
    event_acc.Reload()
    
    print(event_acc.Tags())
    scalar_tags = event_acc.Tags()['scalars']
    for tag in scalar_tags:
        scalar_events = event_acc.Scalars(tag)
        print(f"\nData for tag '{tag}':")
        for event in scalar_events:
            print(f"Step {event.step}, Wall time {event.wall_time}: {event.value}")

However, this approach did not resolve my issue, and I still cannot see the scalar data in the console output even though the TensorBoard UI correctly displays the graphs. Any suggestions or alternative solutions would be greatly appreciated.

You need to check the 'tensors' key. Scalars logged with TF2's tf.summary.scalar are stored as tensors, which is why your 'scalars' list is empty. Also make sure size_guidance covers the number of steps you used to train your model: if you don't set it, the default of 10 is used and you only get 10 randomly downsampled points. Below is an example that reads out the learning rate.

I hope it helps

import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

event_acc = EventAccumulator(latest_file, size_guidance={'tensors': 1000})
event_acc.Reload()

tensorItems = event_acc.Tensors('learning_rate')
timeStamps = []
steps = []
values = []
for item in tensorItems:
    # Each item is a TensorEvent(wall_time, step, tensor_proto).
    values.append(tf.make_ndarray(item.tensor_proto))
    timeStamps.append(item.wall_time)
    steps.append(item.step)

dataCollection = zip(timeStamps, steps, values)
for data in dataCollection:
    print(data)
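To make the round trip concrete, here is a self-contained sketch (the tag name and log directory are made up for illustration) that writes a few scalars the TF2 way and then reads them back through the 'tensors' key, where tf.summary.scalar data typically ends up:

```python
import glob
import os
import tempfile

import tensorflow as tf
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

logdir = tempfile.mkdtemp()

# Write a few scalar summaries with TF2's tf.summary API. These are stored
# as rank-0 tensors in the event file, not as classic "scalars".
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
    for step in range(5):
        tf.summary.scalar("learning_rate", 0.1 * step, step=step)
writer.close()

event_file = glob.glob(os.path.join(logdir, "events.out.tfevents*"))[0]

# A size_guidance of 0 means "keep everything" for that category.
event_acc = EventAccumulator(event_file, size_guidance={"tensors": 0})
event_acc.Reload()

print(event_acc.Tags()["scalars"])   # typically empty for TF2 summaries
print(event_acc.Tags()["tensors"])   # the tags show up here instead

for item in event_acc.Tensors("learning_rate"):
    # Each item is a TensorEvent(wall_time, step, tensor_proto).
    value = tf.make_ndarray(item.tensor_proto)
    print(item.step, item.wall_time, float(value))
```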