How to convert BertMaskedLM logits to readable text?

Hi,
I want to convert BertMaskedLM logits to human-readable text. Can anyone suggest to me how can I post-process the data? Here is the code,

import keras_nlp

features = ["The quick brown fox jumped.", "I forgot my homework."]
# Pretrained language model.
masked_lm = keras_nlp.models.BertMaskedLM.from_preset(
    "bert_base_en_uncased",
)
masked_lm.fit(x=features, batch_size=2)

output = masked_lm.predict(["It is a good [MASK]."])

The output I get is a numpy.ndarray of logits.
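
For context, here is the rough post-processing I have been trying on top of the code above. The argmax and detokenize calls are just my guesses (I'm assuming the logits have shape (batch_size, num_masked_positions, vocab_size) and that the task's preprocessor exposes a tokenizer), so please correct me if this is the wrong approach:

import numpy as np

# My guess: argmax over the vocabulary axis gives the most likely token id
# at each masked position. Shape assumption: (batch, masked_positions, vocab).
predicted_ids = np.argmax(output, axis=-1)

# Assuming the tokenizer attached to the preprocessor can map ids back to text.
tokenizer = masked_lm.preprocessor.tokenizer
for ids in predicted_ids:
    print(tokenizer.detokenize(ids))

Is taking the argmax the right way to pick the predicted token, and is detokenize the right call for turning the ids back into a string?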

Thanks in advance for the help! :innocent: