TensorFlow BERT pooled_output vs sequence_output

I would like to understand the difference between pooled_output, sequence_output and encoder_output in

encoder = hub.KerasLayer("TensorFlow Hub")
text_embeddings = encoder(text_preprocessed)
text_embeddings.keys() # this has pooled_output, sequence_output etc as keys

My understanding is that pooled_output is an embedding for the entire sentence, whereas sequence_output contains the contextualized embeddings of the individual tokens in the sentence. Going by that, shouldn't the pooled_output embedding vector and the sequence_output embedding vector for the first token be the same? I looked at them in my notebook and they are different. I couldn't find much documentation online, so it would be great if someone could provide a detailed explanation of these three keys in the output object above.
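For reference, this is roughly how I compared them in my notebook. The preprocessor and encoder handles below are just example BERT models from TF Hub, not necessarily the ones in question; substitute whatever model you are loading:

import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # registers the preprocessing ops used by the preprocessor

# Example handles from TF Hub; substitute the models you are actually using.
preprocessor = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4")

text_preprocessed = preprocessor(tf.constant(["hello world"]))
text_embeddings = encoder(text_preprocessed)

pooled = text_embeddings["pooled_output"]      # shape (batch, 768): one vector per sentence
sequence = text_embeddings["sequence_output"]  # shape (batch, seq_len, 768): one vector per token
first_token = sequence[:, 0, :]                # contextual embedding of the first token

# I expected this to be ~0, but the two vectors differ:
print(tf.reduce_max(tf.abs(pooled - first_token)))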

Do you know the answer to this, @markdaoust?