Questions about fine-tuning BERT

Hi,

I am new to deep learning and the NLP area, and I have been trying to follow the tutorials on the TensorFlow website.
One tutorial, called fine-tuning BERT, uses the BertTokenizer to prepare the inputs for classification. I also found another tutorial on using BERT for text classification, which just imports a preprocessing layer and an encoder layer to encode the text.
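
To make it concrete, the second tutorial builds the model roughly like this (I am reconstructing this from memory, so the exact TF Hub handles and output names may be a bit off):

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text as text  # needed so the preprocessing model's ops are registered

# TF Hub handles as I remember them from the tutorial; they may differ.
preprocess_handle = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
encoder_handle = "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1"

# Raw strings go in; the preprocessing layer does the tokenization internally,
# so there is no explicit BertTokenizer anywhere in this version.
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="text")
encoder_inputs = hub.KerasLayer(preprocess_handle)(text_input)

# trainable=True means the BERT weights get updated during training as well.
encoder_outputs = hub.KerasLayer(encoder_handle, trainable=True)(encoder_inputs)
pooled = encoder_outputs["pooled_output"]

# Small classification head on top of the pooled BERT output.
dropped = tf.keras.layers.Dropout(0.1)(pooled)
output = tf.keras.layers.Dense(1)(dropped)
model = tf.keras.Model(text_input, output)
```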

I am a little confused about the difference between these two tutorials. The fine-tuning one seems a lot harder. Does the fine-tuning approach mean it can be applied to custom data?

Thanks