How do I build a neural network with variable length outputs?

Hi! I’m a beginner with TensorFlow and machine learning in general, and I want to try to create a model that can syllabify words, for example:

Input data: 'syllable', 'tensorflow'
Output data: 'syl-la-ble', 'ten-sor-flow'

I think a Recurrent Neural Network with LSTM would be most appropriate for this, but I am unsure what the model’s structure should be, because the output (the syllabified word) is variable-length, depending on the length of the input word. I have not encountered anything like this in the course of my self-study.

As an initial idea, you can look at an older reference on Language-Agnostic Syllabification with Neural Sequence Labeling:
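The key trick in the sequence-labeling framing is that the output does not need to be variable-length at all: instead of generating the hyphenated string, you predict one label per input character (e.g. 1 if a syllable break follows that character, 0 otherwise), so the output sequence always has the same length as the input. A minimal sketch of that idea in Keras, with illustrative hyperparameters and a toy one-word "dataset" (a real model would be trained on a large labeled corpus):

```python
# Sketch: syllabification as per-character sequence labeling.
# Each character gets a binary label: 1 if a syllable break follows it.
# This makes the output length equal the input length, sidestepping
# variable-length generation. All sizes here are illustrative.
import numpy as np
import tensorflow as tf

def to_labels(syllabified):
    """Convert 'syl-la-ble' to ('syllable', [0, 0, 1, 0, 1, 0, 0, 0])."""
    chars, labels = [], []
    for c in syllabified:
        if c == '-':
            labels[-1] = 1  # mark a break after the previous character
        else:
            chars.append(c)
            labels.append(0)
    return ''.join(chars), labels

word, labels = to_labels('syl-la-ble')

# Tiny demo vocabulary and integer encoding (build this from real data).
vocab = sorted(set(word))
char_to_id = {c: i + 1 for i, c in enumerate(vocab)}  # 0 reserved for padding
x = np.array([[char_to_id[c] for c in word]])
y = np.array([labels])[..., np.newaxis]

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab) + 1, 16, mask_zero=True),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Dense(1, activation='sigmoid'),  # per-character break probability
])
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(x, y, epochs=1, verbose=0)

# At inference, insert '-' after every character whose probability > 0.5.
probs = model.predict(x, verbose=0)[0, :, 0]
print(''.join(c + ('-' if p > 0.5 else '') for c, p in zip(word, probs)))
```

With `mask_zero=True` and zero-padding, batches of words with different lengths train together, while the per-character predictions still line up one-to-one with the input characters.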