Text-generation tutorial - why not just predict the next letter?

I’m reading the ‘Text generation with an RNN’ tutorial (Text generation with an RNN | TensorFlow) and have a question.

For example, say seq_length is 4 and our text is “Hello”. The input sequence would be “Hell”, and the target sequence “ello”.
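As I understand it, the split looks roughly like this (a sketch based on my reading of the tutorial; the helper name and details may differ):

```python
# Rough sketch of the input/target split described above
# (assumed helper, modeled on the tutorial's split_input_target)
def split_input_target(sequence):
    input_text = sequence[:-1]   # everything but the last character
    target_text = sequence[1:]   # everything but the first character
    return input_text, target_text

print(split_input_target("Hello"))  # ('Hell', 'ello')
```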

Why is the target “ello” instead of just “o”?
Why do we need to train the model to predict the whole shifted sequence? Why don’t we train it to just predict the next letter?
E.g. input=“Hell”, target=“o”.

Thanks.

@markdaoust can help here


Predicting the entire sequence, shifted, is a more efficient way of training.

Running the RNN across “hell” to predict “o” calculates nearly everything you need to also predict “h” > “e”, “he” > “l”, and “hel” > “l”. So predict them all and calculate the loss for all of them.
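Here is a minimal sketch of what that looks like (not the tutorial’s exact model, just a tiny GRU for illustration): one forward pass over “Hell” produces next-character logits at every position, so you get a loss term for each of those predictions at once.

```python
import tensorflow as tf

# Toy vocabulary and encoding for the example above
vocab = sorted(set("Hello"))                 # ['H', 'e', 'l', 'o']
char2id = {c: i for i, c in enumerate(vocab)}

inp = tf.constant([[char2id[c] for c in "Hell"]])   # shape (1, 4)
tgt = tf.constant([[char2id[c] for c in "ello"]])   # shape (1, 4)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(vocab), 8),
    tf.keras.layers.GRU(16, return_sequences=True),  # keep the output at every timestep
    tf.keras.layers.Dense(len(vocab)),                # next-character logits per position
])

logits = model(inp)   # shape (1, 4, 4): a prediction after "H", "He", "Hel", and "Hell"
loss = tf.keras.losses.sparse_categorical_crossentropy(tgt, logits, from_logits=True)
print(loss.shape)     # (1, 4): one loss term per position, not just one for "Hell" > "o"
```

Training with input=“Hell”, target=“o” would only use the last of those four predictions, wasting the intermediate timestep outputs the RNN already computed.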
