I’m reading the ‘Text generation with an RNN’ tutorial (Text generation with an RNN | TensorFlow) and I have a question.
For example, say `seq_length` is 4 and our text is “Hello”. The input sequence would be “Hell”, and the target sequence “ello”.
Why is the target “ello” instead of just “o”?
Why do we need to train the model to predict the whole shifted input? Why don’t we train it to just predict the next letter?
E.g. input=“Hell”, target=“o”
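For reference, the split the tutorial performs looks something like this (a sketch using plain Python strings rather than the tutorial’s tensors):

```python
# Sketch of the tutorial's input/target split: the target is the input
# shifted by one character, so there is a prediction at every timestep.
def split_input_target(chunk):
    input_text = chunk[:-1]   # all characters except the last
    target_text = chunk[1:]   # all characters except the first
    return input_text, target_text

print(split_input_target("Hello"))  # ('Hell', 'ello')
```

So with this scheme, at timestep 0 the model sees “H” and should predict “e”, at timestep 1 it sees “He” and should predict “l”, and so on, rather than only predicting the single final character.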