OpenAI's GPT-2: parameters for running the model

I need some quick help with the generation parameters for running the model.

Example:

from random import randrange

import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

# assuming the standard 'gpt2' checkpoint
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = TFGPT2LMHeadModel.from_pretrained('gpt2')

text = 'start text'

# encode the input text
input_ids = tokenizer.encode(text, return_tensors='tf')

tf.random.set_seed(randrange(1000000000000000))

# top_k = 50, top_p = 0.95, num_return_sequences = 1
sample_outputs = model.generate(
    input_ids,
    temperature=0.8,
    do_sample=True,
    no_repeat_ngram_size=3,
    max_length=200,
    top_k=50,
    top_p=0.95,
    num_return_sequences=1
)

content = tokenizer.decode(sample_outputs[0], skip_special_tokens=True)

I need to generate text like this:


is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry’s standard dummy start text text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum.

content => at least 200 tokens (the generated text must be complete, not cut off mid-sentence!)
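On the minimum-length requirement: Hugging Face's `generate` accepts a `min_length` argument (counted in tokens, prompt included for GPT-2) that suppresses the end-of-sequence token until that length is reached. A minimal sketch of the parameter set, reusing the sampling values from the snippet above:

```python
# Generation settings that enforce a minimum output length.
# min_length blocks the EOS token until the sequence has at least that
# many tokens; max_length must be >= min_length, or generation stops first.
generation_kwargs = dict(
    do_sample=True,
    temperature=0.8,
    top_k=50,
    top_p=0.95,
    no_repeat_ngram_size=3,
    min_length=200,   # at least 200 tokens before EOS may be sampled
    max_length=300,   # hard upper cap
    num_return_sequences=1,
)

# Usage (with the model/tokenizer loaded as in the question):
# sample_outputs = model.generate(input_ids, **generation_kwargs)
```

Note that `min_length` only forces length; it does not guarantee the text ends on a finished sentence, so trimming the decoded output back to the last sentence-ending punctuation is a common post-processing step.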
text = 'start text' => the start text should end up in the middle of the generated text, not only at the beginning
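On placing the start text in the middle: `generate` only continues a prompt, so the prompt always sits at the start of the output. One workaround (an assumption on my part, not something the posted code does) is to generate the continuation as usual, generate a separate lead-in from an empty or generic prompt, and stitch the pieces together. `generate_text` below is a hypothetical stand-in for a `model.generate` + `tokenizer.decode` call:

```python
def place_in_middle(start_text, generate_text):
    """Return text where `start_text` appears mid-stream, not at the front.

    `generate_text(prompt)` is a hypothetical helper that returns the
    prompt plus a model-generated continuation.
    """
    # Continuation that follows the start text (prompt included in output).
    continuation = generate_text(start_text)
    # Separately generated opening passage to put before the start text.
    lead_in = generate_text("")
    return f"{lead_in.strip()} {continuation.strip()}"
```

With real GPT-2 calls plugged in, the start text lands after the lead-in passage rather than opening the output.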