Applying a trained TensorFlow model with the transformers pipeline raises an error

I'm working on text summarization and have a problem I've been struggling with for two weeks without being able to figure out.
I'm using a notebook from this GitHub repository:
GitHub - flogothetis/Abstractive-Summarization-T5-Keras (abstractive text summarization with a T5 Transformer encoder-decoder model in Keras)

notebook link:
https://github.com/flogothetis/Abstractive-Summarization-T5-Keras/blob/main/AbstractiveSummarizationT5.ipynb

After training the model, I want to use the Hugging Face transformers pipeline to generate summaries:

from transformers import pipeline

summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
summarizer("some text")

but it raises an error:

AttributeError: 'Functional' object has no attribute 'config'

Does anyone have any idea how I can solve it?

Full error:

AttributeError                            Traceback (most recent call last)
/tmp/ipykernel_20/1872405895.py in <module>
----> 1 summarizer = pipeline("summarization", model=model, tokenizer="t5-small", framework="tf")
      2
      3 summarizer("The US has passed the peak on new coronavirus cases, President Donald Trump said and predicted that some states would reopen")

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, use_auth_token, model_kwargs, **kwargs)
    432                 break
    433
--> 434     return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs)

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/text2text_generation.py in __init__(self, *args, **kwargs)
     37
     38     def __init__(self, *args, **kwargs):
---> 39         super().__init__(*args, **kwargs)
     40
     41         self.check_model_type(

/opt/conda/lib/python3.7/site-packages/transformers/pipelines/base.py in __init__(self, model, tokenizer, modelcard, framework, task, args_parser, device, binary_output)
    548
    549         # Update config with task specific parameters
--> 550         task_specific_params = self.model.config.task_specific_params
    551         if task_specific_params is not None and task in task_specific_params:
    552             self.model.config.update(task_specific_params.get(task))

AttributeError: 'Functional' object has no attribute 'config'
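For what it's worth, the traceback suggests the pipeline expects `model` to be a transformers model object (which always carries a `.config`), while the notebook's `model` is a plain Keras `Functional` model, which has no such attribute. A minimal sketch of the attribute access that fails (the stub classes here are hypothetical, only mirroring the line in base.py the traceback points at):

```python
class KerasFunctionalStub:
    """Stands in for a plain Keras 'Functional' model: it has no .config."""


class TaskConfigStub:
    """Stands in for a transformers config with task-specific parameters."""
    task_specific_params = {"summarization": {"max_length": 200}}


class TransformersModelStub:
    """Stands in for a transformers TF model, which carries a config object."""
    config = TaskConfigStub()


def read_task_params(model, task):
    # Mirrors the failing line in pipelines/base.py:
    #   task_specific_params = self.model.config.task_specific_params
    params = model.config.task_specific_params
    return params.get(task) if params is not None else None


# A model with a config works:
print(read_task_params(TransformersModelStub(), "summarization"))

# A bare Keras model does not, reproducing the AttributeError:
try:
    read_task_params(KerasFunctionalStub(), "summarization")
except AttributeError as err:
    print(err)
```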