Didn't find op for builtin opcode 'PAD' version '2'

Hello, I would be glad if someone could help me.
I am trying to convert my CNN model and run it on a Himax board using TensorFlow Lite.
The model was trained in PyTorch; I converted it to ONNX, then to TensorFlow, and then to TensorFlow Lite.
The model is a simple MobileNetV2 with the last encoder layers removed. I removed those layers so the model could fit on the board, since it has very little memory.

I am using the TFLite Micro op resolver like this:

  static tflite::MicroMutableOpResolver<5> micro_op_resolver;
  micro_op_resolver.AddAveragePool2D();
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddRelu();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddPad();
  micro_op_resolver.AddPadV2();

I am receiving this error message:

Didn't find op for builtin opcode 'PAD' version '2'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?
                                                                                                                                                
Failed to get registration from op code PAD

I can share my ONNX model file and the Python scripts I am using to convert the model if needed.
Thank you so much for your attention.

Found the problem. It is really silly.

static tflite::MicroMutableOpResolver<5> micro_op_resolver;

I forgot to increase the 5 in this template argument. It sets how many operators the resolver can hold, so I probably made the op resolver drop the last operators I added, including Pad. The fixed registration is below.
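For reference, this is what the registration looks like after the fix (same operators as in my first snippet, with the template argument raised to match the eight Add calls):

  // Capacity raised from 5 to 8 so the resolver can hold every operator below.
  static tflite::MicroMutableOpResolver<8> micro_op_resolver;
  micro_op_resolver.AddAveragePool2D();
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddRelu();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddPad();
  micro_op_resolver.AddPadV2();

I believe each Add call also returns a TfLiteStatus, so checking the return values would have surfaced the mistake at registration time instead of at inference time.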

Now I am having a different problem.

Arena size is too small for all buffers. Needed 14961408 but only 842320 was available.

The message makes the problem obvious, but it doesn't make sense to me because my model is very small. The .cc file with the model data is only 368 KB, while the model provided in the Himax examples is 1.5 MB.
I don’t know why TensorFlow Lite Micro is saying that I need all this memory.
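For context, this is roughly the standard TFLM pattern I am following to provide the arena (a simplified sketch; kTensorArenaSize, the variable names, and the error handling are placeholders rather than my actual code):

  // Illustrative sketch only: placeholder names and size.
  constexpr int kTensorArenaSize = 800 * 1024;
  alignas(16) static uint8_t tensor_arena[kTensorArenaSize];

  // On older TFLM releases the constructor also takes an ErrorReporter*.
  static tflite::MicroInterpreter static_interpreter(
      model, micro_op_resolver, tensor_arena, kTensorArenaSize);
  tflite::MicroInterpreter* interpreter = &static_interpreter;

  // AllocateTensors() lays out the model's buffers inside tensor_arena;
  // the "Arena size is too small for all buffers" error is reported here, and
  // the "only ... was available" number is derived from the size of this buffer.
  if (interpreter->AllocateTensors() != kTfLiteOk) {
    // Handle the failure here.
  }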