TensorFlow Lite C API - Problem with input and output dimensions

Hello,

I am working on a neural network built with TensorFlow and exported in the .tflite format. The model's input dimensions are [1, 30, 1] and its output dimensions are [1, 30].

I am aiming to feed a float input[30] into the model and retrieve the inference result in a float output[30], from a C program.

The thing is that I can't figure out what the problem is: the code crashes every time when copying the output tensor.
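As a sanity check, once the tensors are allocated, the output tensor should report exactly 30 * sizeof(float) bytes. A small probe like the one below prints what the runtime actually sees (this is only a debugging sketch; interpreter is a placeholder for the TfLiteInterpreter* used in the code further down):

    #include <stdint.h>
    #include <stdio.h>
    #include "tensorflow/lite/c/c_api.h"

    // Print the shape and byte size the runtime reports for output tensor 0.
    // 'interpreter' is whatever TfLiteInterpreter* is in scope (placeholder name).
    static void dumpOutputTensor(const TfLiteInterpreter* interpreter) {
        const TfLiteTensor* out = TfLiteInterpreterGetOutputTensor(interpreter, 0);
        if (out == NULL) {
            printf("output tensor is NULL\n");
            return;
        }
        printf("num dims: %d\n", TfLiteTensorNumDims(out));
        for (int32_t i = 0; i < TfLiteTensorNumDims(out); ++i)
            printf("dim[%d] = %d\n", i, TfLiteTensorDim(out, i));
        printf("byte size: %zu (expected %zu)\n",
               TfLiteTensorByteSize(out), 30 * sizeof(float));
    }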

Here is the code I wrote:

    float input[30];
    float output[30];
    TfLiteStatus statut;

    // Resize the input tensor to [1, 30, 1].
    int inputTensorSize = 30;
    int inputDims[3] = { 1, inputTensorSize, 1 };
    TfLiteInterpreterResizeInputTensor(IA_config0->interpreter_direct, 0, inputDims, 3);

    // Re-allocate the tensors after the resize.
    TfLiteInterpreterAllocateTensors(IA_config0->interpreter_direct);

    // Re-fetch the input tensor and copy the input buffer into it.
    IA_config0->input_tensor_direct = TfLiteInterpreterGetInputTensor(IA_config0->interpreter_direct, 0);
    statut = TfLiteTensorCopyFromBuffer(IA_config0->input_tensor_direct, input,
        inputTensorSize * sizeof(float));

    // Execute inference.
    statut = TfLiteInterpreterInvoke(IA_config0->interpreter_direct);

    // Extract the output tensor data (this is the call that crashes).
    statut = TfLiteTensorCopyToBuffer(IA_config0->output_tensor_direct, output,
        TfLiteTensorByteSize(IA_config0->output_tensor_direct));
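statut is a TfLiteStatus; I have omitted error handling above for brevity. A minimal check that could wrap each call looks like this (just a sketch, kTfLiteOk being the success value of the C API):

    #include <stdio.h>
    #include "tensorflow/lite/c/c_api.h"

    // Abort the calling function as soon as a C API call reports a failure.
    #define TFL_CHECK(expr)                                                  \
        do {                                                                 \
            TfLiteStatus s_ = (expr);                                        \
            if (s_ != kTfLiteOk) {                                           \
                fprintf(stderr, "%s failed, status = %d\n", #expr, (int)s_); \
                return;                                                      \
            }                                                                \
        } while (0)

    // Usage: TFL_CHECK(TfLiteInterpreterInvoke(IA_config0->interpreter_direct));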

IA_config0 is initialized beforehand by a function:

    IA_config IA_config0;

    IA_config0.interpreter_direct = nullptr;
    IA_config0.options_direct = nullptr;
    IA_config0.model_direct = nullptr;
    IA_config0.input_tensor_direct = nullptr;
    IA_config0.output_tensor_direct = nullptr;

    bool m_modelQuantized = false;
    TfLiteDelegate* m_xnnpack_delegate;

    // Load the .tflite model from disk.
    std::string filename(name);
    const char* filename0 = filename.c_str();
    IA_config0.model_direct = TfLiteModelCreateFromFile(filename0);

    // Configure the interpreter (single thread).
    IA_config0.options_direct = TfLiteInterpreterOptionsCreate();
    TfLiteInterpreterOptionsSetNumThreads(IA_config0.options_direct, 1);

    // Create the interpreter.
    IA_config0.interpreter_direct = TfLiteInterpreterCreate(IA_config0.model_direct, IA_config0.options_direct);

    // Cache the input and output tensor handles.
    IA_config0.input_tensor_direct = TfLiteInterpreterGetInputTensor(IA_config0.interpreter_direct, 0);
    IA_config0.output_tensor_direct = TfLiteInterpreterGetOutputTensor(IA_config0.interpreter_direct, 0);
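For reference, IA_config is just a plain struct holding the TF Lite handles, essentially this (trimmed to the fields used here; the field types follow the C API signatures, and note that TfLiteInterpreterGetOutputTensor returns a const pointer):

    #include "tensorflow/lite/c/c_api.h"

    // Plain holder for the TF Lite C API handles (trimmed to the fields used above).
    typedef struct {
        TfLiteModel* model_direct;                 // TfLiteModelCreateFromFile
        TfLiteInterpreterOptions* options_direct;  // TfLiteInterpreterOptionsCreate
        TfLiteInterpreter* interpreter_direct;     // TfLiteInterpreterCreate
        TfLiteTensor* input_tensor_direct;         // TfLiteInterpreterGetInputTensor
        const TfLiteTensor* output_tensor_direct;  // TfLiteInterpreterGetOutputTensor
    } IA_config;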

If someone has ever faced this kind of issue, I would be really grateful for any help sorting it out.

Thanks in advance,

Laurick