TensorFlow Lite Micro returning null tensor data pointers after AllocateTensors

I am having problems loading a model into TensorFlow Lite Micro; the sample code below reproduces the error.
The input and output tensors report their size in bytes correctly, but not their type, which is expected to be
float32, and the data pointers returned are null.

The example model is not trained, but the results are the same after training. In the full code, the tensor data
pointers are cast to float*.

The code is all running on Debian Linux. I downloaded TensorFlow Lite Micro from the tensorflow/tflite-micro repository on GitHub and built:
tflite-micro-main/gen/linux_x86_64_default/lib/libtensorflow-microlite.a
with the command:
make -f tensorflow/lite/micro/tools/make/Makefile
so it is running under Linux as an x86 target.

The model is loaded without error from the model.cc file. The interpreter is instantiated and:
interpreter->AllocateTensors();
runs without error.

I would appreciate any insight into this issue.

This C++ code:

tflite::InitializeTarget();

// Map the model into a usable data structure. This doesn't involve any
// copying or parsing, it's a very lightweight operation.
model = tflite::GetModel(no_alloc_model_tflite);
if (model->version() != TFLITE_SCHEMA_VERSION) {
  MicroPrintf(
      "Model provided is schema version %d not equal "
      "to supported version %d.",
      model->version(), TFLITE_SCHEMA_VERSION);
  return NULL;
}

// This pulls in all the operation implementations we need.
// NOLINTNEXTLINE(runtime-global-variables)
static tflite::AllOpsResolver resolver;

// Build an interpreter to run the model with.
tflite::MicroInterpreter theInterpreter(model, resolver, tensor_arena, kTensorArenaSize);
interpreter = &theInterpreter;

// Allocate memory from the tensor_arena for the model's tensors.
TfLiteStatus allocate_status = interpreter->AllocateTensors();
if (allocate_status != kTfLiteOk) {
  MicroPrintf("AllocateTensors() failed");
  return NULL;
}

input = interpreter->input_tensor(0);
output = interpreter->output_tensor(0);

// Obtain pointers to the model's input and output tensors.
// These return null no matter how they are typed.
auto inData = input->data.data;
auto outData = output->data.data;

printf("Input pointer %i, Output pointer %i\n", inData, outData);
printf("Input type %i, Output type %i\n", input->type, output->type);
printf("Input typename %s, Output typename %s\n", TfLiteTypeGetName(input->type), TfLiteTypeGetName(output->type));
printf("Input bytes %i, Output bytes %i\n", input->bytes, output->bytes);

produces this result:

Input pointer 0, Output pointer 0
Input type 0, Output type 0
Input typename NOTYPE, Output typename NOTYPE
Input bytes 2304, Output bytes 8

A simplified model (in Python) looks like this:

model = models.Sequential()

model.add(layers.Conv2D(16,
                        (2, 2),
                        activation='relu',
                        input_shape=[24, 24, 1]))
model.add(layers.MaxPooling2D(pool_size=(4, 4)))

model.add(layers.Flatten())
model.add(layers.Dense(2, activation='sigmoid'))

model.compile(optimizer='adam', loss='mse', metrics=['mae'])

I am facing a similar problem, and as far as I have been able to debug it, the issue appears to arise from the Flatten layer. I tried to use Reshape instead, but it still does not work. I am still stuck on this. Did you ever find a solution?