TensorFlow Lite Micro returning null pointers after AllocateTensors

I am having problems reading a model into TensorFlow Lite Micro. Below is sample code that reproduces the error.
The input and output tensors appear to know their size in bytes, but not their type (which should be float32), and
the data pointers returned are null.

The example model is not trained, but the results are the same after training. In the full code, the tensors' data
pointers are cast to float*.
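For reference, that cast looks like this (data.f is the float* member of TfLiteTensor's TfLitePtrUnion; input here is the TfLiteTensor* obtained in the code below):

float* in_data = input->data.f;  // same pointer as input->data.data, typed as float*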

The code is all running on Debian Linux. I downloaded TensorFlow Lite Micro from https://github.com/tensorflow/tflite-micro and built:
tflite-micro-main/gen/linux_x86_64_default/lib/libtensorflow-microlite.a
with command:
make -f tensorflow/lite/micro/tools/make/Makefile
so it's running under Linux as an x86 target.

The model is loaded without error from the model.cc file. The interpreter is instantiated and:
interpreter->AllocateTensors();
runs without error.

I would appreciate any insight into this issue.

This C++ code:

tflite::InitializeTarget();

// Map the model into a usable data structure. This doesn't involve any
// copying or parsing, it's a very lightweight operation.
model = tflite::GetModel(no_alloc_model_tflite);
if (model->version() != TFLITE_SCHEMA_VERSION) {
  MicroPrintf(
      "Model provided is schema version %d not equal "
      "to supported version %d.",
      model->version(), TFLITE_SCHEMA_VERSION);
  return NULL;
}

// This pulls in all the operation implementations we need.
// NOLINTNEXTLINE(runtime-global-variables)
static tflite::AllOpsResolver resolver;

// Build an interpreter to run the model with.
tflite::MicroInterpreter theInterpreter(model, resolver, tensor_arena, kTensorArenaSize);
interpreter = &theInterpreter;
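// Note: theInterpreter is a local object here; if `interpreter` is used
// after this function returns, declare theInterpreter static so the
// pointer does not dangle.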

// Allocate memory from the tensor_arena for the model's tensors.
TfLiteStatus allocate_status = interpreter->AllocateTensors();
if (allocate_status != kTfLiteOk) {
  MicroPrintf("AllocateTensors() failed");
  return NULL;
}

input = interpreter->input_tensor(0);
output = interpreter->output_tensor(0);

// Obtain pointers to the model's input and output tensors.
// These return null no matter how they are typed.
auto inData = input->data.data;
auto outData = output->data.data;

printf("Input pointer %p, Output pointer %p\n", inData, outData);
printf("Input type %i, Output type %i\n", input->type, output->type);
printf("Input typename %s, Output typename %s\n",
       TfLiteTypeGetName(input->type), TfLiteTypeGetName(output->type));
printf("Input bytes %zu, Output bytes %zu\n", input->bytes, output->bytes);

Produces this result:

Input pointer (nil), Output pointer (nil)
Input type 0, Output type 0
Input typename NOTYPE, Output typename NOTYPE
Input bytes 2304, Output bytes 8

A simplified model (in Python) looks like this:

from tensorflow.keras import layers, models

model = models.Sequential()

model.add(layers.Conv2D(16,
                        (2, 2),
                        activation='relu',
                        input_shape=[24, 24, 1]))
model.add(layers.MaxPooling2D(pool_size=(4, 4)))

model.add(layers.Flatten())
model.add(layers.Dense(2, activation='sigmoid'))

model.compile(optimizer='adam', loss='mse', metrics=['mae'])

I am facing a similar problem, and as far as I have been able to debug, it appears to arise from the Flatten layer. I tried to use Reshape instead, but it still does not work. I am still stuck on the problem. Did you already find a solution?

Hi, in our case this issue stemmed from the requirement that, for applications linking against the tflite-micro library, TF_LITE_STATIC_MEMORY=1 must be defined in the application's makefile.
Defining this flag resolves the problem.
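For example, assuming the application is built with GNU Make (CXXFLAGS is the conventional variable name; adjust to your build setup):

CXXFLAGS += -DTF_LITE_STATIC_MEMORY=1

Equivalently, pass -DTF_LITE_STATIC_MEMORY=1 directly on the compiler command line.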

Yes, that should work, and I can explain why:

The TfLiteTensor struct compiled into your app has one layout, whereas the TfLiteTensor struct compiled into TensorFlow's library has a different, smaller one.

So the struct the library fills in cannot be read correctly through your app's definition of the struct: the sizes differ and the fields sit at different offsets, so reads land on the wrong bytes.

Why it happened, and how to check and fix it:

On June 19, 2020, a TensorFlow commit changed a few things. It introduced two layouts for the TfLiteTensor struct, selected by the macro TF_LITE_STATIC_MEMORY: if the macro is defined, the reduced struct is compiled; otherwise the full one is.

By default the tflite-micro library is built with this macro enabled, so the reduced TfLiteTensor is used on the library side. On your side, where you create the interpreter, the full layout is selected because you have not enabled the macro.
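Here is a minimal, self-contained demonstration of the mechanism (the struct and macro names are hypothetical, not the real TfLiteTensor definition). Compile it once with -DDEMO_STATIC_MEMORY and once without, and the two builds report different sizes and field offsets; an object file built one way cannot correctly read a struct that was filled in by an object file built the other way.

#include <stddef.h>
#include <stdio.h>

#ifdef DEMO_STATIC_MEMORY
// Reduced layout, selected when the macro is defined.
typedef struct {
  void* data;
  size_t bytes;
} DemoTensor;
#else
// Full layout, selected when the macro is not defined; the extra field
// shifts `data` and `bytes` to different offsets.
typedef struct {
  const char* name;
  void* data;
  size_t bytes;
} DemoTensor;
#endif

int main(void) {
  printf("sizeof(DemoTensor) = %zu, offsetof(data) = %zu\n",
         sizeof(DemoTensor), offsetof(DemoTensor, data));
  return 0;
}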

So either add this macro to your application's Makefile, or change TensorFlow's make.

You can check this by logging the size of the structure in your app and in the TF library code:

In the C++ app, add:

printf("size of TfLiteTensor is: %zu\n", sizeof(TfLiteTensor));

You can add the same on the TF side, in:

micro_interpreter.cc
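For example, an illustrative debug line (not part of the library) that you could drop into, e.g., MicroInterpreter::AllocateTensors():

MicroPrintf("size of TfLiteTensor in lib: %u", (unsigned)sizeof(TfLiteTensor));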

Regards,
GRT