Always getting the same output from a TFLite Micro model, regardless of the input?

Hey there,

I'm trying to run a CNN on an ESP32-S2 microcontroller, programming it through the Arduino IDE. The model is quite simple: the input shape is (1, 256, 256, 1) and the test images are filled entirely with either 0s or 1s. When the image is all 0s, the model should output 0, and vice versa. I quantized the model so that it uses only INT8 data types throughout, and I allocated about 500 KB of PSRAM to run the model in, as the regular DRAM is too small for it.
My current problem is that, no matter what I do, the output is always 1, even when the input image is all 0s. Training and testing went fine, so there are no problems on that side.
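
In case it matters, this is roughly how I read the result back, where output is the interpreter's output tensor (the dequantization with the output's scale and zero point is my own assumption about how an INT8 output has to be interpreted):

int8_t raw = output->data.int8[0];
float result = (raw - output->params.zero_point) * output->params.scale;
Serial.println(result);  // always prints roughly 1, even for an all-zero image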
As I have never tried to run a CNN on an ESP32 before, I am wondering whether I am providing the data to the model in the right format. There is not much documentation on how exactly this should be done. The example code always shows something like input->data.int8[i] = data[i], where data is copied into the model's input tensor element by element. But how is this done with images, which have multiple dimensions?
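
To be concrete, this is the kind of loop I have in mind (just a sketch; the row-major flattening and the quantization with the input's scale and zero point are assumptions on my part, and image is a placeholder for my 256x256 pixel buffer):

const float in_scale = input->params.scale;
const int32_t in_zero_point = input->params.zero_point;

for (int y = 0; y < 256; y++) {
  for (int x = 0; x < 256; x++) {
    float pixel = image[y][x];  // 0.0f or 1.0f in my test images
    int32_t q = (int32_t)roundf(pixel / in_scale) + in_zero_point;
    if (q < -128) q = -128;  // clamp to the INT8 range
    if (q > 127) q = 127;
    input->data.int8[y * 256 + x] = (int8_t)q;
  }
}

Is that index order (y * 256 + x) correct for a (1, 256, 256, 1) input tensor, or does it expect the data in a different layout?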

I also figured that running the model in PSRAM could cause problems as well, because the tensor_arena is itself a pointer. In code I wrote this:
constexpr int kTensorArenaSize = 524288; //2^19
uint8_t* tensor_arena;
// Allocate memory in PSRAM for TensorFlow Lite
tensor_arena = (uint8_t*)ps_malloc(kTensorArenaSize);
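
For completeness, this is roughly how I then hand the arena to the interpreter (simplified; model and resolver are set up elsewhere, the exact MicroInterpreter constructor arguments depend on the library version, and the null check is something I only added for this post):

if (tensor_arena == nullptr) {
  Serial.println("ps_malloc for the tensor arena failed");
  while (true) {}  // halt, nothing works without the arena
}

// depending on the library version the constructor may also take an ErrorReporter
static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena, kTensorArenaSize);
if (interpreter.AllocateTensors() != kTfLiteOk) {
  Serial.println("AllocateTensors() failed");
  while (true) {}
}

TfLiteTensor* input = interpreter.input(0);
TfLiteTensor* output = interpreter.output(0);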

I would be glad if someone could shed some light on this.