Trying to modify person_detection.ino project to use my own model. Need help!

Hi all,

I'm looking to use the person_detection Arduino sketch that you taught, but to modify it to run my own network. Unfortunately, it isn't working: I keep getting "Invoke failed." errors on my Nano 33 BLE. I was hoping you could guide me in the right direction and show me what I'm doing wrong.

Below is the procedure I follow, and attached are my .tflite model file and my modified person_detection project. The only two files in the project that I actually modify from the original are person_detection.ino and person_detect_model_data.cpp.

Thank you in advance for your help!

Here is the procedure I followed:

1. Create a model with Teachable Machine.

2. Download the model as a Keras file.

3. Convert the Keras model to a TFLite model, then integer-quantize it, using the code here: https://colab.research.google.com/drive/12O9qO6bAI72B0RTt88sQPkcHkC16Mb8O?usp=sharing (see the sketch after this list).

4. Convert the .tflite file to a C array using: xxd -i converted_model.tflite > model_data.cc

5. Replace the value of g_person_detect_model_data_len with the new value from my model_data.cc.

6. Replace the model data with the new array contents from my model_data.cc.

7. Use https://netron.app/ to visualize the network and see which ops need to be registered with the MicroMutableOpResolver. Add them as necessary.
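For reference, step 3 typically looks something like the following; this is a minimal sketch of the standard tf.lite.TFLiteConverter full-integer flow, not the exact Colab code. The file names, input shape, and the random representative_dataset are placeholders; real calibration data shaped and scaled like the training inputs should be used.

import numpy as np
import tensorflow as tf

# Load the Keras model downloaded from Teachable Machine (placeholder name).
model = tf.keras.models.load_model("keras_model.h5")

# Placeholder calibration generator: in practice, yield a few hundred real
# samples with the model's actual input shape and preprocessing.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization with int8 inputs and outputs, matching the
# input->data.int8 / output->data.int8 accesses in person_detection.ino.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)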

Again, those two files are the only ones in the project that I touch at all. Please help me figure out why it's not working!

Here is my person_detection.ino file for you to look at:

/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#include <TensorFlowLite.h>

#include "main_functions.h"

#include "detection_responder.h"
#include "image_provider.h"
#include "model_settings.h"
#include "person_detect_model_data.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;

// In order to use optimized tensorflow lite kernels, a signed int8_t quantized
// model is preferred over the legacy unsigned model format. This means that
// throughout this project, input images must be converted from unsigned to
// signed format. The easiest and quickest way to convert from unsigned to
// signed 8-bit integers is to subtract 128 from the unsigned value to get a
// signed value.
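// For illustration, converting a single pixel (unsigned_pixel is a uint8_t):
//
//   int8_t signed_pixel = static_cast<int8_t>(unsigned_pixel - 128);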

// An area of memory to use for input, output, and intermediate arrays.
constexpr int kTensorArenaSize = 136 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];
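
// Note: this size was tuned for the original person-detection model; a custom
// model will likely need a different (often larger) value. After a successful
// AllocateTensors(), interpreter->arena_used_bytes() reports actual usage.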
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_person_detect_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // Pull in only the operation implementations we need.
  // This relies on a complete list of all the ops needed by this graph.
  // An easier approach is to just use the AllOpsResolver, but this will
  // incur some penalty in code space for op implementations that are not
  // needed by this graph.
  //
  // tflite::AllOpsResolver resolver;
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroMutableOpResolver<10> micro_op_resolver;
  
  micro_op_resolver.AddPad();
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddRelu6();
  micro_op_resolver.AddRelu();
  micro_op_resolver.AddAdd();
  micro_op_resolver.AddMean();
  micro_op_resolver.AddFullyConnected();
  micro_op_resolver.AddQuantize();
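
  // Note: the MicroMutableOpResolver template argument (10 above) must be at
  // least the number of AddXxx() calls, and every op in the graph shown by
  // Netron must be registered here, or tensor allocation fails with a
  // "Didn't find op for builtin opcode" error.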
  
  // Build an interpreter to run the model with.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroInterpreter static_interpreter(
      model, micro_op_resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  // Get information about the memory area to use for the model's input.
  input = interpreter->input(0);
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Get image from provider.
  if (kTfLiteOk != GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
                            input->data.int8)) {
    TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
  }

  // Run the model on this input and make sure it succeeds.
  if (kTfLiteOk != interpreter->Invoke()) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
  }

  TfLiteTensor* output = interpreter->output(0);

  // Process the inference results.
  int8_t person_score = output->data.int8[kPersonIndex];
  int8_t no_person_score = output->data.int8[kNotAPersonIndex];
  RespondToDetection(error_reporter, person_score, no_person_score);
}
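
One way to sanity-check the converted model before flashing is to inspect its tensor details on the desktop with the standard tf.lite.Interpreter API; a minimal sketch, assuming converted_model.tflite is the file produced in step 3:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="converted_model.tflite")
interpreter.allocate_tensors()

# Both dtypes should be int8 for a fully integer-quantized model, and the
# input shape must agree with kNumRows/kNumCols/kNumChannels in model_settings.h.
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input :", inp["dtype"], inp["shape"])
print("output:", out["dtype"], out["shape"])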

I am doing exactly the same thing.

When I run it, I get the following output:

Didn't find op for builtin opcode 'DEPTHWISE_CONV_2D' version '3'

Failed to get registration from op code d

AllocateTensors() failed

Guru Meditation Error: Core 1 panic'ed (LoadProhibited). Exception was unhandled.

Core 1 register dump:
PC : 0x400d2890 PS : 0x00060d30 A0 : 0x800daa57 A1 : 0x3ffb2800
A2 : 0x3ffd3efc A3 : 0x00000000 A4 : 0x3ffc16e0 A5 : 0x3ffc1638
A6 : 0x3ffc1640 A7 : 0x00000001 A8 : 0x800d2851 A9 : 0x3ffb27b0
A10 : 0x00000000 A11 : 0x00000060 A12 : 0x00000060 A13 : 0x00000001
A14 : 0x00011800 A15 : 0x3ffc1640 SAR : 0x00000006 EXCCAUSE: 0x0000001c
EXCVADDR: 0x00000004 LBEG : 0x40089029 LEND : 0x40089039 LCOUNT : 0xffffffff

Backtrace: 0x400d288d:0x3ffb2800 0x400daa54:0x3ffb2820

ELF file SHA256: 0000000000000000
Rebooting…