Opcode not found on Edge TPU (Google Coral Micro)

I am trying to convert a model trained on Edge Impulse into a TensorFlow Lite file that runs on my Google Coral Micro. The model is MobileNetV2 SSD FPN-Lite 320x320, trained on custom images. The TF file is here.

With that, I converted it to TFLite using the following code:

# Load model
loaded_model = tf.saved_model.load(TF_MODEL_PATH)

# Get input dimensions
input_dims = loaded_model.signatures["serving_default"].inputs[0].shape

# Set default dimensions
input_dims = list(input_dims)
if None in input_dims:
  print("None dimension found, setting defaults")
  input_dims[0] = 1    # Batch size
  input_dims[1] = 320
  input_dims[2] = 320
  input_dims[3] = 3    # RGB channels
input_dims = tuple(input_dims)

# Construct a representative dataset for quantization
# NOTE: this works for images only (assume uniform 2D images). You
# will need to modify this for other types of data
def get_representative_dataset():
  for _ in range(250):
    data = np.random.uniform(0, 1.0, size=input_dims).astype(np.float32)
    yield [data]

# Convert to quantized TFLite model
converter = tf.lite.TFLiteConverter.from_saved_model(TF_MODEL_PATH)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = get_representative_dataset
tflite_model = converter.convert()
with open(TFLITE_MODEL_PATH, "wb") as f:
  f.write(tflite_model)
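Before sending a compiled model to the board, it can save time to sanity-check the quantized .tflite in the desktop TFLite interpreter, since `AllocateTensors()` there is the same step that later fails on the Coral Micro. Below is a minimal, self-contained sketch of that check; the tiny conv graph is a stand-in for illustration only (not the SSD model from the question), and in practice you would pass your converted model bytes instead:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in graph so the sketch is self-contained; in practice, point
# the converter at your SavedModel directory instead.
weights = tf.constant(np.random.rand(3, 3, 3, 4).astype(np.float32))

@tf.function(input_signature=[tf.TensorSpec([1, 320, 320, 3], tf.float32)])
def model_fn(x):
  return tf.nn.relu(tf.nn.conv2d(x, weights, strides=1, padding="SAME"))

def representative_dataset():
  for _ in range(10):
    yield [np.random.uniform(0, 1, (1, 320, 320, 3)).astype(np.float32)]

# Same full-integer quantization flags as in the question.
converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [model_fn.get_concrete_function()])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset
tflite_model = converter.convert()

# Desktop sanity check: AllocateTensors() here is the same call that fails
# on the Coral Micro, so a failure at this point rules out the board.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
input_dtype = interpreter.get_input_details()[0]["dtype"]
print(input_dtype)
```

If this passes on the desktop but still fails on the board, the problem is in the on-device runtime (e.g. an op missing from its resolver) rather than the conversion itself.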

From there, I used the Edge TPU compiler to map most of the ops to the TPU:

edgetpu_compiler --min_runtime_version 13 ei-xrp-bucket-delivery.lite 

which gave the following output:

Edge TPU Compiler version 16.0.384591198
Started a compilation timeout timer of 180 seconds.

Model compiled successfully in 888 ms.

Input model: ei-xrp-bucket-delivery.lite
Input size: 3.57MiB
Output model: ei-xrp-bucket-delivery_edgetpu.tflite
Output size: 3.89MiB
On-chip memory used for caching model parameters: 3.20MiB
On-chip memory remaining for caching model parameters: 4.40MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 166
Operation log: ei-xrp-bucket-delivery_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 112
Number of operations that will run on CPU: 54
See the operation log file for individual operation details.
Compilation child process completed within timeout period.
Compilation succeeded! 

I then used the following code to run the compiled model on my Google Coral Micro.

The board gives me the following error:

Didn't find op for builtin opcode 'PACK' version '1'. An older version of this builtin might be supported. Are you using an old TFLite binary with a newer model?

Failed to get registration from op code PACK
ERROR: AllocateTensors() failed

I have tried the TFLite conversion process with TF versions 2.8.2, 2.11, and 2.14. They all result in the same error. From what I understand, PACK is supported by TFLite.

[EDIT] Full disclosure: I cross-posted this help request to a GitHub issue here: github.com/google-coral/edgetpu/issues/813


Here are a few things you can try to tackle this issue:

  1. Make sure your TensorFlow Lite runtime version is compatible with the Edge TPU compiler. Mismatched versions are a common source of this kind of error.
  2. Make sure you are using the latest Edge TPU Compiler. A newer release may handle additional op versions.
  3. Confirm that every operation in your model is supported by the Edge TPU. The Edge TPU Model Requirements page (g.co/coral/model-reqs) lists the supported operations.
  4. Experiment with the optimization settings during TFLite conversion. Different flags can change which ops and op versions the converter emits.
  5. Consider modifying your model so it uses only operations supported by the Edge TPU, again using the model requirements page as a guide.
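On point 4 specifically, one converter setting worth knowing about is allowing float fallback alongside the int8 builtins. This is a sketch, not a guaranteed fix, and `"saved_model_dir"` is a placeholder path: ops mapped to the Edge TPU must still be fully int8, so anything that falls back to float will run on the CPU instead.

```python
import tensorflow as tf

# Sketch only: "saved_model_dir" is a placeholder for your exported model.
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Letting the converter fall back to float builtins keeps conversion from
# failing on ops that cannot be quantized; the Edge TPU compiler maps the
# int8 portion to the TPU and leaves fallback ops on the CPU.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS_INT8,  # int8 kernels for the Edge TPU
    tf.lite.OpsSet.TFLITE_BUILTINS,       # float fallback, runs on CPU
]
```

Note that with mixed ops you generally keep float model inputs/outputs rather than setting `inference_input_type`/`inference_output_type` to uint8.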


Thank you for the quick response! I tried a few versions of the TFLite converter, and I still ended up with the same error. I think it has something to do with how the model was trained (i.e. which version of TF was used), which I do not have control over.

I am trying a different route: Google MediaPipe (Object detection model customization guide | MediaPipe | Google for Developers), and so far that seems to work when compiled for the Edge TPU. The MobileNetV2-I320 model from that guide is running on the Coral Micro, which is very promising.


@ShawnHymel great to hear that you found a fix.

Google MediaPipe’s object detection models align well with Coral devices and can sidestep many of the compatibility issues between TensorFlow Lite and the Edge TPU.
