Mask R-CNN to TFLite with quantization

I’m trying to get either Fast R-CNN or Mask R-CNN running on an NXP NPU, but every model I’ve tried hits layers that give the converter fits.
The model posted by leekunhee/Mask_RCNN converts to TFLite successfully with TensorFlow 2.13 (and earlier versions), but turning on quantization blocks on “tf.CropAndResize”. Enabling converter.allow_custom_ops causes the quantize engine to crash. I’ve also tried Fast R-CNN models, but they use BilinearInterpolation layers that work fine when run eagerly yet can’t be converted to tf.keras models, because they rely on tensor indexing, which isn’t possible with symbolic Keras tensors.
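For reference, the conversion path I’m using looks roughly like the sketch below. The real model is the Mask R-CNN export; a tiny stand-in Keras model is used here so the snippet is self-contained, and the layer names are hypothetical:

```python
import numpy as np
import tensorflow as tf

# Stand-in for the exported Mask R-CNN Keras model (hypothetical;
# the real model is the one from leekunhee/Mask_RCNN).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(4, 3, activation="relu", input_shape=(8, 8, 3)),
])

def representative_dataset():
    # Calibration samples required for full-integer quantization.
    for _ in range(10):
        yield [np.random.rand(1, 8, 8, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Plain float conversion works; it is this quantization setup that
# trips over tf.CropAndResize on the real model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# converter.allow_custom_ops = True  # crashes the quantize engine for me

tflite_model = converter.convert()
```

With the stand-in model this produces a valid quantized flatbuffer; swapping in the real Mask R-CNN model is where the failures above appear.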

How do you create a TensorFlow model with layers that absolutely require real data to build a runnable graph?
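To make the problem concrete, here is a minimal, hypothetical layer of that kind (not from either model): it runs fine eagerly because the crop bounds are concrete values, but the int() calls make it impossible to trace with symbolic Keras tensors:

```python
import tensorflow as tf

class DynamicCrop(tf.keras.layers.Layer):
    """Crops an image using data-dependent bounds. Works eagerly,
    but fails under symbolic tracing because int() needs real data."""
    def call(self, inputs):
        image, box = inputs
        y0, y1 = int(box[0]), int(box[1])  # requires concrete values
        return image[:, y0:y1, :, :]

image = tf.random.uniform((1, 8, 8, 3))
box = tf.constant([2, 6])
out = DynamicCrop()([image, box])  # eager call succeeds
```

Eagerly this yields a (1, 4, 8, 3) crop, but wrapping the same layer in a functional Keras model aborts at the int() conversion, which is essentially what BilinearInterpolation runs into.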

What is involved in getting custom tensor-processing layers into (a) TFLite, (b) quantized TFLite, and (c) NPU-native opcodes?