If you want to generate a TensorFlow.js model, you can use the following procedure: dequantize the tflite model to generate ONNX, regenerate a TensorFlow saved_model from it, and then convert the saved_model to TFJS.
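As a rough sketch of that pipeline, assuming tf2onnx, onnx2tf, and the tensorflowjs converter are installed (file names below are placeholders, not verified against your model):

```shell
# 1. Convert the tflite model to ONNX (tf2onnx accepts tflite input directly).
python -m tf2onnx.convert --tflite model.tflite --output model.onnx

# 2. Regenerate a TensorFlow saved_model from the ONNX file with onnx2tf.
#    -osd emits the saved_model with SignatureDefs, which downstream
#    converters need. Output lands in ./saved_model by default.
onnx2tf -i model.onnx -osd

# 3. Convert the saved_model to TensorFlow.js graph-model format.
tensorflowjs_converter --input_format=tf_saved_model \
  saved_model tfjs_model
```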
TypeError: argument of type 'NoneType' is not iterable
ERROR: input_onnx_file_path: /Users/someuser/Downloads/myModel/lite-model_craft-text-detector_dr_1.onnx
ERROR: onnx_op_name: onnx_tf_prefix_MatMul_84;StatefulPartitionedCall/onnx_tf_prefix_MatMul_845
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
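For reference, the static-shape rewrite that the log suggests can be sketched as follows. The file name is taken from the log above; the input name and shape are illustrative (check the exact `-ois` syntax against the onnx2tf README):

```shell
# Pin the dynamic batch dimension to 1 with -b.
onnx2tf -i lite-model_craft-text-detector_dr_1.onnx -b 1

# Or overwrite the full shape of a named input OP with -ois
# (name and dimensions here are assumptions, not verified values).
onnx2tf -i lite-model_craft-text-detector_dr_1.onnx -ois input:1,3,800,600
```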
I expect this is probably due to an outdated version of some package used in the backend. Therefore, I recommend trying the following. If you are a Windows user, running the commands inside WSL2 Ubuntu should go very smoothly.
...
Model optimizing complete!
Automatic generation of each OP name started ========================================
Automatic generation of each OP name complete!
Model loaded ========================================================================
Model convertion started ============================================================
INFO: input_op_name: input shape: [1, 3, 800, 600] dtype: float32
Killed
I used the Docker approach you suggested above.
My computer is a MacBook (macOS 13.3.1 (a)).
I am not going to advise you on issues specific to your environment. `Killed` is almost certainly the OS terminating the process for lack of RAM. TensorFlow consumes an unnecessarily large amount of RAM; that is not an onnx2tf problem.
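One thing worth checking on macOS: Docker containers run inside a VM whose memory allocation is capped, so `Killed` during conversion is often just the container hitting that cap. A hypothetical way to raise the limit from the CLI (the image tag and size are assumptions; check the onnx2tf README for the current image name, and note the Docker Desktop VM itself must be allotted at least this much RAM in its settings):

```shell
# Raise the container memory limit; 16g is an arbitrary example value.
docker run --rm -it \
  -v "$PWD":/workdir -w /workdir \
  --memory=16g \
  docker.io/pinto0309/onnx2tf:latest \
  onnx2tf -i lite-model_craft-text-detector_dr_1.onnx
```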
ERROR: input_onnx_file_path: lite-model_rosetta_dr_1.onnx
ERROR: onnx_op_name: transpose_1;StatefulPartitionedCall/transpose_11
ERROR: Read this and deal with it. https://github.com/PINTO0309/onnx2tf#parameter-replacement
ERROR: Alternatively, if the input OP has a dynamic dimension, use the -b or -ois option to rewrite it to a static shape and try again.
ERROR: If the input OP of ONNX before conversion is NHWC or an irregular channel arrangement other than NCHW, use the -kt or -kat option.
ERROR: Also, for models that include NonMaxSuppression in the post-processing, try the -onwdt option.
Could you please suggest a way to fix it?
Thank you in advance.
PS: I've tried to use the model exported as json+bin in my code, but it gave me the following error:
Error: layer: Improper config format:
{
... <- all json content here
}
'className' and 'config' must set.
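For what it's worth, the `Improper config format` / `'className' and 'config' must set` error is what TFJS throws when a graph-model artifact (which is what `tensorflowjs_converter` produces from a saved_model) is loaded with the layers API. A sketch, assuming the json+bin files are served next to the page (the path and the NHWC input shape are placeholders, not verified against this model):

```javascript
import * as tf from '@tensorflow/tfjs';

async function run() {
  // tf.loadLayersModel('model/model.json') fails with "Improper config
  // format" on a graph-model artifact; use loadGraphModel instead.
  const model = await tf.loadGraphModel('model/model.json');

  // Placeholder input tensor; shape is an assumption for illustration.
  const input = tf.zeros([1, 800, 600, 3]);
  const output = model.predict(input);
  output.print();
}

run();
```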