How can I check that a neural network in tfjs_graph_model format works in a browser?

Colleagues, please help a novice programmer run a neural network in the browser.

I fine-tuned the pre-trained ssd_mobilenet_v2_fpnlite_320x320_coco17 model in Google Colaboratory
and saved it in TensorFlow SavedModel format:

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:
signature_def['__saved_model_init_op']:
  The given SavedModel SignatureDef contains the following input(s):
  The given SavedModel SignatureDef contains the following output(s):
    outputs['__saved_model_init_op'] tensor_info:
        dtype: DT_INVALID
        shape: unknown_rank
        name: NoOp
  Method name is: 

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_UINT8
        shape: (1, -1, -1, 3)
        name: serving_default_input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_anchor_indices'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:0
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 4)
        name: StatefulPartitionedCall:1
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:2
    outputs['detection_multiclass_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 249)
        name: StatefulPartitionedCall:3
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:4
    outputs['num_detections'] tensor_info:
        dtype: DT_FLOAT
        shape: (1)
        name: StatefulPartitionedCall:5
    outputs['raw_detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 130944, 4)
        name: StatefulPartitionedCall:6
    outputs['raw_detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 130944, 249)
        name: StatefulPartitionedCall:7
  Method name is: tensorflow/serving/predict

Concrete Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          input_tensor: TensorSpec(shape=(1, None, None, 3), dtype=tf.uint8, name='input_tensor')

I checked the quality of the model, and it is excellent!
After that, I converted the saved model to TensorFlow.js with the following command:

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='detection_boxes,detection_classes,detection_features,detection_multiclass_scores,num_detections,raw_detection_boxes,raw_detection_scores' \
    --output_format=tfjs_graph_model \
    /content/gdrive/MyDrive/model_scoarbord/export/inference_graph/saved_model \
    /content/gdrive/MyDrive/model_scoarbord/web_model

When loading the network in the browser, I created a zero tensor to check that the model runs.

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@3.12.0/dist/tf.min.js"></script>
    <title>Document</title>

</head>

<body onload="">

    <script>
        async function loadModel() {
            const modelUrl = 'model.json';
            const model = await tf.loadGraphModel(modelUrl);

            console.log('Model loaded');

            // create a zero tensor to test the model
            const zeros = tf.zeros([1, -1, -1, 3]);
            const zeros2 = zeros.toInt();

            // check that the model runs
            model.predict(zeros2).print();
            return model;
        }
        loadModel();
    </script>
</body>
</html>

Accordingly, my directory looks like this:
group1-shard1of3.bin
group1-shard2of3.bin
group1-shard3of3.bin
index.html
model.json

After starting Live Server in Visual Studio Code, I see the following error:

util_base.js:153 Uncaught (in promise) Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32

I tried to explicitly cast the tensor with const zeros2 = zeros.toInt() and made a test prediction with zeros2, but got another error:

graph_executor.js:166 Uncaught (in promise) Error: This execution contains the node 'StatefulPartitionedCall/map/while/exit/_435', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [StatefulPartitionedCall/map/TensorArrayV2Stack_1/TensorListStack]

Please tell me, what am I doing wrong?
How else can I check that a neural network in the tfjs_graph_model format works?

The error message says it all.

Basically, model.predict is just a wrapper around model.execute.

If a model contains any control-flow operations (loops, conditionals, While, Exit, etc.), it is automatically marked as async, so it must be executed with await model.executeAsync() instead.
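A minimal sketch for your case (the model path comes from your question; the 320x320 input size is an assumption based on the model name, and any valid height/width should work since the signature is [1, -1, -1, 3]):

```javascript
// Sketch: sanity-checking the converted graph model in the browser.
// Assumes tf.js 3.x is loaded and model.json sits next to the page.
async function testModel() {
    const model = await tf.loadGraphModel('model.json');

    // The signature expects uint8 of shape [1, h, w, 3]; tf.js has no
    // uint8 dtype, so int32 is the closest match (hence the dtype error).
    const input = tf.zeros([1, 320, 320, 3], 'int32');

    // The graph contains dynamic ops ('Exit', 'While'), so
    // executeAsync() must be used instead of predict()/execute().
    const outputs = await model.executeAsync(input);

    // outputs is an array of tensors, one per output node
    outputs.forEach(t => t.print());

    // free GPU/CPU memory when done
    input.dispose();
    outputs.forEach(t => t.dispose());
    return model;
}
```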

Thank you. You're right.

If I define a zero tensor
const zeros = tf.zeros([1, 1, 1, 3]);
explicitly cast its type
const zeros2 = zeros.toInt()
and use model.executeAsync instead of model.predict:
const result = await model.executeAsync(zeros2)
there is no error, but there is no readable tensor either.
I see an array I can't interpret:

(8) [e, e, e, e, e, e, e, e]
  0: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 100, …}
  1: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: 'float32', size: 25608, …}
  2: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: 'float32', size: 51216, …}
  3: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: 'float32', size: 400, …}
  4: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: 'float32', size: 200, …}
  5: e {kept: false, isDisposedInternal: false, shape: Array(1), dtype: 'float32', size: 1, …}
  6: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 100, …}
  7: e {kept: false, isDisposedInternal: false, shape: Array(2), dtype: 'float32', size: 100, …}

I would be grateful if you could give a code example for my case.
Thanks.
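Those eight objects are the output tensors, one per output node, in the same order as model.outputNodes. They can be paired with their names and read back as plain arrays. A sketch, assuming the pairing helper nameOutputs (my own, not a tf.js API) and signature-style output names; the actual keys in your converted model may differ, so check model.outputNodes in the console first:

```javascript
// Pair each returned tensor with its output node name (plain JS).
function nameOutputs(outputNodes, outputs) {
    const named = {};
    outputNodes.forEach((name, i) => { named[name] = outputs[i]; });
    return named;
}

// Usage with tf.js (output names depend on how the model was converted;
// inspect model.outputNodes to see the actual keys):
// const result = await model.executeAsync(zeros2);
// const named = nameOutputs(model.outputNodes, result);
// const boxes = await named['detection_boxes'].array();   // shape [1, 100, 4]
// const scores = await named['detection_scores'].array(); // shape [1, 100]
// const numDet = (await named['num_detections'].data())[0];
```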

Oh yes, I found an explanation of the output format.

Thanks again.