TFJS model not working problem

Hi. I tried everything from Custom object detection in the browser using TensorFlow.js — The TensorFlow Blog in Colab, and I'm running into some problems. If I install the requirements.txt specified on the site, training does not start at all; it produces a numpy error. If I continue without installing requirements.txt, only opencv gives an error, and once I fix that I can train. The resulting TensorFlow 2.8 model works fine in Colab. But when I convert it to TensorFlow.js and try to run the TFJS model in the React web interface, the browser throws an error and fails. Has anyone in 2022 successfully built and run TFJS-based custom object detection with their own data? Thanks in advance.

@hugozanini may have some thoughts here.


Thanks for your question @Musa_Atas and thanks for notifying me @Jason.

Musa, are you doing your training in an isolated virtual environment? If you are installing the requirements in a clean environment, it should work well (I have just tested it).

If you were able to convert your model, you are one step away from having it running in JavaScript.

Most likely, you are facing an index problem during prediction. Every time you train a new model, the prediction indexes can change, and you have to check that you are handling them properly.

The code that sets the indexes can be found on lines 118-120, and the numbers there are set for the model trained in the blog post.

const boxes = predictions[4].arraySync();
const scores = predictions[5].arraySync();
const classes = predictions[6].dataSync();

I guess that’s why the kangaroo model is working and your model is not.
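A quick way to check this in the browser is to print the shape of every output tensor. The sketch below is a hypothetical helper (`logPredictionShapes` is a name I'm making up, and the mock objects stand in for real tf.Tensor outputs); `predictions` represents the array returned by `model.executeAsync(...)`:

```javascript
// Hypothetical debugging helper: list the shape of each output tensor so you
// can spot which index holds boxes ([1, 100, 4]) vs. scores/classes ([1, 100]).
function logPredictionShapes(predictions) {
  return predictions.map((t, i) => `${i}: [${t.shape.join(', ')}]`);
}

// Mock objects standing in for real tf.Tensor outputs:
const mockPredictions = [{ shape: [1, 100] }, { shape: [1, 100, 4] }];
console.log(logPredictionShapes(mockPredictions).join('\n'));
// → 0: [1, 100]
//   1: [1, 100, 4]
```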

To identify what each index of your model represents, use the saved_model_cli on your model directory.

saved_model_cli show --dir {mobilenet_save_path} --tag_set serve

This command will print your model's signature. You should see something similar to this:

// saved_model_cli show --dir saved_model --all

  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_tensor'] tensor_info:
        dtype: DT_UINT8
        shape: (1, -1, -1, 3)
        name: serving_default_input_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_anchor_indices'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:0
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 4)
        name: StatefulPartitionedCall:1
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:2
    outputs['detection_multiclass_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100, 13)
        name: StatefulPartitionedCall:3
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (1, 100)
        name: StatefulPartitionedCall:4

Each of the outputs has its own StatefulPartitionedCall index. In this case, detection_boxes is 1 and detection_classes is 2. You should use these indexes to assign the variables:

const boxes = predictions[1].arraySync();
const scores = predictions[4].arraySync();
const classes = predictions[2].dataSync();
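One way to keep these magic numbers in a single place is a lookup constant. This is just a sketch: `OUTPUT_INDEX` is a hypothetical name, and the numbers are taken from the StatefulPartitionedCall indexes in the signature printed above.

```javascript
// Index of each named output in the predictions array, taken from the
// StatefulPartitionedCall numbers in the signature above.
// (OUTPUT_INDEX is a hypothetical constant, not part of the original code.)
const OUTPUT_INDEX = {
  detection_anchor_indices: 0,
  detection_boxes: 1,
  detection_classes: 2,
  detection_multiclass_scores: 3,
  detection_scores: 4,
};

// Usage (assuming `predictions` comes from model.executeAsync):
// const boxes = predictions[OUTPUT_INDEX.detection_boxes].arraySync();
console.log(OUTPUT_INDEX.detection_boxes, OUTPUT_INDEX.detection_scores);
// → 1 4
```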

I hope this helps. Let me know if you get it working for your use case.

Hey @hugozanini,

I’m running into a similar problem with my model.
I’m able to train and test my model on Colab, but the exported one does not show any predictions.

I tried running the saved_model_cli show --dir {mobilenet_save_path} --tag_set serve command you suggested, but I get a different output than yours:

The given SavedModel MetaGraphDef contains SignatureDefs with the following keys:
SignatureDef key: "__saved_model_init_op"
SignatureDef key: "serving_default"

Maybe I exported the model incorrectly?
Is there a different way to derive the correct indices?


I ended up relying on the shape of each tensor and using it to retrieve the right index.

This is obviously a heuristic approach, but I still prefer it to manually updating the indices on each conversion of the TF model. Let me know if there’s a “tighter” approach to getting this right (maybe we could just get meaningful labels attached to each tensor?)

  // Boxes are the only output with the shape [1, 100, 4].
  getBoxes = predictions => {
    return predictions.find(f => {
      return (
        f.shape.length === 3 &&
        f.shape[0] === 1 &&
        f.shape[1] === 100 &&
        f.shape[2] === 4
      );
    });
  };

  // Helper: gets all outputs that have the shape [1, 100].
  getOneDTensors = predictions => predictions.filter(f => {
    return (
      f.shape.length === 2 &&
      f.shape[0] === 1 &&
      f.shape[1] === 100
    );
  });

  // Classes have the shape [1, 100] and their values should be the same as the keys of `classesDir`.
  getClasses = predictions => {
    const areSetsEqual = (a, b) => a.size === b.size && [...a].every(value => b.has(value));
    const oneDTensors = this.getOneDTensors(predictions);
    return oneDTensors.find(od => {
      const odSet = new Set(od.dataSync());
      return areSetsEqual(odSet, new Set(Object.keys(classesDir).map(x => Number.parseInt(x, 10))));
    });
  };

  // Scores have the shape [1, 100] and their values are *not* integers.
  getScores = predictions => {
    const isIntegerArray = arr => arr.find(item => !Number.isInteger(item)) === undefined;
    const oneDTensors = this.getOneDTensors(predictions);
    return oneDTensors.find(od => !isIntegerArray(od.dataSync()));
  };
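The heuristic can be sanity-checked without a real model: plain objects with `shape` and `dataSync()` can stand in for tf.Tensor outputs. Everything below is made-up mock data, and the standalone functions just mirror the shape logic of the helpers above.

```javascript
// Mock tensors standing in for real tf.Tensor outputs (hypothetical data):
const mockPreds = [
  { shape: [1, 100], dataSync: () => [0.91, 0.12] },  // float scores
  { shape: [1, 100, 4], dataSync: () => [] },         // boxes
  { shape: [1, 100], dataSync: () => [1, 2] },        // integer classes
];

// Standalone versions of the shape heuristics described above:
const findBoxes = preds =>
  preds.find(f => f.shape.length === 3 && f.shape[1] === 100 && f.shape[2] === 4);
const findScores = preds =>
  preds.find(f => f.shape.length === 2 && f.dataSync().some(v => !Number.isInteger(v)));

console.log(findBoxes(mockPreds) === mockPreds[1]);   // → true
console.log(findScores(mockPreds) === mockPreds[0]);  // → true
```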


Hi, @lizozom

I’m not sure why the saved_model_cli show command returned that output. The idea is that, by using the command, you can identify the correct indexes and use them in your code.

Your solution is creative and solved the problem well, thanks for sharing.

I’ll dig in further to figure out why saved_model_cli didn’t print the full signature in your case… My guess is that it’s related to the conversion process, which I’ll have to test to understand.