Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3]

Hi, I converted my SavedModel to a model.json using tensorflowjs_converter:

    tensorflowjs_converter \
      --input_format=tf_saved_model \
      --output_format=tfjs_graph_model \
      --saved_model_tags=serve \
      --signature_name=serving_default \
      /saved_model \
      /json-model

model.predict({input_tensor: inputTensor});

throws the following error:

Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3], but was [1,600,800,4]
    at Object.assert (/object-detection/node_modules/@tensorflow/tfjs-core/dist/tf-core.node.js:337:15)
    at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7478:28
    at Array.forEach (<anonymous>)
    at GraphExecutor.checkInputShapeAndType (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7470:29)
    at GraphExecutor.<anonymous> (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7272:34)
    at step (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:81:23)
    at Object.next (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:62:53)
    at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:55:71
    at new Promise (<anonymous>)
    at __awaiter (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:51:12) 
    

If I add --control_flow_v2=True to the conversion, loadGraphModel() then fails.

I'm running @tensorflow/tfjs-node v3.7.0 and I'm still getting the error "Cannot read property 'outputs' of undefined" when trying to load the model.json that was converted from the SavedModel with tensorflowjs_converter.
When I switch to @tensorflow/tfjs-node@next, it throws "Cannot read property 'children' of undefined" instead.

model = await tf.loadGraphModel(modelPath);

2021-07-10 13:26:00.618147: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.

TypeError: Cannot read property 'outputs' of undefined
    at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3851:31
    at Array.forEach (<anonymous>)
    at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3848:29
    at Array.forEach (<anonymous>)
    at OperationMapper.mapFunction (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3846:18)
    at /object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3679:56
    at Array.reduce (<anonymous>)
    at OperationMapper.transformGraph (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:3678:48)
    at GraphModel.loadSync (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7763:68)
    at GraphModel.<anonymous> (/object-detection/node_modules/@tensorflow/tfjs-converter/dist/tf-converter.node.js:7737:52)

If this is an image model, are you loading RGBA when it expects RGB? That might explain the 4 vs 3 mismatch in the first error.

@Jason, I’m not sure. It works when I load it with tfnode.node.loadSavedModel().

Maybe it’s the way I’m loading the model.json? Can you elaborate?

Was the original Python model a Keras saved model (.h5 file) or a regular TF SavedModel (typically a .pb file)?

For Keras saved model you would use:

tf.loadLayersModel(MODEL_URL);

For TF SavedModel you would use:

tf.loadGraphModel(MODEL_URL);

Looking at your error message in the first instance, though, it seems you're passing a tensor of the wrong shape. Check the tensor of the data you are sending into the model and figure out why the last dimension is 4 instead of 3. What image did you load? PNG? What exactly is your input_tensor? Do you have this on a Glitch or CodePen somewhere for me to see running?
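For reference, the check behind that error is essentially shape matching where -1 in the signature means "any size". A hypothetical plain-JS sketch of that check (not the actual tfjs source):

```javascript
// Compare an actual input shape against a model signature shape,
// where -1 in the signature accepts any size in that dimension.
function matchesSignature(expected, actual) {
  return (
    expected.length === actual.length &&
    expected.every((dim, i) => dim === -1 || dim === actual[i])
  );
}

// A [1,600,800,4] RGBA tensor fails a [1,-1,-1,3] RGB signature:
console.log(matchesSignature([1, -1, -1, 3], [1, 600, 800, 4])); // false
console.log(matchesSignature([1, -1, -1, 3], [1, 600, 800, 3])); // true
```

So height and width are free here; the only fixed dimensions are the batch (1) and the channels (3), and your channel count is what's failing.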

It’s a TF SavedModel, loaded using loadGraphModel(), and I’m loading a png image:

    const image = fs.readFileSync(imageFile);
    let decodedImage = tfnode.node.decodeImage(image);
    let inputTensor = decodedImage.expandDims();
    model.predict({input_tensor: inputTensor});

the repo is here GitHub - playground/tfjs-object-detection

Can you try with a JPG? I think it may be because PNG has 4 channels while JPG has 3. If that is the case then you simply need to force the channels to be 3 and ignore the alpha channel PNGs use for transparency:

There is an optional channels parameter on the decode call that you can use to force it to 3.
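In tfjs-node that would be the second argument, e.g. tfnode.node.decodeImage(image, 3). And if you ever need to do it by hand on raw pixel data, dropping alpha is just skipping every fourth byte; a plain-JS sketch (not a tfjs API):

```javascript
// Convert interleaved RGBA pixel bytes to RGB by dropping the alpha byte.
function rgbaToRgb(rgba) {
  const pixels = rgba.length / 4;
  const rgb = new Uint8Array(pixels * 3);
  for (let i = 0; i < pixels; i++) {
    rgb[i * 3] = rgba[i * 4];         // R
    rgb[i * 3 + 1] = rgba[i * 4 + 1]; // G
    rgb[i * 3 + 2] = rgba[i * 4 + 2]; // B
  }
  return rgb;
}

// Two pixels (opaque red, semi-transparent blue) -> six RGB bytes:
console.log(rgbaToRgb(new Uint8Array([255, 0, 0, 255, 0, 0, 255, 128])));
// -> [255, 0, 0, 0, 0, 255]
```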


With a jpg image, I get: Error: The dtype of dict['input_tensor'] provided in model.execute(dict) must be int32, but was float32

In that case you will need to convert the resulting tensor to be integer values and not floating point.

You can use something like:

tf.cast(data, 'int32')

However, before doing that, inspect the resulting tensor to see what the values are after the image read. Depending on how it was decoded, the RGB values may have been normalized, e.g. floats from 0 to 1 instead of whole numbers from 0 to 255. In that case you would want to multiply by 255 first to get back to whole numbers; otherwise 0.1239803 would just become 0 in a cast to integer, which is not what you want!
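In plain numbers, here is why the order matters (with tfjs tensors the equivalent would be something like tensor.mul(255).cast('int32'), depending on your exact pipeline):

```javascript
// A normalized channel value in [0, 1]:
const normalized = 0.1239803;

// Casting straight to integer truncates everything to 0:
console.log(Math.trunc(normalized)); // 0

// Scaling back to the [0, 255] range first preserves the information:
console.log(Math.round(normalized * 255)); // 32
```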

However, if you use the PNG decode method I listed above, it automatically returns an int32 tensor, so that is probably the better option if you know your images are always PNG.

Now, it’s getting:

Image: Tensor {
  kept: false,
  isDisposedInternal: false,
  shape: [ 1, 600, 800, 3 ],
  dtype: 'int32',
  size: 1440000,
  strides: [ 1440000, 2400, 3 ],
  dataId: {},
  id: 918,
  rankType: '4',
  scopeId: 2
}
Error: This execution contains the node 'StatefulPartitionedCall/map/while/exit/_727', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [StatefulPartitionedCall/map/TensorArrayV2Stack_1/TensorListStack]

If I use loadSavedModel(savedModel) without the conversion, it works fine for both jpg and png images.

Did you try what it recommended? E.g. don't use predict() but use executeAsync() instead?

Actually, I tried that earlier, and this is what it throws with model.executeAsync:

with jpg image

Image: 1440000 bytes with shape: Tensor {
  kept: false,
  isDisposedInternal: false,
  shape: [ 1, 600, 800, 3 ],
  dtype: 'int32',
  size: 1440000,
  strides: [ 1440000, 2400, 3 ],
  dataId: {},
  id: 918,
  rankType: '4',
  scopeId: 2
}

Error: Invalid TF_Status: 3
Message: In[0] and In[1] has different ndims: [1,8,8,64,2] vs. [2,1]

With png image

Error: The shape of dict['input_tensor'] provided in model.execute(dict) must be [1,-1,-1,3], but was [1,600,800,4]

Hello again! Sorry for the slight delay here. I have been discussing with our software engineers on the team. It seems we may have found a bug here and they would like you to submit an issue on the TFJS github which you can find here: GitHub - tensorflow/tfjs: A WebGL accelerated JavaScript library for training and deploying ML models.

Once you have submitted a bug, feel free to let me know the link and I can also send that link to the SWE directly who is looking into this to kick that process off.

Thanks for your patience here and for being part of the community 🙂

Hi @Jason,

Thanks for the update, I have submitted a bug here Error: The shape of dict[‘input_tensor’] provided in model.execute(dict) must be [1,-1,-1,3] · Issue #5366 · tensorflow/tfjs · GitHub.

Also this might be a regression bug with the recent release of tfjs-node v3.8.0, I have posted that here Failed to find bogomips warning · Issue #38260 · tensorflow/tensorflow · GitHub

A different question: is there an example of training a SavedModel using Node.js that you can point me to, similar to the mnist-node example?

This is the only codelab for Node that we have right now:

If you do get this working it may be a good blog writeup if you are interested!

Ok, thanks, will take a look and see.

Hi @Jason. Another question if I may. The reason I’m asking for a Node.js version for training models is that the environment I have been using to train models on my Mac has been acting up for the last couple of weeks and no longer produces the proper results it did before. I’m not sure what changed, but instead of recognizing the proper objects, the trained SavedModel now produces many random bounding boxes. In the last couple of days I have been trying to dockerize the training pipeline using Ubuntu 20.04 and 21.04 with Python 3.7, 3.8 and 3.9, but the training keeps failing with the same errors:

WARNING:tensorflow:Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
W0723 03:05:32.545922 140238382945856 utils.py:78] Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
W0723 03:05:44.013801 140238382945856 utils.py:78] Gradients do not exist for variables ['top_bn/gamma:0', 'top_bn/beta:0'] when minimizing the loss.
Killed

    at ChildProcess.exithandler (node:child_process:397:12)
    at ChildProcess.emit (node:events:394:28)
    at maybeClose (node:internal/child_process:1067:16)
    at Socket.<anonymous> (node:internal/child_process:453:11)
    at Socket.emit (node:events:394:28)
    at Pipe.<anonymous> (node:net:662:12) {
  killed: false,
  code: 137,
  signal: null,
  cmd: 'python /server/models/research/object_detection/model_main_tf2.py --pipeline_config_path=/server/data-set/ssd_efficientdet_d0_512x512_coco17_tpu-8.config --model_dir=/server/data-set/training --alsologtostderr'
}

Any idea or suggestions?