Use a PoseNet model with MoveNet in TensorFlow.js

I currently have a model trained with Google's Teachable Machine, found here: Teachable Machine

Google's Teachable Machine uses PoseNet to train the model, but I was curious whether there is a way to use this trained model with MoveNet instead, found here: tfjs-models/pose-detection/src/movenet at master · tensorflow/tfjs-models · GitHub

From what I can tell, both MoveNet and PoseNet track the same body keypoints - MoveNet just does it better.

My current model, set up with PoseNet, looks something like this (URL being the trained model):

 const URL = "https://teachablemachine.withgoogle.com/models/vbPTn21tN/";
 let model;
 let maxPredictions;

 async function init() {
     const modelURL = URL + "model.json";
     const metadataURL = URL + "metadata.json";

     // load the model and metadata
     // Note: the pose library adds a tmPose object to your window (window.tmPose)
     model = await tmPose.load(modelURL, metadataURL);
     maxPredictions = model.getTotalClasses();
 }


MoveNet seems to set up its detector like so:

 const detectorConfig = {modelType: poseDetection.movenet.modelType.SINGLEPOSE_LIGHTNING};
 let detector;

 async function init() {
     // assign to the outer variable rather than shadowing it with a new const
     detector = await poseDetection.createDetector(poseDetection.SupportedModels.MoveNet, detectorConfig);
 }
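For reference, once the detector is created, `detector.estimatePoses(...)` returns an array of poses whose `keypoints` field holds 17 `{x, y, score, name}` entries in pixel coordinates. Here is a minimal sketch (pure JS, no tfjs needed) of turning one pose into a flat feature vector; `flattenKeypoints` is a made-up helper name, not part of either API, and the stub pose below only has two keypoints to keep the example short:

```javascript
// Flatten a MoveNet-style pose into a feature vector of normalized
// (x, y) pairs, one pair per keypoint. Normalizing by the frame size
// makes the vector independent of the input resolution.
function flattenKeypoints(pose, width, height) {
  const features = [];
  for (const kp of pose.keypoints) {
    features.push(kp.x / width, kp.y / height);
  }
  return features;
}

// Stubbed example pose (a real MoveNet pose has 17 keypoints):
const fakePose = {
  keypoints: [
    { x: 320, y: 240, score: 0.9, name: "nose" },
    { x: 300, y: 260, score: 0.8, name: "left_eye" },
  ],
};
const vec = flattenKeypoints(fakePose, 640, 480);
// vec is [0.5, 0.5, 0.46875, 0.5416...]
```

A vector like this is the kind of input a small classifier on top of MoveNet would consume.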

I’d like to use my PoseNet model with MoveNet if possible. Any help/advice is appreciated!


Hello there! Actually, I am creating a course right now that explains how to build a Teachable Machine yourself from a blank canvas. You may be interested in joining that when it starts here:

I also show in this course how to load and use the MobileNet model directly. It may be possible to take those outputs and add a multi-layer perceptron on the end to classify poses, for example.

Unfortunately the content is not live yet, so I can not link you to it, but it should come out on the 3rd of March, and I believe it will be free to view if you don't need the certificate.


Hello Jason, I would be interested in the course, but I have a question: which modeling algorithm would we use during the course, and does the course cover MoveNet and real-time human pose estimation?

I show how to use the raw MoveNet model from TF Hub, along with how to do the pre/post processing, which may be useful to you, and also how to create your own Teachable Machine from a blank canvas. With those 2 examples, it may help you get some ideas on how to tackle your situation above. I think the Teachable Machine part is probably the most useful, as you may be able to take the outputs from MoveNet directly and feed them into a multi-layer perceptron, as I show in the course, to then estimate poses.
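To make the "MLP on top of keypoints" idea concrete, here is a toy sketch in plain JS of a single dense layer followed by softmax, so the shapes are easy to see. In practice you would train such a layer (e.g. with `tf.layers.dense`) on flattened keypoint vectors; the tiny weights and 4-element input below are made up for illustration:

```javascript
// One dense layer: weights is [outUnits][inUnits], biases is [outUnits].
function dense(input, weights, biases) {
  return weights.map((row, i) =>
    row.reduce((sum, w, j) => sum + w * input[j], biases[i])
  );
}

// Softmax turns raw logits into class probabilities that sum to 1.
function softmax(logits) {
  const max = Math.max(...logits);
  const exps = logits.map((v) => Math.exp(v - max));
  const total = exps.reduce((a, b) => a + b, 0);
  return exps.map((v) => v / total);
}

// Toy example: 4 keypoint features -> 2 pose classes.
const input = [0.5, 0.5, 0.4, 0.6];
const weights = [
  [1, 0, 0, 0],
  [0, 0, 0, 1],
];
const biases = [0, 0];
const probs = softmax(dense(input, weights, biases));
// probs sums to 1; class 1 wins here because input[3] > input[0]
```

A trained version of this head is essentially what Teachable Machine bolts onto the pose model's output.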


Thank you for your reply; the course will definitely be useful for me. I have one more question. Have you ever encountered this error? "Uncaught (in promise) Error: Weight StatefulPartitionedCall/kpt_offset_0/separable_conv2d_6/separable_conv2d/ReadVariableOp has unknown quantization dtype float16. Supported quantization dtypes are: 'uint8' and 'uint16'." I think the model needs to be converted somehow, but I don't know what the best way to do that would be. Could that help?

You may want to check the documentation of the TensorFlow.js converter if you are trying to use a model from Python in JS - there are a number of quantization options to convert to certain dtypes:

--quantize_uint8: Comma-separated list of node names to apply 1-byte affine quantization. You can also use the wildcard symbol (*) to apply quantization to multiple nodes (e.g., conv/*/weights). When the flag is provided without any nodes, the default behavior will match all nodes.
--quantize_uint16: Comma-separated list of node names to apply 2-byte affine quantization. You can also use the wildcard symbol (*) to apply quantization to multiple nodes (e.g., conv/*/weights). When the flag is provided without any nodes, the default behavior will match all nodes.

Thanks Jason for the reply. Unfortunately, the conversion did not completely solve the problem. I set up the tfjs-converter, loaded my model, and converted it to uint8 (and uint16 as well), but it made no difference. Is it possible that this conversion does not affect everything in the model? Is there another method, or would the whole model need to be rebuilt?

I converted with this command:

tensorflowjs_converter ^
 --input_format=tfjs_layers_model ^
 --metadata ^
 --output_format=tfjs_layers_model ^
 --quantize_uint8=* ^
 --weight_shard_size_bytes=4194304 ^
 "C:\Users\gyorg\OneDrive\Asztali gép\konvertalt-modell\model.json" ^
 "C:\Users\gyorg\OneDrive\Asztali gép\konvertalt-modell"

This is the content of the resulting model:

{"format": "layers-model", "generatedBy": "keras v2.6.0", "convertedBy": "TensorFlow.js Converter v3.13.0", "modelTopology": {"keras_version": "2.6.0", "backend": "tensorflow", "model_config": {"class_name": "Sequential", "config": {"name": "sequential_4", "layers": [{"class_name": "InputLayer", "config": {"batch_input_shape": [null, 14739], "dtype": "float32", "sparse": false, "ragged": false, "name": "dense_Dense3_input"}}, {"class_name": "Dense", "config": {"name": "dense_Dense3", "trainable": true, "batch_input_shape": [null, 14739], "dtype": "float32", "units": 100, "activation": "relu", "use_bias": true, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1, "mode": "fan_in", "distribution": "truncated_normal", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}, {"class_name": "Dropout", "config": {"name": "dropout_Dropout2", "trainable": true, "dtype": "float32", "rate": 0.5, "noise_shape": null, "seed": null}}, {"class_name": "Dense", "config": {"name": "dense_Dense4", "trainable": true, "dtype": "float32", "units": 2, "activation": "softmax", "use_bias": false, "kernel_initializer": {"class_name": "VarianceScaling", "config": {"scale": 1, "mode": "fan_in", "distribution": "truncated_normal", "seed": null}}, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "bias_regularizer": null, "activity_regularizer": null, "kernel_constraint": null, "bias_constraint": null}}]}}}, "weightsManifest": [{"paths": ["group1-shard1of1.bin"], "weights": [{"name": "dense_Dense3/kernel", "shape": [14739, 100], "dtype": "float32", "quantization": {"dtype": "uint8", "min": -0.01743759173972934, "scale": 0.00013623118546663546, "original_dtype": "float32"}}, {"name": "dense_Dense3/bias", "shape": [100], "dtype": "float32", "quantization": {"dtype": "uint8", "min": -0.0005331449365864198, "scale": 3.9201833572530865e-06, "original_dtype": "float32"}}, {"name": "dense_Dense4/kernel", "shape": [100, 2], "dtype": "float32", "quantization": {"dtype": "uint8", "min": -0.19357375818140365, "scale": 0.0015362996681063782, "original_dtype": "float32"}}]}]}

Error message after running the code:

Uncaught (in promise) Error: Weight StatefulPartitionedCall/kpt_offset_0/separable_conv2d_6/separable_conv2d/ReadVariableOp has
 unknown quantization dtype float16. Supported quantization dtypes are: 'uint8' and 'uint16'.

Maybe try converting from your original Python saved model to JS, instead of JS to JS, which may be confusing the converter since the model is already in JS format?
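For reference, a SavedModel-to-tfjs conversion would look something like the sketch below. The paths and model names here are placeholders, not from this thread, and the quantize flag is only needed if you still want the smaller weight files:

```shell
# Sketch: convert a Python TensorFlow SavedModel directory straight to a
# tfjs graph model, optionally applying 2-byte quantization to all nodes.
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_format=tfjs_graph_model \
    --quantize_uint16="*" \
    ./my_saved_model \
    ./web_model
```

Converting from the original SavedModel gives the converter the full graph to work with, rather than re-quantizing an already-converted JS model.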