How to load a TFLite model in TensorFlow.js for pose classification?

Hi, I am working on a yoga pose detection project and aiming for a more accurate model. I trained a classifier on top of the MoveNet pose detection model using Keras CNN layers and exported it as a TFLite model after training. But I am having difficulty figuring out how to load and use this model in my web app. I will be using TensorFlow.js for pose detection via webcam and want my trained model to recognize the particular yoga pose. Can anyone guide me?


@Jason would probably be of great help. I am new here and to ML in general but I hope you get the help you need. :slight_smile:

Is there a reason you are trying to load a TFLite model for this when we have MoveNet natively in TensorFlow.js in our model.json format for browser?

The MoveNet model we support would be this one that you can import in just a few lines of code:

Via our general purpose Pose Detection API:


Thanks for your reply, but I think my question wasn’t clear. Yes, I am using the MoveNet TFJS model for pose detection in the browser. Now I need to train a model that can detect which yoga pose the user is performing. For that, I trained a Keras model on top of MoveNet in Python. The issue I am facing is how to load that model in TFJS. I created a model.json file for the model as well and uploaded it to GitHub, but I am still facing errors. This is the snippet I was using to load my model:
const tfmodel = await tf.loadLayersModel('https://raw.githubusercontent.com/manglaaseem28/Pose-Detection-tf/main/model/model.json');


@Jason I tried loading the model using the tf.loadLayersModel function, but an error occurred. I tried but couldn’t find a solution.
This was the error:
Unhandled Rejection (Error): Unknown layer: SlicingOpLambda. This may be due to one of the following reasons:

  1. The layer is defined in Python, in which case it needs to be ported to TensorFlow.js or your JavaScript code.
  2. The custom layer is defined in JavaScript, but is not registered properly with tf.serialization.registerClass().

I see. From what you have written, it seems you have used a Python-specific layer that does not exist in TensorFlow.js. I would recommend building this directly in TensorFlow.js to avoid conversion issues like this. TensorFlow.js lets you build and train new models too, both in Node.js on the server side (if you want to train with your existing server infrastructure) and on the client side in the browser. This is exactly how Teachable Machine’s pose detection works, in fact:

If you want to learn how to do what Teachable Machine does, but in JavaScript, you can check out this tutorial, which shows how to take the output of an existing model and add your own layers on top to train it to learn something new:

While this lab uses images, you could adapt it to use the output of pose detection instead and then add your custom network on the end as you desire.
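For example, the adaptation could be as simple as flattening the detected pose into a feature vector for a small classification head. A rough sketch (the function name and the normalization scheme are illustrative, not from any official API; it assumes MoveNet’s usual 17 keypoints, each with pixel-space x/y):

```javascript
// Illustrative preprocessing: turn a MoveNet pose (17 keypoints with
// x/y in pixels) into a flat, resolution-independent feature vector
// that a small dense classification head can consume.
function poseToFeatureVector(keypoints, width, height) {
  const features = [];
  for (const kp of keypoints) {
    // Normalize coordinates to [0, 1] so the classifier does not
    // depend on the webcam resolution.
    features.push(kp.x / width, kp.y / height);
  }
  return features; // 34 numbers for MoveNet's 17 keypoints
}
```

You would then feed vectors like this into the custom layers you train on top, exactly as the lab does with image embeddings.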

I also have a Glitch that shows how to fetch arbitrary layers of a TensorFlow.js model here:

So with all of those, you should be able to pull off what you want directly in JS (or Node.js), both of which use the TensorFlow.js APIs.

Thanks a lot, Jason! I will look into this.

Hi @Aseem_Mangla! I’m trying to do the same and running into the exact same issues. Did you ever figure out a way forward? :slight_smile:

Hello @NadiaP7406 and @Aseem_Mangla !

I was talking to one of our SWEs on the team, and they suggested you try converting your layers model to a graph model first with the TensorFlow.js converter.

Check this documentation here:

In this situation you could use --input_format keras and then set --output_format tfjs_graph_model, as a frozen model should be suitable for inference in the browser since you do not need to retrain in the browser.
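A possible invocation, assuming the trained Keras model was saved as model.h5 (the file and output paths are illustrative, and the tensorflowjs pip package must be installed):

```shell
# Convert a Keras HDF5 model into a TFJS graph model (frozen,
# inference-only) for serving in the browser.
# Requires: pip install tensorflowjs
tensorflowjs_converter \
    --input_format keras \
    --output_format tfjs_graph_model \
    model.h5 \
    web_model/
```

The web_model/ directory would then contain a model.json plus binary weight shards you can host and fetch from the browser.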

Let me know if that helps!
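One detail to keep in mind: after conversion, a graph model loads with tf.loadGraphModel rather than tf.loadLayersModel. A minimal sketch of handling both cases (the helper name is made up, and the tf namespace is passed in as a parameter purely so the snippet can be unit-tested outside the browser; in your app you would pass the global tf from @tensorflow/tfjs):

```javascript
// Illustrative helper (not part of the TF.js API): load either a
// layers model (straight from Keras) or a graph model (converted with
// --output_format tfjs_graph_model). `tfLib` is the tf namespace,
// injected so the helper is testable without a browser.
async function loadPoseClassifier(tfLib, url, { graph = false } = {}) {
  try {
    return graph
      ? await tfLib.loadGraphModel(url)
      : await tfLib.loadLayersModel(url);
  } catch (err) {
    // Surface load failures (bad URL, unsupported layer, etc.) clearly.
    console.error(`Failed to load model from ${url}:`, err.message);
    throw err;
  }
}
```

In the browser this would be called as, e.g., `await loadPoseClassifier(tf, modelUrl, { graph: true })` for the converted graph model.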

I converted my model into a graph model and that worked fine!


Yeah, I did exactly the same and the model loaded successfully as well. But there is a performance issue: my system lags when I run the website on localhost. This may be due to higher RAM requirements, but I doubt that is the reason. I am still trying to improve the performance. Anyway, the model loading problem is solved, I guess.

Hi @Aseem_Mangla, I am also trying to load a custom model into the TensorFlow.js pose detection API. Right now I am using one of the default models like this:

const detector = await poseDetection.createDetector(
  poseDetection.SupportedModels.BlazePose,
  {
    runtime: "tfjs",
    enableSmoothing: true,
    modelType: "lite",
  }
);

Is there any way you can tell me how to load a custom model trained in Python instead of BlazePose?
Thank you!!
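For context, my plan is to keep the built-in detector as-is, run my own converted classifier on its keypoints, and then map the classifier's per-class scores to a pose name, something like this (the function name, labels, and threshold are all just illustrative):

```javascript
// Illustrative postprocessing: pick the highest-scoring class and
// return its label, falling back to 'unknown' when the classifier is
// not confident enough.
function scoresToPose(scores, labels, threshold = 0.5) {
  let best = 0;
  for (let i = 1; i < scores.length; i++) {
    if (scores[i] > scores[best]) best = i;
  }
  return scores[best] >= threshold ? labels[best] : 'unknown';
}
```

So the missing piece for me is just the custom-model loading step in between.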

You may want to run your model through our benchmarking tooling to find what may be causing the slowdown.

Local benchmarking:

BrowserStack support too: