Get the quantization parameters from a model in tfjs

Hi everybody,

I am trying to implement object detection in tfjs, via a yolov5 model running on my Coral TPU USB stick.

I found this model, but the example code in the repo is implemented in Python, while I use tfjs. Since my detection results are not correct, the developer kindly explained to me that I need to scale my input and output tensors. To do that, I need the following 4 quantization parameters, which his code reads from the model:

self.input_zero = self.input_details[0]['quantization'][1]
self.input_scale = self.input_details[0]['quantization'][0]
self.output_zero = self.output_details[0]['quantization'][1]
self.output_scale = self.output_details[0]['quantization'][0]

Unfortunately I have no clue how to get these 4 parameters from the same model in tfjs.

I would really appreciate it if anybody has some tips to achieve this.
In simple beginner terminology if possible, please :slight_smile:

Thanks!!
Bart

So first off, as far as I am aware, Coral Edge USB TPUs only support running TFLite models, if I remember correctly?

If you are looking to retrain YOLO, check out Hugo’s tutorial here for Python to TFJS:

Or his older, but maybe simpler, version here.


Hi @Jason,
Thanks for joining here!!

I will try to explain my issue a bit more in detail:

  1. I want to do object detection (people, cars, …) on my IP cameras, but since I am using a Raspberry Pi, I need a Coral TPU USB stick to speed up my calculations.

  2. I already used the SSD models (from https://coral.ai/models/object-detection/) for Edge TPU with success, but the accuracy is not good enough to rely on it for video surveillance at home. E.g. when it thinks a tree is a human, the score is almost identical to when it detects a real human.

  3. So I wanted to try whether yolov5 fits my use case better. Therefore I downloaded the yolov5 Edge TPU model from https://github.com/jveitchmichaelis/edgetpu-yolo/tree/main . So I am just using an existing model, no retraining.

    Remark: I use the https://www.npmjs.com/package/@tensorflow/tfjs-tflite npm package to load such models in tfjs.

  4. I got it running without errors, but the bounding boxes and scores of the detected objects made no sense:

    • The scores were integers instead of floats
    • The bounding boxes did not match the objects in the image at all (i.e. they were at completely different locations).
  5. The developer of that repo explained to me that the reason for my issues is that this is a quantized model, so I needed to scale my tensors (see the sketch at the end of this post for the standard formula):

    • Input (image) tensor: float → int
    • Output (detected objects) tensor: int → float
  6. I tried to calculate those scaling values myself (for the output tensor only currently), every time an image arrives:

    // My own attempt: derive a scale factor and zero point from the min/max of the
    // current image tensor and of the raw (quantized) detection result.
    let max = imageTensor.max().cast('float32');
    let min = imageTensor.min().cast('float32');
    let qmax = detectionResult.max().cast('float32');
    let qmin = detectionResult.min().cast('float32');
    let scaleFactor = max.sub(min).div(qmax.sub(qmin)).cast('float32');
    let zeroPoint = qmin.sub(min.div(scaleFactor)).cast('float32');
    let scaledDetectionResult = detectionResult.add(zeroPoint).mul(scaleFactor).cast('float32');
    

    Not sure if my code is correct, but then indeed the bounding boxes match the detected objects much better already.

  7. However, the developer then responded that I shouldn’t calculate those scaling parameters myself:

    The quantisation values are essentially weights. They are computed once when you calibrate the model and are the same for every subsequent input.
    They’re stored in the model checkpoint - I assume you can access it with
    tfjs but I’m not familiar. We load the in/out scale factors here:
    https://github.com/jveitchmichaelis/edgetpu-yolo/blob/784d9be1bb13ce4b8b3c1bad729a02a69cca97bb/edgetpumodel.py#L90
    This uses functions from pycoral, but there must be an equivalent

    I can’t use his code, because it is written in Python while I need JavaScript. And I did not find a way to get those quantization parameters (input_zero, input_scale, output_zero, output_scale) from my model via tfjs. Unfortunately I don’t have enough background knowledge about the topic to solve this by myself :frowning_face:
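For completeness, this is how I understand the standard TFLite (de)quantization would be applied in tfjs once the four parameters are known. The numeric values below are just placeholders, not the real values from this model, so please treat it as a sketch rather than working code:

// Placeholder values - the real scale/zero_point must come from the model itself.
const inputScale = 0.0039;   // assumption
const inputZero = 0;         // assumption
const outputScale = 0.005;   // assumption
const outputZero = 17;       // assumption

// Quantize the float input image: q = round(real / scale) + zero_point
const quantizedInput = imageTensor.div(inputScale).round().add(inputZero).cast('int32');

// Dequantize the integer detections: real = (q - zero_point) * scale
const dequantizedResult = detectionResult.sub(outputZero).mul(outputScale).cast('float32');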

Ah, I understand now. As this is a TFLite file, not a TFJS model, this may be better suited for someone on the TFLite side to explain how to read those values out of that file, which you could then use in TFJS.

Adding @Matthew_Soulanille from the TFJS side, who made the Node bindings for Coral and may have some thoughts. If not, this may be a question for the TFLite team rather than TFJS, due to the model format being used - we just support running such models via the JS ecosystem.


Cool project! I don’t think tfjs-tflite-node has support for reading input / output details out of the model file yet. This could probably be added by exposing an additional two functions in the node-api bindings file. I think the correct approach would be to reimplement the python get_input_details and get_output_details functions.

I don’t have time to implement it right now, but you can send a PR, or I can take a look in a few weeks. It might also be easier to get just the quantization params instead of the full tensor details (python - Is there an equivalent of tf.lite.Interpreter.get_input_details for C++? - Stack Overflow).
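To make that concrete, here is a purely hypothetical sketch of what the JS side could look like once the bindings expose it - none of these accessors exist in tfjs-tflite-node today, and the names and file path are just placeholders:

// Hypothetical API - tfjs-tflite-node does not expose this yet.
const model = await loadTFLiteModel('model_edgetpu.tflite');  // placeholder file name

// Imagined accessors mirroring Python's input_details[0]['quantization']:
const {scale: inputScale, zeroPoint: inputZero} = model.getInputQuantization(0);
const {scale: outputScale, zeroPoint: outputZero} = model.getOutputQuantization(0);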


@Jason, thanks for pointing me to the right people! I was not really aware that the TensorFlow team is divided into subteams, although given the high complexity of these things, that of course makes sense…

@Matthew_Soulanille: no, it is not my project, but your bindings for the Coral stick are cool… Thanks to your packages I am able to do all kinds of nice stuff with a simple Raspberry Pi. It is quite impressive how fast tfjs can process images this way.

About the PR: it is not that I don’t want to contribute, because I do open source development all the time, but implementing this kind of feature is a bit too far outside my comfort zone, I am afraid. I had already seen your Stack Overflow link last week, but I don’t see how I could use that to solve my issue…

So if you would find some time to implement it, that would be very kind! It is absolutely no problem if you don’t have time in the next weeks.

Thanks!!
Bart

Hi Bart,

It’s great that you’re working on object detection with a YOLOv5 model on TensorFlow.js and a Coral USB TPU! To get the quantization parameters for your TensorFlow.js model, you can follow these steps:

  1. Convert the Model to TensorFlow.js Format: First, make sure you have converted your YOLOv5 model to TensorFlow.js format. You can use the tfjs-converter tool to do this. Here’s a command you can use as a starting point:

tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve saved_model_dir/ tfjs_model_dir/

Replace saved_model_dir/ with the path to your YOLOv5 saved model and tfjs_model_dir/ with the directory where you want to save the converted model.
  2. Load the Converted Model in TensorFlow.js: In your JavaScript code, load the converted model using TensorFlow.js. You can use the tf.loadGraphModel() function for this:

const model = await tf.loadGraphModel('tfjs_model_dir/model.json');

Make sure to replace 'tfjs_model_dir/model.json' with the correct path to your TensorFlow.js model.
  3. Retrieve Quantization Parameters: You can access the quantization parameters from the model’s input and output details as follows:

const inputZero = model.modelSignature['serving_default'].inputs[0].quantization.zero;
const inputScale = model.modelSignature['serving_default'].inputs[0].quantization.scale;
const outputZero = model.modelSignature['serving_default'].outputs[0].quantization.zero;
const outputScale = model.modelSignature['serving_default'].outputs[0].quantization.scale;

These lines of code will extract the four quantization parameters that you need for scaling your input and output tensors.
  4. Scale Input and Output: Now that you have the quantization parameters, you can scale your input and output tensors accordingly:

// Quantize your float input tensor to the model's integer range:
// q = round(real / scale) + zero_point
const scaledInput = tf.add(tf.round(tf.div(inputTensor, inputScale)), inputZero);

// Run inference with the scaled input tensor
const output = model.predict(scaledInput);

// Dequantize the output tensor back to the original float range:
// real = (q - zero_point) * scale
const scaledOutput = tf.mul(tf.sub(output, outputZero), outputScale);

Replace inputTensor with your actual input tensor, and you can perform inference with the scaled input tensor. Afterward, scale the output tensor back to its original range.

With these steps, you should be able to retrieve the quantization parameters and correctly scale your input and output tensors for object detection with your YOLOv5 model in TensorFlow.js. Good luck with your project, and feel free to ask if you have any more questions!


Hi @Bc_sk,

That is a very nice step-by-step tutorial. Really appreciated!!!

I am going to try to find some time in the next days to try it out.

But meanwhile a noob question: I wanted to use an existing model, because this kind of stuff is quite far outside my comfort zone. In the Python code from that same project, they get the quantization parameters from that model without problems, which means the quantization parameters must be available within that model. But when I try to get them via tfjs from that same model, they don’t seem to be available:

[screenshot omitted: the tensor info logged from tfjs contains no quantization fields]
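For reference, this is roughly how I inspect the tensor info (assuming the model loaded via tfjs-tflite-node exposes the same inputs/outputs arrays as tfjs’s InferenceModel):

// Log what the loaded TFLite model reports about its tensors.
console.log(model.inputs);   // name, shape, dtype - but nothing like scale or zero_point
console.log(model.outputs);  // same here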

Do I understand it correctly that there are two ways to solve this?

  1. Using the tfjs_converter, I generate a new model in TensorFlow.js format, from which the quantization parameters can be read out-of-the-box.
  2. When I want to reuse my current model, the tfjs-tflite-node package from @Matthew_Soulanille needs two extra functions to get the quantization parameters from the (non-TensorFlow.js format) model.

Did I understand this correctly?

@Matthew_Soulanille,
I know that you are very busy with all kinds of stuff, but we would really appreciate it if you could find a bit of time to have a look at this. I have had a look at the code myself a couple of times, but I have to admit that this is WAY above my pay grade :frowning_face:. So our idea to implement yolov5 object recognition (on a USB Coral TPU stick) in Node-RED is unfortunately on hold for now, because this is a key feature for our video surveillance concept…

Although @Bc_sk has described very clearly above how to convert my own model, all my attempts have failed with (for me at least) cryptic errors. If we could get the quantization parameters from an “existing” model, that would be very helpful for us. That way we can benefit from the skills and experience of others who are more experienced in this area…

Thanks again and have a nice weekend!
Bart

@Matthew_Soulanille,
Since you are very busy, I am trying to implement this myself, so we can continue with our yolo experiment. But I am completely stuck, so I would appreciate it if you could give me some tips.

You mentioned adding 2 new functions (get_input_details and get_output_details) to the node_tflite_binding.cc file, similar to the Python implementation. However, since these Python functions return almost the same information as the TensorInfo class, I tried to add my quantization parameters to that class instead:

class TensorInfo : public Napi::ObjectWrap<TensorInfo> {
  static Napi::Object Init(Napi::Env env, Napi::Object exports) {
    Napi::Function func = DefineClass(env, "TensorInfo", {
        ...
        InstanceAccessor<&TensorInfo::GetQuantizationParams>("quantizationParameter"),
        ...
    });
    ...
  }

  Napi::Value GetQuantizationParams(const Napi::CallbackInfo& info) {
    Napi::Env env = info.Env();

    // Get the quantization parameters from the tensor
    TfLiteQuantizationParams quant_params = TfLiteTensorQuantizationParams(tensor);

    // Create a new JavaScript object to hold the parameters
    Napi::Object js_quant_params = Napi::Object::New(env);
    js_quant_params.Set("scale", Napi::Number::New(env, quant_params.scale));
    js_quant_params.Set("zero_point", Napi::Number::New(env, quant_params.zero_point));

    return js_quant_params;
  }
};

Next I tried to add a unit test, but first I needed to pass my new field through the tflite_model.ts file:

  private convertTFLiteTensorInfos(infos: TFLiteWebModelRunnerTensorInfo[]): ModelTensorInfo[] {
    return infos.map(info => {
      const dtype = getDTypeFromTFLiteType(info.dataType);
      return {
        name: info.name,
        shape: this.getShapeFromTFLiteTensorInfo(info),
        dtype,
        quantizationParameter: info.quantizationParameter,
      };
    });
  }

However, that fails to compile because the TFLiteWebModelRunnerTensorInfo interface doesn’t contain my new field.
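Just to show what I mean, this is roughly the shape of the change I was trying to make - a hypothetical sketch, not the actual published interface, and the field names and types are my own assumptions:

// Hypothetical sketch: the per-tensor quantization info I would like the bindings
// to expose, plus the optional field I tried to add to TFLiteWebModelRunnerTensorInfo
// so that convertTFLiteTensorInfos() can simply pass it through to ModelTensorInfo.
interface QuantizationParameter {
  scale: number;
  zero_point: number;
}

interface TFLiteWebModelRunnerTensorInfo {
  name: string;
  shape: string;          // assumption: may not be a plain string in the real interface
  dataType: string;       // assumption: the real type may be a union of literals
  quantizationParameter?: QuantizationParameter;  // the new optional field
}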

So my plan B was to add the 2 methods that you mentioned to the Interpreter class, similar to the Python API. But I am now wondering whether my two methods would even be accessible, because the TFLiteWebModelRunner interface is used (which again doesn’t contain my methods).

Perhaps I am completely mistaken. No idea to be honest…

Thanks!!