Trying to use the tracker in the MoveNet pose detector

Hi, I'm trying to make a motion-tracking application with MoveNet in React Native.

I've confirmed that keypoints are detected and show up in the console, but I'm having trouble enabling the tracker.

How can I enable the built-in keypoint tracker in MoveNet?

My source code is attached below:

import React, { useState, useEffect, useCallback, useMemo } from 'react';
import { View, StyleSheet, Platform, TouchableOpacity, Text } from 'react-native';
import Icon from 'react-native-vector-icons/Ionicons';
import { Colors } from 'react-native-paper';
import { Camera } from 'expo-camera';
import * as tf from '@tensorflow/tfjs';
import { cameraWithTensors } from '@tensorflow/tfjs-react-native';
import * as poseDetection from '@tensorflow-models/pose-detection';
import '@tensorflow/tfjs-backend-webgl';
import '@mediapipe/pose';

let coords = []

export const CameraView = () => {
    const [hasPermission, setHasPermission] = useState(null);
    const [poseDetector, setPoseDetector] = useState(null);
    const [frameworkReady, setFrameworkReady] = useState(false);

    const backCamera = Camera.Constants.Type.back
    const frontCamera = Camera.Constants.Type.front
    const [camType, setCamType] = useState(backCamera)

    const TensorCamera = cameraWithTensors(Camera);

    let requestAnimationFrameId = 0;

    const textureDims = Platform.OS === "ios" ? { width: 1080, height: 1920 } : { width: 1600, height: 1200 };
    const tensorDims = { width: 152, height: 200 };

    const iconPressed = useCallback(() => camType === backCamera ? setCamType(frontCamera) : setCamType(backCamera), [camType])

    const model = poseDetection.SupportedModels.MoveNet;
    const detectorConfig = {
        modelType: poseDetection.movenet.modelType.MULTIPOSE_LIGHTNING,
        enableTracking: true,
        trackerType: poseDetection.TrackerType.Keypoint,
        trackerConfig: {
            maxTracks: 4,
            maxAge: 1000,
            minSimilarity: 1,
            keypointTrackerParams: {
                keypointConfidenceThreshold: 1,
                keypointFalloff: [],
                minNumberOfKeypoints: 4
            }
        }
    }

    const detectPose = async (tensor) => {
        if (!tensor) return
        const poses = await poseDetector.estimatePoses(tensor)
        if (poses[0] !== undefined) {
            const points = poses[0].keypoints.map(point => [point.x, point.y, point.name])
            console.log(points)
            coords = points
        } else {
            coords = []
        }
        ///console.log(coords)
    }

    const handleCameraStream = (imageAsTensors) => {
        const loop = async () => {
            const nextImageTensor = await imageAsTensors.next().value;
            await detectPose(nextImageTensor);
            requestAnimationFrameId = requestAnimationFrame(loop);
        };
        if (true) loop();
    }

    useEffect(() => {
        if (!frameworkReady) {
            ;(async () => {
                const { status } = await Camera.requestPermissionsAsync();
                console.log(`permissions status: ${status}`);
                setHasPermission(status === 'granted');
                await tf.ready();
                setPoseDetector(await poseDetection.createDetector(model, detectorConfig))
                setFrameworkReady(true);
            })();
        }
    }, []);

    useEffect(() => {
        return () => {
            cancelAnimationFrame(requestAnimationFrameId);
        };
    }, [requestAnimationFrameId]);

    return (
        <View style={styles.cameraView}>
            <TensorCamera
                style={styles.camera}
                type={camType}
                zoom={0}
                cameraTextureHeight={textureDims.height}
                cameraTextureWidth={textureDims.width}
                resizeHeight={tensorDims.height}
                resizeWidth={tensorDims.width}
                resizeDepth={3}
                onReady={(imageAsTensors) => handleCameraStream(imageAsTensors)}
                autorender={true}
            >
            </TensorCamera>
            <TouchableOpacity style={[styles.absoluteView]} activeOpacity={0.1}>
                <Icon name="camera-reverse-outline" size={40} color="white" onPress={iconPressed}/>
            </TouchableOpacity>
        </View>
    )
}

const styles = StyleSheet.create({
    camera: { flex: 1 },
    cameraView: { flex: 1 },
    absoluteView: {
        position: 'absolute',
        right: 30,
        bottom: Platform.select({ ios: 40, android: 30 }),
        padding: 10,
    },
    tracker: {
        position: 'absolute',
        width: 10,
        height: 10,
        borderRadius: 5,
        backgroundColor: Colors.blue500
    }
})

Hi @11130, you need to lower minSimilarity. The semantics of this field: it is the similarity between the current pose and a tracked pose, and if their similarity is greater than minSimilarity, we consider them the same person. 1 is the largest possible similarity score, so setting it to 1 means a detection is only matched to a track when the current pose is exactly the same as before. The default minSimilarity is 0.15; if you want the default, you can omit this field. The same applies to keypointConfidenceThreshold, whose default is 0.3. You also need to set values for keypointFalloff; the default is [0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072, 0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089]. I suggest just using the defaults; to do that, simply omit keypointTrackerParams.
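For example, a minimal detector config that enables the built-in keypoint tracker and relies entirely on the default tracker parameters might look like the sketch below (based on the @tensorflow-models/pose-detection API):

// Sketch: multipose MoveNet with the built-in keypoint tracker and default
// tracker params (minSimilarity 0.15, keypointConfidenceThreshold 0.3, etc.).
const detectorConfig = {
    modelType: poseDetection.movenet.modelType.MULTIPOSE_LIGHTNING,
    enableTracking: true,
    trackerType: poseDetection.TrackerType.Keypoint,
    // trackerConfig omitted entirely: the library falls back to its defaults.
};
const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.MoveNet,
    detectorConfig
);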


Thanks for the reply

At first I tried it without trackerConfig, but nothing happened.
That's why I tried to set the parameters manually.

I'm wondering if it's because I'm using the SINGLEPOSE_LIGHTNING model.

Tracking is only needed for multipose, and from your shared code it seems you are using MULTIPOSE_LIGHTNING. Can you clarify what you mean by "nothing happened"? Did the poseId change every time?
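(As an aside, one way to check this is to log the id the tracker assigns to each pose returned by estimatePoses. A rough sketch, assuming a detector created with enableTracking: true and the tensor from your camera loop:)

// Rough check: with tracking enabled, each pose should carry a stable `id`
// across frames for the same person.
const poses = await poseDetector.estimatePoses(tensor);
poses.forEach(pose => console.log('track id:', pose.id, 'score:', pose.score));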

Yes, I can see the id is updated.

What I meant by "nothing happened" is that even though I enabled the tracker, with or without trackerConfig, and tried both the SINGLEPOSE and MULTIPOSE models, I couldn't see the tracker working on screen.

Can you open an issue in our tfjs repo? We’ll have someone look into it. Thanks.

@11130, could you share a link to your repo? It seems the code is not working. Also, are you running the Expo app locally, i.e., with the models and weights saved on the device?

@lina128, do you have a recommended tutorial on how to run MoveNet in React Native Expo? The information seems to be scattered everywhere, and it would be nice to check against a working example (with React Native Expo). It would be great if you could point to one.