TensorFlow.js video is wrong size

Unhandled Rejection (Error): The shape of dict['x'] provided in model.execute(dict) must be [1,640,640,3], but was [1,480,640,3]
I am using GitHub - hugozanini/TFJS-object-detection (real-time custom object detection in the browser using TensorFlow.js) with my own model trained for object detection.

Check this link and let me know if it's helpful: Error the shape of dict.
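
The error itself just says the graph expects a 640×640×3 input while the webcam frame being fed to it is 480×640×3 (a 640×480 video). One way to reconcile the two is to resize the frame before handing it to the model. The lines below are only a sketch of that idea; `videoElement` and `model` are placeholder names for your own <video> element and loaded graph model, and the dict key 'x' comes from your error message:

// Sketch only: resize a 640x480 webcam frame for a model that expects 640x640.
const frame = tf.browser.fromPixels(videoElement);          // shape [480, 640, 3]
const resized = tf.image.resizeBilinear(frame, [640, 640]); // shape [640, 640, 3]
const batched = resized.expandDims(0);                      // shape [1, 640, 640, 3]
const predictions = await model.executeAsync({ x: batched });
tf.dispose([frame, resized, batched]);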

I will be able to assist you more quickly if you can elaborate on your question and include the code and the exact problem you are having, with the appropriate embed.

Can you help me? I am trying to use the BodyPix model in my React Native app: when I open the app I want to open the camera and do human body segmentation. I am new to coding. This is the code of my React Native camera app:
import { Camera, CameraType } from 'expo-camera';
import { useState } from 'react';
import { Button, StyleSheet, Text, TouchableOpacity, View } from 'react-native';

export default function App() {
  const [type, setType] = useState(CameraType.back);
  const [permission, requestPermission] = Camera.useCameraPermissions();

  if (!permission) {
    // Camera permissions are still loading
    return <View />;
  }

  if (!permission.granted) {
    // Camera permissions are not granted yet
    return (
      <View style={styles.container}>
        <Text style={{ textAlign: 'center' }}>We need your permission to show the camera</Text>
        <Button onPress={requestPermission} title="grant permission" />
      </View>
    );
  }

  function toggleCameraType() {
    setType(current => (current === CameraType.back ? CameraType.front : CameraType.back));
  }

  return (
    <View style={styles.container}>
      <Camera style={styles.camera} type={type}>
        <View style={styles.buttonContainer}>
          <TouchableOpacity style={styles.button} onPress={toggleCameraType}>
            <Text style={styles.text}>Flip Camera</Text>
          </TouchableOpacity>
        </View>
      </Camera>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
  },
  camera: {
    flex: 1,
  },
  buttonContainer: {
    flex: 1,
    flexDirection: 'row',
    backgroundColor: 'transparent',
    margin: 64,
  },
  button: {
    flex: 1,
    alignSelf: 'flex-end',
    alignItems: 'center',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    color: 'white',
  },
});

This is my code with BodyPix, but it is not working :frowning: . Will you help me?
import { Camera, CameraType } from 'expo-camera';
import React, { useRef } from 'react';
// import logo from './logo.svg';
import * as tf from '@tensorflow/tfjs';
import * as bodyPix from '@tensorflow-models/body-pix';
import { useState } from 'react';
import { Button, StyleSheet, Text, TouchableOpacity, View } from 'react-native';

export default function App() {
  const [type, setType] = useState(CameraType.back);
  const [permission, requestPermission] = Camera.useCameraPermissions();
  const canvasRef = useRef(null);

  const runBodysegment = async () => {
    const net = await bodyPix.load();
    console.log('BodyPix model loaded.');
    // Loop and run detection
    setInterval(() => {
      detect(net);
    }, 100);
  };

  const detect = async (net) => {
    const person = await net.segmentPersonParts(video);
    console.log(person);

    const coloredPartImage = bodyPix.toColoredPartMask(person);
    bodyPix.drawMask(
      canvasRef.current,
      video,
      coloredPartImage,
      0.7,
      0,
      false
    );
  };

  runBodysegment();

  if (!permission) {
    // Camera permissions are still loading
    return <View />;
  }

  if (!permission.granted) {
    // Camera permissions are not granted yet
    return (
      <View style={styles.container}>
        <Text style={{ textAlign: 'center' }}>We need your permission to show the camera</Text>
        <Button onPress={requestPermission} title="grant permission" />
      </View>
    );
  }

  function toggleCameraType() {
    setType(current => (current === CameraType.back ? CameraType.front : CameraType.back));
  }
  detect();

  return (
    <View style={styles.container}>
      <Camera style={styles.camera} type={type}>
        <View style={styles.buttonContainer}>
          <TouchableOpacity style={styles.button} onPress={toggleCameraType}>
            <Text style={styles.text}>Flip Camera</Text>
          </TouchableOpacity>
        </View>
      </Camera>
    </View>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    justifyContent: 'center',
  },
  camera: {
    flex: 1,
  },
  buttonContainer: {
    flex: 1,
    flexDirection: 'row',
    backgroundColor: 'transparent',
    margin: 64,
  },
  button: {
    flex: 1,
    alignSelf: 'flex-end',
    alignItems: 'center',
  },
  text: {
    fontSize: 24,
    fontWeight: 'bold',
    color: 'white',
  },
});

Fixed it with:

const webCamPromise = navigator.mediaDevices
  .getUserMedia({
    audio: false,
    video: {
      width: { ideal: 640 },
      height: { ideal: 640 },
      facingMode: "user"
    }
  })

but now I get:
Uncaught (in promise) Error: The dtype of dict['x'] provided in model.execute(dict) must be float32, but was int32
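
That usually means the frame tensor being passed to the model is still int32 (tf.browser.fromPixels returns an int32 tensor) while the graph wants float32, so the tensor needs a cast before execute(). A minimal sketch, reusing the placeholder names from the earlier snippet (depending on the model, the pixel values may also need normalising):

// Sketch only: cast the int32 pixel tensor to float32 before running the model.
const frame = tf.browser.fromPixels(videoElement);   // dtype int32
const casted = tf.cast(frame, 'float32');            // dtype float32
const batched = casted.expandDims(0);                 // shape [1, 640, 640, 3]
const predictions = await model.executeAsync({ x: batched });
tf.dispose([frame, casted, batched]);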