segmentPeople with mask rendered through WebGL

I had an early implementation using BodyPix in which I rendered the segmentation pixel data through WebGL to composite the mask with a video texture. I need to update this to the new version. I can upload a test if needed.

"drawMask" still seems to use software rendering. I'm not sure why all the effort goes into hardware acceleration and WebAssembly for inference, only to then force a software render of the mask and video to a 2D canvas.
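For context, the software path I mean looks roughly like this, going by the README (canvas2d is just my name for the output canvas, and the argument values are examples):

// Software path: toBinaryMask builds ImageData on the CPU and drawMask
// composites it with the video on a 2D canvas context.
const people = await segmenter.segmentPeople(localVideo);
const maskImage = await bodySegmentation.toBinaryMask(people);
await bodySegmentation.drawMask(canvas2d, localVideo, maskImage, 0.7, 3);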

How can I use the data returned from segmentPeople to render a mask on top of a video texture with WebGL? There is no example. Is the code below the right way to get pixel data to render to a WebGL texture as the mask? I tried using the data from toBinaryMask but it didn't work; the output is ImageData. With BodyPix I would get raw pixel data I could use with WebGL.

Painting video to a canvas in software is resource-intensive and less efficient than WebGL.

Could I somehow use the WebGL context directly, e.g. window.exposedContext?

const segmentation = await segmenter.segmentPeople(localVideo, {
  flipHorizontal: false,
  multiSegmentation: false,
  segmentBodyParts: true,
});

const gl2 = window.exposedContext;
if (gl2) {
  // Probe the exposed context by reading back one RGBA pixel.
  gl2.readPixels(
      0, 0, 1, 1, gl2.RGBA, gl2.UNSIGNED_BYTE, new Uint8Array(4));
}

const data = await bodySegmentation.toBinaryMask(
    segmentation, {r: 0, g: 0, b: 0, a: 0}, {r: 0, g: 0, b: 0, a: 255},
    false, 1);
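Since toBinaryMask returns ImageData (RGBA bytes), one option is to repack its alpha channel into a tight one-byte-per-pixel array before a gl.ALPHA upload; a rough sketch (maskBytes is my name for it):

// ImageData stores RGBA (4 bytes per pixel) but gl.ALPHA expects 1 byte
// per pixel, so copy out just the alpha channel for the ArrayBufferView
// overload of texImage2D.
const rgba = data.data;  // Uint8ClampedArray
const maskBytes = new Uint8Array(data.width * data.height);
for (let i = 0; i < maskBytes.length; i++) {
  maskBytes[i] = rgba[i * 4 + 3];
}

Alternatively, ImageData is itself a valid TexImageSource, which is what the upload below relies on.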

And then something like what I had before:

gl.activeTexture(gl.TEXTURE1);
gl.texImage2D(
    gl.TEXTURE_2D,     // target
    0,                 // level
    gl.ALPHA,          // internalformat
    gl.ALPHA,          // format, "must be the same as internalformat"
    gl.UNSIGNED_BYTE,  // type
    data               // the ImageData from toBinaryMask, accepted as a
                       // TexImageSource; WebGL keeps only its alpha channel
);


gl.viewport(0, 0, metadata.width, metadata.height);
gl.activeTexture(gl.TEXTURE0);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, localVideo);
gl.uniform1i(frameTexLoc, 0);
gl.uniform1i(maskTexLoc, 1);
gl.uniform1f(texWidthLoc, metadata.width);
gl.uniform1f(texHeightLoc, metadata.height);
gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);
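One thing worth noting: video frames are rarely power-of-two sized, so in WebGL1 both textures need clamped wrap modes and non-mipmap filtering or they render black. Something like this (setupTexture is my helper name); LINEAR on the mask unit also softens the edge slightly:

// NPOT textures in WebGL1 require CLAMP_TO_EDGE wrapping and LINEAR or
// NEAREST filtering (no mipmaps), otherwise the texture is incomplete.
function setupTexture(gl, unit) {
  const tex = gl.createTexture();
  gl.activeTexture(unit);
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
  return tex;
}
setupTexture(gl, gl.TEXTURE0);  // video frame
setupTexture(gl, gl.TEXTURE1);  // mask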

I believe I am going to attempt to use MediaPipe directly instead. The example CodePen still does software rendering, and it needed adjusting to remove the background, since it colors the foreground.

It uses a lot of CPU to render the mask because it is a software render. The WebGL render uses less CPU, but using the mask returned from MediaPipe the way I did before doesn't seem to work now like it did for BodyPix.

If I try the following, I don't get masking working; I need to change my shader.

gl.texImage2D(
    gl.TEXTURE_2D,            // target
    0,                        // level
    gl.ALPHA,                 // internalformat
    gl.ALPHA,                 // format, "must be the same as internalformat"
    gl.UNSIGNED_BYTE,         // type
    results.segmentationMask  // a TexImageSource (canvas/ImageBitmap), so the
                              // six-argument overload, not the raw-pixels one
    // previously: segmentation.data  // BodyPix pixel array
);
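For reference, that upload runs inside the onResults callback; the wiring looks roughly like this (uploadMask is a hypothetical helper wrapping the texImage2D call above):

const selfieSegmentation = new SelfieSegmentation({
  locateFile: (file) =>
      `https://cdn.jsdelivr.net/npm/@mediapipe/selfie_segmentation/${file}`,
});
selfieSegmentation.setOptions({modelSelection: 1});
selfieSegmentation.onResults((results) => {
  // results.segmentationMask is a TexImageSource (canvas/ImageBitmap),
  // hence the six-argument texImage2D overload above.
  uploadMask(results.segmentationMask);
});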

My fragment shader looks like this. It also blends in a background image texture:

precision mediump float;

uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;

uniform float texWidth;
uniform float texHeight;

void main(void) {
  vec2 texCoord = vec2(gl_FragCoord.x / texWidth, 1.0 - (gl_FragCoord.y / texHeight));
  // The "* 255." assumes mask bytes of 0/1 (what BodyPix's data gave me),
  // so the sampled alpha is 0.0 or 1/255. With a 0/255 mask such as the
  // toBinaryMask output, the factor should just be .a, or mix() extrapolates.
  gl_FragColor = mix(texture2D(background, texCoord),
                     vec4(texture2D(frame, texCoord).rgb, 1.0),
                     texture2D(mask, texCoord).a * 255.);
}

Using canvas rendering of the mask and video takes considerably more CPU, but it is less blocky than the GPU shader version. Going through MediaPipe directly rather than via TensorFlow.js, the mask edges are blockier in WebGL. I need to smooth this out somehow.
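One idea for the blockiness: keep LINEAR filtering on the mask texture and feather the edge in the fragment shader with smoothstep; a sketch (the 0.3/0.7 thresholds are guesses to tune):

// With LINEAR filtering the sampled mask value ramps between 0.0 and 1.0
// across the person boundary; smoothstep turns that ramp into a soft
// falloff instead of a hard 0/1 cut.
float m = texture2D(mask, texCoord).a;
float soft = smoothstep(0.3, 0.7, m);
gl_FragColor = mix(texture2D(background, texCoord),
                   vec4(texture2D(frame, texCoord).rgb, 1.0), soft);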