Live demo: https://sensoria.herokuapp.com/
GitHub: search for AlexShafir/Sensoria in the GitHub search field
This project is a demo of P2P WebRTC communication in 3D space, using TFJS Face-Landmarks-Detection for face and iris tracking.
What you see is the unfiltered output of TensorFlow Facemesh: the MediaPipe Facemesh model running on the WASM (CPU) computation backend.
Once I was at a massive online Zoom event, and all those video “rectangles” made me feel a bit disconnected.
So I became curious whether there are alternative solutions that do not require VR goggles yet still provide an immersive experience.
Projecting video as a 2D rectangle into 3D space still breaks immersion, so I opted for TFJS Face-Landmarks-Detection for face and iris tracking.
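For readers curious what this setup looks like in code, here is a minimal sketch of loading the MediaPipe Facemesh model on the WASM backend and estimating face landmarks from a webcam feed. This is an illustration based on the 0.0.x `@tensorflow-models/face-landmarks-detection` API, not Sensoria's actual source; the `trackFace` function name is mine.

```javascript
import * as tf from '@tensorflow/tfjs-core';
import '@tensorflow/tfjs-backend-wasm'; // registers the WASM (CPU) backend
import * as faceLandmarksDetection from '@tensorflow-models/face-landmarks-detection';

// Hypothetical helper: `video` is an HTMLVideoElement showing the webcam stream.
async function trackFace(video) {
  await tf.setBackend('wasm'); // run inference on CPU via WebAssembly

  const model = await faceLandmarksDetection.load(
    faceLandmarksDetection.SupportedPackages.mediapipeFacemesh,
    { shouldLoadIrisModel: true } // iris tracking needs the extra iris model
  );

  // Each prediction contains a dense set of 3D mesh keypoints for one face,
  // which can then be rendered as a mesh in a 3D scene.
  const faces = await model.estimateFaces({ input: video });
  return faces;
}
```

The returned keypoints are what make a 3D rendering possible: instead of drawing the video itself, the demo can place the reconstructed mesh into the shared scene.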
How to try
You can open Sensoria in another browser tab to simulate a conversation (it will run about 2x slower, though, due to the two processing threads).
Chrome browser is recommended.
The model was originally trained for smartphones, so you should stay close to the camera for the best results.
The virtual camera has a fixed location, while the face mesh moves freely around it.
The eyes currently have a fixed size due to scaling problems.