Action Recognition on Live Stream

I was looking at the sample in: Action Recognition with an Inflated 3D CNN  |  TensorFlow Hub

It works just fine, but how can I run this against a live feed? Is it possible? And how would I train my own model if required?

For streaming I suggest you take a look at the streaming models in:

https://tfhub.dev/google/collections/movinet/

You can fine-tune these on your own data or train them from scratch.
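
A minimal sketch of loading one of these classifiers for fine-tuning, assuming the KerasLayer usage shown on the MoViNet model pages (the hub_url is just one possible variant, and the dict(image=...) input is taken from that usage; check the page of the model you pick):

```python
# A minimal sketch, assuming the TF Hub usage shown on the MoViNet model pages:
# the module is loaded as a KerasLayer and called on a dict with an 'image' key.
import tensorflow as tf
import tensorflow_hub as hub

# Hypothetical choice of variant; pick the one that fits your latency/accuracy needs.
hub_url = 'https://tfhub.dev/tensorflow/movinet/a0/base/kinetics-600/classification/3'
encoder = hub.KerasLayer(hub_url, trainable=True)  # trainable=True enables fine-tuning

# Video clips: [batch, frames, height, width, 3], float32 values in [0, 1].
image_input = tf.keras.layers.Input(
    shape=[None, None, None, 3], dtype=tf.float32, name='image')

# The module returns Kinetics-600 class scores for each clip.
outputs = encoder(dict(image=image_input))
model = tf.keras.Model(image_input, outputs)

# For your own labels you would typically rebuild the classifier head
# (e.g. with the official movinet package in TF Models) and then call
# model.fit(...) on your own clip dataset.
```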

Thank you for answering. I did review that one, but I am a bit confused. I would like to test this against a webcam on a live feed, and I don't see how the code would work for that: as far as I can tell it reads the whole video, but with live data there is no end.

You can pass the stream one chunk at a time, as you can see in the example at:

https://tfhub.dev/tensorflow/movinet/a5/stream/kinetics-600/classification/2

You need to access the camera in your own code (OpenCV, TensorFlow I/O, Video4Linux, etc.).
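
As a rough, untested sketch, a webcam loop with OpenCV could look like the following. The init_states() call and the (logits, states) return value are assumptions based on the streaming example on that model page, so double-check the exact signature there:

```python
# A minimal sketch, assuming the API shown in the streaming MoViNet example:
# init_states() gives the initial internal state, and calling the model on
# {'image': clip, **states} returns (logits, new_states). Not a tested implementation.
import cv2
import tensorflow as tf
import tensorflow_hub as hub

hub_url = 'https://tfhub.dev/tensorflow/movinet/a2/stream/kinetics-600/classification/3'
model = hub.load(hub_url)

# Input shape: [batch, frames, height, width, channels]; A2 expects 224x224 frames.
states = model.init_states(tf.constant([1, 1, 224, 224, 3]))

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # BGR -> RGB, resize, scale to [0, 1], add batch and time dimensions.
        rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        clip = tf.constant(rgb, dtype=tf.float32)[tf.newaxis, tf.newaxis] / 255.0

        # Feed one frame at a time; the state carries the temporal context,
        # so the video never needs to "end".
        logits, states = model({**states, 'image': clip})
        probs = tf.nn.softmax(logits[0])
        top = int(tf.argmax(probs))
        print('top Kinetics-600 class id:', top, 'score:', float(probs[top]))
finally:
    cap.release()
```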

If instead you want to run this on Android, you need to use TF Lite and write your own demo/example.
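
If you export or download a SavedModel, a generic conversion sketch looks like this (there are also ready-made TF Lite MoViNet models on TF Hub, and the streaming variants may need a dedicated export path, so treat this only as a starting point):

```python
# A generic sketch of converting a SavedModel to TF Lite for use on Android.
import tensorflow as tf

# 'movinet_saved_model' is a hypothetical path to an exported SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model('movinet_saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
tflite_model = converter.convert()

with open('movinet.tflite', 'wb') as f:
    f.write(tflite_model)
```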

You can also try MediaPipe if you like:

https://blog.gofynd.com/mediapipe-with-custom-tflite-model-d3ea0427b3c1

Thank you very much. TF Hub is new to me, but this seems to be the solution.
I really appreciate your time, thank you. :+1:

Have you successfully used MoViNet on real-time video from your webcam? I'm researching it right now, but it's so difficult. Please help me.

Hi hamseungmi796, welcome to the TF Forum.

What part are you finding hard?

Did you check this tutorial: MoViNet for Streaming Action Recognition  |  TensorFlow Hub

It may help you with your task.