Gesture-triggered video playback

Hi all,

I have a client who has an idea of installing an LED screen in a shop window. They would like videos on it to be triggered by people outside moving in certain ways (gesture-triggered videos, basically).

I have a project set up using the NVIDIA AR Body Tracker. Is there a way to get data out of this tracker so I can use it to trigger different videos? For example, if the left arm is raised past a certain point, one video plays; if the right arm is raised past a certain point, another video plays.

I’m also curious: would you foresee any issues with the camera pointing through glass, or with sunlight hitting the camera?

Hey there,

For AR body tracking you can just add a 3D character, hide the mesh, and track e.g. a hand and detect if it goes above a certain height. It can only track one person at a time though, so it would be tricky to force it to focus on only one person in a busy area like a shop window.
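The logic itself is simple once you can read the tracked hand position each frame. As a rough sketch (plain Python just to illustrate the idea, not Notch syntax; the joint data, video names, and threshold are placeholders):

```python
# Illustrative only - not Notch code. Assumes the tracked hand positions are
# available each frame as (x, y, z) tuples, with +y pointing up.

RAISE_HEIGHT = 1.6   # placeholder height (metres) a hand must pass to trigger
current_video = None

def update(left_hand, right_hand):
    """Decide which video (if any) should be playing this frame."""
    global current_video
    if left_hand[1] > RAISE_HEIGHT:
        current_video = "video_a.mp4"   # left hand raised -> first video
    elif right_hand[1] > RAISE_HEIGHT:
        current_video = "video_b.mp4"   # right hand raised -> second video
    # otherwise keep whatever is currently playing
    return current_video
```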

As for the camera pointing through glass: in principle it’s fine, but if sunlight gets into the lens it might overexpose the camera and you won’t be able to see anything, and therefore won’t be able to track anything. An anti-glare coating on the glass might help, but I’m not experienced with this, so I’d suggest contacting an expert on that part.

– Ryan

Thanks for the reply, Ryan. I get what you mean about the camera exposure and the single-person tracking; it’s definitely something I need to flag with the client.

Regarding the tracking of certain parts of the model, such as the hand, what’s the best way of doing that? I did discover the Direction Weighted Motion modifier, but I haven’t been able to get any meaningful results out of it.

Direction Weighted Motion is probably the wrong avenue there - there are hundreds of ways you could do it, but I’d stick to the easy way.
Child a node to the skeleton, e.g. a null, and pass that into a Hot Zone’s Test Points input. Place the Hot Zone where you like, and you can then use an Extractor to test if the zone has been hit, and use that to trigger whatever effects you fancy.
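Conceptually the Hot Zone is just a box test on whatever points you feed it, and the Extractor just reads that result out so you can drive something with it. In plain Python terms (not Notch code; the zone size and hand position are placeholders):

```python
# Conceptual version of the Hot Zone / Extractor idea - not Notch code.
# The "zone" is an axis-aligned box; the "test point" is the null childed to the skeleton.

ZONE_MIN = (-0.2, 1.5, -0.2)   # placeholder box corners in world space
ZONE_MAX = ( 0.2, 2.0,  0.2)

def zone_hit(point):
    """Return True if the tracked point is inside the zone."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, ZONE_MIN, ZONE_MAX))

# The Extractor step is then just "read that value and trigger something with it":
hand_position = (0.0, 1.8, 0.0)   # would come from the tracked null each frame
if zone_hit(hand_position):
    print("trigger video A")       # stand-in for whatever effect you fancy
```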

– Ryan

This is actually the first time I’ve heard of Hot Zones and Extractors; it sounds like exactly what I’ve been looking for! Is there an example nodegraph you could point me to? I’m not sure exactly how the Hot Zone works.

Edit: Never mind, I figured it out. The Shape goes into the ‘Test Points’ input of the Hot Zone, and the output of the Hot Zone goes into the input of the Extractor. I need to scale the Hot Zone to make it easier to see what’s happening.

Thanks for your help, I think I’m sorted for now.

Just a side note: in some similar-ish R&D I’m doing at the moment, I’ve been enjoying using the calculate distance and find midpoint setups in the bins that ship with Notch, linked to the wrists of a tracked skeleton, along with the Speed Tracker modifier.

You should be able to develop some interesting gesture rules with these added to the mix.
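For example, here’s a rough sketch of the kind of rule I mean (plain Python rather than a nodegraph, just to show the logic; the thresholds and joint data are placeholders):

```python
import math

# Rough sketch of combining wrist distance, midpoint and speed into a gesture rule.
# Plain Python for illustration - in Notch this would be the distance/midpoint setups
# plus the Speed Tracker modifier driving a trigger.

def distance(a, b):
    return math.dist(a, b)

def midpoint(a, b):
    return tuple((pa + pb) / 2 for pa, pb in zip(a, b))

def wrist_speed(prev, curr, dt):
    return distance(prev, curr) / dt

# Example rule: hands spread wide and held high, moving quickly -> "big reveal" gesture
def is_big_reveal(left_wrist, right_wrist, prev_left_wrist, dt):
    wide  = distance(left_wrist, right_wrist) > 1.2            # placeholder metres
    high  = midpoint(left_wrist, right_wrist)[1] > 1.5         # midpoint above chest height
    quick = wrist_speed(prev_left_wrist, left_wrist, dt) > 2.0 # placeholder m/s
    return wide and high and quick
```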