KINECT - Detecting the distance to an object


I need to create a live installation where people are going to play in front of a Kinect camera. I need to show some kind of “stand by” logo, and as soon as someone gets in front of the Kinect, the logo will disappear and the interactive scene will start. Does anybody know how to measure the distance to a certain position, or how to detect if a 3D object enters a certain 3D space?

Thanks a lot!

OK, I think I have a solution. The Kinect camera gives me a depth image (a ramp from black to white), so placing a Video Sampler modifier can give me an estimate of the “distance” to the object at the center of the screen. Passing a certain threshold will activate the “playing” scene; otherwise a standby scene will be played.
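For reference, the same thresholding idea expressed as code (a rough Python sketch, not Notch itself; the frame values, patch size, and 1500 mm threshold are made-up assumptions for illustration):

```python
import numpy as np

def is_person_present(depth_frame, threshold_mm=1500, patch=20):
    """Return True if the average depth in a small centre patch is closer
    than threshold_mm, i.e. someone is standing in front of the sensor."""
    h, w = depth_frame.shape
    cy, cx = h // 2, w // 2
    region = depth_frame[cy - patch:cy + patch, cx - patch:cx + patch]
    valid = region[region > 0]          # Kinect reports 0 for "no reading"
    if valid.size == 0:
        return False                    # nothing detected -> stay in standby
    return bool(valid.mean() < threshold_mm)

# Synthetic example: background at 4 m, an object at 1.2 m in the middle
frame = np.full((480, 640), 4000, dtype=np.uint16)
frame[200:280, 280:360] = 1200
present = is_person_present(frame)      # -> True: switch to the playing scene
```

The threshold plays the role of the Video Sampler's output crossing a set value: below it, show the interactive scene; above it, show the standby logo.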

Correct me if there’s a simpler way to do this.

Seems like you have the right solution for your needs but just FYI…

The Hot Zone node will register when an object passes into it; you’ll need the Extractor Modifier with it. This is also useful for getting a count of how many times a zone is hit.

And Notch ships with some node setups you can drop into your graph via the Bins panel; there is a calculate-distance setup.

For that you need to define what point A and point B are: the camera being one and an object being the other. Whether that works with Kinect-generated meshes I don’t know, but it’s useful in many cases generally.
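Under the hood, a calculate-distance setup is just the Euclidean distance between two 3D positions. A minimal Python sketch, with made-up positions for the camera and the object:

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points given as (x, y, z) tuples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

camera_pos = (0.0, 1.5, 0.0)    # hypothetical camera position (metres)
object_pos = (0.5, 1.0, 2.0)    # hypothetical tracked-object position

d = distance(camera_pos, object_pos)
trigger = d < 2.5               # e.g. leave standby when closer than 2.5 m
```

The 2.5 m trigger distance is an arbitrary example value; in practice you would tune it to the installation space.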

Hi Jamie!

Thanks a lot! I’ll try that solution too. I haven’t used Hot Zones yet. It seems to be a “cleaner” solution for what I’m looking …

Hi Jamie!

I’ve been experimenting with the Hot Zone node and the Kinect. I didn’t manage to make it work, as the Hot Zone node seems to detect the whole Kinect 3D space instead of single elements within that space. As the manual states, the Hot Zone node detects when the CENTER of an object enters the defined volume. Since the Kinect volume is fixed in space, its CENTER is always a fixed point, even when no geometry is generated or detected by the camera, so the Hot Zone sends a detection value as long as the Kinect’s center is inside the volume, regardless of what the Kinect “renders” as 3D elements. At least this is what I’m getting. Please correct me if someone thinks I’m doing something wrong.

I’m trying to do an interactive toggle in some areas of a Kinect 1.

I was trying with the Hot Zone and the Video Sampler.

With the Hot Zone, yes, it detects the whole node, not a specific item of the Kinect Mesh. I’m using the Kinect Mesh for this and it hasn’t worked yet.
I’ve also been trying with cloners: feeding the Kinect Mesh into a Clone to Mesh and using a Shape 3D to generate 3D object clones in the area, but it still doesn’t work. I tried giving the Clone to Mesh and also the Shape 3D as Test Points (in the Hot Zone), but nothing; it looks like the Hot Zone does not work with specific items of the cloners. I also tried Clone to Particles, using the particles generated with the Kinect Mesh.
I will try assigning a Shape 3D to the hands of the Kinect Skeleton; maybe that will work.

The Video Sampler also looks like a good choice, but I still haven’t figured out which area of the image the Video Sampler reads to extract the data. I understand that for the UV coordinates on the node, 0.5 x 0.5 represents the middle of the image feed, 0 x 0 is the lower-left corner, and 1 x 1 is the upper right.

I’d appreciate some guidance.