Procedural Meshing with Kinect Cutoff

I keep having this strange problem: when I try to use a Kinect mesh to make a procedural mesh, I get this weird cutoff line, even though the bounding box covers the Kinect mesh and goes well beyond its bounds.


You can see there is a line going across my chest and hand, and you can see the Kinect mesh underneath.
Also worth mentioning: the Kinect mesh is parented to a null and is slightly rotated, so the line looks diagonal, but if I leave the Kinect mesh rotation transforms at 0,0,0 the line is perfectly horizontal. Another thing worth mentioning is that if I move the Kinect mesh on the Y axis, the procedural mesh follows it and the line does not shift on my body. The procedural root node and all its children do not move with the Kinect mesh, so it is not as if I am also translating the bounding box as a child of the Kinect mesh.
I also tried different scales, and the line rescales to the same position.

I tried it with a Kinect 2 and an Azure Kinect and always get this bug.

Edit 1: Bumping up the Mesh Resolution Scale to 1.3 in the Kinect mesh attributes seems to fix the problem, but if I bump up the far clip plane to reveal the back wall I get weird artefacts and the line shifts even higher, so I end up bumping the Mesh Resolution Scale even higher. Playing with the Kinect depth scale on the depth camera node shifts the line higher or lower, depending on how far I go above or below the original depth scale value.
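
I don't know what Notch is doing internally, but the way the line reacts to the depth scale and the far clip plane makes me think depth values past some range are getting clamped before the procedural system samples them. Just to illustrate the mechanism I'm imagining (a toy sketch in Python, not Notch code, and the range numbers are made up):

```python
# Toy illustration (not Notch code): if depth values past some range get
# clamped/discarded before the procedural system samples them, everything
# beyond that depth ends at a hard boundary, which would show up as a
# straight cutoff line on the mesh. Scaling the depth moves that boundary,
# which matches what I see when I change the Kinect depth scale.
import numpy as np

h, w = 424, 512                                               # Kinect 2 depth resolution
depth = np.linspace(0.5, 4.5, h)[:, None] * np.ones((h, w))  # fake depth ramp in metres

depth_scale = 1.0          # analogous to the Kinect depth scale knob
max_sampled_depth = 3.0    # hypothetical limit where sampling stops

kept = (depth * depth_scale) <= max_sampled_depth
cutoff_row = np.argmax(~kept[:, 0])   # first row that falls outside the range
print(f"geometry kept up to row {cutoff_row} of {h}")  # that boundary = the visible line
```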

That's a pretty interesting find - I think the Kinect mesh might have too much geometry for the 3D object or something - could you upload the file or send it over to support@notch.one?

I think a better workflow would be to use the Image generator and feed the depth map into the image node instead. That way you don't have to take a detour through geometry, and you can probably improve performance a bit.
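
Conceptually the depth map just becomes a heightmap that displaces a flat grid, so the image drives the surface directly. Very roughly, the idea is something like this (a sketch, not Notch code):

```python
# Rough sketch of the idea (not Notch code): treat the depth map as a
# heightmap and displace a flat grid with it, instead of going through
# the Kinect mesh geometry first.
import numpy as np

depth = np.random.rand(424, 512).astype(np.float32)   # stand-in for the Kinect depth map
h, w = depth.shape

# one vertex per depth pixel, laid out on a flat grid in X/Y
ys, xs = np.mgrid[0:h, 0:w]
displace_amount = 0.5                                  # how far the depth pushes vertices out
vertices = np.stack([xs / w, ys / h, depth * displace_amount], axis=-1)

print(vertices.shape)  # (424, 512, 3) -> a displaced surface driven directly by the image
```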

– Ryan


Hi, I sent the file to support@notch.one.
I'm also uploading it here: Kinect Procedural Cutoff.dfx (45.6 KB).
Your idea about the depth processing node is actually really cool; I was thinking about something similar for particles, taking the view of the 3D mesh and piping it into an image emitter.
I tried a few things based on your suggestion, but I don't think I have figured it out yet.
I tried using the Depth Processing node and feeding it into the Image generator in the procedural node system, and it creates a displaced surface, but it is very flat. I also tried just turning on the preview depth checkbox in the Kinect source but still get the same result. The Kinect 2 just seems to output a white silhouette instead of a depth image. I haven't tested it with the Azure yet. Is there a way I can adjust the depth gradient? Right now I am getting a white image where I should be.

Edit 1: I think I figured it out using the Luminance from Depth node.
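
In case anyone else hits this: my understanding is that the raw depth values are way above 1.0, so displayed directly they just clamp to white, and the Luminance from Depth node remaps them between a near and far distance into a 0-1 gradient. Roughly this idea (the near/far numbers here are placeholders, not the node's actual defaults):

```python
# Rough sketch of what I understand the node to be doing: raw Kinect depth
# is in millimetres, so viewed directly as a 0-1 image everything clamps to
# white; remapping it between a near and far plane gives a usable gradient.
import numpy as np

depth_mm = np.random.randint(500, 4500, size=(424, 512)).astype(np.float32)

near_mm, far_mm = 500.0, 4500.0   # assumed working range, just placeholders
luma = np.clip((depth_mm - near_mm) / (far_mm - near_mm), 0.0, 1.0)

# luma is now a grayscale image you can feed into the Image generator
# as a displacement input instead of the flat white silhouette.
print(luma.min(), luma.max())
```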

