Is it possible to process the Kinect depth data on one machine and send it to Notch on another networked machine, and if so, can it greatly improve performance for real-time scenes?
Also - I wonder if it’s worth having some new forum categories, such as VP, Interactive, etc.?
You probably could, using NDI or some other tool (see the sketch below), but I think you’d just be adding extra latency into the loop, and the Kinect is already a bit laggy as it is.
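If you did want to try it anyway, here’s a rough sketch of what that sender side might look like - this assumes an Azure Kinect read via the pyk4a bindings and the ndi-python (NDIlib) bindings for NDI; the stream name and the 4 m depth clip are arbitrary choices for illustration, not anything Notch requires:

```python
# Sketch: publish Azure Kinect depth frames as an NDI video stream.
# Assumes: pip install pyk4a ndi-python numpy
import numpy as np
import NDIlib as ndi
from pyk4a import PyK4A

def main():
    if not ndi.initialize():
        raise RuntimeError('NDI runtime not found')

    # Create an NDI sender that the Notch machine can pick up by name.
    send_settings = ndi.SendCreate()
    send_settings.ndi_name = 'Kinect Depth'  # hypothetical stream name
    sender = ndi.send_create(send_settings)

    video_frame = ndi.VideoFrameV2()
    video_frame.FourCC = ndi.FOURCC_VIDEO_TYPE_BGRX

    k4a = PyK4A()
    k4a.start()
    try:
        while True:
            capture = k4a.get_capture()
            if capture.depth is None:
                continue
            depth = capture.depth  # uint16 array, millimetres

            # NDI video is 8-bit BGRX, so the 16-bit depth has to be
            # squeezed down - here a crude scale of 0-4000 mm to 0-255,
            # which costs precision. A real setup might pack the hi/lo
            # bytes into two channels instead.
            depth8 = (np.clip(depth, 0, 4000) / 4000 * 255).astype(np.uint8)
            bgrx = np.dstack([depth8, depth8, depth8,
                              np.full_like(depth8, 255)])

            video_frame.data = bgrx
            ndi.send_send_video_v2(sender, video_frame)
    finally:
        k4a.stop()
        ndi.send_destroy(sender)
        ndi.destroy()

if __name__ == '__main__':
    main()
```

Note that cramming 16-bit depth into 8-bit NDI video loses precision before Notch even sees the frame - another argument against this route.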
As for whether it would boost performance: probably not. The Kinect device itself does most of the processing, not the Notch machine, so offloading wouldn’t really help.
Finally, you’d only be passing the Kinect depth data - not the colour or skeleton data. If depth is all you need, all good, but if you need the others, it’s a non-starter.