Best approach for multi-cam virtual sets?

Hi,

Fairly new to Notch and, as usual, diving in at the deep end… I have been through the essentials course, watched many of the video tutorials, and picked apart some of the example scenes provided by Notch.

I’m interested in using Notch for virtual sets. At the moment, a lot of the streaming productions I’m working on involve 3-4 PTZ cameras and, the other day, I got the opportunity to test hooking up the FreeD tracking output of Panasonic UE150 PTZ cameras to Notch cameras (via TouchDesigner, to translate FreeD to OSC). That worked well, so now I’m wondering how best to manage multiple cameras in a scene.
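In case it’s useful context, the translation step is conceptually simple. Here’s a rough standalone Python sketch of what my TouchDesigner network does, assuming the standard 29-byte FreeD D1 packet and the python-osc library; the ports, OSC address patterns, and scale factors are just what I used in my test patch, so check them against your own rig:

```python
# FreeD (D1) -> OSC bridge sketch. Assumes the standard 29-byte D1 packet;
# ports and OSC address patterns below are placeholders from my test patch.
import socket
from pythonosc.udp_client import SimpleUDPClient

FREED_PORT = 1111                        # UDP port the UE150 streams FreeD to
OSC_HOST, OSC_PORT = "127.0.0.1", 9000   # wherever Notch is listening for OSC

def s24(b):
    """Decode a 24-bit big-endian signed integer."""
    v = int.from_bytes(b, "big")
    return v - (1 << 24) if v & (1 << 23) else v

def decode_d1(pkt):
    """Unpack a FreeD D1 packet; scale factors per the common FreeD spec.
    The trailing checksum byte (pkt[28]) is ignored in this sketch."""
    assert len(pkt) >= 29 and pkt[0] == 0xD1
    return {
        "id":    pkt[1],
        "pan":   s24(pkt[2:5])   / 32768.0,  # degrees
        "tilt":  s24(pkt[5:8])   / 32768.0,
        "roll":  s24(pkt[8:11])  / 32768.0,
        "x":     s24(pkt[11:14]) / 64.0,     # millimetres
        "y":     s24(pkt[14:17]) / 64.0,
        "z":     s24(pkt[17:20]) / 64.0,
        "zoom":  int.from_bytes(pkt[20:23], "big"),  # raw counts
        "focus": int.from_bytes(pkt[23:26], "big"),
    }

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", FREED_PORT))
osc = SimpleUDPClient(OSC_HOST, OSC_PORT)

while True:
    pkt, _ = sock.recvfrom(64)
    d = decode_d1(pkt)
    base = f"/cam{d['id']}"
    osc.send_message(base + "/pan_tilt_roll", [d["pan"], d["tilt"], d["roll"]])
    osc.send_message(base + "/position", [d["x"], d["y"], d["z"]])
    osc.send_message(base + "/zoom_focus", [float(d["zoom"]), float(d["focus"])])
```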

I’ve got 3 cameras in my test scene, and I’m using a Select Child node to switch between them based on an OSC input. That’s working fine but, of course, as soon as I started switching between cameras I realised that my Motion Blur node was smooshing between the cameras in an ugly way, since the post-fx nodes are applied to the entire scene. I tried using a single camera and switching the source of the tracking data to make it jump to different positions, but the results weren’t great either.
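(In case it’s relevant, the switch itself is just a single OSC int, roughly like this sketch; the "/camera/select" address is just what I mapped onto the Select Child node in my patch, not anything built into Notch:

```python
# Camera-switch message sketch (python-osc); host/port are placeholders.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # Notch machine's OSC input
client.send_message("/camera/select", 2)      # index driving the Select Child node
```
)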

What approaches should I be looking at for the different cameras? Should each camera be on its own render layer, so that motion blur and other post-fx nodes don’t conflict? Or would I be better off using timeline layers, each with its own camera? And what are the performance trade-offs between the different approaches?

In general, I’m wondering what best practice is for a multi-cam setup. Should I be looking at sending individual outputs from each Notch camera to the vision mixer, for example, so that the director can see the output from each camera (assuming the Notch system could cope)? Or should we be sending the master camera output to Notch?

I’m grateful for any pointers on what works and the pitfalls to avoid!

Thanks,
Matthew

P.S. Just found the ‘Notch Quick Tip’ videos the other day - they’re really useful!