Hello. I am working on a scene with multiple UV cameras. Everything appears to be mapping correctly; however, when viewed through the UV cameras the Fields system becomes smeared, even though the particle system I have still appears correct. UV camera set-ups aren’t my strong point, so any help or tips would be greatly appreciated.
That does sound odd - are you using Render To Object Surfaces? Got a .dfx to look at?
Thanks Ryan. I have a Render To Texture feeding into a material. I’ll work on getting the .dfx tidied up and uploaded this afternoon.
Hey Ryan. Finally got around to getting a test scene built. Viewing through camera 1 and camera 2 everything looks correct, but through 3, 4, and 5 everything is smeared.
Fields UV Camera Test.dfx (68.7 KB)
That was tricky to figure out! Basically, it looks like the Fields don’t use the correct camera when rendered in a Render To Texture. The easy solution here is to take the content in the Render To Texture and use it in a layer precomp instead - just copy the view camera between both layers.
Champ! Thanks Ryan, I’ll test this out. Another question I have about using the Render To Texture node: should the Render To Texture resolution be set to the combined resolution of all of the surfaces it’s being sent to? And if the surfaces are of different resolutions, is it recommended to use multiple Render To Texture nodes?
So I think you need to think about what the Render To Texture node is doing - RTT renders an image from a perspective set by the camera, and the mapping node maps that image onto the surface. RTT doesn’t know which parts of the image are and aren’t being used in the mapping, so there can be a lot of empty space that you will process but that won’t end up in the final result.
In your scene, I would focus the camera’s view to fit the exact dimensions of your surfaces’ width/height, and from that set the resolution you would need.
As for using multiple RTT nodes, you can, but I’m not sure the performance improvement will be too noticeable.
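To make the resolution advice above concrete, here is a minimal sketch of the arithmetic: fit the camera view to the surfaces’ combined bounding size, then derive the RTT resolution from a target texel density. The surface sizes and density below are hypothetical examples, not values from the scene in this thread.

```python
# Illustrative arithmetic only - surface sizes and texel density
# are hypothetical, not taken from the uploaded scene.
surfaces = [(3.0, 2.0), (1.5, 2.0)]  # (width, height) in scene units
px_per_unit = 512                    # target texel density

# Assume the camera view frames the surfaces laid side by side,
# so the RTT needs to cover their combined width and tallest height.
total_w = sum(w for w, _ in surfaces)
max_h = max(h for _, h in surfaces)

rtt_w = int(total_w * px_per_unit)  # 4.5 units * 512 = 2304 px
rtt_h = int(max_h * px_per_unit)    # 2.0 units * 512 = 1024 px

print(rtt_w, rtt_h)  # 2304 1024
```

The point is that any camera framing wider than the surfaces themselves is wasted resolution: those pixels are rendered but never sampled by the mapping.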
Thanks Ryan. The problem that I was having using multiple cameras is that I have multiple vanishing points, so when I add any depth to the scene it starts to look strange.
Is it possible to use this with multiple UV cameras being fed into a Multi Camera? I’m bringing in a layer precomp to use in the material for the surfaces, but when I plug everything in I’m just getting a black screen, even though I can see each individual UV through its camera.
The Multicam only supports normal cameras at present, so you’ll need to make sure each mesh’s UVs don’t overlap.
Thank you Ryan.
Nice one. Cheers, Ryan.
To elaborate a little further - if you look at the 3D Object node, you should see an “Output UVs” property section - this will allow you to transform the object’s UVs before they are sent to the UV camera. So useful, in fact, it’s going in my next Quick Tip.
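Notch’s Output UVs properties aren’t scripted, but the math behind a non-overlapping layout can be sketched: scale each mesh’s 0–1 UV range down and offset it into its own tile, which is equivalent to what a per-object UV scale/offset achieves. The function name and horizontal tiling scheme below are my own, for illustration only.

```python
# Sketch of non-overlapping UVs: each mesh gets its own horizontal
# tile of the shared 0-1 UV space. An Output UVs scale/offset on the
# 3D Object node applies the same kind of transform.
def tile_uv(uv, tile_index, num_tiles):
    """Map a (u, v) in [0, 1] into horizontal tile `tile_index`."""
    u, v = uv
    return (u / num_tiles + tile_index / num_tiles, v)

# Two meshes sharing one texture without overlapping:
print(tile_uv((0.5, 0.5), 0, 2))  # mesh 0 -> (0.25, 0.5)
print(tile_uv((0.5, 0.5), 1, 2))  # mesh 1 -> (0.75, 0.5)
```

With each mesh confined to its own tile, all the UV cameras can sample the same texture through a single layer precomp without their regions colliding.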