Render to Texture/UV camera PIXELATED

Hi! I need help. Whether I use the Render to Texture method or a UV camera, the content comes out pixelated. What am I doing wrong? Even when I set a huge resolution…

This is a simple file; please check the cameras:
Setup2.dfx (3.4 MB)

Hey there,

We can’t replicate the pixelation you’re encountering with the .dfx you’ve provided. Could you please send over a .dfx that includes the image you sent in the post?

[image]

Thanks,
Jack

Hi! Thanks for following up.
I've uploaded an example with text going through a Render to Texture node onto some cubes.
I'd like to render that text from the top cameras or the UV camera at a decent resolution and without pixelation, but I can't, even when I set a 6000x3000 resolution in the Render to Texture node.
Please have a look
Test.dfx (3.5 MB)

Hey there,

Your content only takes up a small part of the area you're trying to render.

If you want to fix the low resolution, either focus your Render to Texture camera on the area you're interested in, or keep increasing the resolution in the RTT node.
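For a rough sense of the trade-off between those two options, here's a quick back-of-the-envelope sketch in Python (the coverage fractions are made-up illustration values, not measured from your scene):

```python
def effective_content_resolution(rtt_res, coverage):
    """Pixels that actually land on the content, given the RTT resolution and
    the fraction of the frame (per axis) that the content covers."""
    return (int(rtt_res[0] * coverage[0]), int(rtt_res[1] * coverage[1]))

def rtt_resolution_needed(target_res, coverage):
    """RTT resolution required so the content region alone reaches target_res."""
    return (int(target_res[0] / coverage[0]), int(target_res[1] / coverage[1]))

# Content covering ~10% of the frame width and ~3.3% of its height:
print(effective_content_resolution((6000, 3000), (0.10, 0.033)))  # (600, 99)

# RTT size needed to get a true 1920x1080 on the content without re-framing the camera:
print(rtt_resolution_needed((1920, 1080), (0.10, 0.033)))          # (19200, 32727)
```

The further the content is from filling the frame, the larger the RTT has to be to compensate.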

Could you let us know what your objective is with this scene? Are you trying to do perspective mapping?

I want to render from the top cameras (the top camera in the TestUV text) or the UV camera, not from that camera. Please check the other cameras, not the one you posted the image from.
That's where I'm getting a pixelated render.

Hey there,

As Jack said, the output is pixelated because you are rendering a larger area from the POI camera, then taking a small area of that render and mapping it onto a surface. It doesn't matter if you change the camera position after the RTT render; it's still using the same render as its source.

Let me explain a bit by breaking down your scene.


Here I’ve just moved things around a bit to get a clearer view of what’s going on.

The first thing that happens in your scene is the render to texture. That node takes all the nodes that are children of it and renders them to an internal target, so the result can be used by other nodes. In this scene, the output of that node is this:


Notice the area the text takes up in that image: roughly 90% of the image is empty. You might have a resolution of 6000x3000 in the RTT node, but the actual text only occupies about 600x100 pixels.

You are then taking this entire render and, again, mapping only a small area of it onto a surface.


Once again, a lot of empty space. A large portion of the texture this Render to Texture node generates isn't being used, but it still takes up VRAM and resolution.

When you then view this object from the camera above it (camera line 2), picture taking a 600x100 image, rotating it, and resizing it up to fill 6000x3000. It isn't really surprising that the result is pixelated.
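To put rough numbers on that (using the approximate figures from above, so treat them as illustrative rather than exact):

```python
# Back-of-the-envelope upscale factor, using the approximate numbers from this thread.
source = (600, 100)    # pixels the text actually occupies in the RTT output
target = (6000, 3000)  # area it has to fill when viewed from the top camera

scale_x = target[0] / source[0]  # 10.0
scale_y = target[1] / source[1]  # 30.0
print(scale_x, scale_y)  # each source pixel is stretched into roughly a 10x30 block
```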


But that's just what's going on; how do we solve it? That will depend on your overall effect and the outcome you're going for. Based on the setup I'm looking at, I'm guessing you want to perspective map those surfaces, then output one or all of them from a single output feed?

If I were to do this, the first thing I would do is make a separate RTT for each plane and focus its camera specifically on that area. This is as easy as changing the POI camera rotation and FOV so it covers only the area you actually need to render.
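For a rough sense of how narrow that FOV can get once you frame just one plane, here's a simple pinhole-camera sketch (the plane size and camera distance are hypothetical values, not taken from your scene):

```python
import math

def fov_to_frame(content_size, camera_distance):
    """FOV in degrees needed for a camera at camera_distance to exactly frame
    content of the given size when looking straight at it (pinhole model)."""
    return math.degrees(2 * math.atan((content_size / 2) / camera_distance))

# A 2 m wide text plane seen from 10 m away only needs ~11.4 degrees of FOV,
# so every pixel of the RTT lands on content instead of empty space.
print(fov_to_frame(2.0, 10.0))
```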
Before: [image]

After: [image]
As for combining the outputs, there are a couple of different ways to do this, but I think the obvious one is to connect all the meshes to a UV camera and adjust the UV output to control where they appear on the UV cam.
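Just to illustrate the arithmetic behind that layout step (the actual adjustment happens on the nodes themselves; this is only a sketch of an even three-way horizontal split of UV space):

```python
def side_by_side_uv_regions(n_planes):
    """(u_offset, u_scale) per plane for an even horizontal split of 0-1 UV space."""
    scale = 1.0 / n_planes
    return [(i * scale, scale) for i in range(n_planes)]

print(side_by_side_uv_regions(3))
# [(0.0, 0.33...), (0.33..., 0.33...), (0.66..., 0.33...)]
```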

– Ryan


Hi @ryan.barth, thanks for your answer! Very clear and direct; now I understand what @jack-hale was trying to say.
Correct, I want to render those textured planes (each one separately, and from the POI camera).
I understand the problem is the RTT camera, but it's a tricky fix in this scene; I mean, how am I going to fit my RTT camera's pixels to my area of interest?
Anyhow, much appreciated guys, thanks for your help.