Hi, Notch Team.
I’m trying to render mesh vertices as sprites using Clone to Mesh + Shape 3D → Circle (or Image Plane) + a Target Effector with a camera set as its target. The trouble is that this setup can only align the clones to one camera at a time: whenever I switch to another camera, the clones stay aligned to the one set as their target, which prevents me from shooting the object with multiple cameras (not to mention the Orbit Camera I use for blocking out shots). I thought it would be cool to have an option that keeps the clones aligned to whichever camera is currently active. What do you think?
Cheers,
-Andy
Hi Andrei, out of interest, why not use particles for this? The Point Renderer is designed to make sprites face the camera…
Hi, Matt.
Particles emitted from a Mesh Emitter in vertex mode can’t steal color from the source mesh’s color texture. Besides, the Mesh Emitter doesn’t have the wonderful ‘Spread Over UV Map’ mode available in the Clone to Mesh node. This means particles currently can’t give me an ordered grid of specifically colored sprites spread over a mesh’s UVs. The look I’m after is much like this: https://i.ytimg.com/vi/TqV7D4_1-fo/maxresdefault.jpg but I want the particles the continents are made of to inherit the colors of the texture mapped over the planet.
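In case a sketch helps explain what I mean: the per-vertex coloring I’m after boils down to something like this (toy Python outside Notch, none of this is actual Notch API; the nearest-neighbor lookup is just illustrative):

```python
# Toy sketch of the effect: each sprite placed at a mesh vertex
# samples the mesh's color texture at that vertex's UV coordinate.

def sample_texture(texture, u, v):
    """Nearest-neighbor sample of a texture stored as a 2D list of
    RGB tuples, with (u, v) in [0, 1] and v = 0 at the top row."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]

def color_sprites(vertices_uv, texture):
    """vertices_uv: one (u, v) pair per mesh vertex.
    Returns one sampled RGB color per sprite/vertex."""
    return [sample_texture(texture, u, v) for u, v in vertices_uv]
```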
-Andy
It’s good to get an idea of what you’re looking to make. I can see a “Target to current camera” option would be useful in that case.
The only major limitation is that you can have multiple cameras active simultaneously (via the Multi-Camera node), or a camera can have multiple positions simultaneously (the two eyes of a VR headset), while clones can only exist in one state for the entire frame. That means the “current camera” could only be the first of the multi-cameras - the alignment couldn’t be done separately for each one.
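To be concrete about what such an option would do conceptually: each frame, the effector would read whichever camera is rendering and build a per-clone look-at rotation toward it. Roughly like this toy sketch (plain Python, not our actual implementation):

```python
# Sketch of a "Target to current camera" alignment: each frame,
# every clone gets a rotation turning its +Z axis toward whichever
# camera is active right now, instead of a fixed target node.
import math

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def look_at_camera(clone_pos, cam_pos, up=(0.0, 1.0, 0.0)):
    """Return a 3x3 rotation (rows = right, up, forward) that turns
    a clone at clone_pos to face the camera at cam_pos."""
    forward = normalize(tuple(c - p for c, p in zip(cam_pos, clone_pos)))
    right = normalize(cross(up, forward))
    new_up = cross(forward, right)
    return (right, new_up, forward)

def align_clones(clone_positions, active_camera_pos):
    # Re-evaluated per frame, using whichever camera is active now.
    return [look_at_camera(p, active_camera_pos) for p in clone_positions]
```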
Another possibility would be to let you link multiple nodes to the Target Effector’s target node input and use whichever one is alive at a given time (according to the time bar).
For the Target Effector, I definitely favor the “Target to current camera” option you mentioned. Along with that, there’s nothing stopping you from adding a ‘Multiple camera nodes’ mode (in which several cameras can be attached to the target node input) in the future. The limitations are quite obvious and acceptable (at least for me).
On the other hand, it might be easier to adapt some of the clone distribution and source-color-sampling methods for the Mesh Emitter’s particles. Another possibility might be to add a Sprite shape to the Shape 3D node.
What do you think?