360/VR Video Performance


My goal is to set up a project that alternates between realtime graphics and playback of stereoscopic 360° video (4096×4096px @ 60fps). So far I have only tested with a still image (Notch trial) using the Skybox node as described in the manual. It works well as long as the resolution is set to 1024 max. When I change it to 2048 or higher, which I assume refers to 4096×2048px per eye in my case and which would be the setting to get the maximum sharpness from my source material, it stutters tremendously when I turn my head. There is nothing in the scene but a VR Headset Camera and a Skybox node.
On my machine (GTX 980) I can also play the video version at full resolution without problems (e.g. with DeoVR, or the Whirligig player using Media Foundation), so that should not be the problem.
From reading the manual I assume it could have something to do with the Skybox node transforming the material into a cube map, which costs performance?
One way that I know from other packages is to create two spheres, one holding the 360° image for the left eye and the other getting the right eye assigned. Each sphere is then only visible to one eye in the VR headset. Is there a way for me to do this in Notch? And do you think it will help performance?
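To illustrate what I mean (this is not Notch-specific, just the general idea in Python): for an over-under stereo frame, the per-eye mapping boils down to offsetting each sphere's V coordinate so every eye samples its own half of the frame. The layout assumption (left eye on top) is mine:

```python
# Sketch: per-eye UV lookup for an over-under (top/bottom) stereoscopic
# equirectangular frame. Purely illustrative.

def stereo_uv(u, v, eye):
    """Map a sphere UV (0..1) to the source texture for the given eye.

    Assumes the left eye occupies the top half of the frame and the
    right eye the bottom half (common over-under layout).
    """
    if eye == "left":
        return (u, v * 0.5)          # top half: v in [0, 0.5]
    elif eye == "right":
        return (u, 0.5 + v * 0.5)    # bottom half: v in [0.5, 1]
    raise ValueError("eye must be 'left' or 'right'")

# Each eye's sphere samples its own half of the 4096x4096 frame,
# so each eye effectively sees a 4096x2048 equirectangular image.
print(stereo_uv(0.25, 1.0, "left"))   # (0.25, 0.5)
print(stereo_uv(0.25, 0.0, "right"))  # (0.25, 0.5)
```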

Thanks a lot,


The Skybox resolution is per face. So a resolution of 1024 per face is fine for 4096px-wide images.
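For reference, the usual rule of thumb relating equirectangular width to cube-map face size (a sketch, assuming the common width/4 heuristic):

```python
# Rule-of-thumb check: a cube face spans 90 degrees horizontally, while the
# full equirectangular image spans 360, so a face needs roughly width/4
# pixels to match the source's horizontal pixel density.

def face_size_for_equirect(width):
    return width // 4

print(face_size_for_equirect(4096))  # 1024 per face for a 4096-wide source
print(face_size_for_equirect(8192))  # 2048 per face for 8K material
```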


Good to know, thanks!
Still, I'm getting a bit more sharpness out of my material, especially in detailed areas, when setting it to 2048. I guess that's because the cube map transformation benefits from the extra resolution.

But regardless, would the workflow with two spheres and standard UV mapping onto them maybe be more performant? Say, when working with 8K material at some point in the future.
Or do you think that wouldn't make a difference?
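As a rough back-of-the-envelope on why 8K worries me (my own assumption of uncompressed RGBA8 frames, so an upper bound rather than what a real video decoder actually uploads):

```python
# Back-of-the-envelope raw texture bandwidth for stereoscopic 360 video.
# Assumes uncompressed RGBA8 (4 bytes/pixel); real pipelines upload
# compressed or YUV data, so treat this as an upper bound.

def raw_bandwidth_gb_per_s(width, height, fps, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * fps / 1e9

# Current material: 4096x4096 @ 60 fps (both eyes in one frame)
print(round(raw_bandwidth_gb_per_s(4096, 4096, 60), 1))  # ~4.0 GB/s
# Hypothetical 8K over-under: 8192x8192 @ 60 fps -> 4x the data
print(round(raw_bandwidth_gb_per_s(8192, 8192, 60), 1))  # ~16.1 GB/s
```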