Tuesday 26 January 2016

Stereoscopic rendering (part 18)

Ok, I decided to take a little fun sidestep. Those who've been following my blog for a while know I've played around with this before. Back then it was just a bit of fun, but a month or two ago Ian Munsie dropped by our little game developers meetup at the North Sydney TAFE and showed off the fun he was having with stereoscopic rendering. He brought in his Alienware laptop, which has a 3D capable screen using active shutter glasses, and demoed a copy of Abe's Oddysee that had been enhanced for stereoscopic rendering. Needless to say it impressed me greatly, and I firmly believe the industry gave up on stereoscopic 3D games way too early.

As I have a 3D TV I've played a number of titles on my PS3 and Xbox 360 that allow stereoscopic rendering, but to cope with the additional work these often drop down to a 720p resolution; it does suffer, but is still a lot of fun. The demo Ian gave ran at the full resolution of his laptop (full 1080p I'm guessing) and simply looked rock solid.

Unfortunately my Alienware laptop does not have stereoscopic support (though its NVidia chip may be able to output a 3D signal over HDMI, which I hope to test later on), nor does my trusty MacBook.
My Panasonic 3D TV, however, does have the capability to use a split screen signal.

It is incredibly easy to convert our little example to allow stereoscopic rendering. It is also incredibly easy to do it wrong. Below are two links (the NVidia document and the OpenGL document I refer to throughout this writeup) which explain this stuff way better than I possibly could:

There are basically 3 things that we need to know about to do things right.
The first is that it is not as simple as moving the camera position slightly. Each eye still looks straight ahead, in a parallel direction, but the frustum we create for it is slightly skewed (asymmetric). Have a read through the NVidia document as it explains this far better than I could possibly word it and, more importantly, it has pictures :)

The second is the distance between the eyes, the "intraocular distance". Every person's eyes are spaced slightly differently, and getting this value wrong can create a very uncomfortable experience or even ruin the effect completely. The NVidia document mentions this distance is on average 6.5cm, which is the value I'm using; it seems to work well on my TV screen, but it is likely something you'd want to make configurable. Also, it's not a given that a unit in your 3D world is actually equal to 1cm, so play around with what works.

The last thing that we're introducing is the "screen projection plane" or convergence distance. This is the distance from the eye(s) at which both renderings overlap: something rendered at exactly that distance ends up in the same place for both eyes. Anything closer to the viewer will seem to pop out of the screen, anything beyond it will seem to be behind the screen.
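To get a feel for the numbers, here is the frustum shift worked out with the values we'll end up using later in this post (an IOD of 6.5, a near plane at 1.0 and a convergence distance of 200.0); the formula itself shows up in the code further down:

  frustumshift = (IOD / 2) * zNear / convergence
               = (6.5 / 2) * 1.0 / 200.0
               = 0.01625

So each eye's frustum is only skewed sideways by a tiny amount at the near plane, while the scene itself is shifted by a full IOD / 2 = 3.25 units per eye.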

Obviously things popping out of the screen is the bit that gives the 3D thrill, but it is also the bit that can confuse the eyes. The 3D effect is ruined if part of an object falls outside of our viewing frustum, because a different part of the object is then cut off for each eye.
For games it is probably good practice to keep the convergence distance relatively close, so that everything appears to be behind the screen.

The biggest drawback of stereoscopic rendering is that we need to do everything twice: the scene needs to be rendered once for each eye. There is however a lot that can be reused, like shadow maps, occlusion checks (with the footnote that you might end up rendering something for both eyes that is only visible to one, though an object straddling the edge of the frustum like that should already be a red flag), calculating model matrices affected by animation, etc.
We have none of those in our example yet, but basically we would do that work as part of rendering our left eye and then reuse the information for our right eye, or do it as part of our engineUpdate call, which runs before our rendering happens.
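To make that concrete, here's a rough sketch of how that split of work could look in our main loop (I'm glossing over engineUpdate's exact parameters here; this is just the shape of it, using the engineRender call we'll see below):

  // do all shared per-frame work once: animation, shadow maps,
  // occlusion checks, model matrices, etc.
  engineUpdate(delta);

  // then render that same frame state once for each eye, reusing the results
  engineRender(frameWidth, frameHeight, ratio, 1); // left eye
  engineRender(frameWidth, frameHeight, ratio, 2); // right eye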

A new matrix function in math3d.h


First things first: we need a new function for calculating a projection matrix that is adjusted for each eye. I've gone down the route of basing our view matrix on our mono view and embedding the adjustment for each eye fully in the projection matrix. There is an argument for adding the needed translation to the view matrix instead, and we may end up doing so, but for now this seems to make sense.
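For reference, a minimal sketch of what that alternative could look like; mat4Identity and mat4Multiply are hypothetical helpers here, and I'm assuming our matrices struct also holds our mono view matrix:

  // hypothetical alternative: keep the asymmetric frustum in the projection
  // but apply the eye offset on top of our mono view matrix instead; the
  // offset must be applied in eye space, i.e. after the mono view transform
  mat4 eyeOffset, stereoView;
  vec3 tmpVector;
  MATH3D_FLOAT modeltranslation = (pMode == 1) ? pIOD / 2.0 : -pIOD / 2.0;

  mat4Identity(&eyeOffset); // hypothetical helper
  mat4Translate(&eyeOffset, vec3Set(&tmpVector, modeltranslation, 0.0, 0.0));
  mat4Multiply(&stereoView, &eyeOffset, &matrices.view); // hypothetical helper: stereoView = eyeOffset * view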

Anyway, our new function is pretty much a refactor of what's presented in the OpenGL document linked earlier on in this writeup:
// Applies a 3D projection matrix for stereoscopic rendering
// Same parameters as mat4Projection but with additional:
// - pIOD = intraocular distance, distance between the two eyes (on average 6.5cm)
// - pProjPlane = projection plane, distance from eye at which our frustums intersect (also known as convergence)
// - pMode = 0 => center eye, 1 => left eye, 2 => right eye (so 0 results in same projection as mat4Projection)
mat4* mat4Stereo(mat4* pMatrix, MATH3D_FLOAT pFOV, MATH3D_FLOAT pAspect, MATH3D_FLOAT pZNear, MATH3D_FLOAT pZFar, float pIOD, float pProjPlane, int pMode) {
  MATH3D_FLOAT left, right, modeltranslation, ymax, xmax, frustumshift;
  vec3 tmpVector;

  ymax = pZNear * tan(pFOV * PI / 360.0f);
  xmax = ymax * pAspect;
  frustumshift = (pIOD/2)*pZNear/pProjPlane;

  switch (pMode) {
    case 1: { // left eye
      left = -xmax + frustumshift;
      right = xmax + frustumshift;
      modeltranslation = pIOD / 2.0;
    }; break;
    case 2: { // right eye
      left = -xmax - frustumshift;
      right = xmax - frustumshift;
      modeltranslation = -pIOD / 2.0;
    }; break;
    default: {
      left = -xmax;
      right = xmax;
      modeltranslation = 0.0;
    }; break;
  };
  
  mat4Frustum(pMatrix, left, right, -ymax, ymax, pZNear, pZFar);
  mat4Translate(pMatrix, vec3Set(&tmpVector, modeltranslation, 0.0, 0.0));
  
  return pMatrix;
};
Note that if you specify pMode as 0, our IOD and projection plane have no effect and the function behaves as if we're calling mat4Projection; I'm using that fact later on in the implementation.
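As a quick illustration, calling it for each eye with the values we'll end up using further down would look something like this:

  mat4 projection;

  // left eye
  mat4Stereo(&projection, 45.0, ratio, 1.0, 10000.0, 6.5, 200.0, 1);
  // ... render our scene for the left eye ...

  // right eye
  mat4Stereo(&projection, 45.0, ratio, 1.0, 10000.0, 6.5, 200.0, 2);
  // ... render our scene for the right eye ...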

Changes to main.c


As we may extend this later on, or even completely replace our main container with something else, I've decided to put the logic for whether we render stereoscopically inside our main.c file.
For now there is no switching between modes, we simply have a variable called stereo_mode with the values:
0 = mono rendering
1 = stereo rendering in split screen
2 = stereo rendering with full left/right back buffers (untested)

For option 2 we need to give GLFW another window hint to initialize our stereo buffers.
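Something along these lines before our window is created should do the trick; note that GLFW_STEREO is a hard constraint, so window creation will simply fail on hardware or drivers that don't support quad buffered stereo (the variable names here are assumptions, use whatever your existing setup code uses):

  // ask GLFW for stereo (quad buffered) back buffers
  glfwWindowHint(GLFW_STEREO, GL_TRUE);
  window = glfwCreateWindow(windowWidth, windowHeight, "GLFW Tutorial", NULL, NULL);

The main render loop now looks like this: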
      ...
      ratio = (float) frameWidth / (float) frameHeight;
      
      switch (stereo_mode) {
        case 1: {
          // clear our viewport
          glClearColor(0.1, 0.1, 0.1, 1.0);
          glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);      

          // render split screen 3D
          int halfWidth = frameWidth / 2;

          // set our viewport
          glViewport(0, 0, halfWidth, frameHeight);
      
          // and render left
          engineRender(halfWidth, frameHeight, ratio, 1);          
          
          // set our viewport
          glViewport(halfWidth, 0, halfWidth, frameHeight);
      
          // and render right
          engineRender(halfWidth, frameHeight, ratio, 2);          
        }; break;
        case 2: {
          // render hardware left/right buffers
          
          // clear our viewport
          glDrawBuffer(GL_BACK);
          glClearColor(0.1, 0.1, 0.1, 1.0);
          glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);                

          // set our viewport
          glViewport(0, 0, frameWidth, frameHeight);
          
          // render left
          glDrawBuffer(GL_BACK_LEFT);
          engineRender(frameWidth, frameHeight, ratio, 1);
               
          // render right
          glDrawBuffer(GL_BACK_RIGHT);
          engineRender(frameWidth, frameHeight, ratio, 2);          
        }; break;
        default: {
          // render normal...

          // clear our viewport
          glClearColor(0.1, 0.1, 0.1, 1.0);
          glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);      
          
          // set our viewport
          glViewport(0, 0, frameWidth, frameHeight);
      
          // and render
          engineRender(frameWidth, frameHeight, ratio, 0);          
        }; break;
      };
      
      // swap our buffers around so the user sees our new frame
      glfwSwapBuffers(window);
      glfwPollEvents();

      ...
Note that I've moved calculating the screen ratio outside of our engineRender function, mainly for our split screen option: we now only have half our horizontal space but still need our normal aspect ratio to render things properly.

Note that glClear ignores our glViewport settings, which is why we clear the whole screen up front in our split screen rendering. With our left/right buffers we select GL_BACK first, so clearing updates the full back buffer for both eyes.
Next we either set our viewport to the correct half of the screen or select the correct back buffer for each eye, and call engineRender for each eye.
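If we ever did want to clear each half separately, glClear does respect the scissor box, so a sketch like this would work (this is not in the current code):

  // restrict clearing to the left half of the screen
  glEnable(GL_SCISSOR_TEST);
  glScissor(0, 0, halfWidth, frameHeight);
  glClearColor(0.1, 0.1, 0.1, 1.0);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  glDisable(GL_SCISSOR_TEST);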

Changes to engine.c


Besides using our pRatio parameter instead of calculating it, we simply switch our projection matrix call over to our new mat4Stereo function:
  ...
  // distance between eyes is on average 6.5 cm, this should be configurable
  mat4Stereo(&matrices.projection, 45.0, pRatio, 1.0, 10000.0, 6.5, 200.0, pMode);
  ...
As mentioned, we've hard coded our IOD to 6.5. I've also set our convergence distance to 200.0; as our TIE bomber is rendered roughly 550.0 away from our eye, that seemed a pretty sound choice.

One unexpected little thing: when I moved the camera around so the little earth globe was positioned right in front of it, it actually popped out of the screen really nicely :)

I have not done anything to the projection matrix for our FPS counter. Obviously, when rendering stereoscopically more care needs to be taken with this, as a badly positioned overlay can be very intrusive. You can either start rendering the counter with a proper Z position, or make sure everything else is rendered behind the convergence distance, since rendering 2D the way it is currently implemented makes the overlay appear exactly at that distance.
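A minimal sketch of the first option; renderFpsCounterAt is a made-up helper here, the point is simply that the counter gets a real depth beyond our convergence distance of 200.0:

  // use the same per-eye stereo projection as the rest of the scene
  mat4Stereo(&matrices.projection, 45.0, pRatio, 1.0, 10000.0, 6.5, 200.0, pMode);

  // place the counter behind the screen plane (hypothetical helper)
  renderFpsCounterAt(0.0, 0.0, -300.0);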

Below is a stereo rendered video of the end result. It actually worked great playing it on YouTube on my 3D TV, and I even managed to play it on my phone with Google Cardboard.


Now I wish I had an Oculus Rift lying around to hook up to my laptop :)

Download the source code here

So where from here?


So obviously this is a very basic stereoscopic setup. There are a few improvements needed, such as making a few things configurable, and when rendering for an HMD we'd need to add adjusting the image for lens distortion.

What's next


As I mentioned in my previous post, I've got some ground to cover at this end in creating some 3D content before I can move on with my platformer. I'll probably throw up a few more of these intermediate posts, though coming out of the summer holidays I'm not sure how much spare time I'll have in the coming weeks. We'll see :)


