Thursday, 3 March 2016

Adding a height field #2 (part 23)

Okay, let's continue with our height field. In this post I'll look at using OpenGL's tessellation shaders to automatically vary the level of detail of our height field.

Now this is OpenGL 4.0 functionality that is only supported on relatively new hardware, so we'll keep our shader from the last session as a backup in case support is lacking.

But before we dive into the subject matter, as always, we have some housekeeping to do.

Doing a backflip...


Unfortunately we start with a backflip. I hate doing these, but in my previous post I made a dumb mistake. I decided to take our skybox out of our scene, make our height field a separate mesh object, and render both separately after our scene had rendered. The idea was to force these to be the last things rendered, as they fill large parts of the screen and more often than not are drawn behind other objects.

What I forgot is that doing so also means we're drawing them after we've drawn any transparent objects. That would be pretty disastrous in the long run.

So both are added back to our scene and rendered as part of it. Once we start optimizing our render loop we'll have to keep in mind that we want to render these at the right moment.


Isolating a few things


If you look at the source you'll see I also made two small but significant changes to two headers.
The first is that I've renamed my errorlog.h header to system.h and added the code for loading a file into a buffer there. The idea is to use this library to gather support functions that I may want to swap out easily when compiling for other platforms.

Similarly I've added a little helper header called incgl.h which contains our includes for GLEW and GLFW. Again the same idea: I want a file I can easily change to swap out our framework without having to redo my rendering libraries. For instance, I may leave GLEW and GLFW behind when compiling for iOS and instead include iOS's OpenGL headers.

That is music for the future though.

Enhancing the shader library


With the introduction of three new shaders (yes, three) it was also long overdue to enhance our shader library. This library now works along the same lines as a number of the others: we no longer use the shaderStdInfo struct directly but always work with pointers obtained by calling the newShader() function. We're using a retain count, so we also have shaderRetain() and shaderRelease() functions.

I've renamed the structure to shaderInfo. I wanted to simply call it shader but alas, that's such a common word it's already used in one of the 3rd party support libraries, and since we're sticking with plain vanilla C, no namespacing.... rats.
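To give you a rough idea of what that looks like, a reference counted shaderInfo could be structured along these lines (a sketch only; apart from the retain count and the function names, the exact fields are my assumption, not necessarily what's in the actual source):
typedef struct shaderInfo {
  char    name[50];        // name of this shader (field layout is illustrative)
  GLuint  program;         // our linked OpenGL shader program
  int     retainCount;     // number of owners, we free once this drops to zero
} shaderInfo;

void shaderRetain(shaderInfo * pShader) {
  if (pShader != NULL) {
    pShader->retainCount++;
  };
};

void shaderRelease(shaderInfo * pShader) {
  if (pShader != NULL) {
    pShader->retainCount--;
    if (pShader->retainCount <= 0) {
      glDeleteProgram(pShader->program);
      free(pShader);
    };
  };
};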

I'm still using my shaderCompile and shaderLink functions as they were before but am now calling them from our newShader function. I've also added a shaderLoad function that loads a given file and calls shaderCompile, making it a lot easier to load shaders.

In the end, in our engine we can simply call newShader, give the shader a name and provide the file names of the five individual shader stages that make up a shader program. These are:
  • Our vertex shader
  • Our tessellation control shader
  • Our tessellation evaluation shader
  • Our geometry shader
  • Our fragment shader
Don't worry, we'll look into them in more detail later.

To compile our five current shaders, our load_shaders() function now becomes very simple:
void load_shaders() {
  // init some paths
  #ifdef __APPLE__
    shaderSetPath("Shaders/");
  #else
    shaderSetPath("Resources\\Shaders\\");
  #endif

  skyboxShader = newShader("skybox", "skybox.vs", NULL, NULL, NULL, "skybox.fs");
  hmapShader = newShader("hmap", "hmap.vs", NULL, NULL, NULL, "hmap.fs");
  colorShader = newShader("flatcolor", "standard.vs", NULL, NULL, NULL, "flatcolor.fs");
  texturedShader = newShader("textured", "standard.vs", NULL, NULL, NULL, "textured.fs");
  reflectShader = newShader("reflect", "standard.vs", NULL, NULL, NULL, "reflect.fs");
};
If anything goes wrong during compilation of a shader we'll deal with the error and a NULL pointer is returned. As we specifically check for NULLs wherever we use our shaders, we're pretty safe.
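Such a check is little more than a guard along these lines before we use a shader (a sketch; the program member name is illustrative):
  if (hmapShader != NULL) {
    glUseProgram(hmapShader->program);
    // ... set our uniforms and render ...
  };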

We'll get back to our shaders in a minute but first....

Quads!!


When we did our height field in our previous post we created a mesh built out of triangles, just like all our others. OpenGL only really renders points, lines and triangles, so for solid geometry everything ends up as triangles. Very simple.

For tessellation, triangles aren't really the best geometry; quads (4 sided polygons) are much better suited. Tessellation, just in case you haven't realized yet, is a process where we take a polygon and subdivide it into smaller polygons so we can increase our detail. The squares in our height field, before we apply our height, are 1000.0 x 1000.0 units (made up of 2 triangles each). When such a square is far enough from the camera that's a fine size, but as it gets closer to our camera it becomes an awfully big square. In our previous example we could see that our terrain got very blocky. Here is a wireframe that shows the effect more clearly:

We could use much smaller squares for our height field, which would work really well close to our camera, but it would greatly increase our polygon count, and polygons further away from our camera would soon be smaller than a pixel, which is incredibly inefficient.

I suggested in my previous post that you could change the mesh we're creating to bake in additional detail nearer to the camera, but this static approach is ultimately very limited (though still a worthwhile enhancement).

It would be great if OpenGL could add in the required detail for us and, tada, it can. That is exactly what the tessellation shaders do. Shaders, plural, because there are two. The first, our control shader, takes a quad as input and decides how much detail we wish to add. The second, our evaluation shader, is pretty much a vertex shader that works on the newly added vertices. While we input quads, the output of our tessellation shaders is triangles, which are subsequently rendered just like before.

While we're introducing a pretty linear implementation of a tessellation shader here, you can use these stages for much more interesting things such as implementing NURBS surfaces. For this reason we talk about our quads as patches.

Anyway, to make a very long story short, we need to do two things before we can start implementing our new shader.

The first is detecting whether we have support for tessellation, which we do by requesting the maximum number of vertices per patch and the maximum tessellation level that is supported, like this:
  // get info about our tessellation capabilities
  glGetIntegerv(GL_MAX_PATCH_VERTICES, &maxPatches);
  errorlog(0, "Supported patches: %d", maxPatches);
  if (maxPatches >= 4) {
    // setup using quads
    glPatchParameteri(GL_PATCH_VERTICES, 4);

    glGetIntegerv(GL_MAX_TESS_GEN_LEVEL, &maxTessLevel);
    errorlog(0, "Maximum supported tesselation level: %d", maxTessLevel);
  };
Here we've requested our maxPatches, which we check is at least 4 so we can make a quad (yes, some OpenGL hardware supports patches with many more control points), and then figure out our maximum tessellation level, which defines how far we can subdivide a single quad.

Now that we know whether we have support for quads, we need to enhance our mesh object to be able to store indices for quads instead of triangles. So when you look at mesh3d you'll see a new variable called verticesPerFace has been added to our structure, which defaults to 3.

Now we can't mix triangles and quads, so I've simply added a function called meshAddQuad which adds a quad instead of a triangle and changes our variable to 4. Doing so for the first time will also reset our indices buffer just in case (it should already be empty). Our meshAddFace method has been changed similarly but sets verticesPerFace to 3.
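A sketch of what meshAddQuad boils down to (the meshAddIndex helper and the numIndices member are hypothetical names, only the behaviour described above comes from the actual code):
void meshAddQuad(mesh3d * pMesh, GLuint pA, GLuint pB, GLuint pC, GLuint pD) {
  if (pMesh == NULL) {
    return;
  };

  if (pMesh->verticesPerFace != 4) {
    // first quad being added: switch to quads and reset our index buffer just in case
    pMesh->verticesPerFace = 4;
    pMesh->numIndices = 0;
  };

  // add our four corner indices (meshAddIndex is a hypothetical helper)
  meshAddIndex(pMesh, pA);
  meshAddIndex(pMesh, pB);
  meshAddIndex(pMesh, pC);
  meshAddIndex(pMesh, pD);
};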

I've also enhanced my meshMakePlane function so you can specify whether you want triangles or quads; it is currently the only code that generates a mesh using quads.

Finally our meshRender function has been enhanced to use the constant GL_PATCHES instead of GL_TRIANGLES when calling glDrawElements.
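In other words, the draw call inside meshRender now looks something like this (a sketch; the member names are illustrative):
  if (pMesh->verticesPerFace == 4) {
    // quads are submitted as patches so our tessellation shaders can process them
    glDrawElements(GL_PATCHES, pMesh->numIndices, GL_UNSIGNED_INT, 0);
  } else {
    glDrawElements(GL_TRIANGLES, pMesh->numIndices, GL_UNSIGNED_INT, 0);
  };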

When we initialize our height field we simply pass true to meshMakePlane's pAddQuads parameter if we support quads.

Our new vertex shader

So now it's time to have a look at our shaders. Note that I've named the height map shader that supports tessellation "hmap_ts" so we can load either those shaders or "hmap" depending on whether tessellation is supported.
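In code that selection could look something like this (a sketch; the exact file names and extensions are my assumption, as is using maxPatches as our support flag):
  if (maxPatches >= 4) {
    // we have tessellation support, load our full 5 stage shader
    hmapShader = newShader("hmap_ts", "hmap_ts.vs", "hmap_ts.cs", "hmap_ts.es", "hmap_ts.gs", "hmap_ts.fs");
  } else {
    // fall back to our shader from the previous post
    hmapShader = newShader("hmap", "hmap.vs", NULL, NULL, NULL, "hmap.fs");
  };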

When we look at our vertex shader we see that it has mostly been gutted. Note that we assign gl_Position BEFORE applying our view matrix and without adding our height. This is very important if we want even tessellation.

I've also tweaked some of our scaling factors. First off, I've gone to a 2000.0 x 2000.0 starting size for our squares as we'll be adding in enough detail to warrant it, and I've also enlarged the space our height map covers. It depends a little on the quality of the height map, but steep inclines can result in some nasty visual artifacts as the level of detail is adjusted. Remember, we're using a pretty crappy 225x225 map.

I do calculate the height of our vertex, then apply our view and projection matrix and output the resulting screen coordinate for use in our tessellation control shader. This is because we use the size at which a polygon renders on screen to determine how much we tessellate it.
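Putting that together, the tessellation version of our vertex shader boils down to something like this (a sketch; apart from Vp, the attribute name, uniform names and scale values are assumptions):
#version 400 core

layout (location = 0) in vec3 positions;

uniform mat4      mvp;       // our model-view-projection matrix
uniform sampler2D bumpMap;   // our height map

out vec3 Vp;                 // projected position for our control shader

void main(void) {
  // pass our flat, unprojected position on to the tessellation stages,
  // tessellating before height and projection keeps the subdivision even
  gl_Position = vec4(positions, 1.0);

  // work out roughly where this vertex ends up on screen so our control
  // shader can base its tessellation levels on it
  vec2 T = positions.xz / 2000.0 + 0.5;    // illustrative mapping onto our height map
  vec4 V = vec4(positions.x, texture(bumpMap, T).r * 1000.0, positions.z, 1.0);
  V = mvp * V;
  Vp = V.xyz / V.w;                        // the exact encoding of Vp is a guess
}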

Other than that, I've pretty much explained everything this shader does in our previous parts.

Our tessellation control shader

This is where the magic starts. It's a bit funny how this shader is invoked: it is called once for every vertex in our quad, but with all of the quad's information available. Note that this also means it is called multiple times for the same vertex if that vertex is used by multiple quads.

For a quad it is called with gl_InvocationID set from 0 to n-1, where n is the number of vertices for that quad, and as you can see from our mesh3d code a quad has its vertices ordered as follows:
 2------3
 |      |
 |      |
 0------1
We do the bulk of our work when gl_InvocationID is 0 and for the other 3 calls we simply pass through our control points.
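The declarations at the top of this control shader look something along these lines (Vp comes from our vertex shader; whether the other values are uniforms or constants is my assumption):
layout (vertices = 4) out;       // we output 4 control points per patch

in vec3 Vp[];                    // screen coordinates from our vertex shader

uniform float maxTessGenLevel;   // the GL_MAX_TESS_GEN_LEVEL value we queried
uniform float falloff;           // how far off screen we still consider a point visible
// plus the precision factor used below to turn edge length into a tessellation level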

Let's look at its main function in more detail:
void main(void) {
  if (gl_InvocationID == 0) {
    // get our screen coords
    vec3 V0 = Vp[0];
    vec3 V1 = Vp[1];
    vec3 V2 = Vp[2];
    vec3 V3 = Vp[3];
Maybe a bit of overkill but we copy our 4 points into local variables.
    // check if we're off screen and if so, no tessellation => nothing rendered
    if (
      ((V0.z <= 0.0) && (V1.z <= 0.0) && (V2.z <= 0.0) && (V3.z <= 0.0))              // behind camera
      || ((V0.x <= -falloff) && (V1.x <= -falloff) && (V2.x <= -falloff) && (V3.x <= -falloff)) // to the left
      || ((V0.x >=  falloff) && (V1.x >=  falloff) && (V2.x >=  falloff) && (V3.x >=  falloff)) // to the right
      || ((V0.y <= -falloff) && (V1.y <= -falloff) && (V2.y <= -falloff) && (V3.y <= -falloff)) // below the screen
      || ((V0.y >=  falloff) && (V1.y >=  falloff) && (V2.y >=  falloff) && (V3.y >=  falloff)) // above the screen
    ) {
      gl_TessLevelOuter[0] = 0.0;
      gl_TessLevelOuter[1] = 0.0;
      gl_TessLevelOuter[2] = 0.0; 
      gl_TessLevelOuter[3] = 0.0; 
      gl_TessLevelInner[0] = 0.0;
      gl_TessLevelInner[1] = 0.0;
Here we're checking if all 4 points are behind the camera, to the left of the screen, to the right of the screen, below the screen or above the screen; in other words, whether our whole quad is completely off screen. If it is, why bother rendering it? We simply set all our levels to 0 and the quad won't be rendered.
    } else {
      float level0 = maxTessGenLevel;
      float level1 = maxTessGenLevel;
      float level2 = maxTessGenLevel;
      float level3 = maxTessGenLevel;
Our tessellation works by specifying how many subdivisions we want for each edge. We default to the maximum level possible.
      // We look at the length of each edge, the longer it is, the more detail we want to add.
      // If any edge goes through our camera plane we keep the maximum level.
      
      if ((V0.z>0.0) && (V2.z>0.0)) {
        level0 = min(maxTessGenLevel, max(length(V0.xy - V2.xy) * precision, 1.0));
      }
      if ((V0.z>0.0) && (V1.z>0.0)) {
        level1 = min(maxTessGenLevel, max(length(V0.xy - V1.xy) * precision, 1.0));
      }
      if ((V1.z>0.0) && (V3.z>0.0)) {
        level2 = min(maxTessGenLevel, max(length(V1.xy - V3.xy) * precision, 1.0));
      }
      if ((V3.z>0.0) && (V2.z>0.0)) {
        level3 = min(maxTessGenLevel, max(length(V3.xy - V2.xy) * precision, 1.0));
      }
So above we've determined our actual levels by taking the length of each edge of our quad as it appears on screen and multiplying that by our precision factor. The longer the edge, the more we subdivide it.
      gl_TessLevelOuter[0] = level0;
      gl_TessLevelOuter[1] = level1;
      gl_TessLevelOuter[2] = level2;  
      gl_TessLevelOuter[3] = level3;  
      gl_TessLevelInner[0] = min(level1, level3);
      gl_TessLevelInner[1] = min(level0, level2);
So the levels we've calculated are copied into our gl_TessLevelOuter[n] output variables, which define how much each outer edge of our quad gets subdivided. But we also have two inner levels, one for our 'horizontal' spacing and one for our 'vertical' one; for each we take the lower of the two related outer edges. This is how much the quad gets subdivided internally.
    }
  };
  
  // just copy our vertices as control points
  gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
}
And at the end we simply copy our position. Again, we're doing mostly linear interpolation here, no fancy NURBS stuff; if we were, we would possibly be adjusting our control points here.

Our tessellation evaluation shader


Our control shader causes our mesh to be subdivided into many more quads and as a result we're introducing many more vertices. Our evaluation shader is run for each of those vertices (including our original ones) so that we can determine their final position. Note that OpenGL hasn't positioned our new vertices at all; instead it calls our evaluation shader with the patch's original control points and a tessellation coordinate (two weights) with which we determine the new position of our vertex.
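At the top of this shader we tell OpenGL that we're evaluating quads, something along these lines (the spacing and winding modes are my assumption):
layout (quads, fractional_odd_spacing, ccw) in;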

Again we're doing a simple linear interpolation here so:
  // Interpolate along bottom edge using x component of the
  // tessellation coordinate
  vec4 V1 = mix(gl_in[0].gl_Position,
                gl_in[1].gl_Position,
                gl_TessCoord.x);
  // Interpolate along top edge using x component of the
  // tessellation coordinate
  vec4 V2 = mix(gl_in[2].gl_Position,
                gl_in[3].gl_Position,
                gl_TessCoord.x);
  // Now interpolate those two results using the y component
  // of tessellation coordinate
  vec4 V = mix(V1, V2, gl_TessCoord.y);
We can see that we have our 4 input vertices, which are the corners of the quad we've just subdivided and inserted our vertex into, and our gl_TessCoord weights. The code above does a simple interpolation of those points to end up with vertex V.

From this point onwards we're basically doing what we did in our original vertex shader in our previous post: calculate our height, calculate our texture coordinate, calculate our normals and apply our matrices.
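Continuing from V above, that could look roughly like this (a sketch; the uniform names, scale factors and the finite difference normal are assumptions that just mirror the previous post, with T and N then passed on to the next stage through an output block):
  float texelSize = 1.0 / 225.0;                 // our height map is 225x225
  vec2  T = V.xz / 2000.0 + 0.5;                 // texture coordinate on our height map
  V.y += texture(bumpMap, T).r * 1000.0;         // lift our vertex by the sampled height

  // approximate our normal by sampling the neighbouring heights
  float hL = texture(bumpMap, T - vec2(texelSize, 0.0)).r * 1000.0;
  float hR = texture(bumpMap, T + vec2(texelSize, 0.0)).r * 1000.0;
  float hD = texture(bumpMap, T - vec2(0.0, texelSize)).r * 1000.0;
  float hU = texture(bumpMap, T + vec2(0.0, texelSize)).r * 1000.0;
  vec3  N = normalize(vec3(hL - hR, 2.0 * texelSize * 2000.0, hD - hU));

  // and finally apply our matrices
  gl_Position = mvp * V;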

If you want an example of a much more complex tessellation shader, look back a bit on this blog to my posts about my procedurally generated tree or look up its source code on my github page. It contains a subdivider that also rounds the resulting mesh.

Our geometry shader


Okay, so our geometry shader actually does nothing; you could take it out and simply change our fragment shader to take its inputs from our tessellation evaluation shader.

I only added it so I could introduce it. The geometry shader allows you to take something that we're rendering, in our case a triangle, and output additional geometry. You could implement tessellation with it if you wanted to, but be aware that it wouldn't perform as well as our tessellation shaders do.

But a geometry shader can be very useful for a height field, as we could use it to introduce things like simple ground vegetation. Something for another day however.
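For reference, a minimal pass-through geometry shader along the lines described above would look something like this (the TE_OUT block name is my guess; GS_OUT matches the fragment shader input shown below):
#version 400 core

layout (triangles) in;
layout (triangle_strip, max_vertices = 3) out;

in TE_OUT {
  vec2  T;
  vec3  N;
  vec4  V;
} gs_in[];

out GS_OUT {
  vec2  T;
  vec3  N;
  vec4  V;
} gs_out;

void main(void) {
  // simply copy our triangle through unchanged
  for (int i = 0; i < 3; i++) {
    gl_Position = gl_in[i].gl_Position;
    gs_out.T = gs_in[i].T;
    gs_out.N = gs_in[i].N;
    gs_out.V = gs_in[i].V;
    EmitVertex();
  }
  EndPrimitive();
}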

Our fragment shader


And finally we've ended up at our fragment shader, and it is nearly identical to the one in our previous post. The only difference is that it takes its inputs from our geometry shader instead of our vertex shader.

On a side note, instead of defining each output and input individually as we did before, we've now started using interface blocks such as this one:
in GS_OUT {
  vec2  T;
  vec3  N;
  vec4  V;
} fs_in;
I'm not aware of any performance benefits; it's mostly about the readability of the shader.

The final result

So here is our final result:

I'll see about adding a movie in due time as it's interesting to see the LOD change as you move around.

Note that for our source code I'm now branching; we'll see if that is easier than maintaining the archive subfolder.

Download the source here

So where from here?


Well, much the same stands as after our last post: it makes a lot of sense to bake some basic level of detail into our initial mesh. Our quads further away are now so small that we're wasting rendering time, so it would be good to have some very large quads at the edges of our mesh for far away geometry.

Having a way to load in additional texture maps to create an ever expanding world would be good too.

Also, actually creating an interface that allows us to edit the height maps from within our engine would be great. I actually had this up and running in a previous experiment, so who knows, I may come back to that some day.

What's next?

I honestly don't know yet. I've been working on the 3D version of the platformer a bit during the week and I might return to it for a while, but I'm also tempted to extend my height field write-up. I guess you'll see soon which one won my attention :)
