Saturday 27 February 2016

Adding a height field #1 (part 22)

Height fields are another simple technique often used in game engines to create a ground for a character to move on. In this part we're going to start with a very basic height field that is automatically centered on the camera position to create a never ending ground plane.

From there we'll start building out its capabilities, probably revisiting this over a few parts as time goes by.

For now we'll keep things simple with a standard grid and a single tiled texture for our height field, and in the next part we'll look into increasing the level of detail depending on the distance to the camera. In future parts we'll dive deeper into techniques to extend the map by swapping height textures and look at how we can apply different terrain textures.

Adding structure to our shader matrices


Before we dive into our height field there is one structural change I made to the project: adding a number of access methods for our shaderMatrices structure and enhancing this structure to contain matrices that are calculated from our projection, view and model matrices.

As a result the structure is now considered read only, with most members considered private. To set our projection, view and model matrices we call shdMatSetProjection, shdMatSetView and shdMatSetModel respectively.
Doing so resets the appropriate flags inside the structure to indicate which calculated matrices need updating.
While the projection, view and model matrices can be read directly, the calculated matrices are accessed through the following functions:
  • shdMatGetInvView to get an inverse of the view matrix
  • shdMatGetEyePos to get the position of the eye/camera
  • shdMatGetModelView to get our model/view matrix
  • shdMatGetInvModelView to get the inverse of our model/view matrix
  • shdMatGetMvp to get our model/view/projection matrix
  • shdMatGetNormal to get our normal matrix (world space)
  • shdMatGetNormalView to get our normal matrix (view space)
What these access functions allow us to do is calculate these matrices as few times as possible and reuse them as much as we can. As some of these are relatively expensive to calculate but don't always change when rendering multiple objects, it's a worthwhile improvement. It also allows us to access these calculated matrices outside of our materials logic.

As a result this structure is now passed to our render logic by pointer.
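To make the caching idea concrete, here is a minimal sketch of a dirty-flag accessor. The structure layout, names and the single cached matrix are my own simplification for illustration; the real shaderMatrices structure tracks more matrices and flags:

```c
#include <stdbool.h>

typedef struct { float m[16]; } mat4s;  /* illustrative column-major 4x4 */

typedef struct {
  mat4s view, model;
  mat4s modelView;       /* cached product: view * model       */
  bool  modelViewDirty;  /* set whenever view or model changes */
} shdMatSketch;

static void mat4sMultiply(mat4s *pDst, const mat4s *pA, const mat4s *pB) {
  mat4s r;
  for (int c = 0; c < 4; c++) {
    for (int row = 0; row < 4; row++) {
      float sum = 0.0f;
      for (int k = 0; k < 4; k++) {
        sum += pA->m[k * 4 + row] * pB->m[c * 4 + k];
      }
      r.m[c * 4 + row] = sum;
    }
  }
  *pDst = r;
}

void sketchSetView(shdMatSketch *pMat, const mat4s *pView) {
  pMat->view = *pView;
  pMat->modelViewDirty = true;  /* invalidate the cached matrix */
}

void sketchSetModel(shdMatSketch *pMat, const mat4s *pModel) {
  pMat->model = *pModel;
  pMat->modelViewDirty = true;
}

const mat4s *sketchGetModelView(shdMatSketch *pMat) {
  /* recalculate only when one of the inputs actually changed */
  if (pMat->modelViewDirty) {
    mat4sMultiply(&pMat->modelView, &pMat->view, &pMat->model);
    pMat->modelViewDirty = false;
  }
  return &pMat->modelView;
}
```

The same pattern extends to the inverse, normal and MVP matrices: each setter marks the dependent products dirty and each getter recalculates only when its flag is set.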

Our height map


To create a height field we could load up a 3D plane already adjusted for height and just render it, but that would be very inefficient. Either it would need to be immensely large and bog down your GPU, or you'd need to constantly update the vertex positions and waste a lot of bandwidth.

With our vertex shaders however there is a much simpler solution. We can use a texture to indicate the height at a specific point on our map and adjust the vertices that way. Here is the texture I'm currently using. It is a tileable map I grabbed off of Google Images.
This map is not ideal for a real height field implementation but for our example it will do fine. First of all it's too small. It's only 225x225 pixels and covers only a small area before it starts repeating. Especially once we add our automatic level of detail adjustment we'll need higher detail maps.
The other issue is that it is a grayscale RGB image and it really only makes sense to use one color channel in our shader. This limits the precision of our height and wastes a lot of memory.
OpenGL is able to load single channel 32 bit floating point images that would give us the precision we need. The code doesn't change other than loading a different type of image, so for our goal today this limitation isn't an issue.

To communicate which texture map we are using as our height map I've added a new member to our shaderStdInfo structure called bumpMapId and a texture map object called bumpMap to our material structure.
BumpMap may not be a completely fitting name, but as I don't want to bloat my structures with too many unnecessary members I figured I'd need only one member that I can use as a heightmap/bumpmap/normalmap as we start writing more shaders.

While not yet used, I've also enhanced our texture map class to retain dimension information about the image that was loaded.

Creating the object we'll render


It was tempting to not use an object at all and do everything in our vertex shader but I figured this would be overkill and just waste GPU cycles. Instead I've added a simple method to our mesh3d library called meshMakePlane:
bool meshMakePlane(mesh3d * pMesh, int pHorzTiles, int pVertTiles, float pWidth, float pHeight) {
  int     x, y, idx = 0;
  float   posY = -pHeight / 2.0;
  float   sizeX = pWidth / pHorzTiles;
  float   sizeY = pHeight / pVertTiles;

  for (y = 0; y < pVertTiles+1; y++) {
    float   posX = -pWidth / 2.0;

    for (x = 0; x < pHorzTiles+1; x++) {
      // add our vertex
      vertex v;
      v.V.x = posX;
      v.V.y = 0.0;
      v.V.z = posY;
      v.N.x = 0.0;
      v.N.y = 1.0;
      v.N.z = 0.0;
      v.T.x = posX / pWidth;
      v.T.y = posY / pHeight;

      meshAddVertex(pMesh, &v);

      if ((x>0) && (y>0)) {
        // add triangles
        meshAddFace(pMesh, idx - pHorzTiles - 2, idx - pHorzTiles - 1, idx);
        meshAddFace(pMesh, idx - pHorzTiles - 2, idx, idx - 1);
      };

      posX += sizeX;
      idx++;
    };

    posY += sizeY;
  };

  return true;
};
This simply creates a mesh for a flat 3D surface of a given size with a given number of tiles. It also sets up texture coordinates and normals but we will be ignoring those.
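As a quick sanity check on the buffer sizes such a plane needs, the helper below (my own, not part of the tutorial source) computes the vertex and index counts: a plane of w x h tiles has (w+1) x (h+1) vertices and w*h*2 triangles of 3 indices each, which is exactly what the newMesh(102 * 102, 101 * 101 * 3 * 2) call in initHMap allows for.

```c
/* Buffer sizes for a pHorzTiles x pVertTiles plane:
   (tiles + 1) squared vertices, two triangles of
   three indices per tile. */
typedef struct {
  int vertices;
  int indices;
} planeSize;

planeSize planeBufferSize(int pHorzTiles, int pVertTiles) {
  planeSize size;
  size.vertices = (pHorzTiles + 1) * (pVertTiles + 1);
  size.indices  = pHorzTiles * pVertTiles * 2 * 3;
  return size;
}
```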

In our load_objects function we're setting up a mesh3d object for this plane. Note that I backtracked on my previous post and am no longer adding our skybox to our scene and neither will I add our height field to our scene. Instead they are rendered separately after the whole scene has rendered.

Our height field is initialized using this function:
void initHMap() {
  material *    mat;

  mat = newMaterial("hmap");                  // create a material for our heightmap
  mat->shader = &hmapShader;                  // texture shader for now
  matSetDiffuseMap(mat, getTextureMapByFileName("grass.jpg", GL_LINEAR, GL_REPEAT, false));
  matSetBumpMap(mat, getTextureMapByFileName("heightfield.jpg", GL_LINEAR, GL_REPEAT, true));

  hMapMesh = newMesh(102 * 102, 101 * 101 * 3 * 2);
  strcpy(hMapMesh->name, "hmap");
  meshSetMaterial(hMapMesh, mat);
  meshMakePlane(hMapMesh, 101, 101, 101.0, 101.0);
  meshCopyToGL(hMapMesh, true);

  // we can release our material here, the mesh now retains it
  matRelease(mat);
};
Note that we have a new shader called hmapShader which we've loaded alongside our other shaders. I'll discuss the shader itself in a minute.

We load two texture maps: a diffuse map, which is the texture we draw our surface with, and our bump map. Note that both are set up as repeating textures (tiled).

Note also that our tiles are 1.0x1.0 in size and we have 101x101 tiles in our surface. We'll scale this up in the vertex shader as needed.

We then render our surface in our engineRender function like so:
  if (hMapMesh != NULL) {
    // our model matrix is ignored here so we don't need to set it..
    matSelectProgram(hMapMesh->material, &matrices, &sun);
    meshRender(hMapMesh);
  };
Note also the extra glDisable(GL_BLEND) after we render our scene, as the last meshes we rendered would have been our transparent ones.

Our shaders

Now that we've got our object that needs to be rendered, it's time to have a look at our shaders.

Let's talk about the fragment shader first because it is nearly identical to our texture map shader. I've left out specular highlighting and simplified the input parameters, but there really isn't much point in having a closer look as we've discussed texture mapping in detail before.

The magic happens in our vertex shader so let's have a look at that one part at a time:
#version 330

layout (location=0) in vec3 positions;

uniform vec3 eyePos;        // our eye position
uniform mat4 projection;    // our projection matrix
uniform mat4 view;          // our view matrix
uniform sampler2D bumpMap;  // our height map

out vec4 V;
out vec3 N;
out vec2 T;
There is little to discuss about our inputs and outputs, all that is new here is that we ignore many of the inputs we had in our standard vertex shader and that we have a sampler for our bumpmap.
float getHeight(vec2 pos) {
  vec4 col = texture(bumpMap, pos / 10000.0);

  return col.r * 1000.0;
}
This is where the fun starts. Above is a function that takes a real world X/Z coordinate, adjusts it, looks it up in our height map and scales the "red" channel up. With red ranging from 0.0 to 1.0, which translates to 0 to 255, we thus get a height between 0.0 (black) and 1000.0 (red/white) with a precision of roughly 3.9 units per color step. The scale by which we divide our coordinates defines how large an area our height map covers. Remember that in texture coordinates we're going from 0.0 to 1.0, so at 1.0 we're at position 225 on our tiny texture map. With larger texture maps we thus need to increase the amount we divide by to cover the intended area.
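To make the precision claim concrete, here is the same math on the CPU. heightFromRed is a hypothetical helper assuming an 8 bit red channel:

```c
/* Height for an 8 bit red value: 0..255 maps to 0.0..1000.0,
   so the smallest representable step is 1000.0 / 255.0,
   roughly 3.9 units. */
float heightFromRed(unsigned char pRed) {
  return (pRed / 255.0f) * 1000.0f;
}
```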
vec3 calcNormal(vec2 pos) {
  vec3 cN;

  cN.x = getHeight(vec2(pos.s-10.0,pos.t)) - getHeight(vec2(pos.s+10.0,pos.t));
  cN.y = 20.0;
  cN.z = getHeight(vec2(pos.s,pos.t-10.0)) - getHeight(vec2(pos.s,pos.t+10.0));

  return normalize(cN);
}
The function above calculates the normal for our surface at a given point. What we do here is get the height slightly to the left of our location and slightly to the right to calculate a gradient. We do the same in our z direction (which is y on our map). This is a quick and dirty, not super accurate method of calculating our normal, but it will do for the time being. Now it's time to look at the main body of the shader:
void main(void) {
  // get our position, we're ignoring our normal and texture coordinates...
  V = vec4(positions, 1.0);

  // our start scale, note that we may vary this depending on distance of ground to camera
  float scale = 1000.0;

  // and scale it up
  V.x = (V.x * scale);
  V.z = (V.z * scale);
}
The bit above is pretty simple and straightforward: we get the position of the vertex we're handling and then scale it up. Our 1.0x1.0 tiles have thus become 1000.0x1000.0 tiles. Note that we only scale the x and z members of our vector. While multiplying our y isn't a problem, multiplying w would cause some funky issues.
  // Use our eyepos defines our center. Use our center size.
  V.x = V.x + (int(eyePos.x / scale) * scale);
  V.z = V.z + (int(eyePos.z / scale) * scale);
Now this is the first part of the magic. We always want to center our surface on our camera, but aligned to our tiles or else we get a really funky wave effect on our surface. We thus take our eye position, divide it by our scale, truncate it to a whole number, and scale it back up.
  // and get our height from our hmap.
  V.y = getHeight(V.xz);
So now it's time to call our height function to adjust our V, as we now know the real world position of our vertex. The nice thing here is that we can start scaling and modifying our surface to our heart's content; for a given world space coordinate we'll always get the right height.
  // calculate our normal.
  N = calcNormal(V.xz);
  N = (view * vec4(N+eyePos, 1.0)).xyz;
Here we call our calcNormal function to calculate our normal. We add in our eye position as a trick to be able to apply our view matrix as is. Basically, the view matrix translates the vertex by the inverse of our camera position, making the total movement 0, and we're left with a nicely rotated normal.
  // and use our coordinates as texture coordinates in our fragment shader
  T = vec2(V.x / 2000.0, V.z / 2000.0);
Calculating our texture coordinate becomes a simple scale as well. We generally want our texture to cover a much smaller area than our height map, hence the lower number, even though our texture map is much larger than our height map.
  // our on screen position by applying our view and projection matrix
  V = view * V;
  gl_Position = projection * V;
}
Last but not least we need to apply our view and projection matrix. We store our position adjusted by the view matrix in V as our output for our lighting.

The end result is this:

Note that I've added z/c as controls to move the camera forward/backward and f to toggle wireframe. I've also adjusted our joystick control, provided you have a gamepad type controller, as I'm using the second stick for movement.

Download the source here

So where from here? 


The example works well, but when you start playing with it you'll quickly see its limitations. First off, the detail in the ground is very coarse. The tiles are much too large; it looks like an old vector game from the mid '90s :) The other issue is that when you move the camera up, eventually the surface becomes small enough that the edges are very clearly visible.

There are two things we can do to combat this before we start using tessellation (which we'll talk about in the next part), especially as a fallback if your hardware doesn't support tessellation shaders.
  • The first we've already briefly mentioned: up the scale as the camera gains height. Just like with horizontally adjusting our position based on camera movement, it is important we increment this in fixed blocks or you'll end up with a wave effect as you smoothly move vertices of your height field across your height map
  • The second is 'prebaking' some of the LOD into our mesh. Instead of all tiles having a uniform size of 1.0, we could start increasing the size of tiles as we move away from the center of our map. This is harder than it seems, because you could again end up with a wave effect if you don't think this out properly, but it is well worth the effort
We've already talked about the need for a better sized height map and using single color channel 32 bit floating point maps, but instead of repeating these maps through tiling, another worthwhile enhancement is to load a grid of maps and load up new maps as you move around the world. This is something we might do a part on in the future.

Finally having a never ending field of grass is fun but not very practical. What we want is to have a number of different textures for different terrains and blend from one texture to another. For this we need to introduce a 3rd set of maps that serve as indices into which texture map needs to be used and come up with a nice way to blend from one to the other. Another topic we'll look at in the near future.

What's next?

So that's mostly music for the future, but in the next part we'll start looking at a new type of shader that automatically increases the LOD of our terrain.

Monday 15 February 2016

Instantiating objects (part 21)

I actually wrote this code some weeks back but didn't have the time to put it into my little tutorial project until this weekend. Slowly our little 3D engine is taking shape.

Housekeeping


As always there are a few fixes for things I found I had done wrong. One was setting the eye coordinate for our reflection mapping which required us to first calculate the inverse of our view matrix instead of using the view matrix directly.
The other was a dumb typo in the code that applies a 4x4 matrix to a 3D vector. It initialized W as 0.0 instead of 1.0, with the result that our translation wasn't applied.

I also made a few small enhancements to our mesh3d code.
First off, if you do not explicitly call meshCopyToGL it is automatically called the first time you try to render the mesh. It will free up our CPU buffers so if you wish to keep those, call meshCopyToGL before rendering.
I also added an adjust matrix to our loading code. Our tie-bomber for some reason was way off center and allowing us to adjust all the vertices of our object as part of loading it gives us a way to fix this.
There is also a new method called MeshCenter which finds the center of a single mesh, adjusts all the vertices to properly center the mesh and then updates its model matrix accordingly. This can be handy, but in the case of our tie-bomber it would still give the wrong end result, so it is not used in our example.

A new library, meshnode.h


The big change is the introduction of a new library called meshnode. This library implements a "tree" in which to place everything we wish to render on screen. Nodes within this tree can (re)use the same mesh but apply a different model matrix to position the model somewhere else.
Nodes can also have child nodes. Here the model matrix of the parent node applies to all child nodes. For our tie-bomber this is nice because it allows us to move the entire model with all its individual pieces as one unit but it will also allow us at some later stage to move parts of a model in relation to itself. Picture a model of a car where you rotate the wheels around their own axis to simulate steering or animate the car driving.
Also the node structure allows us to show or hide entire models in one go.
This structure forms the basis of a lot of techniques that allow us to improve the versatility and performance of our engine and we'll get to those as time goes by.

The other major change this brings is that we no longer handle the core render logic in our engine_update function. Instead the bulk has moved into the meshNodeRender method that is part of our node system.

You will notice way at the start of our engine.c file that we have replaced our meshes array with a single node called scene:
meshNode *    scene = NULL;
Ignore for a moment that we define an array of nodes on the next line, I'll come back to that in a minute.

Our scene node is the main container node into which we load everything that we render. This can be completely self contained and at the end of loading our meshes and organizing everything we'll end up with the following:
* scene
  * tie-bomber-0
    * TBSOLAR_01 => renders mesh3d TBSOLAR_01
    * TBWING_L01 => renders mesh3d TBWING_L01
    * ...
    * TBTOP_BO02 => renders mesh3d TBTOP_BO02
  * tie-bomber-1
    * TBSOLAR_01 => renders mesh3d TBSOLAR_01
    * TBWING_L01 => renders mesh3d TBWING_L01
    * ...
    * TBTOP_BO02 => renders mesh3d TBTOP_BO02
  * ...
  * tie-bomber-9
    * TBSOLAR_01 => renders mesh3d TBSOLAR_01
    * TBWING_L01 => renders mesh3d TBWING_L01
    * ...
    * TBTOP_BO02 => renders mesh3d TBTOP_BO02
  * skybox => renders mesh3d skybox

So we've included our tie-bomber 10 times in our tree and will thus end up rendering it 10 times, each time at a different location. The location is set on the tie-bomber-n node and applies to all its child nodes automatically by multiplying the parent node's matrix with each child node's matrix.
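The parent-to-child multiplication can be sketched with translation-only matrices. mat4s and its helpers here are my own stand-ins for the tutorial's mat4 code:

```c
typedef struct { float m[16]; } mat4s;  /* column-major, like OpenGL */

/* an identity matrix with a translation in the fourth column */
mat4s mat4sTranslation(float pX, float pY, float pZ) {
  mat4s r = {{1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  pX, pY, pZ, 1}};
  return r;
}

/* world = parent * local: this is how a node's matrix carries
   over to all of its children */
mat4s mat4sMul(const mat4s *pA, const mat4s *pB) {
  mat4s r;
  for (int c = 0; c < 4; c++) {
    for (int row = 0; row < 4; row++) {
      float sum = 0.0f;
      for (int k = 0; k < 4; k++) {
        sum += pA->m[k * 4 + row] * pB->m[c * 4 + k];
      }
      r.m[c * 4 + row] = sum;
    }
  }
  return r;
}
```

Rotations and scales compose the same way; only the multiplication order (parent first) matters.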

As mentioned, I also keep an array called tieNodes which is defined right after our scene node. These pointers point to our individual nodes within our scene. We could do without it at the present time, but looking forward this is very handy as it allows us to move our tie-bombers around without having to search for their nodes within our tree. Easy at this point in time as we have only 3 layers, but as we go on our tree may become far more complex.

After much internal debate I decided against changing my wavefront obj loader; it still returns an array of meshes as before. It was tempting to change the logic so it returned a single node with all the meshes loaded as child nodes. We could even go as far as to make it create a 3 level tree by taking the object and face group structure as separate layers.
While this makes sense I felt it overcomplicated matters, and I wasn't going to go down the route of having my entire scene held within a single obj file as it would defy our goal of drawing multiple instances of the same meshes.
Instead a single wavefront file should contain one object like a tie-bomber, or a car, or anything else we might like and we composite our scene by some other means (currently in code).

Setting up our scene


So let's have a look at our load_objects() function bit by bit to see how we're using our meshNode library.

First off, I've moved all material loading to the start of this function. I was even tempted to give this its own function but for now I'm happy.

After our materials are loaded we create our scene object:
  // create our root node
  scene = newMeshNode("scene");
  if (scene != NULL) {
Now that was easy and pretty self explanatory.

Next up we load our tie-bomber file:
    // load our tie-bomber obj file
    text = loadFile(modelPath, "tie-bomber.obj");
    if (text != NULL) {
      llist *       meshes = newMeshList();

      // setup our adjustment matrix to center our object
      mat4Identity(&adjust);
      mat4Translate(&adjust, vec3Set(&tmpvector, 250.0, -100.0, 100.0));

      // parse our object file
      meshParseObj(text, meshes, materials, &adjust);
Now this is pretty similar to what we were doing before. The main differences are that we create our meshes linked list locally and that we introduce our adjust matrix, which will end up setting a proper center for our tie-bomber.
      // add our tie bomber mesh to our containing node
      tieNodes[0] = newMeshNode("tie-bomber-0");
      meshNodeAddChildren(tieNodes[0], meshes);
This is our first new bit, here we create a new mesh node for our tie-bomber and then call meshNodeAddChildren to add child nodes for each of the mesh3d objects held within our meshes linked list.
      // and add it to our scene, note that we could free up our tie-bomber node here as it is referenced by our scene
      // but we keep it so we can interact with it.
      meshNodeAddChild(scene, tieNodes[0]);
Next we add our node containing our tie-bomber to our scene. We could now release our tie-node as it is retained within our scene node but as I mentioned before we keep our array so we can more easily access our tie-bomber node in our update code (something for later).
      // and free up what we no longer need
      llistFree(meshes);
      free(text);
    };
That said, we do release our meshes linked list as our mesh3d objects are retained by the child nodes of our tie-bomber node.

At this point in time we have only one tie-bomber that we draw. To draw our other 9 tie-bombers we simply need to create new nodes that point to the same meshes. For this we have a handy little method called newCopyMeshNode, which makes a copy of a node. By default we reuse our child nodes, but it has a deep copy option that copies all the child nodes as well (while still reusing the same meshes). You would deep copy the nodes if you want to animate individual parts of your objects. Take our car as an example again: if we didn't copy all the child nodes, changing the orientation of a wheel would apply that change to all instances of our object. For our tie-bomber however the shallow copy will do just fine.
    tieNodes[1] = newCopyMeshNode("tie-bomber-1", tieNodes[0], false);
    mat4Translate(&tieNodes[1]->position, vec3Set(&tmpvector, -400.0, 0.0, -100.0));
    meshNodeAddChild(scene, tieNodes[1]);
Here we see it in action, we make a copy of our tieNodes[0] node, then we change the position matrix of our new node so our tie-bomber moves to a new location, and we add our new node to our scene.

Now we repeat this for our other 8 tie-bombers.
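The difference between a shallow and a deep node copy can be shown with a toy structure. nodeSketch and copyNodeSketch are my own simplifications; the real meshNode holds a child list, matrices and a mesh pointer:

```c
#include <stdbool.h>
#include <stdlib.h>

typedef struct nodeSketch {
  float              angle;  /* stand-in for the node's model matrix  */
  struct nodeSketch *child;  /* a single child keeps the sketch short */
} nodeSketch;

nodeSketch *copyNodeSketch(nodeSketch *pSrc, bool pDeep) {
  nodeSketch *copy = (nodeSketch *) malloc(sizeof(nodeSketch));
  *copy = *pSrc;  /* shallow: copy->child points at the shared child */
  if (pDeep && pSrc->child != NULL) {
    copy->child = copyNodeSketch(pSrc->child, true);  /* own child nodes */
  }
  return copy;
}
```

With a shallow copy, steering the wheel of one car instance steers it on every instance; a deep copy gives each instance its own wheel node while the wheel mesh itself stays shared.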

Finally at the end of our load_objects function we add our skybox. I have moved this code into a separate function but it's basically the same code as in our previous example but with the extra step that we add a node to our scene for our skybox mesh.

Rendering our scene

Now that everything that we wish to render is contained within our scene node we can simply call the function meshNodeRender to render everything to screen:
  // init our projection matrix, we use a 3D projection matrix now
  mat4Identity(&matrices.projection);
  // distance between eyes is on average 6.5 cm, this should be settable
  mat4Stereo(&matrices.projection, 45.0, pRatio, 1.0, 10000.0, 6.5, 200.0, pMode);
  
  // copy our view matrix into our state
  mat4Copy(&matrices.view, &view);
  
  if (scene != NULL) {
    // and render our scene
    meshNodeRender(scene, matrices, (material *) materials->first->data, &sun);    
  };
Now that became a lot simpler :)

Obviously the code that used to be in our engine_update function has mostly moved to our meshNodeRender function but there is a key difference. In our original implementation we rendered opaque meshes directly while we placed meshes with a transparent material into an array.
Our new function builds two arrays, one for opaque meshes and one for transparent ones. Then they are rendered.

At this stage there is little added benefit to this other than simplifying our meshNodeBuildRenderList function. This function recursively 'walks' through our tree of nodes and calculates all the model matrices that are required to render our meshes.

However, we now have the base for starting to add optimizations such as sorting this list to minimize switching shaders, minimize changing texture maps, minimize switching VAOs, etc. Again, stuff we'll start implementing in later parts of this series, but we've done the groundwork now.

The end result: we have 10 tie-bombers in a nice formation:

Download the source here

What's next?

I'm working on a simple height field implementation to add a ground surface to the project I'm working on. I'll probably add that to our tutorial in the next session.

After that I finally want to make the jump to deferred rendering so we can start making the lighting a bit more fun.


Sunday 7 February 2016

Adding a skybox (part 20)

Okay, I'm back with another installment. Before we get to our skybox, I've started cleaning up some code and polishing a few things, so let's look at that first.

A new folder structure


One thing that was really starting to annoy me was dumping all the files into our resources folder, and together with our .exe on Windows. One thing that really makes this a pain is that we are dealing with different path delimiters on each platform. I'm not entirely happy with how I've solved things yet, but at least we've now got a bit more of a manageable structure. Fonts go into Fonts, model files into Models, textures into Textures and shaders into Shaders.

Also on Windows I decided to just create a Resources folder to put everything into. I haven't tested the Windows build yet so expect some typos or things I've forgotten to change in the makefile.

For Windows I'll eventually change the working folder to the Resources folder so we can make it work in line with our Mac code and maybe look into a path parser that replaces path delimiters for the correct platform. We'll see.

Materials and shaders


I've made a start on better structuring the materials and how they use shaders. It is now the material that is leading: the material sets up the information OpenGL needs to render it, and the shader to use is simply a property of the material. That means that if we don't know what material to use for an object we simply use a default material we've added to our material list at the start (obviously none should be missing).

For now there is a little loop in load_objects that assigns the shaders to our materials:
  // assign shaders to our materials
  node = materials->first;
  while (node != NULL) {
    mat = (material * ) node->data;

    if (mat->reflectMap != NULL) {  
      mat->shader = &reflectShader;
    } else if (mat->diffuseMap != NULL) {          
      mat->shader = &texturedShader;
    } else {
      mat->shader = &colorShader;
    };
    
    node = node->next;
  };
This is a bit of a place holder for the time being, this still needs to change to something better.

Texturemaps


One thing that was a bit of an eyesore was that I was loading the same texture maps multiple times in my material loader as they were reused for different materials.
I've given the texture map loading code its own place in a new library called "texturemap.h".

It works along the same lines as all the other support libraries we've created so far, but has one new trick up its sleeve. It maintains a linked list of all texture maps currently loaded; texture maps are loaded by calling "getTextureMapByFileName".
As we do not know whether we are getting back a texture map that was already loaded or a new one, the rule is that if the caller wishes to retain the texture map, it needs to do so itself.

In this I'm following a rule that comes from Objective-C (where I first encountered retain/release): a function that begins with "new" returns an already retained object, and a function that begins with "get" returns an object that needs to be retained.
We've got no need for autoreleasing objects just yet and I'm hoping we won't, but I'm starting to wonder if I shouldn't create some sort of base library for my retain/release approach. Another day...

When we look at material.h we can see that we no longer use texture IDs directly but go through our texturemap object. We can also see that we no longer have our load functions, but instead have the functions "matSetDiffuseMap" and "matSetReflectMap" that set a pointer to our texturemap object.
If you look at the code for these two methods you again see a trick that stems from Obj-C's property structure, where we nicely release our old map and retain our new one.
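A stripped-down sketch of that setter pattern; the types here are hypothetical stand-ins, the real texturemap object holds a GL texture id, filename and more:

```c
#include <stddef.h>

typedef struct {
  int retainCount;  /* 1 when created */
} tmapSketch;

void tmapSketchRetain(tmapSketch *pMap)  { if (pMap != NULL) pMap->retainCount++; }
void tmapSketchRelease(tmapSketch *pMap) { if (pMap != NULL) pMap->retainCount--; }

typedef struct {
  tmapSketch *diffuseMap;
} matSketch;

/* retain the new map first, then release the old one: that order
   is safe even when the old and new map are the same object, and
   both helpers accept NULL gracefully */
void matSketchSetDiffuseMap(matSketch *pMat, tmapSketch *pNewMap) {
  tmapSketchRetain(pNewMap);
  tmapSketchRelease(pMat->diffuseMap);
  pMat->diffuseMap = pNewMap;
}
```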

As a result we can do something like this (as we indeed do for our skybox):
matSetDiffuseMap(mat, getTextureMapByFileName("skybox.png", GL_LINEAR, GL_CLAMP_TO_EDGE));
Our getTextureMapByFileName function checks if we've already loaded our map and if not loads it; our matSetDiffuseMap assigns it to our material and retains it. If loading fails a NULL pointer is returned by getTextureMapByFileName, but matSetDiffuseMap deals with this just fine.

While all our materials release their texture maps as they get destroyed, it is important to remember our cache still retains all the loaded textures. We must not forget to unload these as well when we clean up at the end:
void unload_objects() {
  // free our object data
  tmapReleaseCachedTextureMaps();
  ...

Our skybox


So first off, what is a skybox? This is a technique often applied to create our far away background, say a map of the sky (what's in a name, right?). By projecting our background onto a very large box that we are inside of, we create a background that automatically adjusts itself as we move our camera around.

Seeing we've got a tie-bomber, a sensible skybox would be a star scape, but I wanted something that shows off the technique a little better. Doing a bit of googling, there are hundreds if not thousands of graphics available if you're not able to make one yourself. I came across RB Whitaker's site; he's written up a lot of interesting graphics things, but using C# and XNA. He also has some graphics he's kindly donated to the public domain, so there you go.

We often see the texture maps laid out in a T shape to minimize the number of borders that don't line up nicely. This is important because we interpolate our textures when rendering. I've however gone for a 2x3 layout to minimize wasted space in my texture, but we have to keep in mind that we can't extend our textures all the way to each border or we get some strange artifacts on the edges.


You'll see in the code where we generate our cube mesh I've left a 2 pixel border around each image.
The image itself shows the front, back, left, right and finally top and bottom images in 3 rows. They are simply mapped onto a box or cube.

Three more things set our skybox apart from rendering other objects.
The first is that it's inside out. Since we are inside of our cube we need to see its insides.
The second is that we don't want to apply any lighting but render the cube as is. We've thus got our own shader in the form of the shader programs "skybox.vs" and "skybox.fs". The code should be easy to read by now.
The third is that we're not using our model matrix but always center our skybox on the user. The idea is that the background is infinitely far away, so it only moves as you look around.
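That centering trick can be sketched in plain C: with OpenGL's column-major 4x4 matrices, re-centering just means writing the eye position into the translation column every frame. The mat4 type and function names here are illustrative assumptions, not the tutorial's actual structures:

```c
#include <string.h>

/* minimal column-major 4x4 matrix, as OpenGL expects */
typedef struct { float m[16]; } mat4;

void mat4Identity(mat4 *pM) {
  memset(pM->m, 0, sizeof(pM->m));
  pM->m[0] = pM->m[5] = pM->m[10] = pM->m[15] = 1.0f;
}

/* build a model matrix that simply follows the eye, so the skybox
   never gets closer or further away */
void skyboxModelMatrix(mat4 *pModel, const float *pEye) {
  mat4Identity(pModel);
  pModel->m[12] = pEye[0];   /* translation lives in elements 12..14 */
  pModel->m[13] = pEye[1];
  pModel->m[14] = pEye[2];
}
```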

Infinity itself isn't really possible, so the size of our skybox is dictated by our far plane. Our far plane in our example stands at 10,000.0 and our box is also 10,000 wide, high and deep. The observant amongst you will thus say that it's only going to be at a distance of 5,000, and that would be true for the center of each face, but as we look towards a corner the furthest point sits at roughly 8,660 (the furthest corner is at 5000,5000,5000; calculate the distance to that point using Pythagoras' theorem).

This is one reason why some use a sphere as a "skybox" but a sphere is harder to texture without getting artifacts at the poles.  The principles stay the same.

The code for adding the skybox can be found in load_objects:
  ...
  // add the skybox last so it renders at the end
  mat = newMaterial("skybox");                // create a material for our skybox
  mat->shader = &skyboxShader;                 // use our skybox shader, this will cause our lighting and positioning to be ignored!!!
  matSetDiffuseMap(mat, getTextureMapByFileName("skybox.png", GL_LINEAR, GL_CLAMP_TO_EDGE)); // load our texture map (courtesy of http://rbwhitaker.wikidot.com/texture-library)
  mesh = newMesh(24, 36);                     // init our cube with enough space for our buffers
  strcpy(mesh->name,"skybox");                // set name to skybox
  meshSetMaterial(mesh, mat);                 // assign our material
  matRelease(mat);                            // and release it, our mesh now owns it
  meshMakeCube(mesh, 10000.0, 10000.0, 10000.0, true);  // create our cube, we make it as large as possible
  meshFlipFaces(mesh);                        // turn the mesh inside out
  meshCopyToGL(mesh, true);                   // copy our cube data to the GPU
  llistAddTo(meshes, mesh);                   // add it to our list
  meshRelease(mesh);                          // and release it, our list now owns it
  ...
Note that small changes were added to our cube generation code to handle the correct texture mapping.
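What meshFlipFaces boils down to can be sketched as follows: swapping two of the three indices of every triangle reverses its winding order, so faces that pointed outwards now face inwards. This is just an illustration of the winding trick, not the tutorial's actual implementation, which may also flip the normals:

```c
/* reverse the winding order of every triangle in an index buffer */
void flipFaces(unsigned int *pIndices, unsigned int pIndexCount) {
  unsigned int i;
  for (i = 0; i + 2 < pIndexCount; i += 3) {
    /* swapping the 2nd and 3rd index turns CCW into CW and vice versa */
    unsigned int tmp = pIndices[i + 1];
    pIndices[i + 1] = pIndices[i + 2];
    pIndices[i + 2] = tmp;
  }
}
```

With backface culling enabled, this makes the inside of the cube the visible side, exactly what we need when the camera sits within it.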

As the comment mentions, we also add our skybox last. It can even make sense to leave it out of our render buffer and just render it separately at the end of our render loop. While the parallel processing on our GPU may result in more being rendered than we expect, the skybox covers the entire screen, and at the end of our render loop many objects will already obscure it. Rendering it last thus increases the chance of our Z-buffer discarding many of its pixels. Still, the shader is simple enough for this to potentially not have much of an impact.

And here is the end result (sorry no movie):




And yes, before anyone complains: obviously I'm still using the star scape as a reflection map, which isn't the proper thing to do. I simply haven't had the time to create a reflection map out of our skybox. That might be a nice thing to automate; I'll give that some thought, as that in itself would be a nice topic for a tutorial.

One last thing: I did try to see how this came out in stereoscopic mode and I was pretty happy. I have no idea if it will hold up if we find a need to push the far plane further out.

Download the source here

What's next?


As I mentioned in my last write-up, I'll be working on a separate project for a bit and as a result enhancing our little engine as I go. Hopefully as a result I'll be able to write up some more sessions like this one as I add more things.

One that is nearly ready to go, and an important step forward, is to stop using our meshes directly and instead work through instances of those meshes, held within a 'tree'.
This will allow us to render the same object many times, each positioned at a different location.
So next up, we'll change our single tie-bomber to a fleet of tie-bombers :)




Wednesday 3 February 2016

Using a joystick (part 19)

Ok, it's time for another little sidetrack here:)

I bought myself a little USB 4 axis gamepad and decided to have a look at the cross platform joystick input functions that are part of GLFW.

First off, for those people using a Mac: Windows users have one up on us. Joystick support seems to be a lot better on Windows; you can use Xbox One controllers right off the bat, many bluetooth controllers work, etc.
On Mac it seems a bit more of a secret art. There are many 3rd party drivers you can install to make things work; interestingly enough, a lot of it just seems a bit hidden. I was glad to find out the gamepad I bought worked without a problem with GLFW even though Mac OS X itself seemed oblivious to it being connected.

The gamepad I got is a $20 wired Logitech F310. Important here is that it has two modes that may exist on other gamepads as well; there is a switch on the back labeled X <=> D.
X stands for XInput, the default, and seems to be directly related to DirectX support; needless to say it does not work on the Mac.
D stands for DirectInput; it needs to be selected before connecting the device to the Mac, and then GLFW will pick it up straight away.

Now when you look at my checked in source code you'll see that I've written a wrapper for the GLFW joystick code, but in this particular case that really is overkill. Again, my reasoning is that I want to be able to use a different framework for platforms that GLFW doesn't support.

Here I'll just discuss the GLFW calls themselves.
GLFW has defined a number of constants from GLFW_JOYSTICK_1 to GLFW_JOYSTICK_LAST to index each joystick. Basically the first joystick you connect to your system will be GLFW_JOYSTICK_1, the second will be GLFW_JOYSTICK_2, up until the number of joysticks you've got connected. Once numbered, a joystick will remain accessible through that constant, so even if I disconnect joystick #1, joystick #2 will remain joystick #2.

To find out if a joystick is connected you simply call:
if (glfwJoystickPresent(GLFW_JOYSTICK_1) == GL_TRUE) {
  // process our joystick info
}

Now an interesting difference with joysticks is that unlike the keyboard or mouse, you don't get any events when the user gives input. Instead you poll the joystick state. That may sound wasteful but it is not: a user may be holding a control stick firmly in a certain direction and you'll need to react to that, so it is far more likely that you'll need to react to the joystick every frame anyway.

This is something you would add for each joystick in your update function (engineUpdate in our example). Good practice is to query your input devices, do whatever changing to positions of object you need to do, etc. in an update function, then call your render function to render the result.

For this, also note that we send pSecondsPassed to our update function. This is a fairly precise timer and if you keep its value from the last frame you can obtain a delta of the time elapsed. It's often a good idea to use this delta in your adjustments so that on slower machines the user's input will seem just as responsive as on faster machines. I'm not doing this in our current example however.
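As a sketch of what such a delta based adjustment could look like (the names are illustrative, not from the tutorial's code):

```c
/* frame-rate independent input: scale a per-second speed by the time
   elapsed since the previous frame */
float rotationStep(float pDegreesPerSec, double pNow, double *pLastTime) {
  double delta = pNow - *pLastTime;  /* seconds since last frame */
  *pLastTime = pNow;                 /* remember for the next frame */
  return pDegreesPerSec * (float) delta;
}
```

Whether the machine runs at 30 or 300 frames per second, holding the stick for one second now rotates the object by the same total amount.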

On a quick side note, purely for informational purposes: you can call glfwGetJoystickName if you wish; it will return a device name for your joystick.

The important things on the joystick are the buttons and axes. Here is where things get a little tricky, because every device has a different layout. You may thus need to react differently depending on the number of axes and buttons that are available, and maybe offer the user the ability to map them to certain actions.
We can generally be certain that most joysticks have 2 axes and 2 buttons as the primary controls and that further controls are logically added.

GLFW allows you to get the current state of all axes with the function glfwGetJoystickAxes. It returns a pointer to a buffer of floats, each entry referring to an axis with its value between -1.0 and 1.0.
The state of our buttons can be queried by calling glfwGetJoystickButtons. This returns a pointer to a buffer of unsigned bytes, one entry per button with the value GLFW_PRESS (1) or GLFW_RELEASE (0).

For example you could do:
if (glfwJoystickPresent(GLFW_JOYSTICK_1) == GL_TRUE) {
  int axesCount, buttonCount;
  const float * axes = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &axesCount);
  const unsigned char * buttons = glfwGetJoystickButtons(GLFW_JOYSTICK_1, &buttonCount);
  
  // process our joystick info
  ...
}
It's no more difficult than that.
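One common refinement when processing those axis values, not part of the tutorial's code, is a dead zone: cheap sticks rarely report exactly 0.0 at rest, so small values are clamped to zero and the remainder rescaled to the full -1.0 to 1.0 range. A minimal sketch:

```c
/* clamp small axis values to zero and rescale the rest so the
   usable range still spans -1.0 to 1.0 */
float applyDeadzone(float pAxis, float pDeadzone) {
  if (pAxis > -pDeadzone && pAxis < pDeadzone) {
    return 0.0f;                                  /* stick is at rest */
  } else if (pAxis > 0.0f) {
    return (pAxis - pDeadzone) / (1.0f - pDeadzone);
  } else {
    return (pAxis + pDeadzone) / (1.0f - pDeadzone);
  }
}
```

A dead zone of around 0.05 to 0.15 is typical, but it's worth letting the user tune it per device.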

Our sample application now allows for both joystick and keyboard input to rotate our Tie-bomber.

Download the source here

What's next

So I've gotten a bit sidetracked on a new project that I hope to be able to tell more about in the near future so that's put building my platformer game on the backburner a bit.

Not to worry however, this side project is using GLFW and using my tutorial as a base and I'm planning to feed back lessons learned into this series.

More to come soon!