Friday, 27 March 2015

The world, in 2 dimensions (part 5)

Many of the tutorials I've seen come and go jump straight into 3D rendering at this point. I feel, however, there is still plenty to talk about before we get there. Heck, there are still plenty of really nice 2D games being built today, and there is a legion of games where a good mix of both 2D and 3D techniques is applied. Even when you're working on a nice 3D game or 3D application, 2D still has its place when it comes to the user interface.

In the following parts of the series we'll look at some handy 2D rendering techniques and learn a little bit more about OpenGL in general along the way.

But first we need to do a bit of housekeeping and a bit more preparation than I initially thought.

On the subject of Mac and Windows, I'll probably stop splitting these posts in two. We're at a point where things compile on both platforms, and while a little extra work is needed on Windows to add things to the makefiles manually, that's hardly a big deal. I'll try to make those changes as I go, but I may lag behind in actually testing things on Windows, so expect that you may need to correct the makefile. I'll generally be using the issue tracker on my GitHub page to clear up those sorts of issues, keeping this writeup relatively clean.

The other thing is that I'm finally going to strip most of the code from main.c that I feel doesn't belong there. A lot of it has been moved into engine.h and engine.c. The thing needed a name and I couldn't think of anything better, but the main principle we'll be following is that we've got a straightforward interface into whatever it is we're running.

At this point in time that interface consists of 4 methods:
- load, called at the start to load resources
- update, called in our loop and basically should implement updating data structures etc. as a result of either time passing or user interaction
- render, called in our loop to render our stuff
- unload, called to unload resources

The idea here is that you could have multiple sets of these "engines" or "parts" or whatever you wish to call them. You could have one for our start screen that shows game stats and a start game button, and then a second for the game itself, or maybe one for each level of your game. When the user presses start game you unload the start screen, load the game's first level and start calling the update/render functions for the game. Once the player dies or reaches the end of the game and it's game over, the game is unloaded and the start screen is reloaded. We may one day bring even more structure to this, we'll see. In C++ you would most likely create a base class to define this interface and implement each part of your program as a subclass.
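
In plain C that interface is just a handful of functions declared in engine.h; a minimal sketch of what it boils down to (the exact signatures here are my assumption, apart from engineUpdate which we'll meet in full later on):
// engine.h (sketch) - the straightforward interface into whatever we're running
void engineLoad();                        // load resources at the start
void engineUpdate(double pSecondsPassed); // update data structures as time passes or the user interacts
void engineRender();                      // render our stuff
void engineUnload();                      // unload resources again
If you wanted several interchangeable parts you could bundle these four into a struct of function pointers and simply swap which struct is active; for now plain functions will do.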

As you'll notice, I also left the viewport and clear logic in main.c. Think of an application where we call the render function of our game first, but then call a UI render function that renders the user interface on top of it. We can swap out the game part depending on the level we're running but reuse the UI part.
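
As a sketch of that idea (the loop below is simplified, uiRender is a hypothetical function, and the real signatures may differ):
while (!glfwWindowShouldClose(window)) {
  // main.c keeps ownership of the viewport and the clear...
  glViewport(0, 0, frameWidth, frameHeight);
  glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

  // ...and then hands over to whichever parts are currently loaded
  engineRender();   // the swappable game part
  uiRender();       // the reusable UI part, drawn on top

  glfwSwapBuffers(window);
  glfwPollEvents();
};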

The split between update and render logic is simply good structure, but it won't really come to shine until we start creating some real user interaction.

Of course, for now we'll simply be running examples, so there isn't any major user interaction; we'll leave that for the future.

Windows in GLFW

At this point I want to go briefly into the subject of windows and GLFW.

GLFW has the ability to open up multiple windows but that is something I won't go into here. For the applications I have for GLFW I've not needed this (yet) and I think it's just a distraction from what we're trying to achieve here.

GLFW also has the ability to open up our main window full screen. What it lacks at this point in time is an easy way to switch between windowed mode and full screen. You could close one window and open another to switch between modes, but as this creates a new OpenGL context it means reloading all your resources as well. I don't know if context sharing might be a solution here but I've not seen any sample code dealing with it. I do hope to add this to our little framework at some point.

For now I'm pretty happy just having it as a toggle and starting my application with a setting or switch. At this stage I'm managing it with an #ifdef block, but in due course we'll improve on this:
#ifdef GLFW_FULLSCREEN
  window = glfwCreateWindow(1024, 768, "GLFW Tutorial", glfwGetPrimaryMonitor(), NULL);
#else
  window = glfwCreateWindow(640, 480, "GLFW Tutorial", NULL, NULL);
#endif
Note that we always give a screen resolution as the first two parameters. In full screen mode GLFW will find the closest screen mode that matches this resolution. The third parameter is a window title, the fourth identifies the monitor on which we wish to run full screen, and the fifth and final parameter allows context sharing between windows, though I've yet to experiment with this.

We'll look into querying the OS for available resolutions later. Do note that I noticed some issues with full screen on my retina display: even though the documentation suggests querying the framebuffer size, it seems this isn't working correctly in full screen mode at this time (or I'm doing it wrong :)).

Window coordinate system and 2D projections

It's time to take a closer look at how our coordinate system works.

Let's start with glViewport. glViewport basically tells OpenGL what area of the window we're rendering to, in pixels. Generally speaking you would set this to the full size of your window.
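
As mentioned above, on a retina display the framebuffer size in pixels differs from the window size in screen coordinates, so (at least in windowed mode) the viewport is best set from the framebuffer size. A minimal sketch:
int width, height;
glfwGetFramebufferSize(window, &width, &height); // in pixels, can be twice the window size on retina
glViewport(0, 0, width, height);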

Inside OpenGL, however, the world is upside down and normalized: coordinates run from -1.0 to 1.0 and we have no knowledge of the underlying screen resolution.


As far as OpenGL is concerned, the world starts at the bottom left of your screen at (-1.0, -1.0) and ends at the top right at (1.0, 1.0). This always takes some getting used to, as historically developers are used to (0.0, 0.0) being at the top left of the screen; that is how DOS, Windows and Mac OS have always handled screen coordinates.

In 3D, however, having Y point 'up' makes sense: in a 3D world going up means going higher, so an increasing Y value should mean going up.

Another interesting thing about this approach is that our render area is effectively square, even though most monitors are in some sort of landscape format. If you were to draw a square with equal sized sides, say both width and height of 1.0, you'd actually end up with a rectangle, as everything is stretched to fit your widescreen monitor.

The orthographic projection that we used in our examples so far allows us to specify what coordinate system we want to use. We specify a left, right, bottom, top, near and far value, and from that a matrix is calculated that handles the transformation from our coordinate system to the one OpenGL requires. We'll ignore the near and far values until the next section, but here is how the left, right, bottom and top parameters work.
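
For reference, this is roughly the matrix such a call builds; a sketch following the classic glOrtho layout in column-major order (the real mat4Ortho may differ in details, but the idea is the same):
// sketch: build an orthographic projection mapping left..right, bottom..top and near..far onto -1..1
void mat4OrthoSketch(float *m, float left, float right, float bottom, float top, float zNear, float zFar) {
  for (int i = 0; i < 16; i++) {
    m[i] = 0.0f;
  };
  m[0]  =  2.0f / (right - left);           // scale X so left..right becomes -1..1
  m[5]  =  2.0f / (top - bottom);           // scale Y so bottom..top becomes -1..1
  m[10] = -2.0f / (zFar - zNear);           // scale (and flip) Z, more on near/far below
  m[12] = -(right + left) / (right - left); // and translate so the center ends up at (0, 0, 0)
  m[13] = -(top + bottom) / (top - bottom);
  m[14] = -(zFar + zNear) / (zFar - zNear);
  m[15] =  1.0f;
};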

In our example we saw that we left our top and bottom at -1.0 and 1.0 respectively, but we set our left and right based on the aspect ratio of our window, which at 640x480 is roughly 1.33.
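In code that call looked roughly like this (my reconstruction, assuming mat4Ortho takes left, right, bottom, top, near and far in the same order as the call a little further below):
ratio = (float) width / (float) height;                   // roughly 1.33 for 640x480
mat4Ortho(&mvp, -ratio, ratio, 1.0f, -1.0f, 1.0f, -1.0f); // left, right, bottom, top, near, far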
The nice thing is that whether we actually open our window at 640x480, 800x600, 1024x768 or any other resolution, our triangle will take up an equal portion of the screen. If we go widescreen, the aspect ratio will add space to the sides but our triangle will still have the same relative height.
Also, if we were to draw a square it would actually come out looking square, as an equal number of pixels is used for the height and the width of the square.

If the real resolution is important to what you are rendering, which it can be when you are dealing with UI, you can easily adjust your projection matrix to accommodate this:
mat4Ortho(&mvp, 0, ScreenWidth, ScreenHeight, 0, 1.0, -1.0);
For our 640x480 window this results in a 1:1 mapping to our screen coordinates:

For our examples going forward we'll be using a "virtual" resolution with (0.0, 0.0) at the center of the screen, the height of our screen being 1000 units and the width adjusted by the aspect ratio:
mat4Ortho(&mvp, -ratio * 500.0, ratio * 500.0, 500.0f, -500.0f, 1.0f, -1.0f);
We end up with a coordinate system like so for a 4:3 screen:
The main reason for the Y-is-down choice is rendering out text, but also, as I mentioned before, it's a more natural choice for 2D rendering. It does mean that to render our triangle we'll have to make it 500x larger as well.

The real resolution is still important to us: if we're on a small screen, say a small tablet, we might end up with unreadable text if we don't accommodate for this, and bitmaps may be scaled down so much they look ugly.
Equally, if we're on a high resolution monitor we may end up with an interface that looks pixelated when bitmaps are scaled up.
Often this is easily solved by having bitmaps in multiple resolutions and simply selecting the set of bitmaps that best suits the resolution you are rendering to.
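
A hypothetical helper to illustrate that selection (the folder names and thresholds here are made up, pick whatever suits your assets):
// pick an asset set based on how many pixels we actually have to play with
const char * assetFolderForHeight(int fbHeight) {
  if (fbHeight >= 1440) {
    return "bitmaps-high/";   // plenty of pixels, use the large bitmaps
  } else if (fbHeight >= 720) {
    return "bitmaps-medium/";
  } else {
    return "bitmaps-low/";    // small screen, use the small bitmaps
  };
};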

The depth buffer

While we haven't used it yet, and it's turned off by default, a depth buffer is pretty much a standard thing you get in any modern day rendering engine. We can already see in our example so far, even though all we're doing is 2D rendering, that we are specifying all our coordinates in 3D. Even our orthographic projection has near and far values that determine how the Z axis is mapped.

As long as the depth buffer is disabled, things are simply drawn in the order they are given. This can be good enough for many 2D applications, as that is the natural order in which you draw things anyway, but even in 2D applications using the Z buffer can be very handy.

With a Z buffer, the depth you specify for what you are drawing determines what overlaps what. If you draw two squares that overlap, the square that is "closest" to you will be drawn over the square that is furthest away, even if it is drawn first.
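
Turning this on is just a matter of enabling the depth test and remembering to clear the depth buffer along with the color buffer; a minimal sketch (GL_LESS is simply OpenGL's default depth function):
// once, during setup: enable depth testing so closer fragments win
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);

// and every frame: clear the depth buffer together with the color buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);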

Note that in our orthographic projection we defined near as being 1.0 and far as being -1.0 but it actually ends up with 1.0 being furthest away:
I'm not sure why the near/far planes for orthographic projections behave like this (presumably it's a side effect of OpenGL's convention of the camera looking down the negative Z axis), but a larger Z value being further back does make intuitive sense.

Note that when we start using 3D projections later on we'll find that our near plane does define the closer value, but there is a lot more going on in that case.

The one situation that you do need to be mindful of is once you start using transparency. We'll discuss this in more detail later on, but the order in which you draw does become important when you want to draw something semi-transparent, as you can't draw "underneath" something you've already drawn.

Rendering text

There is one omission in OpenGL that we should deal with first and that is rendering out text. Yes, there once were simple ways of rendering text with OpenGL but, you guessed it, we can't use them anymore now that we're using our programmable pipeline.

There are a number of libraries out there that deal with this. Some simple, some bloated, some completely beyond what we need.

We'll be using fontstash, which was written by Mikko Mononen; while it originally only had an implementation for OpenGL 1/2, I added OpenGL 3 support to it some time ago. Fontstash uses the truetype-to-bitmap capabilities of one of the STB libraries (stb_truetype) to generate a texture with which we can render out text. We'll get back to STB later, as we'll be using it to load textures as well, but it is worth mentioning that STB now also has an implementation called stb_easy_font that allows direct rendering of text through OpenGL. I haven't looked into it yet as it's very new, but it sounds like a promising alternative.

For now to demonstrate how to use fontstash we'll add a simple frames per second counter to our interface.

First we need a font to draw our text with. Fontstash comes with a font called DroidSerif which we're also using in our sample code. For now I'm still putting all these support files in the resources folder; on Mac they are nicely placed within our bundle, on Windows they're next to our exe. We'll deal with this another day.

While we include the header files through our engine.h header file, we again want the implementation to be compiled along with our main.c file, so our first change is way at the top of main.c:
#define FONTSTASH_IMPLEMENTATION
#define GLFONTSTASH_IMPLEMENTATION
In our engine.h we include the header files for both fontstash itself and our OpenGL 3 implementation:
#include "fontstash/fontstash.h"
#include "fontstash/gl3fontstash.h"
Note that the files themselves are stored in our 3rdparty subfolder.
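
The important detail with these single header libraries is that the IMPLEMENTATION defines have to appear before the header is included, and in exactly one source file, so the order at the top of main.c ends up roughly like this:
#define FONTSTASH_IMPLEMENTATION
#define GLFONTSTASH_IMPLEMENTATION
#include "engine.h" // which in turn pulls in fontstash.h and gl3fontstash.h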

The rest of our logic is all in our engine.c file. First, way at the top, we define a few globals. Again, I'm using globals to keep these examples simple; you probably want to put things in a more targeted scope in your own application.
// and some globals for our fonts
FONScontext * fs = NULL;
int font = FONS_INVALID;
float lineHeight = 0.0f;

// and some runtime variables.
double rotation = 0.0f;
double frames = 0.0f;
double fps = 0.0f;
double lastframes = 0.0f;
double lastsecs = 0.0f;
FONScontext is a structure that holds all the information fontstash needs to render out fonts. It is basically a struct that contains all our data, from our raw font data to our vertex array and vertex buffer in OpenGL.
font is simply a variable that holds an index to one of the fonts loaded into our context.
lineHeight is the height of a single line of text for the font we are using.

Besides these 3 variables that we use for our font rendering we've also got a few variables for calculating our FPS.

And a little further down we have added a load and unload function that loads, and unloads, our font:
void load_font() {
  // we start with creating a font context that tells us about the font we'll be rendering
  fs = gl3fonsCreate(512, 512, FONS_ZERO_TOPLEFT);
  if (fs != NULL) {
    // then we load our font
    font = fonsAddFont(fs, "sans", "DroidSerif-Regular.ttf");
    if (font != FONS_INVALID) {
      // setup our font
      fonsSetColor(fs, gl3fonsRGBA(255,255,255,255)); // white
      fonsSetSize(fs, 32.0f); // 32 point font
      fonsSetAlign(fs, FONS_ALIGN_LEFT | FONS_ALIGN_TOP); // left/top aligned
      fonsVertMetrics(fs, NULL, NULL, &lineHeight);
    } else {
      engineErrCallback(-201, "Couldn't load DroidSerif-Regular.ttf");       
    };
  } else {
    engineErrCallback(-200, "Couldn't create our font context");
  };
};

void unload_font() {
  if (fs != NULL) {
    gl3fonsDelete(fs);
    fs = NULL;
  };
};
These two functions are called from our engine load and unload functions.

Next is our engine update function in which we calculate our frames per second value:
void engineUpdate(double pSecondsPassed) {
  rotation = pSecondsPassed * 50.0f;

  frames += 1.0f;
  fps = (frames - lastframes) / (pSecondsPassed - lastsecs);

  if (frames - lastframes > 100.0f) {
    // reset every 100 frames
    lastsecs = pSecondsPassed;
    lastframes = frames;
  };
};
Calculating our fps is simply a matter of dividing the number of frames drawn by the number of seconds that have passed. We reset our counters every 100 frames, so our fps is calculated over the last few seconds that have passed.

It is important to know that GLFW limits the frame rate depending on your monitor's refresh rate. While our sample application drawing our triangle could theoretically render hundreds of frames a second (if not thousands), my MacBook caps out at 60 fps. Most of the time our application is thus twiddling its thumbs, doing absolutely nothing.
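
That cap typically comes from the buffer swap being synchronised with the vertical refresh of the monitor; GLFW lets you control this through glfwSwapInterval, so as a sketch (note that the driver may enforce its own limit regardless):
// after making our OpenGL context current:
glfwSwapInterval(1);   // wait for one vertical blank per swap, capping us at the refresh rate
// glfwSwapInterval(0); // or let it run flat out, handy when measuring raw performance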

Finally, at the end of our engine's render function, after we've drawn our triangle(s), we draw our FPS counter:
  // change our state a little
  glDisable(GL_DEPTH_TEST);
  glEnable(GL_BLEND);
  glBlendEquation(GL_FUNC_ADD);
  glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);  
  
  // now render our FPS
  if ((fs != NULL) && (font != FONS_INVALID)) {
    char info[256];
    
    // we can use our same projection matrix
    gl3fonsProjection(fs, (GLfloat *)projection.m); 

    // what text shall we draw?
    sprintf(info,"FPS: %0.1f", fps);
        
    // and draw some text
    fonsDrawText(fs, -ratio * 500.0f, 460.0f, info, NULL);
  };
Before we render out our text we disable our Z buffer, as we're just drawing our text on top of everything else, but enable blending.
This deserves a little more explanation.
To make text look a little prettier we do not draw the text in solid colors but apply some anti-aliasing to make it all look a bit smoother. As a result we need to blend the color of our text with the underlying colors we have already drawn on screen. This is done by using the alpha channel in our texture map that contains our pre-rendered characters for our font.

Doing so is relatively costly: we're not just writing colors to our screen buffer, we first have to read the color that is already there and blend the two. As a result this behavior is turned off by default; without blending, the color our shader outputs is simply written to the buffer as-is, regardless of its alpha value, overwriting whatever was there. Nice and fast.

For rendering our font we need to turn blending on. More specifically, we need to tell OpenGL we want standard alpha blending, meaning the alpha value of the color we are drawing directly determines how much of it is blended with the underlying color. There are other blending modes that allow for some really funky color effects, but that is a subject for another day.
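
For the curious, with the blend function and equation we set earlier, the color that ends up on screen is computed per pixel roughly as follows (pseudocode, where src is the color we are drawing and dst the color already in the buffer):
// GL_FUNC_ADD with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA):
result.rgb = src.rgb * src.a + dst.rgb * (1.0 - src.a);
result.a   = src.a   * src.a + dst.a   * (1.0 - src.a);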

Next, as the shader embedded in fontstash has no idea of our projection matrix, we need to give it a copy. We're reusing the projection matrix we already have here but obviously we could create one specifically for rendering text. It all depends on what we are doing.

Then we simply use sprintf to create a nicely formatted string and finally we use fonsDrawText to write out our text. This function takes our font context, a left and top coordinate, our text and an optional end-of-text pointer (if NULL the text is assumed to be zero terminated).

After this we should have a nice FPS counter at the bottom left of our screen.

What's next?

At this point we've got our original triangle spinning, but with a Z-buffer enabled and an FPS counter; still not all that spectacular. We've done more than I was expecting when I started writing this part, and I suspect the next part will be of equal length.

In the next part we're going to dive into one of the most commonly used techniques for 2D games: using a tiled map to draw the background of a game.





