Friday, 27 March 2015

The world, in 2 dimensions (part 5)

Many of the tutorials I've seen come and go jump straight into 3D rendering at this point. I feel, however, there is still plenty to talk about before we get there. Heck, there are still plenty of really nice 2D games being built today, and there is a legion of games where a good mix of both 2D and 3D techniques is applied. Even when you're working on a nice 3D game or 3D application, 2D still has a place when it comes to the user interface.

In the following parts of the series we'll look at some handy 2D rendering techniques and learn a little more about OpenGL in general along the way.

But first we need to do a bit of housekeeping and a bit more preparation than I initially thought.

On the subject of Mac and Windows, I'll probably stop splitting these posts in two. We're at a point where things compile on both platforms, and while a little extra work is needed on Windows in manually adding things to the makefiles, that's hardly a big deal. I'll try to make the changes as I go but I may lag behind actually testing things on Windows, so expect that you may need to correct the makefile. I'll generally be using the issue tracker on my GitHub page to clear up those sorts of issues, keeping this writeup relatively clean.

The other is that I'm finally going to strip most of the code from main.c that I feel doesn't belong there. A lot of the code has been moved into engine.h and engine.c. The thing needed a name and I couldn't think of anything better, but the main principle we'll be following is that we've got a straightforward interface into whatever it is we're running.

At this point in time that interface consists of 4 methods:
- load, called at the start to load resources
- update, called in our loop and basically should implement updating data structures etc. as a result of either time passing or user interaction
- render, called in our loop to render our stuff
- unload, called to unload resources

The idea here is that you could have multiple sets of these "engines" or "parts" or whatever you wish to call them. You could have one for our start screen that shows game stats and a start game button, and then a second for the game itself or maybe for each level of your game. When the user presses start game you unload the start screen, load the game's first level and start calling the update/render functions for the game. Once the player dies or reaches the end of the game and it's game over, the game is unloaded and the start screen is reloaded. We may one day bring even more structure to this. We will see. In C++ you would most likely create a base class to define this interface and implement each part of your program as a subclass.

As you'll notice I also left the viewport and clear logic in main.c. Think of an application where we call the render function of our game first but then after that we call a UI render function that renders the UI on top of that. We can swap out the game part depending on the level we're running but reuse the UI part.

The split between update and render logic is simply good structure but won't really come to shine until we start to create some real user interaction.

Of course for now we'll simply be running examples, so there isn't any major user interaction and we'll leave that for the future.

Windows in GLFW

At this point I want to go briefly into the subject of windows and GLFW.

GLFW has the ability to open up multiple windows but that is something I won't go into here. For the applications I have for GLFW I've not needed this (yet) and I think it's just a distraction to what we're trying to achieve here.

GLFW also has the ability to open our main window full screen. What it lacks at this point in time is an easy way to switch between windowed mode and full screen. You could close one window and open another to switch between modes, but as this creates a new OpenGL context it means reloading all your resources as well. I don't know if context sharing might be a solution here but I've not seen any sample code dealing with this. I do hope to add this to our little framework at some point.

For now I'm pretty happy just having it as a toggle and just starting my application with a setting or switch. At this stage I'm managing it with an #ifdef block but in due course we'll improve on this:
#ifdef GLFW_FULLSCREEN
  window = glfwCreateWindow(1024, 768, "GLFW Tutorial", glfwGetPrimaryMonitor(), NULL);
#else
  window = glfwCreateWindow(640, 480, "GLFW Tutorial", NULL, NULL);
#endif
Note that we always give a screen resolution in the first two parameters. In full screen mode GLFW will find the closest screen mode that matches this resolution. The third parameter is the window title, the fourth identifies the monitor on which we wish to run full screen. The fifth and final parameter allows context sharing between windows, but I've yet to experiment with this.

We'll look into querying the OS for available resolutions later. Do note that I noticed some issues with full screen on my retina display; even though the documentation suggests querying the framebuffer size, it seems this isn't working correctly in full screen mode at this time (or I'm doing it wrong :)).

Window coordinate system and 2D projections

It's time to take a closer look at how our coordinate system works.

Let's start with glViewport. glViewport basically tells OpenGL what area of the window we're rendering to, using screen coordinates. Generally speaking you would set this to the full size of your window.

Inside OpenGL however the world is upside down and normalized, and we have no knowledge of the underlying screen resolution.


As far as OpenGL is concerned the world starts in the bottom left of your screen at (-1.0, -1.0) and ends at the top right at (1.0, 1.0). This always takes some getting used to, as historically developers are used to (0.0, 0.0) being at the top left of the screen. That is how DOS, Windows and Mac OS have always handled screen coordinates.

However in 3D having Y pointing 'up' makes sense: in a 3D world going up means going higher, so an increasing Y value should equal going up.

Another interesting thing about this approach is that our render area seems kinda square even though most monitors are in some sort of landscape format. If you were to draw a square with equal sized sides, say both width and height are 1.0, you'd actually end up with a rectangle as it is all stretched to fit your widescreen monitor.

The orthographic projection that we used in our examples so far allows us to specify what coordinate system we want to use. We specify a left, right, bottom, top, near and far value, and from that a matrix is calculated that handles the transformation from our coordinate system to the one OpenGL requires. We'll ignore the near and far values until the next section, but here is how the left, right, bottom and top parameters work.

In our example we saw that we left our bottom and top at -1.0 and 1.0 respectively, but we set our left and right based on the aspect ratio of our window, which at 640x480 is roughly 1.33:
The nice thing is that whether we actually open our window at 640x480, 800x600, 1024x768 or any other resolution, our triangle takes up an equal portion of the screen. If we go widescreen, the aspect ratio will add space to the sides but our triangle will still have the same relative height.
Also if we were to draw a square, it would actually come out looking square as an equal number of pixels are used for the height and the width of the square.

If the real resolution is important to what you are rendering, which it can be when you are dealing with UI, you can easily adjust your projection matrix to accommodate this:
mat4Ortho(&mvp, 0, ScreenWidth, ScreenHeight, 0, 1.0f, -1.0f);
For our 640x480 window this results in a 1:1 mapping to our screen coordinates:

For our examples going further we'll be using a "virtual" resolution with (0.0, 0.0) at the center of the screen, the height of our screen being 1000 units and the width adjusted by aspect ratio:
mat4Ortho(&mvp, -ratio * 500.0f, ratio * 500.0f, 500.0f, -500.0f, 1.0f, -1.0f);
We end up with a coordinate system like so for a 4:3 screen:
The main reason for the Y is down choice is for rendering out text but also, as I mentioned before, it's a more natural choice for 2D rendering. It does mean that to render our triangle, we'll have to make it 500x larger as well.

The real resolution is still important to us: if we're on a small screen, say a small tablet, we might end up with unreadable text if we don't accommodate this. Also, bitmaps may be scaled down too much and look ugly.
Equally so, if we're on a high resolution monitor we may end up with an interface that looks pixelated when bitmaps are scaled up.
Often this is easily solved by having bitmaps in multiple resolutions and simply selecting the set of bitmaps that best suit the resolution you are rendering to.

The depth buffer

While we haven't used it yet, and it's turned off by default, a depth buffer is pretty much a standard thing you get in any modern day rendering engine. We can already see in our examples so far, even though all we're doing is 2D rendering, that we are specifying all our coordinates in 3D. Even our orthographic projection has a near and far value that determine how the Z axis is mapped.

As long as our depth buffer is disabled, things are simply drawn in the order they are given. This can be good enough in many 2D applications, as that is the natural order in which you draw things anyway, but even in 2D applications using the Z buffer can be very handy.

With a Z buffer the depth you specify for what you are drawing will determine what will overlap. If you draw two squares that overlap, the square that is "closest" to you will be drawn over the square that is furthest away even if it is drawn first.

Note that in our orthographic projection we defined near as being 1.0 and far as being -1.0 but it actually ends up with 1.0 being furthest away:
I'm not sure why the near/far planes for orthographic projections behave like this, obviously a larger Z value being further back does make sense.

Note that when we start using 3D projections later on we'll find out our near plane defines our closer value but there is lots more going on in this case.

The one situation that you do need to be mindful of is once you start using transparency. We'll discuss this in more detail later on, but the order in which you draw does become important when you want to draw something semi-transparent, as you can't draw "underneath" something you've already drawn.

Rendering text

There is one omission in OpenGL that we should deal with first and that is rendering out text. Yes, there were some simple text rendering functions in OpenGL once but, you guessed it, we can't use them anymore now that we're using the programmable pipeline.

There are a number of libraries out there that deal with this. Some simple, some bloated, some completely beyond what we need.

We'll be using fontstash, which was written by Mikko Mononen, and while it originally only had an implementation for OpenGL 1/2 I added OpenGL 3 support to it some time ago. Fontstash uses the TrueType-to-bitmap capabilities of one of the STB libraries to generate a texture with which we can render out text. We'll get back to STB later as we'll be using it to load textures as well, but it is worth mentioning that STB now also has an implementation called stb_easy_font that allows direct rendering of text through OpenGL. I haven't looked into that yet as it's very new, but it sounds like a promising alternative.

For now to demonstrate how to use fontstash we'll add a simple frames per second counter to our interface.

First we need a font to actually draw our text with. Fontstash comes with a font called DroidSerif which we're also using in our sample code. For now I'm still putting all these support files in the resources folder; on Mac they are nicely placed within our bundle, on Windows they're next to our exe. We'll deal with this another day.

While we include our header files through our engine.h header file we again want our implementation to be compiled along with our main.c file so our first change is way at the top of main.c:
#define FONTSTASH_IMPLEMENTATION
#define GLFONTSTASH_IMPLEMENTATION
In our engine.h we're including the header file for both fontstash itself and our OpenGL 3 implementation:
#include "fontstash/fontstash.h"
#include "fontstash/gl3fontstash.h"
Note that the files themselves are stored in our 3rdparty subfolder.

The rest of our logic is all in our engine.c file. First, way at the top, we define a few globals. Again I'm using globals to keep these examples simple; you probably want to put things in a more targeted scope in your application.
// and some globals for our fonts
FONScontext * fs = NULL;
int font = FONS_INVALID;
float lineHeight = 0.0f;

// and some runtime variables.
double rotation = 0.0f;
double frames = 0.0f;
double fps = 0.0f;
double lastframes = 0.0f;
double lastsecs = 0.0f;
FONScontext is a structure that holds all the information fontstash needs to render out fonts. This is basically a struct that contains all our data from our raw font data to our vertex array and vertex buffer in OpenGL.
font is simply a variable that holds an index to one of the fonts loaded into our context.
lineHeight is the height of a single line of text for the font we are using.

Besides these 3 variables that we use for our font rendering we've also got a few variables for calculating our FPS.

And a little further down we have added a load and unload function that loads, and unloads, our font:
void load_font() {
  // we start with creating a font context that tells us about the font we'll be rendering
  fs = gl3fonsCreate(512, 512, FONS_ZERO_TOPLEFT);
  if (fs != NULL) {
    // then we load our font
    font = fonsAddFont(fs, "sans", "DroidSerif-Regular.ttf");
    if (font != FONS_INVALID) {
      // setup our font
      fonsSetColor(fs, gl3fonsRGBA(255,255,255,255)); // white
      fonsSetSize(fs, 32.0f); // 32 point font
      fonsSetAlign(fs, FONS_ALIGN_LEFT | FONS_ALIGN_TOP); // left/top aligned
      fonsVertMetrics(fs, NULL, NULL, &lineHeight);
    } else {
      engineErrCallback(-201, "Couldn't load DroidSerif-Regular.ttf");       
    };
  } else {
    engineErrCallback(-200, "Couldn't create our font context");
  };
};

void unload_font() {
  if (fs != NULL) {
    gl3fonsDelete(fs);
    fs = NULL;
  };
};
These two functions are called from our engine load and unload functions.

Next is our engine update function in which we calculate our frames per second value:
void engineUpdate(double pSecondsPassed) {
  rotation = pSecondsPassed * 50.0f;

  frames += 1.0f;
  fps = (frames - lastframes) / (pSecondsPassed - lastsecs);

  if (frames - lastframes > 100.0f) {
    // reset every 100 frames
    lastsecs = pSecondsPassed;
    lastframes = frames;
  };
};
Calculating our fps is simply a matter of dividing the number of frames drawn by the number of seconds that have passed. We reset our counter every 100 frames so our fps is calculated over the last few seconds.

It is important to know that GLFW limits the frame rate to your monitor's refresh rate (through vsync). While our sample application drawing our triangle could theoretically render hundreds of frames a second (if not thousands), my MacBook caps out at 60 fps. Most of the time our application is thus twiddling its thumbs doing absolutely nothing.

Finally at the end of our engines render function, after we've drawn our triangle(s), we draw our FPS counter:
  // change our state a little
  glDisable(GL_DEPTH_TEST);
  glEnable(GL_BLEND);
  glBlendEquation(GL_FUNC_ADD);
  glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);  
  
  // now render our FPS
  if ((fs != NULL) && (font != FONS_INVALID)) {
    char info[256];
    
    // we can use our same projection matrix
    gl3fonsProjection(fs, (GLfloat *)projection.m); 

    // what text shall we draw?
    sprintf(info,"FPS: %0.1f", fps);
        
    // and draw some text
    fonsDrawText(fs, -ratio * 500.0f, 460.0f, info, NULL);
  };
Before we render out our text we disable our Z buffer, as we're just drawing our text on top of everything else, but enable blending.
This deserves a little more explanation.
To make text look a little prettier we do not draw the text in solid colors but apply some anti-aliasing to make it all look a bit smoother. As a result we need to blend the color of our text with the underlying colors we have already drawn on screen. This is done by using the alpha channel in our texture map that contains our pre-rendered characters for our font.

Doing so is relatively costly as we're not just writing colors to our screen buffer; we're first reading the current color and applying a blend. As a result this behavior is turned off by default: without blending the alpha value is simply ignored and pixels are written as-is, overwriting whatever was there. Nice and fast.

For rendering our font we need to turn blending on. More specifically we need to tell OpenGL we want to use standard alpha color blending meaning our alpha value in our color is directly used to blend the underlying color with the color we are drawing. There are other blending modes that allow us to do some really funky color effects but that is a subject for another day.

Next, as the shader embedded in fontstash has no idea of our projection matrix, we need to give it a copy. We're reusing the projection matrix we already have here but obviously we could create one specifically for rendering text. It all depends on what we are doing.

Then we simply use sprintf to create a nicely formatted string and finally we use fonsDrawText to write out our text. This function takes our font context, a left and top coordinate, our text, and an optional end-of-text pointer (if NULL the text is assumed to be zero terminated).

After this we should have a nice FPS counter at the bottom left of our screen.

What's next?

At this point in time we've got our original triangle spinning, but with a Z-buffer enabled and an FPS counter; still not all that spectacular. We've done more than I was expecting when I started writing this part, and I suspect the next step will be of equal length.

Next part we're going to dive into one of the most commonly used techniques for 2D games: using a tiled map to draw the background of a game.

Saturday, 14 March 2015

Making the move to OpenGL 3+ (Part 4)

So I tried to get my little example to run on Windows and it threw up a few interesting things.

First off, I had forgotten to mention that the shader files are in the resource folder. On Mac OS X I copy the contents into the resource folder of the application bundle; GLFW already ensures that is the working directory after initialization.
On Windows the files get copied into the same folder as the exe. This is only a temporary solution; eventually we'll look into packing the files for distribution.

I also had to make the move to GLEW. Without it there is no support for any of the OpenGL 3 commands. Microsoft isn't very interested in including support in their compilers because they'd rather have you work with DirectX. Understandable, as DirectX generally is a generation ahead of OpenGL as far as the latest hardware is concerned, though things are balancing out with OpenGL ES becoming as popular as it is in the mobile market.

I'll be changing the make files soon to also use GLEW on Mac OS X. On Windows we are lucky that fully compiled binaries are available so just download those and go. Again I've included just the base files into my GitHub repository but I highly recommend you download the latest files from the GLEW website. I also had issues getting the static libraries to link into the project so it is using the DLL for now.

GLEW is an extension manager. Basically it takes care of all the platform and hardware dependent issues so you don't have to worry about which specific extension function to call. You can also query the hardware to find out if it supports what you want to use. I'm currently not doing so, and have already found out the application will simply fall flat on its face if things aren't supported.

I'm using this really nice Alienware laptop with a state of the art nVidia graphics card, but as it eats power like there is no tomorrow there is also a low power Intel GPU embedded that is used for less spectacular stuff. Generally the Intel chip will be used unless you start a game that requires power. For some reason that I have yet to look into, it decided my little test application would work just fine with the Intel chipset, notwithstanding that OpenGL 3+ support on Intel hardware is non-existent. I had to go into the nVidia control panel to tell it to always use the nVidia GPU.

So there is a warning for you Windows people. Yes, OpenGL is supported on Windows, but poorly so. Especially all you poor people with non-gamer laptops: most have the Intel chipset in one form or another and you'll most likely get stuck. So much for cross-platform support.

Embedding GLEW is pretty straightforward and requires two changes. First, include the GLEW header before you include GLFW:
// include GLEW
#include <GL/glew.h>

// include these defines to let GLFW know we need OpenGL 3 support 
#define GLFW_INCLUDE_GL_3
#include <GLFW/glfw3.h>
And then right after creating your GLFW window you need to initialize GLEW:
  // create our window
  window = glfwCreateWindow(640, 480, "GLFW Tutorial", NULL, NULL);
  if (window) {
    GLenum err;
 
    // make our context current
    glfwMakeContextCurrent(window);
 
    // init GLEW
    glewExperimental=true;
    err = glewInit();
    if (err != GLEW_OK) {
      error_callback(err, glewGetErrorString(err)); 
      exit(EXIT_FAILURE); 
    };
Note that we set a global called glewExperimental to true; without this OpenGL 3 won't be properly supported. I guess they are still working things out too :)

I also found out that the compiler that came with VC 2010 is slightly stricter when it comes to C than the Mac one. It does not allow inline variable declarations; they must be placed at the start of code blocks.

Finally we had to enhance our makefile to copy our shader files and our GLEW32.DLL into place. Pretty straightforward.

GLEW on the MAC

Originally I wasn't planning on adding this but as I just updated the source code for Mac OS X I was reminded of the default installation of GLEW not compiling on my machine.

Download the source code for GLEW from: http://glew.sourceforge.net/index.html
Then unzip the contents somewhere to your liking. I've downloaded and tested this with GLEW 1.12.0 which is currently the latest version.

Unfortunately there are a few faults in the makefiles. First off, for Mac OS X there are three: Makefile.darwin (used by default), Makefile.darwin-ppc and Makefile.darwin-x86_64. You can find these in the config folder.

They build GLEW for the different platforms but then go and use AR to build the static libraries, which will fail if we want to support universal binaries. So we need to make a few enhancements to the Makefile.darwin file.

We're going to add at the end of the file:
AR = LIBTOOL
STRIP =
CFLAGS.EXTRA += -arch i386 -arch x86_64
LDFLAGS.EXTRA += -arch i386 -arch x86_64
LDFLAGS.STATIC += -static -o

Also note that although LDFLAGS.STATIC is defined, it isn't used in the central makefile, so we also need to edit the makefile in the root. Search for any occurrence of "$(AR) cr" (there should be two) and change it to: $(AR) $(LDFLAGS.STATIC)

Now open up terminal, cd into the root of your GLEW source and type make.

Finally we have to add our new library to our folder structure and add it to our Mac OS X makefile. I've placed these all in the GitHub repository.

What's next?


I'll be playing around for a little while so the next blog post may take a week or two but I'm planning on diving into more 2D samples first. Most of these tutorials skip ahead right away to rotating 3D cubes and I feel an entire body of knowledge gets skipped. Learning some of the basics using 2D rendering techniques is definitely not wasted time and it postpones some of the really difficult stuff until we've covered some more of the basics. 

Thursday, 12 March 2015

Making the move to OpenGL 3+ (part 3)

Ok, this is going to be a loooong one...

One of the things that was so appealing in OpenGL 1/2 was that the setup was so easy. The hard work was in the platform specific stuff, which was handled with GLUT.
GLUT was nice to get you started and to learn with, but did fall short when you wanted to do more serious stuff. GLUT is also retired, which was one of the reasons GLFW and other such frameworks were brought into existence.

Once you make the jump to OpenGL 3+ or its little brother OpenGL ES 2, suddenly you find that there is a lot less out of the box stuff you can use. Not only has the fixed pipeline been replaced by a programmable pipeline (in OpenGL 2 they still lived side by side), there are many more "must dos" that were optional before, and suddenly you have to do a lot more of the math yourself.

This is one of the reasons I haven't attempted to explain any of the code presented in the "Getting started" part of this series. We're about to abandon half of it. I will explain most of the code I'll be presenting in this and following parts.  I'll mostly rely on comments I've added to any of the supporting libraries (where I've written them) but will go in depth into the code you'll find yourself writing or dealing with.

Before we get started I'm going to impose a few rules (on myself):
  • I've already mentioned this but we're sticking with C. While I love C++ and am more confident with it, sticking with C leaves out some complexities that get in the way of explaining things. It also makes it easier to make the jump to other platforms.
  • Our main.c file should contain only that which is needed to interact with the user and setup our environment so basically the code that relates directly to GLFW. The main reason for this is to make it easier to port the code to platforms not supported by GLFW or for you to use the stuff here with the framework of your choice but it also just makes sense. Note that I'll break that rule in the beginning as we are still building everything up but further on in this series we'll move more and more out of there.
  • Reusable support libraries we will be creating will follow a "single file" approach instead of having a separate .h and .c file. I encountered this first when I started using Sean T. Barrett's excellent STB libraries (which we will end up using here as well) and I found it a great way to write code that is easily distributable. Basically it means that you include the file as normal where you have dependencies on the code but in one place (for us always in main.c) you precede the include with a define that tells the library to include the implementation.
I'll start off again testing and compiling everything on my Mac and once we've got all running I'll spend some time adding the bits and pieces so it works on Windows too.

Math library

The first bit we need to implement is vectors and matrices. There are plenty of math libraries around (GLM is a really nice one written in C++ that mimics what you write in GLSL shaders). I'm sure there are some really good ones written in C but I have to admit, I haven't looked and only know some of the C++ ones. Over the years I've built a collection of applicable functions, many dating from a time when such libraries weren't available, and mostly out of habit I've stuck with them.

Most vector logic is basic trigonometry anyway; it gets a bit more complex when we talk about matrices. I'm not going to discuss either in detail here. Those are topics in themselves, and while most interesting, only a basic understanding is needed to use them and I hope we'll learn by example here.

Ergo, I present you with math3d.h which is a self contained support library with various mathematical functions useful for what we are trying to do here. The name is a little misleading because it is just as applicable for 2D applications as 3D.

I do fully have to admit that some of the more complex functions are simply copied from the OpenGL documentation or other sources related on the subject.

As I'm writing this the library contains basic vector logic for 2D, 3D and 4D vectors and support for 3x3 and 4x4 matrices, with only a few of our most basic matrix functions (I've had to rewrite most of my C++ code back into C and as I've not tested it all end to end yet, we'll be fixing bugs for a while).
I'll be adding more functionality over time and we may even get to quaternions at some point. It definitely is missing some key functions that we will need at some point in this series (like inverting a matrix).

Projection, view and model matrices

Before we continue there are 3 matrices that require a bit of a closer look as they determine how something will be drawn on screen.
I'll leave out for a moment how we actually calculate these matrices and purely look at the theory behind them.

A matrix is a mathematical tool that allows us to apply a transformation to a vector. This transformation can move that vector, rotate that vector around a center, scale that vector etc. etc. etc.
What makes matrices super cool is that you can multiply two of them to create a new matrix that combines their two transformations.

When we look at an object that we wish to draw, that object usually starts off as a series of vertices and faces (triangles) that make up that object.
Let's say that we have a simple 3D box. That box consists of 8 vertices and 12 faces (2 triangles per side, 6 sides in total). This object is usually stored in a way where its center is at (0.0, 0.0, 0.0). For a square box that is 10.0 units wide, high and deep that means our 8 vertices are at: (-5.0, -5.0, -5.0), (5.0, -5.0, -5.0), (5.0, 5.0, -5.0), (-5.0, 5.0, -5.0), (-5.0, -5.0, 5.0), (5.0, -5.0, 5.0), (5.0, 5.0, 5.0) and (-5.0, 5.0, 5.0).

We call this our model space. The first thing we need to do is move and rotate our box to where it actually is positioned within our virtual 3D world.
The matrix that performs the transformation is called our model matrix.
Note that this matrix often is formed by combining a number of individual transformations that determine how our box needs to be positioned.

Once our model is positioned at its place in the 3D world, we say that our coordinates are now in, you guessed it, world space.

Unfortunately our "observer", our virtual "camera" so to say, isn't stuck at the center of the 3D world. The "observer" moves around the 3D world and as such, our view into the 3D world changes. We thus need to move and rotate our box in relation to our "observer" to place it in front of our "camera".

The matrix that performs this transformation is called our view matrix and after applying it to our model we're now in view space.

Very often, actually nearly always, the model matrix and the view matrix are combined to form a model-view matrix that does the transformation from model space into view space in one go. This simply halves the time required to transform all our models into view space, and as we can still do all our lighting calculations properly that works out pretty well. You can actually see this in our OpenGL 1 code:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); // our "view matrix" stays at our center looking forward
glRotatef((float) glfwGetTime() * 50.f, 0.f, 0.f, 1.f); // our model matrix rotates our triangle as time passes

The last step we need to do is to take our 3D view space coordinates and decide how they translate to screen coordinates.
We do this by applying a projection matrix.

In our OpenGL 1 sample code we apply an orthographic projection:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);

This projection is simply a 2D projection that scales our x and y coordinates based on the aspect ratio of our screen resolution. The z coordinate doesn't influence how we draw anything other than how things overlap.

We'll look into calculating other projection matrices later on in the series.

There are many situations, especially when dealing with lighting, where you want to work in view space first and only perform the projection last; this is why in OpenGL 1 we set the model-view matrix and the projection matrix separately.

More often, however, we set both a model-view matrix and a model-view-projection matrix, where this last matrix combines all three matrices into one final matrix that takes us from model space straight to screen space. We'll be doing that here as well.

Shader library

With the fixed pipeline gone you have to implement your own shader code. A very basic shader isn't all that hard to implement but there are a few steps involved in actually getting the shader loaded into your application.

Our main shader code will reside in shader.h, another self contained support library. Again I won't go into too much detail about how it works and will initially rely on the comments I've placed in this library. I've also only included basic code for now; we'll expand on it as time goes by. I will go into more detail about the shaders themselves.

Do note that my shader library does not contain any code to load the shader text from disk. This code resides in main.c for the time being. This is simply because I believe they don't go hand in hand. If you wanted to, and I've indeed done this in some of my projects, you could keep your shader code inline in your C file. On the other end of the spectrum you may use a text file loader library that allows you to add preprocessor directives to your shader code, or you may generate your shader code based on some sort of higher level design document.

A shader program is (currently) built up from 5 programmable stages:
- the vertex shader
- the tessellation control shader
- the tessellation evaluation shader
- the geometry shader
- the fragment shader
Only the vertex shader and fragment shader are mandatory; the other 3 were added later and are optional. We'll ignore the existence of these 3 newer shader stages for now and implement a simple vertex shader and fragment shader that will allow us to reproduce what we did in OpenGL 1.

The vertex shader
When we look at the triangle that we are rendering, it consists of 3 vertices. The vertex shader is responsible for transforming those vertices so they are projected in the right spot, and for setting up any additional information. Our vertex shader is called once for each of our 3 vertices.
At the bare minimum we have:
- our model-view-projection matrix as a variable to apply to our vertices
- an input for the vertex that we are handling
- an output for our projected vertex in screen space

For our example code we need two more things:
- an input for the color of the vertex that we are handling
- an output for this color so we draw our triangle in the right color

Let's look at the shader code bit by bit and explain what is happening:
#version 330
This simply tells OpenGL what capabilities we need for our shader and is directly related to the version of OpenGL we are targeting.

uniform mat4 mvp;

Here we define a uniform of type mat4 (a 4x4 matrix) called mvp, which is our model-view-projection matrix. A uniform is simply a variable that can be set from outside our shader but is basically a constant inside of it. We'll end up setting it in our code later on.

layout (location=0) in vec3 vertices;
layout (location=1) in vec3 colors;
Here we define two vec3 (3D vector) variables called vertices and colors. The "in" keyword in front specifies that they are inputs.
The bit in front of that is our bit of magic: "layout (location=n)".
We'll look at this a bit closer once we start putting things together but in essence this allows us to bind our array of vertices and our array of colors that make up our triangle to our shader.

out vec4 color;

Here we define a single vec4 (4D vector) variable called color. This is our color output that is sent to our fragment shader. As I mentioned up above we also need an output for our projected vertex, however this is one of the few bonuses we get: it is called gl_Position. In OpenGL 2, when programmable shaders were first added, we had loads of these built-in variables but only a few survive in OpenGL 3.

void main() {
  gl_Position = mvp * vec4(vertices, 1.0);
  color = vec4(colors, 1.0);
}
And finally our shader itself. It looks a bit like a C program, doesn't it? Every shader stage has its own main function which is its entry point. Just like in C we can define additional functions, but it is the main function that OpenGL looks for.
The first line of code takes our current vertex, turns it into a 4D vector and then multiplies this with our model-view-projection matrix, storing the end result in gl_Position.
Our second line of code simply copies our input color into our output color.

The fragment shader
Now that we know the state of affairs at each corner of our triangle we need to draw the triangle itself. OpenGL nicely interpolates our two outputs from our vertex shader to draw out our triangle. For each pixel that needs to be drawn to screen our fragment shader is called.
Obviously the inputs of our fragment shader must match the outputs of our vertex shader but the fragment shader itself has only one output: the color our pixel needs to be drawn with.

Our fragment shader is therefore as simple as can be:
#version 330

in vec4 color;
out vec4 fragcolor;

void main() {
  fragcolor = color;
}
The first line is the same as our vertex shader and identifies what OpenGL capabilities we require for this shader.
The 3rd line contains our input called color which matches the output of our vertex shader.
The 4th line defines our output called fragcolor; this used to be a built-in variable but we now need to define it ourselves. As our fragment shader is only allowed to have one output, OpenGL knows what we're intending here (this change may not seem to make much sense, but there is a good reason for it that won't become apparent until you start using frame buffers, which is a topic for another day).
Finally we have our main function and in it a single line of code which copies our color input to our output.

This is about as simple as it gets and obviously it gets more complex from here.

Buffer objects and vertex arrays

The last piece of the puzzle we need before we can build are our buffer objects and arrays.

When we look at our original OpenGL 1 code we see that we use glBegin, glColor3f, glVertex3f and glEnd to send all the data related to our triangle to OpenGL. We do this every frame, and we waste a lot of time doing so. It's not noticeable with a single triangle, but once you have tens of thousands of triangles to draw it really starts to show.

In OpenGL 1 this was realized and initially solved with the functions glVertexPointer, glColorPointer and glDrawElements (and a couple more, but these 3 suffice to draw our triangle).
These allowed us to copy arrays of vertex data and color data into GPU memory and then draw all the elements making up whatever it is we're drawing. But it still meant copying this data into GPU memory every frame.

Vertex Buffer Object
Eventually Vertex Buffer Objects (or VBOs for short) were added to OpenGL. These allowed us to copy vertex data into GPU memory once and then use it repeatedly, saving us a lot of overhead. The solution was a bit of a hodgepodge however, as the old functions remained in use but you now needed to bind the VBO containing the data and then call those functions with an offset instead of a pointer.

VBOs however weren't just for vertex data but could be used for much more, such as index data to draw many faces, color data, texture coordinate data, normal vector data, etc. (don't worry, we'll explain what these are in due time).

This was already a giant leap forward, however this still depended on a very fixed architecture. For instance, we could only use two sets of texture coordinates. What if we needed 3? Or what if I had data I needed in my shader that didn't have a construct in OpenGL?

I'm not going into too much detail here yet but eventually we became pretty free as to what data we loaded into a VBO and we could define how OpenGL should dissect this data by defining attributes using the glVertexAttribPointer function.

At this point we saved ourselves lots of overhead in copying data into GPU memory each frame but the tradeoff was an immense amount of setup each time you needed to draw an object.

Vertex Array Objects
To solve this, Vertex Array Objects (or VAOs for short) were introduced. A VAO remembers a certain amount of state and once you activate it, that state is restored.
The first time you make a new VAO active it is blank; you then set up the required state once: binding the right VBO, setting up your attributes, etc.

Then before you need to draw your object you simply make the correct VAO active and call our age old glDrawElements function.

When they were initially introduced (in OpenGL 2, I believe) they were optional.
In OpenGL 3 they are mandatory, and they are a constant source of blank screens as it is very easy to forget to bind one, or to realize too late that you haven't set the correct state in relation to your VAO.

Putting it all together

Now that we have all our basic ingredients it is time to take a closer look at our new main.c and this time I'm going to explain most of it line by line. I am removing most of the comments from the source code and a lot of the error checking so be sure to look at the original on GitHub.

Do note that in explaining I'm trying to stick to the basics and not go too far in depth. Eventually we'll get there as we'll build more and more complex examples in future posts.

// include these defines to let GLFW know we need OpenGL 3 support
#define GLFW_INCLUDE_GL_3
#define GLFW_INCLUDE_GLCOREARB
#include <GLFW/glfw3.h>
Here we include our GLFW library. Note the addition of two defines which enable OpenGL 3 and COREARB support.
You must have the first. The second is optional in the sense that if you do not include it, you will need an extension manager such as GLEW.
We may do this in a future post as I'm not sure how much is supported by default in GLFW as I've previously used GLEW in all my projects.
// include some standard libraries
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <syslog.h>
These are just some standard C libraries that are being included. Note that we also include syslog for error logging.
I think this is also supported on Linux but I haven't done enough testing on Windows; we'll probably get there in the next post.
// include support libraries including their implementation
#define MATH3D_IMPLEMENTATION
#include "math3d.h"
#define SHADER_IMPLEMENTATION
#include "shaders.h"
This is where we include our two new libraries. Note the preceding defines that result in the implementation being included.
// For now just some global state to store our shader program and buffers
GLuint shaderProgram = NO_SHADER;
GLuint VAO = 0;
GLuint VBOs[2] = { 0, 0 };
Here we define some global variables into which we'll store our shader program ID and the IDs for our VAO and two VBOs.
These are globals for now to keep our example simple.
// function for logging errors
void error_callback(int error, const char* description) {
  syslog(LOG_ALERT, "%i: %s", error, description);  
};
GLFW requires us to define a callback function that it can call when an error is encountered. We will tell GLFW later on that this is our function.
I have taken the same approach within the shader library and we will thus also give our shader library a pointer to this function.
For now the function simply writes the error to our system log.
// load contents of a file
// return NULL on failure
// returns string on success, calling function is responsible for freeing the text
char* loadFile(const char* pFileName) {
  ...
};
This function loads the contents of a file into a string. This is C 101 so I'll skip the code.
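For reference, one possible minimal implementation of such a loader might look like this (a sketch only, the actual code is on GitHub):

```c
#include <stdio.h>
#include <stdlib.h>

// load contents of a file
// returns NULL on failure
// returns string on success, calling function is responsible for freeing the text
char* loadFile(const char* pFileName) {
  char* result = NULL;

  FILE* file = fopen(pFileName, "rb");
  if (file != NULL) {
    // find out how big our file is
    fseek(file, 0, SEEK_END);
    long size = ftell(file);
    fseek(file, 0, SEEK_SET);

    // allocate room for the text plus a terminating zero
    result = (char*) malloc(size + 1);
    if (result != NULL) {
      if (fread(result, 1, size, file) == (size_t) size) {
        result[size] = '\0'; // zero terminate so we can treat it as a string
      } else {
        free(result);
        result = NULL;
      }
    }
    fclose(file);
  }
  return result;
}
```

Note that we open the file in binary mode and add the zero terminator ourselves, as glShaderSource expects a zero terminated string.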
void load_shaders() {
  char* shaderText = NULL;
  GLuint vertexShader = NO_SHADER, fragmentShader = NO_SHADER;
  
  // set our error callback...
  shaderSetErrorCallback(error_callback);
  
  // and load our vertex shader
  shaderText = loadFile("simple.vs");
  vertexShader = shaderCompile(GL_VERTEX_SHADER, shaderText);
  free(shaderText);
    
  // and load our fragment shader
  shaderText = loadFile("simple.fs");
  fragmentShader = shaderCompile(GL_FRAGMENT_SHADER, shaderText);
  free(shaderText);
    
  // link our program
  shaderProgram = shaderLink(2, vertexShader, fragmentShader);
                
  // no longer need our shaders
  glDeleteShader(fragmentShader);
  glDeleteShader(vertexShader);
};
I've removed the error handling here to make the code more readable. Basically we load our vertex shader, then compile it, then free our text, then repeat the same for our fragment shader and finally link our shader program. After this we no longer need the shaders so we free those up.
void unload_shaders() {
  if (shaderProgram != NO_SHADER) {
    glDeleteProgram(shaderProgram);
    shaderProgram = NO_SHADER;
  };
};
We also have a function to delete our shader program which we'll call when we're cleaning up.
void load_objects() {
  // data for our triangle
  GLfloat vertices[9] = {
    -0.6f, -0.4f,  0.0f,
     0.6f, -0.4f,  0.0f,
     0.0f,  0.6f,  0.0f
  };
  GLfloat colors[9] = {
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f
  };
  GLuint indices[3] = { 0, 1, 2 };
    
  // we start with creating our vertex array object
  glGenVertexArrays(1, &VAO);
  
  // and make it current, all actions we do now relate to this VAO
  glBindVertexArray(VAO);
  
  // and create our two vertex buffer objects
  glGenBuffers(2, VBOs);
  
  // load up our vertices
  glBindBuffer(GL_ARRAY_BUFFER, VBOs[0]);
  
  // size our buffer
  glBufferData(GL_ARRAY_BUFFER, sizeof(vertices) + sizeof(colors), NULL, GL_STATIC_DRAW);
  
  // layout (location=0) in vec3 vertices;
  glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
  glEnableVertexAttribArray(0);
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, (GLvoid *) 0);
  
  // layout (location=1) in vec3 colors;
  glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertices), sizeof(colors), colors);
  glEnableVertexAttribArray(1);
  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, (GLvoid *) sizeof(vertices));
  
  // load up our indices
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VBOs[1]);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
  
  // and clear our selected vertex array object
  glBindVertexArray(0);
};
Here it all gets a bit more complex. In this method we initialise our VAO and two VBOs that hold all our data needed to draw our one triangle.
At the top we define a few arrays containing the data.

We then create our VAO using the function glGenVertexArrays and make that our current VAO by binding it with glBindVertexArray.

We then create our two VBOs with a single call to glGenBuffers. Our first will be used for vertex and color data, the second for index data.

We bind our first VBO using glBindBuffer to make it current. Note the constant GL_ARRAY_BUFFER which tells OpenGL we're building our buffer containing vertex data. Also note that because our VAO was bound before, our first VBO now becomes the buffer containing vertex data for our VAO.
At this point however our VBO is an empty buffer, we need to tell OpenGL how large our buffer needs to be. We do this by calling glBufferData.
Note here that the first parameter specifies that we're still dealing with our buffer containing our vertex data, the second parameter defines the size of our buffer (we are storing vertex and color data here), our third parameter is NULL (we'll come back to this later) and our last parameter defines this as a static buffer (this tells OpenGL we'll set our data once and it won't change after that).

Now we need to load our vertex and color data into our data buffer. We do these with the next set of commands.
glBufferSubData loads data into a part of our buffer and we use it to load our vertex data in the first half and the color data in the second half of our buffer.
glEnableVertexAttribArray simply enables the use of an attribute, again this is remembered in the state of our VAO.
glVertexAttribPointer finally binds the data we just loaded into our buffer to an attribute in our shader; remember our "layout (location=n)" prefix in our vertex shader? The n corresponds to the attribute number in our calls. Here we are telling OpenGL how to interpret the data; in our case we tell it that we have 3 floats for each entry.
Attribute 0 is now our vertex data, attribute 1 our color data.

We again use glBindBuffer and glBufferData but now with GL_ELEMENT_ARRAY_BUFFER to load our index information. Our indices define which vertices make up our triangles (well in our case our single triangle).
Note that this time we do use our 3rd parameter of glBufferData as we load our indices directly into our buffer. There is no need to break this up. There is also no need to define attributes here.

Finally we unbind our VAO by calling glBindVertexArray with our parameter set to 0. This isn't very important in our little example here but it is a good habit to learn as it prevents accidental changes to whatever VAO is current.

void unload_objects() {
  glDeleteBuffers(2, VBOs);
  glDeleteVertexArrays(1, &VAO);
};
Eventually we'll want to clean up and we do this by deleting our two VBOs and our VAO.
static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
  if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
    glfwSetWindowShouldClose(window, GL_TRUE);
};
This is another callback GLFW allows us to use. GLFW will call this function whenever the user presses a key. We'll tell it to do so later on. Diving into this and other such callbacks is a subject for another day.
int main(void) {
  GLFWwindow* window;
  
  // tell GLFW how to inform us of issues
  glfwSetErrorCallback(error_callback);

  // see if we can initialize GLFW
  if (!glfwInit()) {
    exit(EXIT_FAILURE);    
  };
Finally we got to the start of our main function.
This is just some standard setup:
  • a variable to hold a pointer to the window we're about to open,
  • telling GLFW about our error callback routine
  • and initializing GLFW
  // make sure we're using OpenGL 3.2+
  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
  glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
This is a new bit. I'm not entirely sure if this is required on every platform but it is on Mac OS X. Basically this instructs GLFW that, once it creates our window, that window needs at least OpenGL 3.2 support including our core profile (i.e. our new rendering pipeline).
  // create our window
  window = glfwCreateWindow(640, 480, "GLFW Tutorial", NULL, NULL);
  if (window) {
    // make our context current
    glfwMakeContextCurrent(window);

    // tell GLFW how to inform us of keyboard input
    glfwSetKeyCallback(window, key_callback);

    // load, compile and link our shader(s)
    load_shaders();
    
    // load our objects
    load_objects();
Now we open our window and set the OpenGL context of our window as the current context. We'll talk about our options when opening a window such as full screen rendering, multi monitor support, etc. in a later post.
We also set our keyboard callback and call our load shaders and load objects functions we already talked about up above.

    // and start our render loop
    while (!glfwWindowShouldClose(window)) {
      float ratio;
      int width, height;
      mat4 mvp;
      vec3 axis;
Now it is getting interesting, this is the start of our render loop. We'll keep repeating the next bit of code over and over again for as long as our window remains open. This is where a graphics program differs from a normal window application as we're constantly updating the contents of our window instead of waiting for an event to come in.
      glfwGetFramebufferSize(window, &width, &height);
      ratio = width / (float) height;
      
      glViewport(0, 0, width, height);
      glClear(GL_COLOR_BUFFER_BIT);
First we retrieve our frame buffer size and set our viewport, then we clear the contents of our OpenGL buffer so we can start on a nice blank canvas. There are improvements to be made here but for now this will do. One thing that is important here is that we retrieve our frame buffer size, not our window size. On most hardware these will be the same but on, for instance, Retina screens the frame buffer may be much larger.
      mat4Identity(&mvp);
      mat4Ortho(&mvp, -ratio, ratio, -1.0f, 1.0f, 1.0f, -1.0f);
      mat4Rotate(&mvp, (float) glfwGetTime() * 50.0f, vec3Set(&axis, 0.0f, 0.0f, 1.0f));
Next we set up our model-view-projection matrix. Interestingly we do this in "reverse" order.
First we apply our orthographic projection.
We skip our view matrix as we're just looking out of origin.
Last we apply our model matrix by rotating our model.
      // select our shader
      glUseProgram(shaderProgram);
      glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "mvp"), 1, false, (const GLfloat *) mvp.m);
Next we tell OpenGL which shader program we wish to use and we load our mvp into our shader.
To do this we first need to get our uniform ID for our mvp variable and then use that to set our mvp.
This is the quick and dirty way of doing things, we'll talk about better strategies another day as it is a subject on itself.
      // draw our triangles:)
      glBindVertexArray(VAO);
      glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, NULL);
      glBindVertexArray(0);
Thanks to all the setup we've done actually drawing our triangle is brought down to these 3 calls. Binding our VAO sets the entire state, then we draw, and finally we unbind.
      // unset our shader
      glUseProgram(0);
      
      // swap our buffers around so the user sees our new frame
      glfwSwapBuffers(window);
      glfwPollEvents();
    };
Again for our simple example unsetting our shader program is a bit of overkill but it is a good habit to learn for when things get more complex.
In our final part of our render loop we tell GLFW to swap our buffers to make all our drawing work visible and to poll for any events like a good application should.
    // close our window
    glfwDestroyWindow(window);  
  };
  
  // lets be nice and cleanup
  unload_objects();
  unload_shaders();
  
  // the end....
  glfwTerminate();
};
And in this last bit we simply nicely clean up after ourselves....

That's it folks! Compile and we get the same colorful triangle as before, but after 10x as much code it is now being rendered using OpenGL 3+ techniques.

What's next?

What's next is that I'm going to get some sleep:) I'll reread what I've posted here over the weekend and fix any stupid mistakes I've made. The code will be up on GitHub in a minute.
I'll also look at the windows side of this in the weekend.

After that's behind me, in the next full post we'll start looking into some 2D techniques to draw something a bit more interesting than a spinning triangle.

Friday, 6 March 2015

Getting started with GLFW (part 2)

This will be a relatively small part, though I was surprised to find I took the hard way in doing this.
I'm going to walk through the steps for getting our sample library to compile on Windows.

Obviously you need a compiler. I've had Visual Studio 2010 Express installed on my laptop for a while now, so that is what I'm going to stick with. I am however going to compile this using makefiles and using the static libraries. A better way forward probably is to just create a solution in Visual Studio and let it take care of the difficult stuff; you simply start with an empty console project and work from there.

One thing that is nice is that for Windows you can download all the libraries precompiled, saving you a bunch of hassle: no need to go through CMake and compile the source code yourself. Of course if you feel inclined to do so, or wish to use the latest source from GitHub, you have no choice.

The binaries come as dynamic and static libraries. I'm going for static but if you'd rather use the dynamic libraries, simply link in glfw3dll.lib and copy the dll in place instead of using glfw3.lib. The makefile also becomes a little simpler as you need to include fewer standard static libraries.

I was surprised to find that the static library was compiled with /MD instead of /MT and that caused a bit of a hassle.
The other thing is that the last time I worked with makefiles for nmake was years ago and I struggled with the syntax for a bit. A lot of the stuff I'm doing in the makefile on the Mac didn't want to work.
While the makefile for Mac OS X I probably won't need to touch anymore (except maybe for resources) I'll be editing the makefile on Windows each time I add files. If I find how to do this better, I'll revisit this post.

The other thing that is important to know when using makefiles, and thus compiling on the command line, is that the environment is not setup by default. VC does come with a handy batch file you can run that does all the work for you. I've got a batch file init.bat in my home folder so all I need to remember after opening my command prompt is to type in init:
call "c:\Program Files (x86)\Microsoft Visual Studio 10.0\vc\vcvarsall.bat" x86
cd Development

So lets first look at our new folder structure with our new windows files in place:
3rdparty
  - GLFW
    - glfw3.h
    - libglfw3_mac.a
    - glfw3.lib
build
include
macosx
  - Info.plist
  - makefile
resources
  - app.icns
source
  - main.c
windows
  - makefile
Pretty straightforward, just the Windows glfw library and the makefile in our windows folder have been added.
The contents of the Windows makefile is as follows:
# Compiler directives for Windows
CFLAGS = /c /WX /MD /nologo /D "WIN32" /I..\include /I..\3rdparty
LDFLAGS = /nologo /SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup /WX 
WINLIBS = opengl32.lib kernel32.lib user32.lib gdi32.lib comdlg32.lib advapi32.lib shell32.lib uuid.lib 

APPNAME = glfw-tutorial.exe
OBJECTDIR = ..\build\Objects
CONTENTSDIR = ..\build

OBJECTS = $(OBJECTDIR)\main.obj

all: $(CONTENTSDIR) \
  $(OBJECTDIR) \
  $(CONTENTSDIR)\$(APPNAME) \
 
$(CONTENTSDIR): 
  mkdir $(CONTENTSDIR) 

$(OBJECTDIR):
  mkdir $(OBJECTDIR)
  
$(CONTENTSDIR)\$(APPNAME): $(OBJECTS) ..\3rdparty\GLFW\glfw3.lib 
  link $(LDFLAGS) /out:$@ $** $(WINLIBS)

$(OBJECTDIR)\main.obj:
  cl $(CFLAGS) /Fo$@ ..\source\main.c

clean:
  rmdir ..\build /s /q

Now in your command line window cd into the windows subfolder and type nmake

This time our result is a simple exe file.

What's next?

As mentioned in part 1, now that we've got our basic example working it is time to make it work using OpenGL 3.

Wednesday, 4 March 2015

Getting started with GLFW (part 1)

In my little hobby projects I've been using GLFW and I've been thinking awhile about writing a small tutorial series based on my experiences with this.

Two things struck me while playing around with this.
  1. Mac OS X as a platform seems to be underrepresented. Many tutorials focus on Windows and it seems there is a bit more freedom there. On Mac OS X you hit roadblocks sooner, such as the full deprecation of OpenGL 1/2, which brings us to:
  2. Many tutorials focus on OpenGL 1/2 but at least on the Mac, once you want to use the power of OpenGL 3 or upwards you lose OpenGL 1/2 support. Losing the fixed rendering pipeline isn't a big deal, but losing all the global state and all the extra setup that is done for you means you find yourself looking at a blank screen for ages till you find that last elusive missing ingredient.
Before we go any further however we need to ask ourselves a really important question:

Why?

There are a number of facets to this question and the first isn't even about choosing GLFW but why go through the trouble of writing something from scratch? GLFW only offers us a cross platform blank canvas, why go through all the work to implement everything from there when there are full featured alternatives?

The easy answer to that question is that I'm crazy. I'm not writing this because I've got a delusion that I'm going to write the next big game engine. I've got a good paying job and this is a hobby for me and the biggest reason for most of the things I've done so far is that I simply want to learn how to build these things.

But the hard answer to that question is a little more involved. Deciding to write your own engine instead of using an off the shelf one can make sense. It really depends on what you are building.
If you're after building the next AAA title you probably want to go and save yourself the time and effort and just buy Unity or Unreal or something like that.
If you're building an indie game on a budget and you're willing to do a little more yourself, I would suggest having a look at Ogre3D, which really seemed nice.
But for a lot of simpler concepts you'll actually find that you can get what you need built relatively quickly, and doing it yourself does have the added advantage that you'll end up with something smaller, more nimble and easier to adjust to your needs.

The second burning question is why GLFW.
That one is simpler to answer though: it meets my needs and it's easy to use.
There are other alternatives and you could use them just as well.

Depending on your style of coding you can write the internals in such a way that you could swap any supporting library out from underneath it easily enough. I'll try to adhere to this as I have the hope to get this to a point where I can replace GLFW with a wrapper made in XCode to support iOS (one of the things lacking from GLFW as a platform). I've got the basics of this working but it's not at a state yet where I am convinced it will get me where I want to go.

What I will not do is abstract the rendering code itself. I'm purely targeting OpenGL 3+ and will as far as possible try to stay compatible with OpenGL ES 2.

Before we can begin

This is the boring part. I'm going to be writing most from the perspective of Mac OS X.

If you're doing this on Windows or Linux your life is going to be loads easier. On Linux just about everything you need is there, on Windows download the express version of VC++ and you're on your way too.
Check the GLFW pages just to make sure you get everything right; if you're on Windows you're probably best off just grabbing the binaries and you're up.

On Mac OS X, you start off by downloading XCode (but if you're a developer I'd be surprised if you're not already using it). I'm running 6.1.1 at the moment and have a huge love/hate relationship with it. XCode 4 was probably the last I actually enjoyed using but maybe that's because I'm not doing any full-fledged iOS development for which I'm sure XCode 6 is a superior being.

Now life gets a little more crazy, XCode went through a phase where it didn't automatically install the command line tools needed to compile stuff the old fashioned way. Apple has come back to their senses but if you're on XCode 4 or 5 you'll need to open up XCode, go to preferences and check the download tab. There should be an option there to install the command line tools.

The missing tool here is cmake. This is a handy little tool that will generate platform specific makefiles for a project. Installing it is easy: go to the download page, download the dmg and drag cmake from the dmg into Applications. If, like me, you're on Mavericks or Yosemite, cmake won't start until you go into System Preferences -> Security & Privacy and tell OS X you trust it.

Finally we need GLFW, download the latest source (I'm using 3.1 at the moment) and unzip it in a folder of your choice.

Compiling GLFW on Mac OS X

We now need to build GLFW. Again I'm doing the Mac version here, Windows users you are in luck as precompiled binaries are available, get them, don't reinvent the wheel.

One choice you have to make at this point is whether you want to build a static library or a dynamic library. Call me old fashioned, but I'm a sucker for static libraries. This means GLFW is fully compiled into your executable and you're not dependent on external files. Dynamic libraries are great for large pieces of kit that you want to replace easily with a newer version, but this rarely goes unpunished, and with today's cheap, abundant disk storage you really don't care.

If you haven't already opened cmake when you installed it, run it now. You'll get a screen that asks you for the location of the source. Type in the path to where you unzipped GLFW.
It also asks you for a build destination, I use the same path but add /build at the end.
Now press the configure button (it will ask you to create the build folder if needed).
A popup should open asking you what you wish to create. I'm going for "Unix Makefiles" using the default native compilers.

It will take a few seconds to fill and present you with a nice list of configure options.
The default options, bar one, were what I wanted. The option I'm changing is GLFW_BUILD_UNIVERSAL, which when ticked builds both 32-bit and 64-bit copies of the library as a universal library, so you can deploy a single executable regardless of whether you're on 32-bit or 64-bit hardware (a really neat feature of OS X).
Now hit Generate and we should end up with some make files.
Open up a terminal, cd into your build folder, type make and grab a coffee!
(don't worry about some of the warnings, most are related to GLFW's support for OpenGL 1 and Apple letting us know it won't be around for much longer)

Once it's done rumbling you should end up with a fully compiled copy of GLFW, including a few example programs that you can try out in the examples subfolder of your build folder.

After all this there really are only two bits of GLFW that are interesting:
glfw-3.1/include/GLFW/glfw3.h
glfw-3.1/build/src/libglfw3.a
I often find myself copying these two files into my project folder so I can keep it nice and clean.

Our first GLFW application

It's customary to do a hello world application but unfortunately GLFW doesn't natively support font rendering (we'll deal with that another day).

Now at this point you could go and use XCode to manage your source code. It drives me nuts though: it seems convinced that only full Cocoa applications should be deployable, and it fights me whenever I structure my application in a way it doesn't like. I fully admit I gave up on it a long time ago, so it may simply be a lack of knowledge on my part, but I'm ruling out XCode for now.

The other option is the wonderful free cross-platform IDE code::blocks. It would allow us to maintain a single project file that can build our application for Windows, Mac and Linux. I may one day explore that a little more and revisit this.

For now, I'm staying with old fashioned makefiles. Looking ahead a little I'm going to start with the following folder structure:
3rdparty
  - GLFW
    - glfw3.h
    - libglfw3.a
build
include
macosx
  - Info.plist
  - makefile
resources
  - app.icns
source
  - main.c
windows
Obviously the windows folder will stay empty for now as I'm just doing the Mac side of life.
The files in the 3rdparty folder are the files from our GLFW source.
The build folder will eventually contain our build files.
Our include folder is for later use.
We also have a resources folder that we'll end up copying into our application bundle. The bundle is a Mac OS X construct that lets the folder of files that forms an application be treated as a single entity.
We'll eventually be putting other distributables here. On Windows we'll need to treat this slightly differently.
Finally our source folder contains our source files. 
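The layout above can be created in one go from the terminal (only the folder names are taken from the listing; the files themselves come later):

```shell
# create the folder layout from the listing above in one go
mkdir -p 3rdparty/GLFW build include macosx resources source windows
ls -d 3rdparty/GLFW macosx resources source windows
```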

Let's look at Info.plist first. This is a file that will be copied into our bundle's root and tells Mac OS X a little about our application. It's an XML file, but with XCode installed you get a nice little plist editor that makes it easier to edit. Our file looks as follows:
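A minimal sketch of such a plist (the exact keys and the com.example identifier here are illustrative, adjust them to your own project):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <!-- name of the binary inside Contents/MacOS -->
  <key>CFBundleExecutable</key>
  <string>glfw-tutorial</string>
  <!-- unique reverse-DNS identifier for the application -->
  <key>CFBundleIdentifier</key>
  <string>com.example.glfw-tutorial</string>
  <key>CFBundleName</key>
  <string>glfw-tutorial</string>
  <!-- icon file inside Contents/Resources -->
  <key>CFBundleIconFile</key>
  <string>app.icns</string>
  <key>CFBundlePackageType</key>
  <string>APPL</string>
</dict>
</plist>
```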
The next file is our makefile:
# Compiler directives for Mac OS X
CC = gcc
CPP = g++
CFLAGS = -c -arch i386 -arch x86_64 -I../include -I../3rdparty
LDFLAGS = -framework Cocoa -framework OpenGL -framework IOKit -framework CoreVideo -arch i386 -arch x86_64

APPNAME = glfw-tutorial
OBJECTDIR = ../build/Objects
CONTENTSDIR = ../build/$(APPNAME).app/Contents

OBJECTS = $(patsubst ../source/%,$(OBJECTDIR)/%,$(patsubst %.c,%.o,$(wildcard ../source/*.c)))
OBJECTS += $(patsubst ../source/%,$(OBJECTDIR)/%,$(patsubst %.cpp,%.o,$(wildcard ../source/*.cpp)))
RESOURCES = $(patsubst ../resources/%,$(CONTENTSDIR)/Resources/%,$(wildcard ../resources/*.*))

all: $(CONTENTSDIR)/MacOS \
  $(CONTENTSDIR)/Info.plist \
  $(CONTENTSDIR)/MacOS/$(APPNAME) \
  $(RESOURCES)
 
$(CONTENTSDIR)/MacOS: 
  mkdir -p $(CONTENTSDIR)/MacOS
 
$(CONTENTSDIR)/Info.plist: Info.plist
  cp -f $^ $@
  @chmod 444 $@

$(CONTENTSDIR)/Resources/%: ../resources/%
  @mkdir -p $(@D)
  @chmod 755 $(@D)
  cp -f $^ $@
  @chmod 444 $@
 
$(CONTENTSDIR)/MacOS/$(APPNAME): $(OBJECTS) ../3rdparty/GLFW/libglfw3.a
  $(CPP) $(LDFLAGS) -o $@ $^

$(OBJECTDIR)/%.o: ../source/%.c
  @mkdir -p $(@D)
  $(CC) $(CFLAGS) -o $@ $<

$(OBJECTDIR)/%.o: ../source/%.cpp
  @mkdir -p $(@D)
  $(CPP) $(CFLAGS) -o $@ $<

clean:
  rm -R -f ../build

I'm not going into how this makefile works. Suffice it to say that it will do its job for what we need right now.
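As one illustration of what the rules above do: the resource rule copies each file from our resources folder into the bundle's Resources folder and marks it read-only. The same steps in plain shell, using a stand-in file (the file contents here are obviously fake):

```shell
# mimic the makefile's resource rule with a stand-in resource file
mkdir -p resources build/glfw-tutorial.app/Contents/Resources
echo "fake icon bytes" > resources/app.icns      # stand-in for a real icon
chmod 755 build/glfw-tutorial.app/Contents/Resources
cp -f resources/app.icns build/glfw-tutorial.app/Contents/Resources/
chmod 444 build/glfw-tutorial.app/Contents/Resources/app.icns
ls -l build/glfw-tutorial.app/Contents/Resources/app.icns
```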

Finally we need our main source file. Initially I started this project as a C++ project and it may evolve back into that, but as I'm not doing anything with C++ yet I thought it best to stick with C for now.

I have half a mind to make this tutorial focus purely on C. Don't get me wrong, I love C++, I have been writing software in C++ for well over a decade and it remains my favorite language. Staying with C however means this code is more portable and will make it easier to make the jump to iOS later on. We'll see how things develop.

This is pretty much a copy of the sample file on the main GLFW website, so I take no credit for it. It is just our starting point. Also, it's OpenGL 1, which is great just to get the ball rolling, but we'll be tossing it away very soon.

#include <GLFW/glfw3.h>

#include <stdlib.h>
#include <stdio.h>

static void error_callback(int error, const char* description) {
  // we'll implement some error handling here soon...
}

static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
  if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
    glfwSetWindowShouldClose(window, GL_TRUE);
}

int main(void) {
  glfwSetErrorCallback(error_callback);
  if (!glfwInit()) {
    exit(EXIT_FAILURE);
  }

  GLFWwindow* window = glfwCreateWindow(640, 480, "Hello world", NULL, NULL);
  if (window) {
    glfwMakeContextCurrent(window);
    glfwSetKeyCallback(window, key_callback);

    while (!glfwWindowShouldClose(window)) {
      float ratio;
      int width, height;

      glfwGetFramebufferSize(window, &width, &height);
      ratio = width / (float) height;

      glViewport(0, 0, width, height);
      glClear(GL_COLOR_BUFFER_BIT);

      // orthographic projection that compensates for the window's aspect ratio
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);
      glMatrixMode(GL_MODELVIEW);
      glLoadIdentity();

      // rotate and draw our colored triangle, old style OpenGL 1 immediate mode
      glRotatef((float) glfwGetTime() * 50.f, 0.f, 0.f, 1.f);
      glBegin(GL_TRIANGLES);
      glColor3f(1.f, 0.f, 0.f);
      glVertex3f(-0.6f, -0.4f, 0.f);
      glColor3f(0.f, 1.f, 0.f);
      glVertex3f(0.6f, -0.4f, 0.f);
      glColor3f(0.f, 0.f, 1.f);
      glVertex3f(0.f, 0.6f, 0.f);
      glEnd();

      glfwSwapBuffers(window);
      glfwPollEvents();
    }

    glfwDestroyWindow(window);
  }

  glfwTerminate();
  exit(EXIT_SUCCESS);
}

With that all in place open up a terminal, cd into the macosx folder and type in: make
You should end up with a nice little Mac OS X application called glfw-tutorial that shows a little spinning triangle.

I've placed the source code so far on my github page:
https://github.com/BastiaanOlij/glfw-tutorial
Note that I'll be updating the files here as this tutorial progresses, but I'll be placing zip files in the archive subfolder containing the files as they are after each article.

What's next?

If I have some spare time later in the week I'll try to add an article about getting this to work on Windows, though I'll mostly be targeting Mac OS X in this series.

After that we'll convert the code above to work using OpenGL 3. Be prepared to see it suddenly grow, as the setup for OpenGL 3 is massive, but once that is out of the way, trust me, life becomes cool :)