Thursday, 12 March 2015

Making the move to OpenGL 3+ (part 3)

Ok, this is going to be a loooong one...

One of the things that was so appealing about OpenGL 1/2 was that the setup was easy. The hard work was in the platform-specific stuff, which was handled by GLUT.
GLUT was nice to get you started and to learn with, but fell short when you wanted to do more serious stuff. GLUT has also been retired, which is one of the reasons GLFW and other such frameworks were brought into existence.

Once you make the jump to OpenGL 3+ or its little brother OpenGL ES 2 you suddenly find that there is a lot less out-of-the-box stuff you can use. Not only has the fixed pipeline been replaced by a programmable pipeline (in OpenGL 2 they still lived side by side), there are many more "must dos" that were optional before, and suddenly you have to do a lot more of the math yourself.

This is one of the reasons I haven't attempted to explain any of the code presented in the "Getting started" part of this series. We're about to abandon half of it. I will explain most of the code I'll be presenting in this and following parts.  I'll mostly rely on comments I've added to any of the supporting libraries (where I've written them) but will go in depth into the code you'll find yourself writing or dealing with.

Before we get started I'm going to impose a few rules (on myself):
  • I've already mentioned this, but we're sticking with C. While I love C++ and am more confident with it, sticking with C leaves out some complexities that get in the way of explaining things. It also makes it easier to make the jump to other platforms.
  • Our main.c file should contain only that which is needed to interact with the user and set up our environment: basically the code that relates directly to GLFW. The main reason for this is to make it easier to port the code to platforms not supported by GLFW, or for you to use the stuff here with the framework of your choice, but it also just makes sense. Note that I'll break this rule in the beginning as we are still building everything up, but further on in this series we'll move more and more out of there.
  • Reusable support libraries we create will follow a "single file" approach instead of having separate .h and .c files. I first encountered this when I started using Sean T. Barrett's excellent STB libraries (which we will end up using here as well) and found it a great way to write code that is easily distributable. Basically it means that you include the file as normal wherever you depend on the code, but in one place (for us always in main.c) you precede the include with a define that tells the library to include its implementation.
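To make that concrete, here's a tiny sketch of the pattern. Everything below is made up for illustration (it's not one of the libraries we'll actually use), but math3d.h and shaders.h are structured the same way:

```c
#include <assert.h>

/* imagine this section lives in its own file, say clampf.h;
   the declaration part is always compiled... */
#ifndef CLAMPF_H
#define CLAMPF_H
float clampf(float value, float minValue, float maxValue);
#endif

/* ...but the implementation is only compiled in the one file
   that defines CLAMPF_IMPLEMENTATION before the include
   (here we define it directly to keep this sketch in one piece) */
#define CLAMPF_IMPLEMENTATION

#ifdef CLAMPF_IMPLEMENTATION
float clampf(float value, float minValue, float maxValue) {
  if (value < minValue) return minValue;
  if (value > maxValue) return maxValue;
  return value;
}
#endif
```

Every other file that needs clampf would just do a plain #include "clampf.h" and only get the declaration.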
I'll start off again testing and compiling everything on my Mac, and once we've got it all running I'll spend some time adding the bits and pieces so it works on Windows too.

Math library

The first bit we need to implement is vectors and matrices. There are plenty of math libraries around (GLM is a really nice one, written in C++, that mimics what you write in GLSL shaders). I'm sure there are some really good ones written in C but I have to admit, I haven't looked and only know some of the C++ ones. Over the years I've built a collection of applicable functions, many dating from a time when such libraries weren't available, and mostly out of habit I've stuck with them.

Most vector logic is basic trigonometry anyway; it gets a bit more complex when we talk about matrices. I'm not going to discuss either in detail here. Those are topics in themselves, and while most interesting, only a basic understanding is needed to use them; I hope we'll learn by example here.

Ergo, I present you with math3d.h, a self-contained support library with various mathematical functions useful for what we are trying to do here. The name is a little misleading because it is just as applicable to 2D applications as 3D.

I do fully have to admit that some of the more complex functions are simply copied from the OpenGL documentation or other sources related to the subject.

As I'm writing this the library contains basic vector logic for 2D, 3D and 4D vectors and support for 3x3 and 4x4 matrices, with only a few of our most basic matrix functions (I've had to rewrite most of my C++ code back into C and as I haven't tested it all end to end yet, we'll be fixing bugs for a while).
I'll be adding more functionality over time and we may even get to quaternions at some point. It is definitely missing some key functions that we will need at some point in this series (like inverting a matrix).
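To give a feel for what lives inside such a library, here are two staple vector functions written out in plain C. The struct layout and names here are illustrative; math3d.h's actual signatures may differ:

```c
#include <assert.h>

typedef struct vec3 {
  float x, y, z;
} vec3;

/* dot product: 0.0 when the two vectors are perpendicular */
float vec3Dot(const vec3* a, const vec3* b) {
  return a->x * b->x + a->y * b->y + a->z * b->z;
}

/* cross product: returns a vector perpendicular to both inputs */
vec3 vec3Cross(const vec3* a, const vec3* b) {
  vec3 r;
  r.x = a->y * b->z - a->z * b->y;
  r.y = a->z * b->x - a->x * b->z;
  r.z = a->x * b->y - a->y * b->x;
  return r;
}
```

For example, the cross product of the x axis and the y axis gives the z axis, and their dot product is 0 because they are perpendicular.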

Projection, view and model matrices

Before we continue there are 3 matrices that require a bit of a closer look as they determine how something will be drawn on screen.
I'll leave out for a moment how we actually calculate these matrices and purely take a look at the theory behind them.

A matrix is a mathematical tool that allows us to apply a transformation to a vector. This transformation can move that vector, rotate it around a center, scale it, and so on.
What makes matrices super cool is that you can multiply two of them to create a new matrix that combines both their transformations.
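Here's what that multiplication looks like in plain C, using column-major storage the way OpenGL expects it (the struct and function names are illustrative, not necessarily those of math3d.h). Note that applying C = A * B to a vector is the same as applying B first and A second:

```c
#include <assert.h>
#include <string.h>

typedef struct mat4 {
  float m[4][4]; /* m[column][row], OpenGL's column-major layout */
} mat4;

void mat4Identity(mat4* M) {
  memset(M->m, 0, sizeof(M->m));
  M->m[0][0] = M->m[1][1] = M->m[2][2] = M->m[3][3] = 1.0f;
}

/* a translation is an identity matrix with the offset in the last column */
void mat4Translation(mat4* M, float x, float y, float z) {
  mat4Identity(M);
  M->m[3][0] = x;
  M->m[3][1] = y;
  M->m[3][2] = z;
}

/* C = A * B; safe even if C aliases A or B because we work in a temporary */
void mat4Multiply(mat4* C, const mat4* A, const mat4* B) {
  mat4 R;
  int col, row, i;
  for (col = 0; col < 4; col++) {
    for (row = 0; row < 4; row++) {
      R.m[col][row] = 0.0f;
      for (i = 0; i < 4; i++) {
        R.m[col][row] += A->m[i][row] * B->m[col][i];
      }
    }
  }
  *C = R;
}
```

Multiplying a translation over (1, 2, 3) with a translation over (4, 5, 6) gives a single matrix that translates over (5, 7, 9): the two transformations combined.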

When we look at an object that we wish to draw, that object usually starts off as a series of vertices and faces (triangles) that make up that object.
Let's say that we have a simple 3D box. That box consists of 8 vertices and 12 faces (2 triangles for each side, 6 sides in total). This object is usually stored in a way where its center is at (0.0, 0.0, 0.0). For a square box that is 10.0 units wide, high and deep that means our 8 vertices are at: (-5.0, -5.0, -5.0), (5.0, -5.0, -5.0), (5.0, 5.0, -5.0), (-5.0, 5.0, -5.0), (-5.0, -5.0, 5.0), (5.0, -5.0, 5.0), (5.0, 5.0, 5.0) and (-5.0, 5.0, 5.0).
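Those 8 corners follow a simple pattern (every combination of plus or minus half the size on each axis), so you can also generate them with a small loop. A throwaway sketch, not part of our libraries:

```c
#include <assert.h>

/* fill out the 8 corners of a box centered on the origin;
   bit 0 of the corner index selects x, bit 1 selects y, bit 2 selects z */
void boxCorners(float halfSize, float corners[8][3]) {
  int i;
  for (i = 0; i < 8; i++) {
    corners[i][0] = (i & 1) ? halfSize : -halfSize;
    corners[i][1] = (i & 2) ? halfSize : -halfSize;
    corners[i][2] = (i & 4) ? halfSize : -halfSize;
  }
}
```

With a half size of 5.0 this produces exactly the 8 vertices listed above, just in a different order.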

We call this our model space. The first thing we need to do is move and rotate our box to where it actually is positioned within our virtual 3D world.
The matrix that performs the transformation is called our model matrix.
Note that this matrix often is formed by combining a number of individual transformations that determine how our box needs to be positioned.

Once our model is positioned at its place in the 3D world, we say that our coordinates are now in, you guessed it, world space.

Unfortunately our "observer", our virtual "camera" so to say, isn't stuck at the center of the 3D world. The "observer" moves around the 3D world and as such, our view into the 3D world changes. We thus need to move and rotate our box in relation to our "observer" to place it in front of our "camera".

The matrix that performs this transformation is called our view matrix and after applying it to our model we're now in view space.

Very often, actually nearly always, the model matrix and the view matrix are combined into a model-view matrix that does the transformation from model space into view space in one go. This roughly halves the work required to bring all our models into view space, and as we can still do all our lighting calculations properly, that works out pretty well. You can actually see this in our OpenGL 1 code:
glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); // our "view matrix" stays at our center looking forward
glRotatef((float) glfwGetTime() * 50.f, 0.f, 0.f, 1.f); // our model matrix rotates our triangle as time passes

The last step we need to do is to take our 3D view space coordinates and decide how they translate to screen coordinates.
We do this by applying a projection matrix.

In our OpenGL 1 sample code we apply an orthographic projection:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-ratio, ratio, -1.f, 1.f, 1.f, -1.f);

This projection is simply a 2D projection that scales our x and y coordinates based on the aspect ratio of our screen resolution. The z coordinate doesn't influence how we draw anything other than how things overlap.
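The matrix glOrtho builds is documented and easy to reproduce yourself. A sketch in plain C (using a bare float[4][4] in column-major order; math3d.h will get its own equivalent). With the near/far values of 1 and -1 from our sample, z indeed passes through unchanged:

```c
#include <assert.h>
#include <string.h>

/* build the same matrix glOrtho does, column-major m[column][row] */
void orthoMatrix(float m[4][4],
                 float left, float right,
                 float bottom, float top,
                 float zNear, float zFar) {
  memset(m, 0, 16 * sizeof(float));
  m[0][0] = 2.0f / (right - left);
  m[1][1] = 2.0f / (top - bottom);
  m[2][2] = -2.0f / (zFar - zNear);
  m[3][0] = -(right + left) / (right - left);
  m[3][1] = -(top + bottom) / (top - bottom);
  m[3][2] = -(zFar + zNear) / (zFar - zNear);
  m[3][3] = 1.0f;
}
```

With left/right set to -ratio/ratio, a point at x = ratio lands exactly on the right edge of clip space (x = 1), which is all this projection really does.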

We'll look into calculating other projection matrices later on in the series.

There are many situations, especially when dealing with lighting, where you want to work in view space first and only perform the projection last; this is why in OpenGL 1 we set the model-view matrix and the projection matrix separately.

However, what is more often done is that we set both a model-view matrix and a model-view-projection matrix, where this last matrix combines all three matrices into one final matrix that takes model space straight to screen space. We'll be doing that here as well.

Shader library

With the fixed pipeline gone you have to implement your own shader code. A very basic shader isn't all that hard to implement, but there are a few steps involved in actually getting the shader loaded into your application.

Our main shader code will reside in shaders.h, another self-contained support library. Again I won't go into too much detail about how it works and will initially rely on the comments I've placed in this library. I've also only included basic code for now and we'll enhance this as time goes by. I will go into more detail about the shaders themselves.

Do note that my shader library does not contain any code to load the shader text from disk. This code resides in main.c for the time being, simply because I believe they don't go hand in hand. If you wanted to, and I've indeed done this in some of my projects, you could keep your shader code inline in your C file. On the other end of the spectrum you may use a text file loader library that allows you to add preprocessor directives to your shader code, or you may generate your shader code based on some sort of higher level design document.

A shader program is (currently) built up from 5 programmable stages:
- the vertex shader
- the tessellation control shader
- the tessellation evaluation shader
- the geometry shader
- the fragment shader
Only the vertex shader and fragment shader are mandatory; the other 3 were added later and are optional. We'll ignore the existence of the 3 newer shader stages for now and implement a simple vertex shader and fragment shader that will allow us to reproduce what we did in OpenGL 1.

The vertex shader
When we look at the triangle that we are rendering, it consists of 3 vertices. The vertex shader is responsible for transforming those vertices so they are projected in the right spot, and for setting up any additional information. Our vertex shader is called once for each of our 3 vertices.
At the bare minimum we have:
- our model-view-projection matrix as a variable to apply to our vertices
- an input for the vertex that we are handling
- an output for our projected vertex in screen space

For our example code we need two more things:
- an input for the color of the vertex that we are handling
- an output for this color so we draw our triangle in the right color

Let's look at the shader code bit by bit and explain what is happening:
#version 330
This simply tells OpenGL what capabilities we need for our shader and is directly related to the version of OpenGL we are targeting.

uniform mat4 mvp;

Here we define a uniform of type mat4 (a 4x4 matrix) called mvp, which is our model-view-projection variable. A uniform is simply a variable that can be set from outside our shader but is basically a constant inside of it. We'll end up setting it from our code later on.

layout (location=0) in vec3 vertices;
layout (location=1) in vec3 colors;
Here we define two vec3 (3D vector) variables called vertices and colors. The "in" keyword in front specifies that they are inputs.
The bit in front of that is our bit of magic: "layout (location=n)".
We'll look at this a bit closer once we start putting things together but in essence this allows us to bind our array of vertices and our array of colors that make up our triangle to our shader.

out vec4 color;

Here we define a single vec4 (4D vector) variable called color. This is our color output that is sent to our fragment shader. As I mentioned above we also need an output for our projected vertex, however this is one of the few bonuses we get: it is built in and called gl_Position. In OpenGL 2, when programmable shaders were first added, we had loads of these built-in variables, but only a few survive in OpenGL 3.

void main() {
  gl_Position = mvp * vec4(vertices, 1.0);
  color = vec4(colors, 1.0);
}
And finally our shader itself. It looks a bit like a C program, doesn't it? Every shader stage has its own main function, which is its entry point. Just like in C we can define additional functions, but it is the main function that OpenGL looks for.
The first line of code takes our current vertex, extends it into a 4D vector and then multiplies it with our model-view-projection matrix, storing the end result in gl_Position.
Our second line of code simply copies our input color to our output color.

The fragment shader
Now that we know the state of affairs at each corner of our triangle we need to draw the triangle itself. OpenGL nicely interpolates our two outputs from our vertex shader to draw out our triangle. For each pixel that needs to be drawn to screen our fragment shader is called.
Obviously the inputs of our fragment shader must match the outputs of our vertex shader but the fragment shader itself has only one output: the color our pixel needs to be drawn with.
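Under the hood that interpolation is just a weighted average: for every pixel the GPU works out three barycentric weights (how close the pixel sits to each corner, summing to 1) and blends each vertex output with them. A CPU sketch of the blend, purely for illustration:

```c
#include <assert.h>

/* blend one per-vertex value using barycentric weights w0 + w1 + w2 == 1;
   the GPU does this for every component of every interpolated output */
float baryLerp(float v0, float v1, float v2, float w0, float w1, float w2) {
  return v0 * w0 + v1 * w1 + v2 * w2;
}
```

Halfway along an edge between a fully red corner (1.0) and a corner without red (0.0), the red channel comes out at 0.5, which is exactly the smooth gradient we see across our triangle.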

Our fragment shader therefore is as simple as can be:
#version 330

in vec4 color;
out vec4 fragcolor;

void main() {
  fragcolor = color;
}
The first line is the same as our vertex shader and identifies what OpenGL capabilities we require for this shader.
The 3rd line contains our input called color which matches the output of our vertex shader.
The 4th line defines our output called fragcolor; this used to be a built-in variable but we now need to define it ourselves. As our fragment shader only has a single output, OpenGL knows what we're intending here (this change may not make much sense now, and the good reason for it won't become apparent until you start using frame buffers, which is a topic for another day).
Finally we have our main function and in it a single line of code which copies our color input to our output.

This is about as simple as it gets and obviously it gets more complex from here.

Buffer objects and vertex arrays

The last piece of the puzzle we need before we can build are our buffer objects and arrays.

When we look at our original OpenGL 1 code we see that we use glBegin, glColor3f, glVertex3f and glEnd to send all the data related to our triangle to OpenGL. We do this every frame. We waste a lot here. It's not noticeable with a single triangle, but once you have tens of thousands of triangles to draw it really starts to show.

This was realized back in the OpenGL 1 days and initially solved with the functions glVertexPointer, glColorPointer and glDrawElements (and a couple more, but these 3 would draw our triangle).
These allowed us to hand OpenGL arrays of vertex data and color data and then draw all the elements making up whatever it was we were drawing. But it still meant copying this data into GPU memory every frame.

Vertex Buffer Object
Eventually Vertex Buffer Objects (or VBOs for short) were added to OpenGL. These allow us to copy vertex data into GPU memory once and then use it repeatedly, saving us a lot of overhead. The solution was a bit of a hodgepodge however, as the old functions remained in use: you needed to bind the VBO containing the data and then call these functions with an offset instead of a pointer.

VBOs however aren't just for vertex data; they can be used for much more, such as index data to draw many faces, color data, texture coordinate data, normal vector data, etc. (don't worry, we'll explain what these are in due time).

This was already a giant leap forward, but it still depended on a very fixed architecture. For instance, we could only use two sets of texture coordinates. What if we needed 3? Or what if I had data I needed in my shader that didn't have a construct in OpenGL?

I'm not going into too much detail here yet but eventually we became pretty free as to what data we loaded into a VBO and we could define how OpenGL should dissect this data by defining attributes using the glVertexAttribPointer function.

At this point we saved ourselves lots of overhead in copying data into GPU memory each frame but the tradeoff was an immense amount of setup each time you needed to draw an object.

Vertex Array Objects
To solve this, Vertex Array Objects (or VAOs for short) were introduced. A VAO remembers a certain amount of state and once you activate it, that state is restored.
The first time you make a new VAO active it is blank, and you then set up the required state once: binding the right VBOs, setting up your attributes, etc.

Then whenever you need to draw your object you simply make the correct VAO active and call our age-old glDrawElements function.

When they were initially introduced (as an extension, before becoming part of core in OpenGL 3.0) they were optional.
In the OpenGL 3 core profile they are mandatory, and they are a constant source of blank screens as it is very easy to forget this or to not realize you haven't set the correct state in relation to your VAO.

Putting it all together

Now that we have all our basic ingredients it is time to take a closer look at our new main.c and this time I'm going to explain most of it line by line. I am removing most of the comments from the source code and a lot of the error checking so be sure to look at the original on GitHub.

Do note that in explaining I'm trying to stick to the basics and not go too far in depth. Eventually we'll get there as we'll build more and more complex examples in future posts.

// include these defines to let GLFW know we need OpenGL 3 support
#define GLFW_INCLUDE_GL_3
#define GLFW_INCLUDE_GLCOREARB
#include <GLFW/glfw3.h>
Here we include our GLFW library. Note the addition of two defines which enable OpenGL 3 and COREARB support.
You must have the first. The second is optional in the sense that if you do not include it, you will need an extension manager such as GLEW.
We may do this in a future post as I'm not sure how much is supported by default in GLFW; I've previously used GLEW in all my projects.
// include some standard libraries
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <syslog.h>
These are just some standard C libraries that are being included. Note that we also include syslog for error logging.
I think this is also supported on Linux, but I haven't done enough testing on Windows; we'll probably get there in the next post.
// include support libraries including their implementation
#define MATH3D_IMPLEMENTATION
#include "math3d.h"
#define SHADER_IMPLEMENTATION
#include "shaders.h"
This is where we include our two new libraries. Note the preceding defines that result in the implementation being included.
// For now just some global state to store our shader program and buffers
GLuint shaderProgram = NO_SHADER;
GLuint VAO = 0;
GLuint VBOs[2] = { 0, 0 };
Here we define some global variables into which we'll store our shader program ID and the IDs for our VAO and two VBOs.
These are globals for now to keep our example simple.
// function for logging errors
void error_callback(int error, const char* description) {
  syslog(LOG_ALERT, "%i: %s", error, description);  
};
GLFW lets us define a callback function that it will call when an error is encountered. We will tell GLFW later on that this is our function.
I have taken the same approach within the shader library and we will thus also give our shader library a pointer to this function.
For now the function simply writes the error to our system log.
// load contents of a file
// return NULL on failure
// returns string on success, calling function is responsible for freeing the text
char* loadFile(const char* pFileName) {
  ...
};
This function loads the contents of a file into a string. This is C 101 so I'll skip the code.
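For completeness, a minimal version of such a loader could look like the sketch below (the actual version on GitHub may differ; error logging is left out):

```c
#include <stdio.h>
#include <stdlib.h>

/* load contents of a file
   returns NULL on failure
   returns a zero-terminated string on success, caller must free() it */
char* loadFile(const char* pFileName) {
  char* result = NULL;
  FILE* file = fopen(pFileName, "rb");
  if (file != NULL) {
    long size;
    fseek(file, 0, SEEK_END);
    size = ftell(file);
    fseek(file, 0, SEEK_SET);
    result = (char*) malloc(size + 1);
    if (result != NULL) {
      if (fread(result, 1, size, file) == (size_t) size) {
        result[size] = 0; /* zero terminate so we can treat it as a string */
      } else {
        free(result);
        result = NULL;
      }
    }
    fclose(file);
  }
  return result;
}
```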
void load_shaders() {
  char* shaderText = NULL;
  GLuint vertexShader = NO_SHADER, fragmentShader = NO_SHADER;
  
  // set our error callback...
  shaderSetErrorCallback(error_callback);
  
  // and load our vertex shader
  shaderText = loadFile("simple.vs");
  vertexShader = shaderCompile(GL_VERTEX_SHADER, shaderText);
  free(shaderText);
    
  // and load our fragment shader
  shaderText = loadFile("simple.fs");
  fragmentShader = shaderCompile(GL_FRAGMENT_SHADER, shaderText);
  free(shaderText);
    
  // link our program
  shaderProgram = shaderLink(2, vertexShader, fragmentShader);
                
  // no longer need our shaders
  glDeleteShader(fragmentShader);
  glDeleteShader(vertexShader);
};
I've removed the error handling here to make the code more readable. Basically we load our vertex shader text, compile it, free the text, then repeat the same for our fragment shader, and finally link our shader program. After this we no longer need the individual shaders so we free those up.
void unload_shaders() {
  if (shaderProgram != NO_SHADER) {
    glDeleteProgram(shaderProgram);
    shaderProgram = NO_SHADER;
  };
};
We also have a function to delete our shader program which we'll call when we're cleaning up.
void load_objects() {
  // data for our triangle
  GLfloat vertices[9] = {
    -0.6f, -0.4f,  0.0f,
     0.6f, -0.4f,  0.0f,
     0.0f,  0.6f,  0.0f
  };
  GLfloat colors[9] = {
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f
  };
  GLuint indices[3] = { 0, 1, 2 };
    
  // we start with creating our vertex array object
  glGenVertexArrays(1, &VAO);
  
  // and make it current, all actions we do now relate to this VAO
  glBindVertexArray(VAO);
  
  // and create our two vertex buffer objects
  glGenBuffers(2, VBOs);
  
  // load up our vertices
  glBindBuffer(GL_ARRAY_BUFFER, VBOs[0]);
  
  // size our buffer
  glBufferData(GL_ARRAY_BUFFER, sizeof(vertices) + sizeof(colors), NULL, GL_STATIC_DRAW);
  
  // layout (location=0) in vec3 vertices;
  glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(vertices), vertices);
  glEnableVertexAttribArray(0);
  glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, (GLvoid *) 0);
  
  // layout (location=1) in vec3 colors;
  glBufferSubData(GL_ARRAY_BUFFER, sizeof(vertices), sizeof(colors), colors);
  glEnableVertexAttribArray(1);
  glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 3, (GLvoid *) sizeof(vertices));
  
  // load up our indices
  glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, VBOs[1]);
  glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
  
  // and clear our selected vertex array object
  glBindVertexArray(0);
};
Here it all gets a bit more complex. In this method we initialise our VAO and two VBOs that hold all our data needed to draw our one triangle.
At the top we define a few arrays containing the data.

We then create our VAO using the function glGenVertexArrays and make that our current VAO by binding it with glBindVertexArray.

We then create our two VBOs with a single call to glGenBuffers. Our first will be used for vertex and color data, the second for index data.

We bind our first VBO using glBindBuffer to make it current. Note the constant GL_ARRAY_BUFFER which tells OpenGL we're building our buffer containing vertex data. Also note that because our VAO was bound before, our first VBO now becomes the buffer containing vertex data for our VAO.
At this point however our VBO is an empty buffer, we need to tell OpenGL how large our buffer needs to be. We do this by calling glBufferData.
Note here that the first parameter specifies that we're still dealing with our buffer containing our vertex data, the second parameter defines the size of our buffer (we are storing vertex and color data here), our third parameter is NULL (we'll come back to this later) and our last parameter defines this as a static buffer (this tells OpenGL we'll set our data once and it won't change after that).

Now we need to load our vertex and color data into our data buffer. We do these with the next set of commands.
glBufferSubData loads data into a part of our buffer and we use it to load our vertex data in the first half and the color data in the second half of our buffer.
glEnableVertexAttribArray simply enables the use of an attribute, again this is remembered in the state of our VAO.
glVertexAttribPointer finally binds the data we just loaded into our buffer to an attribute in our shader. Remember our "layout (location=n)" prefix in our vertex shader? The n corresponds to the attribute number in our calls. Here we are telling OpenGL how to interpret the data; in our case we tell it that we have 3 floats for each entry.
Attribute 0 is now our vertex data, attribute 1 our color data.

We again use glBindBuffer and glBufferData but now with GL_ELEMENT_ARRAY_BUFFER to load our index information. Our indices define which vertices make up our triangles (well in our case our single triangle).
Note that this time we do use our 3rd parameter of glBufferData as we load our indices directly into our buffer. There is no need to break this up. There is also no need to define attributes here.

Finally we unbind our VAO by calling glBindVertexArray with our parameter set to 0. This isn't very important in our little example here but it is a good habit to learn as it prevents accidental changes to whatever VAO is current.

void unload_objects() {
  glDeleteBuffers(2, VBOs);
  glDeleteVertexArrays(1, &VAO);
};
Eventually we'll want to clean up and we do this by deleting our two VBOs and our VAO.
static void key_callback(GLFWwindow* window, int key, int scancode, int action, int mods) {
  if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
    glfwSetWindowShouldClose(window, GL_TRUE);
};
This is another callback GLFW allows us to use. GLFW will call this function whenever the user presses a key. We'll tell it to do so later on. Diving into this and other such callbacks is a subject for another day.
int main(void) {
  GLFWwindow* window;
  
  // tell GLFW how to inform us of issues
  glfwSetErrorCallback(error_callback);

  // see if we can initialize GLFW
  if (!glfwInit()) {
    exit(EXIT_FAILURE);    
  };
Finally we get to the start of our main function.
This is just some standard setup:
  • a variable to hold a pointer to the window we're about to open,
  • telling GLFW about our error callback routine
  • and initializing GLFW
  // make sure we're using OpenGL 3.2+
  glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
  glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
  glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
  glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
This is a new bit. I'm not entirely sure if this is required on every platform but it is on Mac OS X. Basically this instructs GLFW that, once it creates our window, that window needs at least OpenGL 3.2 support including the core profile (i.e. our new rendering pipeline).
  // create our window
  window = glfwCreateWindow(640, 480, "GLFW Tutorial", NULL, NULL);
  if (window) {
    // make our context current
    glfwMakeContextCurrent(window);

    // tell GLFW how to inform us of keyboard input
    glfwSetKeyCallback(window, key_callback);

    // load, compile and link our shader(s)
    load_shaders();
    
    // load our objects
    load_objects();
Now we open our window and set the OpenGL context of our window as the current context. We'll talk about our options when opening a window such as full screen rendering, multi monitor support, etc. in a later post.
We also set our keyboard callback and call our load shaders and load objects functions we already talked about up above.

    // and start our render loop
    while (!glfwWindowShouldClose(window)) {
      float ratio;
      int width, height;
      mat4 mvp;
      vec3 axis;
Now it is getting interesting, this is the start of our render loop. We'll keep repeating the next bit of code over and over again for as long as our window remains open. This is where a graphics program differs from a normal window application as we're constantly updating the contents of our window instead of waiting for an event to come in.
      glfwGetFramebufferSize(window, &width, &height);
      ratio = width / (float) height;
      
      glViewport(0, 0, width, height);
      glClear(GL_COLOR_BUFFER_BIT);
First we retrieve our frame buffer size and set our viewport, then we clear the contents of our OpenGL buffer so we can start on a nice blank canvas. There are improvements to be made here but for now this will do. One thing that is important here is that we retrieve our frame buffer size, not our window size. On most hardware these will be the same but on for instance Retina screens the frame buffer might be much larger.
      mat4Identity(&mvp);
      mat4Ortho(&mvp, -ratio, ratio, -1.0f, 1.0f, 1.0f, -1.0f);
      mat4Rotate(&mvp, (float) glfwGetTime() * 50.0f, vec3Set(&axis, 0.0f, 0.0f, 1.0f));
Next we set up our model-view-projection matrix. Interestingly we do this in "reverse" order.
First we apply our orthographic projection.
We skip our view matrix as we're just looking out from the origin.
Last we apply our model matrix by rotating our model.
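The "reverse" order comes from how matrix multiplication works: the transformation we multiply in last is the one applied to our vertices first, and order genuinely matters. A tiny 2D sketch (independent of our libraries) shows that rotating then translating is not the same as translating then rotating:

```c
#include <assert.h>

typedef struct point2 {
  float x, y;
} point2;

/* rotate 90 degrees counterclockwise around the origin */
point2 rotate90(point2 p) {
  point2 r = { -p.y, p.x };
  return r;
}

/* move along the x axis */
point2 translateX(point2 p, float distance) {
  p.x += distance;
  return p;
}
```

Starting from (1, 0): translating then rotating ends up at (0, 2), while rotating then translating ends up at (1, 1). Two different spots, so the order we stack our matrices in really does matter.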
      // select our shader
      glUseProgram(shaderProgram);
      glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "mvp"), 1, GL_FALSE, (const GLfloat *) mvp.m);
Next we tell OpenGL which shader program we wish to use and we load our mvp into our shader.
To do this we first need to get our uniform ID for our mvp variable and then use that to set our mvp.
This is the quick and dirty way of doing things; we'll talk about better strategies another day as it is a subject in itself.
      // draw our triangles:)
      glBindVertexArray(VAO);
      glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, NULL);
      glBindVertexArray(0);
Thanks to all the setup we've done actually drawing our triangle is brought down to these 3 calls. Binding our VAO sets the entire state, then we draw, and finally we unbind.
      // unset our shader
      glUseProgram(0);
      
      // swap our buffers around so the user sees our new frame
      glfwSwapBuffers(window);
      glfwPollEvents();
    };
Again for our simple example unsetting our shader program is a bit of overkill but it is a good habit to learn for when things get more complex.
In our final part of our render loop we tell GLFW to swap our buffers to make all our drawing work visible and to poll for any events like a good application should.
    // close our window
    glfwDestroyWindow(window);  
  };
  
  // lets be nice and cleanup
  unload_objects();
  unload_shaders();
  
  // the end....
  glfwTerminate();
};
And in this last bit we simply nicely clean up after ourselves....

That's it folks! Compile it and we get the same colorful triangle as before, but after 10x as much code it is now being rendered using OpenGL 3+ techniques.

What's next?

What's next is that I'm going to get some sleep:) I'll reread what I've posted here over the weekend and fix any stupid mistakes I've made. The code will be up on GitHub in a minute.
I'll also look at the Windows side of this over the weekend.

After that's behind me, in the next full post we'll start looking into some 2D techniques to draw things a bit more interesting than a spinning triangle.

1 comment:

  1. Please note, in the writeup above I'm defining GLFW_INCLUDE_GLCOREARB
    to include headers for supporting OpenGL 3 but this didn't work well on Windows so I ended up switching to using GLEW. Please check part 4 for the needed changes.
