Saturday 10 January 2015

Space Colonization Algorithm part 3

Holidays are unfortunately over, which means time is more constrained. Most of what is in this part I did on the last days of my holidays, and the changes to the source code have already been added to the GitHub repository.

First I had to make a few improvements to the tree optimisation algorithm. I had made a few typos in some of the vector classes and it wasn't calculating things correctly. Once that was fixed I could tweak the optimisation algorithm slightly and the result is great. The only noteworthy tweak is that I calculate and keep the direction vector for as long as I'm merging the same section. This means very gradual bends are now optimised much better.

OpenGL 3/4

The most work went into changing over to using OpenGL 3/4. As I'm using a tessellation shader step for automatic LOD (more on this later on), the program now requires OpenGL 4 capable hardware. If that isn't what you have, you could remove the tessellation logic and enhance the mesh generator to add more detail instead.

I'm not going into much detail about this step; I might do a separate (set of) blog post(s) on setting up OpenGL 3 as a render environment. But I'll go through the basics here.

As I'm developing on a Mac, most of the fixed rendering pipeline is gone. It basically means you're doing everything in shaders yourself. I've had to add a matrix class (only basic functions supported for now) and add a few things to the vector classes to make life easier. Again, I'm not switching to an existing maths library as it is easy to build these classes with just what you need, but you could get a head start by using something like GLM.

I did add the excellent stb_image.h library into the mix to load textures.
Also note that in constructing the GLFW window a number of hints have been added that are required to enable OpenGL 3+ (and get you to the point of no return):

// make sure we're using at least OpenGL 3.2 with a core profile
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
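
Since the tree shader uses tessellation you need an OpenGL 4 context for that part, and on a Mac the highest core profile you can request is 4.1. As a rough sketch (the exact version the code in the repository asks for may differ), the hints would become:

// request an OpenGL 4.1 core profile so the tessellation shaders are available
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 4);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);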

Shaders

First we need to implement shaders now that we no longer have our fixed rendering pipeline. The shaders are all inline, which makes them a little harder to read, but it's just easier to deploy them that way. If I had a larger project I would create them as files on disk and just load them in (in fact the game engine I'm working on comes with a complete precompiler for shaders).
There are two shaders:
- "simpleshader" which basically passes through a color
- "treeshader" which renders our final mesh and has a tessellation shader step

Let's have a quick look at my simple shader, as it explains a few things about how this stuff works and is used to render the kind of things we used to do in OpenGL 1.

-- vertex shader
#version 330

uniform mat4 mvp;
layout (location=0) in vec3 vertices;

void main() {
  vec4 V = vec4(vertices, 1.0);
  gl_Position = mvp * V;
}
-- fragment shader
#version 330

uniform vec4 color;
out vec4 fragcolor;

void main() {
  fragcolor = color;
}

Both shaders start with "#version 330", which simply tells OpenGL what feature set we want. Our tree shader is set to "410 core" as our tessellation shader isn't supported otherwise. From 330 onwards mostly new features have been added, but before that there are differences in dialect. For instance, we used to have built-in variables for the color and our model/view/projection matrix, but no more.

Our vertex shader is where most of our logic resides. Let's have a closer look at each line of code, starting with our 3rd line:
uniform mat4 mvp;
The first word on this line, uniform, tells the shader this is a variable that will be set from outside of the shader but is otherwise immutable. Our variable in this case is a 4x4 matrix called mvp (model/view/projection matrix).
In our rendering code we calculate our model/view/projection matrix and then give it to our shader using the setMat4Uniform method on our shader class, which in turn calls glUniformMatrix4fv, the OpenGL command for copying the matrix across.
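Under the hood such a method is little more than this (a rough sketch; the exact code in my shader class may look slightly different):

// hypothetical sketch of what setMat4Uniform boils down to
void setMat4Uniform(GLuint pProgram, const char* pName, const float* pMatrix) {
  GLint location = glGetUniformLocation(pProgram, pName);  // look the uniform up by name
  if (location >= 0) {
    glUniformMatrix4fv(location, 1, GL_FALSE, pMatrix);    // copy 1 matrix, no transpose
  }
}

Back to our vertex shader, the next line is: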
layout (location=0) in vec3 vertices;
This one is a bit more tricky. "layout (location=0)" tells OpenGL this is our "0" attribute in our vertex buffer object; we bind this by calling glVertexAttribPointer. Before OpenGL 3 this already existed but was somewhat hidden and hardcoded. Also, before OpenGL 3 you had to call this every time before you rendered your mesh, while it is now stored in our vertex array object (more on this later).
To make a long story short, this line of code makes our vertices available to us in our shader and automatically refers to the correct vertex being handled in our shader.
void main() {
Indicates the start of our shader program.
vec4 V = vec4(vertices, 1.0);
Simply turns our 3D vertex into a 4D vector.
gl_Position = mvp * V;
Applies our matrix to our vertex so that we end up with the correct coordinates on screen and in our depth buffer.

Our fragment shader is much simpler:
uniform vec4 color;
Provides us with a way to set the color we want to render with from outside of our shader.
out vec4 fragcolor;
Defines that the output of our shader is an RGBA color variable called fragcolor.
fragcolor = color;
Simply assigns our input color to our output.

The shaders are compiled once and then simply used by calling glUseProgram.
For compiling the shaders have a look at the shader class in the source code.
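
In a nutshell it comes down to the standard OpenGL calls below (a simplified sketch without any error checking; the shader class in the repository does a bit more):

// simplified sketch: compile a vertex and a fragment shader and link them into a program
GLuint compileShader(GLenum pType, const char* pSource) {
  GLuint shader = glCreateShader(pType);
  glShaderSource(shader, 1, &pSource, NULL);
  glCompileShader(shader);          // check GL_COMPILE_STATUS and the info log in real code
  return shader;
}

GLuint linkProgram(const char* pVertexText, const char* pFragmentText) {
  GLuint program = glCreateProgram();
  glAttachShader(program, compileShader(GL_VERTEX_SHADER, pVertexText));
  glAttachShader(program, compileShader(GL_FRAGMENT_SHADER, pFragmentText));
  glLinkProgram(program);           // check GL_LINK_STATUS and the info log in real code
  return program;
}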

Vertex Array Object 

The second change is that we need to bind all our state into a Vertex Array Object, or VAO for short. A VAO is little more than a container of state; before VAOs were added to OpenGL you had to bind all the buffers and related state each time before rendering an object.
VAOs simply added some convenience, allowing you to do all the binding calls once and then simply make the VAO active.
In OpenGL 3 the use of a VAO has been made mandatory, which can be a bit of a nuisance.

If you're doing the right thing, you would create a VAO for your 3D object before you render anything, bind the buffers, load them with data, and then reuse the VAO whenever you need it during rendering.
In my tree application I do some of this setup in my render loop and sometimes even repeat it needlessly. This is purely because I want to keep showing each step of the algorithm, and I wouldn't recommend it as the right approach :)
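
For reference, that one-time setup looks roughly like this (a sketch with made-up variable names, binding only attribute 0 from our simple shader):

// hypothetical one-time setup of a VAO with a vertex and an index buffer
GLuint VAO, vertexVBO, indexVBO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);                            // attribute and index state set below is recorded in the VAO

glGenBuffers(1, &vertexVBO);
glBindBuffer(GL_ARRAY_BUFFER, vertexVBO);          // our vertex data
glEnableVertexAttribArray(0);                      // matches layout (location=0) in the shader
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*) 0);

glGenBuffers(1, &indexVBO);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);   // our face indices

glBindVertexArray(0);                              // done, unbind until we render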

Lastly, the VAO binds Vertex Buffer Objects to itself and, when activated, reloads that state. The contents of those buffers are another matter entirely, and you don't need to have a VAO active to update or set the contents of the buffers. Equally, a buffer object is not restricted to a single VAO. If you have a mesh that is rendered using different shaders to simulate different materials, you might create a VAO for each of those materials, bind the same vertex buffer that contains the vertices of your mesh, but bind a different buffer that contains the indices of the faces being rendered.

Vertex Buffer Objects

Vertex Buffer Objects, or VBOs for short, have been around for a while now. They are, very simply put, buffers of data: generally speaking either a buffer containing the vertices of your mesh (and their normals, texture coordinates and other such things) or the indices that form the faces of your mesh (mostly triangles).

As mentioned before, generally you would load up your mesh, create and load data into your VBOs, bind them to a VAO and you'd be done; the VAO would be ready for rendering the mesh in your render loop.
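
In code that boils down to something like this (again a sketch with made-up names, reusing the VAO and buffers from the previous section):

// upload the mesh data once
glBindVertexArray(VAO);
glBindBuffer(GL_ARRAY_BUFFER, vertexVBO);
glBufferData(GL_ARRAY_BUFFER, numVertices * 3 * sizeof(float), vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexVBO);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, numIndices * sizeof(GLuint), indices, GL_STATIC_DRAW);
glBindVertexArray(0);

// ...and rendering it each frame is then just
glUseProgram(shaderProgram);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "mvp"), 1, GL_FALSE, mvp);
glBindVertexArray(VAO);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);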

For practicality I'm doing most of this logic inside of the render loop for the tree application, but that's generally not what you would do in a normal application.

There is more to the OpenGL 3 conversion of the code, but these three are the major highlights.

Mesh generation

With all that ground work done we finally come back to our tree generation algorithm. We've basically got a working model of the Space Colonization Algorithm already but now we need to turn it into a 3D mesh that can be rendered.

I'm taking a shortcut here because I've moved some of the complexity into the shader logic. This allows me to simply box up the tree and create a very simple mesh. It also means my mesh is made up of quads, not triangles.

For each vertex of a tree node I need to create 4 vertices within the correct plane. That plane is defined by taking the direction of the branch towards that vertex and the direction on to the next vertex, averaging the two, and using the result as the normal of the plane.

Where a tree branches I create multiple sets depending on the direction I branch into.

After that I simply create a box using the 4 vertices of each of the 2 points of a node.

Creating the 4 vertices for each cross section was the tricky part and I'm only 90% happy with the solution I came up with. I simply take the cross product of my plane normal vector and an arbitrary constant vector to get a vector that is perpendicular to my normal vector and points in roughly the same direction for every cross section. Then I rotate that vector in 90 degree steps to create the other 3 vertices. Finally I scale the vectors depending on how far down the tree I am, so the tree is thin at the end of its branches and thick at its roots.
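
In code the idea looks roughly like this (a sketch; vec3, cross and normalize stand in for the vector classes mentioned earlier, and the names are assumptions rather than the exact code in the repository):

// sketch: build the 4 cross section vertices around one point of a branch
void crossSection(vec3 pPosition, vec3 pNormal, float pRadius, vec3 pVertices[4]) {
  vec3 arbitrary = vec3(0.0, 0.0, 1.0);              // breaks down if the branch grows along this axis
  vec3 side = normalize(cross(pNormal, arbitrary));  // perpendicular to our plane normal
  vec3 up = cross(pNormal, side);                    // side rotated 90 degrees around the normal

  // our four corners, scaled by how thick the branch is at this point
  pVertices[0] = pPosition + side * pRadius;
  pVertices[1] = pPosition + up * pRadius;
  pVertices[2] = pPosition - side * pRadius;
  pVertices[3] = pPosition - up * pRadius;
}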

It works really well unless my tree is growing in roughly the same direction as my arbitrarily chosen vector.
Also it didn't work very well with my roots so I've disabled them for now.

The last step is done within the tessellation control shader and the tessellation evaluation shader.

The control shader is a fairly straightforward one where I take the size of each edge of my quad as it is projected on screen and multiply it by a constant. Remembering that our screen coordinates are in the -1.0 to 1.0 range at this point, not at screen resolution, we end up with a number that neatly divides our mesh into roughly equally sized triangles. The closer the tree, the more triangles.

If we took this division as is we would have a highly detailed tree that would still be square. Our evaluation shader therefore applies a smoothing algorithm and we get a really nicely rounded mesh. For now I've used a technique called Phong Tessellation, which works very well. It's described here much better than I ever could: http://liris.cnrs.fr/Documents/Liris-6161.pdf

Still, I'm pretty happy with the end result:


Next steps

The most obvious omission is that we don't have any leaves. That will be my next focus.
The other thing I've been planning is to add interface controls for creating the tree, maybe giving the user the ability to "paint" the point cloud and define our starting tree shape.
What is also needed is a way to load/save our tree as we're working on it and to export it into a mesh format that can be used in other applications.

Saturday 3 January 2015

Tree update

The last few days have kept me busy with the kids, playing too much on my Xbox, trying to finish A Dance With Dragons and then some, but I did manage to put more time into my tree thingy.

I haven't had time to write up part 3 of the series, and when I do it will be a long one: I've got an initial version of the code working that extrudes the tree nodes to a mesh, and I've ported everything I had written over to using Vertex Array Objects, Vertex Buffer Objects, programmable shaders including tessellation shaders, etc. That last one requires OpenGL 4.0 support but hey...

As my holidays end in a little over 24h it may be some time before I can do the write-up but for the time being, just a few progress screenshots:



I did upload all the changes to my GitHub page.