In hindsight I should have added logic for at least one point light before we moved to a deferred rendering approach, to better illustrate the differences, but it is in handling these lights that deferred rendering starts to shine.
Traditionally, in single pass shaders, we would find either a loop that runs through all the lights or fixed logic that handles a fixed set of lights (usually generated by a parser). Because this logic is repeated for every fragment rendered to screen, whether the fragment is lit by the light or not, and whether the fragment will later be overwritten or not, a lot of performance is wasted.
Whether, with the speed of modern GPUs and by only including lights that are likely to illuminate an object, a single pass shader tips the balance back in its favour, I'm not sure.
Deferred rendering avoids a lot of this overhead by doing the lighting calculation as few times as possible, working on the end result of rendering our scene to our geobuffer.
Our main light calculation
The basis of our light calculation remains the same for our point light as for our directional sunlight. I've skipped the ambient component as I feel either using the ambient sunlight as we have now, or using some form of environment mapping, gives good enough results. So we restrict our light calculation to diffuse and specular highlighting. Those calculations remain the same as with our directional light, with the one difference that our light to fragment vector plays a much larger role.
The thing that is new is that the intensity of our light diminishes as we move further away from the light. To be exact, it diminishes by the square of the distance to our light.
For computer graphics however we find that we cheat this a little. You can find much better explanations than I can possibly give, but the formulas that we'll be using are the following:
float attenuation = constant + (linear * distance) + (exponential * distance * distance);
fragcolor = lightcolor / attenuation;

I've left out a few details there, but lightColor is the color we calculated in the same way we did for our sunlight, and we divide it by our attenuation based on our distance. There are 3 values that we input into this formula next to our distance:
- a constant
- a linear component we multiply with our distance
- an exponential component we multiply with our distance squared
You can design your engine to allow for the manual input of all 3 values to give loads of flexibility, but in our engine I've simplified it. Note that when attenuation is 1.0 we get the color as is. Basically, the distance at which our formula results in 1.0 is where the light starts losing its strength.
On a quick sidenote: in my shader you'll see that if this formula returns a value larger than 1.0 I cap it. You can have some fun by letting things become overly bright, putting this threshold higher and adding some bloom effects to your lighting, but that's a topic for another day.
I'm using the fact that our light starts to lose its intensity at attenuation = 1.0 to calculate our 3 values, by specifying the radius at which I want this to happen and then calculating our 3 values as follows:
- our constant is simply 0.2
- our linear component is calculated as 0.4 / radius
- our exponential component is calculated as 0.4 / radius squared
When distance equals our radius, our formula gives 0.2 + 0.4 + 0.4 = 1.0
Finally, in theory our light has unlimited range; the intensity will keep getting smaller and smaller but it will never reach 0. But there is a point where our intensity becomes so low that it won't have a visible effect on our scene anymore. In a single stage renderer you could use this to filter out which lights are close enough to your object to be evaluated; in our deferred renderer we use it to limit how much of our screen we update with our lighting color.
Now truth be told, I'm taking a shortcut here and pretending our linear component is 0.0 and our exponential component is 0.8 / radius squared. This makes the calculation slightly easier but I overestimate the range slightly.
Our range calculation simply becomes: range = radius * sqrt((maxIllum / threshold) - 0.2)
maxIllum is simply the highest of our 3 RGB values, and threshold is the level at which our light has become too low to matter.
Adding shadowmaps
This is where point lights get a bit ridiculous and why using spotlights can be far more effective. Point lights shine in every direction and thus cast shadows in every direction. We solve this by mapping our shadowmaps onto a cube, creating 6 individual shadowmaps: one looking up from the light, one down, and one each for left, right, forwards and backwards. Then when we do our shadow checks we figure out which of those 6 shadow maps applies. I have to admit, this bit needs some improvement; I used a fairly brute force approach here, mostly because I couldn't be bothered to figure out a better way.
Unlike the shadow maps for our directional light, we use a perspective projection for these shadowmaps. I'm using the distance calculation we performed just now to set our far value. Also, these are static shadowmaps, which means we calculate them once and reuse them unless our light's position changes, instead of redoing them every frame. This saves a bunch of overhead, especially if we have loads of lights. In fact, you could save them and skip the first render step altogether.
The problem with static shadowmaps is that they won't update if objects move around, so if, say, your character walks past a point light, he/she won't cast a shadow.
We'll deal with this in another article but in short we'll leave any object that moves or is animated out of our static shadowmaps, keep a copy, and render just the objects that move or are animated before rendering our frame.
Again as with our sunlight we can also reuse our shadow maps for both eyes.
The code for creating the shadow maps is nearly identical to the code for our directional light, other than the added loop to update 6 maps and the change in calculating our projection and view matrices.
Also note that we only check the rebuild flag for the first map, if one map needs changing we assume all need to change (unlike our directional light where we check them individually):
void lsRenderShadowMapsForPointLight(lightSource * pLight, int pResolution, meshNode * pScene) {
  int i;
  vec3 lookats[] = {
    0.0, -100.0, 0.0,
    100.0, 0.0, 0.0,
    -100.0, 0.0, 0.0,
    0.0, 100.0, 0.0,
    0.0, 0.0, 100.0,
    0.0, 0.0, -100.0,
  };

  // as we're using our light position and it's the same for all shadow maps we only check our flag on the first
  if ((pLight->shadowLA[0].x != pLight->position.x) || (pLight->shadowLA[0].y != pLight->position.y) || (pLight->shadowLA[0].z != pLight->position.z)) {
    vec3Copy(&pLight->shadowLA[0], &pLight->position);
    pLight->shadowRebuild[0] = true;
  };

  // we'll initialize our shadow maps for our point light
  if (pLight->shadowRebuild[0] == false) {
    // reuse it as is...
  } else if (pScene == NULL) {
    // nothing to render..
  } else {
    for (i = 0; i < 6; i++) {
      if (pLight->shadowMap[i] == NULL) {
        // create our shadow map if we haven't got one already
        pLight->shadowMap[i] = newTextureMap("shadowmap");
      };

      if (tmapRenderToShadowMap(pLight->shadowMap[i], pResolution, pResolution)) {
        mat4 tmpmatrix;
        vec3 tmpvector, lookat;
        shaderMatrices matrices;

        // reset our last used material
        matResetLastUsed();

        // set our viewport
        glViewport(0, 0, pResolution, pResolution);

        // enable and configure our backface culling, note that here we cull our front facing polygons
        // to minimize shading artifacts
        glEnable(GL_CULL_FACE);   // enable culling
        glFrontFace(GL_CW);       // clockwise
        glCullFace(GL_FRONT);     // frontface culling

        // enable our depth test
        glEnable(GL_DEPTH_TEST);  // check our depth
        glDepthMask(GL_TRUE);     // enable writing to our depth buffer

        // disable alpha blending
        glDisable(GL_BLEND);

        // solid polygons
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);

        // clear our depth buffer
        glClear(GL_DEPTH_BUFFER_BIT);

        // set our projection
        mat4Identity(&tmpmatrix);
        mat4Projection(&tmpmatrix, 90.0, 1.0, 1.0, lightMaxDistance(pLight) * 1.5);
        shdMatSetProjection(&matrices, &tmpmatrix); // call our set function to reset our flags

        // now make a view based on our light position
        mat4Identity(&tmpmatrix);
        vec3Copy(&lookat, &pLight->position);
        vec3Add(&lookat, &lookats[i]);

        // for the two maps looking straight up or down our usual up vector
        // is parallel to our view direction, so we use the Z axis instead
        if ((i == 0) || (i == 3)) {
          vec3Set(&tmpvector, 0.0, 0.0, 1.0);
        } else {
          vec3Set(&tmpvector, 0.0, 1.0, 0.0);
        };
        mat4LookAt(&tmpmatrix, &pLight->position, &lookat, &tmpvector);
        shdMatSetView(&matrices, &tmpmatrix);

        // and render
        meshNodeShadowMap(pScene, &matrices);

        // now remember our view-projection matrix, we need it later on when rendering our scene
        mat4Copy(&pLight->shadowMat[i], shdMatGetViewProjection(&matrices));

        // we can keep it.
        pLight->shadowRebuild[i] = false;

        // and we're done
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
      };
    };
  };
};
Rendering our lights
Now it is time to actually render our lights. This is done by calling gBufferDoPointLight for each light that needs to be rendered. We make the assumption that our directional light has been rendered and that we thus have content for our entire buffer. Each light is now rendered on top of that result using additive blending. This means that instead of overwriting our pixel, the result of our fragment shader is added to the end result. gBufferDoPointLight assumes our blending has already been set up, as we need the same settings for every light. Our loop in our render code therefore looks like this:
// now use blending for our additional lights
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);

// loop through our lights
for (i = 0; i < MAX_LIGHTS; i++) {
  if (pointLights[i] != NULL) {
    gBufferDoPointLight(geoBuffer, &matrices, pointLights[i]);
  };
};
As you can see for now we've just got a simple array of pointers to our lights and it currently holds 3 test lights. Eventually I plan to place the lights inside of our scene nodes so we can move lights around with objects (and accept the overhead in recalculating shadow maps). For now this will do just fine.
The rendering of our light itself is implemented in the vertex and fragment shaders called geopointlight. Most implementations I've seen render a full sphere with a radius of our maximum light distance but for now I've stuck with rendering a flat circle and doing so fully within our vertex shader (using a triangle fan):
#version 330

#define PI 3.1415926535897932384626433832795

uniform float radius = 100.0;
uniform mat4 projection;
uniform vec3 lightPos;

out vec2 V;
out float R;

void main() {
  // we draw a circle as a triangle fan
  // the first point is in the center
  // then each point is rotated by 10 degrees
  //        4
  //    3 ----- 5
  //    /\   |   /\
  //   /  \  |  /  \
  // 2|------1------|6
  //   \  /  |  \  /
  //    \/   |   \/
  //    9 ----- 7
  //        8
  if (gl_VertexID == 0) {
    vec4 Vproj = projection * vec4(lightPos, 1.0);
    V = Vproj.xy / Vproj.w;
    R = radius;
  } else {
    float ang = (gl_VertexID - 1) * 10;
    ang = ang * PI / 180.0;
    vec4 Vproj = projection * vec4(lightPos.x - (radius * cos(ang)), lightPos.y + (radius * sin(ang)), lightPos.z, 1.0);
    V = Vproj.xy / Vproj.w;
    R = 0.0;
  };

  gl_Position = vec4(V, 0.0, 1.0);
}
Now drawing a circle this way ensures that every pixel that requires our lighting calculation to be applied will be included. For very bright lights this means the entire screen, but for small lights the impact is severely minimised.
You can do a few more things if you use a sphere to render the light but there are also some problems with it. We'll revisit this at some other time.
I'm not going to put the entire fragment shader here; it's nearly identical to our directional light fragment shader. The main differences are:
- we discard any fragment that our light doesn't affect
- we ignore the ambient buffer
- we use our boxShadow function to check the correct shadowmap
- we calculate our attenuation and divide our end result by it
Note that if our attenuation is smaller than 1.0 we ignore it. Basically we're within the radius at which our light is at full strength. If we didn't do this we'd see that things close to the light become overly bright. Now that can be a fun thing to play around with. The OpenGL Superbible has an interesting example where they write any value where any color component is bigger than 1.0 to a separate buffer. They then blur that buffer and write it back over the end result to create a bloom effect.
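The point light specific part of that fragment shader then looks roughly like this sketch (the variable names here are illustrative, check the geopointlight source for the real thing):

```glsl
// distance from our light to the fragment we're shading
float dist = length(lightPos - fragPos);

// past our calculated range this light no longer affects the scene
if (dist > maxDistance) {
  discard;
};

// our attenuation, never letting it drop below 1.0 so fragments
// within our radius get our light color at full strength
float attenuation = constant + (linear * dist) + (exponential * dist * dist);
attenuation = max(attenuation, 1.0);

fragcolor = vec4(lightColor / attenuation, 1.0);
```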
But at this stage we keep things easy.
Seeing our buffers
Last but not least, I'm now using an array of shaders instead of individual variables and have introduced an enum to manage this. There are two new shaders, rect and rectDepth, both using the same vertex shader. These two shaders simply draw textured rectangles onto the screen.
At the end of our render loop, if we have our interface turned on (toggled by pressing i), we now see our main buffers.
At the top we see our 5 geobuffer textures.
Then we see our 3 shadow maps for our directional light.
Finally we see our 6 shadow maps of our first point light.
Handy for debugging :)
Here is a shot where we can see the final result of our 3 lights illuminating the scene:
Check out the sourcecode so far here
I'll be implementing spotlights next time. These are basically easier than our point lights as they shine in a restricted direction, and we can thus implement them with a single shadowmap.
But we can also have some fun with the color of these lights.
I'm also going to look into adding support for transparent surfaces.
Last but not least, I want to have a look into volumetric lighting. This is something I haven't played around with before so it is going to take a bit of research on my side.