Alrighty, this one is going to take a bit of time to slowly put together. I'm tinkering with the idea of changing the format of these blog posts. This is really no longer a tutorial but is starting to become a development blog on building my little 3D engine.
The sheer amount of changes I've made to the engine over the past two weeks, many of which are just background things, would only detract from the topic I'm trying to discuss...
So let's just see where this leads us, but I may start focusing on the core changes and leave it up to you to examine the other stuff by looking at the changes checked in on GitHub.
Speaking of GitHub, I'm still polishing things up, so I won't be checking anything in just yet. This post is mostly an introduction to what we'll be doing.
Deferred lighting, I've mentioned it a few times now since I started this series. What is it all about?
Deferred lighting, or deferred shading, is a technique that splits our rendering into two (or more) separate passes. So far we've been doing all the work in a single pass: for every pixel we render to the screen, we do all the calculations needed to determine its final color in one go.
With deferred lighting we first render our scene to a number of temporary buffers, storing information about all our objects without doing any lighting calculations, and then we do a pass for each light in our scene to render our final scene.
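To give you a rough idea of that first pass, here is a minimal sketch of what its fragment shader could look like. The buffer layout and variable names here are my assumptions for illustration, not necessarily what we'll end up using in the engine:

```glsl
#version 330 core

// interpolated from the vertex shader (names are assumptions)
in vec3 V_position;   // fragment position in view space
in vec3 V_normal;     // fragment normal in view space
in vec2 V_texcoord;

uniform sampler2D diffuseMap;

// multiple render targets: one output per temporary buffer
layout (location = 0) out vec4 outAlbedo;    // surface color, no lighting applied
layout (location = 1) out vec4 outNormal;    // view space normal
layout (location = 2) out vec4 outPosition;  // view space position

void main() {
  outAlbedo   = texture(diffuseMap, V_texcoord);
  outNormal   = vec4(normalize(V_normal), 1.0);
  outPosition = vec4(V_position, 1.0);
}
```

Note that there isn't a single lighting calculation in here; we just record what is at each pixel and move on.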
So why is this a good thing? Well, let's start by saying it doesn't have to be :)
When we look at our current example, an outdoor scene with very little detail, it won't help us at all. In fact, after all the work of the last two weeks, the engine is slower than before...
But there is something we can already see in our current implementation of our shaders. Just take a good look at our standard fragment shader. Even with just a single light, a very simple directional light, 90% of the fragment shader is focused on calculating the amount of light illuminating our fragment.
Now with a simple scene, a bit of sorting of our objects, and good use of our Z-buffer, it's a fair assumption we're already discarding most of the fragments that would otherwise have these calculations done more than once for the same pixel as objects overlap.
But with a complex scene, with many overlapping objects, the likelihood of doing many of these complex calculations for nothing increases.
This is where the first strong point of a deferred shader comes into play. By rendering all the geometry to a set of temporary buffers without doing any of our lighting calculations, this first step becomes very fast. We can then do a second pass that only does the lighting calculations for those fragments that actually end up as pixels in our end result. The overhead of rendering our scene in two passes can easily be offset by the time we save not doing complex lighting calculations multiple times for the same pixel.
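That second pass can be as simple as rendering a full screen quad and sampling the buffers we just filled. Again a sketch under the same assumed buffer layout, for the kind of simple directional light we have today:

```glsl
#version 330 core

in vec2 V_texcoord;  // texture coordinate of our full screen quad

// the temporary buffers filled by our first pass
uniform sampler2D albedoBuffer;
uniform sampler2D normalBuffer;

uniform vec3 lightDir;  // direction of our sun, in view space

out vec4 fragcolor;

void main() {
  vec4 albedo = texture(albedoBuffer, V_texcoord);
  vec3 normal = normalize(texture(normalBuffer, V_texcoord).xyz);

  // the diffuse term is now calculated exactly once per screen
  // pixel, no matter how many objects overlapped here
  float ndotl = max(0.0, dot(normal, normalize(-lightDir)));
  vec3 ambient = vec3(0.2);

  fragcolor = vec4(albedo.rgb * (ambient + ndotl), albedo.a);
}
```

All the expensive lighting math has moved from "once per fragment rendered" to "once per pixel on screen".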
But the second strong point is where this approach comes into its own. Right now we have one light, a light that potentially affects the whole screen, and in that respect our sun is actually rather unique.
Imagine an indoor scene with a number of lights scattered around the room, maybe a desk lamp somewhere, or the light cast by a small LED clock display.
With a single pass renderer we're checking the effect of these lights for every fragment we attempt to render, even though they may only affect a tiny part of our end result.
In a deferred renderer we use 3D meshes that encompass the area affected by these types of lights. We then render these "lights" onto our end result and, as part of our second pass, only apply the lighting calculations for each light where it counts.
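As a sketch: a point light could be rendered as a sphere mesh scaled to the light's radius, so only the screen pixels actually covered by that sphere ever run its shader. The names and the simple linear falloff below are again my assumptions:

```glsl
#version 330 core

// the temporary buffers from our first pass
uniform sampler2D albedoBuffer;
uniform sampler2D normalBuffer;
uniform sampler2D positionBuffer;

uniform vec2 screenSize;
uniform vec3 lightPos;     // light position in view space
uniform vec3 lightColor;
uniform float lightRadius;

out vec4 fragcolor;

void main() {
  // we're rendering a sphere mesh, so derive the buffer lookup
  // from the fragment's position on screen
  vec2 uv = gl_FragCoord.xy / screenSize;

  vec3 position = texture(positionBuffer, uv).xyz;
  vec3 normal   = normalize(texture(normalBuffer, uv).xyz);
  vec4 albedo   = texture(albedoBuffer, uv);

  vec3 toLight = lightPos - position;
  float dist = length(toLight);

  // simple linear falloff toward the edge of the light's radius
  float atten = max(0.0, 1.0 - dist / lightRadius);
  float ndotl = max(0.0, dot(normal, toLight / dist));

  // additively blended on top of what's already in our end result
  fragcolor = vec4(albedo.rgb * lightColor * ndotl * atten, 1.0);
}
```

Each light like this would be drawn with additive blending, so every light simply adds its contribution to the pixels it touches.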
When we look at an environment where we want to deal with many lights affecting complex geometry, a deferred renderer can be much faster than a single pass renderer.
But it is very important to realise that as with many techniques there is a tradeoff.
Basically there are four main weak points of a deferred renderer that I can name off the top of my head:
- for simpler or more open space environments the added overhead won't be won back, as I've explained above.
- as we'll see in this part of the series, transparency is relatively hard to do. You either need to move rendering transparent objects to your lighting pass or introduce additional buffers.
- multi layer materials with different lighting properties are very hard to do, because we don't do our lighting until we've already "flattened" our material.
- we can't use some optimisations in lighting normally performed in the vertex shader, because the input into our lighting pass is in view space. It's a shame we didn't implement it before we got to this point so I could show the difference, but an example of this is normal mapping. You can avoid doing some costly calculations in the fragment shader by projecting your light based on the orientation of the polygon you're rendering and using your normal map directly. But in a deferred shader you have to project all the normals in your normal map to view space.
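To make that last point a bit more concrete, in a deferred renderer the first pass has to transform every normal-mapped normal into view space before writing it out to our temporary buffers. A sketch of what that could look like, with the tangent/normal variable names being my assumptions:

```glsl
// inside the first pass fragment shader

in vec3 V_normal;    // view space normal of the polygon
in vec3 V_tangent;   // view space tangent
in vec2 V_texcoord;

uniform sampler2D normalMap;

vec3 viewSpaceNormal() {
  vec3 N = normalize(V_normal);
  vec3 T = normalize(V_tangent);
  vec3 B = cross(N, T);

  // sample the tangent space normal and remap from [0,1] to [-1,1]
  vec3 n = texture(normalMap, V_texcoord).xyz * 2.0 - 1.0;

  // the projection we can no longer avoid: every normal in the
  // map must end up in view space before we store it
  return normalize(mat3(T, B, N) * n);
}
```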
When we look at the open world environment we've been working with so far, a single pass shader would definitely be the better choice. Say we turn this into a little flight simulator, or a little racing simulator: we'll probably want to stick with the single pass shaders we've built so far.
But when I look at the platformer I want to build, I'm suddenly less concerned about the drawbacks and much more enthusiastic about the lighting capabilities of our deferred lighting approach.
At this point in time the changes to the renderer that enable deferred shading have replaced the single pass approach. This is one of the areas where I want to do a bit more spit and polish; ideally I want to end up in a situation where I can choose which approach to take.
Anyways, that's it for now. In the next post we'll start having a closer look at how the deferred lighting renderer actually works.