marciano wrote:
the uniform nodeId is available in the latest trunk now
Excellent! I'm doing this work on the trunk at the moment, so I'll do an SVN update soon. This should make the DSF algorithm almost perfect.
Quote:
Do you already have an idea how to realize that [normal-group-IDs]? We don't store any smoothing group info for the meshes...
The paper doesn't seem to mention smoothing groups, so I think they're generating their own groups by defining a threshold for the change in normal from one face to the next. Either the colladaconv tool, or a new tool that uses H3G files as input and output, could be used to perform this step. In the paper, they store these IDs in the .w component of the tangent vector.
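As a minimal sketch of what that offline step might look like, here is a flood fill over face adjacency that groups faces whose normals differ by less than a threshold angle. The mesh data, adjacency format, and the 30-degree threshold are all illustrative assumptions, not the paper's or engine's actual representation:

```python
import math

# Hypothetical mesh data: unit face normals, and an adjacency list
# mapping each face index to its edge-adjacent faces.
face_normals = [(0.0, 0.0, 1.0), (0.0, 0.1, 0.995), (1.0, 0.0, 0.0)]
adjacency = {0: [1], 1: [0, 2], 2: [1]}

ANGLE_THRESHOLD = math.radians(30.0)  # assumed smoothing threshold

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def assign_normal_groups(normals, adjacency, threshold):
    """Flood-fill faces into groups; a face joins its neighbour's group
    only if their normals differ by less than the angle threshold."""
    group = [-1] * len(normals)
    next_id = 0
    for seed in range(len(normals)):
        if group[seed] != -1:
            continue
        stack = [seed]
        group[seed] = next_id
        while stack:
            f = stack.pop()
            for n in adjacency.get(f, []):
                if group[n] != -1:
                    continue
                cos_angle = max(-1.0, min(1.0, dot(normals[f], normals[n])))
                if math.acos(cos_angle) < threshold:
                    group[n] = next_id
                    stack.append(n)
        next_id += 1
    return group

groups = assign_normal_groups(face_normals, adjacency, ANGLE_THRESHOLD)
```

With the toy data above, faces 0 and 1 are nearly coplanar and share a group, while face 2 (90 degrees away) gets its own ID. The resulting per-face IDs could then be baked into the .w of the tangent, as the paper describes.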
However, I don't actually need these normal-group IDs for the technique to work -- they're just an optimisation!
Instead of checking normal-IDs, I can decode the normals from the GBuffer and use these in the DSF test. The IDs are used in the paper to avoid the cost of decoding the normals (which isn't actually that expensive).
In fact, checking the normals is working so well for DSF in my latest test that I don't need to use the depth information any more. There are only a few occasional errors, which nodeId should fix.
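To make the DSF test concrete, here is a sketch of how the per-tap rejection might work, assuming each low-res sample carries a decoded normal and a nodeId. All names, the cosine threshold, and the fallback behaviour are assumptions for illustration, not the engine's actual code:

```python
# Sketch of a discontinuity-sensitive filter (DSF) tap test. A bilinear
# tap is rejected when its nodeId differs from the pixel being shaded,
# or when its normal diverges too far from the pixel's normal.
NORMAL_THRESHOLD = 0.8  # assumed minimum cosine between normals

def dsf_weights(pixel_normal, pixel_node_id, taps, bilinear_weights):
    """Zero out bilinear weights for taps across a discontinuity,
    then renormalise. `taps` is a list of (normal, node_id) tuples."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    weights = []
    for (normal, node_id), w in zip(taps, bilinear_weights):
        valid = (node_id == pixel_node_id
                 and dot(normal, pixel_normal) > NORMAL_THRESHOLD)
        weights.append(w if valid else 0.0)
    total = sum(weights)
    if total == 0.0:  # every tap rejected: fall back to plain bilinear
        return list(bilinear_weights)
    return [w / total for w in weights]

# Example: one of the four LBuffer taps belongs to a different object
# (nodeId 7), so its weight is discarded and the rest are renormalised.
taps = [((0, 0, 1), 3), ((0, 0, 1), 3), ((0, 0, 1), 7), ((0, 0, 1), 3)]
w = dsf_weights((0, 0, 1), 3, taps, [0.25, 0.25, 0.25, 0.25])
```

The normal comparison alone handles most discontinuities; the nodeId check catches the remaining cases where two different objects happen to share a similar normal.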
Quote:
I'm a bit concerned with the quality of a quarter resolution normal buffer. It would be fine for low frequency albedo or specular data but for normal maps that contain high frequency details a good resolution is important, otherwise features like detail normal mapping would suffer.
Yeah, that is a problem with it, but in a PC game you could even expose the GBuffer/LBuffer resolution to the user as a "lighting quality" option.
Currently I'm getting good results on Chicago at 50% resolution; the paper uses 62.5%, and it even works fine at 100% resolution (at 100% you can disable DSF and it becomes a light-pre-pass pipeline).
On my "100 lights" test, Inferred with 100% resolution is still faster than forward lighting.
Not everything that uses normals has to use the low-res data from the GBuffer though -- for example, when reading from the ambient cube map (in the final/material pass) you can use the actual normal-mapped normal.
One thing I'd like to try: when getting an LBuffer sample, I could multiply it by the dot product of the actual normal and the GBuffer normal. This isn't quite correct, but it might allow a high-frequency normal map to "imprint" its pattern at full resolution a bit more...