I'm currently attempting to build a more efficient deferred renderer using mostly 8-bit RGBA textures.
I've become stuck trying to reconstruct positions from depth, though:
marciano wrote:
As stated in another thread, the deferred shading implementation could quite easily be optimized using 8 bit targets and reconstructing fragment position from depth (google for this).
Has anyone done this before, and could provide me with advice?
I'm using the following modifications to write depth to the G-Buffer:
Code:
<VertexShader>
...
varying vec4 vsPos;
void main( void )
{
...
// Pass the view-space (eye-space) position to the fragment shader
vsPos = calcViewPos( pos );
...
}
</VertexShader>
<FragmentShader>
...
varying vec4 vsPos;
void main( void )
{
...
// Store linearized depth: -z (view space looks down -z), normalized by the far plane (1000)
setDepth( -vsPos.z / 1000.0 );
// setPos( vsPos ); // <-- for debugging
...
}
</FragmentShader>
...
...
float getDepth( const vec2 coord ) { return texture2D( tex9, coord ).a; }
void setDepth( const float depth ) { gl_FragData[1].a = depth; }
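As an aside, since the depth lands in a single channel of an 8-bit target, it only gets 256 steps across the whole 0..1000 range. A minimal Python sketch of the store/fetch round-trip (assuming tex9's alpha really is 8 bits, which matches the 8-bit RGBA targets mentioned above; purely illustrative, not the actual GL pipeline):

```python
FAR = 1000.0

def set_depth(view_z):
    """Encode -z/far into one 8-bit channel (what the alpha write amounts to)."""
    d = -view_z / FAR               # normalized linear depth in [0, 1]
    return round(d * 255.0)         # quantized to 256 levels on store

def get_depth(stored):
    """Decode back to normalized depth, as the texture fetch would."""
    return stored / 255.0

z = -123.4
restored_z = -get_depth(set_depth(z)) * FAR
worst_case = FAR / 255.0 / 2.0      # half a quantization step, ~1.96 units
```

So even with the reconstruction math right, positions recovered this way can be off by up to about two world units at this far-plane distance; packing depth across several channels would tighten that.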
The above code works fine; it's in the lighting shader that I'm having trouble reconstructing a view-space position from these depth values.
My (attempted, but incorrect) approach is to construct a ray passing through the current pixel and scale that ray by the stored depth value. When I compare the result against the actual view-space position, though, it's slightly off.
Code:
vec4 ray; ray.w = 1.0;
float fov = radians(45.0);
float off = 0.5;
float sca = 2.0;
vec2 screen = vec2( 640.0, 480.0 );
float aspect = screen.x / screen.y;
float farPlane = 1000.0;
float xratio = gl_FragCoord.x/screen.x;
float yratio = gl_FragCoord.y/screen.y;
ray.x = (xratio-off) * sca * sin(fov);
ray.y = (yratio-off) * sca * sin(fov / aspect);
ray.z = cos( asin(ray.y) );
ray *= getDepth(texCoords) * farPlane;
// ray.rgb = getPos(texCoords); ray.z = ray.z * -1.0; // <-- for debugging
gl_FragColor = ray;
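For comparison, here's a CPU-side sanity check of the reconstruction math I'm trying to match (a minimal Python sketch, not GLSL; it assumes a gluPerspective-style projection where the 45° fov is the vertical field of view, and uses NDC coordinates, i.e. the (ratio - 0.5) * 2.0 mapping from above). The ray slopes come from tan(fov/2) rather than sin(fov), the aspect ratio multiplies the x slope, and ray.z is -1, so scaling the whole ray by -viewZ (storedDepth * farPlane) recovers the view-space position:

```python
import math

FOV_DEG = 45.0            # assumed to be the *vertical* field of view
ASPECT  = 640.0 / 480.0
FAR     = 1000.0

def project(pos):
    """Perspective-project a view-space point to NDC x/y (gluPerspective-style)."""
    f = 1.0 / math.tan(math.radians(FOV_DEG) / 2.0)
    x, y, z = pos
    w = -z                                       # view space looks down -z
    return (f / ASPECT) * x / w, f * y / w

def reconstruct(ndc_x, ndc_y, stored_depth):
    """Rebuild the view-space position from NDC x/y and stored -z/far."""
    t = math.tan(math.radians(FOV_DEG) / 2.0)    # tan(fov/2), not sin(fov)
    ray = (ndc_x * t * ASPECT, ndc_y * t, -1.0)  # ray through the pixel at z = -1
    s = stored_depth * FAR                       # = -viewZ
    return (ray[0] * s, ray[1] * s, ray[2] * s)

pos = (3.0, -2.0, -50.0)
ndc_x, ndc_y = project(pos)
rebuilt = reconstruct(ndc_x, ndc_y, -pos[2] / FAR)   # matches pos to float precision
```

If the projection matrix defines fov horizontally instead, the aspect factor moves from x to y; that mismatch would produce exactly the kind of "slightly wrong" results described above.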