Horde3D
http://horde3d.org/forums/

Floating-point depth buffers for render targets
http://horde3d.org/forums/viewtopic.php?f=3&t=352

Author:  swiftcoder [ 31.05.2008, 13:03 ]
Post subject:  Floating-point depth buffers for render targets

It turns out that it would be very handy to have access to floating-point depth buffers for render targets. Particularly with planets and long-range shadowing, 32-bit depth textures are a must.

The only change required to get a 16 or 32 bit depth buffer is a single line in egRendererBase.cpp, line 526:
Code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, rb.width, rb.height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, 0x0 );

To this:
Code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, rb.width, rb.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );

or this:
Code:
glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, rb.width, rb.height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );


I have made this change to my local source, but it would be nice to be able to specify this per render target in the pipeline config.
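
To make that concrete, here is a minimal sketch of how the internal format could be chosen from a per-render-target setting; the rb.depthBits field is hypothetical and not part of the current RenderBuffer struct, and only the glTexImage2D call mirrors the existing code:
Code:
// Hypothetical sketch - rb.depthBits is an invented per-render-target setting;
// only the glTexImage2D call mirrors the existing code in egRendererBase.cpp.
GLint depthFormat = GL_DEPTH_COMPONENT;   // current behaviour: let the driver decide
switch( rb.depthBits )
{
	case 16: depthFormat = GL_DEPTH_COMPONENT16; break;
	case 24: depthFormat = GL_DEPTH_COMPONENT24; break;
	case 32: depthFormat = GL_DEPTH_COMPONENT32; break;
}
glTexImage2D( GL_TEXTURE_2D, 0, depthFormat, rb.width, rb.height, 0,
              GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );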

Author:  marciano [ 01.06.2008, 15:29 ]
Post subject:  Re: Floating-point depth buffers for render targets

Are you sure there is a difference? As far as I know, ATI X1000 cards only have 16-bit depth for render targets. And specifying just GL_DEPTH_COMPONENT makes the driver automatically select the optimal format. I also don't think it makes a difference whether you specify GL_UNSIGNED_BYTE or GL_FLOAT, since that parameter refers to the image data you pass to glTexImage2D (which is NULL in our case). But I'm not entirely certain here...
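
Rather than judging by eye, the allocated precision could be queried directly; a minimal sketch, assuming the depth texture is currently bound:
Code:
// Ask the driver how many depth bits it actually allocated for the
// currently bound depth texture (GL_TEXTURE_DEPTH_SIZE, core since GL 1.4).
GLint depthBits = 0;
glGetTexLevelParameteriv( GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE, &depthBits );
// depthBits now holds the precision the driver chose, e.g. 16, 24 or 32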

Author:  swiftcoder [ 02.06.2008, 12:07 ]
Post subject:  Re: Floating-point depth buffers for render targets

marciano wrote:
Are you sure there is a difference? As far as I know, ATI X1000 cards only have 16-bit depth for render targets.
I can't be absolutely sure, since I am judging based on visual quality, but passing GL_DEPTH_COMPONENT32 does seem to remove my z-buffer artifacts.

However, if you are correct, this raises another issue: since the main framebuffer has a 32-bit depth buffer (at least on my X1600), and the render buffers have only 16 bits, we may need some way to render into the framebuffer and then copy the depth portion into a texture (before rendering more into the framebuffer). Perhaps two new pipeline commands, CopyColor and CopyDepth, which use glCopyTexImage2D to copy the framebuffer into the specified texture?
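
A minimal sketch of what such a CopyDepth command might reduce to (depthTexId, fbWidth and fbHeight are placeholders; state handling and error checks omitted):
Code:
// Sketch of a possible CopyDepth command: copy the main framebuffer's depth
// into a depth texture. depthTexId, fbWidth and fbHeight are placeholders;
// the texture ends up with a GL_DEPTH_COMPONENT internal format.
glBindFramebufferEXT( GL_FRAMEBUFFER_EXT, 0 );   // read from the main framebuffer
glBindTexture( GL_TEXTURE_2D, depthTexId );
glCopyTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, fbWidth, fbHeight, 0 );

Using glCopyTexSubImage2D instead would avoid re-allocating the texture storage every frame.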

Author:  swiftcoder [ 07.06.2008, 14:31 ]
Post subject:  Re: Floating-point depth buffers for render targets

marciano wrote:
Are you sure there is a difference? As far as I know, ATI X1000 cards only have 16-bit depth for render targets. And specifying just GL_DEPTH_COMPONENT makes the driver automatically select the optimal format.
OK, you are partially correct. Specifying just GL_DEPTH_COMPONENT does select the optimal format, but that does not have to be the deepest format - in particular, on all ATI X1000 series cards, it will never choose a 32-bit depth buffer unless you are using a 32-bit float colour buffer.

So we do need to provide a setting for this, especially as some people may want to limit this to 8 or 16 bit for bandwidth reasons.

Quote:
I also don't think it makes a difference whether you specify GL_UNSIGNED_BYTE or GL_FLOAT, since that parameter refers to the image data you pass to glTexImage2D (which is NULL in our case).
You are entirely correct here.

Author:  marciano [ 08.06.2008, 10:35 ]
Post subject:  Re: Floating-point depth buffers for render targets

If I recall correctly, the problem was that you don't get a valid render target when you specify a depth format higher than 16 bits on an ATI X1000. NVidia seemed to support better precision. So by explicitly specifying a precision higher than 16 bits, you either break the code on ATI or get less quality on NVidia. One thing we could do though is add an attribute depthPrecisionHint that tries to set the depth explicitly but uses GL_DEPTH_COMPONENT as a fallback in case of failure.
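
A minimal sketch of that hint-plus-fallback idea, assuming the EXT framebuffer object path and placeholder names (depthBitsHint, rb.depthTex):
Code:
// Sketch of a depthPrecisionHint with fallback. depthBitsHint and rb.depthTex
// are placeholders; assumes the FBO and texture are already created and bound.
GLint internalFormat = ( depthBitsHint >= 32 ) ? GL_DEPTH_COMPONENT32 :
                       ( depthBitsHint >= 24 ) ? GL_DEPTH_COMPONENT24 :
                                                 GL_DEPTH_COMPONENT16;
glTexImage2D( GL_TEXTURE_2D, 0, internalFormat, rb.width, rb.height, 0,
              GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );
glFramebufferTexture2DEXT( GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                           GL_TEXTURE_2D, rb.depthTex, 0 );
if( glCheckFramebufferStatusEXT( GL_FRAMEBUFFER_EXT ) != GL_FRAMEBUFFER_COMPLETE_EXT )
{
	// Fallback: let the driver pick whatever depth format it supports
	glTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, rb.width, rb.height, 0,
	              GL_DEPTH_COMPONENT, GL_FLOAT, 0x0 );
}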

Author:  swiftcoder [ 08.06.2008, 11:59 ]
Post subject:  Re: Floating-point depth buffers for render targets

marciano wrote:
If I recall correctly, the problem was that you don't get a valid render target when you specify a depth format higher than 16 bits on an ATI X1000.
I can definitely get 32 bits on my X1600, as opposed to 8 bits with only GL_DEPTH_COMPONENT.
Quote:
NVidia seemed to support better precision. So by explicitly specifying a precision higher than 16 bits, you either break the code on ATI or get less quality on NVidia.
I don't quite follow you here - do you mean that NVidia's default depth format for render targets is higher than 32 bits?

Quote:
One thing we could do though is add an attribute depthPrecisionHint that tries to set the depth explicitly but uses GL_DEPTH_COMPONENT as a fallback in case of failure.
I can't say I'm surprised that OpenGL fails to do a proper fallback on its own - are you sure we can correctly detect when we should fall back? Something is necessary, given the silly 8-bit default on my card (which may be a problem with the Apple drivers, who knows).

Author:  marciano [ 09.06.2008, 20:18 ]
Post subject:  Re: Floating-point depth buffers for render targets

Hmm, I thought the framebuffer object state was "incomplete" when I tried to attach a 24/32-bit depth buffer on an ATI X1600, but I'm no longer sure here (checking the incomplete state is also how I meant to detect whether a fallback is required). The default precision on Windows for this card when using GL_DEPTH_COMPONENT seems to be 16 bits (I always got the "Shadow map precision is limited to 16 bit" warning in the log on ATI cards).
