Horde3D

Next-Generation Graphics Engine

All times are UTC + 1 hour




[ 3 posts ]
PostPosted: 04.10.2010, 13:17 

Joined: 15.02.2009, 02:13
Posts: 161
Location: Sydney Australia
Hi all. As far as I understand hardware occlusion queries, you render the queried objects (in Horde's case, their bounding boxes) against the existing depth buffer, and the query reports how many fragment samples got past the depth test. If the answer is 0, that node/mesh is not submitted to the driver when the next frame is drawn. As you know, this technique has a few drawbacks, especially with OpenGL ES on mobiles, where these queries don't exist. Marciano kindly linked me to a solution that does the queries in software:

http://msinilo.pl/blog/?p=194
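For intuition, here's a minimal software sketch of what a hardware occlusion query reports (not Horde3D's or that blog's actual code; the function name, the toy 8x4 buffer and the rectangles are all invented for illustration): rasterize an object's screen-space rectangle against an existing depth buffer and count how many samples pass the depth test.

```python
def occlusion_query(depth_buffer, rect, rect_depth):
    """Count samples of rect (x0, y0, x1, y1) that pass the depth test.

    Depth convention: smaller = nearer. A result of 0 means the
    object represented by the rectangle can be culled next frame.
    """
    x0, y0, x1, y1 = rect
    passed = 0
    for y in range(y0, y1):
        for x in range(x0, x1):
            if rect_depth < depth_buffer[y][x]:  # nearer than what's there
                passed += 1
    return passed

# Toy 8x4 depth buffer: a wall at depth 0.3 covers the left half.
depth = [[0.3 if x < 4 else 1.0 for x in range(8)] for y in range(4)]

print(occlusion_query(depth, (0, 0, 4, 4), 0.5))  # 0: fully behind the wall
print(occlusion_query(depth, (4, 0, 8, 4), 0.5))  # 16: out in the open
```

A result of 0 is exactly the "don't submit this node next frame" signal described above.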

I might not understand this fully, but would the following be faster? Instead of doing depth tests, I was thinking of rendering the bounding boxes, each with a unique colour, and then getting the CPU to check whether each colour is present in the image to determine if the corresponding model should be visible. Example steps using an EGL context (which could be replaced with a software renderer):

1. Switch the EGL context to a native pixmap, or some other surface the CPU can later read; make it tiny, e.g. 256x128.
2. Batch-render the bounding boxes to the pixmap with colour writes left on, drawing each bbox in a unique colour.
3. On the CPU, loop over the pixels to test which colours are present and store the result in a bitmask of sorts (with 16-bit colours, a limit of 65536 objects/occluders?). If any pixel has a given colour, that node/occluder set is visible.
4. Switch back to the native surface (the screen).
5. For each indicator colour that was found, draw the complex object that the colour represents.

There wouldn't be any frame delays, because there's no need for an existing depth buffer to test against. I'm probably missing something huge here, but would this be a nice solution for culling on mobiles?
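Steps 2-3 could be sketched like this (a hypothetical demo; the ID values and the tiny 4x4 "rendered" image are made up):

```python
def visible_ids(pixels):
    """Scan the colour buffer once; any ID that survived overdraw is visible."""
    seen = set()
    for row in pixels:
        for colour in row:
            if colour != 0:          # 0 = background / cleared colour
                seen.add(colour)
    return seen

# Toy 4x4 result: box 1 was fully overdrawn by box 2; box 3 is partly visible.
image = [
    [0, 2, 2, 0],
    [0, 2, 2, 3],
    [0, 2, 2, 3],
    [0, 0, 0, 0],
]
print(visible_ids(image))  # {2, 3} -> draw objects 2 and 3, cull object 1
```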

Thanks all!

-Alex


PostPosted: 04.10.2010, 16:25 

Joined: 11.09.2010, 20:21
Posts: 44
Location: Germany
Hello,

It does not work this way, because you can use boxes as occludees but not as occluders (as long as they are bigger than the real object). If a box is occluded, its real geometry is always occluded too. But if a box occludes something, it is not guaranteed that the real geometry also occludes the object behind it, since the box is usually larger than the actual geometry, especially when using AABBs.
It should work if you use the real geometry instead of the boxes, but then you would effectively be doing a depth pre-pass, which is common in forward rendering applications: you first write your objects to the depth buffer only, and the following passes profit from early z-cull (which, as far as I know, performs the depth test before the fragment shader runs).
Since such a depth pre-pass has a trivial fragment shader, the smaller resolution in your idea won't make that much difference (the depth pre-pass has to be rendered at full resolution, of course), and in your idea you additionally have to read back and analyse the image.
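A toy 1D example of the occludee-vs-occluder point (all names and numbers invented): a box used as an occluder claims coverage its real mesh doesn't actually have, so an object peeking past the real geometry gets wrongly culled.

```python
def covers(extent, width):
    """Set of pixels covered by a 1D object spanning [start, end)."""
    start, end = extent
    return set(range(max(0, start), min(width, end)))

W = 10
real_mesh = (2, 5)   # the actual occluder geometry: pixels 2..4
its_aabb  = (1, 7)   # its (larger) bounding box: pixels 1..6
occludee  = (5, 7)   # a farther object sitting at pixels 5..6

# Safe: an occludee is hidden only where REAL occluder geometry covers it.
hidden_by_mesh = covers(occludee, W) <= covers(real_mesh, W)   # False
# Unsafe: the box claims coverage the mesh doesn't actually provide.
hidden_by_box  = covers(occludee, W) <= covers(its_aabb, W)    # True (wrong!)

print(hidden_by_mesh, hidden_by_box)
```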


PostPosted: 05.10.2010, 01:42 

Joined: 08.11.2006, 03:10
Posts: 384
Location: Australia
Yeah, you can use AABBs for the occludees, but you've either got to render the real geometry of your occluders or use a low-detail occlusion mesh. An occlusion mesh is like the lowest LOD level, but special care is taken to ensure that it fits entirely inside the regular graphical mesh (none of the occlusion mesh should be visible if it and the real mesh were both rendered in the same spot).

-------
Instead of checking whether colours (IDs) are visible in the final result, there's another approach, described in the "Rendering with Conviction" presentation, where each object is assigned just a single pixel in the (e.g. 256x128) output texture (assuming you've got pixel shaders, or vertex shaders with texture-fetch capability). To determine whether to render an object or not, you just have to check if that pixel contains 0 or 1 (black/white).

Each occludable object (occludee) is represented as a point primitive in a vertex stream. The stream contains the output position (the pixel in the output texture where the result is stored) and the AABB of the occludee. When the point is rendered, the shader projects the object's 3D AABB into screen space to determine a 2D screen-space AABB.
The shader then chooses a mip level of the depth buffer such that this 2D AABB covers only 4 texels. The shader samples these 4 depth values and compares them against the min depth of the AABB, writing either a 0 or a 1 depending on whether the comparison passes.

1) Render the depth of all the occlusion meshes.
2) Create a full mip chain of the resulting depth buffer, taking the MAX depth of the contributing pixels when building each mip level.
3) Render all the points (occludees) to the output texture, with this mipped depth texture bound.
4) Read back the results of this texture on the CPU to determine which occludees to render.
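A minimal CPU sketch of those four steps (assuming a square power-of-two depth buffer and "smaller depth = nearer"; all function names and the toy scene are invented for the demo):

```python
def build_max_mips(depth):
    """Step 2: mip chain where each texel is the MAX depth of a 2x2 block."""
    mips = [depth]
    while len(mips[-1]) > 1:
        prev, n = mips[-1], len(mips[-1]) // 2
        mips.append([[max(prev[2*y][2*x],     prev[2*y][2*x + 1],
                          prev[2*y + 1][2*x], prev[2*y + 1][2*x + 1])
                      for x in range(n)] for y in range(n)])
    return mips

def occludee_visible(mips, aabb2d, min_depth):
    """Step 3 for one occludee: aabb2d = (x0, y0, x1, y1) in level-0 texels."""
    x0, y0, x1, y1 = aabb2d
    # Pick the mip level where the AABB covers at most 2x2 texels.
    level = 0
    while ((x1 - 1) >> level) - (x0 >> level) > 1 or \
          ((y1 - 1) >> level) - (y0 >> level) > 1:
        level += 1
    level = min(level, len(mips) - 1)
    m = mips[level]
    # Sample the (up to) 4 covered texels; each holds the FARTHEST
    # occluder depth in its footprint, so the test is conservative.
    for ty in {y0 >> level, (y1 - 1) >> level}:
        for tx in {x0 >> level, (x1 - 1) >> level}:
            if min_depth < m[ty][tx]:  # occludee's nearest point is in
                return True            # front of some occluder sample
    return False

# Step 1 stand-in: an 8x8 depth buffer where a near wall (depth 0.2)
# covers the left half; everything else is far away (1.0).
depth = [[0.2 if x < 4 else 1.0 for x in range(8)] for y in range(8)]
mips = build_max_mips(depth)

# Step 4: read the per-occludee results back.
print(occludee_visible(mips, (0, 0, 4, 4), 0.6))  # False: behind the wall
print(occludee_visible(mips, (4, 0, 8, 4), 0.6))  # True: in the open
```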

