Hi all, as far as I understand hardware occlusion queries, you render occluders (in Horde's case, the bounding boxes) on top of the uncleared depth buffer, the query counts how many fragment samples get past the depth test, and it reports back whether zero or more samples passed. If the result is zero, the next frame simply doesn't submit that node/mesh to the driver. As you know, there are a few drawbacks to this technique, especially with OpenGL ES on mobiles, where these queries don't exist. Marciano kindly linked me to a solution that does the queries in software:
http://msinilo.pl/blog/?p=194
I might not understand this fully, but would this alternative be faster: instead of doing depth tests, render each bounding box with a unique colour and then have the CPU check whether that colour appears anywhere in the image to determine if that model should be visible? Example steps using an EGL context (this could be replaced with a software renderer):
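If I understand the linked approach, a software occlusion query boils down to testing an object's conservative screen-space bounds against a small CPU-side depth buffer and counting the samples that would pass. Here's a minimal sketch of that idea; the buffer size, the `ScreenRect` struct, and the "smaller depth = closer" convention are my own assumptions, not the blog's actual code:

```c
#include <stdint.h>
#include <stddef.h>

/* Tiny CPU-side depth buffer, sized like the proposed pixmap. */
#define DEPTH_W 256
#define DEPTH_H 128

/* Screen-space bounding rectangle with a single conservative
 * (nearest) depth value for the whole object. */
typedef struct { int x0, y0, x1, y1; float depth; } ScreenRect;

/* Counts the samples inside 'rect' that would pass the depth test,
 * i.e. where the object's nearest depth is closer than what the
 * occluders already wrote. A return value of 0 means fully occluded,
 * mirroring what a hardware occlusion query would report. */
int count_visible_samples(const float *zbuf, ScreenRect rect)
{
    int passed = 0;
    for (int y = rect.y0; y <= rect.y1; ++y) {
        if (y < 0 || y >= DEPTH_H) continue;   /* clip to buffer */
        for (int x = rect.x0; x <= rect.x1; ++x) {
            if (x < 0 || x >= DEPTH_W) continue;
            if (rect.depth < zbuf[y * DEPTH_W + x])
                ++passed;
        }
    }
    return passed;
}
```

In a real implementation the occluders would first be rasterised into `zbuf`; the sketch only shows the query half of the technique.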
1. Switch the EGL context to a native pixmap, or some other surface the CPU can later read back; make it tiny, e.g. 256x128.
2. Batch-render the bounding boxes to the pixmap with colour writes enabled, giving each bbox a unique colour.
3. On the CPU, loop over the pixels and, for every colour found, set the corresponding bit in a bitmask indexed by the colour ID (with 16-bit IDs, a limit of 65536 objects/occluders?). If any pixel has an object's colour, that node/occluder set is visible.
4. Switch back to the native surface (screen).
5. If an object's indicator bit is set, draw the complex object that its colour represents.
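The CPU side of steps 3 and 5 might look roughly like the sketch below. The ID-in-RGB packing, the reserved background value, and all function names are assumptions for illustration, not Horde3D or EGL API:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_OBJECTS 65536

/* Pixels are assumed packed 0xAARRGGBB; the low bits of RGB carry the
 * object ID plus one, so a black (0x..000000) pixel means "background". */
static uint32_t id_to_colour(uint32_t id) { return 0xFF000000u | (id + 1); }

/* Step 3: scan the small readback image once and set one bit per object
 * ID that has at least one visible pixel. visible[] holds one bit per
 * possible object. */
void scan_visibility(const uint32_t *pixels, size_t count,
                     uint32_t visible[MAX_OBJECTS / 32])
{
    for (size_t i = 0; i < MAX_OBJECTS / 32; ++i) visible[i] = 0;
    for (size_t i = 0; i < count; ++i) {
        uint32_t id = pixels[i] & 0x00FFFFFFu;
        if (id == 0 || id > MAX_OBJECTS) continue; /* background / out of range */
        --id;                                      /* back to a 0-based ID */
        visible[id / 32] |= 1u << (id % 32);
    }
}

/* Step 5: the draw loop asks this before submitting the complex object. */
int is_visible(const uint32_t visible[], uint32_t id)
{
    return (visible[id / 32] >> (id % 32)) & 1u;
}
```

The single pass over a 256x128 image is 32K reads, so the scan itself should be cheap; the expensive part on real hardware is more likely the readback from GPU to CPU.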
There wouldn't be any frame delay, because the test doesn't depend on a depth buffer carried over from a previous frame. I'm probably missing something huge here, but would this be a nice solution for culling on mobiles?
Thanks all!