Horde3D
http://horde3d.org/forums/

Rendering decals
http://horde3d.org/forums/viewtopic.php?f=1&t=1231
Page 1 of 1

Author:  zoombapup [ 19.08.2010, 19:47 ]
Post subject:  Rendering decals

I don't know if I've asked this before, but I'm thinking about how to draw decals. Anyone got any useful info?

Author:  DarkAngel [ 20.08.2010, 06:02 ]
Post subject:  Re: Rendering decals

There are a lot of different techniques for decals.

The traditional idea is that you render the affected mesh again, except with a projected decal texture and alpha-testing enabled.
This is obviously wasteful, so it's a good idea to use the bounding-box of the projection area to extract only the affected triangles, which you use to create a new temporary mesh for that decal.
Some games even have their artists create a 'decal mesh' - which is like a low-poly LOD, just used for this decal-projection technique.

There's a screen-space technique that doesn't require mesh-generation though. For this you need access to per-pixel position -- in a deferred renderer this should be readily available, and in a forward renderer, you can use a bit of math to reconstruct position from the non-linear Z-buffer values.
The idea is that you draw your decals in the same way that you draw lights in a deferred renderer. You just render the bounding-cube of the decal to the screen, fetch the position of any covered pixels, and perform the texture-projection using the decal's position/size and the pixel's position.
In a deferred renderer you could do this after initial G-Buffer generation, and overwrite the albedo term in the G-Buffer (so that the decals will be lit as usual), but in a forward renderer, you'll also have to include the lighting-math in your decal shader.
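The per-pixel projection in that decal pass boils down to a transform into the decal box's local space plus a bounds check. Here's the math as plain C rather than shader code, with an axis-aligned decal box to keep the sketch short (the real shader would use the inverse of the decal's full 4x4 world matrix, and 'discard' instead of returning false); all names here are illustrative:

```c
#include <assert.h>
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } Vec3f;
typedef struct { Vec3f center, halfSize; } DecalBox;   /* axis-aligned box */

/* Given a pixel's reconstructed world position, compute the decal
 * texture coordinates.  Returns false if the pixel lies outside the
 * decal volume (a shader would discard the fragment there). */
static bool projectDecal(Vec3f worldPos, const DecalBox *d,
                         float *u, float *v)
{
    /* transform into decal-local space, normalized to [-1,1] */
    float lx = (worldPos.x - d->center.x) / d->halfSize.x;
    float ly = (worldPos.y - d->center.y) / d->halfSize.y;
    float lz = (worldPos.z - d->center.z) / d->halfSize.z;
    if (fabsf(lx) > 1.0f || fabsf(ly) > 1.0f || fabsf(lz) > 1.0f)
        return false;                 /* pixel not covered by the decal */
    *u = lx * 0.5f + 0.5f;            /* remap [-1,1] -> [0,1] texcoords, */
    *v = ly * 0.5f + 0.5f;            /* projecting along the local Z axis */
    return true;
}
```

The sampled decal color would then be blended over (deferred: written into) the albedo at that pixel.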

Author:  ZONER [ 21.08.2010, 21:46 ]
Post subject:  Re: Rendering decals

There is also another way of rendering decals. You can use a geometry shader to build a quad and project it onto the object; an optimization is to use instancing, so each decal needs only one quad's worth of information, with no duplication in memory. Another way is to use Render-To-Vertex-Buffer (also known as Transform Feedback in OpenGL and Stream Out in DirectX 10): you render your geometry into a buffer, then re-render the object using the results from that buffer. The best feature of these methods is that you don't have wasted cycles. With R2VB you have every piece of information about the geometry and can render lights the same way.

And now the cons. The first method works better on ATi hardware; on NVIDIA there can be a small FPS drop from frame to frame. The second technique suffers from rendering the object twice - once into the buffer, once for the final render. That method works great on NVIDIA hardware, but ATi cards have FPS problems with it (a 15% - 30% drop).
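For the instanced geometry-shader approach, the "one quad's worth of information per decal" amounts to a small per-instance record uploaded once to an instance buffer; the geometry shader then expands each record into a projected quad on the GPU. A possible layout (purely illustrative, not from any engine) could look like this:

```c
#include <assert.h>

/* Per-decal instance data for the geometry-shader technique: the GS
 * reads one of these per decal and emits the oriented, projected quad.
 * Field choice is an assumption for the sketch. */
typedef struct {
    float pos[3];     /* decal center in world space */
    float normal[3];  /* surface normal used to orient the quad */
    float size;       /* half-extent of the quad */
    float rotation;   /* spin around the normal, in radians */
} DecalInstance;
```

Eight tightly packed floats per decal (32 bytes) keeps the instance buffer small, which is the whole point of this technique: adding or removing a decal is just a buffer update, with no CPU-side mesh work.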
