I've just started implementing a DDS loader, just to see what problems come up. I'm ill, so I have some free time.
I removed the stbi loader. I think it should live in an extension or in utils, where it could convert the image to DDS data.
1. I think a DDS file should be uploaded exactly as it is stored. For example, if the file contains only 3 mipmaps, the OpenGL texture should have only 3 mipmaps (see the upload sketch after this list).
For non-DDS images I would be happy with option d), except that in the given example the parameters would be the defaults (nomip, nocomp, 2d). So the name should only get a postfix if mipmapping, compression, or cube conversion is needed.
2. For non-compressed DDS textures the conversion could be kept, but for compressed textures speed and quality are a big problem, so I wouldn't support it.
I don't know OpenGL, but D3D has conditional pow2 texture support: if that caps flag is set (and it is for all ps20 hardware), the card supports non-pow2 textures without mipmaps. If I remember correctly, OGL has texture_rectangle, but with a different UV mapping. I don't know if this has changed and whether OGL has something similar now (see the capability check after this list).
3. I would use the native DDS cube texture support. For non-DDS images the name postfix "-cube" could be used.
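To make points 1 and 3 concrete, here is roughly what I have in mind for the upload path. This is only a sketch and not tested; DdsImage and uploadDds are made-up names, the actual DDS parsing is left out, and uncompressed DDS surfaces would go through glTexImage2D instead, but the loops would be the same:

Code:
#include <GL/glew.h>   // or whatever GL header the engine uses
#include <vector>

// Hypothetical in-memory representation of a parsed .dds file.
struct DdsImage {
    GLenum internalFormat;   // e.g. GL_COMPRESSED_RGBA_S3TC_DXT1_EXT
    int    width, height;
    int    mipCount;         // exactly what the file contains
    bool   isCube;           // 6 faces if true
    // data[face][mip] = raw (compressed) bytes of that surface
    std::vector<std::vector<std::vector<unsigned char> > > data;
};

// Upload the texture exactly as stored: no extra mip generation, no conversion.
GLuint uploadDds(const DdsImage& img)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);

    const GLenum bindTarget = img.isCube ? GL_TEXTURE_CUBE_MAP : GL_TEXTURE_2D;
    glBindTexture(bindTarget, tex);

    const int faces = img.isCube ? 6 : 1;
    for (int face = 0; face < faces; ++face)
    {
        // The six cube face targets are consecutive enums in GL.
        const GLenum target = img.isCube
            ? GL_TEXTURE_CUBE_MAP_POSITIVE_X + face
            : GL_TEXTURE_2D;

        int w = img.width, h = img.height;
        for (int mip = 0; mip < img.mipCount; ++mip)
        {
            const std::vector<unsigned char>& bytes = img.data[face][mip];
            glCompressedTexImage2D(target, mip, img.internalFormat,
                                   w, h, 0,
                                   (GLsizei)bytes.size(), bytes.data());
            w = (w > 1) ? w / 2 : 1;
            h = (h > 1) ? h / 2 : 1;
        }
    }

    // Clamp the sampled mip range to what the file actually provides,
    // otherwise GL treats the texture as incomplete when the chain is partial.
    glTexParameteri(bindTarget, GL_TEXTURE_MAX_LEVEL, img.mipCount - 1);
    glTexParameteri(bindTarget, GL_TEXTURE_MIN_FILTER,
                    img.mipCount > 1 ? GL_LINEAR_MIPMAP_LINEAR : GL_LINEAR);
    return tex;
}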
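For the pow2 question in point 2: if I read the extension registry right, GL_ARB_texture_non_power_of_two is the full (unconditional) equivalent, and GL_ARB_texture_rectangle is the unnormalized-UV one I remembered (no mipmaps, no repeat wrap), which is closer to D3D's conditional support. A startup capability probe could look like this (helper names are made up, and the substring match is naive but good enough for a sketch):

Code:
#include <GL/glew.h>
#include <string>

// Rough GL-side equivalent of checking D3D's conditional non-pow2 caps flag.
static bool hasExtension(const char* name)
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return ext && std::string(ext).find(name) != std::string::npos;
}

bool supportsFullNpot()     { return hasExtension("GL_ARB_texture_non_power_of_two"); }
bool supportsRectTextures() { return hasExtension("GL_ARB_texture_rectangle"); }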
Some problems that I found:
- The vertical flip of textures. This is a costly operation. I think compressed textures can be flipped without decompression (you just have to flip the index rows inside each block; see the DXT1 flip sketch at the end of this post). Moreover, as far as I know, D3D and OGL use a different texture origin. Alternatively, the UVs in the vertices could be mirrored instead, but I don't know what the best solution would be.
- In D3D the caps tell me that cards support the ARGB8 format instead of OGL's RGBA8. I don't know if this is just a different naming convention or a different byte ordering. For float formats, ABGR16F is exposed.
After a little investigation I found this on an OpenGL forum:
Quote:
RGBA is the preferred source format for float textures on nvidia, and BGRA is the prefered source format for uint8 texture on Nvidia. uint16 source data is pretty bad, on most nvidias they convert them to uint8 internally and that conversion kills performance.
RGBA is the preferred format for both float/uint8 on ATI. Most ATI cards handle uint16 textures pretty good.
So it seems that OGL and D3D use different naming conventions (D3D's ARGB8 -> OGL's BGRA, and D3D's ABGR16F -> OGL's RGBA). The forum says that ATI prefers RGBA, but the D3D caps tell me that only old ATI cards support the ABGR8 format. I can confirm that Nvidia has some problems with uint16 formats; I have had some bad experience with this.
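If the naming works the way I think (D3D names the channels of a packed little-endian dword from the high byte down, so A8R8G8B8 is B,G,R,A in memory, which is what GL calls BGRA), then D3D-layout pixels could be uploaded without any swizzling on our side. Rough sketch, not tested:

Code:
#include <GL/glew.h>
#include <cstdint>

// D3D's A8R8G8B8 is a packed little-endian dword, so in memory the byte order
// is B,G,R,A - which GL expresses as GL_BGRA with GL_UNSIGNED_BYTE. Uploading
// it this way should also hit the fast uint8 path the quoted post mentions.
void uploadArgb8(const uint8_t* pixels, int width, int height)
{
    glTexImage2D(GL_TEXTURE_2D,
                 0,                 // mip level
                 GL_RGBA8,          // what the driver stores internally
                 width, height,
                 0,                 // border
                 GL_BGRA,           // source channel order (D3D ARGB layout)
                 GL_UNSIGNED_BYTE,  // one byte per channel
                 pixels);
}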
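And for the flip problem, this is roughly what I mean by flipping compressed blocks without decompressing, shown for DXT1 only (DXT3/DXT5 alpha blocks need extra handling, mip levels smaller than 4 texels high need special-casing, and the names are made up):

Code:
#include <algorithm>
#include <cstdint>

// One DXT1 block: two 16-bit endpoint colors + four bytes of 2-bit row indices.
struct Dxt1Block {
    uint16_t color0, color1;
    uint8_t  rows[4];      // rows[i] holds the indices of texel row i
};

// Flip a single block vertically: the endpoints stay, only the index rows swap.
inline void flipDxt1Block(Dxt1Block& b)
{
    std::swap(b.rows[0], b.rows[3]);
    std::swap(b.rows[1], b.rows[2]);
}

// Flip a whole DXT1 mip level: swap block rows top<->bottom and flip each block.
// blocksWide = (width + 3) / 4, blocksHigh = (height + 3) / 4.
void flipDxt1Image(Dxt1Block* blocks, int blocksWide, int blocksHigh)
{
    for (int y = 0; y < blocksHigh / 2; ++y)
    {
        Dxt1Block* top    = blocks + y * blocksWide;
        Dxt1Block* bottom = blocks + (blocksHigh - 1 - y) * blocksWide;
        for (int x = 0; x < blocksWide; ++x)
        {
            flipDxt1Block(top[x]);
            flipDxt1Block(bottom[x]);
            std::swap(top[x], bottom[x]);
        }
    }
    // Odd number of block rows: the middle row still needs its blocks flipped.
    if (blocksHigh & 1)
    {
        Dxt1Block* mid = blocks + (blocksHigh / 2) * blocksWide;
        for (int x = 0; x < blocksWide; ++x)
            flipDxt1Block(mid[x]);
    }
}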