Emulate big NPOT textures for old GPUs
In my 2D game I need textures that may be arbitrarily large, NPOT, and non-square. They are only ever mapped to one kind of primitive: rectangles via GL_QUADS (the four quad corners map to the four texture corners). Sometimes the texture matrix has been scaled before drawing.
I want my game to work everywhere, even on old/cheap videocards that only allow small and/or POT textures. What solution should I use? It should...
- Be easy to implement
- Have very good performance.
Currently I know of the following options:
- Use an extension like GL_TEXTURE_RECTANGLE_{EXT,NV,ARB} so NPOT works on cards that don't support OpenGL 2.0 native NPOT textures. This doesn't solve the "big texture" problem, though.
  - Does it work on a maximal number of PCs? Do videocards that support native OpenGL 2.0 NPOT also support these extensions?
  - Which of the three variants (EXT/NV/ARB) should I use?
- Implement big textures as a bunch of small POT texture slices, which are then carefully drawn on several adjacent QUADs. This provides both "big textures" and NPOT support, but it's a bit difficult and limiting.
- Implement my NPOT textures as POT textures padded with transparency to the right and bottom. This wastes memory, makes texture tiling a bit more difficult, and doesn't solve the "big texture" problem.
- Use some premade solution.
An example of a problematic videocard is the Mobile Intel® 945GM Express Chipset which doesn't seem to support native NPOT.
Update: I ended up using the third option. The cool thing is, I was able to use glTexSubImage2D, rather than padding the texture manually. This is crazy fast. Yeah, it doesn't provide "big texture" support, but I realized my target GPUs support up to 2048x2048 which is good enough for me. Oh, and GL_TEXTURE_RECTANGLE_EXT wasn't even supported on the Mobile Intel® 945GM Express Chipset.
If all your UV-coordinates are in the [0, 1] range you could pack your NPOT textures into a texture atlas.
Since your textures can be arbitrarily large I would probably go with the suggestion to split up the NPOT textures (and corresponding quads) into POT pieces no larger than the hardware can handle. If you chose this solution remember to set the wrap mode to GL_CLAMP_TO_EDGE to minimize rendering artifacts at the edge (if you need minification, you'll still get some).
If a card doesn't support OpenGL 2.0 NPOT textures, it's unlikely to support GL_TEXTURE_RECTANGLE_{EXT,NV,ARB} either.
Well you are best off using fallbacks for each level.
If your card supports large textures but only POT ones, you can allocate a texture of the next power of two up: for a 1024x768 image, allocate a 1024x1024 texture and stretch the image to fill it. This tiles nicely, and you shouldn't notice the distortion from the stretch much.
If your card can't do large textures, you have a problem. The only real solution is to subdivide the geometry and use lots of smaller textures. As you say, this isn't the greatest solution, as it is quite intensive to do. In fact, on hardware that can't support such textures you may well find that the extra vertex load cripples performance as well. In that case you may just have to scale the textures back to a lower-detail version. It's not a great solution, but it may be your only choice, short of not supporting such cards at all...
As a rule of thumb I would stick to making all your textures POT. It's faster and you don't have to worry about compatibility. Remember that your texture size doesn't have to have a 1-to-1 correlation with its displayed size--you can stretch a NPOT texture to a POT size in your image editor and just display it at the correct size. Texture atlases are a good thing also; they reduce the amount of texture switching you do which can be quite slow.
For large textures you only have the option to split them into smaller textures. This is a strict hardware limit and as such you need to put some work into splitting your textures (on the fly or beforehand).
With respect to those NPOT extensions: from memory they're all quite similar and simply indicate that the normal restrictions on power-of-two textures aren't enforced. You can look them up in the OpenGL Extension Registry.
From memory, the three extensions are actually the SAME extension. It was introduced by NVidia as GL_NV_texture_non_power_of_two before being accepted by the Architecture Review Board (becoming GL_ARB_texture_non_power_of_two), and it would've become GL_EXT_texture_non_power_of_two once it was accepted into OpenGL 2.0. I could be wrong on some of these details, but there you have the life-cycle of a new OpenGL extension.
You're certainly free to check for the presence of these extensions and take advantage of them where present. There may be a few pre-OpenGL2.0 cards that support it, but not many, so you'll need an additional backup strategy.
Two options come to mind.
- For a non-repeating texture, just blank out the surplus space and set the texture coordinates so the padding isn't shown.
- Stretch out the texture so it's a power of two.