1) I'm given to understand that in OpenGL ES the texture co-ordinates map to the vertex buffer only, not the index buffer.
Right, texture coordinates are associated with vertex data (whether buffered or in CPU memory), NOT with index data. The indexing mechanism is independent of texturing.
Suppose you have this vertex array, made of 3 vertices (3 components each):
float vdata[] = {x0,y0,z0, x1,y1,z1, x2,y2,z2};
and this texcoord array, made of 3 coordinates (2 components each):
float tdata[] = {u0,v0, u1,v1, u2,v2};
When you declare this data to OpenGL, you associate vertex 0 (x0,y0,z0) with texture coordinate 0 (u0,v0), vertex 1 with texture coordinate 1, and so forth. In the end, the corresponding portion of the texture is mapped onto your triangle of 3 vertices/3 texture coordinates.
[Figure: the classic OpenGL texture-mapping diagram, here for a four-vertex polygon (source: glprogramming.com)]
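As a concrete sketch of that association, here is how the two arrays above could be declared to the fixed-function OpenGL ES 1.1 pipeline with client-side arrays (this assumes a current GL context and the `vdata`/`tdata` arrays from above):

```c
/* Sketch: declaring vdata/tdata to OpenGL ES 1.1 (fixed function).
   Assumes a current GL context; vdata and tdata as defined above. */
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vdata);    /* 3 components per vertex   */
glTexCoordPointer(2, GL_FLOAT, 0, tdata);  /* 2 components per texcoord */
glDrawArrays(GL_TRIANGLES, 0, 3);          /* vertex i pairs with texcoord i */
```

Note that there is no index anywhere in this pairing: element i of each enabled array belongs to vertex i.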
Index data (buffered or not) is a way of specifying vertices through an indirection rather than sequentially. In my previous example, if I wanted to render the triangle twice, I would specify an index array like this:
unsigned short idata[] = {0,1,2, 0,1,2};
So, to answer 1): index data is independent of texcoords and of the other vertex attributes (colors, normals, etc.), so it makes no sense to try to bind texcoords to index data. (Note that core OpenGL ES only accepts GL_UNSIGNED_BYTE or GL_UNSIGNED_SHORT index types, so prefer unsigned short over unsigned int.)
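Drawing through that index array is a single call; a minimal sketch (assuming the arrays are already declared as above):

```c
/* Sketch: drawing the same triangle twice through the index array.
   Core OpenGL ES index types are GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT,
   hence unsigned short rather than unsigned int. */
unsigned short idata[] = {0, 1, 2, 0, 1, 2};
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, idata);
```

Each index fetches the whole vertex — position, texcoord, normal, color — at that slot; there is no separate index stream per attribute.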
2) After all to properly map the textures one will need to write out the vertex buffer with all the redundancy that would have been saved with the index buffer. Is there still a performance increase to be gained or are index buffers redundant for textured data?
Usually, indexing a mesh is a way of eliminating redundancy by reusing shared vertices, which gives a smaller memory footprint. In most cases there is a lot of redundancy to eliminate.
Of course, if you take a 3D cube, the vertices at each corner can't share texcoords or normals across faces, so indexing saves nothing — but that's not a representative model! Most meshes in gaming/CAD applications are continuous surfaces with many shared vertices, and they benefit greatly from indexing.
Secondly, with indices the GPU can use its pre- and post-transform vertex caches to speed up rendering: when an index references a vertex that was recently transformed, the cached result is reused instead of running the vertex stage again.
As for memory bandwidth, indices are almost free: the graphics driver can keep them in system memory reachable over PCI Express (via DMA), so they don't eat up video memory bandwidth.
All in all, I don't think index buffers hurt performance even when few vertices are repeated — but, as usual, you should check against different OpenGL implementations and run your own benchmarks.
Best Answer
The effect you’re looking for can be achieved with texture combiners in OpenGL ES 1.1. By default, each texture unit that you enable is set up to multiply the output of the previous stage by the color of the current texture. In the case of the first texture unit, the previous stage is simply the vertex color. By changing the texture combiner state, you can add, subtract, interpolate, or take dot products of your texture samples instead.
The second and third examples on the linked page, which interpolate between two textures, should be fairly similar to what you're trying to do. If you compare the source code for the two examples, you should see that they're nearly identical, except for the configuration of GL_SRC2_RGB/GL_SRC2_ALPHA and GL_OPERAND2_RGB/GL_OPERAND2_ALPHA. What you'll need to specify here depends on where/how you're generating the blend factor for the two textures. You can source it from the vertex color by specifying GL_PRIMARY_COLOR for GL_SRC2_*, which isn't shown in the examples.
(Note: the page I linked to recommends using GLSL instead of texture combiners. That's unfortunately not an option if your software needs to run on older hardware that doesn't support OpenGL ES 2.0.)