Hey, let's slow down a bit here :) Yes, it's true that you can retrieve the matrix with glGetFloatv(GL_MODELVIEW_MATRIX, ptr)
... but that's definitely not what you should do here!
Let me explain:
In GLSL, built-in variables like gl_ModelViewProjectionMatrix
or functions like ftransform()
are deprecated - that's right, but only because the whole matrix stack is deprecated in GL 3.x, where you're supposed to manage your own matrices (a matrix stack is a helpful way to do that, but it isn't obligatory).
If you're still using the matrix stack, then you're relying on functionality from OpenGL 1.x or 2.x. That's okay, since all of this is still supported on modern graphics cards through the GL compatibility profile - it's good to switch to a newer GL version eventually, but you can stay with this for now.
But if you are using an older version of OpenGL (with the matrix stack), use an older version of GLSL as well. Try 1.2, because higher versions (including your 1.5) are designed to be compatible with OpenGL 3, where things such as the projection or modelview matrix no longer exist in OpenGL itself and are expected to be passed explicitly as custom, user-defined uniform variables if needed.
The correspondence between OpenGL and GLSL versions used to be a bit tricky (before they cleaned up the version numbering to match), but it should be more or less similar to:
GL GLSL
4.1 - 4.1
4.0 - 4.0
3.3 - 3.3
3.2 - 1.5
3.1 - 1.4
3.0 - 1.3
2.x and lower - 1.2 and lower
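For example, a vertex shader that sticks with the legacy matrix stack only needs to declare the older GLSL version; the deprecated built-ins are then perfectly legal (a minimal sketch, assuming a fixed-function-style pipeline):

```glsl
#version 120

void main() {
    // gl_ModelViewProjectionMatrix and gl_Vertex are built-ins
    // that are still available in GLSL 1.20
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```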
So, long story short - the shader builtin uniforms are deprecated because the corresponding functionality in OpenGL is also deprecated; either go for a higher version of OpenGL or a lower version of GLSL.
Edit: this answer is in serious need of an update - in particular, it doesn't take shaders into consideration.
As @gman points out in the comments, whether to use row-major or column-major depends on how you do your math. You may choose one or the other (or even both at different times if you don't think that's confusing) just as long as they match your coordinate systems and order of operations.
I'm leaving this answer as community wiki in case someone has the time and will to update it.
OpenGL specifies matrices as one-dimensional arrays in column-major order, i.e. with elements ordered like this:
m0 m4 m8 m12
m1 m5 m9 m13
m2 m6 m10 m14
m3 m7 m11 m15
So if you initialize an array this way, in C or pretty much any other language, the resulting matrix will look like it needs transposing, because C code reads left-to-right first and then top-to-bottom (in other words, as if it were in row-major order):
int mat[16] = {
     0,  1,  2,  3,
     4,  5,  6,  7,
     8,  9, 10, 11,
    12, 13, 14, 15,
};
By the way, OpenGL (since version 1.3) has glLoadTransposeMatrixf and glMultTransposeMatrixf, which you can use instead of glLoadMatrixf and glMultMatrixf, so this shouldn't be a problem.
Best Answer
Nope. Fixed function was replaced by the programmable pipeline, which lets you design your transformations however you want.
If you want to have something that would work just like the old OpenGL pair of matrix stacks, then you'd want to make your vertex shader look, for instance, like:
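(The original code sample seems to have been lost from the page; the following is a minimal sketch of what such a vertex shader could look like - the uniform and attribute names are my own choices, not from the answer.)

```glsl
#version 150

// Replacements for the old fixed-function matrix stacks
uniform mat4 uProjection;
uniform mat4 uModelView;

in vec3 position;

void main() {
    gl_Position = uProjection * uModelView * vec4(position, 1.0);
}
```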
(You can optimise that a bit, of course)
And the corresponding client-side code (shown as C++ here) would look something like:
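(This code sample is also missing from the page; here is a hedged sketch using GLM in place of a hand-rolled Matrix4x4 class. The uniform names uProjection and uModelView, and the program variable, are my assumptions - adapt them to your shader.)

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// Assumes a GL 3.x context and a linked shader program `program`.
void setMatrices(GLuint program, float aspect)
{
    // Replaces gluPerspective / the GL_PROJECTION stack
    glm::mat4 projection =
        glm::perspective(glm::radians(60.0f), aspect, 0.1f, 100.0f);

    // Replaces glTranslatef on the GL_MODELVIEW stack
    glm::mat4 modelView =
        glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -5.0f));

    glUseProgram(program);
    glUniformMatrix4fv(glGetUniformLocation(program, "uProjection"),
                       1, GL_FALSE, glm::value_ptr(projection));
    glUniformMatrix4fv(glGetUniformLocation(program, "uModelView"),
                       1, GL_FALSE, glm::value_ptr(modelView));
}
```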
I've assumed here that you have a Matrix4x4 class that supports operations like .translate(). A library like GLM can provide you with client-side implementations of matrices and vectors that behave like the corresponding GLSL types, as well as implementations of functions like gluPerspective.
You can also keep using the OpenGL 1 functionality through the OpenGL compatibility profile, but that's not recommended (you won't be using OpenGL's full potential then).
OpenGL 3's (and 4's) interface is lower-level than OpenGL 1's; if you consider the above to be too much code, then chances are you're better off with a rendering engine like Irrlicht.