There is some weirdness and confusion in notation for transformations, which becomes real confusion when you actually need to implement it using someone else’s libraries and code.
Short version:
- In class, we will use right-handed coordinate systems and the post-multiply convention.
- In GL (whether it's old-fashioned GL, OpenGL, WebGL, …) matrices are stored in column-major form. You can think of this as still being post-multiply, just with all the matrices transposed (so when you do the multiplications, each new matrix actually multiplies on the left).
- In GL, the coordinate system can be either right handed or left handed. The only place where the Z axis direction (into or out of the screen) matters is the Z-test, and you can pick which comparison function you want for it.
You can see a discussion at the glmatrix web page as well.
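For instance, you can see the column-major storage for yourself with glMatrix. This is just a sketch to illustrate the layout; mat4.fromTranslation and vec3.fromValues are glMatrix functions, and all it does is print the 16 numbers of a translation matrix:
import { mat4, vec3 } from "gl-matrix";
const T = mat4.fromTranslation(mat4.create(), vec3.fromValues(2, 3, 4));
console.log(Array.from(T));
// prints [1,0,0,0, 0,1,0,0, 0,0,1,0, 2,3,4,1]
// Read row by row, that looks like the transpose of the textbook translation matrix
// (which puts the translation in the last column); stored column-major, it is the same matrix.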
Long version:
When we compose transforms, it’s nice to think of it functionally:
non-local coords = transform(local coords)
So we get
final coords = transform( transform( … (transform (local coords))))
If our transforms are linear operators (matrices), this ends up looking like:
x' = D C B A x
where x is a point in local coordinates, A, B, C, and D are transforms (matrices), and x' is the point in global coordinates.
This is called the post-multiply convention.
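As a concrete sketch of this composition using glMatrix (the particular transforms below are arbitrary examples; glMatrix stores matrices in GL's column-major layout internally, but its multiply and transform functions follow the ordinary math order, so the code reads like the equation above):
import { mat4, vec3 } from "gl-matrix";
const A = mat4.fromScaling(mat4.create(), [2, 2, 2]);           // applied first
const B = mat4.fromZRotation(mat4.create(), Math.PI / 2);
const C = mat4.fromTranslation(mat4.create(), [5, 0, 0]);
const D = mat4.fromTranslation(mat4.create(), [0, 1, 0]);       // applied last
const M = mat4.create();
mat4.multiply(M, D, C);                                         // M = D C
mat4.multiply(M, M, B);                                         // M = D C B
mat4.multiply(M, M, A);                                         // M = D C B A
const xPrime = vec3.transformMat4(vec3.create(), [1, 0, 0], M); // x' = D C B A x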
If you think about it, this is writing backwards. First apply transform A to the object. Then apply transform B to that. Then apply C. So, some people like to write it:
y' = y P Q R S
In this case, y is transpose(x) (a row vector), y' is transpose(x'), P is transpose(A), Q is transpose(B), and so on.
This is called the premultiply convention.
Aside: be careful that this is re-ordering, not inversion. It is one thing to say:
transpose(D C B A) = transpose(A) transpose(B) transpose(C) transpose(D)
it's quite another to say
inverse(D C B A) = inverse(A) inverse(B) inverse(C) inverse(D)
transpose is not inversion.
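If you want to convince yourself numerically, here is a tiny glMatrix check (a sketch; any two transforms would do):
import { mat4 } from "gl-matrix";
const A = mat4.fromTranslation(mat4.create(), [1, 2, 3]);
const B = mat4.fromZRotation(mat4.create(), 0.5);
const lhs = mat4.transpose(mat4.create(), mat4.multiply(mat4.create(), B, A));  // transpose(B A)
const rhs = mat4.multiply(mat4.create(),
  mat4.transpose(mat4.create(), A),
  mat4.transpose(mat4.create(), B));                                            // transpose(A) transpose(B)
// lhs and rhs hold the same 16 numbers. Swapping transpose for invert also reverses
// the order, but gives a different matrix; transpose is not inversion.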
Notice that this is just notation that you might not need. If your program doesn't look inside the matrices, you can happily speak in terms of transformations (assuming that each transformation operator applies to the local coordinates of the objects inside):
translate(...)
rotate(...)
translate(...)
rotate(...)
scale(...)
draw object
Except that sometimes your program actually has to look inside of those matrices, especially since WebGL doesn't compute them for you.
So now you might ask… is WebGL pre-multiply or post-multiply? And the sad news is you won’t find a consistent straight answer.
Here’s a way to think about it: WebGL does follow the post-multiply convention. However, it stores all matrices transposed. Therefore, you need to transpose the matrices before sending them to GL (in old-fashioned GL you used the “loadMatrixTranspose” and “multMatrixTranspose” functions rather than “loadMatrix” and “multMatrix”). Usually, you’d use the commands that created the transformations (translate, rotate) rather than building matrices yourself, so this wasn’t a big issue.
Now, with “modern” GL, you have to implement matrix stuff yourself. Even if you get a library that implements matrix multiply, you still have to make your own stack.
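A minimal stack is only a few lines. Here is one possible sketch on top of glMatrix; the MatrixStack class and its method names are made up for illustration, not part of any library:
import { mat4 } from "gl-matrix";
class MatrixStack {
  constructor() { this.stack = [mat4.create()]; }              // start with the identity
  top() { return this.stack[this.stack.length - 1]; }
  push() { this.stack.push(mat4.clone(this.top())); }          // save a copy to restore later
  pop() { return this.stack.pop(); }
  multMatrix(m) { mat4.multiply(this.top(), this.top(), m); }  // compose m onto the current top
}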
This doesn’t seem too bad. I want to do
x' = D C B A x
I define “multMatrix(M)” to mean “replace the current matrix with (current matrix) times M,” and write:
multMatrix(D)
multMatrix(C)
multMatrix(B)
multMatrix(A)
use matrix for drawing the points
And I just need to remember to transpose the matrix before I send it to GL (in that last line).
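Spelled out, and assuming you keep your matrices in row-major order as 16-element arrays and write your own multiply (the helper names here are made up; the result is what you would hand to gl.uniformMatrix4fv with its transpose argument set to false):
function matMul(a, b) {                        // out = a * b, both stored row-major
  const out = new Float32Array(16);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      for (let k = 0; k < 4; k++)
        out[4 * r + c] += a[4 * r + k] * b[4 * k + c];
  return out;
}
function transpose(m) {                        // swap rows and columns
  const out = new Float32Array(16);
  for (let r = 0; r < 4; r++)
    for (let c = 0; c < 4; c++)
      out[4 * c + r] = m[4 * r + c];
  return out;
}
function composeForGL(A, B, C, D) {
  let M = D;                                   // multMatrix(D)
  M = matMul(M, C);                            // multMatrix(C)
  M = matMul(M, B);                            // multMatrix(B)
  M = matMul(M, A);                            // multMatrix(A): M = D C B A
  return transpose(M);                         // flip to GL's layout right at the end
}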
Except… if I am using a library, that library might be designed to keep all of the matrices transposed. So if I say “give me a translation matrix,” it will actually give me the TRANSPOSE of the translation matrix, since it knows that eventually you will want to send it to GL.
We could transpose these matrices into the form we like, multiply the way we think about stuff, and transpose them back:
multMatrix(transpose(get D from library))
multMatrix(transpose(get C from library))
multMatrix(transpose(get B from library))
multMatrix(transpose(get A from library))
use transpose of matrix stack top to draw the points
But that’s a lot of transposing. Instead, we remember that:
transpose(A B) = transpose(B) transpose(A)
So we can just keep all the matrices transposed all the time, and simply multiply each new (transposed) matrix on the left instead of on the right.
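Reusing the matMul helper from the sketch above: assume At, Bt, Ct, Dt are the already-transposed (GL-layout) matrices the library hands you. Each new matrix now goes on the left, and the result can go straight to GL with no final transpose:
let Tt = Dt;                                   // the library's (transposed) D
Tt = matMul(Ct, Tt);                           // transpose(C) transpose(D) = transpose(D C)
Tt = matMul(Bt, Tt);                           // transpose(D C B)
Tt = matMul(At, Tt);                           // transpose(D C B A), already in GL's layout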
So, hopefully, you are wondering “why does OpenGL store the transpose of the matrices?” Which is a good question. This isn’t a mean trick from the developers, it’s a historical artifact.
In modern programming languages (C, JavaScript, Python, …), the convention is to store matrices in row-major order. That is, if you turn your 2D array (matrix) into a 1D array (list):
the matrix
1 2 3
4 5 6
7 8 9
is stored as the list
1 2 3 4 5 6 7 8 9
In some other programming languages, popular at the time GL was invented, matrices were stored in column major order:
the matrix
1 2 3
4 5 6
7 8 9
is stored as the list
1 4 7 2 5 8 3 6 9
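In JavaScript terms, the two conventions are just two ways of flattening the same 3x3 array (a quick illustration):
const m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const rowMajor = m.flat();                                        // [1, 2, 3, 4, 5, 6, 7, 8, 9]
const colMajor = [0, 1, 2].map(c => m.map(row => row[c])).flat(); // [1, 4, 7, 2, 5, 8, 3, 6, 9]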
So, in the minds of the designers of GL, they were storing the matrices the right way: non-transposed, just in column-major order. They also thought about things using the pre-multiply convention.
For me, using modern languages, I think of GL as using the matrices transposed, and working with my post-multiply convention. I just need to remember that each new matrix multiplies on the left rather than on the right.
I’ve been working with GL variants (beginning with the original IrisGL, and now up to WebGL) since 1989. This has bugged me since day 1. And I still forget it and have to check from time to time.