Sunday, December 14, 2008

Surface Normals in OpenGL

Before we can learn to load objects in from a 3D program, we need to talk about another feature of OpenGL - Surface Normals (usually referred to just as "normals"). A surface normal is nothing more than a vector (or line) that is perpendicular to the surface of a given polygon. Here's a nice illustration of the concept (this one is from Wikipedia, not from me):

OpenGL doesn't need to know the normals to render a shape, but it does need them when you start using directional lighting. OpenGL needs the surface normals to know how the light interacts with the individual polygons.

Normals can be calculated, but OpenGL ES won't do it for you. To find the normal for a triangle, build two edge vectors from the triangle's three vertices and calculate their cross product. That's easy enough, right? Well, maybe it's been a while since you took Geometry, so let's dive in a little deeper here (as much for my benefit as for yours - my last Geometry class was over fifteen years ago).

Before we look at the code for calculating the surface normal, let's do a little setup. So far, throughout our code, we've just been dealing with vertex arrays where the first element is the X position, the second is the Y position, and the third is the Z position, then it starts over with the fourth being the X position for the next vertex, etc.. For clarity in our code sample, however, let's define a struct to hold a single vertex:
typedef struct {
    GLfloat x;
    GLfloat y;
    GLfloat z;
} Vertex3D, Vector3D;
That shouldn't need much explanation, right? Three floating point values to represent a point in space. Notice, however, that we've actually defined two types based on this struct: Vertex3D and Vector3D. Computationally speaking, vectors and vertices are the same - they are represented by a single point in three-dimensional space. Conceptually, they are different, however. A vertex represents a single point in space, while a vector represents both a direction and a distance (usually called "magnitude").

This is one of those little things that people who work with 3D graphics and Euclidean geometry take for granted but rarely bother to explain to newcomers. How can a single point in space represent a distance? It takes two points to make a line, right? Everybody knows that.

The answer is simple - the other point is assumed to be the origin. A vector is a line segment drawn from the origin to the point in space represented by the data structure. As a practical matter, the distinction between a vector and a vertex is often academic, since the data structures are the same. Normals are vectors, not vertices, so we've defined a separate type for each just to be semantically accurate.

Anyway, back from that little tangent, to make the code readable, let's also define a struct to hold a single triangle:
typedef struct {
    Vertex3D v1;
    Vertex3D v2;
    Vertex3D v3;
} Triangle3D;
So, given a triangle, here is how we calculate the surface normal:
Vector3D calculateTriangleSurfaceNormal(Triangle3D triangle)
{
    // Build two edge vectors from the triangle's vertices,
    // then take their cross product to get a perpendicular vector
    Vector3D u, v, surfaceNormal;
    u.x = triangle.v2.x - triangle.v1.x;
    u.y = triangle.v2.y - triangle.v1.y;
    u.z = triangle.v2.z - triangle.v1.z;
    v.x = triangle.v3.x - triangle.v1.x;
    v.y = triangle.v3.y - triangle.v1.y;
    v.z = triangle.v3.z - triangle.v1.z;
    surfaceNormal.x = (u.y * v.z) - (u.z * v.y);
    surfaceNormal.y = (u.z * v.x) - (u.x * v.z);
    surfaceNormal.z = (u.x * v.y) - (u.y * v.x);
    return surfaceNormal;
}
Surface normals, however, are usually "normalized" so that they have a length of one. This makes the lighting calculations faster and keeps the lighting intensity correct. So, to normalize a normal (say that ten times fast standing on your head), we have to first figure out the magnitude (length) of the surface normal:
GLfloat vectorMagnitude(Vector3D vector)
{
    // Pythagorean theorem in three dimensions; requires #include <math.h>
    return sqrtf((vector.x * vector.x) + (vector.y * vector.y) + (vector.z * vector.z));
}
Eww... now you're starting to see why OpenGL doesn't do this for us at runtime - performance. Square root is a costly calculation, and it would have to do it for every polygon. But we're still not done. Nope, once we have the length of the vector, then we can normalize it like so:
void normalizeVector(Vector3D *vector)
{
    GLfloat vecMag = vectorMagnitude(*vector);
    vector->x /= vecMag;
    vector->y /= vecMag;
    vector->z /= vecMag;
}
Note: Things can get confusing with the terminology here. "Normalizing" a vector has nothing to do with a surface "normal". A surface normal is called a "normal" based on the use of the word "normal" as a synonym for "perpendicular". On the other hand, when you're "normalizing" a vector, you're reducing that vector to a standard (or "normal") magnitude. So, normalizing a normal is cool - a normal doesn't have to be normalized, but it often is, and with OpenGL, they should be to avoid making the rendering pipeline do extra work.

Only, if you look at the illustration above, there are actually two normals, pointing in opposite directions - triangles have two sides, and therefore two potential normals. The polygons in most 3D computer objects only have one normal, and it's pointing outwards from the object. There's no point in defining the normal for the side that light will never hit because it's inside an object (that inward-facing side is what's known as a "backface"). OpenGL would have no way of knowing which normal to use if it tried to calculate them for you, so it would have to use both - another performance hit, since it would be doing calculations on polygons that could never be seen.

The two surface normals for a triangle (or any polygon) point in completely opposite directions - 180° - so it's easy to "flip a normal" so that it's representing the other side of the triangle:
void flipVector(Vector3D *vector)
{
    vector->x = -vector->x;
    vector->y = -vector->y;
    vector->z = -vector->z;
}
Now that I've shown you how to calculate, normalize, and flip vectors, forget all about it. In general, you won't need to worry about calculating surface normals because 3D computer modeling programs like Maya, 3ds Max, LightWave, and Blender will calculate those normals for your objects, and you can export your 3D data so that you have the pre-calculated, normalized normals available as part of your data model (something we'll look at before too long).

Here is the basic process for using normals in OpenGL. To tell OpenGL that you are going to provide it with normals, you have to call
glEnableClientState(GL_NORMAL_ARRAY);
This is usually done once during startup, but could also be turned on and off during run time if you only had normals for some objects.

Normal data, like pretty much everything else we've worked with in OpenGL ES 1.1, has to be packed into an array (a "normal array"). Before calling glDrawElements(), you have to feed OpenGL that array using the glNormalPointer() function, like:
glNormalPointer(GL_FLOAT, 0, myNormalArray);
The first parameter tells OpenGL that our array is an array of GLfloats. The second is the stride - it's that same parameter I keep telling you to ignore, the one that allows you to skip data in the array. The third and final argument is a pointer to the actual normal data.

That's all there is to it. You won't notice any difference in the drawing of the object with or without surface normals until you start adding some directional lighting (which we'll do in a future blog posting), but I wanted you to have a good grasp on what normals were and why we need them before we get into trying to load a 3D model.

One last note - if you're wondering why we're working so much with C structs and C functions rather than declaring Objective-C classes, the answer once again is performance. There is a bit of additional overhead associated with instantiating Objective-C objects and with dynamic messaging. That overhead is trivial in most applications, but a 3D model will usually be made up of thousands of triangles. You do NOT want the additional instantiation or dispatch overhead in this situation, believe me, especially not on a small device like the iPhone. It might make sense to declare a class to hold a 3D model, but not to hold the individual polygons or vertices. Too much cost, too little benefit.
