What the human eye ( or virtual camera ) sees is a result of light coming off of an object or other light source and striking receptors in the eye. In order to understand and model this process, it is necessary to understand different light sources and the ways that different materials reflect those light sources.
Trying to recreate reality is difficult.
Lighting calculations can take a VERY long time.
The techniques described here are heuristics which produce appropriate results, but they do not work in the same way reality works - because that would take too long to compute, at least for interactive graphics.
Instead of just specifying a single colour for a polygon we will instead specify the properties of the material that the polygon is supposed to be made out of ( i.e. how the material responds to different kinds of light ), and the properties of the light or lights shining onto that material.
No Lighting ( Emissive Lighting )
I = Ki
I: intensity
Ki: object's intrinsic intensity, 0.0 - 1.0 for each of R, G, and B
This scene from battalion has no lighting ( ? )
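In fixed-function OpenGL this intrinsic colour corresponds to the GL_EMISSION material property; a minimal sketch ( the colour values here are made up ):
GLfloat Ki[] = { 0.8f, 0.2f, 0.2f, 1.0f };  /* intrinsic intensity for R, G, B ( plus alpha ) */
glMaterialfv(GL_FRONT, GL_EMISSION, Ki);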
Ambient Lighting
I = Ia Ka
I: intensity
Ia: intensity of Ambient light
Ka: object's ambient reflection coefficient, 0.0 - 1.0 for each of R, G, and B
Here is the same object (Christina Vasilakis' SoftImage Owl) under different lighting conditions:
bounding boxes of the components of the owl
self-luminous owl
directional light from the front of the owl
point light slightly in front of the owl
spotlight slightly in front of the owl aimed at the owl
Diffuse Lighting
Using a point light:
I = Ip Kd cos(theta) or I = Ip Kd(N' * L')
I: intensity
Ip: intensity of point light
Kd: object's diffuse reflection coefficient, 0.0 - 1.0 for each of R, G, and B
theta: angle between the surface normal and the direction to the light source
N': normalized surface normal
L': normalized direction to light source
*: represents the dot product of the two vectors
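As a sketch in plain C ( the function and variable names here are mine, not part of any API ), the diffuse term is just a clamped dot product scaled by the light and material constants:
/* diffuse term for one colour channel: Ip * Kd * (N' * L')
   n and l must already be normalized; a negative dot product
   means the light is behind the surface, so clamp it to zero */
float diffuse(float Ip, float Kd, const float n[3], const float l[3])
{
    float NdotL = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
    if (NdotL < 0.0f) NdotL = 0.0f;
    return Ip * Kd * NdotL;
}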
Using a directional light:
Directional lights are faster than point lights because L' does not need to be recomputed for each polygon.
It is rare that we have an object in the real world illuminated only by a single light. Even on a dark night there is some ambient light. To make sure all sides of an object get at least a little light we add some ambient light to the point or directional light:
I = Ia Ka + Ip Kd(N' * L')
Currently there is no distinction made between an object close to a point light and an object far away from that light. Only the angle has been used so far. It helps to introduce a term based on distance from the light. So we add in a light source attenuation factor: Fatt.
I = Ia Ka + Fatt Ip Kd(N' * L')
Coming up with an appropriate value for Fatt is rather tricky.
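One common form ( it is also the one fixed-function OpenGL uses, with constant, linear, and quadratic coefficients ) makes Fatt fall off with the distance dL to the light, clamped so it never exceeds 1:
/* Fatt = min( 1 / (c1 + c2*dL + c3*dL^2), 1 ) */
float attenuation(float c1, float c2, float c3, float dL)
{
    float f = 1.0f / (c1 + c2*dL + c3*dL*dL);
    return (f > 1.0f) ? 1.0f : f;
}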
It can take a fair amount of time to balance all the various types of lights in a scene to give the desired effect ( just as it takes a fair amount of time in real life to set up proper lighting ).
Specular Lighting
I = Ip cos^n(a) W(theta)
I: intensity
Ip: intensity of point light
a: angle between the direction of reflection and the direction to the viewer
n: specular-reflection exponent (higher is sharper falloff)
W(theta): specular reflection coefficient as a function of the angle of incidence theta ( often replaced by a constant ); gives a specular component to non-specular materials
So if we put all of these lighting models together, we add up their various components to get:
I = Ia Ka + Ip Kd(N' * L') + Ip cos^n(a) W(theta)
As shown in the following figures:
Ambient + Diffuse + Specular = Result
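In fixed-function OpenGL the Ia and Ip terms are specified per light. A minimal sketch for a single white point light ( the positions and colours here are made up ):
GLfloat amb[]  = { 0.2f, 0.2f, 0.2f, 1.0f };  /* Ia */
GLfloat dif[]  = { 0.8f, 0.8f, 0.8f, 1.0f };  /* Ip used for the diffuse term */
GLfloat spec[] = { 1.0f, 1.0f, 1.0f, 1.0f };  /* Ip used for the specular term */
GLfloat pos[]  = { 0.0f, 1.0f, 2.0f, 1.0f };  /* w = 1: point light; w = 0: directional */
glLightfv(GL_LIGHT0, GL_AMBIENT, amb);
glLightfv(GL_LIGHT0, GL_DIFFUSE, dif);
glLightfv(GL_LIGHT0, GL_SPECULAR, spec);
glLightfv(GL_LIGHT0, GL_POSITION, pos);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);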
In OpenGL a polygon can have the following material properties:
GL_AMBIENT: ambient reflection coefficient ( Ka )
GL_DIFFUSE: diffuse reflection coefficient ( Kd )
GL_SPECULAR: specular reflection coefficient
GL_SHININESS: specular-reflection exponent ( n )
GL_EMISSION: intrinsic ( emissive ) colour
These properties describe how light is reflected off the surface of the polygon. A polygon with diffuse colour (1, 0, 0) reflects all of the red light it is hit with, and absorbs all of the blue and green. If this red polygon is hit with a white light it will appear red. If it is hit with a blue light, or a green light, or an aqua light it will appear black ( as those lights have no red component. ) If it is hit with a yellow light or a purple light it will appear red ( as the polygon will reflect the red component of the light. )
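A sketch of that red polygon's material in OpenGL:
GLfloat red[] = { 1.0f, 0.0f, 0.0f, 1.0f };
glMaterialfv(GL_FRONT, GL_DIFFUSE, red);  /* reflects red, absorbs green and blue */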
The following pictures will help to illustrate this:
ball   | light | appears
-------+-------+--------
white  | red   | red
red    | white | red
red    | green | black
purple | blue  | blue
yellow | aqua  | green
One important thing to note about all of the above equations is that each object is dealt with separately. That is, one object does not block light from reaching another object. The creation of realistic shadows is quite expensive if done right, and is a currently active area of research in computer graphics. ( Consider, for example, a plant with many leaves, each of which could cast shadows on other leaves or on the other nearby objects, and then further consider the leaves fluttering in the breeze and lit by diffuse or unusual light sources. )
With multiple lights the effect of all the lights is additive.
We often use polygons to simulate curved surfaces. In these cases we want the colours of the polygons to flow smoothly into each other.
Flat Shading
Given a single normal to the plane, the lighting equations and the material properties are used to generate a single colour. The polygon is filled with that colour.
Here is another of the OpenGL samples with a flat shaded scene:
Gouraud Shading
Given a normal at each vertex of the polygon, the colour at each vertex is determined from the lighting equations and the material properties. Linear interpolation of the colour values at the vertices is used to generate colour values for each pixel on the edges. Linear interpolation across each scan line is then used to fill in the colour of the polygon.
Here is another of the OpenGL samples with a smooth shaded scene:
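In OpenGL the choice between flat and Gouraud shading is a single call:
glShadeModel(GL_FLAT);    /* one colour for the whole polygon */
glShadeModel(GL_SMOOTH);  /* Gouraud shading ( the default ) */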
Phong Shading
Where Gouraud shading uses normals at the vertices and then interpolates the resulting colours across the polygon, Phong shading goes further and interpolates the normals. Linear interpolation of the normal values at the vertices is used to generate normal values for the pixels on the edges. Linear interpolation across each scan line is then used to generate normals at each pixel across the scan line, and the lighting equations are evaluated at every pixel using its interpolated normal.
Whether we are interpolating normals or colours the procedure is the same:
To find the intensity Ip at a pixel on the scan line, we need to know the intensities Ia and Ib where the scan line crosses the two edges. To find the intensity Ia we need to know the intensities I1 and I2 at the endpoints of its edge; to find the intensity Ib we need to know the intensities I1 and I3 at the endpoints of the other edge.
Ia = (Ys - Y2) / (Y1 - Y2) * I1 + (Y1 - Ys) / (Y1 - Y2) * I2
Ib = (Ys - Y3) / (Y1 - Y3) * I1 + (Y1 - Ys) / (Y1 - Y3) * I3
Ip = (Xb - Xp) / (Xb - Xa) * Ia + (Xp - Xa) / (Xb - Xa) * Ib
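These three equations translate directly into C ( the names follow the equations; Ys is the height of the scan line ):
/* intensity where scan line Ys crosses the edge from (y1, i1) to (y2, i2) */
float edge_intensity(float Ys, float y1, float i1, float y2, float i2)
{
    return (Ys - y2) / (y1 - y2) * i1 + (y1 - Ys) / (y1 - y2) * i2;
}
/* intensity at Xp on the scan line between (Xa, Ia) and (Xb, Ib) */
float span_intensity(float Xp, float Xa, float Ia, float Xb, float Ib)
{
    return (Xb - Xp) / (Xb - Xa) * Ia + (Xp - Xa) / (Xb - Xa) * Ib;
}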
We talked earlier about how atmospheric effects give us a sense of depth, as particles in the air make objects that are further away look less distinct than near objects.
Fog, or atmospheric attenuation, allows us to simulate this effect.
Fog is implemented by blending the calculated color of a pixel with a given background color ( usually grey or black ), in a mixing ratio that is somehow proportional to the distance between the camera and the object. Objects that are farther away get a greater fraction of the background color relative to the object's color, and hence "fade away" into the background. In this sense, fog can ( sort of ) be thought of as a shading effect.
Fog is typically given a starting distance, an ending distance, and a colour. The fog begins at the starting distance and all the colours slowly transition to the fog colour towards the ending distance. At the ending distance all colours are the fog colour.
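For a linear fog this is easy to state ( the formula below is what OpenGL's GL_LINEAR fog mode computes ): an object at distance z keeps a fraction f of its own colour and takes the rest from the fog colour.
/* linear fog factor: 1 at the starting distance, 0 at the ending distance */
float fog_factor(float z, float start, float end)
{
    float f = (end - z) / (end - start);
    if (f < 0.0f) f = 0.0f;
    if (f > 1.0f) f = 1.0f;
    return f;
}
/* final colour = f * object colour + (1 - f) * fog colour, per channel */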
Here are those oh-so-ever-present computer graphics teapots from the OpenGL samples:
To use fog in OpenGL you need to tell the computer a few things: the fog mode ( GL_LINEAR, GL_EXP, or GL_EXP2 ), the fog colour, the start and end distances for linear fog, and that fog should be turned on, as in the sketch below.
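A minimal setup for the linear fog described above ( the distances here are made up ):
GLfloat fogColour[] = { 0.0f, 0.0f, 0.0f, 1.0f };  /* black, like the night sky below */
glFogi(GL_FOG_MODE, GL_LINEAR);
glFogfv(GL_FOG_COLOR, fogColour);
glFogf(GL_FOG_START, 10.0f);
glFogf(GL_FOG_END, 100.0f);
glEnable(GL_FOG);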
Here is a scene from battalion without fog. The monster sees a very sharp edge to the world.
Here is the same scene with fog. The monster sees a much softer horizon, as objects further away tend towards the black colour of the sky.
Lines and the edges of polygons still look jagged at this point. This is especially noticeable when moving through a static scene looking at sharp edges.
This is known as aliasing, and is caused by the conversion from the mathematical edge to a discrete set of pixels. We saw near the beginning of the course how to scan convert a line into the frame buffer, but at that point we only dealt with placing the pixel or not placing the pixel. Now we will deal with coverage.
The mathematical line will likely not exactly cover pixel boundaries - some pixels will be mostly covered by the line (or edge), and others only slightly. Instead of making a yes/no decision we can assign a value to this coverage (from say 0 to 1) for each pixel and then use these values to blend the colour of the line (or edge) with the existing contents of the frame buffer.
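Per colour channel the blend is just a weighted average ( a sketch in plain C ):
/* blend the line's colour into the frame buffer by its coverage of the pixel,
   where coverage runs from 0.0 ( untouched ) to 1.0 ( fully covered ) */
float blend(float coverage, float line_colour, float buffer_colour)
{
    return coverage * line_colour + (1.0f - coverage) * buffer_colour;
}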
In OpenGL you give hints, setting GL_POINT_SMOOTH_HINT, GL_LINE_SMOOTH_HINT, and GL_POLYGON_SMOOTH_HINT to either GL_FASTEST or GL_NICEST, to tell OpenGL how hard to try to smooth things out using the alpha ( transparency ) values.
You also need to enable or disable that smoothing:
glEnable(GL_POINT_SMOOTH);
glEnable(GL_LINE_SMOOTH);
glEnable(GL_POLYGON_SMOOTH);
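For the alpha values to actually blend with the frame buffer, blending must also be enabled; the usual pairing for antialiasing is:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);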