value for the second polygon can be assigned to that fragment and the associ-
ated 3D x-coordinate will be updated. Otherwise, the current attributes will
remain unchanged. This procedure determines which polygon is visible from
the viewpoint. Remember, we have defined the viewing plane to be at x = 1.
Therefore, the 3D x-coordinate of each vertex determines how far it is from
the viewing plane. The closer it is to the viewing plane, the more likely that
the object connected to that vertex will not be occluded by other objects in
the scene. Essentially, this procedure is carried out by the Z-buffer algorithm,
which we will now cover in more detail.
7.4.1 The Z-Buffer Algorithm
The Z-buffer algorithm is used to manage the visibility problem when ren-
dering 3D scenes; that is, which elements of the rendered scene will be visible
and which will be hidden. Look at Figure 7.5; it shows a polygon and its
projection onto the viewing plane where an array of fragments is also illustrated.
Figure 7.5. Projecting back from the viewpoint through pixel (i, j) leads to a point in
the interior of polygon k.
Imagine adding a matching array of similar dimension that is capable
of recording a real number for each fragment. Let us call this the Z-buffer.
The main action of the Z-buffer algorithm is to record the depth coordinate
(or the x-coordinate) associated with the 3D point P into the buffer at the
address (i, j). This action is repeated for all fragments inside the projection of
polygon k.
When it comes to drawing another polygon, say l, it too will paint into a
set of fragments, some of which may overlap with those previously filled with
data from polygon k. Now the Z-buffer comes into play. Before informa-
tion for polygon l is written to the fragment at location (i, j), the Z-buffer is
checked to see whether polygon l appears to be in front of polygon k. If l is in
front of k then the data from l is placed in the frame buffer and the Z-buffer
depth at location (i, j) is updated to take account of l’s depth.
The algorithm is known as the Z-buffer algorithm because the first pro-
grams to use it arranged their frame of reference with the viewpoint at the
origin and the direction of view aligned along the z-axis. In these programs,
the distance to any point on the polygon from the viewpoint was simply the
point’s z-coordinate. (OpenGL and Direct3D use this direction for their di-
rection of view.) The Z-buffer algorithm is summarized in Figure 7.6.
Fill the depth buffer Z(i, j) with a "far away" depth;
i.e., set Z(i, j) = ∞ for all i, j.
Repeat for all polygons k {
    For polygon k find pixels (i, j) covered by it. Fill
    these with the color or texture of polygon k.
    With each pixel (i, j) covered by k repeat {
        Calculate the depth (Δ) of P from V (see Figure 7.5)
        If Δ < Z(i, j) {
            Set pixel (i, j) to color of polygon k
            Update the depth buffer Z(i, j) = Δ
        }
    }
}
Figure 7.6. The basic ideas of the Z-buffer rendering algorithm.
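The same loop can be written out more concretely. Below is a minimal C sketch of
Figure 7.6, assuming a fixed-size frame buffer and Z-buffer held as plain arrays;
polygon_covers(), depth_at() and color_of() are hypothetical helpers standing in for
the coverage and depth calculations described in this chapter, declared here but not
implemented.

#include <float.h>

#define WIDTH  640
#define HEIGHT 480

static unsigned int frame[HEIGHT][WIDTH];   /* color (frame) buffer          */
static double       zbuf[HEIGHT][WIDTH];    /* Z-buffer of per-pixel depths  */

/* Assumed helpers, not defined here:
 *   polygon_covers(k, i, j) - nonzero if polygon k projects onto pixel (i, j)
 *   depth_at(k, i, j)       - depth of the point P on polygon k seen through
 *                             pixel (i, j)
 *   color_of(k)             - the polygon's color or texture sample          */
int          polygon_covers(int k, int i, int j);
double       depth_at(int k, int i, int j);
unsigned int color_of(int k);

void render(int npolygons)
{
    int i, j, k;

    /* Step 1: fill the Z-buffer with a "far away" depth. */
    for (j = 0; j < HEIGHT; j++)
        for (i = 0; i < WIDTH; i++)
            zbuf[j][i] = DBL_MAX;

    /* Step 2: rasterize every polygon, keeping only the nearest fragment. */
    for (k = 0; k < npolygons; k++)
        for (j = 0; j < HEIGHT; j++)
            for (i = 0; i < WIDTH; i++)
                if (polygon_covers(k, i, j)) {
                    double d = depth_at(k, i, j);
                    if (d < zbuf[j][i]) {        /* closer than current value */
                        frame[j][i] = color_of(k);
                        zbuf[j][i]  = d;
                    }
                }
}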
To find P, and hence its x-coordinate, we call on the ideas of Section 6.4.1
for the intersection of line and plane. Initially, we need to transform the 2D
fragment coordinate at (i, j) into a 3D coordinate by using the inverse of
the projective transformation previously discussed. The new 3D coordinate,
along with the viewpoint, is used to derive the equation of the line intersecting
the polygon k at the point P. The vertices, P1, P2, P3, of the polygon k can
be used to form a bounded plane. Thus, P can be found using standard 3D
geometry, as it is simply the point of intersection between a line and bounded
plane.
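To make the geometry concrete, the C sketch below performs one common form of the
line/bounded-plane test, the Moller-Trumbore ray-triangle intersection, rather than
reproducing the exact construction of Section 6.4.1; the vec3 type and vector helpers
are assumptions made for this example.

#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3   sub(vec3 a, vec3 b)  { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double dot(vec3 a, vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3   cross(vec3 a, vec3 b)
{
    vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
    return r;
}

/* Intersect the ray origin + t*dir with the triangle (p1, p2, p3).
 * Returns 1 and writes the intersection point to *hit when the ray meets
 * the bounded plane in front of the origin; returns 0 otherwise.          */
int ray_triangle(vec3 origin, vec3 dir, vec3 p1, vec3 p2, vec3 p3, vec3 *hit)
{
    vec3 e1 = sub(p2, p1), e2 = sub(p3, p1);
    vec3 pv = cross(dir, e2);
    double det = dot(e1, pv);
    if (fabs(det) < 1e-12) return 0;          /* ray parallel to the plane   */

    double inv = 1.0 / det;
    vec3 tv = sub(origin, p1);
    double u = dot(tv, pv) * inv;
    if (u < 0.0 || u > 1.0) return 0;         /* outside the bounded plane   */

    vec3 qv = cross(tv, e1);
    double v = dot(dir, qv) * inv;
    if (v < 0.0 || u + v > 1.0) return 0;

    double t = dot(e2, qv) * inv;             /* distance along the ray      */
    if (t < 0.0) return 0;                    /* intersection behind origin  */
    hit->x = origin.x + t * dir.x;
    hit->y = origin.y + t * dir.y;
    hit->z = origin.z + t * dir.z;
    return 1;
}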
7.5 Fragment Processing
In the early generations of hardware graphics processing chips (GPUs), frag-
ment processing primarily consisted of interpolating lighting and color from
the vertex values of the polygon which has been determined to be visible
in that fragment. However, today in software renderers and even in most
hardware rendering processors, the lighting and color values are calculated on
a per-fragment basis, rather than interpolating the value from the polygon's
vertices.
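As a rough C sketch of that older per-vertex approach, the function below spreads a
color stored at a triangle's three vertices across a fragment using barycentric
weights (Gouraud-style interpolation); a per-fragment shader would instead evaluate
the lighting model itself at every fragment. The color3 type and the weights are
assumptions made for the example.

typedef struct { double r, g, b; } color3;

/* Interpolate per-vertex colors c1, c2, c3 at a fragment whose barycentric
 * weights with respect to the triangle's vertices are w1, w2, w3
 * (with w1 + w2 + w3 = 1).                                                 */
color3 interpolate_vertex_color(color3 c1, color3 c2, color3 c3,
                                double w1, double w2, double w3)
{
    color3 c;
    c.r = w1 * c1.r + w2 * c2.r + w3 * c3.r;
    c.g = w1 * c1.g + w2 * c2.g + w3 * c3.g;
    c.b = w1 * c1.b + w2 * c2.b + w3 * c3.b;
    return c;
}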
In fact, determining the color value for any fragment in the output frame
buffer is the most significant task any rendering algorithm has to perform. It
governs the shading, texturing and quality of the final rendering, all of which
are now outlined in more detail.
7.5.1 Shading (Lighting)
The way in which light interacts with surfaces of a 3D model is the most sig-
nificant effect that we can model so as to provide visual realism. Whilst the
Z-buffer algorithm may determine what you can see, it is mainly the interac-
tion with light that determines what you do see. To simulate lighting effects,
it stands to reason that the location and color of any lights within the scene
must be known. In addition to this, we also need to classify the light as being
one of three standard types, as illustrated in Figure 7.7. That is:
1. A point light source that illuminates in all directions. For a lot of
indoor scenes, a point light source gives the best approximation to the
lighting conditions.
2. A directional or parallel light source. In this case, the light comes from
a specific direction which is the same for all points in the scene. (The
illumination from the sun is an example of a directional light source.)
Figure 7.7. Three types of light source.
3. A spotlight illumination is limited to a small region of the scene. The
beam of the spotlight is normally assumed to have a graduated edge so
that the illumination is a maximum inside a cone of half angle θ1 and
falls to zero intensity inside another cone of half angle θ2. Naturally,
θ2 > θ1. (One simple way to model this graduated falloff is sketched
just after this list.)
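As a small illustration of the graduated edge, the hypothetical C function below
returns full intensity inside the inner cone, zero outside the outer cone, and a
simple linear ramp between the two half angles; real renderers often use a smoother,
cosine-based falloff.

/* Spotlight attenuation for a point seen at 'angle' radians off the
 * spotlight axis: full intensity inside the inner cone (half angle theta1),
 * zero outside the outer cone (half angle theta2), and a linear ramp in
 * between to model the graduated edge.                                      */
double spot_falloff(double angle, double theta1, double theta2)
{
    if (angle <= theta1) return 1.0;                /* inside the inner cone  */
    if (angle >= theta2) return 0.0;                /* outside the outer cone */
    return (theta2 - angle) / (theta2 - theta1);    /* graduated edge         */
}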
Now that we know a little about the three standard light sources that are
available for illuminating our scene, we need to consider how the light from
these sources interacts with objects within our scene. To do this, we must con-
sider the spatial relationship between the lights, the camera and the polygons.
This is illustrated in Figure 7.8. In the same way that we have three standard
light sources, there are also standard lighting components which represent the
way light interacts within the scene. These are ambient reflection (Ia), diffuse
reflection (Id), specular reflection (Is) and depth cuing (Ic). It is further pos-
Figure 7.8. Lights, camera and objects. Reflected light finds its way to the observer by
being reflected from the objects' surfaces.
sible to represent these lighting components as mathematical models, which
we will now consider. In these models, we will assume that p is the point to
be illuminated on the visible polygon which has a surface normal n̂ at p.
Ambient reflection (Ia). In a simulated scene, a surface that has no light
incident upon it will appear blank. This is not realistic, however, be-
cause in a real scene, that surface would be partially illuminated by
light that has been reflected from other objects in the scene. Thus, the
ambient reflection component is used to model the reflection of light
which arrives at a point on the surface from all directions after being
bounced around the scene from all other surfaces. In practice, the am-
bient component is defined to have a constant intensity for all surfaces
within the scene. Thus for each surface, the intensity of the ambient
lighting is Ia = k.
Diffuse reflection (Id). The term reflection is used here because it is
light reflected from surfaces which enters the camera. To model the
effect, we assume that a polygon is most brightly illuminated when the
incident light strikes the surface at right angles. The illumination falls
to zero when the direction of the beam of light is parallel to the surface.
This behavior is known as the Lambert cosine law and is illustrated in
Figure 7.9.
Figure 7.9. Diffuse illumination. (a) The brightest illumination occurs when the
incident light direction is at right angles to the surface. (b) The illumination tends
to zero as the direction of the incident light becomes parallel to the surface.
The diffuse illumination component Id is calculated differently for
each of the standard types of light source that we have within the scene:
For a point light source located at Pl:

    Id = ((Pl - p) / |Pl - p|) · n̂
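A small C sketch of this diffuse term for a point light is given below. It is an
illustration only: the vec3 type and vector helpers are assumptions made for the
example, and the dot product is clamped at zero so that surfaces facing away from
the light receive no diffuse contribution.

#include <math.h>

typedef struct { double x, y, z; } vec3;

static vec3   vsub(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static double vdot(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static vec3   vunit(vec3 a)
{
    double len = sqrt(vdot(a, a));
    vec3 r = { a.x / len, a.y / len, a.z / len };
    return r;
}

/* Lambertian diffuse term for a point light at Pl illuminating the surface
 * point p with unit normal n: the cosine of the angle between the direction
 * toward the light and the normal, clamped at zero.                         */
double diffuse_point(vec3 Pl, vec3 p, vec3 n)
{
    vec3 L = vunit(vsub(Pl, p));     /* unit vector from p toward the light */
    double Id = vdot(L, n);
    return Id > 0.0 ? Id : 0.0;
}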