should obviously be culled, but the situation is complicated by the presence
of portals (the doors): some objects in distant rooms may still be visible from
the viewpoint. In computer games and other real-time VR applications, spe-
cial pre-processing is sometimes applied to determine the visibility status of
objects in such scenes.
7.2.1 Culling
For 3D graphics, there are three important types of cull:
1. Remove polygons behind the viewpoint. If the x-coordinates (again,
the assumption being that the viewing direction is along the x-axis) of
all vertices of polygon k are such that x < 1, then cull polygon k.
Remember, in Section 6.6.7 we made the assumption that the viewing
plane³ was located at x = 1. So this is equivalent to removing any
polygons that lie behind the projection plane.
2. Cull any backfaces. A backface is any polygon that faces away from
the viewpoint. It is called a backface because, if the model were solid,
any polygons facing away from the viewer would be obscured
from view by polygons closer to the viewer. For example, in a street
scene, there would be no point rendering the polygons on the sides of
the buildings facing away from the virtual camera, since they will be
occluded by those sides of the buildings facing the camera.
In order to say whether a polygon faces towards or away from an ob-
server, a convention must be established. The most appropriate con-
vention to apply is one based on the surface normal vector. Every poly-
gon (which is essentially a bounded plane) has a surface normal vector
as defined in Section 6.4. Since it is possible for a surface normal to
point in two opposite directions, it is usually assumed that it is directed
so that, in a general sense, it points away from the inside of an object.
Figure 7.2 shows a cube with the normal vectors illustrated. If the dot
product of the normal vector with a vector connecting the viewpoint to
a point on the polygon is negative, the polygon is said to be a frontface;
if the dot product is positive, the polygon is a backface. (A code sketch
of these first two tests is given after this list.)
3. Cull any polygons that lie completely outside the field of view. This
type of cull is not so easy to carry out, and it is usually combined with
the clipping operation described next.
³Also called the plane of projection.
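To make the first two tests concrete, here is a minimal sketch in C (the Vec3 and Polygon types and the function names are illustrative assumptions, not code from this book):

```c
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;

/* A triangular polygon: three vertices plus an outward-facing normal. */
typedef struct { Vec3 v[3]; Vec3 normal; } Polygon;

/* Test 1: cull if every vertex lies behind the projection plane x = 1
   (the viewing direction is along the x-axis). */
bool behind_viewpoint(const Polygon *p) {
    return p->v[0].x < 1.0 && p->v[1].x < 1.0 && p->v[2].x < 1.0;
}

/* Test 2: the polygon is a backface if the dot product of its outward
   normal with a vector from the viewpoint to a point on the polygon
   is positive. */
bool is_backface(const Polygon *p, Vec3 viewpoint) {
    Vec3 d = { p->v[0].x - viewpoint.x,
               p->v[0].y - viewpoint.y,
               p->v[0].z - viewpoint.z };
    double dot = d.x * p->normal.x + d.y * p->normal.y + d.z * p->normal.z;
    return dot > 0.0;
}
```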
Figure 7.2. Surface normal vectors for a cube and a plan of the vertices showing a
consistent ordering of the vertices round each polygonal face. For example,
n_1 = (P_2 − P_1) × (P_3 − P_1).
7.2.2 Clipping
After the culling stage, we will probably have rejected a large number of
polygons that could not possibly be seen from the viewpoint. We can also
be sure that if all of a polygon's vertices lie inside the field of view then it
might be visible and we must pass it to the rasterization stage. However, it is
very likely that there will still be some polygons that lie partially inside and
outside the field of view. There may also be cases where we cannot easily tell
whether a part of a polygon is visible or not. In these cases, we need to clip
away those parts of the polygon which lie outside the field of view. This is
non-trivial, because we are essentially changing the shape of the polygon. In
some cases, the end result of this might be that none of the polygon is visible
at all.
Clipping is usually done against a set of planes that bound a volume. Typ-
ically this takes the form of a pyramid with its apex at the
viewpoint. This bounded volume is also known as a frustum. Each of the
sides of the pyramid is then known as a bounding plane, and these planes form
the edges of the field of view. The top and bottom of the frustum form the
front and back clipping planes. These front and back clipping planes are at
right angles to the direction of view (see Figure 7.3). The volume contained
within the pyramid is then known as the viewing frustum. Obviously, poly-
gons and parts of polygons that lie inside that volume are retained for render-
ing. Some polygons can be marked as lying completely outside the frustum as
the result of a simple test, and these can be culled. However, the simple test is
often inconclusive and we must apply the full rigor of the clipping algorithm.
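One common form of such a simple test (a sketch of a typical approach, not necessarily the exact test intended here) is a trivial reject: if every vertex of a polygon lies outside the same bounding plane, the polygon cannot intersect the frustum and can be culled outright.

```c
#include <stdbool.h>

typedef struct { double x, y, z; } Vec3;
typedef struct { Vec3 point, normal; } Plane;   /* outward-facing normal */

/* Signed distance of a point from a plane; positive means outside. */
static double signed_distance(Plane pl, Vec3 p) {
    return (p.x - pl.point.x) * pl.normal.x
         + (p.y - pl.point.y) * pl.normal.y
         + (p.z - pl.point.z) * pl.normal.z;
}

/* Trivial reject: cull if all vertices lie outside any single plane. */
bool trivially_outside(const Vec3 *verts, int n_verts,
                       const Plane *planes, int n_planes) {
    for (int i = 0; i < n_planes; ++i) {
        bool all_outside = true;
        for (int j = 0; j < n_verts; ++j) {
            if (signed_distance(planes[i], verts[j]) <= 0.0) {
                all_outside = false;
                break;
            }
        }
        if (all_outside)
            return true;
    }
    return false;
}
```

When no single plane rejects the polygon, yet not all of its vertices lie inside the frustum, the test is inconclusive and the polygon must go through the full clipping procedure.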
Figure 7.3. Cubic and truncated pyramidal clipping volumes penetrated by a rect-
angular polygon. Only that portion of the rectangle which lies inside the clipping
volume would appear in the rendered image.
To use these multiple clipping planes, it is sufficient to apply them one
at a time in succession: for example, clip with the back plane, then the front
plane, then each of the side planes in turn.
Clipping a Triangular Polygon with a Clipping Plane
To see how 3D clipping is achieved, we consider the following. Look at
Figure 7.4; it shows a triangular polygon ABC which is to be clipped at the
plane PP′ (seen end-on). Clipping is accomplished by splitting triangle ABC
into two pieces, BDE and ADEC. The pieces are then either removed from
or added to the database of polygons that are to be rendered. For example,
when PP′ is the back clipping plane, triangles ADE and AEC (which together
make up ADEC) are added to the database, whilst triangle BDE is removed.
Points D and E are determined
by finding the intersection between the lines joining vertices of the polygon
and the clipping plane.
This calculation is straightforward for both the front and back clipping
planes because they are perpendicular to the x-axis (that is, the direction of
view in the right-handed coordinate system convention). That means they
form planes parallel to the yz-plane at x = x_f and x = x_b respectively,
where x_f refers to the x-coordinate of the front clipping plane and x_b to the
x-coordinate of the back clipping plane. Because clipping at either plane is
commonly used, it is worth writing expressions specifically for these particular
circumstances. Let's look at the back plane specifically.
Figure 7.4. Clipping a triangular polygon, ABC, with a yz plane at PP′, at (x_p, 0, 0).
Clipping divides ABC into two pieces. If the polygons are to remain triangular, the
piece ADEC must be divided in two.
The intersection point, p, between a line joining points P_1 and P_2 and
the yz plane at x_b is given by

$$
\begin{aligned}
p_x &= x_b,\\
p_y &= P_{1y} + \frac{x_b - P_{1x}}{P_{2x} - P_{1x}}\,(P_{2y} - P_{1y}),\\
p_z &= P_{1z} + \frac{x_b - P_{1x}}{P_{2x} - P_{1x}}\,(P_{2z} - P_{1z}).
\end{aligned}
$$
Likewise for the front viewing plane, except x_b is replaced with x_f in
the calculations. Finding the intersection point between polygons inter-
secting the side viewing planes is more complicated, since these planes are
not at right angles to the x-axis. As such, a more general calculation of the
intersection of a line and a plane needs to be performed. This is shown in
Section 6.4.1.
However, regardless of the plane we clip on, once the coordinates of the
points D and E are known, the triangular polygon ABC is divided into two
or three triangular polygons and stored in the polygon database. This is in
essence how several of the most common clipping algorithms work.
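Expressed in code, the back-plane case might look like the following sketch (Vec3 is an assumed helper type; the formula is the one given above):

```c
typedef struct { double x, y, z; } Vec3;

/* Intersection of the edge P1-P2 with the back clipping plane x = xb.
   Assumes the edge actually crosses the plane, i.e., P1.x != P2.x. */
Vec3 clip_edge_at_x(Vec3 p1, Vec3 p2, double xb) {
    double t = (xb - p1.x) / (p2.x - p1.x);   /* parametric position */
    Vec3 p;
    p.x = xb;
    p.y = p1.y + t * (p2.y - p1.y);
    p.z = p1.z + t * (p2.z - p1.z);
    return p;
}
```

Applying this routine to the two edges of triangle ABC that cross the plane in Figure 7.4 yields the points D and E; the same routine with x_f substituted for x_b clips against the front plane.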
7.3 Screen Mapping
At this stage, we know the 3D vertex coordinates of every visible polygon
within the viewing volume. Now we need to convert these to 2D image or
screen coordinates. For example, in the real world, when we use a camera,
we create a 2D picture of the 3D world. Likewise, in the virtual world, we
now need to create a 2D image of what our virtual camera can see based on
its viewpoint within the 3D environment. 3D coordinate geometry allows us
to do this by using a projective transformation, where all the 3D coordinates
are transformed to their respective 2D locations within the image array. The
intricacies of this are detailed in Section 6.6.7. This process only affects the
3D y- and z-coordinates (again assuming the viewing direction is along the
x-axis). The new 2D (X, Y)-coordinates, along with the 3D x-coordinate, are
then passed on to the next stage of the pipeline.
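As a rough illustration (in C, under the same convention; the scale and centre parameters, which depend on the image resolution and field of view, are assumptions for this sketch):

```c
typedef struct { double x, y, z; } Vec3;
typedef struct { double X, Y; } Point2D;

/* Perspective projection with the viewing direction along x and the
   projection plane at x = 1: divide y and z by the depth x, then map
   into pixel coordinates. */
Point2D project_to_screen(Vec3 v, double scale, double cx, double cy) {
    Point2D s;
    s.X = cx + scale * (v.y / v.x);
    s.Y = cy - scale * (v.z / v.x);   /* screen Y usually grows downwards */
    return s;
}
```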
7.4 Scan Conversion or Rasterization
At the next stage, the visible primitives (points, lines, or polygons) are de-
composed into smaller units corresponding to pixels in the destination frame
buffer. Each of these smaller units generated by rasterization is referred to
as a fragment. For instance, a line might cover five pixels on the screen, and
the process of rasterization converts the line (defined by two vertices) into five
fragments. A fragment is comprised of a frame buffer coordinate, depth infor-
mation and other associated attributes such as color, texture coordinates and
so on. The values for each of these attributes are determined by interpolating
between the values specified (or computed) at the vertices of the primitive.
Remember that in the first stage of the graphics pipeline, attributes are only
assigned on a per-vertex basis.
So essentially, this stage of the pipeline is used to determine which frag-
ments in the frame buffer are covered by the projected 2D polygons. Once it
is decided that a given fragment is covered by a polygon, that fragment needs
to be assigned its attributes. This is done by linearly interpolating the at-
tributes assigned to each of the corresponding 3D vertex coordinates for that
polygon in the geometry and vertex operations stage of the graphics pipeline.
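For instance, across a single horizontal span of fragments, an attribute such as an RGB color might be interpolated like this (a sketch; the span endpoints and colors are assumed to come from the polygon's projected vertices):

```c
/* Linearly interpolate an RGB attribute across the fragments of a
   scanline span [x0, x1], writing one color per pixel into the row. */
void interpolate_span(double (*row)[3], int x0, int x1,
                      const double c0[3], const double c1[3]) {
    for (int x = x0; x <= x1; ++x) {
        double t = (x1 == x0) ? 0.0
                              : (double)(x - x0) / (double)(x1 - x0);
        for (int k = 0; k < 3; ++k)
            row[x][k] = c0[k] + t * (c1[k] - c0[k]);
    }
}
```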
Now, if we simply worked through the list of polygons sequentially and
filled in the appropriate fragments with the attributes assigned to them, the
resulting image would look very strange. This is because if the second sequen-
tial polygon covers the same fragment as the first then all stored information
about the first polygon would be overwritten for that fragment. Therefore,
as the second polygon is considered, we need to know the 3D x-coordinate
of the relevant vertex and determine if it is in front or behind the previous
x-coordinate assigned to that fragment. If it is in front of it, a new attribute
set is assigned to that fragment, overwriting the values stored previously.
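What is being described here is, in essence, the Z-buffer hidden-surface algorithm; a minimal per-fragment depth test might look like this sketch (the buffer layout and names are assumptions):

```c
/* Per-fragment depth test (Z-buffer): keep the incoming fragment only if
   it is nearer to the viewpoint (smaller depth, here the 3D x-coordinate)
   than the fragment already stored at that pixel. */
void depth_test_fragment(double *depth_buf, double (*color_buf)[3],
                         int pixel, double frag_depth,
                         const double frag_color[3]) {
    if (frag_depth < depth_buf[pixel]) {
        depth_buf[pixel] = frag_depth;        /* record the nearer depth */
        color_buf[pixel][0] = frag_color[0];  /* replace stored attributes */
        color_buf[pixel][1] = frag_color[1];
        color_buf[pixel][2] = frag_color[2];
    }
}
```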