220 9. Image-Based Rendering
9.1 General Approaches to IBR
It is generally accepted that there are three different classifications of IBR,
which are based on the significance of the geometric component involved in
the rendering process [17]. These are:
1. Rendering with no geometry. Hypothetically, it is possible to envision
storing a complete description of a 3D scene using a seven-dimensional
function that gives the light intensity at every point of space (three
dimensions x, y, z), looking in every direction (another two dimensions
θ, φ), at all times t and for every wavelength of light λ. Adelson
and Bergen [1] gave this function the name plenoptic,
p = P(x, y, z, θ, φ, t, λ). Plenoptic modeling is the general name given
to any approach to rendering that may be expressed in these terms.
For example, environment mapping as described in Section 7.6.2 is
a function of two variables, p = P(θ, φ), viewed from a single point, at
a single instant in time and under fixed lighting conditions. That is,
x, y, z, t and λ are constant.
It is impractical to work with a seven-dimensional scene, but a four-
dimensional plenoptic model such as the lumigraph [7] will allow a
view in any direction to be determined from any point on a surface
surrounding a convex object. The four dimensions are the two surface
coordinates (u, v) and the two angular directions of view, up/down (θ)
and left/right (φ). That is, p = P(u, v, θ, φ). The lumigraph is illustrated
in Figure 9.1. It attempts to capture the image that would be seen
at every point on the surface of a box surrounding an object, looking
in every direction. In practice, the surface of the box
is divided up into little patches (e.g., 32 × 32 on each side), and for
each patch, a finite number of images (e.g., 256 × 256) are made by
looking in a range of directions that span a wide angle. One of the
advantages of this concept is that it allows parallax and stereoscopic
effects to be rendered easily and quickly. In addition, the reduction
to four degrees of freedom, together with appropriate discretization and
compression, reduces the scene description data size even further.
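A discretized lumigraph face can be pictured as a four-dimensional array indexed by (u, v, θ, φ). The sketch below uses the resolutions quoted above (32 × 32 patches, 256 × 256 directions) only to compute raw storage; the array layout and the nearest-neighbour lookup are illustrative assumptions, not the original lumigraph data structure:

```python
import numpy as np

# Resolutions quoted in the text: 32 x 32 surface patches per box face,
# 256 x 256 sampled view directions per patch, one float32 sample each.
N_PATCH, N_DIR = 32, 256
BYTES_PER_SAMPLE = 4

raw_bytes = N_PATCH * N_PATCH * N_DIR * N_DIR * BYTES_PER_SAMPLE
print(raw_bytes / 2**20, "MiB per face before compression")  # 256.0 MiB

# A small toy face so the lookup is cheap to demonstrate.
rng = np.random.default_rng(0)
face = rng.random((4, 4, 8, 8), dtype=np.float32)

def sample_lumigraph(face, u, v, theta, phi):
    """Nearest-neighbour evaluation of p = P(u, v, theta, phi); all four
    parameters are normalized to [0, 1) over the captured ranges."""
    nu, nv, nt, nphi = face.shape
    iu = min(int(u * nu), nu - 1)
    iv = min(int(v * nv), nv - 1)
    it = min(int(theta * nt), nt - 1)
    ip = min(int(phi * nphi), nphi - 1)
    return float(face[iu, iv, it, ip])

p = sample_lumigraph(face, 0.5, 0.5, 0.25, 0.75)
```

The 256 MiB per-face figure makes clear why the discretization and compression mentioned above are essential in practice; a production system would also interpolate between neighbouring patches and directions rather than snap to the nearest sample.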
Apple’s QuickTime VR [3] and Shum and Szeliski’s [18] method are
also examples of the use of no-geometry plenoptic functions. In this
case, they are functions of two variables, and so they have manageable
data requirements and very high rendering speeds. Shum and Szeliski’s