224 9. Image-Based Rendering
Figure 9.5. (a) Scanning a model with multiple cameras and light sources, after [11].
(b) Panoramic camera mounted on its side for wide vertical field of view. (c) The
Omnicam with parabolic mirror. (d) A Catadioptric camera derived from the Om-
nicam. (Photograph of the Omnicam and Catadioptric camera courtesy of Professor
Nayar of Columbia University.)
(see Figure 9.5(b)). QuickTime VR can also be used in object movie mode, in
which we look inward at an object rather than outward towards the environment. To make a QuickTime VR object movie, the camera is moved in an equatorial orbit round a fixed object whilst taking pictures every 10°. Several more orbits are made at different vertical heights.
Many other ingenious methods have been developed for making panor-
amic images. In the days before computers, cameras that used long film strips
would capture panoramic scenes [14]. Fish-eye lenses also allow wide-angle panoramas to be obtained. Mirrored pyramids and parabolic mirrors can also capture 360° views, but these may need some significant image processing to remove distortion. 360° viewing systems are often used for security surveillance, video conferencing (e.g., Behere [2]) and devices such as the Omnicam [6] from Columbia (see Figure 9.5(c)) and other Catadioptric cameras, such as that used by Huang and Trivedi for research projects in view estimation in automobiles [9].
9.3 Mosaicing and Making Panoramic Images
Once acquired, images have to be combined into a large mosaic if they are
going to be used as a panoramic source for IBR. If the images are taken from a
Figure 9.6. (a) Input images; (b) combined images; (c) transform in two steps. Image 1 is being blended with image 2. Under the transform defined by the quadrilaterals, point p in image 1 is moved to point p′ in the coordinate system used by image 2. Other points map according to the same transformation.
set produced by camera rotation, the term stitching may be more appropriate,
as sequential images usually overlap in a narrow strip. This is not a trivial task, and ideally the process should be fully automatic.
If we can assume the images are taken from the same location and with
the same camera then a panorama can be formed by reprojecting adjacent
images and then mixing them together where they overlap. We have already
covered the principle of the idea in Section 8.3.2; the procedure is to manually
identify a quadrilateral that appears in both images. Figure 9.6(a) illustrates
images from two panoramic slices, one above the other, and Figure 9.6(b)
shows the stitched result. The quadrilaterals have been manually identified.
This technique avoids having to directly determine an entire camera model,
and the equation which describes the transformation (Equation (8.4)) can
be simplified considerably in the case of quadrilaterals. As shown in Figure 9.6(c), the transformation could be done in two steps: a transform from the input rectangle to a unit square in (u, v) space is followed by a transform from the unit square in (u, v) to the output quadrilateral. This has the advantage of being faster, since Equation (8.4), when applied to step 2, can be solved symbolically for the h_ij. Once the h_ij have been determined, any pixel p in image 1 can be placed in its correct position relative to image 2, at p′.
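As a sketch of how the square-to-quadrilateral half of this two-step transform can be solved symbolically, the function below computes the eight projective coefficients in closed form, in the style of Heckbert's solution. The function names and the corner ordering (0,0) → (1,0) → (1,1) → (0,1) are assumptions of this sketch, since Equation (8.4) is not reproduced here:

```python
# Sketch: closed-form projective map from the unit (u, v) square to an
# arbitrary quadrilateral.  Corner ordering (0,0)->(1,0)->(1,1)->(0,1)
# is an assumption of this sketch.

def square_to_quad(corners):
    """Return coefficients (a, b, c, d, e, f, g, h) such that
       x = (a*u + b*v + c) / (g*u + h*v + 1)
       y = (d*u + e*v + f) / (g*u + h*v + 1)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = corners
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    if sx == 0 and sy == 0:
        # Quadrilateral is a parallelogram: the map degenerates to affine
        return (x1 - x0, x2 - x1, x0,
                y1 - y0, y2 - y1, y0, 0.0, 0.0)
    dx1, dx2 = x1 - x2, x3 - x2
    dy1, dy2 = y1 - y2, y3 - y2
    den = dx1 * dy2 - dx2 * dy1
    g = (sx * dy2 - dx2 * sy) / den
    h = (dx1 * sy - sx * dy1) / den
    return (x1 - x0 + g * x1, x3 - x0 + h * x3, x0,
            y1 - y0 + g * y1, y3 - y0 + h * y3, y0, g, h)

def apply_map(coeffs, u, v):
    """Map a point (u, v) in the unit square into the quadrilateral."""
    a, b, c, d, e, f, g, h = coeffs
    w = g * u + h * v + 1.0
    return (a * u + b * v + c) / w, (d * u + e * v + f) / w
```

Composing this map for the target quadrilateral with the (easily inverted) map for the source rectangle gives the full two-step transform described above.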
Of course, the distortion of the mapping means that we shouldn't just do a naive copy of the pixel value from p to p′. It will be necessary to carry out some form of anti-alias filtering, such as a simple bilinear interpolation from the neighboring pixels to p in image 1 or the more elaborate elliptical Gaussian filter of Heckbert [8].
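A minimal sketch of the bilinear option (the function name and the single-channel grayscale assumption are mine):

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a grayscale image at a non-integer position (x, y) by
    bilinear interpolation of the four neighboring pixels.
    img is indexed [row, col], i.e., [y, x]."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)   # clamp at the image border
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0              # fractional offsets in [0, 1)
    top    = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bottom = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bottom
```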
Manually determining the homogeneous mapping between images being
mosaiced is fine for a small number of images, but if we want to combine a
Figure 9.7. The planar mosaic on the right is produced by automatically overlapping
and blending a sequence of images. Two of the input images are shown on the left.
large number and they overlap to a significant extent, or we want to build a
big library of panoramas, it could get very tedious. Szeliski [19] introduced an
algorithm in the context of planar image mosaicing which applies equally well
for panoramic images. An example of its application is shown in Figure 9.7.
The algorithm works with two images that should have a considerable overlap, and it attempts to determine the h_ij coefficients of Equation (8.4), for the specific case of a 2D transformation, by minimizing the difference in image intensity between the images. The algorithm is iterative. It starts with an assumption for the h_ij, uses them to find corresponding pixels (x, y) and (x′, y′) in the two images and seeks to minimize the sum of the squared intensity error:
E = Σ_i [ I′(x′_i, y′_i) − I(x_i, y_i) ]²

over all corresponding pixels i. Once the h_ij coefficients have been determined, the images are blended together using a bilinear weighting function
that is stronger near the center of the image. The minimization algorithm
uses the Levenberg-Marquardt method [15]; a full description can be found
in [19]. Once the relationship between the two images in the panorama has
been established, the next image is loaded and the comparison procedure be-
gins again.
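The quantity being minimized can be made concrete with a deliberately simplified sketch. Instead of fitting all eight h_ij coefficients with Levenberg-Marquardt, the toy function below restricts the transformation to integer translations (that restriction, and the function name, are my assumptions made purely for brevity) and exhaustively searches for the shift minimizing the summed squared intensity error E over the overlap:

```python
import numpy as np

def ssd_align(I, Iprime, max_shift=4):
    """Toy registration: find the integer shift (dx, dy) relating two
    equally sized grayscale images so that I[y, x] ~ Iprime[y-dy, x-dx],
    by minimizing the mean squared intensity error over the overlap.
    A stand-in for the full h_ij / Levenberg-Marquardt fit in the text."""
    best, best_shift = np.inf, (0, 0)
    h, w = I.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions of the two images under this shift
            ys  = slice(max(0, dy),  min(h, h + dy))
            xs  = slice(max(0, dx),  min(w, w + dx))
            ys2 = slice(max(0, -dy), min(h, h - dy))
            xs2 = slice(max(0, -dx), min(w, w - dx))
            diff = I[ys, xs] - Iprime[ys2, xs2]
            E = np.sum(diff ** 2) / diff.size   # normalize by overlap area
            if E < best:
                best, best_shift = E, (dx, dy)
    return best_shift
```

The real algorithm differs in two important ways: it optimizes a continuous eight-parameter projective transform rather than a discrete translation, and it uses gradient-based iteration (Levenberg-Marquardt) rather than exhaustive search.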
Szeliski and Shum [20] have refined this idea to produce full-view panor-
amic image mosaics and environment maps obtained using a handheld digi-
tal camera. Their method also resolves the stitching problem in a cylindrical
panorama where the last image fails to align properly with the first. Their
method will still work so long as there is no strong motion parallax (translational movement of the camera) between the images. It achieves a full view¹
by storing the mosaic not as a single stitched image (for example, as depicted
in Figure 9.6), but as a collection of images each with their associated posi-
tioning matrices: translation, rotation and scaling. It is then up to the viewing
software to generate the appropriate view, possibly using traditional texture
mapping onto a polygonal surface surrounding the viewpoint, such as a sub-
divided dodecahedron or tessellated globe as depicted in Figure 9.8. For each
triangular polygon in the sphere, any panoramic image that would overlap
it in projection from the center of the sphere is blended together to form a
composite texture map for that polygon. Texture coordinates are generated
for the vertices by projecting the vertices onto the texture map.
¹The method avoids the usual problem one encounters when rendering from panoramic images, namely that of rendering a view which looks directly up or directly down.
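How texture coordinates might be generated by this central projection can be sketched as follows; the pinhole camera model, the focal length f, the principal point (cx, cy) and the rejection of vertices behind the image plane are all assumptions of the sketch, since the text does not spell out the projection:

```python
import numpy as np

def vertex_to_texcoord(vertex, R, f, cx, cy):
    """Project a sphere vertex through the center of the sphere onto one
    source image.  R rotates world directions into the image's camera
    frame; f is the focal length in pixels, (cx, cy) the principal point.
    Returns (u, v) pixel coordinates, or None if the vertex lies behind
    the image plane (so this image contributes no texel for it)."""
    d = R @ np.asarray(vertex, dtype=float)
    if d[2] <= 0:
        return None
    # Central (pinhole) projection onto the image plane at depth f
    return (f * d[0] / d[2] + cx, f * d[1] / d[2] + cy)
```

A vertex for which this returns None for a given image simply takes no contribution from that image when the composite texture map for its polygon is blended.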
Figure 9.8. Mapping onto polygonal spherical shapes allows a full-view panoramic
image to be rendered without any discontinuities.
Figure 9.9. Building up a panoramic mosaic by moving the next image I_n into position via a rotation about the optical center. Once in position, I_n is blended with the growing mosaic I_c. A least squares minimization algorithm which matches the overlap between I_n and I_c is used to determine the optimal rotation matrix R_n.
Starting with a sequence of images I_0, I_1, I_2, etc., the refined method seeks to combine the next image, I_n, into the panorama by minimizing the difference between it and the current state of the composite panorama, I_c. A matrix R is used to specify a transformation that will rotate I_n about the view center so that it overlaps part of I_c (see Figure 9.9). The difference between I_n and I_c can be determined as the sum of intensity differences in each pixel where they overlap. A general iterative least squares minimization algorithm [15] will refine R until the alignment between I_n and I_c has minimum difference. Once I_n has been placed in the panorama, it can be blended into I_c by a radial opacity function that decreases to zero at the edge of I_n. The analytic steps in the algorithm are derived in Appendix C.
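A sketch of the kind of radial opacity blend described above; the linear falloff and the accumulate-and-normalize strategy are my assumptions, as the text does not specify the exact falloff profile:

```python
import numpy as np

def radial_weight(h, w):
    """Per-pixel opacity for an h-by-w image: 1.0 at the image center,
    falling linearly to 0.0 at the nearest border -- a simple stand-in
    for the radial opacity function in the text."""
    y, x = np.mgrid[0:h, 0:w]
    wy = 1.0 - np.abs(2.0 * y / (h - 1) - 1.0)   # 0 on top/bottom rows
    wx = 1.0 - np.abs(2.0 * x / (w - 1) - 1.0)   # 0 on left/right columns
    return np.minimum(wx, wy)

def feather_blend(images, weights):
    """Blend pre-registered, equally sized images as a weighted average.
    Because each weight map goes to zero at its image's border, the seams
    between overlapping images fade out rather than showing hard edges."""
    num = sum(im * w for im, w in zip(images, weights))
    den = sum(weights) + 1e-12          # guard against divide-by-zero
    return num / den
```

In the real pipeline each image would first be warped into the panorama's coordinate system by its rotation R_n before its weighted contribution is accumulated.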