large number and they overlap to a significant extent, or we want to build a
big library of panoramas, it could get very tedious. Szeliski [19] introduced an
algorithm in the context of planar image mosaicing which applies equally well
for panoramic images. An example of its application is shown in Figure 9.7.
The algorithm works with two images that should have a considerable
overlap, and it attempts to determine the $h_{ij}$ coefficients of Equation (8.4),
for the specific case of a 2D transformation, by minimizing the difference
in image intensity between the images. The algorithm is iterative. It starts
with an assumption for the $h_{ij}$, uses them to find corresponding pixels $(x, y)$
and $(x', y')$ in the two images and seeks to minimize the sum of the squared
intensity error:
$$
E = \sum_i \left[ I'(x'_i, y'_i) - I(x_i, y_i) \right]^2
$$
over all corresponding pixels $i$. Once the $h_{ij}$ coefficients have been deter-
mined, the images are blended together using a bilinear weighting function
that is stronger near the center of the image. The minimization algorithm
uses the Levenberg-Marquardt method [15]; a full description can be found
in [19]. Once the relationship between the two images in the panorama has
been established, the next image is loaded and the comparison procedure be-
gins again.
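A minimal sketch of this optimization step is given below, assuming grey-scale NumPy images and using SciPy's Levenberg-Marquardt least-squares solver in place of a hand-written minimizer. The function names, the identity starting estimate and the pixel-sampling strategy are illustrative assumptions, not the algorithm exactly as published in [19].

```python
# Sketch (assumed names and conventions): estimate the eight h_ij of a 2D
# projective transform by minimizing the squared intensity error between a
# reference image I and a second image Ip that overlaps it considerably.
import numpy as np
from scipy.optimize import least_squares
from scipy.ndimage import map_coordinates


def warp_points(h, x, y):
    """Apply the 2D projective transform with coefficients h (h33 fixed at 1)."""
    h11, h12, h13, h21, h22, h23, h31, h32 = h
    w = h31 * x + h32 * y + 1.0
    xp = (h11 * x + h12 * y + h13) / w
    yp = (h21 * x + h22 * y + h23) / w
    return xp, yp


def residuals(h, I, Ip, x, y):
    """Per-pixel differences I'(x'_i, y'_i) - I(x_i, y_i) over the sampled pixels."""
    xp, yp = warp_points(h, x, y)
    # Sample I' at the (generally non-integer) warped positions, bilinearly.
    vals_p = map_coordinates(Ip, [yp, xp], order=1, mode='nearest')
    vals = I[y.astype(int), x.astype(int)]
    return vals_p - vals


def estimate_h(I, Ip, step=4):
    """Refine the h_ij iteratively with Levenberg-Marquardt, from the identity."""
    I = np.asarray(I, dtype=float)
    Ip = np.asarray(Ip, dtype=float)
    ys, xs = np.mgrid[0:I.shape[0]:step, 0:I.shape[1]:step]
    x, y = xs.ravel().astype(float), ys.ravel().astype(float)
    h0 = np.array([1, 0, 0, 0, 1, 0, 0, 0], dtype=float)  # identity transform
    sol = least_squares(residuals, h0, args=(I, Ip, x, y), method='lm')
    return sol.x  # the estimated h_ij coefficients
```

As the text notes, intensity-based alignment of this kind relies on the images having a considerable overlap, since the iteration only converges when the starting estimate already places the two images roughly in register.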
Szeliski and Shum [20] have refined this idea to produce full-view panor-
amic image mosaics and environment maps obtained using a handheld digi-
tal camera. Their method also resolves the stitching problem in a cylindrical
panorama where the last image fails to align properly with the first. Their
method will still work so long as there is no strong motion parallax (transla-
tional movement of the camera) between the images. It achieves a full view¹
by storing the mosaic not as a single stitched image (for example, as depicted
in Figure 9.6), but as a collection of images each with their associated posi-
tioning matrices: translation, rotation and scaling. It is then up to the viewing
software to generate the appropriate view, possibly using traditional texture
mapping onto a polygonal surface surrounding the viewpoint, such as a sub-
divided dodecahedron or tessellated globe as depicted in Figure 9.8. For each
triangular polygon in the sphere, any panoramic image that would overlap
it in projection from the center of the sphere is blended together to form a
composite texture map for that polygon. Texture coordinates are generated
for the vertices by projecting the vertices onto the texture map.
¹ The method avoids the usual problem one encounters when rendering from panoramic
images, namely that of rendering a view which looks directly up or directly down.
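As a sketch of how texture coordinates might be generated for one triangle of the tessellated sphere, the fragment below projects each vertex (treated as a view direction from the centre of the sphere) into a source panoramic image using that image's rotation matrix R, a focal length f and image centre (cx, cy). These parameter names and the simple pinhole-projection convention are assumptions made for illustration, not the exact formulation of [20].

```python
# Sketch (assumed conventions): generate (u, v) texture coordinates for
# sphere-mesh vertices by projecting them through one source image's
# rotation matrix R and focal length f, with image centre (cx, cy).
import numpy as np


def project_vertex(vertex, R, f, cx, cy):
    """Project a unit-sphere vertex (a view direction from the panorama centre)
    into the source image; returns (u, v), or None if it lies behind the image plane."""
    d = R @ np.asarray(vertex, dtype=float)   # rotate the direction into the camera frame
    if d[2] <= 0.0:                           # behind the image plane: no valid projection
        return None
    u = cx + f * d[0] / d[2]                  # perspective projection onto the image
    v = cy + f * d[1] / d[2]
    return u, v


def triangle_texcoords(tri_vertices, R, f, cx, cy):
    """Texture coordinates for the three vertices of one sphere triangle."""
    return [project_vertex(v, R, f, cx, cy) for v in tri_vertices]
```

Any source image for which all three projections fall inside its bounds overlaps the triangle and can contribute to that triangle's blended composite texture map.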