
Computational Geometry: Fundamental Structures*

Mark de Berg

Eindhoven University of Technology

Bettina Speckmann

Eindhoven University of Technology

65.1 Introduction

65.2 Arrangements
     Substructures and Complexity • Decomposition • Duality

65.3 Convex Hulls
     Complexity • Construction • Dynamic Convex Hulls

65.4 Voronoi Diagrams
     Complexity • Construction • Variations

65.5 Triangulations
     Delaunay Triangulation • Polygons • Polyhedra • Pseudo-Triangulations

References

65.1 Introduction

Computational geometry deals with the design and analysis of algorithms and data structures for problems involving spatial data. The questions that are studied range from basic problems such as line-segment intersection (“Compute all intersection points in a given set of line segments in the plane.”) to quite involved problems such as motion planning (“Compute a collision-free path for a robot in its workspace from a given start position to a given goal position.”). Because spatial data plays an important role in many areas within and outside of computer science—CAD/CAM, computer graphics and virtual reality, and geography are just a few examples—computational geometry has a broad range of applications. Computational geometry emerged from the general algorithms area in the late 1970s. It experienced a rapid growth in the 1980s and by now is a recognized discipline with its own conferences and journals and many researchers working in the area. It is a beautiful field with connections to other areas of algorithms research, to application areas like the ones mentioned earlier, and to areas of mathematics such as combinatorial geometry.

To design an efficient geometric algorithm or data structure, one usually needs two ingredients: a toolbox of algorithmic techniques and geometric data structures, and a thorough understanding of the geometric properties of the problem at hand. As an example, consider the classic post-office problem, where we want to preprocess a set S of n points in the plane—the points in S are usually called sites—for the following queries: report the site in S that is closest to a query point q. A possible approach is to subdivide the plane into n regions, one for each site, such that the region of a site s ∈ S consists of exactly those points q ∈ R^2 for which s is the closest site. This subdivision is called the Voronoi diagram of S. A query with a point q can now be answered by locating the region in which q lies, and reporting the site defining that region. To make this idea work, one needs an efficient data structure for point location. But one also needs to understand the geometry: What does the Voronoi diagram look like? What is its complexity? How can we construct it efficiently?
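Before any geometric insight is applied, such a query can of course be answered by brute force in O(n) time per query; the minimal sketch below (the `sites` list is a made-up example) serves only as a baseline against which the Voronoi-diagram approach can be compared.

```python
from math import dist  # Euclidean distance, available since Python 3.8

def nearest_site(sites, q):
    """Return the site closest to query point q by scanning all n sites.

    O(n) per query; the Voronoi-diagram approach described in the text
    replaces this linear scan by a single point-location query.
    """
    return min(sites, key=lambda s: dist(s, q))

# Example usage with a small, made-up set of sites.
sites = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]
print(nearest_site(sites, (3.0, 2.0)))   # -> (4.0, 1.0)
```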

In Part IV, many data structures for spatial data were already discussed. Hence, in this chapter we will focus on the second ingredient: we will discuss a number of basic geometric concepts. In particular, we will discuss arrangements in Section 65.2, convex hulls in Section 65.3, Voronoi diagrams in Section 65.4, and triangulations in Section 65.5.

More information on computational geometry can be found in various sources: there are several general textbooks on computational geometry [1–4], as well as more specialized books, for example, on arrangements [5,6] and Voronoi diagrams [7]. Finally, there are two handbooks that are devoted solely to (discrete and) computational geometry [8,9].

65.2 Arrangements

The arrangement A(S) defined by a finite collection S of curves in the plane is the subdivision of the plane into open cells of dimensions 2 (the faces), 1 (the edges), and 0 (the vertices), induced by S—see Figure 65.1 for an example. This definition generalizes readily to higher dimensions: the arrangement defined by a set S of geometric objects in R^d, such as hyperplanes or surfaces, is the decomposition of R^d into open cells of dimensions 0, …, d induced by S. The cells of dimension k are usually called k-cells. The 0-cells are called vertices, the 1-cells are called edges, the 2-cells are called faces, the (d − 1)-cells are called facets, and the d-cells are sometimes just called cells.

Figure 65.1 An arrangement of curves in the plane.

Arrangements have turned out to form a fundamental concept underlying many geometric problems, and the efficiency of geometric algorithms is often closely related to the combinatorial complexity of (certain parts of) some arrangement. Moreover, to solve a certain problem, geometric algorithms often rely on some decomposition of the arrangement underlying the problem. Hence, the following subsections give some more information on the complexity and the decomposition of arrangements.

65.2.1 Substructures and Complexity

Let H be a collection of n hyperplanes in R^d. As stated earlier, the arrangement A(H) is the decomposition of R^d into open cells of dimensions 0, …, d induced by H. The combinatorial complexity of A(H) is defined to be the total number of cells of the various dimensions. This definition immediately carries over to arrangements induced by other objects, such as segments in the plane, or surfaces in R^d. For example, the complexity of the arrangement in Figure 65.1 is 58, since it consists of 27 vertices, 27 edges, and 4 faces (one of which is the unbounded face).

It is easy to see that the maximum complexity of an arrangement of n lines in the plane is Θ(n^2): there can be at most n(n − 1)/2 vertices, at most n^2 edges, and at most n^2/2 + n/2 + 1 faces. Also for an arrangement of curves the maximum complexity is Θ(n^2), provided that any pair of curves intersects at most s times, for a constant s. More generally, the maximum complexity of an arrangement of n hyperplanes in R^d is Θ(n^d). The same bound holds for well-behaved surfaces (such as algebraic surfaces of constant maximum degree) or well-behaved surface patches in R^d.
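These counts are easy to check by brute force for small inputs; the sketch below (a hypothetical `arrangement_counts` helper, with lines given as (a, b) pairs for y = ax + b and assumed to be in general position: no two parallel, no three concurrent) computes the vertices by pairwise intersection and returns the vertex, edge, and face counts stated above.

```python
from itertools import combinations

def arrangement_counts(lines):
    """Vertex/edge/face counts of an arrangement of lines y = a*x + b in
    general position (distinct slopes, no three lines through one point)."""
    vertices = set()
    for (a1, b1), (a2, b2) in combinations(lines, 2):
        x = (b2 - b1) / (a1 - a2)                    # distinct slopes assumed
        vertices.add((round(x, 9), round(a1 * x + b1, 9)))
    n = len(lines)
    # n(n-1)/2 vertices, n^2 edges, n^2/2 + n/2 + 1 faces in general position
    return len(vertices), n * n, (n * n + n) // 2 + 1

lines = [(1, 0), (2, 1), (-1, 3), (0.5, -2)]         # four lines in general position
print(arrangement_counts(lines))                     # -> (6, 16, 11)
```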

Single cells. It becomes more challenging to bound the complexity when we consider only a part of an arrangement. For example, what is the maximum complexity of a single cell, that is, the maximum number of i-cells, for i < d, on the boundary of any given d-cell? For lines in the plane this is still rather easy—the maximum complexity is Θ(n), since any line can contribute at most one edge to a given face—but the question is already quite hard for arrangements of line segments in the plane. Here it turns out that the maximum complexity can be Θ(nα(n)), where α(n) is the extremely slowly growing functional inverse of Ackermann’s function. More generally, for Jordan curves where each pair intersects in at most s points, the maximum complexity is Θ(λ_{s+2}(n)), where λ_{s+2}(n) is the maximum length of a Davenport-Schinzel sequence of order s + 2 on n symbols. The function λ_{s+2}(n) is only slightly super-linear for any constant s. In higher dimensions, tight bounds are known for hyperplanes: the famous Upper Bound Theorem states that the maximum complexity of a single cell is Θ(n^{⌊d/2⌋}). For most other objects, the known upper and lower bounds are close but not tight—see Table 65.1.

Table 65.1 Maximum Complexity of Single Cells and Envelopes in Arrangements

                   | Single cell                        | Upper envelope            | Reference
d = 2              |                                    |                           |
  Lines            | Θ(n)                               | Θ(n)                      | Trivial
  Segments         | Θ(nα(n))                           | Θ(nα(n))                  | [10]
  Circles          | Θ(n)                               | Θ(n)                      | Linearization
  Jordan arcs      | Θ(λ_{s+2}(n))                      | Θ(λ_{s+2}(n))             | [11]
d = 3              |                                    |                           |
  Planes           | Θ(n)                               | Θ(n)                      | Euler’s formula
  Triangles        | Ω(n^2 α(n)), O(n^2 log n)          | Θ(n^2 α(n))               | [12], [13]
  Spheres          | Θ(n^2)                             | Θ(n^2)                    | Linearization
  Surfaces         | Ω(n λ_q(n)), O(n^{2+ε})            | Ω(n λ_q(n)), O(n^{2+ε})   | [14]
d > 3              |                                    |                           |
  Hyperplanes      | Θ(n^{⌊d/2⌋})                       | Θ(n^{⌊d/2⌋})              | Upper Bound Thm [15]
  (d−1)-simplices  | Ω(n^{d−1} α(n)), O(n^{d−1} log n)  | Θ(n^{d−1} α(n))           | [12], [13]
  (d−1)-spheres    | Θ(n^{⌈d/2⌉})                       | Θ(n^{⌈d/2⌉})              | Linearization
  Surfaces         | Ω(n^{d−2} λ_q(n)), O(n^{d−1+ε})    | O(n^{d−1+ε})              | [16], [17]

The parameter s is the maximum number of points in which any two curves meet; the parameter q is a similar parameter for higher-dimensional surfaces. The function λ_t(n) is the maximum length of a Davenport-Schinzel sequence [6] of order t on n symbols, and is only slightly super-linear for any constant t. Bounds of the form O(n^{d−1+ε}) hold for any constant ε > 0.

Lower envelopes. Another important substructure is the lower envelope. Intuitively, the lower envelope of a set of segments in the plane is what one would see when looking at the segments from below—see Figure 65.2. More formally, if we view the segments as graphs of partially defined (linear) functions, then the lower envelope is the point-wise minimum of these functions. Similarly, the upper envelope is defined as the point-wise maximum of the functions. The definition readily extends to x-monotone curves in the plane, to planes, triangles, or xy-monotone surface patches in R^3, etc.

Figure 65.2 The lower envelope of a set of segments in the plane.
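The pointwise-minimum view translates directly into a brute-force sketch that samples the lower envelope at a given abscissa (it does not compute the envelope's combinatorial structure); segments are assumed to be given by their two endpoints.

```python
def envelope_height(segments, x):
    """Height of the lower envelope of a set of segments at abscissa x,
    or None if no segment is defined there (the pointwise minimum of the
    partially defined linear functions)."""
    best = None
    for (x1, y1), (x2, y2) in segments:
        if x1 != x2 and min(x1, x2) <= x <= max(x1, x2):   # vertical segments skipped
            y = y1 + (y2 - y1) * (x - x1) / (x2 - x1)
            best = y if best is None else min(best, y)
    return best

segs = [((0, 0), (4, 4)), ((1, 3), (5, -1)), ((3, 0), (6, 2))]
print([envelope_height(segs, x) for x in (0.5, 2.0, 4.5)])   # -> [0.5, 2.0, -0.5]
```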

Envelopes are closely related to single cells. The lower envelope of a set of lines in the plane, for instance, is the boundary of the single cell in the arrangement that is below all the lines. The vertices of the lower envelope of a set of segments in the plane are also vertices of the unbounded cell defined by those segments, but here the reverse is not true: vertices of the unbounded cell that are above other segments are not on the lower envelope. Nevertheless, the worst-case complexities of lower envelopes and single cells are usually very similar—see Table 65.1.

Other substructures. More types of substructures have been studied than single cells and envelopes: zones, levels, multiple cells, etc. The interested reader may consult Chapter 21 of the CRC Handbook of Discrete and Computational Geometry [8], or the books by Edelsbrunner [5] or Sharir and Agarwal [6].

65.2.2 Decomposition

Full arrangements, or substructures in arrangements, are by themselves not convenient to work with, because their cells can be quite complex. Thus it is useful to further decompose the cells of interest into constant-complexity subcells: triangles or trapezoids in 2D, and simplices or trapezoid-like cells in higher dimensions. There are several ways of doing this.

Bottom-vertex triangulations. For arrangements of hyperplanes, the so-called bottom-vertex triangulation is often used. This decomposition is obtained as follows. Consider a bounded face f in a planar arrangement of lines. We can decompose f into triangles by drawing a line segment from the bottommost vertex v of f to all other vertices of f, except the vertices that are already adjacent to v—see Figure 65.3a. Note that this easy method for triangulating f is applicable since f is always convex. To decompose the whole arrangement of lines (or some substructure in it) we simply decompose each face in this manner.*

Figure 65.3 (a) The bottom-vertex triangulation of a face in a planar arrangement. (b) The vertical decomposition of a planar arrangement of line segments.

To decompose a d-cell C in a higher-dimensional arrangement of hyperplanes, we proceed inductively as follows. We first decompose each (d − 1)-cell on the boundary of C, and then extend each (d − 1)-simplex in this boundary decomposition into a d-simplex by connecting each of its vertices to the bottommost vertex of C. Again, this can readily be used to decompose any subset of cells in an arrangement of hyperplanes.

The total number of simplices in a bottom-vertex decomposition is linear in the total complexity of the cells being decomposed.
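For a single bounded face of a line arrangement, which is always convex, the bottom-vertex triangulation is just a fan from the lowest vertex. A minimal sketch, assuming the face is given by its vertices in counterclockwise order:

```python
def bottom_vertex_triangulation(face):
    """Triangulate a bounded convex face, given as a list of vertices in
    counterclockwise order, by fanning out from its bottommost vertex."""
    # index of the bottommost vertex (lowest y, leftmost on ties)
    b = min(range(len(face)), key=lambda i: (face[i][1], face[i][0]))
    m = len(face)
    # connect the bottom vertex to every vertex except its two neighbours
    return [(face[b], face[(b + i) % m], face[(b + i + 1) % m])
            for i in range(1, m - 1)]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(bottom_vertex_triangulation(square))
# -> [((0, 0), (2, 0), (2, 2)), ((0, 0), (2, 2), (0, 2))]
```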

Vertical decompositions. The bottom-vertex triangulation requires the cells to be convex, so it does not work for arrangements of segments or for arrangements of surfaces. For such arrangements one often uses the vertical decomposition (or: trapezoidal decomposition). For an arrangement of line segments or curves in the plane, this decomposition is defined as follows. Each vertex of the arrangement—this can be a segment endpoint or an intersection point—has a vertical connection to the segment immediately above it, and to the segment immediately below it. If there is no segment below or above a vertex, then the connection extends to infinity. This decomposes each cell into trapezoids: subcells that are bounded by at most two vertical connections and by at most two segments—see Figure 65.3b. This definition can be generalized to higher dimensions as follows. Suppose we wish to decompose the arrangement A(S) induced by a collection S of surfaces in R^d, where each surface is vertically monotone (i.e., any line parallel to the x_d-axis intersects the surface in at most one point). Each point of any (d − 2)-dimensional cell of A(S) is connected by a vertical segment to the surface immediately above it and to the surface immediately below it. In other words, from each (d − 2)-cell we extend a vertical wall upward and downward. These walls together decompose the cells into subcells bounded by vertical walls and by at most two surfaces from S—one from above and one from below. These subcells are vertically monotone, but do not yet have constant complexity. Hence, we recursively decompose the bottom of the cell, and then extend this decomposition vertically upward to obtain a decomposition of the entire cell.

The vertical decomposition can be used for most arrangements (or substructures in them). In the plane, the maximum complexity of the vertical decomposition is linear in the total complexity of the decomposed cells. However, in higher dimensions this is no longer true. In R^3, for instance, the vertical decomposition of an arrangement of n disjoint triangles can consist of Θ(n^2) subcells, even though in this case the total complexity of the arrangement is obviously linear. Unfortunately, this is unavoidable, as there are collections of disjoint triangles in R^3 for which any decomposition into convex subcells must have Ω(n^2) subcells. For n intersecting triangles, the vertical decomposition has complexity O(n^2 α(n) log n + K), where K is the complexity of the arrangement of triangles [12]. More information about the complexity of vertical decompositions in various settings can be found in Halperin’s survey on arrangements [8, Chapter 21]. In many cases, vertical decompositions can be constructed in time proportional to their complexity, with perhaps a small (logarithmic or O(n^ε)) multiplicative factor.
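The basic primitive of the planar vertical decomposition is the vertical connection from a vertex to the segments immediately above and below it. The brute-force sketch below computes these two segments in O(n) time per vertex; it does not assemble the trapezoids themselves, and vertical segments are ignored for simplicity.

```python
def y_at(seg, x):
    """y-coordinate of segment seg at abscissa x, or None if seg is not
    defined there (vertical segments are ignored for simplicity)."""
    (x1, y1), (x2, y2) = seg
    if x1 == x2 or not (min(x1, x2) <= x <= max(x1, x2)):
        return None
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def vertical_extensions(segments, v, eps=1e-12):
    """For a vertex v (segment endpoint or intersection point), return the
    segments hit by the vertical connections extending upward and downward
    from v, or None where the connection extends to infinity."""
    above = below = None
    ya = yb = None
    for seg in segments:
        y = y_at(seg, v[0])
        if y is None:
            continue
        if y > v[1] + eps and (ya is None or y < ya):
            above, ya = seg, y
        if y < v[1] - eps and (yb is None or y > yb):
            below, yb = seg, y
    return above, below

segs = [((0, 0), (4, 0)), ((1, 2), (3, 2)), ((0, 5), (4, 3))]
print(vertical_extensions(segs, (1, 2)))   # -> (((0, 5), (4, 3)), ((0, 0), (4, 0)))
```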

65.2.3 Duality

Consider the transformation in the plane that maps the point p = (p_x, p_y) to the line p*: y = p_x x − p_y, and the line ℓ: y = ax + b to the point ℓ* = (a, −b). Such a transformation that maps points to lines and vice versa is called a duality transform. Often the term primal plane is used for the plane in which the original objects live, and the term dual plane is used for the plane in which their images live. The duality transform defined above has a few easy-to-verify properties:

1.It is incidence preserving: if a point p lies on a line ℓ, then the point ℓ* dual to ℓ lies on the line p* dual to p.

2.It is order preserving: if a point p lies above a line ℓ, then ℓ* lies above p*.

These properties imply several others. For example, three points on a line become three lines through a point under the duality transform—see Figure 65.4. Another property is that for any point p we have (p*)* = p. Notice that the duality transform above is not defined for vertical lines. This technicality is usually not a problem, as vertical lines can often be handled separately.

Figure 65.4 Illustration of the duality transform.
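The duality transform and its two properties are easy to experiment with. A minimal sketch, with points as (px, py) pairs and non-vertical lines y = ax + b as (a, b) pairs:

```python
def dual_of_point(p):
    """Point p = (px, py) maps to the line p*: y = px*x - py, stored as (a, b)."""
    return (p[0], -p[1])

def dual_of_line(line):
    """Non-vertical line y = a*x + b maps to the point (a, -b)."""
    return (line[0], -line[1])

def lies_on(p, line):
    a, b = line
    return abs(p[1] - (a * p[0] + b)) < 1e-12

def lies_above(p, line):
    a, b = line
    return p[1] > a * p[0] + b

ell = (1.0, 1.0)                        # the line y = x + 1
p_on, p_above = (2.0, 3.0), (2.0, 4.0)  # p_on lies on ell, p_above lies above it
# property 1 (incidence preserving): p on ell  iff  ell* on p*
print(lies_on(p_on, ell), lies_on(dual_of_line(ell), dual_of_point(p_on)))              # True True
# property 2 (order preserving): p above ell  iff  ell* above p*
print(lies_above(p_above, ell), lies_above(dual_of_line(ell), dual_of_point(p_above)))  # True True
```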

This duality transform is so simple that it does not seem very interesting at first sight. However, it turns out to be extremely useful. As an example, consider the following problem. We are given a set P of n points in the plane, which we wish to preprocess for strip-emptiness queries: given a query strip—a strip is the region between two parallel lines—decide whether that strip is empty or whether it contains one or more points from P. If we know about duality, and we know about data structures for point location, then this problem is easy to solve: we take the duals of the points in P to obtain a set P* of lines in the plane, and we preprocess the arrangement A(P*) induced by P* for logarithmic-time point location. To decide whether the strip bounded by lines ℓ_1 and ℓ_2 is empty, we perform point locations with ℓ_1* and ℓ_2* in A(P*); the strip is empty if and only if ℓ_1* and ℓ_2* lie in the same face of the arrangement. (This does not work if ℓ_1 and ℓ_2 are vertical, since then their duals are undefined. But for this case we can simply build one extra data structure, which is a balanced binary search tree on the x-coordinates of the points.)
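Without the point-location machinery, the dual characterization can still be checked by brute force: a point p lies strictly inside the strip exactly when its dual line p* separates the dual points ℓ_1* and ℓ_2*, that is, exactly when ℓ_1* and ℓ_2* cannot lie in the same face of A(P*). A hedged sketch of the primal test, reusing the (a, b) line representation from the previous sketch:

```python
def side(point, line):
    """+1 if point lies above the line y = a*x + b, -1 if below, 0 if on it."""
    a, b = line
    d = point[1] - (a * point[0] + b)
    return (d > 0) - (d < 0)

def strip_empty(points, l1, l2):
    """True iff no point of `points` lies strictly between lines l1 and l2.

    Primal test: p is inside the strip iff it lies on opposite sides of l1 and l2.
    Dual reading: p is inside the strip iff the dual line p* separates the dual
    points l1* and l2*, which is exactly when l1* and l2* fall into different
    faces of the dual arrangement A(P*).
    """
    return all(side(p, l1) * side(p, l2) >= 0 for p in points)

pts = [(0, 0), (1, 3), (2, -1)]
print(strip_empty(pts, (1.0, 0.5), (1.0, 2.5)))   # strip between y=x+0.5 and y=x+2.5 -> False
```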

In principle, of course, we could also have arrived at this solution without using duality. After all, duality does not add any new information: it is just a different way of looking at things. Hence, every algorithm or data structure that works in the dual plane can also be interpreted as working in the primal plane. But some things are simply easier to see in the dual plane. For example, a face in the arrangement A(P*) in the dual plane is much more visible than the collection of all lines in the primal plane dividing P into two subsets in a certain way. So without duality, we would not have realized that we could solve the strip-emptiness problem with a known data structure, and we would probably not have been able to develop that structure ourselves either.

This is just one example of the use of duality. There are many more problems where duality is quite useful. In fact, we will see another example in the next section, when we study convex hulls.

65.3 Convex Hulls

A set A ⊆ R^d is convex if for any two points p, q ∈ A the segment pq is completely contained in A. The convex hull of a set S of objects is the smallest convex set that contains all objects in S, that is, the most tightly fitting convex bounding volume for S. For example, if S is a set of objects in the plane, we can obtain the convex hull by taking a large rubber band around the objects and then releasing the band; the band will snap around the objects and the resulting shape is the convex hull. More formally, we can define CH(S) as the intersection of all convex sets containing all objects in S:

CH(S) := ∩ {A : A is convex, and o ⊆ A for all o ∈ S}.

We denote the convex hull of the objects in S by CH(S).

It is easy to see that the convex hull of a set of line segments in the plane is the same as the convex hull of the endpoints of the segments. More generally, the convex hull of a set of bounded polygonal objects in R^d is the same as the convex hull of the vertices of the objects. Therefore we will restrict our discussion to convex hulls of sets of points. Table 65.2 gives an overview of the results on the complexity and construction of convex hulls discussed below.

Table 65.2 Maximum Complexity of the Convex Hull of a Set of n Points, and the Time Needed to Construct the Convex Hull

                         | Complexity        | Construction    | Reference
d = 2, worst case        | Θ(n)              | O(n log n)      | [1,18]
d = 3, worst case        | Θ(n)              | O(n log n)      | [19–22]
d > 3, worst case        | Θ(n^{⌊d/2⌋})      | O(n^{⌊d/2⌋})    | [19,20,22,23]
d ≥ 2, uniform distr.    | Θ(log^{d−1} n)    | O(n)            | [24]

The bounds for the uniform distribution refer to points drawn uniformly at random from a hypercube or some other convex polytope.

65.3.1 Complexity

Let P be a set of n points in R^d. The convex hull of P, denoted by CH(P), is a convex polytope whose vertices are a subset of the points in P. The complexity of a polytope is defined as the total number of k-facets* (i.e., k-dimensional features) on the boundary of the polytope, for k = 0, 1, …, d − 1: the complexity of a planar polygon is the total number of vertices and edges, the complexity of a 3-dimensional polytope is the total number of vertices, edges, and faces, and so on.

Because the vertices of CH(P) are a subset of the points in P, the number of vertices of CH(P) is at most n. In the plane this means that the total complexity of the convex hull is O(n), because the number of edges of a planar polygon is equal to the number of vertices. In higher dimensions this is no longer true: the number of k-facets (k > 0) of a polytope can be larger than the number of vertices. How large can this number be in the worst case? In R^3, the total complexity is still O(n). This follows from Euler’s formula, which states that for a convex polytope in R^3 with V vertices, E edges, and F faces it holds that V − E + F = 2. In higher dimensions, the complexity can be significantly higher: the worst-case complexity of a convex polytope with n vertices in R^d is Θ(n^{⌊d/2⌋}).

In fact, the bound on the complexity of the convex hull immediately follows from the results of the previous section if we apply duality. To see this, consider a set P of n points in the plane. For simplicity, let us suppose that CH(P) does not have any vertical edges and that no three points in P are collinear. Define the upper hull of P, denoted by UH(P), as the set of edges of CH(P) that bound CH(P) from above. Let P* be the set of lines that are the duals of the points in P. A pair p, q ∈ P defines an edge of UH(P) if and only if all other points r ∈ P lie below the line through p and q. In the dual this means that all lines r* ∈ P* lie above the intersection point p* ∩ q*. In other words, p* ∩ q* is a vertex on the lower envelope LE(P*) of the lines in P*. Furthermore, a point p ∈ P is a vertex of UH(P) if and only if its dual p* defines an edge of LE(P*). Thus there is a one-to-one correspondence between the vertices (or, edges) of UH(P) and the edges (or, vertices) of LE(P*). In higher dimensions a similar statement is true: there is a one-to-one correspondence between the k-facets of the upper hull of P and the (d − k − 1)-facets of the lower envelope of P*. The bound on the complexity of the convex hull of a set of n points in R^d therefore follows from the Θ(n^{⌊d/2⌋}) bound on the complexity of the lower envelope of a set of n hyperplanes in R^d.

The Θ(n^{⌊d/2⌋}) bound implies that the complexity of the convex hull can be quite high when the dimension gets large. Fortunately this is not the case if the points in P are distributed uniformly: in that case only a few of the points in P are expected to show up as vertices of the convex hull. More precisely, the expected complexity of the convex hull of a set P that is uniformly distributed in the unit hypercube is O(log^{d−1} n) [24,25].

65.3.2 Construction

We now turn our attention to algorithms for computing the convex hull of a set P of n points in R^d. In the previous subsection we have shown that there is a correspondence between the upper hull of P and the lower envelope of P*, where P* is the set of hyperplanes dual to the points in P. It follows that any algorithm that can compute the convex hull of a set of points in R^d can also be used to compute the intersection of a set of half-spaces in R^d, and vice versa.

First consider the planar case. By a reduction from sorting, one can show that Ω(n log n) is a lower bound on the worst-case running time of any convex-hull algorithm. There are many different algorithms that achieve O(n log n) running time and are thus optimal in the worst case. One of the best known algorithms is called Graham’s scan [1,18]. It processes the points from left to right, and maintains the upper hull of all the points encountered so far. To handle a point p_i, the algorithm first adds it to the end of the current upper hull. The next step is to delete the points that should no longer be part of the hull. They always form a consecutive portion at the right end of the old hull, and can be identified easily—see Figure 65.5.

Figure 65.5 Constructing the upper hull.

After these points have been deleted, the next point is handled. Graham’s scan runs in linear time, after the points have been sorted from left to right. This is optimal in the worst case.
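A compact sketch of this left-to-right upper-hull scan (the sort accounts for the O(n log n) total; the scan itself is linear):

```python
def cross(o, a, b):
    """Twice the signed area of triangle (o, a, b); > 0 for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def upper_hull(points):
    """Upper hull of a planar point set, from leftmost to rightmost point."""
    pts = sorted(points)              # sort by x (then y) -- the O(n log n) part
    hull = []
    for p in pts:
        # pop points that make a left (non-right) turn: they are no longer
        # on the upper hull once p is added
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    return hull

print(upper_hull([(0, 0), (1, 2), (2, 1), (3, 3), (4, 0)]))
# -> [(0, 0), (1, 2), (3, 3), (4, 0)]
```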

The Ω(n log n) lower bound does not hold if only a few points show up on the convex hull. Indeed, in this case it is possible to do better: Kirkpatrick and Seidel [26], and later Chan [27], gave output-sensitive algorithms that compute the convex hull in O(n log k) time, where k is the number of vertices of the hull.

In R^3, the worst-case complexity of the convex hull is still linear, and it can be computed in O(n log n) time, either by a deterministic divide-and-conquer algorithm [21] or by a simpler randomized algorithm [19,20,22]. In dimensions d > 3, the convex hull can be computed in Θ(n^{⌊d/2⌋}) time [23]. This is the same as the worst-case complexity of the hull, and therefore optimal. Again, the simplest algorithms that achieve this bound are randomized [19,20,22]. There is also an output-sensitive algorithm by Chan [27], which computes the convex hull in O(n log k + (nk)^{1−1/(⌊d/2⌋+1)} log^{O(1)} n) time, where k is its complexity.

As remarked earlier, the expected complexity of the convex hull of a set of n uniformly distributed points is much smaller than the worst-case complexity. This means that if we use an output-sensitive algorithm to compute the convex hull, its expected running time will be better than its worst-case running time. One can do even better, however, by using specialized algorithms. For example, Dwyer [24] has shown that the convex hull of points distributed uniformly in, for example, the unit hypercube can be computed in linear expected time.

65.3.3 Dynamic Convex Hulls

In some applications the set P changes over time: new points are inserted into P, and some existing points are deleted. The convex hull can change drastically after an update: the insertion of a single point can cause Θ(n) points to disappear from the convex hull, and the deletion of a single point can cause Θ(n) points to appear on the convex hull. Surprisingly, it is nevertheless possible to store the convex hull of a planar point set in such a way that any update can be processed in O(log^2 n) time in the worst case, as shown by Overmars and van Leeuwen [28]. The key to their result is to not only store the convex hull of the whole set, but also information about the convex hull of certain subsets. The structure of Overmars and van Leeuwen roughly works as follows. Suppose we wish to maintain UH(P), the upper hull of P; maintenance of the lower hull can be done similarly. The structure to maintain UH(P) is a balanced binary tree T, whose leaves store the points from P sorted by x-coordinate. The idea is that each internal node ν stores the upper hull of all the points in the subtree rooted at ν. Instead of storing the complete upper hull at each node ν, however, we only store those parts that are not already on the upper hull of nodes higher up in the tree. In other words, the point corresponding to a leaf μ is stored at the highest ancestor of μ where it is on the upper hull—see Figure 65.6 for an illustration.

Figure 65.6 The Overmars-van Leeuwen structure to maintain the convex hull.

Note that the root still stores the upper hull of the entire set P. Because a point is stored in only one upper hull, the structure uses O(n) storage. Overmars and van Leeuwen show how to update the structure in O(log^2 n) time in the worst case.

Although the result by Overmars and van Leeuwen is more than 20 years old by now, it still has not been improved in its full generality. Nevertheless, there have been several advances in special cases. For example, for the semi-dynamic case (only insertions, or only deletions), there are structures with O(log n) update time [14,29]. Furthermore, O(log n) update time can be achieved in the off-line setting, where the sequence of insertions and deletions is known in advance [30]. Improvements are also possible if one does not need to maintain the convex hull explicitly. For example, in some applications the reason for maintaining the convex hull is to be able to find the point that is extreme in a query direction, or the intersection of the convex hull with a query line. Such queries can be answered in logarithmic time if the convex hull vertices are stored in order in a balanced binary tree, but it is not necessary to know all the convex hull vertices explicitly to answer such queries. This observation was used by Chan [31], who described a fully dynamic structure that can answer such queries in O(log n) time and that can be updated in O(log^{1+ε} n) time. This result was recently improved by Brodal and Jacob [32], who announced a structure that uses O(n) storage, has O(log n) update time, and can answer queries in O(log n) time. Neither Chan’s structure nor the structure of Brodal and Jacob, however, can report the convex hull in time linear in its complexity, and the update times are amortized.

65.4 Voronoi Diagrams

Recall the post-office problem mentioned in the introduction. Here we want to preprocess a set S of n points, referred to as sites, in the plane such that we can answer the query: which site in S is closest to a given query point q? In order to solve this problem we can divide the plane into regions according to the nearest-neighbor rule: each site s gets assigned the region of points that are closest to s. This subdivision, which compactly captures the distance information inherent in a given configuration, is called the Voronoi diagram of S—see Figure 65.7a.

Figure 65.7 (a) The Voronoi diagram of a set of points. (b) The dual graph of the Voronoi diagram. (c) The Delaunay triangulation of the points.

More formally, the Voronoi diagram of a set of sites S = {s_1, …, s_n} in R^d, which we refer to as Vor(S), partitions space into n regions—one for each site—such that the region for a site s_i consists of all points that are closer to s_i than to any other site s_j ∈ S. The set of points that are closest to a particular site s_i forms the so-called Voronoi cell of s_i, and is denoted by V(s_i). Thus, when S is a set of sites in the plane we have

V(s_i) = {p ∈ R^2 : dist(p, s_i) < dist(p, s_j) for all j ≠ i},

where dist(., .) denotes the Euclidean distance.

Now consider the dual graph of the Voronoi diagram, that is, the graph that has a node for every Voronoi cell and an arc between any two Voronoi cells that share a common edge—see Figure 65.7b. (Observe that the concept of dual graph used here has nothing to do with the duality transform discussed in Section 65.2.3.) Suppose we embed this graph in the plane, by using the site s_i to represent the node corresponding to the cell V(s_i) and by drawing the edges as straight line segments, as in Figure 65.7c. Somewhat surprisingly perhaps, this graph is always planar. Moreover, it is actually a triangulation of the point set S, assuming that no four points in S are co-circular. More details on this special triangulation, which is called the Delaunay triangulation, will be given in Section 65.5.1.

There exists a fascinating connection between Voronoi diagrams in R^d and half-space intersections in R^{d+1}. Assume for simplicity that d = 2, and consider the transformation that maps a site s = (s_x, s_y) in R^2 to the non-vertical plane h(s): z = 2 s_x x + 2 s_y y − (s_x^2 + s_y^2) in R^3. Geometrically, h(s) is the plane tangent to the unit paraboloid z = x^2 + y^2 at the point vertically above (s_x, s_y, 0). Let H(S) be the set of planes that are the images of a set of point sites S in the plane, and consider the convex polyhedron formed by the intersection ∩_{h ∈ H(S)} h^+ of the positive half-spaces defined by the planes in H(S), where h^+ denotes the half-space above h. Surprisingly, the projection of the edges and vertices of this polyhedron vertically downward onto the xy-plane is exactly the Voronoi diagram of S.
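The lifting can be checked directly: dist(p, s)^2 = |p|^2 − h_s(p) for every site s, so the plane that is highest above a point p is the one belonging to p's nearest site, which is why projecting the boundary of the half-space intersection down gives the Voronoi diagram. A small numerical sketch of this identity:

```python
from math import dist

def plane_height(s, p):
    """Value at p = (x, y) of the plane h(s): z = 2*sx*x + 2*sy*y - (sx^2 + sy^2)."""
    return 2 * s[0] * p[0] + 2 * s[1] * p[1] - (s[0] ** 2 + s[1] ** 2)

sites = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]
p = (3.0, 2.0)

# dist(p, s)^2 equals |p|^2 - h_s(p) for every site s, so the highest plane
# above p belongs to the nearest site; p lies in that site's Voronoi cell.
for s in sites:
    print(s, dist(p, s) ** 2, p[0] ** 2 + p[1] ** 2 - plane_height(s, p))

print(max(sites, key=lambda s: plane_height(s, p)))   # -> (4.0, 1.0), the nearest site
```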

The Voronoi diagram can be defined for various sets of sites, for example points, line segments, circles, or circular arcs, and for various metrics. Sections 65.4.1 and 65.4.2 discuss the complexity and construction algorithms of Voronoi diagrams for the usual case of point sites in R^d, while Section 65.4.3 describes some of the possible variations. Additional details and proofs of the material presented in this section can be found in References 33–35.

65.4.1 Complexity

Let S be a set of n points in R^d. The Voronoi diagram of S is a cell complex in R^d. If d = 2 then the Voronoi cell of a site is the interior of a convex, possibly unbounded polygon. Its boundary consists of Voronoi edges, which are equidistant from two sites, and Voronoi vertices, which are equidistant from at least three sites. The Voronoi diagram of n ≥ 3 sites has at most 2n − 5 vertices and at most 3n − 6 edges, which implies that the Delaunay triangulation has at most 2n − 5 triangles and 3n − 6 edges. What is the complexity of the Voronoi diagram in dimension d ≥ 3? Here the connection between Voronoi diagrams in R^d and intersections of half-spaces in R^{d+1} comes in handy: we know from the Upper Bound Theorem that the intersection of n half-spaces in R^{d+1} has complexity O(n^{⌊(d+1)/2⌋}) = O(n^{⌈d/2⌉}). This bound is tight, so the maximum complexity of the Voronoi diagram—and of the Delaunay triangulation, for that matter—in R^d is Θ(n^{⌈d/2⌉}).

65.4.2 Construction

In this section we present several algorithms to compute the Voronoi diagram or its dual, the Delaunay triangulation, for point sites in R^d. Several of these algorithms can be generalized to work with metrics other than the Euclidean, and to sites other than points. Since the Voronoi diagram in the plane has only linear complexity, one might be tempted to search for a linear-time construction algorithm. However, the problem of sorting n real numbers is reducible to the problem of computing Voronoi diagrams and, hence, any algorithm for computing the Voronoi diagram must take Ω(n log n) time in the worst case.

Two data structures that are particularly well suited to working with planar subdivisions like the Voronoi diagram are the doubly-connected edge list (DCEL) by Muller and Preparata [36] and the quad-edge structure by Guibas and Stolfi [37]. Both structures require O(n) space, where n is the complexity of the subdivision, and allow one to efficiently traverse the edges adjacent to a vertex and the edges bounding a face. In both structures, one can easily obtain in O(n) time a structure for the Voronoi diagram from a structure for the Delaunay triangulation, and vice versa. In fact, the quad-edge structure, as well as a variant of the DCEL [1], are representations of a planar subdivision and its dual at the same time, so there is nothing to convert (except, perhaps, that one may wish to explicitly compute the coordinates of the vertices of the Voronoi diagram, if the quad-edge structure or DCEL describes the Delaunay triangulation).
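A DCEL is typically built from half-edge records; the sketch below (field names are illustrative, not those of any particular library) shows the record and how the boundary of a face is traversed via the next-pointers.

```python
class HalfEdge:
    """One directed half of an edge in a doubly-connected edge list."""
    __slots__ = ("origin", "twin", "next", "prev", "face")

    def __init__(self, origin):
        self.origin = origin      # vertex the half-edge starts at
        self.twin = None          # oppositely directed half of the same edge
        self.next = None          # next half-edge along the same face boundary
        self.prev = None          # previous half-edge along the same face boundary
        self.face = None          # face to the left of the half-edge

def face_vertices(start):
    """Walk the next-pointers once around a face and collect its vertices."""
    vertices, e = [], start
    while True:
        vertices.append(e.origin)
        e = e.next
        if e is start:
            return vertices

# Tiny example: one triangular face with vertices a, b, c (twins omitted).
a, b, c = (0, 0), (1, 0), (0, 1)
e1, e2, e3 = HalfEdge(a), HalfEdge(b), HalfEdge(c)
e1.next, e2.next, e3.next = e2, e3, e1
e1.prev, e2.prev, e3.prev = e3, e1, e2
print(face_vertices(e1))    # -> [(0, 0), (1, 0), (0, 1)]
```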

Divide-and-conquer. The first deterministic worst-case optimal algorithm to compute the Voronoi diagram was given by Shamos and Hoey [38]. Their divide-and-conquer algorithm splits the set S of point sites by a dividing line into subsets L and R of approximately the same size. The Voronoi diagrams Vor(L) and Vor(R) are computed recursively and then merged into Vor(S) in linear time. This algorithm constructs the Voronoi diagram of a set of n points in the plane in O(n log n) time and linear space.

Plane sweep. The strategy of a sweep line algorithm is to move a line, called the sweep line, from top to bottom over the plane. While the sweep is performed, information is maintained regarding the structure one wants to compute. It is tempting to apply the same approach to Voronoi diagrams, by keeping track of the Voronoi edges that are currently intersected by the sweep line. It is problematic, however, to discover a new Voronoi region in time: when the sweep line reaches a new site, then the line has actually been intersecting the Voronoi edges of its region for a while. Fortune [39] was the first to find a way around this problem. Fortune’s algorithm applies the plane-sweep paradigm in a slightly different fashion: instead of maintaining the intersection of the sweep line with the Voronoi diagram, it maintains information about the part of the Voronoi diagram of the sites above the line that cannot be changed by sites below it. This algorithm provides an alternative way of computing the Voronoi diagram of n points in the plane in O(n log n) time and linear space.

Randomized incremental construction. A natural idea is to construct the Voronoi diagram by incremental insertion, that is, to obtain Vor(S) from Vor(S \ {s}) by inserting the site s. Insertion of a site means integrating its Voronoi region into the diagram constructed so far. Unfortunately the region of s can have up to n − 1 edges, for |S| = n, which may lead to a running time of O(n^2). The insertion process is probably better described and implemented in the dual environment, for the Delaunay triangulation DT: construct DT_i = DT({s_1, …, s_i}) by inserting s_i into DT_{i−1}. The advantage of this approach over a direct construction of Vor(S) is that Voronoi vertices that appear only in intermediate diagrams but not in the final one need not be computed or stored. DT_i is constructed by exchanging edges, using edge flips [40], until all edges invalidated by s_i have been removed. Still, the worst-case running time of this algorithm can be quadratic. However, if we insert the sites in random order, and the algorithm is implemented carefully, then one can prove that the expected running time is O(n log n), and that the expected amount of storage is O(n).

Other approaches. Finally, recall the connection between Delaunay triangulations and convex hulls. Since there exist O(n log n) algorithms to compute the convex hull of points in R^3 (see Section 65.3), we have yet another optimal algorithm for the computation of Voronoi diagrams.

65.4.3 Variations

In this section we present some of the common variations on Voronoi diagrams. The first is the order-k Voronoi diagram of a set S of n sites, which partitions R^d on the basis of the first k closest point sites. In other words, each cell in the order-k Voronoi diagram of a set S of sites in the plane corresponds to a k-tuple of sites from S and consists of those points in the plane for which that k-tuple are the k closest sites. One might fear that the order-k Voronoi diagram has Θ(n^k) cells, but this is not the case. In two dimensions, for example, its complexity is O(k(n − k)), and it can be computed in O(k(n − k) log n + n log^3 n) expected time [41].

The furthest-site Voronoi diagram partitions R^d according to the furthest site, or equivalently, according to the closest n − 1 of n sites. The furthest-site Voronoi diagram can be computed in O(n log n) time in two dimensions, and in O(n^{⌈d/2⌉}) time in dimension d ≥ 3.

One can also consider distance functions other than the Euclidean distance. For example, one can alter the distance function by additive or multiplicative weights. In this case every point site s_i is associated with a weight w_i, and the distance function d(s_i, x) between a point x and a site s_i becomes d(s_i, x) = w_i + dist(s_i, x) (additive weights) or d(s_i, x) = w_i · dist(s_i, x) (multiplicative weights), where dist(s_i, x) denotes the Euclidean distance between s_i and x. The Voronoi diagram for point sites in two dimensions with additive weights can be computed in O(n log n) time; for multiplicative weights the time increases to O(n^2).

Finally, the power diagram, or Laguerre diagram, is another Voronoi diagram for point sites s_i that are associated with weights w_i. Here the distance function is the power distance introduced by Aurenhammer [42], where the distance from a point x to a site s_i is measured along a line tangent to the sphere of radius √w_i centered at s_i, that is, d(s_i, x) = dist(s_i, x)^2 − w_i. The power diagram can be computed in O(n log n) time in two dimensions.
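The three modified distance functions differ only in how the weight enters; a small sketch (sites and weights are made-up values) showing that the site winning a query point can differ between them:

```python
from math import dist

def additive(s, w, x):
    return w + dist(s, x)          # additively weighted distance

def multiplicative(s, w, x):
    return w * dist(s, x)          # multiplicatively weighted distance

def power(s, w, x):
    return dist(s, x) ** 2 - w     # power (Laguerre) distance

sites = [((0.0, 0.0), 1.0), ((3.0, 0.0), 5.0)]   # (site, weight) pairs
x = (1.0, 0.0)
for f in (additive, multiplicative, power):
    winner = min(sites, key=lambda sw: f(sw[0], sw[1], x))[0]
    print(f.__name__, winner)       # additive/multiplicative pick (0,0); power picks (3,0)
```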

65.5 Triangulations

In geometric data processing, structures that partition the geometric input, as well as connectivity structures for geometric objects, play an important role. Versatile tools in this context are triangular meshes, often called triangulations. A triangulation of a geometric domain such as a polygon in R^2 or a polyhedron in R^3 is a partition into simplices that meet only at shared faces. A triangulation of a point set S is a triangulation of the convex hull of S, such that the vertices in the triangulation are exactly the points in S.

In the following sections, we first discuss the most famous of all triangulations, the Delaunay triangulation. We then address triangulations of polygons and polyhedra in R3. Finally we describe a recent generalization of triangulations: the pseudo-triangulation.

65.5.1 Delaunay Triangulation

In this section we provide additional detail on the Delaunay triangulation of a set S = {s_1, …, s_n} of points in R^d, which was introduced in Section 65.4. There we defined the Delaunay triangulation as the dual of the Voronoi diagram. In the plane, for instance, the Delaunay triangulation has an edge between sites s_i and s_j if and only if the Voronoi cells of s_i and s_j share a boundary edge. In higher dimensions, there is an edge between s_i and s_j if their Voronoi cells share a (d − 1)-facet. The Delaunay triangulation can also be defined directly. If we restrict ourselves for the moment to a set S of points in R^2, then the Delaunay triangulation of S, DT(S), is defined by the empty-circle condition: a triangle Δ defined by three points s_i, s_j, and s_k is part of DT(S) if and only if Δ’s circumcircle neither encloses nor passes through any other points of S. More generally, d + 1 points in R^d define a simplex in the Delaunay triangulation if and only if their circumscribed sphere neither encloses nor passes through any other points of S. If no d + 2 points of S are co-spherical then DT(S) is indeed a triangulation. If d + 2 or more points are co-spherical, then DT(S) can contain cells with more than d + 1 sides. Fortunately, such cells can easily be further decomposed into simplices—with a bottom-vertex triangulation, for example—so such degeneracies are not a real problem. To simplify the description, we from now on assume that these degeneracies do not occur.
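In the plane the empty-circle condition is usually evaluated with the classical incircle determinant: for a triangle (a, b, c) in counterclockwise order, a point d lies strictly inside the circumcircle exactly when the determinant below is positive. A sketch of this predicate:

```python
def ccw(a, b, c):
    """> 0 iff a, b, c make a counterclockwise (left) turn."""
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def in_circumcircle(a, b, c, d):
    """True iff d lies strictly inside the circumcircle of the
    counterclockwise triangle a, b, c (standard incircle determinant)."""
    assert ccw(a, b, c) > 0, "a, b, c must be in counterclockwise order"
    m = [[p[0]-d[0], p[1]-d[1],
          (p[0]-d[0])**2 + (p[1]-d[1])**2] for p in (a, b, c)]
    det = (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    return det > 0

a, b, c = (0, 0), (4, 0), (0, 4)          # circumcircle centered at (2, 2)
print(in_circumcircle(a, b, c, (2, 2)))   # True: inside
print(in_circumcircle(a, b, c, (5, 5)))   # False: outside
```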

In the previous section we have seen a close connection between Voronoi diagrams in R^d and intersections of half-spaces in R^{d+1}. Similarly, there is a close connection between the Delaunay triangulation in R^d and convex hulls in R^{d+1}. Let us again restrict ourselves to the case d = 2. Consider the transformation that maps a site s = (s_x, s_y) in R^2 onto the point λ(s_x, s_y) = (s_x, s_y, s_x^2 + s_y^2) in R^3. In other words, s is “lifted” vertically onto the unit paraboloid to obtain λ(s). Let λ(S) be the set of lifted sites. Then if we project the lower convex hull of λ(S)—the part of the convex hull consisting of the facets facing downward—back onto the xy-plane, we get the Delaunay triangulation of S.

The Delaunay triangulation is the “best” triangulation with respect to many optimality criteria. The Delaunay triangulation:

Minimizes the maximum radius of a circumcircle;

Maximizes the minimum angle;

Maximizes the sum of inscribed circle radii;

Minimizes the “potential energy” of a piecewise-linear interpolating surface.

Also, the distance between any two vertices of the Delaunay triangulation along the triangulation edges is at most 2.42 times their Euclidean distance, that is, the dilation of the Delaunay triangulation is at most 2.42. Finally, the Delaunay triangulation contains as a subgraph many other interesting graphs:

EMST ⊆ RNG ⊆ GG ⊆ DT

where EMST is the Euclidean minimum spanning tree, RNG is the relative neighborhood graph, and GG is the Gabriel graph (see Reference 4 for details on these graphs).

Since the Delaunay triangulation is the dual of the Voronoi diagram, any algorithm presented in Section 65.4.2 can be used to efficiently compute the Delaunay triangulation. We therefore refrain from presenting any additional algorithms at this point.

65.5.2 Polygons

Triangulating a simple polygon P is not only an interesting problem in its own right, but it is also an important preprocessing step for many algorithms. For example, many shortest-path and visibility problems on a polygon P can be solved in linear time if a triangulation of P is given [43]. It is fairly easy to show that any polygon can indeed be decomposed into triangles by adding a number of diagonals and, moreover, that the number of triangles in any triangulation of a simple polygon with n vertices is n − 2.

There are many algorithms to compute a triangulation of a simple polygon. Most of them run in O(n log n) time. Whether it is possible to triangulate a polygon in linear time was a prominent open problem for several years until Chazelle [44], after a series of interim results, devised a linear-time algorithm. Unfortunately, his algorithm is more of theoretical than of practical interest, so it is probably advisable to use a deterministic algorithm with O(n log n) running time, such as the one described below, or one of the slightly faster randomized approaches with a time complexity of O(n log*n) [20].
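For small inputs a far simpler (roughly cubic-time) ear-clipping sketch—not one of the algorithms discussed above—already produces the n − 2 triangles. It assumes a simple polygon with vertices in counterclockwise order and no degenerate (collinear) ears.

```python
def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def point_in_triangle(p, a, b, c):
    """True iff p lies inside or on the boundary of CCW triangle a, b, c."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clipping(polygon):
    """Triangulate a simple polygon (vertices in CCW order) by repeatedly
    cutting off an 'ear': a convex vertex whose triangle with its two
    neighbours contains no other polygon vertex."""
    verts = list(polygon)
    triangles = []
    while len(verts) > 3:
        n = len(verts)
        for i in range(n):
            a, b, c = verts[i-1], verts[i], verts[(i+1) % n]
            if cross(a, b, c) <= 0:          # reflex or flat vertex: not an ear
                continue
            others = [v for v in verts if v not in (a, b, c)]
            if any(point_in_triangle(v, a, b, c) for v in others):
                continue
            triangles.append((a, b, c))      # clip the ear at b
            del verts[i]
            break
    triangles.append(tuple(verts))
    return triangles

poly = [(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]    # simple, CCW, one reflex vertex
print(len(ear_clipping(poly)))                      # -> 3 (= n - 2 triangles)
```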

In the remainder of this section we sketch a deterministic algorithm that triangulates a simple polygon P with n vertices in O(n log n) time. We say that a polygon P is monotone with respect to a line ℓ if the intersection of any line ℓ′ perpendicular to ℓ with P is connected. A polygon that is monotone with respect to the x-axis is called x-monotone. Now the basic idea is to decompose P into monotone polygons and then to triangulate each of these monotone polygons in linear time.

There are several methods to decompose a simple polygon into x-monotone polygons in O(n log n) time. One approach is to sweep over P twice, from left to right and then from right to left, and to add appropriate edges to vertices that did not previously have at least one edge extending to the left and at least one edge extending to the right. A more detailed description of this or related approaches can be found in References 1,8.

Triangulating a monotone polygon. Now suppose we are given an x-monotone polygon P—see Figure 65.8. We consider its vertices p_1, …, p_n from left to right and use a stack to store the vertices of the not-yet-triangulated part of the polygon (which necessarily form a reflex chain) to the left of our current vertex p_i. If p_i is adjacent to p_{i−1}, as in Figure 65.8a, then we pop vertices from the stack and connect them to p_i until the stack (including p_i) forms a reflex chain again. In particular, that might mean that we simply add p_i to the stack. If p_i is adjacent to the leftmost vertex on the stack, which could be p_{i−1}, as in Figure 65.8b, then we connect p_i to each vertex of the stack and clear the stack of all vertices except p_i and p_{i−1}. This algorithm triangulates P in linear time.

Figure 65.8 Triangulating an x-monotone polygon.

65.5.3 Polyhedra

In this section we briefly discuss triangulations (or tetrahedralizations) of three-dimensional polyhedra. A polyhedron P is a connected solid with a piecewise linear boundary (i.e., its boundary is composed of polygons). We assume P to be non-degenerate: a sufficiently small ball around any point of the boundary of P contains a connected component of the interior as well as the exterior of P. The number of vertices, edges, and faces of a non-degenerate polyhedron are linearly related.

Three dimensions unfortunately do not behave as nicely as two. Two triangulations of the same input data may contain quite different numbers of tetrahedra. For example, a triangulation of a convex polyhedron with n vertices may contain between n − 3 and (n−2 choose 2) tetrahedra. Furthermore, some non-convex polyhedra cannot be triangulated at all without the use of additional (so-called Steiner) points. The most famous example is Schönhardt’s polyhedron. Finally, it is NP-complete to determine whether a polyhedron can be triangulated without Steiner points and to test whether k Steiner points are sufficient [45].

Chazelle [46] constructed a polyhedron with n vertices that requires Ω(n^2) Steiner points. On the other hand, any polyhedron can be triangulated with O(n^2) tetrahedra, matching his lower bound. If one pays special attention to the reflex edges of the input polyhedron P, then it is even possible to triangulate P using only O(n + r^2) tetrahedra, where r is the number of reflex edges of P [47].

65.5.4 Pseudo-Triangulations

In recent years a relaxation of triangulations, called pseudo-triangulations, has received considerable attention. Here, faces are bounded by three concave chains, rather than by three line segments. More formally, a pseudo-triangle is a planar polygon that has exactly three convex vertices with internal angles less than π; all other vertices are concave. Note that every triangle is a pseudo-triangle as well. The three convex vertices of a pseudo-triangle are called its corners. Three concave chains, called sides, join the three corners. A pseudo-triangulation for a set S of points in the plane is a partition of the convex hull of S into pseudo-triangles whose vertex set is exactly S—see Figure 65.9. Pseudo-triangulations, also called geodesic triangulations, were originally studied for convex sets and for simple polygons in the plane because of their applications to visibility [48,49] and ray shooting [50,51]. But in the last few years they have also found applications in robot motion planning [52] and kinetic collision detection [53,54].

Figure 65.9 Triangulations of a point set, a simple polygon, and a polyhedron; a pseudo-triangulation of a point set.

Of particular interest are the so-called minimum pseudo-triangulations, which have the minimum number of pseudo-triangular faces among all possible pseudo-triangulations of a given domain. They were introduced by Streinu [52], who established that every minimum pseudo-triangulation of a set S of n points consists of exactly n − 2 pseudo-triangles (here we do not count the outer face). Note that such a statement cannot be made for ordinary triangulations: there the number of triangles depends on the number of points that show up on the convex hull of S. Minimum pseudo-triangulations are also referred to as pointed pseudo-triangulations. The name stems from the fact that every vertex v of a minimum pseudo-triangulation has an incident region whose angle at v is greater than π. The converse is also true (and can be easily established via Euler’s relation): If every vertex of a pseudo-triangulation is pointed—it has an incident angle greater than π—then this pseudo-triangulation has exactly n − 2 pseudo-triangles and is therefore minimum. A pseudo-triangulation is called minimal (as opposed to minimum) if the union of any two faces is not a pseudo-triangle. In general, all minimum pseudo-triangulations are also minimal, but the opposite is not necessarily true—see Figure 65.10a for an example of a minimal but not minimum pseudo-triangulation.
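Pointedness is easy to test from the directions of the edges incident to a vertex: the vertex is pointed exactly when some angular gap between consecutive incident edges exceeds π. A small sketch:

```python
from math import atan2, pi

def is_pointed(vertex, neighbours):
    """True iff `vertex` has an incident angle greater than pi, i.e., some
    angular gap between consecutive incident edges exceeds pi."""
    angles = sorted(atan2(y - vertex[1], x - vertex[0]) for x, y in neighbours)
    gaps = [b - a for a, b in zip(angles, angles[1:])]
    gaps.append(2 * pi - (angles[-1] - angles[0]))   # wrap-around gap
    return max(gaps) > pi

# A vertex with all incident edges in a half-plane is pointed ...
print(is_pointed((0, 0), [(1, 0), (1, 1), (0, 1)]))            # True
# ... one with incident edges in all four quadrants is not.
print(is_pointed((0, 0), [(1, 0), (0, 1), (-1, 0), (0, -1)]))  # False
```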

The great variety of applications in which pseudo-triangulations are successfully employed prompted a growing interest in their geometric and combinatorial properties, which often deviate substantially from those of ordinary triangulations. An example of a nice property of pseudo-triangulations is that every point set in the plane admits a pseudo-triangulation of maximum vertex degree 5 [55]. (This bound is tight.) Again, this is not the case for ordinary triangulations: any ordinary triangulation of the point set depicted in Figure 65.10b contains a vertex of degree n − 1.

Figure 65.10 (a) A minimal but not minimum pseudo-triangulation. (b) A point set for which every triangulation has a vertex of high degree.

Up to now there are unfortunately no extensions of pseudo-triangulations to dimensions higher than two that retain a significant subset of their useful planar properties.

References

1.M. de Berg, M. van Kreveld, M. Overmars, and O. Schwarzkopf: Computational Geometry: Algorithms and Applications (2nd Ed.). Springer-Verlag, 2000.

2.J.-D. Boissonnat and M. Yvinec: Algorithmic Geometry. Cambridge University Press, 1998.

3.J. O’Rourke: Computational Geometry in C (2nd Ed.). Cambridge University Press, 1998.

4.F. P. Preparata and M. I. Shamos: Computational Geometry: An Introduction. Springer-Verlag, 1985.

5.H. Edelsbrunner: Algorithms in Combinatorial Geometry. Springer-Verlag, 1987.

6.M. Sharir and P. K. Agarwal: Davenport-Schinzel Sequences and Their Geometric Applications. Cambridge University Press, 1995.

7.A. Okabe, B. Boots, K. Sugihara, and S. N. Chiu: Spatial Tessellations: Concepts and Applications of Voronoi Diagrams (2nd Ed.). Wiley, 2000.

8.J. E. Goodman and J. O’Rourke (Eds.): Handbook of Discrete and Computational Geometry. CRC Press LLC, 1997.

9.J.-R. Sack and J. Urrutia (Eds.): Handbook of Computational Geometry. Elsevier Science, 2000.

10.R. Pollack, M. Sharir, and S. Sifrony: Separating two simple polygons by a sequence of translations. Discr. Comput. Geom. 3:123–136, 1988.

11.L. J. Guibas, M. Sharir, and S. Sifrony: On the general motion planning problem with two degrees of freedom. Discr. Comput. Geom. 4:491–521, 1989.

12.B. Tagansky: A new technique for analyzing substructures in arrangements of piecewise linear surfaces. Discr. Comput. Geom. 16:455–479, 1996.

13.H. Edelsbrunner: The upper envelope of piecewise linear functions: Tight complexity bounds in higher dimensions. Discr. Comput. Geom. 4:337–343, 1989.

14.D. Halperin and M. Sharir: New bounds for lower envelopes in three dimensions, with applications to visibility in terrains. Discr. Comput. Geom. 12:313–326, 1994.

15.P. McMullen: The maximal number of faces of a convex polytope. Mathematika 17:179–184, 1970.

16.S. Basu: On the combinatorial and topological complexity of a single cell. In Proc. 39th Annu. IEEE Sympos. Found. Comput. Sci., pp. 606–616, 1998.

17.M. Sharir: Almost tight upper bounds for lower envelopes in higher dimensions. Discr. Comput. Geom. 12:327–345, 1994.

18.R. L. Graham: An efficient algorithm for determining the convex hull of a finite planar set. Inf. Proc. Lett. 1:132–133, 1972.

19.K. L. Clarkson and P. W. Shor: Applications of random sampling in computational geometry, II. Discr. Comput. Geom. 4:387–421, 1989.

20.K. Mulmuley: Computational Geometry: An Introduction through Randomized Algorithms. Prentice Hall, 1994.

21.F. P. Preparata and S. J. Hong: Convex hulls of finite sets of points in two and three dimensions. Comm. ACM 20:87–93, 1977.

22.R. Seidel: Small-dimensional linear programming and convex hulls made easy. Discr. Comput. Geom. 6:423–434, 1991.

23.B. Chazelle: An optimal convex hull algorithm in any fixed dimension. Discr. Comput. Geom. 10:377–409, 1993.

24.R. A. Dwyer: On the convex hull of random points in a polytope. J. Appl. Probab. 25:688–699, 1988.

25.R. A. Dwyer: Kinder, gentler average-case analysis for convex hulls and maximal vectors. SIGACT News 21:64–71, 1990.

26.D. Kirkpatrick and R. Seidel: The ultimate planar convex hull algorithm? SIAM J. Comput. 15:287–299, 1986.

27.T. Chan: Output-sensitive results on convex hulls, extreme points, and related problems. Discr. Comput. Geom. 16:369–387, 1996.

28.M. H. Overmars and J. van Leeuwen: Maintenance of configurations in the plane. J. Comput. Systs. Sci. 23:166–204, 1981.

29.F. P. Preparata: An optimal real-time algorithm for planar convex hulls. Comm. ACM 22:405–408, 1979.

30.J. Hershberger and S. Suri: Off-line maintenance of planar configurations. J. Algorithms 21:453–475, 1996.

31.T. Chan: Dynamic planar convex hull operations in near-logarithmic amortized time. J. ACM. 48:1–12, 2001.

32.G. S. Brodal and R. Jacob: Dynamic Planar Convex Hull. In Proc. 43rd Annu. IEEE Sympos. Found. Comput. Sci., pp. 617–626, 2002.

33.F. Aurenhammer: Voronoi diagrams—A survey of a fundamental geometric data structure. ACM Comput. Surv., 23:345–405, 1991.

34.F. Aurenhammer and R. Klein: Voronoi diagrams. In: J.-R. Sack and J. Urrutia (eds.), Handbook of Computational Geometry, Chapter V, pp. 201–290. Elsevier Science Publishing, 2000.

35.S. Fortune: Voronoi diagrams and Delaunay triangulations. In: F. K. Hwang and D.-Z. Du (eds.), Computing in Euclidean Geometry (2nd Ed.). World Scientific, pp. 225–265, 1995.

36.D. E. Muller and F. P. Preparata: Finding the intersection of two convex polyhedra. Theoret. Comput. Sci., 7:217–236, 1978.

37.L. J. Guibas and J. Stolfi: Primitives for the manipulation of general subdivisions and the computation of Voronoi diagrams. ACM Trans. Graph., 4:74–123, 1985.

38.M. I. Shamos and D. Hoey: Closest-point problems. In Proc. 16th IEEE Symp. Found. Comput. Science, pp. 151–162, 1975.

39.S. Fortune: A sweepline algorithm for Voronoi diagrams. Algorithmica, 2:153–174, 1987.

40.C. L. Lawson: Software for C1 surface interpolation. In: J. R. Rice (ed.), Math. Software III, Academic Press, New York, NY, pp. 161–194, 1977.

41.P. K. Agarwal, M. de Berg, J. Matousek, and O. Schwarzkopf: Constructing levels in arrangements and higher order Voronoi diagrams. In Proc. 11th Symp. Comput. Geom., pp. 71–78, 1995.

42.F. Aurenhammer: Power diagrams: properties, algorithms, and applications. SIAM J. Comput., 16:78–96, 1987.

43.L. J. Guibas, J. Hershberger, D. Leven, M. Sharir, and R. E. Tarjan. Linear-time algorithms for visibility and shortest path problems inside triangulated simple polygons. Algorithmica, 2:209–233, 1987.

44.B. Chazelle: Triangulating a simple polygon in linear time. Discrete Comput. Geom., 6:485–524, 1991.

45.J. Ruppert and R. Seidel: On the difficulty of tetrahedralizing 3-dimensional non-convex polyhedra. Discrete Comput. Geom., 7:227–253, 1992.

46.B. Chazelle: Convex partitions of polyhedra: A lower bound and worst-case optimal algorithm. SIAM J. Comput., 13:488–507, 1984.

47.B. Chazelle and L. Palios: Triangulating a nonconvex polytope. Discrete Comput. Geom., 5:505–526, 1990.

48.M. Pocchiola and G. Vegter: Minimal tangent visibility graphs. Comput. Geom. Theory Appl., 6:303–314, 1996.

49.M. Pocchiola and G. Vegter: Topologically sweeping visibility complexes via pseudo-triangulations. Discrete Comput. Geom., 16:419–453, 1996.

50.B. Chazelle, H. Edelsbrunner, M. Grigni, L. J. Guibas, J. Hershberger, M. Sharir, and J. Snoeyink: Ray shooting in polygons using geodesic triangulations. Algorithmica, 12:54–68, 1994.

51.M. Goodrich and R. Tamassia: Dynamic ray shooting and shortest paths in planar subdivision via balanced geodesic triangulations. J. Algorithms 23:51–73, 1997.

52.I. Streinu: A combinatorial approach to planar non-colliding robot arm motion planning. In Proc. 41st FOCS, 2000, pp. 443–453.

53.P. K. Agarwal, J. Basch, L. J. Guibas, J. Hershberger, and L. Zhang: Deformable free space tilings for kinetic collision detection. Int. J. Robotics Research 21(3):179–197, 2002.

54.D. Kirkpatrick, J. Snoeyink, and B. Speckmann: Kinetic collision detection for simple polygons. Int. J. Comput. Geom. Appl. 12(1&2):3–27, 2002.

55.L. Kettner, D. Kirkpatrick, A. Mantler, J. Snoeyink, B. Speckmann, and F. Takeuchi: Tight degree bounds for pseudo-triangulations of points. Comp. Geom. Theory Appl., 25:1–12, 2003.

*This chapter has been reprinted from the first edition of this Handbook, without any content updates.

*Unbounded faces require a bit of care, but they can be handled in a similar way.

*In the previous section we used the term k-cells for the k-dimensional features in an arrangement, but since we are dealing here with a single polytope we prefer the term k-facet.
