Algorithm 5: Nonlocal pixel exchange

Input: Original image f, pixel mask c, size m of the candidate set, and the set K of pixel indices belonging to the mask c.

Initialization: u := r(c, f) and cnew := c.

Compute: Repeat

(i) Choose randomly m ≪ |K| pixel indices i from J ∖ K and compute the local errors ei := (ui − fi)².

(ii) Exchange step:

Choose randomly a j ∈ K and set cnew,j := 0.

For the index i with the largest local error ei, set cnew,i := 1.

(iii) Compute unew := r(cnew, f).

(iv) If MSE(u, f) > MSE(unew, f)

u := unew and c := cnew.

Update K.

else

Reset cnew := c.

Until no pairs can be found for exchange.

Output: Optimized mask cnew.
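To make the steps concrete, here is a small self-contained sketch of Algorithm 5 on a 1D signal, with homogeneous diffusion inpainting as the reconstruction r(c, f). The 1D setting and all function names are our own simplifications; the chapter works with 2D images.

```python
import numpy as np

def inpaint(c, f):
    """Homogeneous diffusion inpainting r(c, f): solve (C - (I - C) A) u = C f,
    with A a 1D Laplacian with reflecting boundaries (1D stand-in)."""
    n = len(f)
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0          # reflecting boundary conditions
    C = np.diag(c.astype(float))
    M = C - (np.eye(n) - C) @ A
    return np.linalg.solve(M, C @ f)

def nonlocal_pixel_exchange(f, c, m=5, iters=200, seed=None):
    """Sketch of Algorithm 5 (names are ours, not from the text)."""
    rng = np.random.default_rng(seed)
    c = c.copy()
    u = inpaint(c, f)
    for _ in range(iters):
        K = np.flatnonzero(c)                        # mask indices
        nonmask = np.flatnonzero(c == 0)
        cand = rng.choice(nonmask, size=min(m, len(nonmask)), replace=False)
        errs = (u[cand] - f[cand]) ** 2              # local errors e_i
        j = rng.choice(K)                            # random mask pixel to give up
        c_new = c.copy()
        c_new[j] = 0
        c_new[cand[np.argmax(errs)]] = 1             # move it to the worst candidate
        u_new = inpaint(c_new, f)
        if np.mean((u - f) ** 2) > np.mean((u_new - f) ** 2):
            c, u = c_new, u_new                      # accept only improvements
    return c, u
```

Since an exchange is only accepted when the MSE decreases, the error is monotonically non-increasing over the iterations, and the number of mask points is preserved.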

As for the previous algorithms, we are interested in an optimal parameter selection. Table 3 shows the results for different choices of m when the nonlocal pixel exchange is applied to the masks from the previous section (randomly selected, regular grid, and analytic approach from Figure 7) and, in addition, to the one we obtained by probabilistic sparsification (also Figure 7). Technically, one could always choose m = 1, but a noticeable speedup can be obtained by choosing a larger m. In our experiments, values around m = 20 result in the fastest convergence.

The nonlocal pixel exchange improves the masks from all methods we have considered so far; see Figure 6. Especially within the first few iterations, we achieve significant quality gains. After 500,000 iterations, we reach an MSE below 45 with all three masks. The best reconstruction, with an MSE of 41.92, is obtained with the mask from probabilistic sparsification. It is depicted in Figure 7. As for the previous methods, Table 4 also provides quantitative results for the test images walter and peppers. They support the above observations.

Table 3: Mean squared error after 500,000 iterations for different values for m, when the nonlocal pixel exchange is applied to different masks of the test image trui (see also Figure 7). The best result for each mask is marked in boldface.

Fig. 6: Convergence behavior for the first 50,000 iterations, when the nonlocal pixel exchange is applied to different masks (Figure 7(a,d,g)) of the trui test image with optimal parameter m (cf. Table 3).

2.5.2Optimizing tonal data

So far, all our 2D optimization approaches only propose a solution for the spatial optimization problem. However, the results in Section 2.4.2 show that a tonal optimization, that is, an optimization of the gray values, can be very worthwhile. From a data compression point of view, it is important to note that changing the gray values at the chosen data points does not increase the amount of data that needs to be stored. At the same time, the quality improvements can be remarkable. In this section, we present an approach that allows us to determine the optimal gray values for any given mask.

In order to find the optimal gray values g for a fixed mask c, we consider the following minimization approach:

min_g ∥r(c, g) − f∥²,   (5.3)

where ∥·∥ is the standard Euclidean norm, f denotes the original image, and r(c, g) the reconstruction from (5.2). Because of the linearity of r with respect to g, this is a linear least squares problem. In our next steps, we analyze its well-posedness properties and propose an efficient numerical algorithm.

Existence and uniqueness results

Let ei denote the i-th canonical basis vector of ℝ^|J|, that is, ei,j = 1 if i = j, and 0 otherwise. Then we call r(c, ei) the inpainting echo of pixel i. Since r is linear in g, we can express the reconstruction u as a superposition of its inpainting echoes:

u = r(c, g) = Σi∈J gi r(c, ei).

Since r(c, ei) is 0 for ci = 0 (i.e., for i ∈ J ∖ K), we can simplify this to a summation over K:

u = Σi∈K gi r(c, ei).

For our minimization problem (5.3), this means that the coefficients gi can be chosen arbitrarily if i ∈ J ∖ K. For simplicity, we fix them at 0. The remaining gi with i ∈ K can be obtained by considering the least squares problem

min_gK ∥B gK − f∥²,

where gK = (gi)i∈K is a vector of size |K|, and B is a |J| × |K| matrix that contains the vectors {r(c, ei) | i ∈ K} as columns. The associated normal equations are given by

BᵀB gK = Bᵀf.   (5.7)
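As a sketch of this construction, the following code builds B from the inpainting echoes of a small 1D homogeneous diffusion inpainting problem and solves the normal equations BᵀB gK = Bᵀf. The 1D setting and names are our own simplifications.

```python
import numpy as np

def gvo_via_echoes(c, f):
    """Tonal optimization via inpainting echoes on a 1D stand-in:
    stack the echoes r(c, e_i), i in K, into B and solve the normal
    equations B^T B g_K = B^T f."""
    n = len(f)
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0                  # reflecting boundaries
    C = np.diag(c.astype(float))
    M = C - (np.eye(n) - C) @ A
    K = np.flatnonzero(c)
    # echo of mask pixel i: r(c, e_i) = M^{-1} e_i (the i-th column of M^{-1})
    B = np.column_stack([np.linalg.solve(M, np.eye(n)[:, i]) for i in K])
    gK = np.linalg.solve(B.T @ B, B.T @ f)      # normal equations
    g = np.zeros(n)
    g[K] = gK                                   # non-mask gray values fixed at 0
    return g, B @ gK                            # optimal data and reconstruction
```

Since the optimal gK solves a least squares problem over the column space of B, the resulting reconstruction can never have a larger MSE than the one obtained with the original gray values f at the mask points.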

By construction, the |K| × |K| matrix BᵀB is positive semidefinite. It is, however, not obvious that the eigenvalue 0 cannot appear. The theorem below excludes such a singular situation.

Theorem 5.1. Let K be nonempty. Then the matrix BᵀB is invertible, and thus the linear system (5.7) has a unique solution.

Proof. Mainberger et al. [95] have proven that for a nonempty set K, the |J| × |J| matrix M = C − (I − C)A is invertible, that is, M⁻¹ exists. Moreover, for i ∈ K we have

r(c, ei) = M⁻¹ C ei = M⁻¹ ei.

This shows that r(c, ei) is the i-th column of M⁻¹. Since M is invertible, M⁻¹ is also regular. Thus, all column vectors of M⁻¹ have to be linearly independent. In particular, this implies that also all columns {r(c, ei) | i ∈ K} of the |J| × |K| matrix B are linearly independent. Therefore, BᵀB is invertible, and the linear system (5.7) has a unique solution.

Numerical algorithm

To find the solution of the minimization problem (5.3), Mainberger et al. [96] have solved the associated normal equations (5.7) iteratively. In particular, they have chosen a randomized Gauß–Seidel scheme that updates the gray values at the individual mask points one after another. Alternatively, one could also employ other iterative solvers or solve the system of equations with direct methods such as LU or QR decompositions [66]. In general, however, typical methods that rely on inpainting echoes suffer from the relatively high computational cost of obtaining the individual echoes. Although one can precompute and reuse them in subsequent iteration steps, they still need to be computed at least once. This is particularly inefficient when a large number of mask points is present. Moreover, precomputing inpainting echoes leads to higher memory requirements. The more recent strategies in [29, 68] avoid the direct computation of the inpainting echoes. Instead, the original minimization problem (5.3) is solved directly with the help of primal-dual methods or related sophisticated optimization strategies. This gives more efficient algorithms for tonal optimization.

In the following, we propose an alternative approach to find optimal tonal data. It is based on an accelerated gradient descent strategy that allows an efficient gray value optimization. Similar to [29, 68], we consider the original minimization problem (5.3) directly. Thus, we also avoid computing inpainting echoes. This leads to a reduced run time as well as to low memory requirements. Below we first explain the classical gradient method with exact line search. Afterwards, we present a novel variant that benefits from acceleration with a fast explicit diffusion scheme in the sense of Grewenig et al. [62].

Our goal is to minimize the objective function

E(g) := ½ ∥r(c, g) − f∥².   (5.9)

Starting with the initialization g0 = Cf, a gradient descent scheme minimizes this energy E by iteratively updating the current gray values for k ≥ 0 as

gk+1 = gk − α ∇E(gk)   (5.10)

with a step size α and the gradient ∇E(gk) depending on the iterates gk. Denoting the inpainting solution at iteration step k by uk, that is, uk := r(c, gk), the gradient can be written as

∇E(gk) = Jᵀ (uk − f),   (5.11)

where J is the Jacobian of r(c, gk) with respect to the second component. With the definition of r(c, gk) from (5.2), we obtain

∇E(gk) = C (Mᵀ)⁻¹ (uk − f),   (5.12)

where we have exploited the symmetries of the matrices C and A. Computing the iterate uk and the gradient ∇E(gk) means solving a linear system of equations for each of them. This can be done efficiently with a bidirectional multigrid solver, as suggested in [95].

The main parameter that we have to specify is the step size α. As one possibility, it can be optimized to yield the largest possible decay of the energy in each step. This comes down to the least squares problem

min_α ∥r(c, gk − α ∇E(gk)) − f∥²,

Exploiting the linearity of r in the second component allows us to obtain the minimizer in closed form:

α = ∥∇E(gk)∥² / ∥r(c, ∇E(gk))∥².   (5.14)

This strategy is also known as exact line search. It guarantees that the sequence (gk)k converges to the minimum of the energy [15]. As a stopping criterion, we consider the relative norm of the gradient, which should approach zero at the optimum. In other words, we stop as soon as

∥∇E(gk)∥ ≤ ε ∥∇E(g0)∥   (5.15)

with some small number ε > 0. An algorithmic overview of the classical gradient descent method with exact line search is shown below. It serves as our baseline method.

Algorithm 6: Gray value optimization with exact line search

Input: Original image f, pixel mask c.

Initialization: g0 := Cf , k := 0.

Compute: Repeat for k ≥ 0

(i) Compute the gradient ∇E(gk) via (5.12).

(ii) Determine the step size α with equation (5.14).

(iii) Update the tonal data: gk+1 := gk − α ∇E(gk).

until the stopping criterion (5.15) is fulfilled.

Output: Optimized gray values g.
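Algorithm 6 can be sketched as follows on a 1D stand-in; the 1D setting, the dense solves, and the function names are our own simplifications (the chapter uses a multigrid solver on 2D images).

```python
import numpy as np

def gvo_exact_line_search(c, f, eps=1e-3, max_iter=500):
    """Sketch of Algorithm 6: gradient descent with exact line search
    for tonal optimization of homogeneous diffusion inpainting (1D)."""
    n = len(f)
    # 1D Laplacian A with reflecting boundaries, and M = C - (I - C) A
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0
    C = np.diag(c.astype(float))
    M = C - (np.eye(n) - C) @ A
    r = lambda g: np.linalg.solve(M, C @ g)               # reconstruction r(c, g)
    grad = lambda u: C @ np.linalg.solve(M.T, u - f)      # grad E = J^T (u - f)
    g = C @ f                                             # initialization g^0 := C f
    p0 = np.linalg.norm(grad(r(g)))
    for _ in range(max_iter):
        u = r(g)
        p = grad(u)
        if np.linalg.norm(p) <= eps * p0:                 # relative gradient norm
            break
        rp = r(p)
        alpha = (p @ p) / (rp @ rp)                       # exact line search step
        g = g - alpha * p
    return g, r(g)
```

Each iteration needs two linear solves (one with M for the reconstruction, one with Mᵀ for the gradient) plus one more for the line search denominator.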

In order to speed up the gradient descent approach, we propose an accelerated algorithm based on a so-called fast explicit diffusion (FED) scheme. First applications of FED to image processing problems go back to Grewenig et al. [62]. FED can be used to speed up any explicit diffusion-like algorithm that involves a symmetric matrix. While classical explicit schemes employ a constant time step size that has to satisfy a restrictive stability limit, FED schemes involve cycles of time step sizes where up to 50% of them can violate this stability limit. Nevertheless, at the end of each cycle, stability in the Euclidean norm is achieved. In contrast to classical explicit schemes that reach a stopping time of order O(M) in M steps, FED schemes with cycle length M progress to O(M²). This allows a very substantial acceleration.

Since the gradient descent scheme can be seen as an explicit scheme with a symmetric matrix, FED is applicable: If α denotes a fixed step size for which (5.10) is stable in the Euclidean norm, one replaces it by the cyclically varying step sizes

αi = α / (2 cos²(π (2i + 1) / (4M + 2))),   i = 0, . . . , M − 1.   (5.18)

For large cycle lengths M, one should permute the order of the step sizes to tame rounding errors; see [140] for more details on this and for an exhaustive explanation of the FED framework in general. The FED cycles should be iterated until the stopping criterion (5.15) is fulfilled. A related cyclic optimization strategy has also been investigated in [127].
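A minimal sketch of the step size computation, following the FED formula of Grewenig et al. [62] (the function name and the seeded permutation are our own choices):

```python
import numpy as np

def fed_steps(alpha, M, seed=None):
    """One FED cycle of M step sizes derived from a stable base step alpha.
    Optionally permuted to tame rounding errors for large M."""
    i = np.arange(M)
    taus = alpha / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * M + 2)) ** 2)
    if seed is not None:
        taus = np.random.default_rng(seed).permutation(taus)
    return taus
```

The step sizes sum to α M(M + 1)/3, so one cycle covers a time of order O(M²), whereas M classical explicit steps of size α only reach αM. Individual steps can far exceed α, which is exactly the stability-limit violation mentioned above.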

In order to determine the individual step sizes within each cycle, we have to find a base step size α that is within the stability limit. The following result is well known in optimization theory [105]: If E(g) is continuously differentiable and its gradient is Lipschitz continuous, that is, there is a constant L such that

∥∇E(g1) − ∇E(g2)∥ ≤ L ∥g1 − g2∥

for all g1 and g2, then the gradient descent scheme is stable for all step sizes α fulfilling

0 < α < 2/L.

It is straightforward to verify that in our case L can be chosen as the squared spectral norm of the inpainting matrix D := M⁻¹C, that is

L = ∥D∥² = ρ(DᵀD),   (5.21)

where ρ(DᵀD) denotes the spectral radius of the symmetric matrix DᵀD. One possibility to estimate the spectral radius is to use Gershgorin's circle theorem. However, this may give an overly pessimistic estimate. Instead, we propose to use the power method to determine the maximal eigenvalue of DᵀD; see, for example, [2]. The convergence of this method turns out to be relatively fast, such that one already obtains a reasonable estimate of the spectral radius after 5 iterations.
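The power method can be sketched as follows on a 1D stand-in (our own simplification): D is never formed explicitly, since each multiplication with D or Dᵀ amounts to one linear solve.

```python
import numpy as np

def lipschitz_estimate(c, iters=5, seed=0):
    """Estimate L = rho(D^T D) with the power method, where D = M^{-1} C
    for 1D homogeneous diffusion inpainting (a stand-in for the 2D case)."""
    n = len(c)
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0
    C = np.diag(c.astype(float))
    M = C - (np.eye(n) - C) @ A
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        v = v / np.linalg.norm(v)
        w = np.linalg.solve(M, C @ v)        # w = D v
        v = C @ np.linalg.solve(M.T, w)      # v = D^T w, i.e. (D^T D) applied once
    return np.linalg.norm(v)                 # approximates the largest eigenvalue
```

The estimate approaches ρ(DᵀD) from below, so in practice one may add a small safety margin before using it as a Lipschitz constant.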

Although it may appear tempting to choose α close to the stability limit 2/L, this can result in suboptimal convergence speed, since high-frequency error components are damped too slowly. Our experiments suggest that a good choice for α is two thirds of the stability limit:

α = (2/3) · (2/L) = 4/(3L).

Similar strategies are also common for example in multigrid approaches that use a damped Jacobi method with damping factor 2/3 as a baseline solver [17].

Below we give an algorithmic overview of all steps involved in performing tonal optimization with FED-accelerated gradient descent:

Algorithm 7: Gray value optimization with FED

Input: Original image f, pixel mask c, FED cycle length M.

Initialization: g0 := Cf , k := 0.

Compute:

  1. Estimate L in (5.21) with the power method.
  2. Determine the FED time steps α0, . . . , αM−1 according to (5.18) with α = 4/(3L). If necessary, permute them.
  3. Repeat for k ≥ 0

(a) gk,0 := gk

(b) For i = 0, . . . , M − 1 do

i. Compute the gradient ∇E(gk,i) via (5.12).

ii. Update the tonal data: gk,i+1 := gk,i − αi ∇E(gk,i).

(c) gk+1 := gk,M

until the stopping criterion (5.15) is fulfilled.

Output: Optimized gray values g.
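A compact sketch of Algorithm 7 on a 1D stand-in (our own simplification; for brevity, L is computed exactly here, whereas the text estimates it with the power method):

```python
import numpy as np

def gvo_fed(c, f, M_cycle=15, cycles=4, eps=1e-3):
    """Sketch of Algorithm 7: FED-accelerated gradient descent for tonal
    optimization of 1D homogeneous diffusion inpainting."""
    n = len(f)
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0
    C = np.diag(c.astype(float))
    Mm = C - (np.eye(n) - C) @ A
    r = lambda g: np.linalg.solve(Mm, C @ g)
    grad = lambda g: C @ np.linalg.solve(Mm.T, r(g) - f)
    D = np.linalg.solve(Mm, C)
    L = np.linalg.eigvalsh(D.T @ D).max()                 # L = rho(D^T D)
    alpha = 4.0 / (3.0 * L)                               # 2/3 of the limit 2/L
    i = np.arange(M_cycle)
    taus = alpha / (2.0 * np.cos(np.pi * (2 * i + 1) / (4 * M_cycle + 2)) ** 2)
    g = C @ f                                             # g^0 := C f
    p0 = np.linalg.norm(grad(g))
    for _ in range(cycles):
        for tau in taus:                                  # one FED cycle
            g = g - tau * grad(g)
        if np.linalg.norm(grad(g)) <= eps * p0:           # relative gradient norm
            break
    return g, r(g)
```

Within a cycle, individual steps may temporarily increase the energy; stability and progress toward the minimizer are guaranteed at the end of each cycle.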

Since the gray value optimization problem is strictly convex, all convergent algorithms yield the same minimizer and are therefore qualitatively equivalent. They only differ in their run times. The following experiment gives an impression of realistic run times of both gradient descent algorithms for gray value optimization. As before, we consider the inpainting problem with the test image trui (256 × 256 pixels) and 4% mask density. The stopping parameter for our iterations was set to ε := 0.001. With a C implementation on a desktop PC with an Intel Xeon processor (3.2 GHz), the exact line search algorithm requires 458 seconds to perform 262 iterations. A corresponding FED algorithm with α = 0.01 needs only 77 seconds to compute 4 cycles of length M = 15. In comparison, a CPU implementation of the primal-dual approach of Hoeltgen and Weickert [68] requires a run time of 346 seconds for the same problem. This illustrates the favorable performance of the FED-accelerated gradient descent method.

As the FED algorithm is based on an explicit scheme, it is also well suited for implementations on parallel hardware such as GPUs. This can lead to very substantial additional accelerations.

Qualitative evaluation

In order to evaluate the capabilities of the gray value optimization, we apply it to the masks obtained so far for the test image trui. In all cases, this results in a clear improvement of the reconstruction quality, both visually and in terms of the MSE; see Figure 7 as well as Table 4. This also confirms our impressions from the 1D scenario in Section 2.4.2. Especially suboptimal masks benefit a lot from gray value optimization: It can compensate for many deficiencies caused by an inferior spatial selection strategy. This becomes particularly apparent when considering the results for the random mask or the mask on the regular grid. It is remarkable how much gain in quality is possible by simply choosing different gray values. However, even the reconstruction

Fig. 7: Evaluation of different inpainting data using 4% of all pixels. Left column: Different masks obtained by using a regular grid, the analytic approach (s = 0.80, σ = 1.6), probabilistic sparsification (p = 0.3, q = 10⁻⁶), and an additional nonlocal pixel exchange (m = 20, 5 · 10⁵ iterations) after the preceding probabilistic sparsification. Middle column: Reconstructions with homogeneous diffusion inpainting using the masks from the first column. Right column: Same as in the middle column, but with optimal tonal data.

we obtain with the best mask using probabilistic sparsification and nonlocal pixel exchange can be improved substantially: The MSE is reduced from 41.92 to 27.24. Similar improvements can also be observed for the images walter and peppers; see Table 4. The best reconstructions are depicted in Figure 5.

2.6Extensions to other inpainting operators

We have seen that optimizing the interpolation data allows us to obtain high-quality reconstructions with only 4% of all pixels. These results are also remarkable in view of the fact that so far we have used a very simple interpolation operator: the Laplacian. It is unlikely that it offers the best performance. In this section we investigate whether the algorithms of Section 2.5 can be used with, or extended to, more advanced inpainting operators. We focus on two representative operators that have proven their good performance in the context of PDE-based image compression with sparse data [29, 55, 56, 125]: the biharmonic operator and the EED operator.

A straightforward extension of the Laplacian is the biharmonic operator, that is, we replace Δu in (3.5) by −Δ²u. Using it for interpolation comes down to thin plate spline interpolation [45], a rotationally invariant multidimensional generalization of cubic spline interpolation. Compared to the Laplace operator, it yields a smoother solution u around the interpolation data, since its Green's function is twice differentiable. This avoids the typical singularities that distort the visual quality of homogeneous diffusion inpainting. These artifacts are caused by the logarithmic singularities of the Green's function of the two-dimensional Laplacian. Conversely, biharmonic inpainting is prone to over- and undershoots, that is, the values of u may leave the range of the inpainting data in K. This cannot happen for homogeneous diffusion inpainting, which fulfils a maximum–minimum principle. Nevertheless, a number of evaluations show that biharmonic inpainting can offer a better reconstruction quality than homogeneous diffusion inpainting [29, 55, 56, 125].
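The over- and undershooting behavior can already be observed in a small 1D experiment of our own construction: replacing the Laplacian A by its square in the inpainting system produces ringing around a jump, while the harmonic solution stays within the data range.

```python
import numpy as np

def inpaint_1d(c, f, biharmonic=False):
    """1D inpainting stand-in: solve (C - (I - C) A) u = C f with A the
    1D Laplacian (reflecting boundaries), or with A replaced by -A @ A
    for the biharmonic case."""
    n = len(f)
    A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    A[0, 0] = A[-1, -1] = -1.0
    if biharmonic:
        A = -A @ A          # sign chosen to keep the structure C - (I - C) A
    C = np.diag(c.astype(float))
    M = C - (np.eye(n) - C) @ A
    return np.linalg.solve(M, C @ f)
```

With plateau-jump-plateau data, the harmonic solution is piecewise linear and respects the data range, while the biharmonic one behaves like a spline and overshoots near the jump.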

Secondly, we consider an anisotropic nonlinear diffusion operator, namely edge-enhancing diffusion (EED). Originally it was introduced for image denoising [138], and its application in image compression goes back to Galić et al. [55]. Using EED means that we replace Δu in (3.5) by div(D(∇uσ) ∇u). The positive definite matrix D(∇uσ) is the so-called diffusion tensor. It steers the diffusion process by its eigenvectors and eigenvalues. They depend on the gradient of a Gaussian-smoothed version uσ of the image u, where σ denotes the standard deviation of the Gaussian. The first eigenvector of D is chosen to be orthogonal to ∇uσ, and the corresponding eigenvalue is fixed at 1. This gives full diffusion / inpainting along image edges. In contrast, the second eigenvector is chosen to be parallel to ∇uσ, and its eigenvalue is a decreasing function of the local image contrast |∇uσ|. Thus, one reduces diffusion / inpainting across high contrast edges. For image compression, one usually chooses the diffusivity of Charbonnier et al. [26], which is given by (1 + |∇uσ|²/λ²)⁻¹ᐟ². The parameter λ > 0 allows one to steer the contrast dependence.
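A pointwise sketch of this tensor construction (function name and interface are our own; the default λ = 0.8 anticipates the value used in the experiments below):

```python
import numpy as np

def eed_diffusion_tensor(grad_u_sigma, lam=0.8):
    """EED diffusion tensor at one pixel: eigenvalue 1 along the edge
    (orthogonal to the smoothed gradient), Charbonnier diffusivity
    across it (parallel to the smoothed gradient)."""
    gx, gy = grad_u_sigma
    s = gx * gx + gy * gy                      # |grad u_sigma|^2
    if s == 0.0:
        return np.eye(2)                       # no structure: isotropic diffusion
    mu = 1.0 / np.sqrt(1.0 + s / lam ** 2)     # Charbonnier diffusivity
    v1 = np.array([gx, gy]) / np.sqrt(s)       # across the edge
    v2 = np.array([-gy, gx]) / np.sqrt(s)      # along the edge
    return mu * np.outer(v1, v1) + np.outer(v2, v2)
```

For a gradient of magnitude λ, the diffusivity across the edge drops to 1/√2, while propagation along the edge always keeps full strength.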

Image inpainting with EED can reconstruct edges in high quality, even when the specified data is sparse [125]. This explains why EED has become one of the best performing operators for PDE-based image compression [55, 56, 125]. Moreover, for second-order elliptic differential operators such as EED, the continuous theory guarantees a maximum–minimum principle [139]. Experiments show that, in contrast to homogeneous diffusion inpainting, EED does not suffer from singularities [125].

What are the changes in our framework when we replace the Laplacian by the biharmonic or the EED operator? If we discretize the biharmonic operator with central finite differences, we have to exchange the matrix A in (3.7) by another constant matrix, but the structure of this equation remains the same as for homogeneous diffusion inpainting: It is a linear system of equations in the unknown vector u. For EED, however, the discrete differential operator A depends nonlinearly on the reconstruction u. Thus, the resulting system of equations becomes nonlinear:

C (u − f) − (I − C) A(u) u = 0.

While this nonlinearity does not affect our spatial optimization, we will see that it makes the tonal optimization more difficult.

2.6.1Optimizing spatial data

Let us consider the spatial optimization first. The theory behind the analytic approach is strongly based on homogeneous diffusion, and no extensions to more sophisticated inpainting operators have been discussed in [8]. Therefore, we do not consider this approach in this section.

In contrast, the probabilistic approach can be used without any restrictions or changes for both the biharmonic operator and EED. As before, we search for the best parameters of the spatial optimization methods. In all our experiments, we fix the EED parameters to λ = 0.8 and σ = 0.7, since this gives reconstructions of high quality.

Results for both operators and methods in comparison to homogeneous diffusion inpainting are shown in Table 5. Interestingly, the biharmonic operator does not achieve better results than the homogeneous one in the case of pure probabilistic sparsification without nonlocal pixel exchange. Our explanation for this observation is as follows: Biharmonic inpainting has a positive effect of increased smoothness around data points, and a negative effect of over- and undershoots in the inpainting domain. The latter is ignored in the sparsification decision, since the point selection is based on a purely local error measurement. Thus, for very sparse data, over- and undershoots can become particularly detrimental for global error measures such as the MSE. In our experiment, the minimum and maximum values of the reconstruction lie around −59 and 346, respectively. This is far beyond the range [56, 214] of the original image. With an additional nonlocal pixel exchange, we can overcome this problem, since it uses the MSE as a criterion to redistribute the mask pixels to more favorable locations. As a consequence, the gray value range of the reconstruction shrinks to the interval [47, 248], and the MSE falls from 79.26 to 20.89. This is far better than the MSE of 41.92 that we achieve for homogeneous diffusion inpainting.

Fig. 8: Best mask and reconstruction for the biharmonic operator (a, b) and EED (c, d) for the test image trui with 4% of all pixels, using probabilistic sparsification, nonlocal pixel exchange, and tonal optimization. The image (b) has an MSE of 16.73, while image (d) has an MSE of 10.79. Parameters have been chosen as in Table 5.

However, EED with probabilistic sparsification gives far superior results. Already without nonlocal pixel exchange, we obtain an MSE of 24.20. Postprocessing with nonlocal pixel exchange reduces the error to only 12.62.

Figures 8(a) and (c) depict the masks obtained with probabilistic sparsification combined with the nonlocal pixel exchange for both operators. Comparing them with the one from homogeneous diffusion inpainting (Figure 7), we observe interesting differences: Homogeneous diffusion inpainting requires more pixels near edges to represent these discontinuities well. Thus, for a given number of pixels, it has fewer pixels left to approximate the interior of the regions. This explains why biharmonic or EED inpainting can achieve lower errors.

Table 5: Comparison of the reconstruction error (MSE) with 4% of all pixels for different inpainting operators and different data optimization algorithms. The parameters were applied to both approaches, with and without gray value optimization (GVO). The nonlocal pixel exchange was performed with 5 · 10⁵ iterations for each experiment.

2.6.2Optimizing tonal data

We have seen that for homogeneous diffusion inpainting, an additional tonal optimization yields significant improvements in the reconstruction quality. Thus, we should also investigate if it is beneficial for biharmonic and EED inpainting.

In the case of biharmonic inpainting, our tonal optimization strategy from Subsection 2.5.2 carries over literally, since we are still facing a linear least squares problem: For a fixed inpainting mask, the reconstruction depends linearly on the specified gray values. Thus, all we have to do is to exchange the discrete differential operator for homogeneous diffusion inpainting by its biharmonic counterpart. Also our algorithms such as the FED-accelerated gradient descent remain applicable. The final results for biharmonic inpainting with spatial and tonal optimization are listed in Table 5, and the best reconstruction is depicted in Figure 8(b). As one can see, tonal optimization allows one to reduce the MSE from 20.89 to 16.73.

Since our tonal optimization methods are tailored to linear least squares formulations, specific challenges arise when we want to extend them to EED-based inpainting. The nonlinearity of the EED inpainting scheme prevents closed-form expressions such as (5.11) and (5.12). This is caused by the fact that the matrix A is now a function of the inpainted image uk, which itself depends on the gray value data gk at the specified pixels. Although this concatenated mapping is formally smooth, one should keep in mind that EED has the ability to create edge-like structures. This means that in practice the problem is fairly ill-conditioned: Small local changes in gk may have a strong global impact on uk, on A(uk), and on the Jacobian of the error function (5.9). Moreover, there is no longer a guarantee that the tonal optimization problem is strictly convex. Thus, it may have many local minimizers. It is therefore not surprising that different algorithms, and even different parameter settings within the same algorithm, may end up in different minimizers. Since it is difficult to design practically feasible algorithms that are guaranteed to find the global minimizer, we focus on transparent and conceptually simple local optimization strategies such as gradient descent. We have performed a number of experiments with several variants of gradient descent approaches for tonal optimization in EED-based inpainting. Below we describe the method that has yielded the best results in terms of reconstruction quality.

We suggest modifying the gradient descent approach for (5.9) as follows. While we keep the basic structure of (5.10) and (5.11), we lack a closed-form solution of type (5.12) for the Jacobian J, which is now an unknown function of the evolving gray value data gk. As a remedy, we approximate the Jacobian by numerical differentiation:

(J(gk))i,j ≈ ( ri(c, gk + η ej) − ri(c, gk) ) / η,   (6.2)

where i denotes the pixel index, j is the index of an individual mask point, and the parameter η > 0 quantifies a small gray value perturbation at the mask point j. As before, ej is the canonical basis vector with a unit impulse in pixel j. Note that the gray values at all nonmask points do not have any influence on the inpainting result. Thus, the Jacobian does not have to be computed for those locations.
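The perturbation idea can be sketched generically; the helper below works for any inpainting operator g → r(g), and we check it on our linear 1D stand-in (our own construction), where the exact Jacobian D = M⁻¹C is available for comparison.

```python
import numpy as np

def jacobian_column_fd(r, g, j, eta=1e-4):
    """Finite-difference approximation of the j-th Jacobian column of an
    inpainting operator g -> r(g); r may be nonlinear (EED) or linear."""
    e_j = np.zeros_like(g)
    e_j[j] = 1.0                         # unit impulse at mask point j
    return (r(g + eta * e_j) - r(g)) / eta

# linear 1D stand-in, where the Jacobian is exactly D = M^{-1} C
n = 10
c = np.zeros(n)
c[[0, 4, 9]] = 1.0
A = np.diag(np.full(n, -2.0)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
A[0, 0] = A[-1, -1] = -1.0
C = np.diag(c)
M = C - (np.eye(n) - C) @ A
r_lin = lambda g: np.linalg.solve(M, C @ g)
```

In the linear case the finite difference is exact up to rounding; for EED, one perturbed inpainting solve per mask point is required, which is why this step dominates the cost.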

In contrast to our linear gray value optimization strategies, we abstain from adapting the gradient descent step size α by means of exact line search or cyclic FED-based variations: In our experience, the high sensitivity of the Jacobian with respect to gk suggests keeping the step size α fairly small in order to capture the dynamics adequately. On the other hand, too small values of α can result in convergence towards a fairly poor local minimizer. Thus, in practice, one fixes α at some small value.

Similar considerations apply to the parameter η in (6.2): The nonlinear dynamics of the Jacobian suggests using small values of η in order to approximate the derivative well. However, if η is too small, numerical problems can arise, since both the numerator and the denominator in (6.2) approach zero.

Table 6 shows how an appropriate selection of the parameters α and η can be used to optimize the reconstruction quality for EED-based inpainting. We observe that in our test scenario the resulting tonal optimization step allows us to reduce the MSE from 12.62 to 10.79. The corresponding reconstruction is depicted in Figure 8(d). Table 5 shows that the MSE of 10.79 obtained for EED is far better than the errors for homogeneous diffusion inpainting (MSE=27.24) and for biharmonic inpainting (MSE=16.73). To the best of our knowledge, such a reconstruction quality has never been achieved before for PDE-based inpainting with a mask density of only 4%.

2.7Summary and conclusions

The contributions of our work are twofold: On the one hand, we gave a comprehensive survey of the state of the art in PDE-based image compression. It enables readers to obtain an overview of the achievements that have been made in this field and guides them to the relevant literature.

On the other hand, we focused on one key problem in PDE-based image compression: spatial and tonal data optimization. Here we aimed at obtaining insights into the full potential of the quality gains, without imposing any constraints on the type of inpainting operator, the computational costs, or the coding costs for the optimized data. Our strategy was to start with more restricted settings that allow a deeper understanding of the problem, and to extend our findings to more general scenarios afterwards.

Table 6: MSE obtained after a tonal optimization for the EED inpainting approach with the final mask from Figure 8. The parameter α denotes the fixed step size in each gradient descent iteration, and η is the step size used for the computation of the derivatives by means of finite differences.

In the 1D setting, we restricted ourselves to homogeneous diffusion inpainting of strictly convex functions. For the resulting free knot problem for linear spline interpolation, we came up with a new algorithm and discussed optimal approximation results. We showed the nonconvexity of the problem for more than three knots, and we saw that a splitting into a spatial optimization step followed by a tonal one does not deteriorate the reconstruction quality substantially.

This motivated us to use such a splitting strategy also for the 2D setting and without restrictions on the convexity or concavity of the image data. For spatial data optimization with homogeneous diffusion inpainting, we studied a probabilistic sparsification approach with nonlocal pixel exchange in detail. For the subsequent tonal optimization step we established the existence of a unique solution. We showed that it can be found efficiently with a gradient descent approach which is accelerated by a fast explicit diffusion (FED) strategy.

In our investigations we were aiming at fairly generic algorithms that can also be applied to other inpainting operators such as biharmonic inpainting and inpainting based on edge-enhancing anisotropic diffusion (EED). Data optimization for EED-based inpainting allowed us to come up with reconstructions of hitherto unparalleled quality.

Our framework for data optimization permits us to specify the desired mask density a priori. This is a conceptual advantage over approaches that have to tune a regularization parameter in order to influence the density that results a posteriori [29, 67]. In practice, this means that the latter approaches may have to be run many times.

While our results show the large potential of PDE-based inpainting, it should be emphasized that data optimization is a key problem, but not the only one that must be solved for building well-performing codecs: For example, both the location and the gray values of the data must be stored efficiently, and some real-world applications require very fast algorithms. This poses additional constraints and challenges that will be addressed in our forthcoming publications.

Acknowledgment: Our research is partly funded by the Deutsche Forschungsgemeinschaft (DFG) through a Gottfried Wilhelm Leibniz Prize. This is gratefully acknowledged.

We thank Pascal Gwosdek and Christian Schmaltz for providing the electrostatic halftoning images for us.

