An Alternating Direction Method for Total Variation Denoising

Zhiwei (Tony) Qin*   Donald Goldfarb†   Shiqian Ma‡

arXiv:1108.1587v4 [math.OC] 23 Aug 2014

August 26, 2014

* Department of Industrial Engineering and Operations Research, Columbia University, New York, NY 10027. zq2107@columbia.edu.
† Department of Industrial Engineering and Operations Research, Columbia University, New York, NY 10027. goldfarb@columbia.edu.
‡ Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong, China. sqma@se.cuhk.edu.hk.

Abstract

We consider the image denoising problem using total variation (TV) regularization. This problem can be computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose an alternating direction augmented Lagrangian (ADAL) method, based on a new variable splitting approach that results in subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic TV model. For the isotropic TV model, by performing a further variable splitting, we are able to derive an ADAL method that is globally convergent. We compare our methods with the split Bregman method [16], which is closely related to them, and demonstrate their competitiveness in computational performance on a set of standard test images.

1 Introduction

In signal processing, total variation (TV) regularization is a very popular and effective approach for noise reduction, with a wide array of applications in digital imaging. The total variation is the integral of the absolute gradient of the signal. Using TV regularization to remove noise from signals was originally proposed in [30] and is based on the observation that noisy signals have high total variation. Reducing the total variation of a noisy signal, while keeping the resulting signal close to the original one, removes noise while preserving important details such as sharp edges. Other existing denoising techniques include median filtering and Tikhonov-like regularization, which minimizes ‖u‖_TIK := ∑_i ( (∇_x u)_i² + (∇_y u)_i² ) of the desired solution u, where ∇_x and ∇_y are defined below in (3). It is known that these techniques tend to smooth away important texture details along with the noise [34, 38].

For a 2-D signal u ∈ R^{n×m}, such as an image, the total variation ‖u‖_TV [30] of u can be defined anisotropically or isotropically:

    ‖u‖_TV := ∑_i ( |(∇_x u)_i| + |(∇_y u)_i| )      (Anisotropic);
    ‖u‖_TV := ∑_i √( (∇_x u)_i² + (∇_y u)_i² )       (Isotropic).        (1)

Concisely, ‖u‖_TV can be expressed as ∑_{i=1}^{nm} ‖D_i u‖_p, where D_i u ∈ R² denotes the discrete gradient of u at pixel i. Hence, ‖u‖_TV is isotropic when p = 2 and anisotropic when p = 1. TV denoising (also called ROF (Rudin-Osher-Fatemi) denoising) corresponds to solving the following optimization problem:

    min_u  λ ∑_{i=1}^{nm} ‖D_i u‖_p + (1/2)‖u − b‖²,        (2)

where p = 1 or 2; b ∈ R^{n×m} is the noisy image, and the solution u is the desired denoised image. ‖·‖ without a subscript denotes the l2-norm. We assume that all 2-D images are in column-major vectorized form; hence, if the one-dimensional index of (i, j) is k, with 1 ≤ i ≤ n and 1 ≤ j ≤ m, the elements of ∇u are given by

    [∇u]_ij = D_k u ≡ ( (∇_x u)_ij ; (∇_y u)_ij ) = ( u_{k+1} − u_k ; u_{k+n} − u_k ).        (3)

The anisotropic TV model that we consider in this paper is the four-neighbor form. Algorithms for anisotropic TV denoising for other sets of neighbors are presented in [15].
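For concreteness, the two discrete TV norms in (1) can be computed directly with forward differences. The following is a minimal NumPy sketch; the function name and the zero boundary differences are our assumptions (under the reflective Neumann boundary conditions used later in Section 3.1, the differences across the image border vanish):

    import numpy as np

    def total_variation(u, isotropic=True):
        """Discrete TV of a 2-D image u via forward differences, as in (1).

        dx holds u_{k+1} - u_k (the next pixel down a column, since images
        are column-major vectorized); dy holds u_{k+n} - u_k (next column).
        Differences across the border are set to zero (Neumann convention).
        """
        dx = np.zeros_like(u, dtype=float)
        dy = np.zeros_like(u, dtype=float)
        dx[:-1, :] = u[1:, :] - u[:-1, :]
        dy[:, :-1] = u[:, 1:] - u[:, :-1]
        if isotropic:
            return np.sqrt(dx**2 + dy**2).sum()      # sum_i ||D_i u||_2
        return np.abs(dx).sum() + np.abs(dy).sum()   # sum_i ||D_i u||_1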
Due to the non-differentiability and non-linearity of the TV term in problem (2), this problem can be computationally challenging to solve despite its simple form. Hence, much effort has been devoted to devising efficient algorithms for solving it. A number of references are provided in Section 1 of [16]. In addition, Chambolle's algorithm [6] solves problem (2) with the isotropic TV-norm.

The approach that we develop in this paper for solving problem (2) is based on variable splitting, followed by the application, to the resulting constrained minimization problem, of an alternating minimization algorithm (specifically, in our case, the alternating direction augmented Lagrangian (ADAL) method). In contrast with previously proposed variable splitting approaches for solving problem (2), our approach introduces two sets of auxiliary variables, one to replace the solution image u and one to replace the vector of gradients (D_1 u, ..., D_{nm} u). When the ADAL method is applied to the constrained optimization problem derived from this variable splitting, the subproblems that must be solved at each iteration can be solved easily and exactly. Moreover, for the anisotropic TV version of problem (2), convergence of our algorithm can be proved, and for both the anisotropic and isotropic TV models, preliminary numerical experiments indicate that the number of iterations required to obtain an accurate solution is quite small. By introducing a third set of auxiliary variables, we are also able to derive an ADAL method for the isotropic TV model with guaranteed convergence.

Before outlining the organization of the remaining sections of the paper, which contain our main results, let us first review three previously proposed methods that use split-variable alternating minimization approaches. All of these methods apply to the slightly more general TV-based denoising/deblurring problem

    min_u  λ ∑_i ‖D_i u‖_p + (1/2)‖Ku − b‖²,        (4)

where p is either 1 or 2, and K is a blurring (or convolution) operator.

1.1 Closely Related Methods

In the straightforward variable splitting approach proposed in [1], a vector of auxiliary variables w is introduced to replace u in the non-differentiable TV term in (4):

    min_{u,w}  λ ∑_i ‖D_i w‖_p + (1/2)‖Ku − b‖²   s.t.  w = u.        (5)

The algorithm SALSA (Split-Augmented Lagrangian Shrinkage Algorithm) in [1] then obtains a solution to problem (4) by applying the ADAL method to problem (5), in which the non-differentiable TV term λ‖Φ(w)‖_p, where Φ_i(w) ≡ D_i w, has been decoupled from the quadratic fidelity term R(u) ≡ (1/2)‖Ku − b‖² in the objective function. For the case of isotropic TV regularization, SALSA uses five iterations of Chambolle's algorithm to compute the corresponding Moreau proximal mapping.

In [38], variable splitting combined with a penalty function approach is applied to problem (4) by introducing an auxiliary variable d_i = D_i u ∈ R² for each pixel, yielding the following approximation to problem (4):

    min_{d,u}  λ ∑_i ‖d_i‖₁ + (1/2)‖Ku − b‖² + (1/(2µ)) ∑_i ‖d_i − D_i u‖².        (6)

Problem (6) is then minimized alternatingly with respect to d and u, with a continuation scheme that drives the penalty parameter 1/µ gradually to a sufficiently large number. This method is extended in [41, 43] to solve the multi-channel (color) image deblurring problem. In [43], the TV regularization with 1-norm fidelity (TVL1) model

    min_u  λ ∑_i ‖D_i u‖_p + ‖Ku − b‖₁

is considered.
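To make the alternating minimization of (6) with continuation concrete, here is a hedged sketch. The two solver callables and the continuation schedule are illustrative assumptions, not the specific scheme used in [38]:

    def penalty_continuation(minimize_d, minimize_u, u0, mu0=1.0,
                             factor=0.25, mu_floor=1e-4, inner_iters=10):
        """Alternating minimization of the penalized problem (6).

        minimize_d(u, mu): exact d-update (2-D shrinkage) for fixed u, mu.
        minimize_u(d, mu): exact u-update (a quadratic problem) for fixed d, mu.
        mu is decreased geometrically, driving the penalty 1/mu up.
        """
        u, mu = u0, mu0
        while mu >= mu_floor:
            for _ in range(inner_iters):
                d = minimize_d(u, mu)
                u = minimize_u(d, mu)
            mu *= factor   # continuation step: strengthen the penalty
        return u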
The same approach has also been applied to reconstruct signals from partial Fourier data in the compressed sensing context [44]. These methods take full advantage of the structures of the convolution operator and the finite difference operator, so that the subproblems can be solved exactly and efficiently, which is important for fast convergence. A downside to this quadratic penalty approach is that when 1/µ is very large, problem (6) becomes ill-conditioned and numerical stability becomes an issue.

Although our algorithm is closely related to the algorithms in [1] and [38] described above, it is even more closely related to the split Bregman method [16], which is an application of the variable splitting approach in [38] to the Bregman iterative regularization method [24]. The Bregman iterative regularization method was first introduced in [24] as a better (iterative) approach to the TV denoising/deblurring problem (4) than directly applying an iterative solver to it. Subsequently, this method was extended in [34] to the solution of l1-minimization problems that arise in compressed sensing, and in [22] to nuclear norm minimization problems that are convex relaxations of matrix rank minimization problems. The Bregman distance associated with a convex function E(·) between u and v is defined as

    D_E^p(u, v) := E(u) − E(v) − p^T(u − v),

where p ∈ ∂E(v) and ∂E(v) denotes the subdifferential of E(·) at the point v. The Bregman iteration for the unconstrained minimization problem

    min_u  E(u) + (1/µ)H(u),

where both functions E(·) and H(·) are convex, is

    u^(k+1) = argmin_u  D_E^p(u, u^(k)) + (1/µ)H(u) = argmin_u  E(u) − (u − u^(k))^T p^(k) + (1/µ)H(u),        (7)
    p^(k+1) = p^(k) − ∇H(u^(k+1)).        (8)

Superscripts denote iteration indices, differentiating the values of the variables at the current iteration from those computed at the previous iteration.

With the introduction of an auxiliary variable d as in [38], the TV denoising/deblurring problem (4) can be reformulated as the constrained problem

    min_{u,d}  λ‖d‖₁ + R(u)   s.t.  d = Φ(u),        (9)

where R(u) = (1/2)‖Ku − b‖² and Φ(u) = ( ∇_x u ; ∇_y u ). Now, converting problem (9) into an unconstrained problem (by penalizing ‖d − Φ(u)‖²), we obtain

    min_{u,d}  λ‖d‖₁ + R(u) + (1/(2µ))‖d − Φ(u)‖²

(this is the same as problem (6)). Then, applying the general Bregman iteration (7)-(8) with E(u, d) = λ‖d‖₁ + R(u) and H(u, d) = ‖d − Φ(u)‖², we obtain after simplification the following specific Bregman iteration:

    (u^(k+1), d^(k+1)) = argmin_{u,d}  λ‖d‖₁ + R(u) + (1/(2µ))‖d − Φ(u) − r^(k)‖²,        (10)
    r^(k+1) = r^(k) + ( Φ(u^(k+1)) − d^(k+1) ),        (11)

with r^(0) = 0. In [16], an approximate solution to (10) was proposed by alternatingly minimizing the right-hand side of (10) with respect to u and d once. This yields the following split Bregman algorithm (Algorithm 1.1). (The split Bregman method in its original form in [16] has an inner loop; we consider the simplified form, which was used to solve TV denoising problems in [16].) For notational conciseness, the superscripts are suppressed in the main steps.

Algorithm 1.1 SplitBregman
1: Given u^(0), d^(0), and r^(0).
2: for k = 0, 1, ..., K do
3:   u ← argmin_u  R(u) + (1/(2µ))‖d − Φ(u) − r‖²
4:   d ← argmin_d  λ‖d‖₁ + (1/(2µ))‖d − Φ(u) − r‖²
5:   r ← r + (Φ(u) − d)
6: end for
7: return u

As is well known (e.g., see [42]), the Bregman iterative algorithm (10)-(11) is equivalent to applying the augmented Lagrangian method [20, 25] to solve problem (9). Hence, the split Bregman algorithm is equivalent to applying the ADAL method to (9).
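In code, Algorithm 1.1 is a three-line loop around a shrinkage step. The sketch below assumes a caller-supplied solver for the u-subproblem in line 3 (its form depends on K); Φ, λ, and µ are as in (9)-(10):

    import numpy as np

    def shrink(x, t):
        # Soft-thresholding: exact minimizer of t*||d||_1 + (1/2)*||d - x||^2.
        return np.maximum(np.abs(x) - t, 0.0) * np.sign(x)

    def split_bregman(solve_u, Phi, d0, lam, mu, K=100):
        """Simplified split Bregman loop (Algorithm 1.1).

        solve_u(d, r) is assumed to return
        argmin_u R(u) + (1/(2*mu))*||d - Phi(u) - r||^2  (line 3).
        """
        d = d0.copy()
        r = np.zeros_like(d0)
        for _ in range(K):
            u = solve_u(d, r)                   # line 3
            d = shrink(Phi(u) + r, lam * mu)    # line 4, in closed form
            r = r + Phi(u) - d                  # line 5
        return u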
1.2 Organization of the Paper

The outline of the rest of the paper is as follows. We first briefly review the ADAL method and its applications to linearly constrained optimization problems that arise from variable splitting in Section 2. In Section 3.1, we describe our proposed variable-splitting alternating direction augmented Lagrangian method for the anisotropic TV model, and we prove its global convergence in Section 3.2. We then discuss in Sections 3.3 and 3.4 the isotropic case and the difference between our algorithm and the split Bregman method, respectively. In Section 3.5, we present a globally convergent variable-splitting ADAL variant for the isotropic TV model. In Section 4, we compare our algorithms against the split Bregman method on a set of standard test images and demonstrate the effectiveness of our methods in terms of denoising speed and quality.

2 The Alternating Direction Augmented Lagrangian Method

The ADAL method is also known as the alternating direction method of multipliers (ADMM) and was first proposed in the 1970s [12, 14]. It belongs to the family of classical augmented Lagrangian (AL) methods [25, 29, 20], which iteratively solve the linearly constrained problem

    min_x  F(x)   s.t.  Ax = b.        (12)

The augmented Lagrangian of problem (12) is L(x, γ) = F(x) + γ^T(b − Ax) + (1/(2µ))‖Ax − b‖², where γ is the Lagrange multiplier and µ is the penalty parameter for the quadratic infeasibility term. The AL method minimizes L(x, γ) and then updates γ in each iteration, as stated in the following algorithm. We denote by k_max the user-defined maximum number of iterations or the number of iterations required to satisfy the termination criteria.

Algorithm 2.1 AL (Augmented Lagrangian method)
1: Choose γ^(0).
2: for k = 0, 1, ..., k_max do
3:   x ← argmin_x L(x, γ)
4:   γ ← γ − (1/µ)(Ax − b)
5: end for
6: return x

For a structured unconstrained problem

    min_x  F(x) ≡ f(x) + g(Ax),        (13)

where both functions f(·) and g(·) are convex, we can decouple the two functions by introducing an auxiliary variable y, transforming problem (13) into the equivalent linearly constrained problem

    min_{x,y}  f(x) + g(y)   s.t.  Ax = By,        (14)

where for (13), B = I. Henceforth, we consider the more general case of problem (14). The augmented Lagrangian function for problem (14) is

    L(x, y, γ) = f(x) + g(y) + γ^T(By − Ax) + (1/(2µ))‖By − Ax‖².

Exact joint minimization of L(x, y, γ) with respect to both x and y is usually difficult. Hence, in practice, an inexact version of the AL method (IAL) is often used, where L(x, y, γ) is minimized only approximately. Convergence is still guaranteed in this case, as long as the subproblems are solved with increasing accuracy [29]. ADAL (Algorithm 2.2 below) is a particular case of IAL in that it finds an approximate minimizer of L(x, y, γ) by alternatingly optimizing with respect to x and y once.

Algorithm 2.2 ADAL (ADMM)
1: Choose γ^(0).
2: for k = 0, 1, ..., k_max do
3:   x ← argmin_x L(x, y, γ)
4:   y ← argmin_y L(x, y, γ)
5:   γ ← γ + (1/µ)(By − Ax)
6: end for
7: return x

This is often desirable because joint minimization of L(x, y, γ), even approximately, can be hard.
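Algorithm 2.2 likewise reduces to a short loop once the two subproblem minimizers are available. The sketch below is generic, not the TV-specific method of Section 3; argmin_x and argmin_y are assumed caller-supplied exact minimizers of L over x and over y:

    import numpy as np

    def adal(argmin_x, argmin_y, A, B, y0, gamma0, mu, kmax=500):
        """ADAL/ADMM for min f(x) + g(y) s.t. Ax = By (Algorithm 2.2)."""
        y, gamma = y0, gamma0
        x = None
        for _ in range(kmax):
            x = argmin_x(y, gamma)                 # line 3
            y = argmin_y(x, gamma)                 # line 4
            gamma = gamma + (B @ y - A @ x) / mu   # line 5: multiplier update
        return x, y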
The convergence of ADAL has been established for the case of two-way splitting as above. This result, which is a modest extension of results in [9], is given in [10] and is contained in the following theorem.

Theorem 2.1. Consider problem (14), where both f and g are proper, closed, convex functions, and A ∈ R^{n×m} and B ∈ R^{n×l} have full column rank. Then, starting with an arbitrary µ > 0 and x⁰ ∈ R^m, y⁰ ∈ R^l, the sequence {x^k, y^k, γ^k} generated by Algorithm 2.2 converges to a primal-dual optimal solution pair ((x*, y*), γ*) of problem (14), if (14) has one. If (14) does not have an optimal solution, then at least one of the sequences {(x^k, y^k)} and {γ^k} diverges.

It is known that µ does not have to decrease to a very small value (it can simply stay constant) in order for the method to converge to the optimal solution of problem (14) [23, 4]. Inexact versions of ADAL, where one or both of the subproblems are solved approximately, have also been proposed and analyzed [9, 18, 42].

The versatility and simple form of ADAL have attracted much attention from a wide array of research fields. ADAL has been applied to solve group sparse optimization problems in [8], semidefinite programming problems in [39], and matrix completion problems with nonnegative factors in [40]. In signal processing/reconstruction, ADAL has been applied to sparse and low-rank recovery, where nuclear norm minimization is involved [21, 45, 32], and to l1-regularized problems in compressed sensing [42]. ADAL-based algorithms have also been proposed to solve a number of image processing tasks, such as image inpainting and deblurring (SALSA and C-SALSA) [1, 2, 3, 36] and motion segmentation and reconstruction [28, 46], in addition to denoising [1, 16, 11, 33]. In machine learning, ADAL and IAL-based methods have been successfully applied to structured-sparsity estimation problems [26] as well as many others [5].

3 Our Proposed Method

3.1 Application to Anisotropic TV Denoising

We consider the anisotropic TV denoising model (2); the isotropic TV model will be considered in Section 3.3. As in [16], we introduce auxiliary variables d_x and d_y for the discretized gradient components ∇_x u and ∇_y u, respectively. Under reflective Neumann boundary conditions, ∇_x u = Du, where the discretization matrix D is an (nm − m) × nm block diagonal matrix, each of whose m diagonal (n − 1) × n rectangular blocks is upper bidiagonal with −1's on its diagonal and 1's on its super-diagonal. For simplicity, henceforth we will assume that n = m. Consequently, ∇_y u = Dv, where v = Pu, and P is a permutation matrix such that v is the row-major vectorized form of the 2-D image. (Recall that u is in column-major form.) Hence, problem (2) is equivalent to the following constrained problem:

    min_{d_x, d_y, u, v}  λ(‖d_x‖₁ + ‖d_y‖₁) + (1/2)‖u − b‖²
    s.t.  d_x = Du,  d_y = Dv,  v = Pu.        (15)

The augmented Lagrangian for problem (15) is

    L(d_x, d_y, u, v, γ) ≡ (1/2)‖u − b‖² + λ(‖d_x‖₁ + ‖d_y‖₁) + γ_x^T(Du − d_x) + γ_y^T(Dv − d_y) + γ_z^T(Pu − v)
                           + (1/(2µ₁))( ‖Du − d_x‖² + ‖Dv − d_y‖² ) + (1/(2µ₂))‖Pu − v‖².        (16)

To minimize L with respect to d = (d_x; d_y), we solve the subproblem

    min_{d_x, d_y}  λ(‖d_x‖₁ + ‖d_y‖₁) + γ_x^T(Du − d_x) + γ_y^T(Dv − d_y) + (1/(2µ₁))( ‖Du − d_x‖² + ‖Dv − d_y‖² ).        (17)

Problem (17) is strictly convex and decomposable with respect to d_x and d_y, so the unique minimizer can be computed through two independent soft-thresholding operations,

    d_x* = T(Du + µ₁γ_x, λµ₁),   d_y* = T(Dv + µ₁γ_y, λµ₁),

where the soft-thresholding operator T is defined componentwise as

    T(x, λ)_i := max{|x_i| − λ, 0} sign(x_i).
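The d_x- and d_y-updates just derived are one-liners in NumPy; a minimal sketch (the array names are ours, and D stands for a sparse matrix or an equivalent difference operator):

    import numpy as np

    def soft_threshold(x, t):
        # T(x, t)_i = max(|x_i| - t, 0) * sign(x_i), applied componentwise.
        return np.maximum(np.abs(x) - t, 0.0) * np.sign(x)

    # Updates for (17), assuming D, u, v, gamma_x, gamma_y, mu1, lam are given:
    # d_x = soft_threshold(D @ u + mu1 * gamma_x, lam * mu1)
    # d_y = soft_threshold(D @ v + mu1 * gamma_y, lam * mu1)

    # Example: a threshold of 1.0 shrinks magnitudes and zeroes small entries:
    print(soft_threshold(np.array([-2.5, -0.3, 0.0, 1.4]), 1.0))
    # -> [-1.5 -0.   0.   0.4]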
To minimize L over u, we solve

    min_u  (1/2)‖u − b‖² + γ_x^T Du + (1/(2µ₁))‖Du − d_x‖² + γ_z^T Pu + (1/(2µ₂))‖Pu − v‖²,        (18)

which simplifies to the linear system

    ( D^T D + (µ₁/µ₂ + µ₁) I ) u = µ₁b + D^T(d_x − µ₁γ_x) + P^T( (µ₁/µ₂)v − µ₁γ_z ).        (19)

It is easy to verify that, since µ₁ and µ₂ are both positive scalars, the matrix on the left-hand side of the above system is positive definite and tridiagonal. Hence, (19) can be solved efficiently by the Thomas algorithm in 8nm flops [17]. We denote the solution of this tridiagonal system by u(d_x, v, γ_x, γ_z). Similarly, the subproblem with respect to v simplifies to the tridiagonal system

    ( D^T D + (µ₁/µ₂) I ) v = D^T(d_y − µ₁γ_y) + (µ₁/µ₂)( µ₂γ_z + Pu ).        (20)

Its solution is denoted by v(d_y, u, γ_y, γ_z). With all the ingredients of the algorithm explained, we formally state this ADAL method in Algorithm 3.1 below. Note that in line 7, the vectors of Lagrange multipliers and scaled infeasibilities are combined into the vectors

    γ ≡ (γ_x; γ_y; γ_z)  and  ∆ ≡ ( (1/µ₁)(Du − d_x); (1/µ₁)(Dv − d_y); (1/µ₂)(Pu − v) ).

Algorithm 3.1 ADAL (Anisotropic TV Denoising)
1: Given u^(0), v^(0), λ, γ^(0).
2: for k = 0, 1, ..., K do
3:   d_x ← T(Du + µ₁γ_x, λµ₁)
4:   v ← v(d_y, u, γ_y, γ_z), the solution of (20)
5:   d_y ← T(Dv + µ₁γ_y, λµ₁)
6:   u ← u(d_x, v, γ_x, γ_z), the solution of (19)
7:   γ ← γ + ∆
8: end for
9: return (1/2)(u + P^T v)
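Both (19) and (20) are symmetric positive definite tridiagonal systems, so lines 4 and 6 of Algorithm 3.1 are linear-time solves. Below is a hand-rolled sketch of the Thomas algorithm for reference; as noted in Section 3.6, our actual implementation factors the matrix once with the LAPACK routine dpttrf and reuses the factors via dpttrs:

    import numpy as np

    def thomas_solve(lower, diag, upper, rhs):
        """Solve a tridiagonal system in O(n) (Thomas algorithm).

        lower/upper: sub- and super-diagonals (length n-1); diag, rhs: length n.
        Assumes n >= 2 and a positive definite matrix, as in (19) and (20),
        so no pivoting is needed.
        """
        n = len(diag)
        c = np.empty(n - 1)   # modified super-diagonal
        d = np.empty(n)       # modified right-hand side
        c[0] = upper[0] / diag[0]
        d[0] = rhs[0] / diag[0]
        for i in range(1, n):               # forward elimination
            m = diag[i] - lower[i - 1] * c[i - 1]
            if i < n - 1:
                c[i] = upper[i] / m
            d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / m
        x = np.empty(n)
        x[-1] = d[-1]
        for i in range(n - 2, -1, -1):      # back substitution
            x[i] = d[i] - c[i] * x[i + 1]
        return x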
3.2 Convergence Analysis

We establish the convergence of Algorithm 3.1 by expressing problem (15) as an instance of problem (14) and then showing that Algorithm 3.1 is, in fact, an ADAL method for problem (14) employing two-way updates to the variables. Define

    X := (d_x; v),  Y := (d_y; u),  f(X) := λ‖d_x‖₁,  and  g(Y) := λ‖d_y‖₁ + (1/2)‖u − b‖².

Then, we can write problem (15) in the form of problem (14) as

    min_{X,Y}  f(X) + g(Y)   s.t.  AX = BY,        (21)

where

    A = [ I 0; 0 D; 0 I ] ∈ R^{3mn×2mn}  and  B = [ 0 D; I 0; 0 P ] ∈ R^{3mn×2mn}.

Observe that lines 3 and 4 of Algorithm 3.1 exactly solve the Lagrangian subproblem of (21) with respect to X (the subproblem is decomposable with respect to d_x and v). Similarly, lines 5 and 6 of Algorithm 3.1 solve the Lagrangian subproblem with respect to Y (the subproblem is decomposable with respect to d_y and u). The matrices A and B obviously have full column rank. Hence, the convergence of Algorithm 3.1 follows as a result of Theorem 2.1.

3.3 The Isotropic Case

The isotropic TV denoising model differs from the anisotropic model in the definition of the TV norm. In this case, we define

    ‖u‖_TV^ISO := ∑_i √( (∇_x u)_i² + (∇_y u)_i² ) = ∑_i ‖([∇_x u]_i, [∇_y u]_i)‖,

and the optimization problem to solve is

    min_u  λ‖u‖_TV^ISO + (1/2)‖u − b‖².        (22)

Note that ‖u‖_TV^ISO is the group lasso regularization on (∇_x u, ∇_y u), with each group consisting of ([∇_x u]_i, [∇_y u]_i). We introduce the same auxiliary variables and linear constraints defining them as in the previous section, except that the constraint coupling d_y and v becomes

    d_y = P^T Dv.        (23)

This modification is necessary because d_x and d_y are now coupled by the isotropic TV norm, and the order of their elements has to match, i.e., both must be column-major with respect to the original image matrix. The subproblem with respect to d_x and d_y now becomes

    min_{d_x, d_y}  λ ∑_i ‖([d_x]_i, [d_y]_i)‖ + γ_x^T(Du − d_x) + γ_y^T(P^T Dv − d_y) + (1/(2µ₁))( ‖Du − d_x‖² + ‖P^T Dv − d_y‖² ),        (24)

which is a proximal problem associated with the group l_{1,2}-norm ‖d‖_{1,2}, with d_x ≡ ∇_x u and d_y ≡ ∇_y u, where the groups are defined as above. The solution to this subproblem is thus given by the block soft-thresholding operation [37, 27, 7]

    (d_x; d_y) = S( (Du; P^T Dv) + µ₁(γ_x; γ_y), λµ₁ ),

where the block soft-thresholding operator S is defined blockwise as

    S(x, λ)_i := max{‖x_i‖ − λ, 0} x_i / ‖x_i‖,

and x_i is the i-th block of x, i.e., ([Du + µ₁γ_x]_i, [P^T Dv + µ₁γ_y]_i) in our case. The subproblem with respect to u is the same as (19), and that with respect to v simplifies to the tridiagonal system

    ( D^T D + (µ₁/µ₂) I ) v = D^T P(d_y − µ₁γ_y) + (µ₁/µ₂)( µ₂γ_z + Pu ).        (25)

We state the ADAL method for the isotropic TV denoising model in Algorithm 3.2, where, because of (23),

    ∆ ≡ ( (1/µ₁)(Du − d_x); (1/µ₁)(P^T Dv − d_y); (1/µ₂)(Pu − v) ).

Algorithm 3.2 ADAL (Isotropic TV Denoising)
1: Given u^(0), v^(0), λ, γ^(0).
2: for k = 0, 1, ..., k_max do
3:   (d_x; d_y) ← S( (Du; P^T Dv) + µ₁(γ_x; γ_y), λµ₁ )
4:   v ← v(d_y, u, γ_y, γ_z), the solution of (25)
5:   u ← u(d_x, v, γ_x, γ_z), the solution of (19)
6:   γ ← γ + ∆
7: end for
8: return (1/2)(u + P^T v)
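Line 3 of Algorithm 3.2 is the group (l_{1,2}) analogue of the componentwise shrinkage used in the anisotropic case. A NumPy sketch over per-pixel gradient pairs (the names and the small guard against division by zero are ours):

    import numpy as np

    def block_soft_threshold(gx, gy, t):
        """S applied blockwise: each pair (gx_i, gy_i) is shrunk by t in norm.

        Each pair is scaled by max(||(gx_i, gy_i)|| - t, 0) / ||(gx_i, gy_i)||,
        which is exactly zero for blocks whose norm is at most t.
        """
        norm = np.sqrt(gx**2 + gy**2)
        scale = np.maximum(norm - t, 0.0) / np.maximum(norm, 1e-12)
        return gx * scale, gy * scale

    # In Algorithm 3.2 (operators and multipliers assumed given):
    # d_x, d_y = block_soft_threshold(D @ u + mu1 * gamma_x,
    #                                 P.T @ (D @ v) + mu1 * gamma_y,
    #                                 lam * mu1)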
Due to the non-decomposability of problem (24) with respect to d_x and d_y in this case, Algorithm 3.2 cannot be interpreted as an algorithm that employs alternating updates to two blocks of variables as in Section 3.1. Hence, the convergence analysis for the anisotropic case cannot be extended to this case in a straightforward manner. However, our experimental results in the next section show strong indication of convergence to the optimal solution.

3.4 Comparison with the Split Bregman Method

Since the split Bregman method (Algorithm 1.1) is equivalent to the ADAL method (Algorithm 2.2) [35, 10, 31] applied to the constrained problem

    min_{d,u}  λ(‖d_x‖₁ + ‖d_y‖₁) + (1/2)‖u − b‖²
    s.t.  d_x = ∇_x u,  d_y = ∇_y u,

it is clear that the main difference between ADAL Algorithms 3.1 and 3.2 and the split Bregman method comes from the introduction of the additional variable v = Pu in problem (15). The split Bregman subproblem with respect to u (line 3 in Algorithm 1.1) can be simplified to the linear system

    ( µI + ∇_x^T ∇_x + ∇_y^T ∇_y ) u^(k+1) = µb + ∇_x^T( d_x^(k) − r_x^(k) ) + ∇_y^T( d_y^(k) − r_y^(k) ),        (26)

whose left-hand-side matrix includes a Laplacian matrix and is strictly diagonally dominant. Solving this linear system exactly in each iteration is relatively expensive; hence, one iteration of the Gauss-Seidel method is applied in [16] to solve (26) approximately. Consequently, the condition for the convergence guarantee is violated in this case. In contrast, the subproblems with respect to v and u in ADAL have simpler structures and thus can be solved exactly in an efficient manner, as we saw in Section 3.1. The splitting of u and v also leads to the establishment of the global convergence of Algorithm 3.1 in the anisotropic case. We surmised that this was a better approach for the TV denoising problem; the results in the next section confirm this.

3.5 A Globally Convergent ADAL Method for the Isotropic TV Model

Let us introduce three sets of auxiliary variables, (d_x; d_y), v, and w, as follows:

    d_x = Du,  d_y = P^T Dv,  u = w,  v = Pw.

The isotropic TV model then has the form of (21) with

    X = (d_x; d_y; w),  Y = (u; v),
    A = [ I 0 0; 0 I 0; 0 0 I; 0 0 P ],  B = [ D 0; 0 P^T D; I 0; 0 I ].

When the augmented Lagrangian L(X, Y, γ) ≡ L(d_x, d_y, w, u, v, γ_x, γ_y, γ_u, γ_v) is minimized with respect to X, the minimization is separable in terms of (d_x, d_y) and w. Similarly, the minimization with respect to Y is separable in terms of u and v. If we use the same penalty parameter µ₂ for both constraints that involve w, then the subproblems that one obtains for u, v, and w require solving, respectively,

    ( D^T D + (µ₁ + µ₁/µ₂) I ) u = µ₁b − (µ₁/µ₂)( µ₂γ_u − w ) + D^T(d_x − µ₁γ_x),        (27)

    ( D^T D + (µ₁/µ₂) I ) v = (µ₁/µ₂)( Pw − µ₂γ_v ) + D^T P(d_y − µ₁γ_y),        (28)

and

    w = (1/2)( u + P^T v + µ₂(γ_u + P^T γ_v) ).        (29)

We incorporate these procedures in Algorithm 3.3 below, where now

    γ ≡ (γ_x; γ_y; γ_u; γ_v)  and  ∆ ≡ ( (1/µ₁)(Du − d_x); (1/µ₁)(P^T Dv − d_y); (1/µ₂)(w − u); (1/µ₂)(Pw − v) ).

Note that both Algorithms 3.2 and 3.3 compute u, v, γ_x, and γ_y. In addition, Algorithm 3.3 requires the computation of w, γ_u, and γ_v, whereas the isotropic ADAL Algorithm 3.2 only requires the computation of γ_z. Consequently, the new convergent algorithm does slightly more work at each iteration.

Algorithm 3.3 ADAL (Isotropic TV Denoising) - Convergent
1: Given u^(0), v^(0), λ, γ^(0).
2: for k = 0, 1, ..., k_max do
3:   (d_x; d_y) ← S( (Du; P^T Dv) + µ₁(γ_x; γ_y), λµ₁ )
4:   w ← w(u, v, γ_u, γ_v), given by (29)
5:   v ← v(d_y, w, γ_y, γ_v), the solution of (28)
6:   u ← u(d_x, w, γ_x, γ_u), the solution of (27)
7:   γ ← γ + ∆
8: end for
9: return (1/3)(u + P^T v + w)

3.6 Practical Implementation

It is often beneficial to associate a step size θ with the Lagrange multiplier updates, i.e., γ ← γ + θ∆ in line 7 of Algorithms 3.1 and 3.3 and line 6 of Algorithm 3.2. The convergence of this ADAL variant with θ ∈ (0, (√5 + 1)/2) has been established in [13], [19], and [39] in various contexts. In our implementation, we set θ = 1.618.

In practice, we can often set µ₁ = µ₂ = µ. In this case, we can save some scalar-vector multiplications by maintaining γ̃ = µγ instead of the γ variables themselves. Then, for example, (27) simplifies to

    ( D^T D + (µ + 1) I ) u = µb − (γ̃_u − w) + D^T(d_x − γ̃_x).

All the steps in the proposed algorithms remain the same with this substitution, except for the computation of ∆, which no longer involves division by µ. In addition, if µ is kept constant throughout the algorithms, we can factorize the constant left-hand sides of the tridiagonal linear systems once and cache the factors for subsequent iterations. The factors are lower triangular and are stored in the form of two vectors representing their diagonal and sub-diagonal. We can then solve for u and v quickly through forward and backward substitutions, which require 5mn flops each. In our implementation, we used the LAPACK routines dpttrf (for the factorization) and dpttrs (for the forward/backward substitutions).

We also developed a simple updating scheme for µ, which decreases µ by a constant factor κ after every J iterations, starting from µ̄ and bounded below by µ_min, i.e.,

    µ^(k) = max( µ_min, µ̄ / κ^⌊k/J⌋ ).

Such an updating scheme allows different values of µ to be applied at different stages of convergence. In our computational experience reported in the next section, this often led to improved convergence speed. We denote the variants of ADAL (Algorithms 3.1 and 3.2) and of Algorithm 3.3 that use this scheme by "ADAL-µ" and "ADAL-conv-µ", respectively.
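The µ-updating scheme is a one-line schedule; a direct transcription (the floor in the exponent is our reading of the update rule, and the defaults are the values reported in Section 4.2):

    def mu_schedule(k, mu_bar=0.5, mu_min=0.05, kappa=1.5, J=50):
        """mu^(k) = max(mu_min, mu_bar / kappa**floor(k/J)): decrease mu by a
        factor of kappa every J iterations, from mu_bar, floored at mu_min."""
        return max(mu_min, mu_bar / kappa ** (k // J))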
Figure 1: The set of standard test images.

4 Experiments

We implemented our ADAL algorithms (Algorithms 3.1, 3.2, and 3.3) in C++ with BLAS and LAPACK routines. SplitBregman is in C with a Matlab interface (code downloaded from http://www.stanford.edu/~tagoldst/code.html). We ran all the algorithms on a laptop with an Intel Core i5 Duo processor and 6 GB of memory.

4.1 Test Images

We compared our ADAL algorithms with the split Bregman method on a set of six standard test images: lena, house, cameraman, peppers, blonde, and mandril (see Figure 1). They present a range of challenges to image denoising algorithms, such as the reproduction of fine detail and textures, sharp transitions and edges, and uniform regions. Each image is a 512 × 512 array of grey-scale pixels and is denoted by u₀ in vectorized form.

4.2 Set-up

We constructed noisy images by adding Gaussian noise ε ∼ N(0, σ²) to the original images, i.e., b = u₀ + ε, where b is the vectorized noisy image. We set σ = 30, which introduces a considerable amount of noise. The quality of the denoised image in the k-th iteration, u^(k), is measured by the normalized error with respect to a high-quality reference solution u*,

    η^(k) = ‖u^(k) − u*‖ / ‖u*‖,

as well as by the peak signal-to-noise ratio (PSNR). The PSNR of an image u with respect to the noiseless image u₀, in the case where the maximum pixel magnitude is 255, is defined as

    PSNR = 20 log₁₀( 255√(nm) / ‖u − u₀‖ ).

PSNR is monotonically decreasing in ‖u − u₀‖, i.e., a higher PSNR indicates better reconstruction quality.
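Both quality measures are straightforward to compute; a sketch (the function names are ours):

    import numpy as np

    def psnr(u, u0, peak=255.0):
        """PSNR = 20*log10(peak*sqrt(nm)/||u - u0||) for vectorized images."""
        return 20.0 * np.log10(peak * np.sqrt(u0.size) / np.linalg.norm(u - u0))

    def normalized_error(u, u_ref):
        """eta = ||u - u*|| / ||u*|| w.r.t. a reference solution u*."""
        return np.linalg.norm(u - u_ref) / np.linalg.norm(u_ref)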
In practice, the algorithms can be stopped once an acceptable level of optimality has been reached. For ADAL, we used the maximum of the relative primal and dual residuals [5] to approximately measure the optimality of the solution. For each image, we computed a reference solution u* and the corresponding PSNR p* by running Algorithm 3.1 for the anisotropic TV model and Algorithm 3.3 for the isotropic TV model until this measure fell below 10⁻¹². We then recorded the number of iterations K required by each of the algorithms to reach a normalized error η^(K) less than 10⁻⁵. We also recorded the number of iterations required to reach a PSNR p whose relative gap to p*, gap ≡ |p − p*| / p*, was less than 10⁻³.

We set all initial values (u^(0), v^(0), w^(0), γ^(0)) to zero. We tuned all the algorithms under investigation for convergence speed with respect to both u* and p* to reach the tolerance levels defined above, and the same set of parameters was used for all six images. For ADAL, for both the anisotropic and isotropic TV models, we set µ₁ = µ₂ so that there was only one parameter to tune. We tried values of µ from the set {0.1δ : δ = 1/2, 1, 2, 4, 8, 16} and found the value µ = 0.2 to work best; the results reported below used this value. Figure 2 illustrates this selection using the image house as an example, by plotting the ratios K_p(δ)/K_p* and K_u(δ)/K_u* as functions of δ for isotropic ADAL, where K_u(δ) is the number of iterations needed to reduce the normalized error η^(K) below the tolerance 10⁻⁵ with µ = 0.1δ, and K_u* = min_δ {K_u(δ)}; K_p(δ) and K_p* are defined similarly with respect to p. For ADAL-µ and ADAL-conv-µ, we set κ = 1.5, J = 50, µ̄ = 0.5, and µ_min = 0.05. For SplitBregman, we set µ = λ/4, which was the value of µ from the set {λ/δ : δ = 1/2, 1, 2, 4, 8, 16} chosen in the same manner as that for ADAL. In general, these parameters produced fairly consistent performance on the set of different images.

Figure 2: Plots of the ratios K_p(δ)/K_p* and K_u(δ)/K_u* as functions of δ for isotropic ADAL on the image house. The optimal point on the frontier is labeled in red.

4.3 Convergence Comparisons

We now present experimental results for both the anisotropic and the isotropic TV models. "ADAL-conv" denotes the convergent ADAL Algorithm 3.3 for the isotropic model. We also tested a version of SplitBregman, denoted by "SplitBregman2", in which two cycles of Gauss-Seidel were performed to solve the linear system (26).

In Table 1, we report the number of iterations and the corresponding CPU time required by the algorithms to reach the stopping criterion discussed above. Figures 3 and 4 plot the relative gaps with respect to the reference solution as a function of the iteration number for all the algorithms.

In general, ADAL required fewer iterations than SplitBregman to reach the prescribed relative gap with respect to the reference solution. The difference was particularly significant at high accuracy levels: about 45% for the anisotropic model and 25% for the isotropic model. We believe that ADAL benefits from the fact that it is able to solve its subproblems exactly and efficiently, while the approximation made in the solution of the linear system in the iterations of SplitBregman slows down the convergence as its iterates approach the exact solution.

Figures 5 and 6 plot the relative gaps with respect to the reference PSNR as a function of the iteration number. In terms of denoising speed, ADAL also required fewer iterations than SplitBregman to reach the denoising quality of the reference solution for both TV models. SplitBregman2 generally required about the same number of iterations as SplitBregman to reach within the gap tolerance with respect to u*, while requiring half as many iterations as SplitBregman to reach the prescribed gap with respect to p*. It appears that the additional cycle of Gauss-Seidel in each iteration helped improve the convergence speed initially but not in the later stages.

We note that ADAL was faster than ADAL-conv for the isotropic model. This appears to be due to the need for ADAL-conv to make the norm of the difference between an additional set of variables and what they are defined to equal small enough to obtain high-quality images. We observe that the updating scheme for µ improved the speed of convergence, especially for the isotropic TV model. ADAL-µ and ADAL-conv-µ reduced the number of iterations required by ADAL and ADAL-conv to achieve the same normalized errors on the six test images by 40% to 60% and by 60% to 67%, respectively. We show in Figures 7-10 the solutions obtained by ADAL-µ and SplitBregman after the number of iterations specified in the ADAL-µ rows of Table 1.

Figure 3: Convergence plots of normalized errors w.r.t. the reference solution against iterations for the anisotropic TV model.

Figure 4: Convergence plots of normalized errors w.r.t. the reference solution against iterations for the isotropic TV model.

Figure 5: Convergence plots of normalized errors w.r.t. the reference PSNR against iterations for the anisotropic TV model.

Figure 6: Convergence plots of normalized errors w.r.t. the reference PSNR against iterations for the isotropic TV model.

In terms of CPU time, all of the algorithms required more time to reach the prescribed relative gap with respect to the reference solution for the isotropic model than for the anisotropic model, because more iterations were required.
Table 1: Computational statistics. Each cell reports iters / P-iters / CPU, where "iters" denotes the number of iterations to reach within a gap of 1e-5 w.r.t. the reference solution, "P-iters" denotes the number of iterations to reach within a gap of 1e-3 w.r.t. the reference PSNR, and CPU time is in seconds and corresponds to the "iters" column.

Anisotropic model:
                   lena               cameraman          house
    ADAL           334 / 31 / 11.1    595 / 31 / 18.9    621 / 28 / 20.7
    ADAL-µ         291 / 11 / 9.2     360 / 14 / 11.5    364 / 14 / 11.6
    SplitBregman   584 / 47 / 12.5    1070 / 47 / 23.6   1126 / 49 / 25.6
    SplitBregman2  595 / 23 / 15.7    1093 / 23 / 29.0   1128 / 25 / 33.1

                   blonde             mandril            peppers
    ADAL           370 / 40 / 12.0    210 / 38 / 7.0     279 / 28 / 9.4
    ADAL-µ         293 / 15 / 9.3     232 / 14 / 7.4     262 / 12 / 8.4
    SplitBregman   617 / 50 / 13.6    385 / 45 / 8.6     504 / 46 / 11.2
    SplitBregman2  619 / 25 / 17.2    394 / 23 / 10.8    513 / 23 / 14.0

Isotropic model:
                   lena               cameraman          house
    ADAL           1036 / 33 / 37.2   1286 / 33 / 46.0   1308 / 29 / 49.4
    ADAL-µ         472 / 14 / 17.1    531 / 15 / 19.3    537 / 14 / 19.5
    ADAL-conv      1482 / 62 / 54.8   1826 / 62 / 67.0   1860 / 58 / 70.6
    ADAL-conv-µ    587 / 25 / 22.1    666 / 26 / 25.2    677 / 26 / 25.6
    SplitBregman   1372 / 48 / 32.4   1767 / 49 / 41.6   1848 / 50 / 42.9
    SplitBregman2  1376 / 24 / 41.7   1779 / 25 / 53.5   1856 / 25 / 55.6

                   blonde             mandril            peppers
    ADAL           929 / 44 / 34.1    701 / 42 / 25.2    1043 / 29 / 43.4
    ADAL-µ         447 / 18 / 16.4    396 / 16 / 14.5    476 / 13 / 18.9
    ADAL-conv      1320 / 68 / 51.5   1023 / 66 / 38.2   1518 / 57 / 68.1
    ADAL-conv-µ    547 / 27 / 20.6    482 / 25 / 18.3    595 / 23 / 26.5
    SplitBregman   1292 / 50 / 30.4   983 / 47 / 23.2    1368 / 47 / 33.7
    SplitBregman2  1298 / 25 / 37.8   996 / 23 / 29.4    1375 / 23 / 42.5

Figure 7: Comparison of reconstruction quality for lena, cameraman, mandril, and blonde with the anisotropic TV model. Top left: noisy image. Top right: reference solution obtained by ADAL. Bottom left: ADAL solution obtained after the corresponding number of iterations indicated in Table 1. Bottom right: SplitBregman solution obtained after the same number of iterations.

Figure 8: Comparison of reconstruction quality for house and peppers with the anisotropic TV model. Top left: noisy image. Top right: reference solution obtained by ADAL. Bottom left: ADAL-µ solution obtained after the corresponding number of iterations indicated in Table 1. Bottom right: SplitBregman solution obtained after the same number of iterations.

Figure 9: Comparison of reconstruction quality for lena, cameraman, mandril, and blonde with the isotropic TV model. Top left: noisy image. Top right: reference solution obtained by ADAL. Bottom left: ADAL-µ solution obtained after the corresponding number of iterations indicated in Table 1. Bottom right: SplitBregman solution obtained after the same number of iterations.

Figure 10: Comparison of reconstruction quality for house and peppers with the isotropic TV model. Top left: noisy image. Top right: reference solution obtained by ADAL. Bottom left: ADAL solution obtained after the corresponding number of iterations indicated in Table 1. Bottom right: SplitBregman solution obtained after the same number of iterations.

For the anisotropic model, ADAL took less time than SplitBregman, and ADAL-µ took the least time, with the fewest iterations. The apparently greater per-iteration work of ADAL is due to the computation of the additional tridiagonal linear system and the Lagrange multiplier, as compared to the one sweep of Gauss-Seidel employed by SplitBregman. SplitBregman2 required more time than SplitBregman because of the additional sweep of Gauss-Seidel per iteration, with about the same number of iterations.
The approximate total number of flops per iteration (including solving the linear systems, updating the Lagrange multipliers, and performing the shrinkage operations) is 33mn, 46mn, and 44mn for SplitBregman, SplitBregman2, and ADAL, respectively. (Depending on the implementation, the number of flops required may vary slightly.)

For the isotropic model, ADAL-µ and ADAL-conv-µ required the least amount of time to reach the same level of accuracy, with about half the number of iterations required by SplitBregman. On the other hand, ADAL took more time than SplitBregman in this case, though still less time than SplitBregman2; the driving factor here was the greater number of iterations required than in the anisotropic case. ADAL-conv required slightly more per-iteration work than ADAL, due to the additional computation for w and for the Lagrange multiplier of the additional constraint introduced. The approximate total number of flops per iteration is 34mn, 47mn, 48mn, and 57mn for SplitBregman, SplitBregman2, ADAL, and ADAL-conv, respectively.

5 Conclusion

We have proposed new ADAL algorithms for solving TV denoising problems in image processing. The key feature of our algorithms is their use of multiple variable splittings, which results in their ability to solve the ADAL subproblems exactly and efficiently. Our first ADAL algorithm has a global convergence guarantee for the anisotropic TV model, and the experimental results show that its iterates converge significantly faster than those of SplitBregman. Even though the convergence guarantee of this ADAL variant cannot be extended easily to the isotropic TV model, empirical results show that, with a simple updating scheme for µ, it still compares favorably to SplitBregman in convergence speed. We also proposed another ADAL variant for the isotropic TV model that has a global convergence guarantee, with a slightly higher per-iteration computational cost. Because of the additional variable splitting required to obtain the convergence guarantee, this method also takes more iterations than the simpler isotropic ADAL variant.

6 Acknowledgement

We would like to thank the anonymous referees for their careful reading and valuable comments, which have helped improve the paper significantly. This work was supported in part by DMS 10-16571, ONR Grant N00014-08-1-1118, and DOE Grant DE-FG02-08ER25856. Research of Shiqian Ma was supported in part by a Direct Grant of the Chinese University of Hong Kong (Project ID: 4055016) and the Hong Kong Research Grants Council Early Career Scheme (Project ID: CUHK 439513).

References

[1] M. Afonso, J. Bioucas-Dias, and M. Figueiredo. Fast image recovery using variable splitting and constrained optimization. IEEE Transactions on Image Processing, 19(9):2345-2356, 2010.
[2] M. Afonso, J. Bioucas-Dias, and M. Figueiredo. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Transactions on Image Processing, 20:681-695, 2011.
[3] M. S. Almeida and M. A. Figueiredo. Deconvolving images with unknown boundaries using the alternating direction method of multipliers. IEEE Transactions on Image Processing, to appear (arXiv:1210.02687v2).
[4] D. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1999.
[5] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-123, 2010.
[6] A. Chambolle. An algorithm for total variation minimization and applications. Journal of Mathematical Imaging and Vision, 20(1):89-97, 2004.
[7] P. Combettes and J. Pesquet. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pages 185-212, 2011.
[8] W. Deng, W. Yin, and Y. Zhang. Group sparse optimization by alternating direction method. Technical Report TR11-06, Rice University, 2011.
[9] J. Eckstein and D. Bertsekas. On the Douglas-Rachford splitting method and the proximal point algorithm for maximal monotone operators. Mathematical Programming, 55(1):293-318, 1992.
[10] E. Esser. Applications of Lagrangian-based alternating direction methods and connections to split Bregman. CAM Report 09-31, UCLA, 2009.
[11] M. A. Figueiredo and J. M. Bioucas-Dias. Restoration of Poissonian images using alternating direction optimization. IEEE Transactions on Image Processing, 19(12):3133-3145, 2010.
[12] D. Gabay and B. Mercier. A dual algorithm for the solution of nonlinear variational problems via finite element approximation. Computers & Mathematics with Applications, 2(1):17-40, 1976.
[13] R. Glowinski and P. Le Tallec. Augmented Lagrangian and Operator-Splitting Methods in Nonlinear Mechanics, volume 9. SIAM, 1989.
[14] R. Glowinski and A. Marroco. Sur l'approximation, par elements finis d'ordre un, et la resolution, par penalisation-dualite, d'une classe de problemes de Dirichlet non lineaires. Rev. Francaise d'Automat. Inf. Recherche Operationelle, (9):41-76, 1975.
[15] D. Goldfarb and W. Yin. Parametric maximum flow algorithms for fast total variation minimization. SIAM Journal on Scientific Computing, 31(5):3712-3743, 2009.
[16] T. Goldstein and S. Osher. The split Bregman method for L1-regularized problems. SIAM Journal on Imaging Sciences, 2(2):323-343, 2009.
[17] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, 1996.
[18] B. He, L. Liao, D. Han, and H. Yang. A new inexact alternating directions method for monotone variational inequalities. Mathematical Programming, 92(1):103-118, 2002.
[19] B. He, H. Yang, and S. Wang. Alternating direction method with self-adaptive penalty parameters for monotone variational inequalities. Journal of Optimization Theory and Applications, 106(2):337-356, 2000.
[20] M. Hestenes. Multiplier and gradient methods. Journal of Optimization Theory and Applications, 4(5):303-320, 1969.
[21] Z. Lin, M. Chen, L. Wu, and Y. Ma. The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. arXiv preprint arXiv:1009.5055, 2010.
[22] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. Mathematical Programming, pages 1-33, 2009.
[23] J. Nocedal and S. Wright. Numerical Optimization. Springer Verlag, 1999.
[24] S. Osher, M. Burger, D. Goldfarb, J. Xu, and W. Yin. An iterative regularization method for total variation-based image restoration. Multiscale Modeling and Simulation, 4(2):460-489, 2006.
[25] M. Powell. A method for nonlinear constraints in minimization problems. In R. Fletcher, editor, Optimization. Academic Press, New York, 1972.
[26] Z. Qin and D. Goldfarb. Structured sparsity via alternating direction methods. Journal of Machine Learning Research, 13:1373-1406, 2012.
[27] Z. Qin, K. Scheinberg, and D. Goldfarb. Efficient block-coordinate descent algorithms for the group lasso. Mathematical Programming Computation, 5:143-169, 2013.
[28] S. Ramani and J. A. Fessler. A splitting-based iterative algorithm for accelerated statistical X-ray CT reconstruction. IEEE Transactions on Medical Imaging, 31(3):677-688, 2012.
[29] R. Rockafellar. The multiplier method of Hestenes and Powell applied to convex programming. Journal of Optimization Theory and Applications, 12(6):555-562, 1973.
[30] L. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1-4):259-268, 1992.
[31] S. Setzer. Split Bregman algorithm, Douglas-Rachford splitting and frame shrinkage. In Scale Space and Variational Methods in Computer Vision, pages 464-476, 2009.
[32] Y. Shen, Z. Wen, and Y. Zhang. Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization. Technical Report TR11-02, Rice University, 2011.
[33] G. Steidl and T. Teuber. Removing multiplicative noise by Douglas-Rachford splitting methods. Journal of Mathematical Imaging and Vision, 36(2):168-184, 2010.
[34] D. Strong and T. Chan. Edge-preserving and scale-dependent properties of total variation regularization. Inverse Problems, 19:S165, 2003.
[35] X. Tai and C. Wu. Augmented Lagrangian method, dual methods and split Bregman iteration for ROF model. In Scale Space and Variational Methods in Computer Vision, pages 502-513, 2009.
[36] M. Tao and J. Yang. Alternating direction algorithms for total variation deconvolution in image reconstruction. Optimization Online, 2009.
[37] E. van den Berg, M. Schmidt, M. Friedlander, and K. Murphy. Group sparsity via linear-time projection. Technical Report TR-2008-09, Department of Computer Science, University of British Columbia, 2008.
[38] Y. Wang, J. Yang, W. Yin, and Y. Zhang. A new alternating minimization algorithm for total variation image reconstruction. SIAM Journal on Imaging Sciences, 1(3):248-272, 2008.
[39] Z. Wen, D. Goldfarb, and W. Yin. Alternating direction augmented Lagrangian methods for semidefinite programming. Mathematical Programming Computation, pages 1-28, 2010.
[40] Y. Xu, W. Yin, Z. Wen, and Y. Zhang. An alternating direction algorithm for matrix completion with nonnegative factors. Frontiers of Mathematics in China, 7(2):365-384, 2012.
[41] J. Yang, W. Yin, Y. Zhang, and Y. Wang. A fast algorithm for edge-preserving variational multichannel image restoration. SIAM Journal on Imaging Sciences, 2(2):569-592, 2009.
[42] J. Yang and Y. Zhang. Alternating direction algorithms for l1-problems in compressive sensing. SIAM Journal on Scientific Computing, 33(1):250-278, 2011.
[43] J. Yang, Y. Zhang, and W. Yin. An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise. SIAM Journal on Scientific Computing, 31(4):2842-2865, 2009.
[44] J. Yang, Y. Zhang, and W. Yin. A fast alternating direction method for TVL1-L2 signal reconstruction from partial Fourier data. IEEE Journal of Selected Topics in Signal Processing, 4(2):288-297, 2010.
[45] X. Yuan and J. Yang. Sparse and low-rank matrix decomposition via alternating direction methods. Preprint, 2009.
[46] L. Zappella, A. Del Bue, X. Llado, and J. Salvi. Simultaneous motion segmentation and structure from motion. In Applications of Computer Vision (WACV), 2011 IEEE Workshop on, pages 679-684, 2011.
