
A new reweighted $\ell_1$ minimization algorithm for image deblurring

Abstract

In this paper, a new reweighted $\ell_1$ minimization algorithm for image deblurring is proposed. The algorithm is based on a generalized inverse iteration and the linearized Bregman iteration, and it is used to solve the weighted $\ell_1$ minimization problem $\min_{u\in\mathbb{R}^n}\{\|u\|_\omega : Au=f\}$. During the computation, the effective use of signal information can recover detailed image features that may otherwise be lost in the deblurring process. Numerical experiments confirm that the new reweighted algorithm for image restoration is effective and competitive with recent state-of-the-art algorithms.

1 Introduction

Image deblurring is a fundamental problem in image processing, since many real-life problems can be modeled as deblurring problems [1]. In this paper, a new reweighted $\ell_1$ minimization algorithm for image deblurring is proposed. The algorithm is based on a generalized inverse iteration and a linearized Bregman iteration.

For simplicity, we denote images as vectors in $\mathbb{R}^n$ by concatenating their columns. Let $u\in\mathbb{R}^n$ be the underlying image. Then the observed blurred image $f\in\mathbb{R}^m$ is given by

$$ f = Au + \eta, \qquad (1.1) $$

where $\eta\in\mathbb{R}^m$ is additive noise and $A\in\mathbb{R}^{m\times n}$ is a linear blurring operator. This problem is ill-posed due to the large condition number of the matrix $A$: any small perturbation of the observed blurred image $f$ may cause the direct solution $A^{-1}f$ to be far away from the original image $u$ [2]. This is a widely studied subject and many approaches have been developed; one of them is to minimize a cost functional [1]. The simplest method is Tikhonov regularization, which minimizes an energy consisting of a data fidelity term and an $\ell_2$ norm regularization term. When $A$ is a convolution, the problem can be solved in the Fourier domain; in this case the method is called the Wiener filter [3]. It is a linear method, and the edges of the restored image are usually smeared. To overcome this, a total variation (TV) based regularization, known as the ROF model, was proposed by Rudin et al. in [4]. Due to its virtue of preserving edges, it is widely used in image processing, for example in blind deconvolution, inpainting, and superresolution; see [1]. However, since TV regularization yields staircasing [5, 6], TV-based methods do not preserve fine structures, details, and textures. To avoid these drawbacks, nonlocal methods were proposed for denoising [7, 8] and then extended to deblurring [9]. Also, the Bregman iteration, introduced to image science in [10], was shown to improve TV-based blind deconvolution [11–13]. Recently, a nonlocal TV regularization based on graph theory was introduced [14] and applied to image deblurring [15]. Another approach to deblurring is the wavelet-based method [16].

Normally, the original image $u\in\mathbb{R}^n$ will be found by solving the following constrained minimization problem:

$$ \min_{u\in\mathbb{R}^n}\{J(u) : Au = f\}, \qquad (1.2) $$

where $J(u)$ is a continuous convex function; when $J(u)$ is strictly or strongly convex, the solution of (1.2) is unique.

The constrained optimization problem (1.2) arises in many applications, such as image compression, reconstruction, inpainting, segmentation, and compressed sensing. In many cases problem (1.2) can be transformed into a linear programming problem and then solved by a conventional linear programming solver. Recently, the fixed-point continuation method [17] and the Bregman iteration [18] have become very popular; in particular, Bregman iterative regularization was proposed by Osher et al. [10]. In the past few years a series of new methods have been developed, among which the linearized Bregman method [19–22] and the split Bregman method [23–26] have attracted the most attention.

In particular, when $J(u)=\|u\|_1$, problem (1.2) becomes

$$ \min_{u\in\mathbb{R}^n}\{\|u\|_1 : Au = f\}. \qquad (1.3) $$

Obviously, problem (1.3) is an $\ell_1$-norm minimization problem. Many practical problems, for example in signal processing and compressive sensing, are related to the sparsity of the solution, which has kept problem (1.3) in focus for years [18, 19]. Like problem (1.2), problem (1.3) can be transformed into a linear program and then solved by conventional linear programming solvers. However, such solvers are not tailored to a matrix $A$ that is large-scale and completely dense. Fortunately, problem (1.3) can be solved very effectively by the linearized Bregman method [19–22, 27]. Its simplified form with a soft thresholding operator is even faster [19, 21, 22], and the corresponding convergence analysis was given in [20].

In this paper we focus on the numerical computation of coefficients in sparse reconstruction methods for image deblurring, described by an operator $\Phi: X\to Y$ between Hilbert spaces $X$ and $Y$. We seek solutions that are sparse in an orthogonal basis $\{\psi_j\}_{j\in\mathbb{N}}$. The standard approach is the following weighted version of the $\ell_1$ minimization (1.3):

$$ \min_{u\in\ell^2(\mathbb{N})\cap\ell^1_\omega(\mathbb{N})}\Big\{\tfrac12\Big\|\sum_j u_j\Phi\psi_j - f\Big\|^2 + \alpha\sum_j \omega_j|u_j|\Big\}. \qquad (1.4) $$

Here $\ell^1_\omega(\mathbb{N})$ denotes the space of coefficient sequences $(u_j)$ such that $\sum_j\omega_j|u_j|<\infty$. To simplify the notation we introduce the operator $A:\ell^2(\mathbb{N})\to Y$, $(u_j)_j\mapsto\sum_j u_j\Phi\psi_j$. Moreover, we assume that the $\{\omega_j\}_{j\in\mathbb{N}}$ are positive weights and that there is a constant $\omega_0>0$ such that $\omega_j\ge\omega_0$ for all $j\in\mathbb{N}$. Hence $\sum_j\omega_j|u_j|$ is indeed a norm on $\ell^1_\omega(\mathbb{N})$, denoted by $\|u\|_\omega$. Then the $\ell^1$ minimization can be rewritten as

$$ \min_{u\in\ell^2(\mathbb{N})\cap\ell^1_\omega(\mathbb{N})}\Big\{\alpha\|u\|_\omega + \tfrac12\|Au-f\|^2\Big\}. \qquad (1.5) $$

Naturally one can set $\omega^{k+1}(i)=\frac{1}{|u^k(i)|}$, so that the weighted $\ell^1$ norm acts as a kind of approximation to the $\ell^0$ norm; however, when $u^k(i)=0$ the weight $\omega^{k+1}(i)$ is not well defined. Fortunately, it can be regularized as $\omega^{k+1}(i)=\frac{1}{|u^k(i)|+\epsilon}$, where $\epsilon>0$ is a small number [28]. So in this paper we set

$$ \omega^{k+1}(i)=\frac{1}{|u^k(i)|+\epsilon}. $$
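
To see why this choice of weights promotes sparsity, consider the following small numerical illustration (a sketch of ours, not from the paper): with $\omega_i=1/(|u_i|+\epsilon)$, each nonzero entry contributes roughly 1 to the weighted norm, so $\|u\|_\omega$ approaches the number of nonzeros (the $\ell^0$ "norm") as $\epsilon\to 0$.

```python
import numpy as np

# Sketch (ours, not from the paper): the weighted l1 norm with weights
# w_i = 1/(|u_i| + eps) approximates the number of nonzeros of u.
u = np.array([0.0, 3.0, 0.0, -0.5, 2.0])   # three nonzero entries
for eps in (1e-1, 1e-3, 1e-6):
    w = 1.0 / (np.abs(u) + eps)
    print(eps, np.sum(w * np.abs(u)))      # tends to 3.0 as eps -> 0
```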

On this basis, we propose a new reweighted $\ell_1$ minimization method to solve problem (1.5) and illustrate its performance by numerical experiments.

The rest of the paper is organized as follows. In Section 2, we summarize the existing methods for solving the constrained problem (1.3). In Section 3, the generalized shrinkage operator is proposed. The new algorithm is proposed in Section 4. Numerical results are shown in Section 5. Finally, we draw some conclusions in Section 6.

2 Preliminaries

2.1 Generalized inverse

We are interested in an iterative formula for the generalized inverse, since it is used in our new algorithm. Before the detailed discussion, we first give some definitions and lemmas.

Definition 2.1 [29]

Let $A\in\mathbb{C}^{m\times n}$. A matrix $X$ is called the pseudoinverse (Moore-Penrose inverse) of $A$, denoted by $A^\dagger$, if $X$ satisfies the following properties, i.e., the Moore-Penrose conditions:

$$ 1.\ AXA=A, \quad 2.\ XAX=X, \quad 3.\ (AX)^*=AX, \quad 4.\ (XA)^*=XA. \qquad (2.1) $$

Remark 2.1 A matrix satisfying only condition 1 is called an inner inverse of $A$; the inner inverse is not unique. In general, the set of inner inverses of the matrix $A$ is denoted by $A\{1\}$.

Definition 2.2 [29]

Let $A,B\in\mathbb{C}^{n\times m}$. The set

$$ \mu(A,B)=\{X \mid X=AYB,\ Y\in\mathbb{C}^{m\times n}\} \qquad (2.2) $$

is called the range of $(A,B)$.

Lemma 2.1 [30]

Let $A\in\mathbb{C}^{m\times n}$, $A\neq 0$. If the initial matrix $V_0$ satisfies

$$ V_0\in\mu(A^*,A^*), \qquad (2.3) $$

$$ \rho(I-AV_0)<1, \qquad (2.4) $$

where $I$ is the $m\times m$ identity matrix, $\rho(\cdot)$ denotes the spectral radius, and $A^*$ is the conjugate transpose of $A$, then the sequence $\{V_q\}_{q\in\mathbb{N}}$ generated by

$$ V_{q+1}=V_q+V_0(I-AV_q),\quad q=1,2,\ldots, \qquad (2.5) $$

converges to $A^\dagger$.
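
As a minimal numerical sketch of Lemma 2.1 (our illustration, not code from the paper): the choice $V_0=\alpha A^T$ with $0<\alpha<2/\|A\|_2^2$, which is also used in Section 4, satisfies (2.3)-(2.4) for a real matrix $A$ of full row rank.

```python
import numpy as np

def pinv_iteration(A, n_iter=2000):
    """Iteration (2.5): V_{q+1} = V_q + V_0 (I - A V_q), with V_0 = alpha*A^T."""
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # safely inside (0, 2/||A||_2^2)
    V0 = alpha * A.T
    V = V0.copy()
    I = np.eye(A.shape[0])
    for _ in range(n_iter):
        V = V + V0 @ (I - A @ V)
    return V

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))             # nonzero, full row rank a.s.
err = np.max(np.abs(pinv_iteration(A) - np.linalg.pinv(A)))
print(err)                                    # small; decreases with n_iter
```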

2.2 Linearized Bregman iteration

The Bregman distance [31], based on a convex function $J$, between points $u$ and $v$, is defined by

$$ D_J^p(u,v)=J(u)-J(v)-\langle p, u-v\rangle, \qquad (2.6) $$

where $p\in\partial J(v)$ is an element of the subgradient set of $J$ at the point $v$. In general $D_J^p(u,v)\neq D_J^p(v,u)$ and the triangle inequality is not satisfied, so $D_J^p(u,v)$ is not a distance in the usual sense. For details, see [31].

To solve (1.3), the linearized Bregman iteration of [19] is generated by

$$ \begin{cases} u^{k+1}=\arg\min_u\big\{\mu D_J^{p^k}(u,u^k)+\frac{1}{2\delta}\big\|u-\big(u^k-\delta A^T(Au^k-f)\big)\big\|^2\big\},\\[4pt] p^{k+1}=p^k-\frac{1}{\mu\delta}(u^{k+1}-u^k)-\frac{1}{\mu}A^T(Au^k-f),\quad p^k\in\partial J(u^k), \end{cases} \qquad (2.7) $$

where $\delta$ is a constant and $p^0=u^0=0$. Hereafter, we use $\|\cdot\|=\|\cdot\|_2$ to denote the $\ell_2$ norm.

When $J(u)=\|u\|_1$, algorithm (2.7) can be rewritten as

$$ \begin{cases} v^{k+1}=v^k+A^T(f-Au^k),\\ u^{k+1}=\delta T_\mu(v^{k+1}), \end{cases} \qquad (2.8) $$

where $u^0=v^0=0$, and

$$ T_\lambda(\omega):=\big[t_\lambda(\omega(1)),\ t_\lambda(\omega(2)),\ldots,t_\lambda(\omega(n))\big]^T \qquad (2.9) $$

is the soft thresholding operator [18] with

$$ t_\lambda(\xi)=\begin{cases} 0, & |\xi|\le\lambda,\\ \operatorname{sgn}(\xi)(|\xi|-\lambda), & |\xi|>\lambda. \end{cases} \qquad (2.10) $$

Algorithm (2.8) is called the $A^T$ linearized Bregman iteration.
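
For concreteness, here is a small sketch (ours) of iteration (2.8) with the soft-thresholding operator (2.10) on a toy sparse-recovery problem; the test matrix, the parameters $\mu$ and $\delta$, and the iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

def soft_threshold(v, lam):
    # t_lam(xi) = 0 if |xi| <= lam, sgn(xi)(|xi| - lam) otherwise; cf. (2.10)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def at_linearized_bregman(A, f, mu, delta, n_iter):
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])                  # u^0 = v^0 = 0
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)             # v^{k+1} = v^k + A^T (f - A u^k)
        u = delta * soft_threshold(v, mu)     # u^{k+1} = delta * T_mu(v^{k+1})
    return u

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, 2)                     # normalize so ||A||_2 = 1
x = np.zeros(100)
x[[3, 17, 42]] = [1.5, -2.0, 1.0]             # sparse ground truth
u = at_linearized_bregman(A, A @ x, mu=5.0, delta=0.9, n_iter=5000)
print(np.linalg.norm(u - x))                  # small for suitable mu, delta
```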

Subsequently, when $A$ is an arbitrary matrix, the constraint $Au=f$ in problem (1.3) may not be satisfiable. The formulation is therefore extended to the least-squares problem $\min_{u\in\mathbb{R}^n}\|Au-f\|^2$, and the algorithm becomes the following $A^\dagger$ linearized Bregman iteration [22]:

$$ \begin{cases} f^{k+1}=f^k+f-Au^k,\\ u^{k+1}=\delta T_\mu(A^\dagger f^{k+1}), \end{cases} \qquad (2.11) $$

where $A^\dagger$ is the generalized inverse of the matrix $A$.

3 The generalized shrinkage operator

Theorem 3.1 $T_\mu(v)=\arg\min_{u\in\mathbb{R}^n}\{\mu\|u\|_1+\frac12\|u-v\|^2\}$.

Proof Let $f(u)=\mu\|u\|_1+\frac12\|u-v^k\|^2=\mu\sum_{i=1}^n|u_i|+\frac12\sum_{i=1}^n(v_i^k-u_i)^2$. Then we have

$$ \frac{\partial f(u)}{\partial u_i}=\begin{cases} \mu+u_i-v_i^k, & u_i>0,\\ -\mu+u_i-v_i^k, & u_i<0. \end{cases} \qquad (3.1) $$

Case 1: $v_i^k>\mu>0$.

(1) If $u_i>0$, setting $\frac{\partial f(u)}{\partial u_i}=0$ gives $u_i=v_i^k-\mu>0$. In this case $f(u)$ attains its minimum along the direction $e_i$ at $u_i=v_i^k-\mu$, and the minimum is

$$ f(u)\big|_{u_i=v_i^k-\mu}=\mu(v_i^k-\mu)+\tfrac12\mu^2+\delta_1=\Delta_1+\delta_1, \qquad (3.2) $$

where $\delta_1$ ($>0$) collects the terms of $f(u)$ not involving $u_i$.

(2) If $u_i<0$, then $\frac{\partial f(u)}{\partial u_i}=u_i-v_i^k-\mu<0$, so $f(u)$ decreases along the direction $e_i$ and approaches its infimum at $u_i=0$:

$$ f(u)\big|_{u_i=0}=\tfrac12(v_i^k)^2+\delta_1=\Delta_2+\delta_1. \qquad (3.3) $$

Since $\Delta_2-\Delta_1=\tfrac12(v_i^k)^2-(\mu v_i^k-\tfrac12\mu^2)=\tfrac12(v_i^k-\mu)^2>0$, along the direction $e_i$ the minimizer of $f(u)$ is $u_i=v_i^k-\mu$.

Case 2: $v_i^k<-\mu<0$.

(1) If $u_i>0$, since $\frac{\partial f(u)}{\partial u_i}=u_i-v_i^k+\mu>0$, $f(u)$ increases along the direction $e_i$:

$$ f(u)\big|_{u_i=0}=\tfrac12(v_i^k)^2+\delta_3=\Delta_3+\delta_3. \qquad (3.4) $$

(2) If $u_i<0$, setting $\frac{\partial f(u)}{\partial u_i}=0$ gives $u_i=v_i^k+\mu<0$; the minimizer of $f(u)$ along the direction $e_i$ is $u_i=v_i^k+\mu$, and the corresponding minimum is

$$ f(u)\big|_{u_i=v_i^k+\mu}=-\mu(v_i^k+\mu)+\tfrac12\mu^2+\delta_3=\Delta_4+\delta_3. \qquad (3.5) $$

Since $\Delta_3-\Delta_4=\tfrac12(v_i^k)^2+\mu(v_i^k+\mu)-\tfrac12\mu^2=\tfrac12(v_i^k+\mu)^2>0$, the minimizer of $f(u)$ along the direction $e_i$ is $u_i=v_i^k+\mu$.

Case 3: $-\mu\le v_i^k\le\mu$.

(1) If $u_i>0$, since $\frac{\partial f(u)}{\partial u_i}=u_i-v_i^k+\mu>0$, $f(u)$ increases along the direction $e_i$:

$$ f(u)\big|_{u_i=0}=\tfrac12(v_i^k)^2+\delta. \qquad (3.6) $$

(2) If $u_i<0$, since $\frac{\partial f(u)}{\partial u_i}=u_i-v_i^k-\mu<0$, $f(u)$ decreases along the direction $e_i$:

$$ f(u)\big|_{u_i=0}=\tfrac12(v_i^k)^2+\delta. \qquad (3.7) $$

Hence at $u_i=0$ the minimum of $f(u)$ along the direction $e_i$ is $f(u)=\tfrac12(v_i^k)^2+\delta$.

In conclusion, we obtain the following soft shrinkage operator:

$$ t_\mu(\xi)=\begin{cases} 0, & |\xi|\le\mu,\\ \operatorname{sgn}(\xi)(|\xi|-\mu), & |\xi|>\mu. \end{cases} \qquad (3.8) $$

The minimizer of the minimization problem is given by

$$ u^*=\arg\min_{u\in\mathbb{R}^n}\Big\{\mu\|u\|_1+\tfrac12\|u-v^k\|^2\Big\},\quad u_i^*=\begin{cases} v_i^k-\mu, & v_i^k>\mu>0,\\ 0, & -\mu\le v_i^k\le\mu,\\ v_i^k+\mu, & v_i^k<-\mu<0, \end{cases} $$
$$ \text{i.e.,}\quad u^*=\big[t_\mu(v_1^k),\ t_\mu(v_2^k),\ldots,t_\mu(v_n^k)\big]^T=T_\mu(v^k). \qquad (3.9) $$

 □

The unknown variable $u$ is component-wise separable in the problem

$$ u=\arg\min_{u\in\ell^2(\mathbb{N})\cap\ell^1_\omega(\mathbb{N})}\Big\{\mu\|u\|_\omega+\tfrac12\|u-v\|^2\Big\} \qquad (3.10) $$

for any $v\in\ell^2(\mathbb{N})\cap\ell^1_\omega(\mathbb{N})$ and $\omega>0$. Each of its components $u_i$ can therefore be obtained independently by the shrinkage operation, which is also referred to as soft thresholding [32]:

$$ u_i=T_{\mu\omega_i}(v_i)=\operatorname{shrink}(v_i,\mu\omega_i),\quad i=1,2,\ldots. \qquad (3.11) $$

For $v_i$, $\omega_i$ and $\mu\in\mathbb{R}$, we define $u_i\in\mathbb{R}$ by

$$ u_i=\operatorname{shrink}(v_i,\mu\omega_i):=\operatorname{sgn}(v_i)\max\{|v_i|-\mu\omega_i,0\} =\begin{cases} v_i-\mu\omega_i, & v_i>\mu\omega_i,\\ 0, & -\mu\omega_i\le v_i\le\mu\omega_i,\\ v_i+\mu\omega_i, & v_i<-\mu\omega_i. \end{cases} \qquad (3.12) $$

The generalized shrinkage operator promotes sparsity and removes noise. Hence, an algorithm built on the generalized shrinkage operator converges to a sparse solution and is robust to noise.
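
A minimal sketch (ours) of the generalized shrinkage operator (3.11)-(3.12), with a brute-force check of the componentwise minimization in (3.10); the vectors and weights are illustrative:

```python
import numpy as np

def shrink(v, thresh):
    # shrink(v_i, t_i) = sgn(v_i) * max(|v_i| - t_i, 0), cf. (3.12)
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

mu = 1.0
v = np.array([3.0, -0.2, 1.0, -4.0])
w = np.array([1.0, 0.5, 2.0, 0.25])
u = shrink(v, mu * w)
print(u)                                       # [ 2.  -0.   0.  -3.75]

# Brute-force check for one component: the closed form matches the grid
# minimizer of  mu*w_i*|t| + 0.5*(t - v_i)^2.
t = np.linspace(-6, 6, 1_000_001)
i = 0
obj = mu * w[i] * np.abs(t) + 0.5 * (t - v[i]) ** 2
print(t[np.argmin(obj)])                       # approximately u[0] = 2.0
```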

4 The new reweighted l 1 minimization algorithm

The sequence $\{u^k\}$ generated by the $A^\dagger$ linearized Bregman iteration converges to an optimal solution of problem (1.3). However, the computation of the generalized inverse $A^\dagger$ is time consuming. To overcome this, a method called the chaotic iterative algorithm is obtained by combining (2.11) with (2.5). This algorithm requires only matrix-vector multiplications, so the action of the generalized inverse $A^\dagger$ can be computed efficiently. For a better understanding of the algorithm, we give a brief description of this method as follows:

$$ \begin{cases} f^{k+1}=f^k+(f-Au^k),\\ y^{k+1}=y^k+V_0f^{k+1}-V_0(Ay^k),\\ u^{k+1}=\delta T_\mu(y^{k+1}), \end{cases} \quad k=0,1,2,\ldots, \qquad (4.1) $$

where $y^0=V_0f^0$, $V_0=\alpha A^T$ and $0<\alpha<\frac{2}{\|A\|_2^2}$. The corresponding sequence $\{u^k\}$ also converges to an optimal solution of problem (1.3).

Here we first recall the iteratively reweighted least-squares (IRLS) method [33] for robust statistical estimation. Consider a regression problem $Ax=b$ in which the observation matrix $A$ is underdetermined. A standard least-squares regression minimizes $\|r\|_2$, where $r=Ax-b$ is the residual vector, but this lacks robustness. To overcome this, IRLS was proposed as an iterative method for

$$ \min_x\sum_i \rho\big(r_i(x)\big), \qquad (4.2) $$

where $\rho(\cdot)$ is a penalty function such as the $\ell_1$ norm. This minimization can be accomplished by solving a sequence of weighted least-squares problems in which the weights $\{w_i\}$ depend on the previous residuals, $w_i=\rho'(r_i)/r_i$. The typical choice makes the weight inversely proportional to the residual, so that large residuals are penalized less in subsequent iterations; an IRLS scheme with an iteratively reweighted $\ell_2$ norm thus approximates an $\ell_1$-like criterion, as the sketch below illustrates. Inspired by this idea, and in order to better approximate an $\ell_0$-like criterion [34], our algorithm employs an iteratively reweighted $\ell_1$ norm.
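
The following sketch (ours, after [33]) shows IRLS for the $\ell_1$ regression criterion $\rho(\cdot)=|\cdot|$, where the weights $w_i=1/(|r_i|+\epsilon)$ are rebuilt from the previous residual at every pass; the data and the outlier model are illustrative:

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-8):
    """IRLS for min_x sum_i |r_i(x)|, r = Ax - b, via weighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain least-squares start
    for _ in range(n_iter):
        r = A @ x - b
        sw = np.sqrt(1.0 / (np.abs(r) + eps))  # sqrt of weights w_i ~ 1/|r_i|
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true
b[:10] += 50.0                                 # gross outliers in the data
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(x_ls - x_true))           # noticeably biased by outliers
print(np.linalg.norm(irls_l1(A, b) - x_true))  # close to x_true
```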

Since reweighted minimization enhances sparsity and the chaotic iterative algorithm reduces the computational cost of the generalized inverse $A^\dagger$, we iteratively solve the following weighted $\ell^1$ minimization problem:

$$ \min_u\{\|u\|_\omega : Au=f\}. \qquad (4.3) $$

We refine the chaotic iterative algorithm and obtain a new reweighted $\ell_1$ minimization algorithm as follows:

$$ \begin{cases} f^{k+1}=f^k+(f-Au^k),\\ y^{k+1}=y^k+V_0f^{k+1}-V_0(Ay^k),\\ u^{k+1}=\delta T_{\mu\omega^k}(y^{k+1}),\\ \omega_i^{k+1}=1/(|u_i^{k+1}|+\epsilon),\quad i=1,\ldots,n, \end{cases} \quad k=0,1,2,\ldots, \qquad (4.4) $$

where $y^0=V_0f^0$, $V_0=\alpha A^T$, and $0<\alpha<\frac{2}{\|A\|_2^2}$.
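
A minimal end-to-end sketch (ours) of iteration (4.4); $V_0=\alpha A^T$ and the initialization follow Step 1 of the algorithm in Section 5, while $\mu$, $\delta$, $\epsilon$, the test matrix, and the fixed iteration count are illustrative choices that need tuning in practice:

```python
import numpy as np

def reweighted_l1(A, f, mu, delta, eps=1e-2, n_iter=3000):
    """Sketch of iteration (4.4); only matrix-vector products are needed."""
    m, n = A.shape
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2    # 0 < alpha < 2/||A||_2^2
    V0 = alpha * A.T                           # V0 = alpha * A^T
    fk = np.zeros(m)                           # f^0 = 0 (Step 1 of Section 5)
    y = np.zeros(n)                            # y^0 = V0 f^0 = 0
    u = np.zeros(n)
    w = np.ones(n)                             # initial weights omega^0
    for _ in range(n_iter):
        fk = fk + (f - A @ u)                  # f^{k+1} = f^k + (f - A u^k)
        y = y + V0 @ fk - V0 @ (A @ y)         # y^{k+1} = y^k + V0 f^{k+1} - V0(A y^k)
        u = delta * np.sign(y) * np.maximum(np.abs(y) - mu * w, 0.0)
        w = 1.0 / (np.abs(u) + eps)            # omega_i^{k+1} = 1/(|u_i^{k+1}| + eps)
    return u

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, 2)
x = np.zeros(100)
x[[3, 17, 42]] = [1.5, -2.0, 1.0]              # sparse ground truth
u = reweighted_l1(A, A @ x, mu=1.0, delta=0.9)
print(np.linalg.norm(u - x))                   # small when the parameters suit the data
```

In practice the fixed iteration count would be replaced by the relative-change stopping rule of Step 3 in Section 5.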

5 Numerical experiments

In this section, we test the reweighted $\ell_1$ minimization algorithm on problem (4.3). We use the 'Word' image, a 256×256 sparse image. In our experiments we test several kinds of blurring kernels, including disk, Gaussian, and motion kernels. We compare the different algorithms through both visual effects and quality measurements. Here, the quality of restoration is measured by the signal-to-noise ratio (SNR), defined by

$$ \mathrm{SNR}=10\times\ln\frac{\sum_{i=1}^m\sum_{j=1}^n\big(u(i,j)-\operatorname{mean}(u)\big)^2}{\sum_{i=1}^m\sum_{j=1}^n\big(u(i,j)-u_0(i,j)-\operatorname{mean}(u-u_0)\big)^2}, \qquad (5.1) $$

where $u$, $u_0$, and $\operatorname{mean}(\cdot)$ are the restored image, the original image, and the averaging operator, respectively.
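
For reference, a direct transcription (ours) of (5.1) in Python; note that, as printed, the formula uses the natural logarithm:

```python
import numpy as np

def snr(u, u0):
    """SNR of (5.1): u is the restored image, u0 the original image."""
    num = np.sum((u - np.mean(u)) ** 2)
    den = np.sum((u - u0 - np.mean(u - u0)) ** 2)
    return 10.0 * np.log(num / den)            # natural log, as in (5.1)
```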

Our code is written in MATLAB and run on a Windows PC with an Intel(R) Core(TM) 2 Duo CPU T8100 @ 2.10 GHz and 1.5 GB of memory. The MATLAB version is 7.1.

Reweighted $\ell_1$ minimization algorithm:

Step 1. Set $u^0=0$, $f^0=0$, $y^0=V_0f^0$, $V_0=\alpha A^T$, $0<\alpha<\frac{2}{\|A\|_2^2}$, $0<\delta<1$, and choose the parameter $\mu$.

Step 2. Generate the sequence $\{u^k\}_{k\in\mathbb{N}}$ by (4.4).

Step 3. Stop when $\frac{\|u^{k+1}-u^k\|}{\|u^k\|}<\epsilon$.

We compare the performance of the reweighted $\ell_1$ minimization algorithm, the chaotic iterative algorithm, the $A^T$ Bregman iteration, and the $A^\dagger$ Bregman iteration (with $A^\dagger$ computed by pinv(A) in MATLAB).

In the first experiment, the images were blurred with a 'disk' kernel of hsize = 15. The blurry and restored images are presented in Figure 1. Comparing these algorithms, it is clear that the reweighted $\ell_1$ minimization algorithm performs better in terms of SNR than the chaotic iterative algorithm and the $A^T$ Bregman iteration, while it is a little slower than both, which is still acceptable.

Figure 1. Deblurring results of the 256×256 sparse Word image convolved by a 15×15 disk kernel generated by the MATLAB command fspecial('disk',7). Upper left: original image; upper middle: blurred image. The other three are images reconstructed, respectively, by the $A^T$ Bregman iteration, the reweighted $\ell_1$ minimization algorithm, and the chaotic iteration.

In the second experiment the images were blurred with a 'Gaussian' kernel of hsize = 7. The results are shown in Figure 2. The comparison of restoration quality and computing time is essentially the same as in the first experiment.

Figure 2. Deblurring results of the 256×256 sparse Word image convolved by a 7×7 Gaussian kernel generated by the MATLAB command fspecial('gaussian',7,15). Upper left: original image; upper middle: blurred image. The other three are images reconstructed, respectively, by the $A^T$ Bregman iteration, the reweighted $\ell_1$ minimization algorithm, and the chaotic iteration.

In the third experiment we used a part of the Word image blurred with a 3×5 'motion' kernel, to better show the local detail of the recovered image. The small sparse Word images restored by the reweighted $\ell_1$ minimization algorithm, the chaotic iterative algorithm, the $A^T$ Bregman iteration, and the $A^\dagger$ Bregman iteration are plotted in Figure 3. Again we reach a conclusion similar to that of the previous experiments.

Figure 3. Deblurring results of a 64×80 part of the sparse Word image convolved by a 3×5 motion kernel generated by the MATLAB command fspecial('motion',5,7). Upper left: original image; upper middle: blurred image. The other four are images reconstructed, respectively, by the $A^T$ Bregman iteration, the reweighted $\ell_1$ minimization algorithm, the $A^\dagger$ Bregman iteration, and the chaotic iteration.

In fact, a complexity analysis also yields a comparison of the methods. Let $K$ be the common number of loop iterations. The workload of the $A^\dagger$ algorithm (2.11) consists of two parts: computing $A^\dagger$ and running the loop. Computing $A^\dagger$ costs $O(n^3)$ when $m<n$, since the singular value decomposition $A=USV^T$, $A^\dagger=VS^\dagger U^T$ involves matrix-matrix multiplications and eigenvalue computations. The loop of (2.11) costs $O(mnK)$, because it contains only matrix-vector multiplications. Therefore, the total workload of the $A^\dagger$ algorithm (2.11) is $O(n^3)+O(mnK)$. The chaotic iteration (4.1), the reweighted $\ell_1$ minimization algorithm (4.4), and the $A^T$ Bregman iteration (2.8) each cost $O(mnK)$. Hence the workload of the $A^\dagger$ algorithm (2.11) exceeds that of the other three algorithms by the $O(n^3)$ term.

All the experimental data are listed in Table 1. In summary, in terms of restoration quality we have Reweighted > Chaotic > $A^\dagger$ > $A^T$, while the computing times of Reweighted : Chaotic : $A^\dagger$ : $A^T$ are roughly in the ratio $1:1:10^2:1$. The numerical examples illustrate that the new reweighted $\ell_1$ minimization algorithm is a fast, efficient, and very useful method for image deblurring.

Table 1 The comparison of different algorithms

6 Conclusion

In this paper, we propose a reweighted $\ell_1$ minimization algorithm for image deblurring. The numerical results show that the improvement in the recovered images is evident; in particular, the method is stable and effective even under severe blurring, when details are difficult to recover. In addition, the efficiency of the reweighted $\ell_1$ minimization algorithm can be further improved by combining it with the 'kicking' technique. Moreover, owing to the scale factor and the cost of computing $A^\dagger$, the method proposed in this paper lends itself to parallel implementation, yielding an even better algorithm.

References

1. Chan TF, Shen J: Image Processing and Analysis. SIAM, Philadelphia; 2005.

2. Aubert G, Kornprobst P: Mathematical Problems in Image Processing. 2nd edition. Appl. Math. Sci. 147. Springer, New York; 2006.

3. Andrews HC, Hunt BR: Digital Image Restoration. Prentice Hall, Englewood Cliffs; 1977.

4. Rudin L, Osher S, Fatemi E: Nonlinear total variation based noise removal algorithms. Physica D 1992, 60: 259–268. doi:10.1016/0167-2789(92)90242-F

5. Dobson DC, Santosa F: Recovery of blocky images from noise and blurred data. SIAM J. Appl. Math. 1996, 56: 1181–1198. doi:10.1137/S003613999427560X

6. Nikolova M: Local strong homogeneity of a regularized estimator. SIAM J. Appl. Math. 2000, 61: 633–658. doi:10.1137/S0036139997327794

7. Buades A, Coll B, Morel JM: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 2005, 4: 490–530. doi:10.1137/040616024

8. Tomasi C, Manduchi R: Bilateral filtering for gray and color images. In Proceedings of the 1998 IEEE International Conference on Computer Vision, Bombay, India; 1998.

9. Buades A, Coll B, Morel JM: Image enhancement by non-local reverse heat equation. CMLA Tech. Rep. 22; 2006.

10. Osher S, Burger M, Goldfarb D, Xu J, Yin W: An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4: 460–489. doi:10.1137/040605412

11. He L, Marquina A, Osher S: Blind deconvolution using TV regularization and Bregman iteration. Int. J. Imaging Syst. Technol. 2005, 15: 74–83. doi:10.1002/ima.20040

12. Marquina A: Inverse scale space methods for blind deconvolution. UCLA-CAM-Report 06-36; 2006.

13. Marquina A, Osher S: Image super-resolution by TV-regularization and Bregman iteration. J. Sci. Comput. 2008, 37: 367–382. doi:10.1007/s10915-008-9214-8

14. Gilboa G, Osher S: Nonlocal linear image regularization and supervised segmentation. Multiscale Model. Simul. 2007, 6: 595–630. doi:10.1137/060669358

15. Lou Y, Zhang X, Osher S, Bertozzi A: Image recovery via nonlocal operators. UCLA-CAM-Report 08-35; 2008.

16. Coifman RR, Donoho DL: Translation-invariant de-noising. In Wavelets and Statistics. Lecture Notes in Statistics 103. Edited by Antoniadis A, Oppenheim G. Springer, New York; 1995.

17. Hale E, Yin W, Zhang Y: A fixed-point continuation method for $l_1$-regularization with application to compressed sensing. CAAM-TR07-07; 2007.

18. Yin W, Osher S, Goldfarb D, Darbon J: Bregman iterative algorithms for $l_1$-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 2008, 1: 143–168. doi:10.1137/070703983

19. Cai J, Osher S, Shen Z: Linearized Bregman iterations for compressed sensing. Math. Comput. 2009, 78(267): 1515–1536. doi:10.1090/S0025-5718-08-02189-3

20. Cai J, Osher S, Shen Z: Convergence of the linearized Bregman iteration for $l_1$-norm minimization. Math. Comput. 2009, 78(268): 2127–2136. doi:10.1090/S0025-5718-09-02242-X

21. Osher S, Mao Y, Dong B, Yin W: Fast linearized Bregman iteration for compressive sensing and sparse denoising. UCLA-CAM-Report 08-37; 2008.

22. Cai J, Osher S, Shen Z: Linearized Bregman iterations for frame-based image deblurring. SIAM J. Imaging Sci. 2009, 2(1): 226–252. doi:10.1137/080733371

23. Goldstein T, Osher S: The split Bregman method for $L_1$-regularized problems. SIAM J. Imaging Sci. 2009, 2(2): 323–343. doi:10.1137/080725891

24. Cai J, Osher S, Shen Z: Split Bregman method and frame based image restoration. Multiscale Model. Simul. 2009, 8(2): 337–369.

25. Wu C, Tai X: Augmented Lagrangian method, dual methods, and split Bregman iteration for ROF, vectorial TV, and high order models. SIAM J. Imaging Sci. 2010, 3(3): 300–339. doi:10.1137/090767558

26. Yang Y, Möller M, Osher S: A dual split Bregman method for fast $l_1$ minimization. UCLA-CAM-Report 11-57; 2011.

27. Zhang H, Cheng L: A linearized Bregman iteration algorithm. Math. Numer. Sin. 2010, 32: 97–104 (in Chinese).

28. Candès EJ, Wakin MB, Boyd SP: Enhancing sparsity by reweighted $\ell_1$ minimization. J. Fourier Anal. Appl. 2008, 14(5): 877–905.

29. Wang G, Wei Y, Qiao S: Generalized Inverses: Theory and Computations. Science Press, Beijing; 2004.

30. Wang S, Yang Z: Generalized Inverse Matrix and Its Applications. Beijing University of Technology Press, Beijing; 1996.

31. Bregman L: The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. USSR Comput. Math. Math. Phys. 1967, 7(3): 200–217. doi:10.1016/0041-5553(67)90040-7

32. Donoho D: De-noising by soft-thresholding. IEEE Trans. Inf. Theory 1995, 41: 613–627. doi:10.1109/18.382009

33. Schlossmacher EJ: An iterative technique for absolute deviations curve fitting. J. Am. Stat. Assoc. 1973, 68: 857–859. doi:10.1080/01621459.1973.10481436

34. Zhao YB, Li D: Reweighted $\ell_1$-minimization for sparse solutions to underdetermined linear systems. SIAM J. Optim. 2012, 22(3): 1065–1088. doi:10.1137/110847445


Acknowledgements

This research was partly supported by the Fund of the Oceanic Telemetry Engineering and Technology Research Center, State Oceanic Administration (grant no. 2012003), the NSFC (grant nos. 60971132, 61101208), and the Fundamental Research Funds for the Central Universities (grant no. 13CX02086A).

Author information


Corresponding author

Correspondence to Tiantian Qiao.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Qiao, T., Wu, B., Li, W. et al. A new reweighted $\ell_1$ minimization algorithm for image deblurring. J. Inequal. Appl. 2014, 238 (2014). https://doi.org/10.1186/1029-242X-2014-238
