
Iterative Methods for Ill-Posed Problems: An Introduction

Anatoly B. Bakushinsky, Mihail Yu. Kokurin, and Alexandra Smirnova
Publisher: Walter de Gruyter
Publication Date: 2011
Number of Pages: 136
Format: Hardcover
Series: Inverse and Ill-Posed Problems Series 54
Price: 98.00
ISBN: 9783110250640
Category: Monograph
[Reviewed by Brian Borchers, on 10/31/2011]

Many problems in applied mathematics boil down to the solution of a linear operator equation of the form Kf = g, where K is a known linear operator and the given data g and the unknown function f lie in specified Hilbert spaces. In some cases we must also consider a more general nonlinear operator equation K(f) = g. In practice the data g is often not exact. In that case, we could simply minimize ‖Kf – g‖ rather than trying to solve Kf = g, but a further complication arises. These problems are typically ill-posed, in the sense that an arbitrarily small change in g can lead to an arbitrarily large change in the solution. Contrast this with linear least squares problems on finite dimensional spaces, where the condition number of the matrix provides a bound on the sensitivity of the least squares solution to changes in the data.
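
To see what this ill-posedness looks like numerically, here is a small sketch in Python/NumPy (a toy example of my own, not one from the book): the discretized smoothing operator K below is so badly conditioned that a data perturbation at the 10^-3 level destroys the naively computed least squares solution.

    import numpy as np

    # Toy discretization of a smoothing (Gaussian-kernel) integral operator;
    # illustrative only, not an example taken from the book.
    n = 100
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2)) / n

    f_true = np.sin(2 * np.pi * t)              # "unknown" function used to manufacture data
    g_noisy = K @ f_true + 1e-3 * np.random.randn(n)   # exact data plus a small perturbation

    print("condition number of K:", np.linalg.cond(K))

    # The unregularized least squares solution amplifies the perturbation enormously.
    f_naive = np.linalg.lstsq(K, g_noisy, rcond=None)[0]
    rel_err = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
    print("relative error of naive solution:", rel_err)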

Tikhonov regularization stabilizes the solution of these ill-posed operator equations by minimizing the weighted sum of a strictly convex regularization term and ‖Kf – g‖. The regularization term stabilizes the solution of the problem at the expense of biasing it. By adjusting the regularization weight, we can balance bias and stability to find an optimal solution. Under appropriate conditions, it can be shown that as the noise in g is reduced to 0, the weight on the regularization term can also be reduced to 0 to ensure convergence to a solution of the original problem. In practice we often have to deal with a fixed level of noise. If the noise level is known, then a priori rules are available for selecting an optimal regularization parameter. If, as is often the case in practice, the noise level is unknown, then other a posteriori rules can be used to pick a suitable regularization parameter.
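
As a rough illustration of this bias/stability trade-off, the sketch below (using the same assumed toy operator as above, not an example from the book) computes the Tikhonov minimizer of ‖Kf – g‖² + α‖f‖² from the regularized normal equations and sweeps the weight α.

    import numpy as np

    # Same assumed toy smoothing operator as in the previous sketch.
    n = 100
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2)) / n
    f_true = np.sin(2 * np.pi * t)
    g_noisy = K @ f_true + 1e-3 * np.random.randn(n)

    def tikhonov(K, g, alpha):
        # Minimizer of ||K f - g||^2 + alpha * ||f||^2 via the normal equations.
        return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ g)

    # A large alpha over-smooths (bias); a tiny alpha lets the noise back in
    # (instability); an intermediate weight balances the two.
    for alpha in (1e-1, 1e-4, 1e-10):
        f_alpha = tikhonov(K, g_noisy, alpha)
        err = np.linalg.norm(f_alpha - f_true) / np.linalg.norm(f_true)
        print(f"alpha = {alpha:.0e}   relative error = {err:.2e}")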

An alternative to Tikhonov regularization that is often used in practice is an iterative algorithm such as the Landweber iteration. Because these methods are unstable, it is necessary to terminate the algorithm after a relatively small number of iterations. The number of iterations at which the procedure is terminated effectively serves as the regularization parameter for these methods.
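
The following sketch (again my assumed toy problem, not the book's) runs the Landweber iteration f_{k+1} = f_k + βK*(g – Kf_k) and prints the error at a few checkpoints; the stopping index is what controls the amount of regularization.

    import numpy as np

    # Same assumed toy smoothing operator as in the earlier sketches.
    n = 100
    t = np.linspace(0.0, 1.0, n)
    K = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05**2)) / n
    f_true = np.sin(2 * np.pi * t)
    g_noisy = K @ f_true + 1e-3 * np.random.randn(n)

    # Landweber iteration: f_{k+1} = f_k + beta * K^T (g - K f_k),
    # stable for step sizes 0 < beta < 2 / ||K||^2.
    beta = 1.0 / np.linalg.norm(K, 2) ** 2
    f = np.zeros(n)
    for k in range(1, 20001):
        f = f + beta * (K.T @ (g_noisy - K @ f))
        if k in (10, 100, 1000, 10000, 20000):
            err = np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
            print(f"k = {k:6d}   relative error = {err:.3f}")
    # Iterating too long lets the noise creep back into the iterates, which is
    # why the method is stopped early in practice (e.g., by a discrepancy criterion).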

As the title indicates, this book is a research monograph that introduces the reader to iterative methods for ill-posed problems. The book begins with a review of Newton’s method, the Gauss-Newton method, and the method of steepest descent (called the gradient method by the authors). The authors also give a brief presentation of Tikhonov regularization. This sets the stage for the introduction of iterative regularization schemes in which Tikhonov regularization is combined with an iterative algorithm. In each outer iteration of this procedure a Tikhonov regularization parameter is set and a fixed number of Gauss-Newton iterations are executed; the regularization parameter is then reduced and another outer iteration begins. The authors introduce and analyze the convergence of an iteratively regularized Gauss-Newton method. The book concludes with three chapters in which the authors present computational examples.
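
The sketch below shows one way such a scheme can look; the toy nonlinear forward map F, the geometric reduction of α, and the single regularized Gauss-Newton step per value of α are my assumptions and not the book's exact formulation.

    import numpy as np

    # Assumed toy nonlinear problem: recover f from g = F(f) = A exp(f),
    # a mildly ill-conditioned forward map; not an example from the book.
    n = 50
    idx = np.arange(n)
    A = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0) / n

    def F(f):
        return A @ np.exp(f)

    def jacobian(f):
        # dF_i/df_j = A_ij * exp(f_j): scale the columns of A.
        return A * np.exp(f)[None, :]

    t = np.linspace(0.0, 1.0, n)
    f_true = np.sin(np.pi * t)
    g_noisy = F(f_true) + 1e-4 * np.random.randn(n)

    # Iteratively regularized Gauss-Newton: one linearized, Tikhonov-regularized
    # least squares step per value of alpha, after which alpha is reduced:
    #   f <- f - (J^T J + alpha I)^{-1} [ J^T (F(f) - g) + alpha (f - f0) ]
    f0 = np.zeros(n)          # a priori guess, also the center of the penalty term
    f = f0.copy()
    alpha = 1.0
    for k in range(20):
        Jk = jacobian(f)
        rhs = Jk.T @ (F(f) - g_noisy) + alpha * (f - f0)
        f = f - np.linalg.solve(Jk.T @ Jk + alpha * np.eye(n), rhs)
        alpha *= 0.5          # reduce the regularization weight each outer iteration

    print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))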

This book will be of interest to graduate students and researchers who want to implement iterative methods for the solution of ill-posed problems. The computational examples in chapters thirteen through fifteen will be particularly helpful to readers who plan to implement these methods themselves. The theory in chapters one through twelve is clear but quite concise. For readers looking for a broader background in ill-posed inverse problems and Tikhonov regularization, the book by Engl, Hanke, and Neubauer [1] would be a more appropriate starting point.


Reference:

[1] Heinz W. Engl, Martin Hanke, and Andreas Neubauer. Regularization of Inverse Problems. Kluwer, Dordrecht, 2000.


Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and inverse problems.