$$F(x) = y, \tag{1}$$

where $F \colon \mathcal{D}(F) \to Y$ with domain $\mathcal{D}(F) \subseteq X$. The exposition will be mainly restricted to the case of $X$ and $Y$ being Hilbert spaces with inner products $\langle \cdot , \cdot \rangle$ and norms $\| \cdot \|$. Some references for the Banach space case will be given.

We will assume attainability of the exact data $y$ in a ball $\mathcal{B}_{\rho}(x_0)$, i.e., that the equation $F(x) = y$ is solvable in $\mathcal{B}_{\rho}(x_0)$. The element $x_0$ is an initial guess which may incorporate a priori knowledge of an exact solution. The actually available data $y^{\delta}$ will in practice usually be contaminated with noise, for which we here use a deterministic model, i.e.,

$$\| y - y^{\delta} \| \le \delta, \tag{2}$$

where the noise level $\delta$ is assumed to be known. For a convergence analysis with stochastic noise, see the references in section "Further Literature on Gauss–Newton Type Methods".
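As a minimal numerical sketch of the deterministic noise model (2) — a hypothetical finite-dimensional toy example, with all names illustrative — one can rescale a random perturbation so that the noisy data satisfy the bound with a known noise level:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean data y: a finite-dimensional stand-in for an element of the Hilbert space Y
y = np.array([1.0, 2.0, 3.0])

# Prescribed noise level delta, assumed known
delta = 0.05

# Draw a random perturbation and rescale it to have norm exactly delta,
# so the deterministic noise model  ||y - y_delta|| <= delta  holds by construction
e = rng.standard_normal(y.shape)
y_delta = y + delta * e / np.linalg.norm(e)

assert np.linalg.norm(y - y_delta) <= delta + 1e-12
```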

## 2 Preliminaries

### Conditions on F

For the proofs of well-definedness and local convergence of the iterative methods considered here we need several conditions on the operator $F$. Basically, we inductively show that the iterates remain in a neighborhood of the initial guess. Hence, to guarantee applicability of the forward operator to these iterates, we assume that

$$\mathcal{B}_{2\rho}(x_0) \subseteq \mathcal{D}(F) \tag{3}$$

for some $\rho > 0$.

Moreover, we need that $F$ is continuously Fréchet-differentiable, that $F'$ is uniformly bounded on $\mathcal{B}_{2\rho}(x_0)$, and that problem (1) is properly scaled, i.e., certain parameters occurring in the iterative methods have to be chosen appropriately in dependence of this uniform bound.
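A common instance of such a scaling condition — used here as an illustrative assumption, e.g. for Landweber-type methods — is $\|F'(x)\| \le 1$ on the relevant ball. For a linear toy problem (all matrices and data below are hypothetical) this can be enforced by rescaling operator and data by the same constant:

```python
import numpy as np

# Toy linear forward operator F(x) = A x; then F'(x) = A for all x
A = np.array([[3.0, 0.0],
              [1.0, 2.0]])
y = np.array([3.0, 3.0])

# Spectral norm of the derivative (a uniform bound, since F is linear here)
op_norm = np.linalg.norm(A, 2)

# Rescale operator and data: F_s(x) = A x / op_norm,  y_s = y / op_norm.
# The rescaled problem has the same solutions, but now ||F_s'(x)|| = 1.
A_s = A / op_norm
y_s = y / op_norm

assert abs(np.linalg.norm(A_s, 2) - 1.0) < 1e-12
# The solution set is unchanged: x solves A x = y iff it solves A_s x = y_s
x = np.linalg.solve(A, y)
assert np.allclose(A_s @ x, y_s)
```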

The assumption that $F'$ is Lipschitz continuous,

$$\| F'(x) - F'(\tilde{x}) \| \le L \, \| x - \tilde{x} \|, \quad x, \tilde{x} \in \mathcal{B}_{2\rho}(x_0), \tag{4}$$

that is often used to show convergence of iterative methods for well-posed problems, implies that

$$\| F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x) \| \le \frac{L}{2} \, \| \tilde{x} - x \|^2. \tag{5}$$

However, this Taylor remainder estimate is too weak for the ill-posed situation unless the solution is sufficiently smooth (see, e.g., case (ii) in Theorem 9 below). An assumption on $F$ that can often be found in the literature on nonlinear ill-posed problems is the tangential cone condition

$$\| F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x) \| \le \eta \, \| F(\tilde{x}) - F(x) \|, \quad x, \tilde{x} \in \mathcal{B}_{2\rho}(x_0), \tag{6}$$

which implies that

$$\frac{1}{1+\eta} \, \| F'(x)(\tilde{x} - x) \| \le \| F(\tilde{x}) - F(x) \| \le \frac{1}{1-\eta} \, \| F'(x)(\tilde{x} - x) \|$$

for all $x, \tilde{x} \in \mathcal{B}_{2\rho}(x_0)$. One can even prove the following (see [70, Proposition 2.1]).
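Writing the tangential cone condition (6) as $\| F(\tilde{x}) - F(x) - F'(x)(\tilde{x} - x) \| \le \eta \, \| F(\tilde{x}) - F(x) \|$ with a constant $\eta < 1$, the equivalence of residual and linearization up to constants follows by the triangle inequality:

```latex
% Upper bound for the linearization:
\|F'(x)(\tilde{x}-x)\|
  \le \|F(\tilde{x})-F(x)\| + \|F(\tilde{x})-F(x)-F'(x)(\tilde{x}-x)\|
  \le (1+\eta)\,\|F(\tilde{x})-F(x)\|.
% Lower bound:
\|F(\tilde{x})-F(x)\|
  \le \|F'(x)(\tilde{x}-x)\| + \eta\,\|F(\tilde{x})-F(x)\|
  \;\Longrightarrow\;
  (1-\eta)\,\|F(\tilde{x})-F(x)\| \le \|F'(x)(\tilde{x}-x)\|.
```

Combining the two bounds gives $\frac{1}{1+\eta}\|F'(x)(\tilde{x}-x)\| \le \|F(\tilde{x})-F(x)\| \le \frac{1}{1-\eta}\|F'(x)(\tilde{x}-x)\|$.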

Proposition 1.

Let $\rho > 0$ be such that (6) holds with some $0 < \eta < \tfrac{1}{2}$ in $\mathcal{B}_{\rho}(x_0) \subseteq \mathcal{D}(F)$.

(i)

Then for all $x, \tilde{x} \in \mathcal{B}_{\rho}(x_0)$,

$$F(\tilde{x}) = F(x) \iff \tilde{x} - x \in \mathcal{N}(F'(x)),$$

and $\mathcal{N}(F'(\tilde{x})) = \mathcal{N}(F'(x))$ for all $\tilde{x}$ with $F(\tilde{x}) = F(x)$. Moreover,

$$M_x := \{ \tilde{x} \in \mathcal{B}_{\rho}(x_0) : F(\tilde{x}) = F(x) \} \subseteq x + \mathcal{N}(F'(x)),$$

where instead of "$\subseteq$" equality holds if $x + \mathcal{N}(F'(x)) \subseteq \mathcal{B}_{\rho}(x_0)$.

(ii)

If $F(x) = y$ is solvable in $\mathcal{B}_{\rho}(x_0)$, then a unique $x_0$-minimum-norm solution $x^{\dagger}$ exists. It is characterized as the solution of $F(x) = y$ in $\mathcal{B}_{\rho}(x_0)$ satisfying the condition

$$x^{\dagger} - x_0 \in \mathcal{N}(F'(x^{\dagger}))^{\perp}. \tag{7}$$
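For a linear operator the characterization (7) can be verified directly. A minimal sketch under the assumption $F(x) = Ax$ (so that $F'(x) = A$ for every $x$; the matrix and data below are a hypothetical toy example):

```python
import numpy as np

# Hypothetical linear toy problem: F(x) = A x, hence F'(x) = A for every x
A = np.array([[1.0, 1.0]])   # rank-deficient: nullspace N(A) = span{(1, -1)}
y = np.array([2.0])          # attainable exact data
x0 = np.array([1.0, 0.0])    # initial guess

# x0-minimum-norm solution: minimizes ||x - x0|| over all solutions of A x = y.
# For linear problems it is x0 plus the Moore-Penrose pseudoinverse correction.
x_dag = x0 + np.linalg.pinv(A) @ (y - A @ x0)

# x_dag solves the equation ...
assert np.allclose(A @ x_dag, y)

# ... and satisfies characterization (7): x_dag - x0 is orthogonal to N(A)
n = np.array([1.0, -1.0])    # basis vector of the nullspace of A
assert abs((x_dag - x0) @ n) < 1e-12
```

Among all solutions $x = (t, 2 - t)$ of the toy equation, the computed $x^{\dagger} = (1.5, 0.5)$ is indeed the one closest to $x_0$.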

If $F(x) = y$ is solvable in $\mathcal{B}_{\rho}(x_0)$ but a condition like (6) is not satisfied, then at least existence (but no uniqueness) of an $x_0$-minimum-norm solution is guaranteed, provided that $F$ is weakly sequentially closed (see [36, Chapter 10]).

For the proofs of convergence rates one even needs stronger conditions on $F'$ than condition (6).