Imaging



$$\displaystyle\begin{array}{rcl} \frac{\partial ^{2}u} {\partial t^{2}} (x,t) = c^{2}(x)\varDelta u(x,t),\quad x \in \varOmega,\;0 < t < T,& &{}\end{array}$$

(1)




$$\displaystyle\begin{array}{rcl} \frac{\partial u} {\partial \nu } = q(t)\delta _{\varGamma,s}(x),\;x \in \varGamma,\;0 < t < T,& &{}\end{array}$$

(2)



$$\displaystyle\begin{array}{rcl} u(x,t) = 0,\;t < 0& &{}\end{array}$$

(3)
with ν the exterior normal on Γ. We consider the inverse problem of determining c from the values of u(x, t) on Γ × (0, T), or on part of it. The function q represents the source pulse, and $$\delta _{\varGamma,s}$$ is the Dirac δ-function on Γ concentrated at the source s ∈ Γ. We always assume that $$c^{2}(x) = c_{0}^{2}(x)/(1 + f(x))$$ with the (known) background velocity $$c_{0}$$ and a function f > −1 which has to be determined.

We remark that everything we are doing in this paper can also be done for more general problems, such as problems with varying density [26], in moving media [61] and problems with attenuation [4, 20, 40].



3 The Born Approximation


Even though the underlying differential equation is linear, the inverse problem is nonlinear. Most of the literature on the problem makes use of some kind of linearization, mostly the Born approximation. Our goal is to deal with the fully nonlinear problem. However, linearization also leads to valuable insights for the nonlinear problem. Therefore we discuss the Born approximation in some detail.


An Explicit Formula for the Slab



We restrict ourselves to a very simple geometry: Ω is just the slab $$0 < x_{n} < D$$. Sources and receivers sit either on both boundaries (transmission mode) or on only one of them (reflection mode). It is more convenient to work with the inhomogeneous initial value problem


$$\displaystyle\begin{array}{rcl} \frac{\partial ^{2}u} {\partial t^{2}} (x,t) = c^{2}(x)\left (\varDelta u(x,t) + q(t)\delta (x - s)\right ),\quad x \in R^{n},\;0 < t < T,& &{}\end{array}$$

(4)



$$\displaystyle\begin{array}{rcl} u(x,t) = 0,\quad t < 0& &{}\end{array}$$

(5)
with δ the Dirac function in $$R^{n}$$ and s a source. In this section we also assume the background velocity to be constant, $$c_{0}(x) = c_{0} > 0$$. Equations (4), (5) are a simplified version of (1)–(3).

We also look at the problem in the frequency domain. For the Fourier transform in $$R^{n}$$, we use the notation


$$\displaystyle{\hat{f}(\xi ) = (2\pi )^{-n/2}\int _{ R^{n}}e^{-ix\cdot \xi }f(x)dx}$$
and we denote the inverse Fourier transform of f by $$\tilde{f}$$. Doing an inverse Fourier transform with respect to t in (4), we get


$$\displaystyle{ \varDelta \tilde{u}(x,\omega ) + k^{2}(1 + f)\tilde{u}(x,\omega ) = -\tilde{q}(\omega )\delta (x - s) }$$

(6)
where $$k =\omega /c_{0}$$ is the wave number. Because of (5) this equation has to be complemented by the Sommerfeld radiation condition for outgoing radiation, i.e.,


$$\displaystyle{ \frac{\partial \tilde{u}} {\partial \vert x\vert }- ik\tilde{u} \rightarrow 0,\vert x\vert \rightarrow \infty.}$$
In this section we derive an explicit representation of the Born approximation by Fourier analysis. We follow [14, 42].

For f = 0, the solution of (6) is $$\tilde{q}(\omega )G_{k}(x - s)$$ with $$G_{k}$$ the free-space Green’s function, which we give in the plane wave decomposition


$$\displaystyle{ G_{k}(x) = ic_{n}\int _{R^{n-1}}e^{i(\vert x_{n}\vert \kappa (\xi ^{{\prime}})-x^{{\prime}}\cdot \xi ^{{\prime}}) } \frac{d\xi ^{{\prime}}} {\kappa (\xi ^{{\prime}})} }$$

(7)
where $$c_{2} = 1/(4\pi ),\;c_{3} = 1/(8\pi ^{2}),\;\kappa (z) = \sqrt{k^{2 } - z^{2}}$$. For $$\vert z\vert > k$$, κ(z) is defined to be $$i\sqrt{z^{2 } - k^{2}}$$.
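For numerical experiments with (7) and the formulas (9), (10) below, it is convenient to have κ available as a function. A minimal sketch, assuming Python with NumPy (the function name is ours):

```python
import numpy as np

def kappa(z, k):
    """kappa(z) = sqrt(k^2 - z^2) for |z| <= k (propagating plane waves) and
    i*sqrt(z^2 - k^2) for |z| > k (evanescent waves); the principal complex
    square root realizes exactly this choice of branch."""
    z = np.asarray(z, dtype=complex)
    return np.sqrt(k**2 - z**2)

# kappa(0.5, 1.0) is real; kappa(2.0, 1.0) is purely imaginary with positive
# imaginary part, so exp(i*|x_n|*kappa) in (7) decays exponentially for these components.
```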

For arbitrary f, we put $$\tilde{u}(x) =\tilde{ q}(\omega )G_{k}(x - s) + v(x,s)$$, obtaining for v


$$\displaystyle{\varDelta _{x}v(x,s) + k^{2}(1 + f(x))v(x,s) = -k^{2}\tilde{q}G_{ k}(x - s)f(x).}$$
Neglecting fv leads to the Born approximation


$$\displaystyle{\varDelta _{x}v(x,s) + k^{2}v(x,s) \approx -k^{2}\tilde{q}(\omega )G_{ k}(x - s)f(x)}$$
or


$$\displaystyle{v(x,s) \approx k^{2}\tilde{q}(\omega )\int _{\varOmega }G_{ k}(x - y)G_{k}(y - s)f(y)dy.}$$
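As a sanity check, the Born integral above can be evaluated numerically by simple quadrature. A minimal sketch for n = 2, assuming Python with NumPy/SciPy; it uses the standard 2D free-space Green’s function $$G_{k}(x) = (i/4)H_{0}^{(1)}(k\vert x\vert )$$ (which is what (7) gives for n = 2), and all concrete numbers (k, D, the Gaussian f, source and receiver positions) are merely illustrative:

```python
import numpy as np
from scipy.special import hankel1  # H_0^{(1)}; (i/4) H_0^{(1)}(k|x|) is the 2D free-space Green's function

def greens_2d(x, y, k):
    """2D free-space Green's function G_k(x - y) = (i/4) H_0^{(1)}(k |x - y|)."""
    r = np.hypot(x[0] - y[0], x[1] - y[1])
    return 0.25j * hankel1(0, k * r)

def born_field(x, s, k, q_hat, f, grid, dA):
    """Riemann-sum evaluation of v(x, s) ~ k^2 q_hat * int G_k(x - y) G_k(y - s) f(y) dy."""
    v = sum(greens_2d(x, y, k) * greens_2d(y, s, k) * f(y) for y in grid)
    return k**2 * q_hat * v * dA

# Illustrative example: weak Gaussian perturbation inside the slab 0 < y_2 < D,
# source and receiver on the boundary x_2 = 0 (reflection geometry).
k, D, q_hat = 10.0, 1.0, 1.0
f = lambda y: 0.05 * np.exp(-(y[0]**2 + (y[1] - 0.5 * D)**2) / 0.01)
y1 = np.linspace(-1.0, 1.0, 81)
y2 = (np.arange(40) + 0.5) * (D / 40)   # interior midpoints, so y never coincides with x or s
dA = (y1[1] - y1[0]) * (D / 40)
grid = [(a, b) for a in y1 for b in y2]
print(born_field((0.3, 0.0), (0.0, 0.0), k, q_hat, f, grid, dA))
```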
With $$x^{{\prime}}$$ the first n − 1 coordinates of x, and correspondingly for $$s^{{\prime}}$$, $$y^{{\prime}}$$, we rewrite the Born approximation as


$$\displaystyle\begin{array}{rcl} & & v(x^{{\prime}},x_{ n},s^{{\prime}},s_{ n}) \approx k^{2}\tilde{q}(\omega )\int _{ 0}^{D}\int _{ R^{n-1}}G_{k}(x^{{\prime}}- y^{{\prime}},x_{ n} - y_{n})f(y^{{\prime}},y_{ n}) {}\\ & & \qquad \qquad \qquad \ \ \qquad \qquad G_{k}(y^{{\prime}}- s^{{\prime}},y_{ n} - s_{n})dy^{{\prime}}dy_{ n}. {}\\ \end{array}$$
Inserting (7) we get


$$\displaystyle\begin{array}{rcl} v(x^{{\prime}},x_{ n},s^{{\prime}},s_{ n})& \approx & -c_{n}^{2}k^{2}\tilde{q}(\omega )\int _{ 0}^{D}\int _{ R^{n-1}}\int _{R^{n-1}}e^{i(\vert x_{n}-y_{n}\vert \kappa (\xi ^{{\prime}})-(x^{{\prime}}-y^{{\prime}})\cdot \xi ^{{\prime}}) } {}\\ & & \int _{R^{n-1}}f(y^{{\prime}},y_{ n})e^{i(\vert y_{n}-s_{n}\vert \kappa (\sigma ^{{\prime}})-(y^{{\prime}}-s^{{\prime}})\cdot \sigma ^{{\prime}}) } \frac{d\xi ^{{\prime}}d\sigma ^{{\prime}}} {\kappa (\xi ^{{\prime}})\kappa (\sigma ^{{\prime}})}dy^{{\prime}}dy_{ n}. {}\\ \end{array}$$
The $$y^{{\prime}}$$ integral is just an (n − 1)-dimensional Fourier transform, hence


$$\displaystyle\begin{array}{rcl} & & \qquad v(x^{{\prime}},x_{ n},s^{{\prime}},s_{ n}) \approx -(2\pi )^{n-1}k^{2}c_{ n}^{2}\tilde{q}(\omega )\int _{ 0}^{D}\int _{ R^{n-1}}\int _{R^{n-1}} \\ & & e^{i(\vert x_{n}-y_{n}\vert \kappa (\xi ^{{\prime}})+\vert y_{ n}-s_{n}\vert \kappa (\sigma ^{{\prime}})-x^{{\prime}}\cdot \xi ^{{\prime}}+s^{{\prime}}\cdot \sigma ^{{\prime}}) }\hat{f}(\sigma ^{{\prime}}-\xi ^{{\prime}},y_{ n}) \frac{d\xi ^{{\prime}}d\sigma ^{{\prime}}} {\kappa (\xi ^{{\prime}})\kappa (\sigma ^{{\prime}})}dy_{n}.{}\end{array}$$

(8)
where $$\hat{f}$$ is the (n − 1)-dimensional Fourier transform of f with respect to the first n − 1 variables. Fourier transforms of dimension n − 1 with respect to $$x^{{\prime}}$$ and $$s^{{\prime}}$$ yield


$$\displaystyle\begin{array}{rcl} & & \hat{v}(\xi ^{{\prime}},x_{ n},\sigma ^{{\prime}},s_{ n}) \approx -k^{2}c_{ n}^{2}\tilde{q}(\omega )\frac{(2\pi )^{2(n-1)}} {\kappa (\xi ^{{\prime}})\kappa (\sigma ^{{\prime}})} \int _{0}^{D}e^{i\vert x_{n}-y_{n}\vert \kappa (\xi ^{{\prime}})+i\vert y_{ n}-s_{n}\vert \kappa (\sigma ^{{\prime}}) } {}\\ & & \hat{f}(\xi ^{{\prime}} +\sigma ^{{\prime}},y_{ n})dy_{n}. {}\\ \end{array}$$
In particular we have


$$\displaystyle\begin{array}{rcl} \hat{v}(\xi ^{{\prime}},0,\sigma ^{{\prime}},0)& \approx & A\hat{f}(\xi ^{{\prime}} +\sigma ^{{\prime}},-\kappa (\xi ^{{\prime}}) -\kappa (\sigma ^{{\prime}})),{}\end{array}$$

(9)



$$\displaystyle\begin{array}{rcl} \hat{v}(\xi ^{{\prime}},D,\sigma ^{{\prime}},0)& \approx & Ae^{iD\kappa (\xi ^{{\prime}}) }\hat{f}(\xi ^{{\prime}} +\sigma ^{{\prime}},\kappa (\xi ^{{\prime}}) -\kappa (\sigma ^{{\prime}})){}\end{array}$$

(10)
where $$\hat{f}$$ is now the n-dimensional Fourier transform of f and $$A = -k^{2}\tilde{q}(\omega )c_{n}^{2}(2\pi )^{3n/2-1}/(\kappa (\xi ^{{\prime}})\kappa (\sigma ^{{\prime}}))$$. Equations (9), (10) are the solutions of the inverse problem of reflection and transmission imaging, respectively, in the Born approximation. Note that the derivation of (9), (10) rests entirely on the plane wave decomposition of $$G_{k}$$, which was first used in this context in [14].

Let’s look at the transmission case, i.e., (10). For simplicity, we consider the 2D case. Consider the semicircle of radius k around the origin in the upper half plane. Attach to each point of this semicircle the semicircle of radius k around this point that opens downward. According to (10), $$\hat{f}$$ is determined by the data along each of these semicircles. These semicircles fill the two disks of radius k with midpoints (±k, 0); see Fig. 1. In the reflection case (9) we proceed in the same way, except that we start out from the semicircle in the lower half plane. Now the attached semicircles fill the half-disk of radius 2k in the lower half plane, except for the disks of radius k around (±k, 0).



Fig. 1
Left: Fourier coverage for transmission imaging. Right: Fourier coverage for reflection imaging. k is the wave number. The figures are for n = 2. For n = 3, one has to rotate the figures around the vertical axis
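The coverage statement can be checked numerically by sampling $$\xi ^{{\prime}},\sigma ^{{\prime}} \in [-k,k]$$ and collecting the points at which (9), (10) deliver $$\hat{f}$$. A minimal sketch for n = 2, assuming Python with NumPy:

```python
import numpy as np

def coverage_points(k, n=200):
    """Fourier-space points reached by the Born data (9), (10) for n = 2:
    transmission: (xi' + sigma',  kappa(xi') - kappa(sigma'))
    reflection  : (xi' + sigma', -kappa(xi') - kappa(sigma'))
    with xi', sigma' in [-k, k] (propagating waves only)."""
    xi, sg = np.meshgrid(np.linspace(-k, k, n), np.linspace(-k, k, n))
    kap = lambda z: np.sqrt(k**2 - z**2)
    trans = np.stack([(xi + sg).ravel(), (kap(xi) - kap(sg)).ravel()], axis=1)
    refl = np.stack([(xi + sg).ravel(), (-kap(xi) - kap(sg)).ravel()], axis=1)
    return trans, refl

trans, refl = coverage_points(k=1.0)
# A scatter plot of trans fills the two disks of radius k around (+-k, 0);
# a scatter plot of refl fills the lower half-disk of radius 2k minus those two disks.
print(np.abs(trans).max(axis=0), np.abs(refl[:, 1]).max())
```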

Thus we see a fundamental difference between transmission and reflection imaging: In transmission imaging, low frequency features of f can be recovered irrespective of the frequency content of the source pulse q, whereas in reflection imaging one needs low frequencies in the source pulse to recover low frequency features of f. This is one of the main difficulties in seismic imaging; see [9, 19, 22, 30, 60]. To the best of our knowledge, Fig. 1 first appeared in [35, 64]. The difficulties in reflection imaging without low frequencies will be discussed in section “Missing Low Frequencies”.

We also conclude from (9), (10) that combined transmission and reflection measurements determine $$\hat{f}$$ within the ball of radius 2k. In other words, since $$\lambda = 2\pi /k$$ is the wavelength of the irradiating sources, the spatial resolution according to Shannon’s sampling theorem is $$2\pi /2k =\lambda /2$$. This is a classical result already known to Born.

The numerical evaluation of (9), (10) is by no means trivial. In principle it can be done by Fourier transforms. However, this requires sophisticated software for non-equispaced fast Fourier transforms. Uniform sampling of $$\hat{v}$$ in $$\xi ^{{\prime}}$$ and $$\sigma ^{{\prime}}$$ leads to non-equispaced sampling of $$\hat{f}$$ on a grid of the type shown in Fig. 2. An implementation similar to the filtered backprojection algorithm of X-ray computerized tomography has been given in [14].



Fig. 2
Non-equidistant grid at which $$\hat{f}$$ is given by (9), (10) for n = 2
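A crude alternative to a true non-equispaced FFT is to interpolate the scattered $$\hat{f}$$-samples onto a Cartesian grid and then apply an ordinary inverse FFT. A minimal sketch, assuming Python with NumPy/SciPy; the function name and all discretization choices are ours, and a genuine non-equispaced FFT or the filtered backprojection-type algorithm of [14] is preferable in practice:

```python
import numpy as np
from scipy.interpolate import griddata

def grid_and_invert(points, fhat_vals, k, n=128):
    """Crude stand-in for a non-equispaced FFT: interpolate scattered samples of
    fhat (given at points of the type shown in Fig. 2) onto an n x n Cartesian grid
    covering [-2k, 2k]^2 and apply an ordinary inverse FFT.  Normalization and the
    scaling between grid spacing and physical units are omitted."""
    ax = np.linspace(-2 * k, 2 * k, n)
    XI, ETA = np.meshgrid(ax, ax)
    re = griddata(points, fhat_vals.real, (XI, ETA), method="linear", fill_value=0.0)
    im = griddata(points, fhat_vals.imag, (XI, ETA), method="linear", fill_value=0.0)
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(re + 1j * im)))
```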


An Error Estimate for the Born Approximation



We now assume that Ω is a ball of radius r, that the background speed is constant as in the previous section, and that the illumination is by plane waves. This leads to the problem


$$\displaystyle\begin{array}{rcl} \varDelta u + k^{2}(1 + f)u = 0,& &{}\end{array}$$

(11)



$$\displaystyle\begin{array}{rcl} u = u_{i} + u_{s}& &{}\end{array}$$

(12)
where $$u_{i} = e^{ikx\cdot \theta }$$ is the incoming wave with direction $$\theta \in S^{n-1}$$ and wave number k, and $$u_{s}$$ is the scattered wave satisfying the Sommerfeld radiation condition of exterior radiation. We assume f to be supported in Ω. θ runs through all of $$S^{1}$$ for n = 2 and through a great circle of $$S^{2}$$ for n = 3. The problem is to determine f from $$g_{\theta }(z) = u_{s}(s\theta + z)$$ for some s with |s| > r and $$z \in \theta ^{\perp }$$. The decisive tool for the error estimate is an estimate for the operator


$$\displaystyle{\mathcal{G}_{k}u(x) = k^{2}\int G_{ k}(x - y)u(y)dy}$$
in $$L_{2}(\vert x\vert < r)$$. In [41] it has been shown that


$$\displaystyle{\vert \vert \mathcal{G}_{k}\vert \vert _{L_{2}(\vert x\vert <r)} = kr\gamma _{n}(kr)}$$
where $$\gamma _{n}$$ is a function with $$\bar{\gamma }_{n} =\sup _{(0,\infty )}\gamma _{n} < \infty $$. For instance, $$\bar{\gamma }_{2} \leq 0.8$$. We have $$u_{s} = \mathcal{G}_{k}(f(u_{i} + u_{s}))$$, hence, with $$q = kr\gamma _{n}(kr)$$ and provided $$q\vert \vert f\vert \vert _{L_{\infty }(\vert x\vert <r)} < 1$$,


$$\displaystyle{ \vert \vert u_{s}\vert \vert _{L_{2}(\vert x\vert <r)} \leq \frac{q} {1 - q\vert \vert f\vert \vert _{L_{\infty }(\vert x\vert <r)}}\vert \vert f\vert \vert _{L_{2}(\vert x\vert <r)}. }$$

(13)
We introduce the propagation operator


$$\displaystyle{(U_{\theta }f)(z) =\int G_{k}(s\theta + z - y)f(y)u_{i}(y)dy;}$$
see [14]. For the (n − 1)-dimensional Fourier transform of $$U_{\theta }f$$ in $$\theta ^{\perp }$$, we have


$$\displaystyle{(U_{\theta }f)^{\wedge }(\zeta ) = i\sqrt{ \frac{\pi } {2}}e^{i\vert s\vert \kappa (\zeta )} \frac{1} {\kappa (\zeta )}\hat{f}((\epsilon \kappa (\zeta ) - k)\theta +\zeta )}$$
where $$\kappa (\zeta ) = \sqrt{k^{2 } - \vert \zeta \vert ^{2}}$$ and ε = sgn(s). This is essentially Theorem 3.1 of [50].

We have


$$\displaystyle{k^{-2}g_{\theta } = U_{\theta }f + U_{\theta }\frac{u_{s}} {u_{i}} f}$$
hence


$$\displaystyle{k^{-2}\hat{g}_{\theta }(\zeta ) = i\sqrt{ \frac{\pi } {2}}e^{i\vert s\vert \kappa (\zeta )} \frac{1} {\kappa (\zeta )}\left (\hat{f}((\epsilon \kappa (\zeta ) - k)\theta +\zeta ) + (u_{s}f)^{\wedge }(\epsilon \kappa (\zeta )\theta +\zeta )\right ).}$$
Now let $$\xi \in R^{n}$$ be in the ball |ξ| < 2k. We can find $$\theta \in S^{n-1}$$, $$\zeta \in \theta ^{\perp }$$ with $$\xi = (\epsilon \kappa (\zeta ) - k)\theta +\zeta$$. For any such choice of θ and ζ, the last formula yields


$$\displaystyle{\hat{f}(\xi ) =\hat{ f}_{B}(\xi ) - (u_{s}f)^{\wedge }(\xi +k\theta ),\;\;\;\hat{f}_{ B}(\xi ) = -i\sqrt{\frac{2} {\pi }} e^{-i\vert s\vert \kappa (\zeta )}\kappa (\zeta )k^{-2}\hat{g}_{\theta }(\zeta ).}$$
Since


$$\displaystyle{\vert (u_{s}f)^{\wedge }\vert \leq (2\pi )^{-n/2}\vert \vert u_{ s}\vert \vert _{L_{2}(\vert x\vert <r)}\vert \vert f\vert \vert _{L_{2}(\vert x\vert <r)}}$$
we have for | ξ | ≤ 2k


$$\displaystyle{ \left \vert \hat{f}(\xi ) -\hat{ f}_{B}(\xi )\right \vert \leq (2\pi )^{-n/2} \frac{q} {1 - q\vert \vert f\vert \vert _{L_{\infty }(\vert x\vert <r)}}\vert \vert f\vert \vert _{L_{2}(\vert x\vert <r)}^{2}. }$$

(14)
Thus we have obtained an approximation of second order in f for $$\hat{f}$$ in the ball |ξ| ≤ 2k. This is what we expect from the Born approximation. The restriction to the bandwidth 2k is also natural, as this corresponds to the maximal spatial resolution of $$\pi /k$$.

We see from (14) that the decisive quantity for the Born approximation to be valid is $$q\vert \vert f\vert \vert _{L_{\infty }(\vert x\vert <r)}$$. In the engineering literature, see, e.g., [32], it is well known that this is a measure for the phase shift generated by the object f. Thus we arrive at the conclusion that the Born approximation works if the phase shift is sufficiently small. We also conclude from (14) that the reconstruction process, if restricted to the resolution $$\pi /k$$, is fairly stable: there are no unstable operations in the computation of $$\hat{f}_{B}$$. It has been shown in [53] that this is also the case for the fully nonlinear problem, provided the geodesics behave reasonably. Thus the frequently discussed instability of the inverse scattering problem is just a consequence of an inadequate choice of the frequency of the irradiating waves.
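A quick numerical illustration of this criterion evaluates the right-hand side of (14), with $$\gamma _{2}$$ replaced by its bound 0.8 from [41]; assuming Python with NumPy, and with all concrete numbers hypothetical:

```python
import numpy as np

def born_error_bound(k, r, f_inf, f_l2, n=2):
    """Upper bound for |fhat(xi) - fhat_B(xi)|, |xi| <= 2k, from (14), with gamma_2
    replaced by its supremum bound 0.8.  The product q * f_inf is the decisive
    smallness parameter (roughly the phase shift generated by the object f)."""
    gamma_bar = 0.8                      # valid for n = 2; other n require other constants
    q = k * r * gamma_bar
    if q * f_inf >= 1.0:
        raise ValueError("q * ||f||_inf >= 1: estimates (13), (14) do not apply")
    return (2 * np.pi) ** (-n / 2) * q / (1.0 - q * f_inf) * f_l2**2

# hypothetical numbers: kr = 10 (radius of about 1.6 wavelengths), weak contrast
print(born_error_bound(k=10.0, r=1.0, f_inf=0.05, f_l2=0.05))
```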


4 The Nonlinear Problem in the Time Domain



In this section we deal with the problem in the form (1)–(3). We suggest a very simple but practically useful iterative method and discuss some theoretical and practical aspects of it.


The Kaczmarz Method in the Time Domain



For a finite number of sources $$s_{j},\;j = 1,\ldots,p$$ we have to solve a system of the form


$$\displaystyle{ R_{j}(f) = g_{j},\;j = 1,\ldots,p. }$$

(15)
where $$R_{j}$$ is an operator between a Hilbert space H of functions on Ω and certain Hilbert spaces $$H_{j}$$ of functions on Γ × (0, T). In our case the operator $$R_{j}$$ is defined by $$R_{j}(f) = u_{j}\vert _{\varGamma \times (0,T)}$$, where $$u_{j}$$ is the solution of (1)–(3) for the source $$s = s_{j}$$.

A natural method for solving (15) is the Kaczmarz method. This is an iterative method with the update


$$\displaystyle{f_{j+1} = f_{j} +\alpha (R_{j^{{\prime}}}^{{\prime}}(f_{ j}))^{{\ast}}C_{ j^{{\prime}}}^{-1}(g_{ j^{{\prime}}}- R_{j^{{\prime}}}(f_{j})),\quad j = 1,2,\ldots }$$
where $$j^{{\prime}} = j$$ mod p. Here, $$R_{j}^{{\prime}}$$ is the Fréchet derivative of $$R_{j}$$, $$(R_{j}^{{\prime}})^{{\ast}}$$ its adjoint, and $$C_{j}$$ is a positive definite operator. Going through all the equations once is called a complete sweep of the Kaczmarz method.
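In generic form, one sweep requires for each source only the forward map, the adjoint of its derivative, and the weight $$C_{j}$$. A minimal sketch, assuming Python; the callables forward, adjoint_derivative, and apply_Cinv are placeholders for whatever wave-equation solvers are actually used:

```python
def kaczmarz_sweep(f, data, forward, adjoint_derivative, apply_Cinv, alpha=1.0, order=None):
    """One complete sweep of the (nonlinear) Kaczmarz iteration
        f <- f + alpha * R_j'(f)^* C_j^{-1} (g_j - R_j(f)),  one source after the other.

    forward(j, f)               evaluates R_j(f)        (solve (1)-(3) for source s_j)
    adjoint_derivative(j, f, r) evaluates R_j'(f)^* r
    apply_Cinv(j, r)            evaluates C_j^{-1} r
    data[j]                     is the measured g_j
    order                       is the processing order of the sources; a random
                                permutation often converges faster than 1, ..., p."""
    if order is None:
        order = range(len(data))
    for j in order:
        residual = data[j] - forward(j, f)
        f = f + alpha * adjoint_derivative(j, f, apply_Cinv(j, residual))
    return f
```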

In the linear case, i.e., if $$R_{j}: H \rightarrow H_{j}$$ is a bounded linear operator, the convergence of the Kaczmarz method is well understood, in particular for the case of CT; see, e.g., [50], Theorem 5.1.

If $$C_{j} - R_{j}R_{j}^{{\ast}}$$ is positive semidefinite, the sequence $$f_{j}$$ converges for 0 < α < 2 to $$Pf_{1} + R^{+}g$$, where P is the orthogonal projection onto the nullspace of $$R = (R_{1},\ldots,R_{p})$$ and $$R^{+}$$ is the Moore–Penrose generalized inverse of R. In the nonlinear case the situation is much less clear; see [6] for a theoretical discussion of the issue.

The Kaczmarz method is not to be confused with the Landweber iteration. In the Landweber method the update is done only after all the equations have been processed, while in the Kaczmarz method the update is done as soon as a single equation has been processed. In practice Kaczmarz turns out to be much faster than Landweber. In the linear finite-dimensional case Kaczmarz is identical to the SOR method; see Theorem 5.2 in [50]. Also, the speed of convergence depends on the ordering of the sources. Sequential ordering does not yield the best results; sophisticated arrangements, in particular random orderings, are much better. For the case of CT, see Section 5.3.1 of [50].

In order to compute $$R_{j}^{{\prime}}(f)$$ for our imaging problem, we replace f by f + h and $$u_{j}$$ by $$u_{j} + w$$ with h, w small and ignore higher order terms. We get for w the differential equation


$$\displaystyle\begin{array}{rcl} \frac{1} {c^{2}} \frac{\partial ^{2}w} {\partial t^{2}} & =& \varDelta w - \frac{h} {c_{0}^{2}} \frac{\partial ^{2}u_{j}} {\partial t^{2}},\quad x \in \varOmega,\;0 < t < T,{}\end{array}$$

(16)



$$\displaystyle\begin{array}{rcl} \frac{\partial w} {\partial \nu } & =& 0,\quad x \in \varGamma,\;0 < t < T,{}\end{array}$$

(17)



$$\displaystyle\begin{array}{rcl} w& =& 0,\quad t < 0.{}\end{array}$$

(18)
Hence we expect that


$$\displaystyle{R_{j}^{{\prime}}(f)h = w_{ \vert \varGamma \times (0,T)}.}$$
In fact it has been shown in [16] that, with a suitable choice of Hilbert spaces H, $$H_{j}$$, the operator $$R_{j}$$ is Fréchet differentiable and the expression above is the derivative. A possible choice is $$H = H^{2}(\varOmega )$$, restricted to functions f > −1, and $$H_{j} = H^{1/2}(\varGamma \times (0,T))$$.
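To make the linearization concrete, here is a minimal one-dimensional finite-difference sketch of (16)–(18), assuming Python with NumPy; the reduction to one space dimension, the leapfrog discretization, and the assumption that the second time derivative of the background field is precomputed are simplifications of ours, and dt has to satisfy the usual CFL condition:

```python
import numpy as np

def linearized_wave_1d(c, h, utt, dx, dt, c0):
    """Leapfrog scheme for a 1D analogue of (16)-(18):
        (1/c^2) w_tt = w_xx - (h / c0^2) u_tt,   w = 0 for t <= 0,
    with homogeneous Neumann conditions at both ends of the interval.

    c, h, c0 : arrays of length nx (velocities and perturbation on the spatial grid)
    utt      : array of shape (nt, nx), second time derivative of the background
               field u_j, assumed precomputed by the forward solver
    The boundary values of the returned w approximate R_j'(f) h at the receivers."""
    nt, nx = utt.shape
    w = np.zeros((nt, nx))
    for m in range(1, nt - 1):
        lap = np.empty(nx)
        lap[1:-1] = (w[m, 2:] - 2.0 * w[m, 1:-1] + w[m, :-2]) / dx**2
        lap[0] = 2.0 * (w[m, 1] - w[m, 0]) / dx**2      # Neumann via mirrored ghost point
        lap[-1] = 2.0 * (w[m, -2] - w[m, -1]) / dx**2
        w[m + 1] = 2.0 * w[m] - w[m - 1] + dt**2 * c**2 * (lap - (h / c0**2) * utt[m])
    return w
```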

Now we come to the problem of computing the adjoint of $$R_{j}^{{\prime}}(f)$$. For this we have to specify the spaces H, $$H_{j}$$. To make things easy, we put $$H = L_{2}(\varOmega ),\;H_{j} = L_{2}(\varGamma \times (0,T))$$, i.e., we consider $$R_{j}$$ as an unbounded operator between these spaces. Then the adjoint $$R_{j}^{{\prime}}(f)^{{\ast}}$$ satisfies


$$\displaystyle{(R_{j}^{{\prime}}(f)h,g)_{L_{2}(\varGamma \times (0,T))} = (h,R_{j}^{{\prime}}(f)^{{\ast}}g)_{L_{2}(\varOmega )}}$$
for sufficiently smooth functions h, g. Determining the exact domain of definition of the adjoint is a trickier question, which we do not consider here.

To find an explicit form of the adjoint, we follow [38]. We start out from the identity


$$\displaystyle\begin{array}{rcl} & & \int _{\varOmega }\int _{0}^{T}\left ( \frac{1} {c^{2}} \frac{\partial ^{2}w} {\partial t^{2}} -\varDelta w\right )zdtdx -\int _{\varOmega }\int _{0}^{T}\left ( \frac{1} {c^{2}} \frac{\partial ^{2}z} {\partial t^{2}} -\varDelta z\right )wdtdx {}\\ & & =\int _{\varGamma }\int _{0}^{T}\left (\frac{\partial z} {\partial \nu } w - z\frac{\partial w} {\partial \nu } \right )dtdx + \left [\int _{\varOmega } \frac{1} {c^{2}}\left (\frac{\partial w} {\partial t} z - w\frac{\partial z} {\partial t}\right )dx\right ]_{0}^{T}. {}\\ \end{array}$$
where ν is the exterior normal on Γ. This holds for any sufficiently smooth functions w, z on Ω × (0, T). Choosing for w the solution of (16)–(18) with $$u = u_{j}$$ the solution of (1)–(3) for source j, and for z the solution of the final value problem


$$\displaystyle\begin{array}{rcl} \frac{\partial ^{2}z} {\partial t^{2}} & =& c^{2}\varDelta z\mbox{ in }\varOmega \times (0,T),{}\end{array}$$

(19)
