The von Kries Model: Estimation, Dependence on Light and Device, and Applications


Topic, short description, and sections:

- Generalities: Introduction; derivation of the linear model for color variation; derivation of the von Kries model (Sects. 1-3)
- Result 1: Estimation of the von Kries model, with application to color correction and to illuminant invariant image retrieval (Sect. 4)
- Result 2: Dependency of the von Kries model on the physical cues of the illuminants; von Kries surfaces; application to the estimation of the color temperature and intensity of an illuminant (Sect. 5)
- Result 3: Dependency of the von Kries model on the photometric properties of the acquisition device; applications to device characterization and to illuminant invariant image representation (Sect. 6)
- Conclusions: Final remarks (Sect. 7)

2 Linear Color Changes


In the RGB color space, the response of a camera to the light reflected from a point $$x$$ in a scene is coded in a triplet $$\mathbf{p}(x)$$ = $$(p_0(x)$$, $$p_1(x)$$, $$p_2(x))$$, where


$$\begin{aligned} p_i (x) = \int _\Omega E(\lambda ) S(\lambda , x) F_i(\lambda ) \; d\lambda \qquad i = 0, 1, 2. \end{aligned}$$

(1)
In Eq. (1), $$\lambda $$ is the wavelength of the light illuminating the scene, $$E$$ its spectral power distribution, $$S$$ the reflectance distribution function of the illuminated surface containing $$x$$, and $$F_i$$ is the $$i$$-th spectral sensitivity function of the sensor. The integral ranges over the visible spectrum, i.e. $$\Omega = [380, 780]$$ nm. The values of $$p_0(x)$$, $$p_1(x)$$, $$p_2(x)$$ are the red, green and blue color responses of the camera sensors at point $$x$$.
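For concreteness, Eq. (1) can be evaluated numerically once the three spectra are known. Below is a minimal sketch; the flat illuminant, the smooth reflectance, and the Gaussian sensitivities are illustrative assumptions, not data from this chapter:

```python
import numpy as np

# Wavelength grid over the visible spectrum Omega = [380, 780] nm.
lam = np.linspace(380.0, 780.0, 401)

# Hypothetical spectra: a flat illuminant E, a smooth reflectance S at a
# point x, and Gaussian sensitivities F_i peaked in the red/green/blue bands.
E = np.ones_like(lam)
S = 0.5 + 0.4 * np.sin((lam - 380.0) / 400.0 * np.pi)
peaks = [610.0, 540.0, 450.0]  # assumed peak wavelengths (nm) for i = 0, 1, 2
F = [np.exp(-0.5 * ((lam - c) / 30.0) ** 2) for c in peaks]

# Eq. (1): p_i(x) = integral over Omega of E * S * F_i (trapezoidal rule).
p = np.array([np.trapz(E * S * Fi, lam) for Fi in F])
print(p)  # the three channel responses at x
```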

For a wide range of matte surfaces, which appear equally bright from all viewing directions, the reflectance distribution function is well approximated by the Lambertian photometric reflection model [44]. In this case, the surface reflectance can be expressed by a linear combination of three basis functions $$S^k(\lambda )$$ with weights $$\sigma _k(x)$$, $$k$$ = 0, 1, 2, so that Eq. (1) can be re-written as follows [42]:


$$\begin{aligned} \mathbf{p}(x)^T = W \mathbf{\sigma }(x)^T \end{aligned}$$

(2)
where $$\mathbf{\sigma }(x) = (\sigma _0(x), \sigma _1(x), \sigma _2(x))$$, the superscript $$T$$ indicates the transpose of the previous vector, and $$W$$ is the $$3 \times 3$$ matrix with entries


$$\begin{aligned} W_{ki} = \int _{\Omega } E(\lambda ) S^k(\lambda ) F_i(\lambda ) d\lambda , \qquad k, i = 0, 1, 2. \end{aligned}$$
The response $$\mathbf{p'}(x) = (p'_0(x), p'_1(x), p'_2(x))$$ captured under an illuminant with spectral power $$E'$$ is then given by $$\mathbf{p'}(x)^T = W' \mathbf{\sigma }(x)^T$$. Since the $$\sigma (x)$$’s do not depend on the illumination, the responses $$\mathbf{p}(x)$$ and $$\mathbf{p'}(x)$$ are related by the linear transform


$$\begin{aligned} \mathbf{p}(x)^T = W [W']^{-1} \mathbf{p'}(x)^T. \end{aligned}$$

(3)
Here we assume that $$W'$$ is not singular, so that Eq. (3) makes sense. In the following we indicate the $$ij$$-th element of $$W [W']^{-1}$$ by $$\alpha _{ij}$$.
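To see Eq. (3) at work, one can build $$W$$ and $$W'$$ from chosen basis functions and illuminants and verify that $$W [W']^{-1}$$ maps $$\mathbf{p'}(x)$$ back to $$\mathbf{p}(x)$$. In the sketch below, the basis reflectances, sensitivities, and illuminants are all illustrative assumptions; note also that the matrix is stored with the channel index on the rows, so that $$\mathbf{p} = W \mathbf{\sigma }$$ holds directly:

```python
import numpy as np

lam = np.linspace(380.0, 780.0, 401)

# Hypothetical basis reflectances S^k and Gaussian sensitivities F_i.
S_basis = [np.ones_like(lam),
           (lam - 380.0) / 400.0,
           np.sin((lam - 380.0) / 400.0 * np.pi)]
F = [np.exp(-0.5 * ((lam - c) / 30.0) ** 2) for c in (610.0, 540.0, 450.0)]

def lighting_matrix(E):
    # (i, k) entry: integral of E * S^k * F_i, so that p = W @ sigma.
    return np.array([[np.trapz(E * Sk * Fi, lam) for Sk in S_basis] for Fi in F])

E1 = np.ones_like(lam)                 # reference illuminant (flat)
E2 = 0.5 + (lam - 380.0) / 400.0       # a second, assumed redder illuminant

W1, W2 = lighting_matrix(E1), lighting_matrix(E2)
M = W1 @ np.linalg.inv(W2)             # the linear color change of Eq. (3)

sigma = np.array([0.3, 0.5, 0.2])      # surface weights sigma_k(x)
p1, p2 = W1 @ sigma, W2 @ sigma        # responses under the two illuminants
assert np.allclose(p1, M @ p2)         # Eq. (3): p = W [W']^{-1} p'
```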


3 The von Kries Model


The von Kries (or diagonal) model approximates the color change in Eq. (3) by a linear diagonal map, that rescales independently the color channels by real strictly positive factors, named von Kries coefficients.

Despite its simplicity, the von Kries model has been proved to approximate well color changes due to an illuminant variation [15, 17, 18], especially for narrow-band sensors and for cameras with non-overlapping spectral sensitivities. Moreover, when the device does not satisfy these requirements, its spectral sensitivities can be sharpened by a linear transform [7, 19], so that the von Kries model still holds.

In the following, we derive the von Kries approximation for a narrow-band camera (Sect. 3.1) and for a device with non-overlapping spectral sensitivities (Sect. 3.2). In addition, we discuss a case in which the von Kries model can also approximate a color change due to a change of device (Sect. 3.3).


3.1 Narrow-Band Sensors


The spectral sensitivity functions of a narrow-band camera can be approximated by the Dirac delta, i.e. for each $$i = 0, 1, 2, F_i(\lambda ) = f_i \delta (\lambda - \lambda _i)$$, where $$f_i$$ is a strictly positive real number and $$\lambda _i$$ is the wavelength at which the sensor maximally responds.

Under this assumption, from Eq. (1), for each $$i = 0, 1, 2$$ we have


$$\begin{aligned} p_i(x) = f_i E(\lambda _i)S(\lambda _i, x) \qquad \mathrm{and} \qquad p_i'(x) = f_i E'(\lambda _i)S(\lambda _i, x) \end{aligned}$$
and thus


$$\begin{aligned} p_i(x) = \frac{E(\lambda _i)}{E'(\lambda _i)} p_i'(x) \qquad \forall \; i = 0, 1, 2. \end{aligned}$$

(4)
This means that the change of illuminant mapping $$\mathbf{p}(x)$$ onto $$\mathbf{p'}(x)$$ is a linear diagonal transform that rescales each channel independently. The von Kries coefficients are the rescaling factors $$\alpha _{i}$$, i.e. the non null elements $$\alpha _{ii}$$ of $$W [W']^{-1}$$:


$$\begin{aligned} \alpha _{i} := \alpha _{ii} = \frac{E(\lambda _i)}{E'(\lambda _i)} \qquad \forall \; i = 0, 1, 2. \end{aligned}$$

(5)
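Under the narrow-band assumption, color correction therefore reduces to three channel-wise multiplications. A small numeric sketch (the spectra sampled at the peak wavelengths and the responses below are made-up values):

```python
import numpy as np

# Spectral power of two illuminants sampled at the assumed peak
# wavelengths lambda_i of the three narrow-band sensors (made-up values).
E  = np.array([1.00, 0.90, 0.80])    # E(lambda_i)
E2 = np.array([0.70, 0.90, 1.10])    # E'(lambda_i)

alpha = E / E2                       # Eq. (5): the von Kries coefficients

p2 = np.array([120.0, 80.0, 200.0])  # responses p'_i(x) under E'
p = alpha * p2                       # Eq. (4): responses p_i(x) under E
```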


3.2 Non-Overlapping Sensitivity Functions


Let $$I$$ and $$I'$$ be two pictures of the same scene imaged under different light conditions. Since the content of $$I$$ and $$I'$$ is the same, we assume a scene-independent illumination model [52] such that


$$\begin{aligned} E(\lambda ) F_k(\lambda ) = \sum _{j = 0}^2 \alpha _{kj} E'(\lambda )F_j(\lambda ). \end{aligned}$$

(6)
Now, let us suppose that the device used for the image acquisition has non-overlapping sensitivity functions. This means that for each $$i, j$$ with $$i \not = j$$, $$F_i(\lambda ) F_j(\lambda ) = 0$$ for any $$\lambda $$. Generally, the spectral sensitivities are real-valued positive functions with compact support in $$\Omega $$ (see Fig. 1 for an example); therefore non-overlapping sensitivities have non-intersecting supports. We prove that under this assumption the von Kries model still holds, i.e. the matrix $$W[W']^{-1}$$ is diagonal.



Fig. 1
BARNARD2002: spectral sensitivities for the camera Sony DCX-930 used for the image acquisition of the database [8]

From Eq. (6) we have that


$$\begin{aligned} \int _\Omega E(\lambda ) S(\lambda , x) F_k(\lambda ) \; d\lambda = \sum _{j = 0}^2 \alpha _{kj} \int _\Omega E'(\lambda ) S(\lambda , x) F_j(\lambda ) \; d\lambda . \end{aligned}$$

(7)
i.e., the linear dependency between the responses of a camera under different illuminants is still described by Eq. (3). Moreover, Eq. (6) implies that


$$\begin{aligned}{}[E(\lambda ) F_k(\lambda ) - \sum _{j = 0}^2 \alpha _{kj} E'(\lambda ) F_j(\lambda ) ]^2 = 0. \end{aligned}$$

(8)
By minimizing (8) with respect to the $$\alpha _{kj}$$'s and by using the hypothesis of non-overlapping sensitivities, we get the von Kries model. In fact, suppose that $$k = 0$$. Setting to zero the derivative of the left-hand side of Eq. (8) with respect to $$\alpha _{00}$$ gives (up to a constant factor)


$$\begin{aligned} 0 = -E'(\lambda ) F_0(\lambda )[E(\lambda ) F_0(\lambda ) - \sum _{j=0}^{2} \alpha _{0j} E'(\lambda ) F_j(\lambda )]. \end{aligned}$$
Thanks to the non-overlapping hypothesis, and by supposing that $$E'(\lambda ) \ne 0$$ for each $$\lambda $$ in the support of $$F_0$$, we have that


$$\begin{aligned} E(\lambda ) F_0(\lambda ) - \alpha _{00} E'(\lambda ) F_0(\lambda ) = 0. \end{aligned}$$

(9)
By integrating Eq. (9) with respect to $$\lambda $$ over $$\Omega $$, and by solving with respect to $$\alpha _{00}$$, we have that


$$\begin{aligned} \alpha _{00} = \frac{\int _\Omega E(\lambda )F_0(\lambda )d\lambda }{\int _\Omega E'(\lambda ) F_0(\lambda ) d\lambda }. \end{aligned}$$

(10)
Since $$E$$, $$E'$$ and $$F_0$$ are not identically null, $$\alpha _{00}$$ is well defined and $$\alpha _{00} \ne 0$$.

Now, we prove that $$\alpha _{0j} = 0$$ for any $$j \ne 0$$. From Eq. (9) we have that


$$\begin{aligned} E(\lambda ) F_0(\lambda ) = \alpha _{00} E'(\lambda ) F_0(\lambda ). \end{aligned}$$
Substituting this expression of $$E(\lambda ) F_0(\lambda )$$ into Eq. (8) with $$k = 0$$ yields


$$\begin{aligned} 0 = [\alpha _{01} E'(\lambda ) F_1(\lambda )]^2 + [\alpha _{02} E'(\lambda ) F_2(\lambda )]^2 + 2\alpha _{01}\alpha _{02} E'(\lambda )^2 F_1(\lambda ) F_2(\lambda ) . \end{aligned}$$
Since the functions $$F_1$$ and $$F_2$$ do not overlap, the cross term is null, and


$$\begin{aligned}{}[\alpha _{01} E'(\lambda ) F_1(\lambda )]^2 + [\alpha _{02} E'(\lambda ) F_2(\lambda )]^2 = 0. \end{aligned}$$
By integrating this equation over $$\lambda $$ we have that


$$\begin{aligned} \alpha _{01}^2 \int _\Omega [E'(\lambda ) F_1(\lambda )]^2 d\lambda + \alpha _{02}^2 \int _\Omega [E'(\lambda ) F_2(\lambda )]^2 d\lambda = 0, \end{aligned}$$

(11)
and since $$E'$$, $$F_1$$ and $$F_2$$ are not identically zero, both integrals are strictly positive, so that $$\alpha _{01} = \alpha _{02} = 0$$. By repeating the same procedure for $$k = 1, 2$$, we obtain the von Kries model.

We remark that Eq. (9) has been derived by supposing that $$E'(\lambda )$$ differs from zero for any $$\lambda $$ in the compact support of $$F_0(\lambda )$$: this allows us to cancel the multiplicative term $$E'(\lambda )F_0(\lambda )$$. This hypothesis is reasonable, because the spectral power distribution of most illuminants is nowhere null in the visible spectrum. However, for lights with null energy at some wavelengths of the support of $$F_0$$, Eq. (9) is replaced by


$$\begin{aligned} E'(\lambda ) E(\lambda )[ F_0(\lambda )]^2 - \alpha _{00} [E'(\lambda )]^2 [F_0(\lambda )]^2 = 0. \end{aligned}$$
The derivation of the von Kries model can then be carried out as before.


3.3 Changing Device


A color variation between two images of the same scene can also be produced by changing the acquisition device. Mathematically, this turns into changing the sensitivity functions $$F_i$$ in Eq. (1). Here we discuss a case in which the color variation generated by a device change can be described by the von Kries model.

Without loss of generality, we can assume that the sensors are narrow-band; otherwise, we can apply the sharpening proposed in [18] or [7]. Under this assumption, the sensitivities of the cameras used for acquiring the images under examination are approximated by the Dirac delta, i.e.


$$\begin{aligned} F_i(\lambda ) = f_i^{\gamma }\delta (\lambda - \lambda _i) \end{aligned}$$

(12)
where the parameter $$f_i^{\gamma }$$ is a characteristic of the camera.

Here we model the change of camera as a variation of the parameter $$\gamma $$, while we suppose that the wavelength $$\lambda _i$$ remains the same. Therefore the sensitivity function changes from Eq. (12) to the following:


$$\begin{aligned} F'_i(\lambda ) = f_i^{\gamma ^*}\delta (\lambda - \lambda _i). \end{aligned}$$

(13)
Consequently, the camera responses are


$$\begin{aligned} p_i (x) = f_i^{\gamma } E(\lambda _i)S(\lambda _i, x)\qquad \mathrm{and}\qquad p_i'(x) = f_i^{\gamma ^*} E(\lambda _i)S(\lambda _i, x) \end{aligned}$$
Thus the diagonal linear model still holds, but in this case the von Kries coefficients $$\alpha _i$$ depend not only on the spectral power distribution, but also on the photometric cues of the device:


$$\begin{aligned} \alpha _i = \frac{f_i^{\gamma } E(\lambda _i)}{f_i^{\gamma ^*} E(\lambda _i)}, \qquad \forall \; \; i = 0, 1, 2. \end{aligned}$$

(14)
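Note that the illuminant terms cancel in Eq. (14), so the coefficients reduce to pure device ratios $$\alpha _i = f_i^{\gamma - \gamma ^*}$$. A tiny numeric sketch, with made-up gains and exponents:

```python
import numpy as np

f = np.array([0.9, 0.8, 0.7])     # per-channel factors f_i (hypothetical)
gamma, gamma_star = 2.2, 1.8      # characteristic exponents of the two devices

# Eq. (14): E(lambda_i) cancels, leaving a ratio that depends only on the device.
alpha = f ** gamma / f ** gamma_star
print(alpha)  # equals f ** (gamma - gamma_star)
```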


4 Estimating the von Kries Map


The color correction of an image onto another consists in transferring the colors of the first image onto the second one. When the color variation is caused by a change of illuminant, and the hypotheses of the von Kries model are satisfied, the color transform between the two pictures is determined by the von Kries map. This map equalizes their colors, so that the first picture appears as if it were taken under the illuminant of the second one. Estimating the von Kries coefficients is thus an important task for achieving color correction between images that differ by illuminant.

Most methods performing color correction between re-illuminated images or regions compute the von Kries coefficients by estimating the illuminants $$\sigma $$ and $$\sigma '$$ under which the images to be corrected have been taken. These illuminants are expressed as RGB vectors, and the von Kries coefficients are determined as the ratios between the components of the two illuminants. Therefore, estimating the von Kries map turns into estimating the image illuminants. Examples of such techniques are the low-level statistics-based methods such as Gray-World and Scale-by-Max [11, 33, 55], the gamut approaches [4, 14, 20, 23, 29, 56], and the Bayesian or statistical methods [26, 41, 47, 50].

The method proposed in [35], which we investigate here, differs from these techniques because it does not require the computation of the illuminants $$\sigma $$ and $$\sigma '$$: it estimates the von Kries coefficients by matching the color histograms of the input images or regions, as explained in Sect. 4.1. Histograms provide a good compact representation of the image colors and, after normalization, they guarantee invariance with respect to affine distortions, like changes of size and/or in-plane orientation.

As a matter of fact, the method in [35] is not the only one that computes the von Kries map by matching histograms. Histogram comparison is in fact adopted also by the methods described in [9, 34], but their computational complexities are higher than that of the method in [35]. In particular, the work in [9] considers the logarithms of the RGB responses, so that a change in illumination turns into a shift of these logarithmic responses. In this framework, the von Kries map becomes a translation, whose parameters are derived from the convolution between the distributions of the logarithmic responses, with computational complexity $$O(N\log (N))$$, where $$N$$ is the color quantization of the histograms. The method in [34] derives the von Kries coefficients by a variational technique that minimizes the Euclidean distance between the piecewise inverses of the cumulative color histograms of the input images or regions. This algorithm is linear in the quantizations $$N$$ and $$M$$ of the color histograms and of the piecewise inversions of the cumulative histograms respectively, so that its complexity is $$O(N+M)$$. Differently from the approach of [34], the algorithm in [35] only requires the user to set the value of $$N$$, and its complexity is $$O(N)$$.

The method presented in [35] is described in detail in Sect. 4.1. Experiments on the accuracy and an analysis of the algorithm complexity and dependency on color quantization are addressed in Sects. 4.2 and  4.3 respectively. Finally, Sect. 4.4 illustrates an application of this method to illuminant invariant image retrieval.


4.1 Histogram-Based Estimation of the von Kries Map


As in [35], we assume that the illuminant varies uniformly over the pictures. We describe the colors of an image $$I$$ by the distributions of the values of the three channels red, green, and blue. Each distribution is represented by a histogram of $$N$$ bins, where $$N$$ ranges over {1, …, 256}. Hence, the color feature of an image is represented by a triplet $$\mathbf{H} := (H^0, H^1, H^2)$$ of histograms. We refer to $$\mathbf{H}$$ as the color histogram, whereas we name its components channel histograms.

Let $$I_0$$ and $$I_1$$ be two color images, with $$I_1$$ being a possibly rescaled, rotated, translated, skewed, and differently illuminated version of $$I_0$$. Let $$\mathbf {H_0}$$ and $$\mathbf{H_1}$$ denote the color histograms of $$I_0$$ and $$I_1$$ respectively, and let $$H_0^i$$ and $$H_1^i$$ be the $$i$$-th components of $$\mathbf {H_0}$$ and $$\mathbf{H_1}$$ respectively. Hereafter, to ensure invariance to image rescaling, we assume that each channel histogram $$H_j^i$$ is normalized so that $$\sum _{x = 1}^{N} H_j^i(x)$$ = 1.

Under the von Kries model, the channel histograms of two images which differ by illumination are stretched versions of each other, so that


$$\begin{aligned} \sum _{k=1}^x H_1^i(k) = \sum _{k=1}^{\alpha _i x} H_0^i(k) \qquad \forall \;\; i = 0, 1, 2. \end{aligned}$$

(15)
We note that, as the data are discrete, the value $$\alpha _i x$$ is cast to an integer ranging over {1, …, 256}.

The estimation of the parameters $$\alpha _i$$ consists of two phases. First, for each $$x$$ in {1, …, 256} we compute the point $$y$$ in {1, …, 256} such that


$$\begin{aligned} \sum _{k = 1}^x H_0^i(k) = \sum _{k=1}^y H_1^i(k). \end{aligned}$$

(16)
Second, we compute the coefficient $$\alpha _i$$ as the slope of the best line fitting the pairs $$(x, y)$$.

The procedure to compute the correspondences $$(x, y)$$ satisfying Eq. (16) is implemented by Algorithm 1; more details are presented in [35]. To make the estimate robust to possible noise affecting the image and to color quantization, the contribution of each pair $$(x, y)$$ is weighted by a positive real number $$M$$, defined as a function of the difference $$\sum _{k = 1}^x H_0^i(k) - \sum _{k=1}^y H_1^i(k)$$.
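The two phases above can be sketched as follows. This is a simplified version of the procedure: the weights $$M_k$$ are taken uniform and the line is fitted by an ordinary least-squares slope through the origin, rather than by the weighted divergence used in [35]:

```python
import numpy as np

def estimate_von_kries_coefficient(h0, h1):
    """Estimate the stretch alpha relating two channel histograms, i.e. the
    slope of the line y = alpha * x through the pairs (x, y) of Eq. (16).

    Simplified sketch: uniform weights and an ordinary least-squares slope
    through the origin instead of the weighted divergence of Eq. (17)."""
    c0 = np.cumsum(np.asarray(h0, float)) / np.sum(h0)   # cumulative of H_0^i
    c1 = np.cumsum(np.asarray(h1, float)) / np.sum(h1)   # cumulative of H_1^i
    n = len(c0)
    xs, ys = [], []
    for x in range(n):
        y = int(np.argmin(np.abs(c1 - c0[x])))  # cumulative masses match
        if x < n - 1 and y < n - 1:             # drop saturated (last) bins
            xs.append(x + 1)
            ys.append(y + 1)
    xs, ys = np.array(xs, float), np.array(ys, float)
    return float(np.sum(xs * ys) / np.sum(xs * xs))  # slope of y = alpha * x
```

For a channel whose values are halved by the illuminant change, the pairs $$(x, y)$$ lie roughly on $$y = x/2$$, and the estimated coefficient is close to 0.5, matching the synthetic example of Figs. 2 and 3.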

The estimate of the best line $$\fancyscript{A}:= y = \alpha x$$ could be adversely affected by pixel saturation, which occurs when the incident light at a pixel produces the maximum response (256) of a color channel. To overcome this problem, and to make our estimate as robust as possible to saturation noise, the pairs $$(x, y)$$ with $$x = 256$$ or $$y = 256$$ are discarded from the fitting procedure.

A least-squares method is used to define the best line fitting the pairs $$(x, y)$$. More precisely, the value of $$\alpha _i$$ is estimated by minimizing with respect to $$\alpha $$ the following functional, called in [35] the divergence:


$$\begin{aligned} d_{\alpha } (H_0^i, H_1^i) := \sum _k M_k d((x_k, y_k), \fancyscript{A})^2 = \sum _k \frac{M_k}{\alpha ^2 + 1}(\alpha x_k - y_k)^2. \end{aligned}$$

(17)
Here $$(x_k, y_k)$$ and $$M_k$$ indicate the $$k$$-th pair satisfying Eq. (16) and its weight respectively, while $$d((x_k, y_k), \fancyscript{A})$$ is the Euclidean distance between the point $$(x_k, y_k)$$ and the line $$\fancyscript{A}$$.
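The divergence of Eq. (17) can be minimized by a simple one-dimensional search over candidate slopes; below is a sketch with made-up pairs and uniform weights (how [35] actually solves the minimization may differ):

```python
import numpy as np

def divergence(alpha, x, y, M):
    # Eq. (17): weighted squared distances of the points (x_k, y_k)
    # to the line y = alpha * x.
    return np.sum(M * (alpha * x - y) ** 2 / (alpha ** 2 + 1.0))

# Made-up correspondences lying roughly on y = 0.5 * x, uniform weights.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.52, 0.98, 1.51, 2.05])
M = np.ones_like(x)

grid = np.linspace(0.05, 5.0, 2000)
alpha_hat = grid[np.argmin([divergence(a, x, y, M) for a in grid])]
print(alpha_hat)  # close to 0.5
```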

We observe that:

1.

$$d_{\alpha }(H_0^i, H_1^i)$$ = 0 $$\Leftrightarrow $$ $$H_0^i(\alpha z ) = H_1^i(z)$$, for each $$z$$ in {1, …, $$N$$};

 

2.

$$d_{\alpha }(H_0^i, H_1^i)$$ = $$d_{\frac{1}{\alpha }}(H_1^i, H_0^i)$$.

 
These properties imply that $$d_{\alpha }$$ is a measure of dissimilarity (divergence) between channel histograms stretched with respect to each other. In particular, if $$d_{\alpha }$$ is zero, then the two histograms are stretched versions of each other.

Finally, we notice that, when no changes of size or in-plane orientation occur, the diagonal map between two images $$I_0$$ and $$I$$ can be estimated by finding, for each color channel, the best line fitting the pairs of sensor responses $$(p_i, p'_i)$$ at the $$i$$-th pixels of $$I_0$$ and $$I$$ respectively, as proposed in [37]. The histogram-based approach in [35] basically applies a least-squares method in the space of the color histograms. Using histograms makes the estimate of the illuminant change insensitive to image distortions, like rescaling, translation, skewing, and/or rotation.


Figure 2 shows a synthetic example of pictures related by a von Kries map along with the color correction provided by the method described in [35]. The red channel of the image (a) has been rescaled by 0.5, while the other channels are unchanged. The re-illuminated image is shown in (b). Figure 3 shows the red histograms of (a) and (b) and highlights the correspondence between two bins. In particular, we note that the green regions in the two histograms have the same areas. The von Kries map estimated by [35] provides a very satisfactory color correction of image (b) onto image (a), as displayed in Fig. 2c.



Fig. 2
a A picture; b a re-illuminated version of (a); c the color correction of (b) onto (a) provided by the method [35]. Pictures (a) and (c) are highly similar




Fig. 3
Histograms of the responses of the red channels of the pictures shown in Fig. 2a, b: the red channel of the first picture has been synthetically rescaled by 0.5. The two red histograms are thus stretched versions of each other. The method [35] allows one to estimate the stretching parameters, and hence to correct the images as if they were taken under the same light. The green parts highlighted on the histograms have the same area; therefore the bin $$x = 128$$ of the first histogram is mapped onto the bin $$y = 64$$ of the second one


4.2 Accuracy on the Estimate


The accuracy of the estimate of the von Kries map possibly relating two images or two image regions has been measured in [35] in terms of color correction. In the following, we report the experiments carried out on four real-world public databases (ALOI [27], Outex [45], BARNARD [8], UEA Dataset [16]). Some examples of pictures from these datasets are shown in Fig. 4 (first and second images in each row).

Each database consists of a set of images (references) taken under a reference illuminant and a set of re-illuminated versions of them (test images). For all the databases, we evaluate the accuracy of the estimation of the von Kries map $$K$$ by


$$\begin{aligned} A := 1 - L^1(I, K(I_0)). \end{aligned}$$

(18)




Fig. 4
Examples of color correction output by the approach in [35] for the four databases used in the experiments reported in Sect. 4.2: a ALOI, b Outex; c BARNARD; d UEA. In each row, from left to right: an image, a re-illuminated version of it, and the color correction of the second one onto the first one. The images in (d) have been captured by the same camera

Here $$I$$ indicates a test image and $$I_0$$ its corresponding reference, while $$L^1(I, K(I_0))$$ is the $$L^1$$ distance computed in the RGB space between $$I$$ and the color correction $$K(I_0)$$ of $$I_0$$ determined by the estimated $$K$$. This distance is normalized to range over [0, 1]; therefore, the closer $$A$$ is to 1, the better our estimate is. To quantify the benefit of our estimate, we compare the accuracy in Eq. (18) with


$$\begin{aligned} A_0 := 1 - L^1(I, I_0). \end{aligned}$$

(19)
The value of $$A_0$$ measures the similarity of the reference to the test image when no color enhancement is applied.

The transform $$K$$ gives the color correction of $$I_0$$ with respect to the test image $$I$$: in fact, $$K(I_0)$$ is the image $$I_0$$ as it would appear under the same illuminant as $$I$$.

We notice that this performance evaluation does not consider possible geometric image changes, like rescaling, in-plane rotation, or skew. In fact, the similarity between two color corrected images is defined as a pixel-wise distance between the image colors.
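A possible implementation of the accuracies in Eqs. (18) and (19), assuming 8-bit images and taking the normalized $$L^1$$ distance as the mean absolute RGB difference divided by 255 (the chapter does not specify the normalization constant):

```python
import numpy as np

def accuracy(I, J):
    """1 minus the normalized L^1 distance between two same-size RGB images."""
    d = np.mean(np.abs(I.astype(float) - J.astype(float))) / 255.0
    return 1.0 - d

# A_0 compares the test image with the reference directly; A compares it
# with the color-corrected reference K(I_0). Toy data and an assumed
# (perfectly estimated) map K for illustration:
I0 = np.full((4, 4, 3), 100, dtype=np.uint8)     # reference image
I  = (0.5 * I0).astype(np.uint8)                 # test: channels halved
K  = lambda img: (0.5 * img).astype(np.uint8)    # estimated von Kries map
A0, A = accuracy(I, I0), accuracy(I, K(I0))
print(A0, A)  # A is higher than A_0 after the correction
```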

In case of a pair of images related by an illuminant change and by geometric distortions, we measure the accuracy of the color correction by the $$L^1$$ distance between their color histograms. In particular, we compute the distance $$\fancyscript{H}_0$$ between the color histograms of $$I$$ and $$I_0$$ before the color correction


$$\begin{aligned} \fancyscript{H}_0 = 1 - L^1(\mathbf{H_0}, \mathbf{H}), \end{aligned}$$

(20)
and the distance $$\fancyscript{H}$$ between the color histograms $$\mathbf{H}$$ and $$\mathbf{H_K}$$ of $$I$$ and $$K(I_0)$$ respectively:


$$\begin{aligned} \fancyscript{H} = 1 - L^1(\mathbf{H}, \mathbf{H_K}). \end{aligned}$$

(21)
Examples of color correction output by the algorithm we described are shown in Fig. 4 for each database used here (third image in each row).


4.2.1 ALOI


ALOI [27] (http://​staff.​science.​uva.​nl/​~aloi/​) collects 110,250 images of 1,000 objects acquired under different conditions. For each object, the frontal view has been taken under 12 different light conditions, produced by varying the color temperature of 5 lamps illuminating the scene. The lamp voltage was controlled to be $$V_j = j \times 0.047$$ V with $$j$$ $$\in $$ {110, 120, 130, 140, 150, 160, 170, 180, 190, 230, 250}. For each pair of illuminants $$(V_j, V_k)$$ with $$j \ne k$$, we consider the images captured with lamp voltage $$V_j$$ as references and those captured with voltage $$V_k$$ as tests.

Figure 5 shows the obtained results: for each pair $$(V_j, V_k)$$, the plot shows the accuracies (a) $$A_0$$ and (b) $$A$$ averaged over the test images.

We observe that, for $$j = 140$$, the accuracy $$A$$ is lower than for the other lamp voltages. This is because the voltage $$V_{140}$$ produces a large increase of the light intensity and therefore a large number of saturated pixels, which degrades the performance.

The mean value of $$A_0$$ averaged over all the pairs $$(V_j, V_k)$$ is 0.9913, while that of $$A$$ obtained by the approach in [35] is 0.9961. For each pair of images $$(I_i, I_j)$$ representing a same scene taken under the illuminants with voltages $$V_i$$ and $$V_j$$ respectively, we compute the parameters $$(\alpha _0, \alpha _1, \alpha _2)$$ of the illuminant change $$K$$ mapping $$I_i$$ onto $$I_j$$. In principle, these parameters should be equal to those of the map $$K'$$ relating another pair $$(I'_i, I'_j)$$ captured under the same pair of illuminants. In practice, since the von Kries model is only an approximation of the illuminant variation, the parameters of $$K$$ and $$K'$$ generally differ. In Fig. 6 we report the mean values of the von Kries coefficients versus the reference set. The error bars are the standard deviations of the estimates from their mean values.



Fig. 5
ALOI: accuracy a $$A_0$$ and b $$A$$ (see Eqs. (19) and (18)) for the different pairs of reference and test sets. The $$x$$ and $$y$$ axes display the lamp voltages ($${\times }$$0.047 V) of the illuminants used in ALOI. The right axis shows the correspondence between the colors of the plots and the values of the accuracies


4.2.2 Outex Dataset


The Outex database [45] (http://​www.​outex.​oulu.​fi/​) includes different image sets for the empirical evaluation of texture classification and segmentation algorithms. In this work we use the test suite named Outex_TC_00014, which consists of three sets of 1360 texture images viewed under the illuminants INCA, TL84 and HORIZON, with color temperatures 2856, 4100 and 2300 K respectively.

The accuracies $$A_0$$ and $$A$$ are reported in Table 2, where three changes of light have been considered: from INCA to HORIZON, from INCA to TL84, and from TL84 to HORIZON. As expected, $$A$$ is greater than $$A_0$$.



Fig. 6
ALOI: estimates of the von Kries coefficients



Table 2
Outex: accuracies $$A_0$$ and $$A$$ for three different illuminant changes

Illuminant change       $$A_0$$    $$A$$
From INCA to HORIZON    0.94659    0.97221
From INCA to TL84       0.94494    0.98414
From TL84 to HORIZON    0.90718    0.97677
Mean                    0.93290    0.97771


4.2.3 BARNARD


The real-world image dataset [8] (http://​www.​cs.​sfu.​ca/​~colour/​), which we refer to as BARNARD, is composed of 321 pictures grouped into 30 categories. Each category contains a reference image taken under an incandescent light Sylvania 50MR16Q (reference illuminant) and a number (from 2 to 11) of re-lighted versions of it (test images) under different lights. The mean values of the accuracies $$A_0$$ and $$A$$ are shown in Fig. 7. On average, $$A_0$$ is 0.9447, and $$A$$ is 0.9805.



Fig. 7
BARNARD: accuracies $$A_0$$ (Eq. 19) and $$A$$ (Eq. 18) for the different illuminants


4.2.4 UEA Dataset


The UEA Dataset [16] (http://​www.​uea.​ac.​uk/​cmp/​research/​) comprises 28 design patterns, each captured under 3 illuminants with four different cameras. The illuminants are denoted Ill A (tungsten filament light, with color temperature 2865 K), Ill D65 (simulated daylight, with color temperature 6500 K), and Ill TL84 (fluorescent tube, with color temperature 4100 K). We notice that the images taken by different cameras differ not only in their colors, but also in size and orientation; in fact, different sensors have different resolutions and orientations. In this case, the accuracy of the color correction cannot be measured by Eqs. (18) and (19); it is instead evaluated by the histogram distances defined in Eqs. (20) and (21).

The results are reported in Table 3. For each pair of cameras $$(i, j)$$ and for each illuminant pair $$(\sigma , \sigma ')$$, we compute the von Kries map relating every image acquired by $$i$$ under $$\sigma $$ to the corresponding image acquired by $$j$$ under $$\sigma '$$, and the accuracies $$\fancyscript{H}_0$$ and $$\fancyscript{H}$$ of the color correction of the first image onto the second one. On average, the $$L^1$$ distance between the color histograms is 0.0043 before the color correction and 0.0029 after it.


Table 3
UEA Dataset: accuracies $$\fancyscript{H}_0$$ and $$\fancyscript{H}$$ respectively (a) before and (b) after the color correction

Camera                               1          2          3          4
(a) Accuracy $$\fancyscript{H}_0$$
1                                    0.99756