
Bellingham, Washington USA

Library of Congress Cataloging-in-Publication Data



Kang, Henry R.
Computational color technology / Henry R. Kang.
p. cm.
Includes bibliographical references and index.
ISBN: 0-8194-6119-9 (alk. paper)
1. Image Processing--Digital techniques. 2. Color. I. Title.

TA1637.K358 2006
621.36'7--dc22
2006042243


Published by

SPIE, The International Society for Optical Engineering
P.O. Box 10
Bellingham, Washington 98227-0010 USA
Phone: +1 360 676 3290
Fax: +1 360 647 1445
Email: spie@spie.org
Web: http://spie.org


Copyright © 2006 The Society of Photo-Optical Instrumentation Engineers

All rights reserved. No part of this publication may be reproduced or distributed
in any form or by any means without written permission of the publisher.

The content of this book reflects the work and thought of the author(s).
Every effort has been made to publish reliable and accurate information herein,
but the publisher is not responsible for the validity of the information or for any
outcomes resulting from reliance thereon.

Printed in the United States of America.




Contents
Preface xv
Acknowledgments xix
1 Tristimulus Specification 1
1.1 Definitions of CIE Tristimulus Values 1
1.2 Vector-Space Representations of Tristimulus Values 3
1.3 Object Spectrum 5
1.4 Color-Matching Functions 5
1.5 CIE Standard Illuminants 10
1.5.1 Standard viewing conditions 13
1.6 Effect of Illuminant 14
1.7 Stimulus Function 15
1.8 Perceived Object 15
1.9 Remarks 16
References 16
2 Color Principles and Properties 17
2.1 Visual Sensitivity and Color-Matching Functions 17
2.2 Identity Property 19
2.3 Color Match 20
2.4 Transitivity Law 21
2.5 Proportionality Law 21
2.6 Additivity Law 21
2.7 Dependence of Color-Matching Functions on Choice of Primaries 22
2.8 Transformation of Primaries 22
2.9 Invariant of Matrix A (Transformation of Tristimulus Vectors) 23
2.10 Constraints on the Image Reproduction 23
References 24
3 Metamerism 27
3.1 Types of Metameric Matching 27
3.1.1 Metameric illuminants 28
3.1.2 Metameric object spectra 28
3.1.3 Metameric stimulus functions 28
3.2 Matrix R Theory 29
v
vi Computational Color Technology
3.3 Properties of Matrix R 31
3.4 Metamers Under Different Illuminants 37
3.5 Metameric Correction 39
3.5.1 Additive correction 39
3.5.2 Multiplicative correction 39
3.5.3 Spectral correction 39
3.6 Indices of Metamerism 39
3.6.1 Index of metamerism potential 40
References 40
4 Chromatic Adaptation 43
4.1 Von Kries Hypothesis 43
4.2 Helson-Judd-Warren Transform 46
4.3 Nayatani Model 47
4.4 Bartleson Transform 48
4.5 Fairchild Model 49
4.6 Hunt Model 51
4.7 BFD Transform 52
4.8 Guth Model 53
4.9 Retinex Theory 53
4.10 Remarks 54
References 54
5 CIE Color Spaces 57
5.1 CIE 1931 Chromaticity Coordinates 57
5.1.1 Color gamut boundary of CIEXYZ 57
5.2 CIELUV Space 59
5.2.1 Color gamut boundary of CIELUV 60
5.3 CIELAB Space 60
5.3.1 CIELAB to CIEXYZ transform 62
5.3.2 Color gamut boundary of CIELAB 62
5.4 Modifications 65
5.5 CIE Color Appearance Model 69
5.6 S-CIELAB 73
References 73
6 RGB Color Spaces 77
6.1 RGB Primaries 77
6.2 Transformation of RGB Primaries 80
6.2.1 Conversion formula 81
6.2.2 Conversion formula between RGB primaries 83
6.3 RGB Color-Encoding Standards 84
6.3.1 Viewing conditions 84
6.3.2 Digital representation 84
6.3.3 Optical-electronic transfer function 85
6.4 Conversion Mechanism 86
6.5 Comparisons of RGB Primaries and Encoding Standards 86
6.6 Remarks 99
References 99
7 Device-Dependent Color Spaces 103
7.1 Red-Green-Blue (RGB) Color Space 103
7.2 Hue-Saturation-Value (HSV) Space 104
7.3 Hue-Lightness-Saturation (HLS) Space 105
7.4 Lightness-Saturation-Hue (LEF) Space 106
7.5 Cyan-Magenta-Yellow (CMY) Color Space 107
7.6 Ideal Block-Dye Model 108
7.6.1 Ideal color conversion 108
7.7 Color Gamut Boundary of Block Dyes 111
7.7.1 Ideal primary colors of block dyes 112
7.7.2 Additive color mixing of block dyes 115
7.7.3 Subtractive color mixing of block dyes 115
7.8 Color Gamut Boundary of Imaging Devices 120
7.8.1 Test target of color gamut 122
7.8.2 Device gamut model and interpolation method 122
7.9 Color Gamut Mapping 124
7.9.1 Color-mapping algorithm 125
7.9.2 Directional strategy 126
7.9.3 Criteria of gamut mapping 129
7.10 CIE Guidelines for Color Gamut Mapping 129
References 130
8 Regression 135
8.1 Regression Method 135
8.2 Forward Color Transformation 139
8.3 Inverse Color Transformation 141
8.4 Extension to Spectral Data 142
8.5 Results of Forward Regression 143
8.6 Results of Inverse Regression 146
8.7 Remarks 148
References 149
9 Three-Dimensional Lookup Table with Interpolation 151
9.1 Structure of 3D Lookup Table 151
9.1.1 Packing 151
9.1.2 Extraction 152
9.1.3 Interpolation 153
9.2 Geometric Interpolations 153
9.2.1 Bilinear interpolation 154
9.2.2 Trilinear interpolation 155
9.2.3 Prism interpolation 157
9.2.4 Pyramid interpolation 159
9.2.5 Tetrahedral interpolation 161
9.2.6 Derivatives and extensions 163
9.3 Cellular Regression 164
9.4 Nonuniform Lookup Table 165
9.5 Inverse Color Transform 166
9.6 Sequential Linear Interpolation 168
9.7 Results of Forward 3D Interpolation 170
9.8 Results of Inverse 3D Interpolation 177
9.9 Remarks 180
References 180
10 Metameric Decomposition and Reconstruction 183
10.1 Metameric Spectrum Decomposition 183
10.2 Metameric Spectrum Reconstruction 189
10.2.1 Spectrum reconstruction from the fundamental and metameric black 189
10.2.2 Spectrum reconstruction from tristimulus values 191
10.2.3 Error measures 194
10.3 Results of Spectrum Reconstruction 194
10.3.1 Results from average fundamental and metameric black 194
10.3.2 Results of spectrum reconstruction from tristimulus values 199
10.4 Application 200
10.5 Remarks 201
References 202
11 Spectrum Decomposition and Reconstruction 203
11.1 Spectrum Reconstruction 203
11.2 General Inverse Method 204
11.2.1 Spectrum reconstruction via orthogonal projection 205
11.2.2 Spectrum reconstruction via smoothing inverse 205
11.2.3 Spectrum reconstruction via Wiener inverse 209
11.3 Spectrum Decomposition and Reconstruction Methods 212
11.4 Principal Component Analysis 212
11.5 Basis Vectors 214
11.6 Spectrum Reconstruction from the Input Spectrum 220
11.7 Spectrum Reconstruction from Tristimulus Values 223
11.8 Error Metrics 224
11.9 Results and Discussions 224
11.9.1 Spectrum reconstruction from the object spectrum 225
11.9.2 Spectrum reconstruction from the tristimulus values 228
11.10 Applications 229
References 230
12 Computational Color Constancy 233
12.1 Image Irradiance Model 233
12.1.1 Reflection phenomenon 234
12.2 Finite-Dimensional Linear Models 236
12.3 Three-Two Constraint 240
12.4 Three-Three Constraint 242
12.4.1 Gray world assumption 243
12.4.2 Sällström-Buchsbaum model 244
12.4.3 Dichromatic reection model 245
12.4.4 Estimation of illumination 246
12.4.5 Other dichromatic models 250
12.4.6 Volumetric model 253
12.5 Gamut-Mapping Approach 255
12.6 Lightness/Retinex Model 256
12.7 General Linear Transform 258
12.8 Spectral Sharpening 259
12.8.1 Sensor-based sharpening 260
12.8.2 Data-based sharpening 261
12.8.3 Perfect sharpening 264
12.8.4 Diagonal transform of the 3-2 world 266
12.9 Von Kries Color Prediction 266
12.10 Remarks 268
References 268
13 White-Point Conversion 273
13.1 White-Point Conversion via RGB Space 273
13.2 White-Point Conversion via Tristimulus Ratios of Illuminants 283
13.3 White-Point Conversion via Difference in Illuminants 286
13.4 White-Point Conversion via Polynomial Regression 295
13.5 Remarks 298
References 299
14 Multispectral Imaging 301
14.1 Multispectral Irradiance Model 303
14.2 Sensitivity and Uniformity of a Digital Camera 305
14.2.1 Spatial uniformity of a digital camera 306
14.2.2 Spectral sensitivity of a digital camera 308
14.3 Spectral Transmittance of Filters 308
14.3.1 Design of optimal filters 309
14.3.2 Equal-spacing filter set 310
14.3.3 Selection of optimal filters 311
14.4 Spectral Radiance of Illuminant 311
14.5 Determination of Matrix 312
14.6 Spectral Reconstruction 314
14.6.1 Tristimulus values using PCA 314
14.6.2 Pseudo-inverse estimation 315
14.6.3 Smoothing inverse estimation 316
14.6.4 Wiener estimation 316
14.7 Multispectral Image Representation 317
14.8 Multispectral Image Quality 319
References 320
15 Densitometry 325
15.1 Densitometer 326
15.1.1 Precision of density measurements 327
15.1.2 Applications 329
15.2 Beer-Lambert-Bouguer Law 331
15.3 Proportionality 332
15.3.1 Density ratio measurement 334
15.4 Additivity 334
15.5 Proportionality and Additivity Failures 335
15.5.1 Filter bandwidth 335
15.5.2 First-surface reection 335
15.5.3 Multiple internal reections 335
15.5.4 Opacity 335
15.5.5 Halftone pattern 336
15.5.6 Tone characteristics of commercial printers 336
15.6 Empirical Proportionality Correction 338
15.7 Empirical Additivity Correction 341
15.8 Density-Masking Equation 342
15.9 Device-Masking Equation 343
15.9.1 Single-step conversion of the device-masking equation 344
15.9.2 Multistep conversion of the device-masking equation 345
15.9.3 Intuitive approach 346
15.10 Performance of the Device-Masking Equation 347
15.11 Gray Balancing 347
15.12 Gray-Component Replacement 349
15.13 Digital Implementation 350
15.13.1 Results of the integer masking equation 351
15.14 Remarks 353
References 354
16 Kubelka-Munk Theory 355
16.1 Two-Constant Kubelka-Munk Theory 356
16.2 Single-Constant Kubelka-Munk Theory 357
16.3 Determination of the Single Constant 360
16.4 Derivation of Saunderson's Correction 360
16.5 Generalized Kubelka-Munk Model 362
16.6 Cellular Extension of the Kubelka-Munk Model 365
16.7 Applications 365
16.7.1 Applications to multispectral imaging 366
References 366
17 Light-Reflection Model 369
17.1 Three-Primary Neugebauer Equations 369
17.2 Demichel Dot-Overlap Model 370
17.3 Simplifications 371
17.4 Four-Primary Neugebauer Equation 373
17.5 Cellular Extension of the Neugebauer Equations 375
17.6 Spectral Extension of the Neugebauer Equations 376
References 382
18 Halftone Printing Models 385
18.1 Murray-Davies Equation 385
18.1.1 Spectral extension of the Murray-Davies equation 387
18.1.2 Expanded Murray-Davies model 388
18.2 Yule-Nielsen Model 388
18.2.1 Spectral extension of Yule-Nielsen model 390
18.3 Area Coverage-Density Relationship 392
18.4 Clapper-Yule Model 393
18.4.1 Spectral extension of the Clapper-Yule model 394
18.5 Hybrid Approaches 394
18.6 Cellular Extension of Color-Mixing Models 395
18.7 Dot Gain 396
18.8 Comparisons of Halftone Models 400
References 402
19 Issues of Digital Color Imaging 407
19.1 Human Visual Model 407
19.1.1 Contrast sensitivity function 409
19.1.2 Color visual model 410
19.2 Color Appearance Model 412
19.3 Integrated Spatial-Appearance Model 413
19.4 Image Quality 413
19.5 Imaging Technology 415
19.5.1 Device characteristics 415
19.5.2 Measurement-based tone correction 416
19.5.3 Tone level 417
19.6 Device-Independent Color Imaging 418
19.7 Device Characterization 421
19.8 Color Spaces and Transforms 423
19.8.1 Color-mixing models 424
19.9 Spectral Reproduction 425
19.10 Color Gamut Mapping 425
19.11 Color Measurement 426
19.12 Color-Imaging Process 426
19.12.1 Performance 427
19.12.2 Cost 428
19.13 Color Architecture 428
19.14 Transformations between sRGB and Internet FAX Color Standard 430
19.15 Modular Implementation 434
19.15.1 sRGB-to-CIEXYZ transformation 434
19.15.2 Device/RGB-to-CIEXYZ transformation 436
19.15.3 CIEXYZ-to-CIELAB transformation 436
19.15.4 CIELAB-to-CIEXYZ transformation 437
19.15.5 CIEXYZ-to-colorimetric RGB transformation 438
19.15.6 CIEXYZ-to-Device/RGB transformation 438
19.16 Results and Discussion 439
19.16.1 sRGB-to-CIEXYZ transformation 439
19.16.2 Device/RGB-to-CIEXYZ transformation 440
19.16.3 CIEXYZ-to-CIELAB transformation 440
19.16.4 CIELAB-to-CIEXYZ transformation 441
19.16.5 CIEXYZ-to-sRGB transformation 441
19.16.6 Combined computational error 442
19.17 Remarks 443
References 444
Appendices
A1 Conversion Matrices 449
A2 Conversion Matrices from RGB to ITU-R BT.709/RGB 471
A3 Conversion Matrices from RGB to ROMM/RGB 475
A4 RGB Color-Encoding Standards 479
A4.1 SMPTE-C/RGB 479
A4.2 European TV Standard (EBU) 480
A4.3 American TV YIQ Standard 481
A4.4 PhotoYCC 482
A4.5 sRGB Encoding Standards 483
A4.6 E-sRGB Encoding Standard 484
A4.7 Kodak ROMM/RGB Encoding Standard 485
A4.8 Kodak RIMM/RGB 486
References 487
A5 Matrix Inversion 489
A5.1 Triangularization 489
A5.2 Back Substitution 491
References 492
A6 Color Errors of Reconstructed CRI Spectra with Respect to Measured Values 493
A7 Color Errors of Reconstructed CRI Spectra with Respect to Measured Values Using Tristimulus Inputs 497
A8 White-Point Conversion Accuracies Using Polynomial Regression 499
A9 Digital Implementation of the Masking Equation 503
A9.1 Integer Implementation of Forward Conversion 503
A9.2 Integer Implementation of Inverse Conversion 506
Index 509
Preface
Recent developments in color imaging have evolved from the classical broadband
description to a spectral representation. Color reproduction has been attempted via
spectral matching, and image capture via digital camera has extended to multispec-
tral recording. These topics have appeared in a couple of books and are scattered
across several digital imaging journals. However, there is no integrated view or
consistent representation of spectral color imaging. This book is intended to fill that void and
bridge the gap between color science and computational color technology, putting
color adaptation, color constancy, color transforms, color display, and color rendi-
tion in the domain of vector-matrix representations and theories. The aim of this
book is to deal with digital color images at the spectral level using vector-matrix
representations so that one can process digital color images by employing linear
algebra and matrix theory.
This is the onset of a new era of color reproduction. Spectral reconstruction
provides the means for the highest level of color matching. As pointed out by Dr.
R. W. G. Hunt, spectral color matching gives color fidelity under any viewing con-
ditions. However, current color technology and mathematical tools are still insuf-
ficient for giving accurate spectral reconstructions (and may never be sufficient
because of device variations and color-measurement uncertainties). Nevertheless,
this book provides the fundamental color principles and mathematical tools to pre-
pare one for this new era and for subsequent applications in multispectral imaging,
medical imaging, remote sensing, and machine vision. The intent is to bridge color
science, mathematical formulations, psychophysical phenomena, physical models,
and practical implementations all in one work.
The contents of this book are primarily aimed at digital color imaging profes-
sionals for research and development purposes. This book can also be used as a
textbook for undergraduate and graduate students in digital imaging, printing, and
graphic arts. The book is organized into five parts. The first part, Chapters 1–7, is
devoted to the fundamentals of color science such as the CIE tristimulus specifica-
tions, principles of color matching, metamerism, chromatic adaptation, and color
spaces. These topics are presented in vector-matrix forms, giving a new flavor to
old material and, in many cases, revealing new perspectives and insights. This is
because the representation of the spectral sensitivity of human vision and related
visual phenomena in vector-matrix form provides the foundation for computational
color technology. The vector-space representation makes possible the use of the
well-developed fields of linear algebra and matrix theory.
Chapter 1 gives the definitions of CIE tristimulus values. Each component, such
as the color-matching function, illuminant, and object spectrum, is given in vector-
matrix notation under several different vector associations of components. This
sets the stage for subsequent computations. Chapter 2 presents the fundamental
principles governing color matching such as the identity, proportionality, and ad-
ditivity laws. Based on these laws, the conversion of primaries is simply a linear
transform. Chapter 3 discusses the metameric matching from the perspective of
the vector-matrix representation, which allows the derivation of matrix R, the or-
thogonal projection of the tristimulus color space. The properties of matrix R are
discussed in detail. Several levels of the metameric matching are discussed and
metameric corrections are provided. Chapter 4 presents various models of the chro-
matic adaptation from the fundamental von Kries hypothesis to complex retinex
theory. Chapter 5 presents CIE color spaces and their relationships. Color gamut
boundaries for CIELAB are derived, and a spatial extension of CIELAB is given.
The most recent color appearance model, CIECAM02, is also included. Chap-
ter 6 gives a comprehensive collection of RGB primaries and encoding standards
and derives the conversion formula between RGB primaries. These standards are
compared and their advantages and disadvantages are discussed. Chapter 7 presents
the device-dependent color spaces based on the ideal block dye model. The meth-
ods of obtaining the color gamut boundary of imaging devices and color gamut
mapping are provided. They are the essential parts of color rendering at the system
level.
The second part of the book, Chapters 8–11, provides tools for color trans-
formation and spectrum reconstruction. These empirical methods are developed
purely on mathematical grounds and are formulated in the vector-matrix forms to
enable matrix computations. In Chapter 8, the least-squares minimization regres-
sion technique is given, and the vector-matrix formulations of the forward and in-
verse color transformations are derived and extended to the spectral domain. To test
the quality of the regression technique, real-world color conversion data are used.
Chapter 9 focuses on lookup-table techniques, and the structure of the 3D lookup
table and geometric interpolations are discussed in detail. Several extensions and
improvements are also provided, and real data are used to test the value of the 3D-
LUT technique. Chapter 10 shows the simplest spectrum reconstruction method
by using the metameric decomposition of the matrix R theory. Two methods are
developed for spectrum reconstruction; one using the sum of metameric black and
fundamental spectra, and the other using tristimulus values without spectral in-
formation. The methods are tested by using CIE illuminants and spectra of the
Color Rendering Index (CRI). Chapter 11 provides several sophisticated meth-
ods of the spectrum reconstruction, including the general inverse methods such as
the smoothing inverse and Wiener inverse and the principal component analysis.
Again, these methods are tested by using CRI spectra because spectrum recon-
struction is the foundation for color spectral imaging, utilizing the vector-matrix
representations.
The third part, Chapters 12–14, shows applications of spectral reconstruction to
color science and technology, such as color constancy, white-point conversion, and
multispectral imaging. This part deals with the psychophysical aspect of the sur-
face reection, considering signals reected into the human visual pathway from
the object surface under certain kinds of illumination. We discuss the topics of sur-
face illumination and reection, including metameric black, color constancy, the
finite-dimensional linear model, white-point conversion (illuminant mapping), and
multispectral image processing. These methods can be used to estimate (or recover)
surface and illuminant spectra, and can be applied to remote sensing and machine
vision. Chapter 12 discusses computational color constancy, which estimates the
surface spectrum and illumination simultaneously. The image irradiance model and
finite-dimensional linear models for approximating the color-constancy phenom-
enon are presented, and various constraints are imposed in order to solve the
finite-dimensional linear equations. Chapter 13 describes the application of fundamental
color principles to white-point conversion. Several methods are developed and the
conversion accuracy is compared. Chapter 14 discusses the applications of spec-
trum reconstruction for multispectral imaging. Multispectral images are acquired
by digital cameras, and the camera characteristics with respect to color image qual-
ity are discussed. For device compatibility and cross-media rendering, a proposed
multispectral image representation is given. Finally, the multispectral image qual-
ity is discussed.
The fourth part, Chapters 15–18, deals with the physical models accounting
for the intrinsic physical and chemical interactions occurring in the colorants
and substrates. This is mainly applied to the printing process, halftone printing
in particular. In this section, physical models of the Neugebauer equations, the
Murray-Davies equation, the Yule-Nielsen model, the Clapper-Yule model, the
Beer-Lambert-Bouguer law, the density-masking equation, and the Kubelka-Munk
theory are discussed. These equations are then reformulated in the vector-matrix
notation and expanded in both spectral and spatial domains with the help of the
vector-matrix theory in order to derive new insights and develop new ways of em-
ploying these equations. It is shown that this spectral extension has applications in
the spectral color reproduction that greatly improve the color image quality. Chap-
ter 15 describes densitometry beginning with the Beer-Lambert-Bouguer law and
its proportionality and additivity failures. Empirical corrections for proportionality
and additivity failures are then developed. The density-masking equation is then
presented and extended to the device-masking equation, which can be applied to
gray balancing, gray component replacement, and maximum ink loading. Chapter
16 reformulates the Kubelka-Munk theory in the vector-matrix form. A general
Kubelka-Munk model is presented using four uxes that can be reduced to other
halftone printing models. Chapter 17 presents the Neugebauer equations, extend-
ing them to spectral domain by using the vector-matrix notation. This notation
provides the means of finding the inverse Neugebauer equations and of obtaining
the amounts of primary inks. Finally, Chapter 18 contains various halftone print-
ing models such as the Murray-Davies equation, the Yule-Nielsen model, and the
Clapper-Yule model. Chapter 18 also discusses dot gain and describes a physical
model that takes the optical and spatial components into account.
The last part, Chapter 19, expresses my view of the salient issues in digital
color imaging. Digital color imaging is an extremely complex phenomenon, in-
volving the human visual model, the color appearance model, image quality, imag-
ing technology, device characterization and calibration, color space transformation,
color gamut mapping, and color measurement. The complexity can be reduced and
image quality improved by a proper color architecture design. A simple transfor-
mation between sRGB and Internet FAX is used to illustrate this point.
Henry R. Kang
March 2006
Acknowledgments
In the course of writing this book, I received assistance from many people in the
collection of materials. I want to thank Addison-Wesley Longman, Inc., AGFA Ed-
ucational Publishing, the Commission Internationale de l'Éclairage (CIE), the Interna-
tional Society for Optical Engineering (SPIE), John Wiley & Sons, Inc., the Optical
Society of America (OSA), and the Society for Imaging Science and Technology
(IS&T) for allowing me to use their publications and figures. I am indebted to the
staff of the Palos Verdes Public Library, Joyce Grauman in particular, for helping
me to search, locate, and acquire many books and papers that were essential in the
writing of this book. I am also grateful to Prof. Brian V. Funt (Simon Fraser Uni-
versity), Dr. Jan Morovic (Hewlett-Packard Company, Spain), Prof. Joel Trussell
(North Carolina State University), and Prof. Brian Wandell (Stanford University)
for providing me with their publications so that I could gain a better understanding
of the topics in their respective fields in order to present a correct and consistent
view throughout this book.
I want to thank the reviewers for their comments, suggestions, and corrections.
Finally, I want to thank Cassie McClellan and Beth Huetter for obtaining permis-
sion from the original authors and publishers to use their figures, and Timothy
Lamkins for managing the publication of this book.
Chapter 1
Tristimulus Specification
Colorimetry is the method of measuring and evaluating the colors of objects. The term
color is defined as an attribute of visual perception consisting of any combination
of chromatic and achromatic content. This visual attribute has three components:
it can be described by chromatic color names such as red, pink, orange, yellow,
brown, green, blue, purple, etc.; or by achromatic color names such as white, gray,
black, etc.; and qualified by bright, dim, light, dark, etc., or by combinations of such
names.[1]
The Commission Internationale de l'Éclairage (CIE) was the driving force
behind the development of colorimetry. This international organization is respon-
sible for defining and specifying colorimetry via a series of CIE publications. The
foundation of colorimetry is the human visual color sensibility, illuminant sources,
and spectral measurements that are described in the context of a color space. The
backbone of colorimetry is the tristimulus specification.
1.1 Definitions of CIE Tristimulus Values
The trichromatic nature of human color vision is mathematically formulated by the
CIE to give the tristimulus values X, Y, and Z. The CIE method of colorimetric
specification is based on the rules of color matching by additive color mixture. The
principles of additive color mixing are known as Grassmann's laws of color
mixtures:[2]
(1) Three independent variables are necessary and sufficient for specifying a
color mixture.
(2) Stimuli evoking the same color appearance produce identical results in
additive color mixtures, regardless of their spectral compositions.
(3) If one component of a color mixture changes, the color of the mixture
changes in a corresponding manner.
The first law establishes what is called trichromacy: all colors can be
matched by a suitable mixture of three different stimuli, under the constraint that
none of them may be matched in color by any mixture of the others. If one stimulus
is matched by the other two, then that stimulus is no longer independent of
the other two. The second law means that stimuli with different spectral radiance
distributions may provide the same color match. Such physically dissimilar stimuli
that elicit the same color matches are called metamers, and the phenomenon is
said to be metamerism because an identical color match may consist of different
mixture components. The third law establishes the proportionality and additivity
of the stimulus metric for color mixing:[3]
(1) Symmetry law: If color stimulus A matches color stimulus B, then color
stimulus B matches color stimulus A.
(2) Transitivity law: If A matches B and B matches C, then A matches C.
(3) Proportionality law: If A matches B, then αA matches αB, where α
is any positive factor by which the radiant power of the color stimulus
is increased or reduced, while its relative spectral distribution is kept the
same.
(4) Additivity law: If A matches B, C matches D, and the additive mixture
(A + C) matches the additive mixture (B + D), then the additive
mixture (A + D) matches the additive mixture (B + C).
The CIE tristimulus specification, or CIEXYZ, is built on Grassmann's laws us-
ing the spectral information of the object, illuminant, and color-matching functions.
The specification is defined in CIE Publication 15.2.[4] Mathematically, CIEXYZ
is the integration of the product of three spectra given by the object spectrum
S(λ), the spectral power distribution (SPD) of an illuminant E(λ), and the color-
matching functions (CMFs) A(λ) = {x̄(λ), ȳ(λ), z̄(λ)}, where λ is the wavelength.
The object spectrum S(λ) can be obtained in reflectance, transmittance, or radi-
ance.
$$X = k \int \bar{x}(\lambda)\,E(\lambda)\,S(\lambda)\,d\lambda \approx k \sum_i \bar{x}(\lambda_i)\,E(\lambda_i)\,S(\lambda_i), \tag{1.1a}$$

$$Y = k \int \bar{y}(\lambda)\,E(\lambda)\,S(\lambda)\,d\lambda \approx k \sum_i \bar{y}(\lambda_i)\,E(\lambda_i)\,S(\lambda_i), \tag{1.1b}$$

$$Z = k \int \bar{z}(\lambda)\,E(\lambda)\,S(\lambda)\,d\lambda \approx k \sum_i \bar{z}(\lambda_i)\,E(\lambda_i)\,S(\lambda_i), \tag{1.1c}$$

$$k = 100 \Big/ \int \bar{y}(\lambda)\,E(\lambda)\,d\lambda \approx 100 \Big/ \sum_i \bar{y}(\lambda_i)\,E(\lambda_i). \tag{1.1d}$$

The scalar k is a normalizing constant. For absolute tristimulus values, k is set at
the maximum spectral luminous efficacy of 683 lumens/watt, and the color stimulus
function φ(λ) = E(λ)S(λ) must be in the spectral concentration unit of the radio-
metric quantity (watt · meter⁻² · steradian⁻¹ · nanometer⁻¹). Usually, k is chosen to
give a value of 100 for the luminance Y of samples with a unit reflectance (transmit-
tance or radiance) spectrum, as shown in Eq. (1.1d). Equation (1.1) also provides
the approximation of tristimulus values as summations if the sampling rate is high
enough to give accurate results. For equal-interval sampling,

$$\lambda_i = \lambda_0 + (i-1)\,\Delta\lambda, \qquad i = 1, 2, 3, \ldots, n, \tag{1.1e}$$
and Δλ = λ_t/(n − 1), where λ_t is the total range of the spectrum and λ_0 is the short-
wavelength end of the range. The CIE recommends a sampling interval of Δλ = 5
nanometers (nm) over the wavelength range from 380 to 780 nm. In this case,
λ_0 = 380 nm and λ_t = 400 nm, and we have n = 81 samples. However, many
commercial instruments use a Δλ = 10 nm interval in the range of 400 to 700 nm,
with a sample number n = 31. The summations in Eq. (1.1) sum over all sample points.
It is obvious that all spectra must be sampled at the same rate in the same range.
By making this approximation, the integral for tristimulus values becomes a sum
of products in a finite-dimensional vector space. The sum has the form of an in-
ner (or dot) product of three finite-dimensional vectors. The approximation from
integration to summation affects the computational accuracy, which is dependent
on the sampling rate or the interval size. Trussell performed studies of the accuracy
with respect to the sampling rate.[5,6] Using the 2-nm interval size as the standard
for comparison, he computed the color differences of natural objects and paint
samples sampled at 4-nm, 6-nm, 8-nm, and 10-nm intervals under three different
illuminants: A, D65, and F2 (see Figs. 1.5 and 1.6 for the SPDs of these illuminants).
Generally, the color difference increases with increasing interval size. Under illu-
minants A and D65, all tested samples have an average color difference of less than
0.3 units and a maximum difference of less than 1.3 units for all sampling rates.
Under F2, the average and maximum color differences can go as high as 13 and 31
units, respectively. He concluded that most colored objects are sufficiently band-
limited to allow sampling at 10 nm for illuminants with slowly varying spectra. The
problem lies with a fluorescent illuminant such as F2, which has spikes in its spectrum
(see Fig. 1.6). This study confirmed that the approximation given in Eq. (1.1) is valid
and that the common practice of using a 10-nm interval is acceptable if fluorescent il-
luminants are not used. Figure 1.1 gives a graphic illustration using Δλ = 10 nm
in the range of 400 to 700 nm; the products are scaled by the scalar k (not shown
in the figure) to give the final tristimulus values.
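The summation forms of Eq. (1.1) are straightforward to implement. The sketch below uses synthetic Gaussian curves and a flat illuminant purely as placeholders; substituting tabulated CIE CMF and illuminant values, sampled at the same 10-nm wavelengths, yields actual tristimulus values:

```python
import numpy as np

n = 31  # 400-700 nm at 10-nm intervals
wavelengths = np.linspace(400, 700, n)

# Placeholder data: real computations use CIE CMF and illuminant tables
# sampled at these same wavelengths.
xbar = np.exp(-((wavelengths - 600) / 60.0) ** 2)
ybar = np.exp(-((wavelengths - 550) / 55.0) ** 2)
zbar = np.exp(-((wavelengths - 450) / 40.0) ** 2)
E = np.ones(n)         # equal-energy illuminant SPD
S = np.full(n, 0.5)    # uniform 50% reflectance object

k = 100.0 / np.sum(ybar * E)   # Eq. (1.1d): Y = 100 for unit reflectance
X = k * np.sum(xbar * E * S)   # Eq. (1.1a)
Y = k * np.sum(ybar * E * S)   # Eq. (1.1b)
Z = k * np.sum(zbar * E * S)   # Eq. (1.1c)

print(X, Y, Z)  # for a flat 50% reflector, Y = 50 by construction
```

Because k normalizes against the illuminant alone, a perfect (unit-reflectance) diffuser always gets Y = 100 regardless of the illuminant chosen.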
The 3D nature of color suggests plotting the value of each tristimulus com-
ponent along orthogonal axes. The result is called tristimulus space, which is a
visually nonuniform color space. Often, the tristimulus space is projected onto two
dimensions by normalizing each component with the sum of tristimulus values.
$$x = X/(X+Y+Z), \qquad y = Y/(X+Y+Z), \qquad z = Z/(X+Y+Z), \tag{1.2}$$
where x, y, and z are called the chromaticity coordinates. Tristimulus values are the nucleus of the CIE color specification. All other CIE color specifications, such as CIELUV and CIELAB, are derived from tristimulus values.
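The projection in Eq. (1.2) is straightforward to code. The sketch below is a minimal illustration; the XYZ values fed to it are arbitrary sample numbers, not data from this chapter.

```python
def chromaticity(X, Y, Z):
    """Project tristimulus values onto chromaticity coordinates, Eq. (1.2)."""
    total = X + Y + Z
    return X / total, Y / total, Z / total

# Arbitrary tristimulus values, for illustration only.
x, y, z = chromaticity(41.24, 21.26, 1.93)
print(x, y, z)
```

By construction x + y + z = 1, which is why the chromaticity diagram needs only the (x, y) pair.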
1.2 Vector-Space Representations of Tristimulus Values
The approximation from integration to summation, as given in Eq. (1.1), enables us to express the tristimulus values and the spectral sensitivity of human vision in
4 Computational Color Technology
Figure 1.1 Graphic illustration of the CIE tristimulus computation.
Tristimulus Specification 5
a concise representation of the vector-matrix form, as given in Eq. (1.3), where the superscript T denotes the transposition of a matrix or vector:

Λ = k(AᵀE)S = kΩᵀS, (1.3a)
Λ = kAᵀ(ES) = kAᵀΦ, (1.3b)
Λ = k(AᵀS)E = kQᵀE, (1.3c)
k = 100/(ȳᵀE). (1.3d)

The symbol Ω represents the product of the color-matching matrix A and the illuminant matrix E. Equations (1.3a)–(1.3c) are exactly the same, only expressed in different associative relationships with minor differences in vector-matrix form. In this book, we adopt the convention that matrices and vectors are denoted by boldface italic capital letters (or symbols), and their elements, which are scalars, are denoted by the corresponding lowercase italic letters. Vectors are oriented as vertical columns and viewed as one-column matrices (or diagonal matrices whenever applicable). For historic reasons, however, there are some exceptions; for example, the capital letters X, Y, and Z are used for the elements of the vector of tristimulus values in the CIE system, and the vectors of the CMFs are represented as the boldface lowercase italic letters x̄, ȳ, and z̄.
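Equations (1.3a) and (1.3b) are the same triple product grouped differently, which is easy to confirm numerically. The three-sample CMF, illuminant, and object values below are made-up toy numbers, standing in for the sampled tables introduced in Sections 1.3–1.5.

```python
# Toy sampled data at n = 3 wavelengths; all values are illustrative only.
A = [[0.2, 0.1, 0.9],   # rows: wavelengths; columns: x-bar, y-bar, z-bar
     [0.6, 0.8, 0.3],
     [1.0, 0.4, 0.0]]
E = [0.9, 1.0, 1.1]     # illuminant samples e_i (the diagonal of matrix E)
S = [0.5, 0.7, 0.2]     # object spectrum samples s_i

k = 100.0 / sum(A[i][1] * E[i] for i in range(3))   # Eq. (1.3d)

# Eq. (1.3b) grouping: form the stimulus Phi = E*S first, then apply A^T.
Phi = [E[i] * S[i] for i in range(3)]
XYZ_b = [k * sum(A[i][c] * Phi[i] for i in range(3)) for c in range(3)]

# Eq. (1.3a) grouping: form the weighted CMF Omega^T = A^T E first, then apply to S.
Omega_T = [[A[i][c] * E[i] for i in range(3)] for c in range(3)]
XYZ_a = [k * sum(Omega_T[c][i] * S[i] for i in range(3)) for c in range(3)]

print(XYZ_a)   # identical to XYZ_b up to rounding
```

The Ωᵀ grouping is convenient when many objects are evaluated under one fixed illuminant, since Ωᵀ is computed once and reused.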
1.3 Object Spectrum
The object spectrum S() can readily be measured via various optical instruments
such as spectrophotometer or spectroradiometer, whereas the object representation
S is a vector of n elements obtained by sampling the object spectrum S() at the
same rate as the illuminant and CMF. Usually, the interval between samples is
constant and n is the number of sample points; for example, n = 31 if the range is
from 400 to 700 nm with a 10-nm interval.
S = [s(λ₁) s(λ₂) s(λ₃) … s(λₙ)]ᵀ = [s₁ s₂ s₃ … sₙ]ᵀ. (1.4)

For the purpose of simplifying the notation, we abbreviate s(λᵢ) as sᵢ; that is, an element and a scalar of the vector.
1.4 Color-Matching Functions
In addition to the spectrum of an object, the CIE color specications require the
color-matching functions and the spectrum of an illuminant. The color-matching
functions, also referred to as CIE standard observers, are intended to represent an
average observer of normal color vision. They are determined experimentally. The
experiment involves a test light incident on half of a bipartite white screen. An
observer attempts to perceptually match hue, brightness, and saturation of the test
light by adjusting three additive r(), g(), and

b() primaries shining on the other
6 Computational Color Technology
half of the screen.
7
According to the International Lighting Vocabulary, hue is an
attribute of the visual sensation in which an area appears to be similar to one of
the perceived colors: red, yellow, green, and blue, or to a combination of them;
brightness is an attribute of a visual sensation in which an area appears to emit
more or less light, and saturation is an attribute of a visual sensation in which the
perceived color of an area appears to be more or less chromatic in proportion to its
brightness.
1
This visual color match can be expressed mathematically as

h = r̄R + ḡG + b̄B, (1.5a)

where h is the color of the test light; R, G, and B correspond to the red, green, and blue matching lights (primaries); and r̄, ḡ, and b̄ are the relative amounts of the respective lights. With this arrangement, some colors, like those in the blue-green region, cannot be matched by adding three primaries, and metamers of spectral colors are physically unattainable because they possess the highest purity, having the highest light intensity of a single wavelength. To get around this problem, one of the primaries is moved to the test side to lower the purity such that the trial side can match; for example, the red primary is added to the test light for matching a blue-green color:

h + r̄R = ḡG + b̄B. (1.5b)

Mathematically, moving the red to the test light corresponds to subtracting it from the other two primaries:

h = −r̄R + ḡG + b̄B. (1.5c)

Figure 1.2 depicts the CIE 1931 color-matching functions r̄(λ), ḡ(λ), and b̄(λ), showing negative values in some portions of the curves.
The relationships between r̄, ḡ, b̄ and x̄, ȳ, z̄ are specified in CIE Publication No. 15 [8]. First, the tristimulus values r̄(λ), ḡ(λ), b̄(λ) are converted to chromaticity coordinates r(λ), g(λ), b(λ) using Eq. (1.6); the resulting curves are plotted in Fig. 1.3.

r(λᵢ) = r̄(λᵢ)/[r̄(λᵢ) + ḡ(λᵢ) + b̄(λᵢ)], i = 1, 2, 3, …, n, (1.6a)
g(λᵢ) = ḡ(λᵢ)/[r̄(λᵢ) + ḡ(λᵢ) + b̄(λᵢ)], i = 1, 2, 3, …, n, (1.6b)
b(λᵢ) = b̄(λᵢ)/[r̄(λᵢ) + ḡ(λᵢ) + b̄(λᵢ)], i = 1, 2, 3, …, n. (1.6c)

Then, the chromaticity coordinates r(λ), g(λ), b(λ) are converted to chromaticity coordinates x(λ), y(λ), z(λ) via Eq. (1.7).
Figure 1.2 CIE r̄ḡb̄ spectral tristimulus values [3].
x(λ) = [0.49000r(λ) + 0.31000g(λ) + 0.20000b(λ)] / [0.66697r(λ) + 1.13240g(λ) + 1.20063b(λ)], (1.7a)
y(λ) = [0.17697r(λ) + 0.81240g(λ) + 0.01063b(λ)] / [0.66697r(λ) + 1.13240g(λ) + 1.20063b(λ)], (1.7b)
z(λ) = [0.00000r(λ) + 0.01000g(λ) + 0.99000b(λ)] / [0.66697r(λ) + 1.13240g(λ) + 1.20063b(λ)]. (1.7c)
Figure 1.3 The chromaticity coordinates of r(λ), g(λ), and b(λ) [3].

Finally, the chromaticity coordinates x(λ), y(λ), z(λ) are converted to the color-matching functions x̄(λ), ȳ(λ), z̄(λ) by scaling with the photopic luminous efficiency function V(λ), established by the CIE in 1924 for photopic vision [9, 10]. The plot of the photopic luminous efficiency function with respect to wavelength gives a bell shape with a peak at 555 nm and a half width of about 150 nm, indicating that the visual system is more sensitive to wavelengths in the middle region of the visible spectrum. This weighting converts radiometric quantities to photometric quantities:

x̄(λ) = [x(λ)/y(λ)]V(λ), (1.8a)
ȳ(λ) = V(λ), (1.8b)
z̄(λ) = [z(λ)/y(λ)]V(λ). (1.8c)
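The chain of Eqs. (1.7) and (1.8) is a pointwise transform at each wavelength. The sketch below applies it to one made-up (r, g, b, V) sample; only the coefficients are the published ones from Eq. (1.7).

```python
def rgb_to_xyz_cmf(r, g, b, V):
    """Apply Eq. (1.7), then scale by V(lambda) as in Eq. (1.8)."""
    d = 0.66697 * r + 1.13240 * g + 1.20063 * b           # common denominator
    x = (0.49000 * r + 0.31000 * g + 0.20000 * b) / d     # Eq. (1.7a)
    y = (0.17697 * r + 0.81240 * g + 0.01063 * b) / d     # Eq. (1.7b)
    z = (0.00000 * r + 0.01000 * g + 0.99000 * b) / d     # Eq. (1.7c)
    return (x / y) * V, V, (z / y) * V                    # Eq. (1.8)

# Hypothetical chromaticity sample (r, g, b) and luminous efficiency V
# at a single wavelength; these are not values from the CIE tables.
xbar, ybar, zbar = rgb_to_xyz_cmf(0.2, 0.5, 0.3, 0.8)
print(xbar, ybar, zbar)
```

Note that ȳ(λ) = V(λ) drops out directly, Eq. (1.8b), so the ȳ curve is the 1924 luminous efficiency function itself.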
This projective transformation makes all color-matching functions positive across the visible spectrum. The CIE has recommended two standard observers: the CIE 2° (1931) and CIE 10° (1964) observers. The CMFs of these two observers are depicted in Fig. 1.4. The CIE 1931 2° observer functions x̄(λ), ȳ(λ), z̄(λ) were derived from spectral tristimulus values r̄(λ), ḡ(λ), b̄(λ) using the spectral matching stimuli (R), (G), (B) at wavelengths 700.0, 546.1, and 435.8 nm, respectively, whereas the CIE 1964 10° observer functions x̄₁₀(λ), ȳ₁₀(λ), z̄₁₀(λ) were derived from spectral tristimulus values referring to matching stimuli (R₁₀), (G₁₀), (B₁₀). These are stimuli specified in terms of wave numbers 15500, 19000, and 22500 cm⁻¹, corresponding approximately to wavelengths 645.2, 526.3, and 444.4 nm, respectively [8].
Figure 1.4 CIE 1931 2° and CIE 1964 10° observers [8].
The matrix representation of the color-matching functions, A, is a sampled standard observer with a size of n × 3, where the column count of three is due to the trichromatic nature of human vision and the row count n is the number of sample points; for example, n = 31 if the range is from 400 nm to 700 nm with a 10-nm interval.
    ⎡ x̄(λ₁)  ȳ(λ₁)  z̄(λ₁) ⎤   ⎡ x̄₁  ȳ₁  z̄₁ ⎤
    ⎢ x̄(λ₂)  ȳ(λ₂)  z̄(λ₂) ⎥   ⎢ x̄₂  ȳ₂  z̄₂ ⎥
A = ⎢ x̄(λ₃)  ȳ(λ₃)  z̄(λ₃) ⎥ = ⎢ x̄₃  ȳ₃  z̄₃ ⎥ . (1.9)
    ⎢   …      …      …    ⎥   ⎢  …   …   …  ⎥
    ⎣ x̄(λₙ)  ȳ(λₙ)  z̄(λₙ) ⎦   ⎣ x̄ₙ  ȳₙ  z̄ₙ ⎦

The terms x̄(λᵢ), ȳ(λᵢ), and z̄(λᵢ) are the values of the CMFs sampled at the wavelength λᵢ. For the purpose of simplifying the notation, we abbreviate x̄(λᵢ) as x̄ᵢ, ȳ(λᵢ) as ȳᵢ, and z̄(λᵢ) as z̄ᵢ.
1.5 CIE Standard Illuminants
A standard illuminant is another parameter in the CIE color specification. In this book, we use the symbol E(λ) to represent the spectrum of an illuminant, and the vector E is the sampled spectral power distribution E(λ) of an illuminant. In Eq. (1.3), the vector E is represented as a diagonal matrix with a size of n × n,
    ⎡ e(λ₁)   0      0     …    0    ⎤   ⎡ e₁  0   0   …  0  ⎤
    ⎢  0     e(λ₂)   0     …    0    ⎥   ⎢ 0   e₂  0   …  0  ⎥
E = ⎢  0      0     e(λ₃)  …    0    ⎥ = ⎢ 0   0   e₃  …  0  ⎥ . (1.10)
    ⎢  …      …      …     …    …    ⎥   ⎢ …   …   …   …  …  ⎥
    ⎣  0      0      0     …   e(λₙ) ⎦   ⎣ 0   0   0   …  eₙ ⎦

Again, we abbreviate e(λᵢ) as eᵢ.
There are many standard illuminants, such as Sources A, B, and C, and the daylight D illuminants. CIE Source A is a gas-filled tungsten lamp operating at a correlated color temperature of 2856 K. Sources B and C are derived from Source A by combining it with a filter made from chemical solutions in optical cells; different chemical solutions are used for Sources B and C. The CIE standards committee made a distinction between the terms illuminant and source. A source is a physical emitter of light, such as a lamp or the sun. An illuminant refers to a specific spectral power distribution, not necessarily provided directly by a source and not necessarily realizable by a source. Therefore, CIE illuminant A is calculated in accordance with Planck's radiation law [8]:

P_e(λ, T) = c₁λ⁻⁵[exp(c₂/λT) − 1]⁻¹ W·m⁻³, (1.11)

where c₁ = 3.7415 × 10⁻¹⁶ W·m², c₂ = 1.4388 × 10⁻² m·K, and the temperature T is set equal to 2856 K for the purpose of matching Source A. The relative SPD of CIE standard illuminant A has been calculated using Planck's equation at a 1-nm interval between 300 and 830 nm, and the standard recommends using linear interpolation for finer intervals. The CIE D illuminants are mathematical simulations of various phases of natural daylight. These illuminants are based on over 600 SPDs measured at different global locations and under various combinations of irradiation from sun and sky. Judd, MacAdam, and Wyszecki analyzed these measurements of natural daylight and found a simple relationship between the correlated color temperature (the color temperature of a blackbody radiator that has nearly the same color as the light source of interest) of daylight and its relative SPD [11]. Therefore, D illuminants are designated by the color temperature in hundreds of kelvins; for example, D₅₀ is the illuminant at 5003 K. Figure 1.5 depicts the relative SPDs of several standard illuminants, and Fig. 1.6 gives the relative SPDs of two fluorescent illuminants. The SPDs of fluorescent illuminants have many spikes, which lower the computational accuracy if the sampling interval is increased. The most frequently used illuminants are D₅₀, which is selected as the standard viewing illuminant for the graphic arts industry, and D₆₅ (6504 K), which is the preferred illuminant for colorimetry when daylight is of interest.
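Equation (1.11) can be evaluated directly to generate the illuminant-A curve. The sketch below normalizes the SPD to 100 at 560 nm; that normalization wavelength is a common convention and an assumption here, not something stated in the text.

```python
import math

C1 = 3.7415e-16   # W*m^2, as given with Eq. (1.11)
C2 = 1.4388e-2    # m*K

def planck(wavelength_nm, T=2856.0):
    """Spectral radiant exitance of a blackbody, Eq. (1.11), in W*m^-3."""
    lam = wavelength_nm * 1.0e-9          # nm -> m
    return C1 * lam**-5 / (math.exp(C2 / (lam * T)) - 1.0)

# Relative SPD of illuminant A over the visible range, normalized at 560 nm.
ref = planck(560.0)
spd = {nm: 100.0 * planck(nm) / ref for nm in range(380, 781, 10)}
print(spd[460], spd[560], spd[700])
```

The values rise monotonically with wavelength across the visible range, which is why tungsten light looks yellowish-red next to daylight.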
Based on the work of Judd et al. [11], the CIE recommended that a phase of daylight other than the standard D₆₅ be defined by three characteristic functions E₀(λ), E₁(λ), and E₂(λ) as

E(λ) = E₀(λ) + m₁E₁(λ) + m₂E₂(λ), (1.12a)
m₁ = (−1.3515 − 1.7703x_d + 5.9114y_d)/m_d, (1.12b)
m₂ = (0.0300 − 31.4424x_d + 30.0717y_d)/m_d, (1.12c)
m_d = 0.0241 + 0.2562x_d − 0.7341y_d. (1.12d)
The component E₀(λ) is the mean function (or vector); E₁(λ) and E₂(λ) are the characteristic vectors; they are plotted in Fig. 1.7. The multipliers m₁ and m₂ are related
Figure 1.5 Relative spectral power distributions of CIE standard illuminants [3].
Figure 1.6 Relative spectral power distributions of two fluorescent illuminants [8].
Table 1.1 Values of the multipliers for four daylight illuminants.

T_c      x_d      y_d      m₁       m₂
5000     0.3457   0.3587   −1.040    0.367
5500     0.3325   0.3476   −0.786   −0.195
6500     0.3128   0.3292   −0.296   −0.688
7500     0.2991   0.3150    0.144   −0.760
to the chromaticity coordinates x_d and y_d of the illuminant, as given in Eq. (1.12); the values for D₅₀, D₅₅, D₆₅, and D₇₅ are computed and tabulated in Table 1.1.
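Equations (1.12b)–(1.12d) reduce to a few lines of code; applied to the D₆₅ chromaticity coordinates from Table 1.1, they reproduce that row's multipliers to rounding accuracy.

```python
def daylight_multipliers(xd, yd):
    """Multipliers m1, m2 of Eqs. (1.12b)-(1.12d) from daylight chromaticity."""
    md = 0.0241 + 0.2562 * xd - 0.7341 * yd             # Eq. (1.12d)
    m1 = (-1.3515 - 1.7703 * xd + 5.9114 * yd) / md     # Eq. (1.12b)
    m2 = (0.0300 - 31.4424 * xd + 30.0717 * yd) / md    # Eq. (1.12c)
    return m1, m2

# D65 chromaticity coordinates from Table 1.1.
m1, m2 = daylight_multipliers(0.3128, 0.3292)
print(round(m1, 3), round(m2, 3))   # close to the tabulated -0.296 and -0.688

# The SPD then follows from Eq. (1.12a): e0 + m1*e1 + m2*e2 at each
# wavelength, using the characteristic vectors of Table 1.2.
```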
Equation (1.12) can be expressed in matrix-vector form as given in Eq. (1.13):

E = [E₀ E₁ E₂][1 m₁ m₂]ᵀ (1.13a)

or

    ⎡ e₀(λ₁)  e₁(λ₁)  e₂(λ₁) ⎤ ⎡ 1  ⎤   ⎡ e₀₁  e₁₁  e₂₁ ⎤ ⎡ 1  ⎤
    ⎢ e₀(λ₂)  e₁(λ₂)  e₂(λ₂) ⎥ ⎢ m₁ ⎥   ⎢ e₀₂  e₁₂  e₂₂ ⎥ ⎢ m₁ ⎥
E = ⎢ e₀(λ₃)  e₁(λ₃)  e₂(λ₃) ⎥ ⎣ m₂ ⎦ = ⎢ e₀₃  e₁₃  e₂₃ ⎥ ⎣ m₂ ⎦ , (1.13b)
    ⎢   …       …       …    ⎥          ⎢  …    …    …  ⎥
    ⎣ e₀(λₙ)  e₁(λₙ)  e₂(λₙ) ⎦          ⎣ e₀ₙ  e₁ₙ  e₂ₙ ⎦
Figure 1.7 The characteristic vectors of the daylight illuminant [8].
m₁ = (m_d)⁻¹[−1.3515  −1.7703  5.9114][1  x_d  y_d]ᵀ, (1.13c)
m₂ = (m_d)⁻¹[0.0300  −31.4424  30.0717][1  x_d  y_d]ᵀ, (1.13d)
m_d = [0.0241  0.2562  −0.7341][1  x_d  y_d]ᵀ. (1.13e)
The vectors [E₀, E₁, E₂] sampled at a 10-nm interval from 300 to 830 nm are given in Table 1.2 [8]. For smaller sampling intervals, the CIE standard recommends that intermediate values be obtained by linear interpolation. This definition leads to a discrepancy between the computed illuminant D₆₅ and the original measured data. Moreover, the first derivative of the D₆₅ SPD is not a smooth continuous function, which increases the computational uncertainty when using interpolation. To minimize these problems, a new recommendation proposes interpolating the original 10-nm D₆₅ data with a cubic spline function. An alternative and preferred approach is to interpolate the vectors [E₀, E₁, E₂], instead of D₆₅, with the cubic spline, and then calculate the D₆₅ SPD or any other daylight SPD [12].
1.5.1 Standard viewing conditions
The reflection and transmission measurements depend in part on the geometry of illumination and viewing. The CIE has recommended four conditions for opaque reflecting samples. These conditions are referred to as 45/0, 0/45, d/0, and 0/d.
Table 1.2 Values of the three characteristic vectors sampled at a 10-nm interval [7].

λ(nm)   E₀     E₁     E₂   | λ(nm)   E₀     E₁     E₂   | λ(nm)   E₀     E₁     E₂
300     0.04   0.02   0.0  | 500    113.1   16.2  −1.5  | 700     74.3  −13.3   9.6
310     6.0    4.5    2.0  | 510    110.8   13.2  −1.3  | 710     76.4  −12.9   8.5
320     29.6   22.4   4.0  | 520    106.5    8.6  −1.2  | 720     63.3  −10.6   7.0
330     55.3   42.0   8.5  | 530    108.8    6.1  −1.0  | 730     71.7  −11.6   7.6
340     57.3   40.6   7.8  | 540    105.3    4.2  −0.5  | 740     77.0  −12.2   8.0
350     61.8   41.6   6.7  | 550    104.4    1.9  −0.3  | 750     65.2  −10.2   6.7
360     61.5   38.0   5.3  | 560    100.0    0.0   0.0  | 760     47.7   −7.8   5.2
370     68.8   42.4   6.1  | 570     96.0   −1.6   0.2  | 770     68.6  −11.2   7.4
380     63.4   38.5   3.0  | 580     95.1   −3.5   0.5  | 780     65.0  −10.4   6.8
390     65.8   35.0   1.2  | 590     89.1   −3.5   2.1  | 790     66.0  −10.6   7.0
400     94.8   43.4  −1.1  | 600     90.5   −5.8   3.2  | 800     61.0   −9.7   6.4
410    104.8   46.3  −0.5  | 610     90.3   −7.2   4.1  | 810     53.3   −8.3   5.5
420    105.9   43.9  −0.7  | 620     88.4   −8.6   4.7  | 820     58.9   −9.3   6.1
430     96.8   37.1  −1.2  | 630     84.0   −9.5   5.1  | 830     61.9   −9.8   6.5
440    113.9   36.7  −2.6  | 640     85.1  −10.9   6.7  |
450    125.6   35.9  −2.9  | 650     81.9  −10.7   7.3  |
460    125.5   32.6  −2.8  | 660     82.6  −12.0   8.6  |
470    121.3   27.9  −2.6  | 670     84.9  −14.0   9.8  |
480    121.3   24.3  −2.6  | 680     81.3  −13.6  10.2  |
490    113.5   20.1  −1.8  | 690     71.9  −12.0   8.3  |
The 45/0 and 0/45 conditions have a reversed geometry relation. For 45/0 geometry, the sample is illuminated by one or more beams whose axes are at an angle of 45 ± 5 deg from the normal to the sample. Viewing should be normal to the sample surface or within 10 deg of the normal. For 0/45 geometry, the sample is illuminated at the normal position and viewed at 45 deg.
Similarly, d/0 and 0/d have a reversed geometry relation. The designation d/0 is the abbreviation of diffuse/normal, a geometry in which the sample is illuminated with diffused light by an integrating sphere and viewed through a port in the sphere at the normal or near-normal position. For 0/d geometry, the sample is illuminated at the normal position. The reflected flux is collected by an integrating sphere, and the viewing is directed toward an inner wall of the sphere. The integrating sphere may be of any size, provided that the total area of its ports does not exceed 10% of the inner surface area of the sphere, to minimize the loss of reflected light.
1.6 Effect of Illuminant
The associative relationship Ωᵀ = AᵀE in Eq. (1.3a) defines the human visual subspace (HVSS) under an illuminant E, or it can be viewed as a weighted CMF, Ω, called the color object matching function by Worthey [13]. It represents the effect of the illuminant in modifying the human visual space by transforming the color-matching matrix A. It gives a 3 × n matrix containing the elements of the products of the illuminant and CMF. It has three rows because the CMF has only three components, and n columns given by the number of samples in the visible region for the illuminant as well as the CMF.
           ⎡ x̄₁  x̄₂  x̄₃  …  x̄ₙ ⎤                       ⎡ e₁x̄₁  e₂x̄₂  e₃x̄₃  …  eₙx̄ₙ ⎤
Ωᵀ = AᵀE = ⎢ ȳ₁  ȳ₂  ȳ₃  …  ȳₙ ⎥ diag(e₁, e₂, …, eₙ) = ⎢ e₁ȳ₁  e₂ȳ₂  e₃ȳ₃  …  eₙȳₙ ⎥ . (1.14)
           ⎣ z̄₁  z̄₂  z̄₃  …  z̄ₙ ⎦                       ⎣ e₁z̄₁  e₂z̄₂  e₃z̄₃  …  eₙz̄ₙ ⎦
1.7 Stimulus Function
For the associative relationship (Φ = ES) in Eq. (1.3b), we obtain a vector Φ, the color stimulus function, or the color signal received by the eyes. It is the product of the object and illuminant spectra, having n elements:
Φ = ES = diag(e₁, e₂, …, eₙ)[s₁  s₂  s₃  …  sₙ]ᵀ = [e₁s₁  e₂s₂  e₃s₃  …  eₙsₙ]ᵀ. (1.15)
1.8 Perceived Object
The expression (Qᵀ = AᵀS) in Eq. (1.3c) does not contain the illuminant; it is an object spectrum weighted by the color-matching functions, with S viewed as a diagonal matrix:
           ⎡ x̄₁  x̄₂  x̄₃  …  x̄ₙ ⎤                       ⎡ s₁x̄₁  s₂x̄₂  s₃x̄₃  …  sₙx̄ₙ ⎤
Qᵀ = AᵀS = ⎢ ȳ₁  ȳ₂  ȳ₃  …  ȳₙ ⎥ diag(s₁, s₂, …, sₙ) = ⎢ s₁ȳ₁  s₂ȳ₂  s₃ȳ₃  …  sₙȳₙ ⎥ . (1.16)
           ⎣ z̄₁  z̄₂  z̄₃  …  z̄ₙ ⎦                       ⎣ s₁z̄₁  s₂z̄₂  s₃z̄₃  …  sₙz̄ₙ ⎦
1.9 Remarks
Equation (1.3) shows that the tristimulus values are a linear combination of the column vectors in A. The matrix A has three independent columns (the color-matching functions); therefore, it has a rank of three, spanning a 3D color-stimulus space. Any function that is a linear combination of the three color-matching functions lies within the tristimulus space.
References
1. CIE, International Lighting Vocabulary, CIE Publication No. 17.4, Vienna (1987).
2. C. J. Bartleson, Colorimetry, in Optical Radiation Measurements: Color Measurement, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York, Vol. 2, pp. 33–148 (1980).
3. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, p. 118 (1982).
4. CIE, Recommendations on uniform color spaces, color-difference equations and psychometric color terms, Supplement No. 2 to Colorimetry, Publication No. 15, Bureau Central de la CIE, Paris (1978).
5. H. J. Trussell and M. S. Kulkarni, Sampling and processing of color signals, IEEE Trans. Image Proc. 5, pp. 677–681 (1996).
6. H. J. Trussell, A review of sampling effects in the processing of color signals, IS&T and SID's 2nd Color Imaging Conf., pp. 26–29 (1994).
7. F. W. Billmeyer and M. Saltzman, Principles of Color Technology, 2nd Edition, Wiley, New York, Chap. 2 (1981).
8. CIE, Colorimetry, Publication No. 15, Bureau Central de la CIE, Paris (1971).
9. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, p. 395 (1982).
10. M. D. Fairchild, Color Appearance Models, Addison-Wesley Longman, Reading, MA, pp. 79–81 (1997).
11. D. B. Judd, D. L. MacAdam, and G. Wyszecki, Spectral distribution of typical daylight as a function of correlated color temperature, J. Opt. Soc. Am. 54, pp. 1031–1040 (1964).
12. J. Schanda and B. Kranicz, Possible re-definition of the CIE standard daylight illuminant spectral power distribution, Color Res. Appl. 21, pp. 473–475 (1996).
13. J. A. Worthey, Calculation of metameric reflectances, Color Res. Appl. 13, pp. 76–84 (1988).
Chapter 2
Color Principles and Properties
The vector-space representation opens the door to the well-developed mathematical fields of linear algebra and matrix theory. Cohen, Horn, and Trussell, among others, have elegantly put forth the fundamental properties of the vector-space color representation. They showed the existence of the color match, color additivity, the identity property, the transformation of primaries, the equivalent primaries, the metameric relationship, the effect of illumination, and many imaging constraints [1–5]. The ability to represent the spectral sensitivity of human vision and related visual phenomena in matrix-vector form provides the foundation for a major branch of computational color technology, namely, the study of the phenomenon of surface reflection. This approach deals with signals reflected from an object surface under a given illumination into the human visual pathway; it has no interest in the physical and chemical interactions within the object and substrate. On the other hand, the physical interactions of light with objects and substrates form the basis for another major branch of computational color technology, namely, the physical color-mixing models that are used primarily in the printing industry.
With the vector-space representation and matrix theory, in this chapter we lay the groundwork for these computational approaches by revisiting Grassmann's law of color mixing and reexamining color matching as well as other properties.
2.1 Visual Sensitivity and Color-Matching Functions
Visual spectral sensitivity of the eye, measured as the spectral absorption characteristics of the human cones, is given in Fig. 2.1 [6, 7]. The sampled visual spectral sensitivity is represented by a matrix of three vectors V = [V₁, V₂, V₃], one for each cone type, where Vᵢ is a vector of n elements. Compared to the CMF curves of Fig. 1.2, they differ in several ways: first, the visual spectral sensitivity has no negative elements; second, the overlap of the green (middle-wavelength) and red (long-wavelength) components is much stronger.
The sensor responses to the object spectrum Φ(λ) can be represented by

Ψ = VᵀΦ, (2.1)

where Ψ denotes the vector of the three cone responses.
Figure 2.1 Relative spectral absorptances of human cone pigments measured by microspectrophotometry [7]. (Reprinted with permission of John Wiley & Sons, Inc.)
The existence of a color match can be understood by an imaginary visual experiment. Conceptually, we can imagine a set of ideal monochromatic spectra δᵢ(λ) for i = 1, 2, 3, …, n. Each δᵢ is viewed as a vector of n elements with a value of 1, the full intensity of light, at the ith wavelength and zero elsewhere (a truncated delta function). Now, we perform the color-matching experiment to match this set of ideal spectra by the linear combination of three primary spectra P(λ) = [r(λ), g(λ), b(λ)]. If the sampled primary spectra have the same interval Δλ as the ideal spectra, we get an n × 3 matrix P = [r, g, b], where the vectors r, g, and b are linearly independent, giving a rank of three to the matrix P. As pointed out in the color-matching experiment, spectral colors are physically unattainable; therefore, one of the primaries is moved to the test side to lower the purity. If the vector of relative intensities of the primaries that matches one ideal spectrum δᵢ is aᵢ = [a₁ᵢ, a₂ᵢ, a₃ᵢ]ᵀ, then we have

Vᵀδᵢ = VᵀPaᵢ. (2.2)
For matching all ideal monochromatic spectra, Eq. (2.2) becomes

i = 1:  Vᵀ[1 0 0 0 … 0]ᵀ = Vᵀ[r g b][a₁₁ a₂₁ a₃₁]ᵀ,
i = 2:  Vᵀ[0 1 0 0 … 0]ᵀ = Vᵀ[r g b][a₁₂ a₂₂ a₃₂]ᵀ,
i = 3:  Vᵀ[0 0 1 0 … 0]ᵀ = Vᵀ[r g b][a₁₃ a₂₃ a₃₃]ᵀ,
…
i = n:  Vᵀ[0 0 0 0 … 1]ᵀ = Vᵀ[r g b][a₁ₙ a₂ₙ a₃ₙ]ᵀ. (2.3)
By combining all components in Eq. (2.3), we obtain

VᵀI = VᵀPAᵀ. (2.4)

The combination of the ideal monochromatic spectra δᵢ gives the identity matrix I with a size of n × n, and the combination of the relative intensities of the primaries gives the color-matching matrix A with a size of n × 3; thus we have

Vᵀ = (VᵀP)Aᵀ. (2.5)

Equation (2.5) indicates that the matrix V is a linear transformation of the color-matching matrix A. The conversion matrix (VᵀP) has a size of 3 × 3 because Vᵀ is 3 × n and P is n × 3, and the product (VᵀP) is nonsingular because both V and P have a rank of three; therefore, it can be inverted. As a result, the human visual space can be defined as any nonsingular transformation of the vectors in V:

Aᵀ = (VᵀP)⁻¹Vᵀ (2.6)

or

A = V(PᵀV)⁻¹. (2.7)

Equation (2.7) shows that the color-matching matrix A is determined solely by the primaries and the human visual sensitivity. This result is well known; the beauty is the conciseness of the proof [5]. Another way to describe Eq. (2.6) or Eq. (2.5) is that the color-matching function is a linear transform of the spectral visual sensitivity, or vice versa. Indeed, Baylor and colleagues have shown the property of Eq. (2.5) experimentally by finding a linear transformation that converts their cone photopigment measurements to the color-matching functions after a correction for absorption by the lens and other inert pigments in the eye [8, 9]. The linearly transformed pigment data match well with the color-matching functions.
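The derivation of Eqs. (2.5)–(2.7) can be checked numerically. The sketch below uses made-up n × 3 matrices for the cone sensitivities V and the primaries P (n = 4), inverts the 3 × 3 product VᵀP, and verifies that the recovered A satisfies Eq. (2.5).

```python
def matmul(X, Y):
    """Plain matrix product of nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(col) for col in zip(*X)]

def inv3(M):
    """Closed-form inverse of a nonsingular 3x3 matrix via the adjugate."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

# Made-up sampled cone sensitivities V and primary spectra P (n = 4).
V = [[0.2, 0.5, 0.9], [0.6, 0.9, 0.4], [1.0, 0.6, 0.1], [0.7, 0.2, 0.0]]
P = [[0.0, 0.1, 1.0], [0.2, 0.9, 0.3], [0.9, 0.5, 0.0], [1.0, 0.1, 0.0]]

VtP = matmul(transpose(V), P)           # 3 x 3 and nonsingular here
A_T = matmul(inv3(VtP), transpose(V))   # Eq. (2.6): A^T = (V^T P)^-1 V^T

# Check Eq. (2.5): V^T = (V^T P) A^T.
err = max(abs(matmul(VtP, A_T)[i][j] - transpose(V)[i][j])
          for i in range(3) for j in range(4))
print(err)   # rounding-error level
```

The same data also satisfy the identity property AᵀP = I of Eq. (2.8), since AᵀP = (VᵀP)⁻¹(VᵀP).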
2.2 Identity Property
For a color-matching matrix A derived from a set of primaries P(λ) = [r(λ), g(λ), b(λ)], the transformation of the primaries by the matrix A is an identity matrix I [4]:

AᵀP = I. (2.8)

The proof of the identity property is attained by rearranging Eq. (2.5) and multiplying both sides of the equation by P, which gives

(VᵀP)AᵀP = VᵀP. (2.9)

As given in Eq. (2.6), the product (VᵀP) is nonsingular and can be inverted. Therefore, we multiply both sides of Eq. (2.9) by (VᵀP)⁻¹:

(VᵀP)⁻¹(VᵀP)AᵀP = (VᵀP)⁻¹(VᵀP). (2.10)

Note that

(VᵀP)⁻¹(VᵀP) = I. (2.11)

Substituting Eq. (2.11) into Eq. (2.10) gives the identity property of Eq. (2.8).
2.3 Color Match
Any visible spectrum S can be matched by a set of primaries P, having a unique three-vector a_A that controls the intensities of the primaries, to produce a spectrum that appears the same to the human observer. From Eq. (2.2), we have

VᵀS = VᵀPa_A. (2.12)

The existence of a color match is obtained simply by inverting Eq. (2.12),

a_A = (VᵀP)⁻¹VᵀS, (2.13)

because (VᵀP) is nonsingular. The uniqueness is shown by assuming that two intensity vectors a_A and a_B both match the spectrum S; then

VᵀP(a_A − a_B) = VᵀPa_A − VᵀPa_B. (2.14)

By substituting Eq. (2.13) for a_A and a_B, we obtain

VᵀP(VᵀP)⁻¹VᵀS − VᵀP(VᵀP)⁻¹VᵀS = VᵀS − VᵀS = 0. (2.15)

This means a_A must be equal to a_B.
2.4 Transitivity Law
Because the color-matching matrix A is a linear transform of the human visual sensitivity, we can represent the spectral sensitivity of the eye by the matrix A, and the response of the sensors to a color stimulus gives a tristimulus vector [5]:

AᵀΦ = Λ. (2.16)

This transform reduces the dimension from n to 3 and causes a large loss of information, suggesting that many different spectra may give the same color appearance to the observer. In this case, two stimuli Φ_A(λ) and Φ_B(λ) are said to be a metameric match if they appear the same to the human observer. In the vector-space representation, we have

AᵀΦ_A = AᵀΦ_B = Λ. (2.17)

Now, if a third stimulus Φ_C(λ) matches the second stimulus Φ_B(λ), then AᵀΦ_B = AᵀΦ_C = Λ. It follows that AᵀΦ_A = AᵀΦ_C = Λ. This proves that the transitivity law holds.
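A metameric pair as in Eq. (2.17) is easy to construct explicitly: add to a stimulus Φ_A any spectrum lying in the null space of Aᵀ (often called a metameric black). The 3 × 4 matrix Aᵀ below is made up for illustration; real CMF matrices have n ≫ 3 columns and therefore a large null space.

```python
# Made-up color-matching matrix A^T (3 x 4), for illustration only.
A_T = [[1.0, 0.0, 0.0, 1.0],
       [0.0, 1.0, 0.0, 1.0],
       [0.0, 0.0, 1.0, 1.0]]

def tristimulus(phi):
    """Eq. (2.16): Lambda = A^T Phi."""
    return [sum(a * p for a, p in zip(row, phi)) for row in A_T]

phi_a = [0.4, 0.5, 0.6, 0.3]        # some stimulus
black = [0.1, 0.1, 0.1, -0.1]       # A^T * black = 0: a metameric black
phi_b = [p + q for p, q in zip(phi_a, black)]

print(tristimulus(black))           # the black maps to zero response
diff = max(abs(u - v) for u, v in zip(tristimulus(phi_a), tristimulus(phi_b)))
print(phi_a != phi_b and diff < 1e-12)   # different spectra, same tristimulus
```

Sums and scalings of such pairs remain metameric, which is exactly what the proportionality and additivity laws below assert.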
2.5 Proportionality Law
If Φ_A matches Φ_B, then αΦ_A matches αΦ_B, where α is a scalar:

Aᵀ(αΦ_A) = α(AᵀΦ_A) = αΛ = α(AᵀΦ_B) = Aᵀ(αΦ_B). (2.18)

Thus, αΦ_A must match αΦ_B, and the proportionality law holds.
2.6 Additivity Law
For two stimuli Φ_A and Φ_B, we have

AᵀΦ_A = Λ_A and AᵀΦ_B = Λ_B. (2.19)

If we add them together, we have

AᵀΦ_A + AᵀΦ_B = Λ_A + Λ_B, (2.20a)
Aᵀ(Φ_A + Φ_B) = Λ_A + Λ_B. (2.20b)

Moreover, if Φ_A matches Φ_B, Φ_C matches Φ_D, and (Φ_A + Φ_C) matches (Φ_B + Φ_D), then (Φ_A + Φ_D) matches (Φ_B + Φ_C):

AᵀΦ_A = Λ_A = Λ_B = AᵀΦ_B and AᵀΦ_C = Λ_C = Λ_D = AᵀΦ_D, (2.21)
Aᵀ(Φ_A + Φ_C) = Λ_A + Λ_C = Λ_B + Λ_D = Aᵀ(Φ_B + Φ_D), (2.22)
Aᵀ(Φ_A + Φ_D) = AᵀΦ_A + AᵀΦ_D = Λ_A + Λ_D = Λ_B + Λ_C = AᵀΦ_B + AᵀΦ_C = Aᵀ(Φ_B + Φ_C). (2.23)

This shows that the additivity law holds.
2.7 Dependence of Color-Matching Functions on Choice of Primaries
Two different spectra Φ_A and Φ_B give the same appearance if and only if AᵀΦ_A = AᵀΦ_B:

VᵀΦ_A = VᵀΦ_B iff AᵀΦ_A = AᵀΦ_B. (2.24)

Applying Eq. (2.6), Aᵀ = (VᵀP)⁻¹Vᵀ, to the constraint AᵀΦ_A = AᵀΦ_B, we have

(VᵀP)⁻¹VᵀΦ_A = (VᵀP)⁻¹VᵀΦ_B. (2.25)

Multiplying both sides of the equation by (VᵀP), Eq. (2.25) becomes

(VᵀP)(VᵀP)⁻¹VᵀΦ_A = (VᵀP)(VᵀP)⁻¹VᵀΦ_B. (2.26)

Applying the identity relationship of Eq. (2.11), we prove that Eq. (2.24) is valid if and only if the constraint is true.
2.8 Transformation of Primaries
If a different set of primaries Pⱼ is used in the color-matching experiment, we will obtain a different set of color-matching functions Aⱼ but the same color match. Applying Eq. (2.5) to any input spectrum Φ, we obtain

(VᵀP)AᵀΦ = VᵀΦ = (VᵀPⱼ)AⱼᵀΦ. (2.27)

This gives the relationship

(VᵀP)Aᵀ = (VᵀPⱼ)Aⱼᵀ. (2.28)

Multiplying both sides by (VᵀP)⁻¹, we have

(VᵀP)⁻¹(VᵀP)Aᵀ = (VᵀP)⁻¹(VᵀPⱼ)Aⱼᵀ. (2.29)

Utilizing the identity relationship on the left-hand side of Eq. (2.29), and substituting Eq. (2.6) into the right-hand side of Eq. (2.29), we obtain

Aᵀ = (VᵀP)⁻¹VᵀPⱼAⱼᵀ = (AᵀPⱼ)Aⱼᵀ. (2.30)

The product (AᵀPⱼ) is nonsingular with a size of 3 × 3 because Aᵀ is a 3 × n matrix and Pⱼ is an n × 3 matrix. By inverting (AᵀPⱼ), we obtain

Aⱼᵀ = (AᵀPⱼ)⁻¹Aᵀ. (2.31)

Equation (2.31) shows that the color-matching functions for two sets of primaries are related by a 3 × 3 linear transformation, just as the color-matching functions are related to the visual spectral sensitivity by a 3 × 3 linear transformation.
2.9 Invariant of Matrix A (Transformation of Tristimulus Vectors)
If Λᵢ and Λⱼ are two tristimulus vectors obtained from an input spectrum S under two different sets of primaries Pᵢ and Pⱼ, respectively, we have Λᵢ = AᵢᵀS and Λⱼ = AⱼᵀS. Applying Eq. (2.31) to Aⱼᵀ, we have

Λⱼ = AⱼᵀS = (AᵢᵀPⱼ)⁻¹AᵢᵀS = (AᵢᵀPⱼ)⁻¹Λᵢ. (2.32)

Again, the equation shows that two tristimulus vectors of a spectrum under two different sets of primaries are related by a 3 × 3 linear transformation.
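Equation (2.32) says a single 3 × 3 matrix carries tristimulus vectors from one primary set to another. The sketch below builds Aⱼᵀ from Eq. (2.31) using made-up matrices Aᵢᵀ (3 × 4) and Pⱼ (4 × 3), then confirms that (AᵢᵀPⱼ)⁻¹Λᵢ reproduces Λⱼ for an arbitrary spectrum.

```python
def inv3(M):
    """Adjugate-based inverse of a nonsingular 3x3 matrix."""
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    det = a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[x / det for x in row] for row in adj]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Made-up data: CMF matrix A_i^T, a second primary set P_j, and a spectrum S.
Ai_T = [[0.9, 0.3, 0.1, 0.0],
        [0.2, 0.8, 0.7, 0.1],
        [0.0, 0.1, 0.5, 0.9]]
Pj = [[1.0, 0.1, 0.0], [0.4, 0.9, 0.1], [0.1, 0.5, 0.6], [0.0, 0.1, 1.0]]
S = [0.3, 0.6, 0.8, 0.5]

M = inv3(matmul(Ai_T, Pj))     # the 3 x 3 transform (A_i^T P_j)^-1
Aj_T = matmul(M, Ai_T)         # Eq. (2.31)
lam_i = matvec(Ai_T, S)        # tristimulus under primaries i
lam_j = matvec(Aj_T, S)        # tristimulus under primaries j
gap = max(abs(u - v) for u, v in zip(lam_j, matvec(M, lam_i)))
print(gap)   # rounding-error level, confirming Eq. (2.32)
```

This is the structure behind practical 3 × 3 conversions between colorimetric spaces built on different primaries.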
2.10 Constraints on the Image Reproduction
Extending the theoretical proofs for visual sensitivity and color matching discussed
in this chapter, Horn has proposed a series of constraints on the image sensors, light
sources, and image generators for accurate image capture, archiving, and reproduction. These constraints are presented as theorems, corollaries, or lemmas in the original paper [4]. They provide guidance for digital color transforms, white-point conversion, and cross-media color reproduction.
Constraint 1. The spectral response curves of the image sensors must be linear
transforms of the spectral response curves of the human visual system.
Constraint 1.1. Metameric spectral distributions produce identical outputs in the
camera.
In practice, Constraints 1 and 1.1 are difficult to meet, if not impossible; the next best thing is to design sensor response curves as close as possible to a linear transform of the human visual responses. The nearest approximation can be obtained in the least-squares sense.
Constraint 2. The mapping of image sensor outputs to image generator inputs can be achieved by means of a linear 3×3 matrix transform.
Constraint 2.1. The spectral response curves of the image sensors must be linearly independent, and the spectral distributions of the light must also be linearly independent.
Constraint 2.2. The light-source spectral distributions need not be linear transforms
of the spectral response curves of the human visual system.
Constraint 2.3. In determining the transfer matrix, we can use matrices based on the standard observer curves instead of matrices based on the actual spectral response curves of the human visual system. This is a natural consequence of Eq. (2.5): the standard observer curves are one linear transform away from the visual sensitivity curves.
Constraint 3. The gain factors used to adjust for chromatic adaptation should be
introduced in the linear system after the recovery of the observer stimulation levels
and before computation of the image generator control inputs.
Constraint 3.1. Applying the gain factors to image sensor outputs or image gener-
ator inputs will not, in general, result in correct compensation for adaptation.
Constraint 4. The set of observer stimulation levels that can be produced using
nonnegative light-source levels forms a convex subset of the space of all possible
stimulation levels.
Constraint 4.1. The subset of stimulation levels possible with arbitrary nonnega-
tive spectral distribution is, itself, convex and bounded by the stimulation levels
produced by monochromatic light sources.
Constraint 5. The problem of the determination of suitable image generator control
inputs when there are more than three light sources can be posed as a problem
in linear programming. For the given stimulation levels of the receptors of the
observer, only three of the light sources need be used at a time.
Constraint 6. For the reproduction of reproductions, the spectral response curves
of the image sensors need not be linear transforms of the spectral response curves
of the human visual system.
References
1. J. B. Cohen and W. E. Kappauf, Metameric color stimuli, fundamental metamers, and Wyszecki's metameric blacks, Am. J. Psychology 95, pp. 537-564 (1982).
2. J. B. Cohen and W. E. Kappauf, Color mixture and fundamental metamers: Theory, algebra, geometry, application, Am. J. Psychology 98, pp. 171-259 (1985).
3. J. B. Cohen, Color and color mixture: Scalar and vector fundamentals, Color Res. Appl. 13, pp. 5-39 (1988).
4. B. K. P. Horn, Exact reproduction of colored images, Comput. Vision Graph. Image Proc. 26, pp. 135-167 (1984).
5. H. J. Trussell, Applications of set theoretic methods to color systems, Color Res. Appl. 16, pp. 31-41 (1991).
6. J. K. Bowmaker and H. J. A. Dartnall, Visual pigments of rods and cones in a human retina, J. Physiol. 298, pp. 501-511 (1980).
7. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, pp. 623-624 (1982). Original data from J. K. Bowmaker and H. J. A. Dartnall, Visual pigments of rods and cones in a human retina, J. Physiol. 298, pp. 501-511 (1980).
8. D. A. Baylor, B. J. Nunn, and J. L. Schnapf, Spectral sensitivity of cones of the monkey Macaca fascicularis, J. Physiol. 390, pp. 145-160 (1987).
9. B. A. Wandell, Foundations of Vision, Sinauer Assoc., Sunderland, MA, p. 96 (1995).
Chapter 3
Metamerism
Grassmann's second law indicates that two lights or two stimuli may match in color appearance even though their spectral radiance (or power) distributions differ. This condition is referred to as metamerism, and the stimuli involved are called metamers. More precisely, metamers are color stimuli that have the same color appearance in hue, saturation, and brightness under a given illumination, but a different spectral composition. Metamers and color mixing are at the heart of CIE colorimetry. The CMFs that specify the human color stimulus derive from metameric color matching, by the fact that many colors are matched by additive mixtures of three properly selected primaries. This makes possible the tristimulus specifications of colors such as the one given in Eq. (1.1). Note that metamers of spectral colors are physically unattainable because spectral colors possess the highest intensity. To get around this problem, a primary is added to the reference side to lower the intensity so that the trial side can match. This is the reason that there are negative values in the original CMFs, before the transformation that makes all values positive (see Section 1.4).
This chapter presents the types of metameric matching, the vector-space representation of metamerism, and Cohen's method of decomposing an object spectrum into a fundamental color-stimulus function and a metameric black, often referred to as the spectral decomposition theory or matrix R theory. In addition, several other metameric indices are also reported.
3.1 Types of Metameric Matching
A general description of the metameric match for a set of mutual metamers with stimulus functions Φ_1(λ), Φ_2(λ), Φ_3(λ), ..., Φ_m(λ) can be expressed as follows:[1]

∫ Φ_1(λ) x̄(λ) dλ = ∫ Φ_2(λ) x̄(λ) dλ = ··· = ∫ Φ_m(λ) x̄(λ) dλ = X, (3.1a)

∫ Φ_1(λ) ȳ(λ) dλ = ∫ Φ_2(λ) ȳ(λ) dλ = ··· = ∫ Φ_m(λ) ȳ(λ) dλ = Y, (3.1b)

∫ Φ_1(λ) z̄(λ) dλ = ∫ Φ_2(λ) z̄(λ) dλ = ··· = ∫ Φ_m(λ) z̄(λ) dλ = Z. (3.1c)

For convenience of formulation, we drop the constant k as compared to Eq. (1.1), where the constant k is factored into the function Φ.
3.1.1 Metameric illuminants
The stimulus functions may differ in many ways. For example, they may represent different illuminants:[1]

Φ_1(λ) = E_1(λ), Φ_2(λ) = E_2(λ), ..., Φ_m(λ) = E_m(λ). (3.2)

This happens in a situation where two lights have the same tristimulus values but one is a full radiator with a smooth and continuous spectral distribution (e.g., tungsten lamps) and the other has a highly selective narrowband emissivity distribution (e.g., fluorescent lamps). When two light sources are metameric, they will appear to be of the same color when the observer looks directly at them. However, when two metameric sources are used to illuminate a spectrally selective object, the object will not necessarily appear the same.
3.1.2 Metameric object spectra
Another possibility is that they may represent different objects illuminated by the same illuminant, such as different substrates under the same light source:

Φ_1(λ) = E(λ)S_1(λ), Φ_2(λ) = E(λ)S_2(λ), ..., Φ_m(λ) = E(λ)S_m(λ). (3.3)

In this case, metamers with different spectral power distributions give the same colorimetric measure, but their appearance will differ if a different illuminant is used.[1]
3.1.3 Metameric stimulus functions
The most complex case is where the stimulus functions represent different objects illuminated by different illuminants, as given in Eq. (3.4):[1]

Φ_1(λ) = E_1(λ)S_1(λ), Φ_2(λ) = E_2(λ)S_2(λ), ..., Φ_m(λ) = E_m(λ)S_m(λ). (3.4)

Typically, a metameric match is specific to one observer or one illuminant. When either the illuminant or the observer is changed, it is most common to find that the metameric match breaks down. There are some instances in which the metameric match may hold for a second illuminant. Usually, this will be true if peak reflectance values of the two samples are equal at three or more wavelengths. Such samples will tend to be metameric under one light source, and if the wavelength locations of the intersections are appropriate, they may continue to provide a metameric match for a second illuminant.

In all cases, the resulting color sensation given by Eq. (3.1) is a set of tristimulus values. As stated by Cohen, the principal purpose of color science is the establishment of lawful and orderly relationships between a color stimulus and the evoked color sensation. A color stimulus is radiation within the visible spectrum, described invariably by a radiometric function depicting irradiance as a function of wavelength, whereas the evoked color sensation is subjective and is described by color terms such as red or blue.[2] Color stimulus is the cause, and color sensation is the effect. Color science is built on psychophysics, where the color match represented by the tristimulus values as coefficients balances psychologically.
3.2 Matrix R Theory
As early as 1953, Wyszecki pointed out that the spectral power distribution of a stimulus consists of a fundamental color-stimulus function Φ*(λ) (or simply fundamental), intrinsically associated with the tristimulus values, and a metameric black function β(λ) (or simply residue), unique to each metamer, with tristimulus values of (0, 0, 0) and thus no contribution to the color specification.[3,4] He further noted that the metameric black function is orthogonal to the space of the CMFs. Utilizing these concepts, Cohen and Kappauf developed the method of the orthogonal projector to decompose visual stimuli into these two components, where the fundamental metamer is a linear combination of the CMFs of matrix A, and the metameric black is the difference between the stimulus and the fundamental metamer.[2,5,6] This method was named the matrix R theory and has been thoroughly examined by Cohen, who added several definitions of transformation matrices to the theory.[7]
Recall that the color-matching matrix A is an n×3 matrix defined as

A = | x̄_1  ȳ_1  z̄_1 |
    | x̄_2  ȳ_2  z̄_2 |
    | x̄_3  ȳ_3  z̄_3 |
    |  .    .    .   |
    | x̄_n  ȳ_n  z̄_n | , (3.5)

where n is the number of samples taken in the visible spectrum. A transformation matrix M_a is defined as the CMF matrix A right-multiplied with its transpose.[7] The resulting matrix M_a is symmetric, with a size of 3×3:

M_a = A^T A = | Σ x̄_i²      Σ x̄_i ȳ_i   Σ x̄_i z̄_i |
              | Σ x̄_i ȳ_i   Σ ȳ_i²      Σ ȳ_i z̄_i |
              | Σ x̄_i z̄_i   Σ ȳ_i z̄_i   Σ z̄_i²    | . (3.6)

The summation in Eq. (3.6) runs from i = 1 to i = n. The inverse of the matrix M_a is called M_e:

M_e = M_a^{-1}. (3.7)

Table 3.1 gives the values of M_a and M_e for both the CIE 1931 and 1964 standard observers, using the color-matching functions over the range from 390 to 710 nm at 10-nm intervals.
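The construction of M_a and M_e is a two-line computation. The sketch below uses a small made-up CMF matrix A rather than the real 33-sample CIE data; substituting the actual color-matching functions would reproduce the values of Table 3.1:

```python
import numpy as np

# Toy stand-in for the CMF matrix A (n x 3); illustrative values only.
A = np.array([[0.2, 0.0, 1.1],
              [1.0, 0.6, 0.2],
              [0.3, 1.0, 0.0],
              [0.1, 0.2, 0.0]])

M_a = A.T @ A              # Eq. (3.6): 3x3 and symmetric
M_e = np.linalg.inv(M_a)   # Eq. (3.7)

assert np.allclose(M_a, M_a.T)            # symmetry
assert np.allclose(M_a @ M_e, np.eye(3))  # M_e inverts M_a
```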
For a set of mutual metamers Φ_1(λ), Φ_2(λ), Φ_3(λ), ..., Φ_m(λ) that are sampled at the same rate as the color-matching function A, we have a set of vectors Φ_1, Φ_2, Φ_3, ..., Φ_m, where each Φ_i is a vector of n elements. Using the vector-space representation, we can express Eq. (3.1) in matrix-vector notation, regardless of the content of the stimulus functions:

A^T Φ_1 = A^T Φ_2 = ··· = A^T Φ_i = ··· = A^T Φ_m = Γ. (3.8)

Vector Γ represents the tristimulus values (X, Y, Z). Next, a matrix M_f is defined as matrix A right-multiplied by M_e:

M_f = A M_e = A M_a^{-1} = A (A^T A)^{-1}. (3.9)

Matrix M_f has a size of n×3 because A is n×3 and M_e is 3×3. By multiplying each term in Eq. (3.8) with M_f, we have

M_f A^T Φ_1 = ··· = M_f A^T Φ_i = ··· = M_f A^T Φ_m = M_f Γ = A (A^T A)^{-1} Γ = Φ*, (3.10)

where Φ* is the sampled fundamental function Φ*(λ). It is a vector of n elements because M_f is an n×3 matrix and Γ is a 3×1 vector. Cohen defined matrix R as the orthogonal projector projecting Φ_i into the 3D color-stimulus space:

R = A (A^T A)^{-1} A^T = A M_a^{-1} A^T = A M_e A^T = M_f A^T. (3.11)
Table 3.1 Values of M_a and M_e for the CIE 1931 and 1964 standard observers.

          CIE 1931 standard observer        CIE 1964 standard observer
M_a        7.19376   5.66609   2.55643       8.38032   6.60554   3.09762
           5.66609   7.72116   0.84986       6.60554   8.34671   1.44810
           2.55643   0.84986  14.00625       3.09762   1.44810  16.98368
M_e        0.36137  -0.25966  -0.05020       0.34187  -0.26363  -0.03987
          -0.25966   0.31696   0.02816      -0.26363   0.32491   0.02038
          -0.05020   0.02816   0.07885      -0.03987   0.02038   0.06441
As shown in Eq. (3.11), matrix R is the matrix M_f left-multiplied into matrix A^T. Matrix R has a size of n×n because M_f is n×3 and A^T is 3×n, but it has a rank of three because it is derived solely from matrix A, which has three independent columns. Matrix R is also symmetric. It decomposes the spectrum of the stimulus into two components, namely the fundamental Φ* and the metameric black β, as given in Eqs. (3.12) and (3.13), respectively:

Φ* = R Φ_i, (3.12)

and

β = Φ_i - Φ* = Φ_i - R Φ_i = (I - R) Φ_i. (3.13)

The vector β is the sampled metameric black function β(λ), and I is the identity matrix. The metameric black has tristimulus values of zero:

A^T β = [0, 0, 0]^T. (3.14)

Equations (3.12) and (3.13) show that any group of mutual metamers has a common fundamental Φ* but different metameric blacks β. Inversely, the stimulus spectrum can be expressed as

Φ_i = Φ* + β = R Φ_i + (I - R) Φ_i. (3.15)

Equation (3.15) states that the stimulus spectrum can be recovered if the fundamental metamer and metameric black are known.[1,8]
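The decomposition of Eqs. (3.11)-(3.15) can be exercised directly. In the sketch below, the CMF matrix A is random stand-in data (an assumption for illustration; any n×3 matrix of rank three behaves the same way):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 31
A = rng.random((n, 3))                    # stand-in CMF matrix (n x 3)
R = A @ np.linalg.inv(A.T @ A) @ A.T      # Eq. (3.11), orthogonal projector

phi = rng.random(n)                       # a color stimulus spectrum
fund = R @ phi                            # fundamental, Eq. (3.12)
black = (np.eye(n) - R) @ phi             # metameric black, Eq. (3.13)

assert np.allclose(phi, fund + black)           # Eq. (3.15): recovery
assert np.allclose(A.T @ black, 0)              # Eq. (3.14): zero tristimulus
assert np.allclose(A.T @ fund, A.T @ phi)       # fundamental is a metamer of phi
assert np.allclose(R @ R, R)                    # idempotent
assert np.allclose(R, R.T)                      # symmetric
```

The assertions mirror Properties 2 and 3 below: the fundamental carries all of the tristimulus information, and the metameric black carries none.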
3.3 Properties of Matrix R
There are several interesting properties of the matrix R:
(1) The fundamental metamer Φ* depends only on matrix A and the common
    tristimulus values.
(2) The fundamental metamer Φ* is a metamer of Φ_i; they have the same tristimulus values:

    A^T Φ* = A^T (R Φ_i) = A^T A (A^T A)^{-1} A^T Φ_i = I A^T Φ_i = A^T Φ_i = Γ. (3.16)
(3) The matrix R is symmetric and idempotent; therefore,

    R(R Φ_i) = R Φ_i = Φ*. (3.17)

    When matrix R operates on a fundamental color stimulus, it simply preserves the fundamental. In other words, Φ* has no metameric black component.
(4) The fundamental metamer can be obtained from the tristimulus values, Φ* = M_f Γ [see Eq. (3.10)], without knowing the spectral distribution of the stimulus, because M_f is known for a given CMF.
(5) The fundamental metamer Φ* given in Eq. (3.12) is a linear combination of the CMFs that constitute matrix A, because matrix R depends only on matrix A, which has three independent components. Therefore, matrix R has a rank of three, meaning that we can reconstruct every entry in the entire matrix by knowing any three rows or columns of the matrix.
(6) The fundamental metamer Φ* is independent of the primaries selected for obtaining the data in matrix A. Thus, it is invariant under any transformation of A to a new A_p that is based on a different set of primaries. Using the invariant property of matrix A, that two different sets of primaries are related by a linear transformation (see Section 2.9), we can set

    A_p = A M, (3.18)

    where M is a nonsingular 3×3 transformation matrix. From the definition of the matrix R [see Eq. (3.11)], we have

    R_p = A_p (A_p^T A_p)^{-1} A_p^T. (3.19)

    Substituting Eq. (3.18) into Eq. (3.19), we get

    R_p = A M [(A M)^T A M]^{-1} (A M)^T = A M (M^T A^T A M)^{-1} M^T A^T. (3.20)

    Since

    (M^T A^T A M)^{-1} = M^{-1} (A^T A)^{-1} (M^T)^{-1}, (3.21)

    we thus have

    R_p = A M M^{-1} (A^T A)^{-1} (M^T)^{-1} M^T A^T = A I (A^T A)^{-1} I A^T = A (A^T A)^{-1} A^T = R. (3.22)

    This proves that R is invariant under a transformation of primaries; hence, Φ* is invariant. This property means that matrix R may be computed from any CMF matrix A or any linear transformation of matrix A (moreover, it may also be computed from any triplet of fundamental stimuli).[2] For example, we may start with the CIE 1931 r̄, ḡ, b̄ or 1931 x̄, ȳ, z̄ color-matching functions and end with the same matrix R, even though they have very different M_a, M_e, and M_f matrices (see Table 3.1 for comparisons). Figure 3.1 gives a 3D plot of matrix R. There are three peaks in the diagram: the first peak is at approximately 443-447 nm, the second is at approximately 531-535 nm, and the third is around 595-605 nm; they coincide with the peaks of the CIE r̄, ḡ, b̄ color-matching functions for both the 1931 and 1964 standard observers.[9] These peak wavelengths have appeared frequently in the color-science literature, such as in theoretical considerations of the color-processing mechanism,[10,11] photometric relationships between complementary color stimuli,[12] modes of human cone sensitivity functions,[13] color film sensitivities,[14-18] and invariant hues.[19]

Figure 3.1 Three-dimensional plot of matrix R.[2]
(7) If Φ is a unit monochromatic radiation of wavelength λ_i, then RΦ or Φ* becomes a simple copy of the ith column of R. This indicates that each column or row of the matrix R (R is symmetric) is the fundamental metamer of a unit monochromatic stimulus. Therefore, Φ* is a weighted sum of the fundamental metamers of all the monochromatic stimuli that constitute Φ. Moreover, the unweighted sum of the columns or rows of matrix R gives the fundamental of an equal-energy color stimulus, as shown in Fig. 3.2.

Figure 3.2 Fundamental spectrum of an equal-energy color stimulus.[6]

(8) Given two color stimuli Φ_1 and Φ_2, each has its own fundamental, Φ*_1 and Φ*_2. The combination of the two fundamentals (Φ*_1 + Φ*_2) is the fundamental of the combined stimuli (Φ_1 + Φ_2). This result is the basis of Grassmann's additivity law.
(9) When two color stimuli Φ_1 and Φ_2 are metamers, they have the same fundamental Φ* but different metameric blacks β_1 and β_2. The difference between the spectra of two metamers is the difference between their metameric blacks, which is still a metameric black:

    Φ_1 - Φ_2 = (Φ* + β_1) - (Φ* + β_2) = β_1 - β_2. (3.23a)

    Let

    β = β_1 - β_2. (3.23b)

    Applying Eq. (3.14) to Eq. (3.23b), we obtain

    A^T β = A^T (β_1 - β_2) = A^T β_1 - A^T β_2 = [0, 0, 0]^T - [0, 0, 0]^T = [0, 0, 0]^T. (3.23c)

    Equation (3.23c) proves that the difference between two metameric blacks is still a metameric black. Moreover, metameric blacks may be created from other metameric blacks, such as by the addition or subtraction of two metameric blacks or the multiplication of a metameric black by a positive or negative factor. This provides the means to generate an infinite number of residues and, thus, an infinite number of metamers.
(10) For any set of primaries P_j, the projection of the primaries onto the visual space can be derived with the color-matching matrix A:[2,8]

    R P_j = A (A^T A)^{-1} A^T P_j = A (A^T A)^{-1} = M_f. (3.24)

    This is because A^T P_j = I. The matrix M_f is comprised of the fundamental primaries and is depicted in Fig. 3.3 for both the CIE 1931 and 1964 standard observers, with 33 samples from 390 to 710 nm at 10-nm intervals. Note that the sum of the XYZ fundamentals is exactly equal to the fundamental of an equal-energy stimulus (see Fig. 3.2 for comparison). With the means to derive the fundamentals of primaries, Cohen stated:[2]

    The color-matching equation has historically been written with tristimulus values as coefficients, but so written the equation balances only psychologically. When the stimuli of the color-matching equation are replaced by the fundamentals processed by the visual system (after the residuals are ejected by matrix-R operations), the equation balances psychologically, physically, and mathematically.

Figure 3.3 Fundamental primaries of the CIE standard observers obtained by matrix R decompositions.

(11) Complementary color stimuli can be computed directly using matrix R operations.[2] First, the white or gray stimulus Φ_w is decomposed into its fundamental Φ*_w and metameric black β_w. Then, any color stimulus Φ is decomposed into Φ* and β. The difference between the fundamentals, n = Φ*_w - Φ*, is the complementary fundamental. Metameric blacks may be added to the complementary fundamental n to create a suite of complementary metamers.

    As shown in Property 10, the sum of all three XYZ fundamentals is an equal-energy stimulus; it follows that each primary fundamental is complementary to the sum of the other two primaries. Thus, complementary color stimuli are a special case of the color-matching equation.[2]
(12) For a given set of primaries P and its associated matching functions A, any set of primaries P_j that has the same projection onto the visual subspace as P will have the same color-matching functions. This gives the equivalent primaries. The condition for equivalent primaries is that their tristimulus values are equal, based on the identity relationship:

    A^T P = A^T P_j = I. (3.25)

    The primaries P_j are decomposed as follows:

    P_j = P* + β = R P_j + (I - R) P_j. (3.26)

    The primaries P_j have the same projection onto the visual subspace as P; therefore, Eq. (3.26) can be rewritten as

    P_j = R P + (I - R) P_j. (3.27)

    Since the visual spectral sensitivity V and the color-matching function A define the same 3D visual subspace, the orthogonal projection operator R can be written in terms of V:

    R = A (A^T A)^{-1} A^T = V (V^T V)^{-1} V^T. (3.28)

    Substituting Eq. (3.28) into Eq. (3.27), we have

    P_j = V (V^T V)^{-1} V^T P + [I - V (V^T V)^{-1} V^T] P_j. (3.29)

    Recall that the color-matching matrix is a linear transform of the visual spectral sensitivity [see Eq. (2.7)]; thus, any arbitrary set of primaries P_j will have a corresponding color-matching function A_j as given in Eq. (3.30):

    A_j = V (P_j^T V)^{-1}. (3.30)

    Substituting Eq. (3.29) into Eq. (3.30), we obtain

    A_j = V {[P^T V (V^T V)^{-1} V^T + P_j^T (I - V (V^T V)^{-1} V^T)] V}^{-1}. (3.31)

    The second term inside the braces of Eq. (3.31) vanishes when right-multiplied by V, because [I - V (V^T V)^{-1} V^T] V = V - V = 0. This null-space component has an important implication in that it has nothing to do with the primaries used; any set of primaries, or any function, can be used, for that matter. By the same identity relationship, (V^T V)^{-1} (V^T V) = I, the first term becomes P^T V:

    A_j = V (P^T V)^{-1}. (3.32)

    Comparing Eq. (3.32) to Eq. (2.7), A = V (P^T V)^{-1}, it is clear that A_j = A. This proof implies that all primary sets generate equivalent color-matching functions; it is only the projection of the primaries onto the visual space that matters. The result provides the means to generate a set of equivalent primaries having the same projection but different null-space components, which can be chosen in such a way as to make primaries with better physical properties.[8]
(13) Matrix R operations can also be used to derive the monochromatic orthogonal color stimuli. With a 300×300 matrix R (300 sample points in the visible spectrum), Cohen and Kappauf found triplets of spectral monochromatic stimuli on the spectral envelope that are orthogonal to extremely close tolerances.[6] These stimuli are 457, 519, and 583 nm for the CIE 1931 standard observer, and 455, 513, and 584 nm for the CIE 1964 standard observer. The three metameric blacks of these orthogonal monochromatic stimuli are also mutually orthogonal.
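Property 6, the invariance of R under a change of primaries, is easy to confirm numerically. The sketch below uses random stand-in data for the CMF matrix A and a random (generically nonsingular) 3×3 transform M, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 31
A = rng.random((n, 3))        # stand-in CMF matrix
M = rng.random((3, 3))        # 3x3 primary transformation, Eq. (3.18)
A_p = A @ M                   # CMFs for the transformed primaries


def projector(B):
    """Orthogonal projector onto the column space of B, as in Eq. (3.11)."""
    return B @ np.linalg.inv(B.T @ B) @ B.T


# Eq. (3.22): both CMF matrices give the same matrix R
assert np.allclose(projector(A), projector(A_p))
```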
3.4 Metamers Under Different Illuminants
Trussell has developed a method for obtaining metamers under different illuminants, such as the scenario given in Eq. (3.4).[8] For two illuminants, we have the following equivalence in tristimulus values:

A^T E_1 S_1 = A^T E_1 S_2 and A^T E_2 S_1 = A^T E_2 S_2, (3.33a)

or

Λ_1^T S_1 = Λ_1^T S_2 and Λ_2^T S_1 = Λ_2^T S_2. (3.33b)

We can apply the orthogonal projection by modifying matrix R with the color object-matching function Λ = EA (see Section 1.6 for Λ):

R̃ = Λ (Λ^T Λ)^{-1} Λ^T. (3.34)

We obtain the fundamentals and metameric blacks

n_11 = [Λ_1 (Λ_1^T Λ_1)^{-1} Λ_1^T] S_1 = R̃_1 S_1 and β_11 = (I - R̃_1) S_1, (3.35a)

n_12 = [Λ_1 (Λ_1^T Λ_1)^{-1} Λ_1^T] S_2 = R̃_1 S_2 and β_12 = (I - R̃_1) S_2, (3.35b)

n_21 = [Λ_2 (Λ_2^T Λ_2)^{-1} Λ_2^T] S_1 = R̃_2 S_1 and β_21 = (I - R̃_2) S_1, (3.35c)

n_22 = [Λ_2 (Λ_2^T Λ_2)^{-1} Λ_2^T] S_2 = R̃_2 S_2 and β_22 = (I - R̃_2) S_2, (3.35d)

n_11 = n_12 and n_21 = n_22. (3.35e)

For two illuminants, we can set up an n×6 augmented matrix Λ_a from the two n×3 matrices Λ_1 and Λ_2:

Λ_a = [Λ_1 Λ_2] = | e_{1,1} x̄_1  e_{1,1} ȳ_1  e_{1,1} z̄_1  e_{2,1} x̄_1  e_{2,1} ȳ_1  e_{2,1} z̄_1 |
                  | e_{1,2} x̄_2  e_{1,2} ȳ_2  e_{1,2} z̄_2  e_{2,2} x̄_2  e_{2,2} ȳ_2  e_{2,2} z̄_2 |
                  | e_{1,3} x̄_3  e_{1,3} ȳ_3  e_{1,3} z̄_3  e_{2,3} x̄_3  e_{2,3} ȳ_3  e_{2,3} z̄_3 |
                  |  .            .            .            .            .            .           |
                  | e_{1,n} x̄_n  e_{1,n} ȳ_n  e_{1,n} z̄_n  e_{2,n} x̄_n  e_{2,n} ȳ_n  e_{2,n} z̄_n | , (3.36)

Λ_a^T S = Λ_a^T S_1. (3.37)

Equation (3.37) defines all spectra S that match S_1 under both illuminants. The solution is obtained by applying the matrix R decomposition into fundamental and metameric black in the six-dimensional space, provided that the Λ_1 and Λ_2 component vectors are independent. Again, we apply the modified matrix R̃_a of Eq. (3.38), obtained by substituting Λ_a into Eq. (3.34), to the input spectrum S_1:

R̃_a = Λ_a (Λ_a^T Λ_a)^{-1} Λ_a^T. (3.38)

The resulting fundamental and metameric black are given as

n_{a,1} = [Λ_a (Λ_a^T Λ_a)^{-1} Λ_a^T] S_1 = R̃_a S_1 and β_{a,1} = (I - R̃_a) S_1. (3.39)

The solution of Eq. (3.37) for obtaining metamers is then given by

S_2 = n_{a,1} + α β_{a,1} = R̃_a S_1 + α (I - R̃_a) S_1, (3.40)

where α is an arbitrary scalar, because any positive or negative factor multiplied by a metameric black is still a metameric black (see Property 9).

Mathematically, this approach of using an augmented matrix and a modified matrix R operator can be generalized to match an arbitrary spectrum under several illuminants, as long as the number of illuminants used is no greater than n/3, where n is the number of samples in the visible spectrum:

Λ_a = [Λ_1 Λ_2 ··· Λ_{m-1} Λ_m]. (3.41)

The problem of obtaining a realizable spectrum, i.e., one that is nonnegative, will require more constraints.[8]
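Trussell's augmented-matrix construction can be sketched as follows, again with synthetic stand-ins for the CMFs and the two illuminants (an assumption for illustration). The generated S_2 matches S_1 under both illuminants, although it is not guaranteed to be nonnegative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 31
A = rng.random((n, 3))                  # stand-in CMF matrix
E1 = np.diag(rng.random(n))             # illuminant 1 as a diagonal matrix
E2 = np.diag(rng.random(n))             # illuminant 2 as a diagonal matrix
S1 = rng.random(n)                      # reference object spectrum

L = np.hstack([E1 @ A, E2 @ A])         # augmented Lambda_a, Eq. (3.36), n x 6
R_a = L @ np.linalg.inv(L.T @ L) @ L.T  # modified projector, Eq. (3.38)

alpha = -0.5                            # arbitrary scalar of Eq. (3.40)
S2 = R_a @ S1 + alpha * (np.eye(n) - R_a) @ S1

# S1 and S2 are metamers under BOTH illuminants:
assert np.allclose(A.T @ E1 @ S1, A.T @ E1 @ S2)
assert np.allclose(A.T @ E2 @ S1, A.T @ E2 @ S2)
```

Varying alpha sweeps out the family of solutions to Eq. (3.37); enforcing nonnegativity would require the extra constraints noted above.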
3.5 Metameric Correction
Metameric correction is a process of correcting a trial specimen to metamerically
match a standard specimen under a reference viewing condition. Three methods
are in use.
3.5.1 Additive correction
The additive correction is performed in CIELAB space (see Section 5.3 for definitions of CIELAB space), where the correction in each color channel (L*_c, a*_c, or b*_c) is the difference between the trial specimen (L*_{t,r}, a*_{t,r}, or b*_{t,r}) and the standard specimen (L*_{s,r}, a*_{s,r}, or b*_{s,r}) under the reference viewing condition. The difference is then added to the corresponding channel of the trial specimen under the test viewing condition (L*_{t,t}, a*_{t,t}, or b*_{t,t}) to give the corrected CIELAB values (L*_{t,c}, a*_{t,c}, or b*_{t,c}):[20]

L*_c = L*_{t,r} - L*_{s,r}, a*_c = a*_{t,r} - a*_{s,r}, b*_c = b*_{t,r} - b*_{s,r}, (3.42)

L*_{t,c} = L*_{t,t} + L*_c, a*_{t,c} = a*_{t,t} + a*_c, b*_{t,c} = b*_{t,t} + b*_c. (3.43)
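A minimal numerical sketch of Eqs. (3.42) and (3.43), using made-up CIELAB measurements (subscripts: t = trial, s = standard; r = reference condition, t = test condition):

```python
import numpy as np

lab_t_r = np.array([52.0, 10.5, -3.2])   # trial specimen, reference condition
lab_s_r = np.array([50.0, 12.0, -4.0])   # standard specimen, reference condition
lab_t_t = np.array([53.5,  8.0, -1.5])   # trial specimen, test condition

corr = lab_t_r - lab_s_r                 # per-channel correction, Eq. (3.42)
lab_t_c = lab_t_t + corr                 # corrected trial values, Eq. (3.43)
```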
3.5.2 Multiplicative correction
The multiplicative correction is performed in CIE XYZ space. The tristimulus values of the trial specimen under the test viewing condition (X_{t,t}, Y_{t,t}, Z_{t,t}) are multiplied by the ratio of the corresponding tristimulus values of the standard (X_{s,r}, Y_{s,r}, Z_{s,r}) to the trial specimen (X_{t,r}, Y_{t,r}, Z_{t,r}) under the reference viewing condition to give the corrected tristimulus values (X_c, Y_c, Z_c):[21]

X_c = X_{t,t} (X_{s,r} / X_{t,r}), Y_c = Y_{t,t} (Y_{s,r} / Y_{t,r}), Z_c = Z_{t,t} (Z_{s,r} / Z_{t,r}). (3.44)
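Equation (3.44) in the same style, with made-up tristimulus values for illustration:

```python
import numpy as np

xyz_t_t = np.array([40.0, 42.0, 30.0])   # trial, test condition
xyz_s_r = np.array([38.0, 40.0, 33.0])   # standard, reference condition
xyz_t_r = np.array([40.0, 41.0, 30.0])   # trial, reference condition

# Per-channel scaling by the standard/trial ratio under the reference condition
xyz_c = xyz_t_t * (xyz_s_r / xyz_t_r)    # Eq. (3.44)
```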
3.5.3 Spectral correction
The spectral correction utilizes the spectral decomposition into the fundamental metamer and its metameric black. The metameric black of the trial specimen, β_{t,r}, is added to the fundamental of the standard specimen, Φ*_{s,t}, to give the corrected trial spectral power distribution Φ_c:[22]

Φ_c = Φ*_{s,t} + β_{t,r} = R Φ_{s,t} + (I - R) Φ_{t,r}. (3.45)

As one can see, the spectral correction is an application of Property 12.
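A sketch of the spectral correction of Eq. (3.45), with random stand-in spectra and CMFs (an assumption for illustration); by construction, the corrected spectrum has exactly the tristimulus values of the standard:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 31
A = rng.random((n, 3))                   # stand-in CMF matrix
R = A @ np.linalg.inv(A.T @ A) @ A.T     # projector of Eq. (3.11)

S_std = rng.random(n)     # standard specimen spectrum (test condition)
S_trial = rng.random(n)   # trial specimen spectrum (reference condition)

# Fundamental of the standard plus metameric black of the trial, Eq. (3.45)
S_corr = R @ S_std + (np.eye(n) - R) @ S_trial

# The corrected spectrum is a metamer of the standard:
assert np.allclose(A.T @ S_corr, A.T @ S_std)
```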
3.6 Indices of Metamerism
A measure of the color difference between two metameric specimens caused by a test illuminant whose spectral distribution differs from that of the reference illuminant is a special metamerism index (change in illuminant) for the two specimens. The CIE recommends that the index of metamerism m_t be set equal to the color difference ΔE* between the two specimens computed under the test illuminant. For additive correction, the index is

m_t = [(L*_{t,c} - L*_{s,t})² + (a*_{t,c} - a*_{s,t})² + (b*_{t,c} - b*_{s,t})²]^{1/2}, (3.46)

where (L*_{s,t}, a*_{s,t}, b*_{s,t}) are the CIELAB values of the standard specimen under the test viewing condition.
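Equation (3.46) is a plain Euclidean distance in CIELAB; a sketch with made-up values (continuing the hypothetical additive-correction numbers used above is not required, any corrected/standard pair works):

```python
import numpy as np

lab_t_c = np.array([55.5, 6.5, -0.7])   # corrected trial, test condition
lab_s_t = np.array([51.0, 9.0, -2.0])   # standard, test condition

m_t = np.linalg.norm(lab_t_c - lab_s_t)  # Eq. (3.46)
```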
3.6.1 Index of metamerism potential
This index is derived from spectral information about the standard and trial specimens under the reference viewing conditions. It characterizes the way in which two particular spectral power distributions differ, and it provides an assessment of the potential variation in their colorimetric performance over a wide range of viewing conditions. This index is given by Nimeroff[23] as

m_t² = Σ (S_{t,r} - S_{s,r})², (3.47)

where S is the radiance of the specified specimen. The summation is carried out over the entire range of the visible spectrum. Nimeroff and Billmeyer have pointed out that spectral differences need to be weighted by some colorimetric response function of the eye.[24,25] This weighting makes sense because eye sensitivity is not uniform across the whole visible spectrum; it is most sensitive in the center and diminishes at both ends.
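A sketch of Eq. (3.47) and of the weighting idea; the bell-shaped weight below is a made-up stand-in for an eye-response function, not a formula from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 31
S_t_r = rng.random(n)   # trial spectral radiance, reference condition
S_s_r = rng.random(n)   # standard spectral radiance, reference condition

m_sq = np.sum((S_t_r - S_s_r) ** 2)       # Eq. (3.47), unweighted

# Weighted variant along the lines of Nimeroff and Billmeyer's remark,
# emphasizing the center of the visible range (weights here are < 1):
w = np.exp(-0.5 * ((np.arange(n) - n / 2) / 6.0) ** 2)
m_sq_w = np.sum(w * (S_t_r - S_s_r) ** 2)
```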
References
1. C. J. Bartleson, Colorimetry, Optical Radiation Measurements, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York, Vol. 2, pp. 105-109 (1980).
2. J. B. Cohen, Color and color mixture: Scalar and vector fundamentals, Color Res. Appl. 13, pp. 5-39 (1988).
3. G. Wyszecki, Valenzmetrische Untersuchung des Zusammenhanges zwischen normaler und anomaler Trichromasie, Die Farbe 2, pp. 39-52 (1953).
4. G. Wyszecki, Evaluation of metameric colors, J. Opt. Soc. Am. 48, pp. 451-454 (1958).
5. J. B. Cohen and W. E. Kappauf, Metameric color stimuli, fundamental metamers, and Wyszecki's metameric blacks, Am. J. Psychology 95, pp. 537-564 (1982).
6. J. B. Cohen and W. E. Kappauf, Color mixture and fundamental metamers: Theory, algebra, geometry, application, Am. J. Psychology 98, pp. 171-259 (1985).
7. H. S. Fairman, Recommended terminology for matrix R and metamerism, Color Res. Appl. 16, pp. 337-341 (1991).
8. H. J. Trussell, Applications of set theoretic methods to color systems, Color Res. Appl. 16, pp. 31-41 (1991).
9. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, p. 142 (1982).
10. W. A. Thornton, Three color visual response, J. Opt. Soc. Am. 62, pp. 457-459 (1972).
11. R. G. Kuehni, Intersection nodes of metameric matches, Color Res. Appl. 4, pp. 101-102 (1979).
12. D. L. MacAdam, Photometric relationships between complementary colors, J. Opt. Soc. Am. 28, pp. 103-111 (1938).
13. G. R. Bird and R. C. Jones, Estimation of the spectral functions of the human cone pigments, J. Opt. Soc. Am. 55, pp. 1686-1691 (1965).
14. F. E. Ives, The optics of trichromatic photography, Photogr. J. 40, pp. 99-121 (1900).
15. A. C. Hardy and F. L. Wurzburg, The theory of three-color reproduction, J. Opt. Soc. Am. 27, pp. 227-240 (1937).
16. W. T. Hanson and W. L. Brewer, Subtractive color photography: The role of masks, J. Opt. Soc. Am. 44, pp. 129-134 (1954).
17. W. T. Hanson and F. L. Wurzburg, Subtractive color photography: Spectral sensitivities and masks, J. Opt. Soc. Am. 44, pp. 227-240 (1954).
18. R. J. Ross, Color Film for Color Television, Hastings House, New York (1970).
19. R. W. Pridmore, Theory of primary colors and invariant hues, Color Res. Appl. 16, pp. 122-129 (1991).
20. A. Brockes, The evaluation of color matches under various light sources using the metameric index, Report, 9th FATIPEC Congress, Brussels, Sect. 1, pp. 3-9 (1969).
21. A. Brockes, The comparison of calculated metameric indices with visual observation, Farbe 19(1/6) (1970).
22. H. S. Fairman, Metameric correction using parameric decomposition, Color Res. Appl. 12, pp. 261-265 (1987).
23. I. Nimeroff and J. A. Yurow, Degree of metamerism, J. Opt. Soc. Am. 55, pp. 185-190 (1965).
24. I. Nimeroff, A survey of papers on degree of metamerism, Color Eng. 6, pp. 44-46 (1969).
25. F. W. Billmeyer, Jr., Notes on indices of metamerism, Color Res. Appl. 16, pp. 342-343 (1991).
Chapter 4
Chromatic Adaptation
Color space transformations between imaging devices are dependent on the illuminants used. The mismatch of white points is a frequently encountered problem. It happens in situations where the measuring and viewing of an object are under different illuminants, where the original and the reproduction use different illuminants, and where different substrates are under the same illuminant. Normally, this problem is dealt with by chromatic adaptation, in which the illumination difference is treated with an appearance transform.
Chromatic adaptation deals with the visual sensitivity regulation of color vision, such as the selective, yet independent, changes in responsivity of the cone photoreceptors with respect to surround and stimuli. For example, a photographic scene viewed under different illuminants (e.g., a tungsten lamp versus daylight D65) looks pretty much the same in spite of the fact that the reflected light is very different under these two illuminants. This is because our eyes have adapted under each condition to discount the illuminant difference. This is known as the color constancy of human vision. To understand and, therefore, to predict the color appearance of objects under a different illuminant is the single most important appearance-matching objective. It has many industry applications, such as cross-rendering between different media. There are two main approaches: one is chromatic adaptation employing visual evaluations, and the other is a mathematical computation of the color constancy (or white conversion). The main goal of computational color constancy is to predict the object's surface reflectance. If the surface spectrum is known, the correct tristimulus values can be computed under any adapted illumination. Computational color constancy will be presented in Chapter 12. This chapter reviews methods of chromatic adaptation with an emphasis on the mathematical formulation. Several models, ranging from the von Kries hypothesis to the Retinex theory, are presented.
4.1 Von Kries Hypothesis
There are numerous methods for chromatic adaptation; they can be found in many color textbooks [1-3]. The most important one is the von Kries hypothesis, which states that the individual components present in the organ of vision are completely independent of one another and each is adapted exclusively according to its own function [4]. If proportionality and additivity hold for adaptations of the source and destination conditions, the source and destination tristimulus values, (X_s, Y_s, Z_s) and (X_d, Y_d, Z_d), are related by a linear transformation with a 3×3 conversion matrix Λ, owing to the invariant of the matrix A of the color-matching functions (see Section 2.9):

[X_d, Y_d, Z_d]^T = Λ [X_s, Y_s, Z_s]^T, (4.1)
where superscript T denotes the transpose of the vector. The conversion matrix
can be derived from cone responsivity in many ways, depending on the model used.
For the von Kries model, the conversion matrix takes the form

Λ = M^{-1} D M, (4.2)

where M is a 3×3 matrix that transfers the source tristimulus values to the three cone responsivities (L_s, M_s, S_s) of the long, medium, and short wavelengths, as given in Eq. (4.3), and M^{-1} is the inverse of M, which is not singular when the three cone responsivities are independent:

[L_s, M_s, S_s]^T = M [X_s, Y_s, Z_s]^T. (4.3)
The transfer matrix depends on both M and D, but is weighted more heavily toward M, which enters in both its forward and inverse forms. Thus, M is critical to the final form of the transfer matrix. In theory, there are an infinite number of choices for the M matrix; therefore, many matrices have been proposed to satisfy the constraints of trichromacy and color matching. Two frequently used M matrices and their inverses are given in Table 4.1.
The source cone responsivity (L_s, M_s, S_s) is converted to the destination responsivity (L_d, M_d, S_d) using a D matrix, a diagonal matrix containing the destination-to-source ratios of the cone responsivities:

[L_d, M_d, S_d]^T = D [L_s, M_s, S_s]^T,
D = diag( L_w,d/L_w,s,  M_w,d/M_w,s,  S_w,d/S_w,s ), (4.4)

Table 4.1 Matrix elements of two frequently used M matrices.

                      Matrix M_1                    Matrix M_2
  Matrix       0.4002   0.7076  -0.0808  |   0.3897   0.6890  -0.0787
              -0.2263   1.1653   0.0457  |  -0.2298   1.1834   0.0464
               0        0        0.9182  |   0        0        1.000
  Inverted     1.860   -1.129    0.220   |   1.9102  -1.1122   0.2019
  matrix       0.361    0.639    0       |   0.3709   0.6291   0
               0        0        1.089   |   0        0        1.000
where L_w, M_w, and S_w are the long-, medium-, and short-wavelength cone responsivities, respectively, of the scene white (or maximum stimulus), and subscripts d and s denote destination and source, respectively. For the transform from tungsten light (illuminant A; X_N,A = 109.8, Y_N,A = 100, Z_N,A = 35.5) to average daylight (illuminant C; X_N,C = 98.0, Y_N,C = 100, Z_N,C = 118.1) using Eq. (4.3), we have

[L_w,s, M_w,s, S_w,s]^T = M_1 [X_N,A, Y_N,A, Z_N,A]^T
  = [ 0.4002   0.7076  -0.0808 ] [109.8]   [111.83]
    [-0.2263   1.1653   0.0457 ] [100.0] = [ 93.30],  (4.5a)
    [ 0        0        0.9182 ] [ 35.5]   [ 32.60]

[L_w,d, M_w,d, S_w,d]^T = M_1 [X_N,C, Y_N,C, Z_N,C]^T
  = [ 0.4002   0.7076  -0.0808 ] [ 98.0]   [100.44]
    [-0.2263   1.1653   0.0457 ] [100.0] = [ 99.75].  (4.5b)
    [ 0        0        0.9182 ] [118.1]   [108.44]
Once the cone responsivities for both the source and destination illuminants are obtained, we can determine the diagonal matrix D by taking their ratios, as shown in Eq. (4.6):

D = diag( L_w,d, M_w,d, S_w,d ) [ diag( L_w,s, M_w,s, S_w,s ) ]^{-1}
  = diag( L_w,d/L_w,s,  M_w,d/M_w,s,  S_w,d/S_w,s ). (4.6)
In this example,

D = diag( 0.898, 1.069, 3.327 ).
The last step is to convert the destination cone responsivity (L_d, M_d, S_d) to the tristimulus values (X_d, Y_d, Z_d) under the destination illuminant:

[X_d, Y_d, Z_d]^T = M_1^{-1} [L_d, M_d, S_d]^T,
M_1^{-1} = [ 1.860  -1.129  0.220 ]
           [ 0.361   0.639  0     ]. (4.7)
           [ 0       0      1.089 ]
These steps can be combined to give a single 3×3 transfer matrix Λ:

Λ = M_1^{-1} D M_1
  = [ 1.860  -1.129  0.220 ] [ 0.898  0      0     ] [ 0.400   0.708  -0.081 ]
    [ 0.361   0.639  0     ] [ 0      1.069  0     ] [-0.226   1.165   0.046 ]
    [ 0       0      1.089 ] [ 0      0      3.327 ] [ 0       0       0.918 ]
  = [ 0.942  -0.225  0.752 ]
    [-0.025   1      0     ].
    [ 0       0      3.327 ]
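The chain above (tristimulus values to cone responsivities, diagonal scaling, and back) can be reproduced directly. The following sketch uses the M_1 matrix, its printed inverse, and the illuminant A and C white points quoted in the text; the helper-function names are my own:

```python
# Von Kries chromatic adaptation from illuminant A to illuminant C
# (Section 4.1), using the M1 matrix of Table 4.1.

def matvec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

M1 = [[ 0.4002, 0.7076, -0.0808],
      [-0.2263, 1.1653,  0.0457],
      [ 0.0,    0.0,     0.9182]]
M1_INV = [[1.860, -1.129, 0.220],   # printed inverse of M1, Eq. (4.7)
          [0.361,  0.639, 0.0  ],
          [0.0,    0.0,   1.089]]

white_A = [109.8, 100.0, 35.5]      # source white (illuminant A)
white_C = [98.0, 100.0, 118.1]      # destination white (illuminant C)

cone_s = matvec(M1, white_A)        # Eq. (4.5a): ~ (111.83, 93.30, 32.60)
cone_d = matvec(M1, white_C)        # Eq. (4.5b): ~ (100.44, 99.75, 108.44)
D = [cone_d[i] / cone_s[i] for i in range(3)]   # diagonal of Eq. (4.6)

def von_kries(xyz):
    """Map source tristimulus values to the destination illuminant."""
    lms = matvec(M1, xyz)
    adapted = [D[i] * lms[i] for i in range(3)]
    return matvec(M1_INV, adapted)

# Sanity check: the source white should map (approximately) onto the
# destination white.
print([round(v, 1) for v in von_kries(white_A)])
```

Because the printed inverse carries only three decimals, the round trip agrees with the destination white to roughly 0.1 unit rather than exactly.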
4.2 Helson-Judd-Warren Transform
Helson, Judd, and Warren derived the inverse M matrix from a set of RGB primaries with chromaticity coordinates of (0.7471, 0.2529) for red, (1.0, 0.0) for green, and (0.1803, 0.0) for blue [5]. By converting back to tristimulus values and keeping each row sum equal to 1, they derived the inverse matrix M_3^{-1} for the conversion from the destination cone responsivities to tristimulus values:

[X_d, Y_d, Z_d]^T = M_3^{-1} [L_d, M_d, S_d]^T,
M_3^{-1} = [ 2.954  -2.174  0.220 ]
           [ 1       0      0     ]. (4.8)
           [ 0       0      1     ]

Knowing the inverse matrix M_3^{-1}, they obtained the forward matrix M_3 by matrix inversion. The forward matrix, in turn, is used to convert from the source tristimulus values to the source cone responsivities:

[L_s, M_s, S_s]^T = M_3 [X_s, Y_s, Z_s]^T,
M_3 = [ 0       1       0      ]
      [-0.4510  1.3588  0.1012 ]. (4.9)
      [ 0       0       1      ]
Next, the source cone responsivities are converted to the destination cone responsivities using a D matrix given by Eq. (4.4), where the cone responsivities of the scene white are obtained as the product of M_3 and the tristimulus values of the illuminant. If the source is illuminant A and the destination is illuminant C, we have

[L_w,s, M_w,s, S_w,s]^T = M_3 [X_N,A, Y_N,A, Z_N,A]^T
  = [ 0       1       0      ] [109.8]   [100.0 ]
    [-0.4510  1.3588  0.1012 ] [100.0] = [ 89.95],  (4.10a)
    [ 0       0       1      ] [ 35.5]   [ 35.5 ]

[L_w,d, M_w,d, S_w,d]^T = M_3 [X_N,C, Y_N,C, Z_N,C]^T
  = [ 0       1       0      ] [ 98.0]   [100.0 ]
    [-0.4510  1.3588  0.1012 ] [100.0] = [103.63],  (4.10b)
    [ 0       0       1      ] [118.1]   [118.1 ]
and

D = diag( L_w,d/L_w,s,  M_w,d/M_w,s,  S_w,d/S_w,s ) = diag( 1, 1.151, 3.327 ). (4.11)
The conversion to the destination tristimulus values is given by Eq. (4.8). For illuminant A (tungsten light) to illuminant C (average daylight), the transfer matrix Λ is

Λ = M_3^{-1} D M_3
  = [ 2.954  -2.174  0.220 ] [ 1  0      0     ] [ 0       1      0     ]
    [ 1       0      0     ] [ 0  1.151  0     ] [-0.451   1.359  0.101 ]
    [ 0       0      1     ] [ 0  0      3.327 ] [ 0       0      1     ]
  = [ 1.129  -0.446  0.479 ]
    [ 0       1      0     ].
    [ 0       0      3.327 ]
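The same machinery as in the von Kries example applies here, only with the Helson-Judd-Warren M_3 matrix. A short sketch (matrix and white points as quoted in the text) that reproduces the D matrix of Eq. (4.11):

```python
# Helson-Judd-Warren adaptation (Section 4.2): compute the diagonal
# D matrix for the illuminant A -> illuminant C example in the text.

def matvec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

M3 = [[ 0.0,    1.0,    0.0   ],
      [-0.4510, 1.3588, 0.1012],
      [ 0.0,    0.0,    1.0   ]]

white_A = [109.8, 100.0, 35.5]
white_C = [98.0, 100.0, 118.1]

cone_s = matvec(M3, white_A)   # Eq. (4.10a): ~ (100.0, 89.95, 35.5)
cone_d = matvec(M3, white_C)   # Eq. (4.10b): ~ (100.0, 103.63, 118.1)
D = [cone_d[i] / cone_s[i] for i in range(3)]   # Eq. (4.11)
print([round(d, 3) for d in D])
```

Note how the zeros and the unit row of M_3 make the L channel pass through unchanged (D[0] = 1), which is what keeps this transform cheap.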
4.3 Nayatani Model
Nayatani and coworkers proposed a nonlinear cone adaptation model to more adequately predict various experimental data from a scaling method of magnitude estimation [6-9]. The model is a two-stage process: the first stage operates in accordance with a modified von Kries transform, and the second stage is a nonlinear transform using power functions that compress each cone response:

L_d = α_L [(L_s + L_n)/(L_w,d + L_n)]^{p_L}, (4.12a)
M_d = α_M [(M_s + M_n)/(M_w,d + M_n)]^{p_M}, (4.12b)
S_d = α_S [(S_s + S_n)/(S_w,d + S_n)]^{p_S}, (4.12c)
where L_n, M_n, and S_n are noise terms; these added noise terms take the threshold behavior into account. The adaptation for each channel is proportional to a power function of the maximum cone response scaled by a constant α. The coefficients α_L, α_M, and α_S are determined to produce color constancy for medium gray stimuli. The exponents p_L, p_M, and p_S of the power function depend on the luminance of the adapting field.
Nayatani's nonlinear model provides a relatively simple extension of the von Kries hypothesis. It is also capable of predicting the Hunt effect (an increase in colorfulness with adapting luminance) [10], the Stevens effect (an increase in lightness contrast with luminance) [11], and the Helson-Judd effect (the hue of nonselective samples under chromatic illumination) [12].
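A minimal sketch of one channel of the second (compressive) stage of Eq. (4.12). The default noise term, scaling coefficient, and exponent below are placeholder values chosen only for illustration; the published model derives them from the adapting field, and the function name is mine:

```python
# One cone channel of the Nayatani-type nonlinear adaptation,
# Eq. (4.12). Parameters are illustrative placeholders, not the
# model's fitted values.

def nayatani_channel(resp_s, resp_white, noise=0.1, alpha=1.0, p=0.8):
    """Compress a source cone response resp_s against the adapting
    white response resp_white with noise term, gain alpha, exponent p."""
    return alpha * ((resp_s + noise) / (resp_white + noise)) ** p
```

The noise term keeps the ratio well behaved near threshold (very small responses), and the exponent p < 1 produces the compressive behavior described above.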
4.4 Bartleson Transform
The Bartleson transform is based on the von Kries hypothesis with nonlinear adaptations. Bartleson chose the König-Dieterici fundamentals, which are linearly related to the CIE 1931 standard observer by a matrix M_4 [13-15]. This same matrix is also used to convert tristimulus values to cone responses:

[L_s, M_s, S_s]^T = M_4 [X_s, Y_s, Z_s]^T,
M_4 = [ 0.0713  0.9625  -0.0147 ]
      [-0.3952  1.1668   0.0815 ]. (4.13)
      [ 0       0        0.5610 ]
For the alteration from the source to the destination cone sensitivities, Bartleson believed that the von Kries linear relationship was not adequate to predict the adaptation of color appearance, and he proposed a prototypical form with nonlinear adaptations:

L_d = α_L [(L_w,d/L_w,s) L_s]^{p_L}, (4.14a)
M_d = α_M [(M_w,d/M_w,s) M_s]^{p_M}, (4.14b)
S_d = α_S [(S_w,d/S_w,s) S_s]^{p_S}, (4.14c)

where the exponent p has the general expression

p = c_L (L_w,d/L_w,s)^{d_L} + c_M (M_w,d/M_w,s)^{d_M} + c_S (S_w,d/S_w,s)^{d_S}, (4.14d)
where c_L, c_M, and c_S are constants, and the coefficients are

α_L = L_w,d^{(1-p_L)},  α_M = M_w,d^{(1-p_M)},  α_S = S_w,d^{(1-p_S)}. (4.14e)
When CIE illuminants are used, the general equations for the adaptation of cone sensitivities reduce to Eq. (4.15), where the long- and medium-wavelength responses are the same as in the von Kries transform and the short-wavelength response (or blue fundamental) is compressed:

L_d = (L_w,d/L_w,s) L_s, (4.15a)
M_d = (M_w,d/M_w,s) M_s, (4.15b)
S_d = α_S [(S_w,d/S_w,s) S_s]^{p_S}, (4.15c)
p_S = 0.326 (L_w,d/L_w,s)^{27.45} + 0.325 (M_w,d/M_w,s)^{3.91} + 0.340 (S_w,d/S_w,s)^{0.45}. (4.15d)
This gives the following diagonal D matrix:

[L_d, M_d, S_d]^T = D [L_s, M_s, S_s]^T,
D = diag( L_w,d/L_w,s,  M_w,d/M_w,s,  α_S (S_w,d/S_w,s)^{p_S} S_s^{(p_S - 1)} ). (4.16)
The conversion to the destination tristimulus values is given as

[X_d, Y_d, Z_d]^T = M_4^{-1} [L_d, M_d, S_d]^T,
M_4^{-1} = [ 2.5170  -2.0763  0.3676 ]
           [ 0.8525   0.1538  0      ]. (4.17)
           [ 0        0       1.7825 ]
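The cone-space part of the Bartleson transform, Eqs. (4.14e) and (4.15), can be sketched as follows. The exponent formula uses the coefficients and exponents as printed in Eq. (4.15d); the function names are my own:

```python
# Bartleson adaptation in cone space, Eqs. (4.14e) and (4.15):
# von Kries scaling for L and M, power-function compression for S.

def p_s(lr, mr, sr):
    """Exponent of Eq. (4.15d); lr, mr, sr are destination/source
    white-point ratios of the cone responsivities."""
    return 0.326 * lr ** 27.45 + 0.325 * mr ** 3.91 + 0.340 * sr ** 0.45

def bartleson(lms_s, white_s, white_d):
    """Adapt source cone responses lms_s given source/destination
    white-point cone responses."""
    ratios = [white_d[i] / white_s[i] for i in range(3)]
    p = p_s(*ratios)
    alpha_s = white_d[2] ** (1.0 - p)               # Eq. (4.14e)
    return [ratios[0] * lms_s[0],                   # Eq. (4.15a)
            ratios[1] * lms_s[1],                   # Eq. (4.15b)
            alpha_s * (ratios[2] * lms_s[2]) ** p]  # Eq. (4.15c)
```

When the source and destination whites coincide, the three ratio terms are each 1 and the exponent collapses to p_S = 0.326 + 0.325 + 0.340 = 0.991, so the S channel is still slightly compressed even for a null adaptation.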
4.5 Fairchild Model
In order to predict incomplete chromatic adaptation, Fairchild modified the von Kries hypothesis to include the ability to predict the degree of adaptation from the adapting stimulus [3, 16, 17]. Fairchild's model is designed to be fully compatible with CIE colorimetry and to include discounting of the illuminant, the Hunt effect, and incomplete chromatic adaptation. The first step, the transform from tristimulus values to cone responses, is the same as in the von Kries model, as given in Eq. (4.3):

[L_s, M_s, S_s]^T = M_1 [X_s, Y_s, Z_s]^T,
M_1 = [ 0.4002  0.7076  -0.0808 ]
      [-0.2263  1.1653   0.0457 ]. (4.18)
      [ 0       0        0.9182 ]
It then takes the incomplete chromatic adaptation into account:

[L′, M′, S′]^T = A_s [L_s, M_s, S_s]^T,  A_s = diag( a_L,s, a_M,s, a_S,s ). (4.19)

The coefficients a_L,s, a_M,s, and a_S,s of the diagonal matrix A_s are given as follows:

a_L,s = p_L/L_w,s,
p_L = (1 + Y_w,s^{1/3} + l_e)/(1 + Y_w,s^{1/3} + 1/l_e),
l_e = 3(L_w,s/L_e)/(L_w,s/L_e + M_w,s/M_e + S_w,s/S_e), (4.20a)

a_M,s = p_M/M_w,s,
p_M = (1 + Y_w,s^{1/3} + m_e)/(1 + Y_w,s^{1/3} + 1/m_e),
m_e = 3(M_w,s/M_e)/(L_w,s/L_e + M_w,s/M_e + S_w,s/S_e), (4.20b)

a_S,s = p_S/S_w,s,
p_S = (1 + Y_w,s^{1/3} + s_e)/(1 + Y_w,s^{1/3} + 1/s_e),
s_e = 3(S_w,s/S_e)/(L_w,s/L_e + M_w,s/M_e + S_w,s/S_e), (4.20c)
where Y_w,s is the luminance of the adapting stimulus (or source illuminant) in candelas per square meter (cd/m^2). Terms with the subscript e refer to the equal-energy illuminant, which has a constant spectrum across the whole visible region. The postadaptation signals are derived by a transformation that allows luminance-dependent interaction among the three cones:

[L_d, M_d, S_d]^T = C_s [L′, M′, S′]^T,
C_s = [ 1    c_s  c_s ]
      [ c_s  1    c_s ], (4.21)
      [ c_s  c_s  1   ]

and

c_s = 0.219 - 0.0784 log Y_w,s.
The interaction term c is adopted from the work of Takahama et al. For the destination chromaticities, one needs to derive the A and C matrices for that condition, then invert and apply them in reverse sequence:

[L′_d, M′_d, S′_d]^T = C_d^{-1} [L_d, M_d, S_d]^T,
C_d = [ 1    c_d  c_d ]
      [ c_d  1    c_d ], (4.22)
      [ c_d  c_d  1   ]

and

c_d = 0.219 - 0.0784 log Y_w,d.

[L″_d, M″_d, S″_d]^T = A_d^{-1} [L′_d, M′_d, S′_d]^T,  A_d = diag( a_L,d, a_M,d, a_S,d ), (4.23)

[X_d, Y_d, Z_d]^T = M_1^{-1} [L″_d, M″_d, S″_d]^T. (4.24)

The whole chain is

[X_d, Y_d, Z_d]^T = M_1^{-1} A_d^{-1} C_d^{-1} C_s A_s M_1 [X_s, Y_s, Z_s]^T. (4.25)
Subsequent experiments showed that the C matrix introduced an unwanted luminance dependency that resulted in an overall shift in lightness with luminance level. This shift introduced significant systematic errors in the predictions for simple object colors, leading Fairchild to revise the model by removing the C matrix.
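The luminance-dependent interaction matrix of Eqs. (4.21)-(4.22) is easy to construct; a small sketch (assuming, as is usual for this formula, that the logarithm is base 10; function names are mine):

```python
import math

# Interaction matrix C of Fairchild's model, Eqs. (4.21)-(4.22):
# ones on the diagonal and a luminance-dependent term c off it,
# with c = 0.219 - 0.0784 log10(Y_w).

def interaction_c(y_white):
    """Off-diagonal interaction term for adapting luminance y_white."""
    return 0.219 - 0.0784 * math.log10(y_white)

def c_matrix(y_white):
    c = interaction_c(y_white)
    return [[1.0, c, c],
            [c, 1.0, c],
            [c, c, 1.0]]
```

For an adapting luminance of 100 cd/m^2 the off-diagonal term is 0.219 - 0.0784 × 2 = 0.0622, so each post-adaptation cone signal picks up about a 6% contribution from the other two channels; it is exactly this luminance dependence that the revised model removes.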
4.6 Hunt Model
Hunt proposed a rather complex model to predict color appearance under various viewing conditions. First, a linear transform using matrix M_2 is performed to convert CIEXYZ to cone responses:

[L_s, M_s, S_s]^T = M_2 [X_s, Y_s, Z_s]^T,
M_2 = [ 0.3897  0.6890  -0.0787 ]
      [-0.2298  1.1834   0.0464 ]. (4.26)
      [ 0       0        1.0    ]
A nonlinear response function, similar to that used by Nayatani and coworkers, is used to predict the chromatic adaptation, which is inversely proportional to the purity of the color of the reference light. Hunt's nonlinear chromatic-adaptation model is very comprehensive, having many parameters for various adjustments. It includes cone bleach factors, luminance-level adaptation factors, chromatic-adaptation factors, discounting of the illuminant, and prediction of the Helson-Judd effect. The details of the formulation are given in the original publications [18-21]. Results indicate that the Hunt model provides good predictions for the appearance of colors in the Natural Color System (NCS) hues (developed in Sweden and adopted there as a national standard), Munsell samples, prints, and projected slides.
4.7 BFD Transform
Like the Bartleson transform, the BFD transform, developed at the University of Bradford, England, is a modified von Kries transform in which the short-wavelength cone signals are nonlinearly corrected by a power function with an exponent that adapts to the input cone signal [22-24]. The middle- and long-wavelength cone signals are treated as in the von Kries transform. First, the source tristimulus values are normalized with respect to the luminance Y_s, then linearly transformed to cone response signals by using the Bradford matrix M_B:

[L_s, M_s, S_s]^T = M_B [X_s/Y_s, Y_s/Y_s, Z_s/Y_s]^T,
M_B = [ 0.8951  0.2664  -0.1614 ]
      [-0.7502  1.7135   0.0367 ]. (4.27)
      [ 0.0389 -0.0685   1.0296 ]
The source cone responsivities are converted to the destination cone signals using Eq. (4.28):

L_d = [F_D (L_w,d/L_w,s) + 1 - F_D] L_s, (4.28a)
M_d = [F_D (M_w,d/M_w,s) + 1 - F_D] M_s, (4.28b)
S_d = [F_D (S_w,d/S_w,s^p) + 1 - F_D] S_s^p, (4.28c)
p = (S_w,s/S_w,d)^{0.0834}, (4.28d)
where F_D is a factor that discounts the illuminant difference. For complete adaptation, F_D = 1 and Eq. (4.28) becomes

L_d = (L_w,d/L_w,s) L_s, (4.29a)
M_d = (M_w,d/M_w,s) M_s, (4.29b)
S_d = (S_w,d/S_w,s^p) S_s^p. (4.29c)
If there is no adaptation, F_D = 0 and we have L_d = L_s, M_d = M_s, and S_d = S_s^p. For incomplete adaptation, 0 < F_D < 1, specifying the proportional level of adaptation to the source illuminant; observers are adapted to chromaticities that lie somewhere between the source and destination illuminants. However, the diagonal D matrix, no longer as simple as Eq. (4.4), contains the destination-to-source ratios of the cone responsivities together with the blue-channel compression:

[L_d, M_d, S_d]^T = D [L_s, M_s, S_s]^T,
D = diag( L_w,d/L_w,s,  M_w,d/M_w,s,  (S_w,d/S_w,s^p) S_s^{(p-1)} ). (4.30)
The conversion to the destination tristimulus values is given in Eq. (4.31):

[X_d, Y_d, Z_d]^T = M_B^{-1} [L_d Y_s, M_d Y_s, S_d Y_s]^T,
M_B^{-1} = [ 0.98699  -0.14705  0.15996 ]
           [ 0.43231   0.51836  0.04929 ]. (4.31)
           [-0.00853   0.04004  0.96849 ]
The BFD transform is adopted in the CIECAM97s color appearance model.
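The full BFD pipeline of Eqs. (4.27)-(4.31) can be sketched end to end. This version uses the Bradford matrices printed above and assumes the chapter's illuminant A and C white points; it also assumes the S cone signal is positive so that the power function is well defined:

```python
# BFD (Bradford) chromatic adaptation, Eqs. (4.27)-(4.31).

def matvec(m, v):
    """Multiply a 3x3 matrix (list of rows) by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

M_B = [[ 0.8951, 0.2664, -0.1614],
       [-0.7502, 1.7135,  0.0367],
       [ 0.0389, -0.0685, 1.0296]]
M_B_INV = [[ 0.98699, -0.14705, 0.15996],
           [ 0.43231,  0.51836, 0.04929],
           [-0.00853,  0.04004, 0.96849]]

def bfd_adapt(xyz, white_s, white_d, f_d=1.0):
    """Adapt tristimulus values xyz from white_s to white_d; f_d is
    the illuminant-discounting factor F_D (1 = complete adaptation)."""
    lms = matvec(M_B, [c / xyz[1] for c in xyz])            # Eq. (4.27)
    ws = matvec(M_B, [c / white_s[1] for c in white_s])
    wd = matvec(M_B, [c / white_d[1] for c in white_d])
    p = (ws[2] / wd[2]) ** 0.0834                           # Eq. (4.28d)
    l_d = (f_d * wd[0] / ws[0] + 1.0 - f_d) * lms[0]        # Eq. (4.28a)
    m_d = (f_d * wd[1] / ws[1] + 1.0 - f_d) * lms[1]        # Eq. (4.28b)
    s_d = (f_d * wd[2] / ws[2] ** p + 1.0 - f_d) * lms[2] ** p  # Eq. (4.28c)
    y_s = xyz[1]
    return matvec(M_B_INV, [l_d * y_s, m_d * y_s, s_d * y_s])   # Eq. (4.31)
```

With complete adaptation, the source white point must map onto the destination white point, which gives a convenient sanity check for any implementation of the transform.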
4.8 Guth Model
Guth's model is derived from the field of vision science rather than colorimetry. Its cone responsivities are not a linear transform of the color-matching functions. Therefore, the model is a significant variation of the von Kries transform; it is similar in some ways to the Nayatani model [25, 26] in that it uses a power function with an exponent of 0.7:

L_d = L_r {1 - [L_w,d/(L_w,d + σ)]},  with L_r = 0.66 L_s^{0.7} + 0.002, (4.32a)
M_d = M_r {1 - [M_w,d/(M_w,d + σ)]},  with M_r = 1.0 M_s^{0.7} + 0.003, (4.32b)
S_d = S_r {1 - [S_w,d/(S_w,d + σ)]},  with S_r = 0.45 S_s^{0.7} + 0.00135. (4.32c)

The σ is a constant that can be thought of as a noise factor. A von Kries-type gain-control coefficient can be drawn if the nonlinear relationship is ignored [3].
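One channel of Eq. (4.32) can be sketched as below. The default gain and offset correspond to the L channel as printed; the value of the noise constant σ is not fixed by the text, so the default here is an illustrative placeholder:

```python
# One channel of Guth's model, Eq. (4.32). The sigma default is a
# placeholder; the gain/offset defaults are the printed L-channel values.

def guth_channel(resp_s, white_d, gain=0.66, offset=0.002, sigma=1.0):
    """Receptor response r = gain * resp_s**0.7 + offset, attenuated by
    the adapting white via the gain-control term 1 - w/(w + sigma)."""
    r = gain * resp_s ** 0.7 + offset
    return r * (1.0 - white_d / (white_d + sigma))
```

The gain-control factor falls toward zero as the adapting response grows, which is how the model suppresses channels that are strongly stimulated by the adapting field.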
4.9 Retinex Theory
Edwin Land, founder of Polaroid, and his colleagues developed the retinex theory. Land coined the term "retinex" from "retina" and "cortex" to designate the physiological mechanisms in the retinal-cortical structure, where elements with the same wavelength sensitivity cooperate to form independent lightness images [27]. Land recognized that although any color can be matched by three primaries with proper intensities, a particular mixture of these primaries does not warrant a unique color sensation, because the color of an area changes when the surrounding colors are changed. To demonstrate this phenomenon and his theory that a particular wavelength-radiance distribution can produce a wide range of color sensations, Land developed the Color Mondrian experiments. The Color Mondrian display uses about 100 different color patches arranged arbitrarily so that each color patch is surrounded by at least five or six different color patches [28]. The display is illuminated by three projectors; each projector is fitted with a different interference filter to give red, green, and blue, respectively, and is equipped with an independent voltage control for intensity adjustments. The observer picks an area in the Mondrian; the experimenter then measures the three radiances coming from that area. A second area of a different color is picked by the observer, and the experimenter adjusts the strengths of the three projectors so that the radiances coming from the second patch are the same as those from the first patch. The interesting result is that the observer reports the same color appearance for the second patch as before, even though the radiance measurements show that the light reaching the eye is identical to that from the first color patch. Land suggested that color appearance is controlled by surface reflectances rather than by the distribution of reflected light. The theory states that each color is determined by a triplet of lightnesses, and that each lightness, in a situation like the Color Mondrian, corresponds to the reflectance of the area measured with a photoreceptor having the same spectral sensitivity as one of the three cone pigments [29]. The output of a retinex is three arrays of computed lightnesses; each lightness is determined by taking the signal at any given point in the scene and normalizing it with an average of the signals in that retinex throughout the scene [30].

The theory acknowledges the color change due to changes in surrounding colors, where the influence of the surrounding colors can be varied by changing the spatial distribution of the retinex signals. Therefore, the key feature of the retinex theory is that it takes the spatial distribution of colors into consideration for the purpose of modeling visual perception in a complex scene.
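The ratio-to-average lightness computation described above can be illustrated with a toy example. This is only a sketch of the normalization idea for one wavelength band, not Land's actual algorithm (which involves spatial path comparisons), and the three-patch "scene" is invented for illustration:

```python
# Toy illustration of retinex lightness: within one wavelength band,
# each patch's lightness is its signal divided by the band's average
# signal over the whole scene.

def retinex_lightness(band):
    """Normalize a list of per-patch signals by their scene average."""
    avg = sum(band) / len(band)
    return [v / avg for v in band]

scene_red = [10.0, 40.0, 70.0]   # hypothetical red-band signals
print(retinex_lightness(scene_red))   # -> [0.25, 1.0, 1.75]
```

Because each band is normalized independently, scaling all signals in a band by a common factor (e.g., turning up one projector) leaves the computed lightnesses unchanged, which is the mechanism behind the Mondrian result.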
4.10 Remarks
Chromatic adaptation is the single most important property in understanding and modeling color appearance. There are similarities among these models, such as the initial linear transform from tristimulus values to cone responses. The general form can be summarized as (i) the tristimulus-to-cone-responsivity transform, which is usually linear except in the Guth model; (ii) the cone-responsivity transform between the source and destination illuminations, which is the central part of the chromatic adaptation and uses a variety of functions; and (iii) the conversion from the destination cone responsivities to tristimulus values, which is usually a linear transform.

As far as computational cost is concerned, the linear models based on the von Kries hypothesis require only a matrix transform between the source and destination tristimulus values. In addition, there are many zeros in the transfer matrices, which further lowers the cost of computation. For nonlinear transforms, having more complex equations and power functions, the computational cost is much higher. However, if all parameters and exponents are fixed, the transfer matrix may be precomputed to lower the computational cost and enhance the speed.
References
1. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, pp. 429-449 (1982).
2. B. A. Wandell, Foundations of Vision, Sinauer Assoc., Sunderland, MA, pp. 309-314 (1995).
3. M. D. Fairchild, Color Appearance Models, Addison-Wesley Longman, Reading, MA, pp. 173-214 (1998).
4. J. von Kries, Chromatic adaptation (1902); Translation: D. L. MacAdam, Sources of Color Science, MIT Press, Cambridge, MA (1970).
5. H. Helson, D. B. Judd, and M. H. Warren, Object-color changes from daylight to incandescent filament illumination, Illum. Eng. 47, pp. 221-233 (1952).
6. K. Takahama, H. Sobagaki, and Y. Nayatani, Analysis of chromatic adaptation effect by a linkage model, J. Opt. Soc. Am. 67, pp. 651-656 (1977).
7. Y. Nayatani, K. Takahama, and H. Sobagaki, Estimation of adaptation effects by use of a theoretical nonlinear model, Proc. of 19th CIE Session, Kyoto, Japan (1979), CIE Publ. No. 5, pp. 490-494 (1980).
8. Y. Nayatani, K. Takahama, and H. Sobagaki, Formulation of a nonlinear model of chromatic adaptation, Color Res. Appl. 6, pp. 161-171 (1981).
9. Y. Nayatani, K. Takahama, and H. Sobagaki, On exponents of a nonlinear model of chromatic adaptation, Color Res. Appl. 7, pp. 34-45 (1982).
10. R. W. G. Hunt, Light and dark adaptation and the perception of color, J. Opt. Soc. Am. 42, pp. 190-199 (1952).
11. J. C. Stevens and S. S. Stevens, Brightness functions: Effects of adaptation, J. Opt. Soc. Am. 53, pp. 375-385 (1963).
12. H. Helson, Fundamental problems in color vision. I: The principle governing changes in hue, saturation, and lightness of non-selective samples in chromatic illumination, J. Exp. Psych. 23, pp. 439-477 (1938).
13. C. J. Bartleson, Changes in color appearance with variations in chromatic adaptation, Color Res. Appl. 4, pp. 119-138 (1979).
14. C. J. Bartleson, Predicting corresponding colors with changes in adaptation, Color Res. Appl. 4, pp. 143-155 (1979).
15. C. J. Bartleson, Colorimetry, Optical Radiation Measurements, Color Measurement, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York, Vol. 2, pp. 130-134 (1980).
16. M. D. Fairchild, A model of incomplete chromatic adaptation, Proc. 22nd Session of CIE, CIE, Melbourne, pp. 33-34 (1991).
17. M. D. Fairchild, Formulation and testing of an incomplete-chromatic-adaptation model, Color Res. Appl. 16, pp. 243-250 (1991).
18. R. W. G. Hunt, A model of colour vision for predicting colour appearance, Color Res. Appl. 7, pp. 95-112 (1982).
19. R. W. G. Hunt, A model of colour vision for predicting colour appearance in various viewing conditions, Color Res. Appl. 12, pp. 297-314 (1987).
20. R. W. G. Hunt, Revised colour-appearance model for related and unrelated colours, Color Res. Appl. 16, pp. 146-165 (1991).
21. R. W. G. Hunt, An improved predictor of colourfulness in a model of colour vision, Color Res. Appl. 19, pp. 23-26 (1994).
22. M. R. Luo, M.-C. Lo, and W.-G. Kuo, The LLAB (l:c) color model, Color Res. Appl. 21, pp. 412-429 (1996).
23. M. R. Luo, The LLAB model for color appearance and color difference evaluation, Recent Progress in Color Science, R. Eschbach and K. Braun (Eds.), IS&T, Springfield, VA, pp. 158-164 (1997).
24. C. Li, M. R. Luo, and R. W. G. Hunt, CAM97s2 model, IS&T/SID 7th Imaging Conf.: Color Science, System, and Application, pp. 262-263 (1999).
25. S. L. Guth, Model for color vision and light adaptation, J. Opt. Soc. Am. A 8, pp. 976-993 (1991).
26. S. L. Guth, Further applications of the ATD model for color vision, Proc. SPIE 2414, pp. 12-26 (1995).
27. E. H. Land, The retinex, Am. Sci. 52, pp. 247-274 (1964).
28. E. H. Land, The retinex theory of color vision, Proc. R. Inst. Gt. Brit. 47, pp. 23-58 (1975).
29. J. J. McCann, S. P. McKee, and T. H. Taylor, Quantitative studies in retinex theory, Vision Res. 16, pp. 445-458 (1976).
30. E. H. Land, Recent advances in retinex theory, Vision Res. 26, pp. 7-21 (1986).
Chapter 5
CIE Color Spaces
Different color-imaging devices use different color spaces; well-known examples are the RGB space of televisions and the cmy (or cmyk) space of printers. Colors produced by these devices are device specific, depending on the characteristics of the device. To ensure proper color rendition across devices, a device-independent color space is needed to serve as a reliable interchange standard; the CIE color spaces, which use colorimetry to give a quantitative measure for all colors, are such device-independent colorimetric spaces. The nucleus of the CIE color spaces is the tristimulus values, or CIEXYZ space, discussed in Chapter 1. In this chapter, we present derivatives and modifications of the CIEXYZ space and the related topics of the gamut boundary, color appearance models, and spatial-domain extension.
5.1 CIE 1931 Chromaticity Coordinates
Chromaticity coordinates are the normalization of the tristimulus values X, Y, and Z; they are the projection of the tristimulus space onto the 2D x-y plane [1]. Mathematically, they are expressed as
x = X/(X + Y + Z), (5.1a)
y = Y/(X + Y + Z), (5.1b)
z = Z/(X + Y + Z), (5.1c)

and

x + y + z = 1, (5.1d)
where x, y, and z are the chromaticity coordinates.
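The projection of Eq. (5.1) is a one-liner in code; a small sketch (the illuminant C values are the white point quoted in Chapter 4; function name is mine):

```python
# Chromaticity coordinates from tristimulus values, Eq. (5.1).
# Only x and y are returned, since z = 1 - x - y.

def chromaticity(x_tri, y_tri, z_tri):
    """Project tristimulus values onto the x-y chromaticity plane."""
    s = x_tri + y_tri + z_tri
    return x_tri / s, y_tri / s

# Example: illuminant C white point (X = 98.0, Y = 100.0, Z = 118.1)
print(chromaticity(98.0, 100.0, 118.1))
```

Note that the projection discards the overall magnitude of the stimulus, which is why the luminance Y must be carried separately in the (x, y, Y) triplet described below.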
5.1.1 Color gamut boundary of CIEXYZ
The color gamut boundary in CIEXYZ space is well defined in the chromaticity diagram as the spectrum locus enclosed by the spectral stimuli. The gamut boundary is computed from the color-matching functions (CMFs) x̄(λ), ȳ(λ), and z̄(λ) of the CIE 1931 2° or CIE 1964 10° observer by using Eq. (5.2) to give the chromaticity coordinates of the spectral stimuli [1]:

x(λ) = x̄(λ)/[x̄(λ) + ȳ(λ) + z̄(λ)], (5.2a)
y(λ) = ȳ(λ)/[x̄(λ) + ȳ(λ) + z̄(λ)], (5.2b)
where λ is the wavelength. The chromaticity diagram is the plot of x versus y; because the three chromaticity coordinates sum to 1, the two coordinates x and y suffice to specify the chromaticity. Figure 5.1 shows the gamut boundaries of the two CMFs, where the CIE 1931 CMF gives a slightly larger gamut than the CIE 1964 CMF in the green, blue, and purple regions. The boundary of the color locus is occupied by the spectral colors. The chromaticity coordinates represent the relative amounts of the three stimuli X, Y, and Z required to obtain any color. However, they do not indicate the luminance of the resulting color; the luminance is incorporated into the Y value. Thus, a complete description of a color is given by the triplet (x, y, Y).
Figure 5.1 Color gamuts of CIEXYZ space.

The chromaticity diagram provides useful information such as the dominant wavelength, complementary color, and color purity. The dominant wavelength is based on Grassmann's laws: the chromaticities of all additive mixtures of two primary stimuli lie along a straight line in the chromaticity diagram. Therefore, the dominant wavelength of a color is obtained by extending the line connecting the color and the illuminant to the spectrum locus. The complement of a spectral color is on the opposite side of the line connecting the color and the illuminant used. A color and its complement, when added together in a proper proportion, yield white. If the extended line for obtaining the dominant wavelength intersects the purple line, the straight line that connects the two extreme spectral colors (usually 380 and 770 nm), then the color has no dominant wavelength in the visible spectrum. In this case, the dominant wavelength is specified by the complementary spectral color with a suffix c; its value is obtained by extending a line backward through the illuminant to the spectrum locus. The CIE defines the purity of a given intermediate color as the ratio of two distances: the distance from the illuminant to the color, and the distance from the illuminant through the color to the spectrum locus or the purple line. Pure spectral colors lie on the spectrum locus, indicating a fully saturated purity of 100%; the illuminant represents a fully diluted color with a purity of 0%.
5.2 CIELUV Space
CIEXYZ is a visually nonuniform color space: colors, depending on their location in the color locus, show different degrees of error distribution. Many efforts have been dedicated to minimizing this visually nonuniform error distribution. One way to achieve more visually uniform color spaces is to perform nonlinear transforms of CIEXYZ that describe color using opponent-type axes relative to a given absolute white-point reference. CIELUV and CIELAB are two well-known examples. CIELUV is a transform of the 1976 UCS chromaticity coordinates u′, v′, and Y [1, 2]:

L* = 116(Y/Y_n)^{1/3} - 16   if Y/Y_n ≥ 0.008856, (5.3a)
L* = 903.3(Y/Y_n)            if Y/Y_n < 0.008856, (5.3b)
u* = 13L*(u′ - u′_n), (5.3c)
v* = 13L*(v′ - v′_n), (5.3d)

where L* is the lightness; X_n, Y_n, and Z_n are the tristimulus values of the illuminant; u′_n and v′_n are the values of Eq. (5.4) computed for the illuminant; and

u′ = 4X/(X + 15Y + 3Z), (5.4a)
v′ = 9Y/(X + 15Y + 3Z). (5.4b)

The color difference in CIELUV space is calculated as the Euclidean distance

ΔE*_uv = [(ΔL*)^2 + (Δu*)^2 + (Δv*)^2]^{1/2}. (5.5)
It is sometimes desirable to identify the components of a color difference as correlates of perceived hue and colorfulness. The relative colorfulness, or chroma, is defined as

C*_uv = (u*^2 + v*^2)^{1/2}, (5.6)

and the relative-colorfulness attribute called saturation is defined as

s_uv = 13[(u′ - u′_n)^2 + (v′ - v′_n)^2]^{1/2} (5.7a)

or

s_uv = C*_uv/L*. (5.7b)

The hue angle is defined as

h_uv = tan^{-1}(v*/u*), (5.8)

and the hue difference is

ΔH*_uv = [(ΔE*_uv)^2 - (ΔL*)^2 - (ΔC*_uv)^2]^{1/2}. (5.9)
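Equations (5.3)-(5.4) translate directly into code; a sketch of the forward XYZ-to-CIELUV transform (function names are mine):

```python
# CIEXYZ to CIELUV, Eqs. (5.3) and (5.4).

def xyz_to_luv(xyz, white):
    """Return (L*, u*, v*) for tristimulus values xyz relative to the
    illuminant white point."""
    X, Y, Z = xyz
    Xn, Yn, Zn = white

    def uv(x, y, z):                       # Eq. (5.4)
        d = x + 15.0 * y + 3.0 * z
        return 4.0 * x / d, 9.0 * y / d

    t = Y / Yn
    if t >= 0.008856:                      # Eq. (5.3a)
        L = 116.0 * t ** (1.0 / 3.0) - 16.0
    else:                                  # Eq. (5.3b)
        L = 903.3 * t
    u_p, v_p = uv(X, Y, Z)
    u_n, v_n = uv(Xn, Yn, Zn)
    return L, 13.0 * L * (u_p - u_n), 13.0 * L * (v_p - v_n)
```

As a quick check, the white point itself must map to (100, 0, 0), and any stimulus proportional to the white (a neutral gray) must have u* = v* = 0, since u′ and v′ are ratios and are unchanged by uniform scaling.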
5.2.1 Color gamut boundary of CIELUV
Like CIEXYZ, CIELUV has a well-defined gamut boundary that is based on the CIE 1976 uniform chromaticity scale (UCS) coordinates u′ and v′:

u′(λ) = 4x̄(λ)/[x̄(λ) + 15ȳ(λ) + 3z̄(λ)], (5.10a)
v′(λ) = 9ȳ(λ)/[x̄(λ) + 15ȳ(λ) + 3z̄(λ)]. (5.10b)

Figure 5.2 gives the color gamut boundaries of the CIELUV space for the CIE 1931 and 1964 CMFs. Again, the CIE 1931 gamut is larger than that of the CIE 1964 CMF, and the gamut-size difference between the two observers appears wider in CIELUV color space.
5.3 CIELAB Space
Figure 5.2 Color gamuts of CIELUV space.

CIELAB is a nonlinear transform of 1931 CIEXYZ [1]:

L* = 116 f(Y/Y_n) - 16,
a* = 500[f(X/X_n) - f(Y/Y_n)],
b* = 200[f(Y/Y_n) - f(Z/Z_n)],

or

[L*]   [  0    116     0   -16 ] [ f(X/X_n) ]
[a*] = [ 500  -500     0    0  ] [ f(Y/Y_n) ]  (5.11a)
[b*]   [  0    200  -200    0  ] [ f(Z/Z_n) ]
                                 [    1     ]

and

f(t) = t^{1/3}             1 ≥ t > 0.008856, (5.11b)
f(t) = 7.787t + (16/116)   0 ≤ t ≤ 0.008856, (5.11c)
where X_n, Y_n, and Z_n are the tristimulus values of the illuminant. Similar to CIELUV, the relative colorfulness, or chroma, is defined as

C*_ab = (a*^2 + b*^2)^{1/2}. (5.12)

The hue angle is defined as

h_ab = tan^{-1}(b*/a*). (5.13)

Again, the color difference ΔE*_ab is defined as the Euclidean distance in the 3D CIELAB space:

ΔE*_ab = [(ΔL*)^2 + (Δa*)^2 + (Δb*)^2]^{1/2}. (5.14)
The just noticeable color difference is approximately 0.5 to 1.0 ΔE*_ab units. CIELAB is perhaps the most popular color space in use today. However, the problem of deriving suitably uniform metrics for a perceived small extent of color is not yet solved. The CIE continues to carry out coordinated research on color-difference evaluation.
5.3.1 CIELAB to CIEXYZ transform

The inverse transform from CIELAB to CIEXYZ is given in Eq. (5.15):

X = X_n[a*/500 + (L* + 16)/116]³   if L* > 7.9996, (5.15a)
  = X_n(a*/500 + L*/116)/7.787     if L* ≤ 7.9996, (5.15b)
Y = Y_n[(L* + 16)/116]³            if L* > 7.9996, (5.15c)
  = Y_n L*/(116 × 7.787)           if L* ≤ 7.9996, (5.15d)
Z = Z_n[(L* + 16)/116 − b*/200]³   if L* > 7.9996, (5.15e)
  = Z_n(L*/116 − b*/200)/7.787     if L* ≤ 7.9996. (5.15f)
5.3.2 Color gamut boundary of CIELAB

Unlike CIEXYZ and CIELUV spaces, CIELAB does not have a chromaticity diagram. As pointed out by Bartleson, this is because the values of a* and b* depend on L*.² However, this does not mean that there is no gamut boundary for CIELAB. As early as 1975, Judd and Wyszecki already gave a graphic outer boundary of the CIELAB space viewed under illuminant D65 and the CIE 1964 observer.³ Although the detail was not revealed on how this graph was generated, they did mention the use of optimal color stimuli. An optimal color stimulus is an imaginary stimulus, having two or more spectral components (usually, no more than three wavelengths) with a unit reflectance at the specified wavelengths and zero elsewhere. No real object surfaces can have this kind of abrupt reflectance curve.

Although CIELAB space does not have a chromaticity diagram, it does have boundaries. The boundaries are obtained by using spectral and optimal stimuli if the standard observer and illuminant are selected.⁴ First, we compute tristimulus values of the spectral stimuli. Second, we compute tristimulus values of all possible two-component optimal stimuli. These tristimulus values are converted to L*, a*, and b* via Eq. (5.11). For computing the gamut boundary in CIELAB, the tristimulus values of the object in Eq. (5.11) are replaced by a CMF at a given wavelength λ, and the tristimulus values of the illuminant are normalized to Y_n = 1:

L* = 116[ȳ(λ)/Y_n]^(1/3) − 16, (5.16a)
a* = 500{[x̄(λ)/X_n]^(1/3) − [ȳ(λ)/Y_n]^(1/3)}, (5.16b)
b* = 200{[ȳ(λ)/Y_n]^(1/3) − [z̄(λ)/Z_n]^(1/3)}, (5.16c)

or

[L* a* b*]ᵀ = [0 116 0 −16; 500 −500 0 0; 0 200 −200 0][[x̄(λ)/X_n]^(1/3) [ȳ(λ)/Y_n]^(1/3) [z̄(λ)/Z_n]^(1/3) 1]ᵀ. (5.16d)
Two-dimensional projection of the CIE spectral stimuli under either D50 or D65 in CIELAB shows that stimuli at both the high- and low-wavelength ends locate near the origin of the diagram, where the illuminant resides, as shown in Fig. 5.3. These results give a concave area in the positive a* quadrants, encompassing color regions from red through magenta to purple. This is because stimuli are weak at both ends of the CMF, having very small brightness and colorfulness; thus, they are perceived as if they were achromatic hues.
Figure 5.3 The spectral stimuli of CIE 1931 and 1964 CMFs in CIELAB space under the D50 or D65 illuminant.

It is a known fact that a single spectral stimulus does not give saturated magenta colors. Magenta colors require at least two spectral stimuli in proper wavelengths with respect to each other, such as one in the red region and the other in the blue region. Therefore, we start by computing all possible optimal stimuli of two spectral wavelengths in the range from 390 to 750 nm. This is achieved by fixing one
stimulus at a wavelength λ₁ and moving the other stimulus across the whole spectrum at a 5-nm interval, starting from λ₂ = λ₁ + 5 nm until it reaches the other end of the spectrum. The first stimulus is then moved to the next wavelength, and the second stimulus again is scanned across the remaining spectrum at the 5-nm interval. This procedure is repeated until both stimuli reach the other end. Note that the 5-nm interval is an arbitrary choice. One can reduce the interval to get a more precise wavelength location, or one can widen the interval to save computation cost with some sacrifice in accuracy.
We employ Grassmann's additivity law (see Section 1.1 for more detail) given in Eq. (5.17) with normalization (actually, this is an averaging because Y_n is normalized to 1) to compute the tristimulus values X, Y, and Z of two stimuli:

X = [x̄(λ₁) + x̄(λ₂)]/2, (5.17a)
Y = [ȳ(λ₁) + ȳ(λ₂)]/2, (5.17b)
Z = [z̄(λ₁) + z̄(λ₂)]/2. (5.17c)
These tristimulus values are then plugged into Eq. (5.16) to obtain CIELAB values. With the additional data from two-component optimal stimuli, we fill the concave area in the positive a* quadrants somewhat to give a near-convex boundary, as shown in Fig. 5.4.
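The scanning procedure and Eqs. (5.16)–(5.17) might be sketched as follows. The three-entry CMF table at the bottom is only a placeholder to exercise the loop, not real CIE data; an actual computation would use the full 5-nm CMF table with the illuminant tristimulus values normalized to Y_n = 1:

```python
def lab_from_cmf(xbar, ybar, zbar, Xn, Yn, Zn):
    """Eq. (5.16): CIELAB coordinates of a stimulus given directly as CMF values."""
    fx = (xbar / Xn) ** (1 / 3)
    fy = (ybar / Yn) ** (1 / 3)
    fz = (zbar / Zn) ** (1 / 3)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def two_component_boundary(cmf, Xn, Yn, Zn):
    """Scan all wavelength pairs (lambda1, lambda2), average their CMF values
    per Eq. (5.17), and convert each pair to CIELAB via Eq. (5.16)."""
    wavelengths = sorted(cmf)
    points = []
    for i, w1 in enumerate(wavelengths):
        for w2 in wavelengths[i + 1:]:
            x = (cmf[w1][0] + cmf[w2][0]) / 2          # Eq. (5.17a)
            y = (cmf[w1][1] + cmf[w2][1]) / 2          # Eq. (5.17b)
            z = (cmf[w1][2] + cmf[w2][2]) / 2          # Eq. (5.17c)
            points.append((w1, w2, lab_from_cmf(x, y, z, Xn, Yn, Zn)))
    return points

# Placeholder CMF entries (not real CIE data) just to exercise the loop:
toy_cmf = {450: (0.336, 0.038, 1.772),
           550: (0.433, 0.995, 0.009),
           650: (0.284, 0.107, 0.000)}
```

For n sampled wavelengths, the scan produces n(n − 1)/2 two-component stimuli; halving the sampling interval roughly quadruples the count, which is the computation-cost trade-off noted above.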
Connecting the outermost colors produced by optimal and spectral stimuli, we obtain the boundary of the CIELAB space with respect to the illuminant used, as shown in Fig. 5.5. For D65, the resulting gamut looks pretty close to the outer boundary given by Judd and Wyszecki.³ We also obtain results for other combinations of illuminant and standard observer. For the CIE 1931 observer under illuminants D50 and D65, D65 gives a substantially larger area in the green, cyan, blue, and purple regions, whereas D50 gives a slightly larger area in the magenta and red regions. The CIE 1964 observer under D50 and D65 gives a similar trend. Under the same illuminant, the CIE 1964 observer is slightly larger than the CIE 1931 observer in the cyan and blue regions, whereas the CIE 1931 observer is larger in the green, red, magenta, and purple regions. Generally speaking, the color gamut difference is small with respect to the difference in standard observer, but it is large with respect to the difference in illuminant used.
By employing Eqs. (5.16) and (5.17), the computation indicates that there is indeed a gamut boundary in CIELAB space using spectral and optimal stimuli. However, unlike the chromaticity diagrams of CIEXYZ and CIELUV, which are fixed under any illumination (the gamut boundary is illuminant independent), the size of the CIELAB gamut depends on the illuminant used because CIELAB is defined as the normalization with respect to the illuminant, which is a form of chromatic adaptation [see Eq. (5.11)]. The illuminant has a significant influence on the size of the gamut boundary in CIELAB space, whereas the standard observer has only a minor effect.

Figure 5.4 The additional space of CIELAB.
From this computation, the common practice of connecting six primaries in CIELAB space to give a projected 2D color gamut is not a strictly correct way of defining a color gamut. However, it is a pretty close approximation for real colorants. To have a better definition of the color gamut in a CIELAB plot, one should generate and measure more two-color mixtures.
5.4 Modifications

The primary concerns of the CIE color space seem to be the visual uniformity and color-difference measure. Since 1931, many attempts have been made to derive satisfactory formulas for establishing a uniform color space and a constant color difference across the whole space. One approach was based on the Munsell system,⁵ where Munsell samples were defined by three uniform scales of hue, value, and chroma. Thus, the difference between successive samples in the Munsell book of colors should be the same. If one plots Munsell samples of constant value (lightness), for example, in a uniform color space, one should get circles of increasing
radius. However, none of the existing CIE color spaces and modifications meets this criterion, as shown in Fig. 5.6.⁶ The benefit of plotting Munsell samples is that it provides the means to check and compare the uniformity of color spaces.

Figure 5.5 Color gamut boundaries of CIELAB space under the D50 or D65 illuminant.
The other approach was based on the MacAdam ellipses,⁷ as shown in Fig. 5.7. If one plots small color differences in various directions from a particular color, the differences fall into an ellipsoid.⁶,⁸ For a uniform color space, the small color differences should form a circle instead of an ellipsoid. Thus, the MacAdam ellipses form the basis for a number of color-difference formulas and a uniformity measure of the color space.
These activities led to the establishment of the CIELUV and CIELAB color spaces in 1976. Compared to CIEXYZ space, CIELUV and CIELAB are major improvements with respect to visual uniformity, but they are not totally uniform (see Fig. 5.6). As early as 1980, experimental work by McDonald already indicated that CIELAB was not completely satisfactory.⁹ If CIELAB were uniform, the plot of discrimination ellipses in the chromatic plane of a* versus b* should form equal-sized circles. But the experimental results show that the ellipses consistently increase in size as the distance from neutral (gray) increases. Therefore, many other uniform color spaces were proposed, such as the Frei space,¹⁰ OSA,¹¹,¹² ATD,¹³ LABHNU,¹⁴ Hunter LAB,¹⁵ SVF space,¹⁶ Nayatani space,¹⁷ Hunt space,¹⁸ and IPT space.¹⁹ Recently, Li and Luo proposed a uniform J′ color space based on the CIE color appearance model CIECAM97s2 (see Section 5.5).
Figure 5.6 Munsell colors of value 5 in (a) CIELAB, (b) CIELUV, (c) Hunt, and (d) Nayatani spaces. (Reprinted with permission of John Wiley & Sons, Inc.)⁶

A comparison study among CIELAB, CIECAM97s2, IPT, and J′ using six sets of data with large color differences shows that the J′ space gives a smaller mean color difference.²⁰
Moreover, several color-distance formulas (or color-difference measures) have been constructed to determine color difference in terms of the just noticeable difference (JND); important ones are FMC,²¹ CMC,²²⁻²⁴ and BFD.²⁵,²⁶ Most color-difference formulas proposed after 1976 are based on the non-Euclidean distance between two color stimuli, having the general expression

ΔE = {[ΔL*/(K_L S_L)]² + [ΔC*/(K_C S_C)]² + [ΔH*/(K_H S_H)]²}^(1/2), (5.18)
where ΔL*, ΔC*, and ΔH* are differences in lightness, chroma, and hue predictors, respectively. Parameters S_L, S_C, and S_H are scaling factors that allow corrections of specific faults in the formula, and K_L, K_C, and K_H are parametric factors that can be changed depending on viewing parameters such as textures and backgrounds. The ISO color difference standard of the CMC formula (CIE94) and BFD fall into this category. They differ in the S and K parameters. For example, the simpler CIE94 formula has K_L = K_C = K_H = 1, S_L = 1, S_C = 1 + 0.045C*_ab, and S_H = 1 + 0.015C*_ab.²² The CMC formula has K_L = K_C = K_H = 1 for the best fit to perceptibility data and K_L = 2 for textile and other critical acceptability usage. But it has very complex expressions for the S parameters that allow calculation of tolerance ellipsoids, which are different in size, as required for different parts of the color space. The ellipsoids are scaled so that differences in lightness, chroma, and hue, as defined by the formula, agree visually with the perceived size of these differences.

Figure 5.7 MacAdam ellipses in the constant lightness plane with luminance factor 0.2 plotted in (a) CIELAB, (b) CIELUV, (c) Hunt, and (d) Nayatani color spaces. (Reprinted with permission of John Wiley & Sons, Inc.)⁶
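As an illustration of Eq. (5.18), the CIE94 weights quoted above can be coded as follows (a sketch with K_L = K_C = K_H = 1 by default; ΔH* is recovered from the Euclidean distance as in Eq. (5.9)):

```python
import math

def delta_e_94(lab1, lab2, kL=1.0, kC=1.0, kH=1.0):
    """Eq. (5.18) with the CIE94 weighting functions S_L = 1,
    S_C = 1 + 0.045*C1, S_H = 1 + 0.015*C1 (C1: chroma of the reference)."""
    L1, a1, b1 = lab1
    L2, a2, b2 = lab2
    C1 = math.hypot(a1, b1)
    C2 = math.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    # Hue difference via the Eq. (5.9)-style decomposition; clamp tiny negatives.
    dH_sq = max(0.0, (a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2)
    sL, sC, sH = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return math.sqrt((dL / (kL * sL)) ** 2
                     + (dC / (kC * sC)) ** 2
                     + dH_sq / (kH * sH) ** 2)
```

Because S_C and S_H grow with chroma, the tolerance ellipsoids get larger away from neutral, matching McDonald's observation quoted in Section 5.4.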
In 2000, the CIE released a general formula for color difference, CIEDE2000, as given in Eq. (5.19):²⁷,²⁸

ΔE = {[ΔL*/(K_L S_L)]² + [ΔC*/(K_C S_C)]² + [ΔH*/(K_H S_H)]² + R_T[ΔC*/(K_C S_C)][ΔH*/(K_H S_H)]}^(1/2). (5.19)

A new term R_T is introduced to deal with interactions between hue and chroma differences, which is required for blue colors to correct for the orientation of the ellipses. Color-difference formulas of non-Euclidean distance are rather complex; they are difficult to understand and cumbersome to use.
5.5 CIE Color Appearance Model

A color appearance model is any color model that includes predictors of at least the relative color appearance attributes of lightness, chroma, and hue. To have reasonable predictors of these attributes, the model must include at least some form of a chromatic adaptation transform (see Chapter 4). All chromatic adaptation models have their roots in the von Kries hypothesis. The modern interpretation of the von Kries model is that each cone has an independent gain control, as expressed in Eq. (5.20):

[L_a M_a S_a]ᵀ = [1/L_max 0 0; 0 1/M_max 0; 0 0 1/S_max][L M S]ᵀ, (5.20)

where L, M, and S are the initial cone responses; L_a, M_a, and S_a are the cone signals after adaptation; and L_max, M_max, and S_max are the cone responses for the scene white or maximum stimulus. This definition enables the color spaces CIELAB and CIELUV to be considered as color appearance models.²⁹
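A minimal sketch of the independent gain control of Eq. (5.20):

```python
def von_kries_adapt(lms, lms_max):
    """Eq. (5.20): scale each cone response by the inverse of its response
    to the scene white (independent gain per cone channel)."""
    return tuple(c / c_max for c, c_max in zip(lms, lms_max))
```

By construction, the scene white itself maps to (1, 1, 1), and any stimulus proportional to the white maps to an equal-component vector.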
More complex models that include predictors of brightness, colorfulness, and various luminance-dependent effects are the Nayatani,³⁰ Hunt,³¹ RLAB,³² ATD,³³ and LLAB³⁴ models. Nayatani's and Hunt's models, evolved from many decades of studies, are capable of predicting an extensive array of color appearance phenomena; they are comprehensive and complex. ATD, developed by Guth, is a different type of model from the others; it is aimed at describing color vision. RLAB and LLAB are similar in structure; they are extensions of CIELAB and are relatively simple.
In view of the importance of color appearance in cross-media image renditions, the CIE has formed a committee for recommending a color appearance model. In 1997, the CIE recommended an interim color appearance model, CIECAM97s.³⁵ The formulation of CIECAM97s builds on the work of many researchers in the field of color appearance.²⁹⁻³⁴ It is a joint effort of top color scientists around the world. Subsequent studies have found three drawbacks of the model. In 2002, a revised model, CIECAM02, was published.³⁶ The formulation of CIECAM02 is a revision of CIECAM97s, using the basic structure and form of CIECAM97s. Comparison studies indicate that CIECAM02 performs as well as or better than CIECAM97s in almost all cases and has a significant improvement in the prediction of saturation.³⁷ Therefore, CIECAM02 is recommended as the CIE color appearance model.

The formulation begins with a conversion from CIEXYZ to spectrally sharpened cone responses RGB (see Section 12.8 for spectral sharpening) via a matrix multiplication for both the sample and the white point, where the matrix is CAT02 (CIECAM97s uses the Bradford matrix; see M_B of Chapter 4):
[R G B]ᵀ = M_CAT02[X Y Z]ᵀ = [0.7328 0.4296 −0.1624; −0.7036 1.6975 0.0061; 0.0030 0.0136 0.9834][X Y Z]ᵀ. (5.21)
The next step is the chromatic adaptation transform, which is a modified von Kries adaptation (CIECAM97s uses an exponential nonlinearity added to the short-wavelength blue channel):

R_c = [D(Y_w/R_w) + 1 − D]R, (5.22a)
G_c = [D(Y_w/G_w) + 1 − D]G, (5.22b)
B_c = [D(Y_w/B_w) + 1 − D]B, (5.22c)

D = F{1 − (1/3.6)exp[−(L_A + 42)/92]}, (5.23)

where the D factor is used to specify the degree of adaptation, which is a function of the surround and L_A (the luminance of the adapting field in candelas per square meter, or cd/m²). The value of D is set equal to 1.0 for complete adaptation to the white point and 0.0 for no adaptation. In practice, D is greater than 0.65 for a dark surround and quickly approaches 0.8 with increasing L_A; for a dim surround, D is greater than 0.75 and quickly approaches 0.9 with increasing L_A; for an average surround, D is greater than 0.84 and quickly approaches 1.0 with increasing L_A. Similar transformations are made for the source white point. The F value is set at 1.0, 0.9, or 0.8 for the average, dim, or dark surround, respectively.
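Equations (5.21)–(5.23) might be sketched as follows (a minimal illustration; D is clamped to [0, 1], a common practice not stated explicitly above):

```python
import math

M_CAT02 = [[ 0.7328, 0.4296, -0.1624],   # Eq. (5.21)
           [-0.7036, 1.6975,  0.0061],
           [ 0.0030, 0.0136,  0.9834]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def degree_of_adaptation(F, LA):
    """Eq. (5.23); F = 1.0 / 0.9 / 0.8 for average / dim / dark surrounds."""
    D = F * (1 - (1 / 3.6) * math.exp(-(LA + 42) / 92))
    return min(1.0, max(0.0, D))

def cat02_adapt(XYZ, XYZ_white, F=1.0, LA=318.31):
    """Eqs. (5.21)-(5.22): sharpened cone responses, then von Kries-style gains."""
    R, G, B = matvec(M_CAT02, XYZ)
    Rw, Gw, Bw = matvec(M_CAT02, XYZ_white)
    Yw = XYZ_white[1]
    D = degree_of_adaptation(F, LA)
    return [(D * Yw / Rw + 1 - D) * R,
            (D * Yw / Gw + 1 - D) * G,
            (D * Yw / Bw + 1 - D) * B]
```

With complete adaptation (D = 1), the white point itself maps to (Y_w, Y_w, Y_w), which is the behavior Eq. (5.22) is designed to produce.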
Next, parameters for the viewing conditions are computed. The value F_L is calculated from Eqs. (5.24) and (5.25):

k = 1/(5L_A + 1), (5.24)
F_L = 0.2k⁴(5L_A) + 0.1(1 − k⁴)²(5L_A)^(1/3). (5.25)

The background induction factor g is a function of the background luminance factor, with a range from 0 to 1:

g = Y_b/Y_w. (5.26)

The value g can then be used to calculate the chromatic brightness induction factors, N_bb and N_cb, and the base exponential nonlinearity z:

N_bb = N_cb = 0.725(1/g)^0.2, (5.27)
z = 1.48 + g^(1/2). (5.28)
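Equations (5.24)–(5.28) transcribe directly into code:

```python
def viewing_parameters(LA, Yb, Yw):
    """Viewing-condition parameters of Eqs. (5.24)-(5.28)."""
    k = 1.0 / (5 * LA + 1)                                     # Eq. (5.24)
    FL = (0.2 * k ** 4 * (5 * LA)
          + 0.1 * (1 - k ** 4) ** 2 * (5 * LA) ** (1 / 3))     # Eq. (5.25)
    g = Yb / Yw                                                # Eq. (5.26)
    Nbb = Ncb = 0.725 * (1 / g) ** 0.2                         # Eq. (5.27)
    z = 1.48 + g ** 0.5                                        # Eq. (5.28)
    return FL, g, Nbb, Ncb, z
```

For the commonly used viewing condition L_A = 318.31 cd/m², Y_b = 20, Y_w = 100, this yields F_L ≈ 1.17, N_bb = N_cb ≈ 1.0003, and z ≈ 1.927.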
Postadaptation signals for both the sample and the source white are transformed from the sharpened cone responses to the Hunt-Pointer-Estevez cone responses using Eq. (5.29):

[R′ G′ B′]ᵀ = M_H M_CAT02⁻¹[R_c G_c B_c]ᵀ, (5.29)

where

M_H = [0.38971 0.68898 −0.07868; −0.22981 1.18340 0.04641; 0.0 0.0 1.00000]

and

M_CAT02⁻¹ = [1.096124 −0.278869 0.182745; 0.454369 0.473533 0.072098; −0.009628 −0.005698 1.015326].
Postadaptation nonlinear response compressions are performed by using Eq. (5.30):

R′_a = [400(F_L R′/100)^0.42]/[(F_L R′/100)^0.42 + 27.13] + 0.1, (5.30a)
G′_a = [400(F_L G′/100)^0.42]/[(F_L G′/100)^0.42 + 27.13] + 0.1, (5.30b)
B′_a = [400(F_L B′/100)^0.42]/[(F_L B′/100)^0.42 + 27.13] + 0.1. (5.30c)
The preliminary Cartesian coordinates a and b are calculated as follows:

a = R′_a − 12G′_a/11 + B′_a/11, (5.31)
b = (1/9)(R′_a + G′_a − 2B′_a). (5.32)
They are used to calculate a preliminary magnitude t and hue angle h:

t = [e(a² + b²)^(1/2)]/[R′_a + G′_a + (21/20)B′_a], (5.33)
h = tan⁻¹(b/a). (5.34)
Knowing h, we can compute an eccentricity factor e, which in turn is used to calculate the hue composition H:

e = (12500/13)N_c N_cb[cos(hπ/180 + 2) + 3.8], (5.35)
H = H_i + [100(h − h₁)/e₁]/[(h − h₁)/e₁ + (h₂ − h)/e₂], (5.36)

where (h₁, e₁) and (h₂, e₂) are the values for the two unique hues that bracket h. The value N_c in Eq. (5.35) is set at 1.0, 0.95, or 0.8 for an average, dim, or dark surround, respectively, and the value N_cb is calculated from Eq. (5.27). Table 5.1 gives the h, e, and H_i values for the red, yellow, green, and blue colors.
The achromatic response A is calculated by using Eq. (5.37) for both the sample and the white. The lightness J is calculated from the achromatic signals of the sample and the white using Eq. (5.38):

A = [2R′_a + G′_a + (1/20)B′_a − 0.305]N_bb, (5.37)
J = 100(A/A_w)^(cz), (5.38)

where c is the constant for the impact of the surround; c = 0.69 for an average surround, 0.59 for a dim surround, 0.525 for a dark surround, and 0.41 for cut-sheet transparencies. Knowing the achromatic response and lightness, the perceptual attribute correlate for brightness Q can then be calculated:

Q = (4/c)(J/100)^(1/2)(A_w + 4)F_L^0.25. (5.39)
The chroma correlate C can be computed using the lightness J and the temporary magnitude t:

C = t^0.9(J/100)^(1/2)(1.64 − 0.29^g)^0.73. (5.40)
Knowing the chroma C, we can calculate the colorfulness M and the saturation correlate s:

M = C F_L^0.25, (5.41)
s = 100(M/Q)^(1/2). (5.42)
Finally, we can determine the Cartesian representations using the chroma correlate C and the hue angle h:

a_C = C cos(h), (5.43)
b_C = C sin(h). (5.44)

The notations a_C and b_C with subscript C specify the use of the chroma correlate. The corresponding Cartesian representations for the colorfulness M (a_M and b_M) and the saturation correlate s (a_s and b_s) can be computed in a similar way.
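Putting Eqs. (5.21)–(5.42) together, a condensed forward sketch of CIECAM02 might look as follows (illustrative only; it assumes positive post-adaptation cone responses and omits the hue-composition and Cartesian steps):

```python
import math

M_CAT02 = [[0.7328, 0.4296, -0.1624],
           [-0.7036, 1.6975, 0.0061],
           [0.0030, 0.0136, 0.9834]]
M_CAT02_INV = [[1.096124, -0.278869, 0.182745],
               [0.454369, 0.473533, 0.072098],
               [-0.009628, -0.005698, 1.015326]]
M_HPE = [[0.38971, 0.68898, -0.07868],
         [-0.22981, 1.18340, 0.04641],
         [0.0, 0.0, 1.00000]]

def mv(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def compress(x, FL):
    """Eq. (5.30) post-adaptation nonlinearity (positive inputs assumed)."""
    t = (FL * x / 100.0) ** 0.42
    return 400.0 * t / (t + 27.13) + 0.1

def ciecam02_forward(XYZ, XYZw, LA=318.31, Yb=20.0, F=1.0, c=0.69, Nc=1.0):
    # Chromatic adaptation, Eqs. (5.21)-(5.23)
    D = min(1.0, max(0.0, F * (1 - (1 / 3.6) * math.exp(-(LA + 42) / 92))))
    RGB, RGBw = mv(M_CAT02, XYZ), mv(M_CAT02, XYZw)
    Yw = XYZw[1]
    gains = [D * Yw / w + 1 - D for w in RGBw]
    RGBc = [gn * v for gn, v in zip(gains, RGB)]
    RGBwc = [gn * v for gn, v in zip(gains, RGBw)]
    # Viewing-condition parameters, Eqs. (5.24)-(5.28)
    k = 1 / (5 * LA + 1)
    FL = 0.2 * k ** 4 * 5 * LA + 0.1 * (1 - k ** 4) ** 2 * (5 * LA) ** (1 / 3)
    g = Yb / Yw
    Nbb = Ncb = 0.725 * (1 / g) ** 0.2
    z = 1.48 + g ** 0.5
    # HPE responses and compression, Eqs. (5.29)-(5.30)
    Ra = [compress(x, FL) for x in mv(M_HPE, mv(M_CAT02_INV, RGBc))]
    Raw = [compress(x, FL) for x in mv(M_HPE, mv(M_CAT02_INV, RGBwc))]
    # Opponent coordinates, hue, and magnitude, Eqs. (5.31)-(5.35)
    a = Ra[0] - 12 * Ra[1] / 11 + Ra[2] / 11
    b = (Ra[0] + Ra[1] - 2 * Ra[2]) / 9
    h = math.degrees(math.atan2(b, a)) % 360.0
    e = (12500 / 13) * Nc * Ncb * (math.cos(math.radians(h) + 2) + 3.8)
    t = e * math.hypot(a, b) / (Ra[0] + Ra[1] + 21 * Ra[2] / 20)
    # Correlates, Eqs. (5.37)-(5.42)
    A = (2 * Ra[0] + Ra[1] + Ra[2] / 20 - 0.305) * Nbb
    Aw = (2 * Raw[0] + Raw[1] + Raw[2] / 20 - 0.305) * Nbb
    J = 100 * (A / Aw) ** (c * z)
    Q = (4 / c) * (J / 100) ** 0.5 * (Aw + 4) * FL ** 0.25
    C = t ** 0.9 * (J / 100) ** 0.5 * (1.64 - 0.29 ** g) ** 0.73
    M = C * FL ** 0.25
    s = 100 * (M / Q) ** 0.5
    return {"J": J, "C": C, "h": h, "Q": Q, "M": M, "s": s}
```

Even this condensed form makes the computational cost plain: one sample requires three matrix multiplications, six power-function compressions, and a chain of per-correlate power laws.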
As one can see, CIECAM02 is very computationally intensive; no real-time application of it is known. However, a major application of the color appearance models, CIECAM97s and CIECAM02 in particular, is the evaluation of color differences. The performances of the original CAMs and their extensions were tested using two types of color-difference data: large- and small-magnitude color differences. Results showed that the CIECAM02-based models in general gave a more accurate prediction than the CIECAM97s-based models. The modified version of CIECAM02 gave satisfactory performance in predicting different data sets (better than or equal to the best available uniform color spaces and color-difference formulas). This strongly suggests that a universal color model based on a color appearance model can be achieved for all colorimetric applications: color specification, color difference, and color appearance.³⁸

Table 5.1 Perceptual attribute correlates of unique colors.

      Red    Yellow  Green   Blue    Red
h     20.14  90      164.25  237.53  380.14
e     0.8    0.7     1.0     1.2     0.8
H_i   0      100     200     300     400
5.6 S-CIELAB

Zhang and Wandell extended CIELAB to account for the spatial as well as the color errors in reproduction of the digital color image.³⁹ They call this model Spatial-CIELAB, or S-CIELAB. The design goal is to apply a spatial filtering to the color image in a small-field or fine-patterned area, but revert to the conventional CIELAB in a large uniform area. The procedure for computing S-CIELAB is outlined as follows:

(1) Input image data are transformed into an opponent-colors space. This color transform converts the input image, specified in terms of the CIEXYZ tristimulus values, into three opponent-colors planes that represent luminance, red-green, and blue-yellow components.

(2) Each opponent-colors plane is convolved with a 2D kernel whose shape is determined by the visual spatial sensitivity to that color dimension; the area under each of these kernels integrates to one. A low-pass filtering is used to simulate the spatial blurring by the human visual system. The computation is based on the concept of pattern-color separability. Parameters for the color transform and spatial filters were estimated from psychophysical measurements.

(3) The filtered representation is converted back to CIEXYZ space, then to a CIELAB representation. The resulting representation includes both the spatial filtering and the CIELAB processing.

(4) The difference between the S-CIELAB representations of the original and its reproduction is the measure of the reproduction error. The difference is expressed by a quantity ΔE_s, which is computed precisely as ΔE*_ab in the conventional CIELAB.

S-CIELAB reflects both spatial and color sensitivity. This model is a color-texture metric and a digital-imaging model. It has been applied to printed halftone images. Results indicate that S-CIELAB correlates with perceptual data better than standard CIELAB.⁴⁰ This metric can also be used to improve multilevel halftone images.⁴¹
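The four steps can be sketched as follows. The opponent-transform coefficients below are approximate values quoted in the S-CIELAB literature, and the normalized Gaussian blur (with illustrative widths) is only a stand-in for the measured spatial kernels:

```python
import numpy as np

# Approximate XYZ -> opponent (luminance, red-green, blue-yellow) matrix;
# the exact values are fitted from psychophysical data in the S-CIELAB papers.
M_OPP = np.array([[ 0.279,  0.720, -0.107],
                  [-0.449,  0.290, -0.077],
                  [ 0.086, -0.590,  0.501]])

def gaussian_blur(plane, sigma):
    """Separable Gaussian stand-in for the measured kernels (step 2)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()                                 # kernel integrates to one
    pad = np.pad(plane, r, mode="edge")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, "valid"), 0, tmp)

def xyz_to_lab_img(img, white):
    """Vectorized CIELAB transform of Eq. (5.11) for an H x W x 3 XYZ image."""
    t = img / np.asarray(white)
    f = np.where(t > 0.008856, np.cbrt(t), 7.787 * t + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def s_cielab_diff(img1_xyz, img2_xyz, white, sigmas=(3.0, 5.0, 6.0)):
    """Steps 1-4: opponent transform, per-plane blur, back to XYZ, CIELAB dE map."""
    def filt(img):
        opp = np.einsum("ij,hwj->hwi", M_OPP, img)                     # step 1
        opp = np.stack([gaussian_blur(opp[..., i], s)
                        for i, s in enumerate(sigmas)], axis=-1)       # step 2
        return np.einsum("ij,hwj->hwi", np.linalg.inv(M_OPP), opp)     # step 3
    lab1 = xyz_to_lab_img(filt(img1_xyz), white)
    lab2 = xyz_to_lab_img(filt(img2_xyz), white)
    return np.sqrt(((lab1 - lab2) ** 2).sum(axis=-1))                  # step 4
```

On large uniform areas the blur is a no-op (the normalized kernel leaves a constant plane unchanged), so ΔE_s reverts to the conventional ΔE*_ab, exactly the design goal stated above.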
References

1. CIE, Recommendations on uniform color spaces, color-difference equations and psychometric color terms, Supplement No. 2 to Colorimetry, Publication No. 15, Bureau Central de la CIE, Paris (1978).
2. C. J. Bartleson, Colorimetry, Optical Radiation Measurements, Color Measurement, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York, Vol. 2, pp. 33–148 (1980).
3. D. B. Judd and G. Wyszecki, Color in Business, Science and Industry, 3rd Edition, Wiley, New York, pp. 331–332 (1975).
4. H. R. Kang, Color gamut boundaries in CIELAB space, NIP 19, pp. 808–811 (2003).
5. S. M. Newhall, D. Nickerson, and D. B. Judd, Final report of the OSA subcommittee on the spacing of the Munsell colors, J. Opt. Soc. Am. 33, pp. 385–418 (1943).
6. M. Mahy, L. Van Eycken, and A. Oosterlinck, Evaluation of uniform color spaces developed after the adoption of CIELAB and CIELUV, Color Res. Appl. 19, pp. 105–121 (1994).
7. D. L. MacAdam, Visual sensitivities to color differences in daylight, J. Opt. Soc. Am. 32, pp. 247–274 (1942).
8. A. R. Robertson, The CIE 1976 color-difference formulae, Color Res. Appl. 2, pp. 7–11 (1977).
9. R. McDonald, Industrial pass/fail colour matching, Part III: Development of a pass/fail formula for use with instrumental measurements of colour difference, J. Soc. Dyers Colourists 96, pp. 486–495 (1980).
10. W. Frei and B. Baxter, Rate-distortion coding simulation for color images, IEEE Trans. Commun. 25, pp. 1385–1392 (1977).
11. D. L. MacAdam, Colorimetric data for samples of OSA uniform color scales, J. Opt. Soc. Am. 68, pp. 121–130 (1978).
12. D. L. MacAdam, Redetermination of colors for uniform scales, J. Opt. Soc. Am. A 7, pp. 113–115 (1990).
13. S. L. Guth, R. W. Massof, and T. Benzschawel, Vector model for normal and dichromatic color vision, J. Opt. Soc. Am. 70, pp. 197–212 (1980).
14. K. Richter, Cube-root spaces and chromatic adaptation, Color Res. Appl. 5, pp. 25–43 (1980).
15. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, p. 138 (1982).
16. T. Seim and A. Valberg, Towards a uniform color space: a better formula to describe the Munsell and OSA color scales, Color Res. Appl. 11, pp. 11–24 (1986).
17. Y. Nayatani, K. Hashimoto, K. Takahama, and H. Sobagaki, A nonlinear color-appearance model using Estevez-Hunt-Pointer primaries, Color Res. Appl. 12, pp. 231–242 (1987).
18. R. W. G. Hunt, Revised colour-appearance model for related and unrelated colours, Color Res. Appl. 16, pp. 146–165 (1991).
19. F. Ebner and M. Fairchild, Development and testing of a color space (IPT) with improved hue uniformity, 6th Color Imaging Conf.: Color Science, System, and Application, pp. 8–13 (1998).
20. C. Li and M. R. Luo, A uniform colour space based upon CIECAM97s, http://www.colour.org/tc8-05; C. Li, M. R. Luo, and G. Cui, Colour-difference evaluation using color appearance models, 11th Color Imaging Conf.: Color Science, System, and Application, pp. 127–131 (2003).
21. K. D. Chickering, FMC color-difference formulas: Clarification concerning usage, J. Opt. Soc. Am. 61, pp. 118–122 (1971).
22. F. J. Clarke, R. McDonald, and B. Rigg, Modification to JPC79 colour difference formula, J. Soc. Dyers Colourists 100, pp. 128–132 (1984).
23. F. J. Clarke, R. McDonald, and B. Rigg, Modification to JPC79 colour difference formula (Errata and test data), J. Soc. Dyers Colourists 100, pp. 281–282 (1984).
24. M. H. Brill, Suggested modification of CMC formula for acceptability, Color Res. Appl. 17, pp. 402–404 (1992).
25. M. R. Luo and B. Rigg, BFD(l:c) color-difference formula, Part 1: Development of the formula, J. Soc. Dyers Colourists 103, pp. 86–94 (1987).
26. M. R. Luo and B. Rigg, BFD(l:c) color-difference formula, Part 2: Performance of the formula, J. Soc. Dyers Colourists 103, pp. 126–132 (1987).
27. M. R. Luo, G. Cui, and B. Rigg, The development of the CIE 2000 colour-difference formula: CIEDE2000, Color Res. Appl. 26, pp. 340–350 (2001).
28. M. R. Luo, G. Cui, and B. Rigg, Further comments on CIEDE2000, Color Res. Appl. 27, pp. 127–128 (2002).
29. M. D. Fairchild, Color Appearance Models, Addison-Wesley, Reading, MA (1998).
30. Y. Nayatani, H. Sobagaki, K. Hashimoto, and T. Yano, Lightness dependency of chroma scales of a nonlinear color-appearance model and its latest formulation, Color Res. Appl. 20, pp. 156–167 (1995).
31. R. W. G. Hunt, Revised colour-appearance model for related and unrelated colours, Color Res. Appl. 16, pp. 146–165 (1991).
32. M. D. Fairchild, Refinement of the RLAB color space, Color Res. Appl. 21, pp. 338–346 (1996).
33. S. Guth, Further applications of the ATD model for color vision, Proc. SPIE 2414, pp. 12–26 (1995).
34. M. R. Luo, M.-C. Lo, and W.-G. Kuo, The LLAB (l:c) color model, Color Res. Appl. 21, pp. 412–429 (1996).
35. CIE, Publication 131-1998, The CIE 1997 interim color appearance model (simple version) CIECAM97s (1998).
36. N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, The CIECAM02 color appearance model, 10th CIC: Color Science and Engineering Systems, Technologies, Applications, pp. 23–27 (2002).
37. C. Li, M. R. Luo, R. W. G. Hunt, N. Moroney, M. D. Fairchild, and T. Newman, The performance of CIECAM02, 10th CIC: Color Science and Engineering Systems, Technologies, Applications, pp. 28–32 (2002).
38. C. Li, M. R. Luo, and G. Cui, Colour-difference evaluation using colour appearance models, 11th CIC: Color Science and Engineering Systems, Technologies, Applications, pp. 127–131, Nov. 3 (2003).
39. X. Zhang and B. A. Wandell, A spatial extension of CIELAB for digital color image reproduction, SID Int. Symp., Digest of Tech. Papers, pp. 731–734 (1996).
40. X. Zhang, D. A. Silverstein, J. E. Farrell, and B. A. Wandell, Color image quality metric S-CIELAB and its application on halftone texture visibility, IEEE Compcon., pp. 44–48 (1997).
41. X. Zhang, J. E. Farrell, and B. A. Wandell, Applications of a spatial extension to CIELAB, Proc. SPIE 3025, pp. 154–157 (1997).
Chapter 6
RGB Color Spaces

RGB primaries play an important role in building colorimetry. The CIE color-matching functions are derived from the CIE standard RGB primaries. Many imaging devices are additive systems, such as televisions and digital cameras that are based on RGB primaries, where image signals are encoded in a colorimetric RGB standard. Moreover, colorimetric RGB encoding standards are a frequently encountered representation for color reproduction and color information exchange.

This chapter presents an extensive collection of the existing RGB primaries. Their gamut sizes are compared. Methods of transforming between CIEXYZ and colorimetric RGB are discussed. A conversion formula between RGB standards under the same white point is also provided. The general structure and representation of the RGB color-encoding standard is discussed. Several important RGB encoding standards are given in Appendix 4. Finally, the conversion accuracies from RGB via CIEXYZ to CIELAB of these primaries are examined.
6.1 RGB Primaries

Many RGB primary sets have been proposed for use in different industries, such as motion pictures, television, cameras, film, printing, color measurement, and computers. We have collected 21 sets of RGB primaries; their chromaticity coordinates are given in Table 6.1.¹⁻²⁰

Usually, the gamut size of RGB primaries is plotted in a chromaticity diagram. The mixing of these colorimetric RGB primaries obeys Grassmann's additivity law; therefore, the color gamut boundary is indicated by the triangle that connects the three primaries in the chromaticity diagram.³ From the size of the gamut, RGB primaries can be classified into three groups. The first group has a relatively small gamut size, in which the RGB primaries are located inside of the spectral color locus, as shown in Fig. 6.1. Adobe/RGB98, Bruse/RGB, EBU/RGB, Guild/RGB, Ink-jet/RGB, ITU.BT709/RGB, NTSC/RGB, SMPTE-C/RGB, SMPTE-240M/RGB, and Sony-P22/RGB fall into this group. SMPTE-C/RGB, the early standard of the Society of Motion Picture and Television Engineers, gives the smallest color gamut. Both Sony and Apple Computer use P22/RGB; it has the same blue primary as SMPTE-C/RGB but different red and green primaries. ITU709/RGB is slightly larger than SMPTE-C/RGB; it has the primaries used in several important RGB encoding standards, such as sRGB and PhotoYCC. Bruse/RGB is larger
Table 6.1 CIE 1931 chromaticity coordinates of RGB primaries.

                         Red              Green            Blue
                         x      y         x      y         x      y
Adobe/RGB98¹             0.640  0.330     0.210  0.710     0.150  0.060
Bruse/RGB²               0.640  0.330     0.280  0.650     0.150  0.060
CIE1931/RGB³             0.7347 0.2653    0.2737 0.7174    0.1665 0.0089
CIE1964/RGB³             0.7232 0.2768    0.1248 0.8216    0.1616 0.0134
EBU/RGB⁴                 0.640  0.330     0.290  0.600     0.150  0.060
Extended/RGB⁵            0.701  0.299     0.170  0.796     0.131  0.046
Guild/RGB⁶               0.700  0.300     0.255  0.720     0.150  0.050
Ink-jet/RGB⁷             0.700  0.300     0.250  0.720     0.130  0.050
ITU-R.BT.709-2⁸          0.640  0.330     0.300  0.600     0.150  0.060
Judd-Wyszecki/RGB⁹       0.7347 0.2653    0.0743 0.8338    0.1741 0.0050
Kress/RGB¹⁰              0.6915 0.3083    0.1547 0.8059    0.1440 0.0297
Laser/RGB¹¹              0.7117 0.2882    0.0328 0.8029    0.1632 0.0119
NTSC/RGB¹²               0.670  0.330     0.210  0.710     0.140  0.080
RIMM-ROMM/RGB¹³⁻¹⁵       0.7347 0.2653    0.1596 0.8404    0.0366 0.0001
ROM/RGB¹⁶                0.873  0.144     0.175  0.927     0.085  0.0001
SMPTE-C¹⁷                0.630  0.340     0.310  0.595     0.155  0.070
SMPTE-240M⁴              0.670  0.330     0.210  0.710     0.150  0.060
Sony-P22¹⁸               0.625  0.340     0.280  0.595     0.155  0.070
Wide-Gamut/RGB¹          0.7347 0.2653    0.1152 0.8264    0.1566 0.0177
Wright/RGB¹⁹             0.7260 0.2740    0.1547 0.8059    0.1440 0.0297
Usami/RGB²⁰              0.7347 0.2653    0.0860 1.0860    0.0957 0.0314
than ITU709/RGB as a result of extending the green primary, but keeping the
red and blue primaries the same as ITU709/RGB. Adobe/RGB98 is an enlarged
Bruse/RGB; the red and blue primaries are the same as Bruse/RGB with the green
primary at a higher chroma. The television standard NTSC/RGB has the same
green primary as Adobe/RGB98 with extended red and blue primaries. SMPTE-
240M/RGB, also developed for NTSC coding, has the same red and green pri-
maries as NTSC/RGB, whereas the blue primary is the same as Adobe/RGB98.
Guild/RGB is chosen for developing tristimulus colorimeters (note that the chro-
maticity coordinates of Guild/RGB are estimated from Fig. 2.32 of Ref. 9, p. 192);
it has a sufficiently large gamut with the intention to encompass as many colors as
possible. Ink-jet/RGB is derived from experimental results of ink-jet prints; it has
a slightly larger gamut size than Guild/RGB.
The second group has RGB primaries located on the spectral color boundary as
shown in Fig. 6.2; they are the spectral stimuli. This group includes CIE1931/RGB,
CIE1964/RGB, Extended/RGB, Judd-Wyszecki/RGB, Kress/RGB, Laser/RGB,
Wide-Gamut/RGB, and Wright/RGB. CIE1931/RGB is the spectral matching
stimuli with blue primary at 435.8 nm, green at 546.1 nm, and red at 700.0 nm
RGB Color Spaces 79
Figure 6.1 Color gamut sizes of group 1 RGB primaries.
that the CIE 1931 2° observer is based upon, whereas CIE1964/RGB is the basis
for the CIE 1964 10° observer with blue at 22,500 cm⁻¹ (444.4 nm), green at
19,000 cm⁻¹ (526.3 nm), and red at 15,500 cm⁻¹ (645.2 nm).[3] Extended/RGB
with blue at 467 nm, green at 532 nm, and red at 625 nm is proposed for accommodating
applications in color imaging systems.[5] Judd and Wyszecki chose spectral
stimuli at the two extremes of the visible spectrum for the blue and red primaries and the
middle at 520 nm for green. They pointed out the disadvantage of choosing at the
spectrum extremes: the primaries have very low brightness.[9] Kress proposed
a default RGB, having blue at 460 nm, green at 530 nm, and red at 620 nm.[10]
Starkweather has suggested a spectral RGB color space using the primaries of three
lasers, with a helium-neon laser at 633 nm for red, an argon laser at 514 nm for
green, and a helium-cadmium laser at 442 nm for blue.[11] Wide-Gamut/RGB, proposed
by Adobe, uses spectral stimuli at 450 nm, 525 nm, and 700 nm for the blue,
green, and red primaries, respectively.[1] Wright chose spectral primaries at 460
nm for blue, 530 nm for green, and 650 nm for red.[19] Spectral stimuli, in general,
give a larger gamut size than the primaries of the first group. However, they
are not free of the gamut mismatch (or gamut mapping) problem because the mismatch
is dependent not only on the gamut size but also on the locations of the primaries.
Figure 6.2 Color gamut sizes of groups 2 and 3 RGB primaries.
The last group consists of RIMM-ROMM/RGB, ROM/RGB, and Usami/RGB;
they are also plotted in Fig. 6.2. The unique feature of this group is that they have
primaries outside of the spectral locus. RIMM-ROMM/RGB and Usami/RGB have
a spectral stimulus for red at 700 nm, but green and blue are outside of the spectral
locus.[13-15, 20] Note that the green and blue primaries of Usami/RGB have negative
values (see Table 6.1); the triangle defined by these primaries encloses the spectral
color locus, containing all real colors as well as imaginary colors. For ROM/RGB,
all primaries are outside of the spectral locus.[16] Note that the red and green
primaries do not meet the constraint of x + y ≤ 1 (another way to put it is that z must
be negative).
6.2 Transformation of RGB Primaries
Colorimetric RGB has a simple linear relationship with CIEXYZ. Therefore, the
conversion between RGB and CIEXYZ is a simple vector-matrix multiplication,
provided the matrix coefficients are known. Matrix coefficients are derived from the
chromaticity coordinates of the RGB primaries if a white point is adopted. Conversion
matrices from XYZ to RGB and the inverse transform from RGB to XYZ
under illuminants A, B, C, D50, D55, D65, D75, and D93 are derived and tabulated
in Appendix 1. Moreover, using CIEXYZ as an intermediate representation, we
can establish the conversion from one RGB primary set to another if the white
point is the same. Conversion matrices between RGB standards under the same
white point are provided in Appendix 2 and Appendix 3 for sRGB under D65 and
ROMM/RGB under D50, respectively. Different white points require an extra step
of chromatic adaptation or white-point conversion.
6.2.1 Conversion formula
As shown in Section 2.8, two sets of primaries are related by a 3×3 linear transformation,
A₂ᵀ = (A₁ᵀ P₂)⁻¹ A₁ᵀ [see Eq. (2.31)], where A is a color-matching matrix
and P₂ is the spectra of a set of primaries used to derive the color-matching matrix.
Therefore, colorimetric RGB is represented by a set of linear equations as shown
in Eq. (6.1) to give the forward encoding from XYZ to RGB:

  [R]   [ψ11  ψ12  ψ13] [X]
  [G] = [ψ21  ψ22  ψ23] [Y] ,        (6.1)
  [B]   [ψ31  ψ32  ψ33] [Z]

or

  Λp = Ψ Λ,

where the coefficients ψij are the elements of the conversion matrix Ψ. The inverse
conversion from RGB to XYZ is given as

  [X]   [ω11  ω12  ω13] [R]
  [Y] = [ω21  ω22  ω23] [G] ,        (6.2)
  [Z]   [ω31  ω32  ω33] [B]

or

  Λ = Ω Λp,

where the coefficients ωij are the elements of the matrix Ω. Matrix Ω is the inverse of
the matrix Ψ as given in Eq. (6.3). With three independent channels, the matrix Ψ
is not singular and can be inverted:

  Ω = Ψ⁻¹.        (6.3)

Equation (6.2) provides a path from a colorimetric RGB via CIEXYZ to other CIE
specifications such as CIELAB or CIELUV because the subsequent transformations
are well established.[3] Moreover, it also provides a path from a colorimetric
RGB to another colorimetric RGB via CIEXYZ, if the white point is the same.
The inverse matrix Ω is related to the RGB primaries in the following manner:[12]

  X = xr Tr R + xg Tg G + xb Tb B
  Y = yr Tr R + yg Tg G + yb Tb B        (6.4a)
  Z = zr Tr R + zg Tg G + zb Tb B,

or

  [X]   [xr Tr  xg Tg  xb Tb] [R]
  [Y] = [yr Tr  yg Tg  yb Tb] [G] ,        (6.4b)
  [Z]   [zr Tr  zg Tg  zb Tb] [B]

where (xr, yr, zr), (xg, yg, zg), and (xb, yb, zb) are the chromaticity coordinates of
the red, green, and blue primaries, respectively. Parameters Tr, Tg, and Tb are the
proportional constants of the red, green, and blue primaries, respectively, under an
adapted white point.[4, 12] By comparing with Eq. (6.2), we can establish the relationship
between the coefficients ωij and the chromaticity coordinates of the primaries as

  ω11 = xr Tr;  ω12 = xg Tg;  ω13 = xb Tb;
  ω21 = yr Tr;  ω22 = yg Tg;  ω23 = yb Tb;        (6.5a)
  ω31 = zr Tr;  ω32 = zg Tg;  ω33 = zb Tb,

or

  [ω11  ω12  ω13]   [xr  xg  xb] [Tr  0   0 ]
  [ω21  ω22  ω23] = [yr  yg  yb] [0   Tg  0 ] ,        (6.5b)
  [ω31  ω32  ω33]   [zr  zg  zb] [0   0   Tb]

or

  Ω = C T,        (6.5c)

where C is the matrix of the primary chromaticity coordinates and T is the diagonal
matrix of the constants Tr, Tg, and Tb.
Equation (6.5) can be solved for T = [Tr, Tg, Tb]ᵀ by using a known condition
of a gray-balanced and normalized RGB system; that is, when R = G = B = 1, a
reference white point (xn, yn, zn) of the device is produced. Using this condition,
Eq. (6.4a) becomes

  xn/yn = xr Tr + xg Tg + xb Tb
      1 = yr Tr + yg Tg + yb Tb        (6.6a)
  zn/yn = zr Tr + zg Tg + zb Tb,

or

  [xn/yn]   [xr  xg  xb] [Tr]
  [  1  ] = [yr  yg  yb] [Tg] ,        (6.6b)
  [zn/yn]   [zr  zg  zb] [Tb]

or

  Λw = C T,        (6.6c)
where the normalized tristimulus values of the white point, Λw = [xn/yn, 1, zn/yn]ᵀ,
are given on the left-hand side of Eq. (6.6). This equation is used to compute Tr, Tg,
and Tb because the chromaticity coordinates of the primaries and white point are known.
The matrix C has three independent columns with a rank of three; therefore, it is
nonsingular and can be inverted. By inverting matrix C, we obtain vector T:

  T = C⁻¹ Λw.        (6.7)

The resulting T values are substituted back into Eq. (6.5) with the known chromaticity
coordinates of the primaries to obtain the coefficients ωij of the transfer matrix. Strictly
speaking, Eq. (6.6) is only valid at one extreme point of the reference white, but
the derived coefficients are used in any RGB ↔ XYZ conversion. How well
this assumption holds across the whole encoding range requires further investigation.
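As a concrete illustration of Eqs. (6.4)-(6.7), the sketch below derives the RGB-to-XYZ matrix Ω for the ITU-R.BT.709 primaries under the D65 white point, using the chromaticity coordinates from Table 6.1. This is not code from the book; the function names and the use of Cramer's rule for the 3×3 solve are choices of this example.

```python
def solve3(C, w):
    """Solve the 3x3 system C t = w by Cramer's rule, as in Eq. (6.7)."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(C)
    t = []
    for j in range(3):
        Cj = [row[:] for row in C]   # replace column j with w
        for i in range(3):
            Cj[i][j] = w[i]
        t.append(det(Cj) / d)
    return t

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_white):
    """Build Omega from primary chromaticities and a white point, Eqs. (6.4)-(6.7)."""
    # Chromaticity matrix C of Eq. (6.5b); z = 1 - x - y for each primary.
    cols = [(x, y, 1.0 - x - y) for (x, y) in (xy_r, xy_g, xy_b)]
    C = [[cols[j][i] for j in range(3)] for i in range(3)]
    # Normalized white vector of Eq. (6.6): [xn/yn, 1, zn/yn].
    xn, yn = xy_white
    w = [xn / yn, 1.0, (1.0 - xn - yn) / yn]
    # T = C^-1 Lambda_w (Eq. (6.7)); scale column j of C by Tj (Eq. (6.5b)).
    T = solve3(C, w)
    return [[C[i][j] * T[j] for j in range(3)] for i in range(3)]

# ITU-R.BT.709 primaries under the D65 white point (values from Table 6.1):
M = rgb_to_xyz_matrix((0.640, 0.330), (0.300, 0.600), (0.150, 0.060),
                      (0.3127, 0.3290))
```

With these inputs the result reproduces the familiar sRGB matrix (first row close to 0.4124, 0.3576, 0.1805), and by construction the matrix applied to R = G = B = 1 returns the normalized D65 white.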
6.2.2 Conversion formula between RGB primaries
After obtaining the matrix Ω, one can apply Eq. (6.3) to obtain the matrix Ψ for the
forward transform from XYZ to RGB, if the matrix Ω is not singular. Appendix 1
lists the conversion matrices for 20 sets of primaries under 8 different white points
(illuminants A, B, C, D50, D55, D65, D75, and D93). When the white point is the
same for two sets of RGB primaries, we can combine the relationships given in
Eqs. (6.1) and (6.2) for converting one set of RGB primaries via CIEXYZ to another.
Equation (6.8) gives the conversion formula between RGB sets.[21]

  [Rd]   [ψd11  ψd12  ψd13] [ωs11  ωs12  ωs13] [Rs]
  [Gd] = [ψd21  ψd22  ψd23] [ωs21  ωs22  ωs23] [Gs] ,        (6.8a)
  [Bd]   [ψd31  ψd32  ψd33] [ωs31  ωs32  ωs33] [Bs]

or

  Λp,d = Ψd Ωs Λp,s = Ψd Ψs⁻¹ Λp,s,        (6.8b)

where subscript s indicates the source and subscript d indicates the destination
RGB space. These two matrices can be combined to give a single conversion matrix.
All conversions between RGB primaries can be computed using Eq. (6.8) if
the white point is the same. Appendix 2 gives conversion matrices of various RGB
primaries to ITU-R.BT.709/RGB under D65. Appendix 3 gives conversion matrices
of various RGB primaries to ROMM/RGB under D50. For cases of different white
points, an additional step of white-point conversion or chromatic adaptation
is required. Several practical approaches to white-point conversion and chromatic
adaptation have been reported.[5, 11, 21-23]
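Equation (6.8) can be exercised with two published RGB-to-XYZ matrices. The sketch below is illustrative only: the rounded matrices for sRGB (ITU-R.BT.709 primaries) and Adobe RGB (1998), both under D65, stand in for the appendix tables, and the helper names are assumptions of this example.

```python
def mat_inv3(m):
    """Invert a 3x3 matrix via the adjugate (Psi = Omega^-1, Eq. (6.3))."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    A = e*i - f*h; B = -(d*i - f*g); C = d*h - e*g
    D = -(b*i - c*h); E = a*i - c*g; F = -(a*h - b*g)
    G = b*f - c*e; H = -(a*f - c*d); I = a*e - b*d
    det = a*A + b*B + c*C
    return [[A/det, D/det, G/det], [B/det, E/det, H/det], [C/det, F/det, I/det]]

def mat_mul3(p, q):
    return [[sum(p[i][k]*q[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def mat_vec3(m, v):
    return [sum(m[i][k]*v[k] for k in range(3)) for i in range(3)]

# Rounded RGB-to-XYZ (Omega) matrices under D65:
OMEGA_SRGB  = [[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]]
OMEGA_ADOBE = [[0.5767, 0.1856, 0.1882],
               [0.2973, 0.6274, 0.0753],
               [0.0270, 0.0707, 0.9911]]

# Eq. (6.8b): Lambda_p,d = Psi_d Omega_s Lambda_p,s, with Psi_d = Omega_d^-1.
conv = mat_mul3(mat_inv3(OMEGA_ADOBE), OMEGA_SRGB)   # sRGB -> Adobe RGB
white = mat_vec3(conv, [1.0, 1.0, 1.0])              # same white point: ~[1, 1, 1]
```

Because both spaces share the D65 white point, the combined matrix maps the source white R = G = B = 1 to (approximately) the destination white, which is a quick sanity check on any such conversion matrix.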
6.3 RGB Color-Encoding Standards
Colorimetric RGB standards play an important role in color imaging because they
are a frequently encountered representation for color reproduction and color infor-
mation exchanges. The purpose of the color-encoding standard is to provide format
and ranges for representing and manipulating color quantities. Its foundation is a
set of RGB primaries, either imaginary or real stimuli. By adding the viewing con-
ditions, gamma correction, conversion formula, and digital representation, a set
of RGB primaries becomes a color-encoding standard. Important RGB encoding
standards are SMPTE-C, European TV standard EBU, NTSC, and American Tele-
vision standard YIQ for the television industry; sRGB for the Internet; and PhotoYCC
and ROMM-RIMM for photographic and printing industries. These standards are
given in Appendix 4. The general structure of the RGB encoding standard is given
in this section.
6.3.1 Viewing conditions
The viewing conditions include an adapted white point such as D50 or D65, the surround,
the luminance level, and the viewing flare. Generally, the display industry prefers
D65 and the printing industry uses D50 as the adapted white point. The surround
is a field outside the background and can be considered to be the entire room in
which an image is viewed. Printed images are usually viewed in an illuminated
average surround, meaning that scene objects are surrounded by other similarly
illuminated objects. The luminance level is set according to the application or not
set at all; for instance, the outdoor scene is set under typical daylight luminance
levels (>5,000 cd/m²). When an image is viewed, the observer ideally should see
only the light from the image itself. This is the case for instrumental color measurements,
but is not the case for real viewing conditions by human observers. In most
actual viewing conditions, the observer will also see flare light, the stray light
imposed on the image from the environment and reflected to the viewer. The viewing
flare is usually low (<1.0 cd/m²); sometimes it is set as a percentage of the white-point
luminance (e.g., 1%) or included in 0/45 measurements as a part of the scene
itself.
6.3.2 Digital representation
Cameras and scanners receive light or photons that are converted to electrons or
voltage by a photoelectric process (e.g., photomultiplier). In digital devices, the
voltage is further converted to bits via an analog-to-digital converter. On the other
hand, a display such as a CRT monitor takes digital values and converts them back
to voltage via a digital-to-analog converter to drive the electron gun. Generally,
all color channels are encoded within a fixed range such as [0, 1]; this range is
digitized and scaled to the bit depth used in the device, usually 8 bits.
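A minimal sketch of this quantization step (an illustration of this example, not a formula from the text): a channel value is clipped to the encoding range [0, 1], scaled to the n-bit integer range, and scaled back on decode.

```python
def quantize(v, bits=8):
    """Clip a channel value to [0, 1] and scale it to an n-bit integer code."""
    levels = (1 << bits) - 1           # 255 for 8 bits
    v = min(max(v, 0.0), 1.0)          # out-of-range values must be clipped
    return int(round(v * levels))

def dequantize(code, bits=8):
    """Map an n-bit integer code back to a value in [0, 1]."""
    return code / ((1 << bits) - 1)

# The quantization error shrinks as bits are added:
err8  = abs(dequantize(quantize(0.3, 8), 8)   - 0.3)
err12 = abs(dequantize(quantize(0.3, 12), 12) - 0.3)
```

This also makes visible the point developed later in the chapter: increasing the bit depth reduces only the quantization error, while a value clipped at the [0, 1] boundary is lost regardless of bit depth.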
6.3.3 Optical-electronic transfer function
Most RGB encoding standards are designed for additive devices such as CRT displays,
in which a nonlinear voltage-to-luminance relationship exists; this requires a
power function to transfer the voltage into luminance. The general equation for the
electronic-optical transfer function is given in Eq. (6.9).[4, 21, 24]

  PL = [(P′v + offset)/(1 + offset)]^gamma    if P′v ≥ b,        (6.9a)
  PL = gain × P′v                             if P′v < b,        (6.9b)

where PL is the luminance factor or one of the RGB device values, and P′v is the
voltage. The power of the function is the monitor gamma, which varies within a narrow
range around 2.5. For small P′v values, the voltage-to-luminance relationship
is nearly linear and goes to the point of origin as given by Eq. (6.9b), with the gain
as the slope. Figure 6.3 shows the effect of gamma correction for the case of gamma
= 2.2 and offset = 0.099; the gamma correction linearizes the device value with
luminance.
Figure 6.3 The gamma correction.
Images shown on television are acquired by a camera, whereas images on a
computer monitor may come from the computer itself, the Internet, a scanner, or
a digital still camera. Cameras and scanners have a nonlinear luminance-to-signal
relationship; one needs to correct the nonlinear video signal or device value by
Eq. (6.10), an inversion of Eq. (6.9):

  P′ = (1 + offset) P^(1/gamma) − offset    if 1 ≥ P ≥ b′,        (6.10a)
  P′ = (1/gain) P                           if 0 ≤ P < b′.        (6.10b)

Equation (6.10) is known as the gamma correction, where P is the input video
signal (or coded value of the device), P′ is the gamma-corrected output signal
(or device value), and b′ is a constant.
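Equations (6.9) and (6.10) can be written as a small transfer pair. The constants below (gamma = 2.2, offset = 0.099, gain = 4.5, break points of roughly 0.018 in the linear domain and 0.081 in the signal domain) follow the Rec. 709-style example quoted for Fig. 6.3, but they are illustrative; the exact constants depend on the encoding standard at hand.

```python
def gamma_encode(P, gamma=2.2, offset=0.099, gain=4.5, b=0.018):
    """Eq. (6.10): linear value P in [0, 1] -> gamma-corrected signal P'."""
    if P < b:
        return gain * P                                   # Eq. (6.10b), linear toe
    return (1 + offset) * P ** (1 / gamma) - offset       # Eq. (6.10a)

def gamma_decode(Pv, gamma=2.2, offset=0.099, gain=4.5, b=0.081):
    """Eq. (6.9): signal P'v -> luminance factor P_L (b here is gain * 0.018)."""
    if Pv < b:
        return Pv / gain                                  # Eq. (6.9b)
    return ((Pv + offset) / (1 + offset)) ** gamma        # Eq. (6.9a)

mid_gray = gamma_encode(0.5)   # roughly 0.70: mid-gray is lifted by the correction
```

The two functions invert each other on both branches, which is the property the encoding standards rely on when a display undoes the camera-side correction.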
6.4 Conversion Mechanism
Once the white point is selected, the conversion mechanism between RGB and
XYZ can be established. The method is given in Section 6.2. Appendix 1 gives
the conversion matrices for 20 sets of primaries under eight different white points.
Further transforms from CIEXYZ to CIELAB or YIQ are documented in the CIE
literature and color textbooks. The mechanism is a simple matrix-vector multiplication.
The conversion matrix between different RGB primaries can be computed using
Eq. (6.8). Appendix 2 gives conversion matrices of nineteen RGB sets to
ITU-R.BT.709/RGB under D65. Appendix 3 gives conversion matrices of nineteen
RGB sets to ROMM/RGB under D50.
6.5 Comparisons of RGB Primaries and Encoding Standards
It has been shown that an improper encoding scheme can severely damage the color
conversion accuracy.[7] Therefore, we compare RGB primaries and standards to
determine the most suitable RGB encoding with respect to the gamut size, gamma
correction, encoding format (floating-point or integer representation, bit depth,
etc.), and conversion accuracy.
A set of 150 color patches printed by an ink-jet printer, consisting of 125 color
patches from five-level combinations of an RGB cube and 25 three-color mixtures,
is used to compare RGB primaries and encoding standards. The color patches are
measured to obtain CIEXYZ and CIELAB values under the D65 illuminant. These
color patches are the outputs of the current printing technology, which are read-
ily available and frequently encountered. Thus, these patches provide a realistic
environment for assessing the color transform in the system environment.
To check conversion accuracy, we convert measured CIEXYZ values to an
RGB encoding standard using the specified transfer matrix, gamma correction
(if provided), and encoding range. We then convert the RGB-encoded data back
to CIELAB via the inverse transform for the purpose of comparing to the initial
measured CIELAB. Transfer matrices from RGB to XYZ of twenty sets of RGB
primaries under D65 are computed by using Eq. (6.6). The encoding matrices from
XYZ to RGB are obtained via matrix inversion (see Appendix 1 for obtaining these
matrices). Table 6.2 summarizes the results of the computational accuracy via RGB
encoding; the second column gives the percentage of out-of-range colors from
a total of 150 colors, the third column gives the average color difference of the 150
colors, and the fourth column gives the maximum color difference.
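The round-trip test described above can be sketched as follows. This is an illustrative reconstruction, not the author's code: it uses rounded sRGB matrices under D65 and reports the XYZ round-trip error rather than ΔE*ab, which is sufficient to show why a clipped color cannot be reversed to its original value.

```python
# Rounded sRGB matrices under D65 (illustrative stand-ins for Appendix 1).
M_XYZ2RGB = [[ 3.2406, -1.5372, -0.4986],
             [-0.9689,  1.8758,  0.0415],
             [ 0.0557, -0.2040,  1.0570]]
M_RGB2XYZ = [[ 0.4124,  0.3576,  0.1805],
             [ 0.2126,  0.7152,  0.0722],
             [ 0.0193,  0.1192,  0.9505]]

def apply(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def roundtrip_error(xyz, clip):
    """XYZ -> RGB (optionally clipped to [0, 1]) -> XYZ; return max deviation."""
    rgb = apply(M_XYZ2RGB, xyz)
    if clip:
        rgb = [min(max(c, 0.0), 1.0) for c in rgb]   # the encoding-range clip
    back = apply(M_RGB2XYZ, rgb)
    return max(abs(a - b) for a, b in zip(back, xyz))

in_gamut  = [0.20, 0.21, 0.18]   # maps to RGB values inside [0, 1]
saturated = [0.20, 0.60, 0.10]   # saturated green; its R and B go negative
err_clip   = roundtrip_error(saturated, clip=True)    # large, unrecoverable
err_noclip = roundtrip_error(saturated, clip=False)   # only matrix rounding
```

An in-gamut color survives the trip within rounding error, while the clipped out-of-range color does not; this is the mechanism behind the large maximum errors reported in Table 6.2.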
SMPTE-C/RGB is the smallest space; it gives 45 out-of-range colors (30%,
see Fig. 6.4). The maximum error is 27.42 ΔE*ab and the average error is 2.92
ΔE*ab (see Fig. 6.5). Using sRGB encoding, we find that 38 color patches require
clipping to put the encoded value within the range of [0, 1]. This is an alarming
25.3% of the population of out-of-range colors for such a common test target that
could come from any color printer. Most clipped sRGB points give an error greater
than 2 ΔE*ab, with a maximum of 28.26 ΔE*ab and an average of 2.22 ΔE*ab. Out-of-range
colors appear in most color regions except the blue-purple region (see
Fig. 6.6). Because of the clipping, these 38 points cannot be reversed back to their
original CIELAB values (see Fig. 6.7). This indicates that the sRGB space is too
small for printing. The increase in bit depth does not help either; errors of out-of-range
colors remain the same regardless of the bit depth (see Table 6.3). Moreover,
Table 6.2 Computational accuracy of RGB encoding standards.

RGB primaries         % out-of-range colors   Average ΔE*ab   Maximum ΔE*ab
SMPTE-C/RGB           30.0%                   2.92            27.42
ITU-R.BT.709/RGB      25.3%                   2.22            28.26
Sony-P22/RGB          23.3%                   2.67            33.59
EBU/RGB               22.0%                   2.28            30.23
Bruse/RGB             18.0%                   1.43            19.31
Adobe/RGB98           10.0%                   1.17            19.83
NTSC/RGB               6.0%                   0.87            12.19
SMPTE-240M/RGB         5.3%                   0.74             9.60
Guild/RGB              0.7%                   0.60             5.91
Ink-jet/RGB            0                      0.58             2.94
CIE1931/RGB            7.3%                   1.08            18.08
Laser/RGB              5.3%                   1.05            28.26
Judd-Wyszecki/RGB      1.3%                   0.74            11.54
CIE1964/RGB            0.7%                   0.64             3.68
Wide-Gamut/RGB         0.7%                   0.67             3.53
Wright/RGB             0                      0.68             3.30
Kress/RGB              0                      0.64             3.36
Extended/RGB           0                      0.61             3.00
RIMM-ROMM/RGB          0                      0.71             4.29
ROM/RGB                0                      0.43             0.99
Figure 6.4 Color gamut of SMPTE-C/RGB for reproduction of printer colors.
Figure 6.5 Color differences between measured and computed out-of-gamut colors encoded in SMPTE-C/RGB.
Figure 6.6 Color gamut of sRGB for reproduction of printer colors.
Figure 6.7 Color differences between measured and computed out-of-gamut colors encoded in sRGB.
the average error improves very slowly with increasing bit depth because the error
is mostly contributed from those out-of-range colors. By removing the 38 out-of-range
colors, we obtain a much smaller error, as shown in row 4 of Table 6.3.
Also, the average and maximum errors decrease much faster with respect to bit
depth.

There are several extensions of sRGB, such as sRGB64 and e-sRGB. sRGB64
is a rather interesting standard; it is the same as sRGB but uses 16 bits per channel,
and is actually encoded in 13 bits with a scaling factor of 8192. As shown in
Table 6.3, the bit-depth increase does not help much. e-sRGB removes the constraint
on the encoding range, allowing sRGB to have negative and overflow
values.[25] Error distributions of the CIELAB → sRGB transform without clipping
are given in Table 6.4. The computation uses integer arithmetic with 8-bit ITULAB
inputs. Compared to Table 6.3, the average errors are smaller than the corresponding
errors encoded by sRGB with clipping. Note that the maximum error of 9.22 ΔE*ab
Table 6.3 Computational errors of sRGB with respect to bit depth.

         Average color difference, ΔE*ab       Maximum color difference, ΔE*ab
Method   8-bit  9-bit  10-bit  12-bit  14-bit   8-bit  9-bit  10-bit  12-bit  14-bit
1        2.39   2.20   2.12    2.05    2.04     28.36  28.34  28.35   28.34   28.35
2        2.22   2.12   2.08    2.04    2.04     28.26  28.33  28.37   28.35   28.35
3        3.73   3.06   2.84    2.74    2.72     28.74  27.95  28.74   28.74   28.74
4        1.93   1.12   0.85    0.74    0.74      7.18   4.17   2.41    1.34    1.63

Method 1: floating-point computation. Method 2: floating-point computation and gamma correction. Method 3: integer computation. Method 4: integer computation using 112 data points by removing out-of-range colors.
Table 6.4 Error distribution of the CIELAB → sRGB transform without clipping.

Range     8-bit  9-bit  10-bit  11-bit  12-bit  13-bit  14-bit
0-1         27     65     99     118     116     116     119
1-2         61     72     48      32      34      34      31
2-3         38      7      3
3-4          8      4
4-5          8      2
5-6          3
6-7          1
7-8          1
8-9          2
9-10         1
Average   2.13   1.20   0.87    0.74    0.72    0.71    0.71
Maximum   9.22   4.17   2.41    1.42    1.34    1.42    1.63
at 8-bit depth is about a factor of three smaller than the value obtained by sRGB.
Moreover, both average and maximum errors decrease rapidly as the bit depth increases,
but the improvement levels off around 11 bits. Two of the specified bit
depths, 10 bits and 12 bits, give average errors of 0.87 ΔE*ab and 0.72 ΔE*ab, respectively.
No clipping of out-of-range values means that it allows mathematical
extrapolation from the triangle connected by the ITU-R.BT.709/RGB primaries.
This practically extends the color space to an unknown size (infinity, if no limit is
placed on the out-of-range value). Eliminating the clipping takes care of output-referred
images very well. The information is retained in storage for interchange. However,
the clipping still exists when converted to sRGB for display. This makes one
wonder: why not use a primary set with a larger color gamut to eliminate the out-of-range
colors? By so doing, all colors will be encoded by interpolation. Mathematically,
extrapolation is subject to a higher uncertainty than interpolation,
not to mention the difficulty of dealing with negative values in the system environment.
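The idea of removing the range constraint can be shown with a toy extended-range encoding (a sketch of this example; the actual e-sRGB offsets and scale factors are defined in the standard and are not reproduced here). Mapping an extended interval, here assumed to be [−0.5, 2.0], onto the positive integer code range lets negative and overflow values survive the encode-decode trip:

```python
def encode_extended(v, bits=10, lo=-0.5, hi=2.0):
    """Map v in [lo, hi] (allowing negative/overflow values) to an integer code."""
    levels = (1 << bits) - 1
    v = min(max(v, lo), hi)            # clip only at the extended limits
    return int(round((v - lo) / (hi - lo) * levels))

def decode_extended(code, bits=10, lo=-0.5, hi=2.0):
    """Map an integer code back to the extended-range value."""
    levels = (1 << bits) - 1
    return lo + code / levels * (hi - lo)

# A negative out-of-range value survives the trip, unlike clipping at [0, 1]:
recovered = decode_extended(encode_extended(-0.2))
```

The trade-off discussed in the text is visible here as well: spreading the code range over a wider interval lowers the numerical resolution inside [0, 1], which is why such encodings are paired with higher bit depths.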
Sony-P22/RGB has a slightly larger gamut than ITU-R.BT.709/RGB; it has
35 out-of-range colors (23.3%). The average color difference of the 150 patches with
respect to the measured values is 2.67 ΔE*ab, with a maximum of 33.59 ΔE*ab.
EBU/RGB is slightly larger than Sony-P22/RGB; it has 33 out-of-range colors
(22.0%). The average color difference of the 150 patches with respect to the measured
values is 2.28 ΔE*ab, with a maximum of 30.23 ΔE*ab. Bruse/RGB gives 27 out-of-range
colors (18%), with a maximum error of 19.31 ΔE*ab and an average error
of 1.43 ΔE*ab. Similar to sRGB, the SMPTE-C/RGB and Bruse/RGB spaces produce
out-of-range colors that appear in most color regions (see Fig. 6.8).
Adobe/RGB98, having an enlarged green region, reduces the number of out-of-range
colors to fifteen (10%) by eliminating the clipping error in the green region.
Therefore, the distribution of out-of-range colors is in the yellow, red, and purple
regions (see Fig. 6.9). The maximum error is 19.83 ΔE*ab and the average error is
1.17 ΔE*ab.
Using NTSC/RGB encoding, we find that there are nine out-of-range colors
(6%). These colors are clustered in the red region with one exception in yellow
(see Fig. 6.10). The maximum color difference is 12.19 ΔE*ab and the average error
is 0.87 ΔE*ab. NTSC/RGB primaries, having extended space in the green region,
eliminate the clipping problem in the green region. However, the green primary is
located too far left on the x-axis of the chromaticity diagram, such that it reduces
the space in the red-magenta region. This causes many red colors to become out of
range.
SMPTE-240M/RGB has a relatively large gamut; therefore, it has only 8 out-of-range
colors, or 5.3% (see Fig. 6.11). The average color difference of the 150 patches
is 0.74 ΔE*ab, with a maximum of 9.60 ΔE*ab. Primaries of the Ink-jet/RGB are
chosen to encompass all 150 experimental data. As expected, there is no out-of-range
color (see Fig. 6.12). In 8-bit depth, the average error is 0.58 ΔE*ab and the
maximum error is 2.94 ΔE*ab. Further improvements in computational accuracy
can be realized by increasing the bit depth for encoding the integer RGB. The
Figure 6.8 Color gamut of Bruse/RGB for reproduction of printer colors.
average error becomes smaller and the error distribution becomes narrower as the
bit depth increases. Practically, there is no visually detectable error at 12 bits or
higher. Guild/RGB is almost as big as Ink-jet/RGB. However, Guild's blue primary
is a little toward the right-hand side of the Ink-jet's blue; thus, the gamut is slightly
smaller in the blue region. This causes it to have one out-of-range color (0.7%).
The average error is 0.60 ΔE*ab and the maximum error is 5.91 ΔE*ab. The results
of Guild/RGB and Ink-jet/RGB demonstrate that one does not necessarily need a
large gamut to minimize the clipping errors; a properly selected RGB set is more
helpful in reducing out-of-range colors.
For the Group 2 RGB sets, the laser spectral primaries give a much larger color space,
but we still get eight out-of-range colors (5.3%). The average error is 1.05 ΔE*ab
and the maximum error is 28.26 ΔE*ab for 8-bit representation. This is because the
positions of the primaries are not at proper wavelengths to cover the real colors
encountered; in particular, the wavelength of the green primary at 514 nm is too low,
such that the line connecting the green and red primaries excludes many yellow
and green colors (see Fig. 6.13). This primary set demonstrates that a larger gamut
size does not guarantee results free of out-of-range color. Results from
Guild/RGB, Ink-jet/RGB, and Laser/RGB suggest that the positions of the primaries
are more important.
CIE 1931 spectral primaries give eleven out-of-range colors (7.3%) in the
green-blue region. The maximum error is 18.08 ΔE*ab and the average error is 1.08
Figure 6.9 Color gamut of Adobe/RGB98 for reproduction of printer colors.
Figure 6.10 Color gamut of NTSC/RGB for reproduction of printer colors.
Figure 6.11 Color gamut of SMPTE-240M/RGB for reproduction of printer colors.
Figure 6.12 Color gamut of Ink-jet/RGB for reproduction of printer colors.
Figure 6.13 Color gamut of Laser/RGB for reproduction of printer colors.
ΔE*ab. This is because the green primary at λ = 546.1 nm is too high in wavelength
(see Fig. 6.14). Concerning the color gamut, the position of the CIE 1931 green
primary is too far right, whereas the laser green primary is too far left; a proper
wavelength for the green primary is around 530 nm.
Other spectral primaries fare much better because the green primary is in the
proper position of around 530 nm. Judd-Wyszecki/RGB, with green at 520 nm,
has two out-of-range colors (1.3%). The maximum error is 11.54 ΔE*ab and the
average error is 0.74 ΔE*ab. Adobe Wide-Gamut/RGB, with green at 525 nm, gives
only one out-of-range color, the most saturated yellow. The maximum error is
3.53 ΔE*ab and the average error is 0.67 ΔE*ab. CIE1964/RGB, with green at 526.3
nm, gives the same out-of-range yellow with a maximum error of 3.68 ΔE*ab and
an average error of 0.64 ΔE*ab (see Fig. 6.15).

An Extended/RGB space, having properly selected primaries (green at 532 nm)
to encompass the commercial Scanner RGB, Monitor RGB, Duoproof RGB, Ink-jet
CMYK, Printing offset press CMYK, and Hexachrome offset press, gives no
out-of-range color (see Fig. 6.16).[26]
Error distributions for the 150 data encoded by Extended/RGB are given in
Table 6.5, showing that the error distribution becomes narrower as the bit depth
increases. Also, both average and maximum errors become smaller with increasing
bit depth. There is no visually detectable error at 12 bits and above. Like Extended/RGB,
Kress/RGB and Wright/RGB have no out-of-range color (both green
primaries are at 530 nm). The average error is 0.64 ΔE*ab and the maximum error
Figure 6.14 Color gamut of CIE1931/RGB for reproduction of printer colors.
Figure 6.15 Color gamut of CIE1964/RGB for reproduction of printer colors.
Figure 6.16 The color gamut of Extended/RGB for reproduction of printer colors.
Table 6.5 Error distribution of Extended/RGB.

Range     8-bit  9-bit  10-bit  12-bit  14-bit
0-1        129    149    149     150     150
1-2         15      1      1
2-3          5
3-4          1
Average   0.61   0.31   0.16    0.04    0.01
Maximum   3.00   1.08   1.18    0.14    0.08
is 3.36 ΔE*ab for Kress/RGB. The maximum error is 3.30 ΔE*ab and the average
error is 0.68 ΔE*ab for Wright/RGB.
ROMM/RGB and ROM/RGB are believed to enclose most, if not all, real-world
producible colors. To demonstrate the importance of the gamma correction,
we perform conversions with and without it to see the difference. Without
gamma correction, ROMM/RGB gives an average of 0.71 ΔE*ab and a maximum
of 4.29 ΔE*ab. With gamma correction, it gives an average of 0.43 ΔE*ab and a maximum
of 0.99 ΔE*ab. The gamma correction provides a significant improvement
for ROMM/RGB, unlike sRGB, where there are only marginal improvements (see
Table 6.3).
This study reveals that out-of-range colors give the biggest errors. Out-of-range
colors cannot be brought into range by extending the number of bits for encoding.
There are two ways of dealing with this problem: the first one is to remove
the clipping (this is the approach used in e-sRGB) and the second one is to use
a wide-gamut primary set. Using the first method, any out-of-range color is represented
by either a negative value or a value greater than the maximum range.
Negative values increase the complexity of the digital implementation. For example,
one may need extra control logic for dealing with out-of-range values, or
one may not be able to use simple lookup tables. In addition, most electronic devices
cannot render negative values. Therefore, some kind of mapping or clipping
must be used. If mapping is used, the color fidelity will be in jeopardy. If
clipping is used, the same computational inaccuracy experienced in sRGB will
resurface.[7] Kress pointed out that any truncation of negative values would cause
loss of image detail in highly saturated colors and color shifts in hue, saturation,
and lightness.[10]

Another solution is to use a wide-gamut space. As we have shown in various
RGB standards, the color space is controlled by the locations of the primaries. The
number, error magnitudes, and locations of out-of-range colors are dependent on
the size and shape of the color space. Therefore, we can eliminate out-of-range
colors by properly selecting RGB primaries; examples are the Ink-jet/RGB and
Extended/RGB spaces. Kress has compared the gamuts of monitors (ITU709, P22,
NTSC, and SMPTE), photographic films (Agfa, Fuji Photo, Kodak, and Konica),
and printer-paper outputs (SWOP, wax thermal transfer, dye diffusion, Kodak Q60,
and graphic-arts proofing material). He concluded that there was no single encoding
scheme that resulted in minimal computation time, absence of image artifacts,
device independence, and optimal quantization.[10] To accommodate wider applications
at the system level, Kress proposed a spectral RGB space using primaries at
620, 530, and 460 nm (note that this primary set is very close to the Extended/RGB
set). With this space, colors from photographic input materials and printer outputs
could be encompassed. Süsstrunk, Buckley, and Swen derived a similar conclusion
that no particular RGB space was ideal for archiving, communicating, compressing,
and viewing of color images.[27] The correct color space depended on the application.
They recommended that, if the desired rendering intent is known, the use of
a wide-gamut space is the best choice for the situation where more than one type
of output is desired (this is the system environment). From our study, we draw the
same conclusion that a wide RGB color space gives the best results in the system
environment. Problems caused by using a wide RGB space are relatively minor
compared to the problems caused by clipping or negative values. For example, the
low numerical resolution can be overcome by using a higher bit depth and a proper
gamma correction, and the mismatch between the RGB encoding standard and real
phosphor or CCD chromaticity becomes a device-characterization problem, which
is a separate issue.
RGB Color Spaces 99
6.6 Remarks
Comparisons of RGB encoding standards revealed that out-of-range colors gave the biggest color errors. We showed that the out-of-range colors induced by improper color encoding could be eliminated. There are numerous problems for color reproduction at the system level; the most difficult one, perhaps, is color-gamut mismatch. There are two kinds of color-gamut mismatch: one stems from the physical limitations of imaging devices (for example, a monitor gamut does not match a printer gamut); the other is due to the color-encoding standard used, such as sRGB. The difference is that one is imposed by nature and the other is a man-made constraint for describing and manipulating color data. We may not be able to do much about the limitations imposed by nature (although we have tried and are still trying by means of color gamut mapping). However, we should make every effort to eliminate color errors caused by the color-encoding standard. It is my opinion that in system-level applications we need a wide-gamut space, such as ROMM/RGB, Kress/RGB, or Extended/RGB, for preserving color information. We should not impose a small, device-specific color space for carrying color information. With a wide-gamut space, the problem boils down to having proper device characterizations when we move color information between devices. It is in the device color profile that the color mismatch and device characteristics are taken into account. In this way, any color error is confined within the device; it does not spread to other system components.
Furthermore, we should make gamma correction an option because not every transform requires it. For example, a Scanner/RGB designed to meet the Luther condition does not need it [28]. In this case, the empty slot can be used for implementing gray-balance curves, such that an equal value for all three RGB components gives a gray. However, not all scanners are designed in this way. A check of two scanners (HP OfficeJet G95 and Microtek ScanMaker 5) showed that they are nonlinear with respect to luminance; thus, gamma correction is still needed. It can be combined with the gray balancing if the lookup-table approach is used.
References
1. Adobe Photoshop 5.0, Adobe Systems Inc., San Jose, CA.
2. B. Fraser and D. Blatner, http://www.pixelboyz.com.
3. CIE, Colorimetry, Publication No. 15, Bureau Central de la CIE, Paris (1971).
4. A. Ford and A. Roberts, Colour space conversions, http://www.inforamp.net/~poynton/PDFs/colourreq.pdf.
5. H. R. Kang, Implementation issues of digital color imaging, Color Image Science Conf. 2000, Derby, England, April 10-12, pp. 107-115 (2000).
6. J. Guild, The colorimetric properties of the spectrum, Phil. Trans. Roy. Soc. London A230, p. 149 (1931).
7. H. R. Kang, Computational accuracy of RGB encoding standards, NIP 16, pp. 661-664 (2000).
8. IEC/3WD 61966-2.1: Colour measurement and management in multimedia systems and equipment - Part 2.1: Default RGB colour space - sRGB, http://www.srgb.com (1998).
9. D. B. Judd and G. Wyszecki, Color in Business, Science and Industry, 3rd Edition, Wiley Interscience, New York, pp. 190-192 (1975).
10. W. C. Kress, First considerations of default RGB database, Proc. TAGA, Sewickley, PA, pp. 67-84 (1993).
11. G. Starkweather, Colorspace interchange using sRGB, http://www.srgb.com (1998).
12. Color Encoding Standard, Xerox Corp., Xerox Systems Institute, Sunnyvale, CA, p. C-3 (1989).
13. Eastman Kodak Company, Reference output medium metric RGB color space (ROMM RGB) white paper, http://www.colour.org/tc8-05.
14. Eastman Kodak Company, Reference input medium metric RGB color space (RIMM RGB) white paper, http://www.pima.net/standards/iso/tc42/wq18.
15. K. E. Spaulding, G. J. Woolfe, and E. J. Giorgianni, Reference input/output medium metric RGB color encodings (RIMM/ROMM RGB), PICS 2000 Conf., Portland, OR, March 26-29 (2000).
16. K. E. Spaulding, B. Pridham, and E. J. Giorgianni, Definition of standard reference output medium RGB color space (ROM RGB) for best practice tone/color processing, Kodak Imaging Science Best Practice Document.
17. SMPTE RP 145-1999, SMPTE C color monitor colorimetry, Society of Motion Picture and Television Engineers, White Plains, NY (1999).
18. N. Katoh and T. Deguchi, Reconsideration of CRT monitor characteristics, 5th Color Imaging Conf., Scottsdale, AZ, pp. 33-40 (1997).
19. W. D. Wright, A trichromatic colorimeter with spectral primaries, Trans. Opt. Soc. London 29, p. 225 (1927).
20. A. Usami, Colorimetric-RGB color space, IS&T's 45th Annual Conf., East Rutherford, NJ, May 10-15, pp. 157-160 (1992).
21. SMPTE RP 177-1993, Derivation of basic television color equations, Society of Motion Picture and Television Engineers, White Plains, NY (1993).
22. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chap. 1, pp. 22-28 (1997).
23. H. R. Kang, Color conversion between sRGB and Internet FAX standards, NIP 16, pp. 665-668 (2000).
24. C. Poynton, Frequently asked questions about gamma, http://www.poynton.com/~poynton/.
25. PIMA 7667: 2001, Photography - Electronic still picture imaging - Extended sRGB color encoding - e-sRGB, http://www.sRGB.com/sRGB64/.
26. The Secrets of Color Management, Digital Color Prepress, Agfa Educational Publ., Randolph, MA, Vol. 5, p. 9 (1997).
27. S. Süsstrunk, R. Buckley, and S. Swen, Standard RGB color spaces, Proc. of 7th Color Imaging Conf., IS&T/SID, Scottsdale, AZ, pp. 127-134 (1999).
28. H. R. Kang, Color scanner calibration, J. Imaging Sci. Techn. 36, pp. 162-170 (1992).
Chapter 7
Device-Dependent Color Spaces
Most device-dependent color spaces are created for the convenience of practical usage, digital representation, and computation. They do not relate to an objective definition of color or to the way humans see color. They are used to encode device-specific digital data at the device-control level. Device-dependent color spaces fall into two main classes: additive and subtractive spaces. Additive spaces include RGB spaces and any spaces derived from Device/RGB space, such as the HSV and HLS spaces. Subtractive spaces include CMY, CMYK, and Hi-Fi spaces that have five or more primary colorants. In this chapter, the color gamut of device-dependent color spaces is discussed. The gamut computation of ideal block dyes is given in detail. The process of determining the color-gamut boundary of a real imaging device, which includes the test target, device gamut model, interpolation method, and color-gamut descriptor, is described. Finally, we discuss color-gamut mapping for cross-media rendition, needed because of the gamut mismatch among real imaging devices.
7.1 Red-Green-Blue (RGB) Color Space
RGB color space defines colors within a unit cube by the additive color-mixing model. Additive color mixing produces a color stimulus for which the radiant power in any wavelength interval, small or large, in any part of the spectrum is equal to the sum of the powers in the same interval of the constituents of the mixture, the constituents being assumed optically incoherent [1]. As shown in Fig. 7.1, red, green, and blue are additive primaries represented by the three axes of the cube that defines the color-gamut boundary of the RGB space; all colors are located within the cube. Each color in the cube can be represented as a triplet (R, G, B), where the values of R, G, and B are assigned in the range from 0 to 1 (or from 0 to the bit depth of the device). In an ideal case, the mixture of two additive primaries produces a subtractive primary; thus, the mixture of red (1, 0, 0) and green (0, 1, 0) is yellow (1, 1, 0), the mixture of red (1, 0, 0) and blue (0, 0, 1) is magenta (1, 0, 1), and the mixture of green (0, 1, 0) and blue (0, 0, 1) is cyan (0, 1, 1). White (1, 1, 1) is obtained when all three primaries are added together, and black (0, 0, 0) is located at the origin of the cube. Shades of gray are located along the diagonal line that connects the black and white points.
Figure 7.1 Additive RGB color space.
The additive color mixing of ideal block dyes behaves like a logical OR operation; the output is 1 when any of the primaries is 1. An important characteristic of the additive system is that the object itself is a light emitter, such as a television. In addition to televisions, electronic color devices that use RGB space include scanners, computer monitors, and digital cameras.
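The logical-OR behavior of additive block-dye mixing can be illustrated with a small sketch (the coarse three-band representation and the names below are illustrative, not from the text):

```python
# Ideal additive block dyes: each primary reflects in one-third of the
# visible spectrum.  Represent the three thirds as a (short, mid, long)
# 0/1 band vector.
BLUE, GREEN, RED = (1, 0, 0), (0, 1, 0), (0, 0, 1)

def add_mix(*dyes):
    """Additive mixing of ideal block dyes acts like a per-band logical OR."""
    return tuple(int(any(d[i] for d in dyes)) for i in range(3))

print(add_mix(RED, GREEN))        # (0, 1, 1) -> yellow
print(add_mix(RED, BLUE))         # (1, 0, 1) -> magenta
print(add_mix(RED, GREEN, BLUE))  # (1, 1, 1) -> white
```

Mixing all three primaries fills every band, which is the white (1, 1, 1) corner of the cube.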
7.2 Hue-Saturation-Value (HSV) Space
HSV space has a hexcone shape as shown in Fig. 7.2. Additive and subtractive
primaries occupy the vertices of the hexagon that is the 2D projection of the RGB
cube along the neutral diagonal line. This hexagon governs the hue change. Hue H
is represented as an angle about the vertical axis. Vertices of the hexagon are sepa-
rated by 60 deg intervals. Red is at 0 deg, yellow is at 60 deg, green is at 120 deg,
cyan is at 180 deg, blue is at 240 deg, and magenta is at 300 deg. Complementary
colors are 180 deg apart.
Saturation S indicates the strength of the color; it increases from the center toward the edge of the hexagon. It varies from 0 to 1, representing the ratio of the purity of a selected color to its maximum purity. At S = 0, we have the gray scale, which is represented by the center axis of the hexcone.
Value V indicates the darkness of the color. It varies from 0 at the apex of the hexcone to 1 at the top. The apex represents black. At the top, colors have their maximum intensity. When V = 1 and S = 1, we have the pure hues. White is located at V = 1 and S = 0.
Since HSV space is a modification of the RGB space, a simple transform exists between them [2]:
V = Max(R, G, B), (7.1)
S = 0 if V = 0, (7.2a)
S = [V - Min(R, G, B)]/V if V > 0, (7.2b)
H = 0 if S = 0, (7.3a)
H = 60(G - B)/(SV) if V = R, (7.3b)
H = 60[2 + (B - R)/(SV)] if V = G, (7.3c)
H = 60[4 + (R - G)/(SV)] if V = B, (7.3d)
H = H + 360 if H < 0. (7.3e)
Figure 7.2 HSV color space.
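The RGB-to-HSV transform of Eqs. (7.1)-(7.3) transcribes directly into code (a sketch; the function name is mine):

```python
def rgb_to_hsv(r, g, b):
    """RGB (each in [0, 1]) to HSV following Eqs. (7.1)-(7.3)."""
    v = max(r, g, b)                                 # Eq. (7.1)
    s = 0.0 if v == 0 else (v - min(r, g, b)) / v    # Eqs. (7.2a,b)
    if s == 0:
        h = 0.0                       # Eq. (7.3a): hue undefined on the gray axis
    elif v == r:
        h = 60 * (g - b) / (s * v)    # Eq. (7.3b)
    elif v == g:
        h = 60 * (2 + (b - r) / (s * v))  # Eq. (7.3c)
    else:
        h = 60 * (4 + (r - g) / (s * v))  # Eq. (7.3d)
    if h < 0:
        h += 360                      # Eq. (7.3e)
    return h, s, v

print(rgb_to_hsv(1, 1, 0))  # yellow  -> (60.0, 1.0, 1)
print(rgb_to_hsv(0, 1, 1))  # cyan    -> (180.0, 1.0, 1)
```

Note that S*V equals V minus Min(R, G, B), so the denominators in Eqs. (7.3b)-(7.3d) never vanish when S > 0.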
7.3 Hue-Lightness-Saturation (HLS) Space
HLS space (shown in Fig. 7.3) is a double-cone shape with the highest saturation at 50% lightness. Hue has the same meaning as in the HSV model. H = 0 deg corresponds to blue. Magenta is at 60 deg, red is at 120 deg, yellow is at 180 deg, green is at 240 deg, and cyan is at 300 deg. Again, complementary colors are 180 deg apart. The vertical axis is called lightness L, indicating the darkness of the color. Pure black and pure white each converge to a point. The pure hues lie on the L = 0.5 plane, having a saturation value S = 1.
Figure 7.3 HLS color space.
7.4 Lightness-Saturation-Hue (LEF) Space
LEF space is derived by applying a linear transformation to the RGB color cube,
then a rotation that places the black-white axis in the vertical position. On the
planes perpendicular to the black-white axis, saturation and hue are represented in
polar coordinates as radius and angle [3,4]. LEF lightness (black-white axis) is given
by the L coordinate. The LEF saturation-hue plane is given by the E coordinate
that is perpendicular to the L axis, containing a hexagonal arrangement of the hue
circle, and the F-coordinate is perpendicular to the L and E axes (see Fig. 7.4).
The forward transformation from RGB to LEF is given in Eq. (7.4), and the reverse
transformation is given in Eq. (7.5):

| L |   | 2/3   2/3   2/3  | | R |
| E | = |  1   -1/2  -1/2  | | G |, (7.4)
| F |   |  0   √3/2  -√3/2 | | B |

| R |   | 1/2   2/3    0   | | L |
| G | = | 1/2  -1/3   1/√3 | | E |. (7.5)
| B |   | 1/2  -1/3  -1/√3 | | F |

Figure 7.4 Transformed RGB cube viewed in LEF color space [4].
Equation (7.4) indicates that the LEF space is mathematically linear with respect to the RGB space and can therefore be directly used for generating juxtaposed colors whose sum, weighted by their respective surface coverages, should correspond to a given target color.
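Because the RGB-to-LEF transform of Eqs. (7.4) and (7.5) is linear and invertible, a round trip should recover the input exactly; a quick sketch verifies this (function names are mine):

```python
import math

SQRT3 = math.sqrt(3)

def rgb_to_lef(r, g, b):
    """Forward transform, Eq. (7.4)."""
    L = (2.0 / 3.0) * (r + g + b)
    E = r - g / 2.0 - b / 2.0
    F = (SQRT3 / 2.0) * (g - b)
    return L, E, F

def lef_to_rgb(L, E, F):
    """Reverse transform, Eq. (7.5)."""
    r = L / 2.0 + (2.0 / 3.0) * E
    g = L / 2.0 - E / 3.0 + F / SQRT3
    b = L / 2.0 - E / 3.0 - F / SQRT3
    return r, g, b

# The round trip recovers the original RGB triplet to machine precision.
rgb = (0.8, 0.3, 0.1)
back = lef_to_rgb(*rgb_to_lef(*rgb))
print(all(abs(x - y) < 1e-12 for x, y in zip(rgb, back)))  # True
```

Grays (R = G = B) map to E = F = 0, confirming that the black-white axis is carried entirely by the L coordinate.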
7.5 Cyan-Magenta-Yellow (CMY) Color Space
CMY color space defines colors within a unit cube by the subtractive color-mixing model. Subtractive mixing means color stimuli for which the radiant powers in the spectra are selectively absorbed by an object such that the remaining spectral radiant power is reflected or transmitted, then received by observers or measuring devices. The object itself is not a light emitter but a light absorber. Cyan, magenta, and yellow are subtractive primaries. This system is the complement of the additive system, as shown in Fig. 7.5.
The mixing of two subtractive primaries creates an additive primary. When cyan and magenta colorants are mixed, cyan absorbs the red reflection of the magenta and magenta absorbs the green reflection of the cyan. This leaves blue as the only nonabsorbing spectral region. Similarly, the mixture of cyan and yellow gives green, and the mixture of magenta and yellow gives red. The point (1, 1, 1) of the subtractive cube represents black because all components of the incident light are absorbed. The origin of the cube is white (0, 0, 0). In theory, equal amounts of the primary colors produce grays, located along the diagonal line of the cube. Thus, the subtractive mixing of block dyes behaves like a logical AND operation; for a given region of the visible spectrum, the output is 1 only if all participating primaries are 1 in that spectral region.
Figure 7.5 Subtractive cyan-magenta-yellow (CMY) color space.
A good example of subtractive color mixing is color printing on paper. Most printers today add a fourth primary, black, which gives many advantages, such as enlargement of the color gamut and savings in color-ink consumption by replacing the common component with black ink.
7.6 Ideal Block-Dye Model
Device RGB and CMY color spaces are closely related to the ideal block-dye model, which is the simplest color-mixing model. Primary colorants of block dyes are assumed to have perfect reflection (normalized to the value 1) over a portion of the visible spectrum (usually one-third for additive primaries and two-thirds for subtractive primaries) and total absorption (assigned the value 0) over the rest, as shown in Figs. 7.6 and 7.7 for the additive and subtractive block-dye models, respectively. Additive and subtractive color mixings are different. Additive color mixing applies to imaging systems that are light emitters, whereas subtractive mixing is for imaging systems that combine unabsorbed light.
7.6.1 Ideal color conversion
The simplest model for converting from RGB to cmy is the block-dye model, in which a subtractive primary is the total reflectance (white, or 1) minus the reflectance of the corresponding additive primary. Mathematically, this is expressed as

c = 1 - R, m = 1 - G, y = 1 - B, (7.6a)

where the total reflectance is normalized to 1. If we set a vector P_cmy = [c, m, y]^T, a vector P_rgb = [R, G, B]^T, and a unit vector U_1 = [1, 1, 1]^T, we can express Eq. (7.6a) in vector space:

P_cmy = U_1 - P_rgb. (7.6b)

Figure 7.6 Additive color mixing of ideal block dyes.
The intuitive model of ideal color conversion implies an ideal case of block dyes (see Fig. 7.8) [5]. This model is simple and has minimal computation cost, but it is grossly inadequate because none of the additive or subtractive primaries come close to the block-dye spectra. Typical spectra of subtractive primaries contain substantial unwanted absorptions in the cyan and magenta colorants [6]. To minimize this problem and improve the color fidelity, a linear transform is applied to the input RGB:
Figure 7.7 Subtractive color mixing of ideal block dyes.

| R' |   | a11 a12 a13 | | R |
| G' | = | a21 a22 a23 | | G |, (7.7a)
| B' |   | a31 a32 a33 | | B |

or

P'_rgb = A P_rgb, (7.7b)

then the corrected additive primary is removed from the total reflectance to obtain the corresponding subtractive color:

c = 1 - R', m = 1 - G', y = 1 - B', (7.8a)

P_cmy = U_1 - P'_rgb = U_1 - A P_rgb. (7.8b)
The coefficients a_ij of the transfer matrix are determined in such a way that the resulting cmy values more closely match the original or a desired output. Generally, the diagonal elements have values near 1 and the off-diagonal elements have small values, serving as adjustments to fit the measured results. However, this correction is still not good enough because it is a well-known fact that the RGB-to-cmy conversion is not linear. A better way is to use three nonlinear functions or lookup tables for transforming RGB to R'G'B'.
Figure 7.8 Ideal color conversion of block dyes.
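A minimal sketch of the linearly corrected conversion of Eqs. (7.7) and (7.8); the matrix below is an illustrative placeholder with diagonal elements near 1 and small off-diagonal elements, not a fitted correction for any real device:

```python
# Hypothetical correction matrix A of Eq. (7.7); real coefficients are
# fitted so the resulting cmy match measured output.
A = [[0.95, 0.03, 0.02],
     [0.04, 0.93, 0.03],
     [0.01, 0.05, 0.94]]

def rgb_to_cmy(rgb):
    """Apply the correction matrix, Eq. (7.7), then complement, Eq. (7.8a)."""
    corrected = [sum(A[i][j] * rgb[j] for j in range(3)) for i in range(3)]
    return [1.0 - v for v in corrected]

print(rgb_to_cmy([1.0, 1.0, 1.0]))  # white -> (near) zero ink coverage
```

Because each row of this placeholder matrix sums to 1, white maps to essentially zero colorant, as the uncorrected Eq. (7.6) also requires.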
7.7 Color Gamut Boundary of Block Dyes
Block dyes are similar to optimal stimuli in that they are also imaginary colorants, having a uniform spectrum of unit reflectance within specified wavelength ranges and zero elsewhere; the difference is that block dyes have a wider wavelength range than the corresponding optimal stimuli. Again, no real colorants possess this kind of spectrum. We generate ideal block dyes by selecting a bandwidth and moving it from one end of the visible spectrum to the other end. We also produce block dyes with a complementary spectrum S_c(λ); that is, unit reflectance minus a block-dye spectrum S_B(λ) across the whole visible region:

S_c(λ) = 1 - S_B(λ). (7.9)
We start at a bandwidth of 10 nm at one end and move it at a 10-nm interval
for generating a whole series of block dyes. We then increase the bandwidth by
10 nm to generate the next set of block dyes. This procedure is repeated until the
bandwidth reaches 220 nm. We compute tristimulus values of these block dyes
by using the CIE standard formulation given in Eq. (1.1) and repeat it here as
Eq. (7.10).
X = k Σ E(λ) S_B(λ) x̄(λ), (7.10a)

Y = k Σ E(λ) S_B(λ) ȳ(λ), (7.10b)

Z = k Σ E(λ) S_B(λ) z̄(λ), (7.10c)

k = 1/[Σ E(λ) ȳ(λ)], (7.10d)

where the sums are over wavelength λ, S_B(λ) is the spectrum of a block dye, and E(λ) is the spectrum of an illuminant. We then compute CIELAB values using Eq. (5.11), repeated here as Eq. (7.11), to derive the gamut boundary for block dyes:

L* = 116 f(Y/Yn) - 16,
a* = 500[f(X/Xn) - f(Y/Yn)],
b* = 200[f(Y/Yn) - f(Z/Zn)],

or

| L* |   |  0   116    0   -16 |   | f(X/Xn) |
| a* | = | 500 -500    0    0  | × | f(Y/Yn) |, (7.11a)
| b* |   |  0   200  -200   0  |   | f(Z/Zn) |
                                   |    1    |

and

f(t) = t^(1/3)            for 1 ≥ t > 0.008856, (7.11b)
f(t) = 7.787t + (16/116)  for 0 ≤ t ≤ 0.008856. (7.11c)
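Equations (7.10) and (7.11) translate directly into code (a sketch; the illuminant E and the color-matching functions x̄, ȳ, z̄ must be supplied as lists sampled at the same wavelengths, and the CIE tables are not reproduced here):

```python
def tristimulus(E, S, xbar, ybar, zbar):
    """Eqs. (7.10a)-(7.10d): tristimulus values of spectrum S under
    illuminant E; all arguments are lists on a common wavelength grid."""
    k = 1.0 / sum(e * y for e, y in zip(E, ybar))            # Eq. (7.10d)
    X = k * sum(e * s * x for e, s, x in zip(E, S, xbar))    # Eq. (7.10a)
    Y = k * sum(e * s * y for e, s, y in zip(E, S, ybar))    # Eq. (7.10b)
    Z = k * sum(e * s * z for e, s, z in zip(E, S, zbar))    # Eq. (7.10c)
    return X, Y, Z

def f(t):
    """Eqs. (7.11b) and (7.11c)."""
    return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    """Eq. (7.11a): XYZ to CIELAB for white point (Xn, Yn, Zn)."""
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return L, a, b

# Sanity check: a perfect reflector maps to the white point, L* = 100,
# a* = b* = 0.
print(xyz_to_lab(1.0, 1.0, 1.0, 1.0, 1.0, 1.0))  # (100.0, 0.0, 0.0)
```

Sweeping the block-dye bandwidth and onset as described in the text, and feeding each resulting spectrum through these two functions, traces out the gamut boundary points.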
The shape of the 2D gamut boundary of block dyes is quite similar to the corresponding boundary of spectral stimuli, with a much smaller gamut size (see Fig. 7.9), whereas the 3D gamut shows some similarity to the boundary of object-color stimuli given by Judd and Wyszecki (see Fig. 7.10) [7].
7.7.1 Ideal primary colors of block dyes
Block dyes can be used to find ideal primary colorants with respect to color strength. The most saturated red has a spectrum with a wavelength onset around 600 nm and a cutoff around 680 nm. The hue is not sensitive to broadening the spectrum at the long-wavelength side because the CMF is very weak at the long-wavelength end. On the short-wavelength side, we can broaden the spectrum down to 550 nm, but the hue gradually shifts toward orange.
Figure 7.9 CIE a*-b* plots of block dyes and spectral stimuli.
The most saturated green has a spectrum with a wavelength range between 500
and 560 nm. There are various shades of greens; the hue shifts toward cyan if we
broaden the short-wavelength end of the spectrum and the hue shifts toward yellow
if we broaden the long-wavelength end.
The most saturated blue has a spectrum of [400, 470] nm. By shifting a blue spectrum with a fixed bandwidth to longer wavelengths, the hue gradually changes to cyan. We still get a blue color when the long wavelength ends around 510 nm. Similar to shifting, the hue gradually changes to cyan by broadening the spectrum at the long-wavelength side. However, broadening at the short-wavelength side is not sensitive to hue shift because the CMF is very weak at the blue end. We still get a blue color with a bandwidth of [390, 520] nm.
The most saturated cyan has a spectrum of [400, 560] nm. By broadening the
cyan spectrum at the long-wavelength side, the hue shifts toward green. It is still
considered as cyan with a bandwidth of [400, 610] nm. The change at the short-
wavelength side is not sensitive to hue shift. However, the lower bound should be
no higher than 430 nm; otherwise the hue will be too greenish. In order to produce
a wider range of greens, the cyan range should be extended to [390, 580] nm.
The most saturated magenta has a spectrum of [400, 460] and [600, 700]
nm. By extending the short-wavelength side of the red component, the hue shifts
from purple/blue through magenta toward red. By broadening the long-wavelength
side of the blue component, the hue remains as magenta color with a diminished
chroma. By reducing the blue component, the hue shifts toward red.
The most saturated yellow has a spectrum of [540, 620] nm. By broadening
the yellow spectrum at the long-wavelength side, the hue becomes warmer (toward
red). It is still considered as yellow with [540, 700] nm bandwidth. By broaden-
ing the short-wavelength side, the hue becomes cooler (toward green). We still get
yellow with a bandwidth of [490, 700] nm. To make a broad range of greens, espe-
cially the darker bluish-greens, the yellow spectrum should be extended to lower
wavelengths. Thus, the cool yellows are preferred.
The recommended spectral ranges for RGB and CMY block dyes are given in Table 7.1 and are used for the subsequent color-mixing computations. These theoretical spectral ranges of block dyes may serve as useful guidance for designing and producing real colorants.

Figure 7.10 Three-dimensional gamut boundary of block dyes in CIELAB space.

Table 7.1 Block-dye sets used in the computation of the gamut size.

                 RGB set 1       RGB set 2       CMY set 1            CMY set 2
Blue/Cyan        [400, 470] nm   [390, 500] nm   [400, 560] nm        [390, 590] nm
Green/Magenta    [500, 560] nm   [500, 600] nm   [400, 460] and       [390, 490] and
                                                 [600, 700] nm        [590, 750] nm
Red/Yellow       [600, 680] nm   [600, 720] nm   [540, 620] nm        [490, 750] nm
Interestingly, the chromaticity coordinates of RGB block dyes are quite close to those of certain standard RGB primaries such as NTSC/RGB or sRGB (see Table 7.2). This seems to imply that behind a simple chromaticity representation of a primary there is an associated block-dye spectrum. This association is not surprising because many RGB primaries are spectral stimuli, such as the CIE primaries. The chromaticity coordinates of a primary can be matched by a block dye if we finely tune the spectrum on a wavelength-by-wavelength basis.
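The block-dye spectra of Table 7.1 and the complementary spectra of Eq. (7.9) are straightforward to construct on a sampled wavelength grid (a sketch; the 10-nm grid and helper names are mine):

```python
# Build block-dye spectra on a 390-750 nm grid at 10-nm samples.
WL = list(range(390, 760, 10))

def block_dye(*bands):
    """Unit reflectance inside the given [lo, hi] nm bands, zero elsewhere."""
    return [1.0 if any(lo <= w <= hi for lo, hi in bands) else 0.0
            for w in WL]

def complement(S):
    """Complementary spectrum, Eq. (7.9)."""
    return [1.0 - s for s in S]

red_set1 = block_dye((600, 680))                  # Table 7.1, RGB set 1 red
magenta_set1 = block_dye((400, 460), (600, 700))  # Table 7.1, CMY set 1 magenta
green_like = complement(magenta_set1)             # unit reflectance outside the magenta bands
```

A two-band colorant such as magenta is simply the union of its pass bands, and its complement passes the middle of the spectrum.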
7.7.2 Additive color mixing of block dyes
We compute the color mixing of the most saturated RGB block dyes (RGB set 1)
to obtain the color gamut. For comparison purposes, we perform the same additive
mixing on RGB set 2 and CMY set 2 with broader spectra (see Table 7.1). Equation (7.12) gives the formula for additive mixing; it is a linear combination of block dyes at a given wavelength:
P(λ) = w_r P_r(λ) + w_g P_g(λ) + w_b P_b(λ), (7.12)

where w_r, w_g, and w_b are the weights or concentrations of the red, green, and blue components, respectively, in the mixture, and P_r, P_g, and P_b are the respective reflectances of the red, green, and blue dyes. The computation indicates that RGB set 1 indeed has the largest color gamut. Two-color mixtures lie perfectly on the lines of a triangle, indicating that the additive mixing obeys Grassmann's law regardless of the type of block dyes, whether RGB or CMY (see Fig. 7.11). Converting to CIELAB, the data no longer lie on straight lines (see Fig. 7.12), indicating that the common practice of connecting the R, Y, G, C, B, and M primary colors with straight lines is not a correct way to define the color-gamut size in a projected CIELAB a*-b* plot.
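Equation (7.12) is a per-wavelength linear combination, and because the tristimulus integration of Eq. (7.10) is itself linear in the spectrum, the tristimulus values of a mixture are the same weighted combination of the primaries' tristimulus values; this is why two-color additive mixtures fall on straight lines in the chromaticity diagram. A minimal sketch (toy three-band spectra, not real measurements):

```python
def additive_mix(weights, spectra):
    """Eq. (7.12): weighted sum of primary reflectances per wavelength."""
    return [sum(w * S[i] for w, S in zip(weights, spectra))
            for i in range(len(spectra[0]))]

# Toy three-band spectra for the additive primaries (short, mid, long band).
Pb, Pg, Pr = [1, 0, 0], [0, 1, 0], [0, 0, 1]
print(additive_mix([0.5, 0.5, 0.0], [Pr, Pg, Pb]))  # [0.0, 0.5, 0.5]
```

The half-and-half red-green mixture splits its power between the mid and long bands, a desaturated yellow lying midway between the red and green primaries.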
7.7.3 Subtractive color mixing of block dyes
Table 7.2 The similarity between block dyes and standard RGB primaries.

                   Red               Green             Blue
                   x       y         x       y         x       y
Block RGB set 1    0.6787  0.3211    0.2105  0.7138    0.1524  0.0237
NTSC/RGB           0.670   0.330     0.210   0.710     0.140   0.080
Block RGB set 2    0.6791  0.3207    0.3323  0.6224    0.1393  0.0521
sRGB               0.640   0.330     0.300   0.600     0.150   0.060

Figure 7.11 Two-dimensional gamut boundaries of block dyes in the chromaticity diagram.
Figure 7.12 Two-dimensional gamut boundaries of block dyes in CIELAB space.

We also compute the color gamut formed by the most saturated CMY block dyes (CMY set 1) and CMY set 2 in subtractive mixing. The reflectances of the
three components are converted to densities by taking the logarithm base 10 (see Chapter 15 for the definition of density) and weighted by their corresponding fractions; the three components are then summed. The resulting total density is converted back to reflectance by taking the antilogarithm. The logarithmic conversion from reflectance to optical density poses a problem for block dyes because they have zero reflectance at many wavelengths. Mathematically, the logarithm of zero is undefined. Therefore, a density value must be artificially assigned to zero reflectance. We choose a maximum density of 4, which is high enough for most, if not all, practical applications. The individual densities of the subtractive primaries are weighted and combined to give the density of the mixture at a given wavelength. Finally, the density is converted back to reflectance for computing the tristimulus and CIELAB values via Eq. (7.11).
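The density-domain subtractive mixing just described can be sketched as follows (the maximum density of 4 follows the text; the three-band spectra are toy stand-ins for illustration):

```python
import math

D_MAX = 4.0  # density assigned to zero reflectance, as chosen in the text

def to_density(R):
    """Reflectance to optical density, clipped at D_MAX (log of 0 is undefined)."""
    return D_MAX if R <= 0.0 else min(-math.log10(R), D_MAX)

def subtractive_mix(weights, spectra):
    """Per wavelength: weighted sum of densities, then back to reflectance."""
    out = []
    for i in range(len(spectra[0])):
        d = sum(w * to_density(S[i]) for w, S in zip(weights, spectra))
        out.append(10.0 ** (-d))
    return out

# Toy 3-band spectra (short, mid, long band): full cyan over full yellow
# leaves only the shared middle (green) band at high reflectance.
cyan, yellow = [1.0, 1.0, 0.0], [0.0, 1.0, 1.0]
print(subtractive_mix([1.0, 1.0], [cyan, yellow]))  # ~[0.0001, 1.0, 0.0001]
```

The AND-like behavior of subtractive block-dye mixing is visible here: a band survives at high reflectance only where every participating dye passes it.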
Using a maximum density of 4, we compute the color gamut of CMY block-dye sets 1 and 2 under D65 in CIEXYZ and CIELAB spaces. In CIEXYZ space, CMY set 2 forms a nearly perfect triangle with vertices located in the R, G, and B regions (see Fig. 7.13). All two-color mixtures lie on the lines of the triangle. CMY set 1 does not give a triangle; instead, it gives a pentagon, where two-color mixtures fall on lines connecting the R, G, C, B, and M vertices. The resulting color gamut closely resembles the chromaticity diagram of the primaries employed in Donaldson's six-primary colorimeter [8,9]. The difference between CMY sets 1 and 2 may be attributed to the fact that the complementary spectra of CMY set 1 overlap, whereas the complementary spectra of CMY set 2 form a continuous white spectrum with no overlapping. These computations indicate that Grassmann's additivity is followed, if not strictly obeyed, for block dyes, even in subtractive mixing. By transforming to CIELAB, both dye sets give rather curved boundaries with six vertices (see Fig. 7.14). These results reaffirm that the straight-line connection of six primaries gives a misleading color gamut in CIELAB space because it is the projection of a 3D color gamut, as shown in Fig. 7.15 for CMY set 1.
Figure 7.13 Two-dimensional gamut boundaries of subtractive block dyes in the chromaticity diagram.
In addition, we compare the computed block-dye color gamut with two measured real inks. The real inks give a fairly straight hexagonal shape (with some slight deviations) in the chromaticity diagram (see Fig. 7.16), indicating that the practice of connecting six primaries with straight lines is a reasonable approximation there. They give a slightly curved polygon in CIELAB (see Fig. 7.17), indicating that they do not obey Grassmann's law; however, they can be modeled by other color-mixing theories that take the complex phenomena of absorption, reflection, and transmission into account [10].
In summary, spectral and optimal stimuli obey Grassmann's law of color mixing. Likewise, block dyes, whether an RGB or a CMY set, obey Grassmann's law of additive color mixing. Surprisingly, subtractive block dyes in subtractive color mixing also follow Grassmann's law closely, whereas real inks do not. From this computation, the common practice of connecting six primaries in CIELAB space to give a projected 2D color gamut is not a strictly correct way of defining a color gamut. However, it is a close approximation for real colorants. For a better definition of the color gamut, more two-color mixtures should be measured and plotted in CIELAB space.
Figure 7.14 Two-dimensional gamut boundaries of subtractive block dyes in CIELAB space.
Figure 7.15 Three-dimensional gamut boundary of subtractive block dye set 1 in CIELAB
space.
Figure 7.16 Two-dimensional gamut boundaries of inks in a chromaticity diagram.
Figure 7.17 Two-dimensional gamut boundaries of inks in CIELAB space.
7.8 Color Gamut Boundary of Imaging Devices
The device color-gamut boundary is important in color reproduction for producing both accurate and pleasing outputs. For accurate color reproduction, we need to establish the relationship between device signals and device-independent color specifications (preferably CIE specifications) for in-gamut rendering. For pleasing color renditions from devices with different color-gamut boundaries, the gamut boundary is the essential information required for color-gamut mapping. An accurate description of the color gamut is a necessary component for the simulation of printing and proofing and for the optimization of color output for display. An inaccurate color-gamut boundary can cause color mismatches that create image artifacts and color errors.
The color-gamut size of imaging devices is not as well defined as that of the ideal block dyes or the CIE color boundaries. Each device has its own color boundary, depending on many factors such as the physical properties of the medium, the absorption characteristics of the colorants, the viewing conditions, the halftone technique, and the imaging process. Moreover, color gamuts are often represented in two dimensions, such as the chromaticity diagram or the CIELAB a*-b* plot. This obscures the real three-dimensionality of the color gamut, and it is the 3D description of the gamut that is useful. The 2D plot omits the range of lightness, which is an important dimension of the color gamut. Sometimes this omission causes out-of-gamut colors to be misjudged as in-gamut when they appear to be inside the projected 2D gamut but actually are not.
A chromaticity diagram shows the boundary of the color gamut in a 2D plot. It is a projected color gamut because the luminance (or lightness) axis is not presented. The horseshoe shape is the ideal gamut formed by the purest colors. In the real world, these colors do not exist; real color devices have much smaller gamuts. Figure 7.18 depicts the color gamuts of a typical monitor, printer, and film in a chromaticity diagram [11]. A large portion of the monitor and printer gamuts overlap, but there are areas that the monitor can display that the printer cannot render, and vice versa. This poses a problem in color reproduction; some kind of gamut mapping is needed.
Color-gamut mapping addresses one of the most difficult problems in color reproduction. Therefore, it is not surprising that numerous methods for determining the color-gamut boundary have been proposed. Generally, a method consists of two parts: a properly designed test target and a device gamut model or interpolation method.
Figure 7.18 Color gamuts of a typical monitor, printer, and film in a chromaticity diagram [11].
122 Computational Color Technology
7.8.1 Test target of color gamut
The color gamut is dened by colors of the full-strength primary colors and their
two-color mixtures. A typical test target (or gamut-boundary descriptor) contains a
series of color patches (or coordinates) with known CIE specications that dene
a set of points on the boundary surface. It can be as simple as eight colors, having
white, red, green, blue, cyan, magenta, yellow, and black in the solid area coverage,
or as complex as several thousand colors. Ideally, the points should be spaced with
some perceptual uniformity. The larger the number of points, the more accurate the
boundary descriptor can be. However, oversampling the boundary may place some
points very close together such that the inherent uncertainty in the color rendi-
tion and color measurement may cause problems for interpolation and subsequent
gamut mapping.
There are several standard test targets, such as ISO 12640 (IT8.7/3) and ISO 12641 (IT8.7/1), designed for device characterization.[12,13] They have a limited number of patches at the colorant boundary and limited steps in lightness and hue. In view of these deficiencies, Green has designed a test target specifically for determining the color gamut boundary.[14] The test target consists of eighteen columns,
having constant proportions of the primary colorants for each column, and 24 rows
with approximately constant lightness. In a given column, each patch contains the
maximum available chroma at the lightness level of the row. A total of 24 lightness
steps were chosen between the smallest printable dot and a tertiary overprint with
100% of the dominant color, 50% of the complementary, and 100% of black. The
layout of the CMYK and RGB targets can be found in Green's paper.
7.8.2 Device gamut model and interpolation method
The second component is the device gamut model or interpolation method. For
printing devices, the gamut model can be based on physical models of color mixing
such as the Neugebauer equations,[15] the Yule-Nielsen model, the Beer-Lambert-Bouguer law, or the Kubelka-Munk theory.[16,17] These physical models will be discussed in Chapters 15-18 of this book. In addition, several analytical gamut models, such as the dot-gain model[18-21] or mathematical formulas,[22,23] are proposed for
general usage of input, display, and output devices.
Gustavson developed analytical models for physical and optical dot gains.[18-21]
The optical dot model includes nearly all paths of light interacting with a sub-
strate as well as the wavelength dependence. The physical dot model uses the
realistic imperfect, noncircular dot. The model is comprehensive but extremely
complex. Herzog and Hill developed a mathematical representation for any device
color gamut.[22]
The method is based on the general characteristics of the source
and destination gamuts. All color signals of a device are contained in a cube of unit strength for each primary, called the kernel gamut. The kernel gamut surface is
mapped onto the destination gamut surface in a manner such that the eight corners,
twelve edges, and six planes of the kernel gamut come to lie at the corresponding
Device-Dependent Color Spaces 123
corners, edges, and planes of the destination gamut. The mapping is achieved by
using analytical functions to distort the kernel gamut in closed forms. Mahy pro-
posed a printer model by assuming that there exists a continuous function between
device space (e.g., CMY or CMYK) and device-independent color space (e.g., CIE
spaces or colorimetric RGB space).[23] He employed the Beer-Lambert-Bouguer law as the printer model for the photographic process and used a quadratic function for cellular Neugebauer equations. He also took ink limitation into consideration. Physi-
cal and mathematical models allow the use of fewer color patches in the test target.
The problem is that none of these models are perfect in describing the color mixing
and color gamut size. Therefore, the use of a device model leads to errors in the
prediction of the color-space coordinates, whose magnitude depends on the model
used. Often, a model tends to give the largest errors at the gamut boundary and for
dark colors.
A more accurate prediction comes from experimental measurements of sur-
face colors together with an interpolation technique, provided that the test target
has an adequate number of surface colors. A variety of methods for calculating
the color gamut boundary have been proposed. Several methods are briey dis-
cussed; detailed information on these methods can be found in the original arti-
cles. Kress and Stevens measured a large set of samples spanning the printable
range of the device and used a spline interpolation to locate the gamut surface.[24]
This method could be used to compute gamuts of images and media. Braun and Fairchild used a triangulation and interpolation process to convert nonuniform color data from a device RGB cube into a CIELAB L*C*ab h*ab representation of the color gamut.[25] They took advantage of the fact that gamut mapping and visualization are better or more efficiently performed in cylindrical coordinates such as CIELAB L*C*ab h*ab. Triangulation of the data was performed by projecting the nonlinearly spaced L*C*ab h*ab data from the device RGB onto the L*-h*ab plane, grouping them into triangles using the inherent connectivity associated with the points from the RGB cube. The vertices of this mesh were measured or modeled from the surface of the RGB cube. A uniform L*-h*ab grid of C*ab values was interpolated using triangular linear interpolation from the data provided by the triangle list. This method provided a computational process for gamut-surface fitting from measured data that is independent of the gamut concavity or convexity. It also provided a goodness-of-fit measure to test the fitting of the gamut surface. For some CRT gamuts tested, the goodness expressed as an average delta-E*ab value was about one.
Computational methods were also developed to find the line gamut boundary (LGB), that is, the intersections between a gamut plane and a line along which mapping is to be carried out.[25-28] Morovic and Luo[28] summarized principal features of computing the LGB in a plane of constant hue and compared color gamuts of different media as follows:[28-31]

(1) Determine the 2D L*-C* gamut boundary at the hue angle of interest h* using the equation at this angle.
(2) Find the pair of nearest neighboring points that encompass the hue angle
in the gamut-boundary descriptor (GBD).
(3) For each pair, calculate the intersection of the L*-C* plane and the line connecting the two GBD points.
(4) Calculate the points on the L* axis where the surface defined by the GBD matrix intersects it.
This procedure results in a set of boundary points that form a polygon for a
given hue angle. The intersection of a given line and the polygon is determined as
follows:
(1) For each pair of neighboring points in the polygon, calculate the formula
of the boundary line between them.
(2) For each of these lines, calculate their intersection point with the given
line. If it lies between the two points from the polygon, then it is an LGB
point.
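The two-step intersection procedure above can be sketched in a few lines of code. The following Python sketch (function names are mine, not from the text) intersects a mapping line with each edge of the boundary polygon and keeps only the intersections that fall between neighboring boundary points:

```python
def seg_intersect(p1, p2, q1, q2):
    """Intersection of segment p1-p2 with the infinite line through q1-q2.
    Returns the intersection point if it lies within segment p1-p2, else None."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:          # parallel lines: no unique intersection
        return None
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    if 0.0 <= t <= 1.0:     # intersection lies between the two boundary points
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None

def line_gamut_boundary(polygon, a, b):
    """Intersect the mapping line a-b with each edge of the boundary polygon."""
    pts = []
    for i in range(len(polygon)):
        p = seg_intersect(polygon[i], polygon[(i + 1) % len(polygon)], a, b)
        if p is not None:
            pts.append(p)
    return pts
```

For example, intersecting a horizontal mapping line at L* = 0.5 with a unit-square boundary polygon in the C*-L* plane returns the crossing points on the two vertical edges.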
7.9 Color Gamut Mapping
With the gamut differences among a wide range of images that are being ac-
quired, displayed, and printed on various imaging devices, color gamut map-
ping is becoming ever more important. The need for understanding the scope of
the color gamut mapping and for coming up with an acceptable universal gamut
mapping has been recognized by CIE, which formed the CIE Technical Com-
mittee 8-03 on color gamut mapping led by Morovic and charged it with inves-
tigating this issue. In 1999, the committee published the comprehensive report
"Survey of Gamut Mapping Algorithms," reviewing 57 papers that appeared between 1977 and the first quarter of 1999.[32] Morovic also wrote a chapter named "Gamut Mapping" in the book Digital Color Imaging Handbook, edited by Sharma.[33]
The chapter contains various aspects of color gamut mapping such as
color gamut denition, gamut description and visualization, gamut boundary com-
putation, and gamut-mapping algorithms. It gives an overview of the recent de-
velopments and future directions, and is an excellent reference for research and
development.
Based on CIE definitions, color gamut mapping is a method for assigning col-
ors from the reproduction medium to colors from the original medium or image,
where the reproduction medium is defined as a medium for displaying, capturing,
or printing color information (e.g., CRT monitor, digital camera, scanner, printer
with associated substrate). The aim of color gamut mapping is to ensure a good
correspondence of overall color appearance between the original and the repro-
duction by compensating for the mismatch in size, shape, and location between
the original and reproduction gamuts.[33,34] The difficulties are (i) there is no estab-
lished model for quantifying the appearance of complex images and (ii) there is
no established measure for quantifying the difference between original and repro-
duction. Nonetheless, some empirical criteria are established for assessing image
quality and are given in Section 7.9.3.
7.9.1 Color-mapping algorithm
In general, color-mapping algorithms consist of two components: a directional
strategy and a mapping algorithm. The directional strategy decides in which color space to perform the color mapping, determines the reference white, computes the
gamut surface, and decides what color channel to hold constant and what channel
to compress first, or to do them simultaneously.[35]
A gamut-mapping algorithm provides the means for transforming input out-of-
gamut colors to the surface or inside of the output color gamut. There are two ways
of mapping: clipping and compression. Gamut-clipping algorithms only change
the out-of-gamut colors and leave in-gamut colors untouched. Clipping is char-
acterized by the mapping of all out-of-gamut colors to the surface of the output
gamut, and no change is made to input colors that are inside the output gamut. This
will map multiple colors into the same point (many-to-one mapping), which may lose fine details and may cause shading artifacts in some images. This phenomenon is known as "blocking." However, clipping tends to maintain the maximum image
saturation.
Gamut-compression algorithms are applied to all colors from the input im-
age to distribute color differences caused by gamut mismatch across the entire
input gamut range. Several compression methods are in use: linear, piecewise lin-
ear, nonlinear, and high-order polynomial compression.[32,35] Sometimes, a combination of different compression methods is used. Linear compression can be expressed as
L*o = t + s L*i,   (7.13a)

where

t = MIN(L*o),   (7.13b)

s = [MAX(L*o) - MIN(L*o)] / [MAX(L*i) - MIN(L*i)],   (7.13c)

MIN is the minimum operator, MAX is the maximum operator, and L*i and L*o are the lightness values of the input and output, respectively. A similar expression can be obtained for chroma at a constant hue angle. Other linear compressions in the CIELUV and CIEXYZ spaces have been suggested.[36,37]
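As an illustrative sketch (not from the text), Eq. (7.13) amounts to an affine map of the input lightness range onto the output range. The helper below writes it in a slightly more general form that also handles a nonzero input minimum:

```python
def linear_lightness_map(L_i, in_min, in_max, out_min, out_max):
    """Linear lightness compression in the spirit of Eq. (7.13): s is the
    ratio of the output range to the input range, and the input is shifted
    so that in_min maps to out_min.  This reduces to L*o = t + s*L*i
    exactly when MIN(L*i) = 0."""
    s = (out_max - out_min) / (in_max - in_min)  # Eq. (7.13c)
    return out_min + s * (L_i - in_min)          # Eq. (7.13a) with t = out_min
```

Mapping an input range [0, 100] onto a printer range [10, 90], for instance, sends the endpoints to the output limits and a midtone of L* = 50 to 50.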
Piecewise linear compression is a variation of linear compression in which the
whole range is divided into several regions. Each region is linearly compressed
with its own compression ratio (or slope). Unlike simple linear compression that
applies a global, uniform ratio to all colors, piecewise linear compression provides
different ratios depending on the attributes of input colors to which they are ap-
plied.
Nonlinear compression, in principle, can have any functions such as the cubic,
spline, or a high-order polynomial. In practice, however, the middle tone is usually
retained, while the shadow and highlight regions are nonlinearly mapped. Good
results are obtained by using a knee function as proposed by Wallace and Stone.[38]
It is a soft clipping, accomplished by using a tangent to a linear function near the
gray axis, where it then approaches a horizontal line near the maximum output sat-
uration. This is designed to minimize the loss of detail that occurs with clipping,
while retaining the advantage of reproducing most of the common gamut colors ac-
curately. Nonlinear mappings maintain the shading, but can cause an objectionable
decrease in saturation.
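A knee function of this kind can be sketched as follows. This is one common soft-clipping shape (identity below the knee, tangent to the identity at the knee, asymptotic to the gamut maximum above it), not necessarily the exact function of Wallace and Stone:

```python
import math

def knee_compress(c, knee, c_max_out):
    """Soft ('knee') compression of chroma: values below the knee pass
    through unchanged; above the knee the curve leaves the identity line
    tangentially and approaches c_max_out asymptotically, so detail is
    compressed rather than clipped outright."""
    if c <= knee:
        return c
    span = c_max_out - knee
    return knee + span * (1.0 - math.exp(-(c - knee) / span))
```

In-gamut chroma values are untouched, while even very large out-of-gamut values stay strictly below the output maximum.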
7.9.2 Directional strategy
Color gamut mapping is most effectively performed in color spaces where the lu-
minance and chrominance are independent of each other, such as CIELAB and
CIELUV. Popular directional strategies for CIELAB or CIELUV are as follows:
Lines of constant perceptual attribute predictors
As an example, we present the sequential L* and C* mapping at constant h*.[35]
A point p of the input color (see Fig. 7.19) located outside of the output color
gamut (solid lines) is to be compressed into the output gamut. L* is first mapped to the compressed L* (dotted lines). If clipping is used, the point will lie on the dashed line. If a linear compression is used, the mapped point p_l will be inside the dotted line. Some researchers have stated that linear lightness mapping is the best technique.[37,39,40]
Jorgensen, however, had a different opinion. He suggested
achieving this by selecting an area of interest within the image, reproducing the
range of lightness within that area with an equal range of lightness on the print, and
compressing elsewhere.[41] Wallace and Stone also noted the necessity of different lightness mappings for each image.[38]
After the lightness mapping from point p to p_1, the mapped color p_1 is then moved horizontally in the L*-C* diagram to the final destination for the chroma compression. It is desirable to maintain hue constancy, although it has been noted that CIELAB and CIELUV do not exhibit perfect hue constancy.[38,42]
If clipping is used, the mapped color p_m will be located at the boundary of the output color gamut. If a linear or nonlinear compression is used, the compressed color p_n will be located somewhere inside the output color gamut (see Fig. 7.19). The location depends on the slope and intercept for linear mapping and the equation or lookup table used in the nonlinear mapping. Viggiano and Wang performed an experiment in CIELAB where lightness and chroma could be varied independently, and found that the linear compression in chroma with lightness compression was not preferred.[37]

Figure 7.19 Sequential L* and C* mapping at constant h*.
Lines toward a single center of gravity
A simultaneous L* and C* mapping to a fixed anchor point L* = 50 at constant h* is given as an example. Figure 7.20 gives the mapping to the anchor point L* = 50 and C* = 0.[35] If clipping is used, the out-of-gamut color will be located at the intersection of the output boundary and the line that connects the out-of-gamut color and the anchor point. If linear compression is used, one draws the line connecting the anchor point and the point of interest, then makes the line intersect both input and output boundaries. Thus, the distances from the output and input boundaries to the anchor point are obtained. One can then calculate the ratio of the distances. The L* and C* of the out-of-gamut color are weighted by this ratio. For nonlinear compression, the location is dependent on the function employed. Meyer and Barth found that this technique has a tendency to lighten shadows.[43]
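A minimal sketch of the clipping and linear variants of this strategy, assuming the boundary distances d_in and d_out along the connecting line have already been computed (for example, with a line-gamut-boundary routine):

```python
def map_toward_anchor(L, C, d_in, d_out, anchor_L=50.0, mode="clip"):
    """Map a color toward the anchor (L* = anchor_L, C* = 0) along the
    connecting line.  d_in / d_out are the distances from the anchor to the
    input / output gamut boundaries along this line (assumed precomputed).
    'clip' places out-of-gamut colors on the output boundary and leaves
    in-gamut colors alone; 'linear' scales all colors by d_out/d_in.
    This is a simplified sketch of the strategy described in the text."""
    dL, dC = L - anchor_L, C          # direction from the anchor
    d = (dL * dL + dC * dC) ** 0.5    # distance of the color from the anchor
    if mode == "clip":
        r = min(1.0, d_out / d) if d > 0 else 0.0
    else:                             # linear compression
        r = d_out / d_in
    return anchor_L + r * dL, r * dC
```

With d_out = 20, an in-gamut color closer than 20 units to the anchor is returned unchanged under clipping, while a color 40 units away is pulled onto the output boundary.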
Lines toward variable centers of gravity
A good example of a variable center of gravity is the lightness compression from
center toward the lightness of the cusp on the lightness axis, where the cusp varies
with the hue angle; thus, the center of gravity has to change accordingly.
Figure 7.20 Simultaneous L* and C* mapping to an anchor point at constant h*.
Lines toward the nearest color in the reproduction gamut
Simultaneous L*, C*, and h* mapping to minimize the color difference between two gamuts.[39,44] Liang has developed an optimum algorithm, modified vector shading, to minimize the matching error and to perform neighborhood gamut compression.[45]
Image-dependent gamut mapping
The gamut of a given image is compressed to a device rather than mapping all
images from one device onto another device. This approach is less general but has
the flexibility to tune the color reproduction for this particular image; therefore, it
produces better results.
All of the directional strategies mentioned above can be used for image-dependent
gamut mapping. However, they require maintaining some form of image-gamut
data, such as the maximum and minimum chrominance values, during the processing.[36]
Stone, Cowan, and Beatty developed an interactive gamut-mapping
technique by using computer and human judgment.[46]
7.9.3 Criteria of gamut mapping
A set of criteria for assessing the image-quality aspect of color gamut mapping is
proposed as follows, based on traditional color reproduction and psychophysical
principles:[46,47]
(1) The neutral axis of the image should be preserved.
(2) Maximum luminance contrast is desirable.
(3) Few colors should lie outside the destination gamut to reduce the number
of out-of-gamut colors by excluding some extremes.
(4) Hue and saturation shifts should be minimized.
(5) It is better to increase than to decrease the color saturation by preserving
chroma differences presented in the original.
(6) It is more important to coordinate the relationship between colors present
than their precise value.
7.10 CIE Guidelines for Color Gamut Mapping
The report from the CIE TC 8-03 committee, "Survey of Gamut Mapping Algorithms," reviewed a wide variety of gamut-mapping strategies together with some evaluations of color gamut-mapping techniques.[32] Four salient trends in color
gamut mapping were identified as follows:
(1) Image-dependent compression techniques are preferred to image-indepen-
dent methods. For large amounts of compression, the preferred technique
is image dependent.[35]
(2) Clipping algorithms are preferred to compression methods, given the same
constant color attributes; specifically, the clipping of chroma to the border of the realizable output gamut at constant lightness and hue.[38] For small
amounts of compression, Hoshino and Berns find that the preferred technique
is a soft clipping algorithm that maps the upper 95% without adjustment
and compresses the rest linearly. This mapping algorithm is most similar to
clipping.
(3) A vast majority of algorithms start with uniform overall lightness com-
pression and hue preservation, suggesting that lightness and hue are more
important than saturation.
(4) The use of different mapping methods for different parts of the color space.
For example, Spaulding and colleagues describe a method for specifying
different color-mapping strategies in various regions of the color space,
while providing a mechanism for smooth transitions between the different
regions.[48]
Because of the importance, difficulty, and complexity of gamut mapping, CIE Technical Committee 8-03 has published a technical report, "Guidelines for the Evaluation of Gamut-Mapping Algorithms," to provide guidance for the evaluation
of the performance of gamut-mapping algorithms that will lead to the development
and recommendation of an optimal solution for cross-device and cross-media im-
age reproduction.[49]
The CIE committee provides guidance and supporting material
for the following:
(1) The test images to be used
(2) The media on which they are to be reproduced
(3) The viewing conditions to be employed
(4) The measurement procedures to be followed
(5) The gamut-mapping algorithms to be applied
(6) The way gamut boundaries are to be calculated
(7) The color spaces in which the mapping is to be done, and
(8) The experimental method for carrying out the evaluation.
The guidelines also contain example workflows that show how they are to be
applied under different circumstances and psychophysical evaluation processes.
References
1. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, p. 118 (1982).
2. Color Encoding Standard, XNSS 288811, Xerox Corp., Xerox Systems Institute, Sunnyvale, CA (1989).
3. N. Rudaz, R. D. Hersch, and V. Ostromoukhov, Specifying color differences in a linear color space (LEF), Proc. IS&T/SID 97 Color Imaging Conf., Scottsdale, AZ, Nov. 17-20, pp. 197-202 (1997).
4. N. Rudaz and R. D. Hersch, Protecting identity documents by microstructure color difference, J. Electron. Imaging 13(2), pp. 315-323 (2004).
5. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, p. 30 (1997).
6. H. R. Kang, Water-based ink-jet ink. I. Formulation, J. Imag. Sci. 35, pp. 179-188 (1991).
7. D. B. Judd and G. Wyszecki, Color in Business, Science and Industry, 3rd Edition, Wiley, New York, p. 332 (1975).
8. R. Donaldson, Spectrophotometry of fluorescent pigments, Brit. J. Appl. Phys. 5, pp. 210-224 (1954).
9. D. B. Judd and G. Wyszecki, Color in Business, Science and Industry, 3rd Edition, Wiley, New York, pp. 195-197 (1975).
10. H. R. Kang, Kubelka-Munk modeling of ink jet ink mixing, J. Imaging Sci. Techn. 17, pp. 76-83 (1991).
11. AGFA, The Secrets of Color Management, AGFA Educational Publ., Randolph, MA, p. 9 (1997).
12. ISO 12640: 1997, Graphic technology - Prepress digital data exchange - CMYK standard colour image data (SCID).
13. ISO 12641: 1997, Graphic technology - Prepress digital data exchange - Colour targets for input scanner calibration (SCID).
14. P. J. Green, A test target for defining media gamut boundaries, Proc. SPIE 4300, pp. 105-113 (2000).
15. M. Mahy, Calculation of color gamuts based on the Neugebauer model, Color Res. Appl. 22, pp. 365-374 (1997).
16. P. Engeldrum and L. Carreira, The determination of the color gamut of dye coated paper layers using Kubelka-Munk theory, Soc. Photo. Sci. and Eng. Annual Conf., pp. 3-5 (1984).
17. P. Engeldrum, Computing color gamuts of ink-jet printing systems, SID 85 Digest, pp. 385-388 (1985).
18. S. Gustavson, The color gamut of halftone reproduction, Proc. IS&T/SID Color Imaging Conf., Scottsdale, AZ, Vol. 4, pp. 80-85 (1996).
19. S. Gustavson, Dot gain in color halftones, Linköping Studies in Science and Technology, Dissertation No. 492, Image Processing Laboratory, Dept. of Electrical Engineering, Linköping University, Sweden (1997).
20. S. Gustavson, Color gamut of halftone reproduction, J. Imaging Sci. Techn. 41, pp. 283-290 (1997).
21. H. R. Kang, Color mixing models, Digital Color Halftoning, SPIE Press, Bellingham, WA, Chap. 6, pp. 83-111 (1999).
22. P. G. Herzog and B. Hill, A new approach to the representation of color gamuts, Proc. IS&T/SID Color Imaging Conf., Scottsdale, AZ, Vol. 3, pp. 78-81 (1995).
23. M. Mahy, Gamut calculation of color reproduction devices, Proc. IS&T/SID Color Imaging Conf., Scottsdale, AZ, Vol. 4, pp. 145-150 (1996).
24. W. Kress and M. Stevens, Derivation of 3-dimensional gamut descriptors for graphic arts output devices, TAGA Proc., Sewickley, PA, pp. 199-214 (1994).
25. G. Braun and M. D. Fairchild, Techniques for gamut surface definition and visualization, Proc. IS&T/SID Color Imaging Conf., Scottsdale, AZ, Vol. 5, pp. 147-152 (1997).
26. P. G. Herzog, Analytical color gamut representations, J. Imaging Sci. Techn. 40, pp. 516-521 (1996).
27. P. G. Herzog, Further development of the analytical color gamut representations, Proc. SPIE 3300, pp. 118-128 (1998).
28. J. Morovic and M. R. Luo, Developing algorithms for universal colour gamut mapping, Colour Engineering: Vision and Technology, L. W. MacDonald (Ed.), Wiley, New York, pp. 253-283 (1999).
29. J. Morovic and M. R. Luo, Calculating medium and image gamut boundaries for gamut mapping, Color Res. Appl. 25, pp. 394-401 (2000).
30. J. Morovic and P.-L. Sun, How different are colour gamuts in cross-media color reproduction? Proc. Conf. Colour Image Science, John Wiley & Sons, Ltd., Chichester, West Sussex, England, pp. 169-182 (2000).
31. J. Morovic and P.-L. Sun, The influence of image gamuts on cross-media color image reproduction, Proc. IS&T/SID Color Imaging Conf., Scottsdale, Vol. 8, pp. 324-329 (2000).
32. J. Morovic and M. R. Luo, The fundamentals of gamut mapping: A survey, J. Imaging Sci. Techn. 45, pp. 283-290 (2001).
33. J. Morovic, Gamut mapping, Digital Color Imaging Handbook, G. Sharma (Ed.), CRC Press, Boca Raton, pp. 639-685 (2003).
34. J. Morovic, To Develop a Universal Gamut Mapping Algorithm, Ph.D. thesis, University of Derby (1998).
35. T. Hoshino and R. S. Berns, Color gamut mapping techniques for color hard copy images, Proc. SPIE 1909, pp. 152-165 (1993).
36. J. Gordon, R. Holub, and R. Poe, On the rendition of unprintable colors, Proc. 39th Annual TAGA Conf., San Diego, TAGA, Sewickley, PA, pp. 186-195 (1987).
37. J. A. S. Viggiano and C. J. Wang, A comparison of algorithms for mapping color between media of different luminance ranges, TAGA Proc. 1992, TAGA, Sewickley, PA, Vol. 2, pp. 959-974 (1992).
38. W. E. Wallace and M. C. Stone, Gamut mapping computer generated imagery, Image Handling and Reproduction Systems Integration, Proc. SPIE 1460, pp. 20-28 (1991).
39. E. G. Pariser, An investigation of color gamut reduction techniques, IS&T's 2nd Symp. on Electronic Prepress Technology and Color Proofing, pp. 105-107 (1991).
40. A. Johnson, Techniques for reproducing images in different media: Advantages and disadvantages of current methods, TAGA Proc., Sewickley, PA, pp. 739-755 (1992).
41. G. W. Jorgensen, Improved black and white halftones, GATF Research Project Report No. 105 (1976).
42. P. Hung, Non-linearity of hue loci in CIE color spaces, Konica Tech. Report 5, pp. 78-83 (1992).
43. J. Meyer and B. Barth, Color gamut matching for hard copy, SID 89 Digest, pp. 86-89 (1989).
44. R. S. Gentile, E. Walowit, and J. P. Allebach, A comparison of techniques for color gamut mismatch compensation, J. Imaging Tech. 16, pp. 176-181 (1990).
45. Z. Liang, Generic image matching system (GIMS), Color Hard Copy and Graphic Arts, Proc. SPIE 1670, pp. 255-265 (1992).
46. M. C. Stone, W. B. Cowan, and J. C. Beatty, Color gamut mapping and the printing of digital color images, ACM Trans. Graphics 7, pp. 249-292 (1988).
47. L. W. MacDonald, Gamut mapping in perceptual color space, Proc. 1st IS&T/SID Color Imaging Conf., Scottsdale, AZ, Vol. 1, pp. 193-196 (1993).
48. K. E. Spaulding, R. N. Ellson, and J. R. Sullivan, Ultracolor: A new gamut mapping strategy, Device-Independent Color Imaging II, Proc. SPIE 2414, pp. 61-68 (1995).
49. CIE, Guidelines for the evaluation of gamut mapping algorithms, Technical Report 15x (2003).
Chapter 8
Regression
Converting a color specification from one space to another requires finding the
links of the mapping. A frequently used link is the polynomial method. Polynomial
regression is based on the assumption that the correlation between color spaces can
be approximated by a set of simultaneous equations.
This chapter describes the mathematical formulation of the regression method
using matrix notation. Examples of the forward and backward color transforms us-
ing the regression method are given, and the extension to spectral data is discussed.
Finally, the method is tested and conversion accuracies are reported.
8.1 Regression Method
The schematic diagram of the regression method is depicted in Fig. 8.1. Sample
points in the source color space are selected and their color specifications in the
destination space are measured. An equation is chosen for linking the source and
destination color specifications; examples of polynomials with three independent
variables (x, y, z) are given in Table 8.1.
A regression is performed on selected points with known color specifications in both source and destination spaces to derive the coefficients of the polyno-
mial. The only requirement is that the number of points should be higher than the
number of polynomial terms; otherwise, there are no unique solutions to the si-
multaneous equations because there are more unknown variables than equations.
Once the coefficients are derived, one can plug the source specifications into
simultaneous equations to compute the destination specications.
The so-called polynomial regression is, in fact, an application of the multiple
linear regression of m variables, where m is a number greater than the number
of independent variables.[1]
The general approach of the linear regression with m
variables is given as follows:
p_i(q) = c1 q_1i + c2 q_2i + ... + cm q_mi.   (8.1a)

Polynomials can be expressed in the vector form

p_i = Q_i^T C = C^T Q_i.   (8.1b)
Figure 8.1 Schematic diagram of the regression method.
Table 8.1 The polynomials for color space conversion.

1. p(x, y, z) = c1 x + c2 y + c3 z
2. p(x, y, z) = c0 + c1 x + c2 y + c3 z
3. p(x, y, z) = c1 x + c2 y + c3 z + c4 xy + c5 yz + c6 zx
4. p(x, y, z) = c0 + c1 x + c2 y + c3 z + c4 xy + c5 yz + c6 zx + c10 xyz
5. p(x, y, z) = c1 x + c2 y + c3 z + c4 xy + c5 yz + c6 zx + c7 x^2 + c8 y^2 + c9 z^2
6. p(x, y, z) = c0 + c1 x + c2 y + c3 z + c4 xy + c5 yz + c6 zx + c7 x^2 + c8 y^2 + c9 z^2 + c10 xyz
7. p(x, y, z) = c0 + c1 x + c2 y + c3 z + c4 xy + c5 yz + c6 zx + c7 x^2 + c8 y^2 + c9 z^2 + c10 xyz + c11 x^3 + c12 y^3 + c13 z^3
8. p(x, y, z) = c0 + c1 x + c2 y + c3 z + c4 xy + c5 yz + c6 zx + c7 x^2 + c8 y^2 + c9 z^2 + c10 xyz + c11 x^3 + c12 y^3 + c13 z^3 + c14 xy^2 + c15 x^2 y + c16 yz^2 + c17 y^2 z + c18 zx^2 + c19 z^2 x
Vector Q_i has m elements indicating the number of terms in the polynomial; each element represents an independent variable or a multiplication of independent variables. Vector C is the corresponding coefficient vector. An example of applying the polynomial regression to three independent variables, R, G, and B, with nine polynomial terms can have the form q1 = R, q2 = G, q3 = B, q4 = RG, q5 = GB, q6 = BR, q7 = R^2, q8 = G^2, and q9 = B^2. These q values are derived from inputs of the three independent variables, where the output response p_i is given by the corresponding color value in the destination space.
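The nine-term expansion just described can be written as a small helper; as a sketch (the function name is mine):

```python
def q_terms(R, G, B):
    """Nine-term polynomial expansion of (R, G, B) used as the regressor
    vector Q_i: q1=R, q2=G, q3=B, q4=RG, q5=GB, q6=BR,
    q7=R^2, q8=G^2, q9=B^2."""
    return [R, G, B, R * G, G * B, B * R, R * R, G * G, B * B]
```

Each sample point contributes one such vector as a column of the matrix Q used in the regression below.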
An explicit expression of Eq. (8.1) is given as follows:

p_1 = c1 q_11 + c2 q_21 + c3 q_31 + ... + cj q_j1 + ... + cm q_m1
p_2 = c1 q_12 + c2 q_22 + c3 q_32 + ... + cj q_j2 + ... + cm q_m2
p_3 = c1 q_13 + c2 q_23 + c3 q_33 + ... + cj q_j3 + ... + cm q_m3
...
p_k = c1 q_1k + c2 q_2k + c3 q_3k + ... + cj q_jk + ... + cm q_mk.   (8.2a)
The scalar k is the number of responses (k > m). Equation (8.2a) can be put into vector-matrix notation:

[p_1]   [q_11  q_21  q_31  ...  q_m1] [c_1]
[p_2]   [q_12  q_22  q_32  ...  q_m2] [c_2]
[p_3] = [q_13  q_23  q_33  ...  q_m3] [c_3]   (8.2b)
[...]   [ ...   ...   ...  ...   ...] [...]
[p_k]   [q_1k  q_2k  q_3k  ...  q_mk] [c_m]

or

P = Q^T C.   (8.2c)
Vector P has k elements, matrix Q has a size of m x k, and C is a vector of m ele-
ments. If the number of responses in vector P is less than the number of unknowns
in vector C, k < m, Eq. (8.2) is underconstrained and there is no unique solution.
The number of elements in P must be greater than or equal to the unknowns in or-
der to have a unique solution. If the number of responses is greater than the number
of unknowns, this type of estimation is overconstrained. Because any measurement
always contains errors or noise, unique solutions to the overconstrained esti-
mation may not exist. If the measurement noise of the response is Gaussian and
the error is the sum of the squared differences between the measured and estimated
responses, there are closed-form solutions to the linear estimation problem.[2] This
is the least-squares fit, where we want to minimize the sum of the squares of the
difference between the estimated and measured values as given in Eq. (8.2).
dp = Σ [p_i - (c1 q_1i + c2 q_2i + ... + cm q_mi)]^2.   (8.3)
The summation carries over i = 1, 2, ..., k. Equation (8.3) can also be given in the matrix form

dp = (P - Q^T C)^T (P - Q^T C).   (8.4)
The least-squares fit means that the partial derivatives with respect to c_j (j = 1, 2, ..., m) are set to zero, resulting in a new set of equations:[3]

Q Q^T C = Q P.   (8.5)
If the number of samples k is greater than (or equal to) the number of coefficients m, the product (Q Q^T) is nonsingular and can be inverted, such that the coefficients are obtained by

C = (Q Q^T)^(-1) (Q P).   (8.6)
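Equation (8.6) can be exercised numerically in a few lines. The following NumPy sketch (the sample data and coefficient values are arbitrary, chosen only for demonstration) builds Q from k = 8 sample points using the six-term polynomial, synthesizes responses P from known coefficients, and then recovers those coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.random((8, 3))                       # eight (x, y, z) inputs
# Six-term regressor rows: x, y, z, xy, yz, zx; Q is m x k = 6 x 8
Q = np.array([[x, y, z, x * y, y * z, z * x] for x, y, z in samples]).T
true_C = np.array([0.2, 0.5, 0.3, 0.1, -0.2, 0.4])  # assumed coefficients
P = Q.T @ true_C                                    # responses, Eq. (8.2c)
C = np.linalg.solve(Q @ Q.T, Q @ P)                 # Eq. (8.6)
```

Solving the normal equations with `np.linalg.solve` avoids forming the explicit inverse of Q Q^T, which is numerically preferable to Eq. (8.6) taken literally.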
With k sets of inputs, Q is a matrix of size m x k, where m is the number of terms in the polynomial. Q^T is the transpose of the matrix Q, obtained by interchanging the rows and columns of Q; therefore, it is a matrix of size k x m. The product Q Q^T is an m x m symmetric matrix. A six-term equation given in Table 8.1 with eight data sets is used to illustrate this technique.
p(x, y, z) =c
1
x +c
2
y +c
3
z +c
4
xy +c
5
yz +c
6
zx, (8.7)
Q =
[ x_1     x_2     x_3     x_4     x_5     x_6     x_7     x_8
  y_1     y_2     y_3     y_4     y_5     y_6     y_7     y_8
  z_1     z_2     z_3     z_4     z_5     z_6     z_7     z_8
  x_1y_1  x_2y_2  x_3y_3  x_4y_4  x_5y_5  x_6y_6  x_7y_7  x_8y_8
  y_1z_1  y_2z_2  y_3z_3  y_4z_4  y_5z_5  y_6z_6  y_7z_7  y_8z_8
  z_1x_1  z_2x_2  z_3x_3  z_4x_4  z_5x_5  z_6x_6  z_7x_7  z_8x_8 ],  (8.8)

where the triplet (x_i, y_i, z_i) represents the input values of the ith point. The product [Q Q^T] is a 6 × 6 symmetric matrix.
Q Q^T =
[ Σx_i^2       Σx_i y_i     Σx_i z_i     Σx_i^2 y_i     Σx_i y_i z_i   Σx_i^2 z_i
  Σx_i y_i     Σy_i^2       Σy_i z_i     Σx_i y_i^2     Σy_i^2 z_i     Σx_i y_i z_i
  Σx_i z_i     Σy_i z_i     Σz_i^2       Σx_i y_i z_i   Σy_i z_i^2     Σx_i z_i^2
  Σx_i^2 y_i   Σx_i y_i^2   Σx_i y_i z_i Σx_i^2 y_i^2   Σx_i y_i^2 z_i Σx_i^2 y_i z_i
  Σx_i y_i z_i Σy_i^2 z_i   Σy_i z_i^2   Σx_i y_i^2 z_i Σy_i^2 z_i^2   Σx_i y_i z_i^2
  Σx_i^2 z_i   Σx_i y_i z_i Σx_i z_i^2   Σx_i^2 y_i z_i Σx_i y_i z_i^2 Σx_i^2 z_i^2 ].
(8.9)
Regression 139
The summations in Eq. (8.9) carry from i = 1 to 8. The product of Q with vector P is

Q P = Q [p_1 p_2 p_3 p_4 p_5 p_6 p_7 p_8]^T
    = [ Σx_i p_i  Σy_i p_i  Σz_i p_i  Σx_i y_i p_i  Σy_i z_i p_i  Σx_i z_i p_i ]^T. (8.10)
The most computationally intensive part is inverting the [Q Q^T] matrix; the cost increases as the matrix size increases. A matrix is invertible if and only if the determinant of the matrix is not zero.

det(Q Q^T) ≠ 0. (8.11)

There are several ways to invert a matrix; we choose Gaussian elimination for its lower computational cost. Gaussian elimination is the combination of triangularization and back substitution. The triangularization makes all matrix elements below the diagonal zero. Consequently, the last row in the matrix contains only one element, which gives the solution for the last coefficient. This solution is substituted back into the preceding rows to calculate the other coefficients (for details of Gaussian elimination, see Appendix 5). After obtaining [Q Q^T]^−1 and [Q P], we can calculate the coefficient C via Eq. (8.6).
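The procedure above can be sketched as follows (hypothetical function and variable names, not the book's own code; partial pivoting is added for numerical stability, a detail the text does not discuss):

```python
import numpy as np

def solve_normal_equations(Q, p):
    """Solve (Q Q^T) C = Q p by Gaussian elimination with back substitution.

    Q : (m, k) matrix of polynomial terms at k sample points, as in Eq. (8.8).
    p : (k,) vector of measured responses.
    Returns C, the (m,) coefficient vector of Eq. (8.6).
    """
    A = Q @ Q.T                      # m x m symmetric matrix, Eq. (8.9)
    b = Q @ p                        # m-vector, Eq. (8.10)
    m = A.shape[0]
    M = np.hstack([A, b.reshape(-1, 1)]).astype(float)

    # Triangularization: zero out all elements below the diagonal.
    for i in range(m):
        pivot = i + np.argmax(np.abs(M[i:, i]))   # partial pivoting
        M[[i, pivot]] = M[[pivot, i]]
        for j in range(i + 1, m):
            M[j] -= (M[j, i] / M[i, i]) * M[i]

    # Back substitution: the last row yields the last coefficient first.
    C = np.zeros(m)
    for i in range(m - 1, -1, -1):
        C[i] = (M[i, -1] - M[i, i + 1:m] @ C[i + 1:m]) / M[i, i]
    return C
```

The condition of Eq. (8.11) shows up here as a nonzero pivot in every elimination step.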
8.2 Forward Color Transformation
Equation (8.6) is derived for a single component and gives the polynomial coefficients for one component of the color specifications, such as the X of the tristimulus values from an input RGB. One needs to repeat the procedure twice to get the coefficients for Y and Z. Or one can generalize Eq. (8.6) by including all three components for trichromatic systems, or n (n > 3) components for multispectral devices, such that P becomes a matrix of k × 3 (or k × n for a multispectral device) and C becomes a matrix of m × 3 (or m × n). Considering the example of an RGB-to-CIEXYZ transformation using a six-term polynomial with eight data sets, we have a P matrix with 8 × 3 elements, as given in Eq. (8.12), containing the data of the tristimulus values.
P =
[ X_1  Y_1  Z_1
  X_2  Y_2  Z_2
  X_3  Y_3  Z_3
  X_4  Y_4  Z_4
  X_5  Y_5  Z_5
  X_6  Y_6  Z_6
  X_7  Y_7  Z_7
  X_8  Y_8  Z_8 ].  (8.12)
For the forward color transformation from RGB to XYZ, the regression is applied to the RGB data (independent variables or inputs) and the tristimulus values P (dependent variables or output responses) to compute the coefficients of the polynomial. The matrix Q, containing the data of the input RGB, is given in Eq. (8.13). The first three rows are the input data and the next three rows are the products of two variables.
Q =
[ R_1     R_2     R_3     R_4     R_5     R_6     R_7     R_8
  G_1     G_2     G_3     G_4     G_5     G_6     G_7     G_8
  B_1     B_2     B_3     B_4     B_5     B_6     B_7     B_8
  R_1G_1  R_2G_2  R_3G_3  R_4G_4  R_5G_5  R_6G_6  R_7G_7  R_8G_8
  G_1B_1  G_2B_2  G_3B_3  G_4B_4  G_5B_5  G_6B_6  G_7B_7  G_8B_8
  B_1R_1  B_2R_2  B_3R_3  B_4R_4  B_5R_5  B_6R_6  B_7R_7  B_8R_8 ].  (8.13)
Matrix [Q Q^T] of Eq. (8.9) and the vector [Q P] of Eq. (8.10) are given in Eqs. (8.14) and (8.15), respectively.
Q Q^T =
[ ΣR_i^2       ΣR_i G_i     ΣR_i B_i     ΣR_i^2 G_i     ΣR_i G_i B_i   ΣR_i^2 B_i
  ΣR_i G_i     ΣG_i^2       ΣG_i B_i     ΣR_i G_i^2     ΣG_i^2 B_i     ΣR_i G_i B_i
  ΣR_i B_i     ΣG_i B_i     ΣB_i^2       ΣR_i G_i B_i   ΣG_i B_i^2     ΣR_i B_i^2
  ΣR_i^2 G_i   ΣR_i G_i^2   ΣR_i G_i B_i ΣR_i^2 G_i^2   ΣR_i G_i^2 B_i ΣR_i^2 G_i B_i
  ΣR_i G_i B_i ΣG_i^2 B_i   ΣG_i B_i^2   ΣR_i G_i^2 B_i ΣG_i^2 B_i^2   ΣR_i G_i B_i^2
  ΣR_i^2 B_i   ΣR_i G_i B_i ΣR_i B_i^2   ΣR_i^2 G_i B_i ΣR_i G_i B_i^2 ΣR_i^2 B_i^2 ],
(8.14)
Q P =
[ ΣR_i X_i      ΣR_i Y_i      ΣR_i Z_i
  ΣG_i X_i      ΣG_i Y_i      ΣG_i Z_i
  ΣB_i X_i      ΣB_i Y_i      ΣB_i Z_i
  ΣR_i G_i X_i  ΣR_i G_i Y_i  ΣR_i G_i Z_i
  ΣG_i B_i X_i  ΣG_i B_i Y_i  ΣG_i B_i Z_i
  ΣR_i B_i X_i  ΣR_i B_i Y_i  ΣR_i B_i Z_i ].  (8.15)
The summations in Eqs. (8.14) and (8.15) carry from i = 1 to 8. Using Eq. (8.6) and multiplying matrices [Q Q^T]^−1 and [Q P], we obtain the coefficient matrix C with a size of 6 × 3. The explicit formulas are given in Eq. (8.16).

X = c_x,1 R + c_x,2 G + c_x,3 B + c_x,4 RG + c_x,5 GB + c_x,6 BR, (8.16a)
Y = c_y,1 R + c_y,2 G + c_y,3 B + c_y,4 RG + c_y,5 GB + c_y,6 BR, (8.16b)
Z = c_z,1 R + c_z,2 G + c_z,3 B + c_z,4 RG + c_z,5 GB + c_z,6 BR, (8.16c)

or

[ X ]   [ c_x,1  c_x,2  c_x,3  c_x,4  c_x,5  c_x,6 ] [ R  ]
[ Y ] = [ c_y,1  c_y,2  c_y,3  c_y,4  c_y,5  c_y,6 ] [ G  ]
[ Z ]   [ c_z,1  c_z,2  c_z,3  c_z,4  c_z,5  c_z,6 ] [ B  ]
                                                     [ RG ]
                                                     [ GB ]
                                                     [ BR ].  (8.16d)
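One way to realize this forward fit in code (a sketch with hypothetical names; NumPy's dense solver stands in for the Gaussian elimination described earlier):

```python
import numpy as np

def six_term_basis(rgb):
    """Rows of Eq. (8.13): R, G, B, RG, GB, BR for each sample; shape (6, k)."""
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([R, G, B, R * G, G * B, B * R])

def fit_forward(rgb, xyz):
    """Coefficient matrix C (6 x 3) of Eq. (8.16), computed via Eq. (8.6)."""
    Q = six_term_basis(rgb)                    # (6, k)
    return np.linalg.solve(Q @ Q.T, Q @ xyz)   # one column each for X, Y, Z

def apply_forward(C, rgb):
    """Predict XYZ for new RGB inputs, Eq. (8.16d)."""
    return six_term_basis(rgb).T @ C
```

With k = 8 training points the 6 × 6 normal matrix is well determined provided det(Q Q^T) ≠ 0, per Eq. (8.11).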
8.3 Inverse Color Transformation
The color conversion can go in either direction, forward or backward. For many conversion techniques, it is easier to implement one direction than the other. However, this is not the case for polynomial regression. The inverse color transform using regression is simple and straightforward. All one needs to do is exchange the positions of the input and output data for the regression program. For the inverse color transformation of XYZ to RGB, the regression is applied to the tristimulus data (independent variables) and RGB values (dependent variables) in order to compute a new set of coefficients with the same polynomial order. The matrix [Q Q^T] and the vector [Q P] of the inverse transformation are given in Eqs. (8.17) and (8.18), respectively, for a six-term polynomial.
Q Q^T =
[ ΣX_i^2       ΣX_i Y_i     ΣX_i Z_i     ΣX_i^2 Y_i     ΣX_i Y_i Z_i   ΣX_i^2 Z_i
  ΣX_i Y_i     ΣY_i^2       ΣY_i Z_i     ΣX_i Y_i^2     ΣY_i^2 Z_i     ΣX_i Y_i Z_i
  ΣX_i Z_i     ΣY_i Z_i     ΣZ_i^2       ΣX_i Y_i Z_i   ΣY_i Z_i^2     ΣX_i Z_i^2
  ΣX_i^2 Y_i   ΣX_i Y_i^2   ΣX_i Y_i Z_i ΣX_i^2 Y_i^2   ΣX_i Y_i^2 Z_i ΣX_i^2 Y_i Z_i
  ΣX_i Y_i Z_i ΣY_i^2 Z_i   ΣY_i Z_i^2   ΣX_i Y_i^2 Z_i ΣY_i^2 Z_i^2   ΣX_i Y_i Z_i^2
  ΣX_i^2 Z_i   ΣX_i Y_i Z_i ΣX_i Z_i^2   ΣX_i^2 Y_i Z_i ΣX_i Y_i Z_i^2 ΣX_i^2 Z_i^2 ],
(8.17)
Q P =
[ ΣX_i R_i      ΣX_i G_i      ΣX_i B_i
  ΣY_i R_i      ΣY_i G_i      ΣY_i B_i
  ΣZ_i R_i      ΣZ_i G_i      ΣZ_i B_i
  ΣX_i Y_i R_i  ΣX_i Y_i G_i  ΣX_i Y_i B_i
  ΣY_i Z_i R_i  ΣY_i Z_i G_i  ΣY_i Z_i B_i
  ΣZ_i X_i R_i  ΣZ_i X_i G_i  ΣZ_i X_i B_i ].  (8.18)
Again, using Eq. (8.6) and multiplying matrices [Q Q^T]^−1 and [Q P], we obtain the coefficients c for the inverse transform. The explicit formulas are given in Eq. (8.19).

R = c_r,1 X + c_r,2 Y + c_r,3 Z + c_r,4 XY + c_r,5 YZ + c_r,6 ZX, (8.19a)
G = c_g,1 X + c_g,2 Y + c_g,3 Z + c_g,4 XY + c_g,5 YZ + c_g,6 ZX, (8.19b)
B = c_b,1 X + c_b,2 Y + c_b,3 Z + c_b,4 XY + c_b,5 YZ + c_b,6 ZX, (8.19c)

or

[ R ]   [ c_r,1  c_r,2  c_r,3  c_r,4  c_r,5  c_r,6 ] [ X  ]
[ G ] = [ c_g,1  c_g,2  c_g,3  c_g,4  c_g,5  c_g,6 ] [ Y  ]
[ B ]   [ c_b,1  c_b,2  c_b,3  c_b,4  c_b,5  c_b,6 ] [ Z  ]
                                                     [ XY ]
                                                     [ YZ ]
                                                     [ ZX ].  (8.19d)
This method is particularly suitable for irregularly spaced sample points. If a remapping is needed, the regression method works for any packing of sample points, either uniform or nonuniform. Points outside the gamut are extrapolated. Because the polynomial coefficients are obtained by a global least-squares error minimization, the polynomial may not map the sample points exactly to their original values.
8.4 Extension to Spectral Data
Herzog and colleagues have extended the polynomial regression to spectral data.8 Equation (8.6) becomes

C = (Q Q^T)^−1 (Q S^T). (8.20)
Matrix S contains the spectral data and is explicitly given in Eq. (8.21) with a size of n × k, where n is the number of samples in the spectrum and k is the number of spectra used in the regression. The transpose of S is a k × n matrix and Q is an m × k matrix; therefore, the product (Q S^T) has a size of m × n. The resulting C matrix has a size of m × n because (Q Q^T)^−1 has a size of m × m.
S =
[ s_1(λ_1)  s_2(λ_1)  s_3(λ_1)  · · ·  s_k(λ_1)
  s_1(λ_2)  s_2(λ_2)  s_3(λ_2)  · · ·  s_k(λ_2)
  s_1(λ_3)  s_2(λ_3)  s_3(λ_3)  · · ·  s_k(λ_3)
  · · ·
  s_1(λ_n)  s_2(λ_n)  s_3(λ_n)  · · ·  s_k(λ_n) ].  (8.21)
They applied the spectral polynomial regression to several sets of spectral data acquired by a digital camera. As expected, the average color difference decreases with an increasing number of polynomial terms. Using a three-term linear equation, they obtain average color differences ranging from 4.9 to 8.5 ΔE*_ab with three-channel sensors. By using a 13-term polynomial, the color difference decreases to the range of 3.9 to 6.8 ΔE*_ab. Therefore, it is a trade-off between accuracy and computational cost. There is another type of trade-off between the computational accuracy and the number of sensors. With a six-channel sensor, the average color differences drop below 1 ΔE*_ab.4 However, the computational cost in terms of memory size and arithmetic operations increases accordingly.
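The spectral extension can be sketched compactly (hypothetical shapes and names: k camera responses expanded into m polynomial terms in Q, and n-sample spectra stored as the columns of S):

```python
import numpy as np

def fit_spectral(Q, S):
    """Eq. (8.20): C = (Q Q^T)^-1 (Q S^T).

    Q : (m, k) polynomial terms of the k camera responses.
    S : (n, k) matrix of Eq. (8.21), one n-sample spectrum per column.
    Returns C of shape (m, n); a new response expanded into its m
    polynomial terms q then yields an estimated spectrum as C.T @ q.
    """
    return np.linalg.solve(Q @ Q.T, Q @ S.T)
```

The only change from the colorimetric case is that each output "channel" is now one wavelength sample, so C carries n columns instead of three.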
8.5 Results of Forward Regression
To test the proposed empirical regression method, we perform the forward conversion from Xerox/RGB to CIELAB. Four polynomials of Table 8.1, with 4, 8, 14, and 20 terms, respectively, are used for the regression. Table 8.2 lists the average color differences using these polynomials. The training data are a set of nine-level, equally spaced lattice points (729 points) and a set of nine-level, unequally spaced lattice points in the RGB space. The testing data consist of 3072 points selected around the entire RGB color space, with emphasis on the difficult, dark colors.
The uniform training set gives higher average ΔE*_ab values for the testing data of all four polynomials than the nonuniform training set (see columns 3 and 6 of Table 8.2), but the nonuniform training set gives higher ΔE*_ab values for the training data (see columns 2 and 5 of Table 8.2). However, the overall errors, including the training and testing data, are about the same for both the equally and nonequally
Table 8.2 The average ΔE*_ab of 3072 test points from a Xerox/RGB-to-CIELAB transform using a training data set that consists of nine levels in each RGB axis.

            Equally spaced training data     Unequally spaced training data
Matrix      Training   Testing   Total       Training   Testing   Total
4 × 3       12.18      17.86     16.77       18.76      15.78     16.35
8 × 3       11.44      15.43     14.66       17.19      14.48     15.00
14 × 3       3.98       6.06      5.66        6.95       5.68      5.92
20 × 3       2.63       4.77      4.36        4.91       4.59      4.65

(All entries are ΔE*_ab values.)
spaced data partitions. This indicates that the generalization from training to testing by the regression method is well behaved, with no significant deviation from the norm. As expected, the results show that the average color difference decreases as the number of terms in the polynomial increases. The ΔE*_ab distributions for the equally spaced data are plotted in Fig. 8.2; as the number of terms in the polynomial increases, the error distribution becomes narrower and shifts toward smaller ΔE*_ab values. The unequally spaced data show a similar trend as in Fig. 8.2, with a slightly narrower band for a given polynomial size (see Fig. 8.3).
A substantial improvement can be gained if gray balancing to L* or Y is performed prior to the polynomial regression by using a gray-balance curve, such as the one given in Fig. 8.4 for Device/RGB values that do not undergo a gamma correction. Table 8.3 shows the results of the polynomial regression with gray balancing under the same conditions as those without gray balancing. Compared to the results without gray balancing (Table 8.2), the average color differences are improved by at least 25%; in some cases, the improvements are as high as a factor of two for high-order polynomials.
Figure 8.5 shows the error distributions of a 14-term polynomial with and without gray balance. With gray balance, the error distribution shifts toward lower values and the error band becomes narrower than the corresponding distribution without gray balancing. Other polynomials behave similarly.
Figure 8.2 Color-error distributions of four different polynomials using an equally spaced 729-point training data set.
Figure 8.3 Color-error distributions of a 14-term polynomial using two different training data sets; one is equally spaced and the other is unequally spaced, both with 729 data points.
Figure 8.4 Relationship between Device/RGB (R = G = B) and lightness L*.
Table 8.3 The average color differences of 3072 test points from a Xerox/RGB-to-CIELAB transform with gray balancing.

            Equally spaced training data     Unequally spaced training data
Matrix      Training   Testing   Total       Training   Testing   Total
4 × 3       11.59      13.18     12.88       12.55      10.60     10.97
8 × 3       10.45       8.70      9.04        9.17       8.00      8.22
14 × 3       2.80       4.64      4.29        4.82       4.04      4.19
20 × 3       0.96       2.33      2.07        2.08       1.73      1.80

(All entries are ΔE*_ab values.)
Figure 8.5 Color-error distributions of a 14-term polynomial with and without gray balance.
8.6 Results of Inverse Regression
The average errors Δd, which are the Euclidean distances between the calculated and measured RGB values for the color-space transformation from CIELAB to Xerox/RGB using 3072 test points, are given in Table 8.4.
The 14-term and 20-term polynomials of Table 8.1, having cubic terms in the equation, fit the data very well; the 20-term fit is practically exact. The Δd error distributions are given in Fig. 8.6 for the 4 × 3, 8 × 3, and 14 × 3 transfer matrices (there are 1542 points with Δd > 31 for the 4 × 3 matrix and 770 points for the 8 × 3 matrix that are not shown in Fig. 8.6), and the maximum Δd values are well
Table 8.4 The average errors Δd of 3072 test points from a CIELAB-to-Xerox/RGB transform.

            Equally spaced training data     Unequally spaced training data
Matrix      Training   Testing   Total       Training   Testing   Total
4 × 3       31.80      42.59     40.52       40.78      32.78     34.31
8 × 3       22.80      30.36     28.91       27.62      23.49     24.28
14 × 3       2.49       3.74      3.50        3.73       3.20      3.30
20 × 3       0.03       0.04      0.04        0.04       0.03      0.03
Figure 8.6 The Δd error distributions of the 4 × 3, 8 × 3, and 14 × 3 transfer matrices.
beyond 60. There is a distinct difference in the error distribution of the 14 × 3 matrix; it peaks at Δd = 3 with a high amplitude and a rather narrow band, as compared to the 4 × 3 and 8 × 3 matrices that peak around Δd = 21 with a small amplitude. For the 20 × 3 matrix, the distribution is even better; all 3072 points have errors of less than one device unit of an 8-bit system. From the CIE definitions, we know that the XYZ-to-L* transform is a function of the cube root; therefore, the inverse transform will be a cubic function. Thus, it is not surprising that the 14-term and 20-term polynomials, containing variables of cubic power (see Table 8.1), fit the data so well.
It is a different story in the case of generalization. Results are given in Tables 8.5 and 8.6. By using a 6 × 3 matrix, the total average color difference increases no more than 0.8 units with a training set as small as 34 and a testing set as large as 202. In most cases, the average color differences of the testing sets are about the same as those of the training sets. For a 14 × 3 matrix, the largest increase in the total average color difference is 2.1, and the highest color-difference ratio of testing to training is 2.8. These results indicate that the generalization of the polynomial regression is well behaved. For a given polynomial, the training and testing results are not necessarily worse when the number of training colors decreases (see columns 4 and 5 of Table 8.5 or 8.6). This implies that the position of the colors used for training, not the number of colors, is more important in the color interpolation within a given gamut.
8.7 Remarks
The regression technique has been applied to color camera, scanner, and printer characterizations and calibrations.3,5–8 The adequacy of the method depends on the relationship between the source and destination spaces, the number and location of the points chosen for the regression, the number of terms in the
Table 8.5 Testing results of Q60 using a 6 × 3 matrix.

         Training   Testing   Training   Testing   Total
         patches    patches   ΔE*_ab     ΔE*_ab    ΔE*_ab
Q60      236        0         2.52       –         2.52
Test 1   172        64        2.31       4.07      2.79
Test 2   105        131       2.77       3.08      2.94
Test 3   70         166       3.03       3.06      3.05
Test 4   58         178       3.37       2.97      3.07
Test 5   46         190       2.72       3.11      3.03
Test 6   34         202       2.99       3.32      3.27
Table 8.6 Testing results of Q60 using a 14 × 3 matrix.

         Training   Testing   Training   Testing   Total
         patches    patches   ΔE*_ab     ΔE*_ab    ΔE*_ab
Q60      236        0         1.85       –         1.85
Test 1   172        64        1.78       5.03      2.66
Test 2   105        131       2.11       4.16      3.25
Test 3   70         166       2.39       4.94      4.18
Test 4   58         178       2.18       4.91      4.24
Test 5   46         190       2.16       5.55      4.89
Test 6   34         202       2.41       4.48      4.18
polynomial, and the measurement errors. It is ideal for transforms with a linear relationship. For nonlinear color-space conversions, this regression method does not guarantee a uniform accuracy across the entire color gamut; some regions, for example dark colors, have higher errors than other areas.5 In general, the accuracy improves as the number of terms in the equation increases.6,7 The trade-offs are a higher computation cost and a lower processing speed. The main advantages of the regression method are: (i) the inverse conversion is simple, (ii) it takes the statistical fluctuation of the measurement into account, and (iii) the sample points need not be uniformly spaced.
References
1. A. A. Afifi and S. P. Azen, Statistical Analysis, Academic Press, New York, Chap. 3, p. 108 (1972).
2. B. A. Wandell, Foundations of Vision, Sinauer Assoc., Sunderland, MA, pp. 431–436 (1995).
3. P. H. McGinnis, Jr., Spectrophotometric color matching with the least squares technique, Color Eng. 5, pp. 22–27 (1967).
4. P. G. Herzog, D. Knipp, H. Stiebig, and F. König, Colorimetric characterization of novel multiple-channel sensors for imaging and metrology, J. Electron. Imaging 8, pp. 342–353 (1999).
5. H. R. Kang, Color scanner calibration, J. Imaging Sci. Techn. 36, pp. 162–170 (1992).
6. H. R. Kang, Color scanner calibration of reflected samples, Proc. SPIE 1670, pp. 468–477 (1992).
7. H. R. Kang and P. G. Anderson, Neural network applications to the color scanner and printer calibrations, J. Electron. Imaging 1, pp. 125–135 (1992).
8. J. de Clippeleer, Device independent color reproduction, Proc. TAGA, Sewickley, PA, pp. 98–106 (1993).
Chapter 9
Three-Dimensional Lookup Table with Interpolation
Color-space transformation using a 3D lookup table (LUT) with interpolation is used to correlate the source and destination color values at the lattice points of a 3D table, where nonlattice points are interpolated by using the nearest lattice points. This method has been used in many applications with quite satisfactory results, and has been incorporated into the ICC profile standard.1
In this chapter, the structure of the 3D-LUT approach is discussed and the mathematical formulations of the interpolation methods are given. These methods are tested using several sets of data points. The similarities and differences of these interpolation methods are discussed.
9.1 Structure of 3D Lookup Table
The 3D lookup method consists of three parts: packing (or partition), extraction (or find), and interpolation (or computation).2 Packing is a process that partitions the source space and selects sample points for the purpose of building a lookup table. The extraction step is aimed at finding the location of the input pixel and extracting the color values of the nearest lattice points. The last step is interpolation, where the input signals and the extracted lattice points are used to calculate the destination color specifications for the input point.
9.1.1 Packing
Packing is a process that divides the domain of the source space and populates it with sample points to build the lookup table. Generally, the table is built by equal-step sampling along each axis of the source space, as shown in Fig. 9.1 for a five-level LUT. This gives (n − 1)^3 cubes and n^3 lattice points, where n is the number of levels. The advantage of this arrangement is that it implicitly supplies information about which cell is next to which. Thus, one needs only to store the starting point and the spacing for each axis. Generally, a matrix of n^3 color patches at the lattice points of the source space is made and the destination color specifications of these patches are measured. The corresponding values from the source and destination spaces are tabulated into a lookup table.
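A uniform packing of this kind can be sketched as follows (hypothetical helper; a real LUT would additionally store the measured destination values at each lattice point):

```python
import numpy as np

def build_lattice(levels=5, vmax=255.0):
    """Uniform packing of an RGB cube: levels^3 lattice points and
    (levels - 1)^3 cells. Because the spacing is equal, only the
    starting point and the per-axis step need to be stored."""
    axis = np.linspace(0.0, vmax, levels)            # equal steps on each axis
    r, g, b = np.meshgrid(axis, axis, axis, indexing="ij")
    return np.stack([r, g, b], axis=-1)              # (levels, levels, levels, 3)

lattice = build_lattice(5)   # five-level packing, as in Fig. 9.1
```

For five levels this produces 125 lattice points bounding 4^3 = 64 cubes.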
Figure 9.1 A uniformly spaced five-level 3D packing.
Nonuniformly spaced LUTs are also used extensively. They lose the simplicity of implementation, but gain versatility and conversion accuracy, as discussed in Section 9.7.
9.1.2 Extraction
Nonlattice points are interpolated by using the nearest lattice points. This is the step where the extraction performs a search to select the lattice points necessary for computing the destination specification of the input point. A well-packed space can make the search simpler. In an 8-bit integer environment, for example, if the axis is divided into 2^j equal sections, where j is an integer smaller than 8, then the nearest lattice points are given by the most significant j bits (MSB_j) of the input color signals. In other words, the input point is bounded between the lattice points p(MSB_j) and p(MSB_j + 1). Finding the point of interest requires the computer operations of byte masking and bit shifting, which are significantly faster than comparison operations. For unequally spaced packing, a series of comparisons is needed to locate the nearest lattice points.
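For instance, with j = 3 (an eight-section axis on 8-bit data), the lattice index and the in-cell fraction can be pulled out by masking and shifting (a sketch with hypothetical names, not the book's code):

```python
def extract(value, j=3):
    """Locate an 8-bit channel value between lattice points by bit operations:
    the most significant j bits (MSB_j) give the lower lattice index, and the
    remaining 8 - j bits (the LSBs) give the relative distance in the cell."""
    index = value >> (8 - j)             # MSB_j: nearest lower lattice point
    lsb = value & ((1 << (8 - j)) - 1)   # LSB_(8-j): residual within the cell
    frac = lsb / float(1 << (8 - j))     # normalized in-cell distance
    return index, frac

# 200 = 0b11001000: index 6 (between lattice points 6 and 7), fraction 0.25
```

The shift and mask replace the comparison search that an unequally spaced packing would require.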
Further search within the cube may be needed, depending on the interpolation technique employed to compute the color values of nonlattice points. There are two interpolation methods: the geometrical method and cellular regression. Cellular regression does not require a search within the cube. All geometric interpolations except the trilinear approach require a search mechanism to find the subdivided structure where the point resides. These search mechanisms are inequality comparisons.
9.1.3 Interpolation
Interpolation uses the input signals and the extracted lattice points that contain the destination specifications to calculate the destination color specifications for the input point. Interpolation techniques are mathematical computations that employ geometrical relationships or cellular regression. Geometrical interpolations exploit the ways of subdividing a cube. There are four geometrical interpolations: trilinear, prism, pyramid, and tetrahedral. The first 3D interpolation that appeared in the literature is the trilinear interpolation, disclosed in a 1974 British patent by Pugsley.3 The prism scheme was published in 1992 by Kanamori, Fumoto, and Kotera.4 This prism architecture has been made into a single-chip color processor by Matsushita Research Institute Tokyo, Inc.5,6 The pyramid interpolation was patented by Franklin in 1982.7 The idea of linear interpolation using tetrahedral segmentation of a cube was published as early as 1967 by Iri.8 A similar concept of a linear interpolation, searching for the nearest four neighbors that enclose the point of interest and form a tetrahedron, was applied to compute the dot areas of color scanners by Korman and Yule in 1971.9 The application of tetrahedral interpolation to color-space transformation was later patented by Sakamoto and Itooka in U.S. Patent No. 4,275,413 (1981) and related worldwide patents.10–12 The extensive activities in developing and patenting interpolation techniques during the 1980s show the importance of the technique, the desire of manufacturers to dominate market share, and the subsequent financial stakes.
9.2 Geometric Interpolations
Basically, 3D interpolation is the multiple application of linear interpolation; therefore, we start with the linear interpolation, then extend to 2D (bilinear) and 3D (trilinear) interpolations. A linear interpolation is depicted in Fig. 9.2; a point p on the curve between the lattice points p_0 and p_1 is to be interpolated. The interpolated value p_c(x) is linearly proportional to the ratio (x − x_0)/(x_1 − x_0), where (x_1 − x_0) is the projected length of the line segment connecting points p_0 and p_1, and (x − x_0) is the projected distance of the line connecting points p and p_0.

p_c(x) = p(x_0) + [(x − x_0)/(x_1 − x_0)][p(x_1) − p(x_0)]. (9.1)
As shown in Eq. (9.1), the major computational operation in the interpolation is to calculate the projected distances. In view of the implementation, using a uniform 8-bit LUT, the projected distance on each axis of an input is simply p(LSB_{8−j}), where the LSBs are the least significant bits. This is a simple byte-masking operation that significantly lowers the computational cost and increases speed. The interpolation error is given as

Δ = p(x) − p_c(x). (9.2)

Figure 9.2 Linear interpolation.
9.2.1 Bilinear interpolation
In two dimensions, we have a function of two variables p(x, y) and four lattice points p_00(x_0, y_0), p_01(x_0, y_1), p_10(x_1, y_0), and p_11(x_1, y_1), as shown in Fig. 9.3. To obtain the value for point p, we first hold y_0 constant and apply the linear interpolation on lattice points p_00 and p_10 to obtain p_0.

p_0 = p_00 + (p_10 − p_00)[(x − x_0)/(x_1 − x_0)]. (9.3)

Similarly, we calculate p_1 by keeping y_1 constant.

p_1 = p_01 + (p_11 − p_01)[(x − x_0)/(x_1 − x_0)]. (9.4)

After obtaining p_0 and p_1, we again apply the linear interpolation to them by keeping x constant.

p(x, y) = p_0 + (p_1 − p_0)[(y − y_0)/(y_1 − y_0)]. (9.5)
Figure 9.3 Bilinear interpolation.
Substituting Eqs. (9.3) and (9.4) into Eq. (9.5), we obtain

p(x, y) = p_00 + (p_10 − p_00)[(x − x_0)/(x_1 − x_0)]
        + (p_01 − p_00)[(y − y_0)/(y_1 − y_0)]
        + (p_11 − p_01 − p_10 + p_00)[(x − x_0)/(x_1 − x_0)][(y − y_0)/(y_1 − y_0)]. (9.6)
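Equation (9.6) can be checked with a short sketch (illustrative names; dx and dy stand for the two normalized ratios):

```python
def bilinear(p00, p10, p01, p11, dx, dy):
    """Eq. (9.6), with dx = (x - x0)/(x1 - x0) and dy = (y - y0)/(y1 - y0)."""
    return (p00
            + (p10 - p00) * dx
            + (p01 - p00) * dy
            + (p11 - p01 - p10 + p00) * dx * dy)
```

Setting dx and dy to 0 or 1 reproduces the four lattice values exactly, as the derivation requires.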
9.2.2 Trilinear interpolation
The trilinear equation is derived by applying the linear interpolation seven times (see Fig. 9.4); three times each to determine the points p_1 and p_0 as illustrated in the 2D bilinear interpolation, then one more time to compute the point p. The general expression for the trilinear interpolation is given in Eq. (9.7).

p(x, y, z) = c_0 + c_1 Δx + c_2 Δy + c_3 Δz + c_4 ΔxΔy + c_5 ΔyΔz + c_6 ΔzΔx + c_7 ΔxΔyΔz, (9.7a)

where Δx, Δy, and Δz are the relative distances of the point with respect to the starting point p_000 in the x, y, and z directions, respectively, as shown in Eq. (9.7b).

Δx = (x − x_0)/(x_1 − x_0); Δy = (y − y_0)/(y_1 − y_0); Δz = (z − z_0)/(z_1 − z_0). (9.7b)
Figure 9.4 Trilinear interpolation.
Coefficients c_j are determined from the values of the vertices.

c_0 = p_000;  c_1 = p_100 − p_000;  c_2 = p_010 − p_000;
c_3 = p_001 − p_000;  c_4 = p_110 − p_010 − p_100 + p_000;
c_5 = p_011 − p_001 − p_010 + p_000;  c_6 = p_101 − p_001 − p_100 + p_000;
c_7 = p_111 − p_011 − p_101 − p_110 + p_100 + p_001 + p_010 − p_000. (9.7c)
Equation (9.7a) can be written in vector-matrix form as

p = C^T Q_1 or p = Q_1^T C, (9.7d)

where C is the vector of coefficients,

C = [c_0 c_1 c_2 c_3 c_4 c_5 c_6 c_7]^T, (9.8)

and Q_1 is the vector of distances related to Δx, Δy, and Δz.

Q_1 = [1 Δx Δy Δz ΔxΔy ΔyΔz ΔzΔx ΔxΔyΔz]^T. (9.9)
Note that the sizes of vectors Q_1 and C must be the same. The coefficients C can be put into vector-matrix form as shown in Eq. (9.10a) by expanding Eq. (9.7c).

[ c_0 ]   [  1   0   0   0   0   0   0   0 ] [ p_000 ]
[ c_1 ]   [ −1   0   0   0   1   0   0   0 ] [ p_001 ]
[ c_2 ]   [ −1   0   1   0   0   0   0   0 ] [ p_010 ]
[ c_3 ] = [ −1   1   0   0   0   0   0   0 ] [ p_011 ]
[ c_4 ]   [  1   0  −1   0  −1   0   1   0 ] [ p_100 ]
[ c_5 ]   [  1  −1  −1   1   0   0   0   0 ] [ p_101 ]
[ c_6 ]   [  1  −1   0   0  −1   1   0   0 ] [ p_110 ]
[ c_7 ]   [ −1   1   1  −1   1  −1  −1   1 ] [ p_111 ],  (9.10a)

or

C = B_1 P, (9.10b)

where vector P is a collection of vertices,

P = [p_000 p_001 p_010 p_011 p_100 p_101 p_110 p_111]^T, (9.11)

and the matrix B_1, given in Eq. (9.10a), represents a matrix of binary constants, having a size of 8 × 8.
Substituting Eq. (9.10b) into Eq. (9.7d), we obtain the vector-matrix expression for calculating the destination color value of point p.

p = C^T Q_1 = P^T B_1^T Q_1, (9.12a)
p = Q_1^T C = Q_1^T B_1 P. (9.12b)
Equation (9.12) is exactly the same as Eq. (9.7), only expressed differently. There is no need for a search mechanism to find the location of the point because the cube is used as a whole. But the computational cost, using all eight vertices and having eight terms in the equation, is higher than that of the other 3D geometric interpolations.
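A direct sketch of Eqs. (9.7a)–(9.7c), with the vertices keyed by the p_xyz naming of the text (illustrative code, not the book's implementation):

```python
def trilinear(p, dx, dy, dz):
    """Trilinear interpolation, Eq. (9.7).

    p : dict of the eight cube vertices keyed '000' ... '111'.
    dx, dy, dz : relative distances of Eq. (9.7b), each in [0, 1].
    """
    c0 = p['000']                                      # Eq. (9.7c)
    c1 = p['100'] - p['000']
    c2 = p['010'] - p['000']
    c3 = p['001'] - p['000']
    c4 = p['110'] - p['010'] - p['100'] + p['000']
    c5 = p['011'] - p['001'] - p['010'] + p['000']
    c6 = p['101'] - p['001'] - p['100'] + p['000']
    c7 = (p['111'] - p['011'] - p['101'] - p['110']
          + p['100'] + p['001'] + p['010'] - p['000'])
    return (c0 + c1 * dx + c2 * dy + c3 * dz
            + c4 * dx * dy + c5 * dy * dz + c6 * dz * dx
            + c7 * dx * dy * dz)
```

At each corner (Δx, Δy, Δz each 0 or 1) the formula returns the corresponding vertex value, which is the defining property of the interpolant.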
9.2.3 Prism interpolation
If one cuts a cube diagonally into two halves, as shown in Fig. 9.5, one gets two prism shapes. A search mechanism is needed to locate the point of interest. Because there are two symmetric structures in the cube, a simple inequality comparison is sufficient to determine the location: if Δx > Δy, then the point is in Prism 1; otherwise, the point is in Prism 2. For Δx = Δy, the point lies on the diagonal plane, and either prism can be used for interpolation.
Figure 9.5 Prism interpolation.
The equation has six terms and uses six vertices of the given prism for computation. Equation (9.13) gives the prism interpolation in vector-matrix form.

[ p_1 ] = [ p_000  (p_100 − p_000)  (p_110 − p_100)  (p_001 − p_000)  (p_101 − p_001 − p_100 + p_000)  (p_111 − p_101 − p_110 + p_100) ]
[ p_2 ]   [ p_000  (p_110 − p_010)  (p_010 − p_000)  (p_001 − p_000)  (p_111 − p_011 − p_110 + p_010)  (p_011 − p_001 − p_010 + p_000) ]
          × [1  Δx  Δy  Δz  ΔxΔz  ΔyΔz]^T. (9.13)
By setting

Q_2 = [1 Δx Δy Δz ΔxΔz ΔyΔz]^T, (9.14)

Eq. (9.13) can be expressed in vector-matrix form as given in Eq. (9.15).

p_1 = P^T B_21^T Q_2 = Q_2^T B_21 P, (9.15a)
p_2 = P^T B_22^T Q_2 = Q_2^T B_22 P, (9.15b)
where vector P is given in Eq. (9.11), and B_21 and B_22 are binary matrices, having a size of 6 × 8, given as follows:

B_21 = [  1   0   0   0   0   0   0   0
         −1   0   0   0   1   0   0   0
          0   0   0   0  −1   0   1   0
         −1   1   0   0   0   0   0   0
          1  −1   0   0  −1   1   0   0
          0   0   0   0   1  −1  −1   1 ],

B_22 = [  1   0   0   0   0   0   0   0
          0   0  −1   0   0   0   1   0
         −1   0   1   0   0   0   0   0
         −1   1   0   0   0   0   0   0
          0   0   1  −1   0   0  −1   1
          1  −1  −1   1   0   0   0   0 ].
The location of the data point is determined by the following IF-ELSE construct:

If Δx > Δy, p = p_1,
else p = p_2.
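The prism test and Eq. (9.13) combine into a short sketch (same p_xyz vertex naming as the trilinear case; illustrative code only):

```python
def prism(p, dx, dy, dz):
    """Prism interpolation, Eq. (9.13): Prism 1 when dx > dy, else Prism 2."""
    if dx > dy:   # Prism 1
        return (p['000'] + (p['100'] - p['000']) * dx
                + (p['110'] - p['100']) * dy
                + (p['001'] - p['000']) * dz
                + (p['101'] - p['001'] - p['100'] + p['000']) * dx * dz
                + (p['111'] - p['101'] - p['110'] + p['100']) * dy * dz)
    # Prism 2 (dx <= dy; on the diagonal plane dx == dy both prisms agree)
    return (p['000'] + (p['110'] - p['010']) * dx
            + (p['010'] - p['000']) * dy
            + (p['001'] - p['000']) * dz
            + (p['111'] - p['011'] - p['110'] + p['010']) * dx * dz
            + (p['011'] - p['001'] - p['010'] + p['000']) * dy * dz)
```

Each branch touches only the six vertices of its own prism, which is where the scheme saves work over the trilinear formula.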
9.2.4 Pyramid interpolation
For pyramid interpolation, the cube is sliced into three pieces; each one takes a face as the pyramid base, having its corners connected to a vertex on the opposite side as the apex (see Fig. 9.6). A search is required to locate the point of interest. The equation has five terms and uses five vertices of the given pyramid to compute the value.
Equation (9.16) gives the vector-matrix form of the pyramid interpolation.
Figure 9.6 Pyramid interpolation.

[ p_1 ]   [ p_000  (p_111 − p_011)  (p_010 − p_000)  (p_001 − p_000)  0  (p_011 − p_001 − p_010 + p_000)  0 ]
[ p_2 ] = [ p_000  (p_100 − p_000)  (p_111 − p_101)  (p_001 − p_000)  0  0  (p_101 − p_001 − p_100 + p_000) ]
[ p_3 ]   [ p_000  (p_100 − p_000)  (p_010 − p_000)  (p_111 − p_110)  (p_110 − p_100 − p_010 + p_000)  0  0 ]
          × [1  Δx  Δy  Δz  ΔxΔy  ΔyΔz  ΔzΔx]^T. (9.16)
By setting

Q_3,1 = [1 Δx Δy Δz ΔyΔz]^T, (9.17a)
Q_3,2 = [1 Δx Δy Δz ΔzΔx]^T, (9.17b)
Q_3,3 = [1 Δx Δy Δz ΔxΔy]^T, (9.17c)

Eq. (9.16) can be expressed in vector-matrix form as given in Eq. (9.18).

p_1 = P^T B_31^T Q_3,1 = Q_3,1^T B_31 P, (9.18a)
p_2 = P^T B_32^T Q_3,2 = Q_3,2^T B_32 P, (9.18b)
p_3 = P^T B_33^T Q_3,3 = Q_3,3^T B_33 P, (9.18c)
where B_31, B_32, and B_33 are binary matrices, having a size of 5 × 8, given as follows:

B_31 = [  1   0   0   0   0   0   0   0
          0   0   0  −1   0   0   0   1
         −1   0   1   0   0   0   0   0
         −1   1   0   0   0   0   0   0
          1  −1  −1   1   0   0   0   0 ],

B_32 = [  1   0   0   0   0   0   0   0
         −1   0   0   0   1   0   0   0
          0   0   0   0   0  −1   0   1
         −1   1   0   0   0   0   0   0
          1  −1   0   0  −1   1   0   0 ],

B_33 = [  1   0   0   0   0   0   0   0
         −1   0   0   0   1   0   0   0
         −1   0   1   0   0   0   0   0
          0   0   0   0   0   0  −1   1
          1   0  −1   0  −1   0   1   0 ].
The location of the data point is determined by the following IF-THEN-ELSE construct:

If (y > x and z > x), then p = p1;
else if (x > y and z > y), then p = p2;
else p = p3.
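As a numerical check of the factored form of Eq. (9.18a), the sketch below (plain Python, not from the book; the vertex values are arbitrary) evaluates p1 both directly from the first row of Eq. (9.16) and as Q3,1ᵀ B31 P, using the signed entries of B31, and confirms that the two agree.

```python
# Check that p1 = Q31^T (B31 P) reproduces the first row of Eq. (9.16).
# Vertex order in P: [p000, p001, p010, p011, p100, p101, p110, p111].
B31 = [
    [ 1,  0,  0,  0, 0, 0, 0, 0],
    [ 0,  0,  0, -1, 0, 0, 0, 1],
    [-1,  0,  1,  0, 0, 0, 0, 0],
    [-1,  1,  0,  0, 0, 0, 0, 0],
    [ 1, -1, -1,  1, 0, 0, 0, 0],
]
P = [5.0, 9.0, 2.0, 7.0, 4.0, 8.0, 3.0, 6.0]   # arbitrary vertex values
x, y, z = 0.2, 0.7, 0.5                          # a point with y > x and z > x
Q31 = [1.0, x, y, z, y * z]                      # Eq. (9.17a)

p1_matrix = sum(q * sum(b * p for b, p in zip(row, P))
                for q, row in zip(Q31, B31))

p000, p001, p010, p011, p100, p101, p110, p111 = P
p1_direct = (p000 + (p111 - p011) * x + (p010 - p000) * y
             + (p001 - p000) * z + (p011 - p001 - p010 + p000) * y * z)

assert abs(p1_matrix - p1_direct) < 1e-12
```

Each row of B31 simply collects one parenthesized vertex difference of Eq. (9.16), which is why the table data P can be kept as a plain vector and all geometry moved into the constant matrix.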
9.2.5 Tetrahedral interpolation
The tetrahedral interpolation slices a cube into six tetrahedrons, each with a triangular base, as shown in Fig. 9.7.
Figure 9.7 Tetrahedral interpolation.
The vector-matrix expression of the tetrahedral interpolation is given in Eq. (9.19).

\[
\begin{bmatrix} p_1 \\ p_2 \\ p_3 \\ p_4 \\ p_5 \\ p_6 \end{bmatrix} =
\begin{bmatrix}
p_{000} & (p_{100}-p_{000}) & (p_{110}-p_{100}) & (p_{111}-p_{110}) \\
p_{000} & (p_{100}-p_{000}) & (p_{111}-p_{101}) & (p_{101}-p_{100}) \\
p_{000} & (p_{101}-p_{001}) & (p_{111}-p_{101}) & (p_{001}-p_{000}) \\
p_{000} & (p_{110}-p_{010}) & (p_{010}-p_{000}) & (p_{111}-p_{110}) \\
p_{000} & (p_{111}-p_{011}) & (p_{010}-p_{000}) & (p_{011}-p_{010}) \\
p_{000} & (p_{111}-p_{011}) & (p_{011}-p_{001}) & (p_{001}-p_{000})
\end{bmatrix}
\begin{bmatrix} 1 \\ x \\ y \\ z \end{bmatrix}. \tag{9.19}
\]
By setting

\[ Q_{4} = [\,1 \;\; x \;\; y \;\; z\,]^{T}, \tag{9.20} \]

Equation (9.19) can be expressed in vector-matrix form as given in Eq. (9.21):

\[ p_1 = P^{T} B_{41}^{T} Q_{4} = Q_{4}^{T} B_{41} P, \tag{9.21a} \]
\[ p_2 = P^{T} B_{42}^{T} Q_{4} = Q_{4}^{T} B_{42} P, \tag{9.21b} \]
\[ p_3 = P^{T} B_{43}^{T} Q_{4} = Q_{4}^{T} B_{43} P, \tag{9.21c} \]
\[ p_4 = P^{T} B_{44}^{T} Q_{4} = Q_{4}^{T} B_{44} P, \tag{9.21d} \]
\[ p_5 = P^{T} B_{45}^{T} Q_{4} = Q_{4}^{T} B_{45} P, \tag{9.21e} \]
\[ p_6 = P^{T} B_{46}^{T} Q_{4} = Q_{4}^{T} B_{46} P, \tag{9.21f} \]
where B41, B42, B43, B44, B45, and B46 are 4 × 8 matrices with entries 0 and ±1, given as follows:

\[
B_{41} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & -1 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 1
\end{bmatrix},
\qquad
B_{42} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
-1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 \\
0 & 0 & 0 & 0 & -1 & 1 & 0 & 0
\end{bmatrix},
\]
\[
B_{43} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -1 & 0 & 1 \\
-1 & 1 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix},
\qquad
B_{44} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 0 & 0 & 0 & 1 & 0 \\
-1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 1
\end{bmatrix},
\]
\[
B_{45} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 \\
-1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 1 & 0 & 0 & 0 & 0
\end{bmatrix},
\qquad
B_{46} = \begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 0 & 1 \\
0 & -1 & 0 & 1 & 0 & 0 & 0 & 0 \\
-1 & 1 & 0 & 0 & 0 & 0 & 0 & 0
\end{bmatrix}.
\]
The location of the data point is determined by the following IF-THEN-ELSE construct:

If (x > y > z), then p = p1;
else if (x > z > y), then p = p2;
else if (z > x > y), then p = p3;
else if (y > x > z), then p = p4;
else if (y > z > x), then p = p5;
else p = p6.
Having only four linear terms and using only four vertices, the tetrahedral interpolation has the lowest computational cost.
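The six-way selection and the rows of Eq. (9.19) can be sketched as follows (a minimal Python illustration, not the book's code; `p` is assumed to map a vertex index tuple (i, j, k) to its table value):

```python
def tetrahedral_interp(x, y, z, p):
    """Tetrahedral interpolation inside a unit cube (x, y, z in [0, 1]).
    p maps a vertex index tuple (i, j, k) to its table value; ties in
    the orderings are resolved with >= so boundary points are covered."""
    p000, p001 = p[0, 0, 0], p[0, 0, 1]
    p010, p011 = p[0, 1, 0], p[0, 1, 1]
    p100, p101 = p[1, 0, 0], p[1, 0, 1]
    p110, p111 = p[1, 1, 0], p[1, 1, 1]
    if x >= y >= z:                                                  # p1
        return p000 + (p100 - p000)*x + (p110 - p100)*y + (p111 - p110)*z
    if x >= z >= y:                                                  # p2
        return p000 + (p100 - p000)*x + (p111 - p101)*y + (p101 - p100)*z
    if z >= x >= y:                                                  # p3
        return p000 + (p101 - p001)*x + (p111 - p101)*y + (p001 - p000)*z
    if y >= x >= z:                                                  # p4
        return p000 + (p110 - p010)*x + (p010 - p000)*y + (p111 - p110)*z
    if y >= z >= x:                                                  # p5
        return p000 + (p111 - p011)*x + (p010 - p000)*y + (p011 - p010)*z
    return p000 + (p111 - p011)*x + (p011 - p001)*y + (p001 - p000)*z  # p6

# The scheme reproduces any function that is linear in x, y, z exactly:
cube = {(i, j, k): 4*i + 2*j + k for i in (0, 1) for j in (0, 1) for k in (0, 1)}
assert abs(tetrahedral_interp(0.25, 0.5, 0.75, cube) - 2.75) < 1e-12
assert tetrahedral_interp(1, 0, 1, cube) == cube[1, 0, 1]
```

Note that along the gray diagonal x = y = z every branch telescopes to p000 + (p111 − p000)x, which is one reason the main diagonal of the substructure is usually aligned with the neutral axis of the color space.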
9.2.6 Derivatives and extensions
The difference among the various geometrical interpolations lies in how one slices the cube. There are only three ways of slicing a cube into multiple 3D structures with equal numbers of vertices: the prism (six vertices), the pyramid (five vertices), and the tetrahedron (four vertices). Any substructure with fewer than four vertices is no longer 3D, and it is not possible to slice a cube equally into multiple seven-vertex structures. However, many variations on the geometric slicing of a cube do exist, such as cutting the cube from a different angle (or orientation),[13] subdividing it into 24 isomorphic tetrahedrons with the body center and face centers as vertices and selectively combining two, three, or four of them to form a tetrahedron, pyramid, prism, cube, or other 3D shape,[11] and combining tetrahedrons across the cell boundary.[13] In any case, it is recommended that the main diagonal axis of the substructure be aligned with the neutral axis of the color space.
It is interesting to point out that using the average value of the eight vertices of a cube for the body center, and the average value of four corners for a face center, are special cases of the trilinear and bilinear interpolations, respectively. A face center or body center is located midway between two lattice points; therefore,

\[
\frac{x - x_0}{x_1 - x_0} = \frac{y - y_0}{y_1 - y_0} = \frac{z - z_0}{z_1 - z_0} = \frac{1}{2}. \tag{9.22}
\]
Substituting Eq. (9.22) into Eq. (9.6) of the bilinear interpolation, we obtain the value for a face-centered point, which is the average of the four corner points:

\[
p(1/2, 1/2) = p_{00} + (p_{10} - p_{00})/2 + (p_{01} - p_{00})/2 + (p_{11} - p_{01} - p_{10} + p_{00})/4
            = (p_{00} + p_{01} + p_{10} + p_{11})/4.
\]

Similarly, Eq. (9.7) of the trilinear interpolation becomes

\[
p(1/2, 1/2, 1/2) = (p_{000} + p_{001} + p_{010} + p_{011} + p_{100} + p_{101} + p_{110} + p_{111})/8.
\]
This means that the face centers and body center are obtained by the primary interpolation, and the point of interest, interpolated from the face-centered and body-centered points, is obtained by a secondary interpolation that uses interpolated vertices. The secondary interpolation causes further error propagation. If one corrects this problem by taking measurements for the face centers and body center, then it becomes the same as doubling the sampling frequency. Moreover, the interpolation accuracy of the face centers and body center depends on the location of the cube in the color space; the interpolation error increases rapidly as the device value decreases.[13]
9.3 Cellular Regression
Three-dimensional interpolation using cellular regression is a combination of 3D packing and polynomial regression.[13-15] The idea is to apply regression to a small lattice cell instead of the entire color space for the purpose of increasing the interpolation accuracy. The eight vertices of a cube are used to derive the coefficients of a selected equation. There are several variations of cellular regression, such as different choices of polynomial, regression points, and cell sizes and shapes. Compared to the geometric interpolation, this approach has the following advantages:

(1) There is no need to find the position of the interpolation point within the cube.
(2) There is no need for uniform packing; it can apply to any 3D structure, such as distorted hexahedra.
(3) It can accommodate arbitrary arithmetical expressions, such as square-root, logarithm, and exponential terms.
(4) It can have any number of terms in the equation as long as that number does not exceed the number of vertices in the cube, which means that the number of terms cannot exceed eight.
The last constraint can be overcome by sampling the color space in a finer division than the desired levels. With a doubled sampling rate, one has 27 data points in the cube instead of 8; therefore, one can have a maximum of 26 terms. The drawback is that about eight times as many samples and measurements are needed, which is time consuming and costly. If one cannot afford these extra costs, there is another way around the problem: include the neighboring cubes and weight them differently from the cube of interest when performing the regression. For example, one can include the six cubes adjacent to the faces of the cube of interest. The 8 vertices of the center cube are weighted, say, by four, and the other 24 vertices are weighted by one. The regression is then performed on these 32 weighted vertices via a selected polynomial that can accommodate as many as 31 terms.
With these extensions and variations, cellular regression provides additional flexibility and accuracy not offered by the geometrical interpolations. If processing speed is the prime concern, one can use a couple of linear terms for interpolation to ease the computation time and cost. If color quality is the main concern, one can use all allowed terms and tailor the terms to the characteristics of the device by selecting suitable arithmetical functions.
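A minimal sketch of the idea (illustrative Python, not the book's implementation): fit a small polynomial to the eight vertex values of one cell by least squares, then evaluate it at the point of interest. The basis functions and vertex values here are made up for the example.

```python
def solve(A, b):
    """Solve the square system A c = b by Gaussian elimination with
    partial pivoting (sufficient for the tiny normal equations)."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[piv] = M[piv], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def cell_regression(vertices, values, basis):
    """Least-squares coefficients of sum_i c_i * basis_i(x, y, z) fitted
    to one cell's vertex values, via the normal equations (A^T A) c = A^T b."""
    rows = [[f(*v) for f in basis] for v in vertices]
    m = len(basis)
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
    Atb = [sum(r[i] * val for r, val in zip(rows, values)) for i in range(m)]
    return solve(AtA, Atb)

# One unit cell; a 4-term linear model fitted to its 8 vertex values.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
basis = [lambda x, y, z: 1.0, lambda x, y, z: x,
         lambda x, y, z: y, lambda x, y, z: z]
vals = [1 + 2*x - y + 3*z for x, y, z in verts]   # exactly representable
c = cell_regression(verts, vals, basis)
pred = sum(ci * f(0.3, 0.6, 0.2) for ci, f in zip(c, basis))
assert abs(pred - 1.6) < 1e-9
```

Because the basis is just a list of functions, square-root, logarithm, or exponential terms can be swapped in without changing the fitting machinery, which is exactly the flexibility argued for above.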
9.4 Nonuniform Lookup Table
The discussion so far has been centered on the standard way of implementing a LUT; that is, the source color space is equally spaced with respect to the sampling rate. This results in an uneven and irregular destination space for color conversions that are nonlinear. Many techniques have been proposed to work on nonuniform data.[2,15-19] A very simple example of nonuniform packing is shown in Fig. 9.8. With this packing, each subcell is no longer a cube but a rectangular box. The lattice points of the source RGB space can be selected, for example, by using the gray-balance curve of RGB with respect to lightness (see Fig. 8.4) such that the resulting L* spacing in the destination space is approximately equal.
Because of this change, the inequality rules of the geometric interpolations are no longer valid. They have to be modified by substituting x, y, and z with the relative distances x_r, y_r, and z_r, where

\[
x_r = (x - x_i)/(x_{i+1} - x_i); \quad
y_r = (y - y_i)/(y_{i+1} - y_i); \quad
z_r = (z - z_i)/(z_{i+1} - z_i), \tag{9.23}
\]
where the subscript i indicates the LUT level. The normalization transforms a rectangular box into a cube. This implementation is basically the same as putting three 1D LUTs, which linearize the incoming RGB signals, in front of an equally spaced 3D LUT. The nonuniform LUT reduces the implementation cost and simplifies the design without adding to the computational cost. The real benefit, which will be shown in Section 9.7, is the improvement in interpolation accuracy and a more uniform error distribution.
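Equation (9.23) amounts to a per-axis search for the enclosing interval followed by a normalization; a small Python sketch (illustrative only, with hypothetical level tables):

```python
from bisect import bisect_right

def locate(v, levels):
    """Find the LUT interval containing v and the relative distance of
    Eq. (9.23); `levels` is the sorted list of lattice values on one axis."""
    i = min(max(bisect_right(levels, v) - 1, 0), len(levels) - 2)
    return i, (v - levels[i]) / (levels[i + 1] - levels[i])

def relative_coords(x, y, z, xlv, ylv, zlv):
    (i, xr), (j, yr), (k, zr) = locate(x, xlv), locate(y, ylv), locate(z, zlv)
    return (i, j, k), (xr, yr, zr)

# A 5-level unequal sampling of one axis (made-up values for illustration):
levels = [0, 32, 80, 160, 255]
cell, (xr, yr, zr) = relative_coords(40, 40, 255, levels, levels, levels)
assert cell == (1, 1, 3)
assert abs(xr - (40 - 32) / (80 - 32)) < 1e-12
assert zr == 1.0          # the top lattice point maps into the last interval
```

The returned indices select the enclosing subcell, and (xr, yr, zr) simply replace (x, y, z) in the inequality rules and interpolation equations above.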
Figure 9.8 A five-level unequal sampling of the input color space.
9.5 Inverse Color Transform
As mentioned in Chapter 8, some transformation techniques are easier to implement in one direction than in the other. Three-dimensional interpolation is one of them. A good example is the CIELAB-to-printer/cmy transform. It is difficult to preset the desired CIELAB lattice points for the forward transformation, but the desired cmy values are readily available in the inverse direction, from printer/cmy to CIELAB. Therefore, the general approach is to print an electronic file with known printer/cmy values, then measure the output L*a*b* values. If the destination cmy space is uniformly sampled, the source CIELAB space will not be uniformly sampled because of the nonlinear relationship between cmy and L*a*b*. This introduces some complications to the color conversion in the areas of data storage and search. Therefore, a remapping to an equally spaced packing in the source space is often needed. The need for remapping arises in many instances; it is not limited to the inverse transform. For example, scanner calibration from RGB to CIELAB is a forward transform, but remapping is required because the color specifications of the test targets are not likely to fall exactly onto the equally spaced grid points of the source color space. The only transform that does not require remapping is perhaps the forward transform of monitor/RGB to other color spaces. Usually, a remapping of the source space to a uniform sampling is performed by using the existing sample points. The repacking can be achieved in many ways, as follows:
(1) The new lattice points are computed from known analytical formulas, such as those provided by the CIE and the Xerox Color Encoding Standard. For example, the lattice points for CIELAB to Xerox/RGB can be calculated using the known transform of CIELAB to CIE/XYZ [see Eq. (5.15)] followed by the known linear transform of CIEXYZ to Xerox/RGB,[20] such that the corresponding [L*, a*, b*] and [R, G, B] values are established. These formulas are ideal because they give no color-conversion error at the lattice points; therefore, they are useful for color-simulation studies. However, these ideal cases are rarely encountered in practical applications.

(2) The new lattice points can be evaluated from a polynomial regression, as described in Chapter 8. This works for any packing of sample points, either uniform or nonuniform. Points outside the gamut are extrapolated. Because the polynomial coefficients are obtained by a global least-squares error minimization, the polynomial may not map the sample points to their original values.

(3) The reshaping can be obtained by a weighted vector averaging. Rolleston has successfully applied d^-4-weighted averaging to color-space conversion, where d is the distance between the point of interest and a neighboring point.[19] All sample points are weighted by d^-4, then added together. Because of the inverse fourth power, the contribution to the sum from a sample point vanishes quickly with distance. Like the polynomial evaluation, the method is able to extrapolate and to process irregularly spaced data.

(4) The remapping can be interpolated by a piecewise inversion of the space. The idea is to divide a color space into a set of tetrahedrons such that the nonlinear mapping of the entire space is approximated by a number of local linear transformations. The search for the enclosing tetrahedron is harder for nonuniform packing; often a special search technique is needed.[16] The interpolated values are bounded by the given sample points; there is no extrapolation. Points that do not reside within a tetrahedron cannot be evaluated.
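Method (3) can be sketched as follows (a plain-Python illustration of inverse-distance weighting, not Rolleston's code; power = 4 gives the d^-4 weights, and the sample data are invented):

```python
def shepard(q, samples, power=4, eps=1e-12):
    """Inverse-distance-weighted (Shepard) estimate at point q from a
    list of (position, value) samples; far samples contribute little."""
    num = den = 0.0
    for pos, val in samples:
        d2 = sum((a - b) ** 2 for a, b in zip(q, pos))
        if d2 < eps:              # coincident with a sample: exact value
            return val
        w = d2 ** (-power / 2)    # w = d**(-power)
        num += w * val
        den += w
    return num / den

samples = [((0.0, 0.0, 0.0), 10.0), ((1.0, 0.0, 0.0), 20.0)]
assert shepard((0.0, 0.0, 0.0), samples) == 10.0   # reproduces sample points
assert shepard((0.5, 0.0, 0.0), samples) == 15.0   # symmetry gives the midpoint
```

For remapping, `pos` would be the measured source coordinates and `val` the corresponding device values (which may be vectors); the same routine then evaluates the new, equally spaced lattice points.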
The packing of an inverse 3D LUT is critical for the accuracy of the interpolation. The conventional approach is a uniform spacing in all three axes of the color space, such as the one shown in Fig. 9.1; the size of the new LUT is the bounding box enclosing the destination color gamut, such as the CIELAB gamut of the printer/cmy-to-CIELAB transform. The boundaries are the anticipated maximum and minimum values of L*, a*, and b*, normalized to the bit depth of the internal representation (usually an 8-bit integer). It is apparent that this implementation is wasteful; valid data occupy a small fraction of the total space. A more efficient packing is the nonuniform space, such as the one depicted in Fig. 9.9. The color space is first dissected along the L* axis into layers of unequal thickness by using the relationship of L* to RGB. The bounding box (shown as dashed lines in Fig. 9.9) of each L* layer is set as close to the a* and b* color gamut as possible.
Figure 9.9 A nonuniformly spaced lookup table for scaled CIELAB space (to 8-bit integer).
9.6 Sequential Linear Interpolation
The biggest drawback of the 3D-LUT approach is the inefficient use of the available color space, as discussed in the previous section. Due to the nonlinear relationship, the inverse interpolation creates a large empty space when a bounding box is placed in the destination space. Allebach, Chang, and Bouman proposed an efficient way, called sequential linear interpolation (SLI), to implement these nonlinear transformations.[17,18]
Sequential linear interpolation optimally allocates grid points according to the characteristics of the function. Since the characteristics of the function are not known in advance, they proposed a design procedure that iteratively estimates the characteristics of the function and constructs the SLI structure with increasing accuracy. This procedure is depicted in Fig. 9.10. An initial set of printer measurements is made first. They then construct an initial SLI structure so that the distance between the measured output points and the grid locations is minimized. They use the sequential scalar quantization (SSQ) method to initialize this structure. Next, the necessary characteristics of the inverse printer transfer function are estimated. These estimates are then used to guide the selection of new printer measurement points so that the mean squared error is minimized. Since the new measurement points are only approximately estimated by the initial SLI structure, the actual measured output points in the CIELAB space will not exactly have the desired SLI structure. Therefore, one needs to perform SSQ on the output data again to obtain a new SLI structure based on the new measurement points. This process requires iterations with an increasing number of measurement points to obtain more accurate SLI structures.

Figure 9.10 Iterative design procedure of the sequential linear interpolation.
The regularly spaced interpolation grid is a special case of the sequential interpolation grid in which the grid lines and grid points are distributed uniformly and the domain is regular. As in the case of the regular interpolation grid, the sequential interpolation scheme produces a continuous function, since all the 1D interpolations are continuous. The sequential interpolation grid automatically tracks the domain of the function (i.e., the printer gamut) by placing all the grid points inside the domain. If the domain of the function is not rectangular, then a regular grid will waste grid points outside the domain. More importantly, when the number of grid points is limited, the sequential interpolation grid allows the implementer to arbitrarily allocate grid lines and points to minimize the interpolation error.
Chang, Bouman, and Allebach applied SLI to CIELAB values measured from 9 × 9 × 9 printed color patches uniformly distributed in the printer RGB space, using an Apple color printer. Results show that the SLI grid always outperforms the uniform grid as indicated by the average color difference, and it does better in maximum color difference in all cases except for 5 × 5 × 5 grid points. In general, the color difference for the SLI grid is smaller and its distribution smoother than that of the uniform grid. The advantage of SLI is due to the fact that no grid points are wasted outside the printer gamut. The inefficiency of the uniform grids is shown in their data: the fraction of grid points falling within the printer gamut ranges from 40% for a 5 × 5 × 5 LUT to 22% for a 65 × 65 × 65 LUT. These results indicate that the uniform grids are not efficient; many grid points are wasted. They have shown that SLI definitely has advantages over the uniform LUT with respect to interpolation accuracy. The trade-offs are the complexity, computation cost, and speed.
9.7 Results of Forward 3D Interpolation
This section provides experimental results of testing several 3D-LUT packings to show the accuracy of the 3D-LUT methods. Five test conditions are used: Test 1 is a 5-level, equally spaced LUT (125 lattice points); Test 2 is a 9-level, equally spaced LUT (729 lattice points); Test 3 is a 17-level, equally spaced LUT (4913 lattice points); Test 4 is a 5-level, unequally spaced LUT (125 lattice points); and Test 5 is a 9-level, unequally spaced LUT (729 lattice points). Table 9.1 lists the interpolation errors of the geometric techniques along the neutral axis. The relationships between the interpolation error and 255 neutral colors (in 8-bit Xerox/RGB values) converted from Xerox/RGB to CIELAB using four LUT packings are plotted in Fig. 9.11 for the trilinear interpolation. Other interpolation schemes behave similarly. This figure reveals the following important characteristics of the geometric interpolation:
(1) The interpolation error peaks at the center and diminishes at the nodes (lattice points). This implies that the assumption of using the average values of the vertices for the body center and face centers is a poor one.
(2) The error amplitude decreases as the number of levels increases.
(3) The highest amplitude occurs at the lowest level (dark colors). For equally spaced LUTs, the amplitude damps quickly as the level increases.
(4) Unequally spaced LUTs have a much lower fundamental peak, but the errors ripple to higher levels; this gives a more even error distribution and a better average value.
Table 9.1 Comparisons of interpolation accuracies in terms of the average ΔE*ab for 255 neutral colors.

                     Equal spacing              Unequal spacing
Interpolation  5-level  9-level  17-level     5-level  9-level
Trilinear        2.23     0.90     0.38         1.21     0.38
Prism            3.01     1.18     0.45         1.47     0.43
Pyramid          3.08     1.20     0.46         1.27     0.36
Tetrahedral      2.95     1.06     0.41         1.31     0.36

Figure 9.11 Trilinear interpolation errors of 255 neutral colors using four different LUTs from Xerox/RGB to CIELAB under D50 illumination.

Table 9.2 Comparisons of interpolation accuracies in terms of the average ΔE*ab using 3072 data points.

                            Equal spacing              Unequal spacing
Method                    5-level  9-level  17-level     5-level  9-level
Cubic (trilinear)           5.81     2.50     0.92         1.74     0.46
Prism                       6.38     2.77     1.02         1.81     0.47
Pyramid                     6.29     2.74     1.01         1.76     0.45
Tetrahedral                 6.84     2.99     1.11         1.86     0.49
3 × 3 matrix (8 points)     9.04     4.28     2.08        10.28     5.36
3 × 4 matrix (8 points)     5.84     2.51     0.91         1.88     0.48
3 × 6 matrix (8 points)     6.09     2.75     0.98         4.36     1.10
3 × 7 matrix (8 points)     5.81     2.51     0.92         1.74     0.46
3 × 3 matrix (27 points)    8.47     3.94
3 × 4 matrix (27 points)    4.48     1.76
3 × 6 matrix (27 points)             2.11
3 × 7 matrix (27 points)    4.22     1.62

Table 9.2 lists the average color differences of 12 interpolation techniques under five different conditions using 3072 test points. The first four techniques are geometric interpolations and the rest are cellular regressions. Results in Tables 9.1 and 9.2 confirm that the interpolation accuracy improves as the
sampling rate increases. For each LUT packing, the interpolation accuracies are about the same for the various geometrical techniques. The differences in computational accuracy become even smaller as the LUT size increases or nonuniform packing is used. The same conclusion can be drawn from Kotera's study and Kasson's results.[21-23] Using 8-bit integer data for computation, Kotera and coworkers reported that the trilinear interpolation is the best and the prism is second for 5-level and 9-level LUTs. Similarly, they found that the difference in interpolation accuracy is reduced as the size of the LUT increases. The 9-level LUT gives one ΔE*rms difference from the best (trilinear) to the worst (tetrahedral). Using floating-point computation, we obtained errors about half their size. For the larger 17-level and 33-level LUTs, the accuracy is about the same for all four geometrical interpolations.[21,22] Kasson, Plouffe, and Nin compared the trilinear interpolation with two tetrahedral interpolations and a disphenoid tetrahedral interpolation, which uses a tetrahedron formed by spanning two cubes; the resulting average color differences are 1.00, 1.24, 1.01, and 0.99, respectively.[23] These numbers are very close, if not the same, indicating that there is little difference in their interpolation capabilities. They also showed that the interpolation accuracy improves as the sampling rate increases. If the sampling rate is high enough, the linear interpolation becomes a very good approximation for computing a point that is not a lattice point. However, the accuracy improvement levels off at high sampling rates; a further increase in the sampling rate is not necessary because the gain in accuracy will be small, whereas the cost of storage increases by about a factor of eight for each increment in LUT level. From this simulation, we believe that a 9-level or 17-level LUT will provide adequate accuracy for most color conversions.
In addition, we also determined the error distribution. For a given interpolation method, the ΔE*ab distributions improve as the LUT size increases, as shown in Fig. 9.12 for the trilinear interpolation and Fig. 9.13 for the tetrahedral interpolation, where the distribution bandwidth becomes narrower and the color difference shifts toward the smaller values on the left-hand side of the diagram. Prism and pyramid interpolations behave similarly.

For a given LUT size, the distributions from the various interpolation techniques are about the same, as shown in Fig. 9.14 for a uniform 5-level LUT, Fig. 9.15 for a uniform 9-level LUT, Fig. 9.16 for a uniform 17-level LUT, and Fig. 9.17 for a nonuniform 5-level LUT. As shown in Table 9.2, the accuracy improvement is dramatic for nonuniform LUTs. Results of the color difference show that a 5-level nonuniform LUT is about 35% better than a 9-level uniform LUT, and a 9-level nonuniform LUT, having over 96.16% of points within 1 ΔE*ab, is about a factor of two better than a 17-level uniform LUT. Better yet, the error distribution is almost uniform, and most points, if not all, have errors of less than two units. These results demonstrate that, with a proper selection of the sampling points along the principal axes, we can gain a big saving in memory cost and a great improvement in interpolation accuracy as well as in the error distribution.
Figure 9.12 Error distributions of the trilinear interpolation.

Figure 9.13 Error distributions of the tetrahedral interpolation.

Figure 9.14 Error distributions of a uniform 5-level LUT.

Figure 9.15 Error distributions of a uniform 9-level LUT.

Figure 9.16 Error distributions of a uniform 17-level LUT.

Figure 9.17 Error distributions of a nonuniform 5-level LUT.

Table 9.3 Comparisons of interpolation accuracies in terms of the average ΔE*ab for the sRGB-to-CIELAB transform.

                 Equal spacing        Unequal spacing
Interpolation  5-level  9-level     5-level  9-level
Trilinear        1.92     0.48        2.94     0.77
Prism            1.67     0.42        2.54     0.66
Pyramid          2.01     0.49        3.14     0.82
Tetrahedral      1.29     0.33        1.80     0.47

Figure 9.18 Trilinear interpolation errors of 255 neutral colors using four different LUTs from sRGB to CIELAB under D65 illumination.

Similar computations were performed on the conversion from sRGB to CIELAB under D65 illumination. The average color differences, in ΔE*ab units, of 5832 equally spaced colors in sRGB space for four interpolation methods under four different LUTs are given in Table 9.3, where Test 1 uses a 5-level equally spaced LUT, Test 2 uses a 9-level equally spaced LUT, Test 3 uses a 5-level unequally spaced LUT, and Test 4 uses a 9-level unequally spaced LUT. The error distributions of neutral colors under the four different LUTs are plotted in Fig. 9.18. These results contradict the data obtained for the Xerox/RGB-to-CIELAB transform
in two areas. First, the interpolation accuracies are not the same; the tetrahedral interpolation is better than the other methods, although the differences are not big. Second, the equally spaced LUTs give better accuracy than the corresponding nonuniform LUTs. This is the most significant contradiction; it can be explained by the gamma correction of the sRGB formulation [see Eq. (A4.21) of Appendix 4]. The gamma correction linearizes the sRGB digital count with respect to the luminance Y, as shown in Fig. 9.19, whereas the nonlinear LUT in the Xerox/RGB-to-CIELAB transform mimics this behavior. These results indicate that an RGB encoding with gamma correction is already optimal with respect to the luminance; therefore, the linearly spaced LUT gives the best results, and a nonlinear adjustment makes an already linear relationship nonlinear, giving much worse conversion accuracies. Thus, the nonlinearly spaced LUT should only be used for Device/RGB or RGB encodings without gamma correction.
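The decoding half of that gamma correction is the standard piecewise sRGB transfer function; a Python sketch of the standard formula (stated here from the sRGB definition, not copied from Appendix 4):

```python
def srgb_to_linear(c):
    """Map a gamma-corrected sRGB component c in [0, 1] to its linear
    value; the linear components combine linearly into luminance Y."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

assert srgb_to_linear(0.0) == 0.0
assert abs(srgb_to_linear(1.0) - 1.0) < 1e-12
assert srgb_to_linear(0.5) < 0.5   # mid code maps well below mid luminance
```

Because the encoded digital count already compensates for this curve, equally spaced sRGB lattice points are effectively well placed with respect to luminance, which is why the uniform LUTs win in Table 9.3.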
9.8 Results of Inverse 3D Interpolation
Figure 9.19 Relationship of gamma-corrected sRGB and luminance Y.

Results of the 3D LUT with interpolation for a CIELAB-to-Xerox/RGB transform are given in Table 9.4. The nonuniform 3D LUTs improve the interpolation accuracy by about 20%. Similar to the forward interpolation, the accuracy of the inverse interpolation increases with the sampling rate or the size of the LUT (see Fig. 9.20), and the accuracies for all four geometric interpolations are about the same (see Fig. 9.21). The error distributions of the uniform and nonuniform LUTs for the trilinear interpolation at the 5-level are shown in Fig. 9.22; the nonuniform LUT has a narrower band and peaks earlier. Other geometric interpolations show a similar trend.
Figure 9.20 Error distributions of the trilinear interpolation using an inverse 3D LUT.
Table 9.4 The average errors, d, of 3072 test points from a CIELAB-to-Xerox/RGB transform using a 3D lookup table with geometric interpolations.

                                          Cubic  Prism  Pyramid  Tetrahedral
Uniform 5-level from known formula         10.2   12.6    10.5       11.3
Uniform 5-level from 20-term regression    10.0   12.3    10.2       11.0
Nonuniform 5-level from known formula       8.6   10.5     9.1        9.5
Uniform 9-level from known formula          2.8    3.4     2.9        3.1
Uniform 9-level from 20-term regression     2.6    3.2     2.7        2.8
Nonuniform 9-level from known formula       2.3    2.8     2.5        2.7
Uniform 17-level from known formula         1.1    1.2     1.1        1.2
Uniform 17-level from 20-term regression    1.0    1.1     1.0        1.0
Figure 9.21 Error distributions of a 9-level inverse 3D-LUT.
Figure 9.22 Error distributions of the trilinear interpolation using a uniformly spaced and a
nonuniformly spaced inverse 3D LUT.
9.9 Remarks
The differences among the various geometrical interpolations lie in how the cube is subdivided. This in turn leads to differences in the extractions and in the equations for computation. Simulation results of the Xerox/RGB-to-CIELAB conversions indicate that the precision of approximating a true value by geometrical interpolation is very good; even the lowest, 5-level LUT has a precision of less than 7 ΔE*ab units. The magnitude of the error decreases as the number of levels increases; at 17 levels, the errors are about 1 ΔE*ab unit. Proper packing increases the precision further; for example, the precision is less than 0.5 ΔE*ab for a nonuniform 9-level LUT. Moreover, the interpolation errors of the various geometrical techniques are about the same; the differences from the best to the worst are small, ranging from about 1 ΔE*ab for a 5-level uniform LUT to less than 0.05 for a 9-level nonuniform packing. The error peaks at the center of the cube and diminishes at the lattice points. The highest amplitude occurs at the lowest level (darker colors) in the forward transformation. For equally spaced LUTs, the amplitude damps quickly. For unequally spaced LUTs, the fundamental peak is lower but the errors ripple to higher levels, resulting in a more even error distribution. In the case of RGB encoding standards that have gamma correction, the highest accuracies are obtained from uniform 3D LUTs because the gamma correction linearizes the RGB values with respect to the luminance Y.
The 3D lookup table with interpolation has numerous applications in scanner, monitor, and printer calibrations. Compared to other transformation techniques, the 3D-LUT approach provides better accuracy because the color space can always be divided further by increasing the sampling rate. If the sampling rate is high enough, linear interpolation becomes a very good approximation for computing a point that is not a lattice point. Among the interpolation techniques, the tetrahedral interpolation has the lowest computation and implementation costs, the highest processing speed, and a simpler analytical scheme for obtaining the inverse lookup table. This method has been applied successfully to device characterizations and calibrations and to the color rendition of electronic images.[24-32]
References
1. InterColor Consortium, InterColor Prole Format, Version 3.0 (1994).
2. J. M. Kasson, W. Plouffe, and S. I. Nin, A tetrahedral interpolation technique
for color space conversion, Proc. SPIE 1909, pp. 127138 (1993).
3. P. C. Pugsley, Image reproduction methods and apparatus, British Patent No.
1,369,702 (1974).
4. K. Kanamori, T. Fumoto, and H. Kotera, A color transformation algorithm
using prism interpolation, IS&T 8th Int. Congress on Advances in Non-Impact
Printing Technologies, pp. 477482 (1992).
Three-Dimensional Lookup Table with Interpolation 181
5. K. Kanamori, H. Kotera, O. Yamada, H. Motomura, R. Iikawa, and T. Fumoto,
Fast color processor with programmable interpolation by Small Memory
(PRISM), J. Electron. Imag. 2, pp. 213–224 (1993).
6. H. Kotera, K. Kanamori, T. Fumoto, O. Yamada, H. Motomura, and M. Inoue,
A single chip color processor for device independent color reproduction, 1st
IS&T/SID's Color Imaging Conf.: Transforms and Transportability of Color,
pp. 133–137 (1993).
7. P. Franklin, Interpolation methods and apparatus, U.S. Patent No. 4,334,240
(1982).
8. M. Iri, A method of multi-dimensional linear interpolation, J. Info. Processing
Soc. Japan 8, pp. 33–36 (1968).
9. N. I. Korman and J. A. C. Yule, Digital computation of dot areas in a colour
scanner, Proc. 11th Int. Conf. of Printing Research Institutes, Canandaigua,
NY, Edited by W. H. Banks, pp. 93–106 (1971). Hensenstamm: P. Keppler
(1973).
10. T. Sakamoto and A. Itooka, Interpolation method for memory devices,
Japanese Patent Disclosure 53-123201 (1978).
11. T. Sakamoto and A. Itooka, Linear interpolator for color correction, U.S.
Patent No. 4,275,413 (1981).
12. T. Sakamoto, Interpolation method for memory devices, Japanese Patent
Disclosure 57-208765 (1982).
13. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press,
Bellingham, WA, Chaps. 4 and 6 (1997).
14. H. R. Kang, Comparisons of three-dimensional interpolation techniques by
simulations, Device-Independent Color Imaging II, Proc. SPIE 2414,
pp. 104–114 (1995).
15. H. R. Kang, Printer-related color processing techniques, Color Hard Copy and
Graphic Arts III, Proc. SPIE 2413, pp. 410–419 (1995).
16. I. E. Bell and W. Cowan, Characterizing printer gamuts using tetrahedral
interpolation, 1st IS&T/SID's Color Imaging Conf.: Transforms and
Transportability of Color, pp. 108–113 (1993).
17. J. P. Allebach, J. Z. Chang, and C. A. Bouman, Efficient implementation of
nonlinear color transformations, 1st IS&T/SID's Color Imaging Conf.:
Transforms and Transportability of Color, pp. 143–148 (1993).
18. J. Z. Chang, C. A. Bouman, and J. P. Allebach, Recent results in color
calibration using sequential linear interpolation, IS&T's 47th Annual
Conf./ICPS, pp. 500–505 (1994).
19. R. Rolleston, Using Shepard's interpolation to build color transformation
tables, 2nd IS&T/SID's Color Imaging Conf.: Color Science, Systems, and
Applications, pp. 74–77 (1994).
20. XNSS 288811, Color Encoding Standard, Xerox Corporation.
21. K. Kanamori, H. Kotera, O. Yamada, H. Motomura, R. Iikawa, and T. Fumoto,
Fast color processor with programmable interpolation by Small Memory
(PRISM), J. Electron. Imag. 2, pp. 213–224 (1993).
182 Computational Color Technology
22. H. Kotera, K. Kanamori, T. Fumoto, O. Yamada, H. Motomura, and M. Inoue,
A single chip color processor for device independent color reproduction, 1st
IS&T and SID's Color Imaging Conference: Transforms and Transportability
of Color, pp. 133–137 (1993).
23. J. M. Kasson, W. Plouffe, and S. I. Nin, A tetrahedral interpolation technique
for color space conversion, Proc. SPIE 1909, pp. 127–138 (1993).
24. D. A. Clark, D. C. Strong, and T. Q. White, Method of color conversion with
improved interpolation, U.S. Patent No. 4,477,833 (1984).
25. H. Ikegami, New direct color mapping method for reducing the storage
capacity of look-up table memory, Proc. SPIE 1075, pp. 1075–1105 (1989).
26. K. Kanamori, H. Kawakami, and H. Kotera, A novel color transformation
algorithm and its applications, Image Processing Algorithm and Techniques,
Proc. SPIE 1244, pp. 272–281 (1990).
27. K. Kanamori and H. Kotera, A method for selective color control in perceptual
color space, J. Imaging Sci. Techn. 35(5), pp. 307–316 (1991).
28. P.-C. Hung, Colorimetric calibration for scanners and media, Proc. SPIE
1448, pp. 164–174 (1991).
29. K. Kanamori and H. Kotera, Color correction technique for hard copies by
4-neighbors interpolation method, J. Imaging Sci. Techn. 36, pp. 73–80 (1992).
30. S. I. Nin, J. M. Kasson, and W. Plouffe, Printing CIELAB images on a CMYK
printer using tri-linear interpolation, Color Hard Copy and Graphic Arts, Proc.
SPIE 1670, pp. 316–324 (1992).
31. P.-C. Hung, Colorimetric calibration in electronic imaging devices using a
look-up-table model and interpolations, J. Electron. Imag. 2, pp. 53–61 (1993).
32. K. D. Gennetten, RGB to CMYK conversion using 3D barycentric
interpolation, Device-Independent Color Imaging and Imaging Systems
Integration, Proc. SPIE 1909, pp. 116–126 (1993).
Chapter 10
Metameric Decomposition and
Reconstruction
This chapter presents the applications of the matrix R theory for spectrum
decomposition and reconstruction. We apply the method to the spectra of seven
illuminants and the fourteen spectra used in the color rendering index (CRI) under
the CIE 1931 or CIE 1964 standard observer. The common characteristics are
derived and discussed. Based on these characteristics, we develop two methods
for deriving two basis vectors that can be used to reconstruct various illuminants
to some degree of accuracy. The first method uses the intuitive approach of
averaging to generate basis vectors. The second method uses input tristimulus
values to obtain the fundamental metamer via orthogonal projection; the
metameric black is obtained from a set of coefficients that scale the average
metameric black. The coefficients are derived from ratios of input tristimulus
values to average tristimulus values. These methods are applied to reconstruct
7 illuminants and 14 CRI spectra. The application of metameric spectral
reconstruction to the estimation of the fundamental metamer from an RGB or
XYZ color scanner for assessing skin-tone reproducibility is also discussed.
10.1 Metameric Spectrum Decomposition
In Section 3.2, we showed that any spectrum Φ_i could be decomposed into a
fundamental and a metameric black using the matrix R [1–3].

R = A(A^T A)^{-1} A^T = A M_a^{-1} A^T = A M_e A^T = M_f A^T,    (10.1)

M_f A^T Φ_i = M_f Υ = A(A^T A)^{-1} Υ = Φ*,    (10.2)

where Υ = A^T Φ_i is a 3 × 1 vector of tristimulus values. Matrix R has a size of
n × n because M_f is n × 3 and A^T is 3 × n, where n is the number of elements
in vector Φ_i. But R has a rank of only three because it is derived solely from
matrix A, which has three independent columns. It decomposes the spectrum Φ_i
of the stimulus into two components, the fundamental Φ* and the metameric
black Φ_b, as given in Eqs. (10.3) and (10.4), respectively, where the fundamental
is common within any group of mutual metamers, but the metameric blacks are
different.

Φ* = R Φ_i,    (10.3)

and

Φ_b = Φ_i − Φ* = Φ_i − R Φ_i = (I − R) Φ_i.    (10.4)
Figures 10.1 and 10.2 show the metameric decompositions of the spectral power
distributions (SPDs) of illuminants D65 and A, respectively. Inversely, the
stimulus spectrum can be reconstructed if the fundamental and metameric black
are known [1,4].

Φ_i = Φ* + Φ_b = R Φ_i + (I − R) Φ_i.    (10.5)
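As a concrete sketch of Eqs. (10.1)–(10.5), the following Python fragment builds R and splits a spectrum into its two components. The CMF matrix and spectrum here are random stand-ins, not real CIE data; the quantities computed at the end are the defining properties of the decomposition.

```python
# Sketch of Eqs. (10.1)-(10.5) with synthetic data: A stands in for a sampled
# CMF matrix (n x 3) and phi for an input spectrum; neither is real CIE data.
import numpy as np

n = 8                                    # number of wavelength samples
rng = np.random.default_rng(0)
A = rng.random((n, 3))                   # stand-in CMF matrix, rank 3
phi = rng.random(n)                      # stand-in input spectrum

R = A @ np.linalg.inv(A.T @ A) @ A.T     # Eq. (10.1): R = A (A^T A)^(-1) A^T
fund = R @ phi                           # Eq. (10.3): fundamental metamer
black = (np.eye(n) - R) @ phi            # Eq. (10.4): metameric black
recon = fund + black                     # Eq. (10.5): the sum restores the input

tvals_black = A.T @ black                # a metameric black has zero tristimulus
```

Note that R is n × n but has rank three, which is why it cannot be inverted to recover the spectrum from the fundamental alone.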
To show the performance of the spectrum reconstruction using Eq. (10.5), we use
the SPDs of the illuminants and the fourteen spectra of the color rendering index.
This set is small, yet it contains very different spectral shapes for testing the
reconstruction method. Because the set is small, it is easy to analyze in detail
without losing oneself in a sea of data, such as the Munsell chip set. Also, this set
is in the public domain and is readily available to users who are interested in
duplicating the results and verifying the algorithm. It provides a good learning
exercise that is not available with private data.

Figure 10.1 Metameric decomposition of illuminant D65.

Figure 10.2 Metameric decomposition of illuminant A.
Applying the method of spectrum decomposition to the set of 7 illuminants and
14 CRI spectra, we derive the fundamental metamers and metameric blacks of
illuminants A, B, C, D50, D55, D65, and D75, as shown in Figs. 10.3 and 10.4,
and the CRI spectrum decompositions in Figs. 10.5 and 10.6. The results reaffirm
Cohen's observations as follows [1]:
(1) A broadband and smoothly varying spectrum gives a fundamental
metamer of positive values (see Figs. 10.3 and 10.5).
(2) The fundamental metamers cross the input spectrum three or four times
(see Figs. 10.4 and 10.6).
(3) The fundamental and metameric black are mirror images of each other,
where the peak of one component is the valley of the other and vice versa
(see Figs. 10.7 and 10.8).
Figure 10.3 Fundamental metamers of seven illuminants and average under CIE 1931
CMF.
Figure 10.4 Metameric blacks of seven illuminants and average under CIE 1931 CMF.
Figure 10.5 Fundamental metamers of 14 CRI spectra and average under CIE 1931 CMF.
Although the magnitudes of the spectra are not the same, the general shapes are
similar in that the peaks and valleys occur at approximately the same wavelengths.
In particular, all spectra of the metameric blacks in Figs. 10.4 and 10.6, including
the average spectrum, cross the x-axis exactly four times at exactly the same
points of 430, 470, 538, and 610 nm; it seems that their shapes differ only by a
scaling factor. The same crossover points are shown in Cohen's study of
illuminants. Another interesting point is that the average (or sum) of metameric
blacks is still a metameric black, as shown in Eq. (10.6); this is an extension of
matrix R property 9 given in Chapter 3.
A^T Φ_b,avg = (1/m) A^T (Φ_b,1 + Φ_b,2 + ··· + Φ_b,m)
            = (1/m) [A^T Φ_b,1 + A^T Φ_b,2 + ··· + A^T Φ_b,m] = 0.    (10.6)
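The property in Eq. (10.6) is easy to verify numerically. In this sketch (synthetic matrix A and spectra, not the illuminant data of the text), the metameric blacks of several spectra are averaged and the tristimulus values of the average are computed:

```python
# Numeric check of Eq. (10.6): an average of metameric blacks is itself a
# metameric black. A and the spectra are random stand-ins, not CIE data.
import numpy as np

n, m = 8, 5
rng = np.random.default_rng(1)
A = rng.random((n, 3))
R = A @ np.linalg.inv(A.T @ A) @ A.T
spectra = rng.random((m, n))                # m arbitrary spectra
blacks = spectra @ (np.eye(n) - R).T        # metameric black of each, Eq. (10.4)
avg_black = blacks.mean(axis=0)             # average metameric black
residual = A.T @ avg_black                  # should vanish, Eq. (10.6)
```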
In fact, the normalized spectral distributions of all of the illuminants are identical:

Φ*_A(λ)/E_A(λ) = Φ*_B(λ)/E_B(λ) = Φ*_C(λ)/E_C(λ) = Φ*_D50(λ)/E_D50(λ)
= Φ*_D55(λ)/E_D55(λ) = Φ*_D65(λ)/E_D65(λ) = Φ*_D75(λ)/E_D75(λ),    (10.7a)

Φ_b,A(λ)/E_A(λ) = Φ_b,B(λ)/E_B(λ) = Φ_b,C(λ)/E_C(λ) = Φ_b,D50(λ)/E_D50(λ)
= Φ_b,D55(λ)/E_D55(λ) = Φ_b,D65(λ)/E_D65(λ) = Φ_b,D75(λ)/E_D75(λ).    (10.7b)

Figure 10.6 Metameric blacks of 14 CRI spectra and average under CIE 1931 CMF.
This is because

Φ_i(λ) = E_i(λ) = Φ*_i(λ) + Φ_b,i(λ),    (10.8)

where E_i(λ) is the SPD of illuminant i. As a result, the normalization by E_i(λ)
gives

1 = Φ*_i(λ)/E_i(λ) + Φ_b,i(λ)/E_i(λ).    (10.9)

Equation (10.9) represents an equal-energy illuminant; Fig. 10.7 gives the power
distributions of the normalized equal-energy illuminant under the CIE 1931 CMF
and the CIE 1964 CMF.
Figure 10.7 The normalized spectral power distribution of seven illuminants under two CIE
standard observers.
10.2 Metameric Spectrum Reconstruction
A spectrum can be faithfully or metamerically reconstructed via Eq. (10.5) if the
fundamental and metameric black are known. This is the basic property of
metamer reconstruction. The task that we want to address is: can we reconstruct a
spectrum without knowing its components? The similarity in fundamentals and
metameric blacks among different illuminants suggests finding a single set of two
basis vectors that can be used to represent all of the illuminants involved. Such a
set makes it possible to reconstruct a spectrum without knowing its components.
Two methods are proposed to recover the input spectrum from the fundamental
and the metameric black.
10.2.1 Spectrum reconstruction from the fundamental and
metameric black
The first method is an intuitive one, where we simply average the fundamental
metamers Φ*_i(λ_j) and the metameric blacks Φ_b,i(λ_j) of all illuminants of
interest:
Figure 10.8 The average spectral power distribution of seven illuminants under two CIE
standard observers.

Φ*_avg(λ_j) = (1/K) Σ_{i=1}^{K} Φ*_i(λ_j),    Φ_b,avg(λ_j) = (1/K) Σ_{i=1}^{K} Φ_b,i(λ_j),    (10.10)
where K is the number of illuminants (or spectra). We then use them as the basis
vectors for spectrum reconstruction. Figure 10.8 gives the average spectral power
distribution (SPD) of all seven illuminants under two different CMFs. These
spectra are used to find the weights in Eq. (10.11) for reconstruction:

[Φ(λ_1), Φ(λ_2), Φ(λ_3), ..., Φ(λ_n)]^T
    = w_1 [Φ*_avg(λ_1), Φ*_avg(λ_2), ..., Φ*_avg(λ_n)]^T
    + w_2 [Φ_b,avg(λ_1), Φ_b,avg(λ_2), ..., Φ_b,avg(λ_n)]^T
    = [Φ*_avg  Φ_b,avg] [w_1, w_2]^T,    (10.11a)

where w_i is the weight or coefficient of a given basis vector. The explicit
expression in Eq. (10.11a) can be represented compactly in matrix-vector
notation as

Φ = w_1 Φ*_avg + w_2 Φ_b,avg = B W,    (10.11b)

where B is an n × m matrix and W is a vector of m elements (in this case, m = 2,
and n is the number of sample points in the spectrum or SPD). If matrix B is
known, we can derive the weights for a given input-object spectrum by using the
pseudo-inverse transformation

W = (B^T B)^{-1} B^T Φ.    (10.12)

Matrix B^T has a size of m × n; therefore, (B^T B) has a size of m × m. Because
only two components are used in the spectrum reconstruction, m is much smaller
than n, ensuring a nonsingular matrix (B^T B). The reconstructed spectrum Φ_c is
then given as

Φ_c = B W.    (10.13)
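Eqs. (10.11)–(10.13) amount to a least-squares fit onto a two-column basis. A minimal sketch, again with synthetic data in place of the illuminant SPDs:

```python
# Two-basis reconstruction of Eqs. (10.11)-(10.13): B holds the average
# fundamental and average metameric black; the weights W come from the
# pseudo-inverse of Eq. (10.12). Synthetic stand-in data throughout.
import numpy as np

n = 8
rng = np.random.default_rng(2)
A = rng.random((n, 3))
R = A @ np.linalg.inv(A.T @ A) @ A.T
spectra = rng.random((5, n))                            # training "illuminants"

avg_fund = (spectra @ R.T).mean(axis=0)                 # average fundamental
avg_black = (spectra @ (np.eye(n) - R).T).mean(axis=0)  # average metameric black
B = np.column_stack([avg_fund, avg_black])              # n x 2 basis matrix

phi = spectra[0]                                        # spectrum to reconstruct
W = np.linalg.inv(B.T @ B) @ B.T @ phi                  # Eq. (10.12)
phi_c = B @ W                                           # Eq. (10.13)
residual = B.T @ (phi - phi_c)                          # orthogonal to the basis
```

Because (B^T B) is only 2 × 2, the inversion is cheap, and the residual Φ − Φ_c is orthogonal to both basis vectors: this is the best two-component approximation in the least-squares sense.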
10.2.2 Spectrum reconstruction from tristimulus values
The second method uses the input tristimulus values to obtain the fundamental
via Eq. (10.2) because the matrix M_f of fundamental primaries is known for a
given color-matching function (see property 10 in Chapter 3). Once we obtain the
fundamental, we presumably could invert Eq. (10.3) to recover the spectrum. The
problem with this approach is that the matrix R has a size of n × n but a rank of
only three, where n ≫ 3. Therefore, R is singular and cannot be inverted.
We need other ways of finding the metameric black from the input tristimulus
values. The average spectrum of metameric blacks is a good starting point,
perhaps with some weighting. The metameric blacks of the seven illuminants are
similar in the sense that they all cross over at the same wavelengths (see
Fig. 10.4). This is also true for the fourteen CRI spectra (see Fig. 10.6). Moreover,
the normalized average metameric black of the seven illuminants almost
coincides with the average CRI metameric black, except at the two extremes of
the visible range, as shown in Fig. 10.9. Differences at the two extremes may not
cause serious problems because the two extremes are visually the least sensitive.
Judging from the same crossing points and the remarkable similarity in shape, it is
possible to find the weighting factors with respect to the input tristimulus values.
Therefore, it is possible to represent the metameric black of any spectrum by the
average with weights, as shown in Eq. (10.14):

Φ = Φ* + Φ_b = M_f Υ + D_β Φ_b,avg,    (10.14a)
Figure 10.9 Comparison of the average metameric blacks from illuminants and CRI
spectra.
or explicitly,

[Φ(λ_1), Φ(λ_2), Φ(λ_3), ..., Φ(λ_n)]^T
    = [Φ*(λ_1), Φ*(λ_2), Φ*(λ_3), ..., Φ*(λ_n)]^T
    + diag(β_1, β_2, β_3, ..., β_n) [Φ_b,avg(λ_1), Φ_b,avg(λ_2), ..., Φ_b,avg(λ_n)]^T.    (10.14b)
The element β_i is the ratio of the metameric black to the average at the ith
wavelength. The weight vector β can be derived from the input tristimulus values;
each element is viewed as a linear combination of the tristimulus ratios X_r, Y_r,
and Z_r:

X_r = X_j / X_avg,    Y_r = Y_j / Y_avg,    Z_r = Z_j / Z_avg,    (10.15)
where X_avg, Y_avg, and Z_avg are the average tristimulus values of a training set,
and X_j, Y_j, and Z_j are the tristimulus values of the jth element.

β_i = c_{i1} X_r + c_{i2} Y_r + c_{i3} Z_r + c_{i4} X_r Y_r + c_{i5} Y_r Z_r + c_{i6} Z_r X_r + ··· .    (10.16)
For all β values sampled across the visible range, we can put them in a
matrix-vector form, as given in Eq. (10.17) for a six-term polynomial:

[β_1, β_2, β_3, ..., β_n]^T = C [X_r, Y_r, Z_r, X_r Y_r, Y_r Z_r, Z_r X_r]^T,    (10.17a)

where C is the n × 6 matrix of coefficients c_{ij}, or

β = C Υ_r,    (10.17b)

where Υ_r is the vector of tristimulus ratios and their higher polynomial terms. To
derive the coefficients in matrix C, we obtain them one wavelength at a time by
using the training set. For j sets of tristimulus ratios, employing Eq. (10.16), we
have

β_{ij} = c_{i1} X_{r,j} + c_{i2} Y_{r,j} + c_{i3} Z_{r,j} + c_{i4} X_{r,j} Y_{r,j}
+ c_{i5} Y_{r,j} Z_{r,j} + c_{i6} Z_{r,j} X_{r,j} + ··· .    (10.18)
For a six-term polynomial, we have

[β_{i1}, β_{i2}, β_{i3}, ..., β_{ij}]^T = Υ_{r,j} [c_{i1}, c_{i2}, c_{i3}, c_{i4}, c_{i5}, c_{i6}]^T,    (10.19a)

where each row of the j × 6 matrix Υ_{r,j} holds the terms
[X_{r,j}, Y_{r,j}, Z_{r,j}, X_{r,j} Y_{r,j}, Y_{r,j} Z_{r,j}, Z_{r,j} X_{r,j}] of one
training sample, or

β_i = Υ_{r,j} c_i.    (10.19b)
The matrix Υ_{r,j}, in this case, contains the tristimulus ratios of all j training
samples; it is known (or can be calculated) for a given set of samples. The vector
β_i at a given wavelength can also be computed. Therefore, the vector c_i of
coefficients can be obtained by inverting Eq. (10.19b) using the pseudo-inverse.
Note that the number of training samples j must be greater than (or equal to) the
number of polynomial terms in vector Υ_r in order to have a unique solution for
Eq. (10.20).

c_i = (Υ_{r,j}^T Υ_{r,j})^{-1} Υ_{r,j}^T β_i.    (10.20)
Equation (10.20) must be performed n times, once for each wavelength sampled
in the visible range, where matrix Υ_{r,j} is the same for all wavelengths. Once C
is obtained, we can compute β from the input tristimulus vector Υ_r using
Eq. (10.17). The resulting β is then used in Eq. (10.14) to obtain the metameric
black for the input tristimulus values.
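The fitting procedure of Eqs. (10.15)–(10.20) compresses into a few matrix operations; with a least-squares solver, all n per-wavelength fits of Eq. (10.20) can be solved in one call. The sketch below uses synthetic training spectra in place of the illuminant/CRI training set:

```python
# Sketch of Eqs. (10.15)-(10.20): fit the per-wavelength scale factors beta of
# the average metameric black as a 6-term polynomial of tristimulus ratios.
# Training data are random stand-ins, not the book's illuminant/CRI set.
import numpy as np

n, nsamp = 8, 10                               # wavelengths, training samples
rng = np.random.default_rng(3)
A = rng.random((n, 3))
R = A @ np.linalg.inv(A.T @ A) @ A.T
train = rng.random((nsamp, n)) + 0.1           # training spectra
blacks = train @ (np.eye(n) - R).T             # metameric blacks, Eq. (10.4)
avg_black = blacks.mean(axis=0)

tvals = train @ A                              # tristimulus values (nsamp x 3)
ratios = tvals / tvals.mean(axis=0)            # Eq. (10.15): X_r, Y_r, Z_r
Xr, Yr, Zr = ratios.T
U = np.column_stack([Xr, Yr, Zr, Xr * Yr, Yr * Zr, Zr * Xr])  # Eq. (10.17)

beta = blacks / avg_black                      # target ratios, Eq. (10.14b)
C, *_ = np.linalg.lstsq(U, beta, rcond=None)   # Eq. (10.20), all wavelengths

beta_hat = U[0] @ C                            # Eq. (10.17b) for sample 0
black_hat = beta_hat * avg_black               # estimated metameric black
```

With nsamp ≥ 6 training samples, the normal equations of Eq. (10.20) have a unique solution, matching the condition stated above.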
10.2.3 Error measures
The spectral difference is computed by

ΔS = (Σ |Φ − Φ_c|)/n.    (10.21)

The summation carries over all n sample points, and the standard deviation
ΔS_std can also be computed via Eq. (10.22):

ΔS_std = [(Σ |Φ − Φ_c|^2)/n]^{1/2}.    (10.22)

Furthermore, the input and computed spectra are converted to tristimulus values
and CIELAB values under a selected illuminant such that the color difference in
CIELAB space can be calculated as a measure of the goodness of the spectrum
reconstruction.
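The two error measures translate directly into code. This sketch defines them as plain functions (the CIELAB conversion step mentioned above is omitted):

```python
# Eqs. (10.21) and (10.22) as plain functions.
import numpy as np

def spectral_diff(phi, phi_c):
    """Mean absolute spectral difference, Eq. (10.21)."""
    return float(np.abs(phi - phi_c).sum() / phi.size)

def spectral_std(phi, phi_c):
    """Root-mean-square spectral difference, Eq. (10.22)."""
    return float(np.sqrt((np.abs(phi - phi_c) ** 2).sum() / phi.size))

# Tiny two-point example: differences are 0.5 and 1.0.
d = spectral_diff(np.array([1.0, 2.0]), np.array([1.5, 1.0]))      # (0.5 + 1.0)/2
d_std = spectral_std(np.array([1.0, 2.0]), np.array([1.5, 1.0]))   # sqrt(0.625)
```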
10.3 Results of Spectrum Reconstruction
In this section, we present the results of spectrum reconstruction using two
different methods. The first method uses the average spectra of the fundamental
and metameric black, and the second method uses tristimulus values to
reconstruct the spectrum.
10.3.1 Results from average fundamental and metameric black
Table 10.1 lists the weights of the basis vectors and the reconstruction accuracies.
The spectrum reconstruction accuracy is good for D50, D55, and B, marginally
acceptable for D65 and C, and poor for D75 and A. The plots of the seven
illuminants and their reconstructed spectra are given in Figs. 10.10–10.16. The
accuracy improves for D65 and D75 if only the D illuminants are used for
deriving the basis vectors (see Table 10.2). It is quite apparent that the
reconstructed spectrum does not closely match the original (what would you
expect from only two basis vectors?). Note that the use of the average spectra of
fundamental and metameric black for reconstruction
Table 10.1 The weights of the basis vectors and spectrum reconstruction accuracies using
the basis vectors derived from seven illuminants.

Illuminant   Observer    w_1      w_2      ΔS      ΔS_std
D50          CIE 1931    0.9610   0.9431   0.044   0.054
D50          CIE 1964    0.9607   0.9435   0.044   0.054
D55          CIE 1931    0.9854   0.9060   0.040   0.046
D55          CIE 1964    0.9868   0.9005   0.037   0.044
D65          CIE 1931    1.0341   0.8735   0.132   0.145
D65          CIE 1964    1.0385   0.8588   0.127   0.141
D75          CIE 1931    1.0789   0.8706   0.211   0.233
D75          CIE 1964    1.0854   0.8492   0.205   0.229
A            CIE 1931    0.8941   1.5363   0.422   0.458
A            CIE 1964    0.8810   1.5846   0.404   0.441
B            CIE 1931    0.9712   0.9810   0.049   0.061
B            CIE 1964    0.9696   0.9852   0.049   0.061
C            CIE 1931    1.0753   0.8895   0.130   0.147
C            CIE 1964    1.0780   0.8782   0.127   0.144
Figure 10.10 Comparison of illuminant D50 and its reconstructed spectra using two basis
vectors.

Figure 10.11 Comparison of illuminant D55 and its reconstructed spectra using two basis
vectors.

Figure 10.12 Comparison of illuminant D65 and its reconstructed spectra using two basis
vectors.

Figure 10.13 Comparison of illuminant D75 and its reconstructed spectra using two basis
vectors.
Figure 10.14 Comparison of illuminant A and its reconstructed spectra using two basis
vectors.
Figure 10.15 Comparison of illuminant B and its reconstructed spectra using two basis
vectors.
Figure 10.16 Comparison of illuminant C and its reconstructed spectra using two basis
vectors.
Table 10.2 The weights of the basis vectors and spectrum reconstruction accuracies using
the basis vectors derived from four D illuminants.

Illuminant   Observer    w_1      w_2      ΔS      ΔS_std
D50          CIE 1931    0.9394   1.0193   0.117   0.130
D55          CIE 1931    0.9674   0.9945   0.055   0.062
D65          CIE 1931    1.0220   0.9847   0.047   0.051
D75          CIE 1931    1.0712   1.0016   0.126   0.140
Table 10.3 The weights of basis vectors and spectrum reconstruction accuracies using the
basis vectors derived from 14 CRI spectra.

Sample     Observer    w_1      w_2      ΔS      ΔS_std
CRI #1     CIE 1931    1.0690   1.1570   0.048   0.058
CRI #2     CIE 1931    0.9101   0.7968   0.046   0.055
CRI #3     CIE 1931    0.8400   0.6444   0.069   0.084
CRI #4     CIE 1931    0.8633   0.4704   0.071   0.089
CRI #5     CIE 1931    1.0828   0.6290   0.106   0.118
CRI #6     CIE 1931    1.2393   0.9085   0.147   0.164
CRI #7     CIE 1931    1.2710   1.3515   0.115   0.142
CRI #8     CIE 1931    1.3076   1.8034   0.081   0.095
CRI #9     CIE 1931    0.4835   1.8925   0.148   0.180
CRI #10    CIE 1931    1.7039   1.6261   0.186   0.207
CRI #11    CIE 1931    0.5723   0.4062   0.071   0.089
CRI #12    CIE 1931    0.3738   0.2169   0.089   0.110
CRI #13    CIE 1931    1.9479   1.8174   0.082   0.089
CRI #14    CIE 1931    0.3356   0.2798   0.026   0.030
does not necessarily produce a metamer that crosses over the original three or
four times; as shown in Figs. 10.12–10.14, there is only one crossover.
The same process is applied to the fourteen CRI spectra, where the average
fundamental and metameric black of these fourteen spectra, shown in Figs. 10.5
and 10.6, respectively, are used. Results of the spectrum reconstruction are given
in Table 10.3; the accuracies are on the same order as for the reconstruction of the
illuminants.
10.3.2 Results of spectrum reconstruction from tristimulus values
Results of spectrum reconstruction from tristimulus values are given in
Tables 10.4 and 10.5 for the illuminants and the CRI spectra, respectively. The
accuracies are comparable to, and perhaps slightly better than, those of the
reconstruction from the average fundamental and metameric black. Moreover,
there is a unique feature in that the reconstructed spectrum is always a metamer of
the original. This is the basic property of metamer reconstruction from tristimulus
values.
Table 10.4 The spectrum reconstruction accuracies using tristimulus values from
7 illuminants.

Illuminant   Observer    ΔS      ΔS_std
D50          CIE 1931    0.050   0.088
D55          CIE 1931    0.048   0.076
D65          CIE 1931    0.048   0.065
D75          CIE 1931    0.053   0.064
A            CIE 1931    0.151   0.189
B            CIE 1931    0.040   0.049
C            CIE 1931    0.052   0.065
Table 10.5 The spectrum reconstruction accuracies using tristimulus values from 14 CRI
spectra.

Sample     Observer    ΔS      ΔS_std
CRI #1     CIE 1931    0.055   0.067
CRI #2     CIE 1931    0.028   0.047
CRI #3     CIE 1931    0.030   0.041
CRI #4     CIE 1931    0.036   0.042
CRI #5     CIE 1931    0.034   0.041
CRI #6     CIE 1931    0.041   0.058
CRI #7     CIE 1931    0.049   0.073
CRI #8     CIE 1931    0.074   0.081
CRI #9     CIE 1931    0.101   0.124
CRI #10    CIE 1931    0.037   0.044
CRI #11    CIE 1931    0.049   0.064
CRI #12    CIE 1931    0.058   0.072
CRI #13    CIE 1931    0.039   0.048
CRI #14    CIE 1931    0.013   0.018
10.4 Application
Kotera and colleagues applied metameric decomposition to the spectrum
reconstruction of a color scanner with the intent to estimate skin-tone
reproducibility. For an RGB scanner, they built an n × 3 matrix A_r containing
the rgb spectral sensitivities [r(λ), g(λ), b(λ)] of the scanner, similar to the
tristimulus matrix A [5]:

A_r = [ r(λ_1)  g(λ_1)  b(λ_1) ]   [ r_1  g_1  b_1 ]
      [ r(λ_2)  g(λ_2)  b(λ_2) ]   [ r_2  g_2  b_2 ]
      [ r(λ_3)  g(λ_3)  b(λ_3) ] = [ r_3  g_3  b_3 ]
      [   ...     ...     ...  ]   [ ...  ...  ... ]
      [ r(λ_n)  g(λ_n)  b(λ_n) ]   [ r_n  g_n  b_n ].    (10.23)
The first method that they proposed to obtain the fundamental from the input
RGB signals Υ_p = [R, G, B]^T was to perform an orthogonal projection directly
by substituting A_r for A in Eq. (10.2):

Φ*_r = A_r (A_r^T A_r)^{-1} Υ_p.    (10.24)

Other methods used the corrected RGB signals Υ_c = M_c Υ_p; the input RGB
signals were transformed by a 3 × 3 matrix M_c to minimize the mean-square
error between the RGB signals and the tristimulus values for a set of test color
chips. This matrix M_c can be viewed as the nonsingular transformation matrix M
of Eq. (3.22) that ensures invariance in R (see Chapter 3). In a method that they
called corrected pseudo-inverse projection, the RGB spectral sensitivity matrix
A_r was corrected by M_c: A_c^T = M_c A_r^T. After the correction, the
fundamental is given as

Φ*_r = A_c (A_c^T A_c)^{-1} Υ_p.    (10.25)

Again, Eq. (10.25) is a substitution of Eq. (10.2). In yet another method, called
colorimetric pseudo-inverse projection, they estimated the fundamental by using
Eq. (10.2) directly on the corrected RGB input signals Υ_c:

Φ*_r = A(A^T A)^{-1} Υ_c = A(A^T A)^{-1} M_c Υ_p.    (10.26)

They proceeded to propose several more methods by incorporating a smoothing
matrix because direct inversion from scanner RGB signals often resulted in an
irregular spectral shape (see Ref. 5 for more details). These methods were
compared for estimating skin tones. Results indicated that the fundamental
metamer was best recovered by Eq. (10.26), the colorimetric pseudo-inverse
projection.
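A sketch of the colorimetric pseudo-inverse projection of Eq. (10.26). The scanner sensitivities, CMFs, and test chips below are synthetic, and M_c is fit here by ordinary least squares, standing in for the minimum mean-square-error correction described above:

```python
# Colorimetric pseudo-inverse projection, Eq. (10.26), with synthetic data.
import numpy as np

n = 8
rng = np.random.default_rng(4)
A = rng.random((n, 3))                 # stand-in CMF matrix
Ar = rng.random((n, 3))                # stand-in scanner sensitivities, Eq. (10.23)
chips = rng.random((20, n))            # spectra of test color chips

rgb = chips @ Ar                       # scanner signals (20 x 3)
xyz = chips @ A                        # tristimulus values (20 x 3)
M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
Mc = M.T                               # 3 x 3 correction: xyz_i ~= Mc @ rgb_i

rgb_new = chips[0] @ Ar                # signals from a new measurement
fund = A @ np.linalg.inv(A.T @ A) @ (Mc @ rgb_new)   # Eq. (10.26)
```

By construction, A^T applied to the recovered fundamental returns the corrected signals M_c Υ_p exactly, so the estimate is a metamer of the corrected colorimetric measurement.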
10.5 Remarks
Using only two components, the reconstructed spectrum is not very close to the
original spectrum. It is not realistic to expect it to be so. Nonetheless, these
methods provide a means to reconstruct a spectrum at a minimal cost.
The first method of spectrum reconstruction requires information about the
input spectrum. On its own, this makes little sense: if one already has the
spectrum, why would one need to reconstruct it? The application lies in image
transmission, where the basis vectors are stored at both the source and the
destination. At the source, the input spectrum is converted to weights; then, only
the weights are transmitted. At the destination, the received weights are converted
back to a spectrum using the stored basis vectors. Depending on the sample rate
of the spectrum, the savings in transmission speed and bandwidth are significant.
For example, a 10-nm sampling of a visible spectrum gives 31 data points that can
be represented by only two weights; the compression ratio is about 15:1.
The second method is powerful. One can get the spectrum by knowing only
the tristimulus values. It can also be used in image transmission by sending only
tristimulus values.
References
1. J. B. Cohen and W. E. Kappauf, Metameric color stimuli, fundamental
metamers, and Wyszecki's metameric blacks, Am. J. Psychology 95,
pp. 537–564 (1982).
2. J. B. Cohen and W. E. Kappauf, Color mixture and fundamental metamers:
Theory, algebra, geometry, application, Am. J. Psychology 98, pp. 171–259
(1985).
3. J. B. Cohen, Color and color mixture: Scalar and vector fundamentals, Color
Res. Appl. 13, pp. 5–39 (1988).
4. H. J. Trussell, Applications of set theoretic methods to color systems, Color
Res. Appl. 16, pp. 31–41 (1991).
5. H. Kotera, H. Motomura, and T. Fumoto, Recovery of fundamental spectrum
from color signals, 4th IS&T/SID Color Imaging Conf., pp. 141–144 (1994).
Chapter 11
Spectrum Decomposition and
Reconstruction
It has been shown that illuminant and object spectra can be approximated to a
very high degree of accuracy by linearly combining a few principal components.
As early as 1964, Judd, MacAdam, and Wyszecki reported that various daylight
illuminations could be approximated by three components [1] (see Section 1.5),
and Cohen showed that the spectra of Munsell color chips could be accurately
represented by a few principal components [2]. Using principal component
analysis (PCA), Cohen reported that the first basis vector alone accounted for
92.72% of the cumulative variance, and merely three or four vectors accounted
for 99% of the variance or better. As a result, principal-component analysis
became an extremely powerful tool for the computational color technology
community. These findings led to the establishment of a finite-dimensional linear
model. Combined with the rich contents of linear algebra and matrix theory, the
model provides powerful applications in color science and technology. Many
color scientists and researchers have contributed to building and applying this
model in numerous color-image processing areas such as color transformation,
white-point conversion, metameric pairing, indexing of metamerism, object
spectrum reconstruction, illuminant spectrum reconstruction, color constancy,
chromatic adaptation, and targetless scanner characterization.
In this chapter, we present several methods for spectrum decomposition and
reconstruction, including orthogonal projection, smoothing inverse, Wiener
inverse, and principal component analysis. Principal components from several
publications are compiled and evaluated by using the set of fourteen spectra
employed for the color rendering index. Their similarities and differences are
discussed. New methods of spectrum reconstruction directly from tristimulus
values are developed and tested.
11.1 Spectrum Reconstruction
Many methods have been developed for spectrum reconstruction. They are
classified into two main groups: interpolation and estimation methods.
Interpolation methods include the linear, cubic, spline, discrete Fourier transform,
and discrete sine transform methods. We discussed a few interpolation methods in
Chapter 9. Estimation methods include polynomial regression, the Moore-Penrose
pseudo-inverse, the smoothing inverse, the Wiener inverse, principal component
analysis, and others.
As pointed out in Chapter 1, a major advantage of the definition of tristimulus
values is the approximation of the integration by a summation [3] such that linear
algebra can be applied to the vector-matrix representation of tristimulus values
given in Eq. (11.1) for spectrum decomposition and reconstruction:

Υ = k (A^T E) S = k Ω^T S,    (11.1)

Ω^T = A^T E,    (11.2)

where Υ = [X, Y, Z]^T represents the tristimulus values. The scalar k is a
normalizing constant. Matrix A is a sampled CMF with a size of n × 3, where the
column number of three is due to the trichromatic nature of human vision and the
row number n is the number of sample points. Matrix A has three independent
columns; therefore, it has a rank of three, spanning a 3D color-stimulus space.
The sampled SPD of the illuminant, E, is represented as a diagonal matrix with a
size of n × n. Matrix Ω can be viewed as the viewing conditions or as a weighted
CMF; it is an n × 3 matrix containing elements of the products of the illuminant
and color-matching functions. Parameter S is a vector of n elements obtained by
sampling the object spectrum at the same rate as the illuminant and the CMF.
Equation (11.1) is the forward transform from the object spectrum to the
tristimulus values defined by the CIE, where the dimensionality is reduced from n
to three (n is usually much larger than three), a process that loses a large amount
of information. Spectrum reconstruction utilizes the inverse transform from
tristimulus values to the object spectrum, where the dimensionality increases
from three to n, a process of creating information via prior knowledge.
Conceivably, one could take advantage of the matrix-vector representation and
linear algebra to obtain the object spectrum by using a direct pseudo-inverse
transform as given in Eq. (11.3):

S = k^{-1} (Ω Ω^T)^{-1} Ω Υ.    (11.3)

Matrix Ω has a size of n × 3 with a rank of three (three independent columns).
The product (Ω Ω^T) is an n × n symmetric matrix with only three independent
components (n ≫ 3). Therefore, the matrix (Ω Ω^T) is singular and cannot be
inverted. As a result, there is no unique solution for Eq. (11.3).
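The singularity is easy to demonstrate numerically (toy sizes, with random stand-ins for A and E rather than real CIE tables):

```python
# Why Eq. (11.3) fails: (Omega Omega^T) is n x n but only rank 3.
import numpy as np

n = 8
rng = np.random.default_rng(5)
A = rng.random((n, 3))                 # stand-in sampled CMFs
E = np.diag(rng.random(n))             # illuminant SPD as a diagonal matrix
Omega = E @ A                          # Omega^T = A^T E, Eq. (11.2)

G = Omega @ Omega.T                    # n x n Gram matrix
rank = np.linalg.matrix_rank(G)        # 3, not n: G is singular
```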
11.2 General Inverse Method

Linear algebra provides methods for inverse problems such as the one posed in Eq. (11.1) by minimizing a specific vector norm ‖S‖_N, where

‖S‖_N = (S^T N S)^(1/2),  (11.4)
Spectrum Decomposition and Reconstruction 205
and N is the norm matrix. The optimal solution S of the minimal norm is given as

S = k^−1 N^−1 Λ (Λ^T N^−1 Λ)^−1 Φ.  (11.5)

Depending on the content of the norm matrix N, one can have the orthogonal projection, the smoothing inverse, or the Wiener inverse.
11.2.1 Spectrum reconstruction via orthogonal projection

If the identity matrix I is used as the norm matrix N, Eq. (11.5) becomes

S = k^−1 Λ (Λ^T Λ)^−1 Φ = k^−1 Λ^+ Φ,  (11.6)

to give the solution of the minimum Euclidean norm, where Λ^+ = Λ(Λ^T Λ)^−1 is the Moore-Penrose pseudo-inverse of Λ^T. Unlike (Λ Λ^T) of Eq. (11.3), the product (Λ^T Λ) is a 3 × 3 symmetric matrix; with three independent columns in Λ, matrix (Λ^T Λ) is not singular and can be inverted. Note that matrix Λ^+ has the same structure as matrix M_f of Eq. (3.9), which is the matrix used in orthogonal projection for metameric decomposition, if we substitute A with Λ.

Results of applying Eq. (11.6) to reconstruct CRI spectra are given in Table 11.1, where Δstd is the standard deviation between the original and reconstructed spectra (see Section 10.2 for the definition of Δstd) and Δ is the distance between the original and reconstructed tristimulus values. There are several interesting results: First, the spectrum reconstruction via orthogonal projection does not give a good estimate of the original spectrum, but it gives excellent agreement with the tristimulus values and small or no color difference. Second, this method is not sensitive to the illuminants tested in this study (see Figs. 11.1–11.3): different illuminants give similar reconstructed spectra. Generally, a reconstructed spectrum has a twin-peak shape, as shown in Figs. 11.1–11.3, because it is the orthogonal projection of a spectrum onto tristimulus space, just like the metameric decomposition of illuminants given in Fig. 10.1. As a result, this method will not give close estimates of original spectra. Because this is basically a metameric decomposition, the resulting spectrum is a metamer of the original. Therefore, the tristimulus values are identical for the original and reconstructed spectra and there is no color error, except in some cases where out-of-range values are calculated by using Eq. (11.6) and are clipped, as shown in Fig. 11.3 for the reconstructed spectra in the range from 480 to 550 nm. The clipping of the out-of-range reflectance is due to the fact that a physically realizable spectrum cannot be negative. Moreover, this method is sensitive to noise.⁴ With these problems, the orthogonal projection is not well suited for spectrum reconstruction.
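The projection of Eq. (11.6) takes only a few lines of linear algebra. A minimal sketch, assuming a random stand-in for the weighted CMF Λ and k = 1 (the function and variable names are mine, not the book's); the assertions check the metamer property discussed above:

```python
import numpy as np

# Sketch of Eq. (11.6): spectrum reconstruction by orthogonal projection,
# S = k^-1 * Lambda (Lambda^T Lambda)^-1 * Phi. The "weighted CMF" below is
# a random stand-in, not real illuminant/CMF data.
def orthogonal_projection(phi, lam, k=1.0):
    pseudo = lam @ np.linalg.inv(lam.T @ lam)   # Lambda^+, pseudo-inverse of Lambda^T
    return (1.0 / k) * pseudo @ phi

n = 31                                          # e.g., 400-700 nm at 10-nm steps
rng = np.random.default_rng(0)
lam = rng.random((n, 3))                        # stand-in weighted CMF, rank 3
s_true = rng.random(n)                          # stand-in reflectance spectrum
phi = lam.T @ s_true                            # forward transform, Eq. (11.1), k = 1
s_hat = orthogonal_projection(phi, lam)

assert np.allclose(lam.T @ s_hat, phi)          # a metamer: tristimulus values agree
assert not np.allclose(s_hat, s_true)           # but the spectra themselves differ
```

The first assertion is exact by construction; the second illustrates why the method gives poor spectral estimates even though the color error is zero.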
11.2.2 Spectrum reconstruction via smoothing inverse

The smoothing inverse is closely related to orthogonal projection. It minimizes the average squared second differences, which is a measure of the spectrum curvature.
206 Computational Color Technology
Table 11.1 Results of reconstructed CRI spectra under four illuminants using orthogonal projection.

            Illuminant A           D50                    D65                    D75
CRI spectra Δstd   Δ     ΔEab     Δstd   Δ     ΔEab     Δstd   Δ     ΔEab     Δstd   Δ     ΔEab
 1          0.217  0     0        0.226  0     0        0.229  0     0        0.231  0     0
 2          0.152  0     0        0.157  0     0        0.160  0     0        0.161  0     0
 3          0.134  0     0        0.132  0     0        0.133  0     0        0.133  0     0
 4          0.112  0     0        0.106  0     0        0.104  0     0        0.104  0     0
 5          0.151  0     0        0.151  0     0        0.150  0     0        0.149  0     0
 6          0.202  0     0        0.206  0     0        0.204  0     0        0.204  0     0
 7          0.281  0     0        0.286  0     0        0.286  0     0        0.286  0     0
 8          0.345  0     0        0.356  0     0        0.359  0     0        0.360  0     0
 9          0.364  3.19  14.71    0.380  3.19  17.82    0.385  3.24  18.79    0.387  3.24  19.00
10          0.322  0     0        0.330  0     0        0.335  0     0        0.337  0     0
11          0.103  0.01  0.04     0.096  0     0.01     0.093  0     0.01     0.092  0     0
12          0.057  0.06  0.64     0.063  0.38  2.52     0.066  0.54  3.04     0.067  0.61  3.10
13          0.343  0     0        0.357  0     0        0.363  0     0        0.365  0     0
14          0.059  0     0        0.058  0     0        0.058  0     0        0.058  0     0
Figure 11.1 Reconstructed CRI #1 spectra using orthogonal projection under four illuminants.

Figure 11.2 Reconstructed CRI #3 spectra using orthogonal projection under four illuminants.

Figure 11.3 Reconstructed CRI #9 spectra using orthogonal projection under four illuminants.
This criterion achieves smooth reconstructed spectra and leads to the following equation and norm matrix:⁴,⁵

S = k^−1 N_s^−1 Λ (Λ^T N_s^−1 Λ)^−1 Φ,  (11.7)

       [  1  −2   1   0   0   ⋯                     0 ]
       [ −2   5  −4   1   0   ⋯                     0 ]
       [  1  −4   6  −4   1   0   ⋯                 0 ]
N_s =  [  0   1  −4   6  −4   1   0   ⋯             0 ]   (11.8)
       [  ⋮              ⋱                          ⋮ ]
       [  0   ⋯   0   1  −4   6  −4   1   0           ]
       [  0   ⋯       0   1  −4   6  −4   1           ]
       [  0   ⋯           0   1  −4   5  −2           ]
       [  0   ⋯               0   1  −2   1           ]

Unfortunately, the norm matrix N_s is singular and cannot be inverted. Therefore, it must be modified by adding values to the diagonal elements; one way of modifying it is given in Eq. (11.9),

Ñ_s = N_s + εI,  (11.9)

where I is the identity matrix and ε is a small positive constant; Herzog and colleagues used a value of 10^−10 for ε, which works well.⁵
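The smoothing inverse of Eqs. (11.7)–(11.9) can be sketched as follows. The weighted CMF is again a random stand-in, and ε = 10^−6 is an assumption made here in place of Herzog's 10^−10 purely for numerical headroom in double precision:

```python
import numpy as np

# Sketch of Eqs. (11.7)-(11.9): smoothing inverse with the second-difference
# norm matrix N_s = D^T D, regularized by adding eps to the diagonal.
# Herzog et al. used eps = 1e-10; eps = 1e-6 is used here only to keep the
# inversion comfortably conditioned. The weighted CMF is a random stand-in.
def smoothing_norm(n, eps=1e-6):
    d = np.zeros((n - 2, n))
    for i in range(n - 2):                 # rows of the second-difference operator
        d[i, i:i + 3] = [1.0, -2.0, 1.0]
    return d.T @ d + eps * np.eye(n)       # Eq. (11.9)

def smoothing_inverse(phi, lam, n_s, k=1.0):
    ninv = np.linalg.inv(n_s)
    return (1.0 / k) * ninv @ lam @ np.linalg.inv(lam.T @ ninv @ lam) @ phi  # Eq. (11.7)

n = 31
n_s = smoothing_norm(n)
# Interior rows show the pentadiagonal pattern 1 -4 6 -4 1 of Eq. (11.8).
assert np.allclose(n_s[5, 3:8], [1.0, -4.0, 6.0, -4.0, 1.0], atol=1e-4)

rng = np.random.default_rng(1)
lam = rng.random((n, 3))                   # stand-in weighted CMF
phi = lam.T @ rng.random(n)                # tristimulus values of a random spectrum
s_hat = smoothing_inverse(phi, lam, n_s)
assert np.allclose(lam.T @ s_hat, phi)     # reconstruction matches the tristimulus
```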
11.2.3 Spectrum reconstruction via Wiener inverse

Spectrum reconstruction is possible using the conventional Wiener inverse, where there exists a matrix H that provides the object spectrum belonging to a given tristimulus vector Φ:

S = k^−1 H Φ,  (11.10)

and

H = K Λ (Λ^T K Λ)^−1,  (11.11)

where K is a correlation (or covariance) matrix of S, which is related to a priori knowledge about the object spectrum and is modeled as the first-order Markov covariance matrix of the form⁴,⁶

      [ 1        ρ        ρ²       ⋯       ρ^(n−1) ]
      [ ρ        1        ρ        ⋯       ρ^(n−2) ]
K =   [ ρ²       ρ        1        ⋯       ρ^(n−3) ]   (11.12)
      [ ⋮                  ⋱                ⋮      ]
      [ ρ^(n−1)  ρ^(n−2)  ⋯   ρ²   ρ       1       ]

Parameter ρ is the adjacent-element correlation factor that is within the range of [0, 1] and can be set by the experimenter; for example, Uchiyama and coworkers used ρ = 0.999 in their study of a unified multispectral representation.⁶ The covariance matrix K has a size of n × n, matrix Λ is n × 3, and its transpose is 3 × n; this gives the product (Λ^T K Λ) and its inverse a size of 3 × 3. The size of matrix H is n × 3 because it is the product of three matrices with sizes (n × n), (n × 3), and (3 × 3). Once the covariance matrix K is selected, matrix H can be calculated because Λ is known once the CMF and illuminant are selected. The object spectrum is then estimated by substituting H into Eq. (11.10).
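A sketch of the Wiener inverse of Eqs. (11.10)–(11.12), under the same kind of made-up weighted CMF as before and the Markov prior with ρ = 0.999 (function names are illustrative):

```python
import numpy as np

# Sketch of Eqs. (11.10)-(11.12): Wiener inverse with a first-order Markov
# prior K[i, j] = rho**|i - j|; Uchiyama and coworkers used rho = 0.999.
# The weighted CMF below is a random stand-in, not real colorimetric data.
def markov_covariance(n, rho=0.999):
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])   # Eq. (11.12)

def wiener_inverse(phi, lam, cov, k=1.0):
    h = cov @ lam @ np.linalg.inv(lam.T @ cov @ lam)    # Eq. (11.11), n x 3
    return (1.0 / k) * h @ phi                          # Eq. (11.10)

n = 31
cov = markov_covariance(n)
assert np.allclose(np.diag(cov), 1.0)                   # unit diagonal, Eq. (11.12)

rng = np.random.default_rng(2)
lam = rng.random((n, 3))                                # stand-in weighted CMF
s_true = rng.random(n)
phi = lam.T @ s_true                                    # forward transform, k = 1
s_hat = wiener_inverse(phi, lam, cov)
assert np.allclose(lam.T @ s_hat, phi)                  # tristimulus match is exact
```

As with the other inverses, the tristimulus match is exact; the smooth prior is what pulls the estimated spectrum toward plausible shapes.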
Results of using the Wiener inverse to reconstruct CRI spectra are given in Table 11.2. With ρ = 0.999, this method gives much better agreement with the original as compared to the orthogonal projection; the standard deviation is about an order of magnitude smaller than the corresponding results from the orthogonal projection. Figures 11.4–11.6 give the plots of reconstructed CRI spectra using the Wiener inverse under different illuminants; they fit the shapes of the originals much better than the orthogonal projection. Like the orthogonal projection, they are less sensitive to the illumination and give identical tristimulus values, unless there exist out-of-range values that must be clipped (see CRI #9 in Table 11.2).
Table 11.2 Results of reconstructed CRI spectra under four illuminants using the Wiener inverse.

            Illuminant A           D50                    D65                    D75
CRI spectra Δstd   Δ     ΔEab     Δstd   Δ     ΔEab     Δstd   Δ     ΔEab     Δstd   Δ     ΔEab
 1          0.015  0     0        0.012  0     0        0.012  0     0        0.012  0     0
 2          0.010  0     0        0.011  0     0        0.011  0     0        0.011  0     0
 3          0.051  0     0        0.047  0     0        0.047  0     0        0.047  0     0
 4          0.043  0     0        0.034  0     0        0.030  0     0        0.028  0     0
 5          0.031  0     0        0.026  0     0        0.024  0     0        0.023  0     0
 6          0.072  0     0        0.074  0     0        0.075  0     0        0.076  0     0
 7          0.034  0     0        0.042  0     0        0.046  0     0        0.047  0     0
 8          0.094  0     0        0.113  0     0        0.120  0     0        0.123  0     0
 9          0.143  1.37  6.63     0.180  1.31  7.82     0.193  1.28  7.93     0.199  1.25  7.89
10          0.034  0     0        0.035  0     0        0.037  0     0        0.037  0     0
11          0.066  0     0        0.063  0     0        0.062  0     0        0.061  0     0
12          0.096  0     0        0.096  0     0        0.094  0.03  0.22     0.093  0.10  0.54
13          0.047  0     0        0.046  0     0        0.045  0     0        0.045  0     0
14          0.027  0     0        0.025  0     0        0.025  0     0        0.024  0     0
Figure 11.4 Reconstructed CRI #1 spectra using Wiener inverse under four illuminants.
Figure 11.5 Reconstructed CRI #3 spectra using Wiener inverse under four illuminants.
Figure 11.6 Reconstructed CRI #9 spectra using Wiener inverse under four illuminants.
11.3 Spectrum Decomposition and Reconstruction Methods

The methods that can perform both spectrum decomposition and reconstruction are orthogonal projection and PCA. Cohen and Kappauf used an orthogonal projector, projecting onto 3D color-stimulus space, to decompose stimuli into a fundamental metamer and a metameric black.⁷⁻⁹ Kotera and colleagues applied orthogonal projection together with the pseudo-inversion of matrices to reconstruct input spectra from scanner RGB inputs and filter spectral sensitivity functions.¹⁰ Kang proposed a method of converting tristimulus values to spectra using the fundamental and metameric black, and a method of spectrum reconstruction by using two basis vectors derived from the averages of fundamentals and metameric blacks (see Chapter 10). The orthogonal projection gives poor estimates for spectrum reconstruction. A more accurate method is PCA.
11.4 Principal Component Analysis

In 1964, Cohen used linear component analysis by the centroid method to decompose 150 Munsell spectra and obtained a set of four basis vectors (or centroid components, as Cohen called them) that accurately reconstructed the spectra of 433 Munsell color chips.² This method is the first application of PCA to color science. Since Cohen's finding, PCA has been used extensively in color computations and analyses. In this section, we review this powerful method of principal component analysis and its applications to object-spectrum decomposition and reconstruction. There are two levels of object-spectrum reconstruction from a set of basis vectors: one is from the input object spectra and the other is from the tristimulus values. The formulations for these two approaches are given, and results of using these formulations are compared by using available basis sets. From these results, we assess the quality of the spectrum reconstruction.
According to Jolliffe, the earliest descriptions of principal component analysis (PCA) appear to have been given by Pearson in 1901 and Hotelling in 1933 for discrete random sequences.¹¹ The central theme of PCA is to reduce the dimensionality of a data set while retaining as much as possible of the variation present in the data set. This reduction of a large number of interrelated variables is accomplished by transforming to a new set of variables that are uncorrelated, such that the first few principal components retain most of the variation. The Hotelling transform uses Lagrange multipliers and ends with an eigenfunction problem. Karhunen, in 1947, and Loève, in 1948, published papers on what is now called the KL transform. Originally, the KL transform was presented as a series expansion for a continuous random process; this is the continuous version of the discrete random sequences of the Hotelling transform. Therefore, principal component analysis is also known as the discrete KL transform or Hotelling transform; it is based on statistical properties of an image or signal.¹¹⁻¹³ In the case of object reflection under an illumination, multiple spectra S_i (i = 1, 2, 3, . . . , m) are taken and the mean S̄ is computed; each spectrum is sampled to an n × 1 vector S_i, where index j = 1, 2, 3, . . . , n.
S_i = [s_i(λ_1) s_i(λ_2) s_i(λ_3) ⋯ s_i(λ_j) ⋯ s_i(λ_n)]^T
    = [s_i1 s_i2 s_i3 ⋯ s_ij ⋯ s_in]^T,  i = 1, 2, . . . , m,  (11.14)

S̄ = [s̄(λ_1) s̄(λ_2) s̄(λ_3) ⋯ s̄(λ_j) ⋯ s̄(λ_n)]^T = [s̄_1 s̄_2 s̄_3 ⋯ s̄_j ⋯ s̄_n]^T,  (11.15)

and

s̄(λ_j) = (1/m) Σ_{i=1}^{m} s_i(λ_j),  (11.16)

where m is the number of input spectra. The basis vectors of the KL transform are given by the orthonormalized eigenvectors of its covariance matrix K.
Let us set

Δs_i(λ_j) = s_i(λ_j) − s̄(λ_j) = s_ij − s̄_j.  (11.17)

Then

ΔS_i = S_i − S̄,  (11.18)

and we have

K = (1/m) Σ_{i=1}^{m} (ΔS_i)(ΔS_i)^T.  (11.19)
The result of Eq. (11.19) is a symmetric matrix. Now, let b_j and μ_j, j = 1, 2, 3, . . . , n, be the eigenvectors and corresponding eigenvalues of K, where the eigenvalues are arranged in decreasing order,

μ_1 ≥ μ_2 ≥ μ_3 ≥ ⋯ ≥ μ_n;

then the transfer matrix B is given as an n × n matrix whose columns are the eigenvectors of K.
      [ b_11  b_21  b_31  ⋯  b_i1  ⋯  b_n1 ]
      [ b_12  b_22  b_32  ⋯  b_i2  ⋯  b_n2 ]
B =   [ b_13  b_23  b_33  ⋯  b_i3  ⋯  b_n3 ]   (11.20)
      [  ⋮                              ⋮  ]
      [ b_1n  b_2n  b_3n  ⋯  b_in  ⋯  b_nn ]
where b_ij is the jth component of the ith eigenvector. Each eigenvector b_i is a principal component of the spectral-reflectance covariance matrix K. Matrix B is a unitary matrix, which reduces matrix K to its diagonal form D:

B^T K B = D.  (11.21)

The KL (or Hotelling) transform is the multiplication of a centralized spectrum vector, S − S̄, by the transfer matrix B:

U = B(S − S̄).  (11.22)
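The eigendecomposition route of Eqs. (11.15)–(11.22) can be sketched as follows. The input "spectra" are smooth synthetic Gaussian-bump mixtures standing in for measured Munsell data, the 10-nm sampling grid and all names are my own choices, and the percent variation of Eq. (11.23) is computed for the first three components:

```python
import numpy as np

# Sketch of Eqs. (11.15)-(11.22): derive principal components of a spectral
# data set from the eigenvectors of its covariance matrix. The "spectra" are
# synthetic Gaussian-bump mixtures, not measured Munsell chips.
def pca_basis(spectra):
    """spectra: m x n array, one sampled spectrum per row."""
    mean = spectra.mean(axis=0)                    # Eq. (11.16)
    centered = spectra - mean                      # Eq. (11.18)
    cov = centered.T @ centered / len(spectra)     # Eq. (11.19)
    evals, evecs = np.linalg.eigh(cov)             # symmetric eigenproblem
    order = np.argsort(evals)[::-1]                # decreasing eigenvalues
    return mean, evals[order], evecs[:, order]     # columns are basis vectors

n, m = 31, 200
wl = np.linspace(400.0, 700.0, n)                  # 10-nm sampling grid
rng = np.random.default_rng(3)
bumps = np.exp(-(((wl[:, None] - np.array([450.0, 550.0, 650.0])) / 40.0) ** 2))
spectra = rng.random((m, 3)) @ bumps.T + 0.01 * rng.standard_normal((m, n))

mean, evals, basis = pca_basis(spectra)
e_v = 100.0 * evals[:3].sum() / evals.sum()        # percent variation, Eq. (11.23)
assert e_v > 95.0                                  # three components dominate
assert np.allclose(basis.T @ basis, np.eye(n))     # orthonormal eigenvector basis
```

Because the synthetic data really are three-dimensional plus noise, the first three components capture nearly all of the variance, mirroring the behavior reported for measured spectra below.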
11.5 Basis Vectors

Using principal component analysis, Cohen reported a set of four basis vectors derived from 150 randomly selected Munsell color chips. The basis vectors plotted in Fig. 11.7 are applied to 433 Munsell chips. The first basis vector accounts for 92.72% of the cumulative variance, the second vector 97.25%, the third vector 99.18%, and the fourth vector 99.68%.² He showed data for two reconstructed spectra; the match between original and reconstruction is excellent, as shown in Fig. 11.8. He indicated that other Munsell chips used in his study gave equally accurate reconstructions. Since Cohen's publication, many basis sets of principal components have been reported. They are given here in the order of their appearance in the literature.
Figure 11.7 Cohen's basis vectors. (Reprinted with permission of John Wiley & Sons, Inc.)²

Figure 11.8 Reconstructed spectra from Cohen's basis vectors. (Reprinted with permission of John Wiley & Sons, Inc.)²
Parkkinen and coworkers measured 1257 Munsell chips and used them to derive a set of ten basis vectors via the KL transform.¹⁴ The first four principal components are given in Fig. 11.9. They determined the goodness of fit by computing the differences between the reconstructed spectrum and the original, wavelength by wavelength. They then found the error band, which was bounded by the maximum positive and the maximum negative differences on the reflectance scale. The data gave a rather uniform variation across the whole visible spectrum, indicating that the reconstruction error was almost independent of the wavelength. Therefore, the final measure of the goodness of fit is the average reconstruction error over the wavelength region of interest. Using four basis vectors, all 1257 reconstructed spectra were below the error limit of 0.10, but only 22.8% were below 0.01. As expected, the reconstruction error decreases with an increasing number of basis vectors: for six vectors, 44.8% of the spectra were below 0.01, and for eight vectors, 72.0% were below 0.01.
Figure 11.9 Parkkinen and coworkers' basis vectors. (Reprinted with permission of John Wiley & Sons, Inc.)¹⁴

Drew and Funt reported a set of five basis functions derived from 1710 synthetic natural color signals. The color signals were obtained as the products of five standard daylights (correlated color temperatures ranging from 4,800 K to 10,000 K) with 342 reflectance spectra from the Krinov catalogue.¹⁵ Vrhel and Trussell reported four sets of basis vectors for four major printing techniques: lithographic printing, the electrophotographic (or xerographic) process, ink-jet, and thermal dye diffusion (or D2T2). The variation of these basis vectors was computed by using the eigenvalues from matrix B, where the percent variation was defined as¹⁶

e_v = 100 (Σ_{j=1}^{3} μ_j) / (Σ_{j=1}^{n} μ_j).  (11.23)
Lithographic printing used 216 spectra from R. R. Donnelley for deriving principal components; the first three principal components are plotted in Fig. 11.10, which account for 99.31% of the variation. The electrophotographic copier used 343 spectra from Cannon; the first three principal components are plotted in Fig. 11.11, which account for 98.65% of the variation. The ink-jet printer used 216 spectra from Hewlett-Packard; the first three principal components are plotted in Fig. 11.12, which account for 97.48% of the variation. The thermal dye diffusion printer used 512 spectra from Kodak; the first three principal components are plotted in Fig. 11.13, which account for 97.61% of the variation.¹⁶

Eem and colleagues reported a set of eight basis vectors that were obtained by measuring 1565 Munsell color chips (glossy collection); the first four vectors are plotted in Fig. 11.14, which account for 99.66% of the variation (the variation is slightly lower if all eigenvalues are accounted for).¹⁷

Figure 11.10 Lithographic basis vectors from Vrhel and Trussell. (Reprinted with permission of John Wiley & Sons, Inc.)¹⁶
Figure 11.11 Electrophotographic basis vectors from Vrhel and Trussell. (Reprinted with permission of IS&T.)¹⁶

Figure 11.12 Ink-jet basis vectors from Vrhel and Trussell. (Reprinted with permission of IS&T.)¹⁶

Figure 11.13 Thermal dye diffusion basis vectors from Vrhel and Trussell. (Reprinted with permission of IS&T.)¹⁶

Figure 11.14 Eem and colleagues' basis vectors. (Reprinted with permission of IS&T.)¹⁷
Figure 11.15 Lee and coworkers' basis vectors.¹⁹

Garcia-Beltran and colleagues reported a set of basis functions derived from 5,574 samples using acrylic paints on paper.¹⁸ Lee and colleagues derived a set of basis functions using 1269 Munsell spectra provided by the Information Technology Dept., Lappeenranta University of Technology.¹⁹,²⁰ The first three basis vectors are given in Fig. 11.15.
These basis sets were independently derived using different sets of color patches, but the general shapes of the normalized vectors of the first three components are quite close, as shown in Figs. 11.16–11.18, respectively. The normalization is performed by dividing each element in the basis vector by the sum of all the elements. For comparison purposes, the sign of some vectors is changed by negating the vector; this negation has no effect on the spectrum reconstruction because the weight changes accordingly. Considering the vast differences in the materials and measurement techniques of the input spectra, these similarities indicate the underlying general characteristics of the first three basis vectors. The first vector is the mean, corresponding to the gray component, or brightness. The second vector has positive values in the blue and green regions (or cyan), but negative values in the red region. The third vector is positive in green, but negative in blue and red.
Figure 11.16 Comparisons of the first vectors normalized from the original data.

Figure 11.17 Comparisons of the second vectors normalized from the original data.

Figure 11.18 Comparisons of the third vectors normalized from the original data.

11.6 Spectrum Reconstruction from the Input Spectrum

Spectrum reconstruction is the inverse of spectrum decomposition, where a spectrum can be represented by a linear combination of a few principal components (also known as basis or characteristic vectors) b_j(λ), j = 1, 2, 3, . . . , m, with m being the highest number of principal components used in the reconstruction.
[ s(λ_1) ]        [ b_1(λ_1) ]        [ b_2(λ_1) ]        [ b_3(λ_1) ]             [ b_m(λ_1) ]
[ s(λ_2) ]        [ b_1(λ_2) ]        [ b_2(λ_2) ]        [ b_3(λ_2) ]             [ b_m(λ_2) ]
[ s(λ_3) ] = w_1 [ b_1(λ_3) ] + w_2 [ b_2(λ_3) ] + w_3 [ b_3(λ_3) ] + ⋯ + w_m [ b_m(λ_3) ]   (11.24a)
[    ⋮   ]        [    ⋮     ]        [    ⋮     ]        [    ⋮     ]             [    ⋮     ]
[ s(λ_n) ]        [ b_1(λ_n) ]        [ b_2(λ_n) ]        [ b_3(λ_n) ]             [ b_m(λ_n) ]

or

[ s(λ_1) ]   [ b_1(λ_1)  b_2(λ_1)  b_3(λ_1)  ⋯  b_m(λ_1) ] [ w_1 ]
[ s(λ_2) ]   [ b_1(λ_2)  b_2(λ_2)  b_3(λ_2)  ⋯  b_m(λ_2) ] [ w_2 ]
[ s(λ_3) ] = [ b_1(λ_3)  b_2(λ_3)  b_3(λ_3)  ⋯  b_m(λ_3) ] [ w_3 ],   (11.24b)
[    ⋮   ]   [    ⋮          ⋮          ⋮           ⋮     ] [  ⋮  ]
[ s(λ_n) ]   [ b_1(λ_n)  b_2(λ_n)  b_3(λ_n)  ⋯  b_m(λ_n) ] [ w_m ]

where w_i is the weight or coefficient of the ith principal component. For simplicity, we eliminate the wavelength by setting s_i = s(λ_i) and b_ij = b_i(λ_j):

[ s_1 ]   [ b_11  b_21  b_31  ⋯  b_m1 ] [ w_1 ]
[ s_2 ]   [ b_12  b_22  b_32  ⋯  b_m2 ] [ w_2 ]
[ s_3 ] = [ b_13  b_23  b_33  ⋯  b_m3 ] [ w_3 ].   (11.24c)
[  ⋮  ]   [  ⋮     ⋮     ⋮         ⋮  ] [  ⋮  ]
[ s_n ]   [ b_1n  b_2n  b_3n  ⋯  b_mn ] [ w_m ]

The explicit expression in Eq. (11.24c) can be represented compactly in the matrix notation

S = BW,  (11.24d)

where B is an n × m matrix and W is a vector of m elements. If matrix B of the principal components is known, we can derive the weights W for a given object spectrum by using the pseudo-inverse transform

W = (B^T B)^−1 B^T S.  (11.25)

Matrix B^T has a size of m × n; therefore, (B^T B) has a size of m × m. Because only a few principal components are needed in spectrum reconstruction, m is much smaller than n, which ensures a nonsingular matrix (B^T B) that can be inverted. The reconstructed spectrum S_c is given as

S_c = BW.  (11.26)
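A sketch of Eqs. (11.25)–(11.26), using an orthonormal random basis in place of a published set such as Cohen's; for a spectrum lying in the span of B, the fit is exact:

```python
import numpy as np

# Sketch of Eqs. (11.25)-(11.26): fit weights W to a spectrum by the
# pseudo-inverse of the basis matrix B, then rebuild the spectrum. The
# orthonormal random basis stands in for a published basis set.
def fit_weights(b, s):
    return np.linalg.inv(b.T @ b) @ b.T @ s        # Eq. (11.25)

n, m = 31, 4
rng = np.random.default_rng(4)
b = np.linalg.qr(rng.standard_normal((n, m)))[0]   # n x m basis, rank m
w_true = np.array([1.0, -0.5, 0.25, 0.1])
s = b @ w_true                                     # a spectrum lying in span(B)

w = fit_weights(b, s)
s_c = b @ w                                        # Eq. (11.26)
assert np.allclose(w, w_true)
assert np.allclose(s_c, s)                         # exact for in-span spectra
```

For a measured spectrum that does not lie in the span of B, the same formula gives the least-squares best fit, and the residual is the spectral error discussed in Section 11.8.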
11.7 Spectrum Reconstruction from Tristimulus Values

This method has been enhanced by many researchers, such as Horn, Wandell, and Trussell, among others, and applied to many areas of color science and technology. By substituting Eq. (11.24d) into Eq. (11.1), we have

Φ = k Λ^T B W.  (11.27)

Because Λ^T has only three rows and Φ has only three components, we have a constraint of m ≤ 3. As in Eq. (11.3), there is no unique solution for m > 3. If we set m = 3, the explicit expression of Eq. (11.27) becomes
[ X ]       [ E_1x_1  E_2x_2  E_3x_3  ⋯  E_nx_n ]  [ b_11  b_21  b_31 ]  [ w_1 ]
[ Y ]  = k  [ E_1y_1  E_2y_2  E_3y_3  ⋯  E_ny_n ]  [ b_12  b_22  b_32 ]  [ w_2 ],   (11.28)
[ Z ]       [ E_1z_1  E_2z_2  E_3z_3  ⋯  E_nz_n ]  [ b_13  b_23  b_33 ]  [ w_3 ]
                                                    [   ⋮              ]
                                                    [ b_1n  b_2n  b_3n ]

where E_j x_j, E_j y_j, and E_j z_j are the products of the sampled illuminant and color-matching functions, i.e., the elements of Λ^T. In this case, the product (Λ^T B) is a 3 × 3 matrix and the weights W can be determined by inverting matrix (Λ^T B):

W = k^−1 (Λ^T B)^−1 Φ.  (11.29)

The reconstructed spectrum S′ of the input tristimulus values is

S′ = BW.  (11.30)
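A sketch of Eqs. (11.29)–(11.30) with m = 3; as in Section 11.2, the CMF, illuminant, and basis are random stand-ins rather than published data, and the assertions check that the result is a metamer rather than a spectral match:

```python
import numpy as np

# Sketch of Eqs. (11.29)-(11.30): with exactly m = 3 basis vectors, the 3 x 3
# matrix (Lambda^T B) is invertible and the spectrum follows from XYZ alone.
# CMF, illuminant, and basis below are random stand-ins, not published data.
def spectrum_from_xyz(phi, lam, b, k=1.0):
    w = (1.0 / k) * np.linalg.inv(lam.T @ b) @ phi   # Eq. (11.29)
    return b @ w                                     # Eq. (11.30)

n = 31
rng = np.random.default_rng(5)
lam = rng.random((n, 3))                             # stand-in weighted CMF
b = np.linalg.qr(rng.standard_normal((n, 3)))[0]     # stand-in 3-vector basis
s_true = rng.random(n)
phi = lam.T @ s_true                                 # Eq. (11.1) with k = 1

s_hat = spectrum_from_xyz(phi, lam, b)
assert np.allclose(lam.T @ s_hat, phi)               # metamer: identical tristimulus
assert not np.allclose(s_hat, s_true)                # but not the original spectrum
```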
11.8 Error Metrics

The spectral error Δs is computed by

Δs = Σ |S_c − S| / n,  (11.31)

where the summation carries over all n sample points; and the standard deviation s_std is computed by using Eq. (11.32):

s_std = [Σ |S_c − S|² / n]^(1/2).  (11.32)

Furthermore, with the reconstructed spectrum, the tristimulus and CIELAB values can be computed for comparison with the input spectrum.
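The two error metrics can be computed directly; in this sketch the four-sample vectors are made-up toy data, and the function name is illustrative:

```python
import numpy as np

# Sketch of Eqs. (11.31)-(11.32): mean absolute spectral error and its
# root-mean-square counterpart. The four-sample vectors are toy data.
def spectral_errors(s_c, s):
    diff = np.abs(s_c - s)
    return diff.mean(), np.sqrt((diff ** 2).mean())   # (delta_s, s_std)

s = np.array([0.2, 0.4, 0.6, 0.8])      # "measured" spectrum
s_c = np.array([0.2, 0.5, 0.6, 0.7])    # "reconstructed" spectrum
d, std = spectral_errors(s_c, s)
assert abs(d - 0.05) < 1e-12            # mean |difference| over n = 4 points
assert abs(std - np.sqrt(0.005)) < 1e-12
```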
11.9 Results and Discussions

Seven sets of principal components are used to reconstruct 14 CRI spectra. The number of basis vectors is varied to observe its effect on the reconstruction accuracy. We determine spectral errors and color differences of reconstructed spectra with respect to measured values.
11.9.1 Spectrum reconstruction from the object spectrum

Results of the individual spectrum reconstructions using various basis sets are given in Appendix 6. The quality of the reproduced spectra is indicated by the absolute-difference and standard-deviation measures given in the last two columns of the table there. The average color difference ΔE_avg, the standard deviation of the color difference ΔE_std, the spectral difference Δs, and the standard deviation of the spectral difference Δs_std of all fourteen spectra are summarized in Table 11.3. Figures 11.19–11.21 show selected plots of reconstructed CRI spectra from several basis sets. Generally, the spectrum difference becomes smaller as the number of basis vectors increases. However, there are oscillations at the low and middle wavelengths of the computed spectrum using eight basis vectors, as shown in Fig. 11.19, indicating that a higher number of vectors may increase the noise. Another problem of PCA is that we may get negative values for some reconstructed spectra, as shown in Fig. 11.21 between 470 and 550 nm. Usually, if there are very low reflectances somewhere in the spectrum, the chances of getting negative values are high. In these cases, clipping is performed because physically realizable spectra cannot be negative. The clipping introduces additional errors to the spectrum and tristimulus values.

Moreover, the metrics of the spectrum difference do not correlate well with the color-difference measure ΔEab. In fact, the spectrum-difference measures are a rather poor indication of the quality of the spectrum reconstruction. Personally, I believe that the spectrum difference should be weighted by visual sensitivity, such as the CMF, to give a better indication. Second, at a given number of basis vectors, the average performance of various sets of basis vectors is rather close (see Fig. 11.22 and Table 11.3), but the fit to an individual spectrum may be quite different (see Figs. 11.19–11.21), particularly in the case of using a small number of vectors. The individual difference is primarily due to the shape of the input spectrum,
Table 11.3 Average spectral and color errors of fourteen CRI spectra.

Basis vector   # of basis vectors   ΔE_avg   ΔE_std   Δs      Δs_std
Cohen          2                    29.17    17.54    0.063   0.034
Eem            2                    29.36    18.04    0.063   0.036
Cohen          3                     7.00     5.36    0.029   0.018
Eem            3                     6.93     5.64    0.030   0.017
Copier         3                     6.01     5.25    0.034   0.021
Thermal        3                     6.76     3.82    0.031   0.013
Ink-jet        3                     6.15     4.00    0.052   0.028
Lithographic   3                     5.24     4.32    0.031   0.020
Cohen          4                     4.30     5.66    0.022   0.015
Eem            4                     2.85     3.96    0.021   0.012
Eem            5                     2.86     3.79    0.018   0.011
Eem            8                     0.79     0.83    0.010   0.006
Figure 11.19 Comparisons between reconstructed and measured spectra of CRI #1.

Figure 11.20 Comparisons between reconstructed and measured spectra of CRI #3.

Figure 11.21 Comparisons between reconstructed and measured spectra of CRI #9.

Figure 11.22 Comparisons of color differences in CIELAB of various basis sets for spectrum reconstruction from input spectra.
where a different basis set has different characteristics depending on the initial set of the training spectra, resulting in a preference toward certain spectrum shapes. Generally, any set of basis vectors can be used to give a satisfactory match, but it would be nice to have a metric that can provide information on the goodness of fit.
11.9.2 Spectrum reconstruction from the tristimulus values

This method has a limitation on the number of principal components; for trichromatic space, we can only use the first three vectors for reconstruction. Results of the individual spectrum reconstructions using two different basis sets are given in Appendix 7. If there is no clipping, the computed spectrum is a metamer of the object color, where the tristimulus and CIELAB values are identical to those shown in the table of Appendix 7 and in Fig. 11.23. The average spectral and color errors of fourteen reconstructed spectra with respect to measured values are summarized in Table 11.4. The quality of the reproduced spectrum is indicated by the absolute-difference and standard-deviation measures given in the last two columns of the table. Generally, there is a reasonable fit between the measured and calculated spectra by either Cohen's or Eem's basis set, but not a very good fit because only three basis vectors are used. However, the beauty is that there is no (or small) color difference because this method gives a metameric reconstruction.

Figure 11.23 Comparisons of color differences in CIELAB of various basis sets for spectrum reconstruction from tristimulus values.
Table 11.4 Average spectral and color errors of fourteen reconstructed spectra from tristimulus values with respect to measured values.

Basis vector   ΔE_avg   ΔE_std   Δs      Δs_std
Cohen          0.40     1.25     0.036   0.026
Eem            0.42     1.24     0.038   0.025
Finlayson and Morovic have developed a metamer-constrained color correction by projecting a set of characterized RGB values from a camera onto a CIEXYZ color-matching function.²¹ This projection gives many sets of XYZ values; any one of these sets may be the correct answer for the RGB-to-XYZ transform. A good color transform results from choosing the middle of the set.
11.10 Applications

Spectrum decomposition/reconstruction is an extremely powerful tool for color imaging, both in understanding color science and in developing color technology. In the areas of color science, Maloney, Wandell, Trussell, Brill, West, Finlayson, Drew, Funt, and many others have developed and applied spectrum decomposition/reconstruction to color transformation, white-point conversion, metameric pairing, metamerism indexing, color constancy, and chromatic adaptation. These topics will be discussed in subsequent chapters.

In the areas of technology development, Mancill has used the Wiener estimation for digital color-image restoration;²² Hubel and coworkers have used simple linear regression, pseudo-inverse, and Wiener estimation for estimating the spectral sensitivity of a digital camera;²³ Parfitt and Mancill have used the Wiener estimation for spectral calibration of a color scanner;²⁴ and Sherman and Farrell have used PCA for scanner color characterization.²⁵ Utilizing the PCA approach, Sharma developed a color-scanner characterization method for photographic input media that does not require any test target.²⁶ First, he used the characteristic spectra measured from the photographic IT8 target to obtain basis vectors by employing principal component analysis. Together with a spectral model for the medium and a scanner model, he was able to convert scanner RGB signals from images to spectra by reconstructing from the basis set. The simulation results showed that the average and maximum errors were 1.75 and 7.15 ΔEab, respectively, for the Kodak IT8 test target scanned by a UMAX scanner. These results are on the same order as color characterizations using test targets, maybe even slightly better. Thus, this method was shown to provide a means for spectral scanner calibration, which can readily be transformed into a color calibration under any viewing illumination. Baribeau used PCA to design a multispectral 3D camera (or scanner) with optimal wavelengths that could be used to capture the color and shape of 3D objects such as fine arts and archeological artifacts.²⁷ Vilaseca and colleagues used PCA for processing near-infrared (NIR) multispectral signals obtained from a CCD camera. The multispectral signals were transformed and mapped to the first three principal components, creating pseudo-color images for visualization (NIR is invisible to the human eye). This method provides a means of discriminating images with similar visible spectra and appearance that differ in the NIR region.²⁸
References
1. D. B. Judd, D. L. MacAdam, and G. Wyszecki, Spectral distribution of typical daylight as a function of correlated color temperature, J. Opt. Soc. Am. 54, pp. 1031–1040 (1964).
2. J. Cohen, Dependency of the spectral reflectance curves of the Munsell color chips, Psychon. Sci. 1, pp. 369–370 (1964).
3. Colorimetry, CIE Publication No. 15.2, Central Bureau of the CIE, Vienna (1986).
4. F. König and W. Praefcke, A multispectral scanner, Colour Imaging: Vision and Technology, L. W. MacDonald and M. R. Luo (Eds.), pp. 129–143 (1999).
5. P. C. Herzog, D. Knipp, H. Stiebig, and F. König, Colorimetric characterization of novel multiple-channel sensors for imaging and metrology, J. Electron. Imaging 8, pp. 342–353 (1999).
6. T. Uchiyama, M. Yamaguchi, H. Haneishi, and N. Ohyama, A method for the unified representation of multispectral images with different number of bands, J. Imaging Sci. Techn. 48, pp. 120–124 (2004).
7. J. B. Cohen and W. E. Kappauf, Metameric color stimuli, fundamental metamers, and Wyszecki's metameric blacks, Am. J. Psychology 95, pp. 537–564 (1982).
8. J. B. Cohen and W. E. Kappauf, Color mixture and fundamental metamers: Theory, algebra, geometry, application, Am. J. Psychology 98, pp. 171–259 (1985).
9. J. B. Cohen, Color and color mixture: Scalar and vector fundamentals, Color Res. Appl. 13, pp. 5–39 (1988).
10. H. Kotera, H. Motomura, and T. Fumoto, Recovery of fundamental spectrum from color signals, 4th IS&T/SID Color Imaging Conf., pp. 141–144 (1996).
11. I. T. Jolliffe, Principal Component Analysis, Springer-Verlag, New York (1986).
12. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, pp. 163–176 (1989).
13. R. C. Gonzalez and P. Wintz, Digital Image Processing, Addison-Wesley, Reading, MA, pp. 122–130 (1987).
14. J. P. S. Parkkinen, J. Hallikainen, and T. Jaaskelainen, Characteristic spectra of Munsell colors, J. Opt. Soc. Am. A 6, pp. 318–322 (1989).
15. M. S. Drew and B. V. Funt, Natural metamers, CVGIP 56, pp. 139–151 (1992).
16. M. J. Vrhel and H. J. Trussell, Color correction using principal components, Color Res. Appl. 17, pp. 328–338 (1992).
17. J. K. Eem, H. D. Shin, and S. O. Park, Reconstruction of surface spectral reflectance using characteristic vectors of Munsell colors, 2nd IS&T/SID Color Imaging Conf., pp. 127–130 (1994).
18. A. Garcia-Beltran, J. L. Nieves, J. Hernandez-Andres, and J. Romero, Linear bases for spectral reflectance functions of acrylic paints, Color Res. Appl. 23, pp. 39–45 (1998).
19. C.-H. Lee, B.-J. Moon, H.-Y. Lee, E.-Y. Chung, and Y.-H. Ha, Estimation of spectral distribution of scene illumination from a single image, J. Imaging Sci. Techn. 44, pp. 308–320 (2000).
20. http://www.it.lut.fi/research/color/lutcs_database.html.
21. G. D. Finlayson and P. M. Morovic, Metamer constrained color correction, J. Imaging Sci. Techn. 44, pp. 295–300 (2000).
22. C. E. Mancill, Digital Color Image Restoration, Ph.D. Thesis, University of Southern California, Los Angeles, California (1975).
23. P. M. Hubel, D. Sherman, and J. E. Farrell, A comparison of methods of sensor spectral sensitivity estimation, IS&T and SID's 2nd Color Imaging Conf., pp. 45–48 (1994).
24. W. K. Pratt and C. E. Mancill, Spectral estimation techniques for the spectral calibration of a color image scanner, Appl. Optics 15, pp. 73–75 (1976).
25. D. Sherman and J. E. Farrell, When to use linear models for color calibration, IS&T and SID's 2nd Color Imaging Conf., pp. 33–36 (1994).
26. G. Sharma, Targetless scanner color calibration, J. Imaging Sci. Techn. 44, pp. 301–307 (2000).
27. R. Baribeau, Application of spectral estimation methods to the design of a multispectral 3D camera, J. Imaging Sci. Techn. 49, pp. 256–261 (2005).
28. M. Vilaseca, J. Pujol, M. Arjona, and F. M. Martinez-Verdu, Color visualization system for near-infrared multispectral images, J. Imaging Sci. Techn. 49, pp. 246–255 (2005).
Chapter 12
Computational Color Constancy
Color constancy is a visual phenomenon wherein colors of objects remain relatively the same under changing illumination. A red apple gives the same red appearance under either a fluorescent or an incandescent lamp, although the lights that the apple reflects to the eye are quite different. The light reflected from an object into the human visual pathway is proportional to the product of the illumination incident on the surface and the invariant spectral reflectance properties of the object. The fact that humans possess approximate color constancy indicates that our visual system does attempt to recover a description of the invariant spectral reflectance properties of the object. Often, color constancy and chromatic adaptation are used interchangeably. However, Brill and West seem to suggest otherwise, using Hunt's demonstration: a yellow cushion looks yellow when the whole scene is covered by a blue filter, but the cushion looks green when it is covered by the blue filter cut to the exact shape of the cushion. They stated that the yellowness of the cushion in the first instance is seen immediately despite the bluish cast of the whole slide; then the bluish cast diminishes after several seconds of chromatic adaptation. The difference in time dependence suggests that chromatic adaptation may be a different phenomenon from color constancy [1].
With the similarity and difference in mind, we turn to computational color constancy. Computational color constancy is built on the belief that the way to achieve color constancy is by mathematically recovering or estimating the illuminant-invariant surface reflectance. By doing so, it not only achieves color constancy but also recovers the spectra of the surface and the illumination. With these added benefits, computational color constancy becomes a powerful tool for many applications, such as artwork analysis, archives, reproduction, remote sensing, multispectral imaging, and machine vision.
This chapter reviews the vector-space representation of computational color constancy by first presenting the image irradiance model, then several mathematical models such as the neutral interface reflection (NIR) model, finite-dimensional linear model, dichromatic reflection model, lightness/retinex theory, and spectral sharpening.
12.1 Image Irradiance Model
The image irradiance model is basically the trichromatic visual system expanded into the spatial domain. The trichromatic value is the integration of the input signal Φ(λ) with the three types of cone response functions V(λ). The input color signal, in turn, is the product of the illuminant SPD E(λ) and the surface spectral reflectance function S(λ), where λ is the wavelength (see Section 1.1).

Υ = ∫ V(λ) Φ(λ) dλ = V^T Φ = V^T E S.   (12.1)

Function V(λ) can be any set of sensor sensitivities from cone photoreceptors, such that V1(λ) represents a long-wavelength sensor, V2(λ) represents a middle-wavelength sensor, and V3(λ) represents a short-wavelength sensor. Thus, matrix V is an n × 3 matrix, containing the sampled cone response functions in the visible range, where n is the number of samples in the range.

V = [ v1(λ1)  v2(λ1)  v3(λ1)
      v1(λ2)  v2(λ2)  v3(λ2)
      v1(λ3)  v2(λ3)  v3(λ3)
      ⋯       ⋯       ⋯
      v1(λn)  v2(λn)  v3(λn) ].   (12.2)

E is usually expressed as an n × n diagonal matrix, containing the sampled illuminant SPD.

E = diag( e(λ1), e(λ2), e(λ3), ⋯, e(λn) ).   (12.3)

S is a vector of n elements, consisting of the sampled surface spectral reflectance.

S = [ s(λ1)  s(λ2)  s(λ3)  ⋯  s(λn) ]^T.   (12.4)

Υ is a vector of three elements that represents the tristimulus values [X, Y, Z].
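Equations (12.1)–(12.4) map directly onto a few lines of linear algebra. The sketch below uses made-up five-sample curves purely for illustration; in practice V, E, and S would hold sampled CIE color-matching functions, a measured illuminant SPD, and a measured reflectance:

```python
import numpy as np

# V: n x 3 matrix of sampled sensor (cone) response functions, Eq. (12.2).
# The numbers are illustrative placeholders, not CIE data.
V = np.array([[0.1, 0.0, 0.3],
              [0.4, 0.2, 0.6],
              [0.9, 0.6, 0.2],
              [0.6, 0.9, 0.0],
              [0.2, 0.4, 0.0]])
# E: n x n diagonal matrix of the sampled illuminant SPD, Eq. (12.3).
E = np.diag([0.8, 1.0, 1.1, 1.0, 0.9])
# S: n-vector of sampled surface spectral reflectance, Eq. (12.4).
S = np.array([0.2, 0.5, 0.7, 0.4, 0.3])

# Eq. (12.1) in sampled form: the tristimulus vector is V^T E S.
upsilon = V.T @ E @ S
print(upsilon)          # three numbers playing the role of [X, Y, Z]
```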
12.1.1 Reflection phenomenon
Equation (12.1) implicitly assumes that a surface reflects only one type of light. This assumption is an oversimplification of a very complex surface reflection phenomenon. An excellent review of physical phenomena influencing color, such as surface reflection, light absorption, light scattering, multiple-internal reflection, and substrate reflection, is given by Emmel [2]; readers interested in this subject are referred to Ref. 2 for more detail. A simplified model considers two types of surface reflection. One is body (or subsurface) reflection, in which the light crosses the interface and enters the body of the object; it is then selectively absorbed by the material, partially scattered by the substrate, and then reflected diffusely in all directions. It is assumed that the spectral composition of the diffused lights is the same for all angles. This type of surface reflection depends on the chemical characteristics of the material. For optically inhomogeneous materials such as prints and films, the light that they reflect is dominated by body reflection. Thus, it is the light from the body reflection that contributes to the color constancy of the object.
Another type of reflection is the interface reflection that arises from the air-material interface; the light does not enter the object, but is immediately reflected at the interface, and is directed to a narrow range of viewing angles. For smooth surfaces, the interface reflection is called specular. This type of reflection is caused by the difference in the refractive indices between air and the material and is governed by Fresnel's law. For materials like metals and crystals, having highly smooth and homogeneous surfaces, the light that they reflect is dominated by interface reflection. For matte and inhomogeneous materials, the SPD of the reflected light is a function of the refractive index of the material, wavelength, and viewing geometry. Many types of materials serving as vehicles for pigments and dyes, such as oil, have virtually constant indices of refraction with respect to wavelength. It is assumed that the specular reflection is invariant, except for the intensity change with respect to viewing geometry. Fresnel's equation for specular reflection does vary with angle; it is, however, believed that this variation is small. Therefore, the interface reflection from inhomogeneous materials takes on the color of the illuminant. Lee and coworkers termed this phenomenon the neutral interface reflection (NIR) model [3].
Lee and coworkers tested the validity of these assumptions by measuring the SPD of the reflected light from a variety of inhomogeneous materials at several viewing angles with respect to the reflected light from an ideal diffuser (pressed barium-sulphate powder). They found that the SPD changes in magnitude but not in shape for plants, paints, oils, and plastics [3]. In other words, SPD curves are linearly related to one another at different viewing angles for a variety of materials. This means that the brightness (or luminance) of the body reflection from inhomogeneous materials changes, but the hue remains the same with respect to the viewing angle, indicating that the body spectral reflectance is independent of the viewing geometry. A plot of the chromaticities of the reflected light at different angles indicates that the NIR model holds, even for paper. In the following derivations, we assume the surface is matte with perfect light diffusion in all directions for the purpose of simplifying the extremely complex surface phenomenon. This assumption is an important concept in photometric computations and is called the Lambertian surface, having a uniformly diffusing surface with a constant luminance. The luminous intensity of a Lambertian surface in any direction is proportional to the cosine of the angle between that direction and the normal to the surface [4].
12.2 Finite-Dimensional Linear Models
Finite-dimensional linear models are based on the fact that illuminant SPD and object reflectance can be approximated accurately by a linear combination of a small number of basis vectors (or functions). It is a well-known fact that daylight illuminants can be computed by using three basis vectors (see Chapter 1), and numerous studies show that object surface reflectance can be reconstructed by a few principal components (three to six) that account for 99% or more of the variance (see Chapter 11). Therefore, Sällström and Buchsbaum proposed approximations for surface reflection and illumination as the linear combination of basis functions (or vectors) [5, 6].

S(λ) ≈ Σj Sj(λ) σj,   (12.5)

or

S ≈ Sj σ = [ s1(λ1)  s2(λ1)  s3(λ1)  ⋯  sj(λ1)
             s1(λ2)  s2(λ2)  s3(λ2)  ⋯  sj(λ2)
             s1(λ3)  s2(λ3)  s3(λ3)  ⋯  sj(λ3)
             ⋯
             s1(λn)  s2(λn)  s3(λn)  ⋯  sj(λn) ] [σ1 σ2 ⋯ σj]^T

         = [ s11  s21  s31  ⋯  sj1
             s12  s22  s32  ⋯  sj2
             s13  s23  s33  ⋯  sj3
             ⋯
             s1n  s2n  s3n  ⋯  sjn ] [σ1 σ2 ⋯ σj]^T.   (12.6)
The matrix Sj has a size of n × j and the vector σ has j elements, where j is the number of principal components (or basis vectors) used in the construction of the surface reflectance. Similarly, the illumination can be expressed as

E(λ) ≈ Σi Ei(λ) εi,   (12.7)

where i is the number of basis functions used. Equation (12.7) can be expressed in a vector-matrix form.

E ≈ Ei ε = [ e1(λ1)  e2(λ1)  ⋯  ei(λ1)
             e1(λ2)  e2(λ2)  ⋯  ei(λ2)
             e1(λ3)  e2(λ3)  ⋯  ei(λ3)
             ⋯
             e1(λn)  e2(λn)  ⋯  ei(λn) ] [ε1 ε2 ⋯ εi]^T

         = [ e11  e21  ⋯  ei1
             e12  e22  ⋯  ei2
             e13  e23  ⋯  ei3
             ⋯
             e1n  e2n  ⋯  ein ] [ε1 ε2 ⋯ εi]^T.   (12.8)
For the purpose of adhering to the diagonal matrix expression for illuminants, we have

E ≈ Ei ε = ε1 diag( e11, e12, e13, ⋯, e1n ) + ε2 diag( e21, e22, e23, ⋯, e2n ) + ⋯ + εi diag( ei1, ei2, ei3, ⋯, ein ),   (12.9a)

E = diag( Σi ei1 εi, Σi ei2 εi, Σi ei3 εi, ⋯, Σi ein εi ).   (12.9b)
Matrix E has the size of n × n, and the summation in Eq. (12.9b) carries from 1 to i. Substituting Eqs. (12.6) and (12.9) into Eq. (12.1), we have

Υ ≈ V^T (Ei ε)(Sj σ) = [V^T (Ei ε) Sj] σ = L σ,   (12.10)

where

L = V^T (Ei ε) Sj.   (12.11)
Matrix L is called the lighting matrix by Maloney [7, 8]. To obtain the lighting matrix, we need to determine the product [(Ei ε) Sj] first. The product is an n × j matrix because (Ei ε) is n × n and Sj is n × j, and we have

[(Ei ε) Sj] = [ (Σi ei1 εi)s11  (Σi ei1 εi)s21  (Σi ei1 εi)s31  ⋯  (Σi ei1 εi)sj1
                (Σi ei2 εi)s12  (Σi ei2 εi)s22  (Σi ei2 εi)s32  ⋯  (Σi ei2 εi)sj2
                (Σi ei3 εi)s13  (Σi ei3 εi)s23  (Σi ei3 εi)s33  ⋯  (Σi ei3 εi)sj3
                ⋯
                (Σi ein εi)s1n  (Σi ein εi)s2n  (Σi ein εi)s3n  ⋯  (Σi ein εi)sjn ].   (12.12)
Knowing the product [(Ei ε) Sj], we can obtain the lighting matrix L. The explicit expression is given in Eq. (12.13).

L = [V^T (Ei ε) Sj]
  = [ Σn v1n (Σi ein εi)s1n  Σn v1n (Σi ein εi)s2n  Σn v1n (Σi ein εi)s3n  ⋯  Σn v1n (Σi ein εi)sjn
      Σn v2n (Σi ein εi)s1n  Σn v2n (Σi ein εi)s2n  Σn v2n (Σi ein εi)s3n  ⋯  Σn v2n (Σi ein εi)sjn
      Σn v3n (Σi ein εi)s1n  Σn v3n (Σi ein εi)s2n  Σn v3n (Σi ein εi)s3n  ⋯  Σn v3n (Σi ein εi)sjn
      ⋯
      Σn vhn (Σi ein εi)s1n  Σn vhn (Σi ein εi)s2n  Σn vhn (Σi ein εi)s3n  ⋯  Σn vhn (Σi ein εi)sjn ].   (12.13)
The lighting matrix L has the size of h × j, where h is the number of sensors in the V matrix and j is the number of basis vectors for the surface reflectance. The (k, l) element in the lighting matrix L has the form [Σn vkn (Σi ein εi) sln], where the row number k ranges from 1 to h with h being the number of sensors, and the column number l ranges from 1 to j with j being the number of basis vectors used for approximating the surface reflectance. The inner summation carries from 1 to i with i being the number of basis vectors used for approximating the illuminant SPD, and the outer summation carries from 1 to n with n being the number of samples in the visible spectrum. Therefore, the lighting matrix L is dependent on the illumination vector ε. It is a linear transform from the j-dimensional space of surface reflectance into the h-dimensional space of sensor responses. Equation (12.10) is the general expression for the finite-dimensional linear model; theoretically, it works for any number of sensors [9]. Normally, h is equal to three because human color vision has only three different cone photoreceptors.
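As a concrete illustration, the lighting matrix of Eqs. (12.9b)–(12.13) is one line of linear algebra once the sampled bases are in hand. The bases below are random placeholders, not fitted to any measured illuminants or surfaces:

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, i, j = 31, 3, 3, 3          # wavelength samples, sensors, illuminant/surface bases
V  = rng.random((n, h))           # sampled sensor responses (n x h), stand-in data
Ei = rng.random((n, i))           # illuminant basis vectors (n x i), stand-in data
Sj = rng.random((n, j))           # surface-reflectance basis vectors (n x j), stand-in
eps = np.array([1.0, 0.3, -0.2])  # illuminant coefficients (epsilon)

# Eq. (12.9b): the illuminant matrix is diag(Ei @ eps); Eq. (12.13): L = V^T E Sj.
L = V.T @ np.diag(Ei @ eps) @ Sj
assert L.shape == (h, j)          # an h x j lighting matrix

# Eq. (12.10): sensor response to a surface with coefficients sigma.
sigma = np.array([0.5, 0.1, 0.2])
upsilon = L @ sigma
```

The (k, l) element of `L` is exactly the double sum Σn vkn (Σi ein εi) sln of Eq. (12.13), which the diagonal-matrix product computes implicitly.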
Alternately, Eq. (12.10) can be expressed as

Υ ≈ V^T (Sj σ)(Ei ε) = [V^T (Sj σ) Ei] ε = L̃ ε,   (12.14)

and

L̃ = V^T (Sj σ) Ei,   (12.15)

by changing the association; this is possible because (Ei ε) is a diagonal matrix and therefore commutative. In matrix-vector form, we have

L̃ = [ Σn v1n (Σj sjn σj)e1n  Σn v1n (Σj sjn σj)e2n  Σn v1n (Σj sjn σj)e3n  ⋯  Σn v1n (Σj sjn σj)ein
      Σn v2n (Σj sjn σj)e1n  Σn v2n (Σj sjn σj)e2n  Σn v2n (Σj sjn σj)e3n  ⋯  Σn v2n (Σj sjn σj)ein
      Σn v3n (Σj sjn σj)e1n  Σn v3n (Σj sjn σj)e2n  Σn v3n (Σj sjn σj)e3n  ⋯  Σn v3n (Σj sjn σj)ein
      ⋯
      Σn vhn (Σj sjn σj)e1n  Σn vhn (Σj sjn σj)e2n  Σn vhn (Σj sjn σj)e3n  ⋯  Σn vhn (Σj sjn σj)ein ].   (12.16)

Matrix L̃ can be viewed as the visually adjusted surface reflection. By substituting Eq. (12.16) into Eq. (12.14), we have

Υ = [ Σn v1n (Σj sjn σj)e1n  Σn v1n (Σj sjn σj)e2n  Σn v1n (Σj sjn σj)e3n  ⋯  Σn v1n (Σj sjn σj)ein
      Σn v2n (Σj sjn σj)e1n  Σn v2n (Σj sjn σj)e2n  Σn v2n (Σj sjn σj)e3n  ⋯  Σn v2n (Σj sjn σj)ein
      Σn v3n (Σj sjn σj)e1n  Σn v3n (Σj sjn σj)e2n  Σn v3n (Σj sjn σj)e3n  ⋯  Σn v3n (Σj sjn σj)ein
      ⋯
      Σn vhn (Σj sjn σj)e1n  Σn vhn (Σj sjn σj)e2n  Σn vhn (Σj sjn σj)e3n  ⋯  Σn vhn (Σj sjn σj)ein ] [ε1 ε2 ⋯ εi]^T.   (12.17)

The inner summation of Eq. (12.16) carries from 1 to j and the outer summation carries from 1 to n.
For trichromatic vision, Eq. (12.10) may be solved for σ, the coefficients for the basis vectors of the surface reflection, independently for each of the three chromatic channels. The problem is that there are only three equations but (i + j) unknowns. To obtain a unique solution, the sum (i + j) cannot exceed 3.
First, let us consider the case where the ambient lighting on the scene is known. In this case, the vector ε is known and the lighting matrix L is also known. Therefore, the surface reflectance, or the vector σ, can be recovered by inverting the lighting matrix when h = j (this gives a square matrix) or by pseudo-inverting the lighting matrix when h > j. However, there is still no unique solution if h < j because Eq. (12.13) is underconstrained.
Second, we consider the case where both lighting and surface reflectance are unknown. Various daylights (e.g., CIE D illuminants) can be accurately represented by three basis vectors with two coefficients (the first vector has the coefficient of 1 for daylight illuminants), whereas other illuminants may need more basis vectors. Surface reflectance requires three to six basis vectors in order to have acceptable accuracy. Therefore, in almost all cases (i + j) > 3; thus, no unique solution is possible without additional constraints imposed on the lighting and surfaces.
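The first (known-illuminant) case can be sketched in a few lines; `sigma_true` here is an arbitrary made-up ground truth used only to verify the inversion, and the bases are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
n, h, j = 31, 3, 3
V  = rng.random((n, h))                # sampled sensor responses (stand-in data)
Sj = rng.random((n, j))                # surface basis vectors (stand-in data)
E_known = 0.5 + rng.random(n)          # known illuminant SPD (diagonal of E)

L = V.T @ np.diag(E_known) @ Sj        # lighting matrix, Eq. (12.13)

sigma_true = np.array([0.7, -0.2, 0.4])
upsilon = L @ sigma_true               # observed tristimulus values

# h = j here, so sigma is recovered by inverting L; with h > j one would use
# np.linalg.pinv(L) @ upsilon instead (the pseudo-inverse case in the text).
sigma = np.linalg.solve(L, upsilon)
print(np.allclose(sigma, sigma_true))  # True
```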
12.3 Three-Two Constraint
Maloney and Wandell proposed a solution by setting h = j + 1, which means that the number of basis vectors for the surface reflectance is one less than the number of sensors. With this constraint, one will have m(j + 1) known observations from m different spatial locations, mj unknown surface reflections, and i unknown illuminant components, assuming that the lighting is the same at all spatial locations involved. One can choose the number of locations to satisfy the condition of m(j + 1) > (mj + i), or m > i, for a unique solution that simultaneously obtains vectors ε and σ [8-11]. Now, let us make a few computations to see how many patches are needed for a unique solution. We know h = 3; therefore, j = 2. If i = 3, m must be greater than three locations (or patches) in order to meet the inequality condition.
The constraint of i = 3 and j = 2 is the three-two (3-2) constraint, or 3-2 world assumption, that allows an illuminant to have three components and a surface spectrum to have two principal components; the explicit expressions in two different ways are given in Eq. (12.18).
Υ = [V^T (Ei ε) Sj] σ
  = [ Σn v1n (Σi ein εi)s1n  Σn v1n (Σi ein εi)s2n
      Σn v2n (Σi ein εi)s1n  Σn v2n (Σi ein εi)s2n
      Σn v3n (Σi ein εi)s1n  Σn v3n (Σi ein εi)s2n ] [σ1 σ2]^T,   (12.18a)

or

Υ = ε1 [ Σn v1n e1n s1n  Σn v1n e1n s2n
         Σn v2n e1n s1n  Σn v2n e1n s2n
         Σn v3n e1n s1n  Σn v3n e1n s2n ] [σ1 σ2]^T
  + ε2 [ Σn v1n e2n s1n  Σn v1n e2n s2n
         Σn v2n e2n s1n  Σn v2n e2n s2n
         Σn v3n e2n s1n  Σn v3n e2n s2n ] [σ1 σ2]^T
  + ε3 [ Σn v1n e3n s1n  Σn v1n e3n s2n
         Σn v2n e3n s1n  Σn v2n e3n s2n
         Σn v3n e3n s1n  Σn v3n e3n s2n ] [σ1 σ2]^T.   (12.18b)
The inner summations of Eq. (12.18a) carry from i = 1 to 3, and the outer summations carry from n = 1 to n. The elements in all three 3 × 2 matrices of Eq. (12.18b) are known and can be precomputed. If we set lijk = Σn vkn ein sjn, Eq. (12.18b) becomes

X = ε1 (l111 σ1 + l121 σ2) + ε2 (l211 σ1 + l221 σ2) + ε3 (l311 σ1 + l321 σ2),   (12.19a)

Y = ε1 (l112 σ1 + l122 σ2) + ε2 (l212 σ1 + l222 σ2) + ε3 (l312 σ1 + l322 σ2),   (12.19b)

Z = ε1 (l113 σ1 + l123 σ2) + ε2 (l213 σ1 + l223 σ2) + ε3 (l313 σ1 + l323 σ2).   (12.19c)
To solve for ε and σ, we need at least four distinct color patches (m > i) such that we can set up 12 equations with 12 known (Xm, Ym, Zm) values to solve for 11 unknowns: ε1, ε2, ε3, σ1m, and σ2m (m = 1, 2, 3, and 4).

[ X1 Y1 Z1 X2 Y2 Z2 X3 Y3 Z3 X4 Y4 Z4 ]^T =

[ (l111 σ11 + l121 σ21)  (l211 σ11 + l221 σ21)  (l311 σ11 + l321 σ21)
  (l112 σ11 + l122 σ21)  (l212 σ11 + l222 σ21)  (l312 σ11 + l322 σ21)
  (l113 σ11 + l123 σ21)  (l213 σ11 + l223 σ21)  (l313 σ11 + l323 σ21)
  (l111 σ12 + l121 σ22)  (l211 σ12 + l221 σ22)  (l311 σ12 + l321 σ22)
  (l112 σ12 + l122 σ22)  (l212 σ12 + l222 σ22)  (l312 σ12 + l322 σ22)
  (l113 σ12 + l123 σ22)  (l213 σ12 + l223 σ22)  (l313 σ12 + l323 σ22)
  (l111 σ13 + l121 σ23)  (l211 σ13 + l221 σ23)  (l311 σ13 + l321 σ23)
  (l112 σ13 + l122 σ23)  (l212 σ13 + l222 σ23)  (l312 σ13 + l322 σ23)
  (l113 σ13 + l123 σ23)  (l213 σ13 + l223 σ23)  (l313 σ13 + l323 σ23)
  (l111 σ14 + l121 σ24)  (l211 σ14 + l221 σ24)  (l311 σ14 + l321 σ24)
  (l112 σ14 + l122 σ24)  (l212 σ14 + l222 σ24)  (l312 σ14 + l322 σ24)
  (l113 σ14 + l123 σ24)  (l213 σ14 + l223 σ24)  (l313 σ14 + l323 σ24) ] [ε1 ε2 ε3]^T,   (12.20a)

or

Υm = Lσ ε.   (12.20b)

Vector Υm has 12 elements, matrix Lσ has a size of 12 × 3, and vector ε has 3 elements. If the selected patches are truly distinct, then vector ε can be expressed as functions of σ by pseudo-inverting Eq. (12.20).

ε = (Lσ^T Lσ)^(-1) Lσ^T Υm.   (12.21)

Equation (12.21) is substituted back into Eq. (12.20a) for obtaining vector σ. If more patches are available, we can have optimal results in the least-squares sense. After obtaining ε and σ, we can proceed to recover the illumination using vector ε and four surface reflectances for the four σm vectors.
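The closed-form two-step procedure is discussed next. Purely as a numerical illustration that the 3-2 world with four patches is solvable, the sketch below attacks the same bilinear system by alternating least squares (a substitute of my choosing, not Maloney and Wandell's algorithm), on random stand-in bases:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 31
V  = rng.random((n, 3))                  # three sensors (h = j + 1 = 3), stand-in data
Ei = rng.random((n, 3))                  # three illuminant basis vectors (i = 3)
Sj = rng.random((n, 2))                  # two surface basis vectors (j = 2)

eps_true = np.array([1.0, 0.4, -0.1])    # made-up illuminant coefficients
sig_true = rng.random((4, 2))            # four distinct patches (m > i)

def responses(eps, sigmas):
    L = V.T @ np.diag(Ei @ eps) @ Sj     # lighting matrix for this illuminant
    return np.array([L @ s for s in sigmas])

obs = responses(eps_true, sig_true)      # 4 x 3 observed sensor responses

# Alternate between solving each patch's sigma (Eq. 12.10) and solving eps
# from the stacked system of Eq. (12.20a), which is linear in eps for fixed sigma.
eps = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    L = V.T @ np.diag(Ei @ eps) @ Sj
    sig = np.array([np.linalg.lstsq(L, o, rcond=None)[0] for o in obs])
    A = np.vstack([V.T @ np.diag(Sj @ s) @ Ei for s in sig])   # 12 x 3 stack
    eps = np.linalg.lstsq(A, obs.reshape(-1), rcond=None)[0]

resid = np.linalg.norm(responses(eps, sig) - obs)
print(f"residual after ALS: {resid:.2e}")
```

Note the inherent scale ambiguity of the bilinear system (scaling ε up and σ down leaves the responses unchanged), which is why the check is on the residual rather than on ε itself.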
The general solution to the world of i illuminant components and j surface components with (j + 1) sensors is attributed to the different characteristics of the vectors ε and σ. The illumination vector ε specifies the value of the lighting matrix L, which is a linear transformation from the j-dimensional space of surface reflectance into the (j + 1)-dimensional space of the sensor responses. Equation (12.10) shows that the sensor response to any particular surface is the weighted sum of the j column vectors of L. Consequently, the sensor response must fall into a subspace of the sensor space determined by L and, therefore, by the illumination vector ε. Generally, the computational method consists of a two-step procedure. First, one determines the plane spanning the sensor responses, which permits one to recover the illumination vector. Second, once the illumination vector is known, one can compute the lighting matrix L. Knowing L, one can obtain the surface vector σ by simply inverting L [8, 9].
The finite-dimensional linear model has salient implications for human color vision. First, it suggests that computationally accurate color constancy is possible only if illumination and reflectance can be represented by a small number of basis functions. The human visual system is known to have better color constancy over some types of illumination than others. This formulation provides a means of experimentally determining the range of ambient illuminations and surfaces over which human color constancy will suffice [8]. Second, the number of active photoreceptors in color vision limits the number of degrees of freedom in surface reflection that can be recovered. With only three different cone photoreceptors, one can recover surface reflectance with, at most, two degrees of freedom. Two basis vectors, in general, are insufficient for accurately estimating surface reflectance, precluding perfect color constancy. Third, to meet the criterion of a unique solution, m(j + 1) > (mj + i), one must have m > i distinct surfaces (or locations) in the scene to permit recovery. At least two distinct surfaces are needed to specify the plane (which must pass through the origin) that determines the light. And at least (h - 1) distinct surfaces are needed in order to uniquely determine the illumination vector ε. In the presence of deviations from the linear models of illumination and surface reflection, an increase in the number of distinct surfaces will, in general, improve the estimates of the illumination and the corresponding surface reflectance. An analysis of the brightness constancy suggests that color constancy should improve with the number of distinct surfaces in a scene [12]. Fourth, the two-component surface reflectance and i-component illumination model are solvable under the assumption that the ambient lighting is constant in the spatial domain. In a real scene, the spectral composition of ambient illumination varies with spatial location. The computation given above can be extended to estimate a slowly varying illumination [8].
12.4 Three-Three Constraint
Most computational models for color constancy use three basis vectors for both illumination and surface reflectance. This is a pragmatic choice because three vectors provide better approximations to illumination and surface reflectance. Major ones are the Sällström-Buchsbaum model, various dichromatic models, Brill's volumetric model, and retinex theory.
For a three-three (3-3) world, Eq. (12.10) becomes

Υ = [V^T (Ei ε) Sj] σ = L σ
  = [ Σn v1n (Σi ein εi)s1n  Σn v1n (Σi ein εi)s2n  Σn v1n (Σi ein εi)s3n
      Σn v2n (Σi ein εi)s1n  Σn v2n (Σi ein εi)s2n  Σn v2n (Σi ein εi)s3n
      Σn v3n (Σi ein εi)s1n  Σn v3n (Σi ein εi)s2n  Σn v3n (Σi ein εi)s3n ] [σ1 σ2 σ3]^T.   (12.22)

The inner summation of Eq. (12.22) carries from i = 1 to 3, and the outer summation carries from n = 1 to n.
From the foundation of the trichromatic visual system, it is not possible to have three or more surface basis vectors regardless of the number of spatial locations taken. This is because mh is always less than (mj + i) if j ≥ h. To have three or more surface reflectance vectors, additional information is needed or constraints must be imposed.
Additional information may come from interface reflections or from additional sensors, such as the 1951 scotopic function, to have four sensors in Eq. (12.13) for sustaining a three-basis surface reflectance [13, 14]. Wandell and Maloney proposed a method of using four photosensors [15]. They estimated reflectance parameters by using many surfaces, then obtained ε and σ by a linearization argument.
Constraints may come from two areas: the spatial variation of the signal intensity and the spectral normalization of the surface reflectance. Spatial variation assumes that the effective irradiance varies slowly and smoothly across the entire scene and is independent of the viewer's position [16, 17]. Spectral normalization assumes that the average reflectance of a scene is a gray.
12.4.1 Gray world assumption
The assumption that the average reflectance of a scene in each chromatic channel is the same, giving a gray (or the average of the lightest and darkest naturally occurring surface reflectance values), is called the gray world assumption [6, 13, 18]. If the gray world assumption holds, then the lightness of a patch is an accurate and invariant measure of its surface reflectance. If the normalizing factor is obtained by averaging over the entire scene, giving equal weight to areas close to and far from the point of interest (e.g., a gray patch), then the color of a gray patch will not change when the color patches around it are merely shuffled in position. This means that the local effect of the simultaneous color contrast will disappear in the global computation of constant colors. In reality, a gray patch does change color as its surroundings are shuffled; it looks gray when surrounded by a random array of colors and yellowish when surrounded by bluish colors [18, 19]. These results indicate that the gray world assumption plays a role in spectral normalization, but it is not the only factor. In extreme cases of biased surroundings, the gray world assumption predicts larger color shifts than those observed [18].
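In its simplest practical form, the gray world assumption yields a per-channel, von Kries-style white balance: scale each channel so that the scene average becomes gray. The sketch below applies only this diagonal correction to a synthetic image; it is an illustration, not the full spectral recovery treated in this chapter:

```python
import numpy as np

rng = np.random.default_rng(4)
reflect = rng.random((64, 64, 3))            # synthetic per-channel "reflectances"
illum = np.array([1.2, 1.0, 0.6])            # reddish illuminant gains (made up)
img = reflect * illum                        # captured image under that illuminant

# Gray world: rescale each channel so that its mean equals the overall mean.
means = img.reshape(-1, 3).mean(axis=0)
balanced = img * (means.mean() / means)

corr_means = balanced.reshape(-1, 3).mean(axis=0)
print(corr_means)     # the three channel means are now equal (a gray average)
```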
12.4.2 Sällström-Buchsbaum model
Sällström and Buchsbaum proposed that both illuminant and reflectance spectra are linear combinations of three known basis vectors, and color constancy is adapted to gray or white by using the gray world assumption that Υ0 is inferred from a spatial average over all object colors in the visual field or is obtained from a reference white area [5, 6]. A reference white or gray in the visual field has a reflectance spectrum S0(λ), which is known a priori; thus, Υ0 = [X0, Y0, Z0] is also known. Knowing S0(λ), we can express the tristimulus values of the gray as

Υ0 = V^T (Ei ε) S0 = L0 ε
   = [ Σn v1n e1n s0n  Σn v1n e2n s0n  Σn v1n e3n s0n
       Σn v2n e1n s0n  Σn v2n e2n s0n  Σn v2n e3n s0n
       Σn v3n e1n s0n  Σn v3n e2n s0n  Σn v3n e3n s0n ] [ε1 ε2 ε3]^T.   (12.23)

The summation of Eq. (12.23) carries from 1 to n with n being the number of samples in the visible spectrum. Equation (12.23) can be solved for vector ε because the sensor responses V and reflectance S0 are known and the basis vectors for illumination Ei are predetermined. This means that every element in matrix L0 is known and can be precomputed. Matrix L0 is a square 3 × 3 matrix with a rank of three because it has three independent rows; therefore, it is not singular and can be inverted.

ε = L0^(-1) Υ0.   (12.24)

Knowing ε, we can estimate the illuminant by using Eq. (12.8), then insert it into Eq. (12.22) to compute the lighting matrix L. With the gray (or white) world assumption, we now know every element in the 3 × 3 matrix L and can obtain vector σ by inverting matrix L as given in Eq. (12.25).

σ = L^(-1) Υ.   (12.25)

Using measured tristimulus values of a surface as input, we compute σ and then estimate the surface reflectance by using Eq. (12.6). With the gray world assumption and one reference patch, we obtain both the surface reflectance and the SPD of the illuminant.
Unfortunately, the gray world assumption does not always hold, and the scene does not always have a white area for estimating the illumination. To circumvent these problems, Shafer, Lee, and others use highlights, the bright colored surfaces, for estimating the illumination [3, 20]. This becomes a major component of dichromatic reflection models.
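The whole recovery loop of Eqs. (12.23)–(12.25) can be checked numerically. The bases and the reference gray below are synthetic placeholders, with an arbitrary made-up ground truth used only to verify the two inversions:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 31
V  = rng.random((n, 3))                 # sampled sensor responses (stand-in data)
Ei = rng.random((n, 3))                 # three illuminant basis vectors (stand-in)
Sj = rng.random((n, 3))                 # three surface basis vectors (stand-in)
S0 = np.full(n, 0.5)                    # known reference gray, S0(lambda)

eps_true = np.array([1.0, 0.25, -0.15]) # the scene's (unknown) illuminant coefficients
E = np.diag(Ei @ eps_true)              # actual illuminant matrix

# Eq. (12.23): response of the reference gray; L0 is known a priori.
L0 = V.T @ np.diag(S0) @ Ei
upsilon0 = V.T @ E @ S0

eps = np.linalg.solve(L0, upsilon0)     # Eq. (12.24): recover the illuminant
assert np.allclose(eps, eps_true)

# Eqs. (12.22) and (12.25): recover surface coefficients of an arbitrary patch.
sigma_true = np.array([0.6, 0.2, -0.1])
upsilon = V.T @ E @ (Sj @ sigma_true)   # measured tristimulus values of the patch
L = V.T @ np.diag(Ei @ eps) @ Sj        # lighting matrix from the recovered eps
sigma = np.linalg.solve(L, upsilon)     # Eq. (12.25)
print(np.allclose(sigma, sigma_true))   # True
```

One reference gray plus one matrix inversion thus yields the illuminant, after which every patch in the scene is recovered by a second fixed inversion.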
12.4.3 Dichromatic reflection model
Recognizing the complex nature of a surface reflection that consists of two components, interface (specular) and body (diffuse) reflections, researchers have proposed color-constancy models that take both reflections into account and utilize the additional information given by the specular reflection to estimate the illumination. This type of light-reflection model is called the dichromatic reflection model, where the reflected light Φ(λ, x) of an inhomogeneous material is the sum of the interface light Φs(λ, x) and the body light Φd(λ, x) [3, 20-32].

Φ(λ, x) = Φd(λ, x) + Φs(λ, x).   (12.26)
The parameter x accounts for the pixel location and reflection geometry, such as the incident and viewing angles. From Lee's study, it is possible to separate wavelength from angular dependence because the luminance of the reflected light changes, but the hue remains the same with respect to the viewing geometry [3, 23].

Φ(λ, x) = α(x) Φd(λ) + β(x) Φs(λ),   (12.27)

where α and β are the position and geometric scaling factors. The reflected light is the product of the surface reflectance and the SPD of the illuminant [23, 24].

Φ(λ, x) = α(x) Sd(λ)E(λ) + β(x) Ss(λ)E(λ).   (12.28)

Many materials have constant refractive indices over the visible region, such that the specular reflection Ss(λ) can also be viewed as a constant. In this case, we can lump Ss(λ) into the scaling factor β(x) and make the resulting factor βs(x) a function of only the pixel location.

Φ(λ, x) = α(x) Sd(λ)E(λ) + βs(x) E(λ).   (12.29)

Now, we introduce the finite-dimensional linear model for both Sd(λ) and E(λ), as given in Eqs. (12.5) and (12.7), respectively. Equation (12.29) becomes [28]

Φ(λ, x) = α(x) [Σi Ei(λ) εi][Σj Sj(λ) σj] + βs(x) [Σi Ei(λ) εi].   (12.30)
Substituting Eq. (12.30) into Eq. (12.1), we derive the sensor responses as
(x) =
_
V()(, x) d

=(x)
_
V()
_

E
i
()
i
__

S
j
()
j
_
d
+
s
(x)
_
V()
_

E
i
()
i
_
d. (12.31)
246 Computational Color Technology
In vector-matrix notation, Eq. (12.31) becomes

ρ(x) ≅ α(x) [V^T E(ε) S_j] σ + β_s(x) [V^T E_i] ε = α(x) L σ + β_s(x) Λ ε. (12.32)
The lighting matrix L is exactly the same as the one given in Eq. (12.13), and
matrix Λ is given as follows:

Λ = [V^T E_i] =
\begin{bmatrix}
Σ_n v_{1n} e_{1n} & Σ_n v_{1n} e_{2n} & Σ_n v_{1n} e_{3n} & \cdots & Σ_n v_{1n} e_{in} \\
Σ_n v_{2n} e_{1n} & Σ_n v_{2n} e_{2n} & Σ_n v_{2n} e_{3n} & \cdots & Σ_n v_{2n} e_{in} \\
Σ_n v_{3n} e_{1n} & Σ_n v_{3n} e_{2n} & Σ_n v_{3n} e_{3n} & \cdots & Σ_n v_{3n} e_{in} \\
\vdots \\
Σ_n v_{hn} e_{1n} & Σ_n v_{hn} e_{2n} & Σ_n v_{hn} e_{3n} & \cdots & Σ_n v_{hn} e_{in}
\end{bmatrix}. (12.33)
For trichromatic vision, h = 3 with a 3-3 world constraint on ε and σ; the
explicit expression of Eq. (12.32) is

ρ(x) = α(x)
\begin{bmatrix}
Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{3n} \\
Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{3n} \\
Σ_n v_{3n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{3n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{3n}(Σ_i e_{in} ε_i)s_{3n}
\end{bmatrix}
σ_3
+ β_s(x)
\begin{bmatrix}
Σ_n v_{1n} e_{1n} & Σ_n v_{1n} e_{2n} & Σ_n v_{1n} e_{3n} \\
Σ_n v_{2n} e_{1n} & Σ_n v_{2n} e_{2n} & Σ_n v_{2n} e_{3n} \\
Σ_n v_{3n} e_{1n} & Σ_n v_{3n} e_{2n} & Σ_n v_{3n} e_{3n}
\end{bmatrix}
ε_3. (12.34)
For multispectral sensors such as a six-channel digital camera, for example,
under a 3-3 world constraint, we have

ρ(x) = α(x)
\begin{bmatrix}
Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{3n} \\
Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{3n} \\
\vdots & \vdots & \vdots \\
Σ_n v_{6n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{6n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{6n}(Σ_i e_{in} ε_i)s_{3n}
\end{bmatrix}
σ_3
+ β_s(x)
\begin{bmatrix}
Σ_n v_{1n} e_{1n} & Σ_n v_{1n} e_{2n} & Σ_n v_{1n} e_{3n} \\
Σ_n v_{2n} e_{1n} & Σ_n v_{2n} e_{2n} & Σ_n v_{2n} e_{3n} \\
\vdots & \vdots & \vdots \\
Σ_n v_{6n} e_{1n} & Σ_n v_{6n} e_{2n} & Σ_n v_{6n} e_{3n}
\end{bmatrix}
ε_3. (12.35)
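The two terms of Eq. (12.32) can be checked numerically: building L and Λ from
sampled basis vectors, the matrix form must reproduce the direct spectral
integration of Eq. (12.31). A sketch with random stand-in spectra (no real
sensor or illuminant data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31                              # wavelength samples
V  = rng.random((n, 3))             # sensor matrix (stand-in curves)
Ei = rng.random((n, 3))             # illuminant basis vectors E_i
Sj = rng.random((n, 3))             # surface-reflectance basis vectors S_j

eps   = np.array([1.0, 0.3, -0.2])  # illuminant weights (epsilon)
sigma = np.array([0.6, 0.1, 0.2])   # body-reflectance weights (sigma)

E   = Ei @ eps                      # illuminant SPD
L   = V.T @ np.diag(E) @ Sj         # lighting matrix of Eq. (12.13)
Lam = V.T @ Ei                      # matrix Lambda of Eq. (12.33)

alpha, beta_s = 0.8, 0.15           # geometric factors at one pixel x
rho = alpha * (L @ sigma) + beta_s * (Lam @ eps)    # Eq. (12.32)

# The same response computed directly from the spectral model, Eq. (12.31)
rho_direct = V.T @ (alpha * E * (Sj @ sigma) + beta_s * E)
```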
12.4.4 Estimation of illumination

The dichromatic reflection model of Eq. (12.32) indicates that the sensor
response ρ(x) at any spatial position x is a linear combination of two
vectors, Lσ and Λε, that form a 2D plane in h-dimensional vector space, where
h is the number of sensors in the visual system or an imaging system. This
result is based on two assumptions: (i) the additivity of Eq. (12.26) holds,
and (ii) the illumination is constant
[see Eq. (12.29)]. Tominaga and Wandell developed a method to evaluate the
illumination by using several object surfaces that include specular
reflection. If m objects are observed under the same illumination, they argue
that the m image-signal planes must intersect because they all contain a
common vector Λε. Based on this reasoning, they proposed that if the
intersection is found, then the illuminant vector ε can be estimated from the
intersection vector by a matrix inversion Λ^{-1} (if Λ is a square matrix) or
a pseudo-matrix inversion Λ^+ (if Λ is not a square matrix).^{23,24,28}

To find the intersection of the color-signal planes, they first measure m
color signals reflected from an object surface to give an n × m matrix Φ,
where each measurement (column) Φ_m is normalized to unity.

Φ =
\begin{bmatrix}
Φ_1(λ_1) & Φ_2(λ_1) & \cdots & Φ_m(λ_1) \\
Φ_1(λ_2) & Φ_2(λ_2) & \cdots & Φ_m(λ_2) \\
Φ_1(λ_3) & Φ_2(λ_3) & \cdots & Φ_m(λ_3) \\
\vdots \\
Φ_1(λ_n) & Φ_2(λ_n) & \cdots & Φ_m(λ_n)
\end{bmatrix}, (12.36)

and

[Φ_m]^T Φ_m = Σ_n [Φ_m(λ_n)]^2 = 1. (12.37)
The second step is to perform the singular-value decomposition (SVD) on matrix
Φ. The SVD is a method of decomposing a rectangular matrix into two orthogonal
matrices U and W and a diagonal matrix Δ.

Φ = U Δ W^T. (12.38)

It starts by creating a symmetric and positive matrix [Φ^T Φ], having a size
of m × m because Φ is n × m. Since the matrix is symmetric and positive, the
eigenvalues are all positive. Furthermore, if the matrix [Φ^T Φ] has a rank r,
then there are r nonzero eigenvalues. The elements δ_i of the diagonal matrix
Δ are the square roots of the eigenvalues of the matrix [Φ^T Φ], with the
singular values δ_i in decreasing order, δ_i ≥ δ_{i+1}. The matrix W contains
the eigenvectors of the matrix [Φ^T Φ], having a size of m × m, and matrix U
contains the eigenvectors of the matrix [Φ Φ^T], having a size of n × m. The
explicit expression of Eq. (12.38) is given in Eq. (12.39).
Φ =
\begin{bmatrix}
u_{11} & u_{21} & u_{31} & \cdots & u_{m1} \\
u_{12} & u_{22} & u_{32} & \cdots & u_{m2} \\
u_{13} & u_{23} & u_{33} & \cdots & u_{m3} \\
\vdots \\
u_{1n} & u_{2n} & u_{3n} & \cdots & u_{mn}
\end{bmatrix}
\begin{bmatrix}
δ_1 & 0 & 0 & \cdots & 0 \\
0 & δ_2 & 0 & \cdots & 0 \\
0 & 0 & δ_3 & \cdots & 0 \\
\vdots \\
0 & 0 & 0 & \cdots & δ_m
\end{bmatrix}
\begin{bmatrix}
w_{11} & w_{12} & w_{13} & \cdots & w_{1m} \\
w_{21} & w_{22} & w_{23} & \cdots & w_{2m} \\
w_{31} & w_{32} & w_{33} & \cdots & w_{3m} \\
\vdots \\
w_{m1} & w_{m2} & w_{m3} & \cdots & w_{mm}
\end{bmatrix}, (12.39a)

Φ =
\begin{bmatrix}
Σ_j δ_j u_{j1} w_{j1} & Σ_j δ_j u_{j1} w_{j2} & \cdots & Σ_j δ_j u_{j1} w_{jm} \\
Σ_j δ_j u_{j2} w_{j1} & Σ_j δ_j u_{j2} w_{j2} & \cdots & Σ_j δ_j u_{j2} w_{jm} \\
Σ_j δ_j u_{j3} w_{j1} & Σ_j δ_j u_{j3} w_{j2} & \cdots & Σ_j δ_j u_{j3} w_{jm} \\
\vdots \\
Σ_j δ_j u_{jn} w_{j1} & Σ_j δ_j u_{jn} w_{j2} & \cdots & Σ_j δ_j u_{jn} w_{jm}
\end{bmatrix}. (12.39b)
The summation in Eq. (12.39b) carries from j =1 to m.
SVD gives the rank of the matrix Φ, which is the number of nonzero
eigenvalues; this in turn gives the dimension of the space spanned by the
measured color signals. If the assumption of Eq. (12.26) holds, then we get
δ_1 ≥ δ_2 > 0 and δ_3 = δ_4 = · · · = δ_m = 0. In practice, this condition is
not strictly obeyed because of measurement and round-off errors. Thus,
Tominaga and Wandell designed a measure r(j) to estimate the rank of the
matrix Φ.
r(j) = (Σ_{i=1}^{j} δ_i^2) / (Σ_{i=1}^{m} δ_i^2) = (Σ_{i=1}^{j} δ_i^2) / m. (12.40)
The summation in the numerator runs from 1 to j and the summation in the
denominator runs from 1 to m, with m > j; the denominator equals m because
each column of Φ is normalized to unity. If the measured color signal is
indeed 2D, we get r(2) = 1.0. For j = 1, if r(1) = 1, the signal is 1D. If
r(2) < 1, then the signal has a higher dimension than two. In this case, one
must consider whether the surface is a homogeneous material or the dichromatic
model is in error.^{23}
They tested this hypothesis by measuring three different surfaces. A red cup
was measured nine times at various angles to obtain a 31 × 9 matrix Φ, where
the spectra were sampled at 10-nm intervals from 400 to 700 nm to get 31
values. SVD is performed on this matrix Φ to give r(2) = 0.9998. A green
ashtray with eight measurements gives r(2) = 0.9999, and fruits with ten
measurements give r(2) = 0.9990. These results indicate that the hypothesis of
the 2D color-signal space is valid.^{23}
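The rank measure of Eq. (12.40) is straightforward to reproduce with a
standard SVD routine. Here the 31 × 9 matrix Φ is simulated as noisy
two-dimensional color signals with unit-norm columns (synthetic data, not the
red-cup measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 31, 9                         # wavelength samples, measurements

# Nearly 2D color signals: each column mixes two fixed spectra plus a
# little noise, then is normalized to unit length as in Eq. (12.37)
b1, b2 = rng.random(n), rng.random(n)
Phi = np.stack([a * b1 + b * b2 for a, b in rng.random((m, 2))], axis=1)
Phi += 1e-4 * rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)

delta = np.linalg.svd(Phi, compute_uv=False)   # singular values, Eq. (12.38)

def r(j):
    # Eq. (12.40); the denominator equals m because the columns have
    # unit norm, so sum(delta**2) == m up to round-off
    return np.sum(delta[:j] ** 2) / np.sum(delta ** 2)
```

For this nearly 2D data, r(2) comes out very close to 1, mirroring the values
reported for the measured surfaces.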
The first two vectors of matrix U are the basis vectors that span the 2D
plane, and each measured SPD is a linear combination of u_1 and u_2.

Φ_j = ω_{j1} u_1 + ω_{j2} u_2, j = 1, 2, 3, . . . , m. (12.41)
Now, let us measure two object surfaces under the same illumination at various
geometric arrangements. The resulting two matrices are decomposed using SVD to
get [u_1, u_2] and [ū_1, ū_2] vectors for surface 1 and surface 2,
respectively. Note that these vectors do not correspond to the
finite-dimensional components defined in Eq. (12.5) or Eq. (12.7). To estimate
the illuminant spectrum, we need to find the intersection line that lies in
both planes, with the following relation:

c_1 u_1 + c_2 u_2 = c_3 ū_1 + c_4 ū_2, (12.42a)
or

[u_1, u_2, ū_1, ū_2][c_1, c_2, -c_3, -c_4]^T = 0. (12.42b)
The explicit expression is

\begin{bmatrix}
u_{11} & u_{21} & ū_{11} & ū_{21} \\
u_{12} & u_{22} & ū_{12} & ū_{22} \\
u_{13} & u_{23} & ū_{13} & ū_{23} \\
\vdots \\
u_{1n} & u_{2n} & ū_{1n} & ū_{2n}
\end{bmatrix}
\begin{bmatrix} c_1 \\ c_2 \\ -c_3 \\ -c_4 \end{bmatrix} = 0, (12.43a)

or

U_4 c = 0. (12.43b)
The matrix U_4 is n × 4 and the vector c is 4 × 1. A nontrivial solution of
Eq. (12.43) defines the intersection of the planes. A reliable method is to
apply SVD to the matrix U_4, and we have

U_4 = [u′_1 · · · u′_4]
\begin{bmatrix}
δ′_1 & 0 & 0 & 0 \\
0 & δ′_2 & 0 & 0 \\
0 & 0 & δ′_3 & 0 \\
0 & 0 & 0 & δ′_4
\end{bmatrix}
\begin{bmatrix}
w′_{11} & w′_{12} & w′_{13} & w′_{14} \\
w′_{21} & w′_{22} & w′_{23} & w′_{24} \\
w′_{31} & w′_{32} & w′_{33} & w′_{34} \\
w′_{41} & w′_{42} & w′_{43} & w′_{44}
\end{bmatrix}, (12.44)
where the definitions of u′_m, δ′_m, and w′_m (m = 1, 2, 3, 4) correspond to
those of u_m, δ_m, and w_m in Eq. (12.39). If δ′_4 = 0 and δ′_3 > 0, we have a
rank of three, where the planes intersect in a line. Thus, the vector w′_4 is
a solution for the intersection.
The intersection line is given as

E_1(λ) = √2 [w′_41 u_1(λ) + w′_42 u_2(λ)], (12.45a)

or

E_2(λ) = √2 [w′_43 ū_1(λ) + w′_44 ū_2(λ)]. (12.45b)
The line vectors are normalized as ‖E_j‖^2 = [E_j^T E_j] = 1. The signs of the
coefficients w′_4j are chosen so that the illuminant vectors are positive, in
order to meet the requirement that the illuminants are physically realizable.
E_1 and E_2 can be regarded as estimates of the illuminant SPD.^{23}
In a practical application, the value of δ′_4 is not exactly zero, but it is
usually small enough to satisfy the second assumption that the specular
reflectance is nearly constant. It is recommended that one take the mean of
the two estimates as the illuminant spectrum, as given in Eq. (12.46).

Ê(λ) = [E_1(λ) + E_2(λ)] / 2. (12.46)
Tominaga and Wandell used this method and the vectors derived from the red cup
and green ashtray to estimate a floor lamp. The estimated spectrum agrees
quite well with the measured spectrum. Similar agreement is obtained for the
illumination of a slide projector. Their results indicate that the dichromatic
reflection model is adequate for describing color signals from some
inhomogeneous materials. With the knowledge of only two surfaces, this
algorithm infers the illuminant spectrum with good precision.
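The intersection procedure of Eqs. (12.41)-(12.45) can be sketched end to end
on synthetic data: two surfaces, each observed at several geometries under one
illuminant, give two signal planes whose intersection recovers the illuminant
direction (the spectra and geometric factors below are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 31
E_true = 1.0 + rng.random(n)             # illuminant SPD to be recovered

def plane_basis(S_body):
    # Signals alpha*S_body*E + beta*E at several geometries span a 2D
    # plane that contains the illuminant vector E (dichromatic model).
    sigs = np.stack([a * S_body * E_true + b * E_true
                     for a, b in rng.random((8, 2)) + 0.1], axis=1)
    U, _, _ = np.linalg.svd(sigs, full_matrices=False)
    return U[:, :2]                      # u1, u2 of Eq. (12.41)

U_a = plane_basis(rng.random(n))         # surface 1
U_b = plane_basis(rng.random(n))         # surface 2

U4 = np.hstack([U_a, -U_b])              # n x 4 matrix of Eq. (12.43)
_, d4, Wt = np.linalg.svd(U4, full_matrices=False)
c = Wt[-1]                               # right vector of the ~0 singular value

E1 = U_a @ c[:2]                         # intersection line, Eq. (12.45a)
E1 *= np.sign(E1.sum())                  # choose the physically positive sign
E1 /= np.linalg.norm(E1)
err = np.linalg.norm(E1 - E_true / np.linalg.norm(E_true))
```

With noise-free signals, the smallest singular value of U4 is numerically zero
and the recovered direction matches the true illuminant up to scale.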
The method outlined in Eqs. (12.41) to (12.43) can readily be extended to
cases with three or more surfaces. For three planes, we have^{28}

c_1 u_1 + c_2 u_2 = c_3 ū_1 + c_4 ū_2, (12.47a)
c_1 u_1 + c_2 u_2 = c_5 ũ_1 + c_6 ũ_2, (12.47b)
c_3 ū_1 + c_4 ū_2 = c_5 ũ_1 + c_6 ũ_2, (12.47c)

or

\begin{bmatrix}
u_1 & u_2 & -ū_1 & -ū_2 & 0 & 0 \\
u_1 & u_2 & 0 & 0 & -ũ_1 & -ũ_2 \\
0 & 0 & ū_1 & ū_2 & -ũ_1 & -ũ_2
\end{bmatrix}
[c_1, c_2, c_3, c_4, c_5, c_6]^T = 0, (12.47d)

where ũ_1 and ũ_2 are the basis vectors of the third surface.
Once the illuminant spectrum is obtained, we substitute it into Eq. (12.32) to
calculate the lighting matrix L. The surface reflectance can then be computed
as

σ = L^+ ρ(x), (12.48)

where L^+ = L^T [L L^T]^{-1} is the Moore-Penrose inverse of the lighting
matrix L, and ρ(x) is the vector of sensor responses.^{28}
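When L has more surface basis vectors than sensors, the Moore-Penrose form
quoted above is the minimum-norm right inverse. A quick check against a
general-purpose pseudo-inverse routine (random stand-in matrices):

```python
import numpy as np

rng = np.random.default_rng(3)
h, j = 3, 5                         # sensors, surface basis vectors
L = rng.random((h, j))              # lighting matrix (random stand-in)

L_plus = L.T @ np.linalg.inv(L @ L.T)   # Moore-Penrose inverse, Eq. (12.48)

rho = rng.random(h)                 # sensor responses at one pixel
sigma = L_plus @ rho                # minimum-norm surface weights
```

Among all weight vectors consistent with the responses, this solution has the
smallest Euclidean norm, which is the usual justification for the
pseudo-inverse in underdetermined recovery.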
12.4.5 Other dichromatic models

D'Zmura and Lennie proposed a finite-dimensional color-constancy model that
takes into account the interface and body reflections in the spatial domain.
Using three basis functions S_j(λ), j = 1, 2, or 3, for the body reflection
together with the interface reflectance S_0(λ), they expressed the surface
reflectance S(λ, x) at a position x as^{21}

S(λ, x) ≅ f_s(x) S_0(λ) + f_b(x) [Σ_j S_j(λ) σ_j], (12.49)

where the function f_s(x) is the intensity variation (or weight) of the
interface reflection in the spatial domain and f_b(x) is the corresponding
variation of the body reflection. Again, using Lee's results, the spatial and
spectral dependences are separated on the right-hand side of Eq. (12.49),
where the intensity-variation functions depend only on the geometric
variables, and the surface reflectances depend only on the wavelength.^{3}
This model is built on the foundation of trichromatic vision with opponent
color theory. The first basis function of the body reflection is assigned as
the luminance (light-dark) channel, the second as the red-green chrominance
channel, and the third as the yellow-blue chrominance channel. They further
assume that the interface reflectance S_0(λ) is a scaled luminance channel,
the same as the first basis function S_1(λ). With these assumptions, they
combine the first two terms in Eq. (12.49) to give

S(λ, x) ≅ f_L(x) S_1(λ) + f_b(x) σ_2 S_2(λ) + f_b(x) σ_3 S_3(λ), (12.50)

and

f_L(x) = f_s(x) + f_b(x) σ_1. (12.51)
Equation (12.50) indicates that color constancy depends on the weights f_L(x),
f_b(x) σ_2, and f_b(x) σ_3 for the luminance, red-green, and yellow-blue basis
functions, respectively. They can be extracted if the spectral properties of
the illuminant can be discounted. The illuminant is approximated by the three
basis functions given in Eq. (12.7). With known photoreceptor responses, we
obtain an equation that is very similar to Eq. (12.22).
ρ = [V^T E(ε) S_j] f = L f
  =
\begin{bmatrix}
Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{1n}(Σ_i e_{in} ε_i)s_{3n} \\
Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{2n}(Σ_i e_{in} ε_i)s_{3n} \\
Σ_n v_{3n}(Σ_i e_{in} ε_i)s_{1n} & Σ_n v_{3n}(Σ_i e_{in} ε_i)s_{2n} & Σ_n v_{3n}(Σ_i e_{in} ε_i)s_{3n}
\end{bmatrix}
\begin{bmatrix} f_L(x) \\ f_b(x) σ_2 \\ f_b(x) σ_3 \end{bmatrix}. (12.52)
The lighting matrix L, having three independent columns, can be inverted to
obtain the weights only if the illuminant can be discounted by finding the
vector ε. They suggested the use of highlights to derive the interface
reflection.^{18,20,21} The computational method is the same as those outlined
in Eqs. (12.23)-(12.25). To discount the residual effects of the illuminant,
the effects of scaling are incorporated with a diagonal matrix D.

ρ = D L f. (12.53)
Klinker and coworkers proposed a way of determining the illumination vector by
finding characteristic cluster shapes in a 3D plot of pixel color values from
an image.^{18,20} Lee uses a similar technique, called the chromaticity
convergence algorithm, to find the illumination spectrum. This approach is
based on the assumption that the pixel values of any one highlight color under
a single illumination source with area specular variation lie on a line
connecting the coordinates of the illuminant and the object color, if one
plots pixel color values in the CIE chromaticity space (or an equivalent 2D
space with coordinates ρ_i/ρ_j and ρ_i/ρ_k, the ratios of the image irradiance
values in different color channels, which factor out the absolute illumination
level). The illumination source is considered as the white point, and the
effect of a specular reflection is to bring more white light into the visual
pathway, effectively reducing the saturation of the highlight color. This
moves the coordinates of the reflected color toward the coordinates of the
illuminant. One can obtain line plots of several different highlight colors
and plot them together in one chromaticity diagram; the result is a set of
lines that intersect at a point or converge to a small area, indicating the
location of the illuminant.^{3,18} Since then, several modifications and
improvements have been reported.^{30-32} Kim and colleagues developed an
algorithm of illuminant estimation using highlight colors. The algorithm
consists of the following:^{32}
(1) The means to find highlight regions in the image by computing a
luminance-like value from input RGB values, then comparing it to a threshold
value
(2) The method of converting RGB to chromaticity coordinates and luminance Y
(3) The sorting of chromaticity coordinates with respect to Y
(4) The method of sampling and subdivision of the highlight color area to get
moving averages of the subdivisions
(5) The line-fitting method and line-parameter computation
(6) The method of determining the illuminant point
The problem with the chromaticity convergence algorithm is that it is
difficult to decide the final intersection point from many intersection
points. To avoid this problem and enhance the chance of finding the correct
illuminant point, they developed a perceived-illumination algorithm for
estimating a scene's illuminant chromaticity. This algorithm is an iterative
method; it first computes a luminous threshold. If a pixel value is larger
than the threshold value, it is excluded from the next round of computations.
The input RGB values are converted to XYZ values, and the average tristimulus
values and a new luminous threshold are computed. If the new threshold is
within the tolerance of the old threshold, the iteration has reached
convergence and the illuminant chromaticity coordinates can be calculated. If
not, the process is repeated, computing average tristimulus values of the
reduced pixel set and a new threshold until convergence is reached. The
combination of these two algorithms has the advantages of stability from the
perceived illumination and accuracy from the highlight algorithm. The three
closest line-intersection points obtained from the highlight chromaticity
convergence algorithm are compared to the chromaticity coordinates obtained
from the perceived illumination; the one closest to the perceived-illumination
point is selected as the illuminant chromaticity.^{32} Once the illumination
is determined, it is used to calculate the surface reflectance of the maximum
achromatic region using the three basis vectors derived from a set of 1269
Munsell chips.^{31} Next, the computed surface reflectance is fed into the
spectral database to find the closest match. The spectral database is composed
of the 1269 Munsell spectra multiplied by six illuminants (A, C, D_50, D_65,
green, and yellow), having a total of 7614 spectra. Finally, color-biased
images are recovered by dividing the SPD of the reflected light by the matched
surface reflectance.^{32} The method recovers images color-biased by
illuminants A, C, D_50, green, or yellow light quite nicely when compared with
the original. The recovered image from a color-biased image using illuminant A
is lighter than the original, but the color is very close to the original.
Images biased toward other illuminants show a similar trend (the original and
recovered images can be found in Ref. 32).
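The iterative luminous-threshold step of the perceived-illumination algorithm
can be sketched as follows. The threshold rule (a fixed multiple k of the mean
of the retained pixels) and the tolerance are assumed values for illustration,
not the ones used in Ref. 32, and the conversion to XYZ chromaticity is
omitted:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic luminance plane: a dark pixel population plus a few highlights
Y = np.concatenate([rng.normal(20, 2, 9900), rng.normal(100, 1, 100)])

k, tol = 2.0, 1e-6          # assumed multiplier and tolerance (illustrative)
thresh = Y.max()
for _ in range(100):
    kept = Y[Y <= thresh]            # pixels above the threshold are excluded
    new_thresh = k * kept.mean()     # recompute the luminous threshold
    if abs(new_thresh - thresh) <= tol:
        break                        # converged
    thresh = new_thresh
```

On this synthetic data, the first pass excludes the highlight pixels and the
threshold settles at a fixed point after a few iterations.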
12.4.6 Volumetric model

Brill proposed a model of color constancy that does not estimate the
illumination, but makes use of the illuminant invariance to estimate surface
reflectance directly.^{1} A simplified version, called the volumetric model of
color constancy, uses a finite-dimensional linear approximation with i basis
vectors for the illumination and three vectors for the surface reflectance,
together with a residual term S_r(λ).^{33-36} Without estimating the
illumination, it allows one to have as many basis vectors for surface
reflectance as the number of sensors.

S(λ) = Σ_j S_j(λ) σ_j + S_r(λ), (12.54a)

or

S = S_j σ + S_r. (12.54b)

The residual S_r(λ) is orthogonal to the 3i products V(λ)E_i(λ).

∫ V(λ) E_i(λ) S_r(λ) dλ = 0. (12.55)
For trichromatic vision, h = 3 and we have

V^T E_1 S_r = V^T E_2 S_r = V^T E_3 S_r = · · · = V^T E_i S_r = [0, 0, 0]^T. (12.56)
Thus, surface reflectances differing by S_r(λ) should match under any
illumination. Because S_r(λ), hence S(λ), is subject to the 3i constraints of
Eq. (12.56), it follows that the more degrees of freedom allowed in the
illumination, the more constrained will be the surface reflectances. In the
limiting case of i → ∞, S_r(λ) must be zero and S(λ) is just a linear
combination of the three known reflectances S_j(λ) as given in Eq. (12.5).
This method requires three reference surfaces (e.g., the red, white, and blue
of an American flag), where any test surface reflectance is assumed to be a
linear combination of these three reflectances. Under a given illumination E,
the tristimulus values of the three reference surfaces, ρ_1 = [X_1 Y_1 Z_1]^T,
ρ_2 = [X_2 Y_2 Z_2]^T, and ρ_3 = [X_3 Y_3 Z_3]^T, are known and are given in
Eq. (12.57).

ρ_1 = V^T E S_1, ρ_2 = V^T E S_2, ρ_3 = V^T E S_3. (12.57)
Employing Eq. (12.5) for approximating the surface reflectance and combining
with Eq. (12.57), we obtain the tristimulus values of the test surface as

ρ = V^T E S = V^T E S_j σ = V^T E (S_1 σ_1 + S_2 σ_2 + S_3 σ_3)
  = Σ_j ρ_j σ_j = Θ σ_3. (12.58)
Equation (12.59) gives the explicit expression.

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} =
\begin{bmatrix}
X_1 & X_2 & X_3 \\
Y_1 & Y_2 & Y_3 \\
Z_1 & Z_2 & Z_3
\end{bmatrix}
σ_3. (12.59)
Matrix Θ given in Eq. (12.59) consists of the vectors of the three reference
tristimulus values, forming a tristimulus volume. Comparing to Eq. (12.22),
one can see that matrix Θ is a form of the lighting matrix L, or a simple
linear transform away from it. Matrix Θ, having three independent vectors, can
be inverted to solve for vector σ if ρ is known. Let us set V_{123} as the
determinant of the matrix Θ.

V_{123} =
\begin{vmatrix}
X_1 & X_2 & X_3 \\
Y_1 & Y_2 & Y_3 \\
Z_1 & Z_2 & Z_3
\end{vmatrix}. (12.60)
By applying Cramer's rule, the solution to Eq. (12.59) is

σ_1 = V_{123}^{-1}
\begin{vmatrix} X & X_2 & X_3 \\ Y & Y_2 & Y_3 \\ Z & Z_2 & Z_3 \end{vmatrix},
σ_2 = V_{123}^{-1}
\begin{vmatrix} X_1 & X & X_3 \\ Y_1 & Y & Y_3 \\ Z_1 & Z & Z_3 \end{vmatrix},
σ_3 = V_{123}^{-1}
\begin{vmatrix} X_1 & X_2 & X \\ Y_1 & Y_2 & Y \\ Z_1 & Z_2 & Z \end{vmatrix}. (12.61)
Knowing vector σ, we have an estimate of the surface reflectance. This
approach does not use illumination information; it only assumes that the
illuminant is invariant; therefore, it can have as many unknowns as there are
sensors. Note that the name "volumetric model" comes from the fact that the
solutions in Eq. (12.61) are ratios of determinants, which in turn are 3D
volumes of tristimulus values. For application to color recognition, ρ and σ
are the inputs to the color recognizer. Comparing with stored coefficients for
reflectance vectors of known materials under a known illuminant provides the
object color recognition.
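The solution of Eq. (12.59) by Cramer's rule, Eq. (12.61), can be verified
against a direct linear solve (the reference tristimulus values below are
random stand-ins, not measured surfaces):

```python
import numpy as np

rng = np.random.default_rng(5)
Theta = rng.random((3, 3)) + np.eye(3)  # reference tristimulus volume
t = rng.random(3)                       # [X, Y, Z] of the test surface

V123 = np.linalg.det(Theta)             # Eq. (12.60)

# Cramer's rule, Eq. (12.61): replace column j of Theta by the test
# tristimulus vector and take the ratio of determinants
sigma = np.empty(3)
for j in range(3):
    Tj = Theta.copy()
    Tj[:, j] = t
    sigma[j] = np.linalg.det(Tj) / V123
```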
12.5 Gamut-Mapping Approach

Forsyth developed a color-constancy approach by interpreting the color of an
object as what it would have looked like under a canonical illuminant, instead
of as the surface reflectance under the illuminant used. Under this
interpretation, the color of a surface is described by its appearance under
the canonical illuminant E_c(λ). For an image illuminated by a different
source E_d(λ) with colored light, color constancy involves the prediction of
its appearance had it been illuminated by the canonical illuminant
E_c(λ).^{37} Forsyth believes that the prediction is possible by constructing
a function Ψ_d to account for the receptor responses generated by the image
under the colored light of E_d(λ). Using the association of sensor responses
with the illuminant [see also Eq. (1.2)], we have

Ψ_d [∫ ψ_c(λ) S(λ) dλ] = ∫ ψ_d(λ) S(λ) dλ, (12.62)

where ψ_c(λ) and ψ_d(λ) are the receptor sensitivities weighted by the
canonical and colored illuminants, respectively, and Ψ_d maps the responses
obtained under E_c(λ) to those obtained under E_d(λ).
For human trichromatic vision, ψ_c has three components ψ_{c,l}(λ), where l =
1, 2, or 3; other devices such as a camera may have a different number of
sensors (l > 3 for a multispectral camera). Each component of ψ_c and ψ_d can
be decomposed as a linear combination of an orthonormal basis set φ_i(λ).
Similarly, the surface spectrum S(λ) can also be expanded in terms of the
basis set.

ψ_{c,l}(λ) = Σ_{j=1}^{L} a_{lj} φ_j(λ), (12.63)

ψ_{d,l}(λ) = Σ_{j=1}^{L} b_{lj} φ_j(λ) + ψ_{d,l}^r(λ)
           = Σ_{i=1}^{L} Σ_{j=1}^{L} r_{lj} a_{ji} φ_i(λ) + ψ_{d,l}^r(λ), (12.64)

S(λ) = Σ_{i=1}^{L} σ_i φ_i(λ) + s^r(λ). (12.65)
The functions ψ_{d,l}^r(λ) and s^r(λ) are residue terms orthogonal to the
φ_i(λ). Substituting Eqs. (12.63), (12.64), and (12.65) into Eq. (12.62) and
utilizing the properties of orthonormal functions, ∫ φ_i(λ) φ_i(λ) dλ = 1 and
∫ φ_i(λ) φ_j(λ) dλ = 0 when i ≠ j, we have

Ψ_{d,n} [Σ_{j=1}^{L} a_{ij} σ_j] = Σ_{i=1}^{L} Σ_{j=1}^{L} r_{ni} a_{ij} σ_j
+ ∫ ψ_{d,n}^r(λ) s^r(λ) dλ. (12.66)
The function Ψ_{d,n} is the nth component of the function Ψ_d. For color
constancy to work, Forsyth assumed that the residual terms are zero. Thus,
Eq. (12.66) becomes

Ψ_{d,n} [Σ_{j=1}^{L} a_{ij} σ_j] = Σ_{i=1}^{L} Σ_{j=1}^{L} r_{ni} a_{ij} σ_j. (12.67)
Equation (12.67) states that the image gamut under illuminant E_d is a linear
map of the gamut under the canonical illuminant. Therefore, it is possible to
use geometrical properties of the gamut to estimate the linear map and to
determine the illuminant. The algorithm is given as follows:^{37}

(1) Construct the canonical gamut by observing as many surfaces as possible
under a single canonical illuminant. The canonical gamut is approximated by
taking the convex hull of the union of the gamuts obtained by these
observations.
(2) Construct a feasible set of linear maps for any patch imaged under another
illuminant. The convex hull of the observed gamut is computed. For each vertex
of this hull, the set of diagonal maps that take the vertex of the observed
gamut to a point inside the canonical gamut is computed. Intersect all the
sets to give the feasible set.
(3) Use some estimator to choose a map within this feasible set that
corresponds most closely to the illuminant.
(4) Apply the chosen map to the receptor responses to obtain color descriptors
that are the estimate of the image appearance under the canonical
illumination.
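Step (2) simplifies considerably if the canonical gamut is approximated by an
axis-aligned box instead of a convex hull; this is a simplification of
Forsyth's construction, adopted here only to keep the sketch short. The
feasible diagonal maps then form a box obtained by intersecting per-pixel
intervals:

```python
import numpy as np

rng = np.random.default_rng(6)

# Canonical gamut approximated by an axis-aligned box in sensor space
# (a simplification; Forsyth works with convex hulls)
canon_lo = np.array([0.1, 0.1, 0.1])
canon_hi = np.array([1.0, 0.9, 0.8])

# Observed responses under the unknown illuminant: canonical responses
# divided per channel by the (unknown) diagonal change d_true
d_true = np.array([0.7, 1.1, 0.9])
canon_pts = rng.random((20, 3)) * (canon_hi - canon_lo) + canon_lo
obs = canon_pts / d_true

# For each observed point p, the diagonal maps d with d*p inside the box
# satisfy canon_lo/p <= d <= canon_hi/p per channel; intersecting these
# intervals over all points gives the feasible box of maps
feas_lo = np.max(canon_lo / obs, axis=0)
feas_hi = np.min(canon_hi / obs, axis=0)

d_est = 0.5 * (feas_lo + feas_hi)   # one simple estimator: the midpoint
```

By construction the true diagonal map lies inside the feasible set, and any
map chosen from the set sends every observed point back into the canonical
box.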
12.6 Lightness/Retinex Model

The lightness/retinex theory developed by Land and colleagues is special and
unique. It is special because it was the first attempt to develop a
computational model to account for the human color-constancy phenomenon. It is
unique in many ways. First, it takes the surrounding objects into account by
using Mondrian patterns, a clear intention to include simultaneous contrast. A
Mondrian is a flat 2D surface that is composed of a 2D array of color patches
with matte surfaces and rectangular shapes. It is so called by Land because
the 2D color array resembles a painting by Piet Mondrian. Second, unlike other
deterministic models, the retinex theory is stochastic in nature. In this
chapter, we give a brief introduction to the theory; more details can be found
elsewhere.^{18,38-43}

The lightness/retinex algorithm starts by taking inputs from an array of
photoreceptor responses for each location (or pixel) in the image. The input
data can be viewed as three separate arrays (or planes) of data, one for each
different photoreceptor. Each of these spatial planes contains the responses
of a single photoreceptor for each pixel in the image. The algorithm
transforms the spatial array of photoreceptor responses ρ_h into a
corresponding spatial array of lightness values l_h. A central principle of
the lightness/retinex algorithm is that the lightness values at any pixel are
calculated independently for each photoreceptor.
The algorithm estimates the spatial array of lightness values for each plane
by computing a series of paths. Each path is computed as follows:

(1) Select a starting pixel.
(2) Randomly select a neighboring pixel.
(3) Calculate the difference of the logarithms of the sensor responses at the
two positions.
(4) Add the difference into an accumulator for position 2.
(5) Increment a counter for position 2 to indicate that a path has crossed
this position.

The path calculation proceeds iteratively with the random selection of a
neighboring pixel, where the accumulator and counter are updated accordingly.
Note that the sensor response of the first element of the path plays a special
role in the accumulation for that path calculation: it is used as a
normalizing term at every point on the path. After the first path has been
computed, the procedure is repeated for a new path that starts at another
randomly chosen position. After all paths have been completed, the lightness
value for each pixel is computed by simply dividing the accumulated values by
the value in the corresponding counter.
The purpose of the lightness/retinex algorithm is to compute lightness values
that are invariant under changing illumination, much as human performance is
roughly invariant under similar changes. At each pixel, the lightness triplet
should depend only on the surface reflectance and not on the SPD of the
ambient light or the surface reflectances of the surrounding patches in the
Mondrian. The retinex algorithm always tries to correct for the illuminant by
means of a diagonal matrix D(m, n), where (m, n) indicates the pixel location.

l(m, n) = D(m, n) ρ(m, n). (12.68)

The diagonal matrix depends on the SPD of the ambient light, the location, and
the other surrounding surfaces in the image. As the path length increases, the
algorithm converges toward using a single global matrix for all positions,
where the diagonal elements approach the inverse of the geometric mean of the
photoreceptor responses.^{42}
12.7 General Linear Transform

In the general linear transform, the estimated tristimulus values ρ̂_B of an
object under illuminant B can be approximated from the tristimulus values ρ_A
under illuminant A via a 3 × 3 matrix M.

ρ̂_B = M ρ_A. (12.69)

The matrix M can be derived for a set of test reflectance spectra using a
mean-square-error minimization in the sensor space. For tristimulus values, we
are in the CIEXYZ space, where

ρ_A = A^T E_A S = Ω_A^T S and ρ_B = A^T E_B S = Ω_B^T S. (12.70)
For a set of m input spectra, S is no longer a vector, but a matrix containing
all input spectra, one in each column, having a size of n × m with n being the
number of spectrum sampling points. Matrix Ω is still the weighted CMF with a
size of n × 3 (see Section 1.1). Therefore, the resulting matrices ρ_A and ρ_B
have a size of 3 × m.

S =
\begin{bmatrix}
s_1(λ_1) & s_2(λ_1) & \cdots & s_m(λ_1) \\
s_1(λ_2) & s_2(λ_2) & \cdots & s_m(λ_2) \\
s_1(λ_3) & s_2(λ_3) & \cdots & s_m(λ_3) \\
\vdots \\
s_1(λ_n) & s_2(λ_n) & \cdots & s_m(λ_n)
\end{bmatrix}, (12.71)

ρ =
\begin{bmatrix}
X_1 & X_2 & X_3 & \cdots & X_m \\
Y_1 & Y_2 & Y_3 & \cdots & Y_m \\
Z_1 & Z_2 & Z_3 & \cdots & Z_m
\end{bmatrix}. (12.72)
The least-squares solution for matrix M is obtained by the pseudo-inverse of
Eq. (12.69).

M = ρ_B ρ_A^T [ρ_A ρ_A^T]^{-1}. (12.73)

Substituting Eq. (12.70) into Eq. (12.73), we obtain

M = Ω_B^T S S^T Ω_A [Ω_A^T S S^T Ω_A]^{-1}. (12.74)

By setting the term SS^T as the spectral correlation matrix Γ, Eq. (12.74)
becomes the Wiener inverse estimation (see Section 11.2).^{44,45} The
correlation matrix has a size of n × n because S is an n × m matrix. In order
to become independent of a specific set of input spectra, Praefcke and König
model the correlation matrix as a Toeplitz structure with γ^{|k|} on the kth
secondary diagonal. A value of 0.99 is given to the correlation coefficient γ,
which leads to only slightly inferior results when compared to a measured
correlation matrix.^{45}
H = Γ Ω_A [Ω_A^T Γ Ω_A]^{-1}. (12.75)

Equation (12.74) becomes

M = Ω_B^T H. (12.76)

Substituting Eq. (12.76) into Eq. (12.69), we obtain the estimated tristimulus
values

ρ̂_B = M ρ_A = Ω_B^T H ρ_A. (12.77)

Finlayson and Drew called this approach "nonmaximum ignorance."^{44}
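Equations (12.73)-(12.77) can be sketched with the Toeplitz correlation model;
the CMFs and illuminant SPDs below are random stand-ins, and γ = 0.99 follows
the value quoted above:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 31
A = rng.random((n, 3))                    # stand-in for the weighted CMFs
E_A = np.diag(1.0 + 0.5 * rng.random(n))  # illuminant A as a diagonal SPD
E_B = np.diag(1.0 + 0.5 * rng.random(n))  # illuminant B

Omega_A = E_A @ A                         # rho_A = Omega_A^T s, Eq. (12.70)
Omega_B = E_B @ A

# Toeplitz spectral-correlation model: Gamma_ij = gamma^|i-j|
gamma = 0.99
K = gamma ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

H = K @ Omega_A @ np.linalg.inv(Omega_A.T @ K @ Omega_A)   # Eq. (12.75)
M = Omega_B.T @ H                                          # Eq. (12.76)

# Apply the 3x3 illuminant-change matrix to a smooth test reflectance
s = 0.5 + 0.3 * np.sin(np.linspace(0, np.pi, n))
rho_A = Omega_A.T @ s
rho_B_est = M @ rho_A                                      # Eq. (12.77)
```

The matrix M satisfies the Wiener normal equation by construction, which is
the sense in which it is optimal for spectra drawn under the assumed
correlation model.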
12.8 Spectral Sharpening

Spectral sharpening is a method developed for sensor transformation, where a
set of sensor sensitivity functions is converted into a new set to improve the
performance of any color-constancy algorithm that is based on the independent
and individual adjustment of sensor-response channels. New sensor
sensitivities are constructed as linear combinations of the original
sensitivities. Spectral sharpening has been applied to chromatic
adaptation.^{46-50} The first chromatic adaptation using spectral sharpening
was the Bradford transform by Lam.^{46} With the addition of incomplete
adaptation, the CIECAM97s color appearance model adopted this spectrally
sharpened chromatic adaptation. Other developments of spectral sharpening can
be found in publications by Finlayson and coworkers,^{47-49} and Calabria and
colleagues.^{50}

Independent and individual adjustment of multiplicative coefficients
corresponds to the application of a diagonal-matrix transform (DMT) to the
sensor-response vector. It is a common feature of many chromatic-adaptation
and color-constancy theories, such as the von Kries adaptation, the
lightness/retinex algorithm, and Forsyth's gamut-mapping approach.

ρ_c = D_d ρ_d. (12.78)

ρ_c denotes the chromatic values observed under a fixed canonical illuminant,
ρ_d the chromatic values observed under a test illuminant, and D_d the
diagonal matrix for mapping the test to the canonical illumination.

The sharpening transform performs a linear transform on both sets of sensor
responses prior to the diagonal transform. It effectively generalizes
diagonal-matrix theories of color constancy while maintaining the inherent
simplicity of many color-constancy algorithms.^{47-49}

M_d ρ_c = D_d M_d ρ_d. (12.79)
260 Computational Color Technology
12.8.1 Sensor-based sharpening

Sensor-based sharpening focuses on the production of new sensors with their
spectral sensitivities concentrated as much as possible within a narrow band
of wavelengths.^{48} This technique determines the linear combination of a
given sensor set that is maximally sensitive to subintervals of the visible
spectrum. It does not consider the characteristics of illuminants and surface
reflectances; it considers only the sharpness of the resulting sensors.

The sensor response V(λ) is the most sensitive in the wavelength interval
[λ_1, λ_2] if the following condition is met:

Q = ‖V_b C‖^2 + μ [‖V C‖^2 − 1]. (12.80)
Matrix V_b represents sampled elements with wavelengths outside the sharpening
band, matrix V encompasses the whole visible range, C is a coefficient vector,
and μ is a Lagrange multiplier. The new sensor V C is the most sensitive in
[λ_1, λ_2] if the percentage of its norm within the interval is the highest
with respect to all other sensors. One can determine vector C by minimizing
Eq. (12.80); the Lagrange multiplier guarantees a nontrivial solution. Partial
differentiation of Eq. (12.80) with respect to μ yields the constraint
equation.

∂Q/∂μ = 0 = ‖V C‖^2 − 1. (12.81)
For each component, Eq. (12.81) becomes

(V_i^T V_i) c_i^2 = 1, (12.82a)

or

c_i = [(V_i^T V_i)^{-1}]^{1/2}. (12.82b)
Under the constraint of Eq. (12.82), we can solve Eq. (12.80) by partially differentiating Q with respect to C, and find the minimum by setting the resulting differentiation to zero:

∂Q/∂C = 0 = 2 V_b V_b^T C + 2μ V V^T C    (12.83a)

or

(V V^T)^-1 (V_b V_b^T) C = -μ C.    (12.83b)
Equation (12.83b) is an eigenfunction. For a trichromatic response, V and V_b are 3 × n; thus, (VV^T)^-1 and (V_b V_b^T) are both 3 × 3 matrices, which in turn give a 3 × 3 matrix for the product (VV^T)^-1 (V_b V_b^T). C is a vector of three elements
Computational Color Constancy 261
such that there are three solutions for Eq. (12.83b), each solution corresponding to an eigenvalue that minimizes (V_b^T C)^2. Equation (12.83b) indicates that C is a real-valued vector because the matrices (VV^T)^-1 and (V_b V_b^T) are positive definite, and the eigenvalues of the product of two positive-definite matrices are real and nonnegative. This implies that the sharpened sensor is a real-valued function. Solving for C in each of three wavelength intervals yields the matrix M_d for use in Eq. (12.79). Matrix M_d, derived from sensor-based sharpening, is not dependent on the illuminant because it deals only with the sensor spectra; no illuminant is involved.
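As a concrete illustration, the constrained minimization of Eqs. (12.80)-(12.83) reduces to a small eigenvalue problem that can be solved numerically. The sketch below is an assumption-laden illustration, not code from the text: it stores the sensors as rows of a 3 × n matrix and uses NumPy; the function name and the toy Gaussian sensors in the usage example are invented for demonstration.

```python
import numpy as np

def sharpen_sensor(V, band):
    """Sensor-based sharpening sketch: find the coefficient vector C that
    minimizes the sharpened sensor's energy outside the band, subject to
    unit energy over the whole visible range [Eqs. (12.80)-(12.83)].

    V    : 3 x n matrix; each row is one sensor sampled at n wavelengths.
    band : length-n boolean mask, True inside the interval [lambda1, lambda2].
    """
    Vb = V[:, ~band]                              # samples outside the band
    A = np.linalg.inv(V @ V.T) @ (Vb @ Vb.T)      # matrix of Eq. (12.83b)
    evals, evecs = np.linalg.eig(A)
    # The eigenvector of the smallest eigenvalue minimizes (V_b^T C)^2.
    C = np.real(evecs[:, np.argmin(np.real(evals))])
    C /= np.sqrt(C @ (V @ V.T) @ C)               # enforce (V^T C)^2 = 1
    return C
```

Solving for C in each of three wavelength subintervals and stacking the results as rows would then yield the 3 × 3 sharpening matrix M_d of Eq. (12.79).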
Finlayson and colleagues have sharpened two sets of sensor-response functions, the cone absorptance functions measured by Bowmaker and Dartnall^51 and the cone fundamentals derived by Vos and Walraven.^52 The sharpening matrix M_d for the Vos-Walraven fundamentals is given as follows:

        | 2.46  1.97  0.075 |
M_d =   | 0.66  1.58  0.12  |
        | 0.09  0.14  1.00  | .

Results showed that the two sets of sensors are indeed sharpened significantly in the long-wavelength cone, marginally in the medium-wavelength cone, and slightly in the short-wavelength cone. The peak of the sharpened long-wavelength function is shifted toward a longer wavelength, the medium-wavelength peak is shifted toward a shorter wavelength, and the short-wavelength function remains essentially the same (see Figs. 12.1-12.3). Note that the sharpened curves contain negative values, a consequence of negative coefficients in the computation, not of negative physical sensitivities.^48
12.8.2 Data-based sharpening
Data-based sharpening extracts new sensors by optimizing the ability of a DMT to account for a given illumination change. It is achieved by examining the sensor-response vectors obtained from a set of surfaces under two different illuminants.^48 If the DMT-based algorithms suffice for color constancy, then a set of surfaces observed under a canonical illuminant E_c should be approximately equivalent to those surfaces observed under another illuminant E_d via a DMT transform.

Υ_c = D_d Υ_d ,    (12.84)

where Υ_c is a 3 × m matrix containing the trichromatic values of m surfaces observed under a fixed canonical illuminant, Υ_d is another 3 × m matrix containing the trichromatic values of the same m surfaces observed under a test illuminant, and D_d is the diagonal matrix for mapping the test to the canonical illumination.
Again, Finlayson and colleagues introduced a 3 × 3 transfer matrix M_d to improve the computational color constancy.

M_d Υ_c = D_d M_d Υ_d .    (12.85)
Figure 12.1 The sharpening of the long-wavelength sensor.
Figure 12.2 The sharpening of the middle-wavelength sensor.
Figure 12.3 The sharpening of the short-wavelength sensor.

D_d can be optimized in the least-squares sense via the Moore-Penrose inverse.

D_d = M_d Υ_c [M_d Υ_d]^+ .    (12.86)
The superscript + denotes the Moore-Penrose inverse, M^+ = M^T [M M^T]^-1; therefore, we have

[M_d Υ_d]^+ = (M_d Υ_d)^T [(M_d Υ_d)(M_d Υ_d)^T]^-1 .    (12.87)
The matrix [M_d Υ_d] is 3 × m because M_d is 3 × 3 and Υ_d is 3 × m; this gives the inverted matrix [(M_d Υ_d)(M_d Υ_d)^T]^-1 a size of 3 × 3 and the matrix [M_d Υ_d]^+ a size of m × 3. The method of choosing M_d to ensure that D_d is diagonal is given in Eq. (12.88).
M_d^-1 D_d M_d = M_d^-1 M_d Υ_c [M_d Υ_d]^+ M_d
              = Υ_c [M_d Υ_d]^T [(M_d Υ_d)(M_d Υ_d)^T]^-1 M_d
              = Υ_c Υ_d^T M_d^T [M_d Υ_d Υ_d^T M_d^T]^-1 M_d
              = Υ_c Υ_d^T M_d^T (M_d^T)^-1 (Υ_d Υ_d^T)^-1 M_d^-1 M_d
              = Υ_c Υ_d^T (Υ_d Υ_d^T)^-1
              = Υ_c Υ_d^+ .    (12.88)
The diagonal and transfer matrices, D_d and M_d, can be found by diagonalizing the matrix [Υ_c Υ_d^+], a real square matrix with a size of 3 × 3. Let Θ = Υ_c Υ_d^+ be a real matrix; then there exists an orthogonal matrix U_d such that D_d = U_d^-1 Θ U_d is diagonal. It follows that U_d D_d U_d^-1 = Θ. Comparing to Eq. (12.88), we obtain a unique M_d by equating M_d to U_d^-1. The sharpening matrix M_d for the Vos-Walraven fundamentals is given as follows:^48

        | 2.46  1.98  0.10 |
M_d =   | 0.58  1.52  0.14 |
        | 0.07  0.13  1.00 | .

It is obvious that the sharpening matrix derived from input data depends on the selection of the input data and the size of the data. The input data, in turn, are affected by the illuminant. The test of five different illuminants by Finlayson and colleagues showed that the sharpened Vos-Walraven fundamentals are remarkably similar, indicating that data-based sharpening is relatively independent of the illuminant. Therefore, the method can be represented by the mean of these sharpened sensors. Moreover, the mean sensors are very similar to those derived from sensor-based sharpening. This is not surprising, considering the closeness of the derived sharpening matrices.
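The diagonalization just described condenses to a few lines of linear algebra. The following sketch is an illustration under assumed names (it is not the authors' code) and uses NumPy:

```python
import numpy as np

def data_based_sharpening(Pc, Pd):
    """Data-based sharpening sketch [Eq. (12.88)].

    Pc, Pd : 3 x m sensor responses of the same m surfaces under the
             canonical and test illuminants, respectively.

    Returns (Md, Dd) such that Md Pc ~ Dd Md Pd in the least-squares sense.
    """
    Theta = Pc @ np.linalg.pinv(Pd)     # Theta = Pc Pd^+, a real 3 x 3 matrix
    evals, U = np.linalg.eig(Theta)     # Theta = U diag(evals) U^{-1}
    Md = np.linalg.inv(np.real(U))      # equate Md with U^{-1}
    Dd = np.diag(np.real(evals))
    return Md, Dd
```

Because the rows of M_d are left eigenvectors of Θ, the relation M_d Υ_c = D_d M_d Υ_d holds exactly whenever the two response sets are related by Θ.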
12.8.3 Perfect sharpening
Perfect sharpening provides a unique and optimal sharpening transform using a 2-3 world of the finite-dimensional model for illumination and surface reflectance, respectively.^47,48 For the 2-3 world, there are two illumination vectors, E_1 and E_2, and a surface matrix S of three components; hence, Eq. (12.10) becomes

Υ = ε_1 [A^T E_1 S] σ + ε_2 [A^T E_2 S] σ

or

Υ = ε_1 L_1 σ + ε_2 L_2 σ .    (12.89)

Finlayson and colleagues define the first illuminant basis function as the canonical illuminant and the second basis function as the test illuminant. It follows that the color response under the second basis function, L_2 σ, is a linear transform M of that under the first basis function, L_1 σ:

L_2 = M L_1 ,    (12.90)

or

L_2 σ = M L_1 σ ,    (12.91)
or

M = L_2 L_1^-1 .    (12.92)
Equation (12.89) becomes

Υ = (ε_1 I + ε_2 M) L_1 σ ,    (12.93)

where I is the identity matrix. There exists a generalized diagonal transform, mapping surface color responses between illuminants, that follows the eigenvector decomposition of matrix M:

M = M_d^-1 D M_d .    (12.94)

Substituting Eq. (12.94) into Eq. (12.93), we have

Υ = [ε_1 M_d^-1 I M_d + ε_2 M_d^-1 D M_d] L_1 σ .    (12.95)

Multiplying both sides of Eq. (12.95) by M_d, we have

M_d Υ = (ε_1 I M_d + ε_2 D M_d) L_1 σ = (ε_1 I + ε_2 D) M_d L_1 σ .    (12.96)

Finally, we invert the matrix (ε_1 I + ε_2 D) to give Eq. (12.97).

(ε_1 I + ε_2 D)^-1 M_d Υ = M_d L_1 σ .    (12.97)

Equation (12.97) shows that in the 2-3 world, a diagonal transform supports perfect color constancy after an appropriate sharpening M_d. Using the Munsell spectra under six test illuminants and employing principal component analysis, they constructed lighting matrices and derived the transfer matrix for the Vos-Walraven fundamentals:

        | 2.44  1.93  0.11 |
M_d =   | 0.63  1.55  0.16 |
        | 0.08  0.13  1.00 | .
All three methods give very similar M_d matrices; therefore, the shapes of the sharpened sensors are very close, which in turn gives similar performance. For each illuminant, the sharpened sensors give better performance than the unsharpened ones, as evidenced by the measure of the cumulative normalized fitting distance.^48 Generally, the performance difference increases as the color temperatures of the illuminants become farther apart.
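The 2-3 world construction can be sketched in code: the transfer matrix M_d falls directly out of the eigenvector decomposition of M = L_2 L_1^{-1}. Function and variable names below are assumptions for illustration, not from the text.

```python
import numpy as np

def perfect_sharpening(L1, L2):
    """Perfect sharpening sketch [Eqs. (12.92)-(12.94)]: decompose
    M = L2 L1^{-1} = Md^{-1} D Md and return (Md, D)."""
    M = L2 @ np.linalg.inv(L1)
    evals, U = np.linalg.eig(M)
    Md = np.linalg.inv(np.real(U))   # rows of Md diagonalize M from the left
    D = np.diag(np.real(evals))
    return Md, D
```

For any illuminant weights (ε_1, ε_2), the quantity (ε_1 I + ε_2 D)^{-1} M_d Υ then reduces to the illuminant-independent descriptor M_d L_1 σ of Eq. (12.97).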
12.8.4 Diagonal transform of the 3-2 world
Finlayson and colleagues extended the diagonal approach of the 2-3 world to Maloney-Wandell's 3-2 world. They showed that if the illumination is three-vector and the surface reflectance is two-vector, then there exists a sensor transform M_d for which a diagonal matrix supports perfect color constancy.^49
Employing Eq. (12.14), they showed that

Υ = [V^T (S σ_j) E_i] = L̄ ε = σ_1 L̄_1 ε + σ_2 L̄_2 ε ,    (12.98)

M_d Υ = (σ_1 I + σ_2 D) M_d L̄_1 ε .    (12.99)

Equation (12.99) states that diagonal invariance holds between the canonical surface and other surfaces, given the sharpening transform M_d. They proceeded to show that diagonal invariance holds between any two surfaces.^49
12.9 Von Kries Color Prediction
As given in Section 4.1, the von Kries hypothesis states that trichromatic vision is independently adapted. Based on the von Kries hypothesis, Praefcke and König developed a white adaptation via different color spaces.^45 For an object under two different illuminants, E_A and E_B, one obtains two sets of tristimulus values, Υ_A and Υ_B. If the von Kries hypothesis holds, the tristimulus values normalized by the respective tristimulus values of the white points should give the same appearance:

Υ̂_A = [X_A/X_W,A , Y_A/Y_W,A , Z_A/Z_W,A]^T ,    (12.100)

Υ̂_B = [X_B/X_W,B , Y_B/Y_W,B , Z_B/Z_W,B]^T ,    (12.101)

and

Υ̂_A = Υ̂_B .    (12.102)
. (12.102)
Within the same color space, the estimated tristimulus values under illuminant E
B
are achieved by a diagonal matrix given in Eq. (12.103).

B
=
W,B

1
W,A

A
(12.103)
or

B
=
_
X
W,B
0 0
0 Y
W,B
0
0 0 Z
W,B
__
X
W,A
0 0
0 Y
W,A
0
0 0 Z
W,A
_
1
,
Computational Color Constancy 267

A
=
_
X
W,B
/X
W,A
0 0
0 Y
W,B
/Y
W,A
0
0 0 Z
W,B
/Z
W,A
_

A
. (12.104)
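Equation (12.104) amounts to three independent channel scalings. A minimal sketch follows; the function name is an assumption, and the D50/D65 white-point values in the example are the conventional CIE 2-deg values supplied here for illustration, not data from the text.

```python
import numpy as np

def von_kries_xyz(xyz, white_a, white_b):
    """Eq. (12.104) sketch: estimate tristimulus values under illuminant B
    from those measured under illuminant A by scaling each channel with
    the ratio of the white-point tristimulus values."""
    xyz, white_a, white_b = map(np.asarray, (xyz, white_a, white_b))
    return (white_b / white_a) * xyz

# Conventional white points (assumed example data, CIE 2-deg observer).
d65 = [95.047, 100.0, 108.883]
d50 = [96.422, 100.0, 82.521]
```

By construction, the white point of illuminant A maps exactly onto the white point of illuminant B.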
To transform into another color space Ξ, there exists a 3 × 3 matrix M_p for the source-tristimulus and white-point conversions:

Ξ_A = M_p Υ_A ,  Δ_W,A = diag(M_p Υ_W,A) ,  and  Δ_W,B = diag(M_p Υ_W,B) .    (12.105)

Like Λ_W,A and Λ_W,B, the matrices Δ_W,A and Δ_W,B are diagonal. The estimated color values under illuminant E_B in Ξ space are

Ξ̃_B = Δ_W,B Δ_W,A^-1 Ξ_A .    (12.106)
Reentering the initial color space, we have the estimated color values

Υ̃_B = M_p^-1 Ξ̃_B = M_p^-1 Δ_W,B Δ_W,A^-1 M_p Υ_A .    (12.107)
Equation (12.107) gives the estimated color values for the von Kries adaptation in different color spaces. The key to using this estimation lies in finding the 3 × 3 transfer matrix M_p. There is no analytical solution for M_p; therefore, it is obtained by optimization. To reduce the complexity of the optimization problem, Praefcke and König fixed the diagonal elements of M_p to unity, thus reducing the number of variables to six:

        | 1    p_1  p_2 |
M_p =   | p_3  1    p_4 |
        | p_5  p_6  1   | .    (12.108)
They started with p_j = 0, j = 1, 2, . . . , 6 for M_p. After selecting the initial M_p, they computed Δ_W,A and Δ_W,B via Eq. (12.105), because Υ_W,A and Υ_W,B are known, and then substituted Δ_W,A, Δ_W,B, and M_p into Eq. (12.107) to estimate Υ̃_B from the known Υ_A. The difference between the estimated Υ̃_B and the measured Υ_B is calculated for each reflectance spectrum. Then, the average color difference over all test spectra is calculated to serve as the error measure, which is minimized iteratively by the standard gradient method.
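The estimation step of Eq. (12.107) is easy to prototype; a gradient routine can then drive the six off-diagonal entries of M_p. The sketch below is illustrative (assumed names, not the authors' code):

```python
import numpy as np

def von_kries_via_space(xyz_a, white_a, white_b, Mp):
    """Eq. (12.107) sketch: von Kries adaptation carried out in the
    color space defined by the 3 x 3 transfer matrix Mp [Eq. (12.105)]."""
    xyz_a, white_a, white_b = map(np.asarray, (xyz_a, white_a, white_b))
    La = np.diag(Mp @ white_a)   # diagonal white matrix in the new space
    Lb = np.diag(Mp @ white_b)
    return np.linalg.inv(Mp) @ Lb @ np.linalg.inv(La) @ Mp @ xyz_a
```

With M_p equal to the identity, this reduces to the plain XYZ scaling of Eq. (12.104); for any invertible M_p, the white point of A still maps exactly onto the white point of B.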
Praefcke and König compared several approaches, including the general least-squares method, perfectly sharpened sensors (Finlayson et al., see Section 12.8), individual optimal sensors, and mean optimal sensors.^45 For the von Kries adaptation in different color spaces between daylight illuminants, the mean color differences range from 0.271 to 0.292 ΔE*_94 units, with maximum color differences ranging from 4.77 to 5.24 ΔE*_94 units. For estimations between illuminants D_65, F_11, A, and Xe, the mean color error ranges from 0.888 to 1.290, with maximum color differences ranging from 9.18 to 11.70 ΔE*_94 units. These results indicate that the sharpened and optimized sensors and the least-squares method have comparable accuracy in predicting color values under different illumination via different color spaces.
12.10 Remarks
The finite-dimensional linear model is an extremely powerful tool in that it can recover (or estimate) the illuminant SPD and object reflectance. Surface reflectance and illuminant SPD are each represented by a finite coefficient vector, where the imaging process is an algebraic interaction between the surface coefficients and illuminant coefficients. The richness and complexity of these interactions indicate that the simple von Kries type correction of diagonal matrix coefficients (or coefficient rule) is inadequate. However, Forsyth's study demonstrated that the simple coefficient rule has merit,^37 and it is supported by subsequent studies of Finlayson and colleagues.^47-49
Many color-constancy theories have some uncanny similarities. Most, if not all, models are based on the von Kries hypothesis that sensor sensitivities are independent and can be adjusted individually. They all use linear combinations and transforms. Using spectral sharpening in a 2-3 world, with the first illumination coefficient of 1 such that E(λ) = E_1(λ) + ε_2 E_2(λ), Brill has shown that the von Kries adapted tristimulus values are illuminant invariant. This result reaffirms the unusual stature of the von Kries transform in securing illuminant invariance. He further showed that the Judd adaptation can be illuminant invariant only when the illuminant basis functions are constrained to be metameric.^53 This result fits Cohen's R-matrix decomposition perfectly.^54 The volume matrix of Brill's volumetric theory is certainly related to the lighting matrix of the Maloney-Wandell algorithm. Data-based sharpening can be viewed as a generalized volumetric theory.^48 In the case where the sample number is three, data-based sharpening reduces to the volumetric theory. As a color-constancy algorithm, data-based sharpening has the advantage that it is optimal with respect to the least-squares criterion, at the expense of the requirement that all surface reflectances must appear in the image, not to mention the added computational cost.
References
1. M. H. Brill and G. West, Chromatic adaptation and color constancy: A possible dichotomy, Color Res. Appl. 11, pp. 196-227 (1986).
2. P. Emmel, Physical models for color prediction, Digital Color Imaging Handbook, G. Sharma (Ed.), CRC Press, Boca Raton, pp. 173-237 (2000).
3. H. C. Lee, E. J. Breneman, and C. P. Schulte, Modeling light reflection for computer color vision, Technical Report, Eastman Kodak Company (1987).
4. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, pp. 273-274 (1982).
5. P. Sällström, Colour and physics, University of Stockholm Institute of Physics Report 73-09, Stockholm (1973).
6. G. Buchsbaum, A spatial processor model for object colour perception, J. Franklin Inst. 310, pp. 1-26 (1980).
7. L. T. Maloney, Evaluation of linear models of surface spectral reflectance with small number of parameters, J. Opt. Soc. Am. A 3, pp. 1673-1683 (1986).
8. L. T. Maloney, Computational approaches to color constancy, Stanford Applied Psychology Lab. Tech. Report 1985-01, Stanford University, Stanford, CA (1985).
9. L. T. Maloney and B. A. Wandell, Color constancy: a method for recovering surface spectral reflectance, J. Opt. Soc. Am. A 3, pp. 29-33 (1986).
10. B. A. Wandell, Color constancy and the natural image, Physica Scripta 39, pp. 187-192 (1989).
11. D. H. Marimont and B. A. Wandell, Linear models of surface and illuminant spectra, J. Opt. Soc. Am. A 9, pp. 1905-1913 (1992).
12. R. B. MacLeod, An experimental investigation of brightness constancy, Arch. Psychol. 135, pp. 5-102 (1932).
13. H. C. Lee, Method for computing the scene-illuminant chromaticity from specular highlights, J. Opt. Soc. Am. A 3, pp. 1694-1699 (1986).
14. W. S. Stiles and G. Wyszecki, Intersections of the spectral reflectance curves of metameric object colors, J. Opt. Soc. Am. 58, pp. 32-41 (1968).
15. L. T. Maloney and B. A. Wandell, Color constancy: a method for recovering surface spectral reflectance, J. Opt. Soc. Am. A 3, pp. 29-33 (1986).
16. A. C. Hurlbert, The computation of color, MIT Technical Report 1154 (1990).
17. B. A. Wandell, Foundations of Vision, Sinauer Assoc., pp. 301-306 (1995).
18. A. C. Hurlbert, The computation of color, MIT Industrial Liaison Program Report (1990).
19. P. Brou, T. R. Sciascia, L. Linden, and J. Y. Lettvin, The colors of things, Sci. Am. 255, pp. 84-91 (1986).
20. S. A. Shafer, G. J. Klinker, and T. Kanade, Using color to separate reflection components, Color Res. Appl. 10, pp. 210-218 (1985).
21. M. D'Zmura and P. Lennie, Mechanisms of color constancy, J. Opt. Soc. Am. A 3, pp. 1663-1672 (1986).
22. G. J. Klinker, S. A. Shafer, and T. Kanade, The measurement of highlights in color images, Int. J. Comput. Vision 2, pp. 7-32 (1988).
23. S. Tominaga and B. A. Wandell, Standard surface reflectance model and illuminant estimation, J. Opt. Soc. Am. A 6, pp. 576-584 (1989).
24. S. Tominaga and B. A. Wandell, Component estimation of surface spectral reflectance, J. Opt. Soc. Am. A 7, pp. 312-317 (1990).
25. M. D'Zmura, Color constancy: surface color from changing illumination, J. Opt. Soc. Am. A 9, pp. 490-493 (1992).
26. M. D'Zmura and G. Iverson, Color constancy II: Results for two-stage linear recovery of spectral descriptions for lights and surfaces, J. Opt. Soc. Am. A 10, pp. 2166-2176 (1993).
27. M. D'Zmura and G. Iverson, Color constancy III: General linear recovery of spectral descriptions for lights and surfaces, J. Opt. Soc. Am. A 11, pp. 2389-2400 (1994).
28. S. Tominaga, Realization of color constancy using the dichromatic reflection model, IS&T & SID's 2nd Color Imaging Conf., pp. 37-40 (1994).
29. A. P. Petrov, C. Y. Kim, Y. S. Seo, and I. S. Kweon, Perceived illuminant measured, Color Res. Appl. 23, pp. 159-168 (1998).
30. F. H. Cheng, W. H. Hsu, and T. W. Chen, Recovering colors in an image with chromatic illuminant, IEEE Trans. Image Proc. 7, pp. 1524-1533 (1998).
31. C. H. Lee, J. H. Lee, H. Y. Lee, E. Y. Chung, and Y. H. Ha, Estimation of spectral distribution of scene illumination from a single image, J. Imaging Sci. Techn. 44, pp. 308-314 (2000).
32. Y.-T. Kim, Y.-H. Ha, C.-H. Lee, and J.-Y. Kim, Estimation of chromatic characteristics of scene illumination in an image by surface recovery from the highlight region, J. Imaging Sci. Techn. 48, pp. 28-36 (2004).
33. M. H. Brill, A device performing illuminant-invariant assessment of chromatic relations, J. Theor. Biol. 71, pp. 473-478 (1978).
34. M. H. Brill, Further features of the illuminant-invariant trichromatic photosensor, J. Theor. Biol. 78, pp. 305-308 (1979).
35. M. H. Brill, Computer simulation of object-color recognizers, MIT Research Laboratory of Electronics Progress Report No. 122, Cambridge, pp. 214-221 (1980).
36. M. H. Brill, Computer simulation of object color recognizers, MIT Progress Report No. 122, pp. 214-221 (1980).
37. D. A. Forsyth, A novel algorithm for color constancy, Int. J. Comput. Vision 5, pp. 5-35 (1990).
38. E. H. Land and J. J. McCann, Lightness and retinex theory, J. Opt. Soc. Am. 61, pp. 1-11 (1971).
39. J. J. McCann, S. P. McKee, and T. H. Taylor, Quantitative studies in retinex theory: a comparison between theoretical predictions and observer responses to the color Mondrian experiments, Vision Res. 16, pp. 445-458 (1976).
40. E. H. Land, Recent advances in retinex theory and some implications for cortical computation: color vision and the natural image, Proc. Natl. Acad. Sci. USA 80, pp. 5163-5169 (1983).
41. J. J. McCann and K. Houston, Calculating color sensations from arrays of physical stimuli, IEEE Trans. Syst. Man. Cybern. SMC-13, pp. 1000-1007 (1983).
42. E. H. Land, Recent advances in retinex theory, Vision Res. 26, pp. 7-22 (1986).
43. D. H. Brainard and B. A. Wandell, Analysis of the retinex theory of color vision, J. Opt. Soc. Am. A 3, pp. 1651-1661 (1986).
44. G. D. Finlayson and M. S. Drew, Constrained least-squares regression in colour spaces, J. Electron. Imaging 6, pp. 484-493 (1997).
45. W. Praefcke and F. König, Colour prediction using the von Kries transform, in Colour Imaging: Vision and Technology, L. W. MacDonald and M. R. Luo (Eds.), pp. 39-54 (1999).
46. K. M. Lam, Metamerism and Color Constancy, Ph.D. Thesis, University of Bradford (1985).
47. G. D. Finlayson, M. S. Drew, and B. V. Funt, Color constancy: Enhancing von Kries adaptation via sensor transformations, Proc. SPIE 1913, pp. 473-484 (1993).
48. G. D. Finlayson, M. S. Drew, and B. V. Funt, Spectral sharpening: sensor transformations for improved color constancy, J. Opt. Soc. Am. A 11, pp. 1553-1563 (1994).
49. G. D. Finlayson, M. S. Drew, and B. V. Funt, Color constancy: Generalized diagonal transforms suffice, J. Opt. Soc. Am. A 11, pp. 3011-3019 (1994).
50. A. J. Calabria and M. D. Fairchild, Herding CATs: A comparison of linear chromatic-adaptation transforms for CIECAM97s, Proc. IS&T/SID 9th Color Imaging Conf., IS&T, Scottsdale, AZ, pp. 174-178 (2001).
51. J. K. Bowmaker and H. J. A. Dartnall, Visual pigments of rods and cones in the human retina, J. Physiol. 298, pp. 501-511 (1980).
52. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, p. 806 (1982).
53. M. H. Brill, Can color-space transformation improve color computation other than von Kries, Proc. SPIE 1913, pp. 485-492 (1993).
54. J. B. Cohen and W. E. Kappauf, Metamer color stimuli, fundamental metamers, and Wyszecki's metameric black, Am. J. Psychology 95, pp. 537-564 (1982).
Chapter 13
White-Point Conversion
Device color-space characteristics and transformations are dependent on the illuminants used. The mismatch of white points is a frequently encountered problem. It happens in situations where the measuring and viewing of an object are under different illuminants, the original and reproduction use different illuminants, and different substrates are under the same illuminant. To correct these problems, a transform of white points is needed. White-point conversion techniques are developed for converting between different illuminants strictly on the physical quantities, without any appearance transform. In this context, white-point conversion is different from chromatic adaptation or color constancy, in which the illumination difference is treated with an appearance transform (see Chapter 3). For white-point conversion, we are asking the question: What are the tristimulus values (or other colorimetric specifications) of an object under illuminant A if the only information available is under illuminant B? Thus, it is strictly a mathematical transform.
In this chapter, we present four methods of white-point conversion. The first method is based on the transform between CIEXYZ and colorimetric RGB. By using a colorimetric RGB space as the intermediate connection between two white points, the white-point conversion is scaled by the ratios of the proportional constants of the red, green, and blue primaries.^1 The second method performs the white-point conversion via tristimulus ratios.^2 The third method uses the spectral difference of the white points; source tristimulus values are corrected by the white-point difference to obtain destination tristimulus values. The fourth method uses polynomial regression. Theory and derivation are given in detail for each method. Conversion accuracies of these methods are compared using a set of 135 data points. Advantages and disadvantages of these methods are discussed.
13.1 White-Point Conversion via RGB Space
Xerox Color Encoding Standards (XCES) provide a method for white-point conversion utilizing an intermediate colorimetric RGB space, because tristimulus values are a linear transform of the colorimetric RGB, as shown in Section 6.2.^1 First, the source tristimulus values X_s, Y_s, and Z_s are transformed to RGB via a simple conversion matrix Ω.
| R_s |   | ω_11  ω_12  ω_13 | | X_s |
| G_s | = | ω_21  ω_22  ω_23 | | Y_s | ,    (13.1a)
| B_s |   | ω_31  ω_32  ω_33 | | Z_s |

or

Υ_p,s = Ω Υ_s .    (13.1b)
The resulting RGB values are converted to the destination tristimulus values X_d, Y_d, and Z_d via another matrix Ψ:

| X_d |   | ψ_11  ψ_12  ψ_13 | | R_d |
| Y_d | = | ψ_21  ψ_22  ψ_23 | | G_d | ,    (13.2a)
| Z_d |   | ψ_31  ψ_32  ψ_33 | | B_d |

or

Υ_d = Ψ Υ_p,d .    (13.2b)
By assuming that the source RGB values Υ_p,s and the destination RGB values Υ_p,d represented by the same intermediate RGB space are equal,

[ R_s  G_s  B_s ]^T = [ R_d  G_d  B_d ]^T  or  Υ_p,s = Υ_p,d .    (13.3)

Equation (13.3) provides the connection between the source and destination white points; therefore, we can substitute Eq. (13.1) into Eq. (13.2) to give
| X_d |   | ψ_11  ψ_12  ψ_13 | | ω_11  ω_12  ω_13 | | X_s |
| Y_d | = | ψ_21  ψ_22  ψ_23 | | ω_21  ω_22  ω_23 | | Y_s | ,    (13.4a)
| Z_d |   | ψ_31  ψ_32  ψ_33 | | ω_31  ω_32  ω_33 | | Z_s |

or

Υ_d = Ψ Ω Υ_s .    (13.4b)
Both matrices Ω and Ψ are usually known or can be determined experimentally or mathematically, and we can combine them into one matrix by matrix multiplication. Still, the computational cost is quite high (9 multiplications and 6 additions). Interestingly, this seemingly complex equation masks an extremely simple relationship between the source and destination white points. Mathematically, Eq. (13.4) can be reduced to a very simple form by relating the matrix coefficients to the tristimulus values of the intermediate RGB primaries. Colorimetric RGB primaries
White-Point Conversion 275
are related to tristimulus values via Eq. (13.5).^1

X = x_r T_r R + x_g T_g G + x_b T_b B ,
Y = y_r T_r R + y_g T_g G + y_b T_b B ,    (13.5a)
Z = z_r T_r R + z_g T_g G + z_b T_b B ,

or

| X |   | x_r  x_g  x_b | | T_r  0    0   | | R |
| Y | = | y_r  y_g  y_b | | 0    T_g  0   | | G | ,    (13.5b)
| Z |   | z_r  z_g  z_b | | 0    0    T_b | | B |

or

Υ = Φ T Υ_p ,    (13.5c)
where (x_r, y_r, z_r), (x_g, y_g, z_g), and (x_b, y_b, z_b) are the chromaticity coordinates of the red, green, and blue primaries, respectively. Parameters T_r, T_g, and T_b are the proportional constants of the red, green, and blue primaries, respectively, under an adapted white point.^1 Equation (13.5) can be solved for T_r, T_g, and T_b by using a known condition of a gray-balanced and normalized RGB system; that is, when R = G = B = 1, a reference white point (X_n, Y_n, Z_n) of the device is produced. Using this condition, Eq. (13.5) becomes

| x_r  x_g  x_b | | T_r |   | X_n |
| y_r  y_g  y_b | | T_g | = | Y_n | ,    (13.6a)
| z_r  z_g  z_b | | T_b |   | Z_n |

or

Φ [ T_r  T_g  T_b ]^T = Υ_n .    (13.6b)
For the case of the same RGB primaries but different white points used for the source and reproduction, we can use the relationships given in Eqs. (13.5) and (13.6) to determine the tristimulus values of the reproduction under a destination illuminant if we know the tristimulus values of the object under a source illuminant. Under the source illuminant, we first compute the RGB values using Eq. (13.5), because the tristimulus values under the source illuminant are known and the conversion matrix, having three independent columns, can be inverted:

| R |   | x_r T_r,s  x_g T_g,s  x_b T_b,s |^-1 | X_s |
| G | = | y_r T_r,s  y_g T_g,s  y_b T_b,s |    | Y_s | ,    (13.7a)
| B |   | z_r T_r,s  z_g T_g,s  z_b T_b,s |    | Z_s |

or

Υ_p = (Φ T_s)^-1 Υ_s .    (13.7b)
Equation (13.7) is the inverse of Eq. (13.5). Compared to Eq. (13.1), we have

| ω_11  ω_12  ω_13 |   | x_r T_r,s  x_g T_g,s  x_b T_b,s |^-1
| ω_21  ω_22  ω_23 | = | y_r T_r,s  y_g T_g,s  y_b T_b,s |    ,    (13.8a)
| ω_31  ω_32  ω_33 |   | z_r T_r,s  z_g T_g,s  z_b T_b,s |

or

Ω = (Φ T_s)^-1 .    (13.8b)
From Eqs. (13.2) and (13.5), we have

| ψ_11  ψ_12  ψ_13 |   | x_r T_r,d  x_g T_g,d  x_b T_b,d |
| ψ_21  ψ_22  ψ_23 | = | y_r T_r,d  y_g T_g,d  y_b T_b,d | ,    (13.9a)
| ψ_31  ψ_32  ψ_33 |   | z_r T_r,d  z_g T_g,d  z_b T_b,d |

or

Ψ = Φ T_d .    (13.9b)
Substituting Eqs. (13.8b) and (13.9b) into Eq. (13.4b), we have

Υ_d = Φ T_d (Φ T_s)^-1 Υ_s = Φ T_d T_s^-1 Φ^-1 Υ_s = Φ Φ^-1 T_d T_s^-1 Υ_s = T_d T_s^-1 Υ_s ,    (13.10a)

or

| X_d |   | T_r,d/T_r,s      0             0       | | X_s |
| Y_d | = |      0       T_g,d/T_g,s       0       | | Y_s | .    (13.10b)
| Z_d |   |      0             0       T_b,d/T_b,s | | Z_s |
Equation (13.10) simplifies the method of white-point conversion from a matrix multiplication [see Eq. (13.4)] to a constant scaling via a colorimetric RGB space. It reduces the computational cost, requiring only three multiplications. The assumption for this derivation is that the colorimetric RGB values remain the same under different white points. Based on this assumption, Eq. (13.10) states that the white points are converted by multiplying by the ratios of the RGB proportional constants from the destination illuminant to the source illuminant. This simple relationship of Eq. (13.10) is basically the von Kries type of coefficient rule. Ratios for the A-C, A-D_50, A-D_65, C-D_50, C-D_65, and D_50-D_65 conversions are given in Table 13.1 for three different RGB spaces, where column 3 contains the destination-to-source ratios of the red component, column 4 contains the ratios of the green component, and column 5 contains the ratios of the blue component. For the backward conversions, the ratios are the inverse of the corresponding values given in the table; for example, the C-A conversion ratios are the inverse of the A-C ratios. Note that the ratios obtained via different RGB spaces give different values; for example, the (T_r,d/T_r,s) ratios involving illuminant A give smaller values via the ROMM/RGB space than the other two RGB spaces. These results indicate that there are gamut differences in the RGB primary sets, which reveal the suitability of the intermediate RGB space used for deriving the proportional constants.

Table 13.1 Ratios of proportional constants from destination illuminant to source illuminant.

Transform    Primary        T_r,d/T_r,s   T_g,d/T_g,s   T_b,d/T_b,s
A-C          RGB709         1.754         0.848         0.212
             CIE1931/RGB    1.764         0.870         0.296
             ROMM/RGB       1.200         0.919         0.301
A-D_50       RGB709         1.567         0.848         0.323
             CIE1931/RGB    1.536         0.890         0.425
             ROMM/RGB       1.204         0.918         0.431
A-D_65       RGB709         1.845         0.827         0.233
             CIE1931/RGB    1.823         0.864         0.322
             ROMM/RGB       1.244         0.906         0.327
C-D_50       RGB709         0.871         1.023         1.437
             CIE1931/RGB    0.893         0.999         1.523
             ROMM/RGB       1.003         0.999         1.432
C-D_65       RGB709         1.033         0.993         1.087
             CIE1931/RGB    1.052         0.975         1.100
             ROMM/RGB       1.037         0.986         1.086
D_50-D_65    RGB709         1.187         0.970         0.756
             CIE1931/RGB    1.178         0.975         0.723
             ROMM/RGB       1.034         0.987         0.758
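The scaling rule of Eqs. (13.6) and (13.10) can be sketched in a few lines. In the example below, the Rec. 709 chromaticities and the D50/D65 white points are conventional values supplied for illustration (they are not data from the text), and the function names are assumptions:

```python
import numpy as np

def proportional_constants(phi, white):
    """Solve Eq. (13.6) for (T_r, T_g, T_b): phi holds the primary
    chromaticities (x, y, z) as columns r, g, b; white is the XYZ
    white point produced when R = G = B = 1."""
    return np.linalg.solve(phi, np.asarray(white))

def convert_white_point(xyz_s, phi, white_s, white_d):
    """Eq. (13.10) sketch: scale source XYZ by the destination-to-source
    ratios of the RGB proportional constants."""
    Ts = proportional_constants(phi, white_s)
    Td = proportional_constants(phi, white_d)
    return (Td / Ts) * np.asarray(xyz_s)

# Rec. 709 primaries (assumed example): columns are (x, y, z) of r, g, b.
phi709 = np.array([[0.64, 0.30, 0.15],
                   [0.33, 0.60, 0.06],
                   [0.03, 0.10, 0.79]])
d65 = [0.95047, 1.0, 1.08883]
d50 = [0.96422, 1.0, 0.82521]
```

A round trip through Eq. (13.10) is exact because the backward ratios are the inverse of the forward ones.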
A set of 135 data points was obtained from an electrophotographic (or xerographic) print of color patches that contained step wedges of the primary CMYK colors, secondary CM, MY, and CY mixtures, and three-color CMY mixtures. The spectra of these 135 patches were measured; then, tristimulus values were calculated under illuminants A, C, D_50, D_65, and D_75 using Eq. (1.1) at 10-nm intervals. These results were used as the standards for comparison with the results of the white-point conversions.
Figures 13.1-13.3 show the goodness of fit of the simple coefficient rule of Eq. (13.10) to the measured data for the D_50-D_65 conversion on the set of 135 data points. Figure 13.1 is the plot of the X ratios (X measured under D_50 divided by X measured under D_65) as a function of the X value under D_65. Figures 13.2 and 13.3 are the corresponding plots for the Y and Z ratios, respectively. If Eq. (13.10) held exactly, the plot of the tristimulus ratio as a function of the tristimulus value would fall on a horizontal line with a constant value; in other words, the tristimulus ratios of all data points would be the same, regardless of the magnitude of the tristimulus value. The figures, however, show that the ratio diverges as the tristimulus value decreases, reaching maximum scattering in the dark region (<20), then converges again. In spite of the data scattering in the low-tristimulus region, one can still draw a straight line through all data points to obtain the white-point ratio. The constant value obtained from the graph is quite close to the ratio computed from Eq. (13.10), except for those data points involving illuminant A and the T_r of D_50 (see Table 13.1). Other white-point conversions, such as D_50-C and D_65-C, behave similarly.

Figure 13.1 Plot of X_D50/X_D65 as a function of X_D65.
Figure 13.2 Plot of Y_D50/Y_D65 as a function of Y_D65.
Figure 13.3 Plot of Z_D50/Z_D65 as a function of Z_D65.
Table 13.2 lists the accuracies of white-point conversions between the four illuminants A, C, D50, and D65, using Eq. (13.10) on the set of 135 data points via three sets of primaries (RGB709, CIE1931/RGB, and ROMM/RGB). Results in CIEXYZ space are not good measures for evaluating the conversion accuracy because CIEXYZ is not a visually uniform space; a small difference in CIEXYZ may not correspond to a small visual difference. Therefore, for the purpose of comparing color differences, the measured and calculated tristimulus values are further converted to CIELAB values using the CIE formulation, and color differences are computed in CIELAB space.3 Table 13.2 indicates that any conversion involving illuminant A is not acceptable; the maximum error can be as high as 109 ΔE*ab. For the other conversions involving C, D50, and D65, this method is marginal in conversion accuracy; the average color differences in CIELAB space range from 12 to 22 ΔE*ab units, and the maximum values are below 31 ΔE*ab units. Errors are smallest when ROMM/RGB primaries are used for computing the transformation; CIE1931/RGB and RGB709 are comparable in conversion accuracy, with conversion errors a factor of three or four higher than those obtained from ROMM/RGB.
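The CIELAB error metric used throughout these comparisons can be sketched as follows (standard CIE 1976 formulas; the D50 white point here is a nominal value, not the book's data):

```python
import numpy as np

def xyz_to_lab(xyz, white):
    """CIE 1976 L*a*b* from tristimulus values relative to a white point."""
    r = np.asarray(xyz, dtype=float) / np.asarray(white, dtype=float)
    eps, kappa = 216.0 / 24389.0, 24389.0 / 27.0
    f = np.where(r > eps, np.cbrt(r), (kappa * r + 16.0) / 116.0)
    return np.array([116.0 * f[1] - 16.0,        # L*
                     500.0 * (f[0] - f[1]),      # a*
                     200.0 * (f[1] - f[2])])     # b*

def delta_e_ab(lab1, lab2):
    """CIE 1976 Delta E*ab: Euclidean distance in CIELAB."""
    return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

wp_d50 = np.array([96.422, 100.0, 82.521])   # nominal D50 white point
lab = xyz_to_lab([41.24, 21.26, 1.93], wp_d50)
```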
These data reveal that the conversion error depends on the relative distance between the white points in chromaticity coordinates. Figure 13.4 shows the positions of the illuminants in the chromaticity diagram, and Fig. 13.5 depicts the average color difference as a function of the distance between illuminants. The errors are smaller in D65 → C conversions than in D50 → C and D50 → D65 conversions because the distance between D65 and C is smaller than the distances between D50 and C and between D50 and D65 in the chromaticity diagram. Generally, the larger the distance between illuminants in the chromaticity diagram, the larger the color difference.
Table 13.2 Conversion accuracies between illuminants A, C, D50, and D65 on a set of 135 measured data points from three sets of primaries (RGB709, CIE1931, and ROMM/RGB).

Source      Destination  RGB        CIEXYZ             CIELAB
illuminant  illuminant   primary    Average  Maximum   Average  Maximum
A           C            RGB709     24.27    52.73     68.70     93.05
                         CIE1931    15.01    31.89     65.61     89.10
                         ROMM        6.89    13.23     21.56     44.73
C           A            RGB709     23.85    53.52     75.07    102.00
                         CIE1931    23.66    53.20     72.77     98.97
                         ROMM        6.81    13.42     21.49     46.82
A           D50          RGB709     16.35    34.86     54.73     74.56
                         CIE1931    10.72    22.73     46.72     63.59
                         ROMM        5.41    10.13     18.63     32.59
D50         A            RGB709     16.61    36.74     57.39     78.34
                         CIE1931    14.71    32.67     49.60     67.56
                         ROMM        5.37     9.95     18.45     30.95
A           D65          RGB709     23.17    49.87     72.87     99.30
                         CIE1931    14.73    31.27     66.18     90.24
                         ROMM        7.02    13.28     23.26     42.91
D65         A            RGB709     25.15    56.27     79.62    108.77
                         CIE1931    23.76    53.28     73.27    100.02
                         ROMM        7.02    13.26     23.09     44.49
C           D50          RGB709      5.22    11.88     16.10     26.81
                         CIE1931     6.05    13.60     21.38     30.88
                         ROMM        1.66     4.09      4.63     15.35
D50         C            RGB709      5.17    11.66     15.48     25.52
                         CIE1931     5.36    11.88     20.47     29.12
                         ROMM        1.78     4.18      4.61     15.27
C           D65          RGB709      1.26     2.71      4.92      8.71
                         CIE1931     0.48     1.03      1.46      3.43
                         ROMM        0.68     1.44      2.27      4.91
D65         C            RGB709      1.29     2.75      4.91      8.71
                         CIE1931     0.49     1.07      1.46      3.43
                         ROMM        0.69     1.48      2.27      4.90
D50         D65          RGB709      5.38    11.94     19.67     26.85
                         CIE1931     5.24    11.61     21.00     28.76
                         ROMM        1.65     3.37      4.94     13.04
D65         D50          RGB709      5.96    13.35     20.49     27.98
                         CIE1931     6.11    13.68     21.90     30.00
                         ROMM        1.64     3.43      4.95     13.14
Note that the average color differences, as well as the maximum differences, for the forward and backward transformations under a given RGB space are about the same. For example, the C → D50 conversion under ROMM/RGB gives an average of 4.63 ΔE*ab and a maximum of 15.35 ΔE*ab, whereas the inverse D50 → C gives an
Figure 13.4 Chromaticity coordinates of illuminants.
Figure 13.5 Average color difference as a function of the distance between illuminants.
average of 4.61 ΔE*ab and a maximum of 15.27 ΔE*ab; other conversions, such as the C ↔ D65 conversions, are even closer (almost identical). Moreover, the conversion accuracy depends on the intermediate RGB space employed. RGB709, having a very small color gamut, gives the worst accuracy in most cases, whereas
ROMM/RGB, having the largest color gamut, gives the best accuracy. In summary, this method is unacceptable for any conversion involving illuminant A. The accuracy is marginal for C ↔ D50 and D50 ↔ D65 via ROMM space, and it is acceptable or good for C ↔ D65 via all three RGB spaces. The poor agreement is attributed to the deviations from the constant values shown in Figs. 13.1-13.3. It is unfortunate that the worst deviations occur in the dark region, where the sensitivity of the CIELAB color difference is very high.
The conversion accuracy can be improved by merely applying a scaling factor to the T_r of D50. A general expression for the scaling correction is given in Eq. (13.11):
\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} =
\begin{bmatrix}
f_r (T_{r,d}/T_{r,s}) & 0 & 0 \\
0 & f_g (T_{g,d}/T_{g,s}) & 0 \\
0 & 0 & f_b (T_{b,d}/T_{b,s})
\end{bmatrix}
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix},   (13.11)
where f_r, f_g, and f_b are the scaling factors for the tristimulus values X, Y, and Z, respectively. The optimal scaling factor can be found by plotting the average color difference ΔE*ab as a function of the scaling factor, as shown in Fig. 13.6 for the illuminant C to D50 conversion from three RGB spaces; the optimal scaling factor lies at the minimum of the curve. The optimal ratios of RGB proportional constants
Figure 13.6 The average color difference ΔE*ab as a function of the scaling factor for the illuminant C to D50 conversion from three RGB spaces, where the optimal scaling factor is at the minimum of the curve.
for conversions between illuminants A, C, D50, and D65 are listed in Table 13.3. It is interesting to note that they are extremely close to the corresponding tristimulus ratios (also given in Table 13.3 for comparison). Table 13.4 gives the white-point conversion errors using the optimal ratios; they indeed give the smallest color differences (by a factor of two or more) compared to the corresponding conversion errors given in Table 13.2. Even with the optimal ratios, the conversion accuracy is still not acceptable for transforms involving illuminant A; it is marginal to acceptable for C ↔ D50, and acceptable for C ↔ D65 and D50 ↔ D65. Because of the scattering and deviations in the dark region, it is difficult to improve the conversion accuracy further. Moreover, the scaling has different consequences for different RGB primaries. Unlike RGB709 and CIE1931/RGB, ROMM/RGB does not need the scaling and gives very satisfactory agreement; with ROMM/RGB, the scaling makes the conversion errors larger, not smaller. In the few cases where there is an improvement, the improvement is marginal and the scaling factor is very close to 1.
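The search for an optimal scaling factor can be sketched as a one-dimensional grid search. In this toy version a single common factor is searched (the text scales only the T_r term), the paired data are synthetic stand-ins for the 135 patches, and a mean absolute XYZ error stands in for the ΔE*ab objective:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for paired patch data: XYZ under the source
# illuminant and the measured XYZ under the destination illuminant.
xyz_src = rng.uniform(5.0, 90.0, size=(135, 3))
base_ratio = np.array([1.017, 1.000, 1.432])     # e.g. D50 -> C ratios
xyz_dst = xyz_src * base_ratio * 1.02            # pretend a 2% residual bias

def mean_error(scale):
    """Average |error| after scaling the white-point ratios by `scale`
    (a stand-in for the Delta E*ab objective used in the text)."""
    pred = xyz_src * (scale * base_ratio)
    return float(np.mean(np.abs(pred - xyz_dst)))

# Grid search: the optimal factor sits at the minimum of the error curve.
grid = np.linspace(0.9, 1.1, 201)
best = grid[int(np.argmin([mean_error(s) for s in grid]))]
```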
13.2 White-Point Conversion via Tristimulus Ratios of Illuminants
In view of the agreement between the ratios of proportional constants and the ratios of tristimulus values (see Table 13.3), Kang proposed another empirical method that builds on the known color-transfer matrix of a given illuminant.2 When the illuminant changes, a white-point conversion matrix M_w is computed, and its elements are used to weight the corresponding elements of the color-transfer matrix. Specifically, the conversion matrix is the outer product of two vectors: a column vector of the tristimulus values of the destination white point and a row vector of the reciprocal tristimulus values of
Table 13.3 Optimal ratios of RGB proportional constants and ratios of tristimulus values for conversions between illuminants A, C, D50, and D65.
Transform                                               Red    Green  Blue
C → A      Ratios of proportional constants (T_d/T_s)   1.128  0.994  0.300
           Ratios of tristimulus values                 1.120  1.000  0.301
D50 → A    Ratios of proportional constants (T_d/T_s)   1.133  0.992  0.430
           Ratios of tristimulus values                 1.138  1.000  0.431
D65 → A    Ratios of proportional constants (T_d/T_s)   1.153  0.988  0.328
           Ratios of tristimulus values                 1.155  1.000  0.327
D50 → C    Ratios of proportional constants (T_d/T_s)   1.016  1.002  1.440
           Ratios of tristimulus values                 1.017  1.000  1.432
D65 → C    Ratios of proportional constants (T_d/T_s)   1.033  0.999  1.089
           Ratios of tristimulus values                 1.032  1.000  1.086
D65 → D50  Ratios of proportional constants (T_d/T_s)   1.010  0.995  0.759
           Ratios of tristimulus values                 1.015  1.000  0.758
Table 13.4 Conversion accuracies between illuminants A, C, D50, and D65 on a set of 135 measured data using optimal proportional constants.

Source      Destination  CIEXYZ             CIELAB
illuminant  illuminant   Average  Maximum   Average  Maximum
A           C            5.28     13.27     11.61    37.26
C           A            5.26     14.45     11.63    37.40
A           D50          3.58      9.91      7.44    22.91
D50         A            3.73     10.74      7.42    22.88
A           D65          5.09     13.23     10.93    34.44
D65         A            5.23     14.56     10.92    34.45
C           D50          1.62      5.09      4.25    14.01
D50         C            1.75      5.03      4.32    14.57
C           D65          0.40      1.04      1.26     3.87
D65         C            0.40      1.03      1.21     4.07
D50         D65          1.49      4.15      3.49    11.11
D65         D50          1.49      4.36      3.50    11.10
the source white point:

M_w = \begin{bmatrix} X_{N,d} \\ Y_{N,d} \\ Z_{N,d} \end{bmatrix}
\begin{bmatrix} 1/X_{N,s} & 1/Y_{N,s} & 1/Z_{N,s} \end{bmatrix}
= \begin{bmatrix}
X_{N,d}/X_{N,s} & X_{N,d}/Y_{N,s} & X_{N,d}/Z_{N,s} \\
Y_{N,d}/X_{N,s} & Y_{N,d}/Y_{N,s} & Y_{N,d}/Z_{N,s} \\
Z_{N,d}/X_{N,s} & Z_{N,d}/Y_{N,s} & Z_{N,d}/Z_{N,s}
\end{bmatrix}.   (13.12)
If we know the tristimulus values of the illuminants involved (which are readily available), we can compute the conversion matrix M_w. Note that the diagonal elements of M_w in Eq. (13.12) are exactly the ratios of tristimulus values between the destination and source white points; the off-diagonal elements of M_w provide additional corrections. This matrix is used to scale the transfer matrix from the source to the destination illuminant: the elements of the conversion matrix M_w are multiplied by the corresponding elements of the color-transfer matrix given in Eq. (13.1) to give the destination RGB values:
\begin{bmatrix} R_d \\ G_d \\ B_d \end{bmatrix} =
\begin{bmatrix}
a_{11}(X_{N,d}/X_{N,s}) & a_{12}(X_{N,d}/Y_{N,s}) & a_{13}(X_{N,d}/Z_{N,s}) \\
a_{21}(Y_{N,d}/X_{N,s}) & a_{22}(Y_{N,d}/Y_{N,s}) & a_{23}(Y_{N,d}/Z_{N,s}) \\
a_{31}(Z_{N,d}/X_{N,s}) & a_{32}(Z_{N,d}/Y_{N,s}) & a_{33}(Z_{N,d}/Z_{N,s})
\end{bmatrix}
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix},   (13.13)

where the a_{ij} are the elements of the color-transfer matrix of Eq. (13.1) under the source illuminant.
The resulting RGB values are converted to the destination tristimulus values by multiplying by the inverse of the transfer matrix of Eq. (13.1) under the source illuminant; the overall conversion is given in Eq. (13.14):
\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} =
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}^{-1}
\begin{bmatrix}
a_{11}(X_{N,d}/X_{N,s}) & a_{12}(X_{N,d}/Y_{N,s}) & a_{13}(X_{N,d}/Z_{N,s}) \\
a_{21}(Y_{N,d}/X_{N,s}) & a_{22}(Y_{N,d}/Y_{N,s}) & a_{23}(Y_{N,d}/Z_{N,s}) \\
a_{31}(Z_{N,d}/X_{N,s}) & a_{32}(Z_{N,d}/Y_{N,s}) & a_{33}(Z_{N,d}/Z_{N,s})
\end{bmatrix}
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}.   (13.14)
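A sketch of this second method, assuming an sRGB-like XYZ-to-RGB transfer matrix and nominal white points (illustrative stand-ins for the Eq. (13.1) matrices, not the book's data):

```python
import numpy as np

def whitepoint_matrix(wp_dst, wp_src):
    """Eq. (13.12): outer product of the destination white point and the
    reciprocal source white point."""
    return np.outer(np.asarray(wp_dst, float), 1.0 / np.asarray(wp_src, float))

def convert_xyz(xyz_src, transfer, wp_src, wp_dst):
    """Eqs. (13.13)-(13.14): weight the source transfer matrix element-wise
    by M_w, then map back with the inverse of the same transfer matrix."""
    weighted = transfer * whitepoint_matrix(wp_dst, wp_src)  # Hadamard product
    return np.linalg.inv(transfer) @ weighted @ np.asarray(xyz_src, float)

# Illustrative inputs: an sRGB-like XYZ->RGB matrix and nominal white points.
transfer = np.array([[ 3.2406, -1.5372, -0.4986],
                     [-0.9689,  1.8758,  0.0415],
                     [ 0.0557, -0.2040,  1.0570]])
wp_d65 = np.array([95.047, 100.0, 108.883])
wp_d50 = np.array([96.422, 100.0,  82.521])
xyz_d50 = convert_xyz([41.24, 21.26, 1.93], transfer, wp_d65, wp_d50)
```

The diagonal of M_w reproduces the tristimulus ratios; the off-diagonal terms supply the extra correction noted above.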
This method uses the source transfer matrix only; the white-point difference is taken into account in Eq. (13.13), and no destination transfer matrix is involved. For a given white-point conversion, the transfer matrices derived from different RGB spaces are quite close, particularly in their diagonal elements; this may be attributed to the fact that the formula does not involve the destination matrix. The best conversion matrix for each white-point conversion is given as follows:
A to C:
  0.8903  0.0082  0.1156
  0.0076  1.0032  0.0097
  0.0009  0.0026  3.3215

A to D50:
  0.8760  0.0073  0.0147
  0.0126  1.0026  0.0012
  0       0       2.3204

A to D65:
  0.8620  0.0103  0.0150
  0.0177  1.0037  0.0018
  0       0       3.0602

C to A:
  1.1271  0.0199  0.0241
  0.0386  0.9928  0.0012
  0       0       0.3010

D50 to A:
  1.1459  0.0199  0.0345
  0.0392  0.9929  0.0018
  0       0       0.4310

D65 to A:
  1.1630  0.0198  0.0261
  0.0398  0.9929  0.0014
  0       0       0.3268

C to D50:
  0.9776  0.0163  0.0230
  0.0154  1.0057  0.0009
  0.0009  0.0024  0.6986

C to D65:
  0.9653  0.0102  0.0046
  0.0198  1.0037  0.0006
  0       0       0.9211

D50 to D65:
  0.9814  0.0102  0.0065
  0.0202  1.0038  0.0009
  0       0       1.3189

D50 to C:
  1.0152  0.0041  0.0092
  0.0080  1.0015  0.0004
  0       0       1.4317

D65 to C:
  1.0303  0.0041  0.0071
  0.0081  1.0015  0.0003
  0       0       1.0856

D65 to D50:
  1.0089  0.0162  0.0250
  0.0157  1.0057  0.0010
  0.0010  0.0025  0.7584
Note that the diagonal elements of the conversion matrix are very close to the corresponding optimal ratios (or the tristimulus ratios) given in Table 13.3. It is therefore not surprising that the conversion accuracy is on the order of the accuracy obtained from the optimal ratios of the first method. This method gives quite satisfactory results for conversions among illuminants C, D50, and D65 in SMPTE-C/RGB space; the average color differences range from 2.2 to 4.2 ΔE*ab units. However, it is not adequate for conversions that involve illuminant A; those errors are about 10 ΔE*ab units.2 Additional testing results of this method under other RGB spaces are given in Table 13.5. The results indicate that the conversion accuracy is relatively insensitive to the RGB primaries used to derive the conversion matrix. Unlike the first method, where the forward and reverse transforms have similar conversion accuracies, the second method gives very different accuracies; this is because only one transfer matrix (the source) is involved, so no correlation between the source and destination white points is accounted for. The data also reconfirm the previous observation that the adequacy of this white-point conversion method depends on the difference between the correlated color temperatures of the white points; the larger the temperature difference, the less accurate the approximation. The relationship between the color differences and the relative positions of the illuminants with respect to illuminant D93 in CIELAB space is shown in Fig. 13.7. The accuracy, however, can be improved by adding nonlinear terms to the transfer matrix.4
13.3 White-Point Conversion via Difference in Illuminants
Tristimulus values are computed from the object spectrum S(λ) and the illuminant SPD E(λ), together with the color-matching functions (CMFs) x̄(λ), ȳ(λ), z̄(λ), as

Figure 13.7 Plot of average color differences versus relative positions of illuminants in CIELAB space.
Table 13.5 Conversion accuracies of the second method of white-point conversion using different RGB primaries to derive the conversion matrix.

Source      Destination  RGB          CIEXYZ              CIELAB
illuminant  illuminant   space        Avg.  RMS   Max.    Avg.   RMS    Max.
A           C            RGB709       5.43  6.58  13.33   10.81  12.67  30.10
                         CIE1931/RGB  5.50  6.72  13.54   11.02  12.44  23.64
                         ROMM/RGB     5.27  6.55  13.34   11.02  13.65  33.32
C           A            RGB709       8.10  9.79  15.81   21.02  22.60  45.69
                         CIE1931/RGB  8.32  9.93  15.87   22.08  23.08  29.91
                         ROMM/RGB     5.60  6.85  14.32   12.73  14.18  26.08
A           D50          RGB709       3.93  4.75   9.26    9.14  11.38  29.30
                         CIE1931/RGB  3.70  4.53   9.41    9.32  12.14  33.54
                         ROMM/RGB     3.51  4.33   9.38    7.84   9.73  23.98
D50         A            RGB709       7.51  8.85  14.59   21.87  23.01  42.72
                         CIE1931/RGB  7.69  8.95  14.55   23.82  25.76  56.56
                         ROMM/RGB     4.16  4.91  10.11   10.04  10.76  19.81
A           D65          RGB709       5.10  6.18  12.54   11.40  13.00  28.01
                         CIE1931/RGB  5.24  6.36  12.90   11.47  12.88  24.57
                         ROMM/RGB     5.00  6.14  12.69   11.06  13.25  30.87
D65         A            RGB709       8.09  9.71  15.03   21.40  22.75  45.26
                         CIE1931/RGB  8.33  9.87  15.40   22.68  23.76  35.02
                         ROMM/RGB     5.52  6.68  14.06   12.44  13.72  24.60
C           D50          RGB709       1.95  2.31   4.20    4.75   5.53  14.53
                         CIE1931/RGB  1.61  2.03   4.47    3.02   3.55   7.84
                         ROMM/RGB     1.70  2.12   4.82    4.45   5.55  14.01
D50         C            RGB709       2.27  2.76   5.15    4.77   5.34  10.17
                         CIE1931/RGB  2.29  2.78   5.54    5.54   6.00  10.16
                         ROMM/RGB     1.77  2.21   5.16    4.20   5.13  10.98
C           D65          RGB709       1.69  1.97   3.52    6.23   6.70  12.70
                         CIE1931/RGB  1.71  2.05   3.77    6.72   7.44  14.92
                         ROMM/RGB     0.90  1.02   1.66    3.83   4.19   7.98
D65         C            RGB709       1.86  2.18   4.06    4.10   4.27   7.22
                         CIE1931/RGB  1.75  2.05   3.80    5.49   5.80   8.88
                         ROMM/RGB     0.59  0.69   1.20    1.78   2.04   3.95
D50         D65          RGB709       1.99  2.41   4.84    5.83   6.38  11.18
                         CIE1931/RGB  2.11  2.53   4.98    6.22   6.68  10.91
                         ROMM/RGB     1.63  1.96   4.27    4.51   5.06   9.85
D65         D50          RGB709       1.80  2.13   3.94    4.10   4.69  11.74
                         CIE1931/RGB  1.48  1.83   3.79    2.53   2.85   6.05
                         ROMM/RGB     1.52  1.87   3.94    3.64   4.41  11.26
defined in Eq. (13.15) by the CIE:3

X = k \int E(\lambda) S(\lambda) \bar{x}(\lambda)\, d\lambda
  = k \sum_{\lambda} E(\lambda) S(\lambda) \bar{x}(\lambda),   (13.15a)

Y = k \int E(\lambda) S(\lambda) \bar{y}(\lambda)\, d\lambda
  = k \sum_{\lambda} E(\lambda) S(\lambda) \bar{y}(\lambda),   (13.15b)

Z = k \int E(\lambda) S(\lambda) \bar{z}(\lambda)\, d\lambda
  = k \sum_{\lambda} E(\lambda) S(\lambda) \bar{z}(\lambda),   (13.15c)

k = 100 \Big/ \int E(\lambda) \bar{y}(\lambda)\, d\lambda
  = 100 \Big/ \Big[ \sum_{\lambda} E(\lambda) \bar{y}(\lambda) \Big].   (13.15d)
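In its discrete form, Eq. (13.15) is a few weighted sums. A self-contained sketch using crude Gaussian stand-ins for the color-matching functions (placeholders, not the real CIE tables) and an equal-energy illuminant:

```python
import numpy as np

wl = np.arange(400, 701, 10)   # nm, 10-nm intervals as in the text

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Gaussian placeholders for the CMFs -- illustrative, not the CIE values.
xbar = 1.06 * gauss(599, 38) + 0.36 * gauss(446, 19)
ybar = 1.01 * gauss(556, 46)
zbar = 1.78 * gauss(449, 22)
E = np.ones_like(wl, dtype=float)     # equal-energy illuminant
S = np.full(wl.shape, 0.5)            # a flat 50% reflector

def tristimulus(S, E):
    """Eq. (13.15): XYZ as illuminant-weighted sums, with k normalizing
    a perfect white (S = 1) to Y = 100."""
    k = 100.0 / np.sum(E * ybar)
    return np.array([k * np.sum(E * S * xbar),
                     k * np.sum(E * S * ybar),
                     k * np.sum(E * S * zbar)])

xyz = tristimulus(S, E)
```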
For two different illuminants, E_s(λ) and E_d(λ), of the source and destination, we can determine the illuminant difference ΔE(λ) between them wavelength by wavelength, as given in Eq. (13.16):

\Delta E(\lambda) = E_d(\lambda) - E_s(\lambda).   (13.16)
Knowing this relationship, we can express the tristimulus values via the difference term:

X_d = k_d \sum_{\lambda} E_d(\lambda) S(\lambda) \bar{x}(\lambda)
    = k_d \sum_{\lambda} [E_s(\lambda) + \Delta E(\lambda)] S(\lambda) \bar{x}(\lambda)
    = k_d \Big[ \sum_{\lambda} E_s(\lambda) S(\lambda) \bar{x}(\lambda) + \sum_{\lambda} \Delta E(\lambda) S(\lambda) \bar{x}(\lambda) \Big]
    = k_d \Big[ X_s/k_s + \sum_{\lambda} \Delta E(\lambda) S(\lambda) \bar{x}(\lambda) \Big]
    = (k_d/k_s) X_s + k_d \sum_{\lambda} \Delta E(\lambda) S(\lambda) \bar{x}(\lambda).   (13.17a)
Similarly, we have

Y_d = (k_d/k_s) Y_s + k_d \sum_{\lambda} \Delta E(\lambda) S(\lambda) \bar{y}(\lambda),   (13.17b)

Z_d = (k_d/k_s) Z_s + k_d \sum_{\lambda} \Delta E(\lambda) S(\lambda) \bar{z}(\lambda).   (13.17c)
Equation (13.17) provides the means for white-point conversion between any two illuminants. We can compute the tristimulus values (X_d, Y_d, Z_d) from (X_s, Y_s, Z_s), and vice versa, because k_d, k_s, and ΔE(λ) are known. The only thing missing is the spectrum of the object, S(λ). Fortunately, it can be estimated in three components (not wavelength by wavelength) from the source tristimulus values, because we have only three known inputs, the tristimulus values.

To accommodate the trichromatic nature of the inputs for the purpose of estimating the object spectrum, we partition the whole visible spectrum into three bands corresponding to the red, green, and blue regions. For example, in
the usual visible range of 400 to 700 nm, we can have a blue band S_b covering 400-500 nm, a green band S_g covering 500-600 nm, and a red band S_r covering 600-700 nm. Wider ranges of the visible spectrum (e.g., 360-760 nm) can be segmented by extending S_b to the low-wavelength end and S_r to the high-wavelength end of the spectrum. With these partitions, Eq. (13.15) becomes
X/k = \sum_{\lambda=400}^{700} S(\lambda) E(\lambda) \bar{x}(\lambda) = S_r \varepsilon_{rx} + S_g \varepsilon_{gx} + S_b \varepsilon_{bx},   (13.18a)

Y/k = \sum_{\lambda=400}^{700} S(\lambda) E(\lambda) \bar{y}(\lambda) = S_r \varepsilon_{ry} + S_g \varepsilon_{gy} + S_b \varepsilon_{by},   (13.18b)

Z/k = \sum_{\lambda=400}^{700} S(\lambda) E(\lambda) \bar{z}(\lambda) = S_r \varepsilon_{rz} + S_g \varepsilon_{gz} + S_b \varepsilon_{bz},   (13.18c)

where

\varepsilon_{rx} = \sum_{\lambda=600}^{700} E(\lambda) \bar{x}(\lambda), \quad
\varepsilon_{gx} = \sum_{\lambda=500}^{600} E(\lambda) \bar{x}(\lambda), \quad
\varepsilon_{bx} = \sum_{\lambda=400}^{500} E(\lambda) \bar{x}(\lambda),

\varepsilon_{ry} = \sum_{\lambda=600}^{700} E(\lambda) \bar{y}(\lambda), \quad
\varepsilon_{gy} = \sum_{\lambda=500}^{600} E(\lambda) \bar{y}(\lambda), \quad
\varepsilon_{by} = \sum_{\lambda=400}^{500} E(\lambda) \bar{y}(\lambda),

\varepsilon_{rz} = \sum_{\lambda=600}^{700} E(\lambda) \bar{z}(\lambda), \quad
\varepsilon_{gz} = \sum_{\lambda=500}^{600} E(\lambda) \bar{z}(\lambda), \quad
\varepsilon_{bz} = \sum_{\lambda=400}^{500} E(\lambda) \bar{z}(\lambda),

or

\begin{bmatrix} X/k \\ Y/k \\ Z/k \end{bmatrix} =
\begin{bmatrix}
\varepsilon_{rx} & \varepsilon_{gx} & \varepsilon_{bx} \\
\varepsilon_{ry} & \varepsilon_{gy} & \varepsilon_{by} \\
\varepsilon_{rz} & \varepsilon_{gz} & \varepsilon_{bz}
\end{bmatrix}
\begin{bmatrix} S_r \\ S_g \\ S_b \end{bmatrix}.   (13.18d)
The coefficients ε_rx, ε_gx, ε_bx, ε_ry, ε_gy, ε_by, ε_rz, ε_gz, and ε_bz in Eq. (13.18) can be computed from the illuminant SPD and the CMFs. Therefore, the three broad bands S_r, S_g, and S_b can be estimated from the source tristimulus values by inverting Eq. (13.18d), as given in Eq. (13.19): because the matrix has three independent vectors, it is not singular, the determinant in Eq. (13.19b) is not zero, and the matrix can be inverted.
\begin{bmatrix} S_r \\ S_g \\ S_b \end{bmatrix} = k_s^{-1}
\begin{bmatrix}
\varepsilon_{rx} & \varepsilon_{gx} & \varepsilon_{bx} \\
\varepsilon_{ry} & \varepsilon_{gy} & \varepsilon_{by} \\
\varepsilon_{rz} & \varepsilon_{gz} & \varepsilon_{bz}
\end{bmatrix}^{-1}
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix},   (13.19a)

\det = \begin{vmatrix}
\varepsilon_{rx} & \varepsilon_{gx} & \varepsilon_{bx} \\
\varepsilon_{ry} & \varepsilon_{gy} & \varepsilon_{by} \\
\varepsilon_{rz} & \varepsilon_{gz} & \varepsilon_{bz}
\end{vmatrix} \neq 0.   (13.19b)
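Estimating the three broad bands amounts to building the 3x3 band matrix of Eq. (13.18d) and solving Eq. (13.19). A sketch with Gaussian CMF placeholders (not the real CIE tables) and half-open band bins; negative band estimates are clipped to zero, as the text later recommends:

```python
import numpy as np

wl = np.arange(400, 701, 10)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Gaussian placeholders for the CMFs and a flat illuminant (illustrative).
cmf = np.stack([1.06 * gauss(599, 38) + 0.36 * gauss(446, 19),   # xbar
                1.01 * gauss(556, 46),                            # ybar
                1.78 * gauss(449, 22)])                           # zbar
E = np.ones_like(wl, dtype=float)

def band_matrix(E):
    """Eq. (13.18d): 3x3 matrix of band sums, columns ordered R, G, B.
    Half-open bins so no wavelength is counted twice."""
    bands = [wl >= 600, (wl >= 500) & (wl < 600), wl < 500]
    return np.array([[np.sum(E[b] * row[b]) for b in bands] for row in cmf])

M = band_matrix(E)
k_s = 100.0 / np.sum(E * cmf[1])

def estimate_bands(xyz_src):
    """Eq. (13.19): recover S_r, S_g, S_b; negative estimates clipped to zero."""
    s = np.linalg.solve(M, np.asarray(xyz_src, float)) / k_s
    return np.clip(s, 0.0, None)
```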
Similarly, we can approximate the illuminant difference ΔE(λ) in three bands, using the same ranges as in the partitioning of the illuminant:
\Delta X = \sum_{\lambda=400}^{700} S(\lambda) \Delta E(\lambda) \bar{x}(\lambda) = S_r \Delta\varepsilon_{rx} + S_g \Delta\varepsilon_{gx} + S_b \Delta\varepsilon_{bx},   (13.20a)

\Delta Y = \sum_{\lambda=400}^{700} S(\lambda) \Delta E(\lambda) \bar{y}(\lambda) = S_r \Delta\varepsilon_{ry} + S_g \Delta\varepsilon_{gy} + S_b \Delta\varepsilon_{by},   (13.20b)

\Delta Z = \sum_{\lambda=400}^{700} S(\lambda) \Delta E(\lambda) \bar{z}(\lambda) = S_r \Delta\varepsilon_{rz} + S_g \Delta\varepsilon_{gz} + S_b \Delta\varepsilon_{bz},   (13.20c)

where

\Delta\varepsilon_{rx} = \sum_{\lambda=600}^{700} \Delta E(\lambda) \bar{x}(\lambda), \quad
\Delta\varepsilon_{gx} = \sum_{\lambda=500}^{600} \Delta E(\lambda) \bar{x}(\lambda), \quad
\Delta\varepsilon_{bx} = \sum_{\lambda=400}^{500} \Delta E(\lambda) \bar{x}(\lambda),

\Delta\varepsilon_{ry} = \sum_{\lambda=600}^{700} \Delta E(\lambda) \bar{y}(\lambda), \quad
\Delta\varepsilon_{gy} = \sum_{\lambda=500}^{600} \Delta E(\lambda) \bar{y}(\lambda), \quad
\Delta\varepsilon_{by} = \sum_{\lambda=400}^{500} \Delta E(\lambda) \bar{y}(\lambda),

\Delta\varepsilon_{rz} = \sum_{\lambda=600}^{700} \Delta E(\lambda) \bar{z}(\lambda), \quad
\Delta\varepsilon_{gz} = \sum_{\lambda=500}^{600} \Delta E(\lambda) \bar{z}(\lambda), \quad
\Delta\varepsilon_{bz} = \sum_{\lambda=400}^{500} \Delta E(\lambda) \bar{z}(\lambda),

or

\begin{bmatrix} \Delta X \\ \Delta Y \\ \Delta Z \end{bmatrix} =
\begin{bmatrix}
\Delta\varepsilon_{rx} & \Delta\varepsilon_{gx} & \Delta\varepsilon_{bx} \\
\Delta\varepsilon_{ry} & \Delta\varepsilon_{gy} & \Delta\varepsilon_{by} \\
\Delta\varepsilon_{rz} & \Delta\varepsilon_{gz} & \Delta\varepsilon_{bz}
\end{bmatrix}
\begin{bmatrix} S_r \\ S_g \\ S_b \end{bmatrix}.   (13.20d)
Again, we can compute Δε_rx, Δε_gx, Δε_bx, Δε_ry, Δε_gy, Δε_by, Δε_rz, Δε_gz, and Δε_bz. By substituting S_r, S_g, and S_b obtained from Eq. (13.19) into Eq. (13.20), we derive the tristimulus differences [ΔX, ΔY, ΔZ] across the whole visible spectrum. We then substitute the tristimulus differences into Eq. (13.17) to compute the destination tristimulus values. We can combine all of these computations into one equation as
X_d = (k_d/k_s) X_s + k_d \sum_{\lambda} S(\lambda) \Delta E(\lambda) \bar{x}(\lambda)
    = (k_d/k_s) X_s + k_d (S_r \Delta\varepsilon_{rx} + S_g \Delta\varepsilon_{gx} + S_b \Delta\varepsilon_{bx}),

Y_d = (k_d/k_s) Y_s + k_d \sum_{\lambda} S(\lambda) \Delta E(\lambda) \bar{y}(\lambda)
    = (k_d/k_s) Y_s + k_d (S_r \Delta\varepsilon_{ry} + S_g \Delta\varepsilon_{gy} + S_b \Delta\varepsilon_{by}),   (13.21a)

Z_d = (k_d/k_s) Z_s + k_d \sum_{\lambda} S(\lambda) \Delta E(\lambda) \bar{z}(\lambda)
    = (k_d/k_s) Z_s + k_d (S_r \Delta\varepsilon_{rz} + S_g \Delta\varepsilon_{gz} + S_b \Delta\varepsilon_{bz}),

\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} = (k_d/k_s)
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} + k_d
\begin{bmatrix}
\Delta\varepsilon_{rx} & \Delta\varepsilon_{gx} & \Delta\varepsilon_{bx} \\
\Delta\varepsilon_{ry} & \Delta\varepsilon_{gy} & \Delta\varepsilon_{by} \\
\Delta\varepsilon_{rz} & \Delta\varepsilon_{gz} & \Delta\varepsilon_{bz}
\end{bmatrix}
\begin{bmatrix} S_r \\ S_g \\ S_b \end{bmatrix},   (13.21b)
\begin{bmatrix} X_d \\ Y_d \\ Z_d \end{bmatrix} = (k_d/k_s)
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix} + (k_d/k_s)
\begin{bmatrix}
\Delta\varepsilon_{rx} & \Delta\varepsilon_{gx} & \Delta\varepsilon_{bx} \\
\Delta\varepsilon_{ry} & \Delta\varepsilon_{gy} & \Delta\varepsilon_{by} \\
\Delta\varepsilon_{rz} & \Delta\varepsilon_{gz} & \Delta\varepsilon_{bz}
\end{bmatrix}
\begin{bmatrix}
\varepsilon_{rx,s} & \varepsilon_{gx,s} & \varepsilon_{bx,s} \\
\varepsilon_{ry,s} & \varepsilon_{gy,s} & \varepsilon_{by,s} \\
\varepsilon_{rz,s} & \varepsilon_{gz,s} & \varepsilon_{bz,s}
\end{bmatrix}^{-1}
\begin{bmatrix} X_s \\ Y_s \\ Z_s \end{bmatrix}.   (13.21c)
Equation (13.21c) shows that the ratio (k_d/k_s) is the conversion factor from the source to the destination; the source tristimulus values are then corrected by the illuminant differences in the second term on the right-hand side. The matrix of the illuminant difference and the inverse matrix of the source illuminant can be precomputed; they are multiplied together to give a 3 × 3 matrix. Finally, the ratio k_d/k_s is added to the diagonal elements. Table 13.6 gives the white-point conversion matrices derived from Eq. (13.21) for conversions between illuminants A, C, D50, and D65 using two partitions of the visible range.
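The whole third method can be sketched end to end. Everything here is a stand-in — Gaussian CMF placeholders and two synthetic illuminant SPDs instead of the CIE tables — but the structure, the two-step flow with negative-band clipping, follows Eqs. (13.17)-(13.21):

```python
import numpy as np

wl = np.arange(400, 701, 10)

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Placeholder CMFs and two synthetic illuminant SPDs (not CIE data).
cmf = np.stack([1.06 * gauss(599, 38) + 0.36 * gauss(446, 19),
                1.01 * gauss(556, 46),
                1.78 * gauss(449, 22)])
E_src = np.ones_like(wl, dtype=float)          # equal-energy source
E_dst = 0.6 + 0.8 * (wl - 400.0) / 300.0       # a reddish destination

def band_sums(E):
    """3x3 matrix of per-band illuminant-weighted CMF sums (R, G, B bands)."""
    bands = [wl >= 600, (wl >= 500) & (wl < 600), wl < 500]
    return np.array([[np.sum(E[b] * row[b]) for b in bands] for row in cmf])

def convert(xyz_src):
    """Two-step conversion of Eqs. (13.17)-(13.21) with negative-band clipping."""
    k_s = 100.0 / np.sum(E_src * cmf[1])
    k_d = 100.0 / np.sum(E_dst * cmf[1])
    # Step 1, Eq. (13.19): estimate the three broad bands of S(lambda).
    S = np.linalg.solve(band_sums(E_src), np.asarray(xyz_src, float)) / k_s
    S = np.clip(S, 0.0, None)
    # Step 2, Eq. (13.20): tristimulus differences from the SPD difference.
    dXYZ = band_sums(E_dst - E_src) @ S
    # Eq. (13.17): scaled source values plus the difference correction.
    return (k_d / k_s) * np.asarray(xyz_src, float) + k_d * dXYZ
```

Because the band sums are linear in E(λ), converting the source white maps exactly to the destination white in this sketch.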
Results of this approach are summarized in Tables 13.7 and 13.8. Generally, the conversion accuracy is about a factor of three or more better than the best results of the first and second methods. However, this method gives negative S_r, S_g, and S_b values for some input tristimulus values if their values are small or if the Z value is substantially higher than X and Y. Because of this problem, we perform the computation in stages. First, we compute the S_r, S_g, and S_b values using Eq. (13.19); any negative values obtained in this step can be found and dealt with. The second step computes the illuminant difference using Eq. (13.20). Knowing the illuminant difference, we can compute the destination tristimulus values via Eq. (13.17). In this way, we are able to check the values of S_r, S_g, and S_b; if any of these values is negative, we can set it to zero. Table 13.7 contains the results using this two-step computation; Table 13.8 uses the one-step conversion via the derived matrices given in Table 13.6. Generally, the two-step computation gives a smaller average error (about 15% smaller) as well as a smaller maximum error than the single-step computation because it has the advantage of checking for negative S_r, S_g, and S_b values and setting them to zero. Moreover, the RGB partition also makes a difference in the conversion accuracy. Using the partition R: [580, 700] nm, G: [490, 580] nm, and B: [400, 490] nm, we obtain better agreement, as shown in both Tables 13.7 and 13.8; this is because the second partition gives fewer and smaller negative values for S_r, S_g, and S_b. For a given partition, the conversion accuracies of the forward and backward transformations are about the same. Also, there is a dependency of the conversion accuracy on the illuminant separation in the chromaticity diagram. For example, the C ↔ D65 transformations have the shortest distance and thus the smallest errors. However, the dependence on the
Table 13.6 White-point conversion matrices derived from illuminant spectra differences.

Source  Destination   Partition 1                  Partition 2
A       C             0.4960  0.2426  0.5069       0.5815  0.1384  0.5358
                      0.2960  1.2558  0.1576       0.3539  1.3254  0.1407
                      0.0640  0.1393  3.4730       0.1394  0.2522  3.5576
C       A             1.8219  0.3796  0.2487       1.6449  0.2173  0.2392
                      0.4314  0.7024  0.0949       0.4427  0.6904  0.0940
                      0.0163  0.0352  0.2887       0.0331  0.0575  0.2838
A       D50           0.5882  0.1886  0.2920       0.6407  0.1249  0.3094
                      0.2391  1.2008  0.0995       0.2944  1.2633  0.0950
                      0.0304  0.0663  2.3510       0.0683  0.1237  2.3956
D50     A             1.6032  0.2622  0.1881       1.5041  0.1671  0.1877
                      0.3201  0.7785  0.0727       0.3524  0.7494  0.0753
                      0.0117  0.0253  0.4257       0.0247  0.0435  0.4189
A       D65           0.4874  0.2345  0.4519       0.5628  0.1423  0.4786
                      0.3035  1.2623  0.1420       0.3711  1.3397  0.1331
                      0.0522  0.1138  3.1583       0.1188  0.2150  3.2373
D65     A             1.8502  0.3662  0.2483       1.6841  0.2174  0.2400
                      0.4464  0.7007  0.0954       0.4695  0.6810  0.0974
                      0.0145  0.0313  0.3173       0.0306  0.0532  0.3112
C       D50           1.1483  0.0805  0.0799       1.0988  0.0352  0.0771
                      0.0809  0.9378  0.0257       0.0718  0.9415  0.0213
                      0.0115  0.0246  0.6775       0.0217  0.0374  0.6752
D50     C             0.8669  0.0717  0.1049       0.9102  0.0299  0.1049
                      0.0743  1.0592  0.0314       0.0686  1.0585  0.0256
                      0.0174  0.0373  1.4767       0.0331  0.0577  1.4831
C       D65           0.9819  0.0044  0.0130       0.9729  0.0034  0.0122
                      0.0106  1.0068  0.0033       0.0217  1.0131  0.0006
                      0.0053  0.0113  0.9097       0.0069  0.0118  0.9106
D65     C             1.0186  0.0042  0.0146       1.0278  0.0036  0.0137
                      0.0108  0.9932  0.0037       0.0220  0.9870  0.0004
                      0.0058  0.0123  1.0994       0.0075  0.0128  1.0983
D50     D65           0.8512  0.0662  0.0837       0.8849  0.0334  0.0842
                      0.0841  1.0658  0.0257       0.0893  1.0717  0.0246
                      0.0104  0.0223  1.3430       0.0231  0.0403  1.3500
D65     D50           1.1683  0.0741  0.0714       1.1281  0.0378  0.0696
                      0.0924  0.9321  0.0236       0.0944  0.9293  0.0228
                      0.0075  0.0161  0.7447       0.0165  0.0284  0.7412

Partition 1: R = [600, 700] nm, G = [500, 600] nm, and B = [400, 500] nm.
Partition 2: R = [580, 700] nm, G = [490, 580] nm, and B = [400, 490] nm.
Table 13.7 Conversion accuracies of the third method using the two-step computation.

Source      Destination           CIEXYZ              CIELAB                Spectrum
illuminant  illuminant  Ratio     Avg.  RMS   Max.    Avg.  RMS   Max.     partition  Clip
A           C           1.01339   1.31  1.77  3.85    5.00  7.38  22.87    #1         No
                                  1.29  1.74  3.66    4.69  6.63  17.07    #1         Yes
                                  1.14  1.56  4.37    4.39  6.03  16.18    #2         No
                                  1.10  1.49  4.32    4.12  5.62  15.84    #2         Yes
C           A           0.98679   1.12  1.55  3.67    6.31  10.9  37.09    #1         No
                                  0.95  1.29  3.24    4.44  6.33  15.78    #1         Yes
                                  0.84  1.09  2.72    4.24  6.57  24.75    #2         No
                                  0.81  1.02  2.51    3.90  5.66  16.79    #2         Yes
A           D50         1.02706   0.71  0.94  2.13    3.79  5.77  17.85    #1         No
                                  0.70  0.94  2.14    3.47  5.01  13.14    #1         Yes
                                  0.65  0.85  2.17    3.49  4.70  12.75    #2         No
                                  0.62  0.81  2.13    3.31  4.47  12.42    #2         Yes
D50         A           0.97366   0.68  0.94  2.28    4.30  7.29  23.82    #1         No
                                  0.62  0.84  2.13    3.25  4.71  12.55    #1         Yes
                                  0.58  0.74  1.75    3.23  4.79  16.66    #2         No
                                  0.56  0.71  1.69    3.06  4.36  12.55    #2         Yes
A           D65         1.02102   1.16  1.55  3.52    5.11  7.67  23.68    #1         No
                                  1.16  1.53  3.41    4.76  6.83  17.73    #1         Yes
                                  1.05  1.43  3.92    4.70  6.36  16.93    #2         No
                                  1.01  1.37  3.88    4.44  6.00  16.57    #2         Yes
D65         A           0.97942   1.04  1.46  3.54    6.30  11.0  37.23    #1         No
                                  0.91  1.23  3.10    4.42  6.35  16.12    #1         Yes
                                  0.84  1.08  2.60    4.39  6.72  24.58    #2         No
                                  0.80  1.02  2.41    4.04  5.81  16.77    #2         Yes
C           D50         1.01349   0.43  -     1.30    1.36  -     5.67     #1         Yes
                                  0.34  0.43  1.09    1.01  1.40  4.26     #2         Yes
D50         C           0.98669   0.53  -     1.72    1.32  -     5.27     #1         Yes
                                  0.44  0.58  1.58    1.04  1.44  4.38     #2         No
                                  0.43  0.57  1.52    1.00  1.36  4.01     #2         Yes
C           D65         1.00753   0.28  -     0.66    0.57  -     2.74     #1         Yes
                                  0.23  0.28  0.60    0.47  0.56  1.48     #2         Yes
D65         C           0.99253   0.29  -     0.72    0.56  -     2.63     #1         Yes
                                  0.24  0.30  0.66    0.47  0.56  1.44     #2         Yes
D50         D65         0.99412   0.37  -     1.07    1.27  -     4.34     #1         Yes
                                  0.31  0.43  1.22    1.09  1.52  4.33     #2         Yes
                                  0.34  0.44  1.03    1.13  1.64  4.26     #3         Yes
D65         D50         1.00592   0.32  -     0.95    1.26  -     4.21     #1         Yes
                                  0.27  0.35  0.94    1.08  1.52  4.33     #2         Yes

Partition #1: R = [600, 700] nm, G = [500, 600] nm, and B = [400, 500] nm.
Partition #2: R = [580, 700] nm, G = [490, 580] nm, and B = [400, 490] nm.
Partition #3: R = [590, 700] nm, G = [500, 590] nm, and B = [400, 500] nm.
Table 13.8 Conversion accuracies of the third method using a single conversion matrix.

Source      Destination  CIEXYZ              CIELAB               Spectrum
illuminant  illuminant   Avg.  RMS   Max.    Avg.  RMS    Max.    partition
A           C            1.71  2.06  4.37    5.12  7.45   23.12   1
                         1.49  1.72  3.67    4.39  6.00   16.15   2
C           A            1.56  1.85  3.57    6.49  11.07  37.53   1
                         1.09  1.31  3.15    4.30  6.65   25.06   2
A           D50          1.87  2.11  3.46    4.13  5.97   18.39   1
                         1.71  1.98  3.42    3.60  4.71   12.85   2
D50         A            1.85  2.13  3.59    4.72  7.58   24.61   1
                         1.59  1.93  3.97    3.44  4.94   17.20   2
A           D65          1.88  2.15  3.81    5.33  7.80   24.09   1
                         1.67  1.88  3.73    4.73  6.33   16.90   2
D65         A            1.83  2.10  3.43    6.61  11.25  37.93   1
                         1.39  1.67  3.89    4.51  6.84   25.05   2
C           D50          0.95  1.14  2.01    1.64  2.35    7.38   1
                         0.97  1.12  1.83    1.18  1.53    4.93   2
D50         C            1.09  1.32  2.07    1.52  2.08    6.22   1
                         1.11  1.30  2.55    1.16  1.49    4.40   2
C           D65          0.48  0.57  0.98    0.63  0.77    2.56   1
                         0.49  0.56  0.87    0.54  0.60    1.28   2
D65         C            0.49  0.59  1.02    0.62  0.75    2.45   1
                         0.50  0.58  0.89    0.54  0.59    1.25   2
D50         D65          0.61  0.74  1.59    1.45  2.18    6.93   1
                         0.61  0.74  1.65    1.18  1.61    4.83   2
D65         D50          0.54  0.66  1.36    1.55  2.41    7.92   1
                         0.55  0.64  1.29    1.18  1.64    5.21   2

Partition 1: R = [600, 700] nm, G = [500, 600] nm, and B = [400, 500] nm.
Partition 2: R = [580, 700] nm, G = [490, 580] nm, and B = [400, 490] nm.
illuminant distance is not as pronounced as in the first and second methods. This method gives very good conversion accuracies for conversions between C, D50, and D65. The conversion accuracy ranges from good to marginal for conversions involving illuminant A. The marginal accuracies occur in conversions that use the equal partition (partition 1) with a large distance between the source and destination illuminants in the chromaticity diagram, such as C → A and D65 → A; they become acceptable when the spectrum changes to partition 2. Compared to the first two methods, this method gives the closest agreement for any pair of white points, and the maximum color difference is much smaller than in the other approaches.
13.4 White-Point Conversion via Polynomial Regression
Polynomial regression is discussed in Chapter 8. Six polynomials (3, 4, 7, 10, 14, or 20 terms; see Table 8.1) are used for regression. The training data are the 135 sets of measured values, the same sets used by the other methods. Table 13.9 gives the derived coefficients of the three-term linear regression using 135 data points
Table 13.9 Derived coefficients of the three-term linear regression using 135 data points for white-point conversions between illuminants A, C, D50, and D65.
Source      Destination
illuminant  illuminant   CIEXYZ space                 CIELAB space
A           C            0.5916  0.1549  0.4780       1.0024  0.0949  0.0367
                         0.3434  1.3146  0.1836       0.0289  1.0927  0.3284
                         0.2185  0.3119  3.5109       0.0172  0.2543  1.0274
C           A            1.6187  0.2329  0.2088       0.9939  0.0957  0.0646
                         0.4342  0.6867  0.0950       0.0317  0.9512  0.2967
                         0.0646  0.0780  0.2894       0.0239  0.2249  1.0401
A           D50          0.6504  0.1483  0.2732       1.0025  0.0808  0.0260
                         0.2911  1.2764  0.1317       0.0143  1.1033  0.1843
                         0.0916  0.1352  2.4081       0.0138  0.1956  1.0147
D50         A            1.4711  0.1836  0.1573       0.9959  0.0778  0.0391
                         0.3413  0.7346  0.0788       0.0151  0.9229  0.1653
                         0.0379  0.0493  0.4168       0.0163  0.1737  1.0150
A           D65          0.5695  0.1672  0.4258       1.0032  0.1023  0.0359
                         0.3668  1.3424  0.1830       0.0222  1.1190  0.2959
                         0.1744  0.2511  3.2076       0.0191  0.2559  1.0176
D65         A            1.6519  0.2368  0.2064       0.9935  0.1005  0.0629
                         0.4622  0.6680  0.0994       0.0249  0.9278  0.2639
                         0.0559  0.0673  0.3152       0.0245  0.2245  1.0433
C           D50          1.1033  0.0321  0.0709       0.9995  0.0155  0.0147
                         0.0714  0.9576  0.0223       0.0178  1.0349  0.1476
                         0.0625  0.0701  0.6905       0.0048  0.0509  1.0019
D50         C            0.9096  0.0241  0.0942       1.0002  0.0147  0.0127
                         0.0658  1.0401  0.0268       0.0168  0.9688  0.1453
                         0.0883  0.1026  1.4538       0.0040  0.0510  1.0046
C           D65          0.9724  0.0101  0.0116       1.0011  0.0069  0.0014
                         0.0268  1.0255  0.0021       0.0083  1.0307  0.0402
                         0.0271  0.0302  0.9156       0.0021  0.0042  0.9889
D65         C            1.0285  0.0106  0.0131       0.9990  0.0067  0.0011
                         0.0270  0.9749  0.0019       0.0081  0.9697  0.0396
                         0.0295  0.0325  1.0926       0.0021  0.0040  1.0110
D50         D65          0.8829  0.0351  0.0749       1.0012  0.0214  0.0131
                         0.0916  1.0656  0.0280       0.0091  0.9979  0.1087
                         0.0545  0.0634  1.3295       0.0060  0.0543  0.9942
D65         D50          1.1317  0.0405  0.0629       0.9986  0.0219  0.0154
                         0.0985  0.9336  0.0252       0.0097  1.0054  0.1087
                         0.0421  0.0465  0.7536       0.0066  0.0538  1.0111
for white-point conversions between illuminants A, C, D50, and D65. The reverse transform matrix is the inverse of the forward transform matrix.
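The three-term regression is an ordinary least-squares fit of a 3x3 matrix. A sketch on synthetic training pairs (stand-ins for the 135 measured patches, with an arbitrary ground-truth matrix for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training pairs standing in for the 135 measured patches.
xyz_src = rng.uniform(2.0, 95.0, size=(135, 3))
true_M = np.array([[1.10, 0.03, 0.07],
                   [0.09, 0.96, 0.02],
                   [0.04, 0.05, 0.75]])
xyz_dst = xyz_src @ true_M.T + rng.normal(0.0, 0.05, size=(135, 3))

# Three-term linear regression: least-squares fit of the 3x3 matrix.
W, *_ = np.linalg.lstsq(xyz_src, xyz_dst, rcond=None)
M_fwd = W.T                       # rows map (X, Y, Z)_s to (X, Y, Z)_d

# The reverse transform is the inverse of the forward matrix.
M_rev = np.linalg.inv(M_fwd)
```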
Results of the conversion accuracies for the polynomial regression are given in Appendix 8. The average errors are small compared to the previous methods. The root-mean-square error is only slightly larger than the average, indicating that there are no significant deviations from the mean, as evidenced by the small maximum errors. As expected, the results indicate that the higher the polynomial, the smaller the error. However, the improvement levels off around the 10-term polynomial, as shown in Fig. 13.8. For a given polynomial, the conversion accuracies for the forward and backward directions are about the same. The number of data points also affects the regression results, where the average error increases slightly with an increasing number of data points.
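Once the coefficients have been derived, applying a polynomial conversion is just the evaluation of a term vector followed by a matrix multiplication. The Python sketch below illustrates the mechanics only; the 11-term set shown (X, Y, Z, XY, XZ, YZ, X², Y², Z², XYZ, 1) and the identity coefficients are illustrative assumptions, not the fitted values behind the results reported here.

```python
# Sketch of applying a polynomial white-point conversion once the
# coefficients are known.  The 11-term set used here is an assumed
# example; the exact term sets of the fitted polynomials are not
# repeated in this illustration.

def poly_terms(X, Y, Z):
    """Build the assumed 11-term vector for one color."""
    return [X, Y, Z, X*Y, X*Z, Y*Z, X*X, Y*Y, Z*Z, X*Y*Z, 1.0]

def convert(coeffs, X, Y, Z):
    """Apply a 3 x 11 coefficient matrix (one row per output channel)."""
    t = poly_terms(X, Y, Z)
    return [sum(c * v for c, v in zip(row, t)) for row in coeffs]

# Identity coefficients (hypothetical) just pass the input through,
# which is a quick sanity check of the machinery.
identity = [
    [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0],
]
print(convert(identity, 41.2, 35.7, 18.3))  # [41.2, 35.7, 18.3]
```

With real fitted coefficients, the per-color cost is exactly one term-vector build plus one matrix-vector product, which is why the regression method is cheap once training is done.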
There is always a lingering question about regression fitting: How good is the regression fitting for nontraining data? We have checked all six polynomial conversions using a set of 65 data points. Results show that the average errors are slightly higher (no more than 20%) for the three-term and seven-term polynomials, where the maximum error is smaller than the training set for the three-term polynomial and about 50% higher for the seven-term polynomial. For the eleven-term polynomial, the average and maximum errors are about double those of the training results, but this may not be a concern because the average and maximum errors are so small (see Appendix 8).

Figure 13.8 Relationships between the average color difference and the number of polynomial terms for several white-point conversions.

Figure 13.9 Relationships between the average color difference and the distance between illuminants.
Like the previous methods, the adequacy of this white-point conversion method
depends on the distance between the source and destination illuminants; the error
is smaller if two illuminants are closer in the chromaticity diagram. Figure 13.9
depicts the average color difference as a function of the distance between two illu-
minants for three polynomials; they have a near-linear relationship.
13.5 Remarks
The first method of white-point conversion, using ratios of the proportional constants of the RGB primaries, is basically the von Kries type of coefficient rule. It provides the simplest conversion mechanism and the lowest computational cost, but gives the lowest conversion accuracy. The conversion accuracy of the first method depends on: (i) the intermediate RGB space (a wide-gamut RGB space such as ROMM/RGB gives the best accuracy); and (ii) the distance between illuminants in the chromaticity diagram (the smaller the distance, the better the accuracy). Minor accuracy improvement can be gained by applying an optimal scaling factor to the ratio of the proportional constant. The second method of white-point conversion, using the ratios of tristimulus values, is empirical. However, it gives better results at a higher computational cost than the first method. The conversion accuracy of the second method can only be matched by the ROMM/RGB and optimal results of the first method. The third method of white-point conversion, using differences of white points, gives very good results with reasonable computational cost. It is based on the CIE definition with a sound mathematical derivation. The resulting formula shows that it is a scaling by the ratio (k_d/k_s) together with a matrix correction of the illuminant differences. Compared to the XCES and Kang conversions, this method gives the closest agreement for any pair of white points. Also, the maximum color difference is much smaller than in the other two approaches. The results confirm that this approach is built on solid theoretical ground and may have caught the essence of white-point conversion. Polynomial regression has the highest accuracy and the highest computational cost. It requires a substantial amount of training data, and the derivation of polynomial coefficients requires extensive computation. However, once the matrix coefficients are obtained, the computational cost reduces to a matrix-vector multiplication; for a three-term function, it is the same as the first method. The real concern regarding the regression method is that there is no guarantee that the testing is as good as the training. The limited tests show that this is not a serious problem. Thus, polynomial regression can be considered the optimal method of white-point conversion, other than measurement of samples under the destination conditions. Moreover, the number of polynomial terms can be varied to suit the accuracy requirement or the allowable computational cost.
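The coefficient rule itself can be sketched in a few lines: each channel is scaled by the ratio of the destination to the source white. The version below applies the scaling directly to the tristimulus values (the spirit of the second method's ratios of tristimulus values); the first method applies the same rule after transforming into an intermediate RGB space, which is omitted here for brevity. The white-point values are the standard CIE 1931 2-deg observer whites (Y = 100).

```python
# Minimal sketch of a von Kries-type white-point conversion: each
# tristimulus value is scaled by the ratio of the destination and
# source white points.

D65 = (95.047, 100.000, 108.883)  # CIE 1931 2-deg white point, Y = 100
D50 = (96.422, 100.000, 82.521)

def scale_white(xyz, src_white, dst_white):
    # v / s * d keeps the source white mapping exactly onto the
    # destination white (v / s is exactly 1.0 when v == s).
    return tuple(v / s * d for v, s, d in zip(xyz, src_white, dst_white))

# The source white itself must map exactly onto the destination white.
print(scale_white(D65, D65, D50))  # (96.422, 100.0, 82.521)
```

A colored sample, unlike the white, is only approximately preserved by this diagonal scaling, which is why the method's accuracy degrades as the two illuminants move apart in the chromaticity diagram.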
Results from the illuminant difference method (see Tables 13.7 and 13.8) are comparable to the corresponding results from linear regression (see Appendix 8). In many cases, if the RGB partition is correct, the method of illuminant difference gives a better conversion accuracy than linear regression. Note that the CIEXYZ matrices derived from illuminant difference using partition 2 are very close to the corresponding matrices of linear regression for conversions between illuminants C, D50, and D65; the diagonal elements, in particular, are almost the same (the differences are no greater than a few percent), indicating that they both are near optimal. Results from all four methods reconfirm the earlier finding that the adequacy of the white-point conversion method depends on the differences of the correlated color temperatures of the white points. The larger the temperature difference, the less accurate the conversion. Another characteristic is that for all except the second method, the conversion accuracies for forward and backward transformations are about the same.
References
1. Color Encoding Standard, Xerox Corp., Xerox Systems Institute, Sunnyvale, CA, p. C-3 (1989).
2. H. R. Kang, "Color scanner calibration of reflected samples," Proc. SPIE 1670, pp. 468-477 (1992).
3. CIE, Colorimetry, Publication No. 15, Bureau Central de la CIE, Paris (1971).
4. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 55-62 (1996).
Chapter 14
Multispectral Imaging
A multispectral image is one in which each pixel contains information about the spectral reflectance of the scene with more channels of responses than the normal trichromatic system. By this definition, four-color and high-fidelity color printing are forms of multispectral imaging. The number of channels varies from four to several hundreds of bands for hyperspectral images. Because of its rich information content, multispectral imaging has a very wide range of applications, such as art-object analysis and archive [1,2], astronomy [3], camera and television development [4], computer graphics [5], color copying, color printing [6], medicine [8,9], and remote sensing [10]. It is one of the most actively researched areas in computational color technology.
A system commonly used to acquire multispectral color images consists of a monochrome digital camera coupled with a set of color filters and an image-processing module for spectrum reconstruction, as shown in Fig. 14.1. By using a single sensor array that has high spectral sensitivity across a wide range of electromagnetic wavelengths, from infrared (IR) through visible light to ultraviolet (UV), multichannel information is obtained by taking multiple exposures of a scene, one for each filter in the filter wheel [11-14]. Multispectral color imaging is an expansion of the conventional trichromatic system with the number of channels (or bands), provided by different filters, greater than three.
Another way of obtaining multichannel information is by using a conventional trichromatic camera (or a scanner), as shown in Fig. 14.2 [15], coupled with a color filter placed between the object and the camera (or scanner) to bias the signals and artificially increase the number of channels by a multiple of three. The filter is not an integral part of the camera or scanner. This method has been used by Farrell and coworkers for converting a color scanner into a colorimeter [16]. They introduced a color filter between the object and scanner, then made an extra scan to obtain a set of biased trichromatic signals. This effectively increases the number of color signals (or channels) by a multiple of three. With the proper choice of color filter, they were able to achieve six-channel outputs from a three-sensor scanner. This approach has been applied to camera imaging [17-19]. For example, Imai and colleagues used the Kodak Wratten filter 38 (light blue) in front of the IBM PRO/3000 digital camera to give six-channel signals with two exposures, with and without the filter [18,19]. One can increase the number of signals by a multiple of three by replacing the light-blue Wratten filter 38 with a different color filter, then taking another exposure to obtain nine-channel signals. By changing the front filter, one can create any number of signals in multiples of three. Unlike a scanner, which has a fixed lighting system, the illumination of a camera can also be changed to provide more ways of obtaining multichannel signals [17-19].

Figure 14.1 Schematic view of the image-acquisition system and multispectral reconstruction module.

Figure 14.2 Digital camera with three-path optics to acquire color components in one exposure [15].
The quality of a multispectral color-image acquisition system depends on many factors; important ones are the spectral sensitivity of the camera, the spectral sensitivity of each channel, the spectral radiance of the illuminant, and the performance of the imaging module. This chapter presents applications of the vector-space representation in multispectral imaging, including filter design and selection, camera and scanner spectral characterizations, spectrum reconstruction, multispectral image representation, and image quality.
14.1 Multispectral Irradiance Model
An image-irradiance model of multispectral devices provides the output color stimulus values registered by the pixels, equal to the integral of the sensor-response functions Ã(λ) multiplied by the input spectral function φ(λ). The spectral function, in turn, is the product of the illuminant SPD E(λ) and the surface spectral reflectance function S(λ), where λ is the wavelength (see Section 1.1):

ν = k ∫ Ã(λ)φ(λ) dλ = k ∫ Ã(λ)E(λ)S(λ) dλ = k Ã^T φ = k Ã^T E S = k Λ̃^T S.   (14.1)
ν = [ν_1, ν_2, ν_3, ..., ν_m]^T is a vector of m elements, where m is the number of channels; for a multispectral image, m > 3. The parameter k is a constant. For simplification of the subsequent expressions, we will drop the constant k; one can think of it as being factored into the vector ν. The function Ã(λ) can be any set of sensor sensitivities from a digital camera or scanner with m channels of sensor responses, such that Ã is an n × m matrix containing the sampled sensor-response functions in the spectral range of the device, where n is the number of samples in the spectral range:

Ã = [ a_1(λ_1)  a_2(λ_1)  ...  a_m(λ_1)
      a_1(λ_2)  a_2(λ_2)  ...  a_m(λ_2)
      a_1(λ_3)  a_2(λ_3)  ...  a_m(λ_3)
        ...       ...     ...    ...
      a_1(λ_n)  a_2(λ_n)  ...  a_m(λ_n) ].   (14.2)
The sensor-response function is the integrated output of the spectral transmittance of the filter set F_j(λ) and the spectral sensitivity of the camera sensors V(λ), assuming that all elements in the sensor array have the same sensitivity. The filter set contains filters with different spectral ranges of transmittance in order to acquire different color components of the stimulus. Thus, each filter provides a unique spectral channel that is the product of the filter transmittance and the camera sensitivity:

Ã_j(λ) = F_j(λ)V(λ).   (14.3)
The matrix Ã of Eq. (14.2) can also be written in the vector-matrix representation:

Ã = [ f_1(λ_1)v(λ_1)  f_2(λ_1)v(λ_1)  ...  f_m(λ_1)v(λ_1)
      f_1(λ_2)v(λ_2)  f_2(λ_2)v(λ_2)  ...  f_m(λ_2)v(λ_2)
      f_1(λ_3)v(λ_3)  f_2(λ_3)v(λ_3)  ...  f_m(λ_3)v(λ_3)
          ...             ...         ...      ...
      f_1(λ_n)v(λ_n)  f_2(λ_n)v(λ_n)  ...  f_m(λ_n)v(λ_n) ].   (14.4)
E is usually expressed as an n × n diagonal matrix containing the sampled illuminant SPD:

E = diag[ e(λ_1), e(λ_2), e(λ_3), ..., e(λ_n) ].   (14.5)
Usually, the sensor-response matrix is combined with the illuminant matrix, Λ̃^T = Ã^T E (equivalently, Λ̃ = EÃ, because E is diagonal), with the explicit expression

Λ̃ = [ f_1(λ_1)v(λ_1)e(λ_1)  f_2(λ_1)v(λ_1)e(λ_1)  ...  f_m(λ_1)v(λ_1)e(λ_1)
      f_1(λ_2)v(λ_2)e(λ_2)  f_2(λ_2)v(λ_2)e(λ_2)  ...  f_m(λ_2)v(λ_2)e(λ_2)
      f_1(λ_3)v(λ_3)e(λ_3)  f_2(λ_3)v(λ_3)e(λ_3)  ...  f_m(λ_3)v(λ_3)e(λ_3)
            ...                   ...             ...        ...
      f_1(λ_n)v(λ_n)e(λ_n)  f_2(λ_n)v(λ_n)e(λ_n)  ...  f_m(λ_n)v(λ_n)e(λ_n) ].   (14.6)

Surface reflectance S is a vector of n elements, S = [s(λ_1), s(λ_2), s(λ_3), ..., s(λ_n)]^T, consisting of the sampled spectrum of an object.
14.2 Sensitivity and Uniformity of a Digital Camera
The quality of a multispectral color-image acquisition system depends on the uniformity of the sensor array and the spectral sensitivity of the camera. The sensor of a digital camera is usually a CCD (charge-coupled device) or CMOS array that is located at the focal plane of a lens system and serves as an alternative to photographic film. Generally, sensor array size is specified by the diagonal length in millimeters; for example, an 8-mm CCD array has a width of 6.4 mm and a height of 4.8 mm. The number of photosites in a CCD array varies from 128 to 4096 or more per dimension. An 8-mm array with 720 pixels in width will have 540 pixels in height, to give a total of 388,800 pixels. A commercial Quantix camera uses the Kodak KAF-6303E 2048 × 3072 sensor, which gives 6.29 Mpixels. The number of photosites that can be packed into a CCD array represents the spatial resolution of the CCD camera and, in turn, affects the image quality in the spatial domain. The cell size ranges from 2.5 × 2.5 µm to 16 × 16 µm. A larger cell is able to receive more light and therefore has both a higher dynamic range and lower noise.
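These figures can be verified with a few lines of arithmetic:

```python
import math

# An "8-mm" CCD: the size designation is the diagonal of the array.
width_mm, height_mm = 6.4, 4.8
print(round(math.hypot(width_mm, height_mm), 6))  # 8.0

# 720 x 540 photosites on that array:
print(720 * 540)  # 388800

# Kodak KAF-6303E: 2048 x 3072 photosites, about 6.29 Mpixels.
print(2048 * 3072 / 1e6)  # 6.291456
```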
For a typical CMOS sensor array, each photosite is a photodiode that converts light into electrons. The photodiode is a type of semiconductor; therefore, a CMOS sensor array can be fabricated using the same manufacturing process as silicon chips, which allows a high degree of integration to include an A/D converter, timing devices, an amplifier, and a readout control. The biggest issue with a CMOS sensor array is the noise caused by variability in the readout transistors and amplifiers [20]. Because of the noise problem, a CMOS sensor array has a lower dynamic range than a CCD sensor array.
14.2.1 Spatial uniformity of a digital camera
In a conventional trichromatic digital camera, there are several methods of obtaining the trichromatic signals. One method is to obtain each component sequentially by taking three exposures, each with a proper filter. The second method is to obtain all color signals in one exposure, using optics to split the beam into three paths; each path has a sensor array coupled with a proper filter to detect the signal (see Fig. 14.2) [15]. The third method is to obtain all color signals in one exposure with a single sensor array coupled with filters in a mosaic arrangement, as shown in Fig. 14.3. This mosaic filter arrangement is called the color filter array (CFA), where the filtered color components at a given photosite are obtained via interpolation from neighboring pixels containing the color component. A CFA can have various patterns, in either additive RGB filters [21] or complementary CMYG arrangements [22,23]. Figure 14.3 shows three RGB and three CMYG patterns; some of them are intended to enhance the perceived sharpness by doubling the photosites for the green signal because it coincides with the visual sensitivity curve, containing the most luminance information. The process of obtaining the filtered color components has been referred to as "color sensor demosaicing," "CFA interpolation," or "color reconstruction." Many algorithms have been developed for use with different CFA patterns [20,24].
Multispectral image acquisition expands on the existing trichromatic systems. The first method is expanded to include a filter wheel with four or more filters. Multiple exposures are taken, one for each filter in the wheel, to obtain multispectral signals. The second method, which acquires all multispectral signals in one exposure, requires that the beam be split into four or more paths, where each path has its own optics, filter, and detector. To provide a high number of channels, this method becomes complex and expensive. In addition, the light intensity can be attenuated by the optics to the extent that it may not be detectable. The third method can be expanded into a multispectral system by designing a mosaic filter array to have the desired number of different color photosites. The CFA approach is already a stretch for a trichromatic camera; it is too coarse in spatial resolution for partitioning the photosites into many color zones. It may be acceptable in trichromatic devices to lower the cost, but it will give low resolution to the acquired color signals, where color values of different channels have to be interpolated from distant pixels. This leaves the first method and the method of using a trichromatic camera coupled with a filter to acquire biased signals in a multiple of three as viable techniques for acquiring multichannel signals. These methods have simple optics and do not require interpolation; the only assumption is that the temporal variation is minimal over the short time period during the multiple exposures. However, there is doubt about the independence of the biased trichromatic signals from the original trichromatic signals.

Figure 14.3 Mosaic filter array arrangements for a digital camera.
Recently, there has been new hardware development for acquiring six spectral bands. The novel multichannel sensors have three vertically integrated amorphous p-i-n diodes. Each layer has a different thickness and is applied with a different voltage to generate a different spectral sensitivity, such that the top layer gives the blue signal, the middle layer gives green, and the bottom layer gives red [25]. To obtain six-channel responses, color signals are captured in two exposures by changing the bias voltages. This method is limited to six or an increment of three channels. It is not known whether the number of sensors can be increased or varied.
14.2.2 Spectral sensitivity of a digital camera
Typical CCD cameras have a much broader spectral sensitivity than spectrophotometers, with sensitivity extending up to 1100 nm in the near-infrared region; some silicon-based CCDs have good quantum response down to 10 nm or lower. The highest sensitivity is in the visible spectrum. The quantum efficiency of a CCD cell is much higher than that of photographic film (about 40 times) in the spectral range of 400 to 1100 nm. In addition, the CCD signal is linear, in comparison with the logarithmic response of photographic film [26]. The broad spectral range permits the detection of near-infrared signals at the long-wavelength end and soft x-ray signals at the short-wavelength end. For working in the visible range, the near-infrared (IR) and ultraviolet (UV) radiations must be rejected via IR and UV cutoff filters, respectively.
14.3 Spectral Transmittance of Filters
The spectral sensitivity of a digital camera is equal to the integrated effects of the sensor responsivity, optics, electronics, and filter transmittance. Given a sensor array, the spectral sensitivity of the camera is determined by the spectral transmittance of the filter. Figure 14.4 shows the spectra of several Kodak color filters. Filters operate by absorption or interference. Absorption filters, such as colored-glass filters, are doped with materials that selectively absorb light by wavelength. The filter absorption obeys the Beer-Lambert-Bouguer law (see Chapter 15): transmittance decreases with increasing thickness. Filter thickness also affects the spectral bandwidth; the bandwidth decreases with increasing thickness. There are many types of absorption filters, such as narrowband, wideband, and sharp-cut filters. Interference filters rely on thin layers of dielectric to cause interference between wavefronts, producing very narrow bandwidths [27]. Any of these filter types can be combined to form a composite filter that meets a particular need.
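The thickness dependence can be sketched numerically: if the internal transmittance of a 1-mm absorption filter is T, the Beer-Lambert-Bouguer law gives T² for a 2-mm filter of the same glass. The absorbance value below is made up for illustration.

```python
# Beer-Lambert-Bouguer law: T(d) = 10 ** (-a * d), where a is the
# absorbance per unit thickness.  Doubling the thickness squares the
# internal transmittance, so thicker filters transmit less and, for a
# peaked absorber, also pass a narrower band.

def transmittance(a, d):
    return 10.0 ** (-a * d)

T1 = transmittance(0.30, 1.0)   # 1-mm filter (made-up absorbance)
T2 = transmittance(0.30, 2.0)   # same glass, 2 mm thick
print(T1, T2, T1 ** 2)          # T2 equals T1 squared
```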
Figure 14.4 Spectra of several Kodak color filters.

Multispectral imaging requires a very specific set of filters that span the spectral range to give optimal color signals. Each filter defines the spectral range of acquisition and the spectral sensitivity. Therefore, filter transmittance is perhaps the most critical characteristic in determining the spectral sensitivity of a digital camera. Several methods have been developed to obtain optimal filters: (i) design a set of optimal filters based on some optimization criteria; (ii) use a set of equal-spacing filters to cover the whole spectrum of interest; (iii) select from a set of available filters by combinatorial search; and (iv) combinatorial search with optimization.
14.3.1 Design of optimal filters
The design of optimal filters is one of the active areas for scanners and cameras [28-36]. Here, we present the work developed by Trussell and colleagues [28-33].

For scanners and cameras, the recording of a color stimulus is obtained by collecting intensities of filtered light via a set of filters (or channels). If F_j represents the transmittance of the jth filter, then the recording process can be modeled as

ν_F = F^T E_r S + η.   (14.7)

The scanner or camera response ν_F is a vector of m elements; each element records the integration of the image spectrum through a filter channel. F is an n × m matrix with m columns of filters and n sampling points of the filter transmittance in the spectral range of interest, E_r is the recording illuminant, S is the spectrum of the stimulus, and η is the additive noise, uncorrelated with the reflectance spectra. The tristimulus values of the original image under a particular viewing illuminant E_v are given in Eq. (14.8):

Υ = A^T E_v S.   (14.8)

Matrix A is a sampled CMF. The estimate Υ̂_E from a scanner is a linear function of the recorded responses ν_F:

Υ̂_E = W ν_F + ε.   (14.9)

Matrix W represents the weights or coefficients, and ε is the residual. Trussell and colleagues applied a linear minimum mean-square error (LMMSE) approach to minimize the error between Υ̂_E and Υ. The minimization is performed with respect to W. They presented the LMMSE estimate as

Υ̂_E = A^T E_v Σ_S E_r F (F^T E_r Σ_S E_r F + Σ_u)^{-1} (ν_F - F^T E_r S̄ - η̄) + A^T E_v S̄.   (14.10)

Matrices Σ_S and Σ_u are the covariance, or correlation, matrices of the reflectance and noise, respectively, and S̄ and η̄ are the means of the reflectance spectra and noise, respectively. In the case of limited information about the reflectance spectra, the problem of finding a set of optimal filters can be formulated using a min/max method to give a maximum-likelihood estimate Ŝ of the reflectance spectrum:

Ŝ = (F^T E_r)^+ ν_F.   (14.11)

The term (F^T E_r)^+ is the Moore-Penrose generalized inverse of the matrix (F^T E_r). Equation (14.11) gives the minimum-norm solution, which is the orthogonal projection of the spectrum S onto the range space of the matrix (F^T E_r). The resulting optimal min/max filters are shown to be any set of filters that, together with the recording illuminant, span the range space of the matrix B, where the basis vectors b_j are the eigenvectors of the largest eigenvalues obtained from a singular value decomposition (SVD) [31].

The problem with this approach is that theoretically optimal filters may not be physically realizable. Even if they are possible to realize, the cost of practical production is usually high.
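The minimum-norm character of Eq. (14.11) is easiest to see in a deliberately tiny case: with one channel (m = 1) and two wavelength samples (n = 2), the combined matrix M = F^T E_r is a single row, and its Moore-Penrose inverse is simply M^T/(M M^T). All numbers below are made up.

```python
# Tiny min/max reconstruction: M = F^T E_r is here a single row
# [0.6, 0.8] (one channel, two wavelength samples), so its
# Moore-Penrose inverse is M^T / (M M^T).

M = [0.6, 0.8]                        # combined filter + illuminant row
nu_F = 1.2                            # recorded response (made up)

mmT = sum(x * x for x in M)           # M M^T, a scalar here
S_hat = [x * nu_F / mmT for x in M]   # S_hat = M^+ nu_F

# The estimate reproduces the measurement, M S_hat == nu_F ...
print(sum(m_i * s_i for m_i, s_i in zip(M, S_hat)))
# ... and is the minimum-norm spectrum that does so (parallel to M^T),
# i.e., the projection of the true spectrum onto the range space of M.
```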
14.3.2 Equal-spacing filter set
A simple and low-cost way of assembling a set of filters is to select from available filters. This method uses a set of narrowband color filters with approximately equal spacing between peak transmittances to cover the whole spectrum of interest (see Fig. 14.5, for example). It is simple and heuristic; therefore, it has been used in many multispectral imaging systems [12,37-41]; for instance, the VASARI scanner implemented at the National Gallery in London used seven bands. Nonetheless, an optimization process can be performed to determine the bandwidth and shape of the filters [39,41]. Konig and Praefcke performed a simulation using a variable number of ideal square filters with variable bandwidth. Their results indicate that the average color error of the reconstructed spectra is lowest at a bandwidth of 50 nm for six filters, 30 nm for ten filters, and 5 nm for sixteen filters [42]. The results seem to indicate that there is a correlation between the bandwidth and the number of filters needed to encompass the spectral range of visible light, which has a width of about 300 nm. Vilaseca and colleagues found that filters with Gaussian transmittance profiles give good results for spectrum reconstruction in the infrared region, using three to five equally spaced Gaussian-shaped filters [43,44].
14.3.3 Selection of optimal filters
Another way of selecting a set of optimal filters from a larger set of available filters is by performing a brute-force exhaustive search of all possible filter combinations: the combinatorial search [1,45-47]. The problem with this approach is that the cost and time required could be prohibitive, because the number of combinations N_c is

N_c = (N_f)! / [(N_s)! (N_f - N_s)!],

where N_f is the number of filters and N_s is the number of selected filters. To select five filters from a set of twenty-seven filters takes (27!)/[(5!)(22!)] = 80,730 combinations. Obviously, this is time-consuming and costly. An intermediate solution to the combinatorial search is found by taking into account the statistical spectral properties of the objects, camera, filters, and illuminant [46,47]. The main idea is to choose filters that have a high degree of orthogonal character after projection into the vector space spanned by the most significant characteristic reflectances of the principal component analysis (PCA). It is recommended that one use this faster method for prescreening of a large set of filters. This narrows the filter selection to a manageable number; then, the exhaustive combinatorial search can be performed on this reduced set to get the desired number of filters.
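The count follows directly from the binomial coefficient:

```python
import math

# Number of ways to choose N_s filters out of N_f:
#   N_c = N_f! / (N_s! * (N_f - N_s)!)
def n_combinations(n_f, n_s):
    return math.factorial(n_f) // (math.factorial(n_s) * math.factorial(n_f - n_s))

print(n_combinations(27, 5))   # 80730
print(math.comb(27, 5))        # same result via the standard library
```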
14.4 Spectral Radiance of Illuminant
Under well-controlled viewing conditions, the spectral radiance of the illuminant is fixed. In this case, it can be incorporated into the spectral sensitivity of the sensors, as given in Eq. (14.6). For outdoor scenes, the illumination varies. Vilaseca and colleagues studied the influence of the illuminant on spectrum reconstruction in the near-infrared region. They used numerical simulation to analyze the influence of the illuminant on the quality of the reconstruction obtained by using commercially available filters that were similar to the derived theoretical optimal filters with equal spacing [48]. The illuminants encompassed a wide range of color temperatures, ranging from 1000 K to 16000 K, calculated from the Planck equation, Eq. (1.10). They obtained very good results for all illuminants considered (RMS < 0.01), indicating that the spectrum reconstruction does not strongly depend on the illuminant used. They concluded that good spectrum reconstructions of various samples could be achieved under any illuminant.
Another application of multispectral imaging is in the area of color constancy. If the finite-dimensional linear model is used, we can also estimate the illuminant SPD. With a number of sensors greater than three, we are no longer constrained to the 3-2 or 2-3 world (see Chapter 12). Thus, we can get more information and better estimates of the illuminant and the object surface spectrum (as given in Section 12.4) from a six-channel digital camera.
14.5 Determination of Matrix Λ̃

Matrix Λ̃ can be obtained from direct spectral measurements. The camera characteristics are determined by evaluating camera responses to monochromatic light at each sample wavelength of the spectrum [37,49,50]. The direct measurement is expensive.

Another way of estimating Λ̃ is by using the camera to acquire a number of color patches with known spectra. By analyzing the camera output with respect to the known spectral information, one can estimate the camera sensitivity by using either pseudo-inverse analysis or PCA. To perform the estimate, one selects p color patches and measures their spectra to obtain a matrix S̃ with a size of n × p, where n is the number of sampling points of the measured spectra:

S̃ = [ s_1(λ_1)  s_2(λ_1)  ...  s_p(λ_1)
      s_1(λ_2)  s_2(λ_2)  ...  s_p(λ_2)
      s_1(λ_3)  s_2(λ_3)  ...  s_p(λ_3)
        ...       ...     ...    ...
      s_1(λ_n)  s_2(λ_n)  ...  s_p(λ_n) ].   (14.12)
Next, the camera responses of these color patches are taken to generate an m × p matrix ν_p, as given in Eq. (14.13); each column contains the m-channel responses of a color patch:

ν_p = [ ν_1,1  ν_2,1  ν_3,1  ...  ν_p,1
        ν_1,2  ν_2,2  ν_3,2  ...  ν_p,2
        ν_1,3  ν_2,3  ν_3,3  ...  ν_p,3
         ...    ...    ...   ...   ...
        ν_1,m  ν_2,m  ν_3,m  ...  ν_p,m ].   (14.13)
Knowing ν_p and S̃, one can calculate the matrix Λ̃ using Eq. (14.14):

ν_p = Λ̃^T S̃.   (14.14)

The simplest way is to invert Eq. (14.14), which results in

Λ̃^T = ν_p S̃^T (S̃ S̃^T)^{-1}.   (14.15)

The product (S̃ S̃^T) and its inverse have a size of n × n; the inverse is then multiplied by the transpose of S̃ to give a p × n matrix. The resulting matrix is again multiplied by ν_p to give a final size of m × n, which is the size of the matrix Λ̃^T. As discussed in Chapter 11, pseudo-inverse analysis is sensitive to signal noise. Considering the uncertainty associated with the color measurements and camera sensitivity, this method does not give good results for estimating Λ̃.
A better way is to use PCA, by performing a Karhunen-Loeve (KL) transform on the matrix S̃ [16,45,51-57]. The basis vectors of the KL transform are given by the orthonormalized eigenvectors of its covariance matrix (see Chapter 11):

Σ = (1/p) ∑_{i=1}^{p} (S̃_i)(S̃_i)^T.   (14.16)

The result of Eq. (14.16) is a symmetric matrix. Now, let b_j and δ_j, j = 1, 2, 3, ..., n, be the eigenvectors and corresponding eigenvalues of Σ, where the eigenvalues are arranged in decreasing order, δ_1 ≥ δ_2 ≥ δ_3 ≥ ... ≥ δ_n; then the transformation matrix B_p is given as an n × n matrix whose columns are the eigenvectors of Σ:

B_p = [ b_11  b_21  b_31  ...  b_i1  ...  b_n1
        b_12  b_22  b_32  ...  b_i2  ...  b_n2
        b_13  b_23  b_33  ...  b_i3  ...  b_n3
         ...   ...   ...  ...   ...  ...   ...
        b_1n  b_2n  b_3n  ...  b_in  ...  b_nn ],   (14.17)

where b_ij is the jth component of the ith eigenvector. Each eigenvector b_i is a principal component of the spectral reflectance covariance matrix Σ. Matrix B_p is a unitary matrix, which reduces the matrix Σ to its diagonal form Δ:

B_p^T Σ B_p = Δ.   (14.18)

The analysis generates a matrix B_p containing the basis vectors, arranged in order of decreasing eigenvalues. The matrix Λ̃ is then estimated by

Λ̃_e = B_p B_p^T Λ̃_p.   (14.19)

The optimal number of principal eigenvectors is usually quite small (this is the major benefit of PCA); it depends on the level of noise [46]. The choice of color patches is also of great importance for the quality of the spectrum reconstruction.
14.6 Spectral Reconstruction
Knowing the matrix Λ̃ or its estimate Λ̃_e, we can perform spectrum reconstruction. We presented many techniques for spectrum reconstruction in Chapter 11. They are classified into two categories: interpolation methods, which include linear, cubic, spline, discrete Fourier transform, and modified discrete sine transform; and estimation methods, which include polynomial regression, Moore-Penrose pseudo-inverse analysis, smoothing inverse analysis, Wiener estimation, and PCA. Here, we present PCA and several inverse estimates for the spectrum reconstruction of multispectral signals.
14.6.1 Tristimulus values using PCA
As given in Section 11.5, the reconstructed spectrum is a linear combination of the basis vectors:

S = BW,  (14.20)

where B is the matrix of basis vectors derived from PCA, and W is the weight vector of m components (or channels). By substituting Eq. (14.20) into Eq. (14.1), we have

υ = Λ^T B W.  (14.21)

Because Λ^T has m rows with a size of m × n, and B has m independent components with a size of n × m, the product (Λ^T B) is an m × m matrix; thus, the weights W can be determined by inverting matrix (Λ^T B), provided that it is not singular. If the channels in the camera are truly independent, which is usually the case, the matrix (Λ^T B) is not singular and can be inverted:

W = (Λ^T B)^{-1} υ.  (14.22)

The reconstructed spectrum S_m of input tristimulus values is

S_m = BW = B (Λ^T B)^{-1} υ.  (14.23)
Multispectral Imaging 315
Generally, PCA-based methods give good results because they minimize the root-mean-square (RMS) error. As pointed out in Chapter 11, RMS is not a good measure for color reconstruction; and there are several disadvantages associated with the PCA approach, as follows:
(1) Basis vectors depend on the set of training samples. The best set of basis vectors for one group might not be optimal for another group. As shown in Chapter 11, several sets of vectors fit reasonably well to a given spectrum, but differences do exist; and some basis sets are better than others.
(2) It may generate negative values for a reconstructed spectrum, as shown in Figs. 11.6 and 11.21.
(3) It causes a wide dynamic range among coefficients of the basis vectors; some coefficients have very large values and some have very small values that stretch the dynamic range of coefficients within a W vector.
Because of these problems, other methods of spectrum estimation should be examined so that we do not rely on one method too heavily.
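A minimal sketch of the reconstruction in Eqs. (14.20)–(14.23), assuming NumPy; the matrices here are random stand-ins, and the round-trip check works only because the test spectrum is constructed to lie in the span of the basis B:

```python
import numpy as np

def reconstruct_pca(Lam, B, v):
    """Eq. (14.23): S_m = B (Lam^T B)^{-1} v.

    Lam : n x m camera sensitivity matrix (one column per channel).
    B   : n x m matrix of the first m PCA basis vectors.
    v   : m-vector of camera responses.
    """
    W = np.linalg.solve(Lam.T @ B, v)   # Eq. (14.22), without explicit inverse
    return B @ W                        # Eq. (14.20)

# Round trip on a spectrum lying in the span of B: the reconstruction
# then reproduces it up to floating-point error.
rng = np.random.default_rng(1)
n, m = 31, 6
B = np.linalg.qr(rng.normal(size=(n, m)))[0]   # orthonormal basis columns
Lam = rng.uniform(0.0, 1.0, size=(n, m))       # stand-in sensitivities
S = B @ rng.normal(size=m)                     # spectrum in the PCA subspace
v = Lam.T @ S                                  # Eq. (14.1): camera response
assert np.allclose(reconstruct_pca(Lam, B, v), S)
```

For spectra outside the span of B the reconstruction is only approximate, which is the situation the disadvantages listed above describe.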
14.6.2 Pseudo-inverse estimation
The simplest way of recovering the spectrum S from an acquired multispectral signal υ is by pseudo-inverse estimation of Eq. (14.1):

S = (Λ Λ^T)^{-1} Λ υ.  (14.24)

Matrix Λ has a size of n × m with m columns, one for each camera channel, and n is the number of samples in the spectrum. The product (Λ Λ^T) is an n × n symmetric matrix with m independent components. The matrix (Λ Λ^T) is singular if n > m. In order to have a unique solution, the rank m of Λ must be equal to n, which means that the number of channels (or filters) must be equal to or greater than the number of spectral sampling points. For m = n, this method practically measures the spectrum of the object in accordance with the sampling rate. In this case, the camera behaves like a spectroradiometer; there is no need to reconstruct the spectrum because it is already captured.
This approach minimizes the Euclidean distance between the object and reconstructed signals in the camera response domain. It is very sensitive to signal noise. Considering the uncertainty associated with any CCD or CMOS camera from random noise, quantization error, computational errors, spectral measurement errors, and optics misalignment, it is not surprising that pseudo-inverse analysis does not give good results for spectrum reconstruction; the maximum color difference can go as high as 83 ΔE*_ab, as shown by Konig and Praefcke.^{42}
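A sketch of the pseudo-inverse estimate of Eq. (14.24); the names are illustrative, and np.linalg.pinv is used so that the singular n > m case degrades gracefully instead of raising an error:

```python
import numpy as np

def pseudo_inverse_estimate(Lam, v):
    """Least-squares solution of Eq. (14.1), v = Lam^T S.

    Equivalent to S = (Lam Lam^T)^{-1} Lam v of Eq. (14.24) when
    Lam Lam^T is nonsingular (m >= n); np.linalg.pinv also returns a
    minimum-norm answer in the singular n > m case.
    """
    return np.linalg.pinv(Lam.T) @ v

# With m = n the camera effectively samples the spectrum directly,
# as the text notes, so the estimate recovers S exactly:
rng = np.random.default_rng(2)
n = m = 8
Lam = rng.uniform(0.1, 1.0, size=(n, m))
S = rng.uniform(size=n)
v = Lam.T @ S
assert np.allclose(pseudo_inverse_estimate(Lam, v), S)
```

With noisy responses the same call amplifies the noise, which is the sensitivity the text warns about.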
14.6.3 Smoothing inverse estimation
Linear algebra provides other methods for solving inverse problems, such as the smoothing inverse and Wiener inverse estimations. The optimal solution is to minimize a specific vector norm ‖S‖_N, where

‖S‖_N = (S N S^T)^{1/2}  (14.25)

and N is the norm matrix. The solution S of the minimal norm is given as^{26,43}

S = N_s^{-1} Λ (Λ^T N_s^{-1} Λ)^{-1} υ,  (14.26)

N_s = |  1  -2   1   0   0  ...          0 |
      | -2   5  -4   1   0  ...          0 |
      |  1  -4   6  -4   1  ...          0 |
      | ...                            ... |
      |  0  ...  0   1  -4   6  -4       1 |
      |  0  ...      0   1  -4   5      -2 |
      |  0  ...          0   1  -2       1 |.  (14.27)

The matrix N_s is singular; it must be modified by adding a noise term in order to have a successful matrix inversion (see Section 11.2).^{25}
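The smoothing inverse of Eqs. (14.26) and (14.27) can be sketched as below. Building N_s as D^T D, where D is the second-difference operator, reproduces the banded matrix of Eq. (14.27), and a small eps*I term stands in for the noise modification mentioned above; the eps value and names are illustrative choices, not the book's.

```python
import numpy as np

def smoothing_matrix(n):
    """Second-difference norm matrix N_s of Eq. (14.27), built as D^T D,
    where D is the (n-2) x n second-difference operator [1, -2, 1]."""
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return D.T @ D

def smoothing_inverse(Lam, v, eps=1e-6):
    """Eq. (14.26) with N_s regularized by eps*I, since N_s itself is
    singular (its null space holds constant and linear spectra)."""
    n = Lam.shape[0]
    Ns_inv = np.linalg.inv(smoothing_matrix(n) + eps * np.eye(n))
    M = Ns_inv @ Lam
    return M @ np.linalg.solve(Lam.T @ M, v)   # N_s^-1 L (L^T N_s^-1 L)^-1 v

rng = np.random.default_rng(3)
n, m = 31, 6
Lam = rng.uniform(size=(n, m))
v = rng.uniform(size=m)
S = smoothing_inverse(Lam, v)
# The estimate reproduces the camera responses exactly while
# penalizing curvature in S:
assert np.allclose(Lam.T @ S, v)
```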
14.6.4 Wiener estimation
The nonlinear estimation method uses the conventional Wiener estimation. There exists a matrix H that provides the spectrum of the object belonging to the matrix Λ:

S_m = H υ,  (14.28)

and

H = K Λ (Λ^T K Λ)^{-1},  (14.29)

where K is a correlation matrix of S given in Eq. (14.30).^{42,58}

K = | 1        ρ        ρ^2   ...  ρ^{n-1} |
    | ρ        1        ρ     ...  ρ^{n-2} |
    | ρ^2      ρ        1     ...  ρ^{n-3} |
    | ...                          ...     |
    | ρ^{n-1}  ρ^{n-2}  ...    ρ   1       |.  (14.30)

The parameter ρ is the adjacent-element correlation factor that is within the range of [0, 1] and can be set by the experimenter; for example, Uchiyama and coworkers set ρ = 0.999 in their study of a unified multispectral representation.^{58} The correlation matrix K has a size of n × n, the matrix Λ is n × m, and its transpose is m × n; this gives the product (Λ^T K Λ) a size of m × m. The size of matrix H given in Eq. (14.29) is (n × n)(n × m)(m × m) = n × m. Once the correlation matrix is selected, the matrix H can be calculated because Λ is known for a given digital camera. Finally, the object spectrum is estimated by substituting H into Eq. (14.28).
A simple version of the Wiener estimation is given by Vilaseca and colleagues.^{48} Their estimate does not include the correlation matrix; Equation (14.29) becomes

H = Λ (Λ^T Λ)^{-1}.  (14.31)

The interesting thing is that they expand the matrix by including second- or higher-order polynomial terms.
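A sketch of the Wiener estimate of Eqs. (14.29) and (14.30), assuming NumPy and ignoring measurement noise as the text does; note that K[i, j] = ρ^|i−j| generates the first-order Markov correlation matrix of Eq. (14.30) directly:

```python
import numpy as np

def markov_correlation(n, rho=0.999):
    """Correlation matrix K of Eq. (14.30): K[i, j] = rho**|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def wiener_matrix(Lam, rho=0.999):
    """Eq. (14.29): H = K Lam (Lam^T K Lam)^{-1}, an n x m matrix."""
    K = markov_correlation(Lam.shape[0], rho)
    M = K @ Lam                              # (n x n)(n x m) = n x m
    return M @ np.linalg.inv(Lam.T @ M)      # times (m x m) = n x m

rng = np.random.default_rng(4)
n, m = 31, 6
Lam = rng.uniform(size=(n, m))               # stand-in sensitivities
H = wiener_matrix(Lam)                       # rho = 0.999 as in Ref. 58
# H reproduces the camera responses: Lam^T H = I by construction.
assert np.allclose(Lam.T @ H, np.eye(m), atol=1e-6)
```

Setting K to the identity reduces wiener_matrix to the simplified form of Eq. (14.31).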
14.7 Multispectral Image Representation
There exist many kinds of multispectral devices that produce multispectral images
with a different number of channels; and there are various representations for the
acquired multispectral images. Sometimes, there comes the need for editing multi-
spectral still images and video images, such as blending two or more videos with
different numbers of bands for cross-media display. Therefore, a unied represen-
tation is needed to encode various multispectral images in a common space.
In view of its efciency in representing spectral information, PCA is useful in
reducing the high dimensionality of the spectral information. With this method,
all multispectral images are represented as coefcients of basis vectors, which
are predetermined and stored for use in reconstruction. Keusen and Praefcke in-
troduced a modied version of PCA to produce the rst three basis vectors that
are compatible with the conventional tristimulus curves. The remaining basis vec-
tors are calculated with the Karhunen-Loeve transform.
38
In this representation,
they suggested that one can use the rst three coefcients for conventional printing
or displaying, whereas high-denition color reconstruction can use the whole set
of coefcients. Murakami and coworkers proposed a weighted Karhunen-Loeve
transform that includes human visual sensitivities to minimize the color difference
between the original and the reconstruction.
59
Konig and Praefcke reported a simulation study comparing color estimation accuracy using multispectral images from a virtual multispectral camera (VMSC) with varying numbers of channels between six and sixteen bands.^{42} Results suggested the possibility of keeping the mean error under 0.5 by using more than ten bands. In this way, representation of spectral information as an output image of a VMSC with this number of bands is reasonable for accurate color reproduction. Uchiyama and coworkers proposed a simple but useful way to define a VMSC with a certain number of bands.^{58} First, they transform real multispectral images with different numbers of bands into virtual multispectral images with the same number of bands as the VMSC. This makes the virtual multispectral images independent from the original input devices. They design virtual filters to have equal sensitivities for each band, located at equal intervals over the visible range (see Fig. 14.5). This method avoids the disadvantages of the PCA-based methods described above. From Eqs. (14.1) and (14.28), we obtain Eq. (14.32) for the virtual camera response υ_v:

υ_v = Λ_v^T S_m = Λ_v^T H υ,  (14.32)
H , (14.32)
where

v
is a matrix with a size of nm that denes the sensitivities of the virtual
channels (virtual lters and CCD or CMOS spectral sensitivities). The jth channel
of the matrix is given in Eq. (14.33).

v,j
() =F
v,j
()V()E(). (14.33)
They designed the VMSC for general usage; therefore, it is not optimized for any
particular sample set. Instead, they used Gaussian curves to dene the spectral
sensitivity of the jth channel.
F
v,j
() =

(2)
1/2

1
exp

[ (
0
+j)]
2
/

2
2

, (14.34)
Figure 14.5 A set of equally spaced Gaussian filters calculated from Eq. (14.34). (Reprinted with permission of IS&T.)^{58}

where λ is the wavelength and Δλ, λ_0, and σ are constants. The center wavelength of the jth filter is given by λ_j = λ_0 + jΔλ, with j = 1, 2, 3, ..., m. They set λ_0 = (380 + Δ_a) nm, and Δλ(m + 1) = (780 - λ_0 - Δ_b) nm. Equation (14.34) gives equally spaced Gaussian filters as shown in Fig. 14.5 for an eight-channel filter set (m = 8) with Δ_a = 25 nm, Δ_b = 60 nm, and σ = Δλ/2. All elements in matrix Λ_v are positive values; therefore, it does not cause negative pixel values.
They used an equal-energy stimulus for E(λ) (a unit matrix for the illuminant E) and an ideal CCD response, V(λ) = 1; thus, Λ_v = F_v and the Wiener estimate of the spectrum in the VMSC space is

S_v = H_v υ_v,  (14.35)

where H_v is a matrix that corresponds to H in Eq. (14.28), but is independent from the linear system matrix Λ. Knowing Λ_v, we can derive H_v from Eq. (14.29).
They experimentally demonstrated how color-reproduction accuracy changes when images with different numbers of bands are transformed into output images of the defined VMSC. Using 24 patches of the Macbeth Color Checker, they obtained average RMSEs of the estimated reflectances S_m and the reestimated virtual reflectances S_v with respect to the measured spectra of 0.047 and 0.062, respectively; the average color differences are 0.80 and 0.83 ΔE*_ab, respectively. The results indicate that this virtual multispectral image representation preserves enough spectral information for accurate color reproductions.
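The virtual filter set of Eq. (14.34), with the eight-channel parameters quoted in the text (Δ_a = 25 nm, Δ_b = 60 nm, σ = Δλ/2), can be sketched as follows; the 5-nm sampling grid is an illustrative assumption:

```python
import numpy as np

def gaussian_filters(m=8, d_a=25.0, d_b=60.0, wl=None):
    """Equally spaced Gaussian virtual filters, Eq. (14.34).

    Returns an (n_wavelengths x m) filter matrix and the filter centers.
    """
    if wl is None:
        wl = np.arange(380.0, 781.0, 5.0)            # 81 samples, 380-780 nm
    lam0 = 380.0 + d_a                               # lambda_0
    d_lam = (780.0 - lam0 - d_b) / (m + 1)           # delta-lambda spacing
    sigma = d_lam / 2.0
    centers = lam0 + d_lam * np.arange(1, m + 1)     # lambda_j = lambda_0 + j*d_lam
    F = np.exp(-(wl[:, None] - centers[None, :]) ** 2 / (2.0 * sigma ** 2))
    return F / (np.sqrt(2.0 * np.pi) * sigma), centers

F_v, centers = gaussian_filters()
# All elements are positive, so no negative pixel values can arise.
assert np.all(F_v > 0)
```

For m = 8 the spacing works out to Δλ = (780 − 405 − 60)/9 = 35 nm, placing the centers from 440 to 685 nm.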
14.8 Multispectral Image Quality
Color-image quality has two main aspects: the spatial and color quality. To a large
extent, the spatial quality of a digital camera is affected by the spatial resolution
or the number of photosites per unit area as discussed in Section 14.1. On the
other hand, the color quality is largely determined by the spectral sensitivity of
the photosites and the spectrum reconstruction module. For multispectral imaging,
there is an interaction between color and spatial image quality. Increasing the num-
ber of channels increases spectral and colorimetric accuracy; thus, the color qual-
ity improves. Decreasing the number of channels reduces spatial artifacts; thus,
the spatial image quality improves. Day and colleagues have performed a psy-
chophysical experiment of pair comparison to evaluate the color and spatial image
quality of several multispectral image-capture techniques.
60
The test targets were
a watercolor print and several dioramas; they were imaged using 3-, 6-, and 31-
channel image-acquisition systems. Twenty-seven observers judged, successively,
color and spatial image quality of color images rendered for an LCD display and
compared them with objects viewed in a light booth. The targets were evaluated un-
der simulated daylight (6800 K) and incandescent (2700 K) illumination. The rst
experiment evaluated color-image quality. Under simulated daylight, the subjects
judged all of the images to have the same color accuracy, except the professional-
camera image, which was signicantly worse. Under incandescent illumination,
320 Computational Color Technology
all of the images, including the professional-camera image, had equivalent perfor-
mance. The second experiment evaluated spatial image quality. The results of this
experiment were highly target-dependent. For both experiments, there was high
observer uncertainty and poor data normality. They concluded that multispectral
imaging performs well in terms of both color reproduction accuracy and image
quality, regardless of the number of channels used in imaging and the techniques
used to reconstruct the images.
References
1. H. Maitre, F. Schmitt, J. P. Crettez, Y. Wu, and J. Y. Hardeberg, Spectrophotometric image analysis of fine art painting, Proc. IS&T and SID 4th Color Imaging Conf., Scottsdale, AZ, pp. 50–53 (1996).
2. F. A. Imai and R. S. Berns, High resolution multispectral image archives: A hybrid approach, Proc. IS&T/SID 6th Color Imaging Conf., Scottsdale, AZ, pp. 224–227 (1998).
3. A. Rosselet, W. Graff, U. P. Wild, C. U. Keller, and R. Gschwind, Persistent spectral hole burning used for spectrally high-resolved imaging of the sun, Proc. SPIE 2480, pp. 205–212 (1995).
4. R. Baribeau, Application of spectral estimation methods to the design of a multispectral 3D camera, J. Imaging Sci. Techn. 49, pp. 256–261 (2005).
5. M. S. Peercy, Linear color representation for full spectral rendering, Comput. Graphics Proc., pp. 191–198 (1997).
6. R. S. Berns, Challenges for color science in multimedia imaging, Proc. CIM'98: Color Imaging in Multimedia, University of Derby, Derby, England, pp. 123–133 (1998).
7. R. S. Berns, F. H. Imai, P. D. Burns, and D.-Y. Tzeng, Multi-spectral-based color reproduction research at the Munsell Color Science Laboratory, Proc. SPIE 3409, p. 14 (1998).
8. D. L. Farkas, B. T. Ballou, G. W. Fisher, D. Fishman, Y. Garini, W. Niu, and E. S. Wachman, Microscopic and mesoscopic spectral bio-imaging, Proc. SPIE 2678, pp. 200–206 (1996).
9. M. Nishibori, N. Tsumura, and Y. Miyake, Why multispectral imaging in medicine? J. Imaging Sci. Techn. 48, pp. 125–129 (2004).
10. P. H. Swain and S. M. Davis (Eds.), Remote Sensing: The Quantitative Approach, McGraw-Hill, New York (1978).
11. P. D. Burns and R. S. Berns, Analysis of multispectral image capture, Proc. 4th IS&T/SID Color Imaging Conf., Springfield, VA, pp. 19–22 (1996).
12. F. Konig and W. Praefcke, The practice of multispectral image acquisition, Proc. SPIE 3409, pp. 34–41 (1998).
13. S. Tominaga, Spectral imaging by a multi-channel camera, Proc. SPIE 3648, pp. 38–47 (1999).
14. M. Rosen and X. Jiang, Lippmann 2000: A spectral image database under construction, Proc. Int. Symp. on Multispectral Imaging and Color Reproduction for Digital Archives, Chiba University, Chiba, Japan, pp. 117–122 (1999).
15. E. J. Giorgianni and T. E. Madden, Digital Color Management, Addison-Wesley Longman, Reading, MA, pp. 33–36 (1998).
16. J. Farrell, D. Sherman, and B. Wandell, How to turn your scanner into a colorimeter, Proc. IS&T 10th Int. Congress on Advances in Non-Impact Printing Technologies, Springfield, VA, pp. 579–581 (1994).
17. W. Wu, J. P. Allebach, and M. Analoui, Imaging colorimetry using a digital camera, J. Imaging Sci. Techn. 44, pp. 267–279 (2000).
18. F. H. Imai, R. S. Berns, and D. Tzeng, A comparative analysis of spectral reflectance estimated in various spaces using a trichromatic camera system, J. Imaging Sci. Techn. 44, pp. 280–287 (2000).
19. F. H. Imai, D. R. Wyble, R. S. Berns, and D.-Y. Tzeng, A feasibility study of spectral color reproduction, J. Imaging Sci. Techn. 47, pp. 543–553 (2003).
20. K. Parulski and K. Spaulding, Color image processing for digital cameras, Digital Color Imaging Handbook, G. Sharma (Ed.), CRC Press, Boca Raton, FL, pp. 727–757 (2003).
21. B. E. Bayer, An optimum method for two-level rendition of continuous-tone pictures, IEEE Int. Conf. on Comm., Vol. 1, pp. 26-11–26-15 (1973).
22. H. Sigiura, et al., False color signal reduction method for single-chip color video cameras, IEEE Trans. Consumer Electron. 40, pp. 100–106 (1994).
23. R. L. Baer, W. D. Holland, J. Holm, and P. Vora, A comparison of primary and complementary color filters for CCD-based digital photography, Proc. SPIE 3650, pp. 16–25 (1999).
24. K. A. Parulski, Color filters and processing alternatives for one-chip cameras, IEEE Trans. Electron Devices ED-32(8), pp. 1381–1389 (1985).
25. P. G. Herzog, D. Knipp, H. Stiebig, and F. Konig, Colorimetric characterization of novel multiple-channel sensors for imaging and metrology, J. Electron. Imaging 8, pp. 342–353 (1999).
26. K. Simomaa, Are the CCD sensors good enough for print quality monitoring? Proc. TAGA, Sewickley, PA, pp. 174–185 (1987).
27. A. Ryer, Light Measurement Handbook, Int. Light, Newburyport, MA (1997).
28. P. L. Vora, H. J. Trussell, and L. Iwan, A mathematical method for designing a set of color scanning filters, Proc. SPIE 1912, pp. 322–332 (1993).
29. M. J. Vrhel and H. J. Trussell, Optimal scanning filters using spectral reflectance information, Proc. SPIE 1913, pp. 404–412 (1993).
30. M. J. Vrhel and H. J. Trussell, Optimal color filters in the presence of noise, IEEE Trans. Imag. Proc. 4, pp. 814–823 (1995).
31. M. J. Vrhel and H. J. Trussell, Filter considerations in color correction, IEEE Trans. Imag. Proc. 3, pp. 147–161 (1994).
32. G. Sharma and H. J. Trussell, Optimal filter design for multi-illuminant color correction, Proc. IS&T/OSA's Optics and Imaging in the Information Age, Springfield, VA, pp. 83–86 (1996).
33. P. L. Vora and H. J. Trussell, Mathematical methods for the design of color scanning filters, IEEE Trans. Imag. Proc. 6, pp. 312–320 (1997).
34. R. Lenz, M. Osterberg, J. Hiltunen, T. Jaaskelainen, and J. Parkkinen, Unsupervised filtering of color spectra, J. Opt. Soc. Am. A 13(7), pp. 1315–1324 (1996).
35. W. Wang, M. Hauta-Kasari, and S. Toyooka, Optimal filters design for measuring colors using unsupervised neural network, Proc. of 8th Congress of the Int. Colour Assoc., AIC Color 97, Color Science Association of Japan, Tokyo, Japan, Vol. I, pp. 419–422 (1997).
36. M. Hauta-Kasari, K. Miyazawa, S. Toyooka, and J. Parkkinen, Spectral vision system for measuring color images, J. Opt. Soc. Am. A 16(10), pp. 2352–2362 (1999).
37. P. D. Burns, Analysis of image noise in multispectral color acquisition, Ph.D. Thesis, Center for Imaging Science, Rochester Institute of Technology, Rochester, New York (1997).
38. T. Keusen and W. Praefcke, Multispectral color system with an encoding format compatible to the conventional tristimulus model, Proc. IS&T/SID's 3rd Color Imaging Conf., Scottsdale, AZ, pp. 112–114 (1995); T. Keusen, Multispectral color system with an encoding format compatible with the conventional tristimulus model, J. Imaging Sci. Techn. 40, pp. 510–515 (1996).
39. K. Martinez, J. Cuppitt, and D. Saunders, High resolution colorimetric imaging of paintings, Proc. SPIE 1901, pp. 25–36 (1993).
40. A. Abrardo, V. Cappellini, M. Cappellini, and A. Mecocci, Artworks colour calibration using the VASARI scanner, Proc. IS&T and SID's 4th Color Imaging Conf.: Color Science, Systems and Applications, Springfield, VA, pp. 94–97 (1996).
41. K. Martinez, J. Cuppitt, D. Saunders, and R. Pillay, 10 years of art imaging research, Proc. IEEE 90(1), pp. 28–41 (2002).
42. F. Konig and W. Praefcke, A multispectral scanner, Colour Imaging: Vision and Technology, L. W. MacDonald and M. R. Luo (Eds.), John Wiley & Sons, Chichester, England, pp. 129–143 (1999).
43. M. Vilaseca, J. Pujol, and M. Arjona, Spectral-reflectance reconstruction in the near-infrared region by use of conventional charge-coupled-device camera measurements, Appl. Opt. 42, p. 1788 (2003).
44. M. Vilaseca, J. Pujol, M. Arjona, and F. M. Martinez-Verdu, Color visualization system for near-infrared multispectral images, J. Imaging Sci. Techn. 49, pp. 246–255 (2005).
45. Y. Yokoyama, N. Tsumura, H. Haneishi, Y. Miyake, J. Hayashi, and M. Saito, A new color management system based on human perception and its application to recording and reproduction of art paintings, Proc. IS&T and SID's 5th Color Imaging Conf.: Color Science, Systems and Applications, Springfield, VA, pp. 169–172 (1997).
46. J. Y. Hardeberg, F. Schmitt, H. Brettel, J.-P. Crettez, and H. Maitre, Multispectral image acquisition and simulation of illuminant changes, Colour Imaging: Vision and Technology, L. W. MacDonald and M. R. Luo (Eds.), John Wiley & Sons, Chichester, England, pp. 145–164 (1999).
47. J. Y. Hardeberg, Filter selection for multispectral color image acquisition, J. Imaging Sci. Techn. 48, pp. 105–110 (2004).
48. M. Vilaseca, J. Pujol, and M. Arjona, Illuminant influence on the reconstruction of near-infrared spectra, J. Imaging Sci. Techn. 48, pp. 111–119 (2004).
49. S. O. Park, H. S. Kim, J. M. Park, and J. K. Elm, Development of spectral sensitivity measurement system of image sensor devices, Proc. IS&T and SID's 3rd Color Imaging Conf.: Color Science, Systems and Applications, Springfield, VA, pp. 115–118 (1995).
50. F. Martinez-Verdu, J. Pujol, and P. Capilla, Designing a tristimulus colorimeter from a conventional machine vision system, Proc. CIM'98: Color Imaging in Multimedia, Derby, UK, Mar. 1998, pp. 319–333 (1998).
51. W. K. Pratt and C. E. Mancill, Spectral estimation techniques for the spectral calibration of a color image scanner, Appl. Opt. 15, pp. 73–75 (1976).
52. G. Sharma and H. J. Trussell, Characterization of scanner sensitivity, Proc. IS&T/SID's 1st Color Imaging Conf., Scottsdale, AZ, pp. 103–107 (1993).
53. G. Sharma and H. J. Trussell, Set theoretic estimation in color scanner characterization, J. Electron. Imaging 5, pp. 479–489 (1996).
54. J. E. Farrel and B. A. Wandell, Scanner linearity, J. Electron. Imaging 3, pp. 225–230 (1993).
55. D. Sherman and J. E. Farrel, When to use linear models for color calibration, Proc. IS&T/SID's 2nd Color Imaging Conf., Scottsdale, AZ, pp. 33–36 (1994).
56. R. E. Burger and D. Sherman, Producing colorimetric data from film scanners using a spectral characterization target, Proc. SPIE 2170, pp. 42–52 (1994).
57. P. M. Hubel, D. Sherman, and J. E. Farrel, A comparison of methods of sensor spectral sensitivity estimation, Proc. IS&T/SID's 2nd Color Imaging Conf., Scottsdale, AZ, pp. 45–48 (1994).
58. T. Uchiyama, M. Yamaguchi, H. Haneishi, and N. Ohyama, A method for the unified representation of multispectral images with different number of bands, J. Imaging Sci. Techn. 48, pp. 120–124 (2004).
59. Y. Murakami, H. Manabe, T. Obi, M. Yamaguchi, and N. Ohyama, Multispectral image compression for color reproduction: Weighted KLT and adaptive quantization based on visual color perception, Proc. IS&T/SID's 3rd Color Imaging Conf., Scottsdale, AZ, pp. 68–72 (2001).
60. E. A. Day, R. S. Berns, L. A. Taplin, and F. H. Imai, A psychophysical experiment evaluating the color and spatial image quality of several multispectral image capture techniques, J. Imaging Sci. Techn. 48, pp. 93–104 (2004).
Chapter 15
Densitometry
Densitometry provides the method and instrumentation for determining the optical density of objects. There are two types of optical density measurements: transmission and reflection. Transmission densitometry measures the density of transmissive samples such as 35-mm film and overhead transparencies. Reflection densitometry measures the density of reflective samples such as photographic and offset prints. Densitometry is widely used in the printing industry because most forms of color encoding are based directly or indirectly on density readings; most input and output devices are calibrated by using density measurements, and most reflection and transmission scanners are essentially densitometers or can be converted into one.^{1}
Optical densities of objects correspond quite closely to human visual sensibility. Therefore, density is a powerful measure of an object's color quality because, within a relatively wide range, the density follows the proportionality and additivity of the colorant absorptivity. In theory, one can use the additivity law to predict the density of a mixture from its components. However, there are problems in achieving this goal because of the dependencies on instrumentation, imaging device, halftone technique, and media. Nevertheless, densitometry is widely used in the printing industry for print quality control and other applications. This is because of its simplicity, convenience, ease of use, and an approximately linear response to human visual sensibility. In the foreseeable future, the use of densitometry is not likely to decrease. Therefore, we present the fundamentals of densitometry, its problems, and its applications.
Perhaps the most important application is the color reproduction that uses the density-masking method based on density additivity and proportionality. We present the derivation of the density-masking equation and examine the validity of the assumptions. We then extend the density-masking equation to the device-masking equation. Modifications and adjustments of tone characteristics are suggested to fit the assumptions better. We propose several methods for deriving coefficients of the device-masking equation by using the measured densities of primary color patches. The advantages and disadvantages of these methods are discussed. Digital implementations of the density- and device-masking equations are given in Appendix 9.
15.1 Densitometer
Originally, densitometers were designed to make color measurements on photographic materials, aiming at applications in the photography and graphic arts printing industries.^{2} They were not designed to respond to color as a standard observer; thus, no color-matching functions or standard illuminants were involved. However, the densitometer does provide its own brand of color quantification in the form of a spectral density response, a density spectrum, or three integrated densities through three different color-separation filters (e.g., densities at the red, green, and blue regions; some instruments have four components by including gray). The tricolor densities at the three different visible regions resemble tristimulus values. The major problem is in attempting to compare data between densitometers and colorimeters or in treating densitometric data as if they were colorimetric data (see Section 15.1.1; the correlation is very poor).
There are two types of densitometers: reflection and transmission, with some instruments capable of making both measurements. For transmission measurements, density is determined from the transmittance factor of the sample. The transmittance factor P_t is the ratio of the transmitted light intensity I_t, measured with the sample in place, to the incident light intensity I_0, measured without the sample:

P_t = I_t / I_0.  (15.1)
The transmission density D_t is defined as the negative logarithm of the transmittance factor:

D_t = -log P_t.  (15.2)

Similarly, the reflection density D_r is determined from the reflectance factor P_r:

P_r = I_r / I_d,  (15.3)

D_r = -log P_r,  (15.4)

where I_r is the light intensity reflected by a sample, and I_d is the light intensity reflected by a perfect diffuse reflector.
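Assuming the conventional base-10 logarithm, Eqs. (15.1)–(15.4) reduce to a pair of one-line functions; the function names are illustrative:

```python
import math

def transmission_density(I_t, I_0):
    """Eqs. (15.1) and (15.2): D_t = -log10(I_t / I_0)."""
    return -math.log10(I_t / I_0)

def reflection_density(I_r, I_d):
    """Eqs. (15.3) and (15.4): D_r = -log10(I_r / I_d)."""
    return -math.log10(I_r / I_d)

# A sample transmitting 10% of the incident light has density 1.0;
# one transmitting 1% has density 2.0.
assert abs(transmission_density(0.10, 1.0) - 1.0) < 1e-9
assert abs(reflection_density(0.01, 1.0) - 2.0) < 1e-9
```

The logarithmic scale is what makes density approximately additive over stacked absorbing layers, the property the masking equations of this chapter rely on.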
A densitometer measures the reflected or transmitted light through an optical filter that transmits a region of the light in one of the red, green, or blue regions of the visible spectrum for the purpose of obtaining the color-separated signal. Upon receiving the signal, the instrument computes the density for output. Modern densitometers use a built-in set of spectral-response curves (sometimes called response filters). The American National Standards Institute (ANSI) has specified a set of density response filters:^{3} Status T filters are used for reflective samples such as press proofs, off-press proofs, and press sheets; they are broadband filters, as shown in Fig. 15.1. Status E filters are a European standard for reflective samples. Status A filters are used for both reflective and transmissive samples, aiming at color photographic materials such as photographic prints, 35-mm slides, and transparencies. Status M filters are used for transmissive materials, color negative films in particular. Commercial densitometers employ these filters and other filters such as DIN, DIN NB (Gretag spectrophotometer), and Standard I filters (X-Rite densitometer). The spectra of Standard I filters are given in Fig. 15.2; they are narrowband filters with the exception of the neutral filter.^{4}

Figure 15.1 Spectra of Status T filters.
15.1.1 Precision of density measurements
Most commercial densitometers output values with an accuracy of one-hundredth of a density unit. Thus, densitometers give only three significant figures for density values greater than 1, and one or two significant figures for density values less than 1. Compared to spectrophotometers, which are capable of giving five significant figures with at least three significant figures for extremely dark colors, the accuracy of densitometers is low. Moreover, the resolving power is not high either. Often, a significant difference in CIELAB measured by a spectrophotometer gives only a small difference in density values that may be within the measurement uncertainty. Table 15.1 lists the density and CIELAB measurements of CMY primary colors in 12-step wedges using a Gretag spectrophotometer that is capable of making both density and CIELAB measurements. Figure 15.3 shows the CIE a*-b* plot of the CMY step wedges. The density difference, which is the Euclidean distance between two consecutive patches, is calculated and given in column 5 of Table 15.1. The color difference of consecutive patches in CIELAB space (ΔE*_ab) is given in the last
Figure 15.2 Spectra of Standard I filters.
Figure 15.3 CIE a*-b* plot of step wedges of CMY primary colorants.
column. As shown, there is a noticeable difference between samples Cyan-3 and Cyan-4 measured by the spectrophotometer (ΔE*_ab = 1.24), but there is no difference measured by the densitometer (ΔD = 0). Figure 15.4 shows the correlation between the density and colorimetric measurements using the data in Table 15.1; the correlation is poor. Because of these reasons, the densitometer is not sensitive enough for color characterization tasks. However, it is widely used in color printer calibration.
15.1.2 Applications
There are many applications for densitometry. For example, the transmissive densitometer is used extensively in chemistry for determining solute concentrations, and the reflective densitometer has been used in biological measurements such as the reading of electrophoregrams. The main application of densitometry resides in the printing and related industries. Densitometers are used in printing processes and product controls such as the calibration and control of printing exposures. In most cases, the spectral response of the densitometer is tailored by filters to represent as closely as possible the spectral dye absorption peaks of the materials involved.^{5} It has been suggested by Dawson and Vogelsong that the variability among instruments can be reduced by standardizing the unfiltered response.^{6}
In graphic arts printing, densitometers are used for quality control during printing on the press, evaluation of the final printed image, evaluation of the original copy for determining the separation and masking requirements, and quality control of the photographic steps encountered in prepress operations. Color-transmission

Figure 15.4 Correlation between density and color-difference measurements.
Table 15.1 Precision comparisons of densitometer and spectrophotometer measurements.

Sample       D_r    D_g    D_b    ΔD      L*      a*       b*      ΔE*_ab
Cyan-1       1.27   0.55   0.37           49.31   -21.09   -50.33
Cyan-2       1.25   0.53   0.36   0.030   50.40   -22.17   -49.79    1.63
Cyan-3       1.24   0.52   0.36   0.014   50.64   -22.35   -49.40    0.49
Cyan-4       1.24   0.52   0.36   0       50.64   -23.02   -48.36    1.24
Cyan-5       1.06   0.44   0.33   0.199   55.71   -24.69   -43.38    7.30
Cyan-6       0.90   0.38   0.30   0.173   60.14   -24.26   -38.71    6.45
Cyan-7       0.75   0.32   0.27   0.164   64.97   -23.43   -33.57    7.10
Cyan-8       0.62   0.28   0.25   0.137   69.13   -20.70   -27.82    7.60
Cyan-9       0.51   0.24   0.22   0.121   73.18   -17.47   -22.72    7.27
Cyan-10      0.41   0.20   0.20   0.110   76.97   -14.74   -18.15    6.53
Cyan-11      0.32   0.17   0.18   0.097   80.86   -11.49   -13.26    7.04
Cyan-12      0.22   0.13   0.14   0.115   85.97    -6.97    -8.29    8.44
Magenta-1    0.22   1.28   0.73           48.48    66.09    -2.59
Magenta-2    0.14   0.88   0.52   0.459   58.25    56.69     1.85   14.27
Magenta-3    0.14   0.78   0.48   0.108   60.87    51.84     2.39    5.54
Magenta-4    0.13   0.69   0.42   0.109   63.57    47.70     3.82    5.15
Magenta-5    0.13   0.62   0.39   0.076   66.01    42.99     3.68    5.31
Magenta-6    0.12   0.55   0.35   0.081   68.56    39.05     4.46    4.76
Magenta-7    0.12   0.49   0.32   0.067   71.34    34.76     4.60    5.11
Magenta-8    0.12   0.42   0.28   0.081   73.73    29.12     5.40    6.18
Magenta-9    0.11   0.34   0.23   0.095   77.88    23.72     5.05    6.82
Magenta-10   0.10   0.24   0.17   0.117   83.08    16.03     4.88    9.28
Magenta-11   0.09   0.20   0.15   0.046   85.59    12.05     4.11    4.77
Magenta-12   0.08   0.14   0.11   0.073   89.40     6.48     3.89    6.75
Yellow-1     0.08   0.14   1.07           88.30    -6.78    70.70
Yellow-2     0.08   0.15   0.91   0.160   87.60    -4.65    61.81    9.17
Yellow-3     0.08   0.14   0.81   0.101   88.47    -5.21    56.55    5.36
Yellow-4     0.08   0.14   0.68   0.130   88.57    -4.69    48.46    8.11
Yellow-5     0.08   0.13   0.60   0.081   88.92    -4.38    43.36    5.12
Yellow-6     0.08   0.11   0.53   0.073   90.36    -5.74    39.47    4.37
Yellow-7     0.08   0.11   0.48   0.050   90.76    -5.73    35.62    3.87
Yellow-8     0.08   0.11   0.42   0.060   90.84    -4.79    30.56    5.15
Yellow-9     0.08   0.10   0.34   0.081   91.20    -4.08    23.75    6.86
Yellow-10    0.08   0.10   0.28   0.060   91.37    -2.52    17.24    6.70
Yellow-11    0.08   0.10   0.22   0.060   91.57    -1.11    11.13    6.27
Yellow-12    0.08   0.09   0.12   0.101   92.30     1.09     1.29   10.11
densitometry is confined mostly to the evaluation of original transparencies and
color duplicates. Quality control during printing and evaluation of the final prod-
uct is nearly always done using reflection measurements. The major differences
between graphic-arts densitometry and photographic densitometry are in the wide
diversity of substrates and pigments used in the graphic-arts industry and the fact
that the scattering characteristics of the pigments are more significant than in pho-
tographic dyes.⁵
To complicate the matter further, there is the device value. Device value, area
coverage, and density all shape the rendition of color images that are used in den-
sitometry. Device value is usually represented in an 8-bit depth with 256 levels. It
is a device-dependent parameter, where two devices with an identical set of device
values may not look the same, even if they are the same type and from the same
manufacturer. The density measurement, on the other hand, is universal and device-
independent. Two densitometers with different geometries, lter sets, or calibra-
tion standards may give different density readings for a given print. However, for a
given densitometer, the density value is not dependent on imaging devices. Usually,
the area coverage is calculated from the density measurement. The area coverage
(or dot gain) depends strongly on the substrate and halftone screen used. To have
high color fidelity, the relationships between density, area coverage, and device
value must be established. Moreover, it is useful to separate device-dependent and
device-independent parts of the printing process. This can be achieved by using
the area coverage as the intermediate exchange standard between device value and
density value. Relationships to the area coverage from device value and density are
individually established. The device value to area coverage relationship contains
mostly device-related characteristics, while the area coverage to density relation-
ship is sheltered from device dependency.
15.2 Beer-Lambert-Bouguer Law
The Beer-Lambert-Bouguer law governs quantitative interpretations in densitom-
etry. The law relates the light intensity to the quantity of the absorbent, based on
the proportionality and additivity of the colorant absorptivity. For a single colorant
viewed in transmission mode, such as a transparency, the absorption phenomenon
follows the Beer-Lambert-Bouguer law, which states that the density D(λ) at a
given wavelength λ is proportional to the concentration c (usually in units of
moles per liter or grams per liter) and thickness b (usually in units of centimeters)
of the colorant layer:

D(λ) = ε(λ)bc, (15.5)

where ε(λ) is the absorption coefficient at wavelength λ. Equation (15.5) is valid
for any monochromatic light or within a narrow band of wavelengths.
Equation (15.5) is often used in chemical analysis, where the experimenter ob-
tains the relationship between the chemical concentration and optical density. The
procedure includes preparation of a series of solutions with varying concentrations
of the chemical, ranging from the maximum concentration possible to zero con-
centration (solvent alone). Densities of these solutions, stored in a glass or quartz
rectangular cell with a fixed length, are measured at a fixed wavelength (usually
at the peak of the absorption band). The plot of the absorbance (or density) versus
concentration is called the calibration curve. Any unknown concentration of the
chemical solution is determined by measuring the density and plugging it into the
calibration curve to find the concentration.
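The calibration-curve procedure can be sketched in a few lines of Python. The concentrations and densities below are invented for illustration, and the fit assumes the line passes through the origin (pure solvent gives zero density), per Eq. (15.5) with a fixed cell length b.

```python
# Hypothetical Beer-Lambert calibration curve: fit D = k*c from a
# dilution series, then invert it to find an unknown concentration.

def fit_calibration(concentrations, densities):
    """Least-squares slope k for D = k*c, a line through the origin."""
    num = sum(c * d for c, d in zip(concentrations, densities))
    den = sum(c * c for c in concentrations)
    return num / den

def concentration_from_density(density, slope):
    """Invert the calibration curve: c = D / k."""
    return density / slope

conc = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]        # moles per liter (invented)
dens = [0.00, 0.31, 0.59, 0.91, 1.19, 1.52]  # densities at the peak wavelength

k = fit_calibration(conc, dens)
unknown = concentration_from_density(0.75, k)
```

The slope k plays the role of ε(λ)b for the fixed cell and wavelength; measuring an unknown solution's density and dividing by k recovers its concentration.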
15.3 Proportionality
The Beer-Lambert-Bouguer law can also be used in color imaging by employ-
ing the concepts of proportionality and additivity. The proportionality of a colorant
refers to the property that the densities, measured through three color-separation
filters, remain proportional to each other as the amount of the colorant is varied. The
colorant amount can be modulated by changing the lm thickness for continuous-
tone printers, or by changing the area coverage for binary devices via a halftone
process. For two different wavelengths λ_1 and λ_2, we have

D(λ_1) = ε(λ_1)bc and D(λ_2) = ε(λ_2)bc. (15.6)

The density ratio at these two wavelengths is

D(λ_1)/D(λ_2) = ε(λ_1)/ε(λ_2), (15.7)
because the thickness b and concentration c are the same for a given color patch or
object. This means that the ratio of densities at two (or more) different wavelengths
is constant if the colorant concentration and thickness remain the same. Moreover,
the ratio should remain constant as the concentration is varied. In other words, the
proportionality should be obeyed by a color-transmission measurement.¹ This can
be shown as follows:
At concentration c_1,

D_1(λ_1) = ε(λ_1)bc_1, D_1(λ_2) = ε(λ_2)bc_1, and D_1(λ_1)/D_1(λ_2) = ε(λ_1)/ε(λ_2). (15.8)

At concentration c_2,

D_2(λ_1) = ε(λ_1)bc_2, D_2(λ_2) = ε(λ_2)bc_2, and D_2(λ_1)/D_2(λ_2) = ε(λ_1)/ε(λ_2). (15.9)

At any arbitrary concentration c_i,

D_i(λ_1) = ε(λ_1)bc_i, D_i(λ_2) = ε(λ_2)bc_i, and D_i(λ_1)/D_i(λ_2) = ε(λ_1)/ε(λ_2). (15.10)
Thus, the density ratios of λ_1 and λ_2 at various concentrations are the same:

D_1(λ_1)/D_1(λ_2) = D_2(λ_1)/D_2(λ_2) = ··· = D_i(λ_1)/D_i(λ_2) = ··· = ε(λ_1)/ε(λ_2) = ρ_12, (15.11)

where ρ_12 is a proportional constant representing the ratio of ε(λ_1) over ε(λ_2).
This means that for all densities,

D(λ_1) = ρ_12 D(λ_2). (15.12)

This means that a plot of the density ratio ρ_12 as a function of either the density D(λ_1)
or D(λ_2) will give a horizontal line. If we measure densities of a cyan patch at one
wavelength each in the red, green, and blue regions (or through a set of narrowband
RGB separation filters), we have the following relationships:
D_c,g = ρ_gr D_c,r and D_c,b = ρ_br D_c,r, (15.13)

where D_c,r, D_c,g, and D_c,b are the densities of the red, green, and blue components,
respectively, of the cyan ink; ρ_gr is the proportional constant of the green density
over the red density; and ρ_br is the proportional constant of the blue density over
the red density. The red density is used as the common denominator because it is
the strongest density in cyan ink.
Similarly, for magenta and yellow inks under the same measuring conditions,
we have

D_m,r = ρ_rg D_m,g and D_m,b = ρ_bg D_m,g, (15.14)

D_y,r = ρ_rb D_y,b and D_y,g = ρ_gb D_y,b, (15.15)

where D_m,r, D_m,g, and D_m,b are the densities of the red, green, and blue compo-
nents, respectively, of the magenta ink; ρ_rg is the proportional constant of the red
density over the green density; and ρ_bg is the proportional constant of the blue
density over the green density. The green density, the strongest one in magenta ink,
is used as the common denominator. For yellow ink, D_y,r, D_y,g, and D_y,b are the
densities of the red, green, and blue components, respectively; ρ_rb is the propor-
tional constant of the red density over the blue density, and ρ_gb is the proportional
constant of the green density over the blue density. The blue density, the strongest
one in yellow ink, is used as the common denominator.
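The proportionality constants of Eqs. (15.13)–(15.15) are simply each ink's filter densities divided by its strongest (prime) density. A minimal sketch, using the Canon CLC500 cyan solid densities from Table 15.2:

```python
# Three-filter density ratios for a single ink, relative to its prime
# channel. Solid densities below are the CLC500 cyan values (Table 15.2).

def density_ratios(d_r, d_g, d_b, prime):
    """Ratios of the R, G, B densities to the ink's prime density."""
    return d_r / prime, d_g / prime, d_b / prime

# Cyan: red is the prime channel, so the first ratio is 1 by construction.
rho_rr, rho_gr, rho_br = density_ratios(1.446, 0.610, 0.283, prime=1.446)
```

The same call with `prime` set to the green density reproduces the magenta column of Table 15.2, and with the blue density the yellow column.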
Relationships given in Eqs. (15.5) and (15.12) are strictly obeyed with mono-
chromatic light in transmission rather than reflection, and by contone rather than by
halftone images. However, in color printing, we often deal with reflected light from
halftone images that are measured with a broadband densitometer. Consequently,
the proportionality is not rigorously obeyed.
Table 15.2 Density ratios of the Canon CLC500 primary toners.
Measured solid density Density ratio
Region Cyan Magenta Yellow Cyan Magenta Yellow
Red 1.446 0.060 0.011 1.000 0.059 0.010
Green 0.610 1.022 0.056 0.422 1.000 0.050
Blue 0.283 0.388 1.115 0.196 0.380 1.000
Max. OD 1.446 1.022 1.115 1.446 1.022 1.115
15.3.1 Density ratio measurement
Density ratios can be measured by a densitometer; a simple procedure is given as
follows:⁶
(1) Print primary colorants in steps (e.g., 10 to 20 steps) that cover the whole
range of device values.
(2) Measure RGB three-component densities of each color patch.
(3) Compute the ratios ρ_ij as defined in Eqs. (15.13)–(15.15) for each patch.
(4) Compute the average ρ_ij value of a color component within a given pri-
mary color.
(a) Select the darkest k patches, where k is set arbitrarily to prevent the
inclusion of the highly scattered data in the light-color region (see
Fig. 15.7).
(b) Take the average of these selected patches.
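The four-step procedure can be sketched as follows, with invented patch data. Keeping only the k darkest patches (judged by the prime density) avoids the scattered ratios in the light-color region, per step (4a).

```python
# Average density ratio over the darkest k patches of a step wedge.
# The patch densities are invented for illustration.

def average_ratio(prime_densities, secondary_densities, k):
    ratios = [s / p for p, s in zip(prime_densities, secondary_densities)]
    # Sort patches from darkest (highest prime density) downward.
    order = sorted(range(len(ratios)),
                   key=lambda i: prime_densities[i], reverse=True)
    darkest = [ratios[i] for i in order[:k]]   # step (4a)
    return sum(darkest) / k                    # step (4b)

d_prime = [1.45, 1.30, 1.10, 0.80, 0.40, 0.10]   # e.g., cyan red densities
d_second = [0.61, 0.55, 0.46, 0.34, 0.18, 0.07]  # e.g., cyan green densities
rho_gr = average_ratio(d_prime, d_second, k=4)
```

Dropping the two lightest patches here excludes exactly the ratios that wander most, which is the point of the arbitrary cutoff k.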
Table 15.2 gives density ratios of primary toners used in a commercial color
device. As one would expect, this procedure gives a very high experimental uncer-
tainty because the measured densities depend on the instrument and the filter used,
and because mean values are taken from scattered data. However, these ratios are
the building blocks of the density-masking equation.
15.4 Additivity
The additivity rule states that the density of a print from combined inks should be
equal to the sum of the individual ink densities:⁷

D_s,r = D_c,r + D_m,r + D_y,r,
D_s,g = D_c,g + D_m,g + D_y,g,
D_s,b = D_c,b + D_m,b + D_y,b,

or

[D_s,r, D_s,g, D_s,b]^T = [[D_c,r, D_m,r, D_y,r], [D_c,g, D_m,g, D_y,g], [D_c,b, D_m,b, D_y,b]] [1, 1, 1]^T,

or D_s = D_p U_1, (15.16)
where D_s,r, D_s,g, and D_s,b are the sums of the densities of the red, green, and blue
components, respectively. Equation (15.16) can also be expressed in vector-matrix
notation by setting D_s = [D_s,r, D_s,g, D_s,b]^T and U_1 = [1, 1, 1]^T. Equation (15.16)
describes an ideal case, where the additivity holds. In reality, when several inks
are superimposed, the density of the mixture measured through a color-separation
filter is often less than the sum for high-density color patches; this is the failure of
the additivity rule.
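The additivity prediction of Eq. (15.16) is a per-channel sum of single-ink densities. A sketch using the Table 15.2 solid densities; a real overprint measurement would typically fall below these sums at high densities, as Section 15.5 discusses.

```python
# Additivity prediction: sum the (D_r, D_g, D_b) triplets of the inks used.
# Single-ink solid densities are the CLC500 values of Table 15.2.

def predicted_overprint(*inks):
    """D_s = D_p U_1: per-channel sum of the single-ink densities."""
    return tuple(sum(channel) for channel in zip(*inks))

cyan    = (1.446, 0.610, 0.283)
magenta = (0.060, 1.022, 0.388)
yellow  = (0.011, 0.056, 1.115)

d_s = predicted_overprint(cyan, magenta, yellow)
```

Comparing `d_s` against a measured three-ink overprint gives a direct check of additivity for a given printer and substrate.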
15.5 Proportionality and Additivity Failures
As mentioned earlier, the additivity and proportionality of the density are not
strictly obeyed. Yule pointed out major reasons for the failures of both propor-
tionality and additivity; major ones are filter bandwidth, surface reflection, internal
reflection, opacity, and halftone pattern.⁷
15.5.1 Filter bandwidth
Usually, narrowband filters give a fairly linear relationship between the measured
density and dye concentration or thickness, indicating that proportionality holds.
When the filter bandwidth is increased, the curve starts to bend as shown in
Fig. 15.5, giving a nonlinear relationship.
15.5.2 First-surface reflection
When light strikes a paper or ink surface, about 4% of the light is reflected at
the surface. This is because of the difference in the refractive indices between air
and the ink film. On a matte surface, this light is diffused so that it reaches the
detector and limits the maximum density obtainable to about 1.4 (= −log 0.04). If
the surface is glossy, a major fraction of this 4% light is reflected away from the
detector of the densitometer (either 45/0 or 0/45 geometry) and will not reach it.
The result is a density value higher than 1.4.
15.5.3 Multiple internal reflections
After penetrating into the paper substrate, a considerable proportion of the light
does not emerge directly, but is reflected by the substrate back into the paper. This
internal reflection can happen many times. Each time the light is reflected back into
the paper, it passes twice through any layer of light-absorbing material, such as the
ink film. Consequently, more light is absorbed and the density is increased.
15.5.4 Opacity
Opacity is the reciprocal of the transmittance. It is the scattering of light caused by
the difference in refractive indices between the colorant and ink vehicle.
Figure 15.5 Density versus thickness of the magenta dye layer, using narrow-, medium-,
and wideband filters.⁷
15.5.5 Halftone pattern
Generally, a coarse halftone screen shows more proportionality failure than a fine
screen.⁷ To study the additivity behavior of halftone tints, one needs to print the
crossed halftone step wedges with the two inks involved, then measure their densi-
ties. The problem is that it is difficult to get accurate results because of inconsisten-
cies in the printing process; special care must be taken to minimize errors caused
by printer variability.
15.5.6 Tone characteristics of commercial printers
As a result of these problems, a color print measured by a reflective densitometer
often exhibits varying degrees of deviation from the ideal proportionality and ad-
ditivity. Figure 15.6 shows the three-component (R, G, and B) curves of HP660C
ink-jet cyan ink. If the proportionality of Eq. (15.12) holds, we should get three
straight lines with different slopes; we get three curves instead.
Figure 15.6 Tone curves of HP660C cyan ink exhibiting non-Beer's behavior.
Figure 15.7 shows another representation of the failure of constant proportional
ratios. The data are taken from three primary inks of an HP660C ink-jet printer.⁶
If the proportionality of Eq. (15.12) holds, the data of each density ratio (or pro-
portional constant) with respect to the tone level should be constant to fit a straight
horizontal line. Figure 15.7 shows linear relationships (but not horizontal lines)
above a device value of 100. The data are scattered upward at low device values.
The data scattering is primarily due to the large measurement uncertainty at low
densities. Fortunately, the contribution to the overall density at low device values
is small. For higher device values, the proportionality seems to hold quite well in
most cases.
Several studies indicate that additivity is followed rather closely before reach-
ing high densities for modern-day printing devices (the Canon CLC500, for example).⁶
Figures 15.8–15.11 show the comparisons of measured and calculated densities
from various two-color and three-color mixtures printed on paper substrates using
an HP660C ink-jet printer. As one can see from these figures, the additivity holds at
low toner levels, but fails at high device values. These results are expected because
of the non-Beer's behavior at high colorant concentrations.
Figure 15.7 Failure of the constant proportional ratio.
15.6 Empirical Proportionality Correction
Like the gamma correction of the display video signal, the nonlinear density rela-
tionship with respect to the device value of a printer can be transformed in order
to have a linear (or near-linear) response. An empirical equation is proposed in
Eq. (15.17) to linearize the tone curve.
g_c = g_max {1 − [(g_max − g_i)/g_max]^γ}, (15.17)

where g_max = 2^N − 1 and N is the number of bits representing the integer value,
g_c is the corrected output value, g_i is the input device value, and γ is the expo-
nent of the power function. The optimal γ value can be determined by computing
g_c over a range of γ values for the best fit to the experimental data. Figure 15.12
gives the plot of the corrected device value versus density using the data given in
Fig. 15.6 with an optimal γ of 2.4; the correction gives quite linear relationships
for all three components of the HP660C cyan ink. If the input device value is repre-
sented inversely, where g_max is the white and 0 is the full strength, then Eq. (15.17)
becomes

g_c = g_max {1 − (g_i/g_max)^γ}. (15.18)
Figure 15.8 Additivity plots of magenta-yellow mixtures with varying yellow value mixed
with magenta at full strength.
Figure 15.9 Additivity plots of cyan-yellow mixtures with varying yellow value mixed with
cyan at full strength.
Figure 15.10 Additivity plots of cyan-magenta mixtures with varying magenta value mixed
with cyan at full strength.
Figure 15.11 Additivity plots of cyan-magenta-yellow mixtures with varying yellow values
mixed with cyan and magenta at full strength.
Figure 15.12 Linearization of HP660C cyan ink curves.
Equations (15.17) and (15.18) work for a wide range of curvatures by finding an
optimal γ value.
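Equation (15.17) and a brute-force search for the optimal γ can be sketched as follows. The device values and densities below are hypothetical, constructed to bend roughly like the curves of Fig. 15.6; they are not the HP660C data.

```python
# Empirical tone correction, Eq. (15.17), plus a grid search for gamma.

def correct(g_i, gamma, n_bits=8):
    """Eq. (15.17): g_c = g_max * (1 - ((g_max - g_i)/g_max)**gamma)."""
    g_max = 2 ** n_bits - 1
    return g_max * (1.0 - ((g_max - g_i) / g_max) ** gamma)

def best_gamma(device_values, densities, gammas):
    """Pick the gamma whose corrected values, scaled to the density range,
    best match the measured densities (least squares)."""
    scale = max(densities) / 255.0
    def err(gamma):
        return sum((correct(g, gamma) * scale - d) ** 2
                   for g, d in zip(device_values, densities))
    return min(gammas, key=err)

dv = [0, 64, 128, 192, 255]           # hypothetical device values
dens = [0.0, 0.63, 1.09, 1.36, 1.45]  # hypothetical densities
gamma = best_gamma(dv, dens, [1.0 + 0.1 * i for i in range(31)])
```

After the fit, plotting `correct(g, gamma)` against the measured densities should give the near-straight lines of Fig. 15.12.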
15.7 Empirical Additivity Correction
The main difficulty in verifying additivity is printer variability. Often, one is not
able to resolve changes in printer additivity from the high variability of the printer.
Large amounts of data and clever experiment design are needed to extract statis-
tically significant factors. This approach is expensive and time consuming. More-
over, for interlaboratory assessments of various printing devices, this problem is
further compounded by differences in instrumentation, experimental procedure,
and printer characteristics. Therefore, it is difficult to deduce rules for correcting
complex additivity failure. An empirical method of correcting additivity failure is
to use the regression technique to find the link between measured and calcu-
lated values.⁸ The advantage of this technique is that it takes statistical fluctuation
into account. Table 15.3 lists the average errors from using polynomial regression
Table 15.3 Average density errors of the polynomial regression.⁶

Printer    Data number    3 × 4    3 × 7    3 × 11
Epson (Ink-jet) 64 0.10 0.08 0.06
HP (Ink-jet) 38 0.12 0.11 0.06
Xerox 5775 125 0.09 0.09 0.06
on three printers to give some idea about the accuracy of the regression method.⁶
The errors of all three printers using 11-term polynomials are the same at 0.06;
this number is only slightly larger than the measurement uncertainty. By using
the regression method, unique coefficients are derived for each printer at a given
polynomial. The coefficients are used to convert the measured density of mixed
colorants to the linear relationship of Eq. (15.16). Reasonable accuracies can be
obtained using a high-order polynomial.
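The regression idea can be illustrated with a small pure-Python least-squares polynomial fit that maps measured (non-additive) overprint densities back to the additivity sums of Eq. (15.16). The densities below are synthetic, and a 1D quadratic stands in for the multi-term polynomials of Table 15.3.

```python
# Polynomial regression correcting additivity failure on synthetic data.

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    a = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(a[i][j] * coef[j]
                              for j in range(i + 1, n))) / a[i][i]
    return coef  # coef[i] multiplies x**i

def poly(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

# Synthetic failure: measured overprint density sags below the sum.
sums =     [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]       # additivity prediction
measured = [0.0, 0.48, 0.92, 1.30, 1.62, 1.88, 2.08]  # "real" readings

coef = fit_poly(measured, sums, degree=2)
corrected = poly(coef, 1.62)  # maps a measured density back toward its sum
```

The fitted coefficients play the role of the per-printer coefficients described above: applying the polynomial to a measured density restores the linear relationship of Eq. (15.16) to within the fit error.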
15.8 Density-Masking Equation
Equation (15.16) applies to ideal block dyes. In the real world, there are no ideal
block dyes; all primary colorants have some unwanted absorptions (see Fig. 15.6,
for example). The effect of the unwanted absorption is that one does not get the
expected colors when mixing primary colorants together. A good example is the
reproduction of blue colors when cyan and magenta toners are mixed in equal
amounts; one gets purple instead of blue. These unwanted absorptions are undesir-
able and must be corrected. The correction is called masking, usually performed
in the density domain. Again, the fundamental assumptions of density masking are
the additivity and proportionality. If the proportionality and additivity rules hold
for densities of mixed colors, we can substitute Eqs. (15.13), (15.14), and (15.15)
into Eq. (15.16) to give Eq. (15.19).
D_s,r = D_c,r + ρ_rg D_m,g + ρ_rb D_y,b,
D_s,g = ρ_gr D_c,r + D_m,g + ρ_gb D_y,b,
D_s,b = ρ_br D_c,r + ρ_bg D_m,g + D_y,b,

or

[D_s,r, D_s,g, D_s,b]^T = [[1, ρ_rg, ρ_rb], [ρ_gr, 1, ρ_gb], [ρ_br, ρ_bg, 1]] [D_c,r, D_m,g, D_y,b]^T,

or

D_s = M_ρ D_h, (15.19)
where D_h = [D_c,r, D_m,g, D_y,b]^T is a vector that contains the highest density com-
ponent of each primary ink. If densities of the resulting print are measured and
the six proportional constants are known, then Eq. (15.19) becomes a set of three
simultaneous equations with three unknowns. We can solve this set of equa-
tions to obtain the densities of the individual inks, D_c,r, D_m,g, and D_y,b, by inverting
Eq. (15.19), as given in Eq. (15.20). The matrix M_ρ is not singular, and therefore its
determinant d is not equal to zero, because there are three independent channels:
red, green, and blue. Thus, the matrix M_ρ has a rank of three and can be inverted:

D_h = M_ρ^{−1} D_s = M_α D_s. (15.20)
The explicit expressions are given as follows:

[D_c,r, D_m,g, D_y,b]^T = [[α_rr, α_rg, α_rb], [α_gr, α_gg, α_gb], [α_br, α_bg, α_bb]] [D_s,r, D_s,g, D_s,b]^T,

where

α_rr = (1 − ρ_bg ρ_gb)/d,   α_rg = −(ρ_rg − ρ_rb ρ_bg)/d,   α_rb = (ρ_rg ρ_gb − ρ_rb)/d,
α_gr = −(ρ_gr − ρ_gb ρ_br)/d,   α_gg = (1 − ρ_rb ρ_br)/d,   α_gb = −(ρ_gb − ρ_rb ρ_gr)/d,
α_br = (ρ_gr ρ_bg − ρ_br)/d,   α_bg = −(ρ_bg − ρ_rg ρ_br)/d,   α_bb = (1 − ρ_rg ρ_gr)/d,

and d is the determinant of M_ρ,

d = 1 − ρ_rg ρ_gr − ρ_rb ρ_br − ρ_gb ρ_bg + ρ_rg ρ_gb ρ_br + ρ_rb ρ_gr ρ_bg.
Equation (15.20) states that if the densities of the RGB components of an output
color are known, one can determine the amounts of cyan, magenta, and yellow
primary colorants needed to give a match. In other words, the composition of pri-
mary colors can be found for a desired output color. Based on this simple theory,
we develop several methods of device color correction by using density masking.
Moreover, this seemingly simple theory possesses many useful properties and has
many important applications, such as gray balance, gray-component replacement,
maximum ink setting, and blue correction.
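Numerically, Eqs. (15.19) and (15.20) amount to building a 3×3 matrix from the six density ratios and inverting it. A sketch using the Canon CLC500 ratios of Table 15.2, with assumed prime densities:

```python
# Density masking: forward model (15.19) and its inversion (15.20).

def invert_3x3(m):
    """3x3 inverse via the adjugate (cofactors over the determinant)."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

m_rho = [[1.000, 0.059, 0.010],   # rows: r, g, b; cols: C, M, Y primes
         [0.422, 1.000, 0.050],   # (Table 15.2 ratio columns)
         [0.196, 0.380, 1.000]]

d_h = [1.0, 0.8, 0.6]             # assumed [D_c,r, D_m,g, D_y,b]
d_s = mat_vec(m_rho, d_h)         # Eq. (15.19): overprint RGB densities
d_back = mat_vec(invert_3x3(m_rho), d_s)   # Eq. (15.20): recovered primes
```

Recovering `d_h` exactly from `d_s` confirms that the masking matrix is well conditioned for these ratios, as the rank-three argument above asserts.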
15.9 Device-Masking Equation
Equation (15.20) is not very useful because it is in the density domain. Color imag-
ing devices do not operate in the density domain; they are in the device intensity
domain. The transformation between optical density and device intensity exists; it
can be established experimentally as discussed in Section 15.6. If the relationship
between intensity and density is linear, Eq. (15.20) can be directly transformed
into the device intensity domain. However, as Yule pointed out long ago, in offset
printing the additivity and proportionality of the density are not strictly obeyed;²
they are met approximately.⁵ His observation is still valid in today's printing envi-
ronment, which uses very different technologies such as electrophotographic and
ink-jet printers. As discussed in Section 15.5, additivity is followed rather closely
before reaching high densities, and density ratios of primary colorants are pretty
constant across a wide dynamic range. The tone curve can be linearized with re-
spect to device value by using Eq. (15.17) or Eq. (15.18). Therefore, we can set up
a forward masking equation by replacing density values in Eq. (15.20) with device
values. Equation (15.21) gives the masking equation in the device intensity domain,
where G_i = [C_i, M_i, Y_i]^T are the input device CMY values and G_o = [C_o, M_o, Y_o]^T
are the corresponding output values:

G_i = M_t G_o = M_ρ Λ_r G_o. (15.21)
The explicit expression is

[C_i, M_i, Y_i]^T = [[t_rr, t_rg, t_rb], [t_gr, t_gg, t_gb], [t_br, t_bg, t_bb]] [C_o, M_o, Y_o]^T
= [[1, ρ_rg, ρ_rb], [ρ_gr, 1, ρ_gb], [ρ_br, ρ_bg, 1]] diag(D_c,r/D_max, D_m,g/D_max, D_y,b/D_max) [C_o, M_o, Y_o]^T.

Here D_max = MAX(D_c,r, D_m,g, D_y,b) is the highest density among all three primary
colorants. With the scaling by the diagonal matrix Λ_r, this transformation takes into
account the differences of the maximum densities of the primary colorants; den-
sity ratios are scaled with respect to the highest density. For the Canon CLC500,
using the density ratios given in Table 15.2, the matrix M_t is given by Eq. (15.22):

[C_i, M_i, Y_i]^T = [[1.000, 0.042, 0.008], [0.422, 0.707, 0.039], [0.196, 0.269, 0.771]] [C_o, M_o, Y_o]^T. (15.22)
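The scaling that produces Eq. (15.22) can be reproduced directly from Table 15.2: each ink's column of ratios is weighted by that ink's maximum density divided by D_max = 1.446.

```python
# Rebuild the forward device-masking matrix of Eq. (15.22) from the
# Table 15.2 density ratios and solid densities.

ratios = [[1.000, 0.059, 0.010],   # rows: r, g, b; cols: cyan, magenta, yellow
          [0.422, 1.000, 0.050],
          [0.196, 0.380, 1.000]]
max_od = [1.446, 1.022, 1.115]     # maximum (solid) densities of C, M, Y
d_max = max(max_od)

m_t = [[ratios[i][j] * max_od[j] / d_max for j in range(3)]
       for i in range(3)]
```

The result matches Eq. (15.22) to the printed precision; for example, the magenta diagonal is 1.022/1.446 ≈ 0.707.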
With the measured density ratios ρ_ij, we can compute output values by invert-
ing the forward masking equation of Eq. (15.21). Two methods are proposed for
inverting Eq. (15.21).

15.9.1 Single-step conversion of the device-masking equation

Because the matrix M_t is not singular, we invert Eq. (15.21) directly to obtain the
output vector G_o, as given in Eq. (15.23):

G_o = M_t^{−1} G_i = M_u G_i. (15.23)
For the Canon CLC500, M_u is given by Eq. (15.24):

[C_o, M_o, Y_o]^T = [[1.0260, −0.0580, −0.0077], [−0.6098, 1.4767, −0.0684], [−0.0481, −0.5005, 1.3228]] [C_i, M_i, Y_i]^T. (15.24)
Note that the off-diagonal elements are all negative because they represent unwanted
absorptions that should be removed. One can use Eq. (15.23) directly to obtain
output values. The problem is that many inputs will give out-of-range values for
outputs because the diagonal elements are greater than 1. For inputs of a single ink
with a high device value, for instance, the output will be greater than 1. To prevent
outputs from going over the upper boundary, one can normalize Eq. (15.23) to have
a value of 1 for the maximum coefficient in each row:

[C_o, M_o, Y_o]^T = [[1, u′_rg, u′_rb], [u′_gr, 1, u′_gb], [u′_br, u′_bg, 1]] [C_i, M_i, Y_i]^T, or G_o = M_u′ G_i, (15.25)

where u′_rg = u_rg/u_rr, u′_rb = u_rb/u_rr, u′_gr = u_gr/u_gg, u′_gb = u_gb/u_gg, u′_br = u_br/u_bb,
and u′_bg = u_bg/u_bb, with u_ij denoting the elements of M_u = M_t^{−1}.

This normalization step ensures that no outputs will go over the upper bound-
ary, but it decouples the density correlation among the three components. For the
Canon CLC500, the matrix M_u′ is given by Eq. (15.26), which is the normalization of
Eq. (15.24):

[C_o, M_o, Y_o]^T = [[1.0000, −0.0565, −0.0075], [−0.4129, 1.0000, −0.0463], [−0.0364, −0.3784, 1.0000]] [C_i, M_i, Y_i]^T. (15.26)
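The single-step conversion is a matrix inverse followed by a row normalization. The sketch below reproduces Eqs. (15.24) and (15.26) from the forward matrix of Eq. (15.22) and applies the result to a three-color input.

```python
# Single-step device-masking conversion: invert Eq. (15.22), then scale
# each row so its maximum (diagonal) coefficient is 1.

def invert_3x3(m):
    """3x3 inverse via the adjugate."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

m_t = [[1.000, 0.042, 0.008],     # forward matrix of Eq. (15.22)
       [0.422, 0.707, 0.039],
       [0.196, 0.269, 0.771]]

m_u = invert_3x3(m_t)                                   # Eq. (15.24)
m_norm = [[x / max(row) for x in row] for row in m_u]   # Eq. (15.26)

cmy_in = [255, 255, 255]
cmy_out = [sum(m_norm[i][j] * cmy_in[j] for j in range(3)) for i in range(3)]
```

Rounded, `cmy_out` reproduces the CLC500 row (239, 138, 149) of Table 15.4 for the all-255 input.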
15.9.2 Multistep conversion of the device-masking equation

The second method breaks the process into several steps. First, the row sums of
the matrix M_t are normalized to 1. This normalization totally decouples the density
correlation among all three components, giving each channel an independent treat-
ment:

[C_i, M_i, Y_i]^T = [[v_rr, v_rg, v_rb], [v_gr, v_gg, v_gb], [v_br, v_bg, v_bb]] [C_o, M_o, Y_o]^T, or G_i = M_v G_o, (15.27)
where

v_rr = t_rr/(t_rr + t_rg + t_rb), v_rg = t_rg/(t_rr + t_rg + t_rb), v_rb = t_rb/(t_rr + t_rg + t_rb),
v_gr = t_gr/(t_gr + t_gg + t_gb), v_gg = t_gg/(t_gr + t_gg + t_gb), v_gb = t_gb/(t_gr + t_gg + t_gb),
v_br = t_br/(t_br + t_bg + t_bb), v_bg = t_bg/(t_br + t_bg + t_bb), v_bb = t_bb/(t_br + t_bg + t_bb).
We then invert Eq. (15.27) to get outputs in which the unwanted absorptions are
masked:

[C_o, M_o, Y_o]^T = [[w_rr, w_rg, w_rb], [w_gr, w_gg, w_gb], [w_br, w_bg, w_bb]] [C_i, M_i, Y_i]^T, or G_o = M_w G_i, (15.28)
where M_w = M_v^{−1}. Finally, we normalize Eq. (15.28) to have a value of 1 for the
maximum coefficient in each row:

[C_o, M_o, Y_o]^T = [[1, w′_rg, w′_rb], [w′_gr, 1, w′_gb], [w′_br, w′_bg, 1]] [C_i, M_i, Y_i]^T, or G_o = M_w′ G_i, (15.29)
where w′_rg = w_rg/w_rr, w′_rb = w_rb/w_rr, w′_gr = w_gr/w_gg, w′_gb = w_gb/w_gg,
w′_br = w_br/w_bb, and w′_bg = w_bg/w_bb.
The second method is used to demonstrate the flexibility of the masking
method. One can break the process apart and insert a desired processing step, such as
scaling, normalization, or mapping, in the places that one sees fit. Both methods will
work. The selection depends on the preference of the viewer and the design con-
cerns of the implementer, among other things. Moreover, the difference between
these two methods may be compensated to some degree by using 1D tone curves
before and/or after the masking correction. The addition of 1D lookup tables makes
the method very versatile; one can use it to implement tone linearization for ensur-
ing the proportionality of the tone scale, or one can use it to realize a viewer's
preference.
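The multistep conversion of Eqs. (15.27)–(15.29) can be sketched the same way; note how the row-sum normalization before inversion yields off-diagonal coefficients that differ slightly from the single-step normalization.

```python
# Multistep device-masking conversion: row-sum normalization, inversion,
# then row-maximum normalization, per Eqs. (15.27)-(15.29).

def invert_3x3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

m_t = [[1.000, 0.042, 0.008],     # forward matrix of Eq. (15.22)
       [0.422, 0.707, 0.039],
       [0.196, 0.269, 0.771]]

m_v = [[x / sum(row) for x in row] for row in m_t]       # Eq. (15.27)
m_w = invert_3x3(m_v)                                    # Eq. (15.28)
m_final = [[x / max(row) for x in row] for row in m_w]   # Eq. (15.29)
```

The decoupling shows up numerically: `m_final[1][0]` comes out near −0.371 here, versus −0.413 in the single-step normalization of Eq. (15.26).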
15.9.3 Intuitive approach
There is also a very intuitive approach for masking:

[C_o, M_o, Y_o]^T = [[1, −β_21, −β_31], [−β_12, 1, −β_32], [−β_13, −β_23, 1]] [C_i, M_i, Y_i]^T, (15.30)
where β_21 = D_m,r/D_c,r, β_31 = D_y,r/D_c,r, β_12 = D_c,g/D_m,g, β_32 = D_y,g/D_m,g,
β_13 = D_c,b/D_y,b, and β_23 = D_m,b/D_y,b are ratios of unwanted
absorption to prime absorption. Equation (15.30) says that the correction for cyan
is to subtract magenta and yellow, weighted by their absorption ratios; the correc-
tion for magenta is to subtract cyan and yellow, weighted by their absorption ratios;
and the correction for yellow is to subtract cyan and magenta, weighted by their
absorption ratios. This intuitive approach makes sense and may also work.
15.10 Performance of the Device-Masking Equation
Several numerical examples from an experimental electrophotographic printer, G2,
and a Canon CLC500 copier are given in Table 15.4 using the single-step conver-
sion of Eq. (15.25). This table reveals several interesting points, as follows:
(1) Primary colors are not affected. Therefore, primary colors are as pure as
they can be and the most saturated colors are retained. This may not be the
case for the 3D-LUT and regression methods.
(2) The mixed colors use less colorant than inputs because of reduction by
unwanted absorptions. Thus, output colors would not get too dark. It also
saves colorant consumption without sacricing saturation.
(3) There is an upper limit placed on the three colorants because of the un-
wanted absorption (see the results in Table 15.4; when C_i = M_i = Y_i = 255,
the output is only 69% of the input). With gray-component replacement, the
total amount of colorants can be further reduced.
(4) This color correction produces correct blue hues.
Multistep conversion also has these advantages. On some occasions, we obtained a
negative value for one component (or more), indicating that the output gamut is not
able to match the input color. These are the cases when (i) yellow is small and the
other two components are large, where the relatively large negative magenta coef-
ficient in the yellow expression overcomes the small positive contribution from the
yellow component to give a negative value; and (ii) magenta is small and the other
two components are large, where the relatively large negative cyan coefficient in
the magenta expression overcomes the small positive contribution from the ma-
genta component to give a negative value. We need to check every output value
and set any negative value to zero.
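A sketch of applying the normalized CLC500 matrix of Eq. (15.26) with the negative-output check just described: primaries pass through unchanged, and negative components clamp to zero.

```python
# Device masking with clamping, using the matrix printed in Eq. (15.26).

M = [[ 1.0000, -0.0565, -0.0075],
     [-0.4129,  1.0000, -0.0463],
     [-0.0364, -0.3784,  1.0000]]

def mask(cmy):
    out = [sum(M[i][j] * cmy[j] for j in range(3)) for i in range(3)]
    return [max(0.0, v) for v in out]   # clamp negatives to zero

pure_cyan = mask([255, 0, 0])       # primaries are unaffected
three_color = mask([255, 255, 255])
```

Rounded, `three_color` gives (239, 138, 149), the all-255 CLC500 row of Table 15.4, while `pure_cyan` stays at (255, 0, 0).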
15.11 Gray Balancing
The density-masking method can be applied to gray balance, in which the correct
amounts of cyan, magenta, and yellow are computed to give a neutral color. This
is achieved by setting C_i = M_i = Y_i = g_i in Eq. (15.23) to give Eq. (15.31).
Table 15.4 The input values and corrected output values using the density-masking
method.
Input G2 output CLC500 output
C_i M_i Y_i    C_o M_o Y_o    C_o M_o Y_o
255 255 255 199 139 128 239 138 149
255 255 127 205 151 0 240 144 21
255 255 0 211 162 0 241 148 0
255 127 255 221 11 172 246 10 198
255 127 127 227 23 44 247 16 70
255 127 0 233 34 0 248 22 0
255 0 255 243 0 215 253 0 246
255 0 127 249 0 87 254 0 118
255 0 0 255 0 0 255 0 0
127 255 255 71 186 148 111 191 154
127 255 127 77 197 20 112 197 26
127 255 0 83 209 0 113 203 0
127 127 255 93 58 192 118 63 202
127 127 127 99 69 64 119 69 74
127 127 0 105 81 0 120 75 0
127 0 255 115 0 235 125 0 250
127 0 127 121 0 107 126 0 122
127 0 0 127 0 0 127 0 0
0 255 255 0 232 168 0 243 159
0 255 127 0 244 40 0 249 31
0 255 0 0 255 0 0 255 0
0 127 255 0 104 212 0 115 207
0 127 127 0 116 84 0 121 79
0 127 0 0 127 0 0 127 0
0 0 255 0 0 255 0 0 255
0 0 127 0 0 127 0 0 127
0 0 0 0 0 0 0 0 0
C_o = (u_rr + u_rg + u_rb) g_i,
M_o = (u_gr + u_gg + u_gb) g_i,
Y_o = (u_br + u_bg + u_bb) g_i, (15.31)

where u_ij are the elements of the inverted masking matrix of Eq. (15.23).
With known coefficient values, we can compute C_o, M_o, and Y_o to give a gray that is bal-
anced with respect to density. Hopefully, the gray is colorimetrically balanced as
well. Equation (15.31) provides a means for checking the quality of the masking
equation. If the resulting CMY components give true grays, then there is no need
to perform tedious experiments for obtaining gray-balance curves.⁹
Moreover, in reproducing blue colors, the masking equation correctly reduces
the unwanted absorptions from the cyan and magenta inks in proportion to the
amounts of the CMY inputs because all off-diagonal elements are negative [see
Eq. (15.24)]. The result is a correct blue color instead of purple, so there
is no need for a complex hue-rotation algorithm to achieve a correct blue
rendition.
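Equation (15.31) also implies a simple linearity check against Table 15.4: for a neutral input C_i = M_i = Y_i = g_i, each output channel is just the corresponding row sum of the masking matrix times g_i, so the white row (g_i = 255) fixes the row sums and predicts every other neutral row. A minimal sketch in Python, using the G2 values from Table 15.4:

```python
# Gray balance via Eq. (15.31): for a neutral input g_i, each output channel
# is the corresponding row sum of the masking matrix times g_i.
# The row sums are recovered from the g_i = 255 (white) row of Table 15.4 (G2 printer).
g_white = 255
c_w, m_w, y_w = 199, 139, 128          # G2 outputs for input (255, 255, 255)
row_sums = [c_w / g_white, m_w / g_white, y_w / g_white]

def gray_balance(g_i):
    """Eq. (15.31): C_o, M_o, Y_o for a neutral input C_i = M_i = Y_i = g_i."""
    return [round(r * g_i) for r in row_sums]

print(gray_balance(127))   # Table 15.4 lists (99, 69, 64) for the (127,127,127) row
```

The predicted triple for g_i = 127 matches the (127, 127, 127) row of Table 15.4 to within rounding, which is exactly the linearity that Eq. (15.31) asserts.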
15.12 Gray-Component Replacement
Gray-component replacement (GCR) is defined as a technique in color printing
wherein the neutral or gray component of a three-color image is replaced during
reproduction with a certain level of black ink. The least predominant of the
three primary inks is used to calculate a partial or total substitution by black, and
the color components of that image are reduced to produce a print image of a
nearly equivalent color to the original three-color print [10]. Specifications for
Web Offset Publications (SWOP) use both GCR and UCR (under-color removal) in their
publications and make the distinction between them by stating that UCR refers
to the reduction of chromatic colors in the dark or near-neutral shadow areas for
the purpose of reducing the total area coverage and ink-film thickness [11].
Under this definition, UCR becomes a subset of GCR, which applies to all color
mixtures. As I understand it, the conventional way of performing GCR is
accomplished by a sequence of operations including grayscale tone reproduction
or gray balancing, color correction, UCR, black generation (BG), under-color
addition (UCA), and mathematical analysis. UCR finds the minimum of the three
primary colors and determines the amount of each primary to be removed. BG
determines the amount of black ink to be added. UCA adds color primaries to
enhance saturation and depth. In any case, GCR is a very complicated
color-reproduction process; it requires many attempts by skilled colorists.
The masking equation can also be used for GCR; it is the natural extension of
gray balancing. To remove a given gray component g_i, we compute the CMY
components via Eq. (15.23). The common component g_i is determined by a predefined
black-generation algorithm. Then, the common component is removed from
the initial CMY values. The masking equation makes the complex GCR process
simpler and easier to implement; an algorithm is given as follows:
(1) Find the minimum of the C_i, M_i, and Y_i components.
(2) Generate the black using a function g_i = f[MIN(C_i, M_i, Y_i)], where the
function can be a scaling factor or a nonlinear transform. For example, the
scaling factor can be varied as a function of the common component, as
shown in Fig. 15.13.
(3) Substitute the g_i value into Eq. (15.31) to obtain the values of C_o, M_o, and Y_o
for UCR.
(4) Subtract the under colors C_o, M_o, and Y_o from the inputs C_i, M_i, and Y_i,
respectively.
Figure 15.13 The strategy of gray-component replacement.
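The four steps above can be sketched directly. The 3 × 3 coefficients below are hypothetical placeholders with the negative off-diagonal structure of Eq. (15.24), not the measured G2 or CLC500 matrices; the row sums implement Eq. (15.31):

```python
# Gray-component replacement via the masking equation.
# MASK is a hypothetical inverse masking matrix with negative off-diagonal
# terms, standing in for Eq. (15.24); real coefficients come from measured
# densities as in Section 15.13.
MASK = [[0.90, -0.05, -0.03],
        [-0.08, 0.85, -0.06],
        [-0.04, -0.10, 0.92]]
ROW_SUMS = [sum(row) for row in MASK]       # Eq. (15.31) applied to a neutral g_i

def gcr(ci, mi, yi, black_gen=lambda g: g):
    g = black_gen(min(ci, mi, yi))          # steps (1)-(2): black generation
    co, mo, yo = (r * g for r in ROW_SUMS)  # step (3): under colors, Eq. (15.31)
    return ci - co, mi - mo, yi - yo, g     # step (4): UCR, plus the black channel

c, m, y, k = gcr(200, 150, 130)
assert k == 130 and min(c, m, y) >= 0
```

Because the row sums are less than one (the off-diagonal terms are negative), the removed amounts never exceed the generated black, which is the built-in ink-loading limit discussed next.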
An additional benefit is that this GCR approach places a maximum value on the ink
loading at any given spot. Figure 15.14 depicts the maximum and minimum ink
loading of mixing four CMYK inks as a function of the common component. Ink
quantities are computed using the masking Eq. (15.31) and a linear GCR, showing
that the maximum ink loading cannot exceed 220%.
The beauty of this algorithm is that the maximum ink loading is built into the
masking operation; there is no need to implement a complex mechanism for
ensuring that the ink loading will not exceed a certain level.
15.13 Digital Implementation
Mathematical fundamentals for implementing the color-masking conversion via
integer lookup tables are given in Appendix 9. The accuracy of the integer lookup
is compared to the floating-point computation. The optimal pixel depth is recommended
by using a simulation that computes the adjusted densities via a masking
equation.
Figure 15.14 Maximum and minimum ink loading using a variable gray-component replacement.
15.13.1 Results of the integer masking equation
As an exercise, we take the nine density values from Yule's book [12],

D_c,r = 1.23,  D_m,r = 0.11,  D_y,r = 0.02,
D_c,g = 0.35,  D_m,g = 1.05,  D_y,g = 0.08,
D_c,b = 0.11,  D_m,b = 0.53,  D_y,b = 0.94,
for computing the coefficients of the matrices. Using Eqs. (15.13), (15.14), and
(15.15), we compute the density ratios and build matrix M′ from Eq. (15.19).
Matrix M′ is inverted using Eq. (15.20). Coefficients of the inverted matrix,
encoded in floating point, are used as the standard for comparison with integer
implementations.
The procedure for the conversion is as follows:
(1) RGB inputs are converted to densities via a forward lookup table to obtain
scaled integer CMY values.
(2) These integers are plugged into Eq. (15.20) to compute color-corrected
densities.
(3) The resulting densities are used as indices to a backward lookup table for
obtaining corrected RGB values.
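The three-step procedure can be sketched as follows. The density range DMAX, the pixel depth, the 3 × 3 coefficients, and the 0-255 RGB encoding are all assumptions for illustration, not the values used in the book's simulation:

```python
import math

BITS, DMAX = 10, 2.4                      # assumed pixel depth and density range
SCALE = (1 << BITS) - 1
# Hypothetical color-correction matrix standing in for the inverted masking matrix.
M = [[1.10, -0.08, -0.02],
     [-0.30, 1.25, -0.10],
     [-0.05, -0.45, 1.15]]

# Step (1): forward LUT, 8-bit reflectance value -> scaled integer density.
fwd = [round(min(-math.log10(max(v, 1) / 255.0), DMAX) / DMAX * SCALE)
       for v in range(256)]
# Step (3): backward LUT, scaled integer density -> 8-bit reflectance value.
bwd = [round(255.0 * 10.0 ** (-i * DMAX / SCALE)) for i in range(SCALE + 1)]

def correct(rgb, lut=True):
    if lut:
        d = [fwd[v] for v in rgb]                              # step (1)
        d2 = [min(max(round(sum(c * x for c, x in zip(row, d))), 0), SCALE)
              for row in M]                                    # step (2)
        return [bwd[i] for i in d2]                            # step (3)
    # floating-point reference path for judging the scaled accuracy
    d = [min(-math.log10(max(v, 1) / 255.0), DMAX) for v in rgb]
    d2 = [min(max(sum(c * x for c, x in zip(row, d)), 0.0), DMAX) for row in M]
    return [255.0 * 10.0 ** (-x) for x in d2]

print(correct([200, 80, 40]), correct([200, 80, 40], lut=False))
```

Comparing the two paths channel by channel gives exactly the kind of integer-versus-float error that Eq. (15.32) measures.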
Results are compared to the floating-point computation as a means of judging the
computational accuracy of the scaled approach. The error metric is the Euclidean
distance between integer and floating-point values, given as follows:

Error = {(D_r,mbit − D_r,float)² + (D_g,mbit − D_g,float)² + (D_b,mbit − D_b,float)²}^(1/2).  (15.32)
In this study, we use five-level RGB combinations, a total of 125 data sets. The
average errors of the 125 data sets are 2.8, 1.6, and 1.1 for 8-bit, 10-bit, and 12-bit
representations, respectively. As expected, the average error decreases as the bit
depth increases. Note that the average error of 12-bit scaling is not much bigger
than the maximum round-off error of 0.87. Histograms of the error distribution
are given in Fig. 15.15; the bandwidth becomes narrower and the amplitude
increases as the bit depth increases. Usually, large errors occur in cases
where the differences between input values are big (e.g., R = 245, G = 1, and
B = 1). From this simulation, it is believed that either the 10-bit or 12-bit
representation will give good accuracy for converting reflectance to and from
density.
Figure 15.15 Histograms of error distributions with respect to bit depth.
15.14 Remarks
Several methods of density and device masking are proposed. These methods are
simple, yet versatile and robust. They give very good color renditions for digital
images, giving vivid colors and depth in the shadow region. They can be applied to
gray balance, gray-component replacement, maximum ink setting, and blue correction.
Unlike colorimetric transformation, coefficients of the masking equation are
adapted to the device characteristics. This transformation is already in the device
intensity domain; thus, there is no need for the profile-connection space or any
other conversions. The color conversion is accomplished in one transformation.
Additional flexibility can be gained by adding a set of 1D lookup tables (one for
each channel) before and/or after the matrix transformation to implement the tone
curves for any desired outcomes, such as linearization (proportionality), contrast
enhancement, customer preference, etc. This approach provides a very simple
implementation and a very low computational cost when compared to methods using
regression and a 3D lookup table with interpolation. This, in turn, gives a simple
color architecture and fast processing speed. There are many additional advantages,
as follows:
(1) The primaries are not affected; therefore, saturated colors are retained.
(2) The mixed colors use less colorant than inputs because of the unwanted
absorption.
(3) An upper limit is placed on the three colorants at any point of the rendered
image because of the unwanted absorption.
(4) There is no need for color rotation to give correct blue hues.
Using densitometry to reproduce an image is not a colorimetric reproduction. Even
if the density is transformed or correlated to the tristimulus values, it still will
not be able to handle metamers and fluorescence. Unlike with colorimeters, a metameric
pair will give different sets of density values measured by a given densitometer,
and most likely will be rendered differently by a given printer. In spite of these
problems, densitometric reproduction is a good and simple color-reproduction
method.
Giorgianni and Madden pointed out that for a reproduction system using a densitometric
scanner and a printer, the densitometric reproduction that measures input
dye amounts and produces those amounts on the output will be a one-to-one physical
copy (what is known as a duplicate) of the input image [13]. If the input and
output images are viewed under identical conditions, they will visually match. For
a system based on densitometric scanning of a single medium, one does not need to
convert back and forth to colorimetric representations. With few transformations,
the processing speed and accuracy will be high. However, relatively few systems
use the same imaging medium for both input and output. Still, the densitometric
input and output values of many practical systems are remarkably similar. They are
at least much more similar to each other than either is to any set of CIE colorimetric
values.
References
1. E. J. Giorgianni and T. E. Madden, Digital Color Management, Addison-Wesley,
Reading, MA, pp. 447–457 (1998).
2. C. S. McCamy, Color densitometry, Color: Theory and Imaging Systems,
R. A. Eynard (Ed.), Soc. Photogr. Sci. Eng., Washington, D.C. (1973).
3. ANSI.
4. X-Rite densitometer.
5. M. Pearson, Modern color-measuring instruments, Optical Radiation Measurements,
Vol. 2, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York,
pp. 337–366 (1980).
6. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, Chap. 3,
pp. 29–41 (1999).
7. J. A. C. Yule, Principles of Color Reproduction, Wiley, New York, Chap. 8,
pp. 205–232 (1967).
8. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press,
Bellingham, WA, Chap. 3, pp. 55–63 (1997).
9. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press,
Bellingham, WA, Chap. 12, pp. 310–317 (1997).
10. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA,
pp. 317–326 (1999).
11. SWOP 1993, SWOP Inc., New York (1993).
12. J. A. C. Yule, Principles of Color Reproduction, Wiley, New York, p. 281
(1967).
13. E. J. Giorgianni and T. E. Madden, Digital Color Management, Addison-Wesley,
Reading, MA, Chap. 9, pp. 201–206 (1998).
Chapter 16
Kubelka-Munk Theory
The Kubelka-Munk (K-M) theory is based on light absorption and partial scattering.
Kubelka and Munk formulated this theory to model the resultant light emerging
from translucent and opaque media, assuming that there are only two light
channels traveling in opposite directions. The light is absorbed and scattered
in only two directions: an upward beam J and a downward beam I, as shown in
Fig. 16.1 [1–3]. A background is present at the bottom of the medium to provide
the upward light reflection.
This chapter presents the fundamentals of the K-M theory, its spectral extension,
its cellular extension, and a unified global theory.
Figure 16.1 Kubelka-Munk two-channel model of light absorption and scattering.
16.1 Two-Constant Kubelka-Munk Theory
The derivation of the Kubelka-Munk formula can be found in many publications [1–3]:

−dI(λ)/dw = −[κ(λ) + s(λ)]I(λ) + s(λ)J(λ),  (16.1a)
dJ(λ)/dw = −[κ(λ) + s(λ)]J(λ) + s(λ)I(λ),  (16.1b)

where w is the width, κ(λ) is the absorption coefficient, and s(λ) is the scattering
coefficient. Equation (16.1) can be expressed in matrix-vector form as

| dI(λ)/dw |   | [κ(λ) + s(λ)]   −s(λ)          | | I(λ) |
| dJ(λ)/dw | = | s(λ)            −[κ(λ) + s(λ)] | | J(λ) | .  (16.2)
There is a general solution to this kind of matrix differential equation [Eq. (16.2)],
given by the exponential of the matrix. Integrating Eq. (16.2) from w = 0 to
w = W, we obtain the general solution as follows:

| I_W(λ) |         | [κ(λ) + s(λ)]   −s(λ)          |     | I_0(λ) |
| J_W(λ) | = exp ( | s(λ)            −[κ(λ) + s(λ)] | W ) | J_0(λ) | ,  (16.3)
where I_W(λ) and J_W(λ) are the intensities of fluxes I and J at w = W, respectively,
and I_0(λ) and J_0(λ) are the intensities of fluxes I and J at w = 0, respectively.
The exponential of a matrix, such as the one in Eq. (16.3), is defined as the sum
of a power series:

exp(M) = Σ_{k=0}^∞ M^k/k!.  (16.4)
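Equation (16.4) can be evaluated numerically by truncating the series. As a quick sanity check, the exponential of the matrix [[0, t], [t, 0]] is known in closed form: [[cosh t, sinh t], [sinh t, cosh t]]. A minimal sketch:

```python
import math

def expm2(M, terms=40):
    """Truncated power series of Eq. (16.4) for a 2x2 matrix."""
    result = [[1.0, 0.0], [0.0, 1.0]]            # M^0 / 0! = identity
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        # term <- term * M / k, so term holds M^k / k! after iteration k
        term = [[sum(term[i][l] * M[l][j] for l in range(2)) / k
                 for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

t = 0.8
E = expm2([[0.0, t], [t, 0.0]])
assert abs(E[0][0] - math.cosh(t)) < 1e-12 and abs(E[0][1] - math.sinh(t)) < 1e-12
```

For the modest exponents that occur in the K-M flux matrices, this truncated series converges to machine precision well before 40 terms.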
An explicit solution can be derived by setting the body reflection P(λ) =
J(λ)/I(λ), and we obtain

dP(λ)/dw = d[J(λ)/I(λ)]/dw = s(λ) − 2[κ(λ) + s(λ)]P(λ) + s(λ)P(λ)²,  (16.5)

∫ dw = ∫ dP(λ)/{s(λ) − 2[κ(λ) + s(λ)]P(λ) + s(λ)P(λ)²}.  (16.6)

Equation (16.6) can be integrated by applying the boundary conditions: at the
background, w = 0 and P(λ) = P_g(λ); at the air-substrate interface, w = W and
P(λ) is the film reflectance. This gives the expression of Eq. (16.7):

P(λ) = {1 − P_g(λ)[a(λ) − b(λ) coth(b(λ)s(λ)W)]}/{a(λ) − P_g(λ) + b(λ) coth[b(λ)s(λ)W]},  (16.7)
where P(λ) is the reflectance of the film, P_g(λ) is the reflectance of the background,
and

a(λ) = 1 + κ(λ)/s(λ),  (16.8)

b(λ) = [a(λ)² − 1]^(1/2) = {[κ(λ)/s(λ)]² + 2[κ(λ)/s(λ)]}^(1/2),  (16.9)

coth[b(λ)s(λ)W] = {exp[b(λ)s(λ)W] + exp[−b(λ)s(λ)W]}/{exp[b(λ)s(λ)W] − exp[−b(λ)s(λ)W]}.  (16.10)

The expression given in Eq. (16.7) is called the two-constant Kubelka-Munk equation,
because the two constants, κ and s, are determined separately. It indicates
that the reflectance of a translucent film is a function of the absorption coefficient,
the scattering coefficient, the film thickness, and the reflectance of the background.
Equation (16.7) is the basic form of the Kubelka-Munk equation and the foundation
of other variations of this basic Kubelka-Munk formula [2]. To employ Eq. (16.7),
one needs to know P_g and W in addition to the κ and s constants.
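Equations (16.7)-(16.10) translate directly into code; the numeric constants below are arbitrary test values, not measured ink data. The first check exercises the infinite-thickness limit P_∞ = a − b [Eq. (16.11)], the second the zero-thickness limit P → P_g:

```python
import math

def km_reflectance(kappa, s, W, Pg):
    """Two-constant Kubelka-Munk reflectance, Eqs. (16.7)-(16.10)."""
    a = 1.0 + kappa / s                                       # Eq. (16.8)
    b = math.sqrt(a * a - 1.0)                                # Eq. (16.9)
    coth = math.cosh(b * s * W) / math.sinh(b * s * W)        # Eq. (16.10)
    return (1.0 - Pg * (a - b * coth)) / (a - Pg + b * coth)  # Eq. (16.7)

# A very thick film forgets its background: P approaches P_inf = a - b.
p_inf = 1.25 - 0.75      # a = 1.25, b = 0.75 for kappa = 0.5, s = 2
assert abs(km_reflectance(0.5, 2.0, 50.0, 0.2) - p_inf) < 1e-9
# A vanishingly thin film just shows the background.
assert abs(km_reflectance(0.5, 2.0, 1e-7, 0.2) - 0.2) < 1e-3
```

Both limits fall out of the coth term: it tends to 1 for large b·s·W, eliminating P_g, and to 1/(b·s·W) for small thickness, leaving only the background.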
16.2 Single-Constant Kubelka-Munk Theory
One of the variations of the two-constant Kubelka-Munk equation is the single-constant
Kubelka-Munk equation. In the limiting case of an infinite thickness, the
equation becomes

P_∞(λ) = a(λ) − b(λ) = 1 + [κ(λ)/s(λ)] − {[κ(λ)/s(λ)]² + 2[κ(λ)/s(λ)]}^(1/2).  (16.11)

The single constant κ(λ)/s(λ) is the ratio of the absorption coefficient to the
scattering coefficient. It can be determined by measuring ink reflectance. In practical
applications, the single constant of a multicomponent system is obtained by summing
the ratios of all components:
κ(λ)/s(λ) = κ_w(λ)/s_w(λ) + c_1[κ_1(λ)/s_1(λ)] + c_2[κ_2(λ)/s_2(λ)]
+ ··· + c_i[κ_i(λ)/s_i(λ)] + ··· + c_m[κ_m(λ)/s_m(λ)],  (16.12)

where κ_w(λ)/s_w(λ) is the single constant of the substrate and κ_i(λ)/s_i(λ) is the
single constant of the component i. The concentration ratio c_i is

c_i = f_i(c′_i/c″_i),  (16.13)

where c′_i is the concentration of the component i in the mixture, c″_i is the
concentration of the primary colorant i at full strength without mixing with other
colorants, and c_i is the concentration ratio of the component i. Usually, a correction
factor f_i is included for each colorant to account for variability in the mixing and
formulation processes.
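A minimal sketch of Eqs. (16.11) and (16.12): the mixture's single constant is the substrate ratio plus the concentration-weighted component ratios, and Eq. (16.11) converts it to a reflectance. The component values are made up for illustration; the final assertion uses the inverse relation that appears later as Eq. (16.25):

```python
import math

def p_infinity(ks):
    """Eq. (16.11): reflectance at infinite thickness from a single constant K/S."""
    return 1.0 + ks - math.sqrt(ks * ks + 2.0 * ks)

def mixture_ks(ks_substrate, ks_components, concentrations):
    """Eq. (16.12): substrate ratio plus concentration-weighted component ratios."""
    return ks_substrate + sum(c * ks for c, ks in zip(concentrations, ks_components))

ks = mixture_ks(0.02, [1.8, 0.6], [0.3, 0.5])   # one wavelength, two colorants
p = p_infinity(ks)
# The K-M function (1 - P)^2 / (2P) recovers the single constant exactly.
assert abs((1.0 - p) ** 2 / (2.0 * p) - ks) < 1e-12
```

Repeating this per sampled wavelength produces the spectral form used in the next equations.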
If we set

α(λ) = κ(λ)/s(λ),  (16.14)

then

α_w(λ) = κ_w(λ)/s_w(λ),  (16.15)

and

α_i(λ) = κ_i(λ)/s_i(λ).  (16.16)

We then have

α(λ) = [α_w(λ) α_1(λ) α_2(λ) α_3(λ) ··· α_m(λ)][1 c_1 c_2 c_3 ··· c_m]^T.  (16.17)
If we sample the visible spectrum at a fixed interval, we can put Eq. (16.17)
in vector-matrix form:

| α(λ_1) |   | α_w(λ_1)  α_1(λ_1)  α_2(λ_1)  α_3(λ_1)  . . .  α_m(λ_1) | | 1   |
| α(λ_2) |   | α_w(λ_2)  α_1(λ_2)  α_2(λ_2)  α_3(λ_2)  . . .  α_m(λ_2) | | c_1 |
| α(λ_3) | = | α_w(λ_3)  α_1(λ_3)  α_2(λ_3)  α_3(λ_3)  . . .  α_m(λ_3) | | c_2 |  (16.18a)
| α(λ_4) |   | α_w(λ_4)  α_1(λ_4)  α_2(λ_4)  α_3(λ_4)  . . .  α_m(λ_4) | | c_3 |
| . . .  |   | . . .                                                   | | ... |
| α(λ_n) |   | α_w(λ_n)  α_1(λ_n)  α_2(λ_n)  α_3(λ_n)  . . .  α_m(λ_n) | | c_m |

or, writing α_j = α(λ_j) and α_ij = α_i(λ_j),

| α_1 |   | α_w1  α_11  α_21  α_31  . . .  α_m1 | | 1   |
| α_2 |   | α_w2  α_12  α_22  α_32  . . .  α_m2 | | c_1 |
| α_3 | = | α_w3  α_13  α_23  α_33  . . .  α_m3 | | c_2 |  (16.18b)
| α_4 |   | α_w4  α_14  α_24  α_34  . . .  α_m4 | | c_3 |
| ... |   | . . .                               | | ... |
| α_n |   | α_wn  α_1n  α_2n  α_3n  . . .  α_mn | | c_m |

or

α = ΦC,  (16.18c)
where n is the number of sample points. For example, if the spectral range
is 400–700 nm and the sampling interval is 10 nm, we have n = 31. The value
m is the number of primary inks, or the number of independent columns in matrix Φ.
In practical applications, the number of primary inks m is usually three or
four. In such cases, it is unlikely that all the simultaneous equations given in
Eq. (16.18) can be satisfied; some of the equations have nonzero residuals because
of the limitations of the K-M theory and experimental errors. Therefore, the best
approach is a least-squares fit that minimizes the sum Δ of the squared residuals of
the equations in Eq. (16.18) [4]:

Δ = Σ_j {α_j − [α_wj + c_1 α_1j + c_2 α_2j + c_3 α_3j + ··· + c_m α_mj]}².  (16.19)

The summation carries over j = 1, . . . , n. Equation (16.19) can be expressed in
matrix form:

Δ = (α − ΦC)^T (α − ΦC).  (16.20)

Minimizing Δ means that the partial derivatives with respect to c_i (i = 1, . . . , m)
are set to zero, resulting in a new set of equations:

(Φ^T Φ)C = Φ^T α.  (16.21)

The explicit expressions of (Φ^T Φ) and (Φ^T α) are

         | Σ α_wj²       Σ α_wj α_1j   Σ α_wj α_2j   Σ α_wj α_3j   . . .  Σ α_wj α_mj |
         | Σ α_wj α_1j   Σ α_1j²       Σ α_1j α_2j   Σ α_1j α_3j   . . .  Σ α_1j α_mj |
Φ^T Φ =  | Σ α_wj α_2j   Σ α_1j α_2j   Σ α_2j²       Σ α_2j α_3j   . . .  Σ α_2j α_mj | ,  (16.22)
         | Σ α_wj α_3j   Σ α_1j α_3j   Σ α_2j α_3j   Σ α_3j²       . . .  Σ α_3j α_mj |
         | . . .                                                                      |
         | Σ α_wj α_mj   Σ α_1j α_mj   Σ α_2j α_mj   Σ α_3j α_mj   . . .  Σ α_mj²     |

         | Σ α_wj α_j |
         | Σ α_1j α_j |
Φ^T α =  | Σ α_2j α_j | .  (16.23)
         | Σ α_3j α_j |
         | . . .      |
         | Σ α_mj α_j |

Again, the summations in Eqs. (16.22) and (16.23) carry over j = 1, . . . , n. The
resulting matrix (Φ^T Φ) has a size of (m + 1) × (m + 1), and (Φ^T α) is a vector of
(m + 1) elements. In all cases, we can make n > m, which means that the matrix
(Φ^T Φ) is invertible:

C = (Φ^T Φ)^(−1) Φ^T α.  (16.24)
16.3 Determination of the Single Constant
The single constant in Eq. (16.16), α_i(λ) = κ_i(λ)/s_i(λ), is calculated from the
measured reflection spectrum of a primary ink:

α_i(λ) = [1 − P_i(λ)]²/[2P_i(λ)],  i = 1, 2, 3, . . . , m.  (16.25)
In this single-constant Kubelka-Munk model, the correction for the refractive
index change at the interface of air and a colored layer is included. A simple
correction is to subtract the surface reflectance r_s from the measured reflectance
P_i(λ):

P′_i(λ) = P_i(λ) − r_s.  (16.26)

The first-surface reflectance is a physical phenomenon of the change in refractive
index between the media; it does not depend on the colorant absorption
characteristics. Therefore, it is not a colorant-dependent parameter. Another frequently
used approach is Saunderson's correction [5]:

P′_i(λ) = [P_i(λ) − f_s]/[1 − f_s − f_r + f_r P_i(λ)],  (16.27)
where f_s is a constant representing the surface reflection and f_r is a fraction
representing internal reflections.
For perfectly diffuse light, the theoretical value of f_r is 0.6 [6]. In practice,
it is constrained to the range 0.45 to 0.6. However, in many applications, f_r is
treated as a variable to fit the data.
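Saunderson's correction and its inverse form a round trip [Eq. (16.27), together with the forward relation derived later as Eq. (16.34)]; f_s = 0.04 and f_r = 0.6 are typical illustrative values, not measured constants:

```python
def saunderson_correct(P, fs=0.04, fr=0.6):
    """Eq. (16.27): measured reflectance -> internal (body) reflectance."""
    return (P - fs) / (1.0 - fs - fr + fr * P)

def saunderson_forward(P_body, fs=0.04, fr=0.6):
    """Eq. (16.34): body reflectance -> measured reflectance (inverse of above)."""
    return (fs + (1.0 - fs - fr) * P_body) / (1.0 - fr * P_body)

measured = 0.35
body = saunderson_correct(measured)
assert abs(saunderson_forward(body) - measured) < 1e-12
```

The round trip makes the derivation in the next section easy to check numerically.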
16.4 Derivation of Saunderson's Correction
Emmel and Hersch provide an interesting derivation of Saundersons correction
using the Kubelka-Munk two-ux model.
7,8
The incident ux on the external sur-
face is partially reected at the surface and is added to the emerging ux, where a
fraction of the emerging ux is reected back and is added to the incident ux as
shown in Fig. 16.2. This model leads to the following equations:
I
W
() = (1 f
s
)I () +f
r
J
W
(), (16.28)
J() = f
s
I () +(1 f
r
)J
W
(). (16.29)
The fluxes I_W(λ) and J_W(λ) are the intensities at w = W. Rearranging
Eq. (16.28), we obtain

I(λ) = [1/(1 − f_s)]I_W(λ) − [f_r/(1 − f_s)]J_W(λ).  (16.30)

Figure 16.2 Kubelka-Munk four-channel model of external and internal reflections with
upward and downward fluxes at the air-ink interface. (Reprinted with permission of
IS&T.) [7]
Substituting Eq. (16.30) into Eq. (16.29), we have

J(λ) = [f_s/(1 − f_s)]I_W(λ) + [(1 − f_s − f_r)/(1 − f_s)]J_W(λ).  (16.31)

Equations (16.30) and (16.31) are combined to give the vector-matrix form of
Eq. (16.32):

| I(λ) |   | 1/(1 − f_s)     −f_r/(1 − f_s)            | | I_W(λ) |
| J(λ) | = | f_s/(1 − f_s)   (1 − f_s − f_r)/(1 − f_s) | | J_W(λ) | .  (16.32)
Reflectance P(λ) is the ratio of the emerging flux J(λ) to the incident flux I(λ);
therefore, we derive the expression for reflectance P by using Eqs. (16.30) and
(16.31):

P(λ) = J(λ)/I(λ)
     = {[f_s/(1 − f_s)]I_W(λ) + [(1 − f_s − f_r)/(1 − f_s)]J_W(λ)}/
       {[1/(1 − f_s)]I_W(λ) − [f_r/(1 − f_s)]J_W(λ)}
     = [f_s + (1 − f_s − f_r)P_W(λ)]/[1 − f_r P_W(λ)],  (16.33)

where P_W(λ) = J_W(λ)/I_W(λ) is the body reflectance at w = W; if W → ∞, then
P_W(λ) → P_∞(λ):

P(λ) = [f_s + (1 − f_s − f_r)P_∞(λ)]/[1 − f_r P_∞(λ)].  (16.34)

By rearranging Eq. (16.34), we derive the expression for the reflectance at infinite
thickness:

P_∞(λ) = [P(λ) − f_s]/[1 − f_s − f_r + f_r P(λ)].  (16.35)
Equation (16.35) is identical to Eq. (16.27) of Saunderson's correction. The general
Kubelka-Munk model with Saunderson's correction is given by combining
Eq. (16.3) with Eq. (16.32):

| I(λ) |   | 1/(1 − f_s)     −f_r/(1 − f_s)            |       | [κ(λ) + s(λ)]   −s(λ)          |     | I_0(λ) |
| J(λ) | = | f_s/(1 − f_s)   (1 − f_s − f_r)/(1 − f_s) | exp ( | s(λ)            −[κ(λ) + s(λ)] | W ) | J_0(λ) | .  (16.36)
. (16.36)
The matrices in Eq. (16.36) can be combined together to give a compact expression
of Eq. (16.37).
_
I ()
J()
_
=
_

o
__
I
0
()
J
0
()
_
. (16.37)
Equation (16.37), together with the boundary condition of J
0
() = P
g
I
0
(), can
be used to calculate the reectance.
P() = J()/I () = ( + P
g
)/( o + P
g
). (16.38)
Equation (16.38) is used to compute the reection spectrum of a translucent
medium with light-absorbing and light-scattering properties.
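Equation (16.38) should agree with the closed form: propagating the fluxes with the matrix exponential of Eq. (16.3), applying the boundary condition J_0 = P_g I_0, and combining with the Saunderson matrix must reproduce Eq. (16.7) followed by Eq. (16.34). A numerical check with arbitrary constants:

```python
import math

def expm2(M, terms=60):
    """Truncated power series of Eq. (16.4) for a 2x2 matrix."""
    R = [[1.0, 0.0], [0.0, 1.0]]
    T = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        T = [[sum(T[i][l] * M[l][j] for l in range(2)) / k for j in range(2)]
             for i in range(2)]
        R = [[R[i][j] + T[i][j] for j in range(2)] for i in range(2)]
    return R

kappa, s, W, Pg, fs, fr = 0.5, 2.0, 1.0, 0.3, 0.04, 0.6

# Matrix route: Eqs. (16.36)-(16.38).
A = [[(kappa + s) * W, -s * W], [s * W, -(kappa + s) * W]]
SC = [[1.0 / (1 - fs), -fr / (1 - fs)],
      [fs / (1 - fs), (1 - fs - fr) / (1 - fs)]]
E = expm2(A)
T = [[sum(SC[i][l] * E[l][j] for l in range(2)) for j in range(2)] for i in range(2)]
P_matrix = (T[1][0] + T[1][1] * Pg) / (T[0][0] + T[0][1] * Pg)   # Eq. (16.38)

# Closed form: Eq. (16.7) for the body reflectance, then Eq. (16.34).
a = 1 + kappa / s
b = math.sqrt(a * a - 1)
coth = math.cosh(b * s * W) / math.sinh(b * s * W)
P_body = (1 - Pg * (a - b * coth)) / (a - Pg + b * coth)
P_closed = (fs + (1 - fs - fr) * P_body) / (1 - fr * P_body)

assert abs(P_matrix - P_closed) < 1e-9
```

The agreement confirms that the compact matrix form and the hyperbolic closed form are the same model expressed two ways.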
16.5 Generalized Kubelka-Munk Model
Emmel and Hersch developed a generalized model based on the Kubelka-Munk
theory [7,8]. They consider two types of regions, inked and noninked, and assume that
the exchange of photons between surface elements takes place only in the substrate
because the ink layer is very thin (about 10 μm). The model uses two levels of ink
intensities, where each ink level consists of two light fluxes of the Kubelka-Munk
up-and-down type. Expanding Eq. (16.2) to two ink levels, we have
     | I_0(w) |   | κ_0 + s_0   −s_0           0           0            | | I_0(w) |
d/dw | J_0(w) | = | s_0         −(κ_0 + s_0)   0           0            | | J_0(w) |
     | I_1(w) |   | 0           0              κ_1 + s_1   −s_1         | | I_1(w) |
     | J_1(w) |   | 0           0              s_1         −(κ_1 + s_1) | | J_1(w) |

                  | I_0(w) |
           = M_KS | J_0(w) | .  (16.39)
                  | I_1(w) |
                  | J_1(w) |
The parameters κ_0, s_0, κ_1, and s_1 are the absorption and scattering constants of the
noninked and inked substrates, respectively. By integrating Eq. (16.39) from w = 0
to w = W, we obtain

| I_0,W |                 | I_0,0 |
| J_0,W | = exp(M_KS W)   | J_0,0 | .  (16.40)
| I_1,W |                 | I_1,0 |
| J_1,W |                 | J_1,0 |
At this point, they include Saunderson's correction to account for the multiple
internal reflections:

| I_0 |   | 1/(1 − f_s)     −f_r/(1 − f_s)            0               0                          | | I_0,W |
| J_0 | = | f_s/(1 − f_s)   (1 − f_s − f_r)/(1 − f_s) 0               0                          | | J_0,W |
| I_1 |   | 0               0                         1/(1 − f_s)     −f_r/(1 − f_s)             | | I_1,W |
| J_1 |   | 0               0                         f_s/(1 − f_s)   (1 − f_s − f_r)/(1 − f_s)  | | J_1,W |

         | I_0,W |
  = M_SC | J_0,W | .  (16.41)
         | I_1,W |
         | J_1,W |
Substituting Eq. (16.40) into Eq. (16.41), we obtain

| I_0 |                       | I_0,0 |
| J_0 | = M_SC exp(M_KS W)    | J_0,0 | .  (16.42)
| I_1 |                       | I_1,0 |
| J_1 |                       | J_1,0 |
This generalized Kubelka-Munk model is based on the assumption that the
exchange of photons occurs only in the substrate; therefore, light scattering occurs
at the boundary w = 0. This implies that the emerging fluxes J_0 and J_1 depend
on the incident fluxes I_0 and I_1 and the background reflection P_g; Eq. (16.43)
gives the relationships for representing the emerging fluxes in terms of the incident
fluxes:

| J_0,0 |   | ρ_0,0  ρ_0,1 | | P_g  0   | | I_0,0 |
| J_1,0 | = | ρ_1,0  ρ_1,1 | | 0    P_g | | I_1,0 | .  (16.43)
Coefficients ρ_p,q represent the overall probability of a photon entering through a
surface element with ink level q and emerging from a surface element with ink
level p. The row sums of the probability coefficients are equal to 1. The expression
for all four light fluxes is

| I_0 |   | 1  0  0      0     | | 1    0   |
| I_1 | = | 0  1  0      0     | | 0    1   | | I_0,0 |
| J_0 |   | 0  0  ρ_0,0  ρ_0,1 | | P_g  0   | | I_1,0 | .  (16.44)
| J_1 |   | 0  0  ρ_1,0  ρ_1,1 | | 0    P_g |
We can substitute Eq. (16.44) into Eq. (16.42) to derive the equation for computing
the emerging fluxes from the incident fluxes. However, the sequences of elements
in the left-hand-side vectors of Eqs. (16.42) and (16.44) do not match, so we
need to rearrange the vectors to get the resulting expression of Eq. (16.45):

| I_0 |   | 1 0 0 0 |                    | 1 0 0 0 |^(−1)
| I_1 | = | 0 0 1 0 | M_SC exp(M_KS W)   | 0 0 1 0 |
| J_0 |   | 0 1 0 0 |                    | 0 1 0 0 |
| J_1 |   | 0 0 0 1 |                    | 0 0 0 1 |

          | 1  0  0      0     | | 1    0   |
          | 0  1  0      0     | | 0    1   | | I_0,0 |
        × | 0  0  ρ_0,0  ρ_0,1 | | P_g  0   | | I_1,0 | .  (16.45)
          | 0  0  ρ_1,0  ρ_1,1 | | 0    P_g |
The bi-level 4 × 4 matrix located in front of matrix M_SC and its inverse are used to
rearrange the row sequences of matrices M_SC and M_KS, respectively, to match the
sequence of the left-hand-side vector. Equation (16.45) gives a 4 × 2 matrix for the
product of the right-hand-side matrices, excluding the last vector [I_0,0, I_1,0]. This
matrix can be split into two 2 × 2 matrices; the first matrix relates the incident fluxes
[I_0, I_1] to [I_0,0, I_1,0], and the second matrix relates the emerging fluxes [J_0, J_1] to
[I_0,0, I_1,0]. Because the reflectance is the ratio of the emerging flux to the incident
flux, we can multiply the second matrix by the inverse of the first matrix
to obtain the relationship between the emerging fluxes and the incident fluxes.
Next, Emmel and Hersch apply the halftone area coverage to compute the
reflectance. Let a_1 be the fraction of area covered by ink; then a_0 = 1 − a_1 is the
fraction of the area without ink, such that the total emerging flux is the sum of each
emerging flux J_0 or J_1, weighted by its area coverage. Similarly, the total incident
flux is the sum of each incident flux I_0 or I_1, weighted by its area coverage. The
resulting reflectance is the ratio of the total emerging flux to the total incident flux,
as given in Eq. (16.46):

P = (a_0 J_0 + a_1 J_1)/(a_0 I_0 + a_1 I_1).  (16.46)
Because the incident light has the same intensity on the inked and noninked areas,
I_0 = I_1 = I, Eq. (16.46) becomes

P = [(1 − a_1)J_0 + a_1 J_1]/I.  (16.47)

The emerging fluxes J_0 and J_1 and the incident flux I in Eq. (16.47) are given by
Eq. (16.45). Equation (16.47) reduces to the Clapper-Yule equation (see Chapter 18)
by setting ρ_0,0 = ρ_1,0 = a_0 = 1 − a_1, ρ_0,1 = ρ_1,1 = a_1, and s_0 = s_1 = κ_0 = 0.
If we set ρ_j,j = 1, ρ_i,j = 0 (i ≠ j), and s_0 = s_1 = κ_0 = f_s = f_r = 0, Eq. (16.47)
reduces to the Murray-Davies equation [7,8]. It is amusing that such a simple four-flux
Kubelka-Munk model has such rich content, providing a unified global theory
for interpreting various color-mixing models. Using this model together with a
high-resolution halftone dot profile and a simple ink-spreading model, Emmel and
Hersch obtained good results on two printers (HP and Epson) with two very different
halftone dots; the average color difference is in the neighborhood of 2 ΔE*_ab
and the maximum is about 5 ΔE*_ab [7].
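The Murray-Davies limit described above can be sketched directly: with no scattering, no surface corrections, and no photon exchange between regions, the ink acts as a pure absorber of transmittance T = exp(−κ_1 W) (an assumption made here for illustration), and the reflectance is an area-weighted sum:

```python
import math

def murray_davies(a1, Pg, kappa1, W):
    """Area-weighted reflectance: bare paper Pg plus ink-covered paper Pg*T^2,
    the Murray-Davies limit of the four-flux model (pure-absorber assumption)."""
    T = math.exp(-kappa1 * W)      # one pass through the nonscattering ink layer
    return (1.0 - a1) * Pg + a1 * Pg * T * T

Pg = 0.85
assert abs(murray_davies(0.0, Pg, 2.0, 0.5) - Pg) < 1e-12                    # no ink
assert abs(murray_davies(1.0, Pg, 2.0, 0.5) - Pg * math.exp(-2.0)) < 1e-12   # solid ink
```

The light crosses the ink twice (down and back up), hence the T squared; the endpoint checks confirm that zero coverage returns the paper and full coverage returns the solid-ink reflectance.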
16.6 Cellular Extension of the Kubelka-Munk Model
Up to now, we have discussed the Kubelka-Munk model and its spectral extension.
In most cases, the parameters are derived from the solid-color patches of the
primary colors. No intermediate levels are used in the model. One form of extension
to the Kubelka-Munk model is to include a few intermediate levels. The addition
of these intermediate samples is equivalent to partitioning the CMY space into
rectangular cells and employing the Kubelka-Munk equation within each cell [9].
Using only the bi-level dot area, either all or none, for a three-colorant system,
we have 2³ = 8 known sample points in the model. For a four-colorant system, we
have 2⁴ = 16 known points. If one intermediate point is added, we have 3⁴ = 81
known points. In principle, the intermediate points can be increased to any number;
however, there is a practical limitation in view of the complexity and storage cost.
Once the number of levels on each primary axis is set, the Kubelka-Munk equation
is used in a smaller subcell, rather than the entire color space. It is not required that
the same cellular division occur along each of the three colorant axes; the cellular
division can lie on a noncubic (but rectangular) lattice [10].
16.7 Applications
There are numerous applications of the Kubelka-Munk theory. The K-M theory
is widely used in the plastic [5], paint [11,12], paper, printing [13], and textile
[14–16] industries. McGinnis used the single-constant K-M model together with the
least-squares technique for determining color dye quantities in printing cotton
textiles [4]. He demonstrated that the least-squares technique is capable of selecting
the original dyes used in formulating the standards and calculating the concentrations
for each dye in order to give good spectral color matching. The theory has also been
used for computer color matching [17], estimation of the color gamut of ink-jet inks
in printing [18–22], ink formulation [22], ink-jet design modeling [23], dye-diffusion
thermal-transfer printer calibration [24], and modeling the paper spread function [25].
16.7.1 Applications to multispectral imaging
Imai and colleagues have applied the single-constant K-M theory to reconstruct
the spectra obtained from a trichromatic digital camera (IBM PRO3000 with
3072 × 4096 pixels and 12 bits per channel) that is equipped with a color-filtration
mechanism to give multiple sets of biased trichromatic signals [26]. They used a
Kodak Wratten light-blue filter #38 for obtaining a set of biased trichromatic signals,
which were coupled with the trichromatic signals without filters, to give a total of
six channels.
Using principal component analysis with basis vectors in the K-M space, they
performed spectrum reconstruction of a ColorChecker and a set of 105 painted
patches. First, the device value g_d (or digital count) from the digital camera was
transformed to g_KM in the K-M space via a simple nonlinear transformation:

g_KM = 1/(2g_d) + g_d/2 − 1.  (16.48)

The simulation showed that this nonlinear transform improved the accuracy of
the spectral estimation. Results indicated that spectral reconstruction in the K-M
space was slightly worse than in the reflectance space. For six-channel data, the
average color differences for the ColorChecker and the painted color set were
0.8 ΔE*_ab and 0.5 ΔE*_ab, respectively, and the corresponding spectrum
root-mean-square errors were 0.039 and 0.022. These results are considered very
good for spectrum reconstruction. In fact, this innovative application will have a
place in printing applications.
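The transform of Eq. (16.48) is algebraically the K-M function of Eq. (16.25) applied to the device value: 1/(2g) + g/2 − 1 = (1 − g)²/(2g). A one-line check:

```python
def to_km(g_d):
    """Eq. (16.48): device value (treated as a reflectance factor) -> K-M space."""
    return 1.0 / (2.0 * g_d) + g_d / 2.0 - 1.0

# The transform is identical to the K-M function (1 - g)^2 / (2g) of Eq. (16.25).
for g in (0.1, 0.25, 0.5, 0.9):
    assert abs(to_km(g) - (1.0 - g) ** 2 / (2.0 * g)) < 1e-12
```

This is why performing principal component analysis on the transformed values amounts to working with basis vectors in K/S space rather than reflectance space.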
References
1. P. Kubelka, New contributions to the optics of intensely light-scattering materials, Part I, J. Opt. Soc. Am. 38, pp. 448-457 (1948).
2. D. B. Judd and G. Wyszecki, Color in Business, Science, and Industry, 3rd Edition, Wiley, New York, Chap. 3, pp. 397-461 (1975).
3. E. Allen, Colorant formulation and shading, Optical Radiation Measurements, Vol. 2, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York, Chap. 7, pp. 305-315 (1980).
4. P. H. McGinnis, Jr., Spectrophotometric color matching with the least squares technique, Color Eng. 5, pp. 22-27 (1967).
5. J. L. Saunderson, Calculation of the color of pigmented plastics, J. Opt. Soc. Am. 32, pp. 727-736 (1942).
6. J. A. C. Yule, Principles of Color Reproduction, Wiley, New York, Chap. 8, p. 205 (1967).
7. P. Emmel and R. D. Hersch, A unified model for color prediction of halftoned prints, J. Imaging Sci. Techn. 44, pp. 351-359 (2000).
8. P. Emmel, Physical methods for color prediction, Digital Color Imaging Handbook, G. Sharma (Ed.), CRC Press, Boca Raton, FL, pp. 173-237 (2003).
9. K. J. Heuberger, Z. M. Jing, and S. Persiev, Color transformation and lookup tables, 1992 TAGA/ISCC Proc., Sewickley, PA, Vol. 2, pp. 863-881.
10. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 85-95 (1996).
11. F. W. Billmeyer, Jr. and R. L. Abrams, Predicting reflectance and color of paint films by Kubelka-Munk analysis II: Performance tests, J. Paint Tech. 45(579), pp. 31-38 (1973).
12. E. L. Cairns, D. A. Holtzen, and D. L. Spooner, Determining absorption and scattering constants for pigments, Color Res. Appl. 1, pp. 174-180 (1976).
13. E. Allen, Calculations for colorant formulations, Adv. Chem. Series 107, pp. 87-94 (1971).
14. R. E. Derby, Colorant formulation and color control in the textile industry, Adv. Chem. Series 107, p. 95 (1971).
15. D. A. Burlone, Formulation of blends of precolored nylon fiber, Color Res. Appl. 8, pp. 114-120 (1983).
16. D. A. Burlone, Theoretical and practical aspects of selected fiber-blend color-formulation functions, Color Res. Appl. 9, pp. 213-219 (1984).
17. H. R. Davidson and H. Hemendinger, Color prediction using the two-constant turbid-media theory, J. Opt. Soc. Am. 56, pp. 1102-1109 (1966).
18. P. Engeldrum and L. Carreira, Determination of the color gamut of dye coated paper layers using Kubelka-Munk theory, Soc. of Photo. Sci. and Eng. Annual Conf., pp. 3-5 (1984).
19. P. G. Engeldrum, Computing color gamuts of ink-jet printing systems, SID Int. Symp. Digest of Tech. Papers, Society for Information Display, San Jose, CA, pp. 385-388 (1985).
20. H. R. Kang, Kubelka-Munk modeling of ink jet ink mixing, J. Imaging Sci. Techn. 17, pp. 76-83 (1991).
21. H. R. Kang, Comparisons of color mixing theories for use in electronic printing, 1st IS&T/SID Color Imaging Conf.: Transforms and Transportability of Color, pp. 78-82 (1993).
22. H. R. Kang, Applications of color mixing models to electronic printing, J. Electron. Imag. 3, pp. 276-287 (1994).
23. K. H. Parton and R. S. Berns, Color modeling of ink-jet ink on paper using Kubelka-Munk theory, Proc. of 7th Int. Congress on Advances in Non-Impact Printing Technologies, IS&T, Springfield, VA, pp. 271-280 (1992).
24. R. S. Berns, Spectral modeling of a dye diffusion thermal transfer printer, J. Electron. Imag. 2, pp. 359-370 (1993).
25. P. G. Engeldrum and B. Pridham, Application of turbid medium theory to paper spread function measurements, Proc. TAGA, Sewickley, PA, pp. 339-352 (1995).
26. F. H. Imai, R. S. Berns, and D.-Y. Tzeng, A comparative analysis of spectral reflectance estimated in various spaces using a trichromatic camera system, J. Imaging Sci. Techn. 44, pp. 280-287 (2000).
Chapter 17
Light-Reflection Model

The Neugebauer equation is perhaps the best-known light-reflection model developed to interpret the light-reflection phenomenon of the halftone printing process. It is based solely on the light-reflection process, with no concern for any absorption or transmission. Thus, it is a rather simple and incomplete description of the light and colorant interactions. However, it has been widely used in printing applications, such as color mixing and color gamut prediction, with various degrees of success.¹ In mixing three subtractive primaries, Neugebauer recognized that there are eight dominant colors, namely, white, cyan, magenta, yellow, red, green, blue, and black, for constituting any color halftone print. A given color is perceived as the integration of these eight Neugebauer dominant colors. The incident light reflected by one of the eight colors is equal to the reflectance of that color multiplied by its area coverage. The total reflectance is the sum of all eight colors weighted by the corresponding area coverage.²,³ Therefore, the Neugebauer equation is based on broadband additive color mixing of the reflected light.
17.1 Three-Primary Neugebauer Equations

A general expression of the three-primary Neugebauer equation is

P_3 = A_w P_w + A_c P_c + A_m P_m + A_y P_y + A_{cm} P_{cm} + A_{cy} P_{cy} + A_{my} P_{my} + A_{cmy} P_{cmy},   (17.1)

where P_3 represents the total reflectance of one of the broadband RGB colors obtained by mixing the three CMY primaries, and P_w, P_c, P_m, P_y, P_{cm}, P_{cy}, P_{my}, and P_{cmy} are the reflectances of paper, cyan, magenta, yellow, cyan-magenta overlap, cyan-yellow overlap, magenta-yellow overlap, and three-primary overlap, measured with a given additive primary light. The parameters A_w, A_c, A_m, A_y, A_{cm}, A_{cy}, A_{my}, and A_{cmy} are the area coverages of paper, cyan, magenta, yellow, cyan-magenta overlap, cyan-yellow overlap, magenta-yellow overlap, and three-primary overlap, respectively. For a complete description of the RGB outputs, we have
P_3(R) = A_w P_w(R) + A_c P_c(R) + A_m P_m(R) + A_y P_y(R) + A_{cm} P_{cm}(R) + A_{cy} P_{cy}(R) + A_{my} P_{my}(R) + A_{cmy} P_{cmy}(R),

P_3(G) = A_w P_w(G) + A_c P_c(G) + A_m P_m(G) + A_y P_y(G) + A_{cm} P_{cm}(G) + A_{cy} P_{cy}(G) + A_{my} P_{my}(G) + A_{cmy} P_{cmy}(G),

P_3(B) = A_w P_w(B) + A_c P_c(B) + A_m P_m(B) + A_y P_y(B) + A_{cm} P_{cm}(B) + A_{cy} P_{cy}(B) + A_{my} P_{my}(B) + A_{cmy} P_{cmy}(B).   (17.2)
Equation (17.2) can be expressed in vector-matrix notation:

P_3 = P_{3N} A_{3N}.   (17.3)

Vector P_3 consists of the three elements of the RGB primary components:

P_3 = [P_3(R)  P_3(G)  P_3(B)]^T.   (17.4)

P_{3N} is a 3 x 8 matrix containing the RGB components of the eight Neugebauer dominant colors,

P_{3N} = [ P_w(R)  P_c(R)  P_m(R)  P_y(R)  P_{cm}(R)  P_{cy}(R)  P_{my}(R)  P_{cmy}(R)
           P_w(G)  P_c(G)  P_m(G)  P_y(G)  P_{cm}(G)  P_{cy}(G)  P_{my}(G)  P_{cmy}(G)
           P_w(B)  P_c(B)  P_m(B)  P_y(B)  P_{cm}(B)  P_{cy}(B)  P_{my}(B)  P_{cmy}(B) ],   (17.5)

and A_{3N} is a vector of the eight corresponding area coverages:

A_{3N} = [A_w  A_c  A_m  A_y  A_{cm}  A_{cy}  A_{my}  A_{cmy}]^T.   (17.6)
17.2 Demichel Dot-Overlap Model

Neugebauer employed Demichel's dot-overlap model for the area of each dominant color.³ The areas are computed from the halftone dot areas of the three primaries a_c, a_m, and a_y for cyan, magenta, and yellow, respectively:³,⁴

A_w = (1 - a_c)(1 - a_m)(1 - a_y),
A_c = a_c (1 - a_m)(1 - a_y),
A_m = a_m (1 - a_c)(1 - a_y),
A_y = a_y (1 - a_c)(1 - a_m),
A_{cm} = a_c a_m (1 - a_y),
A_{cy} = a_c a_y (1 - a_m),
A_{my} = a_m a_y (1 - a_c),
A_{cmy} = a_c a_m a_y.   (17.7)

By expanding each Demichel area coverage in terms of the primary ink coverages, we can express Demichel's dot-overlap model in the vector-space form:

[A_w  A_c  A_m  A_y  A_{cm}  A_{cy}  A_{my}  A_{cmy}]^T

  = [ 1 -1 -1 -1  1  1  1 -1
      0  1  0  0 -1 -1  0  1
      0  0  1  0 -1  0 -1  1
      0  0  0  1  0 -1 -1  1
      0  0  0  0  1  0  0 -1
      0  0  0  0  0  1  0 -1
      0  0  0  0  0  0  1 -1
      0  0  0  0  0  0  0  1 ] [1  a_c  a_m  a_y  a_c a_m  a_c a_y  a_m a_y  a_c a_m a_y]^T,   (17.8a)

or

A_{3N} = Λ_{3N} a_{3N}.   (17.8b)

Matrix Λ_{3N} is an 8 x 8 upper-triangular matrix whose entries are 0, 1, or -1, with all entries below the diagonal equal to zero; vector A_{3N} is given in Eq. (17.6); and vector a_{3N} = [1, a_c, a_m, a_y, a_c a_m, a_c a_y, a_m a_y, a_c a_m a_y]^T. Equation (17.3) becomes

P_3 = P_{3N} Λ_{3N} a_{3N}.   (17.9)
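The Demichel coverages of Eq. (17.7) and the forward prediction of Eq. (17.9) are straightforward to compute. Below is a minimal sketch in Python/NumPy; the primary reflectance values in `P3N` are hypothetical placeholders, not measured data.

```python
import numpy as np

def demichel_areas(ac, am, ay):
    """Demichel dot-overlap coverages of the eight Neugebauer
    dominant colors, Eq. (17.7): order w, c, m, y, cm, cy, my, cmy."""
    return np.array([
        (1 - ac) * (1 - am) * (1 - ay),  # A_w
        ac * (1 - am) * (1 - ay),        # A_c
        am * (1 - ac) * (1 - ay),        # A_m
        ay * (1 - ac) * (1 - am),        # A_y
        ac * am * (1 - ay),              # A_cm
        ac * ay * (1 - am),              # A_cy
        am * ay * (1 - ac),              # A_my
        ac * am * ay,                    # A_cmy
    ])

# Hypothetical 3x8 matrix of broadband RGB reflectances of the eight
# dominant colors (columns: w, c, m, y, cm, cy, my, cmy), Eq. (17.5).
P3N = np.array([
    [1.0, 0.20, 0.75, 0.90, 0.15, 0.18, 0.70, 0.10],  # R row
    [1.0, 0.60, 0.25, 0.85, 0.20, 0.55, 0.22, 0.12],  # G row
    [1.0, 0.65, 0.45, 0.15, 0.35, 0.12, 0.10, 0.08],  # B row
])

A3N = demichel_areas(0.5, 0.3, 0.2)
assert abs(A3N.sum() - 1.0) < 1e-12   # the coverages tile the unit area
P3 = P3N @ A3N                        # predicted RGB reflectances, Eq. (17.3)
```

Expanding `demichel_areas` through the signed triangular matrix of Eq. (17.8a) would give identical results; the product form above is simply more direct.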
17.3 Simplifications

Neugebauer's model provides a general description of halftone color mixing, predicting the resulting red, green, and blue reflectances or tristimulus values from a given set of primary dot areas in the print. In practical applications, one wants the cmy (or cmyk) dot areas for producing the desired RGB or XYZ values. Dot areas of individual inks are obtained by solving the inverted Neugebauer equations. The inversion is not trivial because the Neugebauer equations are nonlinear. Exact analytical solutions to the Neugebauer equations have not yet been found, although numerical solutions by computer have been reported.⁵,⁶ Pollak worked out a partial solution by making assumptions and approximations.⁷⁻⁹ The primary assumption was that the solid densities were additive. Under this assumption, the reflectance of a mixture is equal to the product of the reflectances of its components. Thus,

P_{cmy}(R) = P_c(R) P_m(R) P_y(R),
P_{cmy}(G) = P_c(G) P_m(G) P_y(G),
P_{cmy}(B) = P_c(B) P_m(B) P_y(B).   (17.10)
By substituting Eq. (17.10) into Eq. (17.5) and by taking the reflectance of the white area as 1, such that P_w(R) = P_w(G) = P_w(B) = 1, Eq. (17.5) becomes

P'_{3N} = [ 1  P_c(R)  P_m(R)  P_y(R)  P_c(R)P_m(R)  P_c(R)P_y(R)  P_m(R)P_y(R)  P_c(R)P_m(R)P_y(R)
            1  P_c(G)  P_m(G)  P_y(G)  P_c(G)P_m(G)  P_c(G)P_y(G)  P_m(G)P_y(G)  P_c(G)P_m(G)P_y(G)
            1  P_c(B)  P_m(B)  P_y(B)  P_c(B)P_m(B)  P_c(B)P_y(B)  P_m(B)P_y(B)  P_c(B)P_m(B)P_y(B) ],   (17.11)

and the Neugebauer equation of (17.9) becomes

P_3 = P'_{3N} Λ_{3N} a_{3N}.   (17.12)
Pollak then made the further approximations that magenta ink is completely transparent to red light and that yellow ink is transparent to red and green light, such that P_m(R) = P_y(R) = P_y(G) = 1. In the real world, the yellow approximation is almost true because many yellow dyes exhibit a near-ideal spectrum with a sharp transition around 500 nm. The magenta approximation, however, is questionable because most magenta colorants have substantial absorption in the red and blue regions. Nonetheless, these approximations simplify the Neugebauer equations and make them analytically solvable. Equation (17.11) becomes

P''_{3N} = [ 1  P_c(R)  1       1       P_c(R)        P_c(R)        1             P_c(R)
             1  P_c(G)  P_m(G)  1       P_c(G)P_m(G)  P_c(G)        P_m(G)        P_c(G)P_m(G)
             1  P_c(B)  P_m(B)  P_y(B)  P_c(B)P_m(B)  P_c(B)P_y(B)  P_m(B)P_y(B)  P_c(B)P_m(B)P_y(B) ],   (17.13)

and

P_3 = P''_{3N} Λ_{3N} a_{3N}.   (17.14)

The matrices and vector on the right-hand side of Eq. (17.14) are multiplied out to give the following simplified expressions:

P_3(R) = [1 - a_c + a_c P_c(R)],   (17.15a)

P_3(G) = [1 - a_c + a_c P_c(G)][1 - a_m + a_m P_m(G)],   (17.15b)

P_3(B) = [1 - a_c + a_c P_c(B)][1 - a_m + a_m P_m(B)][1 - a_y + a_y P_y(B)].   (17.15c)

Equation (17.15) can be solved for a_c, a_m, and a_y to give a rather involved masking procedure.¹⁰
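Under Pollak's approximations, Eq. (17.15) is triangular in the unknowns: (17.15a) yields a_c from the red channel alone, which then isolates a_m in (17.15b), and so on. A minimal sketch of this sequential inversion (the solid-ink reflectance triples below are hypothetical values, not measurements):

```python
def invert_pollak(P3R, P3G, P3B, Pc, Pm, Py):
    """Solve Eq. (17.15) for the dot areas (ac, am, ay).
    Pc, Pm, Py are (R, G, B) reflectance triples of the solid inks."""
    ac = (1 - P3R) / (1 - Pc[0])                             # from (17.15a)
    am = (1 - P3G / (1 - ac + ac * Pc[1])) / (1 - Pm[1])     # from (17.15b)
    ay = (1 - P3B / ((1 - ac + ac * Pc[2])
                     * (1 - am + am * Pm[2]))) / (1 - Py[2])  # from (17.15c)
    return ac, am, ay

# Round-trip check: build P3 from known dot areas, then recover them.
Pc, Pm, Py = (0.20, 0.60, 0.65), (0.75, 0.25, 0.45), (0.90, 0.85, 0.15)
ac, am, ay = 0.4, 0.3, 0.6
P3R = 1 - ac + ac * Pc[0]
P3G = (1 - ac + ac * Pc[1]) * (1 - am + am * Pm[1])
P3B = ((1 - ac + ac * Pc[2]) * (1 - am + am * Pm[2])
       * (1 - ay + ay * Py[2]))
print(invert_pollak(P3R, P3G, P3B, Pc, Pm, Py))  # recovers about (0.4, 0.3, 0.6)
```

This is only the inversion of the simplified model; the full masking procedure of Ref. 10 corrects for the ink absorptions that the Pollak approximations neglect.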
17.4 Four-Primary Neugebauer Equation

The three-primary expression of Eq. (17.1) can readily be expanded to four primaries by employing the four-primary fractional-area expressions given by Hardy and Wurzburg:¹¹

P_4 = A'_w P_w + A'_c P_c + A'_m P_m + A'_y P_y + A'_{cm} P_{cm} + A'_{cy} P_{cy} + A'_{my} P_{my} + A'_{cmy} P_{cmy} + A'_k P_k + A'_{ck} P_{ck} + A'_{mk} P_{mk} + A'_{yk} P_{yk} + A'_{cmk} P_{cmk} + A'_{cyk} P_{cyk} + A'_{myk} P_{myk} + A'_{cmyk} P_{cmyk}.   (17.16)

Like the three-primary Neugebauer equations of (17.3), Eq. (17.16) can be expressed in vector-matrix notation:

P_4 = P_{4N} A_{4N}.   (17.17)

Vector P_4 consists of the three elements of the broadband RGB components from mixing the four CMYK primaries:

P_4 = [P_4(R)  P_4(G)  P_4(B)]^T.   (17.18)
P_{4N} is a 3 x 16 matrix containing the RGB components of the 16 Neugebauer dominant colors (columns 1-8 are listed first, followed by columns 9-16):

P_{4N} =
[ P_w(R)  P_c(R)  P_m(R)  P_y(R)  P_k(R)  P_{cm}(R)  P_{cy}(R)  P_{ck}(R)
  P_w(G)  P_c(G)  P_m(G)  P_y(G)  P_k(G)  P_{cm}(G)  P_{cy}(G)  P_{ck}(G)
  P_w(B)  P_c(B)  P_m(B)  P_y(B)  P_k(B)  P_{cm}(B)  P_{cy}(B)  P_{ck}(B)

  P_{my}(R)  P_{mk}(R)  P_{yk}(R)  P_{cmy}(R)  P_{cmk}(R)  P_{cyk}(R)  P_{myk}(R)  P_{cmyk}(R)
  P_{my}(G)  P_{mk}(G)  P_{yk}(G)  P_{cmy}(G)  P_{cmk}(G)  P_{cyk}(G)  P_{myk}(G)  P_{cmyk}(G)
  P_{my}(B)  P_{mk}(B)  P_{yk}(B)  P_{cmy}(B)  P_{cmk}(B)  P_{cyk}(B)  P_{myk}(B)  P_{cmyk}(B) ].   (17.19)

A_{4N} is a vector of the sixteen area coverages:

A_{4N} = [A'_w  A'_c  A'_m  A'_y  A'_k  A'_{cm}  A'_{cy}  A'_{ck}  A'_{my}  A'_{mk}  A'_{yk}  A'_{cmy}  A'_{cmk}  A'_{cyk}  A'_{myk}  A'_{cmyk}]^T,   (17.20)
where

A'_w = (1 - a_c)(1 - a_m)(1 - a_y)(1 - a_k)
     = 1 - a_c - a_m - a_y - a_k + a_c a_m + a_c a_y + a_c a_k + a_m a_y + a_m a_k + a_y a_k
       - a_c a_m a_y - a_c a_m a_k - a_c a_y a_k - a_m a_y a_k + a_c a_m a_y a_k,

A'_c = a_c (1 - a_m)(1 - a_y)(1 - a_k)
     = a_c - a_c a_m - a_c a_y - a_c a_k + a_c a_m a_y + a_c a_m a_k + a_c a_y a_k - a_c a_m a_y a_k,

A'_m = a_m (1 - a_c)(1 - a_y)(1 - a_k)
     = a_m - a_c a_m - a_m a_y - a_m a_k + a_c a_m a_y + a_c a_m a_k + a_m a_y a_k - a_c a_m a_y a_k,

A'_y = a_y (1 - a_c)(1 - a_m)(1 - a_k)
     = a_y - a_c a_y - a_m a_y - a_y a_k + a_c a_m a_y + a_c a_y a_k + a_m a_y a_k - a_c a_m a_y a_k,

A'_k = a_k (1 - a_c)(1 - a_m)(1 - a_y)
     = a_k - a_c a_k - a_m a_k - a_y a_k + a_c a_m a_k + a_c a_y a_k + a_m a_y a_k - a_c a_m a_y a_k,

A'_{cm} = a_c a_m (1 - a_y)(1 - a_k) = a_c a_m - a_c a_m a_y - a_c a_m a_k + a_c a_m a_y a_k,

A'_{cy} = a_c a_y (1 - a_m)(1 - a_k) = a_c a_y - a_c a_m a_y - a_c a_y a_k + a_c a_m a_y a_k,

A'_{ck} = a_c a_k (1 - a_m)(1 - a_y) = a_c a_k - a_c a_m a_k - a_c a_y a_k + a_c a_m a_y a_k,

A'_{my} = a_m a_y (1 - a_c)(1 - a_k) = a_m a_y - a_c a_m a_y - a_m a_y a_k + a_c a_m a_y a_k,

A'_{mk} = a_m a_k (1 - a_c)(1 - a_y) = a_m a_k - a_c a_m a_k - a_m a_y a_k + a_c a_m a_y a_k,

A'_{yk} = a_y a_k (1 - a_c)(1 - a_m) = a_y a_k - a_c a_y a_k - a_m a_y a_k + a_c a_m a_y a_k,

A'_{cmy} = a_c a_m a_y (1 - a_k) = a_c a_m a_y - a_c a_m a_y a_k,

A'_{cmk} = a_c a_m a_k (1 - a_y) = a_c a_m a_k - a_c a_m a_y a_k,

A'_{cyk} = a_c a_y a_k (1 - a_m) = a_c a_y a_k - a_c a_m a_y a_k,

A'_{myk} = a_m a_y a_k (1 - a_c) = a_m a_y a_k - a_c a_m a_y a_k,

A'_{cmyk} = a_c a_m a_y a_k.   (17.21)
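The sixteen expansions in Eq. (17.21) all follow one pattern: each coverage is the product of a_i for the inks present and (1 - a_j) for the inks absent. A generic sketch that generates them in the w, c, m, y, k, cm, ... order of Eq. (17.20):

```python
from itertools import combinations

INKS = "cmyk"

def demichel_coverages(a):
    """Four-primary Demichel coverages, Eq. (17.21).
    `a` maps each ink letter to its dot area; returns coverages keyed by
    the ink subset ('w' for bare paper), in the order of Eq. (17.20)."""
    cov = {}
    for r in range(len(INKS) + 1):
        for subset in combinations(INKS, r):
            area = 1.0
            for ink in INKS:
                # multiply a_i for inks in the overlap, (1 - a_j) otherwise
                area *= a[ink] if ink in subset else (1.0 - a[ink])
            cov["".join(subset) or "w"] = area
    return cov

cov = demichel_coverages({"c": 0.5, "m": 0.4, "y": 0.3, "k": 0.2})
assert abs(sum(cov.values()) - 1.0) < 1e-12       # coverages tile the unit area
assert abs(cov["cm"] - 0.5 * 0.4 * 0.7 * 0.8) < 1e-12  # a_c a_m (1-a_y)(1-a_k)
```

The same loop with `INKS = "cmy"` reproduces the eight three-primary coverages of Eq. (17.7), and with six ink letters the 64 coverages used later in Section 17.6.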
Again, the four-color area coverages can be expressed in matrix form:

[A'_w  A'_c  A'_m  A'_y  A'_k  A'_{cm}  A'_{cy}  A'_{ck}  A'_{my}  A'_{mk}  A'_{yk}  A'_{cmy}  A'_{cmk}  A'_{cyk}  A'_{myk}  A'_{cmyk}]^T

  = [ 1 -1 -1 -1 -1  1  1  1  1  1  1 -1 -1 -1 -1  1
      0  1  0  0  0 -1 -1 -1  0  0  0  1  1  1  0 -1
      0  0  1  0  0 -1  0  0 -1 -1  0  1  1  0  1 -1
      0  0  0  1  0  0 -1  0 -1  0 -1  1  0  1  1 -1
      0  0  0  0  1  0  0 -1  0 -1 -1  0  1  1  1 -1
      0  0  0  0  0  1  0  0  0  0  0 -1 -1  0  0  1
      0  0  0  0  0  0  1  0  0  0  0 -1  0 -1  0  1
      0  0  0  0  0  0  0  1  0  0  0  0 -1 -1  0  1
      0  0  0  0  0  0  0  0  1  0  0 -1  0  0 -1  1
      0  0  0  0  0  0  0  0  0  1  0  0 -1  0 -1  1
      0  0  0  0  0  0  0  0  0  0  1  0  0 -1 -1  1
      0  0  0  0  0  0  0  0  0  0  0  1  0  0  0 -1
      0  0  0  0  0  0  0  0  0  0  0  0  1  0  0 -1
      0  0  0  0  0  0  0  0  0  0  0  0  0  1  0 -1
      0  0  0  0  0  0  0  0  0  0  0  0  0  0  1 -1
      0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  1 ]

    x [1  a_c  a_m  a_y  a_k  a_c a_m  a_c a_y  a_c a_k  a_m a_y  a_m a_k  a_y a_k  a_c a_m a_y  a_c a_m a_k  a_c a_y a_k  a_m a_y a_k  a_c a_m a_y a_k]^T,   (17.22a)

or

A_{4N} = Λ_{4N} a_{4N},   (17.22b)

where Λ_{4N} is a 16 x 16 upper-triangular matrix whose entries are 0, 1, or -1, with all entries below the diagonal equal to zero, and a_{4N} is the vector of 16 elements given in the last column of Eq. (17.22a). Substituting Eq. (17.22b) into Eq. (17.17), we have

P_4 = P_{4N} Λ_{4N} a_{4N}.   (17.23)
For four-primary systems, the analytic inversion is even more formidable because there are four unknowns (a_c, a_m, a_y, and a_k) and only three measurable quantities (either RGB or XYZ). The practical approach is to put a constraint on the black ink, then seek solutions in cmy that combine with the black to produce a target color. Because of this approach, the Neugebauer equations are closely related to gray-component replacement.¹² There have been other attempts to modify the Neugebauer equation, such as the spectral extension¹³ and the use of a geometric dot-overlap model¹⁴,¹⁵ instead of Demichel's probability overlap model.
17.5 Cellular Extension of the Neugebauer Equations

The Neugebauer equations use a set of eight dominant colors, obtained from all combinations of 0% and 100% primary colorants, to cover the whole color space. For a four-colorant system, we have 2⁴ = 16 known sample points in the model. To improve the accuracy of the model, we can add intermediate levels to increase the number of sample points. If one intermediate point, say 50%, is added, we will have 3⁴ = 81 known sample points. The addition of these intermediate samples is equivalent to partitioning the cmy space into rectangular cells and employing the Neugebauer equations within each smaller cell instead of over the entire CMY color space. Hence, this model is referred to as the cellular Neugebauer model.¹⁶ It is not required that the same cellular division occur along each of the three colorant axes; the cellular division can lie on a noncubic (but rectangular) lattice.
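In a cellular model, the dot areas are first located within their cell, rescaled to local coordinates in that cell, and then fed to the ordinary Neugebauer sum using the cell's corner colors as the dominant colors. A sketch of the cell lookup for one colorant axis (the node levels are assumed to be sorted and to include 0 and 1; this is an illustrative helper, not the formulation of Ref. 16):

```python
import bisect

def locate(levels, a):
    """Find the cell [lo, hi] containing dot area `a` on one colorant
    axis, and rescale `a` to a local coordinate in [0, 1]."""
    i = bisect.bisect_right(levels, a)
    i = min(max(i, 1), len(levels) - 1)   # clamp so a=0 and a=1 fall in end cells
    lo, hi = levels[i - 1], levels[i]
    return lo, hi, (a - lo) / (hi - lo)

# With nodes at 0%, 50%, and 100%, a 70% cyan dot lies in the [0.5, 1.0]
# cell at local coordinate 0.4; the Neugebauer equations are then applied
# with the eight (or sixteen) measured corner colors of that cell.
print(locate([0.0, 0.5, 1.0], 0.7))  # about (0.5, 1.0, 0.4)
```

Repeating the lookup per colorant axis selects one rectangular cell of the lattice, and the local coordinates play the role of the dot areas a_c, a_m, a_y within that cell.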
17.6 Spectral Extension of the Neugebauer Equations

Unlike ideal colorants, real color inks do not have constant reflectance values across two-thirds of the whole visible spectrum; therefore, it has been argued that broadband reflectance measurements are inappropriate for a Neugebauer model of a color printer. If one replaces the broadband light of each Neugebauer dominant color with its narrow spectral reflectance P(λ), centered at a wavelength λ, one obtains the spectral Neugebauer equation.¹⁵,¹⁷,¹⁸ An example of the three-primary spectral Neugebauer equations is given in Eq. (17.24); a similar equation can be obtained for the four-primary case by converting Eq. (17.16) to a function of wavelength.

P(λ) = A_w P_w(λ) + A_c P_c(λ) + A_m P_m(λ) + A_y P_y(λ) + A_{cm} P_{cm}(λ) + A_{cy} P_{cy}(λ) + A_{my} P_{my}(λ) + A_{cmy} P_{cmy}(λ).   (17.24)
When expressed in the vector-space form, we have

[ p(λ_1) ]   [ p_w(λ_1)  p_c(λ_1)  p_m(λ_1)  p_y(λ_1)  p_{cm}(λ_1)  p_{cy}(λ_1)  p_{my}(λ_1)  p_{cmy}(λ_1) ] [ A_w     ]
[ p(λ_2) ]   [ p_w(λ_2)  p_c(λ_2)  p_m(λ_2)  p_y(λ_2)  p_{cm}(λ_2)  p_{cy}(λ_2)  p_{my}(λ_2)  p_{cmy}(λ_2) ] [ A_c     ]
[ p(λ_3) ] = [ p_w(λ_3)  p_c(λ_3)  p_m(λ_3)  p_y(λ_3)  p_{cm}(λ_3)  p_{cy}(λ_3)  p_{my}(λ_3)  p_{cmy}(λ_3) ] [ A_m     ]
[   ...  ]   [   ...       ...       ...       ...        ...          ...          ...           ...       ] [   ...   ]
[ p(λ_n) ]   [ p_w(λ_n)  p_c(λ_n)  p_m(λ_n)  p_y(λ_n)  p_{cm}(λ_n)  p_{cy}(λ_n)  p_{my}(λ_n)  p_{cmy}(λ_n) ] [ A_{cmy} ],   (17.25a)

or

P = P_{3λ} A_{3N} = P_{3λ} Λ_{3N} a_{3N}.   (17.25b)
Vector P has n elements, where n is the number of sampled points in the visible range. The spectral extension of the Neugebauer equation is no longer restricted to the three broadband RGB ranges. For the three-primary Neugebauer equation, matrix P_{3λ} has a size of n x 8, with eight columns representing the sampled spectra of the eight dominant colors; vector A_{3N} has a size of 8 x 1; matrix Λ_{3N} has a size of 8 x 8; and vector a_{3N} has 8 elements, the same as those given in Eq. (17.8a). The spectral Neugebauer equation (17.25) can be solved if matrix P_{3λ} is invertible. The criterion for the matrix inversion is that P_{3λ} is not singular, which means that the eight column vectors must be independent from one another. The component vectors of the Neugebauer dominant colors contain spectra of the two-color and three-color mixtures, which are closely related to the spectra of the primaries. Thus, their independence is in question. The additivity failure discussed in Section 15.5 may give us some comfort that the component colors do not add up to the mixture. If additivity holds, then

P_{cmy}(λ) = P_c(λ) P_m(λ) P_y(λ).   (17.26)

Figure 17.1 shows the comparison of the measured blue spectrum (made from the overlay of the cyan and magenta colorants) with the product of the cyan and magenta spectra (the computed blue spectrum). Similar comparisons for cyan-yellow, magenta-yellow, and cyan-magenta-yellow mixtures are given in Figs. 17.2, 17.3, and 17.4, respectively. As shown in these figures, the computed spectrum is smaller than the measured spectrum, but the shapes, in general, are similar.

Figure 17.1 Comparison of the measured and computed blue spectra.
Figure 17.2 Comparison of the measured and computed green spectra.

If Eq. (17.26) holds, the plots of the ratios r_{cm}, r_{cy}, r_{my}, and r_{cmy} between the measured and calculated spectra, defined in Eq. (17.27), should give horizontal lines as a function of wavelength:

r_{cm} = P_{cm}(λ)/[P_c(λ) P_m(λ)],   (17.27a)

r_{cy} = P_{cy}(λ)/[P_c(λ) P_y(λ)],   (17.27b)

r_{my} = P_{my}(λ)/[P_m(λ) P_y(λ)],   (17.27c)

r_{cmy} = P_{cmy}(λ)/[P_c(λ) P_m(λ) P_y(λ)].   (17.27d)
Figure 17.5 gives the plots of the ratios defined in Eq. (17.27); they are not constant, indicating that there is no correlation between the measured and calculated spectra. These results imply that the column vectors of the Neugebauer dominant colors are indeed independent. Therefore, Eq. (17.25) can be inverted by using the pseudo-inverse transform to obtain Demichel's area coverage A_{3N} and the primary color coverage a_{3N}, as given in Eqs. (17.28) and (17.29), respectively:

A_{3N} = (P_{3λ}^T P_{3λ})^{-1} P_{3λ}^T P,   (17.28)

a_{3N} = [(P_{3λ} Λ_{3N})^T (P_{3λ} Λ_{3N})]^{-1} (P_{3λ} Λ_{3N})^T P.   (17.29)

Figure 17.3 Comparison of the measured and computed red spectra.
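Numerically, the pseudo-inverse of Eq. (17.28) is just a linear least-squares fit of the measured spectrum to the eight dominant-color spectra. A sketch using NumPy; the spectra here are random stand-ins, and real use requires measured dominant-color spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 31                                       # e.g., 400-700 nm at 10-nm steps
P3l = rng.uniform(0.05, 1.0, size=(n, 8))    # stand-in for matrix P_3lambda

# A valid set of Demichel coverages (the eight entries sum to 1).
A_true = np.array([0.168, 0.072, 0.168, 0.072, 0.112, 0.048, 0.252, 0.108])
P = P3l @ A_true                             # synthetic mixture spectrum, Eq. (17.25b)

# Eq. (17.28): A_3N = (P3l^T P3l)^(-1) P3l^T P, solved stably via lstsq.
A_est, *_ = np.linalg.lstsq(P3l, P, rcond=None)
assert np.allclose(A_est, A_true)
```

With noisy measured spectra the fit no longer reproduces the coverages exactly, which is the sensitivity to experimental uncertainty noted later in this section.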
The spectral extension can also be applied to the four-primary Neugebauer equations:

P = P_{4λ} A_{4N} = P_{4λ} Λ_{4N} a_{4N},   (17.30)

A_{4N} = (P_{4λ}^T P_{4λ})^{-1} P_{4λ}^T P,   (17.31)

a_{4N} = [(P_{4λ} Λ_{4N})^T (P_{4λ} Λ_{4N})]^{-1} (P_{4λ} Λ_{4N})^T P.   (17.32)

Matrix P_{4λ} has a size of n x 16, with 16 independent columns representing the sampled spectra of the 16 dominant colors; vector A_{4N} has a size of 16 x 1; matrix Λ_{4N} has a size of 16 x 16; and vector a_{4N} has 16 elements, the same as those given in Eq. (17.22a).
The spectral extension, having n data points, provides a solution to the inversion of the Neugebauer equations. It works for three- and four-primary mixing, provided that the number of elements in A_{3N} and A_{4N} is less than or equal to the number of samples n. This solution to the Neugebauer equation can be viewed as a variation of spectrum reconstruction: the spectra of the Neugebauer dominant colors serve as the basis vectors (or principal components), whereas the area coverage, A or a, is the weighting factor. Preliminary studies show some promising results; however, they are sensitive to experimental uncertainty. The problem is the lack of good experimental data from a stable printing device to verify this approach. Similarly, the spectral extension can also be applied to Hi-Fi color printing by considering each Hi-Fi color to be a principal component.

Figure 17.4 Comparison of the measured and computed black spectra.
Moreover, Eq. (17.25) can also be written as

[p(λ_1)  p(λ_2)  p(λ_3)  ...  p(λ_n)]

  = [A_w  A_c  A_m  A_y  A_{cm}  A_{cy}  A_{my}  A_{cmy}]

    [ p_w(λ_1)     p_w(λ_2)     p_w(λ_3)     ...  p_w(λ_n)
      p_c(λ_1)     p_c(λ_2)     p_c(λ_3)     ...  p_c(λ_n)
      p_m(λ_1)     p_m(λ_2)     p_m(λ_3)     ...  p_m(λ_n)
      p_y(λ_1)     p_y(λ_2)     p_y(λ_3)     ...  p_y(λ_n)
      p_{cm}(λ_1)  p_{cm}(λ_2)  p_{cm}(λ_3)  ...  p_{cm}(λ_n)
      p_{cy}(λ_1)  p_{cy}(λ_2)  p_{cy}(λ_3)  ...  p_{cy}(λ_n)
      p_{my}(λ_1)  p_{my}(λ_2)  p_{my}(λ_3)  ...  p_{my}(λ_n)
      p_{cmy}(λ_1) p_{cmy}(λ_2) p_{cmy}(λ_3) ...  p_{cmy}(λ_n) ],   (17.33a)

or

P^T = A_{3N}^T P_{3N}^T.   (17.33b)
Figure 17.5 Plots of the ratios of measured and calculated spectra.
Equation (17.33) can be solved to obtain the area coverage A_{3N} by the pseudo-inverse:

A_{3N}^T = P^T P_{3N} (P_{3N}^T P_{3N})^{-1}.   (17.34)

For the three-primary Neugebauer equations, matrix P_{3N} (here denoting the n x 8 matrix whose columns are the sampled spectra of the eight dominant colors) is n x 8, and the inverted product (P_{3N}^T P_{3N})^{-1} is 8 x 8. The result is an 8-element vector for A_{3N}. For the four-primary Neugebauer equations, we have

P^T = A_{4N}^T P_{4N}^T,   (17.35)

A_{4N}^T = P^T P_{4N} (P_{4N}^T P_{4N})^{-1}.   (17.36)

Matrix P_{4N} is n x 16 and the inverted product (P_{4N}^T P_{4N})^{-1} is 16 x 16. This gives a vector of 16 elements for A_{4N}. Chen and colleagues have applied this approach on a much grander scale to the Cellular Yule-Nielsen Spectral Neugebauer (CYNSN) model, using a six-color (cyan, magenta, yellow, black, green, and orange) Epson Pro 5500 ink-jet printer with 64 dominant colors:¹⁹

A_{6N}^T = P^T P_{6N} (P_{6N}^T P_{6N})^{-1}.   (17.37)
First, the reflectances are modified by the Yule-Nielsen value 1/n (see Chapter 18). Second, the cellular partition expands the vector A_{6N} of 64 elements to a matrix of 64 x m, where m is the number of color patches. For six-color printing, the number of dominant colors is 64, which is the sum of the coefficients of the (x + y)⁶ expansion (the first coefficient 1 indicates no color, the second coefficient 6 indicates the six individual colors, the third coefficient 15 indicates the two-color mixtures, the fourth coefficient 20 indicates the three-color mixtures, the fifth coefficient 15 indicates the four-color mixtures, the sixth coefficient 6 indicates the five-color mixtures, and the last coefficient 1 indicates the six-color mixture). The matrix P_{6N} is n x 64 and P is n x m. With the optimized Yule-Nielsen value, they obtained excellent results in predicting 600 random colors.
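The Yule-Nielsen modification enters the spectral Neugebauer fit simply by raising all reflectances to the power 1/n before the linear mixing step, and raising the predicted spectrum back to the power n afterward. A sketch of this prediction step (n = 1.8 is an arbitrary illustrative value, not the optimized value from Ref. 19, and the spectra are random stand-ins):

```python
import numpy as np

def ynsn_predict(P_prim, A, n=1.8):
    """Yule-Nielsen spectral Neugebauer prediction: mix the 1/n-power
    reflectances of the dominant colors linearly, then undo the power."""
    return (P_prim ** (1.0 / n) @ A) ** n

rng = np.random.default_rng(1)
P_prim = rng.uniform(0.05, 1.0, size=(31, 8))  # stand-in dominant-color spectra
A = np.full(8, 1 / 8)                          # equal coverages, summing to 1
P_hat = ynsn_predict(P_prim, A)
assert P_hat.shape == (31,) and np.all((P_hat > 0) & (P_hat <= 1))
```

Because the power transform is nonlinear, the least-squares inversion of Eqs. (17.28)-(17.37) is performed on the 1/n-power reflectances rather than on the raw spectra.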
Equations (17.34), (17.36), and (17.37) can be extended to find the quantities of the primary inks by substituting into Eqs. (17.8b), (17.22b), and (17.38) for the three-primary, four-primary, or six-primary Neugebauer equations, respectively:

A_{6N} = Λ_{6N} a_{6N}.   (17.38)
We obtain the following inverted Neugebauer equations:

a_{3N}^T = P^T (P_{3N} Λ_{3N}) [Λ_{3N}^T P_{3N}^T P_{3N} Λ_{3N}]^{-1},   (17.39)

a_{4N}^T = P^T (P_{4N} Λ_{4N}) [Λ_{4N}^T P_{4N}^T P_{4N} Λ_{4N}]^{-1},   (17.40)

a_{6N}^T = P^T (P_{6N} Λ_{6N}) [Λ_{6N}^T P_{6N}^T P_{6N} Λ_{6N}]^{-1}.   (17.41)
For the three-primary Neugebauer equations, matrix Λ_{3N} is 8 x 8 and the product (P_{3N} Λ_{3N}) is n x 8. The inverted matrix (Λ_{3N}^T P_{3N}^T P_{3N} Λ_{3N})^{-1} is 8 x 8. This gives an eight-element vector for a_{3N}, but only the second, third, and fourth elements contain the quantities of the primaries cyan, magenta, and yellow, respectively. For the four-primary Neugebauer equations, matrix Λ_{4N} is 16 x 16 and the inverted matrix (Λ_{4N}^T P_{4N}^T P_{4N} Λ_{4N})^{-1} is 16 x 16. This gives a 16-element vector for a_{4N}, but only the second, third, fourth, and fifth elements contain the quantities of the primaries cyan, magenta, yellow, and black, respectively. Similarly, only the second through seventh elements of the 64-element vector a_{6N} give the amounts of the cyan, magenta, yellow, black, green, and orange inks, respectively. The six-color system is, in fact, Hi-Fi printing, where the matrix P_{6N} contains the spectra of all Hi-Fi inks, one ink spectrum per column. Given the spectrum of the color P to be reproduced, we can determine the vector a_{6N}, which contains the amounts of the Hi-Fi inks needed to match the input spectrum. This method is applicable to any Hi-Fi printing system with any number of Hi-Fi inks in any color combination, as long as the number of spectral samples is greater than the number of inks.
References
1. Neugebauer Memorial Seminar on Color Reproduction, Proc. SPIE 1184 (1989).
2. H. E. J. Neugebauer, Die theoretischen Grundlagen des Mehrfarbendruckes, Z. wiss. Photogr. 36, pp. 73-89 (1937).
3. J. A. C. Yule, Principles of Color Reproduction, Wiley, New York, Chap. 10, p. 255 (1967).
4. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, p. 30 (1997).
5. I. Pobboravsky and M. Pearson, Computation of dot areas required to match a colorimetrically specified color using the modified Neugebauer equations, Proc. TAGA, Sewickley, PA, Vol. 24, pp. 65-77 (1972).
6. R. Holub and W. Kearsley, Color to colorant conversions in a colorimetric separation system, Proc. SPIE 1184, pp. 24-35 (1989).
7. F. Pollak, The relationship between the densities and dot size of halftone multicolor images, J. Photogr. Sci. 3, pp. 112-116 (1955).
8. F. Pollak, Masking for halftone, J. Photogr. Sci. 3, pp. 180-188 (1955).
9. F. Pollak, New thoughts on halftone color masking, Penrose Annu. 50, pp. 106-110 (1956).
10. J. A. C. Yule, Principles of Color Reproduction, Wiley, New York, pp. 371-374 (1967).
11. A. C. Hardy and F. L. Wurzburg, Color correction in color printing, J. Opt. Soc. Am. 38, pp. 300-307 (1948).
12. C. Nakamura and K. Sayanagi, Gray component replacement by the Neugebauer equations, Proc. SPIE 1184, pp. 50-63 (1989).
13. H. R. Kang, Applications of color mixing models to electronic printing, J. Electron. Imag. 3, pp. 276-287 (1994).
14. G. L. Rogers, Neugebauer revisited: Random dots in halftone screening, Color Res. Appl. 23, pp. 104-113 (1998).
15. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, Chap. 7, pp. 113-129 (1999).
16. K. J. Heuberger, Z. M. Jing, and S. Persiev, Color transformation and lookup tables, Proc. TAGA/ISCC, Sewickley, PA, pp. 863-881 (1992).
17. J. A. Stephen Viggiano, The color of halftone tints, TAGA Proc., Sewickley, PA, pp. 647-661 (1985).
18. J. A. Stephen Viggiano, Modeling the color of multicolored halftone tints, TAGA Proc., Sewickley, PA, pp. 44-62 (1990).
19. Y. Chen, R. S. Berns, and L. A. Taplin, Six color printer characterization using an optimized cellular Yule-Nielsen spectral Neugebauer model, J. Imaging Sci. Techn. 48, pp. 519-528 (2004).
Chapter 18
Halftone Printing Models

Unlike the Beer-Lambert law, which is based solely on the light-absorption process, and the Neugebauer equations, which are based solely on the light-reflection process, several models developed for halftone printing are more sophisticated because they take into consideration absorption, reflection, and transmission, as well as scattering. This chapter presents several halftone-printing models with various degrees of complexity and sophistication. Perhaps the simplest is the Murray-Davies model, which considers merely halftone dot absorptions and direct reflections. Yule and Nielsen developed a detailed model that takes light penetration and scattering into consideration. Clapper and Yule developed a rather complete model to account for the light reflection, absorption, scattering, and transmission of the halftone process. The most comprehensive model is a computer simulation model developed by Kruse and Wedin for the purpose of describing the dot-gain phenomenon; it consists of physical and optical dot-gain models that consider nearly all possible light paths and interactions in order to give a thorough and accurate account of the halftone process.
18.1 Murray-Davies Equation

Unlike the Neugebauer equations, which take reflectance as is, the Murray-Davies equation derives the reflectance via the absorption of halftone dots and the reflection of the substrate. Figure 18.1 depicts the Murray-Davies model of halftone printing. In a unit area, if the solid-ink reflectance is P_s, then the absorption by halftone dots is (1 - P_s), weighted by the dot area coverage a. The reflectance P of a unit halftone area is the unit white reflectance minus the dot absorptance.¹ If we take the unit white reflectance as a total reflection with a value of 1, we obtain Eq. (18.1) for the Murray-Davies equation:

P = 1 - a(1 - P_s).   (18.1)

If we use the reflectance of the substrate P_w as the unit white reflectance, we obtain Eq. (18.2):

P = P_w - a(P_w - P_s).   (18.2)
Figure 18.1 Murray-Davies model of the light and halftone dot interaction.²

Equation (18.1) is usually expressed in the density domain by using the logarithmic relationship between density and reflectance:

D = -log P.   (18.3)

Using the relationship of Eq. (18.3), the density of a solid-ink coverage is

D_s = -log P_s,   (18.4)

or

P_s = 10^{-D_s}.   (18.5)

Substituting Eq. (18.5) into Eq. (18.1), we derive an expression for the Murray-Davies equation:

P = 1 - a(1 - 10^{-D_s}).   (18.6)

By converting to density, we obtain Eq. (18.7):

D = -log[1 - a(1 - 10^{-D_s})].   (18.7)

Rearranging Eq. (18.7), we obtain yet another expression for the Murray-Davies equation:

a = (1 - 10^{-D})/(1 - 10^{-D_s}).   (18.8)

Equation (18.8) is a popular form of the Murray-Davies equation; it is often used to determine the area coverage of halftone prints by measuring the reflectance of halftone step wedges with respect to the solid-ink coverage.
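Equation (18.8) turns a pair of density readings into an effective dot area. A minimal sketch (the sample density values are illustrative):

```python
import math

def dot_area(D, Ds):
    """Murray-Davies effective dot area, Eq. (18.8): D is the density of
    the halftone tint and Ds the density of the solid-ink patch."""
    return (1.0 - 10.0 ** (-D)) / (1.0 - 10.0 ** (-Ds))

# A 50%-dot tint: the forward model of Eq. (18.6) with a = 0.5 and
# Ds = 1.0 gives P = 0.55, i.e., D = -log10(0.55); inverting with
# Eq. (18.8) recovers the dot area.
Ds = 1.0
P = 1 - 0.5 * (1 - 10 ** -Ds)
D = -math.log10(P)
print(round(dot_area(D, Ds), 6))  # -> 0.5
```

Measured effective areas computed this way typically exceed the nominal digital dot areas, which is the dot-gain effect addressed by the more elaborate models later in this chapter.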
18.1.1 Spectral extension of the Murray-Davies equation

The spectral Murray-Davies equation is an extension of Eq. (18.2):²

P(λ) = P_w(λ) - a[P_w(λ) - P_s(λ)].   (18.9)

By rearranging Eq. (18.9), we obtain

P(λ) = (1 - a) P_w(λ) + a P_s(λ).   (18.10)

The explicit expression in vector-matrix form is given in Eq. (18.11):

[ P(λ_1) ]   [ P_w(λ_1)  P_s(λ_1) ]
[ P(λ_2) ]   [ P_w(λ_2)  P_s(λ_2) ]   [ (1 - a) ]
[  ...   ] = [   ...        ...   ] x [    a    ],   (18.11a)
[ P(λ_n) ]   [ P_w(λ_n)  P_s(λ_n) ]

or

P = P_{2x} a_2.   (18.11b)
P is a vector of n elements, where n represents the sampling points taken for a spectrum; P_{2x} is a matrix of size n x 2; and a_2 is a vector of two elements. Equation (18.11b) can be inverted to obtain the area coverage a by using the pseudo-inverse transform, because P_{2x} contains two totally independent vectors:

a_2 = (P_{2x}^T P_{2x})^{-1} P_{2x}^T P.   (18.12)

If we set the background reflectance of the paper at 1 and measure the solid-ink reflectance relative to the white background, Eqs. (18.10) and (18.11) become Eqs. (18.13) and (18.14), respectively:

P(λ) = (1 - a) + a P_s(λ),   (18.13)

[ P(λ_1) ]   [ 1  P_s(λ_1) ]
[ P(λ_2) ]   [ 1  P_s(λ_2) ]   [ (1 - a) ]
[  ...   ] = [ ...   ...   ] x [    a    ].   (18.14)
[ P(λ_n) ]   [ 1  P_s(λ_n) ]
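Equation (18.12) is a two-parameter least-squares fit: given a measured tint spectrum plus the paper and solid-ink spectra, it returns the weights (1 - a) and a directly. A sketch with synthetic stand-in spectra:

```python
import numpy as np

n = 31
lam = np.linspace(400, 700, n)
Pw = np.full(n, 0.95)                             # stand-in paper spectrum
Ps = 0.1 + 0.8 / (1 + np.exp((560 - lam) / 20))   # stand-in solid-ink spectrum
a_true = 0.35
P = (1 - a_true) * Pw + a_true * Ps               # tint spectrum, Eq. (18.10)

P2x = np.column_stack([Pw, Ps])                   # n x 2 matrix of Eq. (18.11)
a2, *_ = np.linalg.lstsq(P2x, P, rcond=None)      # Eq. (18.12)
assert np.allclose(a2, [1 - a_true, a_true])
```

Because the model forces the two weights to sum to 1 only implicitly, checking that the fitted a2 entries sum to approximately 1 is a quick sanity test on real measurements.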
Spectral extension can also be applied to Eq. (18.8), as given in Eq. (18.15):

a = [1 − 10^(−D(λ))] / [1 − 10^(−D_s(λ))], (18.15a)

or

10^(−D(λ)) = 1 − a + a·10^(−D_s(λ)). (18.15b)
18.1.2 Expanded Murray-Davies model
Knowing the nonlinearity between the reflectance P and area a, Arney, Engeldrum, and Zeng expanded the Murray-Davies model by considering that P_s and P_w are, themselves, functions of the area coverage; this was based on the observation that the histogram of reflectance values in a halftone image is dependent on the area coverage. Microdensitometric analyses of halftone prints have shown that both the paper reflectance and the dot reflectance decrease as the dot area coverage increases.³⁻⁵ They developed a model that contains two empirical parameters; one relates to the optical spread function of the paper with respect to the spatial frequency of the halftone screen, and the other relates to the distribution of the colorant within the halftone cell.⁶ Thus, Eq. (18.2) becomes a function that is dependent on the area coverage:

P = P_w(a) − a[P_w(a) − P_s(a)], (18.16)
where P_w(a) and P_s(a) are the paper and solid-ink reflectances, which are no longer constants but are functions of the area coverage. Values of P_w(a) and P_s(a) can be determined at the histogram peaks. This model fits well with image-microstructure data from a variety of halftone printers, including offset lithography, thermal transfer, and ink-jet. Results indicate that this expanded Murray-Davies model is as good as the Yule-Nielsen model with an adjustable exponent n known as the Yule-Nielsen value (see Section 18.2 for detail).
18.2 Yule-Nielsen Model
From their study of the halftone process, Yule and Nielsen pointed out that light does not emerge from the paper at the point where it entered. They estimated that between one quarter and one half of the light that enters through a white area will emerge through a colored area, and vice versa. Based on this observation, Yule and Nielsen took light penetration and scattering into consideration and proposed a model, shown in Fig. 18.2, for deriving the halftone equation.⁷ Like the Murray-Davies equation, a fraction of light, a(1 − T_s), is absorbed by the ink film on its way into the substrate, where T_s is the transmittance of the ink film. After passing through the ink, the remaining light, 1 − a(1 − T_s), in the substrate is scattered and reflected. It is attenuated by a factor P_w when the light reaches the air-ink-paper interface.

Figure 18.2 Yule-Nielsen model of the light, halftone dot, and substrate interactions.

The light emerging from the substrate is absorbed again by the ink film; the second absorption introduces a power of two to the equation. Finally, the light is corrected by the surface reflection r_s at the air-ink interface. Summing all these effects together, we obtain

P = r_s + P_w(1 − r_s)[1 − a(1 − T_s)]^2. (18.17)
The solid-ink transmittance T_s can be obtained by setting a = 1 and rearranging Eq. (18.17):

T_s = {[P_s − r_s]/[P_w(1 − r_s)]}^(1/2), (18.18)

where P_s is the ink reflectance of the solid-ink layer. P_s and P_w can be obtained experimentally.
The Yule-Nielsen model is a simplified version of a complex phenomenon involving light, colorants, and substrate. For example, the model does not take into account the fact that in real systems the light is not completely diffused by the paper and is internally reflected many times within the substrate and ink film. To compensate for these deficiencies, Yule and Nielsen made the square power of Eq. (18.17) into a variable n, known as the Yule-Nielsen value, for fitting the data. With this modification, Eq. (18.17) becomes

P = r_s + P_w(1 − r_s)[1 − a(1 − T_s)]^n. (18.19)
The solid-ink transmittance is obtained by setting a = 1, then rearranging Eq. (18.19) to give Eq. (18.20):

T_s = {[P_s − r_s]/[P_w(1 − r_s)]}^(1/n). (18.20)

For a paper substrate, the value of r_s is usually small (<4%). Assuming that the surface reflection r_s can be neglected and the reflectance is measured relative to the substrate, we can simplify Eq. (18.20) to

T_s = (P_s/P_w)^(1/n), or log T_s = −D_s/n, or T_s = 10^(−D_s/n). (18.21)
Similarly, Eq. (18.19) is simplified to Eq. (18.22) by setting r_s = 0:

P/P_w = [1 − a(1 − T_s)]^n. (18.22)

Substituting Eq. (18.21) into Eq. (18.22), we have

P/P_w = {1 − a[1 − 10^(−D_s/n)]}^n. (18.23)
Converting reflectance to density, we obtain the expression in the form of the optical density D as given in Eq. (18.24):

D = −log(P/P_w) = −n log{1 − a[1 − 10^(−D_s/n)]}, (18.24a)

or

D/n = −log{1 − a[1 − 10^(−D_s/n)]}. (18.24b)
Note the similarity between Eq. (18.24b) and Eq. (18.7) of the Murray-Davies equation; they differ only by a factor of 1/n applied to the density values. Rearranging Eq. (18.24b), we obtain

a = [1 − 10^(−D/n)] / [1 − 10^(−D_s/n)]. (18.25)
Like Eq. (18.8), Eq. (18.25) is often used to determine ink area coverage or dot
gain.
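The forward and inverse forms, Eqs. (18.23) and (18.25), can be sketched in a few lines of Python; the round trip below uses assumed values a = 0.35, D_s = 1.4, and n = 1.7 purely for illustration:

```python
import math

def yn_reflectance(a, Ds, n):
    """Relative reflectance P/Pw from Eq. (18.23): [1 - a(1 - 10**(-Ds/n))]**n."""
    return (1.0 - a * (1.0 - 10.0 ** (-Ds / n))) ** n

def yn_area(D, Ds, n):
    """Area coverage from Eq. (18.25): (1 - 10**(-D/n)) / (1 - 10**(-Ds/n))."""
    return (1.0 - 10.0 ** (-D / n)) / (1.0 - 10.0 ** (-Ds / n))

# Round trip: forward model to a density, then invert back to the coverage.
P_rel = yn_reflectance(0.35, 1.4, 1.7)
D = -math.log10(P_rel)
a_back = yn_area(D, 1.4, 1.7)   # recovers 0.35
```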
18.2.1 Spectral extension of the Yule-Nielsen model
Spectral extension of the Yule-Nielsen equation is obtained by replacing the broadband color signals of the paper reflectance P_w and ink transmittance T_s with the corresponding spectra, P_w(λ) and T_s(λ), to give the resulting spectrum P(λ). With this substitution, we have

P(λ) = r_s + P_w(λ)(1 − r_s){1 − a[1 − T_s(λ)]}^n. (18.26)
The surface reflectance r_s at the air-ink interface may also be wavelength dependent; however, we treat it as a constant to simplify the model.
A simplified version is given in Eq. (18.27), which is a direct expansion of Eq. (18.24b) obtained by neglecting the surface reflection r_s and by measuring the reflectance relative to the substrate:

D(λ)/n = −log{1 − a[1 − 10^(−D_s(λ)/n)]}. (18.27)
Rearranging Eq. (18.27), we have

10^(−D(λ)/n) = 1 − a + a·10^(−D_s(λ)/n). (18.28)

By setting P_n(λ) = 10^(−D(λ)/n) and P_ns(λ) = 10^(−D_s(λ)/n), Eq. (18.28) becomes

P_n(λ) = (1 − a) + aP_ns(λ). (18.29)
The explicit expression is given in Eq. (18.30):

    [ P_n(λ_1) ]   [ 1  P_ns(λ_1) ]
    [ P_n(λ_2) ]   [ 1  P_ns(λ_2) ]  [ (1 − a) ]
    [   ...    ] = [ ...    ...   ]  [    a    ].   (18.30)
    [ P_n(λ_n) ]   [ 1  P_ns(λ_n) ]

Note the similarity between Eqs. (18.30) and (18.14); Eq. (18.30) can be inverted via a pseudo-inverse transform to obtain the area coverage a.
For mixed inks, the effective transmittance T_s(λ) is derived from a nonlinear combination of the primary ink transmittances measured at solid-ink area coverage. Each solid-ink transmittance can be obtained by setting a = 1 and rearranging Eq. (18.26):

T_s(λ) = {[P_s(λ) − r_s]/[P_w(λ)(1 − r_s)]}^(1/n), (18.31)

where T_s and P_s are the ink transmittance and reflectance of the solid-ink layer, respectively. P_s and P_w can be obtained experimentally. Knowing r_s and n, we can calculate T_s using Eq. (18.31). One way of obtaining the mixed-ink area coverage and the effective transmittance is by utilizing the Neugebauer equation (see Chapter 17).
Berns and colleagues have combined the spectral Yule-Nielsen equation with the Neugebauer model to form a hybrid approach called the Yule-Nielsen spectral Neugebauer (YNSN) model.⁸,⁹ It has been applied to study the feasibility of spectral color reproduction, with very good results in end-to-end color reproduction,⁸ and to a six-primary (cyan, magenta, yellow, black, green, and orange) Epson Pro 5500 ink-jet printer. With an optimized Yule-Nielsen value and cellular partition, they obtained excellent results for predicting 600 random colors, having an average spectral RMS error of less than 0.5%; the average color difference is about 1.0 ΔE00, with a maximum of 5.9 ΔE00.⁹
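The YNSN prediction itself is compact: raise each Neugebauer-primary spectrum to the power 1/n, sum with the Demichel weights, and raise the result back to the power n. The Python sketch below uses three spectral bands and invented primary reflectances solely to show the computation; a real characterization would use measured spectra and an optimized n:

```python
def demichel_weights(c, m, y):
    """Demichel area coverages of the eight Neugebauer primaries."""
    return {
        "w":   (1 - c) * (1 - m) * (1 - y),
        "c":   c * (1 - m) * (1 - y),
        "m":   (1 - c) * m * (1 - y),
        "y":   (1 - c) * (1 - m) * y,
        "cm":  c * m * (1 - y),
        "cy":  c * (1 - m) * y,
        "my":  (1 - c) * m * y,
        "cmy": c * m * y,
    }

def ynsn_predict(c, m, y, primaries, n):
    """Yule-Nielsen spectral Neugebauer: P(l) = [sum_i w_i * P_i(l)**(1/n)]**n."""
    w = demichel_weights(c, m, y)
    nbands = len(next(iter(primaries.values())))
    return [sum(w[k] * primaries[k][b] ** (1.0 / n) for k in w) ** n
            for b in range(nbands)]

# Illustrative 3-band primary reflectances (not measured data):
prim = {"w": [0.9, 0.9, 0.9], "c": [0.1, 0.4, 0.8], "m": [0.4, 0.1, 0.7],
        "y": [0.8, 0.8, 0.1], "cm": [0.08, 0.09, 0.6], "cy": [0.09, 0.35, 0.09],
        "my": [0.35, 0.09, 0.08], "cmy": [0.05, 0.05, 0.05]}
spec = ynsn_predict(0.5, 0.2, 0.0, prim, n=1.8)
```

With zero coverage of all inks the prediction collapses to the paper spectrum, which is a useful sanity check on an implementation.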
18.3 Area Coverage-Density Relationship
Both the Murray-Davies and the Yule-Nielsen models relate the area coverage and density in the form of power functions. From Eqs. (18.8) and (18.25), we can see that the Murray-Davies equation is a special case of the Yule-Nielsen equation with Yule-Nielsen value n = 1. Figure 18.3 shows the area-density relationships computed from the Murray-Davies and Yule-Nielsen equations using the data from cyan wedges of the CLC500 copier. The relationships are nonlinear; the Murray-Davies equation gives the highest curvature, and the curvature decreases as the Yule-Nielsen n value increases. When n approaches infinity, the relationship becomes linear.¹⁰ This phenomenon can be shown mathematically using the Yule-Nielsen equation [Eq. (18.25)]:

a_j = [1 − 10^(−D_j/n)] / [1 − 10^(−D_s/n)], (18.32)

where a_j is the area coverage of the jth color patch, D_j is the optical density of the color patch, and D_s is the optical density of the solid-ink area coverage. If n = 1, Eq. (18.32) reduces to the Murray-Davies equation [Eq. (18.8)]. The Yule-Nielsen n value serves as an adjustable parameter that modulates the nonlinear behavior of the area coverage and optical density.

Figure 18.3 Relationships between area coverage and density.
Using the exponential series expansion, we have the following approximation:

10^(−D_j/n) = 1 − 2.303(D_j/n) + (2.303^2/2!)(D_j/n)^2 − (2.303^3/3!)(D_j/n)^3 + ···. (18.33)

When n → ∞, D_j/n → 0; therefore, we can eliminate the high-order terms and keep only the first two terms:

lim_{n→∞} [1 − 10^(−D_j/n)] / [1 − 10^(−D_s/n)] = (1 − 1 + 2.303D_j/n) / (1 − 1 + 2.303D_s/n) = D_j/D_s. (18.34)
This derivation shows that in the limiting case when n approaches infinity, the dot area coverage becomes the ratio of the halftone dot density to the solid-ink area density. This linear relationship, showing proportionality, is basically the Beer-Lambert-Bouguer law. The color-printing model would be greatly simplified if this relationship and additivity were true.
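The limit in Eq. (18.34) is easy to verify numerically; with illustrative densities D_j = 0.6 and D_s = 1.5, the computed coverage approaches the density ratio 0.4 as n grows:

```python
def coverage(D, Ds, n):
    """Yule-Nielsen area coverage, Eq. (18.32)."""
    return (1.0 - 10.0 ** (-D / n)) / (1.0 - 10.0 ** (-Ds / n))

# Murray-Davies (n = 1) gives about 0.77; large n tends to Dj/Ds = 0.4.
values = [coverage(0.6, 1.5, n) for n in (1, 2, 10, 1000)]
```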
18.4 Clapper-Yule Model
After formulating the Yule-Nielsen model, Yule then worked with Clapper to develop an accurate account of the halftone process from a theoretical analysis of multiple scattering, internal reflection, and ink transmission.¹¹,¹² In this model, the light is reflected many times from the surface, both within the ink and substrate and by the background, as shown in Fig. 18.4. The total reflected light is the sum of the light fractions that emerge after each internal-reflection cycle:

P = k_s + f_e(1 − r_s)f_r(1 − a + aT_s)^2 / [1 − (1 − f_e)f_r(1 − a + aT_s^2)], (18.35)

where k_s is the specular component of the surface reflection, f_r is the fraction of light reflected at the bottom of the substrate, and f_e is the fraction of light emerging at the top of the substrate.

Figure 18.4 Clapper-Yule model of the light, halftone dot, and substrate interactions.
Again, the solid-ink transmittance is obtained by setting a = 1 and rearranging Eq. (18.35):

T_s = [(P_s − k_s)/{f_e(1 − r_s)f_r + (P_s − k_s)(1 − f_e)f_r}]^(1/2). (18.36)
18.4.1 Spectral extension of the Clapper-Yule model
Similar to the spectral extension of the Yule-Nielsen equation, the spectral extension of the Clapper-Yule equation is

P(λ) = k_s + f_e(1 − r_s)f_r[1 − a + aT_s(λ)]^2 / {1 − (1 − f_e)f_r[1 − a + aT_s(λ)^2]}, (18.37)

T_s(λ) = ([P_s(λ) − k_s]/{f_e(1 − r_s)f_r + [P_s(λ) − k_s](1 − f_e)f_r})^(1/2). (18.38)
The halftone models of Murray-Davies, Yule-Nielsen, and Clapper-Yule show a progressively more thorough consideration of the halftone printing process with respect to light absorption, reflection, transmission, and scattering induced by the ink-paper interaction.
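A direct transcription of Eq. (18.35) makes the structure of the Clapper-Yule model explicit; the parameter values in the example are assumptions chosen only to exercise the function:

```python
def clapper_yule(a, Ts, ks, rs, fe, fr):
    """Clapper-Yule reflectance, Eq. (18.35).
    a: dot area coverage, Ts: solid-ink transmittance,
    ks: specular surface term, rs: surface reflection,
    fe: fraction of internal light emerging at the top of the substrate,
    fr: fraction of light reflected at the bottom of the substrate."""
    t1 = 1.0 - a + a * Ts          # one pass through the halftone ink layer
    t2 = 1.0 - a + a * Ts ** 2     # a full down-and-up internal cycle
    return ks + fe * (1.0 - rs) * fr * t1 ** 2 / (1.0 - (1.0 - fe) * fr * t2)

# Illustrative parameters (not from the text): a mid-tone patch.
P = clapper_yule(a=0.5, Ts=0.3, ks=0.01, rs=0.04, fe=0.6, fr=0.8)
```

Setting a = 0 (or T_s = 1) reduces the expression to the bare-paper reflectance, a convenient check.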
18.5 Hybrid Approaches
It has been found that the single-constant Kubelka-Munk theory fits very well with contone printing and with halftone printing at solid-ink area coverage.¹³ It has also been shown that the Yule-Nielsen and the Clapper-Yule models describe the halftone process quite satisfactorily.¹⁴,¹⁵ The central part of the Yule-Nielsen and Clapper-Yule models is the transmittance, T_i, of the ink film. They employed the Neugebauer concept of eight dominant colors and Demichel's area coverages; the total ink transmittance is the sum of the individual color transmittances weighted by their area ratios. This made their theories additive in nature. If a subtractive derivation such as the single-constant Kubelka-Munk is used for T_i, the transmitted light will become the reflected light when the light reaches the background and then reflects back; we make the following approximation:

T_i(λ) = P_∞(λ)^(1/n). (18.39)

The transmittance is taken as the 1/n root of the reflectance at infinite thickness, P_∞(λ), to account for the multiple channels of the Kubelka-Munk theory.
Substituting Eq. (18.39) into the Yule-Nielsen and Clapper-Yule equations, we obtain Eqs. (18.40) and (18.41) for the Yule-Nielsen and Clapper-Yule models, respectively:

P(λ) = r_s + P_w(λ)(1 − r_s)[1 − A(1 − P_∞(λ)^(1/n))]^n, (18.40)

P(λ) = k_s + f_e(1 − r_s)f_r[1 − A + A·P_∞(λ)^(1/n)]^2 / {1 − (1 − f_e)f_r[1 − A + A·P_∞(λ)^(2/n)]}. (18.41)
To obtain the reflectance P(λ), both Eqs. (18.40) and (18.41) require the input of the reflectance at infinite thickness P_∞(λ), which can be obtained by using the single-constant Kubelka-Munk equation P_∞(λ) = 1 + [K(λ)/S(λ)] − {[K(λ)/S(λ)]^2 + 2[K(λ)/S(λ)]}^(1/2) [see Chapter 16, Eq. (16.11)]. By taking the substrate reflection as the reference point, P_w(λ) = 1, and by approximating (1 − r_s) ≈ 1, Eq. (18.40) reduces to the surface-reflectance correction

P_∞(λ) = P(λ) − r_s. (18.42)
Similarly, Eq. (18.41) can be simplified by setting n = 2 and f_i = 1 − f_e:

P_∞(λ) = [P(λ) − k_s]/[1 − k_s f_i − f_i + f_i P(λ)]. (18.43)

For all practical purposes, this expression is equivalent to Saunderson's correction of Eq. (16.27) because both k_s and f_i are small, such that the denominator in Eq. (18.43) is very close to 1. These hybrid approaches are further validations of the unified global Kubelka-Munk theory developed by Emmel and Hersch (see Section 16.5).
18.6 Cellular Extension of Color-Mixing Models
In most color-mixing models, the parameters are derived from the solid-color patches of the primary or dominant colors. With the addition of partial dot-area coverages, these models can be modified to make use of these intermediate samples, which is equivalent to partitioning the cmy space into rectangular cells and applying the color-mixing model within each cell over a smaller subspace.¹⁶ Rolleston and Balasubramanian extended this cellular concept to the Yule-Nielsen model and its spectral extension.¹⁷ Their results indicate that the cellular extension makes a substantial improvement in modeling accuracy. The cellular concept can also be used in other color-mixing models such as the Murray-Davies equation and the Clapper-Yule model.¹⁸
18.7 Dot Gain
Dot gain is a phenomenon associated with halftone imaging. Halftone dots can grow from their initial size to the final output, or from separation film to printed page, because of the characteristics of paper, ink, and the printing process. Dot gain leads to more intense colors than intended. There are two kinds of dot gain: physical and optical. In conventional halftoning, the physical dot gain is the change in size of the printing dot from the photographic film that made the plate to the size of the dot on the substrate.¹⁹,²⁰ Digital imaging uses a variety of printing methods, such as electrophotographic (or xerographic), ink-jet, and thermal transfer; however, dot gain persists due to heat fusing, ink spreading, or dye diffusion. To accommodate digital imaging, the definition of dot gain is broadened to cover the phenomenon that a printed dot is bigger or darker than one would expect on the basis of the nominal size of the dot intended by the printing process.²¹
Physical reasons for dot gain are many, including the ink-paper interaction, printing technology, and printing conditions. The ink-paper interaction has a major effect on physical dot gain; for example, an ink-jet ink with low surface tension will spread wider than a nonwetting ink such as a xerographic toner. The spreading of ink is also dependent on the substrate: one can expect a small dot-size increase on a coated paper, a larger increase on an uncoated paper, and an even larger increase on a soft stock such as newsprint. Printing technology also plays a role in the physical dot gain; for example, ink-jet printing is different from electrophotographic printing. Ink-jet printing is expected to have higher dot gain than electrophotographic printing because of the liquid-ink spreading, whereas the solid pigment used in an electrophotographic device gains size only slightly by heat fusing. Printing conditions, such as the impression pressure and ink-drop velocity, affect the toner/ink spreading. In addition, the dot gain is sensitive to the halftone technique used. Normally, a dispersed dot will give a higher dot gain than a clustered dot. To make matters even more complex, dot gain is not uniform for all dot sizes within a given halftone cell. It varies in degree over the tonal range of the reproduction. Experiments have shown that dot gain is greatest in the midtone area and about equal in the highlight and shadow areas. The net result is a distortion of the tone reproduction on the printed page compared with the original. All these effects (printing conditions, nature of the substrate, ink characteristics, and halftone technique) are coupled together to give the physical dot gain.
From practical experience, an experimental measurement almost always results in a reflectance significantly less than that predicted by the Murray-Davies equation. This effect, which is different from physical dot gain, is called the optical dot gain or the Yule-Nielsen effect, because the Yule-Nielsen model quite successfully accounts for the nonlinear relationship between reflectance and area coverage.²²
The mathematical derivation of the optical dot gain starts from the Yule-Nielsen equation. Using the solid-ink transmittance expression of Eq. (18.20) and setting r_s = 0, we obtain

T_s = (P_s/P_w)^(1/n). (18.44)

Substituting Eq. (18.44) into Eq. (18.19) with r_s = 0, we obtain

P = [P_w^(1/n) − a(P_w^(1/n) − P_s^(1/n))]^n. (18.45)

For a given n value, the dot area can be determined because the reflectances P, P_s, and P_w can be measured experimentally. Equation (18.45) is the form frequently used for modeling the optical dot gain. Generally, this Yule-Nielsen equation provides a good fit to experimental data, with n values typically falling in the range between 1 and 2.²³ It would be useful to derive the n value from the underlying paper and ink properties, not as an adjustable parameter for fitting the data. Both theoretical and empirical attempts have been made to relate the Yule-Nielsen value to the fundamental physical and optical parameters of ink and paper.¹¹,²⁴⁻³⁴
Arney and colleagues were able to combine theory with an empirical model to provide practical relationships between the optical dot gain and the effect of light scattering with respect to dot size and shape. They related two empirical parameters, similar to the ones used in the Murray-Davies model,³⁵ to an a priori model using two modulation transfer functions (MTF) for edge sharpness and light scattering. The analysis suggested that one empirical parameter depends on the edge sharpness of the halftone dot and the other parameter is a function of both the edge sharpness and the lateral light scattering.
Kruse and Wedin modified the Yule-Nielsen model to account for the trapping in the substrate of the internally scattered light.³⁰⁻³² This model provided the explanation for the decrease in the MTF toward higher screen rulings. It also shed some light on the reasons for the variation in dot gain with different screen-dot geometries. Later, Kruse and Gustavson proposed a model for the optical dot gain on scattering media that predicts color shifts in the rendering caused by different raster technologies.³³,³⁴ The model is based on a nonlinear application of a point-spread function (PSF) to the light flux falling on the medium surface. It has successfully predicted dot gains in monochromatic halftone prints. This model is perhaps the most comprehensive attempt to account for light interaction in a turbid medium, including absorption, directional transmission, surface reflection, bulk reflection, diffuse transmission, and internal surface reflection. Possible paths in a halftone print are shown in Fig. 18.5: A is the surface reflection from the paper surface, B is the ink absorption before reaching the substrate, C is the surface reflection from the ink surface, D is the bulk reflection from the paper, E is the absorbed light that enters from the uncovered substrate, F is the emerged but attenuated light that enters from the uncovered substrate, G is the absorbed light that enters from the ink film, H is the emerged but attenuated light that enters and exits from the ink film, and I is the escaped light that enters from the ink film. The paths F, G, H, and I are important for color printing, where the primary inks are quite transparent over a certain region of the visible spectrum.³⁶ The model takes into account all events shown in Fig. 18.5 except the surface reflections, A and C. Another omission is the transmitted light path J that passes through the substrate and is lost forever.
The Kruse-Wedin model also includes the physical dot gain, which is modeled on the ink density by preserving the amount of ink in the image locally. This is accomplished by the convolution of an ideal halftone image with a point-spread function. The ideal halftone image, with sharp and perfect halftone dots, can be described by a binary image H(x, y), where the value 0 at a particular point corresponds to no ink and the value 1 is full ink coverage. They proposed the formulas given in Eq. (18.46), one for each primary color at a given wavelength λ:

T_c(x, y, λ) = 10^(−D_c(λ)[H_c(x, y) ⊗ γ(x, y)]),
T_m(x, y, λ) = 10^(−D_m(λ)[H_m(x, y) ⊗ γ(x, y)]),
T_y(x, y, λ) = 10^(−D_y(λ)[H_y(x, y) ⊗ γ(x, y)]),
T_k(x, y, λ) = 10^(−D_k(λ)[H_k(x, y) ⊗ γ(x, y)]), (18.46)

where D_c, D_m, D_y, and D_k are the solid-ink densities of the primary inks, γ(x, y) is a point-spread function describing the blurring of the ink, T_j(x, y, λ) is the transmission of the jth primary ink pattern, and the operator ⊗ denotes convolution. The convolution does not change the amount of the ink; it just redistributes the ink film locally over the surface. More specifically, it blurs the edges of halftone dots.

Figure 18.5 Possible light paths in a turbid medium.³⁶
For the optical dot gain, the ink film is modeled by its transmittance T(x, y, λ), and the blurring effect of the bulk reflection is modeled by a point-spread function p(x, y, λ). The reflected image P(x, y, λ) is described by

P(x, y, λ) = {[I(x, y, λ)T(x, y, λ)] ⊗ p(x, y, λ)}T(x, y, λ) (18.47)

and

T(x, y, λ) = T_c(x, y, λ)T_m(x, y, λ)T_y(x, y, λ)T_k(x, y, λ), (18.48)

where I(x, y, λ) is the incident light intensity. The product I(x, y, λ)T(x, y, λ) is the light incident on the substrate, the convolution describes the diffused bulk reflection, and the second multiplication with T(x, y, λ) at the end of Eq. (18.47) is the passage of the reflected light through the ink layer on its way up. Like the model of the physical dot gain, Eq. (18.48) constitutes a nonlinear transfer function; the linear step, the convolution, describes the reflectance properties of the substrate. Xerox researchers reported a similar approach in 1978.²⁴
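A one-dimensional toy analogue of Eqs. (18.46)-(18.48) illustrates both mechanisms: blurring the binary dot pattern (physical gain) and scattering the light under the ink (optical gain). The kernel, density, and dot pattern below are invented for the demonstration, and a simple circular convolution stands in for the 2D point-spread functions:

```python
def convolve(signal, kernel):
    """Circular 1D convolution with a normalized kernel (stand-in for a 2D PSF)."""
    n, k = len(signal), len(kernel)
    half = k // 2
    return [sum(kernel[j] * signal[(i + j - half) % n] for j in range(k))
            for i in range(n)]

# Ideal 50% halftone line pattern and a small normalized blur kernel.
H = ([1.0] * 8 + [0.0] * 8) * 4
psf = [0.25, 0.5, 0.25]
D_ink = 1.2   # assumed solid-ink density

# Physical dot gain: blur the dot pattern, then convert to transmittance (Eq. 18.46).
T = [10.0 ** (-D_ink * h) for h in convolve(H, psf)]
# Optical dot gain: scatter (I*T) in the substrate, pass back through the ink (Eq. 18.47),
# with uniform incident light I(x) = 1.
P = [t * s for t, s in zip(T, convolve(T, psf))]

mean_P = sum(P) / len(P)
md_P = 0.5 * 1.0 + 0.5 * (10.0 ** -D_ink) ** 2   # Murray-Davies with Pw = 1, Ps = Ts**2
```

The mean reflectance of the simulated print comes out below the Murray-Davies prediction for the same solid patch, which is exactly the dot-gain effect described above.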
The heart of the model is the two point-spread functions γ(x, y) and p(x, y, λ). The PSF for the physical dot gain, γ(x, y), is a rough estimate of the real physical ink smearing and is strongly dependent on the printing situation. The PSF for the optical dot gain, p(x, y, λ), takes the multiple light scattering into account. It is obtained on a discrete approximation by a direct simulation of the light-scattering events, closely resembling a Monte Carlo simulation. The basic appearance of a reflection PSF is a sharp peak at the point of entry and an approximately exponential decay with radial distance from the center point. A typical PSF can be approximated by a simple exponential equation as

p(x, y) ≈ P_0[σ/(2πr)] exp(−σr), (18.49)

where r = (x^2 + y^2)^(1/2). Parameter P_0 determines the total reflectance of the substrate, and σ controls the radial extent of the PSF.
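With this exponential form, the 1/(2πr) factor makes the PSF integrate to P_0 over the plane, which can be checked numerically (the P_0 and decay values below are illustrative):

```python
import math

def psf(r, P0, sigma):
    """Radial reflection PSF in the form of Eq. (18.49): P0 * sigma/(2*pi*r) * exp(-sigma*r)."""
    return P0 * sigma / (2.0 * math.pi * r) * math.exp(-sigma * r)

# Midpoint-rule integral of psf(r) * 2*pi*r dr out to r = 20; the exponential
# decay makes the tail negligible, so the result should be close to P0.
P0, sigma, dr = 0.8, 5.0, 1e-3
total = sum(psf((i + 0.5) * dr, P0, sigma) * 2.0 * math.pi * (i + 0.5) * dr * dr
            for i in range(20000))
```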
Gustavson's simulation shows that dot gain has a significant impact on the size of the color gamut.³⁶ Stochastic screens, having significant dot gain, give a larger color gamut by reproducing more saturated green and purple tones than clustered-dot screens. This result is confirmed by the experimental work of Andersson.³⁷
When dot gain happens, a correction is required in order to bring the tone level back to normal. The correction is made in advance to compensate for the dot gain expected in printing. Therefore, one has to determine the amount of dot gain along the whole tone-reproduction curve for a particular halftone, printer, and paper combination. The correction for dot gain has been applied to both the clustered-dot and the error-diffusion algorithms. In both cases, a bitmap can be created that will print with the proper grayscale reproduction. One might therefore expect that the modified error-diffusion algorithm would be the preferred method to use because of its higher spatial-frequency content and better detail rendition. On the other hand, the magnitude of the two correction effects is quite different. The correction needed for clustered dots is much smaller than that for error diffusion because of its lower spatial-frequency content. As a result of this smaller correction, clustered dots tend to be more tolerant of variations in the dot gain. This can be an advantage when the dot gain of the printer is not well known, or is even unknown, or varies in time and/or space.
18.8 Comparisons of Halftone Models
An electronic file of 58 colors with known printer cmyk values is printed for testing color-mixing models. In addition, a 10- or 20-level intensity wedge of each primary color, ranging from solid-ink coverage to near white, is used to determine parameters such as the area coverage and halftone correction factors. Halftone tints are made by using the line screen provided by the manufacturer.³⁸
Reflection spectra and colorimetric data of the prints are obtained with a Gretag SPM 100 (45/0-deg measuring geometry) spectrophotometer using the 2-deg standard observer, illuminant D50, and an absolute white scale with black backing. This instrument outputs the reflection spectrum in a 10-nm interval from 380 to 730 nm. All data are the average of at least three measurements at different locations of a patch. Color-mixing models are programmed in the C language. The outputs are the spectrum of a mixed color in a 10-nm interval from 400 to 700 nm and its CIELAB specifications.
First, the area coverage is determined by the Yule-Nielsen (Y-N) formula of Eq. (18.25). The equation can be used for a broadband density measurement as well as for individual wavelength measurements. If a spectrum is used for calculating the area coverage, one will find that a_i(λ) is wavelength dependent. From our experience, the deviations across the wavelengths of the absorption region are usually small; therefore, an average across the absorption band is used. In this study, we use the spectra of 10-level primary-color wedges for calculating a_i(λ) of CLC500 prints.
Knowing the device values for the halftone wedges, we can correlate printer cmy values to the area coverage. Figure 18.6 shows the relationships between the CLC500 device value and the area coverage determined at n = 2.0. From these curves, we can generate the area-coverage lookup tables for the primary colors of all color-mixing models. The lookup-table implementation enables easy access and fast computation.
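The lookup-table step can be sketched as piecewise-linear interpolation from device value to area coverage; the node values below are invented stand-ins for curves like those in Fig. 18.6:

```python
from bisect import bisect_right

def make_lut(device_vals, coverages):
    """Return an interpolator mapping a device value to an area coverage."""
    def lookup(d):
        if d <= device_vals[0]:
            return coverages[0]
        if d >= device_vals[-1]:
            return coverages[-1]
        i = bisect_right(device_vals, d) - 1   # left node of the bracketing pair
        frac = (d - device_vals[i]) / (device_vals[i + 1] - device_vals[i])
        return coverages[i] + frac * (coverages[i + 1] - coverages[i])
    return lookup

# Invented wedge data: device value (0-255) vs. fitted area coverage.
lut = make_lut([0, 64, 128, 192, 255], [0.0, 0.35, 0.62, 0.85, 1.0])
a = lut(96)   # interpolates halfway between the 64 and 128 nodes
```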
The eight-color Y-N model fits the data in the neighborhood of six ΔE*ab units for the mean, and the three-color Y-N increases the error of the corresponding run by about two ΔE*ab units. This is because of the additivity failure, as pointed out by Yule.³⁹ These numbers are on the order of the CLC500 printer variability. Adjustable parameters in the Y-N equation are varied to get a better understanding of the color mixing. Varying the surface reflection does not give a significant change in the average color difference; the best fit is around r_s = 0.01.

Figure 18.6 Relationship between area coverage and device value of CLC500 primary toners.

Generally, the Clapper-Yule (C-Y) model fits the data slightly worse than the Y-N model, by about one ΔE*ab unit. Like the Y-N model, the eight-color C-Y is better than the three-color approach by about two ΔE*ab units. The surface reflection in the C-Y model is expressed as an offset from the total reflection of 1; because r_s is small, it has a negligible effect. Instead, the parameter k_s of the C-Y equation plays the role that r_s plays in the Y-N model. The modeling prefers low k_s values; decreasing k_s by 0.01 shows a less than one ΔE*ab improvement. The internal-reflection factor f_e has a stronger influence than the surface reflections; the fitting of the data improves when the fraction of light emerging from the paper increases, although the improvement decreases as f_e increases. Histograms of the ΔE*ab distribution are depicted in Fig. 18.7; the C-Y model exhibits the narrowest ΔE*ab distribution. Individual comparisons reveal that dark colors, in general, have large discrepancies. For example, the spectral errors are 0.0224, 0.0412, and 0.0387 for a shadow, a middle-tone, and a highlight color, respectively, but the corresponding color differences are 9.92, 5.61, and 2.28. These results indicate that darker colors require a higher modeling precision than lighter colors.
Figure 18.7 Histograms of color-difference distributions calculated by using the Yule-Nielsen and Clapper-Yule models.

Further improvements can be gained by including the intermediate points along each primary-color axis, using the cellular Yule-Nielsen or cellular Clapper-Yule model.¹⁷,⁴¹ One major problem in the color modeling is that the modeling errors are not uniformly distributed in the color space, requiring a higher precision for darker colors. The solution lies in the cellular extensions of the color-mixing models. With proper selection of the intermediate toner points, the modeling errors can be reduced and made to distribute more evenly. The trade-off is that the number of measurements and the data-storage requirements will be increased accordingly.
Using the Yule-Nielsen model for four-primary mixing, we obtain an average spectral difference of 0.049 from the 58 color samples. The average color difference is 10.6 ΔE*ab, and the root-mean-square value is 11.22 ΔE*ab. The improvements in data fitting for the Y-N model may come from using 16 dominant colors. In any case, it provides reasonable estimates of the measured values. These theoretical halftone models provide guidance in practical applications. However, they are not accurate enough for high-quality reproductions. Although some theories are better than others, the difference depends strongly on the characteristics of the printing devices and measurement conditions.
References
1. A. Murray, Monochrome reproduction in photoengraving, J. Franklin Inst.
221, pp. 721744 (1936).
Halftone Printing Models 403
2. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 42–47 (1997).
3. P. G. Engeldrum, Color between the dots, J. Imaging Sci. Techn. 38, pp. 545–551 (1994).
4. J. S. Arney, P. G. Engeldrum, and H. Zeng, A modified Murray-Davies model of halftone gray scales, Proc. Tech. Assn. Graphic Arts, pp. 353–363 (1995).
5. J. S. Arney and P. G. Engeldrum, An optical model of tone reproduction in hard copy halftones, Proc. IS&T 11th Int. Congress on Advances in Non-Impact Printing Technologies, Springfield, VA, pp. 497–499 (1995).
6. J. S. Arney, P. G. Engeldrum, and H. Zeng, An expanded Murray-Davies model of tone reproduction in halftone imaging, J. Imaging Sci. Techn. 39, pp. 502–508 (1995).
7. J. A. C. Yule and W. J. Nielsen, The penetration of light into paper and its effect on halftone reproduction, TAGA Proc., Sewickley, PA, Vol. 4, pp. 65–75 (1951).
8. F. H. Imai, D. R. Wyble, and R. S. Berns, A feasibility study of spectral color reproduction, J. Imaging Sci. Techn. 47, pp. 543–553 (2003).
9. Y. Chen, R. S. Berns, and L. A. Taplin, Six color printer characterization using an optimized cellular Yule-Nielsen spectral Neugebauer model, J. Imaging Sci. Techn. 48, pp. 519–528 (2004).
10. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, Chap. 6 (1999).
11. F. R. Clapper and J. A. C. Yule, The effect of multiple internal reflections on the densities of half-tone prints on paper, J. Opt. Soc. Am. 43, pp. 600–603 (1953).
12. F. R. Clapper and J. A. C. Yule, The effect of multiple internal reflections on the densities of half-tone prints on paper, J. Opt. Soc. Am. 43, pp. 600–603 (1953).
13. H. R. Kang, Kubelka-Munk modeling of ink jet ink mixing, J. Imaging Sci. Techn. 17, pp. 76–83 (1991).
14. H. R. Kang, Comparisons of color mixing theories for use in electronic printing, 1st IS&T/SID Color Imaging Conf.: Transforms and Transportability of Color, pp. 78–82 (1993).
15. H. R. Kang, Applications of color mixing models to electronic printing, J. Electron. Imaging 3, pp. 276–287 (1994).
16. K. J. Heuberger, Z. M. Jing, and S. Persiev, Color transformation and lookup tables, TAGA/ISCC Proc., Sewickley, PA, Vol. 2, pp. 863–881 (1992).
17. R. Rolleston and R. Balasubramanian, Accuracy of various types of Neugebauer model, 1st IS&T/SID Color Imaging Conference: Transforms and Transportability of Color, Springfield, VA, pp. 32–37 (1993).
18. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 87–92 (1997).
19. M. Southworth, Pocket Guide to Color Reproduction, 2nd Edition, Graphic Arts Publ., Livonia, NY, p. 55 (1987).
20. AGFA, Digital Color Prepress, Vol. II, Agfa Prepress Education Resources, Mt. Prospect, IL (1992).
21. J. S. Arney and C. D. Arney, Modeling the Yule-Nielsen halftone effect, J. Imaging Sci. Techn. 40, pp. 233–238 (1996).
22. J. A. S. Viggiano, The color of halftone tints, TAGA Proc., Sewickley, PA, pp. 647–661 (1985).
23. Y. Shiraiwa and T. Mizuno, Equation to predict colors of halftone prints considering the optical property of paper, J. Imaging Sci. Techn. 37, pp. 385–391 (1993).
24. F. R. Ruckdeschel and O. G. Hauser, Yule-Nielsen effect in printing: a physical analysis, Appl. Optics 17, pp. 3376–3383 (1978).
25. W. W. Pope, A practical approach to n-value, Proc. TAGA, Sewickley, PA, pp. 142–151 (1989).
26. F. P. Callahan, Light scattering in halftone prints, J. Opt. Soc. Am. 42, pp. 104–105 (1952).
27. J. A. C. Yule, D. J. Howe, and J. H. Altman, The effect of the spread function of paper on halftone reproduction, TAPPI J. 50, pp. 337–344 (1967).
28. J. R. Huntsman, A new model of dot gain and its application to a multilayer color proof, J. Imaging Sci. Techn. 13, pp. 136–145 (1987).
29. M. Pearson, n-value for general conditions, Proc. TAGA, Sewickley, PA, pp. 415–425 (1980).
30. M. Wedin and B. Kruse, Modeling of screened color prints using singular value decomposition, Proc. SPIE 2179, pp. 318–326 (1994).
31. M. Wedin and B. Kruse, Halftone color prints: Dot gain and modeling of color distributions, Proc. SPIE 2413, pp. 344–355 (1995).
32. B. Kruse and M. Wedin, A new approach to dot gain modeling, Proc. TAGA, Sewickley, PA, pp. 329–338 (1995).
33. M. Wedin and B. Kruse, Mechanical and optical dot gain in halftone colour prints, Proc. NIP11, IS&T, Hilton Head, South Carolina, pp. 500–503 (1995).
34. B. Kruse and S. Gustavson, Rendering of color on scattering media, Proc. SPIE 2657, pp. 422–431 (1996).
35. J. S. Arney and C. D. Arney, Modeling the Yule-Nielsen halftone effect, J. Imaging Sci. Techn. 40, pp. 233–238 (1996).
36. S. Gustavson, Color gamut of halftone reproduction, J. Imaging Sci. Techn. 41, pp. 283–290 (1997).
37. M. A. Andersson, A study in how the ink set, solid ink density and screening method influence the color gamut in four color process printing, Master Thesis, RIT School of Printing Management and Science, May (1997).
38. J. H. Riseman, J. J. Smith, A. M. d'Entremont, and C. E. Goldman, Apparatus for generating an image from a digital video signal, U.S. Patent No. 4,800,442 (1989).
39. J. A. C. Yule, Principles of Color Reproduction, Wiley, New York, Chap. 11, p. 282 (1967).
40. H. R. Kang, Color scanner calibration, J. Imaging Sci. Techn. 36, pp. 162–170 (1992).
41. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 129–138 (1997).
Chapter 19
Issues of Digital Color Imaging
A digital color image is a multidimensional entity. It is sampled in a 2D plane with width and length, having quantized values in the third dimension to indicate the intensities of three or more channels (trichromatic or multispectral) for describing color attributes. The smallest unit in the 2D image plane is called the picture element (pixel or pel), constituted by the pixel size (width and length) and depth (the number of tone levels). However, the appearance of a color image is much more intriguing than a few physical measurements; there are psychological and psychophysical attributes that cannot be measured by any existing instrument, only by human vision. Therefore, the most important factor in color-image analysis is human vision, because a human being is the ultimate judge of image quality. Human vision provides the fundamental guidance to digital-imaging design, analysis, and interpretation.
Because human vision is subjective and instrumentation is objective, there is a need to establish the correlation between them. This makes digital color imaging a very interesting and complex process, involving human vision, color appearance phenomena, imaging technology, device characteristics, and media properties.[1] Rapid advances in color science and technology have brought a better understanding of color images in the forms of the human visual system (HVS) and the color appearance model (CAM) on the one hand, and developments of various color-imaging technologies and computational approaches on the other; it is now possible to address color-image reproduction at the system level in a quantitative manner and to produce good image quality across all media involved. To achieve these goals, cross-media color-image reproduction consists of four core elements: device characteristics and calibration, color appearance modeling, image processing, and gamut mapping. This chapter describes the scope and complexity of digital color imaging at the system level and provides the sRGB-CIELAB conversions as examples to illustrate the modular architecture and technical requirements for a successful implementation.
19.1 Human Visual Model
Human visual system (HVS) models are attempts to utilize human visual sensitivity and selectivity for modeling and improving the perceived image quality. HVS modeling is based on the psychophysical process that relates psychological phenomena (color, contrast, brightness, etc.) to physical phenomena (wavelength, spatial frequency, light intensity, etc.). It determines what physical conditions give rise to a particular psychological condition (perceptual, in this case). A common approach is to study the relationship between stimulus quantities and the perception of the just-noticeable difference (JND), a procedure in which the observer is required to discriminate between color stimuli that evoke JNDs in visual sensation. To this end, Weber's and Fechner's laws play key roles in the basic concepts and formulations of psychophysical measures. Weber proposed a general law of sensory thresholds: the JND of a given stimulus is proportional to the magnitude of the stimulus. This means that the JND between one stimulus and another is a constant fraction of the first; this fraction is called the Weber fraction, which is constant within any one sense modality for a given set of observing conditions.[2] The size of the JND, measured in physical quantities (e.g., radiance or luminance) for a given attribute, depends on the magnitude of the stimuli involved. Generally, the greater the magnitude of the stimuli, the greater the size of the JND. Fechner built on the work of Weber to derive a mathematical formula indicating that the perceived stimulus magnitude is proportional to the logarithm of the physical stimulus intensity.[2] The implication of this logarithmic relationship is that a perceptual magnitude may be determined by summing JNDs, suggesting that quantization levels should be spaced logarithmically in reflectance, i.e., uniformly in the density domain.
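For illustration, the logarithmic spacing implied by Fechner's law can be sketched in a few lines; the 256-level count and the reflectance range used below are illustrative assumptions, not values from the text:

```python
import math

def log_spaced_reflectances(levels=256, r_min=0.005, r_max=1.0):
    """Quantization levels spaced logarithmically in reflectance,
    i.e., uniformly in optical density D = -log10(R)."""
    d_max = -math.log10(r_min)   # highest density (darkest level)
    d_min = -math.log10(r_max)   # lowest density (paper white)
    step = (d_max - d_min) / (levels - 1)
    # Equal density steps approximate equal perceptual (JND-like) steps.
    return [10 ** -(d_min + i * step) for i in range(levels)]

refl = log_spaced_reflectances()
# The ratio of successive reflectances is constant, the analogue of a
# constant Weber fraction across the tone scale.
ratios = [refl[i + 1] / refl[i] for i in range(len(refl) - 1)]
```

Each step is thus a fixed percentage change in reflectance rather than a fixed absolute change, which is exactly what uniform spacing in the density domain means.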
Human vision response is not uniform with respect to spatial frequency, luminance level, object orientation, visible-spectrum wavelength, etc. Weber's and Fechner's laws have been used to form scales of color differences. Quantitative studies address thresholds for a single attribute of color (e.g., brightness, hue, and colorfulness). The Weber fraction is a function of the luminance, which indicates that the Weber fraction is not constant over the range of luminances studied. Because the Weber fraction is the reciprocal of sensitivity, the sensitivity drops quickly as the luminance becomes weak. Similar determinations have been made regarding the threshold difference in wavelength necessary to see a hue difference. Results indicate that sensitivity to hue is much lower at both ends of the visible spectrum. Approximately, the human visual system can distinguish between colors with dominant wavelengths differing by about 1 nm in the blue-yellow region, but near the spectrum extremes a 10-nm separation is required.[3] Threshold measurements of purity show that the purity threshold varies considerably with wavelength. A very marked minimum occurs at about 570 nm; the number of JND steps increases rapidly on either side of this wavelength.[4,5] Weber fractions related to luminance, wavelength, color purity, and spatial frequency are the foundations of color-difference measurement and image-texture analysis.
Human vision has spatial and color components. Major spatial patterns are aliasing, moiré, and contouring. Spatial visual sensitivity is judged by the image contrast. Image contrast is the ratio of the local intensity to the average image intensity.[6] Visual-contrast sensitivity describes the signal properties of the visual system near threshold contrast. For sinusoidal gratings, contrast C is defined by the Michelson contrast as[7]

C = (Lmax − Lmin)/(Lmax + Lmin) = La/L,  (19.1)

where Lmax and Lmin are the maximum and minimum luminances, respectively, La is the luminance amplitude, and L is the average luminance. Contrast sensitivity is the reciprocal of the contrast threshold.
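Equation (19.1) can be evaluated directly from a sampled grating; the short sketch below is illustrative (the function name and the grating parameters are my own, not from the text):

```python
import math

def michelson_contrast(samples):
    """Michelson contrast of a sampled luminance grating, Eq. (19.1):
    C = (Lmax - Lmin) / (Lmax + Lmin)."""
    l_max, l_min = max(samples), min(samples)
    return (l_max - l_min) / (l_max + l_min)

# One period of a sinusoidal grating L(x) = L_avg + L_a * sin(2*pi*x),
# with illustrative average luminance 100 and amplitude 25.
l_avg, l_amp = 100.0, 25.0
grating = [l_avg + l_amp * math.sin(2 * math.pi * k / 1000)
           for k in range(1000)]

c = michelson_contrast(grating)   # equals La / L = 25/100 = 0.25
sensitivity = 1.0 / c             # reciprocal of the threshold contrast
```

For a sinusoid the two forms of Eq. (19.1) coincide: (Lmax − Lmin)/(Lmax + Lmin) reduces to the amplitude over the mean.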
19.1.1 Contrast sensitivity function
Contrast sensitivity can be used to measure human spatial resolution by monitoring it as a function of spatial frequency. Visual-contrast sensitivity with respect to spatial frequency is known as the contrast sensitivity function (CSF). A CSF describes contrast sensitivity for sinusoidal gratings as a function of spatial frequency expressed in cycles per degree of visual angle. It dictates how humans perceive image textures. Typical human contrast sensitivity curves are given in Fig. 19.1,[5,8,9] where the horizontal axis is the spatial frequency and the vertical axis is the contrast sensitivity, namely, log(1/C) = −log C, where C is the contrast of the pattern at the detection threshold.

Figure 19.1 Human contrast sensitivity functions.[9]

The CSF of Fig. 19.1 shows two significant features. First, there is a falloff in sensitivity as the spatial frequency of the test pattern increases, indicating that the visual pathways are insensitive to high-spatial-frequency targets. In other words, human vision has a low-pass filtering characteristic. Second, there is no improvement of sensitivity at low spatial frequencies; there is even a loss of contrast sensitivity at lower spatial frequencies for the luminance CSF. This behavior is dependent on background intensity; it is more pronounced at higher background intensity. Under scotopic conditions of low illumination, the luminance CSF curve is low pass and peaks near 1 cpd. On intense photopic backgrounds, CSF curves are band pass and peak near 8 cpd.[8] Chromatic CSFs show a low-pass characteristic and have much lower cutoff frequencies than the luminance channel (see Fig. 19.1), where the blue-yellow opponent-color pair has the lowest cutoff frequency. This means that the human visual system is much more sensitive to fine spatial changes in luminance than it is to fine spatial changes in chrominance. Images with spatial degradations in luminance will often be perceived as blurry or not sharp, while similar degradations in the chrominance will usually not be noticed. The high-frequency cutoff and exponential relationship with spatial frequency form the basis for various HVS models.[10,11]

Imaging scientists have taken advantage of this characteristic for designing compression algorithms and halftone patterns, among other things. CSF plays an important role in image resolution, image-quality improvement, halftoning, and image compression. It has been used to estimate the resolution and tone level of imaging devices. An upper limit of the estimate is about 400 dpi with a tone step of 200 levels. In many situations, much lower resolutions and tone levels still give good image quality. For the effect of object orientation, the visual acuity is better for gratings oriented at 0 and 90 deg, while it is least sensitive at 45 deg (see Fig. 19.2).[9] This property has been used in printing for designing halftone screens.[12]
19.1.2 Color visual model
In general, color visual models are extensions of luminance models, attained by utilizing the human visual difference between luminance and chrominance. We can separate the luminance and chrominance channels by using the opponent-color description. For the chrominance part, we use the fact that the contrast sensitivity to spatial variations in chrominance falls off faster than that for luminance with respect to spatial frequency (see Fig. 19.1) to find a new set of constants for the chromatic CSF. Both responses are low pass; however, the luminance response is reduced at 45, 135, 225, and 315 deg for the orientation sensitivity. This places more luminance error along the diagonals in the frequency domain, taking advantage of the fact that these angles are the least sensitive to spatial variations. The chrominance response has a narrower bandwidth than the luminance response. Using the narrower chrominance response, as opposed to identical responses for both luminance and chrominance, allows a lower-frequency chromatic error, which may not be perceived by human viewers. We can further allow adjustment of the relative weight between the luminance and chrominance responses. This is achieved by multiplying the luminance response by a weighting factor. As the weighting factor increases, more error is forced into the chrominance components.

Figure 19.2 Human orientation sensitivity.[9]
Many human visual models have been proposed that attempt to capture the central features of human perception. The simplest HVS model includes just a visual filter that implements one of the CSFs mentioned above. Better approaches include a module in front of the filter to account for nonlinearity such as Weber's law. The filtered signal is pooled together in a single channel of information. This structure is called the single-channel model.[13,14] Because digital-image quality is a very complex phenomenon, it requires inputs of all types of distortions. In view of the image complexity, multiple-channel approaches are developed to include various inputs. This is achieved by putting a bank of filters before the nonlinearity in a systematic way; each filter addresses one aspect of the whole image quality. This approach is called the multichannel model.[15–23]
HVS has been used to design digital halftone screens; it often takes hours or
days for a mainframe computer to optimize a set of stochastic halftone screens.
Thus, it is seldom applied directly to real-time image processing because of the
computational cost. The HVS model generates point-spread functions that are convolution operators, which require multiplications and summations in a specified neighborhood, repeatedly for each and every pixel in the image.
19.2 Color Appearance Model
CIE colorimetry is based on color matching; it can predict color matches for an average observer, but it is not equipped to specify color appearance. Colorimetry is extremely useful if two stimuli are viewed with identical surround, background, size, illumination, surface characteristics, etc. If any of these constraints is violated, it is likely that the color match will no longer hold. In practical applications, the constraints for colorimetry are rarely met. Color appearance models are developed to deal with various phenomena that break the constraints of the simple CIE tristimulus system, such as simultaneous contrast, hue change with luminance level (Bezold-Brücke hue shift), hue change with colorimetric purity (Abney effect), colorfulness increase with luminance (Hunt effect), contrast increase with luminance (Stevens effect), chromatic adaptation, color constancy, and discounting of the illuminant.[24] These phenomena cannot be measured by the existing color instruments. A color appearance model is any model that includes predictors of at least the relative color appearance attributes of lightness, chroma, and hue. To have reasonable predictors of these attributes, the model must include at least some form of a chromatic-adaptation transform. All chromatic-adaptation models have their roots in the von Kries hypothesis (see Chapter 4). This definition enables the uniform color spaces CIELAB and CIELUV to be considered as color appearance models.[24] However, modern appearance models are much more sophisticated and complex, having key features of the cone-sensitivity transform, chromatic adaptation, opponent-color processing, nonlinear response functions, memory colors, and discounting the illuminant.
Chromatic adaptation is achieved through the largely independent adaptive gain control on each of the long (L), medium (M), and short (S) cone responses. The gains of the three channels depend on the state of eye adaptation, which is determined by preexposed stimuli and the surround, but is independent of the test stimulus.[5] An example of chromatic adaptation is viewing a piece of white paper under a fluorescent lamp, then under an incandescent lamp. The paper appears white under both lamps, despite the fact that the energy reflected from the paper has changed from blue casting to yellow casting. Chromatic adaptation is the single most important property of the human visual system with respect to understanding and modeling color appearance.
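The von Kries hypothesis mentioned above models this adaptive gain control as an independent rescaling of each cone signal. A minimal sketch, assuming LMS cone signals are already available (the numeric white-point values below are made-up illustrative numbers, not data from the text):

```python
def von_kries_adapt(lms, src_white, dst_white):
    """Von Kries chromatic adaptation: scale each cone response
    independently by the ratio of destination to source white."""
    return [c * dw / sw for c, sw, dw in zip(lms, src_white, dst_white)]

# Illustrative values only: a stimulus seen under a yellowish source,
# re-referenced to a bluish destination white.
stimulus  = [0.40, 0.35, 0.20]
src_white = [1.00, 0.95, 0.60]   # hypothetical incandescent-like white
dst_white = [0.95, 1.00, 1.05]   # hypothetical daylight-like white

adapted = von_kries_adapt(stimulus, src_white, dst_white)
# The source white itself maps exactly onto the destination white,
# which is how the model accounts for paper appearing white under both lamps.
white_check = von_kries_adapt(src_white, src_white, dst_white)
```

The three gains form a diagonal matrix in cone space, which is why all the chromatic-adaptation transforms descended from this hypothesis first convert tristimulus values to LMS, scale, and convert back.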
Another key component of the color appearance model is opponent-color processing. Modern theory of color vision is based on the trichromatic and opponent-color theories. Three types of cones, L, M, and S, separate the incoming color signal, and the neurons of the retina then encode the color into opponent signals. The sum of the three cone types (L + M + S) produces an achromatic response, while the combinations of two types with the difference of a third type produce the red-green (L − M + S) and yellow-blue (L + M − S) opponent signals.[25] In addition to the static human-vision theory, dynamic adaptation is very important in color perception. It includes dark, light, and chromatic adaptations.
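The opponent encoding above is a linear combination of the cone signals; the sketch below simply transcribes those combinations into code (my own illustration, not an implementation from the book):

```python
def cone_to_opponent(l, m, s):
    """Encode cone responses into achromatic, red-green, and yellow-blue
    opponent signals, per A = L+M+S, RG = L-M+S, YB = L+M-S."""
    achromatic  = l + m + s
    red_green   = l - m + s
    yellow_blue = l + m - s
    return achromatic, red_green, yellow_blue

# Illustrative cone responses.
a_ch, rg, yb = cone_to_opponent(0.5, 0.4, 0.1)
```

Because the mapping is linear and invertible, appearance models can move freely between cone space and opponent space with a 3 x 3 matrix and its inverse.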
CIE has recommended a series of color appearance models, CIECAM97s and CIECAM02 (see Section 5.5).[26–28] The development and formulation of CIECAM builds on the works of many researchers in the field of color appearance. There are 20 or more equations in CIECAM using computations such as matrix multiplication and various power functions. CIECAM is computationally intensive and its implementation is costly: imagine a digital document of several million pixels where each and every pixel has to be manipulated by 20 or more complex computations! Therefore, there are almost no real-time applications, and this situation is not likely to change in the foreseeable future. At best, some high-end imaging devices may implement a scaled-down version of CIECAM.
19.3 Integrated Spatial-Appearance Model
Zhang and Wandell extended CIELAB to account for spatial as well as color errors in reproduction of the digital color image.[29] They call this model Spatial-CIELAB or S-CIELAB (see Section 5.6). The design goal is to apply a spatial filtering to the color image in a small-field or fine-patterned area, but revert to conventional CIELAB in a large uniform area. S-CIELAB reflects both spatial and color sensitivity. This model is a color-texture metric and a digital-imaging model. This measure has been applied to printed halftone images. Results indicate that S-CIELAB correlates with perceptual data better than standard CIELAB.[30] This measure can also be used to improve multilevel halftone images.[31] However, the addition of the 2D convolution operator makes the computation even more costly. It is questionable whether the image-quality improvement is worth the added cost, complexity, and loss of speed.
19.4 Image Quality
Color images have at least two components: spatial patterns and colorimetric properties. The main causes of spatial patterns are inadequate sampling in the spatial domain (aliasing, staircasing, etc.), halftone dot size and shape, the interaction of color separations (moiré and rosette), and insufficient tone levels in the intensity domain (false contouring). Colorimetric properties involve color transformation, color matching, color difference, surround, adaptation, etc. The overall image quality is the integration of these factors perceived by human eyes. Major factors affecting image quality are the color-gamut mismatch, chromatic adaptation (or a full-fledged CAM), gray-component replacement (GCR), color-conversion techniques, sampling, resolution conversion, spatial scaling, depth scaling, quantization, halftoning, sharpness, focus, compression, color encoding, computational error, measurement error, and device characteristics (e.g., stability and registration).
Color-gamut mismatch is the physical limitation imposed by color imaging devices. Out-of-gamut colors must be mapped within the destination space for rendering. This problem is difficult to resolve and is still an actively researched area. Several reviews on color-gamut-mapping techniques can be found elsewhere (see Section 7.9).[32]
Chromatic adaptation is the viewer's ability to adapt to object illumination. Adaptation techniques require the alteration of image data; thus, an improper adaptation may cause adverse effects. Gray-component replacement is used in the printing industry to convert CMY to CMYK. This method is likely to introduce color errors because it changes the initial CMY compositions and adds a black component. The color-conversion technique affects color accuracy; for example, the 3D lookup table usually gives better conversion accuracy than polynomial regression (see Chapters 8 and 9).[33] Resolution conversion changes the spatial composition of the image via techniques such as sampling, pixel deletion/insertion, linear interpolation, or area mapping.[34] Digital halftoning trades area for depth (or intensity) by modulating the area to give a continuous-tone appearance. Both resolution conversion and halftoning have major impacts on image texture. Halftoning also affects the color appearance due to dot gain and dot overlap. Several device dot models have been proposed; however, they are either too complex or not quite successful.[35] Fortunately, dot gain can be taken into account by measurement and corrected in the form of tone response curves (TRCs). The dot overlap can be treated by a color-mixing model or a device characterization. This means that a new device characterization is required whenever a new halftone scheme or device resolution is used. Sharpness is affected by the spatial resolution (sampling rate), contrast, halftoning, and edges of objects, among other factors. An image looks sharp when rendered with high resolution and/or high contrast (e.g., high-key images). A fine halftone screen gives a sharper appearance than a coarse screen. A steep transition at the boundary of an object appears sharp. Edge sharpness can be enhanced by various edge-enhancement techniques. The overall sharpness of an image is the outcome of the settings of these fundamental imaging parameters and their interactions. On the other hand, defocusing changes the resolving power and blurs the image. Like sharpness enhancement, a minor defocusing can be corrected via digital image processing.
Lossy compression techniques change information content, which leads to color and texture errors. JPEG compression has shown that there is little or no visible image degradation on pictorial images for a compression ratio around 10.[36] Therefore, compression is not a major concern for pictorial images, unless a very high compression ratio is requested. However, lossy compression is a concern for scanned text and line art.
Color encoding standards provide format and ranges for representing and ma-
nipulating color quantities. An improper encoding scheme can severely damage
the color-conversion accuracy (see Section 6.5).
Device instability affects color consistency and increases measurement errors.
Device-registration error may produce unwanted textures, such as banding, and
affect the halftone appearance.
19.5 Imaging Technology
Imaging technology plays a major role in image quality; important examples are sampling, quantization, scaling, resolution conversion, color transformation, color matching, and halftoning, as given in Section 19.4. Device characteristics such as resolution, bit depth, tone response curve, and color gamut affect image quality. Generally, image quality improves as the resolution and/or bit depth increase. Resolution and bit depth are governed by the fundamental principle and limitation of the device. For example, it is difficult to pack a CCD array densely enough to reach the high resolution desired for digital cameras; therefore, some pattern (e.g., Bayer's pattern, see Section 14.2) is employed with interpolation to achieve the desired resolution. It is also difficult to pack ink-jet nozzles closely enough to give 300 jets/inch, so multiple columns (or lines) of jets are arranged in a staggered configuration with a fixed delay in the firing time to align the beginning-of-line positions.
More importantly, ink-jet and electrophotographic technologies have a limited
number of tone levels and are usually bi-level. This physical constraint dictates
the quality and bit depth of these printers. To create a continuous-tone appearance,
one needs to employ halftoning or dithering for spreading the tone curve beyond
bi-level. The tone curve can be determined experimentally. It is preferable to have
a wide dynamic range; that is, the slope in the central linear portion of the curve
should not be too steep.
Color gamut is governed by the selection of primary colors that, in turn, are
determined by the chemical and physical properties of the colorants used. The
selection of proper primary colorants is critical to any imaging device.
The last issue is the media. Some imaging technologies are more sensitive to
media than others; for example, ink-jet printing is much more sensitive to paper
type than the electrophotographic process. The same image printed by an ink-jet
printer on plain paper looks different from one printed on a photographic paper.
19.5.1 Device characteristics
Color images, either printed on paper or displayed on a monitor, are formed by pixels that are confined to a digital grid. Generally, the digital grid is a square (sometimes rectangular, if the aspect ratio is not 1) ruling in accordance with the resolution of the imaging device; that is, if a printer resolution is 300 dots per inch (dpi), then the digital grid has 300 squares per inch. This discrete representation creates problems for the quality of color images in many ways. First, the quality of an image is dependent on sampling frequency, phase relationship, and threshold rule.[37] Then, it is dependent on the arrangement of dots in the digital grid. There are three types of dot-placement techniques: dot on dot, dot off dot, and rotated dot.[38] Each technique has its own characteristics and concerns. CRT and LED displays can be considered as the dot-off-dot technique, where three RGB phosphor dots are closely spaced in a triangular pattern without overlapping.
In printing, a frequently used technique is that of rotated dots, where color dots are partially overlapped. Rotated dots are produced by a halftone algorithm. Many halftone algorithms are available to create various pixel arrangements for the purpose of simulating gray sensation from bi-level imaging devices. Moreover, the image texture is also dependent on the size and shape of the pixel. Even in the ideal case of a square pixel shape that exactly matches the digital grid, there are still image-quality problems; the ideal square pixel will do well on horizontal and vertical lines, providing sharp and crisp edges, but will do poorly on angled lines and curves, showing jaggedness and discontinuity. Several dot models have been proposed to mathematically account for the dot overlapping, such as Demichel's dot-overlap model, which is obtained from the joint probability of the overlapped area, and the circular dot-overlap model, which assumes an ideal round shape with an equally sized dot.[39] Halftone patterns for different levels of halftone dots are analyzed with the circular dot model to predict the actual reflectance of the output patterns.[40] Threshold levels of the halftone dot are then adjusted so that the reflectance of the dot pattern matches the input reflectance values. This correction produces patterns with fewer isolated pixels, since the calculation gives a more accurate estimate of the actual reflectance achieved on the paper. Pappas expanded the monochromatic dot-overlap model to color.[41] The color dot-overlap model accounts for overlap between neighboring dots of the same and different colors. The resulting color is the weighted average of segmented colors with the weight being the area of the segment. This approach is basically the same as the Neugebauer equations, using geometrically computed areas instead of Demichel's dot-overlap model.
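Demichel's model treats the separations as statistically independent, so the fractional area of each overlap combination is a product of coverages. A sketch for three inks (my own illustrative code, not an implementation from the book):

```python
from itertools import product

def demichel_areas(c, m, y):
    """Demichel dot-overlap probabilities for fractional coverages c, m, y
    of three independently placed separations: the area of each of the
    eight Neugebauer primaries is a product of 'ink present' (coverage)
    and 'ink absent' (1 - coverage) factors."""
    areas = {}
    for bits in product((0, 1), repeat=3):
        # Key the primary by the inks present, e.g. "cm"; "w" = bare paper.
        name = "".join(ink for ink, b in zip("cmy", bits) if b) or "w"
        p = 1.0
        for cov, b in zip((c, m, y), bits):
            p *= cov if b else (1.0 - cov)
        areas[name] = p
    return areas

fractions = demichel_areas(0.5, 0.2, 0.8)
# The eight fractional areas partition the page, so they sum to 1.
total = sum(fractions.values())
```

In a Neugebauer-style prediction, each of these area fractions would then weight the color of the corresponding solid-overprint primary.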
19.5.2 Measurement-based tone correction
The circular dot-overlap model is an ideal case. It provides a good rst-order ap-
proximation for the behavior of printers. In reality, it does not adequately account
for all of the printer distortions. Signicant discrepancies often exist between the
predictions of the model and measured values.
40
In view of this problem, a tradi-
tional approach to the correction of printer distortions is the measurement-based
technique.
Basically, the measurement-based technique obtains the relationship between the input tone level and the measured output reflectance or optical density for a specific halftone algorithm, printer, and substrate combination. The experimental procedure consists of making prints having patches that are halftoned by the algorithm of interest. Each patch corresponds to an input tone level. The number of selected tone levels should be sufficient to cover the whole toner dynamic range with a fine step size, but it is not necessary to print all available input levels, such as 256 levels for an 8-bit system, because most printers and measuring instruments are not able to resolve such a fine step. The reflectance or density values of these patches are measured to give the tone-response curve of the halftone algorithm used in conjunction with the printing device. Correcting for printer distortions consists of inverting this curve to form a tone-correction curve. This tone-correction curve is applied to the image data prior to halftoning and printing, as a precompensation.
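As a concrete sketch of this precompensation (a hypothetical helper, not code from the book), the following builds a 256-entry tone-correction lookup table by inverting a measured tone-response curve; the patch levels and measured responses are assumed to be monotonically increasing and expressed on the same 0-255 scale.

```python
def tone_correction_lut(levels, measured):
    """Invert a measured tone response into a tone-correction LUT.

    levels   : increasing input tone levels (0..255) of the printed patches
    measured : measured response per patch on the same 0..255 scale,
               assumed monotonically increasing
    lut[t] is the input level that, according to the measurements,
    actually produces the desired response t (linear interpolation of
    the swapped measured -> level curve)."""
    lut = []
    lo, hi = measured[0], measured[-1]
    for t in range(256):
        y = lo + (hi - lo) * t / 255.0      # target response for tone t
        j = 1
        while j < len(measured) - 1 and measured[j] < y:
            j += 1
        y0, y1 = measured[j - 1], measured[j]
        x0, x1 = levels[j - 1], levels[j]
        frac = 0.0 if y1 == y0 else (y - y0) / (y1 - y0)
        lut.append(round(x0 + frac * (x1 - x0)))
    return lut
```

Applying the table to image data before halftoning compensates for effects such as dot gain: if the printer renders mid-tones too dark, the inverted curve maps a requested mid-tone to a lighter input level.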
Issues of Digital Color Imaging 417
Most rendering devices, such as CRTs and dot-matrix, ink-jet, and electrophotographic printers, have a circular pixel shape (sometimes an irregular shape because of ink spreading). A circular pixel will never exactly fit the size and shape of the square grid. To compensate for these defects, the diameter of the round pixel is made √2 times the digital grid period t, so that dots touch in the diagonal positions. This condition is the minimum requirement for total area coverage by round pixels in a square grid. Usually, the pixel size is made larger than √2·t but less than 2t to provide a higher degree of overlapping. These larger pixels eliminate the discontinuity and reduce the jaggedness, even in the diagonal direction. However, they cause the rendered area per pixel to exceed the area of the digital grid. When adjacent dots overlap, the interaction in the overlapped pixel area is not simply additive but more closely resembles a logical OR (as if areas of the paper covered by ink are represented by a 1 and uncovered areas by a 0). That is, parts of the paper that have already been darkened are not made significantly darker by an additional section of a dot being placed on top. The correction for the overlapping of the irregularly shaped and oversized pixels is an important part of the overall color-imaging model.
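A small numerical sketch of this logical-OR behavior (hypothetical, not from the text): sample a unit grid cell and count a sample as inked if it falls inside any round dot, so overlapping dots never darken a point twice. With the dot diameter equal to √2 times the grid period, a single dot just covers its own cell.

```python
import math

def coverage_fraction(diameter_ratio, dots, n=200):
    """Inked fraction of the unit grid cell at (0, 0) under logical-OR
    dot overlap.

    diameter_ratio : dot diameter as a multiple of the grid period
    dots           : set of (i, j) grid cells holding a dot, with dot
                     centers at (i + 0.5, j + 0.5)
    Samples an n-by-n grid of points; a point counts as covered if it
    lies inside ANY dot, so overlapped areas count only once."""
    r = diameter_ratio / 2.0
    covered = 0
    for a in range(n):
        for b in range(n):
            x, y = (a + 0.5) / n, (b + 0.5) / n
            if any(math.hypot(x - i - 0.5, y - j - 0.5) <= r for i, j in dots):
                covered += 1
    return covered / (n * n)
```

An inscribed dot (diameter equal to the grid period) leaves the corners bare and covers only about π/4 ≈ 0.785 of the cell, whereas a √2-diameter dot reaches full coverage, which is the minimum-coverage condition stated above.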
19.5.3 Tone level
In addition to the discrete nature of the digital grid, different imaging devices use different imaging technologies. The most important rendering methods are the contone and halftone techniques. A contone device is capable of producing analog signals in both the spatial and intensity domains, having continuously changing shades. An example of true contone imaging is the photographic print. Scanners, monitors, and dye-diffusion printers with 8-bit depth (or more) can be considered contone devices if the spatial resolution is high enough.
The halftone process is perhaps the single most important factor in image reproduction for imaging devices with a limited number of tone levels (usually bi-level). Simply stated, halftoning is a printing and display technique that trades area for depth by partitioning an image plane into small areas containing a certain number of pixels; it then modulates the tone density by modulating the area to simulate the gray sensation on bi-level devices. This is possible because of the limited human spatial contrast sensitivity. At a normal viewing distance, persons with correct (or corrected) vision can resolve roughly 8-10 cycles/mm. The number of discernible levels decreases logarithmically as the spatial frequency increases. At a sufficiently high frequency, the gray sensation is achieved by integrating over areas at a normal viewing distance. This binary representation of a contone image is an illusion; much image information is lost in the halftoning. Thus, there is a high probability of creating image artifacts such as moiré and contouring. Moiré is caused by the interaction among halftone screens, and contouring is due to an insufficient number of tone levels in the halftone cell. There will be no artifacts if a continuous imaging device is used, such as in photographic prints. Recently, printers that can output a limited number of tone levels (usually fewer than 16) have been introduced. This capability creates the so-called multilevel halftoning, which combines the halftone screen with limited tone levels to improve color image quality.[42]
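The trade of area for depth can be illustrated with a minimal ordered-dither halftone (a generic textbook screen used here for illustration, not an algorithm specific to this chapter): each pixel of a grayscale image is thresholded against a position-dependent value, and over a small neighborhood the density of "on" pixels approximates the input tone.

```python
# 4x4 Bayer dither matrix; entries 0..15 spread thresholds over the cell
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone(image):
    """Bi-level ordered-dither halftone of a grayscale image (0..255).

    Returns a matrix of 0/1 pixels; a mid-gray input turns on about half
    of the pixels in every 4x4 cell, simulating the tone by area."""
    out = []
    for y, row in enumerate(image):
        out.append([1 if v > (BAYER4[y % 4][x % 4] + 0.5) * 16 else 0
                    for x, v in enumerate(row)])
    return out
```

A uniform mid-gray patch (value 128) produces exactly 8 "on" pixels per 4×4 cell; viewed from far enough away, the eye integrates these into a 50% gray.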
Most color displays are based on a frame-buffer architecture, where the image is stored in video memory from which controllers constantly refresh the screen. Usually, images are recorded as full-color images, where the color of each pixel is represented by tristimulus values with respect to the display's primaries and quantized to 8 or more bits per channel. For 3-primary, 8-bit devices, in theory, 256 × 256 × 256 = 16,777,216 colors can be generated. Often, the cost of the high-speed video memory needed to support storage of these full-color images on a high-resolution display is not justified; moreover, the human eye is not able to resolve 16 million colors (the most recent estimate is about 2.28 million discernible colors[43]). Therefore, for cost reasons, many color-display devices limit the video memory to 8, 12, or 16 bits, allowing 256, 4096, or 65,536 simultaneous colors, respectively. A palettized image, which has only the colors contained in the palette, can be stored in video memory and be rapidly displayed using lookup tables. This color quantization is designed in two successive steps: (i) the selection of a palette and (ii) the mapping of each pixel to a color in the palette. The problem of selecting a palette is a specific instance of vector quantization. If the input color image has Q distinct colors and the palette is to have K entries (Q > K), the palette selection may be viewed as the process of dividing the Q colors into K clusters in a 3D color space and selecting a representative color for each cluster. Ideally, this many-to-one mapping should minimize perceived color differences. The color quantization can also be designed as a multilevel halftoning. For example, the mapping of a full-color image to an 8-bit color palette (256 colors) is a multilevel halftoning with 8 levels of red (3 bits), 8 levels of green (3 bits), and 4 levels of blue (2 bits).
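The fixed 3-3-2 mapping mentioned above can be sketched directly with bit operations (a minimal illustration; real palette selection would cluster the image's own Q colors):

```python
def rgb_to_332(r, g, b):
    """Quantize an 8-bit-per-channel RGB pixel to a fixed 3-3-2 palette
    index: 8 levels of red, 8 of green, and 4 of blue, as above."""
    return ((r >> 5) << 5) | ((g >> 5) << 2) | (b >> 6)

def palette332_color(index):
    """Representative RGB color of a 3-3-2 palette entry, with the
    quantized levels spread back over the full 0..255 range."""
    r, g, b = (index >> 5) & 0x7, (index >> 2) & 0x7, index & 0x3
    return (r * 255 // 7, g * 255 // 7, b * 255 // 3)
```

This is the fixed-palette special case; choosing the K entries from the image's own colors (for example, by median-cut or k-means clustering) generally yields smaller perceived color differences.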
19.6 Device-Independent Color Imaging
In a modern office environment, various imaging devices such as scanners, computers, workstations, copiers, and printers are connected in an open environment. For example, a system may have several color printers using different printing technologies such as dot matrix, ink-jet, and electrophotography. Moreover, it may include devices of the same technology from different manufacturers. In view of these differences, the complexity of an office information system can be extremely high. The main concern in the color-image analysis of electronic information at the system level is color consistency: the appearance of a document should look the same as the image moves across various devices and goes through many color transforms. However, the degree of color consistency depends on the application; desktop printing for casual users may differ significantly from a short-run printing of high-quality commercial brochures.
Hunt pointed out that there are several levels of color matching: spectral, colorimetric, exact, equivalent, corresponding, and preferred color matching.[44] Spectral color reproduction matches the spectral reflectance curves of the original and reproduced colors. At this level, the original and reproduction look the same under any illuminant; there is no metamerism. (A pair of stimuli are metamers if they match under one illuminant but mismatch under a different illuminant; see Chapter 3.) Colorimetric reproduction is a metameric match that is characterized by the original and the reproduction colors having the same CIE chromaticities and relative luminances. A reproduction of a color in an image is exact if its chromaticity, relative luminance, and absolute luminance are the same as those in the original scene. Equivalent color reproduction requires that the chromaticities, relative luminances, and absolute luminances of colors have the same appearance as the colors in the original scene when seen in the image-viewing conditions. Corresponding reproduction matches the chromaticities and relative luminances of the colors such that they have the same appearance as the colors in the original would have had if they had been illuminated to produce the same average absolute luminance level. Preferred color reproduction is defined as a reproduction in which the colors depart from equality of appearance to those in the original, either absolutely or relative to white, in order to give a more pleasing result to the viewer. Ideally, the appearance of a reproduction should equal the appearance of the original or match the customer's preference. In many cases, however, designers and implementers of color management systems do not know what users really want. For an open environment with a diverse user demography, it is very difficult to satisfy everybody. Usually, this demand is partially met by an interactive system, adjustable knobs, or some kind of soft-proofing system. This difficulty is compounded by the fact that modeling the human visual system is still an area of active research and debate.
Even color consistency based on colorimetric equivalence at the system level is not a simple task, because each device has its unique characteristics. Devices differ in their imaging technology, color encoding, color description, color gamut, stability, and applications. Because not all scanners use the same lamp and filters, and not all printers use the same inks, the same RGB image file will look different when displayed on different monitors. Likewise, the same cmyk image file will look different when rendered by different printers. Moreover, there are even bigger differences in color appearance when the image is moved from one medium to another, such as from monitor to print. Because of these differences, efforts have been made to establish a color management system (CMS) that will provide color consistency. An important event was the formation of the International Color Consortium (ICC) to promote the use of color images by increasing interoperability among various applications (image processing, desktop publishing, etc.), different computer platforms, and different operating systems. This organization established architectures and standards for color transforms via ICC profiles and color management modules (CMMs). A profile contains data for performing a color transform, and a CMM is an engine for actually processing the image through profiles. The central theme in color consistency at the system level is the requirement of a device-independent color description and properly characterized and calibrated color-imaging devices.
A proper color characterization requires image test standards, techniques, tools, instruments, and controlled viewing conditions. The main features to consider for color consistency are image structure, white-point equivalence, color gamut, color transformation, measurement geometry, media difference, printer stability, and registration.[45]
Image structure refers to the way an image is rendered by a given device. The primary factor affecting image texture is quantization; it is particularly apparent if the output device is bi-level. Scanners and monitors that produce image elements in 8-bit depth can be considered contone devices. Among printers, the dye-sublimation printer is capable of printing contone; other types, such as electrophotographic and ink-jet printers, have to rely on halftoning to simulate the contone appearance. For bi-level devices, halftoning is perhaps the single most important factor in determining the image structure. It affects the tone-reproduction curves, contouring, banding, graininess, moiré, sharpness, and resolution of fine details. Because of these dependencies, color characterization has to be done for each halftone screen; a new characterization is needed whenever the halftone screen is changed.
Various white points are used for different devices; the white point for a scanner is very different from the one used in a CRT monitor. Moreover, the viewing conditions for a hard copy are usually different from those used in scanning and displaying. The operator or the color management system must convert the chromaticity of the white point in the original to that of the reproduction. From the definition of tristimulus values, we know that the exact conversion requires substituting one illuminant spectrum for another, weighting by the color-matching functions and object spectrum, and integrating the product of the spectra over the whole visible range. This illuminant-replacement process is not a linear correspondence. The changing of the viewing conditions has been addressed by chromatic-adaptation models (see Chapter 4) and white-point conversion (see Chapter 13).[46]
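As a deliberately simplified sketch of such a white-point conversion (a von Kries-style scaling applied directly in XYZ rather than in a cone-response space, which the chromatic-adaptation models of Chapter 4 would use):

```python
def white_point_scale(xyz, white_src, white_dst):
    """Von Kries-style white-point conversion, simplified: scale each
    tristimulus channel by the ratio of destination white to source
    white. Proper chromatic-adaptation models first transform XYZ to a
    cone-response space before scaling (see Chapter 4)."""
    return tuple(v * wd / ws for v, ws, wd in zip(xyz, white_src, white_dst))
```

The source white itself maps exactly to the destination white; other colors are only approximated, which is exactly why the illuminant replacement is not a simple linear correspondence and full adaptation models are needed.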
Issues concerning the color gamut are given in Chapter 5 for CIE spaces, Chapter 6 for colorimetric RGB spaces, and Chapter 7 for device spaces.
Color transformation is a key part of color consistency at the system level, because different devices use different color descriptions. Various transformation methods, such as the color-mixing model, regression, 3D interpolation, artificial neural network, and fuzzy logic, have been developed. Techniques for the color transform are given in Chapter 8 for regression and Chapter 9 for 3D lookup-table approaches. Major trade-offs among these techniques are conversion accuracy, processing speed, and computational cost.
Color printers are known for their instability; it is a rather common experience to find that the first print may be quite different from the one-hundredth one. A study indicates that dye-sublimation, ink-jet, and thermal-transfer techniques are more stable than lithography.[47] The electrophotographic process, involving charging, light exposure, toner transfer, and heat fusing, is complex and inherently unstable. Part of the device stability problems, such as drifting, may be corrected by device calibration to bring the device back to its nominal state.
Finally, a few words about printer registration: color printing requires three (cmy) or four (cmyk) separations. They are printed by multiple passes through the printing mechanism or by multiple separate operations in one pass. These operations require mechanical control of pixel registration. Registration is very important to color-image quality. However, it is more of a mechanical problem than an image-processing problem and is thus beyond the scope of this book.
19.7 Device Characterization
As stated in Section 9.4, a color image has at least two components: spatial patterns and colorimetric properties. Factors affecting color matching are color transform, color gamut mapping, viewing conditions, and media. Factors affecting image structure are sampling, tone level, compression, and halftoning. Surface characteristics are primarily affected by the media; various substrates may be encountered during image transmission and reproduction. These factors are not independent; they may interact to cause an appearance change. For example, inadequate sampling and tone level not only give poor image structure but may also cause color shifts; a different halftone screen may cause tone changes and color shifts; an improper use of colors may create textures; images with sharp edges may appear to have higher saturation and more contrast than images with soft edges;[48] and finer screen rulings will give higher resolution and cleaner highlight colors.[49] Given these complexities and difficulties, a systematically formulated color-imaging model that incorporates the human visual model, color-mixing model, color appearance model, and dot-overlap model is needed to improve and analyze color images.
A requirement for color imaging at the system level is to retain color consistency and image detail when the image is moved from one device to another. Color consistency based on colorimetric equivalence requires a device-independent encoding standard, properly characterized color-imaging devices, and a color-conversion engine.[50] At the system level, we will encounter all kinds of input and output color representations. If we select a device-independent color representation as the intermediate standard, we have the benefit of reducing the complexity from M × N to M + N (see Fig. 19.3). This modular implementation adds flexibility and scalability to the color-imaging architecture: we can easily add or remove system components. Various inputs are converted to an internal exchange standard. On the output side, the internal standard is converted to an output color specification. An example of using CIEXYZ as the intermediate interchange standard is given in Fig. 19.4. The conversion between color spaces requires transformation techniques. For device inputs, a device characterization is performed by experimentally determining parameters for a selected conversion technique (e.g., matrix transform or table lookup) to correlate the relationships between color spaces and build a device color profile.[51] For colorimetric inputs, the transformation involves computations via known equations. Transformation techniques, device parameters, device profiles, and known formulas form the basis for the color-conversion engine.
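The complexity reduction of Fig. 19.3 is easy to quantify (a trivial illustration): with M input and N output device types, pairwise characterization needs M × N transforms, while routing everything through one internal standard needs only M + N.

```python
def transforms_needed(m, n, internal_standard):
    """Number of device-to-device color transforms to build and maintain,
    with or without an internal interchange standard (see Fig. 19.3)."""
    return m + n if internal_standard else m * n
```

For, say, 4 input and 6 output devices the count drops from 24 to 10, and adding one more device later adds a single transform instead of M (or N) of them.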
Figure 19.3 System complexity with respect to device characterization, with and without an
internal standard.
Figure 19.4 Possible color transformations using CIEXYZ as the internal standard.
Implementations of color-conversion engines are subject to constraints of image quality, speed, and cost.
Colorimetric properties involve color transformation, color matching, color difference, surround, adaptation, etc. Overall image quality is the integration of these factors as perceived by human eyes. Various theories and models have been developed to address these problems and requirements. Sampling and quantization theories ensure proper resolution and tone level. Colorimetry provides quantitative measures to specify color stimuli. Color-mixing models and transformation techniques give the ability to move color images through various imaging devices. Color appearance models attempt to account for phenomena, such as chromatic adaptation, that cannot be explained by colorimetry. Device models deal with various device characteristics, such as tone rendering, halftoning, color quantization, and device dot modeling, for generating visually pleasing color images. A thorough image color analysis should include the human visual model, color measurement, color-mixing model, color-space transform, color matching, device models, device characteristics, and color appearance model, as given in Fig. 19.5. In reality, this complete picture is seldom implemented because of the cost, complexity, and time required.
19.8 Color Spaces and Transforms
For system-level applications, it is essential to have a device-independent color representation and means of transforming image data among color spaces.
A color image is acquired, displayed, and rendered by different imaging devices. For example, an image is often acquired by a flatbed color scanner or a digital camera, displayed on a color monitor, and rendered by a color printer. Different imaging devices use different color spaces; well-known examples are the RGB space for monitors and the cmy (or cmyk) space for printers. There are two main types of color space: device dependent and device independent. Color signals produced or utilized by various imaging devices are device dependent; they are device specific, depending on the characteristics of the device such as imaging technology, color description, color gamut, and stability. Thus, the same image rendered by different imaging devices may not look the same; in fact, they usually look different. These problems can be minimized by using device-independent color spaces. The CIE developed a series of color spaces based on colorimetry that do not depend on the imaging devices. CIE color spaces provide an objective color measure and are used as interchange standards for converting between device-dependent color spaces (see Chapter 5).

Figure 19.5 System-level color-image reproduction.
Because various color spaces are used in the color-image process, there is a need to convert between them. Color-space transformation is essential to the transport of color information during image acquisition, display, and rendition. Converting a color specification from one space to another means finding the links for the mapping. Some transforms have a well-behaved linear relationship (see Chapter 6); others involve a strongly nonlinear power function (see Chapter 5). Many techniques have been developed to provide sufficient accuracy for color-space conversion under the constraints of cost, speed, design parameters, and implementation concerns. Generally, these techniques can be classified into four categories. The first method uses polynomial regression, based on the assumption that the correlation between color spaces can be approximated by a set of simultaneous equations. Once the equations are selected, a statistical regression is performed on a set of selected points with known color specifications in both source and destination spaces to derive the coefficients of the equations (see Chapter 8). The second approach uses a 3D lookup table with interpolation. A color space is divided into small cells, and the source and destination color specifications are experimentally determined for all lattice points; nonlattice points are interpolated using the nearest lattice points (see Chapter 9). The third technique is based on theoretical color-mixing models. The additive color mixing of contone devices (e.g., monitors) can be addressed by Grassmann's laws, whereas the subtractive color mixing of contone devices (e.g., printers) can be interpreted by the Beer-Lambert-Bouguer law and Kubelka-Munk theory (see Chapters 15-18).
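The 3D lookup-table approach can be sketched as trilinear interpolation over a regular lattice (one common interpolation choice among several; this is an illustrative helper, not code from Chapter 9):

```python
def trilinear(lut, x, y, z):
    """Trilinear interpolation in a regular 3D lookup table.

    lut[i][j][k] holds the destination color (a tuple) at lattice point
    (i, j, k); x, y, z are source coordinates in lattice units
    (0 .. len(lut) - 1). A nonlattice point is blended from its eight
    surrounding lattice points with volume weights."""
    n = len(lut) - 1
    i, j, k = min(int(x), n - 1), min(int(y), n - 1), min(int(z), n - 1)
    fx, fy, fz = x - i, y - j, z - k
    out = []
    for ch in range(len(lut[0][0][0])):
        acc = 0.0
        for di in (0, 1):
            for dj in (0, 1):
                for dk in (0, 1):
                    w = ((fx if di else 1 - fx) *
                         (fy if dj else 1 - fy) *
                         (fz if dk else 1 - fz))
                    acc += w * lut[i + di][j + dj][k + dk][ch]
        out.append(acc)
    return tuple(out)
```

The accuracy-versus-memory trade-off mentioned above shows up directly: a denser lattice needs more stored measurements but less is asked of the interpolation.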
19.8.1 Color-mixing models
Roughly speaking, two types of color-mixing theories exist. The first is formulated for the halftone process and includes the Neugebauer equations, Murray-Davies equation, and Yule-Nielsen model. The other type is based on light absorption and is used for subtractive color mixing; it includes the Beer-Lambert-Bouguer law and the Kubelka-Munk theory. The Beer-Lambert-Bouguer law relates light intensity to the quantity of the absorbent, based on the proportionality and additivity of the colorant absorptivity (see Chapter 15). The Kubelka-Munk theory is based on two light channels traveling in opposite directions for modeling translucent and opaque media; the light is absorbed and scattered in only two directions, up and down (see Chapter 16).
Neugebauer's model provides a general description of halftone color mixing; it predicts the resulting red, green, and blue reflectances or tristimulus values XYZ from given dot areas in the print. In practical applications, one wants the cmy (or cmyk) dot areas for a given color (see Chapter 17). This requires solving the inverted Neugebauer equations, which specify the dot areas of the individual inks required to produce the desired RGB or XYZ values. The inversion is not trivial because the Neugebauer equations are nonlinear (see Chapter 17). The Murray-Davies equation derives the reflectance via the absorption of halftone dots (see Chapter 18); it is often used to determine the area coverage by measuring the reflectance of the solid and halftone step wedges. The Yule-Nielsen model addresses light penetration and scattering (see Chapter 18). The Clapper-Yule model is an accurate account of the halftone process derived from a theoretical analysis of multiple scattering, internal reflections, and ink transmissions (see Chapter 18). In this model, the light is reflected many times from the surface, within the ink and substrate, and by the background; the total reflected light is the sum of the light fractions that emerge after each internal reflection cycle.
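As minimal single-value illustrations of the halftone family (the book's chapters treat the full spectral and tristimulus versions; the value of n below is illustrative, not from the text):

```python
def murray_davies(a, r_ink, r_paper):
    """Murray-Davies: reflectance of a halftone tint as the area-weighted
    average of solid-ink and paper reflectances (dot area a, 0..1)."""
    return a * r_ink + (1 - a) * r_paper

def yule_nielsen(a, r_ink, r_paper, n=1.7):
    """Yule-Nielsen modification: the same average taken on the 1/n power
    of the reflectances, accounting for light penetration and scattering
    in the paper; n = 1 recovers Murray-Davies."""
    return (a * r_ink ** (1 / n) + (1 - a) * r_paper ** (1 / n)) ** n

def neugebauer(areas, primary_reflectances):
    """Neugebauer prediction: reflectance as the area-weighted sum over
    all overlap primaries (paper, single inks, and overprints), with
    areas from, e.g., the Demichel dot-overlap model."""
    return sum(a * r for a, r in zip(areas, primary_reflectances))
```

These forward predictions are cheap; it is the inverse direction, finding the dot areas that hit a target color, that requires solving the nonlinear system numerically, as noted above.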
19.9 Spectral Reproduction
Spectral reproduction is the ultimate solution to color reproduction; it removes the metameric problem. It is also an essential part of multispectral imaging, serving as the internal representation (see Chapter 14). This approach often improves image quality or reproduction accuracy by a factor of two or three, but the computational cost, system complexity, and time required are several orders of magnitude higher than those of conventional trichromatic approaches. It is not known whether this trade-off is cost effective.
19.10 Color Gamut Mapping
The color gamut indicates the range of colors an imaging device can render; it is the volume enclosed by the most saturated primary colors and their two-color mixtures in a 3D color space. The gamut size and shape are mainly governed by the colorants, printing technology, and media. Good colorants can be found for use in a chosen technology, but they tend to vary from technology to technology. As a result, different color devices have different gamut sizes: a color monitor is different from a color printer, and an electrophotographic printer is different from an ink-jet printer. Color gamut mismatches among input, display, and output devices are the most difficult problem in maintaining color consistency. Numerous approaches for gamut mapping have been proposed; however, it is still an active research area (see Section 7.9). The color correction for an ink-substrate combination is only a partial solution; the relationship between the original and the reproduction should also be taken into account. In fact, the optimal relationship is not usually known; Hunt suggested six different color-matching levels. In most cases, the optimum relationship is a compromise that depends on the user's preference regarding the colors in the original (e.g., monitor) and those available on the reproduction (e.g., printer). The optimum transform for photographic-transparency originals that contain mainly light colors, the high-key image, is different from the optimum for an original that contains mainly dark colors, the low-key image. Graphic images probably require a transform different from scanned photographs.[48] The color gamut mapping and preferred color transform are part of the color appearance task.
19.11 Color Measurement
Most colorimeters and spectrophotometers have measuring geometries that are deliberately chosen to either eliminate or totally include the first-surface reflections. Typical viewing conditions, having light reflection, transmission, and scattering, lie somewhere between these two extremes. Colors that look alike in a practical viewing condition may measure differently. This phenomenon is strongly affected by the surface characteristics of the images and is most significant for dark colors. A practical solution that has been proposed is telecolorimetry, which places the measuring device in the same position as the observer.[52]
As mentioned previously, CIE systems have limited capability to account for changes in appearance that arise from a change of the white point or substrate. On different media, isomeric samples may look different. Even within a given medium type, such as a paper substrate, differences can come from the whiteness of the paper, fluorescence, gloss, surface smoothness, etc. The problem of the media difference is not just a measurement problem; it is also a part of the color appearance problem.
19.12 Color-Imaging Process
A complete color-imaging process that incorporates various visual and physical models may be described as follows. An image, with a specific sampling rate, quantization level, and color description, is created by the input device, which uses (i) the human visual model to determine the sampling and quantization, and (ii) device calibration to ensure that the gain, offset, and tone-reproduction curves are properly set. The setting of the tone-reproduction curves may employ the dot-overlap model, for example. This input device assigns device-dependent color coordinates (e.g., RGB) to the image elements. The image is fed to the color-transform engine for conversion to colorimetric coordinates (e.g., CIELAB). This engine performs the colorimetric characterization of the input device; it implements the required color-space transform techniques (e.g., color-mixing models, 3D lookup tables). The engine can be a part of the input device or an attachment, or it can be on the host computer.
The second step is to apply a chromatic-adaptation and/or color appearance model to the colorimetric data, along with additional information on the viewing conditions of the original image (if any), in order to transform the image data into appearance attributes such as lightness, hue, and chroma. These appearance coordinates, which have accounted for the influences of the particular device and the viewing conditions, constitute a viewing-conditions-independent space.[53] At this point, the image is represented purely by its original appearance. After this step, the image may go through various types of processing such as spatial scaling, edge enhancement, resolution conversion, editing, or compression. Once the output device is selected, the image should be transformed to a format suitable for that device. If the output is a bi-level device, the image should be halftoned; a good halftone algorithm takes the human visual model, dot-overlap model, and color-mixing model into account. Then, color gamut mapping is performed to fit the output color gamut. This puts the image in its final form with respect to the appearance that is to be reproduced. Next, the process must be reversed. The viewing conditions for the output image, along with the final image-appearance data, are used in an inverted color appearance model to transform from the viewing-conditions-independent space to a device-independent color space (e.g., CIEXYZ). These values, together with the colorimetric characterization of the output device, are used to transform to the device coordinates (e.g., cmyk) for producing the desired output image.
The main implementation issues for the color-space transform are the performance, image quality, and cost, which are affected by the imaging architecture and imaging technology. Performance is judged by the processing speed. Image quality consists of color appearance and spatial patterns, which are strongly dependent on the imaging technology, device characteristics, and media properties. Implementation cost is directly affected by the computational accuracy and memory requirement, and indirectly affected by the performance and quality. We briefly discuss these issues and the trade-offs among them.
19.12.1 Performance
The major factors affecting speed are the image quality, color architecture, image
path, bandwidth, design complexity, conversion technique, and degree of hardware
assistance [e.g., an application-specic integrated circuit (ASIC) chip]. In general,
high quality requires more image processing and thus reduces the speed. For exam-
ple, a higher resolution gives a better image quality, but it increases data size, which
leads to a longer processing time. An improperly designed image path may add
unnecessary transforms that increase complexity and processing time, whereas a
cleverly designed architecture may simplify computation and increase parallelism
to enhance speed. The color-transformation technique makes a big difference in complexity and speed.
428 Computational Color Technology
For example, polynomial regression requires more computational steps than 3D lookup with interpolation but less memory for data storage, and table lookup is faster than arithmetic operations. The bandwidth indicates the rate of carrying bits in the office system. At 600-dpi resolution, a page-size image has 134.6 Mbytes of data. If a throughput of 10 pages per minute is requested, we need a bit rate of 179.5 Mbits/sec. This bit rate undoubtedly requires a high
bandwidth (by the way, compression will help; the extent of data size reduction is
dependent on the compression ratio). Moreover, to process this number of pixels with respect to the implemented imaging techniques requires a high-speed processor. A specifically designed silicon chip is always faster than a general-purpose processor. Therefore, it is common practice to use an ASIC for the purpose of enhancing speed.
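The bandwidth figures quoted above can be reproduced with a few lines of arithmetic. This sketch assumes a letter-size (8.5 × 11 inch) page and 4 bytes per pixel (CMYK), assumptions the text does not state explicitly:

```python
# Page-size image at 600 dpi (8.5 x 11 inch page assumed)
pixels = (600 * 8.5) * (600 * 11)              # 33.66 Mpixels
mbytes = pixels * 4 / 1e6                      # 4 bytes/pixel (CMYK) -> 134.6 Mbytes
seconds_per_page = 60 / 10                     # throughput of 10 pages per minute
mbits_per_sec = mbytes * 8 / seconds_per_page  # required sustained bit rate
print(round(mbytes, 1), round(mbits_per_sec, 1))  # 134.6 179.5
```

The agreement with the quoted 134.6 Mbytes and 179.5 Mbits/sec suggests these page-size and pixel-depth assumptions are the ones the author used.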
19.12.2 Cost
The major factors affecting cost are image quality, design complexity, computational cost (ASIC versus software), memory requirement, and the conversion algorithm. High image quality requires more image processing, which increases computational cost. For example, edge enhancement using a convolution filter improves the sharpness of the image, but it is a computationally intensive operation whose cost increases roughly as the square of the filter size. The cost also increases with increasing design complexity, which requires more hardware components and software management. The conversion algorithm determines the feasibility of design simplification and the memory requirement. A thorough understanding of the algorithms allows us to concatenate several arithmetic operations into one, to use integer arithmetic instead of floating-point computation, and to determine the minimum memory requirement that still provides good accuracy and image quality.
19.13 Color Architecture
To optimize the cost, quality, and performance, we proposed a modular color architecture and presented several implementation schemes. The architecture is simple, yet scalable and flexible. It accommodates various color inputs and outputs. The CIELAB-to-sRGB transforms were used as examples to illustrate the implementation detail. We implemented these transforms in software to demonstrate the effects of the implementation scheme with respect to performance and memory requirements. The software simulation indicated that it is feasible to implement a high-performance, high-accuracy, and cost-effective transform between CIELAB and sRGB.
Quality, performance, and cost are interrelated; any optimization is a delicate trade-off among them. Optimization is best achieved via a modular architecture in which each major processing component can be individually designed, implemented, tested, and improved upon. Additional benefits are (i) a reduction in system complexity and (ii) the ease of upgrading software modules to hardware ASIC chips.
Issues of Digital Color Imaging 429
In view of these advantages, and knowing that the example given in Fig. 19.5 is too complex in scope for this book, we narrowed our attention to colorimetric and device RGB inputs via an intermediate standard (e.g., CIE color space) to device color outputs (e.g., RGB or CMYK). At this scale, we proposed a modular architecture in Fig. 19.6 for colorimetrically converting from RGB inputs to device outputs and the reverse transformation. We selected CIEXYZ as the internal standard because it is a common thread between RGB standards (most RGB spaces have a linear transform to and from CIEXYZ) and it is convenient for performing chromatic adaptation. The CIEXYZ is transformed to CIELAB because of its visual linearity, which gives a better accuracy for device characterization. Each module in Fig. 19.6 was implemented according to its definition and formulas. Still, this architecture is quite involved because there are many RGB standards and Device/RGB inputs; each standard has its own characteristics. However, the beauty of a modular architecture is that if we change the output device, all we have to do is to change the output device profile. Similarly, if we change the RGB input, we only need to change the gamma value and matrix coefficients. The mathematical framework, data structure, and mechanism remain the same for all inputs.
Figure 19.6 Schematic diagram of color transformations from colorimetric RGB and Device/RGB to CIELAB via CIEXYZ, and the reverse transformations.
19.14 Transformations between sRGB and Internet FAX Color Standard
Insight into color transforms with respect to computational accuracy and performance can be gained by examining the definitions of the color-encoding standards involved. Because of numerous RGB inputs, it is not likely that we can cover them all in this chapter. Therefore, we choose the transform between the default World Wide Web color standard sRGB [54, 55] and the Internet FAX color standard [56, 57] as an example because they are well documented and have high commercial value. These standards are the carriers of color information on the Internet. For the coming information age, compatibility and interoperability between these two standards are extremely important; we need to have the capability of converting them accurately and cost effectively. Other RGB standards, although different numerically, have the same formulas and structures.
For system-level applications using Internet FAX, we deal with two color-encoding standards, sRGB and CIELAB, and two device-dependent color spaces, Device/RGB for scanning or display and Device/CMYK for printing. The definition and associated formulas of the sRGB-to-CIELAB transform are given in Eqs. (19.2)–(19.9). Equation (19.2) scales inputs to the range of [0, 1], in which the n-bit integer input is divided by the maximum integer in the n-bit representation to give a floating-point value between 0 and 1, where n is the bit depth of the input sRGB and g represents a component of the RGB triplet.
g'_sRGB = g_nbit/(2^n − 1). (19.2)
Equation (19.3) performs the gamma correction.
g_sRGB = g'_sRGB/12.92, if g'_sRGB ≤ 0.03928
g_sRGB = [(g'_sRGB + 0.055)/1.055]^2.4, if g'_sRGB > 0.03928. (19.3)
Equation (19.4) is the linear transform from RGB to XYZ [45, 46].
[X_D65; Y_D65; Z_D65] = [0.4124 0.3576 0.1805; 0.2126 0.7152 0.0722; 0.0193 0.1192 0.9505] [R_sRGB; G_sRGB; B_sRGB]. (19.4)
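As a concrete sketch of the forward path, Eqs. (19.2)–(19.4) chain together as follows. The function name and tuple return type are mine; the constants are those of the equations:

```python
def srgb_to_xyz_d65(r, g, b, n=8):
    """n-bit sRGB integers -> CIEXYZ under D65 via Eqs. (19.2)-(19.4)."""
    M = [(0.4124, 0.3576, 0.1805),   # Eq. (19.4) matrix
         (0.2126, 0.7152, 0.0722),
         (0.0193, 0.1192, 0.9505)]
    def linearize(g_int):
        gp = g_int / (2 ** n - 1)            # Eq. (19.2): scale to [0, 1]
        if gp <= 0.03928:                    # Eq. (19.3): undo the display gamma
            return gp / 12.92
        return ((gp + 0.055) / 1.055) ** 2.4
    rgb = [linearize(v) for v in (r, g, b)]
    return tuple(sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3))
```

For 8-bit white (255, 255, 255) the result is the D65 white point (0.9505, 1.0000, 1.0890), since each matrix row sums to the corresponding white tristimulus value.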
After converting to the internal standard CIEXYZ, a chromatic adaptation is needed because sRGB is encoded under the illuminant D65, whereas Internet FAX is under D50. Many chromatic-adaptation models have been proposed with varying degrees of complexity; the computational cost is usually high. Even for a simple von Kries model, the computational cost is substantial; the white-point adjustment takes the form of multiplying four matrices, as shown in Eq. (19.5) [58], where (X_D65, Y_D65, Z_D65) and (X_D50, Y_D50, Z_D50) represent the tristimulus values under D65 and D50, respectively. Parameters L_max65, M_max65, and S_max65 are the maximum, or illuminant white, of the long, medium, and short cone responses, respectively, under D65. Similarly, L_max50, M_max50, and S_max50 are the maximum cone responses under D50.
[X_D50; Y_D50; Z_D50] = M_1^−1 [L_max50 0 0; 0 M_max50 0; 0 0 S_max50] [1/L_max65 0 0; 0 1/M_max65 0; 0 0 1/S_max65] M_1 [X_D65; Y_D65; Z_D65], (19.5)
where
M_1 = [0.4002 0.7076 −0.0808; −0.2263 1.1653 0.0457; 0 0 0.9182].
Knowing the source and destination white points and the matrix M_1 (see Section 4.1) of converting from XYZ to LMS, we can concatenate all four matrices into one, as shown in Eq. (19.6).
[X_D50; Y_D50; Z_D50] = [1.0161 0.0553 −0.0524; 0.0061 0.9955 −0.0012; 0 0 0.7566] [X_D65; Y_D65; Z_D65]. (19.6)
Furthermore, Eqs. (19.4) and (19.6) can be combined to give Eq. (19.7), a one-step computation from sRGB to chromatically adapted XYZ under D50.
[X_D50; Y_D50; Z_D50] = [0.4298 0.3967 0.1376; 0.2141 0.7140 0.0718; 0.0146 0.0902 0.7191] [R_sRGB; G_sRGB; B_sRGB]. (19.7)
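The concatenation behind Eq. (19.6) can be checked numerically. The sketch below uses commonly tabulated D65 and D50 white points (assumed values, so the product matches Eq. (19.6) only to about three decimal places):

```python
def matmul(A, B):
    """Product of two 3x3 matrices stored as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv3(A):
    """Cofactor (adjugate) inverse of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = A
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [[e*i - f*h, c*h - b*i, b*f - c*e],
           [f*g - d*i, a*i - c*g, c*d - a*f],
           [d*h - e*g, b*g - a*h, a*e - b*d]]
    return [[adj[r][s] / det for s in range(3)] for r in range(3)]

M1 = [[0.4002, 0.7076, -0.0808],   # XYZ -> LMS cone matrix of Eq. (19.5)
      [-0.2263, 1.1653, 0.0457],
      [0.0, 0.0, 0.9182]]
WHITE_D65 = (0.9505, 1.0, 1.0891)  # assumed tristimulus values of D65 white
WHITE_D50 = (0.9642, 1.0, 0.8249)  # assumed tristimulus values of D50 white

def cone_white(white):
    return [sum(M1[i][j] * white[j] for j in range(3)) for i in range(3)]

w65, w50 = cone_white(WHITE_D65), cone_white(WHITE_D50)
# Diagonal von Kries scaling: destination cone white over source cone white
ratio = [[w50[i] / w65[i] if i == j else 0.0 for j in range(3)] for i in range(3)]
ADAPT = matmul(inv3(M1), matmul(ratio, M1))   # the single matrix of Eq. (19.6)
```

Printing ADAPT reproduces the matrix in Eq. (19.6) to within the rounding of the assumed white points.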
The CIEXYZ-to-CIELAB transform is defined in Eq. (19.8), where L* is the lightness, a* is the red-green component, and b* is the blue-yellow component; (X, Y, Z) and (X_n, Y_n, Z_n) are the tristimulus values of the input and illuminant, respectively, and t is a component of the normalized tristimulus values [59].
L* = 116 f(Y/Y_n) − 16,
a* = 500 [f(X/X_n) − f(Y/Y_n)],
b* = 200 [f(Y/Y_n) − f(Z/Z_n)],
and
f(t) = t^(1/3) if 1 ≥ t > 0.008856,
f(t) = 7.787 t + (16/116) if 0 ≤ t ≤ 0.008856. (19.8)
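Eq. (19.8) translates directly into code; the D50 white-point default is my assumption, chosen to match the Internet FAX illuminant:

```python
def xyz_to_lab(X, Y, Z, white=(0.9642, 1.0, 0.8249)):
    """CIEXYZ -> CIELAB per Eq. (19.8)."""
    def f(t):
        # cube root above the 0.008856 break point, linear segment below it
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = (f(v / w) for v, w in zip((X, Y, Z), white))
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)
```

Feeding the white point itself gives L* = 100 and a* = b* = 0, a convenient sanity check.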
The resulting tristimulus values are represented as floating-point values. According to the Internet FAX color standard, CIELAB is conformed to the ITULAB format and is represented as n-bit integers. This standard selects D50 (x = 0.3457, y = 0.3585) as the illuminant white. The default gamut range is L* = [0, 100], a* = [−85, 85], and b* = [−75, 125]. Equation (19.9) gives the encoding formulas from the floating-point to the integer representation [43, 44].
L_sample = (L* − L_min)(2^n − 1)/(L_max − L_min),
a_sample = (a* − a_min)(2^n − 1)/(a_max − a_min),
b_sample = (b* − b_min)(2^n − 1)/(b_max − b_min), (19.9)
where L_sample, a_sample, and b_sample are the encoded integer representations of L*, a*, and b*, respectively. Parameter n is the number of bits for integer representation. The maximum and minimum values are given by the ranges: L_max = 100, L_min = 0, a_max = 85, a_min = −85, b_max = 125, and b_min = −75. Equation (19.10) gives the decoding formula from integer to floating-point representation.
L* = L_min + L_sample (L_max − L_min)/(2^n − 1),
a* = a_min + a_sample (a_max − a_min)/(2^n − 1),
b* = b_min + b_sample (b_max − b_min)/(2^n − 1). (19.10)
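Eqs. (19.9) and (19.10) amount to a linear quantizer and dequantizer over the default ITULAB ranges. In this sketch the choice of rounding in the encoder is mine; the text does not specify it:

```python
# Default ITULAB component ranges (L*, a*, b*)
RANGES = {"L": (0.0, 100.0), "a": (-85.0, 85.0), "b": (-75.0, 125.0)}

def encode(value, lo, hi, n=8):
    """Eq. (19.9): float CIELAB component -> n-bit ITULAB integer."""
    return round((value - lo) * (2 ** n - 1) / (hi - lo))

def decode(sample, lo, hi, n=8):
    """Eq. (19.10): n-bit ITULAB integer -> float CIELAB component."""
    return lo + sample * (hi - lo) / (2 ** n - 1)
```

A round trip loses at most half a quantization step, e.g., about 100/255/2 ≈ 0.2 L* units at 8 bits.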
For Device/RGB inputs, there are no predefined transfer matrix and gamma functions for converting input RGB values to CIEXYZ. The link between Device/RGB and CIEXYZ must be determined via device characterization. If polynomial regression is used for characterization, the accuracy of the transformation increases as the number of polynomial terms increases [22]. Because our primary interest is improving the computational accuracy by varying the bit depth of coefficients, we use a linear transform from RGB to XYZ for reasons of computational cost and to comply with the existing data structure and conversion mechanism. Therefore, in this study, Device/RGB values are gray balanced via experimentally determined gray-balance curves implemented as 1D lookup tables, then linearly transformed to CIEXYZ using coefficients C_ij derived from a device characterization [60].
[X; Y; Z] = [C_11 C_21 C_31; C_12 C_22 C_32; C_13 C_23 C_33] [R; G; B]. (19.11)
For this particular exercise, Device/RGB has the following matrix coefficients for encoding from XYZ to RGB and decoding from RGB to XYZ. These coefficients are derived from the experimental data of a Sharp scanner via linear regression [61].
Encoding: [2.7412 −1.2638 −0.2579; −0.8053 1.6698 0.1288; 0.0819 −0.1080 0.8644]
Decoding: [0.4679 0.3597 0.0860; 0.2269 0.7676 −0.0467; −0.0160 0.0618 1.1429]
The inverse transform from CIELAB to sRGB is given in Eqs. (19.12)–(19.19). Equation (19.12) gives the CIELAB-to-CIEXYZ transform, where T is a component of the tristimulus values.
f(X/X_n) = a*/500 + (L* + 16)/116,
f(Y/Y_n) = (L* + 16)/116,
f(Z/Z_n) = −b*/200 + (L* + 16)/116,
and
T = [f(T/T_n)]^3 T_n if f(T/T_n) > 0.20689,
T = {[f(T/T_n) − 16/116]/7.787} T_n if f(T/T_n) ≤ 0.20689. (19.12)
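A direct (non-LUT) rendering of Eq. (19.12), again with an assumed D50 white point:

```python
def lab_to_xyz(L, a, b, white=(0.9642, 1.0, 0.8249)):
    """CIELAB -> CIEXYZ per Eq. (19.12)."""
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    def finv(fv):
        # invert the cube-root branch above 0.20689, the linear branch below
        return fv ** 3 if fv > 0.20689 else (fv - 16.0 / 116.0) / 7.787
    return tuple(finv(fv) * w for fv, w in zip((fx, fy, fz), white))
```

As expected, lab_to_xyz(100, 0, 0) returns the white point, exactly inverting Eq. (19.8) on the cube-root branch.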
Equation (19.13) provides the chromatic adaptation from D50 to D65.
[X_D65; Y_D65; Z_D65] = M_1^−1 [L_max65 0 0; 0 M_max65 0; 0 0 S_max65] [1/L_max50 0 0; 0 1/M_max50 0; 0 0 1/S_max50] M_1 [X_D50; Y_D50; Z_D50]. (19.13)
. (19.13)
Again, the four matrices can be concatenated into one; the result is given in
Eq. (19.14).

X
D65
Y
D65
Z
D65

0.9845 0.0547 0.0679


0.0060 1.0048 0.0012
0 0 1.3208

X
D50
Y
D50
Z
D50

. (19.14)
Equation (19.15) is the matrix transform from CIEXYZ to sRGB.

R
sRGB
G
sRGB
B
sRGB

3.2410 1.5374 0.4986


0.9692 1.8760 0.0416
0.0556 0.2040 1.0570

X
D65
Y
D65
Z
D65

. (19.15)
Substituting Eq. (19.14) into Eq. (19.15), we obtain Eq. (19.16), a one-step computation from CIEXYZ at D50 to sRGB at D65.
[R_sRGB; G_sRGB; B_sRGB] = [3.2000 −1.7221 −0.4403; −0.9654 1.9380 −0.0086; 0.0535 −0.2019 1.4001] [X_D50; Y_D50; Z_D50]. (19.16)
Equation (19.17) does the clipping, where g_sRGB represents a component of the RGB triplet.
g_sRGB = 0, if g_sRGB < 0
g_sRGB = 1, if g_sRGB > 1. (19.17)
Equation (19.18) performs the gamma correction.
g'_sRGB = 12.92 g_sRGB, if g_sRGB ≤ 0.00304
g'_sRGB = 1.055 g_sRGB^(1.0/2.4) − 0.055, if g_sRGB > 0.00304. (19.18)
Finally, Eq. (19.19) scales the results to n-bit integers.
g_nbit = g'_sRGB (2^n − 1). (19.19)
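Eqs. (19.16)–(19.19) applied in sequence look as follows; note that the clipping of Eq. (19.17) precedes the gamma correction of Eq. (19.18):

```python
M16 = [( 3.2000, -1.7221, -0.4403),   # Eq. (19.16): XYZ at D50 -> linear sRGB
       (-0.9654,  1.9380, -0.0086),
       ( 0.0535, -0.2019,  1.4001)]

def xyz_d50_to_srgb(X, Y, Z, n=8):
    out = []
    for row in M16:
        g = row[0] * X + row[1] * Y + row[2] * Z
        g = min(max(g, 0.0), 1.0)                 # Eq. (19.17): clip to [0, 1]
        if g <= 0.00304:                          # Eq. (19.18): gamma correction
            gp = 12.92 * g
        else:
            gp = 1.055 * g ** (1.0 / 2.4) - 0.055
        out.append(round(gp * (2 ** n - 1)))      # Eq. (19.19): scale to n bits
    return tuple(out)
```

The D50 white point maps to (255, 255, 255) and black to (0, 0, 0); out-of-range colors are silently clipped, which is exactly the error source discussed in Section 19.16.5.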
19.15 Modular Implementation
Implementation is a delicate balance among accuracy, speed, and memory cost. From an accuracy standpoint, one would prefer floating-point computations. For speed, one would prefer the use of integer arithmetic and lookup tables wherever possible. To minimize memory cost, one would use small lookup tables or no table at all. Although memory cost is dropping and is no longer a big concern for cost savings, the memory requirements add up quickly when we implement the various input and output transforms given in Fig. 19.5. Moreover, a larger memory footprint increases the complexity of memory management and the number of context switches into and out of the CPU. This, in turn, lowers the processing speed.
From the color architecture given in Fig. 19.6, the system consists of six color transforms: colorimetric RGB-to-CIEXYZ, Device/RGB-to-CIEXYZ, and CIEXYZ-to-CIELAB in the forward direction; then CIELAB-to-CIEXYZ, CIEXYZ-to-colorimetric RGB, and CIEXYZ-to-Device/RGB in the reverse direction. Each of these transforms is implemented through software and tested using experimental data as well as synthetic values.
19.15.1 sRGB-to-CIEXYZ transformation
Figure 19.7 Low-cost implementation of colorimetric RGB-to-XYZ transformation.
Figure 19.8 High-cost implementation of colorimetric RGB-to-XYZ transformation.
The sRGB-to-CIEXYZ conversion is implemented in two ways, as shown in Figs. 19.7 and 19.8, depending on the need for performance or cost. The low-cost implementation uses a 1D LUT to store computed results of Eqs. (19.2) and (19.3),
followed by a matrix-vector multiplication. The contents of the table can be integer or floating-point values. Integer representation has lower accuracy but little need for storage memory. To use integer arithmetic, outputs from Eq. (19.3) must be scaled to an integer by multiplying them with a scaling factor f_s, usually (2^n − 1). If possible, one would like to make the scaling factor a power of 2 because the scaling can then be implemented as a bit shift instead of the computationally expensive multiplication. Equations (19.2) and (19.3), together with the integer scaling, can be combined to give an integer LUT, in which the bit depth n determines memory requirements and computational accuracy. Only one LUT is needed because the three RGB components are treated identically. The number of LUT entries is dependent on the bit depth of the input sRGB. If the input sRGB is encoded in 8-bit depth, we have 256 entries for the LUT.
The second step is to compute the scaled XYZ from the scaled RGB if integer arithmetic is used. In this case, we need to scale the coefficients in Eq. (19.4). This
is achieved via Eq. (19.20):
C_ij,int = int[C_ij (2^m − 1) + 0.5], (19.20)
where C_ij and C_ij,int are the floating-point and integer expressions of the ij-th coefficient, respectively, and m is the bit depth of the scaled integer coefficient. Because multiplication is used in Eq. (19.4), the combined bit depth is (m + n), with a total scaling factor of (2^m − 1)(2^n − 1). If one wants to keep the bit depth at n, then the output of Eq. (19.4) must be divided by (2^m − 1). This low-cost implementation gives three table lookups, nine multiplications, six additions, and, optionally, three divisions for scaling back to n bits.
For the high-cost implementation, the sRGB-to-CIEXYZ transform is implemented in two stages of table lookups. The first stage is the same as in the low-cost implementation. In the second stage, we use nine 1D LUTs to store products of the coefficients and scaled RGB values (see Fig. 19.8). In this way, the whole computation becomes 12 table lookups and 6 additions. The enhanced performance is at the expense of memory cost, and the memory requirement can become quite costly. For example, an 8-bit input and a 12-bit LUT require 512 bytes (1 byte = 8 bits) for the first LUT and 73,728 (2 × 9 × 2^12) bytes for the second set of LUTs, where the bit depth of the first LUT dictates the memory size of the second LUT.
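The low-cost scheme of Fig. 19.7 can be sketched in a few lines. The bit-depth choices below (8-bit input, 12-bit LUT, 10-bit coefficients) are illustrative examples, not recommendations:

```python
N_IN, N_LUT, M_COEF = 8, 12, 10    # input, LUT, and coefficient bit depths (example)

def linearize(gp):
    # Eq. (19.3) applied to a [0, 1] float
    return gp / 12.92 if gp <= 0.03928 else ((gp + 0.055) / 1.055) ** 2.4

# Stage 1: a single shared LUT combining Eqs. (19.2) and (19.3) with integer scaling
LUT = [round(linearize(i / (2 ** N_IN - 1)) * (2 ** N_LUT - 1))
       for i in range(2 ** N_IN)]

# Eq. (19.20): scale the matrix coefficients of Eq. (19.4) to m-bit integers
M_FLT = [(0.4124, 0.3576, 0.1805),
         (0.2126, 0.7152, 0.0722),
         (0.0193, 0.1192, 0.9505)]
M_INT = [[int(c * (2 ** M_COEF - 1) + 0.5) for c in row] for row in M_FLT]

def srgb_to_xyz_int(r, g, b):
    """Three lookups, nine multiplies, six adds, three divisions back to n bits."""
    rgb = (LUT[r], LUT[g], LUT[b])
    return tuple(sum(M_INT[i][j] * rgb[j] for j in range(3)) // (2 ** M_COEF - 1)
                 for i in range(3))
```

With these example depths, white input returns Y at full scale because the scaled Y-row coefficients happen to sum to exactly 2^m − 1.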
19.15.2 Device/RGB-to-CIEXYZ transformation
Device/RGB is implemented in nearly the same way as sRGB. A minor difference is that input Device/RGB requires three 1D LUTs, one for each primary, for converting to gray-balanced RGB before the matrix multiplication.
19.15.3 CIEXYZ-to-CIELAB transformation
The cubic-root function f(t) of Eq. (19.8) is bounded in the range of [16/116, 1] because t has the range of [0, 1]. If input CIEXYZ values are scaled and represented by n-bit integers, values of the function f(t) can be precomputed and stored in lookup tables. This approach reduces computational cost and enhances performance by removing run-time computations of the cubic-root function, which are computationally intensive. The LUT elements are computed as follows:
(1) Each scaled integer tristimulus value in the range of [0, 2^n − 1] is divided by the scaling factor and converted to a floating-point value.
(2) The resulting value is divided by the corresponding tristimulus value of the white point to get t.
(3) The cubic root of the value t is computed.
(4) Results are stored in a 1D LUT.
The parameter t is a normalized tristimulus value with respect to the white point; therefore, we need three LUTs, one for each component, because X_n, Y_n, and Z_n are not the same.
There are several implementation levels. For a low-performance, high-accuracy implementation, one can use floating-point LUTs and computations. For a high-performance implementation, one can use integer LUTs and integer arithmetic. Figure 19.9 depicts a scheme using three LUTs. Elements of the LUTs can be integers or floating-point values, depending on the accuracy requirement. The Y-LUT output is used three times, as follows:
(1) It is multiplied by 116, and 16 is subtracted from the product to give L*.
(2) It is subtracted from the X-LUT output, and the difference is multiplied by 500 to give a*.
(3) The Z-LUT output is subtracted from it, and the difference is multiplied by 200 to give b*.
If integer LUTs for f(t) are required, we must convert the floating-point values to integers. A major concern regarding integer implementation is the accuracy of the calculated CIELAB. To provide sufficient accuracy, a common approach is to scale the LUT elements by a factor f_s. After performing the required integer arithmetic, the value is divided by the same scaling factor to return to the initial depth. The optimal scaling factor can be derived by computer simulation with respect to accuracy, cost, and performance.
Figure 19.9 Floating-point implementation of the CIEXYZ-to-CIELAB transformation.
Figure 19.10 Low-cost implementation of the CIELAB-to-CIEXYZ transformation.
19.15.4 CIELAB-to-CIEXYZ transformation
The CIELAB-to-CIEXYZ transform is implemented in two stages of table lookups, as shown in Fig. 19.10. The first stage uses three 1D lookup tables with 2^n entries to convert the n-bit ITULAB values to scaled L*, a*, and b*. One LUT stores the results of [(L* + 16)/116] f_s, the second LUT stores (a*/500) f_s, and the third LUT stores (b*/200) f_s, where f_s is the scaling factor. The scaling factor is varied to find the optimal bit depth. The L*-LUT output is plugged into a 1D LUT containing the results of the cubic function in order to obtain Y. Outputs of the a*-LUT and L*-LUT are added, and the sum is used to obtain X. The b*-LUT output is subtracted from the L*-LUT output, and the result is used to obtain Z.
The second stage uses a 1D LUT to store the computed cubic function. The output from the 1D LUT is multiplied by the corresponding tristimulus value of the white point to give the tristimulus value. To use integer multiplication, the 1D LUT contains integer elements and the white-point tristimulus values are scaled to integers via Eq. (19.19).
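A compact sketch of the two-stage scheme of Fig. 19.10. The scaling factor f_s = 4 is only an example, and the second-stage cube-function LUT is replaced by a direct function to keep the sketch short:

```python
FS, N = 4, 8                        # example scaling factor and ITULAB bit depth
STEPS = 2 ** N - 1

# Stage 1: three LUTs on the n-bit ITULAB samples, elements scaled by FS
L_LUT = [((i * 100.0 / STEPS) + 16.0) / 116.0 * FS for i in range(2 ** N)]
A_LUT = [(-85.0 + i * 170.0 / STEPS) / 500.0 * FS for i in range(2 ** N)]
B_LUT = [(-75.0 + i * 200.0 / STEPS) / 200.0 * FS for i in range(2 ** N)]

def cube(fv):
    # stands in for the stage-2 LUT: the inverse of f(t) from Eq. (19.8)
    return fv ** 3 if fv > 0.20689 else (fv - 16.0 / 116.0) / 7.787

def itulab_to_xyz(Ls, As, Bs, white=(0.9642, 1.0, 0.8249)):
    fy = L_LUT[Ls] / FS
    fx = fy + A_LUT[As] / FS        # a*-LUT output added to the L*-LUT output
    fz = fy - B_LUT[Bs] / FS        # b*-LUT output subtracted from it
    return tuple(cube(fv) * w for fv, w in zip((fx, fy, fz), white))
```

In an integer implementation the division by FS would be deferred until after the stage-2 lookup; it is shown here explicitly to make the scaling visible.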
19.15.5 CIEXYZ-to-colorimetric RGB transformation
As shown in Fig. 19.6, CIEXYZ is converted to colorimetric RGB via a matrix transform [see Eq. (19.16)], clipping [see Eq. (19.17)], gamma correction [see Eq. (19.18)], and integer scaling [see Eq. (19.19)]. For a floating-point implementation, we perform real-time computations of Eqs. (19.16)–(19.19). For an integer implementation, we can group the clipping, gamma correction, and depth scaling into one 1D LUT. The matrix multiplication is implemented by using the second half of Fig. 19.7 or 19.8.
19.15.6 CIEXYZ-to-Device/RGB transformation
The implementation is similar to the CIEXYZ-to-colorimetric RGB transform, with the exception that the clipping and gamma corrections are replaced by an inverse gray-balance module implemented as 1D LUTs.
19.16 Results and Discussion
We perform software simulations by varying the bit depth of LUTs and matrix coefficients for the purpose of finding the optimal bit depth with respect to accuracy, speed, and memory cost. For colorimetric RGB, the bit depth n of the first LUT and the bit depth m of the second set of LUTs (or matrix coefficients) are varied independently. For Device/RGB, only the second set of LUTs or matrix coefficients is varied. Computational accuracy is judged by calculating the difference between the integer results and the floating-point computation in CIELAB space.
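The accuracy judgment is the CIE 1976 color difference between the integer-path and floating-point-path CIELAB values; the metric itself is just a Euclidean distance:

```python
def delta_e_ab(lab1, lab2):
    """CIE 1976 Delta-E*ab between two (L*, a*, b*) triplets."""
    return sum((p - q) ** 2 for p, q in zip(lab1, lab2)) ** 0.5
```

Averaging delta_e_ab over a test set produces the kind of entries reported in Tables 19.1 through 19.4.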
19.16.1 sRGB-to-CIEXYZ transformation
This experiment uses 150 data points, comprising 125 color patches from 5-level combinations of an RGB cube and 25 three-color mixtures. This is a color test target used by Hewlett-Packard Corporation for printer evaluation. Input RGB values are converted to CIEXYZ via Fig. 19.7 or 19.8. The resulting CIEXYZ values are then converted to CIELAB using floating-point computations. This CIELAB value is compared with the CIELAB value computed using floating-point computations for all equations from RGB to CIELAB. In this way, we ensure that the second half of the computation (from XYZ to LAB) does not introduce computational error. Table 19.1 gives the average ΔE*ab of the 150 data points for the two implementations shown in Figs. 19.7 and 19.8 [62]. For a given bit depth of the first LUT, Table 19.1 shows a substantial improvement in accuracy as the bit depth of the matrix coefficients (or second LUT) increases from 8 to 9 bits. Little improvement is gained beyond 9 bits. On the other hand, for a given bit depth of the matrix coefficients (or second LUT), accuracy improves as the bit depth of the first LUT increases. The improvement levels off around 12 bits. The bit-depth increase not only improves the accuracy but also narrows the error distribution.
Comparing the corresponding values of the low-cost and high-cost implementations, one would notice that the accuracy is higher for the high-cost implementation. For all practical purposes, the 12-bit and 14-bit high-cost implementations, having errors less than 0.1 ΔE*ab units, are as good as the floating-point computation.

Table 19.1 Average ΔE*ab of the sRGB-to-CIEXYZ transformation.

               Low-cost implementation            High-cost implementation
Bit depth of   Bit depth of matrix coefficients   Bit depth of second LUT
first LUT      8 bits  9 bits  10 bits  12 bits   8 bits  9 bits  10 bits  12 bits
8 bits         1.31    1.15    1.15     1.17      1.30    1.10    1.08     1.15
9 bits         0.82    0.55    0.59     0.56      0.52    0.48    0.53     0.58
10 bits        0.53    0.30    0.30     0.27      0.29    0.24    0.27     0.26
12 bits        0.42    0.19    0.12     0.08      0.07    0.08    0.06     0.06
14 bits        0.42    0.18    0.11     0.05      0.02    0.02    0.02     0.02
19.16.2 Device/RGB-to-CIEXYZ transformation
This experiment uses 146 data points, including 24 Macbeth ColorChecker outputs from a Sharp scanner and 122 sets of RGB values from 5-level combinations of an RGB cube (three colors are slightly out of gamut because the matrix coefficients are obtained via linear regression) [60]. The first set of LUTs has the same bit depth as the input. We only varied the bit depth of the second set of LUTs or matrix coefficients. Results are summarized in Table 19.2 for both low-cost and high-cost implementations, where d_xyz is the difference in CIEXYZ space and ΔE*ab,max is the maximum ΔE*ab.
As expected, in both cases the computational accuracy improves as the bit depth of the coefficients or second LUTs increases. Error histograms show the same trend as the sRGB histograms in that the error bandwidth is narrowed and the band is shifted to the left as the bit depth is increased.
Table 19.2 Average errors in CIEXYZ and CIELAB spaces for Device/RGB-to-CIELAB conversion.

Bit depth of     Low-cost implementation        High-cost implementation
coefficients     d_xyz   ΔE*ab   ΔE*ab,max      d_xyz   ΔE*ab   ΔE*ab,max
8 bits           1.05    3.77    38.92          0.96    2.81    38.92
9 bits           0.43    1.61    18.91          0.47    1.37    18.91
10 bits          0.18    0.69    8.46           0.24    0.66    8.46
12 bits          0.06    0.20    1.42           0.06    0.16    1.42
14 bits          0.02    0.06    0.41           0.02    0.04    0.41

19.16.3 CIEXYZ-to-CIELAB transformation
In this transform, we use 314 sets of scaled 8-bit CIEXYZ values, consisting of the following:
(1) 5-level cubic combinations of X = Y = Z = [1, 62, 123, 184, 245];
(2) 5-level cubic combinations of X = [2, 63, 124, 185, 246], Y = [5, 66, 127, 188, 249], and Z = [9, 70, 131, 192, 253];
(3) 4-level cubic combinations of X = [22, 83, 144, 205], Y = [15, 76, 137, 198], and Z = [19, 80, 141, 202].
Five different scaling factors, with values 1 (no scaling), 2, 4, 8, and 16, are used for the elements of the 1D LUTs shown in Fig. 19.9. For the case of no scaling, 98.1% of the 314 data sets give a color difference less than 1 ΔE*ab. The percentages for the other cases are 99.4% for f_s = 2, 88.2% for f_s = 4, 74.5% for f_s = 8, and 65.6% for f_s = 16. The scaling factor of 2 provides the most accurate result with the narrowest error distribution. Contrary to the common belief that the accuracy is improved by increasing the scaling factor, the results indicate that the accuracy decreases as the scaling factor increases, with the exception of f_s = 2. From this simulation, we recommend a scaling factor of 2 (this is equivalent to a 9-bit depth because the initial XYZ values are 8 bits) for use in software as well as hardware implementations.
19.16.4 CIELAB-to-CIEXYZ transformation
This experiment uses 147 data points produced by an electrophotographic printer. Color patches consist of 20 levels of the CMY primaries, 15 levels of the black primary, 7 levels each of the MY, CM, and CY two-color mixtures, and the remaining 51 three-color mixtures. Measured CIELAB values are converted to ITULAB via Eq. (19.9) and are used as inputs to the first set of LUTs shown in Fig. 19.10.
Table 19.3 gives the average ΔE*ab of the 147 data points for the implementation shown in Fig. 19.10 [51]. For a given bit depth of the second LUT, the results indicate that the bit depth of the first LUT has little effect on the computational accuracy, reconfirming that 8 bits are enough for representing visually uniform color spaces such as CIELAB [63]. On the other hand, for a given bit depth of the first LUT, accuracy improves as the bit depth of the second LUT, implementing the cubic function, increases. However, it levels off around 12 bits. Also, the bit-depth increase not only improves the accuracy but also narrows the error distribution.
Table 19.3 Average ΔE*ab for low-cost implementation of the CIELAB-to-CIEXYZ transformation.

Bit depth of   Bit depth of second LUT
first LUT      7 bits  8 bits  9 bits  10 bits  12 bits  14 bits
8 bits         2.17    1.40    0.83    0.74     0.68     0.68
9 bits         2.09    1.37    0.83    0.72     0.62     0.61
10 bits        2.06    1.27    0.76    0.67     0.53     0.51
12 bits        -       1.32    0.79    0.62     0.53     0.53
14 bits        -       1.30    0.84    0.63     0.55     0.52

19.16.5 CIEXYZ-to-sRGB transformation
This transform gives the biggest error because clipping is used in this stage. Errors of out-of-range colors are usually big, and their magnitudes show a dependence on the distance from the triangular gamut boundary; the farther away from the boundary, the higher the error. (Here, we use the term "out-of-range" to indicate color values that are mathematically clipped to a specified range, and to distinguish it from the term "out-of-gamut" that is used for the real physical gamut mismatch of imaging devices.)
By using integer computations, the in-range colors give an average difference of 1.7 counts, with a maximum of three digital counts, between 8-bit CIEXYZ inputs and the 8-bit reversed CIEXYZ values obtained from an sRGB-to-CIEXYZ transform using floating-point computations. For an integer implementation without clipping, the average error is 3.5 counts for an 8-bit representation of sRGB. The error decreases as the bit depth increases; it levels off around 12 bits at 1.1 counts.
19.16.6 Combined computational error
Using 150 measured CIELAB values from a color test target printed on an H-P ink-jet printer under the D65 illuminant, we examine the overall computational error from CIELAB via CIEXYZ to sRGB. To check the computational accuracy, we convert sRGB back to CIELAB using floating-point computations to ensure that the backward transform does not introduce computational error. The difference between the initial CIELAB and the inverted CIELAB is the measure of the computational accuracy.
Table 19.4 summarizes the results as a function of the bit depth.
62
For oating-
point computations, we obtain average E
ab
values ranging form 2.39 to 2.04 (see
row 1), depending on the number of bits used to represent sRGB. The average error
decreases as the bit depth increases, but the improvement levels off around 12 bits.
The maximum error is about 28.3 E
ab
units. As shown in row 2 of Table 19.4,
gamma correction reduces the computational error somewhat, but maximum errors
remain the same. For integer computation, we obtain average E
ab
values ranging
form 3.73 to 2.72 (see row 3). As expected, integer computation has a higher error
than the corresponding oating-point computation; however, maximum errors are
about the same. By removing the 38 out-of-range colors, we get a much smaller
error as shown in rows 4, 5, and 6. Comparing the corresponding values of rows 4,
5, and 6, we nd that the input ITULAB bit depth does not affect the computational
Table 19.4 Combined computational errors with respect to bit depth.
Average color difference Maximum color difference
8-bit 9-bit 10- 12- 14- 8-bit 9-bit 10- 12- 14-
Method

bit bit bit bit bit bit


1 2.39 2.20 2.12 2.05 2.04 28.36 28.34 28.35 28.34 28.35
2 2.22 2.12 2.08 2.04 2.04 28.26 28.33 28.37 28.35 28.35
3 3.73 3.06 2.84 2.74 2.72 28.74 27.95 28.74 28.74 28.74
4 1.93 1.12 0.85 0.74 0.74 7.18 4.17 2.41 1.34 1.63
5 2.00 1.12 0.88 0.64 0.66 7.98 4.37 2.14 1.22 1.15
6 1.95 1.14 0.83 0.66 0.67 7.84 4.24 2.20 1.37 1.26

Methods 1: Floating-point computation. 2: Floating-point computation and gamma correction.


3: Integer computation using 150 data points. 4: Integer computation using 112 data points. 5: Integer
computation using 112 data points with 9-bit input. 6: Integer computation using 112 data points with
10-bit input.
Issues of Digital Color Imaging 443
accuracy. Once again, the results reconrm previous studies nding that the 8-bit
depth is sufcient to represent a visually linear color space such as CIELAB.
63
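The error metric behind Table 19.4 can be stated compactly; the following is a minimal sketch of the CIE 1976 color-difference formula applied to an initial and a round-tripped CIELAB triple, not the chapter's actual test harness:

```python
import math

# CIE 1976 color difference: Euclidean distance between two
# CIELAB triples (L*, a*, b*).
def delta_e_ab(lab1, lab2):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))
```

A perfect round trip gives zero; a 3-4-5 offset in (L*, a*) gives a ΔE*ab of exactly 5.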
If we exclude the clipping problem, results indicate that it is feasible to implement a high-speed, high-accuracy, and cost-effective transform between the sRGB and Internet FAX color standards. In the sRGB–CIEXYZ transform, the complex computations using Eqs. (19.2)–(19.7) can be implemented in two sets of lookup tables (or one LUT followed by a matrix multiplication) with a total of twelve table lookups and six additions, as shown in Fig. 19.8. The optimal bit depth is determined to be 10 bits for the first LUT (8 bits and 9 bits are somewhat low in computational accuracy; see Table 19.1). This gives 1024 entries for the second set of LUTs. Elements of the second LUTs are recommended to be encoded at 12 bits (see Table 19.1). The reverse CIEXYZ–sRGB transform from Eqs. (19.16)–(19.19) can also be implemented in two sets of lookup tables (or a matrix multiplication followed by a lookup table); therefore, the computational cost and bit depth are the same as for the forward transform. In the CIEXYZ–CIELAB transform, Eq. (19.8) can be implemented as a table lookup also. Using Fig. 19.9, we have three lookups, three multiplications, and three subtractions. A 9-bit depth is recommended for LUT elements in Fig. 19.9. The reverse CIELAB–CIEXYZ transform can be implemented in two sets of LUTs as shown in Fig. 19.10. The computational cost can go as low as six lookups, two additions, and three multiplications. We recommend an 8-bit depth for the first set of LUTs and a 12-bit depth for the second set of LUTs (see Tables 19.3 and 19.4). In the case of the Device/RGB–CIEXYZ transform, the bit depth of the first LUTs is the same as the input Device/RGB. The bit depth of the second LUTs or matrix coefficients is recommended to have 12 bits (see Table 19.2).
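The twelve-lookup, six-addition structure can be sketched as follows. This is a minimal illustration assuming the standard IEC sRGB gamma and matrix, with the 10-bit first-stage and 12-bit second-stage encodings recommended above; it is not the chapter's exact implementation:

```python
# Two-stage LUT pipeline for sRGB-to-XYZ. LUT1 linearizes an 8-bit
# sRGB code to a 10-bit linear value; LUT2 precomputes each matrix
# element times every possible 10-bit input, stored at 12 bits, so
# the per-pixel work is only lookups and additions.
def build_luts():
    M = [[0.4124, 0.3576, 0.1805],   # standard sRGB-to-XYZ matrix (D65)
         [0.2126, 0.7152, 0.0722],
         [0.0193, 0.1192, 0.9505]]
    lut1 = []
    for code in range(256):          # inverse sRGB gamma, 10-bit output
        v = code / 255.0
        lin = v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4
        lut1.append(round(lin * 1023))
    # 9 LUTs of 1024 entries each, results encoded at 12 bits.
    lut2 = [[[round(M[i][j] * (k / 1023.0) * 4095) for k in range(1024)]
             for j in range(3)] for i in range(3)]
    return lut1, lut2

def srgb_to_xyz(r, g, b, lut1, lut2):
    rl, gl, bl = lut1[r], lut1[g], lut1[b]                 # 3 lookups
    xyz = [lut2[i][0][rl] + lut2[i][1][gl] + lut2[i][2][bl]  # 9 lookups,
           for i in range(3)]                                # 6 additions
    return [v / 4095.0 for v in xyz]                       # back to 0..1
```

Feeding in white (255, 255, 255) recovers the D65 white point to within the quantization error of the 12-bit tables.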
From the results of this simulation, it seems that the 12-bit depth is an upper bound; we need not go beyond 12 bits. There are the following additional advantages to using two bytes to store a 12-bit number:

(1) It reserves room for subsequent computations, reducing the chance of overflowing.
(2) One can use a value of 2^m for the scaling factor, instead of (2^m - 1), to reduce computational costs by changing the multiplication to a bit shifting.
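Advantage (2) can be seen in a toy fixed-point sketch; the value of m = 12 and the inputs here are illustrative:

```python
# With a scale factor of 2^m, the scaling multiply degenerates into a
# left shift; with (2^m - 1), a genuine multiplication is required.
def scale_shift(x, m=12):
    return x << m               # x * 2^m, computed as a bit shift

def scale_mul(x, m=12):
    return x * ((1 << m) - 1)   # x * (2^m - 1), a true multiply
```

The two scalings differ by exactly x, a negligible offset for a 12-bit range, while the shift avoids the multiplier entirely.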
19.17 Remarks
In this chapter, we presented a couple of color architectures and several implementation schemes for system-level applications. We demonstrated the flexibility and scalability of the modular color architecture by its ability to accommodate various RGB inputs and the ease of replacing one implementation scheme with another. We showed the effect of the implementation scheme with respect to performance and memory cost. From this study, we concluded that the accuracy improved as the bit depth of LUTs increased and that the bit-depth increase narrowed the error distribution. The improvement leveled off around 12 bits. Exceptions were 9 bits for CIEXYZ inputs in the CIEXYZ–CIELAB transform and 8 bits for CIELAB inputs in the CIELAB–CIEXYZ transform. Results also suggested that it is feasible to implement a high-performance, high-accuracy, and cost-effective transform between sRGB and Internet FAX standards.

Comparisons of RGB encoding standards revealed that out-of-range colors gave the biggest color errors. We showed that out-of-range colors induced by improper color encoding could be eliminated. There are numerous problems for color reproduction at the system level; the most difficult one is perhaps the color gamut mismatch. There are two kinds of color gamut mismatch: one stems from the physical limitation of imaging devices (e.g., a monitor gamut does not match a printer gamut); the other is due to the color-encoding standard, such as sRGB. The difference is that one is imposed by nature and the other is a man-made constraint to describe and manipulate color data. We may not be able to do much about the limitations imposed by nature (although we have tried and are still trying by means of color gamut mapping), but we should make every effort to eliminate color errors caused by the color-encoding standard. It is my opinion that in system-level applications, we need a wide-gamut space, such as RIMM/ROMM RGB, for preserving color information. We should not impose a small, device-specific color space for carrying color information. With a wide-gamut space, the problem boils down to having proper device characterizations when we move color information between devices. It is in the device color profile that color mismatch and device characteristics are taken into account. In this way, any color error is confined within the device; it does not spread to other system components.
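The encoding-induced out-of-range colors discussed above can be detected mechanically: a color is outside the sRGB gamut when any linear RGB component leaves [0, 1]. A minimal sketch, using the standard XYZ-to-sRGB matrix (the tolerance and test colors are illustrative, not taken from the chapter):

```python
# Convert XYZ to linear sRGB and flag components outside [0, 1];
# such components belong to colors the sRGB encoding cannot carry.
M_XYZ_TO_SRGB = [[ 3.2406, -1.5372, -0.4986],
                 [-0.9689,  1.8758,  0.0415],
                 [ 0.0557, -0.2040,  1.0570]]

def xyz_to_srgb_linear(x, y, z):
    v = (x, y, z)
    return [sum(M_XYZ_TO_SRGB[i][j] * v[j] for j in range(3))
            for i in range(3)]

def out_of_gamut(rgb, tol=1e-3):
    # tol absorbs rounding in the four-digit matrix coefficients
    return any(c < -tol or c > 1.0 + tol for c in rgb)
```

A saturated color such as XYZ = (0.2, 0.5, 0.5) produces a negative red component and is flagged, whereas the D65 white point maps to (1, 1, 1) and passes.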
References
1. H. R. Kang, Image Color Analysis, Wiley Encyclopedia of Electrical and Electronics Engineering, Vol. 9, J. G. Webster (Ed.), Wiley, New York, pp. 534–550 (1999).
2. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, pp. 489–490 (1982).
3. D. F. Rogers, Procedural Elements for Computer Graphics, McGraw-Hill, New York, p. 389 (1985).
4. C. J. Bartleson, Colorimetry, Optical Radiation Measurements, Vol. 2, Color Measurement, F. Grum and C. J. Bartleson (Eds.), Academic Press, New York, NY, pp. 33–148 (1980).
5. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, pp. 514–581 (1982).
6. B. A. Wandell, Foundations of Vision, Sinauer Assoc., Sunderland, MA, pp. 106–108 (1995).
7. A. A. Michelson, Studies in Optics, University of Chicago Press, Chicago (1927).
8. P. G. J. Barten, Contrast Sensitivity of the Human Eye and Its Effects on Image Quality, SPIE Press, Bellingham, WA, Chap. 3, pp. 27–63 (1999).
9. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, pp. 64–68 (1999).
10. J. L. Mannos and D. J. Sakrison, The effects of a visual fidelity criterion on the encoding of images, IEEE Trans. Info. Theory IT-20, pp. 525–536 (1974).
11. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, pp. 73–78 (1999).
12. R. Ulichney, Digital Halftoning, MIT Press, Cambridge, MA, pp. 79–84 (1987).
13. J. L. Mannos and D. J. Sakrison, The effects of a visual fidelity criterion on the encoding of images, IEEE Trans. Info. Theory IT-20, pp. 525–536 (1974).
14. A. K. Jain, Fundamentals of Digital Image Processing, Prentice-Hall, Englewood Cliffs, NJ, pp. 53–57 (1989).
15. F. W. Campbell and J. G. Robson, Application of Fourier analysis to the visibility of gratings, J. Physiology 197, pp. 551–566 (1968).
16. N. Graham and J. Nachmias, Detection of grating patterns containing two spatial frequencies: A comparison of single-channel and multiple-channels models, Vision Res. 11, pp. 252–259 (1971).
17. N. Graham, Spatial-frequency channels in human vision: Detecting edges without edge detectors, Visual Coding and Adaptivity, C. S. Harris (Ed.), Erlbaum, Hillsdale, NJ, pp. 215–262 (1980).
18. C. F. Hall and E. L. Hall, A nonlinear model for the spatial characteristics of the human visual system, IEEE Trans. Syst. Man. Cyber. SMC-7, pp. 162–170 (1977).
19. B. Julesz, Spatial-frequency channels in one-, two-, and three-dimensional vision: Variations on a theme by Bekesy, Visual Coding and Adaptivity, C. S. Harris (Ed.), Erlbaum, Hillsdale, NJ, pp. 263–316 (1980).
20. M. B. Sachs, J. Nachmias, and J. G. Robson, Spatial-frequency channels in human vision, J. Opt. Soc. Am. 61, pp. 1176–1186 (1971).
21. T. J. Stockham, Image processing in the context of a visual model, Proc. IEEE 60, pp. 828–842 (1972).
22. J. E. Farrell, X. Zhang, C. J. van den Branden Lambrecht, and D. A. Silverstein, Image quality metrics based on single and multichannel models of visual processing, IEEE Compcon., pp. 56–60 (1997).
23. T. Mitsa and J. R. Alford, Single-channel versus multiple-channel visual models for the formulation of image quality measures in digital halftoning, IS&T NIP10, pp. 385–387 (1994).
24. M. D. Fairchild, Color Appearance Models, Addison-Wesley, Reading, MA, pp. 133–151 (1998).
25. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd Edition, Wiley, New York, pp. 639–641 (1982).
26. M. R. Luo and R. W. G. Hunt, The structures of the CIE 1997 color appearance model (CIECAM97s), Color Res. Appl. 23, pp. 138–146 (1998).
27. N. Moroney, M. D. Fairchild, R. W. G. Hunt, C. Li, M. R. Luo, and T. Newman, The CIECAM02 color appearance model, 10th CIC: Color Science and Engineering Systems, Technologies, Applications, pp. 23–27 (2002).
28. C. Li, M. R. Luo, R. W. G. Hunt, N. Moroney, M. D. Fairchild, and T. Newman, The performance of CIECAM02, 10th CIC: Color Science and Engineering Systems, Technologies, Applications, pp. 28–32 (2002).
29. X. Zhang and B. A. Wandell, A spatial extension of CIELAB for digital color image reproduction, SID Int. Symp., Digest of Tech. Papers, pp. 731–734 (1996).
30. X. Zhang, D. A. Silverstein, J. E. Farrell, and B. A. Wandell, Color image quality metric S-CIELAB and its application on halftone texture visibility, IEEE Compcon., pp. 44–48 (1997).
31. X. Zhang, J. E. Farrell, and B. A. Wandell, Applications of a spatial extension to CIELAB, Proc. SPIE 3025, pp. 154–157 (1997).
32. J. Morovic and M. R. Luo, The fundamentals of gamut mapping, http://www.colour.org/tc8-03/survey (1999).
33. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chap. 6, pp. 128–152 (1997).
34. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chap. 8, pp. 177–207 (1997).
35. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, pp. 83–111 (1999).
36. M. Kaji, H. Furuta, and H. Kurakami, Evaluation of JPEG compression by using SCID images, TAGA, Sewickley, PA, pp. 103–116 (1994).
37. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chap. 10, pp. 153–156 (1997).
38. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chaps. 3–5, pp. 224–247 (1997).
39. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, pp. 113–128 (1999).
40. T. N. Pappas and D. L. Neuhoff, Model-based halftoning, Proc. SPIE 1453, pp. 244–255 (1991).
41. T. N. Pappas, Model-based halftoning of color images, IEEE Trans. Image Proc. 6, pp. 1014–1024 (1997).
42. H. R. Kang, Digital Color Halftoning, SPIE Press, Bellingham, WA, pp. 445–470 (1999).
43. M. R. Pointer and G. G. Attridge, The number of discernible colors, Color Res. Appl. 23, pp. 52–54 (1998).
44. R. W. G. Hunt, The Reproduction of Color in Photography, Printing and Television, 4th Edition, Fountain Press, England, pp. 177–197 (1987).
45. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chap. 10, pp. 208–260 (1997).
46. M. D. Fairchild, Color Appearance Models, Addison-Wesley, Reading, MA, pp. 191–214 (1998).
47. S. B. Bolte, A perspective on non-impact printing in color, Proc. SPIE 1670, pp. 2–11 (1992).
48. W. L. Rhodes, Digital imaging: Problems and standards, Proc. SID, Society for Information Display, San Jose, CA, Vol. 30, pp. 191–195 (1989).
49. G. G. Field, The systems approach to color reproduction – A critique, Proc. TAGA, Sewickley, PA, pp. 1–17 (1984).
50. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 261–271 (1997).
51. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, Chaps. 3–5, pp. 53–127 (1997).
52. T. Johnson, Device independent colour – Is it real? TAGA Proc., Sewickley, PA, pp. 81–113 (1992).
53. M. D. Fairchild, Color Appearance Models, Addison-Wesley, Reading, MA, p. 346 (1998).
54. M. Stokes, M. Anderson, S. Chandrasekar, and R. Motta, A standard default color space for the Internet – sRGB, Version 1.10, Nov. 5 (1996).
55. IEC/3WD 61966-2.1: Colour measurement and management in multimedia systems and equipment – Part 2.1: Default RGB colour space – sRGB, http://www.srgb.com (1998).
56. L. McIntyre and S. Zilles, File format for internet FAX, http://www.itu.int/itudoc/itu-t (1997).
57. ITU-T.42, Continuous tone color representation method for facsimile, http://www.itu.int/itudoc/itu-t (1996).
58. M. D. Fairchild, Color Appearance Models, Addison-Wesley, Reading, MA, Chap. 9, pp. 199–214 (1998).
59. CIE, Recommendations on uniform color spaces, color-difference equations and psychometric color terms, Supplement No. 2 to Colorimetry, Publication No. 15, Bureau Central de la CIE, Paris (1978).
60. H. R. Kang, Color scanner calibration, J. Imaging Sci. Techn. 36, pp. 162–170 (1992).
61. H. R. Kang, Color Technology for Electronic Imaging Devices, SPIE Press, Bellingham, WA, pp. 284–286 (1997).
62. H. R. Kang, Color conversion between sRGB and Internet FAX standards, NIP16, pp. 665–668 (2000).
63. R. Poe and J. Gordon, Quantization effects in digital imaging systems, Proc. TAGA, Sewickley, PA, pp. 230–255 (1987).
Appendix 1
Conversion Matrices
This appendix contains the conversion matrices for the RGB-to-XYZ transform
and its reverse conversion for twenty primary sets under eight illuminants.
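Each left-hand 3×3 block below maps linear RGB to XYZ, and the right-hand block is its matrix inverse. As a usage sketch, the reverse transform can be regenerated from the forward one by ordinary 3×3 inversion; the example below uses the Set 1 (Adobe/RGB98) forward matrix under illuminant D65:

```python
# Forward RGB-to-XYZ matrix, Set 1 (Adobe/RGB98) under illuminant D65.
ADOBE98_D65 = [[0.5762, 0.1856, 0.1880],
               [0.2971, 0.6277, 0.0752],
               [0.0270, 0.0707, 0.9902]]

def invert3(m):
    # Closed-form 3x3 inverse via the adjugate (transposed cofactor) matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

def mat_vec(m, v):
    # Apply a 3x3 matrix to a 3-vector (RGB-to-XYZ or XYZ-to-RGB).
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]
```

Applying the forward matrix to RGB = (1, 1, 1) recovers the illuminant white point (the Y row sums to one), and applying the computed inverse to that white point returns (1, 1, 1).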
Set 1. Adobe/RGB98
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8957 0.1534 0.0484 1.3144 0.3637 0.2219
0.4619 0.5188 0.0194 1.1721 2.2687 0.0503
0.0420 0.0585 0.2551 0.0523 0.4600 3.9457
Under illuminant B
0.6708 0.1765 0.1431 1.7551 0.4857 0.2964
0.3459 0.5969 0.0572 1.0187 1.9717 0.0437
0.0314 0.0673 0.7535 0.0177 0.1557 1.3356
Under illuminant C
0.5934 0.1809 0.2059 1.9840 0.5491 0.3350
0.3060 0.6117 0.0823 0.9941 1.9241 0.0426
0.0278 0.0689 1.0843 0.0123 0.1082 0.9282
Under illuminant D50
0.6454 0.1810 0.1378 1.8243 0.5049 0.3080
0.3328 0.6121 0.0551 0.9934 1.9227 0.0426
0.0303 0.0690 0.7257 0.0184 0.1617 1.3868
Adobe/RGB98 under illuminant D55
0.6169 0.1832 0.1561 1.9084 0.5281 0.3222
0.3181 0.6195 0.0624 0.9816 1.8998 0.0421
0.0289 0.0698 0.8219 0.0162 0.1428 1.2245
Adobe/RGB98 under illuminant D65
0.5762 0.1856 0.1880 2.0431 0.5654 0.3450
0.2971 0.6277 0.0752 0.9688 1.8750 0.0415
0.0270 0.0707 0.9902 0.0135 0.1185 1.0164
(Continued).
Set 1. Adobe/RGB98
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.5481 0.1868 0.2142 2.1479 0.5944 0.3627
0.2826 0.6317 0.0857 0.9626 1.8632 0.0413
0.0257 0.0712 1.1283 0.0118 0.1040 0.8919
Adobe/RGB98 under illuminant D93
0.5151 0.1876 0.2501 2.2854 0.6325 0.3859
0.2656 0.6343 0.1001 0.9586 1.8554 0.0411
0.0241 0.0715 1.3174 0.0101 0.0891 0.7639
Set 2. Bruce/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8053 0.2435 0.0488 1.5935 0.6592 0.2525
0.4153 0.5652 0.0195 1.1721 2.2687 0.0503
0.0378 0.0609 0.2569 0.0436 0.4407 3.9181
Under illuminant B
0.5668 0.2802 0.1435 2.2642 0.9367 0.3588
0.2922 0.6504 0.0574 1.0187 1.9717 0.0437
0.0266 0.0700 0.7556 0.0148 0.1498 1.3320
Under illuminant C
0.4868 0.2871 0.2063 2.6360 1.0905 0.4177
0.2510 0.6665 0.0825 0.9941 1.9241 0.0426
0.0228 0.0718 1.0864 0.0103 0.1042 0.9264
Under illuminant D50
0.5387 0.2873 0.1382 2.3822 0.9855 0.3775
0.2778 0.6670 0.0553 0.9934 1.9227 0.0426
0.0253 0.0718 0.7278 0.0154 0.1555 1.3829
Under illuminant D55
0.5090 0.2908 0.1565 2.5213 1.0431 0.3995
0.2624 0.6750 0.0626 0.9816 1.8998 0.0421
0.0239 0.0727 0.8241 0.0136 0.1374 1.2214
Under illuminant D65
0.4669 0.2946 0.1884 2.7487 1.1371 0.4355
0.2407 0.6839 0.0754 0.9688 1.8750 0.0415
0.0219 0.0737 0.9924 0.0113 0.1141 1.0142
(Continued).
Set 2. Bruce/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.4381 0.2965 0.2147 2.9295 1.2119 0.4642
0.2259 0.6883 0.0859 0.9626 1.8632 0.0413
0.0205 0.0741 1.1305 0.0099 0.1001 0.8903
Under illuminant D93
0.4046 0.2977 0.2506 3.1716 1.3121 0.5025
0.2086 0.6911 0.1002 0.9586 1.8554 0.0411
0.0190 0.0744 1.3196 0.0085 0.0858 0.7627
Set 3. CIE1931/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.7510 0.2766 0.0700 1.5427 0.5847 0.3053
0.2712 0.7251 0.0037 0.5771 1.5981 0.0993
0.0000 0.0090 0.3465 0.0150 0.0414 2.8833
Under illuminant B
0.5129 0.3074 0.1701 2.2587 0.8561 0.4470
0.1852 0.8057 0.0091 0.5194 1.4382 0.0894
0.0000 0.0100 0.8422 0.0062 0.0170 1.1863
Under illuminant C
0.4257 0.3180 0.2364 2.7213 1.0314 0.5385
0.1537 0.8336 0.0126 0.5019 1.3900 0.0864
0.0000 0.0103 1.1707 0.0044 0.0123 0.8534
Under illuminant D50
0.4889 0.3108 0.1646 2.3700 0.8983 0.4690
0.1765 0.8147 0.0088 0.5136 1.4223 0.0884
0.0000 0.0101 0.8148 0.0064 0.0176 1.2262
Under illuminant D55
0.4576 0.3147 0.1839 2.5316 0.9595 0.5010
0.1653 0.8249 0.0098 0.5072 1.4046 0.0873
0.0000 0.0102 0.9104 0.0057 0.0158 1.0975
Under illuminant D65
0.4120 0.3203 0.2176 2.8122 1.0659 0.5565
0.1488 0.8396 0.0116 0.4984 1.3801 0.0858
0.0000 0.0104 1.0775 0.0048 0.0133 0.9272
(Continued).
Set 3. CIE1931/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.3797 0.3242 0.2453 3.0513 1.1565 0.6038
0.1371 0.8498 0.0131 0.4924 1.3635 0.0848
0.0000 0.0105 1.2147 0.0043 0.0118 0.8225
Under illuminant D93
0.3409 0.3287 0.2832 3.3983 1.2880 0.6725
0.1231 0.8618 0.0151 0.4855 1.3446 0.0836
0.0000 0.0107 1.4023 0.0037 0.0102 0.7125
Set 4. CIE1964/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9396 0.0965 0.0615 1.1242 0.1566 0.2176
0.3597 0.6352 0.0051 0.6372 1.6647 0.0977
0.0000 0.0414 0.3141 0.0839 0.2195 3.1713
Under illuminant B
0.7248 0.1078 0.1578 1.4574 0.2030 0.2821
0.2775 0.7095 0.0131 0.5706 1.4905 0.0875
0.0000 0.0463 0.8059 0.0327 0.0855 1.2358
Under illuminant C
0.6468 0.1115 0.2219 1.6331 0.2275 0.3161
0.2476 0.7340 0.0184 0.5515 1.4407 0.0846
0.0000 0.0479 1.1331 0.0233 0.0608 0.8789
Under illuminant D50
0.7027 0.1091 0.1523 1.5032 0.2094 0.2909
0.2690 0.7184 0.0126 0.5635 1.4720 0.0864
0.0000 0.0468 0.7780 0.0339 0.0886 1.2801
Under illuminant D55
0.6747 0.1105 0.1710 1.5656 0.2181 0.3030
0.2583 0.7275 0.0142 0.5564 1.4534 0.0853
0.0000 0.0474 0.8731 0.0302 0.0790 1.1407
Under illuminant D65
0.6339 0.1125 0.2036 1.6665 0.2321 0.3225
0.2426 0.7405 0.0169 0.5466 1.4280 0.0838
0.0000 0.0483 1.0396 0.0254 0.0663 0.9580
(Continued).
Set 4. CIE1964/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6050 0.1138 0.2303 1.7459 0.2432 0.3379
0.2316 0.7493 0.0191 0.5402 1.4113 0.0829
0.0000 0.0489 1.1763 0.0224 0.0586 0.8467
Under illuminant D93
0.5706 0.1154 0.2670 1.8514 0.2579 0.3583
0.2184 0.7594 0.0221 0.5330 1.3924 0.0817
0.0000 0.0495 1.3635 0.0193 0.0506 0.7305
Set 5. EBU/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.7749 0.2824 0.0403 1.7021 0.7742 0.2644
0.3996 0.5843 0.0161 1.1721 2.2687 0.0503
0.0363 0.1071 0.2120 0.3006 1.0135 4.7358
Under illuminant B
0.5318 0.3250 0.1337 2.4803 1.1282 0.3853
0.2742 0.6723 0.0535 1.0187 1.9717 0.0437
0.0249 0.1233 0.7040 0.0905 0.3053 1.4264
Under illuminant C
0.4509 0.3330 0.1962 2.9248 1.3304 0.4543
0.2325 0.6890 0.0785 0.9941 1.9241 0.0426
0.0211 0.1263 1.0335 0.0617 0.2079 0.9716
Under illuminant D50
0.5028 0.3333 0.1282 2.6232 1.1932 0.4075
0.2593 0.6895 0.0513 0.9934 1.9227 0.0426
0.0236 0.1264 0.6749 0.0944 0.3184 1.4879
Under illuminant D55
0.4726 0.3373 0.1463 2.7906 1.2693 0.4335
0.2437 0.6978 0.0585 0.9816 1.8998 0.0421
0.0222 0.1279 0.7705 0.0827 0.2789 1.3033
Under illuminant D65
0.4310 0.3420 0.1780 3.0669 1.3950 0.4764
0.2220 0.7070 0.0710 0.9688 1.8750 0.0415
0.0200 0.1300 0.9390 0.0679 0.2291 1.0705
(Continued).
Set 5. EBU/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.4010 0.3439 0.2043 3.2891 1.4961 0.5109
0.2068 0.7115 0.0817 0.9626 1.8632 0.0413
0.0188 0.1304 1.0760 0.0592 0.1997 0.9333
Under illuminant D93
0.3674 0.3453 0.2401 3.5898 1.6329 0.5576
0.1894 0.7145 0.0961 0.9586 1.8554 0.0411
0.0172 0.1310 1.2648 0.0504 0.1699 0.7940
Set 6. Extended/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9191 0.1259 0.0526 1.1935 0.2474 0.1761
0.3920 0.5895 0.0185 0.7956 1.8653 0.0224
0.0000 0.0252 0.3303 0.0606 0.1422 3.0257
Under illuminant B
0.7211 0.1380 0.1313 1.5212 0.3153 0.2245
0.3076 0.6463 0.0461 0.7257 1.7013 0.0204
0.0000 0.0276 0.8246 0.0243 0.0570 1.2120
Under illuminant C
0.6567 0.1400 0.1835 1.6705 0.3462 0.2465
0.2801 0.6555 0.0644 0.7156 1.6776 0.0201
0.0000 0.0280 1.1530 0.0174 0.0407 0.8668
Under illuminant D50
0.6968 0.1406 0.1268 1.5743 0.3263 0.2324
0.2972 0.6583 0.0445 0.7125 1.6705 0.0200
0.0000 0.0281 0.7968 0.0251 0.0589 1.2543
Under illuminant D55
0.6725 0.1417 0.1420 1.6311 0.3381 0.2407
0.2869 0.6633 0.0499 0.7071 1.6578 0.0199
0.0000 0.0283 0.8923 0.0225 0.0526 1.1201
Under illuminant D65
0.6385 0.1428 0.1686 1.7180 0.3561 0.2536
0.2724 0.6684 0.0592 0.7017 1.6450 0.0197
0.0000 0.0286 1.0593 0.0189 0.0443 0.9434
(Continued).
Set 6. Extended/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6155 0.1432 0.1905 1.7822 0.3694 0.2630
0.2625 0.6706 0.0669 0.6994 1.6398 0.0197
0.0000 0.0286 1.1966 0.0167 0.0393 0.8353
Under illuminant D93
0.5892 0.1434 0.2203 1.8619 0.3859 0.2748
0.2513 0.6713 0.0774 0.6986 1.6380 0.0197
0.0000 0.0287 1.3843 0.0145 0.0339 0.7220
Set 7. Guild/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8114 0.2236 0.0625 1.4487 0.5047 0.2401
0.3478 0.6314 0.0208 0.7997 1.8659 0.0333
0.0000 0.0219 0.3336 0.0526 0.1226 2.9956
Under illuminant B
0.5887 0.2465 0.1553 1.9970 0.6958 0.3309
0.2523 0.6960 0.0518 0.7255 1.6927 0.0302
0.0000 0.0242 0.8280 0.0212 0.0494 1.2068
Under illuminant C
0.5126 0.2508 0.2168 2.2932 0.7990 0.3800
0.2197 0.7080 0.0723 0.7131 1.6639 0.0297
0.0000 0.0246 1.1564 0.0152 0.0354 0.8641
Under illuminant D50
0.5632 0.2510 0.1501 2.0873 0.7273 0.3459
0.2414 0.7086 0.0500 0.7125 1.6625 0.0297
0.0000 0.0246 0.8003 0.0219 0.0511 1.2486
Under illuminant D55
0.5351 0.2531 0.1680 2.1967 0.7654 0.3641
0.2293 0.7147 0.0560 0.7065 1.6484 0.0294
0.0000 0.0248 0.8958 0.0196 0.0457 1.1155
Under illuminant D65
0.4951 0.2555 0.1993 2.3742 0.8272 0.3935
0.2122 0.7214 0.0664 0.6999 1.6331 0.0292
0.0000 0.0250 1.0629 0.0165 0.0385 0.9402
(Continued).
Set 7. Guild/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.4676 0.2566 0.2250 2.5142 0.8760 0.4167
0.2004 0.7246 0.0750 0.6968 1.6258 0.0290
0.0000 0.0252 1.2000 0.0146 0.0341 0.8327
Under illuminant D93
0.4353 0.2574 0.2602 2.7004 0.9408 0.4475
0.1866 0.7267 0.0867 0.6948 1.6211 0.0289
0.0000 0.0252 1.3878 0.0126 0.0295 0.7201
Set 8. Ink-jet/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8284 0.2170 0.0522 1.4141 0.4829 0.1947
0.3550 0.6249 0.0201 0.8055 1.8794 0.0131
0.0000 0.0260 0.3295 0.0637 0.1485 3.0342
Under illuminant B
0.6227 0.2371 0.1306 1.8813 0.6424 0.2591
0.2669 0.6829 0.0502 0.7370 1.7197 0.0120
0.0000 0.0285 0.8237 0.0255 0.0594 1.2136
Under illuminant C
0.5577 0.2398 0.1827 2.1005 0.7173 0.2893
0.2390 0.6907 0.0703 0.7287 1.7003 0.0118
0.0000 0.0288 1.1522 0.0182 0.0425 0.8676
Under illuminant D50
0.5964 0.2416 0.1262 1.9642 0.6707 0.2705
0.2556 0.6959 0.0485 0.7233 1.6877 0.0118
0.0000 0.0290 0.7959 0.0263 0.0615 1.2560
Under illuminant D55
0.5716 0.2433 0.1413 2.0494 0.6999 0.2822
0.2450 0.7007 0.0544 0.7183 1.6761 0.0117
0.0000 0.0292 0.8914 0.0235 0.0549 1.1214
Under illuminant D65
0.5372 0.2449 0.1678 2.1806 0.7446 0.3003
0.2302 0.7052 0.0645 0.7137 1.6653 0.0116
0.0000 0.0294 1.0585 0.0198 0.0462 0.9444
(Continued).
Set 8. Ink-jet/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.5142 0.2454 0.1896 2.2780 0.7779 0.3137
0.2204 0.7067 0.0729 0.7122 1.6619 0.0116
0.0000 0.0294 1.1958 0.0175 0.0409 0.8360
Under illuminant D93
0.4883 0.2453 0.2193 2.3991 0.8192 0.3304
0.2093 0.7064 0.0844 0.7126 1.6626 0.0116
0.0000 0.0294 1.3836 0.0152 0.0354 0.7225
Set 9. ITU-R BT.709/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.7599 0.2957 0.0420 1.7589 0.8343 0.2706
0.3918 0.5914 0.0168 1.1721 2.2687 0.0503
0.0356 0.0986 0.2213 0.2389 0.8761 4.5396
Under illuminant B
0.5145 0.3402 0.1357 2.5979 1.2323 0.3997
0.2653 0.6804 0.0543 1.0187 1.9717 0.0437
0.0241 0.1134 0.7147 0.0740 0.2713 1.4058
Under illuminant C
0.4332 0.3486 0.1983 3.0850 1.4634 0.4746
0.2234 0.6973 0.0793 0.9941 1.9241 0.0426
0.0203 0.1162 1.0445 0.0506 0.1856 0.9619
Under illuminant D50
0.4851 0.3489 0.1302 2.7553 1.3070 0.4239
0.2501 0.6978 0.0521 0.9934 1.9227 0.0426
0.0227 0.1163 0.6859 0.0771 0.2827 1.4648
Under illuminant D55
0.4547 0.3531 0.1484 2.9394 1.3943 0.4522
0.2345 0.7062 0.0594 0.9816 1.8998 0.0421
0.0213 0.1177 0.7816 0.0677 0.2481 1.2854
Under illuminant D65
0.4119 0.3578 0.1803 3.2410 1.5374 0.4986
0.2124 0.7155 0.0721 0.9692 1.8760 0.0416
0.0193 0.1193 0.9493 0.0556 0.2040 1.0570
(Continued).
Set 9. ITU-R BT.709/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.3827 0.3600 0.2064 3.4922 1.6566 0.5373
0.1973 0.7201 0.0826 0.9626 1.8632 0.0413
0.0179 0.1200 1.0872 0.0486 0.1783 0.9241
Under illuminant D93
0.3490 0.3616 0.2423 3.8291 1.8164 0.5891
0.1800 0.7231 0.0969 0.9586 1.8554 0.0411
0.0164 0.1205 1.2761 0.0414 0.1519 0.7873
Set 10. Judd-Wyszecki/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9798 0.0574 0.0603 1.0454 0.0688 0.2213
0.3538 0.6444 0.0017 0.5743 1.5905 0.1121
0.0000 0.0710 0.2845 0.1434 0.3971 3.4873
Under illuminant B
0.7623 0.0642 0.1639 1.3437 0.0884 0.2844
0.2753 0.7200 0.0047 0.5141 1.4236 0.1004
0.0000 0.0794 0.7728 0.0528 0.1462 1.2836
Under illuminant C
0.6806 0.0666 0.2330 1.5051 0.0990 0.3186
0.2458 0.7475 0.0067 0.4951 1.3712 0.0967
0.0000 0.0824 1.0986 0.0371 0.1028 0.9030
Under illuminant D50
0.7414 0.0648 0.1579 1.3816 0.0909 0.2925
0.2677 0.7277 0.0045 0.5086 1.4085 0.0993
0.0000 0.0802 0.7447 0.0548 0.1517 1.3321
Under illuminant D55
0.7125 0.0657 0.1780 1.4377 0.0946 0.3043
0.2573 0.7376 0.0051 0.5018 1.3896 0.0980
0.0000 0.0813 0.8393 0.0486 0.1346 1.1820
Under illuminant D65
0.6697 0.0670 0.2131 1.5294 0.1006 0.3238
0.2418 0.7520 0.0061 0.4922 1.3630 0.0961
0.0000 0.0829 1.0050 0.0406 0.1124 0.9871
(Continued).
Set 10. Judd-Wyszecki/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6393 0.0679 0.2420 1.6024 0.1054 0.3392
0.2308 0.7622 0.0070 0.4856 1.3448 0.0948
0.0000 0.0840 1.1412 0.0357 0.0990 0.8693
Under illuminant D93
0.6023 0.0690 0.2816 1.7006 0.1119 0.3600
0.2175 0.7744 0.0081 0.4780 1.3236 0.0933
0.0000 0.0854 1.3276 0.0307 0.0851 0.7472
Set 11. Kress Default/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9305 0.1101 0.0570 1.1706 0.2151 0.1963
0.4149 0.5734 0.0118 0.8485 1.9030 0.0795
0.0003 0.0280 0.3272 0.0717 0.1629 3.0496
Under illuminant B
0.7230 0.1244 0.1430 1.5066 0.2769 0.2526
0.3224 0.6482 0.0295 0.7506 1.6835 0.0703
0.0002 0.0317 0.8203 0.0286 0.0650 1.2164
Under illuminant C
0.6519 0.1282 0.2001 1.6711 0.3071 0.2802
0.2906 0.6681 0.0413 0.7282 1.6332 0.0682
0.0002 0.0327 1.1481 0.0204 0.0464 0.8691
Under illuminant D50
0.6995 0.1266 0.1381 1.5573 0.2862 0.2611
0.3119 0.6597 0.0285 0.7375 1.6541 0.0691
0.0002 0.0323 0.7924 0.0296 0.0672 1.2592
Under illuminant D55
0.6733 0.1282 0.1547 1.6179 0.2973 0.2713
0.3002 0.6679 0.0319 0.7284 1.6337 0.0682
0.0002 0.0327 0.8878 0.0264 0.0600 1.1240
Under illuminant D65
0.6359 0.1303 0.1838 1.7131 0.3148 0.2872
0.2835 0.6786 0.0379 0.7169 1.6080 0.0671
0.0002 0.0332 1.0545 0.0223 0.0505 0.9462
(Continued).
Set 11. Kress Default/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6100 0.1315 0.2076 1.7857 0.3281 0.2994
0.2720 0.6852 0.0428 0.7100 1.5925 0.0665
0.0002 0.0335 1.1915 0.0197 0.0447 0.8374
Under illuminant D93
0.5798 0.1328 0.2403 1.8789 0.3453 0.3150
0.2585 0.6920 0.0496 0.7031 1.5769 0.0658
0.0002 0.0338 1.3790 0.0170 0.0386 0.7236
Set 12. Laser/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
1.0270 0.0237 0.0468 0.9738 0.0004 0.1927
0.4159 0.5807 0.0034 0.6996 1.7276 0.1134
0.0001 0.1188 0.2367 0.3509 0.8671 4.1686
Under illuminant B
0.8215 0.0269 0.1420 1.2175 0.0005 0.2409
0.3327 0.6569 0.0104 0.6184 1.5270 0.1003
0.0000 0.1344 0.7178 0.1157 0.2859 1.3744
Under illuminant C
0.7463 0.0279 0.2060 1.3403 0.0005 0.2652
0.3022 0.6827 0.0151 0.5950 1.4693 0.0965
0.0000 0.1397 1.0413 0.0798 0.1971 0.9474
Under illuminant D50
0.8007 0.0272 0.1363 1.2491 0.0005 0.2471
0.3243 0.6658 0.0100 0.6102 1.5067 0.0989
0.0000 0.1362 0.6887 0.1206 0.2980 1.4325
Under illuminant D55
0.7738 0.0276 0.1548 1.2926 0.0005 0.2557
0.3134 0.6753 0.0113 0.6016 1.4854 0.0975
0.0000 0.1381 0.7824 0.1061 0.2623 1.2609
Under illuminant D65
0.7344 0.0282 0.1874 1.3619 0.0005 0.2695
0.2974 0.6889 0.0137 0.5897 1.4562 0.0956
0.0000 0.1409 0.9469 0.0877 0.2167 1.0418
(Continued).
Set 12. Laser/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.7065 0.0285 0.2142 1.4157 0.0006 0.2801
0.2861 0.6982 0.0157 0.5818 1.4367 0.0943
0.0000 0.1428 1.0823 0.0767 0.1896 0.9115
Under illuminant D93
0.6730 0.0290 0.2509 1.4861 0.0006 0.2940
0.2726 0.7091 0.0184 0.5729 1.4147 0.0929
0.0000 0.1451 1.2679 0.0655 0.1618 0.7781
Set 13. NTSC/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8870 0.1576 0.0530 1.3069 0.3643 0.1972
0.4369 0.5328 0.0303 1.0840 2.2009 0.0312
0.0000 0.0600 0.2955 0.2203 0.4472 3.3909
Under illuminant B
0.6758 0.1735 0.1411 1.7152 0.4781 0.2588
0.3329 0.5865 0.0806 0.9848 1.9995 0.0283
0.0000 0.0661 0.7861 0.0828 0.1681 1.2745
Under illuminant C
0.6065 0.1736 0.2001 1.9112 0.5328 0.2884
0.2987 0.5869 0.1143 0.9841 1.9980 0.0283
0.0000 0.0661 1.1149 0.0584 0.1185 0.8986
Under illuminant D50
0.6502 0.1781 0.1359 1.7827 0.4970 0.2690
0.3203 0.6021 0.0776 0.9593 1.9477 0.0276
0.0000 0.0678 0.7571 0.0860 0.1745 1.3234
Under illuminant D55
0.6242 0.1790 0.1530 1.8570 0.5177 0.2802
0.3075 0.6051 0.0874 0.9545 1.9380 0.0274
0.0000 0.0682 0.8524 0.0764 0.1550 1.1753
Under illuminant D65
0.5877 0.1792 0.1830 1.9725 0.5499 0.2976
0.2894 0.6060 0.1046 0.9532 1.9352 0.0274
0.0000 0.0683 1.0196 0.0638 0.1296 0.9826
(Continued).
Set 13. NTSC/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.5628 0.1787 0.2077 2.0595 0.5741 0.3108
0.2772 0.6041 0.1187 0.9561 1.9412 0.0275
0.0000 0.0681 1.1571 0.0562 0.1142 0.8658
Under illuminant D93
0.5343 0.1771 0.2415 2.1696 0.6048 0.3274
0.2631 0.5988 0.1380 0.9645 1.9583 0.0277
0.0000 0.0675 1.3455 0.0484 0.0982 0.7446
Set 14. ROM/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8933 0.1610 0.0434 1.1553 0.2299 0.1073
0.1473 0.8526 0.0001 0.1997 1.2126 0.0184
0.0174 0.0938 0.4667 0.0029 0.2352 2.1424
Under illuminant B
0.7350 0.1659 0.0895 1.4040 0.2794 0.1304
0.1212 0.8787 0.0001 0.1937 1.1766 0.0179
0.0143 0.0967 0.9632 0.0014 0.1140 1.0381
Under illuminant C
0.6930 0.1672 0.1200 1.4892 0.2964 0.1383
0.1143 0.8856 0.0001 0.1922 1.1675 0.0177
0.0135 0.0974 1.2919 0.0011 0.0850 0.7739
Under illuminant D50
0.7106 0.1666 0.0869 1.4523 0.2890 0.1349
0.1172 0.8827 0.0001 0.1929 1.1713 0.0178
0.0138 0.0971 0.9359 0.0015 0.1173 1.0684
Under illuminant D55
0.6932 0.1672 0.0958 1.4888 0.2963 0.1383
0.1143 0.8855 0.0001 0.1922 1.1675 0.0177
0.0135 0.0974 1.0315 0.0013 0.1064 0.9693
Under illuminant D65
0.6707 0.1679 0.1114 1.5388 0.3062 0.1429
0.1106 0.8892 0.0001 0.1914 1.1626 0.0177
0.0131 0.0978 1.1988 0.0011 0.0916 0.8340
Conversion Matrices 463
Set 14. ROM/RGB (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6568 0.1683 0.1241 1.5713 0.3127 0.1460
0.1083 0.8915 0.0001 0.1909 1.1597 0.0176
0.0128 0.0981 1.3361 0.0010 0.0821 0.7484
Under illuminant D93
0.6426 0.1687 0.1416 1.6060 0.3196 0.1492
0.1060 0.8938 0.0002 0.1904 1.1567 0.0176
0.0125 0.0984 1.5239 0.0009 0.0720 0.6561
Set 15. ROMM/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9600 0.1241 0.0135 1.1183 0.2124 0.0425
0.3467 0.6533 0.0000 0.5934 1.6434 0.0224
0.0000 0.0000 0.3555 0.0000 0.0000 2.8129
Under illuminant B
0.8247 0.1333 0.0324 1.3019 0.2472 0.0494
0.2978 0.7021 0.0001 0.5522 1.5291 0.0208
0.0000 0.0000 0.8522 0.0000 0.0000 1.1734
Under illuminant C
0.8003 0.1350 0.0449 1.3415 0.2548 0.0509
0.2890 0.7109 0.0001 0.5454 1.5103 0.0206
0.0000 0.0000 1.1810 0.0000 0.0000 0.8467
Under illuminant D50
0.7977 0.1352 0.0313 1.3460 0.2556 0.0511
0.2880 0.7119 0.0001 0.5446 1.5082 0.0205
0.0000 0.0000 0.8249 0.0000 0.0000 1.2123
Under illuminant D55
0.7852 0.1360 0.0350 1.3674 0.2597 0.0519
0.2835 0.7164 0.0001 0.5412 1.4987 0.0204
0.0000 0.0000 0.9206 0.0000 0.0000 1.0862
Under illuminant D65
0.7716 0.1370 0.0413 1.3914 0.2642 0.0528
0.2786 0.7213 0.0001 0.5375 1.4885 0.0203
0.0000 0.0000 1.0879 0.0000 0.0000 0.9192
Set 15. ROMM/RGB (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.7652 0.1374 0.0466 1.4030 0.2664 0.0533
0.2763 0.7235 0.0001 0.5358 1.4838 0.0202
0.0000 0.0000 1.2252 0.0000 0.0000 0.8162
Under illuminant D93
0.7616 0.1377 0.0537 1.4098 0.2677 0.0535
0.2750 0.7249 0.0001 0.5348 1.4812 0.0202
0.0000 0.0000 1.4130 0.0000 0.0000 0.7077
Set 16. SMPTE-C/RGB (Xerox/RGB)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.7536 0.2984 0.0456 1.8308 0.9085 0.2841
0.4067 0.5727 0.0206 1.3087 2.4211 0.0431
0.0359 0.0914 0.2282 0.2365 0.8273 4.4100
Under illuminant B
0.4993 0.3466 0.1444 2.7630 1.3711 0.4288
0.2695 0.6653 0.0652 1.1265 2.0841 0.0371
0.0238 0.1062 0.7222 0.0747 0.2614 1.3933
Under illuminant C
0.4149 0.3548 0.2105 3.3256 1.6502 0.5161
0.2239 0.6810 0.0951 1.1005 2.0359 0.0362
0.0198 0.1087 1.0525 0.0513 0.1794 0.9561
Under illuminant D50
0.4690 0.3565 0.1387 2.9415 1.4597 0.4565
0.2531 0.6842 0.0626 1.0953 2.0264 0.0360
0.0223 0.1092 0.6933 0.0778 0.2723 1.4514
Under illuminant D55
0.4375 0.3609 0.1578 3.1535 1.5649 0.4894
0.2361 0.6926 0.0713 1.0821 2.0019 0.0356
0.0208 0.1106 0.7892 0.0684 0.2392 1.2751
Under illuminant D65
0.3930 0.3655 0.1914 3.5106 1.7421 0.5448
0.2121 0.7014 0.0865 1.0685 1.9767 0.0352
0.0187 0.1120 0.9572 0.0564 0.1972 1.0513
Set 16. SMPTE-C/RGB (Xerox/RGB) (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.3626 0.3675 0.2191 3.8046 1.8879 0.5904
0.1957 0.7054 0.0989 1.0625 1.9657 0.0350
0.0173 0.1126 1.0953 0.0493 0.1724 0.9187
Under illuminant D93
0.3275 0.3685 0.2569 4.2123 2.0903 0.6537
0.1768 0.7072 0.1160 1.0597 1.9606 0.0349
0.0156 0.1129 1.2845 0.0420 0.1470 0.7834
Set 17. SMPTE-240M/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8809 0.1608 0.0559 1.3144 0.3637 0.2219
0.4339 0.5438 0.0223 1.0578 2.1476 0.0377
0.0000 0.0613 0.2942 0.2203 0.4472 3.3909
Under illuminant B
0.6597 0.1821 0.1486 1.7551 0.4857 0.2964
0.3249 0.6156 0.0595 0.9343 1.8969 0.0333
0.0000 0.0694 0.7828 0.0828 0.1681 1.2745
Under illuminant C
0.5836 0.1858 0.2108 1.9840 0.5491 0.3350
0.2874 0.6282 0.0843 0.9156 1.8589 0.0327
0.0000 0.0708 1.1102 0.0584 0.1185 0.8986
Under illuminant D50
0.6347 0.1864 0.1431 1.8243 0.5049 0.3080
0.3126 0.6301 0.0573 0.9128 1.8533 0.0326
0.0000 0.0710 0.7539 0.0860 0.1745 1.3234
Under illuminant D55
0.6067 0.1883 0.1612 1.9084 0.5281 0.3222
0.2988 0.6367 0.0645 0.9034 1.8342 0.0322
0.0000 0.0717 0.8489 0.0764 0.1550 1.1753
Under illuminant D65
0.5667 0.1904 0.1928 2.0431 0.5654 0.3450
0.2791 0.6438 0.0771 0.8935 1.8140 0.0319
0.0000 0.0725 1.0154 0.0638 0.1296 0.9826
Set 17. SMPTE-240M/RGB (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.5390 0.1914 0.2188 2.1479 0.5944 0.3627
0.2655 0.6470 0.0875 0.8890 1.8050 0.0317
Under illuminant D93
0.5066 0.1919 0.2544 2.2854 0.6325 0.3859
0.2495 0.6487 0.1018 0.8867 1.8002 0.0316
0.0000 0.0731 1.3399 0.0484 0.0982 0.7446
Set 18. Sony-P22/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.8013 0.2571 0.0392 1.6565 0.7237 0.2659
0.4359 0.5464 0.0177 1.3346 2.4486 0.0458
0.0449 0.1148 0.1958 0.4027 1.2694 5.1404
Under illuminant B
0.5532 0.2998 0.1375 2.3997 1.0483 0.3852
0.3009 0.6370 0.0621 1.1448 2.1004 0.0392
0.0310 0.1338 0.6874 0.1147 0.3617 1.4645
Under illuminant C
0.4696 0.3071 0.2035 2.8269 1.2350 0.4538
0.2554 0.6527 0.0919 1.1173 2.0500 0.0383
0.0263 0.1371 1.0176 0.0775 0.2443 0.9893
Under illuminant D50
0.5242 0.3084 0.1316 2.5322 1.1062 0.4065
0.2852 0.6554 0.0594 1.1126 2.0413 0.0381
0.0294 0.1377 0.6579 0.1199 0.3779 1.5303
Under illuminant D55
0.4932 0.3123 0.1507 2.6915 1.1758 0.4321
0.2683 0.6636 0.0681 1.0988 2.0160 0.0377
0.0276 0.1394 0.7536 0.1047 0.3299 1.3359
Under illuminant D65
0.4492 0.3164 0.1843 2.9552 1.2910 0.4744
0.2443 0.6724 0.0832 1.0845 1.9897 0.0372
0.0252 0.1413 0.9215 0.0856 0.2698 1.0925
Set 18. Sony-P22/RGB (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.4190 0.3183 0.2119 3.1681 1.3840 0.5086
0.2279 0.6764 0.0957 1.0782 1.9781 0.0370
0.0235 0.1421 1.0596 0.0744 0.2346 0.9500
Under illuminant D93
0.3839 0.3192 0.2498 3.4579 1.5106 0.5551
0.2088 0.6784 0.1128 1.0750 1.9723 0.0369
0.0215 0.1425 1.2490 0.0631 0.1990 0.8060
Set 19. Wide-Gamut/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9482 0.0907 0.0587 1.1050 0.1394 0.2066
0.3424 0.6510 0.0066 0.5821 1.6119 0.0758
0.0000 0.0460 0.3095 0.0865 0.2396 3.2198
Under illuminant B
0.7385 0.0998 0.1520 1.4187 0.1790 0.2652
0.2667 0.7161 0.0172 0.5291 1.4653 0.0689
0.0000 0.0506 0.8016 0.0334 0.0925 1.2432
Under illuminant C
0.6635 0.1026 0.2141 1.5792 0.1993 0.2952
0.2396 0.7362 0.0242 0.5147 1.4253 0.0671
0.0000 0.0520 1.1290 0.0237 0.0657 0.8827
Under illuminant D50
0.7164 0.1010 0.1467 1.4624 0.1845 0.2734
0.2587 0.7247 0.0166 0.5228 1.4479 0.0681
0.0000 0.0512 0.7737 0.0346 0.0958 1.2880
Under illuminant D55
0.6893 0.1021 0.1648 1.5200 0.1918 0.2842
0.2489 0.7325 0.0186 0.5173 1.4326 0.0674
0.0000 0.0518 0.8688 0.0308 0.0853 1.1469
Under illuminant D65
0.6499 0.1036 0.1964 1.6121 0.2034 0.3014
0.2347 0.7431 0.0222 0.5099 1.4121 0.0664
0.0000 0.0525 1.0354 0.0259 0.0716 0.9625
Set 19. Wide-Gamut/RGB (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6223 0.1046 0.2223 1.6836 0.2125 0.3148
0.2247 0.7502 0.0251 0.5051 1.3988 0.0658
0.0000 0.0530 1.1722 0.0228 0.0633 0.8501
Under illuminant D93
0.5894 0.1057 0.2578 1.7776 0.2243 0.3323
0.2128 0.7580 0.0291 0.4999 1.3843 0.0651
0.0000 0.0536 1.3594 0.0197 0.0545 0.7330
Set 20. Wright/RGB
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant A
0.9179 0.1232 0.0565 1.1706 0.2151 0.1963
0.3464 0.6419 0.0116 0.6329 1.6768 0.0500
0.0000 0.0314 0.3241 0.0613 0.1624 3.0805
Under illuminant B
0.7132 0.1346 0.1425 1.5066 0.2769 0.2526
0.2692 0.7014 0.0294 0.5792 1.5346 0.0458
0.0000 0.0343 0.8179 0.0243 0.0643 1.2207
Under illuminant C
0.6430 0.1375 0.1997 1.6711 0.3071 0.2802
0.2427 0.7161 0.0412 0.5673 1.5031 0.0448
0.0000 0.0350 1.1460 0.0173 0.0459 0.8712
Under illuminant D50
0.6900 0.1365 0.1377 1.5573 0.2862 0.2611
0.2604 0.7112 0.0284 0.5712 1.5135 0.0451
0.0000 0.0348 0.7901 0.0251 0.0666 1.2636
Under illuminant D55
0.6641 0.1377 0.1543 1.6179 0.2973 0.2713
0.2507 0.7175 0.0318 0.5662 1.5002 0.0447
0.0000 0.0351 0.8855 0.0224 0.0594 1.1275
Under illuminant D65
0.6272 0.1393 0.1834 1.7131 0.3148 0.2872
0.2367 0.7254 0.0378 0.5600 1.4838 0.0443
0.0000 0.0355 1.0524 0.0189 0.0500 0.9487
Set 20. Wright/RGB (continued)
RGB-to-XYZ transformation XYZ-to-RGB transformation
Under illuminant D75
0.6017 0.1402 0.2073 1.7857 0.3281 0.2994
0.2271 0.7301 0.0428 0.5564 1.4743 0.0440
0.0000 0.0357 1.1895 0.0167 0.0442 0.8394
Under illuminant D93
0.5719 0.1410 0.2400 1.8789 0.3453 0.3150
0.2158 0.7347 0.0495 0.5530 1.4652 0.0437
0.0000 0.0359 1.3771 0.0144 0.0382 0.7250
Appendix 2
Conversion Matrices from RGB to ITU-R.BT.709/RGB
This appendix contains the conversion matrices for various RGB primary sets to ITU-R.BT.709/RGB under illuminant D65, as follows:
(1) Adobe/RGB98 to ITU-R.BT.709/RGB under D65
1.3972 0.3987 0.0000
0.0000 1.0006 0.0001
0.0000 0.0430 1.0418
(2) Bruse/RGB to ITU-R.BT.709/RGB under D65
1.1323 0.1334 0.0001
0.0001 1.0005 0.0001
0.0000 0.0452 1.0441
(3) CIE1931/RGB to ITU-R.BT.709/RGB under D65
1.1065 0.2579 0.1502
0.1202 1.2651 0.1443
0.0074 0.1425 1.1486
(4) CIE1964/RGB to ITU-R.BT.709/RGB under D65
1.6815 0.7979 0.1155
0.1593 1.2822 0.1224
0.0142 0.0938 1.1067
(5) EBU/RGB to ITU-R.BT.709/RGB under D65
1.0456 0.0433 0.0004
0.0004 1.0003 0.0003
0.0002 0.0122 0.9879
(6) Extended/RGB to ITU-R.BT.709/RGB under D65
1.6506 0.5790 0.0727
0.1078 1.1167 0.0083
0.0201 0.0982 1.1170
(7) Guild/RGB to ITU-R.BT.709/RGB under D65
1.2784 0.2935 0.0139
0.0818 1.1068 0.0244
0.0158 0.1065 1.1210
(8) Ink-jet/RGB to ITU-R.BT.709/RGB under D65
1.3872 0.3051 0.0831
0.0888 1.0868 0.0024
0.0171 0.0992 1.1150
(9) Judd-Wyszecki/RGB to ITU-R.BT.709/RGB under D65
1.7988 0.9803 0.1802
0.1955 1.3493 0.1533
0.0121 0.0621 1.0729
(10) Kress/RGB to ITU-R.BT.709/RGB under D65
1.6250 0.6375 0.0117
0.0845 1.1481 0.0632
0.0223 0.0961 1.1171
(11) Laser/RGB to ITU-R.BT.709/RGB under D65
1.9230 1.0380 0.1142
0.1539 1.2709 0.1165
0.0198 0.0100 1.0085
(12) NTSC/RGB to ITU-R.BT.709/RGB under D65
1.4598 0.3849 0.0761
0.0267 0.9660 0.0613
0.0264 0.0415 1.0666
(13) ROM/RGB to ITU-R.BT.709/RGB under D65
2.0102 0.7741 0.2368
0.4431 1.5013 0.0579
0.0009 0.2754 1.2733
(14) ROMM/RGB to ITU-R.BT.709/RGB under D65
2.0724 0.6649 0.4087
0.2252 1.2204 0.0054
0.0139 0.1395 1.1522
(15) SMPTE-C/RGB to ITU-R.BT.709/RGB under D65
0.9383 0.0504 0.0101
0.0178 0.9662 0.0166
0.0017 0.0044 1.0048
(16) SMPTE-240M/RGB to ITU-R.BT.709/RGB under D65
1.4076 0.4088 0.0001
0.0257 1.0262 0.0000
0.0254 0.0441 1.0683
(17) Sony-P22/RGB to ITU-R.BT.709/RGB under D65
1.0677 0.0787 0.0099
0.0240 0.9606 0.0158
0.0018 0.0298 0.9673
(18) Wide-Gamut/RGB to ITU-R.BT.709/RGB under D65
1.7455 0.8329 0.0862
0.1896 1.2958 0.1056
0.0117 0.0903 1.1008
(19) Wright/RGB to ITU-R.BT.709/RGB under D65
1.6689 0.6815 0.0116
0.1638 1.2273 0.0631
0.0134 0.1027 1.1149
Appendix 3
Conversion Matrices from RGB to ROMM/RGB
This appendix contains the conversion matrices for various RGB primary sets to ROMM/RGB under illuminant D50, as follows:
(1) Adobe/RGB98 to ROMM/RGB under D50
0.7821 0.0836 0.1343
0.1511 0.8260 0.0229
0.0367 0.0836 0.8798
(2) Bruse/RGB to ROMM/RGB under D50
0.6528 0.2126 0.1347
0.1261 0.8510 0.0231
0.0307 0.0870 0.8823
(3) CIE1931/RGB to ROMM/RGB under D50
0.6129 0.2096 0.1777
0.0001 1.0597 0.0597
0.0000 0.0122 0.9878
(4) CIE1964/RGB to ROMM/RGB under D50
0.8771 0.0392 0.1620
0.0230 1.0250 0.0480
0.0000 0.0567 0.9432
(5) EBU/RGB to ROMM/RGB under D50
0.6093 0.2659 0.1250
0.1177 0.8610 0.0214
0.0286 0.1532 0.8182
(6) Extended/RGB to ROMM/RGB under D50
0.8619 0.0196 0.1186
0.0688 0.9169 0.0144
0.0000 0.0341 0.9660
(7) Guild/RGB to ROMM/RGB under D50
0.6964 0.1555 0.1484
0.0574 0.9325 0.0101
0.0000 0.0298 0.9702
(8) Ink-jet/RGB to ROMM/RGB under D50
0.7374 0.1458 0.1168
0.0607 0.9186 0.0207
0.0000 0.0352 0.9649
(9) ITU-R.BT.709/RGB to ROMM/RGB under D50
0.5879 0.2853 0.1269
0.1135 0.8648 0.0217
0.0275 0.1410 0.8315
(10) Judd-Wyszecki/RGB to ROMM/RGB under D50
0.9295 0.1029 0.1733
0.0000 1.0639 0.0639
0.0000 0.0972 0.9028
(11) Kress/RGB to ROMM/RGB under D50
0.8618 0.0001 0.1381
0.0895 0.9267 0.0160
0.0002 0.0392 0.9606
(12) Laser/RGB to ROMM/RGB under D50
0.9949 0.1405 0.1457
0.0530 0.9921 0.0450
0.0000 0.1651 0.8349
(13) NTSC/RGB to ROMM/RGB under D50
0.7933 0.0824 0.1244
0.1290 0.8125 0.0585
0.0000 0.0822 0.9178
(14) ROM/RGB to ROMM/RGB under D50
0.9272 0.0036 0.0691
0.2105 1.2386 0.0280
0.0167 0.1177 1.1346
(15) SMPTE-C/RGB to ROMM/RGB under D50
0.5654 0.2994 0.1353
0.1268 0.8400 0.0331
0.0270 0.1324 0.8405
(16) SMPTE-240M/RGB to ROMM/RGB under D50
0.7744 0.0862 0.1394
0.1258 0.8503 0.0239
0.0000 0.0861 0.9140
(17) Sony-P22/RGB to ROMM/RGB under D50
0.6312 0.2405 0.1283
0.1453 0.8233 0.0314
0.0356 0.1669 0.7976
(18) Wide-Gamut/RGB to ROMM/RGB under D50
0.8982 0.0519 0.1537
0.0000 1.0390 0.0390
0.0000 0.0621 0.9380
(19) Wright/RGB to ROMM/RGB under D50
0.8622 0.0002 0.1377
0.0170 0.9990 0.0160
0.0000 0.0422 0.9578
Appendix 4
RGB Color-Encoding Standards
A4.1 SMPTE-C/RGB
SMPTE-C is the color-encoding standard for broadcasting in America. It uses the following primaries with D65 as the white point.¹
Red: x =0.630, y =0.340
Green: x =0.310, y =0.595
Blue: x =0.155, y =0.070.
The encoding formula to SMPTE-C/RGB is

  [R_SMPTE]   [  3.5058  −1.7397  −0.5440 ] [X_D65]
  [G_SMPTE] = [ −1.0690   1.9778   0.0352 ] [Y_D65]   (A4.1)
  [B_SMPTE]   [  0.0563  −0.1970   1.0501 ] [Z_D65]
The encoded SMPTE-C/RGB is transferred to give a linear response² as

  P′_SMPTE = 1.099 P_SMPTE^0.45 − 0.099   if 1 ≥ P_SMPTE ≥ 0.018,   (A4.2a)
  P′_SMPTE = 4.5 P_SMPTE                  if 0 < P_SMPTE < 0.018,   (A4.2b)
where P represents one of the RGB primaries. After all three components are transformed, they are further converted to the luminance and chrominance channels Y′, I′, and Q′, using Eq. (A4.3), in order to take advantage of the different information content of the chrominance and luminance channels with respect to information compression.³
  [Y′]   [ 0.299   0.587   0.114 ] [R′_SMPTE]      row sum = 1
  [I′] = [ 0.596  −0.274  −0.322 ] [G′_SMPTE]      row sum = 0   (A4.3)
  [Q′]   [ 0.212  −0.523   0.311 ] [B′_SMPTE]      row sum = 0.
The inverse transform from Y′I′Q′ via SMPTE-C/RGB to CIEXYZ follows this path:

  [R′_SMPTE]   [ 1.0   0.956   0.621 ] [Y′]
  [G′_SMPTE] = [ 1.0  −0.272  −0.647 ] [I′]   (A4.4)
  [B′_SMPTE]   [ 1.0  −1.105   1.702 ] [Q′]
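As a numerical check, the rotation of Eq. (A4.3) and the inverse of Eq. (A4.4) should cancel each other; a small sketch (helper names are mine):

```python
# Matrix of Eq. (A4.3): R'G'B' -> Y'I'Q', and its inverse from Eq. (A4.4).
M_YIQ = [[0.299,  0.587,  0.114],
         [0.596, -0.274, -0.322],
         [0.212, -0.523,  0.311]]
M_INV = [[1.0,  0.956,  0.621],
         [1.0, -0.272, -0.647],
         [1.0, -1.105,  1.702]]

def apply3(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

rgb = [0.4, 0.6, 0.2]
back = apply3(M_INV, apply3(M_YIQ, rgb))   # recovers rgb to about 3 decimals
```

Because the published coefficients are rounded to three decimals, the round trip agrees only to about that precision.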
After converting to SMPTE-C/RGB, each component is gamma-corrected using a gamma of 2.22 and an offset of 0.099, as given in Eq. (A4.5).

  P_SMPTE = [(P′_SMPTE + 0.099)/1.099]^2.22   if P′_SMPTE ≥ 0.081,   (A4.5a)
  P_SMPTE = (1/4.5) P′_SMPTE                  if P′_SMPTE < 0.081.   (A4.5b)
Finally, the gamma-corrected RGB are converted back to XYZ using the decoding matrix of Eq. (A4.6).

  [X_D65]   [ 0.3935  0.3653  0.1916 ] [R_SMPTE]
  [Y_D65] = [ 0.2124  0.7011  0.0866 ] [G_SMPTE]   (A4.6)
  [Z_D65]   [ 0.0187  0.1119  0.9582 ] [B_SMPTE]
The Xerox color-encoding standard employs SMPTE-C/RGB, but uses D50 as the white point for printing and copying. Because of the white-point difference, the Xerox RGB standard gives the following matrices for coding and decoding:⁴

  Encoding:                         Decoding:
  [  2.944  −1.461  −0.457 ]        [ 0.469  0.357  0.139 ]
  [ −1.095   2.026   0.036 ]        [ 0.253  0.684  0.063 ]
  [  0.078  −0.272   1.452 ]        [ 0.022  0.109  0.693 ]
A4.2 European TV Standard (EBU)
The European TV standard uses EBU/RGB primaries and D65 as the white point.
The chromaticity coordinates of the primaries are given as follows:
Red: x =0.640, y =0.330
Green: x =0.290, y =0.600
Blue: x =0.150, y =0.060.
The encoding formula to EBU/RGB is

  [R_EBU]   [  3.063  −1.393  −0.476 ] [X_D65]
  [G_EBU] = [ −0.969   1.876   0.042 ] [Y_D65]   (A4.7)
  [B_EBU]   [  0.068  −0.229   1.069 ] [Z_D65]
For EBU/RGB, the gamma correction is given in Eq. (A4.8) with a gamma of 0.45, an offset of 0.099, and a gain of 4.5 if P_EBU is smaller than 0.018.³

  P′_EBU = 1.099 P_EBU^0.45 − 0.099   if P_EBU > 0.018,
  P′_EBU = 4.5 P_EBU                  if P_EBU ≤ 0.018.   (A4.8)
Encoded RGB is further converted to the luminance and chrominance channels Y′, U′, and V′ to take advantage of the different information content of the chrominance and luminance channels.³
  [Y′]   [  0.299   0.587   0.114 ] [R′_EBU]      row sum = 1
  [U′] = [ −0.147  −0.289   0.436 ] [G′_EBU]      row sum = 0   (A4.9)
  [V′]   [  0.615  −0.515  −0.100 ] [B′_EBU]      row sum = 0.
The inverse transform from Y′U′V′ via EBU/RGB to CIEXYZ is given as follows:

  [R′_EBU]   [ 1.0   0.0     1.140 ] [Y′]
  [G′_EBU] = [ 1.0  −0.396  −0.581 ] [U′]   (A4.10)
  [B′_EBU]   [ 1.0   2.029   0.0   ] [V′]
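The rows of Eq. (A4.9) are the luma weights plus scaled color differences (U′ is proportional to B′ − Y′, and V′ to R′ − Y′), so a neutral input yields zero chrominance; a sketch (function name mine):

```python
def rgb_to_yuv(r, g, b):
    """Eq. (A4.9): gamma-corrected EBU R'G'B' -> Y'U'V'."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.147 * r - 0.289 * g + 0.436 * b   # ~0.492 * (b - y)
    v = 0.615 * r - 0.515 * g - 0.100 * b    # ~0.877 * (r - y)
    return y, u, v

y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)   # a gray: u = v = 0
```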
Resulting EBU/RGB values are inversely gamma corrected as

  P_EBU = [(P′_EBU + 0.099)/(1 + 0.099)]^2.22   if P′_EBU ≥ 0.081,   (A4.11a)
  P_EBU = (1/4.5) P′_EBU                        if P′_EBU < 0.081.   (A4.11b)
Finally, they are transformed to CIEXYZ using the decoding matrix of Eq. (A4.12) as

  [X_D65]   [ 0.431  0.342  0.178 ] [R_EBU]
  [Y_D65] = [ 0.222  0.707  0.071 ] [G_EBU]   (A4.12)
  [Z_D65]   [ 0.020  0.130  0.939 ] [B_EBU]
A4.3 American TV YIQ Standard
The American TV YIQ standard uses NTSC/RGB with the following chromaticity coordinates and illuminant C as the white point:³
Red: x =0.670, y =0.330
Green: x =0.210, y =0.710
Blue: x =0.140, y =0.080.
The gamma for NTSC/RGB is 2.2. The encoding formula to NTSC/RGB is

  [R_NTSC]   [  1.910  −0.532  −0.288 ] [X_C]
  [G_NTSC] = [ −0.985   1.999  −0.028 ] [Y_C]   (A4.13)
  [B_NTSC]   [  0.058  −0.118   0.898 ] [Z_C]
and the decoding matrix to XYZ is³

  [X_C]   [ 0.607  0.174  0.200 ] [R_NTSC]
  [Y_C] = [ 0.299  0.587  0.114 ] [G_NTSC]   (A4.14)
  [Z_C]   [ 0.000  0.066  1.116 ] [B_NTSC]
The encoded NTSC/RGB is further converted to the luminance and chrominance channels Y′, I′, and Q′ using Eq. (A4.3).³
A4.4 PhotoYCC
PhotoYCC is designed for encoding outdoor scenes. It is based on ITU-R BT.709-3/RGB, and the adaptive white point is D65.
Red: x =0.640, y =0.330
Green: x =0.300, y =0.600
Blue: x =0.150, y =0.060.
The viewing conditions are (i) no viewing flare, because any flare in the original scene is a part of the scene itself; (ii) average surround, meaning that scene objects are surrounded by other similarly illuminated objects; and (iii) typical daylight luminance levels (>5000 lux).⁵ The encoding equation is given in Eq. (A4.15).
  [R_YCC]   [  3.2410  −1.5374  −0.4986 ] [X_scene]
  [G_YCC] = [ −0.9692   1.8760   0.0416 ] [Y_scene]   (A4.15)
  [B_YCC]   [  0.0556  −0.2040   1.0570 ] [Z_scene]
There is no limitation on the ranges of the converted RGB tristimulus values. All values greater than 1 and less than 0 are retained; thus, the color gamut defined by PhotoYCC is unlimited. Because there are no constraints on the ranges, the gamma correction is slightly more complicated than in the previous RGB standards, to take care of negative values.
  P′_YCC = 1.099 P_YCC^0.45 − 0.099      if P_YCC ≥ 0.018,           (A4.16a)
  P′_YCC = 4.5 P_YCC                     if −0.018 < P_YCC < 0.018,  (A4.16b)
  P′_YCC = −1.099 |P_YCC|^0.45 + 0.099   if P_YCC ≤ −0.018,          (A4.16c)
where P represents one of the RGB primaries. The resulting RGB are rotated to Luma and Chromas as given in Eq. (A4.17).

  [Luma]      [  0.299   0.587   0.114 ] [R′_YCC]
  [Chroma1] = [ −0.299  −0.587   0.886 ] [G′_YCC]   (A4.17)
  [Chroma2]   [  0.701  −0.587  −0.114 ] [B′_YCC]
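Eq. (A4.16) extends the usual transfer to negative values with odd symmetry; a sketch (function name mine):

```python
def ycc_transfer(p):
    """Eq. (A4.16): PhotoYCC transfer, defined for negative values too."""
    if p >= 0.018:
        return 1.099 * p ** 0.45 - 0.099     # Eq. (A4.16a)
    if p > -0.018:
        return 4.5 * p                       # Eq. (A4.16b)
    return -1.099 * abs(p) ** 0.45 + 0.099   # Eq. (A4.16c)
```

The function is odd, ycc_transfer(−p) = −ycc_transfer(p), which is what lets out-of-gamut negative values survive encoding.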
The last step converts the Luma and Chromas to digital values Y, C1, and C2. For 8 bits/channel, we have

  Y  = (255/1.402) Luma,        (A4.18a)
  C1 = 111.40 Chroma1 + 156,    (A4.18b)
  C2 = 135.64 Chroma2 + 137.    (A4.18c)
A4.5 sRGB Encoding Standards
sRGB has been adopted as the default color space for the Internet. It is intended for CRT output encoding and, like PhotoYCC, is based on the ITU-R BT.709-3/RGB. The encoding and media white-point luminances are the same at 80 cd/m², and both encoding and media white points are D65. The viewing conditions are the following: (i) the background is 20% of the display white-point luminance (16 cd/m²), (ii) the surround is 20% reflectance of the ambient illuminance (4.1 cd/m²), (iii) the flare is 1% (0.8 cd/m²), (iv) the glare is 0.2 cd/m², and (v) the observed black point is 1.0 cd/m².⁶,⁷ There are several versions of sRGB. The encoding formula is given in Eq. (A4.19).
  [R_sRGB]   [  3.2410  −1.5374  −0.4986 ] [X_D65]
  [G_sRGB] = [ −0.9692   1.8760   0.0416 ] [Y_D65]   (A4.19)
  [B_sRGB]   [  0.0556  −0.2040   1.0570 ] [Z_D65]
Encoded RGB values are constrained within a range of [0, 1], which means that any out-of-range value is clipped using Eq. (A4.20), where P denotes a component of the RGB triplet.

  P_sRGB = 0   if P_sRGB < 0,   (A4.20a)
  P_sRGB = 1   if P_sRGB > 1.   (A4.20b)
Resulting sRGB values are transformed to nonlinear sR′G′B′ via the gamma correction of Eq. (A4.21), having a gamma of 1/2.4 (≈0.42), an offset of 0.055, and a slope of 12.92 if P_sRGB is smaller than 0.00304.⁶,⁷

  P′_sRGB = 1.055 P_sRGB^(1.0/2.4) − 0.055   if P_sRGB > 0.00304,   (A4.21a)
  P′_sRGB = 12.92 P_sRGB                     if P_sRGB ≤ 0.00304.   (A4.21b)
Finally, the sR′G′B′ values are scaled to 8-bit integers as shown in Eq. (A4.22).

  P_nbit = P′_sRGB × 255.   (A4.22)
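Per component, Eqs. (A4.20)–(A4.22) combine into one clip, gamma-correct, and scale step; a sketch (function name mine):

```python
def srgb_encode_component(p):
    """Eqs. (A4.20)-(A4.22): linear sRGB channel -> 8-bit code value."""
    p = min(max(p, 0.0), 1.0)                       # Eq. (A4.20): clip to [0, 1]
    if p > 0.00304:                                 # Eq. (A4.21a)
        p_prime = 1.055 * p ** (1.0 / 2.4) - 0.055
    else:                                           # Eq. (A4.21b)
        p_prime = 12.92 * p
    return round(p_prime * 255.0)                   # Eq. (A4.22)
```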
The initial version of sRGB was encoded to 8 bits per channel. The later version, sRGB64, has been extended to 16 bits by scaling with a factor of 8192 (this is actually 13 bits).
For the reverse transform from sRGB to CIEXYZ, the process starts by scaling digital sRGB to the range [0, 1] as given in Eq. (A4.23), in which the n-bit integer input is divided by the maximum value of the n-bit representation to give a floating-point value between 0 and 1.
  P′_sRGB = P_nbit/(2^n − 1).   (A4.23)
The gamma correction follows, with a gamma of 2.4, an offset of 0.055, and a slope of 1/12.92 if P′_sRGB is smaller than 0.03928.

  P_sRGB = [(P′_sRGB + 0.055)/1.055]^2.4   if P′_sRGB > 0.03928,   (A4.24a)
  P_sRGB = P′_sRGB/12.92                   if P′_sRGB ≤ 0.03928.   (A4.24b)
Resulting sRGB values are linearly transformed to CIEXYZ under the white point of D65.

  [X_D65]   [ 0.4124  0.3576  0.1805 ] [R_sRGB]
  [Y_D65] = [ 0.2126  0.7152  0.0722 ] [G_sRGB]   (A4.25)
  [Z_D65]   [ 0.0193  0.1192  0.9505 ] [B_sRGB]
It has been claimed that the reverse transform closely fits a simple power function with an exponent of 2.2, thus maintaining color consistency of desktop and video images.⁷
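The reverse path of Eqs. (A4.23) and (A4.24), per component, can be sketched as (function name mine):

```python
def srgb_decode_component(p_nbit, n=8):
    """Eqs. (A4.23)-(A4.24): digital sRGB channel -> linear value."""
    p = p_nbit / (2 ** n - 1)                # Eq. (A4.23)
    if p > 0.03928:                          # Eq. (A4.24a)
        return ((p + 0.055) / 1.055) ** 2.4
    return p / 12.92                         # Eq. (A4.24b)
```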
A4.6 E-sRGB Encoding Standard
The e-sRGB is the latest extension of sRGB. It provides a way of encoding output-referred images by removing the constraint of ranges, allowing encoded RGB values to go above and below the range of [0, 1]. The encoding transform is⁸
  [R_esRGB]   [  3.2406  −1.5372  −0.4986 ] [X_D65]
  [G_esRGB] = [ −0.9689   1.8758   0.0415 ] [Y_D65]   (A4.26)
  [B_esRGB]   [  0.0557  −0.2040   1.0570 ] [Z_D65]
Note that the matrix coefficients are slightly changed. The difference is not significant enough to cause an accuracy problem. Because of the change in ranges, the gamma correction of Eq. (A4.21) becomes Eq. (A4.27).
  P′_esRGB = 1.055 P_esRGB^(1.0/2.4) − 0.055     if P_esRGB > 0.0031308,               (A4.27a)
  P′_esRGB = 12.92 P_esRGB                       if −0.0031308 ≤ P_esRGB ≤ 0.0031308,  (A4.27b)
  P′_esRGB = −1.055 |P_esRGB|^(1.0/2.4) + 0.055  if P_esRGB < −0.0031308.              (A4.27c)
Note that the numerical accuracy of the constant is also improved. It has three levels of precision: 10 bits per channel for general applications, and 12 and 16 bits per channel for photography and graphic-art applications. Therefore, the digital representation is given in Eq. (A4.28).
  P′_esRGBnbit = 255.0 × 2^(n−9) × P′_esRGB + offset,   (A4.28)

where offset = 2^(n−2) + 2^(n−3), and n = 10, 12, or 16.
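Eqs. (A4.28) and (A4.29) in code form; for n = 10 the offset evaluates to 2⁸ + 2⁷ = 384 (function names mine):

```python
def esrgb_to_digital(p_prime, n=10):
    """Eq. (A4.28): nonlinear e-sRGB value -> n-bit code (n = 10, 12, or 16)."""
    offset = 2 ** (n - 2) + 2 ** (n - 3)
    return 255.0 * 2 ** (n - 9) * p_prime + offset

def digital_to_esrgb(code, n=10):
    """Eq. (A4.29): inverse of Eq. (A4.28)."""
    offset = 2 ** (n - 2) + 2 ** (n - 3)
    return (code - offset) / (255.0 * 2 ** (n - 9))
```

The offset shifts the code range so that negative P′ values remain representable.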
The inverse transform is given as follows. First, the digital to floating-point conversion of Eq. (A4.29):

  P′_esRGB = (P′_esRGBnbit − offset)/(255.0 × 2^(n−9)).   (A4.29)
The gamma correction of Eq. (A4.24) becomes

  P_esRGB = [(P′_esRGB + 0.055)/1.055]^2.4     if P′_esRGB > 0.04045,              (A4.30a)
  P_esRGB = P′_esRGB/12.92                     if −0.04045 ≤ P′_esRGB ≤ 0.04045,   (A4.30b)
  P_esRGB = −[(|P′_esRGB| + 0.055)/1.055]^2.4  if P′_esRGB < −0.04045.             (A4.30c)

Note that the constant has changed. Finally, the conversion to CIEXYZ is the same as Eq. (A4.25).
A4.7 Kodak ROMM/RGB Encoding Standard
Kodak ROMM/RGB uses the following RGB primaries:⁹⁻¹¹

Red (λ = 700 nm): x = 0.7347, y = 0.2653,
Green: x = 0.1596, y = 0.8404,
Blue: x = 0.0366, y = 0.0001.
The encoding and media white-point luminances are the same at 142 cd/m², whereas the adapted white-point luminance is 160 cd/m². Both encoding and media white points are D50, in accordance with the Graphic Arts standard for viewing prints. The viewing surround is average (20% of the adapted white-point luminance), the flare and glare are included in colorimetric measurements based on the CIE 1931 observer, and the observed black point is 0.5 cd/m². The encoding formula is given in Eq. (A4.31).
  [R_ROMM]   [  1.3460  −0.2556  −0.0511 ] [X_D50]
  [G_ROMM] = [ −0.5446   1.5082   0.0205 ] [Y_D50]   (A4.31)
  [B_ROMM]   [  0.0      0.0      1.2123 ] [Z_D50]
and RGB has a range of [0, 1] when Y_D50 of the ideal reference white is normalized to 1.0; therefore, the clipping of out-of-range values is still needed. Equation (A4.32) gives the gamma correction.
  P′_ROMM = 16 P_ROMM          if P_ROMM < 0.001953, and   (A4.32a)
  P′_ROMM = P_ROMM^(1.0/1.8)   if P_ROMM ≥ 0.001953.       (A4.32b)
Finally, Eq. (A4.33) scales the results to n-bit integers.

  P_ROMMnbit = P′_ROMM (2^n − 1),   (A4.33)

where n can be 8, 12, or 16.
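Eqs. (A4.32) and (A4.33), per component; a sketch (function name mine):

```python
def romm_encode(p, n=8):
    """Eqs. (A4.32)-(A4.33): linear ROMM component in [0, 1] -> n-bit integer."""
    if p < 0.001953:                # Eq. (A4.32a): linear toe
        p_prime = 16.0 * p
    else:                           # Eq. (A4.32b): gamma 1.8
        p_prime = p ** (1.0 / 1.8)
    return round(p_prime * (2 ** n - 1))   # Eq. (A4.33)
```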
For the reverse transform from ROMM/RGB to CIEXYZ, the mathematical formulas are given in Eqs. (A4.34)–(A4.36). Equation (A4.34) does the scaling to [0, 1], in which the n-bit integer input is divided by the maximum value of the n-bit representation to give a floating-point value between 0 and 1.
  P′_ROMM = P_ROMMnbit/(2^n − 1).   (A4.34)
Equation (A4.35) performs the gamma correction.

  P_ROMM = P′_ROMM/16    if P′_ROMM < 0.031248,   (A4.35a)
and
  P_ROMM = P′_ROMM^1.8   if P′_ROMM ≥ 0.031248.   (A4.35b)
Equation (A4.36) is the linear transform from RGB to XYZ.

  [X_D50]   [ 0.7977  0.1352  0.0313 ] [R_ROMM]
  [Y_D50] = [ 0.2880  0.7119  0.0001 ] [G_ROMM]   (A4.36)
  [Z_D50]   [ 0.0     0.0     0.8249 ] [B_ROMM]
A4.8 Kodak RIMM/RGB
RIMM/RGB is a companion color-encoding standard to ROMM/RGB. It uses the same RGB primaries and D50 white point as ROMM/RGB. It is intended to encode outdoor scenes; thus, it has a high luminance level of 1500 cd/m². There is no viewing flare for the scene. The gamma correction is similar to sRGB with a scaling factor of 1/1.402.⁹⁻¹¹
  P′_ROMM = (1.099 P_ROMM^0.45 − 0.099)/1.402   if P_ROMM ≥ 0.018,   (A4.37a)
and
  P′_ROMM = (4.5 P_ROMM)/1.402                  if P_ROMM < 0.018.   (A4.37b)
The reverse encoding is the inverse of Eq. (A4.37).

  P_ROMM = [(1.402 P′_ROMM + 0.099)/1.099]^(1/0.45)   if P′_ROMM ≥ 0.01284,   (A4.38a)
and
  P_ROMM = (1.402 P′_ROMM)/4.5                        if P′_ROMM < 0.01284.   (A4.38b)

Similar to ROMM/RGB, it can be encoded in 8, 12, and 16 bits.
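Eqs. (A4.37) and (A4.38) as a forward/inverse pair, using the thresholds as printed above (function names mine):

```python
def rimm_transfer(p):
    """Eq. (A4.37): RIMM/RGB forward transfer."""
    if p >= 0.018:
        return (1.099 * p ** 0.45 - 0.099) / 1.402
    return (4.5 * p) / 1.402

def rimm_inverse(p_prime):
    """Eq. (A4.38): inverse transfer."""
    if p_prime >= 0.01284:
        return ((1.402 * p_prime + 0.099) / 1.099) ** (1.0 / 0.45)
    return (1.402 * p_prime) / 4.5
```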
References
1. SMPTE RP 145-1999, SMPTE C color monitor colorimetry, Society of Motion Picture and Television Engineers, White Plains, NY (1999).
2. SMPTE RP 176-1997, Derivation of reference signals for television camera
color evaluation, Society of Motion Picture and Television Engineers, White
Plains, NY (1997).
3. A. Ford and A. Roberts, Colour space conversions, http://www.inforamp.net/poynton/PDFs/colourreq.pdf
4. Color Encoding Standard, Xerox Corp., Xerox Systems Institute Sunnyvale,
CA, p. C-3 (1989).
5. E. J. Giorgianni and T. E. Madden, Digital Color Management, Addison-Wesley, Reading, MA, pp. 489–497 (1998).
6. M. Stokes, M. Anderson, S. Chandrasekar, and R. Motta, A standard default color space for the Internet: sRGB, Version 1.10, Nov. 5 (1996).
7. IEC/3WD 61966-2.1: Colour measurement and management in multimedia systems and equipment, Part 2.1: Default RGB colour space: sRGB, http://www.srgb.com (1998).
8. PIMA 7667: Photography, Electronic still picture imaging, Extended sRGB color encoding: e-sRGB (2001).
9. Eastman Kodak Company, Reference output medium metric RGB color space
(ROMM RGB) white paper, http://www.colour.org/tc8-05.
10. Eastman Kodak Company, Reference input medium metric RGB color space
(RIMM RGB) white paper, http://www.pima.net/standards/iso/tc42/wq18.
11. K. E. Spaulding, G. J. Woolfe, and E. J. Giorgianni, Reference input/output
medium metric RGB color encodings (RIMM/ROMM RGB), PICS 2000
Conf., Portland, OR, March (2000).
Appendix 5
Matrix Inversion
There are several ways to invert a matrix; here, we describe Gaussian elimination for the purpose of decreasing computational costs. Gaussian elimination is the combination of triangularization and back substitution. The triangularization makes all matrix elements in the lower-left part, below the diagonal, zero. Consequently, the last row of the matrix contains only one element, which gives the solution for the last coefficient. This solution is substituted back into the preceding rows to calculate the other coefficients.
A5.1 Triangularization
For convenience of explanation, let us rewrite the matrix as a set of simultaneous equations and substitute the matrix elements with k_ij as follows:

  k11 a1 + k12 a2 + k13 a3 + k14 a4 + k15 a5 + k16 a6 = q1,   (A5.1a)
  k21 a1 + k22 a2 + k23 a3 + k24 a4 + k25 a5 + k26 a6 = q2,   (A5.1b)
  k31 a1 + k32 a2 + k33 a3 + k34 a4 + k35 a5 + k36 a6 = q3,   (A5.1c)
  k41 a1 + k42 a2 + k43 a3 + k44 a4 + k45 a5 + k46 a6 = q4,   (A5.1d)
  k51 a1 + k52 a2 + k53 a3 + k54 a4 + k55 a5 + k56 a6 = q5,   (A5.1e)
  k61 a1 + k62 a2 + k63 a3 + k64 a4 + k65 a5 + k66 a6 = q6,   (A5.1f)

where k11 = Σx_i², k12 = Σx_i y_i, k13 = Σx_i z_i, k14 = Σx_i² y_i, k15 = Σx_i y_i z_i, k16 = Σx_i² z_i, and q1 = Σx_i p_i. Similar substitutions are performed for Eqs. (A5.1b)–(A5.1f).
First iteration:

  Eq. (A5.1b) − (k21/k11) × Eq. (A5.1a):
    ¹k22 a2 + ¹k23 a3 + ¹k24 a4 + ¹k25 a5 + ¹k26 a6 = ¹q2,   (A5.2a)
  Eq. (A5.1c) − (k31/k11) × Eq. (A5.1a):
    ¹k32 a2 + ¹k33 a3 + ¹k34 a4 + ¹k35 a5 + ¹k36 a6 = ¹q3,   (A5.2b)
  Eq. (A5.1d) − (k41/k11) × Eq. (A5.1a):
    ¹k42 a2 + ¹k43 a3 + ¹k44 a4 + ¹k45 a5 + ¹k46 a6 = ¹q4,   (A5.2c)
  Eq. (A5.1e) − (k51/k11) × Eq. (A5.1a):
    ¹k52 a2 + ¹k53 a3 + ¹k54 a4 + ¹k55 a5 + ¹k56 a6 = ¹q5,   (A5.2d)
  Eq. (A5.1f) − (k61/k11) × Eq. (A5.1a):
    ¹k62 a2 + ¹k63 a3 + ¹k64 a4 + ¹k65 a5 + ¹k66 a6 = ¹q6.   (A5.2e)

After the first iteration, the matrix elements k21, k31, k41, k51, and k61 are eliminated.
Second iteration:

  Eq. (A5.2b) − (¹k32/¹k22) × Eq. (A5.2a):
    ²k33 a3 + ²k34 a4 + ²k35 a5 + ²k36 a6 = ²q3,   (A5.3a)
  Eq. (A5.2c) − (¹k42/¹k22) × Eq. (A5.2a):
    ²k43 a3 + ²k44 a4 + ²k45 a5 + ²k46 a6 = ²q4,   (A5.3b)
  Eq. (A5.2d) − (¹k52/¹k22) × Eq. (A5.2a):
    ²k53 a3 + ²k54 a4 + ²k55 a5 + ²k56 a6 = ²q5,   (A5.3c)
  Eq. (A5.2e) − (¹k62/¹k22) × Eq. (A5.2a):
    ²k63 a3 + ²k64 a4 + ²k65 a5 + ²k66 a6 = ²q6.   (A5.3d)

After the second iteration, the elements ¹k32, ¹k42, ¹k52, and ¹k62 are eliminated.
Third iteration:

  Eq. (A5.3b) − (²k43/²k33) × Eq. (A5.3a):
    ³k44 a4 + ³k45 a5 + ³k46 a6 = ³q4,   (A5.4a)
  Eq. (A5.3c) − (²k53/²k33) × Eq. (A5.3a):
    ³k54 a4 + ³k55 a5 + ³k56 a6 = ³q5,   (A5.4b)
  Eq. (A5.3d) − (²k63/²k33) × Eq. (A5.3a):
    ³k64 a4 + ³k65 a5 + ³k66 a6 = ³q6.   (A5.4c)

After the third iteration, the elements ²k43, ²k53, and ²k63 are eliminated.
Fourth iteration:

  Eq. (A5.4b) − (³k54/³k44) × Eq. (A5.4a):
    ⁴k55 a5 + ⁴k56 a6 = ⁴q5,   (A5.5a)
  Eq. (A5.4c) − (³k64/³k44) × Eq. (A5.4a):
    ⁴k65 a5 + ⁴k66 a6 = ⁴q6.   (A5.5b)

After the fourth iteration, the elements ³k54 and ³k64 are eliminated.
Fifth iteration:

  Eq. (A5.5b) − (⁴k65/⁴k55) × Eq. (A5.5a):
    ⁵k66 a6 = ⁵q6.   (A5.6a)

After the fifth iteration, the element ⁴k65 is eliminated. Now, all elements in the lower-left triangle of the matrix are zero.
A5.2 Back Substitution
The a
i
value is obtained by back substituting a
i+1
sequentially via Eqs. (A5.5a),
(A5.4a), (A5.3a), (A5.2a), and (A5.1a).
5
k
66
a
6
=
5
q
6
, (A5.6a)
4
k
55
a
5
+
4
k
56
a
6
=
4
q
5
, (A5.5a)
3
k
44
a
4
+
3
k
45
a
5
+
3
k
46
a
6
=
3
q
4
, (A5.4a)
492 Computational Color Technology
2
k
33
a
3
+
2
k
34
a
4
+
2
k
35
a
5
+
2
k
36
a
6
=
2
q
3
, (A5.3a)
1
k
22
a
2
+
1
k
23
a
3
+
1
k
24
a
4
+
1
k
25
a
5
+
1
k
26
a
6
=
1
q
2
, (A5.2a)
k
11
a
1
+k
12
a
2
+k
13
a
3
+k
14
a
4
+k
15
a
5
+k
16
a
6
=q
1
. (A5.1a)
From Eq. (A5.6a), we obtain

a₆ = ⁵q₆/⁵k₆₆.  (A5.6a)

Substituting a₆ into Eq. (A5.5a), we obtain a₅ as

a₅ = [⁴q₅ − ⁴k₅₆(⁵q₆/⁵k₆₆)]/⁴k₅₅.  (A5.6b)

We then substitute a₅ and a₆ into Eq. (A5.4a) to compute a₄. Continuing these back substitutions, we can determine all aᵢ. The coefficients are stored for use in computing pᵢ via the selected polynomial.
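The procedure above translates directly into code. The sketch below is ours (Python, and the function name is our own); like the appendix, it performs no pivoting, so it assumes the diagonal elements remain nonzero. It runs the forward elimination and then the back substitution for a general n-by-n system K a = q:

```python
def solve_gaussian(K, q):
    """Solve K a = q by forward elimination and back substitution."""
    n = len(q)
    K = [row[:] for row in K]            # work on copies
    q = list(q)
    # Forward elimination: iteration j zeroes the elements below the
    # diagonal in column j, as in Eqs. (A5.2)-(A5.6).
    for j in range(n - 1):
        for i in range(j + 1, n):
            f = K[i][j] / K[j][j]
            for c in range(j, n):
                K[i][c] -= f * K[j][c]
            q[i] -= f * q[j]
    # Back substitution: solve for the last unknown first, then work
    # upward, exactly as in Section A5.2.
    a = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(K[i][c] * a[c] for c in range(i + 1, n))
        a[i] = (q[i] - s) / K[i][i]
    return a
```

For the 6-by-6 system of this appendix, the returned list holds a₁ through a₆, which can then be stored for evaluating the selected polynomial.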
Appendix 6
Color Errors of Reconstructed CRI Spectra with Respect to Measured Values
Sample   Basis vectors   X   Y   Z   L*   a*   b*   ΔE*ab   s   s(std)
1: Measured 34.57 30.42 18.53 62.01 18.94 12.93
1: Cohen 2 33.89 31.37 18.00 62.82 13.13 15.50 6.37 0.016 0.020
1: Cohen 3 33.36 30.29 18.57 61.91 15.22 12.65 3.69 0.013 0.017
1: Cohen 4 34.42 30.42 18.93 62.02 18.42 12.05 0.98 0.007 0.008
1: Eem 2 34.78 32.18 18.03 63.49 13.30 16.59 6.86 0.022 0.039
1: Eem 3 33.83 30.30 19.43 61.91 16.82 10.81 2.96 0.020 0.035
1: Eem 4 34.83 30.42 19.81 62.02 19.82 10.20 2.86 0.016 0.032
1: Eem 5 34.84 30.41 19.77 62.01 19.90 10.26 2.83 0.016 0.032
1: Eem 8 34.47 30.41 18.43 62.00 18.64 13.13 0.34 0.007 0.010
2: Measured 28.97 29.33 11.46 61.07 2.68 29.30
2: Cohen 2 26.26 24.28 13.86 56.36 12.20 14.42 18.27 0.042 0.048
2: Cohen 3 28.81 29.45 11.10 61.17 1.63 30.57 1.66 0.007 0.009
2: Cohen 4 29.08 29.48 11.19 61.20 2.55 30.33 1.07 0.006 0.008
2: Eem 2 27.22 25.51 14.85 57.57 10.90 13.92 17.78 0.034 0.039
2: Eem 3 29.21 29.42 11.93 61.15 3.26 28.03 1.38 0.008 0.011
2: Eem 4 29.05 29.40 11.87 61.13 2.74 28.18 1.11 0.008 0.011
2: Eem 5 29.08 29.36 11.75 61.10 2.99 28.48 0.86 0.006 0.008
2: Eem 8 28.97 29.33 11.37 61.07 2.68 29.59 0.31 0.003 0.004
3: Measured 25.05 30.59 7.73 62.16 17.85 43.92
3: Cohen 2 22.65 21.46 13.42 53.45 9.17 10.57 43.84 0.079 0.093
3: Cohen 3 27.06 30.41 8.65 62.00 8.85 40.18 9.80 0.032 0.044
3: Cohen 4 24.45 30.09 7.75 61.73 18.59 43.10 1.15 0.018 0.023
3: Eem 2 23.58 22.90 14.71 54.97 6.74 9.79 42.71 0.073 0.087
3: Eem 3 27.56 30.74 8.86 62.29 8.09 39.91 10.61 0.033 0.045
3: Eem 4 24.95 30.42 7.88 62.01 17.66 43.09 0.88 0.023 0.028
3: Eem 5 24.88 30.53 8.21 62.11 18.35 41.98 2.00 0.017 0.021
3: Eem 8 24.73 30.52 7.66 62.10 18.94 44.07 1.04 0.014 0.019
4: Measured 20.67 29.00 16.56 60.78 31.71 15.28
4: Cohen 2 19.14 21.70 21.38 53.71 8.78 7.32 32.99 0.066 0.082
4: Cohen 3 23.14 29.81 17.06 61.49 23.28 15.33 8.49 0.031 0.035
4: Cohen 4 21.52 29.61 16.50 61.32 29.99 16.34 2.12 0.023 0.026
4: Eem 2 20.39 24.01 22.31 56.10 12.90 5.03 28.10 0.049 0.065
4: Eem 3 23.18 29.51 18.20 61.23 21.99 12.30 10.21 0.033 0.039
4: Eem 4 20.75 29.21 17.29 60.97 32.13 13.91 1.43 0.019 0.022
4: Eem 5 20.82 29.10 16.95 60.87 31.38 14.52 0.84 0.009 0.012
4: Eem 8 20.70 29.02 16.61 60.80 31.61 15.19 0.16 0.004 0.004
5: Measured 24.46 30.24 30.75 61.86 19.09 9.70
5: Cohen 2 24.11 28.94 31.45 60.73 15.73 12.74 4.70 0.012 0.016
5: Cohen 3 24.83 30.40 30.67 61.99 18.10 9.35 1.09 0.007 0.008
5: Cohen 4 24.64 30.37 30.61 61.98 18.83 9.28 0.53 0.006 0.007
5: Eem 2 25.46 31.37 31.02 62.82 18.96 8.46 1.58 0.019 0.041
5: Eem 3 24.92 30.30 31.82 61.92 17.36 11.24 2.34 0.016 0.039
5: Eem 4 24.78 30.29 31.76 61.90 17.91 11.19 1.92 0.015 0.039
5: Eem 5 24.78 30.28 31.73 61.89 17.85 11.16 1.94 0.016 0.039
5: Eem 8 24.43 30.26 30.54 61.88 19.30 9.33 0.41 0.009 0.013
6: Measured 27.09 29.15 43.68 60.91 4.05 29.19
6: Cohen 2 29.87 35.47 37.90 66.12 15.63 12.74 20.74 0.059 0.069
6: Cohen 3 26.90 29.45 41.11 61.18 5.95 25.49 4.14 0.020 0.041
6: Cohen 4 27.00 29.47 41.14 61.19 5.58 25.52 3.97 0.020 0.041
6: Eem 2 31.50 38.20 36.97 68.16 18.42 7.94 26.62 0.080 0.092
6: Eem 3 26.83 29.00 43.83 60.78 4.55 29.61 0.61 0.018 0.028
6: Eem 4 27.27 29.06 44.00 60.83 2.95 29.73 1.29 0.018 0.027
6: Eem 5 27.23 29.13 44.24 60.90 3.43 29.90 0.99 0.012 0.024
6: Eem 8 27.02 29.15 43.52 60.92 4.34 28.99 0.30 0.005 0.006
7: Measured 33.11 29.39 39.76 61.12 17.70 23.84
7: Cohen 2 37.80 40.00 34.17 69.47 2.44 1.73 31.02 0.094 0.107
7: Cohen 3 32.24 28.71 40.19 60.52 17.20 25.44 1.76 0.023 0.029
7: Cohen 4 32.89 28.78 40.41 60.59 19.21 25.61 2.42 0.021 0.027
7: Eem 2 38.85 41.24 32.20 70.34 2.86 2.69 34.78 0.117 0.144
7: Eem 3 32.56 28.85 41.45 60.65 17.78 26.85 3.04 0.048 0.083
7: Eem 4 33.96 29.02 41.98 60.80 22.06 27.26 5.59 0.049 0.081
7: Eem 5 33.85 29.20 42.52 60.96 21.01 27.66 5.09 0.038 0.075
7: Eem 8 33.01 29.41 39.29 61.14 17.28 23.18 0.76 0.011 0.019
8: Measured 38.68 31.85 34.01 63.22 27.30 12.27
8: Cohen 2 45.79 43.02 26.11 71.57 12.65 14.69 31.76 0.104 0.115
8: Cohen 3 40.61 32.50 31.72 63.75 31.02 7.93 5.80 0.043 0.064
8: Cohen 4 39.59 32.38 31.37 63.65 28.30 7.57 4.85 0.041 0.063
8: Eem 2 47.09 44.24 25.93 72.39 12.79 16.41 33.39 0.110 0.124
8: Eem 3 40.80 31.85 35.18 63.22 33.92 13.96 6.91 0.031 0.038
8: Eem 4 39.09 31.64 34.53 63.04 29.35 13.34 2.39 0.026 0.031
8: Eem 5 39.03 31.74 34.83 63.12 28.81 13.62 2.09 0.022 0.026
8: Eem 8 38.86 31.92 33.97 63.28 27.62 12.11 0.44 0.011 0.014
9: Measured 23.30 12.42 3.24 41.88 61.97 31.80
9: Cohen 2 36.40 26.08 0.33 58.11 41.94 94.02 67.34 0.106 0.138
9: Cohen 3 30.70 16.46 4.58 47.57 67.42 33.32 8.09 0.072 0.088
9: Cohen 4 25.94 14.43 3.01 44.84 60.51 38.53 7.51 0.054 0.066
9: Eem 2 36.27 24.70 0.18 56.78 47.24 94.56 66.16 0.102 0.136
9: Eem 3 31.02 16.18 6.32 47.22 70.13 24.04 12.51 0.069 0.087
9: Eem 4 23.60 12.53 3.21 42.05 62.52 32.35 0.86 0.026 0.034
9: Eem 5 23.64 12.46 2.98 41.94 63.22 33.76 2.36 0.023 0.032
9: Eem 8 23.73 12.63 3.05 42.19 62.50 33.72 2.05 0.023 0.030
10: Measured 58.82 60.24 9.50 81.97 1.78 71.61
10: Cohen 2 52.44 44.80 17.32 72.76 25.54 34.15 45.33 0.120 0.143
10: Cohen 3 60.04 60.22 9.10 81.96 4.73 72.96 3.30 0.021 0.026
10: Cohen 4 58.81 60.07 8.68 81.88 2.16 74.32 2.75 0.016 0.020
10: Eem 2 54.12 46.71 20.40 74.01 24.49 29.65 48.39 0.112 0.134
10: Eem 3 61.12 60.48 10.12 82.10 6.67 69.76 5.28 0.027 0.032
10: Eem 4 58.75 60.19 9.23 81.94 1.73 72.50 0.90 0.006 0.008
10: Eem 5 58.74 60.20 9.26 81.95 1.69 72.40 0.80 0.006 0.007
10: Eem 8 58.81 60.25 9.47 81.97 1.74 71.70 0.11 0.004 0.005
11: Measured 12.10 19.81 11.95 51.62 41.14 11.55
11: Cohen 2 13.47 15.89 16.78 46.83 11.37 9.31 36.71 0.055 0.068
11: Cohen 3 15.86 20.74 14.20 52.66 22.00 7.13 19.74 0.042 0.052
11: Cohen 4 12.91 20.38 13.18 52.27 38.44 9.16 3.71 0.023 0.030
11: Eem 2 14.25 17.45 17.12 48.82 15.08 6.65 31.96 0.052 0.064
11: Eem 3 15.76 20.42 14.91 52.31 21.08 4.70 21.27 0.048 0.057
11: Eem 4 12.26 19.99 13.59 51.83 40.95 7.31 4.24 0.028 0.034
11: Eem 5 12.32 19.89 13.28 51.71 40.04 7.93 3.79 0.023 0.029
11: Eem 8 11.91 19.76 11.82 51.56 42.24 11.83 1.08 0.010 0.013
12: Measured 5.26 5.92 21.27 29.21 5.24 49.35
12: Cohen 2 7.39 10.71 14.80 39.08 25.01 17.84 38.44 0.059 0.074
12: Cohen 3 5.52 6.90 16.83 31.59 12.43 35.70 15.58 0.046 0.065
12: Cohen 4 4.62 6.85 16.48 31.46 23.02 35.09 22.82 0.045 0.064
12: Eem 2 8.05 11.86 14.23 40.99 27.12 13.08 43.93 0.062 0.079
12: Eem 3 5.23 6.31 18.37 30.19 9.82 41.58 9.03 0.040 0.058
12: Eem 4 4.62 6.38 18.05 30.35 18.18 40.61 15.57 0.042 0.057
12: Eem 5 4.73 6.44 17.93 30.50 17.34 40.08 15.22 0.042 0.056
12: Eem 8 5.24 6.09 20.55 29.65 7.32 47.14 3.00 0.021 0.025
13: Measured 61.68 58.09 31.44 80.79 13.63 21.87
13: Cohen 2 57.54 53.51 31.26 78.18 15.03 17.64 5.16 0.046 0.065
13: Cohen 3 60.00 58.49 28.61 81.01 8.72 26.73 6.90 0.035 0.046
13: Cohen 4 61.54 58.67 29.14 81.11 11.92 26.05 4.53 0.024 0.041
13: Eem 2 59.65 56.06 32.90 79.65 13.77 17.68 4.32 0.023 0.028
13: Eem 3 60.72 58.16 31.33 80.83 11.20 22.11 2.40 0.016 0.020
13: Eem 4 61.68 58.28 31.69 80.89 13.17 21.66 0.47 0.011 0.016
13: Eem 5 61.71 58.22 31.52 80.86 13.39 21.88 0.21 0.010 0.012
13: Eem 8 61.66 58.09 31.43 80.79 13.58 21.89 0.04 0.004 0.005
14: Measured 9.70 11.75 4.15 40.82 12.35 24.13
14: Cohen 2 9.27 8.91 5.86 35.82 5.70 6.50 25.76 0.029 0.034
14: Cohen 3 10.63 11.67 4.39 40.69 4.61 22.49 7.98 0.016 0.023
14: Cohen 4 9.28 11.51 3.93 40.42 14.03 24.77 1.78 0.010 0.012
14: Eem 2 9.63 9.47 6.26 36.87 4.09 6.46 24.49 0.027 0.032
14: Eem 3 10.82 11.81 4.52 40.91 4.14 22.17 8.51 0.016 0.023
14: Eem 4 9.62 11.66 4.07 40.67 12.36 24.37 0.32 0.013 0.017
14: Eem 5 9.58 11.73 4.28 40.78 13.18 23.33 1.07 0.008 0.012
14: Eem 8 9.52 11.71 4.10 40.75 13.48 24.32 1.07 0.007 0.011
Appendix 7
Color Errors of Reconstructed CRI Spectra with Respect to Measured Values Using Tristimulus Inputs
Sample   X   Y   Z   L*   a*   b*   ΔE*ab   s   s(std)
1: Measured 34.57 30.42 18.53 62.01 18.94 12.93
1: Cohen 34.57 30.42 18.53 62.01 18.94 12.93 0.0 0.019 0.028
1: Eem 34.57 30.42 18.53 62.01 18.94 12.93 0.0 0.022 0.037
2: Measured 28.97 29.33 11.46 61.07 2.68 29.30
2: Cohen 28.97 29.33 11.46 61.07 2.68 29.30 0.0 0.007 0.011
2: Eem 28.97 29.33 11.46 61.07 2.68 29.30 0.0 0.008 0.012
3: Measured 25.05 30.59 7.73 62.16 17.85 43.92
3: Cohen 25.05 30.59 7.73 62.16 17.85 43.92 0.0 0.041 0.061
3: Eem 25.05 30.59 7.73 62.16 17.85 43.92 0.0 0.040 0.064
4: Measured 20.67 29.00 16.56 60.78 31.71 15.28
4: Cohen 20.67 29.00 16.56 60.78 31.71 15.28 0.0 0.041 0.052
4: Eem 20.67 29.00 16.56 60.78 31.71 15.28 0.0 0.048 0.058
5: Measured 24.46 30.24 30.75 61.86 19.09 9.70
5: Cohen 24.46 30.24 30.75 61.86 19.09 9.70 0.0 0.008 0.009
5: Eem 24.46 30.24 30.75 61.86 19.09 9.70 0.0 0.018 0.041
6: Measured 27.09 29.15 43.68 60.91 4.05 29.19
6: Cohen 27.09 29.15 43.68 60.91 4.05 29.19 0.0 0.018 0.046
6: Eem 27.09 29.15 43.68 60.91 4.05 29.19 0.0 0.019 0.028
7: Measured 33.11 29.39 39.76 61.12 17.70 23.84
7: Cohen 33.11 29.39 39.76 61.12 17.70 23.84 0.0 0.026 0.030
7: Eem 33.11 29.39 39.76 61.12 17.70 23.84 0.0 0.049 0.084
8: Measured 38.68 31.85 34.01 63.22 27.30 12.27
8: Cohen 38.68 31.85 34.01 63.22 27.30 12.27 0.00 0.047 0.073
8: Eem 38.68 31.85 34.01 63.22 27.30 12.27 0.00 0.046 0.056
9: Measured 23.30 12.42 3.24 41.88 61.97 31.80
9: Cohen 23.37 13.14 3.71 42.97 57.55 30.58 4.71 0.107 0.148
9: Eem 23.39 13.16 3.49 43.01 57.47 32.03 4.65 0.107 0.156
10: Measured 58.82 60.24 9.50 81.97 1.78 71.61
10: Cohen 58.82 60.24 9.50 81.97 1.78 71.61 0.0 0.028 0.036
10: Eem 58.82 60.24 9.50 81.97 1.78 71.61 0.0 0.035 0.052
11: Measured 12.10 19.81 11.95 51.62 41.14 11.55
11: Cohen 12.22 19.86 11.95 51.67 40.53 11.64 0.62 0.057 0.072
11: Eem 12.24 19.86 11.95 51.68 40.46 11.65 0.70 0.061 0.077
12: Measured 5.26 5.92 21.27 29.21 5.24 49.35
12: Cohen 5.32 5.96 21.27 29.32 5.02 49.17 0.30 0.046 0.074
12: Eem 5.36 6.01 21.27 29.44 5.07 48.95 0.49 0.042 0.062
13: Measured 61.68 58.09 31.44 80.79 13.63 21.87
13: Cohen 61.68 58.09 31.44 80.79 13.63 21.87 0.0 0.046 0.064
13: Eem 61.68 58.09 31.44 80.79 13.63 21.87 0.0 0.022 0.028
14: Measured 9.70 11.75 4.15 40.82 12.35 24.13
14: Cohen 9.70 11.75 4.15 40.82 12.35 24.13 0.0 0.018 0.030
14: Eem 9.70 11.75 4.15 40.82 12.35 24.13 0.0 0.018 0.031
Appendix 8
White-Point Conversion Accuracies Using Polynomial Regression
Source       Destination   Number    Polynomial   Difference   Maximum      Difference   Maximum
illuminant   illuminant    of data   terms        in CIEXYZ    in CIEXYZ    in CIELAB    in CIELAB
A B 64 3 3 0.52 1.32 2.58 7.47
3 4 0.36 1.06 2.10 7.03
3 7 0.30 0.74 1.34 4.67
3 10 0.13 0.45 0.50 1.59
3 14 0.11 0.33 0.28 0.73
3 20 0.07 0.20 0.12 0.34
134 3 3 0.56 1.83 2.54 9.11
3 4 0.44 1.40 2.28 8.66
3 7 0.36 1.06 1.47 4.87
3 10 0.26 0.90 1.05 3.76
3 14 0.23 0.88 0.88 3.66
3 20 0.17 0.55 0.69 3.25
200 3 3 0.55 2.84 10.70
3 4 0.50 2.67
3 7 0.40 1.66
A C 64 3 3 0.88 2.24 3.72 10.16
3 4 0.63 1.77 3.05 9.85
3 7 0.53 1.28 1.96 6.76
3 10 0.22 0.78 0.67 2.13
3 14 0.20 0.61 0.38 1.22
3 20 0.12 0.38 0.17 0.41
134 3 3 0.94 3.12 3.66 12.36
3 4 0.77 2.52 3.32 11.75
3 7 0.63 1.96 2.23 7.25
3 10 0.44 1.57 1.49 5.37
3 14 0.38 1.50 1.28 5.41
3 20 0.27 0.95 0.99 4.69
200 3 3 0.93 4.09
3 4 0.85 3.87
3 7 0.69 2.45
A D50 64 3 3 0.48 1.24 2.49 7.28
3 4 0.32 0.99 2.06 6.96
3 7 0.26 0.66 1.32 4.46
3 10 0.11 0.40 0.50 1.56
3 14 0.09 0.26 0.28 0.86
3 20 0.06 0.17 0.12 0.33
134 3 3 0.50 1.62 2.44 8.80
3 4 0.39 1.14 2.22 8.44
3 7 0.31 0.84 1.48 4.92
3 10 0.23 0.78 1.08 3.83
3 14 0.20 0.79 0.91 3.83
3 20 0.16 0.51 0.73 3.36
200 3 3 0.50 2.76
3 4 0.44 2.71
3 7 0.35 1.66
A D55 64 3 3 0.59 1.50 2.89 8.21
3 4 0.40 1.20 2.39 7.83
3 7 0.33 0.82 1.53 5.19
3 10 0.14 0.50 0.56 1.74
3 14 0.12 0.35 0.31 0.94
3 20 0.07 0.22 0.14 0.35
134 3 3 0.61 2.00 2.83 9.95
3 4 0.48 1.46 2.58 9.53
3 7 0.39 0.94 1.72 5.74
3 10 0.28 0.96 1.23 4.43
3 14 0.25 0.96 1.05 4.43
3 20 0.18 0.61 0.83 3.88
200 3 3 0.61 3.20
3 4 0.54 3.03
3 7 0.44 1.92
A D65 64 3 3 0.78 1.96 3.49 9.53
3 4 0.54 1.57 2.89 9.23
3 7 0.45 1.11 1.85 6.26
3 10 0.19 0.67 0.66 2.06
3 14 0.16 0.49 0.37 1.16
3 20 0.10 0.31 0.16 0.40
134 3 3 0.81 2.68 3.41 11.55
3 4 0.65 2.06 3.12 11.07
3 7 0.53 1.58 2.12 7.01
3 10 0.37 1.29 1.48 5.35
3 14 0.33 1.28 1.27 5.40
3 20 0.23 0.82 1.00 4.70
200 3 3 0.81 3.85
3 4 0.73 3.65
3 7 0.59 2.34
A D75 64 3 3 0.94 2.37 3.90 10.34
3 4 0.66 1.88 3.24 10.38
3 7 0.56 1.34 2.09 6.98
3 10 0.23 0.82 0.72 2.30
3 14 0.21 0.62 0.41 1.36
3 20 0.12 0.39 0.18 0.44
134 3 3 0.98 3.26 3.82 12.55
3 4 0.79 2.56 3.51 12.04
3 7 0.65 1.98 2.43 7.95
3 10 0.45 1.56 1.65 6.00
3 14 0.40 1.55 1.43 6.14
3 20 0.27 1.00 1.13 5.30
200 3 3 0.97 4.32
200 3 4 0.88 4.09
200 3 7 0.71 2.65
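The conversions scored in these tables are least-squares polynomial fits from source-illuminant XYZ to destination-illuminant XYZ. The exact term sets behind the 3-, 4-, 7-, 10-, 14-, and 20-term columns are defined in the regression chapter, not here; the sketch below (our own Python, with an assumed 4-term expansion of X, Y, Z, and a constant) only illustrates the fitting mechanics:

```python
import numpy as np

def poly_terms(xyz):
    """Assumed 4-term expansion of a source color: X, Y, Z, and a constant."""
    x, y, z = xyz
    return [x, y, z, 1.0]

def fit_white_point_conversion(src_xyz, dst_xyz):
    """Least-squares 4x3 matrix M with poly_terms(src) @ M ~= dst."""
    A = np.array([poly_terms(s) for s in src_xyz])
    M, _, _, _ = np.linalg.lstsq(A, np.asarray(dst_xyz), rcond=None)
    return M

def convert(xyz, M):
    """Apply a fitted conversion to one XYZ triple."""
    return np.array(poly_terms(xyz)) @ M
```

Fitting on a training set of measured pairs (64, 134, or 200 samples in the tables) and then reporting the mean and maximum CIEXYZ or CIELAB differences yields figures like those tabulated above.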
Appendix 9
Digital Implementation of the Masking Equation
This appendix provides mathematical fundamentals for implementing the color-masking conversion via integer lookup tables.
For integer inputs such as RGB, the reflectance is computed using Eq. (A9.1):

P_r = I_rgb/I_max,  (A9.1)
where I_max is the maximum digital count of the device. For an 8-bit device, I_max = 255. The density is

D_r = −log(I_rgb/I_max).  (A9.2)
The density value is always positive because I_rgb ≤ I_max for all I_rgb. However, there is one problem when I_rgb = 0: the density is undefined. In this case, the implementer must set the maximum density value for I_rgb = 0. We choose to set D_r = 4.0 when I_rgb = 0; this value is much higher than most density values of real colors printed by ink-jet and electrophotographic printers.
For CMY inputs, a simple relationship of inverse polarity, I_cmy = I_max − I_rgb, is assumed. Under this assumption, the reflectance and density are given in Eqs. (A9.3) and (A9.4), respectively:

P_r = (I_max − I_cmy)/I_max,  (A9.3)

D_r = −log[(I_max − I_cmy)/I_max].  (A9.4)
A9.1 Integer Implementation of Forward Conversion
Density is represented in floating-point format. Compared to integer arithmetic, floating-point arithmetic is computationally intensive and slow. It is desirable, sometimes necessary, to compute density by using integer arithmetic. The major concern is the accuracy of integer representation and computation. This concern can be addressed by using a scaling method, where the floating-point representation is multiplied by a constant, then converted to integer form as expressed in Eq. (A9.5):

D_mbit = int(K_s D_r + 0.5),  (A9.5)
where D_mbit is a scaled density value and K_s is a scaling factor. The scaling factor is dependent on the bit depth mbit. For a given bit depth, the highest density D_max is

D_max = −log[1/(2^mbit − 1)].  (A9.6)

This D_max is scaled to the maximum value, 2^mbit − 1. Therefore, the scaling factor is

K_s = (2^mbit − 1)/D_max = (2^mbit − 1)/log(2^mbit − 1).  (A9.7)
Table A9.1 gives the scaling factors for 8-bit, 10-bit, and 12-bit scaling.
The general density expression in digital form is given in Eq. (A9.8), where we substitute Eqs. (A9.4) and (A9.7) into Eq. (A9.5):

D_mbit = int{[(2^mbit − 1)/log(2^mbit − 1)] log[I_max/(I_max − I_cmy)] + 0.5},  (A9.8)

where mbit ≠ 0 and (I_max − I_cmy) ≠ 0. For an 8-bit input system and 8-bit output scaling, Eq. (A9.8) becomes

D_8bit = int{255/log(255) log[255/(255 − I_cmy)] + 0.5}
       = int{105.96 log[255/(255 − I_cmy)] + 0.5}.  (A9.9)

Similarly, Eqs. (A9.10) and (A9.11) give 10-bit and 12-bit scaling, respectively:

D_10bit = int{1023/log(1023) log[255/(255 − I_cmy)] + 0.5}
        = int{339.88 log[255/(255 − I_cmy)] + 0.5},  (A9.10)

D_12bit = int{4095/log(4095) log[255/(255 − I_cmy)] + 0.5}
        = int{1133.64 log[255/(255 − I_cmy)] + 0.5}.  (A9.11)
Table A9.1 Scaling factors for various bit depths.

Bit depth   D_max    Scaling factor
8           2.4065   105.96
10          3.0099   339.88
12          3.6123   1133.64
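Equations (A9.7) through (A9.11) can be rolled into one routine that builds the forward table for any bit depth. This is an illustrative sketch (Python and the function name are our own, not the book's code), with the undefined I_cmy = 255 entry clamped to the maximum scaled value as the text prescribes:

```python
import math

def forward_density_lut(mbit, imax=255):
    """Forward table of Eq. (A9.8): scaled density for each I_cmy.

    The I_cmy = imax entry (I_rgb = 0) has an undefined log, so it is
    assigned the maximum scaled value 2**mbit - 1.
    """
    top = 2 ** mbit - 1
    ks = top / math.log10(top)        # scaling factor K_s, Eq. (A9.7)
    lut = [top] * (imax + 1)          # preload the clamped maximum
    for icmy in range(imax):          # 0 .. imax-1: Eq. (A9.8) directly
        lut[icmy] = int(ks * math.log10(imax / (imax - icmy)) + 0.5)
    return lut
```

For example, forward_density_lut(10) reproduces the D_10bit column of Table A9.2: 478 at I_cmy = 245, 818 at I_cmy = 254, and 1023 at the assigned I_cmy = 255 entry.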
A few selected values of I_rgb, I_cmy, D_r, D_8bit, D_10bit, and D_12bit are given in Table A9.2. Note that the polarity of the scaled densities is inverted with respect to I_rgb. To avoid the denominator becoming 0, which would not have a unique solution for Eq. (A9.8), the I_cmy values cannot be 255 (or I_rgb cannot be zero). However, the input data will certainly contain values of 255 for I_cmy (or 0 for I_rgb). In this case, how are we going to get the scaled density value for I_cmy = 255 (or I_rgb = 0)? There are several ways to get around this problem. These methods are illustrated by using D_10bit scaling as an example.

Table A9.2 Selected values of density expressed in floating-point, 8-bit, 10-bit, and 12-bit form.

I_rgb   I_cmy   D_r       D_8bit   D_10bit   D_12bit
0       255     4.0       255      1023      4095
1       254     2.40654   255      818       2728
2       253     2.10551   223      716       2387
3       252     1.92942   204      656       2187
4       251     1.80448   191      613       2046
5       250     1.70757   181      580       1936
10      245     1.40654   149      478       1595
20      235     1.10551   117      376       1253
30      225     0.92942   98       316       1054
40      215     0.80448   85       273       912
50      205     0.70757   75       240       802
60      195     0.62839   67       214       712
70      185     0.56144   59       191       636
80      175     0.50345   53       171       571
100     155     0.40654   43       138       461
120     135     0.32736   35       111       371
121     134     0.32375   34       110       367
122     133     0.32018   34       109       363
123     132     0.31664   34       108       359
124     131     0.31312   33       106       355
125     130     0.30963   33       105       351
140     115     0.26041   28       89        295
160     95      0.20242   21       69        229
180     75      0.15127   16       51        171
200     55      0.10551   11       36        120
220     35      0.06412   7        22        73
240     15      0.02633   3        9         30
250     5       0.00860   1        3         10
251     4       0.00687   1        2         8
252     3       0.00514   1        2         6
253     2       0.00342   0        1         4
254     1       0.00171   0        1         2
255     0       0         0        0         0

First, let us rearrange Eq. (A9.10) by multiplying the numerator and denominator of the logarithmic term by the constant 1023/255:

D_10bit = int{339.88 log[1023/(1023 − 1023 I_cmy/255)] + 0.5}
        = int{339.88 log[1023/(1023 − 4.01176 I_cmy)] + 0.5}.  (A9.12)
The highest value for I_cmy is 255, which gives a value of 0 for the denominator. To avoid this problem, one can add 1 to the value 1023 in the denominator [see Eq. (A9.13)] or change both 1023 values to 1024 [see Eq. (A9.14)]:

D_10bit = int{339.88 log[1023/(1024 − 4.01176 I_cmy)] + 0.5},  (A9.13)

or

D_10bit = int{339.88 log[1024/(1024 − 4.01176 I_cmy)] + 0.5}.  (A9.14)
Equation (A9.14) guarantees that the denominator will not become 0; however, errors are introduced into the computation. Compared with the integer round-off error, the error from this adjustment may be small and negligible. Nonetheless, this approach is mathematically incorrect. The correct way of computing D_10bit is by using Eq. (A9.12) for all values from 0 to 254. At 255, the maximum value 1023 is assigned to D_10bit, as shown in Table A9.2.
Because of the nonlinear logarithmic transform from I_cmy to D_r, the scaled density values drop rapidly from the maximum values at the top of the table and reach an asymptotic level at the bottom of the table. It is obvious that the D_8bit scaling does not have enough bit depth to resolve density values when D_r < 0.5; D_10bit scaling is marginal at intermediate values and not adequate at lower values. From a purely computational point of view, it is suggested that the density be scaled to a 12-bit value.
In practice, a 12-bit scaling may be overkill. In the real world, the uncertainty of the density measurement is at least 0.01, considering the measurement errors, substrate variations, and statistical fluctuations. Therefore, the difference between adjacent densities in some regions of the table may be smaller than the measurement uncertainty (see the density values from 250 to 255 of I_rgb). In this case, it is not necessary to resolve them, and 10-bit representation is enough. Note that the density values given in Table A9.2 have five significant figures after the decimal point; this high accuracy is not necessary, and three significant figures are sufficient.
A9.2 Integer Implementation of Inverse Conversion
Inverse conversion from D_mbit to the input digital count is derived from Eq. (A9.5). Rearranging Eq. (A9.5), we have

D_r = D_mbit/K_s.  (A9.15)
Substituting Eq. (A9.7) for K_s into Eq. (A9.15), we obtain

D_r = D_mbit log(2^mbit − 1)/(2^mbit − 1).  (A9.16)

Substituting Eq. (A9.16) into Eq. (A9.2) and rearranging, we have

I_rgb = I_max 10^(−D_r) = I_max 10^[−D_mbit log(2^mbit − 1)/(2^mbit − 1)].  (A9.17)
Taking the integer round-off into consideration, we obtain the final expression of Eq. (A9.18):

I_rgb = int{I_max 10^[−D_mbit log(2^mbit − 1)/(2^mbit − 1)] + 0.5}.  (A9.18)

For 8-bit scaling, Eq. (A9.18) becomes

I_rgb = int{255 × 10^[−D_mbit log(255)/255] + 0.5}
      = int[255 × 10^(−0.0094374 D_mbit) + 0.5].  (A9.19)

Equations (A9.20) and (A9.21) give the 10-bit and 12-bit formulas, respectively:

I_rgb = int{255 × 10^[−D_mbit log(1023)/1023] + 0.5}
      = int[255 × 10^(−0.0029422 D_mbit) + 0.5],  (A9.20)

I_rgb = int{255 × 10^[−D_mbit log(4095)/4095] + 0.5}
      = int[255 × 10^(−0.00088211 D_mbit) + 0.5].  (A9.21)
The backward lookup tables are computed using Eq. (A9.18). The size of the lookup table is dependent on the scaling factor: a 10-bit scaling needs 1024 entries, and a 12-bit scaling needs 4096 entries for the backward lookup table. The contents of these lookup tables are in the range [0, 255]; thus, many different entries will have the same value, resulting in a many-to-one mapping.
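The backward-table construction of Eq. (A9.18) can be sketched as follows (again our own Python illustration, not the book's code):

```python
import math

def backward_lut(mbit, imax=255):
    """Backward table: I_rgb for each scaled density D_mbit, Eq. (A9.18)."""
    top = 2 ** mbit - 1
    # Exponent constant log(2^mbit - 1)/(2^mbit - 1):
    # 0.0094374 for 8-bit, 0.0029422 for 10-bit, 0.00088211 for 12-bit.
    c = math.log10(top) / top
    return [int(imax * 10.0 ** (-c * d) + 0.5) for d in range(top + 1)]
```

The 10-bit table has 1024 entries whose values all lie in [0, 255], so many scaled densities collapse to the same I_rgb: the many-to-one mapping noted above.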
Index
2-3 world, 264-266, 268, 312
3-2 world, 266, 312
3-2 world assumption, 240
3D interpolation, 420
3D lookup table (LUT), 151
A
Abney effect, 412
additivity, 2, 334, 341
additivity law, 2, 21, 22
B
banding, 420
Bartleson transform, 48
Beer-Lambert-Bouguer law, 331, 332, 393,
424
Bezold-Brücke hue shift, 412
BFD transform, 52, 53
bilinear interpolation, 154
black generation (BG), 349
block-dye model, 108
brightness, 5, 6
C
cellular regression, 164
chroma, 60, 67-69
chromatic adaptation, 24, 43, 51, 64, 69, 81,
259, 412, 414, 427, 430
chromaticity coordinate, 3, 6, 7, 57, 59, 77,
80, 82, 83
chromaticity diagram, 58, 91, 117
CIELAB, 60
CIELUV, 59
CIEXYZ, 57
Clapper-Yule model, 393-395, 425
clustered-dot screens, 399
color appearance model, 69
color constancy, 233, 244, 245, 251, 253, 255,
256, 259, 261, 265, 266, 268, 312, 412
color difference, 3
color gamut, 117
color gamut boundary, 57, 60, 62, 111, 120
color gamut mapping, 103, 120, 121, 124,
129, 413, 427
color management modules (CMMs), 419
color management system (CMS), 419
Color Mondrian, 53
color purity, 58
color transform, 23
color appearance model (CAM), 72, 407,
412, 421, 423, 427
color-encoding standards, 84, 430
color filter array (CFA), 306
color-matching functions (CMFs), 2, 5-9, 15,
19, 22, 30, 33, 36, 37, 53, 57
color-matching matrix, 5, 15, 19, 21, 29, 34,
36, 81
color-mixing function, 30
color-mixing model, 17, 103, 107, 108, 420,
421, 423, 427
complementary color, 58
cone response functions, 234
cone responsivities, 44-46, 49, 52, 53
cone sensitivities, 49
contouring, 420
contrast sensitivity function (CSF), 409, 411
correlated color temperature, 286
cyan-magenta-yellow (CMY) color space,
107
D
Demichel's dot-overlap model, 370, 371
densitometer, 326
densitometry, 325, 353
density-masking equation, 342
device-masking equation, 343-345, 347
dichromatic reflection model, 244-246
dominant wavelength, 58, 59, 408
dot gain, 390, 396, 400
dot-overlap model, 421, 426, 427
E
edge enhancement, 414, 427
edge sharpness, 414
error diffusion, 399, 400
F
Fairchild's model, 49
Fechner's law, 408
finite-dimensional linear approximation, 253
finite-dimensional linear model, 236, 312
first-surface reflection, 335
509
510 Computational Color Technology
fundamental color stimulus function, 27, 29,
31, 35
fundamental metamer, 31, 32, 39
G
gamma correction, 86, 338
gamut boundary, 57
gamut mapping, 122, 129, 130
Gaussian elimination, 139
graininess, 420
Grassmann's additivity law, 1, 2, 17, 27, 33,
59, 64, 77, 115, 118
gray balancing, 144, 347
gray (or white) world assumption, 243, 244
gray-balance curves, 432
gray-balanced, 82, 275
gray-component replacement (GCR), 349,
375, 414
Guth's model, 53
H
Helson-Judd effect, 48, 51
Helson-Judd-Warren transform, 46
hue, 5, 6, 67-69
hue-lightness-saturation (HLS) space, 105
hue-saturation-value (HSV) space, 104
human visual model, 421, 427
human visual system (HVS), 407, 411
Hunt effect, 48, 49, 412
Hunt model, 51
Hunt-Pointer-Estevez cone responses, 70
I
ICC profiles, 419
image irradiance model, 233
internal reflection, 335, 393
International Color Consortium (ICC), 419
J
JPEG compression, 414
just noticeable difference (JND), 67, 408
K
Karhunen-Loeve transform, 313, 317
König-Dieterici fundamentals, 48
Kubelka-Munk equation, 365, 395
Kubelka-Munk (K-M) theory, 355-357, 362,
394, 395, 425
Kubelka-Munk model, 360, 362, 365
L
Lambertian surface, 235
lighting matrix, 238, 242, 246, 251, 265, 268
lightness, 67-69
lightness-saturation-hue (LEF) space, 106
lightness/retinex algorithm, 257, 259
M
MacAdam ellipses, 66, 68
matrix A, 31
matrix R, 27, 29, 31, 32, 37
metamer, 1, 27, 29, 31
metameric black, 27, 29, 31, 34, 35, 39
metameric color matching, 27
metameric correction, 39
metameric indices, 27
metameric match, 21, 27, 28
metameric spectrum reconstruction, 189
metamerism, 2, 27
metamerism index, 40
modulation transfer functions (MTF), 397
moiré, 420
Moore-Penrose pseudo-inverse, 205, 314
multispectral imaging, 301, 303, 425
multispectral irradiance model, 303
Munsell colors, 67
Munsell system, 65
Murray-Davies, 394
Murray-Davies equation, 385, 392, 397, 425
Murray-Davies model, 388
N
Nayatani model, 47, 53
NCS, 51
Neugebauer equations, 369, 371-373, 375,
376, 379, 381, 382
Neugebauer's model, 425
neutral interface reection (NIR) model, 235
O
opacity, 335
optical density, 325
optical dot gain, 399
optimal color stimuli, 62
optimal filters, 309, 311
orthogonal projection, 205
orthonormal functions, 256
P
physical dot gain, 399
Planck's radiation law, 10
point-spread function, 397399, 411
polynomial regression, 135, 295, 297, 314
principal component analysis (PCA), 203,
212, 213, 265, 311, 313, 314, 317, 318
prism interpolation, 157
proportionality law, 2, 21, 332
pseudo-inverse estimation, 315
pyramid interpolation, 159
R
R-matrix, 268
regression, 420
resolution conversion, 414, 427
retinex theory, 53, 256, 257
RGB color space, 103
RGB primary, 77
S
Sällström-Buchsbaum model, 244
saturation, 5, 6
Saunderson's correction, 360, 362, 395
sequential linear interpolation, 168
simultaneous color contrast, 243
simultaneous contrast, 256, 412
singular-value decomposition (SVD), 247,
248, 310
smoothing inverse, 203, 205, 314, 316
Spatial-CIELAB, 73, 413
spectral sharpening, 259
standard I filters, 327
status A filters, 326
status E filters, 326
status M filters, 327
status T filters, 326
Stevens effect, 48, 412
stochastic screens, 399
symmetry law, 2
T
tetrahedral interpolation, 161
three-three (3-3) world, 243
three-three (3-3) constraint, 240, 242
tone reproduction curves, 420, 426
tone response curves (TRC), 414, 416
transitivity law, 2, 21
triangularization, 139
trichromacy, 1
trilinear interpolation, 155
tristimulus values, 1, 3, 5, 8, 28-32, 57
U
UCR (under-color removal), 349
under-color addition (UCA), 349
V
virtual multispectral camera (VMSC), 317,
318
volumetric model, 253, 255
volumetric theory, 268
von Kries adaptation, 70, 259, 267
von Kries hypothesis, 43, 48, 49, 69, 266, 268
von Kries model, 44, 49, 69, 430
von Kries transform, 47, 49, 52, 53, 268
von Kries type, 276
Vos-Walraven fundamentals, 261, 264, 265
W
Weber fraction, 408
Weber's law, 408
white-point conversion, 23, 81, 273
Wiener estimation, 314, 316
Wiener inverse, 203, 209
Y
Yule-Nielsen model, 388, 389, 394, 395, 397,
400, 402, 425
Yule-Nielsen value, 382, 388, 389, 391, 392
Henry R. Kang received his Ph.D. degree in physical chemistry from Indiana University and M.S. degree in computer science from Rochester Institute of Technology. He has taught chemistry courses and researched gas-phase chemical kinetics at the University of Texas at Arlington. His industry experiences include ink-jet ink analysis and testing at Mead Office Systems, and color technology development with the Xerox Corporation, Peerless System Corporation, and Aetas Technology Inc. Currently, he is a part-time chemistry instructor and an independent color-imaging consultant in the areas of color technology development and color system architecture.

Henry is the author of Color Technology for Electronic Imaging Devices by SPIE Press (1997) and Digital Color Halftoning by SPIE Press (1999), and more than 30 papers in the areas of color-device calibration, color-mixing models, color-image processing, and digital halftone processing.