
Fundamentals of Lighting and Perception: The Rendering of Physically Accurate Images


Hector Y. Yee
Software Engineer, Westwood Studios,
Electronic Arts,
2400 N. Tenaya Way, Las Vegas,
NV 89128

Philip Dutré
Assistant Professor,
Computer Graphics Group, K.U. Leuven,
Department of Computer Science
Celestijnenlaan 200A, B-3001 Leuven,
BELGIUM

Sumanta N. Pattanaik
Associate Professor,
University of Central Florida
Department of Computer Science
Orlando, FL

Abstract
This paper introduces the foundations of physically accurate rendering in computer graphics.
As graphics hardware and processing power improve, we are beginning to create images in real time
that rival real-world photography. This paper lays the groundwork for creating such images,
working from the most critical component of realistic image generation: the rendering equation.
This paper reviews all stages of realistic rendering and physically accurate image generation:
description of reflectance properties of materials; solving the rendering equation using Monte
Carlo integration; tone reproduction operators to realistically show the images on limited
dynamic range displays; and perceptual techniques that can be used to accelerate rendering.
1. Introduction

We are on the verge of a new level of realism in interactive graphics. Given enough computing
power, we may soon be able to reproduce images such as the above title illustration, with
realistic lighting effects (including high-dynamic range, caustics, soft shadows, indirect lighting,
area light sources, color bleeding and motion blur) in real time. Current state-of-the-art
computer games make extensive use of texture mapping and geometry-acceleration hardware to
approximate these effects, but they require extreme effort on the part of artists, programmers
and designers to deliver the illusion of reality.

This paper reviews the foundations of realistic, physically accurate computer graphics. It is
intended to help developers understand the state-of-the-art of non-interactive graphics in the
hope that they will one day find shortcuts to deliver the same quality at interactive rates. The
beauty of physically accurate computer graphics is that it reduces the amount of human effort
involved because realistic lighting effects are a natural consequence of solving the rendering
equation. This paper will be most useful for people interested in lighting game levels
realistically and for people interested in the theoretical underpinnings of illumination and
perception. It is a very brief, whirlwind tour of the field of realistic rendering and should serve
as a good springboard to more in-depth papers.
The first part of the paper covers the basics of radiometry and the propagation of light. Next
comes a brief discussion of Bidirectional Reflectance Distribution Functions (BRDFs), followed by a
discussion of the rendering equation: what it is and how to solve it. The paper continues with a
short tutorial on the human visual system and how that knowledge can be applied to the display of
high-dynamic range images and to the acceleration of rendering.
2. Radiometry

Light is electromagnetic radiation and can be produced in different ways; for example, by
thermal sources such as the sun, or by quantum effects such as fluorescence where materials
absorb energy at some wavelength and emit it at some other wavelength. Several models exist
that attempt to explain the behavior of light. The most well known is the geometric optics
model. In this model, light is assumed to travel along rays. This model captures effects such as
reflections and transmission. This is the most commonly used model in computer graphics.
Another model, the wave model, is described by Maxwell's equations and captures effects that
arise because light interacts with objects whose size is comparable to the wavelength of
light. This model can explain effects such as diffraction, interference, polarization and
dispersion. However, these effects are too detailed for the purposes of image generation in
computer graphics and are generally ignored. The last model, the quantum mechanics model,
is a fundamental model of light that captures effects such as fluorescence and
phosphorescence. However, this model is also too detailed and is generally not considered for use
in computer graphics.
In order to formulate the rendering equation, which describes the light equilibrium in a scene,
most global illumination algorithms make several simplifying assumptions: we ignore the light
energy that is absorbed at surfaces and dissipated as heat. We also ignore effects due to the
transmission of light through participating media and media with varying indices of refraction
(for a discussion involving participating media, see [11] in these proceedings). Additionally, we
assume that light propagates instantaneously through a vacuum. Therefore, the goal of global
illumination algorithms is to compute the steady-state distribution of light in a three-dimensional scene.
The rendering equation describes how much radiance is present at each point and at each
direction in a scene. Radiance (L) is the most important quantity in radiometry, and is
expressed as power (Φ) per unit projected area per unit solid angle (Watt/(sr·m^2)). Intuitively,
radiance expresses how much power arrives at (or leaves from) a certain point on a surface,
per unit solid angle and per unit projected area. The projected area is needed because energy that
is incident at a grazing angle on a surface will be distributed over a larger surface area,
thereby making the average power per area smaller.
$$L = \frac{d^2\Phi}{d\omega \, dA \cos\theta} \qquad \text{or} \qquad \Phi = \int_A \int_\Omega L(x, \Theta) \cos\theta \, d\omega \, dA$$
An important property of radiance is its invariance along straight paths (in vacuum). The
radiance leaving point x directed towards point y is equal to the radiance arriving at point y
from the direction in which point x is observed. This can be proven by writing an energy
balance for two differential surfaces around points x and y. So, radiance does not attenuate
with distance, and is invariant along straight paths of travel. If we allow a participating medium,
which can absorb and scatter energy, to be present between the surfaces, this property of
radiance no longer holds. From the above observation, it follows that once the incident or exitant
radiance at all surface points is known, the radiance distribution for all points in a
three-dimensional scene is also known. Almost all algorithms used in global illumination limit
themselves to computing the radiance values at surface points (still assuming the absence of
any participating medium).
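As a sketch of that energy balance: consider two differential surfaces dA_x and dA_y a distance r apart, with θ_x and θ_y the angles their normals make with the connecting line. The power exchanged between them can be written from either end:

$$d\Phi_{x \to y} = L(x \to y)\,\frac{\cos\theta_x \, dA_x \, \cos\theta_y \, dA_y}{r^2}, \qquad d\Phi_{y \leftarrow x} = L(y \leftarrow x)\,\frac{\cos\theta_y \, dA_y \, \cos\theta_x \, dA_x}{r^2}$$

In a vacuum no energy is lost in between, so the two powers are equal, and hence $L(x \to y) = L(y \leftarrow x)$.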
Another important property of radiance is that most light receivers, such as cameras or the
human eye, are sensitive to radiance. The response of these sensors is proportional to the
radiance incident upon them; the constant of proportionality depends on the geometry of the
sensor.
Radiance depends not only on position and direction, but also on the wavelength of the light
energy under consideration. Thus, radiance values are normally specified for all possible
wavelengths. The measures defined above are to be considered as functions integrated over the
wavelength domain covering visible light. However, in papers and publications, it is often
implicitly assumed that the wavelength dependency is part of the equations, and it is not
mentioned explicitly.
3. Bidirectional Reflectance Distribution Function

Materials interact with light in different ways, causing distinct materials to have different
appearances under the same lighting conditions. Some materials appear as mirrors, others
appear as diffuse surfaces. The bidirectional reflectance distribution function (BRDF) is the
most general expression of reflectance of a material at the level of detail we wish to consider.
The BRDF is defined as the ratio between the differential radiance reflected in an exitant
direction and the incident irradiance (power per projected area) arriving through a differential
solid angle. More precisely, the BRDF is the derivative of reflected radiance with respect to
incident irradiance.

$$f_r(x, \Theta_i \to \Theta_r) = \frac{dL(x \to \Theta_r)}{dE(x \leftarrow \Theta_i)} = \frac{dL(x \to \Theta_r)}{L(x \leftarrow \Theta_i) \cos\theta_i \, d\omega_i}$$

The BRDF has some interesting properties:

- The BRDF can take any positive value, and varies with wavelength.

- The value of the BRDF remains unchanged if the incident and exitant directions are interchanged. This property is called Helmholtz reciprocity, a principle which says that paths followed by light can be reversed.

- Generally, the BRDF is anisotropic. That is, if the surface is rotated about the surface normal, the value of the BRDF changes. However, many materials are isotropic, and in these cases the value of the BRDF does not depend on the specific orientation of the underlying surface.

- The value of the BRDF for a specific incident direction does not depend on the possible presence of irradiance along other incident angles. Therefore, the BRDF behaves as a linear function with respect to all incident directions. To obtain the total reflected radiance due to some irradiance distribution over the hemisphere around an opaque, non-emissive surface point, we integrate the BRDF over the surrounding hemisphere, which provides us with the following equation, referred to as the reflectance equation:

$$L(x \to \Theta_r) = \int_{\Omega} f_r(x, \Theta \to \Theta_r) \, L(x \leftarrow \Theta) \cos\theta \, d\omega$$

The more commonly encountered types of BRDF, as used in photo-realistic rendering, are listed below:

- Diffuse surfaces. Some materials reflect light in a uniform way over the entire reflecting hemisphere. That is, given an irradiance distribution, the reflected radiance is independent of the exitant direction. Such materials are called diffuse reflectors, and the value of their BRDF is constant for all directions. To an observer, a diffuse material looks the same from all possible directions.

- Specular surfaces. Other materials can be considered perfect specular surfaces and only reflect light in one specific direction given the incident direction. By the law of reflection, the incident and exitant directions make equal angles with the surface normal. A perfect specular surface has only one exitant direction for which the BRDF is different from 0, which implies that the value of the BRDF along that direction is infinite. Real materials can approach this behaviour very closely, but are nevertheless not ideal reflectors as defined above.

- Glossy surfaces. Most surfaces, however, are neither ideally diffuse nor ideally specular, but exhibit a combination of both reflectance behaviours. These surfaces are called glossy surfaces. Their BRDF is often difficult to model with analytical formulae.

- Transparent surfaces. Strictly speaking, the BRDF is defined over the entire sphere of directions around a surface point. This is important for transparent surfaces, since these surfaces can reflect light over the entire sphere. The transparent side of the BRDF can also behave as a diffuse, specular or glossy surface, depending on the transparency characteristics of the material. One has to be careful when assuming properties about the transparent side of the BRDF. Some characteristics, such as reciprocity, may not hold for transparent surfaces.

Figure: diffuse and glossy surfaces.

In global illumination algorithms, one often uses empirical models to characterize the BRDF.
Great care must be taken to make certain that these empirical models indeed constitute a good
and acceptable BRDF. More specifically, the following conditions must hold:

- The BRDF must be modelled such that the law of conservation of energy is met.

- The empirical model for the BRDF must also obey Helmholtz reciprocity. This is an important constraint for some algorithms, especially those that compute the distribution of light energy by considering paths starting from the light sources and paths starting from the observer at the same time. Such algorithms explicitly assume that light paths can be reversed, and thus the model for the BRDF should reflect this property.

The most commonly used BRDF model is the Phong model. It is computationally efficient, but it is
not physically based, since it does not satisfy the energy conservation property described above
(the modified Phong model [27] corrects this). To date, the most comprehensive model is the
He model [10], which includes effects such as subsurface scattering and surface
anisotropy but remains computationally very expensive. Instead, people use other
models such as the Torrance-Sparrow model [22], which is physically based and uses
microfacets to explain the reflectance behaviour of light, or the Ward model [24], which is a
popular empirically based model.
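To make this concrete, here is a minimal sketch of evaluating the modified Phong BRDF of [27] in C++. The Vec3 type and its helpers are assumptions made for the example (the appendix listings use a similar Vector3); kd and ks are the diffuse and specular reflectances, with kd + ks <= 1 required for energy conservation.

#include <cmath>

// Minimal vector helpers (assumed for this sketch).
struct Vec3 { double x, y, z; };
static double dot(const Vec3 &a, const Vec3 &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 reflect(const Vec3 &in, const Vec3 &n) {   // mirror 'in' about the normal 'n'
    double d = 2.0 * dot(in, n);
    return Vec3{ d*n.x - in.x, d*n.y - in.y, d*n.z - in.z };
}

static const double PI = 3.14159265358979323846;

// Modified Phong BRDF [27]: a kd/pi diffuse term plus a normalized cosine lobe.
// wi and wo point away from the surface; n is the unit surface normal.
double modified_phong_brdf(const Vec3 &wi, const Vec3 &wo, const Vec3 &n,
                           double kd, double ks, double shininess) {
    Vec3 r = reflect(wi, n);                  // ideal specular direction for wi
    double cos_alpha = dot(r, wo);            // angle between mirror direction and wo
    if (cos_alpha < 0.0) cos_alpha = 0.0;
    return kd / PI
         + ks * (shininess + 2.0) / (2.0 * PI) * pow(cos_alpha, shininess);
}

The (shininess + 2)/(2π) factor is what normalizes the cosine lobe so that the specular term never reflects more energy than ks; omitting it yields the classic, non-conserving Phong model.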
4. The Rendering Equation

This section presents the rendering equation, a mathematical formulation of the steady-state
distribution of energy in a scene with no participating media. As mentioned before, we assume
that this steady-state equilibrium distribution of light energy is achieved instantaneously. The
rendering equation expresses the radiance leaving a surface as a sum of its self-emitted
radiance (in case the surface is light source), and its reflected radiance, as given by the
reflectance equation below:
L( x r ) = Le ( x r ) + f ( x, r ) L( x ) cosd

This equation describes all light transport in a scene. The incident radiance inside the integral
can be transformed into radiance leaving some other point y by following a ray in the direction
opposite to Θ. This is the basis for all ray tracing and particle tracing algorithms.
5. General strategies for solving the rendering equation

In order to compute a photorealistic image, we need to compute the radiance value visible
through each pixel of the image. This means solving the rendering equation for each visible
point through every pixel. When designing a global illumination algorithm, one usually assumes
that the following functionality is already present (black boxes):

- We can query all possible attributes about surface points (normal, BRDF, etc.).

- We can query light sources (emitted radiance for a given point and direction).

- We can trace rays in order to find the nearest visible point in some direction. This usually involves some sort of spatial subdivision structure to speed up the ray tracing.

- We can check whether two points are mutually visible.

It is important to realize that many different light paths will contribute to a single radiance value
at a point visible through a pixel. All these paths need to be checked, and the light transport
along all of these paths needs to be computed. This is of course not possible in practice, so we
have to make smart selections about what paths to check, since many paths are unimportant.
The most important tool needed to generate all possible paths is Monte Carlo (MC) integration.
MC integration is a technique in which random points are generated in the integration domain,
after which their function values are averaged to obtain an unbiased estimator of the
integral to be computed. In the context of global illumination, MC integration is applied to the
rendering equation: random paths are selected between a point and the light source, and the
average contribution of these paths is computed. Since the paths are selected randomly, all
paths have a non-zero probability of being chosen, and so no light transport is eliminated from the
computation.
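As a minimal illustration of the estimator (a sketch, not tied to any particular renderer), the following C++ routine integrates a 1-D function with uniform random samples; in global illumination the domain becomes the hemisphere or the space of paths, but the averaging is the same.

#include <cstdlib>

// Unbiased MC estimate of the integral of f over [a,b] using N uniform samples:
//   I ~ (1/N) * sum f(x_i)/pdf(x_i), with pdf = 1/(b-a) for uniform sampling.
double mc_integrate(double (*f)(double), double a, double b, int N) {
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        double u = rand() / (RAND_MAX + 1.0);   // uniform in [0,1)
        sum += f(a + u * (b - a));
    }
    return (b - a) * sum / N;
}

Importance sampling replaces the uniform pdf with one that concentrates samples where the integrand is large, reducing variance without introducing bias.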
The most straightforward example is the computation of direct illumination: only direct paths
between the surface point and the light source are to be computed, and the paths are chosen
at random by selecting random points on the surface of the light source, as sketched below.
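Below is a hedged sketch of this direct-illumination estimator for a diffuse surface lit by a rectangular area light sampled uniformly (pdf = 1/area). The AreaLight structure is an illustrative assumption, and visibility is taken as unoccluded here; a real implementation would trace a shadow ray for the visibility term.

#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return Vec3{a.x-b.x, a.y-b.y, a.z-b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static double frand() { return rand() / (RAND_MAX + 1.0); }

// Hypothetical rectangular area light: corner plus two edge vectors, constant emission Le.
struct AreaLight { Vec3 corner, edge_u, edge_v, normal; double area, Le; };

// Direct illumination at point x with normal n and diffuse BRDF rho/pi,
// estimated with N uniform samples on the light (pdf = 1/area).
double direct_illumination(Vec3 x, Vec3 n, double rho, const AreaLight &L, int N) {
    const double PI = 3.14159265358979323846;
    double sum = 0.0;
    for (int i = 0; i < N; i++) {
        double u = frand(), v = frand();
        Vec3 y = { L.corner.x + u*L.edge_u.x + v*L.edge_v.x,
                   L.corner.y + u*L.edge_u.y + v*L.edge_v.y,
                   L.corner.z + u*L.edge_u.z + v*L.edge_v.z };
        Vec3 d = sub(y, x);
        double r2 = dot(d, d), r = sqrt(r2);
        Vec3 w = { d.x/r, d.y/r, d.z/r };            // direction from x towards y
        double cos_x = dot(n, w);                    // cosine at the receiver
        double cos_y = -dot(L.normal, w);            // cosine at the light
        if (cos_x <= 0.0 || cos_y <= 0.0) continue;  // light below horizon, or facing away
        // f_r * Le * G(x,y) / pdf, with G = cos_x*cos_y/r^2 and pdf = 1/area
        sum += (rho / PI) * L.Le * cos_x * cos_y / r2 * L.area;
    }
    return sum / N;
}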

Indirect illumination requires paths of length greater than one. Several approaches are possible,
but the easiest is to generate directions distributed over the hemisphere and then recursively
apply direct illumination computations at each point along the path. Again, the most important
requirement is that all possible paths have a non-zero probability of being generated.
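A sketch of this recursive step for a purely diffuse scene follows, building on the direct-illumination sketch above. trace_nearest and cosine_sample stand in for the black boxes listed earlier and are assumed interfaces, not a real API; g_light is an assumed global scene light.

// Assumed black boxes from the list above (declarations only; hypothetical signatures).
struct Hit { bool valid; Vec3 point, normal; double rho; };  // diffuse albedo only
Hit  trace_nearest(Vec3 origin, Vec3 dir);                   // nearest surface hit
Vec3 cosine_sample(Vec3 n);                                  // cosine-weighted direction about n
extern AreaLight g_light;                                    // assumed scene light

double radiance(Vec3 x, Vec3 n, double rho, int depth) {
    double L = direct_illumination(x, n, rho, g_light, 4);   // paths of length 1
    if (depth >= 3) return L;                                // cut off long paths (biased but simple)
    Vec3 w = cosine_sample(n);                               // pdf = cos(theta)/pi
    Hit h = trace_nearest(x, w);
    if (!h.valid) return L;
    // With cosine-weighted sampling, the cos/pdf factor cancels against the
    // (rho/pi) BRDF up to rho, so the recursive estimator is simply:
    L += rho * radiance(h.point, h.normal, h.rho, depth + 1);
    return L;
}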
In general, global illumination algorithms differ in the way the paths are generated,
as well as in the selection mechanism used. Various strategies give rise to different error
characteristics, and much of the research is oriented towards finding optimal path
generators. A crucial component of visualizing a global illumination solution is the process
of displaying a physically accurate image. For that, we must first understand how the human
visual system works. The next section describes the mechanisms of visual adaptation.
6. Visual Adaptation

The previous sections deal with physical values of radiance, a measure of energy.
This section deals with luminance, measured in cd/m^2, which describes how bright light
appears to the human visual system. Luminance is calculated from radiance by integrating
spectral radiance weighted by the spectral luminous efficiency curve. A rough way
to derive luminance from radiance at three wavelengths (Red, Green and Blue) is
Luminance = 0.3 × Red + 0.6 × Green + 0.1 × Blue.
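In code, this approximation is simply the following; the same weights appear in the appendix listings.

// Rough luminance from RGB radiance: 0.3R + 0.6G + 0.1B.
double luminance(double red, double green, double blue) {
    return 0.3*red + 0.6*green + 0.1*blue;
}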
Physically based rendering techniques allow us to accurately compute variations of light
energy in a scene under widely varying illumination conditions. Figure 1 shows the widely
varying illumination conditions that we may be simulating. Further, the dark and bright parts of a
computed image can differ by a factor of a thousand or more. In contrast, CRT monitors can display
only up to a factor of a hundred between light and dark. Remarkably, the human visual system is
able to cope with such a large dynamic range. For example, it is possible to read a book in the
shade of a tree and still see everything clearly in the bright sunlight around you. This
represents a scene dynamic range of perhaps 1000:1 or more, and yet the visual system copes with
the large changes in light intensity by means of adaptation.

Figure 1: Dynamic range. From Ferwerda et al. [9].

How is this relevant to computer games, one might ask. Modern digital cameras come
with 12-bit analog-to-digital converters that can capture high dynamic range images. Also,
software techniques by Debevec [5], Nayar [16] and collaborators allow us to generate high
dynamic range images from several 8-bit images taken at different exposures. These techniques,
and the realistic
rendering algorithms described earlier, generate high dynamic range images that can be used

as High-Dynamic Range Texture Maps (HDRTMs) [3] for greater realism in interactive
applications. Furthermore, the latest generation of graphics hardware has support for
extended precision and range beyond 8 bits in the pixel processing pipeline. Making use of
high-dynamic-range data is a logical step toward maximizing the use of such hardware. It also
allows game developers to dramatically increase the perceived dynamic range of their levels.
The human visual system functions effectively over a wide range of illumination. This is made
possible through a complex process known as visual adaptation. The mechanisms effecting
adaptation are primarily the combined effects of multiple receptor types, photopigment
bleaching and neural mechanisms.
The human retina holds two types of photoreceptor cells. Cones are responsible for color
vision and function in the illumination range of ~10^-2 to ~10^6 cd/m^2. Rods are mostly
responsible for vision in the low illumination range (~10^-6 to ~10 cd/m^2). Within their response
ranges, receptors react when one of their visual pigment molecules captures a photon. The
captured photon triggers a complex cascade of reactions, known as bleaching, that
desensitizes the molecule. Bright light rapidly reduces a receptor's usable photopigment
concentration, but slow retinal mechanisms restore it (see Dowling [7]). Photopigment
concentration sets the upper limit on receptor sensitivity, and the absorption coefficient sets
the lower limit of a receptor's response to incident illumination [12].

Figure 2a (left): response of dark-adapted vertebrate cells to various intensities, in arbitrary units. Figure 2b (right): the shift in the response-intensity function at different adapting background illuminations. (Figures adapted from [7].)

Compared to the broad range of background intensities over which the visual system performs,
most visual neurons respond linearly to only a very narrow range of intensities. For the human
visual system, this range is only about 3 log units. Figure 2a shows this narrow range in the
log-linear plot of the intensity-response function measured from vertebrate rod cells [7]. When a
brief light of moderate intensity (about 3 log units above the current ambient level) is presented,
the photoreceptor response reaches its maximum and the photoreceptor is said to be saturated.
This initial saturation of the photoreceptor results in the visual experience of an initial bright
flash. Further, because the photoreceptors are saturated, they also become poorly responsive to any
incremental illumination, and this poor responsiveness is experienced as a loss of sensitivity. If
the exposure to the higher background illumination continues for a long time, the initial
saturating response is not maintained. The response gradually returns towards the dark-adapted
resting response, and the photoreceptor's sensitivity to incremental intensity is gradually
restored. Figure 2b shows this response shift as a function of the adapting background
illumination. Thus, background illumination plays an important role in defining the visual
system's state of adaptation.
There is a wealth of psychophysical data quantifying the performance of the human visual
system as a function of steady background intensity. These data tell us how visual sensitivity
decreases and visual acuity increases with background intensity. They also tell us how color
discrimination changes with background intensity.
During the last decade, computer graphics researchers [9][17][18][23][25][26] have used the
concept of background luminance (or adaptation luminance) to exploit this wealth of
information and create realistic visual representations of static real-life scenes on display
devices. We present some of these operators in the next section.
7. Tone-Mapping

So how does one view high dynamic range images? The simplest approach would be to
multiply the entire image by a constant. However, this would only show the scene at a
particular exposure, much the same way one would set a camera to a particular exposure
by controlling the aperture size or shutter speed. A tone-reproduction operator works in a more
intelligent fashion, taking into account the capabilities of the human visual system when
displaying a high dynamic range image.
In general, tone-reproduction operators work by creating a local scale factor for each pixel in
the high-dynamic range image based on the local adaptation luminance of the pixel. An operator
takes as input the local adaptation luminance and the high-dynamic range radiance value and
returns an RGB value in the 0.0 to 1.0 range. The local adaptation luminance can be computed as
the average (arithmetic or geometric) of the luminance values in a fixed window around each pixel.
We present the source code for four tone-reproduction operators: FPSG [9], PTYG [17], TR [23]
and WRP [26]. The source code is pixel-based and could conceivably be implemented some day
during a blit from the back-buffer to the front-buffer, as suggested by Carmack [2]. Although the
source code is pixel-based, one may easily convert these operators to vertex-based ones by
techniques similar to those employed by Durand and Dorsey [8], and Scheel et al. [21]. There are
some trade-offs to this approach, however, including using a single world adaptation luminance for
the entire scene rather than a per-pixel adaptation luminance. The world adaptation luminance
can be computed by ray casting into the center of the scene as seen from the camera and then
averaging all the values, as in Scheel et al. [21].
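As an illustration, here is a minimal sketch of the per-pixel local adaptation luminance as a geometric mean over a square window, using the Image<T> convention of the appendix listings; the window half-size and the small bias against log(0) are arbitrary choices for the example.

#include <cmath>

// Local adaptation luminance: geometric mean of luminance in a (2*half+1)^2
// window around each pixel. Image<T> follows the appendix convention:
// a width x height array indexed as v[y*width + x].
Image<double> * local_adaptation(const Image<double> * L, int half) {
    int w = L->width, h = L->height;
    Image<double> * A = new Image<double>(w, h);
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {
            double sum_log = 0.0;
            int count = 0;
            for (int dy = -half; dy <= half; dy++)
                for (int dx = -half; dx <= half; dx++) {
                    int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
                    sum_log += log(L->v[sy*w + sx] + 1e-6);  // small bias avoids log(0)
                    count++;
                }
            A->v[y*w + x] = exp(sum_log / count);            // geometric mean
        }
    return A;
}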
We shall now cover each tone-reproduction operator in chronological order. Tumblin and
Rushmeier introduced the concept of tone reproduction to the computer graphics community in
[23]. It is the process that maps scene luminance to display luminance while preserving
various visual factors such as perceived brightness, contrast or detail. Their tone-reproduction
operator, which we shall name TR, preserves the perceived brightness in the scene. That is, the
algorithm attempts to match the perceived brightness of a region in the scene to that of the same
region on the display. See Table 1 in the Appendix for the C++ source code for this
tone-reproduction operator. A_cone is the adaptation luminance (in cd/m^2) per pixel of
the scene image and RGB is the radiance (in Watt/(sr·m^2)) per pixel. A Vector3 is an array, v,
of 3 doubles. An Image<T> is a two-dimensional array of the type T. Indexing the pixel
Image->v[K] is the same as addressing pixel (K % width, K / width).

Ward then developed a tone-reproduction operator that preserves threshold visibility and
contrast rather than brightness [25]. This technique preserves visibility at the threshold of
perception (where the observer barely notices the difference between one shade and another
that is slightly darker or brighter). The threshold visibility at various background illuminations
is shown in Figure 3. As one can see, as the background luminance increases, it takes a larger
and larger luminance change to detect a change from the background. The TR tone-mapper
performs better for brightness changes that are large and well above the visibility threshold.
Ferwerda et al. developed a tone-reproduction operator (FPSG) that, like Ward's operator, tries
to preserve threshold visibility and contrast [9]. In addition, it tries to reproduce the changes
in color appearance and visual acuity associated with changes in illumination conditions. The
source code is given in Table 2.

Figure 3: Threshold vs. intensity function. Adapted from Ferwerda et al. [9].

Ward Larson et al. developed a histogram-based technique [26] that works by redistributing
the local adaptation luminance values so as to achieve a monotonic mapping that makes good use of
the whole range of available display luminance. This technique differs from the others in
that the adaptation luminance is not used directly to obtain a scale factor from which the
display luminance is derived. Rather, all the adaptation luminance values are used to
construct a mapping function from scene to display luminance. The source code is in Table 3.
L_cone is the luminance (in cd/m^2) per pixel of the scene, and Cr is the color ratio, i.e. the
RGB radiance value divided by L_cone.
Finally, Pattanaik et al. [18] and Durand and Dorsey [8] developed tone-reproduction operators
that can accurately take into account the time course of adaptation, i.e. the effect you
experience when you walk out of a dark theater into bright sunlight and are blinded for a period
of time. In Table 4 we present the source code for the static component of the tone-reproduction
operator developed by Pattanaik et al. A_cone is the adaptation luminance (in cd/m^2) per
pixel of the scene image, L_cone is the luminance (in cd/m^2) per pixel of the scene, and Cr is
the color ratio, i.e. the RGB radiance value divided by L_cone. The rod values can be set to the
cone values in the static case. Figure 4 shows the results of the different tone-reproduction
operators on a bridge image.

Figure 4: Results of various tone-reproduction operators on a bridge image: constant scaling at long and short exposure times, and the TR, FPSG, WRP and PTYG operators.

8. Perceptually-based Acceleration Techniques

Knowledge of how the human visual system operates can be used to accelerate the rendering
of images. For example, the Human Visual System's ability to detect changes in illumination
drops with increasing spatial frequency and movement. First pioneered by Bolin and Meyer [1]
and Myszkowski [13], then optimized for speed by Ramasubramanian et al. [19], enhanced to
take into account spatiotemporal factors by Myszkowski [14], and enhanced yet again to take
into account visual attention by Yee et al. [28], perceptual acceleration techniques promise
order-of-magnitude speed improvements by calculating only what the Human Visual System
is capable of seeing and leaving out what it cannot see.
One method of accelerating rendering is to use the threshold-vs-intensity function shown above
in Figure 3: if an intensity change is below threshold, do not render it. Another idea central
to perceptual acceleration techniques is the Contrast Sensitivity Function (CSF).
The CSF of the Human Visual System is measured by showing people images of sine-wave
gratings at a given adaptation luminance. It was discovered that people have a harder time
differentiating a sine wave of high frequency from a gray background than a sine wave of lower
frequency. When sensitivity is plotted against frequency, an inverted U-shaped curve is
formed. This is called the Contrast Sensitivity Function. Figure 5 shows a plot of the
spatiotemporal Contrast Sensitivity Function on the left and the Campbell-Robson Contrast
Sensitivity Chart on the right. The equations for the CSF can be obtained from Daly [4].

Figure 5: Left: Spatiotemporal Contrast Sensitivity Function. Right: Campbell-Robson Contrast Sensitivity Chart.

In the CSF plot on the left, contrast sensitivity is plotted against spatial frequency; v is
the speed (in degrees per second) at which the sine-wave grating moves with respect to an
eye focused at a single point. As one can see from the CSF plot, the Human Visual System is
less sensitive to contrast changes at high spatial frequencies and at great speeds. The
Campbell-Robson Chart is a plot of vertical lines of increasing contrast from top to bottom and
increasing spatial frequency from left to right. The point at which the lines vanish to gray
traces out the viewer's own personalized contrast sensitivity function. Reddy [20] used the fact
that the visual system is less sensitive to contrast changes at high frequencies to perceptually
optimize an LOD system for terrain rendering in an interactive application. Reddy subdivides a
terrain patch only if the subdivision is deemed visible to the observer, using a metric similar
to the CSF.
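Daly's model [4] is fairly involved; as a simple stand-in that reproduces the inverted-U shape described above, one can use the classic Mannos-Sakrison approximation to the achromatic CSF. This is not the model used by the perceptual rendering papers cited above, just an illustration.

#include <cmath>

// Mannos-Sakrison (1974) approximation to the achromatic CSF. f is spatial
// frequency in cycles per degree; the return value is relative sensitivity.
double csf_mannos_sakrison(double f) {
    double a = 0.0192 + 0.114 * f;
    return 2.6 * a * exp(-pow(0.114 * f, 1.1));
}

The curve peaks at a few cycles per degree and falls off at both low and high frequencies, matching the chart in Figure 5.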

9. Conclusions

We have introduced the reader to the very basics of lighting and perception. It is hoped that
this will provide sufficient motivation for developers to delve deeper into related topics in
order to bring advanced rendering techniques into the realm of computer games. The authors would
like to thank ATI for graphics hardware support; Kurt Oeschlager of Westwood Studios and
Don Harper of the University of Central Florida for MIS support; Zehna Barros of Zehna Originals
for the paintings in the title image; and Greg Ward for the bridge image. Thanks also to Moumine
Ballo, Joseph Kucan, Greg Hjelstrom and Jani Penttinnen for proofreading.
10. References

[1] M. Bolin and G. Meyer. A perceptually based adaptive sampling algorithm. In Proceedings of SIGGRAPH 98, 299-309, July 1998.

[2] J. Carmack. .plan file. http://www.webdog.org/cgi-bin/finger.pl?id=1&time=20000601040557. 29 April 2000.

[3] J. Cohen, C. Tchou, T. Hawkins, and P. Debevec. Real-time high dynamic range texture mapping. In Eurographics Rendering Workshop 2001, London, England, June 2001.

[4] S. Daly. Engineering observations from spatio-velocity and spatiotemporal visual models. In IS&T/SPIE Conference on Human Vision and Electronic Imaging III, SPIE volume 3299, 180-191, January 1998.

[5] P. Debevec and J. Malik. Recovering high dynamic range radiance maps from photographs. In Proceedings of SIGGRAPH 97, 369-378, August 1997.

[6] P. Debevec. Image-based lighting. In SIGGRAPH 2001 Course Notes, 12-17, August 2001. (http://www.debevec.org/IBL2001)

[7] J. E. Dowling. The Retina: An Approachable Part of the Brain. Cambridge: Belknap Press, 1987.

[8] F. Durand and J. Dorsey. Interactive tone mapping. In Proceedings of the Eurographics Workshop on Rendering, June 2000.

[9] J. A. Ferwerda, S. Pattanaik, P. Shirley and D. P. Greenberg. A model of visual adaptation for realistic image synthesis. In Proceedings of SIGGRAPH 96, 249-258, August 1996.

[10] X. He, K. Torrance, F. Sillion, and D. P. Greenberg. A comprehensive physical model for light reflection. In Proceedings of SIGGRAPH 91, 1991.

[11] N. Hoffman and A. J. Preetham. Rendering outdoor light scattering in real time. In Proceedings of GDC 2002.

[12] D. C. Hood and M. A. Finkelstein. Sensitivity to light. In K. R. Boff, L. R. Kaufman and J. P. Thomas (eds.), Handbook of Perception & Human Performance, Chapter 5. New York: Wiley, 1986.

[13] K. Myszkowski. The visible differences predictor: applications to global illumination problems. In Proceedings of the Ninth Eurographics Workshop on Rendering, 223-236, Vienna, Austria, June 1998.

[14] K. Myszkowski, P. Rokita, and T. Tawara. Perceptually-informed accelerated rendering of high quality walkthrough sequences. In Proceedings of the Tenth Eurographics Workshop on Rendering, 5-18, Granada, Spain, June 1999.

[16] S. K. Nayar and T. Mitsunaga. High dynamic range imaging: spatially varying pixel exposures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 2000.

[17] S. N. Pattanaik, J. A. Ferwerda, M. Fairchild, and D. P. Greenberg. A multiscale model of adaptation & spatial vision for realistic image display. In Proceedings of SIGGRAPH 98, 287-298, 1998.

[18] S. N. Pattanaik, J. E. Tumblin, H. Yee and D. P. Greenberg. Time-dependent visual adaptation for fast realistic image display. In Proceedings of SIGGRAPH 2000, 47-54, July 2000.

[19] M. Ramasubramanian, S. N. Pattanaik, and D. P. Greenberg. A perceptually based physical error metric for realistic image synthesis. In Proceedings of SIGGRAPH 99, 73-82, August 1999.

[20] M. Reddy. Perceptually optimized 3D graphics. IEEE Computer Graphics and Applications, 21(5):68-75, September/October 2001.

[21] A. Scheel, M. Stamminger and H.-P. Seidel. Tone reproduction for interactive walkthroughs. In Computer Graphics Forum (Proceedings of Eurographics 2000), 301-312, 2000.

[22] K. E. Torrance and E. M. Sparrow. Theory for off-specular reflection from roughened surfaces. Journal of the Optical Society of America, 57:1105-1114, 1967.

[23] J. Tumblin and H. Rushmeier. Tone reproduction for computer generated images. IEEE Computer Graphics and Applications, 13(6):42-48, November 1993.

[24] G. Ward. Measuring and modeling anisotropic reflection. In SIGGRAPH 92 Conference Proceedings, 1992.

[25] G. Ward. A contrast-based scalefactor for luminance display. In P. S. Heckbert (ed.), Graphics Gems IV. Boston: Academic Press Professional, 1994.

[26] G. Ward Larson, H. Rushmeier, and C. Piatko. A visibility matching tone reproduction operator for high dynamic range scenes. IEEE Transactions on Visualization and Computer Graphics, 3(4):291-306, October-December 1997.

[27] R. R. Lewis. Making shaders more physically plausible. Computer Graphics Forum, 13(2), 1994.

[28] H. Yee, S. Pattanaik and D. P. Greenberg. Spatiotemporal sensitivity and visual attention for efficient rendering of dynamic environments. ACM Transactions on Graphics, January 2001.

11. Appendix

Table 1: Source code for the TR tone-reproduction operator.


// A_cone is the adaptation luminance of the image
// RGB is the Red, Green and Blue radiance
// Display ambient in cd/m^2
// Code by Sumanta Pattanaik and Hector Yee
#define DISPLAY_AMBIENT 25.0
#define MAX_DISPLAY_LUMINANCE 125.0
#define cdm2ToLambert(C) ((C)*0.001/3.18)
#define lambertToCdm2(L) ((L)*3.18*1000)

Image<Vector3> * tone_map_TR(const Image<Vector3> * RGB, const Image<double> * A_cone)
/*
   Uses Stevens and Stevens' equation for brightness in units of Brils:
     log10(B) = 0.004 [(S - 27)(8.4 - R) - 108]
   where
     S = 100 + 10 log10(L_w)
     R = 10 log10(L_w / L_in)
*/
{
    int width, height, max, i;
    Image<Vector3> * output;
    width  = A_cone->width;
    height = A_cone->height;
    max    = width * height;
    // create output image
    output = new Image<Vector3>(width, height);
    double L_dMax = cdm2ToLambert(MAX_DISPLAY_LUMINANCE);
    double L_da   = cdm2ToLambert(DISPLAY_AMBIENT);
    double S_d    = 100.0 + 10.0*log10(L_da);
    for (i = 0; i < max; i++) {
        // Convert cd/m^2 to Lamberts.
        double L_wa = cdm2ToLambert(A_cone->v[i]);
        double y    = 0.3*RGB->v[i].v[0] + 0.6*RGB->v[i].v[1] + 0.1*RGB->v[i].v[2];
        double f_r  = RGB->v[i].v[0]/y;   // color ratios, reapplied after tone mapping
        double f_g  = RGB->v[i].v[1]/y;
        double f_b  = RGB->v[i].v[2]/y;
        double L_w  = cdm2ToLambert(y);
        /*
           Stevens' equation gives the brightness in Brils:
             log10(B) = 0.004 [(S - 27)(8.4 - R) - 108]
           where
             S = 100 + 10 log10(L_wa)
             R = 10 log10(L_wa / L_w)
        */
        double S_w = 100.0 + 10.0*log10(L_wa);
        double R_w = -10.0 * log10(L_w/L_wa);
        /*
           We want the brightness corresponding to the world luminance
           to be the same as that perceived from the display,
           i.e. (S_w - 27)(8.4 - R_w) = (S_d - 27)(8.4 - R_d)
        */
        double R_d = 8.4 - (S_w - 27)*(8.4 - R_w)/(S_d - 27);
        /*
           R_d = 10 log10(L_da/L_d)
           L_d (in Lamberts) = L_da 10^(-0.1*R_d)
           Needs conversion to cd/m^2.
        */
        double L_d = lambertToCdm2(L_da * pow(10, -0.1*R_d))/MAX_DISPLAY_LUMINANCE;
        output->v[i].v[0] = MIN(1.0, L_d*f_r);  // Display Red
        output->v[i].v[1] = MIN(1.0, L_d*f_g);  // Display Green
        output->v[i].v[2] = MIN(1.0, L_d*f_b);  // Display Blue
    }
    return output;
}

Table 2: Source code for the FPSG tone-reproduction operator.


// A_cone is the adaptation luminance of the image
// RGB is the Red, Green and Blue radiance
// Display ambient in cd/m^2
// Code by Hector Yee and Sumanta Pattanaik

static double tviWard97(const double &luminance)
// Combines both the photopic and scotopic TVI (threshold vs. intensity)
// functions, as in Ward Larson et al. [26]
{
    if (luminance < 1.14815e-4) return 1.38038426e-3;
    double loglum = log10(luminance);
    if (loglum < -1.44)   return pow(10.0, pow(0.405*loglum + 1.6, 2.18) - 2.86);
    if (loglum < -0.0184) return pow(10.0, loglum - 0.395);
    if (loglum < 1.9)     return pow(10.0, pow(0.249*loglum + 0.65, 2.7) - 0.72);
    return luminance*5.559e-2;
}

// display ambient in cd/m^2
#define DISPLAY_AMBIENT 25.0
#define MAX_DISPLAY_LUMINANCE 125.0

Image<Vector3> * tone_map_FPSG(const Image<Vector3> * RGB, const Image<double> * A_cone)
{
    const double scale = tviWard97(DISPLAY_AMBIENT) / MAX_DISPLAY_LUMINANCE;
    int width, height, max, i;
    Image<Vector3> * output;
    width  = A_cone->width;
    height = A_cone->height;
    max    = width * height;
    // create output image
    output = new Image<Vector3>(width, height);
    double m;
    for (i = 0; i < max; i++) {
        // A_cone is assumed to be in cd/m^2 already (the extracted original
        // multiplied by an undefined 'lumscale' factor here).
        m = scale / tviWard97(A_cone->v[i]);
        output->v[i].v[0] = MIN(1.0, MAX(0.0, RGB->v[i].v[0] * m));
        output->v[i].v[1] = MIN(1.0, MAX(0.0, RGB->v[i].v[1] * m));
        output->v[i].v[2] = MIN(1.0, MAX(0.0, RGB->v[i].v[2] * m));
    }
    return output;
}

Table 3: Source code for the WRP tone-reproduction operator.


// A_cone is the adaptation luminance of the image
// L_cone is the scene luminance per pixel
// Cr is the RGB radiance divided by L_cone
// Display ambient in cd/m^2
// Original code by Ward (Radiance)
// Modified by Hector Yee and Sumanta Pattanaik

#define M_LN10    2.30258509299404568402
#define exp10(x)  exp(M_LN10*(x))
#define NUM_BINS  128

// dynamic range of display device
static const double Lddyn = 32.0;
// maximum luminance of display device
static const double Ldmax = 100.0;
// min display luminance
static const double Ldmin = Ldmax/Lddyn;

static double htcontrs(double La)
/* human threshold contrast sensitivity, dL(La) */
{
    double l10La, l10dL;
    /* formula taken from Ferwerda et al. [SG96] */
    if (La < 1.148e-4) return 1.38e-3;
    l10La = log10(La);
    if (l10La < -1.44)        /* rod response regime */
        l10dL = pow(.405*l10La + 1.6, 2.18) - 2.86;
    else if (l10La < -.0184)
        l10dL = l10La - .395;
    else if (l10La < 1.9)     /* cone response regime */
        l10dL = pow(.249*l10La + .65, 2.7) - .72;
    else
        l10dL = l10La - 1.255;
    return exp10(l10dL);
}

Image<Vector3> * tone_map_WRP(const Image<double> * L_cone, const Image<Vector3> * Cr,
                              const Image<double> * A_cone)
{
    int width, height, max, i;
    Image<Vector3> * output;
    width  = L_cone->width;
    height = L_cone->height;
    max    = width * height;
    // create output image
    output = new Image<Vector3>(width, height);
    // create histogram
    unsigned int histo[NUM_BINS];
    // cumulative distribution
    float cumf[NUM_BINS+1];
    memset(histo, 0, NUM_BINS*sizeof(unsigned int));
    memset(cumf, 0, (NUM_BINS+1)*sizeof(float));
    // The histogram is built over log luminance (reconstructed: the extracted
    // original omitted the logs, but Lw = exp(llmin + i*delta_bin) below
    // implies log-space bins, as in the Radiance source).
    double llmin = FLT_MAX;
    double llmax = -FLT_MAX;
    for (i = 0; i < max; i++) {
        llmin = MIN(llmin, log(A_cone->v[i]));
        llmax = MAX(llmax, log(A_cone->v[i]));
    }
    double bin_scale = (NUM_BINS-1)/(llmax - llmin);
    int bin;
    // compute histogram
    for (i = 0; i < max; i++) {
        bin = (int) ((log(A_cone->v[i]) - llmin)*bin_scale);
        if (bin < 0) bin = 0;
        if (bin >= NUM_BINS) bin = NUM_BINS-1;
        histo[bin]++;
    }
    unsigned int threshold = (unsigned int) (max*.025f + .5f);
    unsigned int ceiling, trimmings;
    double Tr;
    const double logLddyn  = log(Lddyn);
    const double delta_bin = (llmax - llmin)/NUM_BINS;
    unsigned int histot = max;
    unsigned int sum;
    double Lw, Ld;
    do {    /* iterate to solution */
        sum = 0;
        /* cumulative probability */
        for (i = 0; i < NUM_BINS; i++) {
            cumf[i] = (float)sum/histot;
            sum += histo[i];
        }
        cumf[i] = 1.;
        Tr = histot * delta_bin/logLddyn;
        ceiling = (unsigned int) (Tr + 1);
        trimmings = 0;
        /* clip to envelope */
        for (i = NUM_BINS; i--; ) {
            Lw = exp(llmin + i*delta_bin);
            Ld = Ldmin * exp(logLddyn * .5*(cumf[i]+cumf[i+1]));
            ceiling = (unsigned int) (Tr * (htcontrs(Ld) * Lw) /
                                           (htcontrs(Lw) * Ld) + 1.);
            if (histo[i] > ceiling) {
                trimmings += histo[i] - ceiling;
                histo[i] = ceiling;
            }
        }
    } while ((histot -= trimmings) > threshold && trimmings > threshold);
    // histogram adjustment
    double d;
    int j;
    for (i = 0; i < max; i++) {
        d = (log(L_cone->v[i]) - llmin)*bin_scale;
        j = (int) d;
        if (j < 0) j = 0;
        if (j >= NUM_BINS) j = NUM_BINS-1;
        d -= j;
        Ld = Ldmin*exp(logLddyn*((1.-d)*cumf[j] + d*cumf[j+1]));
        d  = (Ld - Ldmin)/(Ldmax - Ldmin);
        output->v[i].v[0] = MIN(1.0, MAX(0.0, Cr->v[i].v[0] * d));
        output->v[i].v[1] = MIN(1.0, MAX(0.0, Cr->v[i].v[1] * d));
        output->v[i].v[2] = MIN(1.0, MAX(0.0, Cr->v[i].v[2] * d));
    }
    return output;
}

Table 4: Source code for the PTYG tone-reproduction operator


// INPUT
//   L_cone, L_rod : (2D array) scene photopic & scotopic luminance (in cd/m^2),
//                   normalized so that the scotopic and photopic luminance of an
//                   equienergy spectrum are the same.
//   Cr            : color ratio (C_r, C_g, C_b).
//   A_cone, A_rod : (2D array) adaptation luminance for cones and rods (in cd/m^2).
//                   Due to the time course of adaptation these values are likely to
//                   differ from the scene values for dynamic scenes; for static
//                   scenes they are the same.
// Author: S. N. Pattanaik. Ported to C++ by H. Yee.

double model_sigma_cone(const double I) {
    double k     = 1.0/(5.0*I + 1.0);
    double Fl    = 0.2*pow(k,4.0)*(5.0*I) + 0.1*pow(1.0 - pow(k,4.0), 2.0)*pow(5.0*I, 1.0/3.0);
    double sigma = pow(2.0, 1.0/0.73) / (Fl/(5.0*I));
    return sigma;
}

double model_sigma_rod(const double I) {
    double j     = 0.00001/((5.0*I) + 0.00001);
    double Fls   = 3800.0*pow(j,2.0)*(5.0*I) + 0.2*pow(1.0 - pow(j,2.0), 4.0)*pow(5.0*I, 1.0/6.0);
    double sigma = pow(2.0, 1.0/0.73) / (Fls/(5.0*I));
    return sigma;
}

double model_response(const double I, const double sigma) {
    double R = pow(I,0.73) / (pow(I,0.73) + pow(sigma,0.73));
    return R;
}

double inv_model_response(const double R, const double sigma) {
    double I = sigma*pow(R/(1.0-R), 1.0/0.73);
    return I;
}

double model_diff_response(const double I, const double sigma) {
    double n  = 0.73;
    double A  = pow(I,n);
    double dR = (pow(sigma,n)*n*A)/pow(A + pow(sigma,n), 2.0);
    return dR;
}

Image<Vector3> * tone_map(const Image<double> * L_cone, const Image<double> * L_rod,
                          const Image<Vector3> * Cr, const Image<double> * A_cone,
                          Image<double> * A_rod)
{
    int width, height, max, i;
    Image<Vector3> * output;
    width  = L_cone->width;
    height = L_cone->height;
    max    = width * height;
    // create output image
    output = new Image<Vector3>(width, height);
    double L0Cone = 2.0e6;   // in cd/m^2
    double B_cone, sigma_cone, R_cone;
    double L0Rod  = 0.04;    // in cd/m^2
    double B_rod, sigma_rod, R_rod;
    double white_factor = 5.0;       // the factor by which White differs from the adaptation luminance
    double dark_factor  = 32.0/5.0;  // 32 is the factor by which White differs from the Dark
    double R_lum_scene, rod_contrib, cone_contrib, scene_white_cone,
           scene_dark_cone, scene_white_rod, scene_dark_rod, REF_Wht_scene, REF_Blk_scene,
           S_color, scale, R_lum_display, color_strength, L_d_cone, Red_cone, Green_cone,
           Blue_cone, L_d_rod, RGB_rod, Red_d, Green_d, Blue_d;
    double G_display     = 25.0;     // display adaptation luminance in cd/m^2
    double display_white = G_display*white_factor;
    double display_dark  = G_display/dark_factor;
    double display_sigma = model_sigma_cone(G_display);
    double REF_Wht_display = model_response(display_white, display_sigma);
    double REF_Blk_display = model_response(display_dark, display_sigma);
    double S_d = (REF_Wht_display - REF_Blk_display)/(log10(display_white) - log10(display_dark));
    double L_d;
    /*
       ADAPTATION MODEL
       The scene response is determined by the dynamic adaptation luminance.
    */
    for (i = 0; i < max; i++) {
        B_cone     = L0Cone/(L0Cone + A_cone->v[i]);
        B_rod      = L0Rod/(L0Rod + A_rod->v[i]);
        sigma_cone = model_sigma_cone(A_cone->v[i]);
        sigma_rod  = model_sigma_rod(A_rod->v[i]);
        R_cone     = B_cone*model_response(L_cone->v[i], sigma_cone);
        R_rod      = B_rod*model_response(L_rod->v[i], sigma_rod);
        R_lum_scene  = R_cone + R_rod;
        rod_contrib  = R_rod/R_lum_scene;
        cone_contrib = R_cone/R_lum_scene;
        scene_white_cone = A_cone->v[i]*white_factor;
        scene_dark_cone  = A_cone->v[i]/dark_factor;
        scene_white_rod  = A_rod->v[i]*white_factor;
        scene_dark_rod   = A_rod->v[i]/dark_factor;
        REF_Wht_scene = B_cone*model_response(scene_white_cone, sigma_cone)
                      + B_rod*model_response(scene_white_rod, sigma_rod);
        REF_Blk_scene = B_cone*model_response(scene_dark_cone, sigma_cone)
                      + B_rod*model_response(scene_dark_rod, sigma_rod);
        S_color = model_diff_response(L_cone->v[i], sigma_cone);
        /*
           APPEARANCE and INVERSE APPEARANCE MODEL
           Adjusts the scene white/black to display white/black.
           For the given G_display, B_cone = 1 and B_rod = 0.001,
           so the rod component has been ignored in the following computation.
        */
        scale = (REF_Wht_display - REF_Blk_display)/(REF_Wht_scene - REF_Blk_scene);
        // Map the scene response range onto the display response range.
        // (Reconstructed: the extracted original left 'scale' unused here.)
        R_lum_display = (R_lum_scene - REF_Blk_scene)*scale + REF_Blk_display;
        if (R_lum_display < 0) R_lum_display = 0;
        else if (R_lum_display >= 1) R_lum_display = 0.99999;
        /*
           INVERSE ADAPTATION MODEL
           Invert the data to get display luminance.
        */
        L_d = inv_model_response(R_lum_display, display_sigma)/display_white;
        /* Create display R, G, B */
        color_strength = S_color/S_d;
        L_d_cone   = L_d*cone_contrib;
        Red_cone   = L_d_cone*pow(Cr->v[i].v[0], color_strength);
        Green_cone = L_d_cone*pow(Cr->v[i].v[1], color_strength);
        Blue_cone  = L_d_cone*pow(Cr->v[i].v[2], color_strength);
        L_d_rod = L_d*rod_contrib;
        RGB_rod = L_d_rod;
        Red_d   = Red_cone + RGB_rod;
        Green_d = Green_cone + RGB_rod;
        Blue_d  = Blue_cone + RGB_rod;
        output->v[i].v[0] = Red_d;
        output->v[i].v[1] = Green_d;
        output->v[i].v[2] = Blue_d;
    }
    return output;
}
