
Near-Field Photometry

_____________________________________________________________________________________

Measuring and Modeling Complex 3-D Light Sources


by

Ian Ashdown, P. Eng.


Research & Development Manager
Ledalite Architectural Products, Inc.

Abstract
This paper presents a method that, based on a field-theoretic approach to photometry, accurately
and efficiently measures and models arbitrarily complex three-dimensional light sources. It
allows the prediction of direct illuminance at any point on any surface anywhere within the
surrounding 3-D space without requiring any knowledge of the geometry of the source or its
distance from the surface being illuminated. The method is easily integrated with ray tracing
techniques and radiosity methods, including multiprocessor implementations.
NOTE: this is a revised version of a paper called Modeling Complex 3-D Light Sources that
was presented in ACM SIGGRAPH 93 Course 22 Notes, Making Radiosity Practical.

1. Introduction
Given the remarkable degree of sophistication that has been achieved in modeling complex
environments using ray tracing techniques and radiosity methods, it would be reasonable to
expect a similar degree of sophistication in the modeling of light sources illuminating these
environments. However, this is not the case. Most rendering systems rely on isotropic point
sources with light direction controls (Warn 1983) or line and area (i.e., extended) sources
modeled by discrete spatial arrays of point sources (Hall and Greenberg 1983, Brotman and
Badler 1984, Verbeck 1984, Verbeck and Greenberg 1984, Houle 1991, and Houle and Fiume
1993). At best, a few systems model extended area sources as homogeneous one- and two-
dimensional continua (Nishita et al. 1985, Tanaka and Takahashi 1991, Poulin and Amanatides
1990, Picott 1992, and Nishita et al. 1992).
Verbeck and Greenberg (1984) noted that to correctly describe the physical characteristics of
light sources, three attributes must be modeled: (1) the light source geometry, (2) the emitted
spectral distribution, and (3) the luminous intensity distribution, where each point source in a
discrete array has its own spectral and luminous intensity properties. While these attributes may
be necessary and sufficient, they rarely lead to useful models of complex three-dimensional (i.e.,
physical) light sources. The problem is common to all point source models: the geometries and
photometric distributions of most physical light sources are simply too complicated to describe
accurately or efficiently using an array or continuum of point sources.
This problem can be alleviated by reformulating the fundamental photometric concepts of
luminous flux, luminance and illuminance in terms of field theory (e.g., Gershun 1936 and Moon
and Spencer 1981). An interesting aspect of this reformulation is that it does not require the
concept of a point source; all measurable photometric quantities can be described in terms of
luminous flux, luminance and illuminance. The reformulation leads to a field-theoretic approach
that can be used to model arbitrarily complex three-dimensional light sources (Levin 1971).
This paper describes a practical method, based on Levin's approach, that can accurately predict
the direct illuminance at any point on any surface anywhere within the surrounding three-
dimensional space without requiring any knowledge of the geometry of the source or its distance
from the surface being illuminated. It can be easily integrated with ray tracing techniques and
radiosity methods, and can accommodate surface bidirectional reflectance distribution functions.
The method can even be used to synthesize photometrically accurate perspective images of the
light source viewed from any direction and distance.
A problem of particular interest in computer graphics is the direct illuminance of a finite surface area
due to a complex three-dimensional source. Fortunately, the method is such that the illuminance
of any set of points can be calculated simultaneously. The method can be formulated such that it
is inherently parallel, making it suitable for implementation on hardware accelerators and
multiprocessor workstations.
This paper also briefly describes a practical near-field goniophotometer (Ashdown 1993b,
1993c) that measures the three-dimensional field of light surrounding a physical light source.
This is exactly the photometric data required by the described illuminance prediction method.
Physical sources such as fluorescent office lighting fixtures therefore do not have to be
laboriously modeled; their near-field photometric characteristics can be measured by the
manufacturer. Used in combination, the near-field goniophotometric data and illuminance
prediction method are ideally suited for accurately modeling architectural interiors using ray
tracing techniques and/or radiosity methods.

2. Fundamental Concepts
For whatever reasons, the fundamental photometric* concepts of luminous flux, luminous
intensity, illuminance, and luminance are rarely discussed in the computer graphics literature. We
therefore begin with a review of their formal definitions as presented in ANSI/IES RP-16-1986,
Nomenclature and Definitions for Illuminating Engineering (ANSI/IES 1986):
Luminous flux (Φ) is defined as the time rate of flow of energy (i.e., visible light), and is
measured in lumens.
Luminous intensity is defined as the luminous flux per unit solid angle in a given direction
emitted by a point source, or:

I = dΦ/dω (1)

where I is the luminous intensity and dω is the differential solid angle.


Illuminance is defined as the luminous flux per unit area incident at a point on a surface, or:
E = dΦ/dA (2)

* Photometric and radiometric theory are, apart from their units of flux measurement (lumens
versus watts), equivalent. The following terms are therefore similarly equivalent: luminous flux
/ radiant flux, luminous intensity / radiant intensity, illuminance / irradiance and luminance /
radiance.

where E is the illuminance and dA is the differential area surrounding the point.
Luminance is defined as the luminous flux per unit solid angle in a given direction per unit area
leaving, passing through and/or arriving at a point on a surface, or:
L = d²Φ/(dω dA cos θ) (3)

where θ is the angle between the normal of the real or imaginary surface and the given
direction (see Fig. 1). The subexpression dA cos θ can be interpreted as the orthogonal projection
of the element dA of the surface on a plane perpendicular to the given direction.
The luminance of an emitting surface can be expressed in terms of luminous intensity as:
L = dI/(dA cos θ) (4)
where the differential area surrounding the point is interpreted as a point source. Similarly, the
luminance of a receiving surface can be expressed in terms of illuminance as:
L = dE/(dω cos θ) (5)
Solving for illuminance, equation (5) becomes:

E = ∫ L cos θ dω (6)

where the integration is performed over all differential solid angles dω intersecting an imaginary
hemisphere centered over the point on the surface (Fig. 2).
Equation (6) is useful in that it allows us to calculate the illuminance at a point on any surface
due to an area source if we know the differential surface luminance distribution of the source
(e.g., Ngai 1987). That is, we can in theory calculate the illuminance of a surface if we know the
luminous intensity distribution of each point source comprising the continuum of the area source.
Unfortunately, the geometric and photometric distribution complexity of most physical light
sources precludes any practical application of this approach.
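As a quick numerical check of equation (6), the following sketch (illustrative only; the function names are not from the paper) integrates L cos θ dω over the hemisphere by midpoint quadrature. For a hemisphere of uniform luminance L0, the result should approach πL0:

```python
import math

def illuminance(L_of_dir, n_theta=200, n_phi=200):
    """Numerically evaluate equation (6), E = integral of L cos(theta) d_omega,
    over the hemisphere above the point (d_omega = sin(theta) dtheta dphi)."""
    E = 0.0
    dt = (math.pi / 2.0) / n_theta
    dp = (2.0 * math.pi) / n_phi
    for i in range(n_theta):
        theta = (i + 0.5) * dt
        w = math.cos(theta) * math.sin(theta) * dt * dp
        for j in range(n_phi):
            phi = (j + 0.5) * dp
            E += L_of_dir(theta, phi) * w
    return E

# A hemisphere of uniform luminance L0 should yield E = pi * L0.
L0 = 100.0
E = illuminance(lambda theta, phi: L0)
```

The same routine accepts any luminance distribution L(θ, φ), which is how equation (6) is applied to area sources in practice.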

3. Field Theory
We commonly think of luminance as a property of the physical surface that is emitting or
reflecting light. However, there are physical light sources that do not have definite surfaces. One
example is a plasma arc, which emits light throughout its volume. Another example is the sky --
the blue light we see is due to the scattering of sunlight by air molecules and dust from ground
level to the upper reaches of the atmosphere. These are volume sources of light.
ANSI/IES (1986) addresses volume sources by stating in its definition of luminance that a
surface can be real or imaginary. In other words, one can choose an arbitrary plane in space and
(in theory) measure its luminance. Of course, in practice the value measured at the receiving
surface (the luminance meter's photosensitive element) may be less than the luminance of the
emitting surface due to the attenuation of the transmitting media.
We can address this issue more directly by redefining luminance as follows:
The luminance at a given point in space and in a given direction is defined as the luminous
flux per unit area in the given direction per unit solid angle.
This definition is identical to that of ANSI/IES (1986), except for one crucial difference: there is
no mention of any surface! This is reasonable in that luminance is an intrinsic property of the
field of light surrounding an observer in space. It is not a property of any surface, real or
imaginary.
Gershun (1936) offered a useful geometrical interpretation of luminance by noting that a
differential solid angle is equivalent to a geometric ray. That is, the elementary cone d shown in
Figure 1 can be interpreted as an infinitely thin ray with a differential cross-sectional area equal
to dAcos, and with the luminous flux contained within the ray. Thus, luminance is an
intrinsic property of a directed geometric ray of light.
Gershun's rays are familiar from geometrical optics and ray tracing in computer graphics. They
are straight lines only in an optically homogeneous medium, and may be reflected or refracted at
media boundaries. The luminance of a ray may vary as it traverses a volume source or a
participating medium (e.g., haze, fog, smoke, etc.) in which absorption, scattering or dispersion
occurs. In a vacuum or (over short distances) air, however, the luminance of a ray remains
invariant along its length.
In terms of field theory, these geometric rays represent field lines in a five-dimensional scalar
field of light, where each point in the field has three spatial and two directional co-ordinates. A
scalar luminance value can be measured at every point in the field (i.e., at every point in 3-D
space and in every direction). Illuminance can then be defined in terms of luminance using
equation (6). That is, we can calculate the illuminance of any point on any surface anywhere in
3-D space if we know the luminance of every ray intersecting the point.

4. A Field-Theoretic Approach
One of the reasons Gershun (1936) and other early photometric theorists reformulated the
fundamental photometric concepts in terms of field theory was to develop photometric theory as
a special case of electromagnetic theory, where the wavelength of the electromagnetic radiation is
zero (see Moon and Spencer 1981, Chap. 1). It is important to recognize that this reformulation
introduces no new definitions or results; it is simply a more rational interpretation of the
fundamental photometric concepts of luminous flux, luminance and illuminance. (On the other
hand, it does demonstrate that there is no need to include the point source concept, and with it the
definition of luminous intensity, in photometric theory.)
None of this helps in modeling complex three-dimensional light sources, of course. However,
Levin (1971) made the following observation in a paper on lighting fixture photometry:
In order to completely describe the spatial luminous radiation produced by a source, the basic
interpretation of luminance as a property of geometric rays can be used. A closed surface is
described about the source S as in [Fig. 3], and the luminance L(x, y, z, , ) is described over
the surface. Logically, the luminance distribution for all points on the surface can be
described, and this is sufficient for the calculation of photometric quantities at any distance
and in any direction.
Levin assumed that the source S was embedded in a homogeneous non-participating medium
(air). Thus, any geometric ray outside the source volume is a straight line extending to infinity
with constant luminance along its length. His luminance distribution at a point on the
imaginary closed surface is the luminance of every ray intersecting that point. Clearly, only those
rays originating within the source volume will have non-zero luminance. Knowing this
distribution, we can use equation (6) to predict the direct illuminance at any point on any surface
anywhere within the three-dimensional space surrounding the source. That is, we can for any
point on any real or imaginary surface determine which rays emanating from the source volume
intersect the point from above the surface of the plane (see Figure 5) and calculate the
illuminance at the point from their luminances. (If we know the bidirectional reflectance
distribution function of the surface, we can also predict its luminance in any given direction due
to the source.) In marked contrast to approaches based on point sources, we do not need to know
anything about the geometry of the source or its distance from the surface being illuminated.
To expand on Levin's brief observation, this approach cannot predict the luminance or
illuminance of a point that is inside the source volume. Furthermore, this volume must
represent the convex hull of the source (or sources). That is, any geometric ray originating within
the source volume must not intersect the volume after passing through any point outside the
volume. If the source's surface is ill-defined (e.g., a plasma arc), an imaginary convex bounding
volume can be described which fully encloses the source.
It can also be seen that the source (or sources) can be outside the imaginary closed surface for
which a luminance distribution is described. However, this situation places some limitations on
where in space photometric quantities can be calculated -- the source volume creates an umbral-
like region with respect to the closed surface where the rays cannot be traced without intersecting
the volume.
Levin's approach is completely general in that it can in theory model (within the limits of
geometrical optics) the spatial distribution of luminous flux surrounding a source with absolute
accuracy and precision. In terms of field theory, the luminance distribution of the imaginary
closed surface is a bounded four-dimensional scalar field. Assuming a homogeneous non-
participating medium, this field fully describes the unbounded five-dimensional scalar field (i.e.,
the light field) surrounding the source volume.
In more intuitive terms, we can imagine ourselves at a point on the closed surface. The set of
geometric rays intersecting the point clearly represents our view of the source and its
environment in all directions. In other words, we can use our knowledge of the light field at that
point to synthesize a photometrically accurate perspective view of the source from the point.
Furthermore, we can do this for any point in space outside the source volume.
A further limitation of Levin's approach is that the surface being illuminated must not reflect
luminous flux back into the source volume. If it does, then the source becomes photometrically
coupled to its environment, and so the surface must be considered to be part of the source
volume.
(From this point on, the discussion considers architectural lighting fixtures, or luminaires. It
should be understood, however, that the comments also apply to most physical light sources.)

5. A Near-Field Goniophotometer
To be useful, Levin's approach requires some method of determining the luminance of geometric
rays intersecting the imaginary closed surface surrounding a light source (luminaire). Given a
detailed physical description of a luminaire, we can use ray tracing or finite element radiative
transfer techniques (i.e., radiosity) to calculate the ray luminances. This process can be quite
laborious, especially for complicated luminaire designs. In practice, it requires a comprehensive
database of material BRDFs and lamp luminance distributions. (Lighting Technologies' FiELD
[1990] finite element luminaire design program offers this information and design capability in a
commercial product.) Fortunately, the process need only be performed once for any given design.
We can also measure the ray luminances of existing luminaires. Ashdown (1993b, 1993c)
described a practical near-field goniophotometer that uses a compound luminance meter to
simultaneously measure the luminance of up to 250,000 geometric rays at any point in space.
The meter is mounted on a moveable arm (Fig. 4) that rotates in the vertical plane around the
luminaire. By rotating the luminaire in the horizontal plane, the meter can be positioned at any
point on the surface of an imaginary sphere enclosing the luminaire.
The compound luminance meter consists of a CCD video camera and a frame grabber to acquire
digitized video images. A wide angle lens provides a view of the entire luminaire. By focusing
the lens at infinity, each photosensitive element (pel) of the CCD photosensor array measures
the average luminance of a pyramidal volume of space in a given direction. (By way of
comparison, a lens-type luminance meter is designed to measure the luminance of a physical
surface by focusing on that surface.) With an overall field of view of 90 degrees and a CCD
sensor array of 512 × 512 pels, the field of view of each pel is only 0.2 degrees, making it an
excellent approximation of a geometric ray.
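Assuming an idealized pinhole camera (a simplification of the actual meter optics; the function name is hypothetical), the mapping from a pel's row and column to its geometric ray direction can be sketched as:

```python
import math

def pel_direction(row, col, n=512, fov_deg=90.0):
    """Unit direction of the geometric ray sampled by CCD pel (row, col) for an
    idealized pinhole camera focused at infinity, in the meter's local
    co-ordinate system (z = optical axis)."""
    half = math.tan(math.radians(fov_deg) / 2.0)  # image half-width at z = 1
    x = ((col + 0.5) / n * 2.0 - 1.0) * half
    y = ((row + 0.5) / n * 2.0 - 1.0) * half
    r = math.sqrt(x * x + y * y + 1.0)
    return (x / r, y / r, 1.0 / r)

# Near the optical axis the angular pitch between adjacent pels is close to
# the 90/512 (roughly 0.2 degree) figure quoted above.
d1 = pel_direction(256, 256)
d2 = pel_direction(256, 257)
pitch = math.degrees(math.acos(min(1.0, sum(a * b for a, b in zip(d1, d2)))))
```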

6. Data Storage Considerations


An initial concern in the development of the goniophotometer was that storage of the video
images would consume many megabytes of data, even for a single luminaire. It was determined,
however, that image resolutions of between 32 × 32 and 64 × 64 pixels are sufficient for most
illuminance calculation purposes (Ashdown 1993a). Higher resolutions are generally required if
detailed images are to be synthesized from the data set. A typical frame grabber has an 8-bit
dynamic range. The luminance of each pixel can thus be stored as a byte, scaled by a single
floating point value for the entire image.
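The byte-plus-scale storage scheme can be sketched as follows (the helper names are hypothetical; the actual file format is not specified here):

```python
def encode_image(luminances):
    """Quantize pixel luminances to one byte each plus a single floating-point
    scale factor for the whole image (matching the 8-bit frame grabber range)."""
    peak = max(luminances)
    scale = peak / 255.0 if peak > 0.0 else 1.0
    codes = [min(255, int(round(v / scale))) for v in luminances]
    return scale, codes

def decode_image(scale, codes):
    """Recover approximate luminances from the byte codes and scale factor."""
    return [scale * c for c in codes]

pixels = [0.0, 12.5, 870.0, 1234.0]
scale, codes = encode_image(pixels)
restored = decode_image(scale, codes)
```

The quantization error per pixel is bounded by half the scale factor, which is adequate given the frame grabber's own 8-bit dynamic range.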
It was also determined that angular increments of 5 degrees between meter positions in both the
vertical and horizontal planes are usually sufficient. For a luminaire with nominally quadrilateral
symmetry, this implies 19 × 37 = 703 images. However, fewer images are required in the horizontal
planes for vertical angles near the nadir and zenith. In practice, only 540 images need be stored
for a typical luminaire.
There is a considerable amount of intraframe redundancy in most of the images. Also, since each
image offers a slightly different (rotated) view of the luminaire from that of its neighbors, there is
a large amount of interframe redundancy as well. This redundancy can be removed using any
number of lossless or lossy image compression and motion estimation techniques. While this
topic is still under investigation, preliminary results indicate that near-field photometric data file
sizes of 5 to 100 kilobytes for typical luminaires are likely possible.

7. Illuminance Calculations
Figure 2 shows an elementary convergent cone dω of luminous flux (a geometric ray) incident at
a point P on a surface at an angle θ to the surface normal. The illuminance at the point is due to
the luminance of all such rays incident on the surface at the point, as expressed in equation (6).
Each ray dω intersects the imaginary hemisphere positioned over the point. The orthogonal
projection of the differential area of intersection onto the surface, dC, is given by:

dC = cos θ dω (7)

From Nusselt's analogy (e.g., Cohen and Greenberg 1985), dC is the differential configuration
factor* of dω, and represents the luminous flux in dω that is intercepted by the differential
surface element dA surrounding the point. The illuminance of the point is then given by:

E = ∫C L dC (8)

We can approximate the illuminance at the point by grouping the elementary convergent cones
dω into a set of finite convergent cones Δω. The luminance Li of each finite cone Δωi is the average
of its constituent elementary cones, and the approximate illuminance E' is then:

E' = Σi Li ΔCi (9)

where ΔCi is the delta configuration factor of cone Δωi.


An efficient illuminance calculation method can be developed by substituting a hemicube
(Cohen and Greenberg 1985) or one of its derivatives (e.g., Ashdown 1994) for Nusselt's
hemisphere. The hemicube (Fig. 5) has computational advantages in that each face can be
divided into square elements (patches) ΔA, with delta configuration factors given by:

ΔCtop = ΔA / (x² + y² + z²)² (10)

for patches on the top face and

ΔCside = zΔA / (y² + z² + 1)² (11)

for patches on each of the four side faces (Cohen and Greenberg 1985).
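Since dC = cos θ dω here (with no factor of 1/π), the delta configuration factors of equations (10) and (11), summed over all five faces of the hemicube, should total π, the value of equation (6) for a hemisphere of unit luminance. A small numerical check (illustrative only; the function name is not from the paper):

```python
import math

def hemicube_config_factor_sum(n=128):
    """Sum the delta configuration factors of equations (10) and (11) over all
    five faces of a unit hemicube (top face at z = 1, faces divided into
    patches of side 2/n). Since dC = cos(theta) d_omega, the total should
    approach pi, the solid-angle integral of cos(theta) over the hemisphere."""
    dA = (2.0 / n) ** 2
    total = 0.0
    for i in range(n):                      # top face: x, y in [-1, 1], z = 1
        x = -1.0 + (i + 0.5) * 2.0 / n
        for j in range(n):
            y = -1.0 + (j + 0.5) * 2.0 / n
            total += dA / (x * x + y * y + 1.0) ** 2
    side = 0.0
    for i in range(n):                      # one side face: y in [-1, 1], z in [0, 1]
        y = -1.0 + (i + 0.5) * 2.0 / n
        for j in range(n // 2):
            z = (j + 0.5) * 2.0 / n
            side += z * dA / (y * y + z * z + 1.0) ** 2
    return total + 4.0 * side               # the four side faces are congruent

total = hemicube_config_factor_sum()
```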
The luminance of each ray Li in equation (9) can be determined by centering a hemicube over
the point on the surface and projecting a geometric ray from the point out through the center of
each hemicube patch into the environment. Only those rays that intersect the luminaire volume
and that are not obstructed by intervening objects can have non-zero luminances.
If the point is located outside the imaginary sphere defined by the compound luminance meter
positions (the measurement sphere), the intersection of the patch ray with the sphere (i.e., its
position and direction) is calculated. Otherwise, the ray is extended through the point in the
opposite direction until it intersects the measurement sphere (Fig. 6).
It is unlikely that the patch ray will intersect the measurement sphere exactly at a meter position
or with the exact direction of a geometric ray in the (measured or calculated) near-field
photometric data set. The closest rays that were measured or calculated must be determined and
the patch ray luminance calculated using bilinear interpolation.
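The bilinear interpolation step can be illustrated with its 2-D kernel (the actual method interpolates between neighboring meter positions and ray directions; this sketch shows only the arithmetic):

```python
def bilinear(grid, u, v):
    """Bilinearly interpolate from a 2-D grid of samples at fractional
    row/column co-ordinates (u, v)."""
    i0, j0 = int(u), int(v)
    i1, j1 = min(i0 + 1, len(grid) - 1), min(j0 + 1, len(grid[0]) - 1)
    fu, fv = u - i0, v - j0
    return ((1.0 - fu) * (1.0 - fv) * grid[i0][j0]
            + (1.0 - fu) * fv * grid[i0][j1]
            + fu * (1.0 - fv) * grid[i1][j0]
            + fu * fv * grid[i1][j1])

# Halfway between four measured ray luminances, the result is their average:
samples = [[10.0, 20.0], [30.0, 40.0]]
L = bilinear(samples, 0.5, 0.5)
```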
Expressed in pseudocode, the illuminance calculation method is:

* In the thermal radiation literature, the term configuration factor applies when luminous or
radiant flux is exchanged between a differential and a finite area surface element. The term form
factor applies when flux is exchanged between two finite area surface elements.

Point illuminance E' = 0
FOR each hemicube element ray ri
    IF ri is not obstructed
        IF ri intersects luminaire volume
            Determine intersection with measurement sphere
            Calculate luminance Li using bilinear interpolation
            Calculate patch configuration factor Ci
            E' = E' + Li * Ci
        ENDIF
    ENDIF
ENDFOR
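The pseudocode above can be sketched in executable form. This hypothetical example replaces the measurement sphere and bilinear interpolation with an exact ray-sphere test against a uniform-luminance spherical source straight overhead, uses only top-face hemicube patches (sufficient here, since the source projects entirely onto the top face), and models no obstructions:

```python
import math

def illuminance_hemicube(center, radius, L0, n=512):
    """Executable sketch of the pseudocode for a point at the origin of a
    horizontal surface. The source is a uniform-luminance sphere straight
    overhead, so only top-face hemicube patches (equation (10), z = 1) are
    needed, and an exact ray-sphere test stands in for the measurement-sphere
    interpolation. Obstructions are not modeled."""
    cx, cy, cz = center
    dA = (2.0 / n) ** 2
    E = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * 2.0 / n
        for j in range(n):
            y = -1.0 + (j + 0.5) * 2.0 / n
            r = math.sqrt(x * x + y * y + 1.0)
            dx, dy, dz = x / r, y / r, 1.0 / r   # patch ray direction
            t = cx * dx + cy * dy + cz * dz      # closest approach along the ray
            if t > 0.0:
                px, py, pz = t * dx - cx, t * dy - cy, t * dz - cz
                if px * px + py * py + pz * pz <= radius * radius:
                    # Ray intersects the source volume: Li = L0, delta Ci from (10)
                    E += L0 * dA / (x * x + y * y + 1.0) ** 2
    return E

# Sphere of radius 0.5 at distance 2: analytically E = pi * L0 * (r/d)^2.
E = illuminance_hemicube((0.0, 0.0, 2.0), 0.5, 100.0)
```

The sum approaches the analytic result as the hemicube resolution increases, independent of how complicated a real source's geometry or photometric distribution might be.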

Determining whether a patch ray ri intersects the luminaire volume is not a critical step. If it
doesn't, the ray will have zero luminance. However, it is important to ensure that the ray does not
intersect the measurement sphere at an angle that is outside the field of view of the luminance
meter at the point of intersection. This is easily accomplished by enclosing the luminaire in an
imaginary bounding volume. Making this volume a box or a sphere simplifies the patch ray-
bounding volume intersection calculations. The only restriction is that the bounding volume must
be entirely within the meter's field of view for any position on the measurement sphere. On the
other hand, the volume should bound the actual luminaire as closely as possible to cull those rays
with zero luminance.
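A box-shaped bounding volume makes the patch ray test a standard slab intersection. A minimal sketch (illustrative; the co-ordinate conventions are assumed):

```python
def ray_hits_box(origin, direction, box_min, box_max):
    """Slab-method test of whether a ray (origin, direction) intersects an
    axis-aligned bounding box, the simplified patch ray test described above."""
    t_near, t_far = 0.0, float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-12:                    # ray parallel to this slab
            if o < lo or o > hi:
                return False
        else:
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return False
    return True

# A ray straight up from the origin into a box two units overhead, and a
# horizontal ray that misses it:
hit = ray_hits_box((0, 0, 0), (0, 0, 1), (-1, -1, 2), (1, 1, 3))
miss = ray_hits_box((0, 0, 0), (1, 0, 0), (-1, -1, 2), (1, 1, 3))
```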
An efficient patch ray culling method is to project the bounding volume onto the hemicube and
mark those patches covered by the projection (using a scanline fill algorithm for each hemicube
face). Only those rays from marked patches will intersect the bounding volume. Those hemicube
faces containing marked patches should be similarly marked. This will eliminate the need to scan
an unmarked face afterwards for marked patches when tracing patch rays.
The number of patch rays intersecting the bounding volume will depend on the distance of the
point from the luminaire. Since aliasing effects will occur if there are not enough intersecting
rays (the video images of the luminaire will be undersampled), it is necessary to increase the
hemicube resolution with increasing distance. This is best done as follows:
1. Precalculate the patch configuration factors for a specific hemicube resolution (e.g., 128 ×
128 patches) and store the results in a RAM lookup table.
2. Determine the maximum projected width of the luminaire bounding volume on the
hemicube in number of marked patches.
3. If the projected width is less than twice the video image resolution (e.g., 64 × 64 pixels),
recursively subdivide the marked patches until the maximum projected width in marked
patches is greater than the video image resolution.
4. Trace a ray through the center of each marked patch.
Fortunately, it isn't necessary to calculate the configuration factors for the subdivided patches
using equations (10) and (11). They can be approximated with sufficient accuracy simply by
dividing the parent's configuration factor by four.
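The parent-divided-by-four approximation can be checked against equation (10) directly (an illustrative check with an arbitrarily chosen patch position):

```python
def delta_c_top(x, y, dA):
    """Delta configuration factor of a top-face hemicube patch (equation (10),
    with z = 1 on the top face of a unit hemicube)."""
    return dA / (x * x + y * y + 1.0) ** 2

# A parent patch of side h centered at (0.3, 0.2), subdivided into four
# children of side h/2 centered at the quarter points:
h = 1.0 / 64.0
parent = delta_c_top(0.3, 0.2, h * h)
children = [delta_c_top(0.3 + sx * h / 4.0, 0.2 + sy * h / 4.0, h * h / 4.0)
            for sx in (-1.0, 1.0) for sy in (-1.0, 1.0)]
```

The exact child values agree with one quarter of the parent to well under one percent, because the configuration factor varies slowly over a single patch.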
The computational complexity of the illuminance calculation method depends primarily on the
number of unobstructed patch rays that intersect the luminaire bounding volume. More to the
point, it is independent of the geometric and photometric distribution complexity of the
luminaire. The luminaire can be anything from a uniform luminous globe to a massive crystal
chandelier. It doesn't matter -- the method's run-time will remain constant.
The number of patch rays that need to be considered depends on the video image resolution. A
near-field photometric data set was calculated for a simple luminous disk for 32 × 32 and 64 × 64
pixel resolutions and illuminance predictions made for a wide variety of distances (Ashdown
1993a). These predictions were then compared with those predicted by analytic radiative transfer
theory. It was found that the numerical and theoretical predictions agreed to within 1 percent
for the 64 × 64 pixel data set, and within 2 percent for the 32 × 32 pixel data set for most
distances.
As for memory requirements, a typical luminaire will require somewhere on the order of two
megabytes to store its near-field photometric data as a set of decompressed video images. The
illuminance calculation method can therefore be executed entirely in RAM, even on a personal
desktop computer.

8. Calculations in Parallel
The method outlined above calculates the illuminance of a point on a plane. In practice, however,
we are more often interested in the illuminance of a finite surface area. While we can repeat the
method for sampled points on the surface, it is evident that each point will require a different set
of rays to be interpolated from the photometric data set. Furthermore, each CCD pixel ray
requires a unique 3-D transformation from the luminance meter's local co-ordinate system to that
of the hemicube prior to interpolation.
Fortunately, a simple change to the photometric data set allows us to eliminate most of these
transformations and calculate ray luminances in parallel for a set of points on a planar surface.
The set of pixels of each CCD image represents a fan-shaped bundle of rays that intersect the
meter position; the set of all CCD images represents a subset of all rays intersecting the
measurement sphere. We can interpolate from these measured rays a set of evenly spaced rays
that are all oriented in a given direction. This set is in effect the synthesized orthogonal
projection of the luminaire viewed from the given direction. We can therefore interpolate a set of
rays for each luminance meter position and orientation, where the rays are parallel to the meter
axis.
The primary advantage of this interpolated photometric data set is that one 3-D rotation can be
simultaneously applied to a set of parallel rays. The luminances of the set of parallel hemicube
rays for a set of points on a planar surface can then be determined from the four closest sets of
parallel interpolated rays using a 3-D translation and bilinear interpolation.
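The single-rotation idea can be sketched as follows: one 3-D rotation (here about the z axis, purely for illustration) is applied to an entire set of parallel ray directions at once:

```python
import math

def rotate_z(theta, rays):
    """Apply one 3-D rotation (about the z axis, for illustration) to a whole
    set of parallel ray direction vectors at once."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y, z) for (x, y, z) in rays]

# One interpolated set of rays, all parallel to the meter axis (here +x),
# rotated 90 degrees in a single operation:
rays = [(1.0, 0.0, 0.0)] * 4
rotated = rotate_z(math.pi / 2.0, rays)
```

Because every ray in the set shares one direction, the per-ray transformation cost of the point-by-point method disappears, and the loop is trivially parallelizable.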
A second advantage of the interpolated photometric data set is that the illuminance calculation
method becomes inherently parallel, making it amenable to implementation in firmware on a
hardware accelerator or in software on a multiprocessor workstation.

9. The Five-Times Rule


The illumination engineering community separates photometric measurements and illuminance
calculations into two classes: near-field and far-field photometry (e.g., Ngai 1987). Far-field
photometry models luminaires as point sources, and assumes that the distance from a luminaire
to the surface being illuminated is at least five times the maximum luminaire dimension. Near-
field photometry assumes that the luminaire is closer to the surface than this nominal distance,
and models the luminaire as an extended source. The logic behind this distinction is that an
extended source subtending less than 0.2 radians can be modeled as a point source with very little
loss in illuminance calculation accuracy, regardless of the source's photometric distribution
complexity.
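The five-times rule can be verified for a simple case. For a uniform Lambertian disk viewed on axis, the point-source (inverse square) approximation overestimates the exact illuminance by r²/d², which is exactly 1 percent at five times the disk's diameter (an illustrative check, not from the paper):

```python
import math

def disk_illuminance_exact(L, r, d):
    """On-axis illuminance from a uniform Lambertian disk of luminance L and
    radius r at distance d: E = pi * L * r^2 / (r^2 + d^2)."""
    return math.pi * L * r * r / (r * r + d * d)

def disk_illuminance_point(L, r, d):
    """Point-source (inverse square) approximation using the disk's on-axis
    intensity I = pi * L * r^2."""
    return math.pi * L * r * r / (d * d)

# Five times the maximum dimension of the disk (its diameter 2r):
L, r = 1000.0, 0.5
d = 5.0 * (2.0 * r)
err = disk_illuminance_point(L, r, d) / disk_illuminance_exact(L, r, d) - 1.0
```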
Modeling a luminaire using Levin's approach is therefore only necessary when there are
illuminated surfaces closer than this five-times distance and when the luminaire cannot be
reasonably modeled as a homogeneous array of point sources. In typical office environments,
Levin's approach is most usefully applied to indirect fluorescent luminaires that illuminate the
ceiling plane. Other applications include task lighting and wall washer luminaires. Outside this
domain, the luminaires are better modeled as point sources.

10. Ray Tracing and Radiosity


The illuminance calculation method can be integrated with both ray tracing techniques and
radiosity methods. For progressive radiosity, the direct illuminance of diffuse surfaces within the
five-times distance from the luminaires can be calculated on a point-by-point basis using the
above illuminance calculation method. Each illuminated surface patch can then be treated as a
secondary light source within the progressive radiosity method solution.
The near-field photometric database is ideally suited for ray tracing and hybrid ray tracing-
radiosity methods (e.g., Chen et al. 1991). The luminance of any ray intersecting the luminaire
volume can be interpolated from the photometric database. Specular highlights and reflections of
the luminaire can be accurately rendered, limited only by the resolution of the video images
comprising the database and the accuracy of the surface BRDFs. To this end, it may prove useful
to store the database on disk using fractal-based image compression techniques (e.g., Pentland
and Horowitz 1990). The video images can be decompressed to the desired resolution at run-
time, subject to the limits of processing time, available memory and the original image
resolution.

11. Conclusions
This paper has presented a new approach to modeling complex three-dimensional light sources
that is independent of the geometric and photometric distribution complexity of the source. The
described illuminance calculation method will likely prove most useful in the modeling of
architectural interiors, although there are other applications. It is currently being implemented in
HELIOS, a radiosity renderer described in Ashdown (1994).
While it is possible to generate a photometric data set for the illuminance calculation method
from a physical description of a luminaire, the method's usefulness will depend on the
availability of measured near-field photometric data sets provided by the luminaire
manufacturers. An experimental prototype of the described near-field goniophotometer has been
constructed and is currently being tested. Further research is also being conducted to determine
the most appropriate form of image compression for the photometric data set.
The described approach is not a panacea. It will often be more appropriate to use a simpler point
or area source model, particularly if there are tight rendering time constraints. However, when it
is necessary to generate truly photorealistic images of architectural interiors and other complex
environments, the approach offers a simple solution to an otherwise difficult problem.

12. References
ANSI/IES. 1986. American National Standard Nomenclature and Definitions for Illuminating
Engineering, ANSI/IES RP-16-1986. New York, NY: Illuminating Engineering Society of North
America.
Ashdown, I. 1993a. Near-Field Photometry: A New Approach, J. Illuminating Engineering
Society 22(1):163-180 (Winter).
Ashdown, I. 1993b. Near-Field Photometry in Practice, 1993 IESNA Annual Conference
Technical Papers. August 8-12, Houston, TX, 413-425.
Ashdown, I. 1993c. Near-Field Photometric Method and Apparatus, U.S. Patent 5,253,036.
October 12, 1993.
Ashdown, I. 1994. Radiosity: A Programmer's Perspective. New York, NY: John Wiley & Sons,
Inc.
Brotman, L. S. and N. I. Badler. 1984. Generating Soft Shadows with a Depth Buffer
Algorithm, IEEE Computer Graphics and Applications 4(10):5-12.
Chen, S. E., H. E. Rushmeier, G. Miller and D. Turner. 1991. A Progressive Multi-Pass Method
for Global Illumination, Computer Graphics (Proc. ACM SIGGRAPH '91) 25(4):165-174.
Cohen, M. F. and D. P. Greenberg. 1985. The Hemi-Cube: A Radiosity Solution for Complex
Environments, Computer Graphics (Proc. ACM SIGGRAPH '85) 19(3):31-40.
Gershun, A. 1936. Svetovoe Pole (The Light Field), Moscow. Translated by P. Moon and G.
Timoshenko in Journal of Mathematics and Physics Vol. XVIII (1939), Massachusetts Institute
of Technology, 51-151.
Hall, R. and D. P. Greenberg. 1983. A Testbed for Realistic Image Synthesis, IEEE Computer
Graphics and Applications 3(11):10-20.
Houle, C. 1991. Light Source Modelling. M.Sc. thesis. Department of Computer Science,
University of Toronto.
Houle, C. and E. Fiume. 1993. Light-Source Modeling Using Pyramidal Light Maps, CVGIP:
Graphical Models and Image Processing 55(5):346-358 (September).
Levin, R. E. 1971. Photometric Characteristics of Light-Controlling Apparatus, Illuminating
Engineering 66(4):205-215.
Lighting Technologies, Inc. 1990. FiELD, Finite Element Luminaire Design, Version 2.1 User's
Guide. Boulder, CO: Lighting Technologies, Inc.
Moon, P. and D. E. Spencer. 1981. The Photic Field. Cambridge, MA: MIT Press.
Ngai, P. 1987. On Near-Field Photometry, J. Illuminating Engineering Society 16(2):129-136
(Summer).
Nishita, T., I. Okamura and E. Nakamae. 1985. Shading Models for Point and Line Sources,
ACM Trans. on Graphics 4(2):124-146.
Nishita, T., S. Takita, and E. Nakamae. 1992. A Shading Model of Parallel Cylindrical Light
Sources, in Visual Computing: Integrating Computer Graphics with Computer Vision, T. L.
Kunii, Ed. Tokyo, Japan: Springer-Verlag, 429-445.

Pentland, A. and B. Horowitz. 1990. A Practical Approach to Fractal-Based Image
Compression, Data Compression Conference '91. Los Alamitos, CA: IEEE Computer Society
Press.
Picott, K. P. 1992. Extensions of the Linear and Area Lighting Models, IEEE Computer
Graphics and Applications 12(3):31-38.
Poulin, P. and J. Amanatides. 1990. Shading and Shadowing with Linear Light Sources,
Eurographics '90 (Proc. European Computer Graphics Conference and Exhibition, Sept. 4 - 7,
Montreux, Switzerland), C.E. Vandoni and D.A. Duce, Eds., Elsevier Science Publishers B.V.
(North-Holland), 377-386.
Tanaka, T. and T. Takahashi. 1991. Shading with Area Light Sources, Proc. Eurographics '91,
F.H. Post and W. Barth, Eds., Elsevier Science Publishers B.V. (North-Holland), Amsterdam,
235-246.
Verbeck, C. P. 1984. A Comprehensive Light Source Description for Computer Graphics. M.Sc.
thesis, Cornell University.
Verbeck, C. P. and D. P. Greenberg. 1984. A Comprehensive Light-Source Description for
Computer Graphics, IEEE Computer Graphics and Applications 4(7):66-75.
Warn, D. R. 1983. Lighting Controls for Synthetic Images, Computer Graphics (Proc. ACM
SIGGRAPH '83) 17(3):13-21.

Figure 1 - Luminance (emitting surface)

Figure 2 - Calculation of illuminance from luminance

Figure 3 - Describing a complex 3-D source

Figure 4 - Near-field goniophotometer

Figure 5 - The hemicube

Figure 6 - Predicting illuminance due to a complex 3-D source
