
The Depth of Field Effect Under Point Based Rendering System

Peng Zhao
Department of Computer Science
University of British Columbia
BC, Canada V6T 1Z4
pzhao@cs.ubc.ca

Abstract

DOF (Depth Of Field) plays an important role in real photography. It can be used to draw attention to in-focus objects and areas. In modern computer graphics systems, however, DOF effects are generally not supported. Since the kernel of DOF is to spread out-of-focus points over a small circle on the image, it is easy to implement under a rendering system based on points, called point based rendering. This paper proposes algorithms for both point based rendering and depth of field rendering. These algorithms are suitable for combining the two techniques and can efficiently generate scenes with high quality DOF effects.

1 Introduction and Related Work

1.1 Point Based Rendering

Modern laser range and optical scanners need rendering techniques that can directly render opaque and transparent surfaces from point clouds without connectivity; this is called point based rendering. The technique is different from traditional mesh based rendering and can be used to render sophisticated models, such as trees.

One challenge for point based rendering is to find a proper data structure to represent the point clouds. In this paper, we use a hierarchy tree to store the point model, which is very flexible for level-of-detail control, culling and rendering. This data structure is also very suitable to be combined with the DOF algorithm.

In addition, point based rendering needs to reconstruct a continuous surface from the irregularly sampled data points. For this problem, we simply expand each point to an area of a certain shape (e.g. square or circle) on the screen. This splatting method can produce decent image quality by filling the gaps between points.

1.2 Depth of Field

In real photography, DOF control is a useful and important technique. When taking a photo of a scene, only a limited range of depth will be in focus; objects closer or further away will be out of focus and thus appear blurred. This effect provides an important contribution to generating photorealistic images. It can also be used to draw attention to the in-focus objects and areas.

DOF effects were first introduced to computer graphics by Potmesil et al. [1]. Their method is a post-filtering process, which calculates the intensity of each pixel in the output image as the sum of the weighted intensities in the CoCs (Circles Of Confusion) that overlap this pixel. The major disadvantage of this algorithm is that it cannot resolve partial CoC occlusion, which produces obvious artifacts between a blurry background and sharp foreground objects.

The distributed ray tracing method was proposed by Cook, Porter and Carpenter [6]. This technique can not only solve the partial occlusion problem, but also other problems such as translucency and color changes. However, the computational cost of distributed ray tracing is too heavy for real-time systems (e.g. VR systems).

Haeberli and Akeley's accumulation buffer algorithm [5] uses hardware support to generate DOF effects. This method is easy to implement under both polygon based and point based rendering systems, and it also avoids the partial occlusion problem. But since the method integrates multiple rendering passes of the scene, its complexity depends on the scene itself: as the scene becomes more complicated, the efficiency of the algorithm drops quickly.

The algorithm implemented in this paper uses a ray distribution buffer (RDB) to produce the DOF effects. It is an improvement over the distributed ray tracing and accumulation buffer methods, and its complexity depends only on the resolution of the final image. Another reason to use this algorithm is that it is easy to implement under a point based rendering system. Finally, we propose a fast approximation algorithm, which can be used for the minor parts of the scene where high quality is not important.

2 Splat: Point Based Rendering

The data structure of the point based model is a hierarchy tree (Figure 1). Each node of the tree contains a bounding sphere, a normal and the width of the normal cone. If the model has color, the node also stores color information. Such a tree can be generated from polygons, voxels or point clouds using a proper algorithm.

Figure 1: Hierarchy tree of the model

Given this structure of the point model, we can use the following recursive algorithm to display the model:

DisplayHierarchy(node)
    if (node is invisible)
        return;
    else if (node is a leaf node)
        draw this node;
    else if (no need to recurse)
        draw this node;
    else
        for each child in (node.children)
            DisplayHierarchy(child);

For each node, we first check whether the points in this node and its children are visible. Since we have the bounding sphere and the normal information, the visibility of the branch can be determined by back-face culling and frustum culling. Second, if the node is a leaf node or there is no need to recurse (e.g. the splat size of this node on the screen is no more than one pixel), we simply draw the splat represented by this node and return. Otherwise, we recursively call the display function on each child of the node.

By using this recursive function, we can cut off branches of nodes that have no effect on the displayed image as early as possible, which dramatically increases rendering efficiency. In addition, the hierarchy structure can also be used to control the level of detail (LOD): objects that attract the viewer's attention are displayed in high detail and other objects in low detail.
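To make the data structure and the traversal concrete, the following C++ sketch shows one possible layout of a hierarchy node and the recursive display pass. The field names, the Camera type and the culling and splat-size helpers are illustrative assumptions made for this example; they are not the exact implementation used in this project.

// Minimal sketch of a hierarchy-tree node and the recursive display pass.
// Field names, the Camera type and the helper functions are illustrative
// assumptions; the paper does not specify an exact layout.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Node {
    Vec3  center;                  // bounding sphere center
    float radius;                  // bounding sphere radius
    Vec3  normal;                  // average normal of the points below this node
    float coneHalfAngle;           // half-angle of the normal cone (radians)
    unsigned char rgb[3];          // optional per-node color
    std::vector<Node*> children;   // empty for leaf nodes
};

struct Camera {
    Vec3  position;
    float pixelsPerUnitAtZ1;       // rough projection scale used for splat sizing
};

// --- illustrative helpers (stand-ins for real culling / rasterization) ---

static bool isVisible(const Node& n, const Camera& cam) {
    // Back-face culling with the normal cone: if every normal in the cone
    // faces away from the viewer, the whole branch can be skipped.
    // (A full implementation would also do view-frustum culling here.)
    Vec3 v = { cam.position.x - n.center.x,
               cam.position.y - n.center.y,
               cam.position.z - n.center.z };
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    if (len == 0.0f) return true;
    float c = (v.x * n.normal.x + v.y * n.normal.y + v.z * n.normal.z) / len;
    if (c > 1.0f) c = 1.0f;
    if (c < -1.0f) c = -1.0f;
    // Conservative test: visible if the most favorable normal in the cone
    // still faces the viewer.
    return std::acos(c) - n.coneHalfAngle < 1.5707963f;
}

static float projectedSize(const Node& n, const Camera& cam) {
    float dist = std::fabs(n.center.z - cam.position.z);
    return dist > 0.0f ? n.radius * cam.pixelsPerUnitAtZ1 / dist : 1e9f;
}

static void drawSplat(const Node& n) {
    // Placeholder: a real renderer would rasterize a square splat here.
    std::printf("splat at (%.2f, %.2f, %.2f)\n", n.center.x, n.center.y, n.center.z);
}

// --- the recursive display pass from the pseudocode above ---

void DisplayHierarchy(const Node& node, const Camera& cam) {
    if (!isVisible(node, cam))
        return;                               // culled: skip the whole branch
    if (node.children.empty() || projectedSize(node, cam) <= 1.0f)
        drawSplat(node);                      // leaf, or small enough on screen
    else
        for (const Node* child : node.children)
            DisplayHierarchy(*child, cam);    // recurse into the children
}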

For the shape of the splat, we simply use a square. Other shapes, such as circles or Gaussian kernels, can be used to improve image quality, but their computational cost is much higher; once the DOF effects are added to the rendering process, the efficiency of the whole pipeline is no longer affordable.

To solve the occlusion problem, the traditional z-buffer is used. We also use another buffer to store the position of the corresponding object point for each pixel of the image. Both buffers are needed for the post-filtering process of the DOF effects.
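A minimal sketch of this splatting step is shown below, assuming a simple software framebuffer that keeps a color buffer, a z-buffer and a per-pixel object position buffer. The structure names and the splat-size handling are illustrative assumptions, not the actual code of the demo.

// Sketch of drawing one square splat into the software framebuffers.
// The Framebuffer layout (color, z and per-pixel object position) is an
// assumption for illustration; the paper only states that a square splat,
// a z-buffer and a position buffer are used.
#include <vector>

struct Pixel { unsigned char r, g, b; };
struct Position { float x, y, z; };

struct Framebuffer {
    int width, height;
    std::vector<Pixel>    color;  // final color of each pixel
    std::vector<float>    depth;  // conventional z-buffer
    std::vector<Position> pos;    // object-space point that produced the pixel
};

// Rasterize a square splat of side `size` pixels centered at (cx, cy).
void drawSquareSplat(Framebuffer& fb, int cx, int cy, int size,
                     Pixel color, float z, Position objectPoint) {
    int half = size / 2;
    for (int y = cy - half; y <= cy + half; ++y) {
        for (int x = cx - half; x <= cx + half; ++x) {
            if (x < 0 || y < 0 || x >= fb.width || y >= fb.height)
                continue;                       // clip against the screen
            int idx = y * fb.width + x;
            if (z < fb.depth[idx]) {            // conventional z-test
                fb.depth[idx] = z;
                fb.color[idx] = color;
                fb.pos[idx]   = objectPoint;    // kept for the DOF post-filter
            }
        }
    }
}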

3 DOF: Depth Of Field

3.1 Camera Model

The human eye can be modeled as a thin lens camera system, shown in Figure 2. The focal length of the lens f, the distance of the object d_o and the distance of the focus point d_f are related by the thin lens equation:

    1/f = 1/d_o + 1/d_f                               (1)

Figure 2: Thin lens camera model

Object points that are not located within the focus range create a circle of confusion (CoC) on the image plane. The diameter of the CoC can be computed by (see Figure 3):

    C = (f/n) * |d_i - d_f| / d_f                     (2)

where C is the diameter of the CoC, f is the focal length, and n is the aperture number (i.e. f/n is the diameter of the aperture). d_i and d_f are the distances of the image plane and the focus point, respectively.

Figure 3: Circle of Confusion

Obviously, the intensity of each pixel on the image plane is the sum of the weighted intensities of all other pixels whose CoCs overlap it. Since part of the rays of an object point may be occluded by other points, which is called partial CoC occlusion, we must find a way to resolve this.
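As a concrete illustration of Equations (1) and (2), the sketch below computes the focus distance of an object and the resulting CoC diameter. The ThinLensCamera struct and its field names are assumptions made for this example, not the project's actual interface.

// Sketch of Equations (1) and (2): focus distance from the thin lens
// equation and the resulting CoC diameter. The ThinLensCamera struct and
// its field names are assumptions made for this example.
#include <cmath>

struct ThinLensCamera {
    float focalLength;    // f
    float apertureNumber; // n, so the aperture diameter is f / n
    float imagePlane;     // d_i, distance from the lens to the image plane
};

// Distance d_f behind the lens at which an object at distance d_o focuses,
// from 1/f = 1/d_o + 1/d_f  (Equation 1).
float focusDistance(const ThinLensCamera& cam, float objectDistance) {
    return 1.0f / (1.0f / cam.focalLength - 1.0f / objectDistance);
}

// Diameter of the circle of confusion on the image plane (Equation 2):
// C = (f / n) * |d_i - d_f| / d_f.
float cocDiameter(const ThinLensCamera& cam, float objectDistance) {
    float df = focusDistance(cam, objectDistance);
    float aperture = cam.focalLength / cam.apertureNumber;
    return aperture * std::fabs(cam.imagePlane - df) / df;
}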

3.2 Ray Distribution Buffer Method

The major idea of the ray distribution buffer (RDB) is to classify the imaging rays according to their direction. As shown in Figure 4, for each pixel we introduce an RDB structure wherein each RDB element represents an imaging ray in some direction.

Figure 4: Ray Distribution Buffer

When an object point (x_o, y_o, z_o) is focused on the point (x_f, y_f, z_f), which is not on the image plane, we first calculate the diameter of its CoC by Equation 2. Then, for each pixel P in the CoC, the ray direction (d_x, d_y, d_z) from the object point to this pixel can be computed by (see Figure 4):

    d_x = (x_o - x_P) / s                             (3)
    d_y = (y_o - y_P) / s                             (4)
    d_z = (z_o - z_P) / s                             (5)

where s = sqrt((x_o - x_P)^2 + (y_o - y_P)^2 + (z_o - z_P)^2) and (x_P, y_P, z_P) is the position of pixel P on the image plane. We then assign the value of this object point to the RDB element of pixel P that represents the ray direction (d_x, d_y, d_z).

Figure 5: Imaging Rays and RDB

To solve the partial CoC occlusion problem, we use z-buffering, just as in conventional rendering: each RDB element stores a z-value, and a new value is assigned to the element only when the z-value of the object point is smaller than the stored z-value. In this way, the occlusion between imaging rays arriving at pixel P from the same direction is resolved.

Finally, the intensity of each pixel is the average of all the RDB elements for that pixel. The RDB algorithm can be summarized as follows:

Clear z-buffer of RDB elements;
for each pixel P in the image
    Calculate the diameter of the CoC;
    for each pixel Q in the CoC
        Calculate the ray direction to Q;
        Find the corresponding RDB element C;
        if (Z[P] < Z[C])
            Replace C's RGB and Z values;
for each pixel P in the image
    Intensity[P] = Average(RDB[P]);
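The following C++ sketch illustrates one way a per-pixel RDB and the scatter step above could look. The RDB size, the direction-binning scheme and the buffer layout are assumptions for illustration only; the demo in Section 4 uses 5x5 and 9x9 RDBs, but no exact layout is prescribed here.

// Sketch of a per-pixel ray distribution buffer and the scatter step from
// the pseudocode above. RDB_DIM and the direction-binning scheme are
// assumptions made for this example.
#include <cmath>
#include <vector>

constexpr int RDB_DIM = 5;                       // e.g. a 5x5 RDB per pixel

struct RGB { float r = 0, g = 0, b = 0; };

struct RDBElement {
    RGB   color;
    float z = 1e30f;                             // "infinitely far" initially
    bool  used = false;
};

struct PixelRDB {
    RDBElement elems[RDB_DIM][RDB_DIM];

    // Quantize a normalized ray direction into one RDB element (assumed
    // scheme): directions spread over [-maxSlope, maxSlope] in x and y.
    RDBElement& elementFor(float dx, float dy, float maxSlope = 1.0f) {
        auto bin = [&](float d) {
            int i = static_cast<int>((d / maxSlope * 0.5f + 0.5f) * RDB_DIM);
            return i < 0 ? 0 : (i >= RDB_DIM ? RDB_DIM - 1 : i);
        };
        return elems[bin(dy)][bin(dx)];
    }

    // Final pixel intensity: the average of all RDB elements that were written.
    RGB average() const {
        RGB sum; int n = 0;
        for (auto& row : elems)
            for (auto& e : row)
                if (e.used) { sum.r += e.color.r; sum.g += e.color.g;
                              sum.b += e.color.b; ++n; }
        if (n > 0) { sum.r /= n; sum.g /= n; sum.b /= n; }
        return sum;
    }
};

// Scatter one source pixel (with its object-space position and color) into
// the RDB of a destination pixel Q inside its circle of confusion.
void scatterToRDB(PixelRDB& rdbOfQ, RGB color,
                  float xo, float yo, float zo,      // object point of P
                  float xq, float yq, float zq) {    // pixel Q on image plane
    float s = std::sqrt((xo - xq) * (xo - xq) +
                        (yo - yq) * (yo - yq) +
                        (zo - zq) * (zo - zq));      // Equations (3)-(5)
    RDBElement& c = rdbOfQ.elementFor((xo - xq) / s, (yo - yq) / s);
    if (zo < c.z) {                                  // z-test per RDB element
        c.z = zo; c.color = color; c.used = true;
    }
}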

3.3 Fast Approximation

Although the cost of the RDB method depends only on the resolution of the output image, it still needs a lot of memory to reach the desired accuracy of the DOF effects, and as the RDB resolution increases, the computational cost is no longer low enough to generate real-time images. We therefore present another algorithm, much faster than the RDB method, that approximates the DOF effects.

Instead of the ray distribution buffer, we use only a simple accumulation buffer to compute the DOF effects:

Initialize the accumulation buffer to zero;
Initialize the number for each pixel to zero;
for each pixel P in the image
    Calculate the diameter of the CoC;
    for each pixel Q in the CoC
        if (Z[P] <= Z[Q])
            Add P to Q's accumulation buffer;
            Increase the number for Q by 1;
for each pixel P in the image
    Intensity[P] = AccuBuffer[P] / Num[P];

That is, we do not classify the imaging rays by their directions. We simply collect all the imaging rays that arrive at a pixel and average them. To solve the partial CoC occlusion problem, a pixel contributes to its neighbors only when the Z value of the neighbor is not smaller than that of the current pixel; the blurry background then has no effect on the sharp foreground. Although this method does not generate true DOF effects, the output image looks realistic, so it can be used as a fast approximation of DOF effects.
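A compact sketch of this accumulation pass over a software framebuffer is given below. The buffer names are illustrative assumptions, and the per-pixel CoC diameters are assumed to be precomputed from Equation (2) and the position buffer.

// Sketch of the fast approximation pass. Buffer names and the precomputed
// per-pixel CoC diameters are assumptions made for this example.
#include <vector>

struct Color { float r = 0, g = 0, b = 0; };

void fastApproxDOF(int width, int height,
                   const std::vector<Color>& color,   // sharp rendered image
                   const std::vector<float>& depth,   // z-buffer
                   const std::vector<float>& cocDiam, // per-pixel CoC diameter
                   std::vector<Color>& output) {
    std::vector<Color> accum(width * height);
    std::vector<int>   num(width * height, 0);

    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            int p = py * width + px;
            int r = static_cast<int>(cocDiam[p] * 0.5f);   // CoC radius in pixels
            for (int qy = py - r; qy <= py + r; ++qy) {
                for (int qx = px - r; qx <= px + r; ++qx) {
                    if (qx < 0 || qy < 0 || qx >= width || qy >= height)
                        continue;
                    if ((qx - px) * (qx - px) + (qy - py) * (qy - py) > r * r
                        && !(qx == px && qy == py))
                        continue;                       // keep a circular CoC
                    int q = qy * width + qx;
                    if (depth[p] <= depth[q]) {         // background never blurs
                        accum[q].r += color[p].r;       // over the foreground
                        accum[q].g += color[p].g;
                        accum[q].b += color[p].b;
                        num[q] += 1;
                    }
                }
            }
        }
    }
    output.resize(width * height);
    for (int i = 0; i < width * height; ++i) {
        int n = num[i] > 0 ? num[i] : 1;
        output[i] = { accum[i].r / n, accum[i].g / n, accum[i].b / n };
    }
}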

4 Implementation and Results

I used C++ to implement the project, and the point based models were downloaded from Stanford's web page. Since I did not know the format of the model files, I borrowed some code from that page to read them. My work in this project is rendering the point based models with DOF effects.

For the point based rendering part, I use a purely software rendering method that draws the picture into a memory buffer. The memory image buffer and some additional information (e.g. the original object position for each pixel) are then used to generate the final image with DOF effects. Table 1 shows the computation time of the demo, which includes both the point based rendering and the DOF rendering, under different parameters. All tests were run on a PIII 1 GHz machine with 1 GB of memory. The image resolution is 640 x 480 and the configuration of the camera is:

Focal length = 130
Z value of focus plane = 1000
Aperture number = 1

Method          Model       Vertices       Time
RDB (9x9)       bunny.qs    35,286 * 3     3.6 s
RDB (9x9)       lion.qs     183,408 * 3    3.5 s
RDB (5x5)       bunny.qs    35,286 * 3     1.5 s
RDB (5x5)       lion.qs     183,408 * 3    1.7 s
Fast Approx.    bunny.qs    35,286 * 3     1.1 s
Fast Approx.    lion.qs     183,408 * 3    1.2 s

Table 1: The computation time

Figure 6 shows the result of the demo using the RDB algorithm, and Figure 7 shows the result of the fast approximation algorithm.
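As a rough sanity check on these numbers (assuming the thin lens model of Section 3.1 and reading "Z value of focus plane" as the object depth that appears sharp), Equation (1) places the image plane at

    d_i = 1 / (1/f - 1/z_focus) = 1 / (1/130 - 1/1000) ≈ 149.4

in the same units as the focal length.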

Figure 6: Experiment Result for the RDB algorithm ((a) and (b))

Figure 7: Experiment Result for the Fast Approximation algorithm ((a) and (b))

5 Conclusion

From the discussion and the experimental results, we can see that DOF effects can easily be implemented under a point based rendering system. Using the algorithms presented in this paper, the output images look good and the computational costs are not high.

The algorithms are not yet fast enough to support real-time rendering, but they can be accelerated in two ways:

- By using hardware support to accelerate the kernel part of the rendering, the efficiency of the algorithm can be dramatically increased.

- By using a perception based model, the image can be separated into a major part and a minor part. A high quality algorithm can then be used for the major part and a lower quality but faster method for the minor part.

References

[1] M. Potmesil and I. Chakravarty. A lens and aperture camera model for synthetic image generation. Computer Graphics (SIGGRAPH 81 Proceedings), pages 297-305, 1981.

[2] M. Shinya. Post-filtering for depth of field simulation with ray distribution buffer. Proceedings of Graphics Interface, pages 59-66, 1994.

[3] Jurriaan D. Mulder and Robert van Liere. Fast perception-based depth of field rendering. Association for Computing Machinery, pages 129-133, 2000.

[4] P. Fearing. Importance ordering for real-time depth of field. Proceedings of the Third International Conference on Computer Science, pages 372-380, 1996.

[5] P. Haeberli and K. Akeley. The accumulation buffer: Hardware support for high-quality rendering. Computer Graphics (SIGGRAPH 90 Proceedings), pages 309-318, 1990.

[6] R. L. Cook, T. Porter, and L. Carpenter. Distributed ray tracing. Computer Graphics (SIGGRAPH 84 Proceedings), 18:137-145, 1984.

[7] S. Rusinkiewicz and M. Levoy. QSplat: A multiresolution point rendering system for large meshes. Computer Graphics (SIGGRAPH 2000 Proceedings), pages 335-342, 2000.
