
Computer Graphics:

Surface Detection Methods

Course Website: http://www.comp.dit.ie/bmacnamee


Contents
Today we will start to take a look at visible surface detection techniques:
- Why surface detection?
- Back-face detection
- Depth-buffer method
- A-buffer method
- Scan-line method


Why?
We must determine what is visible within a scene from a chosen viewing position. For 3D worlds this is known as visible surface detection or hidden surface elimination.


Two Main Approaches

Visible surface detection algorithms are broadly classified as:
- Object space methods: compare objects and parts of objects to each other within the scene definition to determine which surfaces are visible
- Image space methods: visibility is decided point by point at each pixel position on the projection plane

Image space methods are by far the more common.


Two Main Approaches (cont)

Object space methods (continuous): implemented in the world coordinate system, considering the geometrical relationships between the actual objects. They compare the objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, should be labelled visible.

Image space methods (discrete): implemented in the screen coordinate system, considering the relationships between the images of the objects in their final configuration at the pixel level. Visibility is decided point by point at each pixel position on the projection plane.


OBJECT SPACE ALGORITHMS

Object space algorithms do their work on the objects themselves before they are converted to pixels in the frame buffer. The resolution of the display device is irrelevant here, as the calculation is done at the mathematical level of the objects:

    for each object a in the scene:
        determine which parts of a are visible
        (this involves comparing the polygons in a to the other polygons
         in a, and to the polygons in every other object in the scene)


IMAGE SPACE ALGORITHMS

Image space algorithms do their work as the objects are being converted to pixels in the frame buffer. The resolution of the display device is important here, as this is done on a pixel-by-pixel basis:

    for each pixel in the frame buffer:
        determine which polygon is closest to the viewer at that pixel location
        colour the pixel with the colour of that polygon at that location


CHECKING AN OBJECT

When we talked about 3D transformations we reached a point near the end where we converted 3D coordinates (or 4D, with homogeneous coordinates) to 2D by ignoring the z values. Now we will use those z values to determine which parts of which polygons (or lines) are in front of which parts of other polygons.

There are different levels of checking that can be done:
- Object
- Polygon
- Part of a polygon


There are also times when we may not want to cull polygons that are behind other polygons. If the frontmost polygon is transparent then we want to be able to 'see through' it to the polygons that are behind it, as shown below:

(figure: a scene containing transparent objects)

Which objects are transparent in the above scene?


COHERENCE

We used the idea of coherence before in our line drawing algorithm. We want to exploit 'local similarity' to reduce the amount of computation needed (this is how compression algorithms work):
- Face: properties (such as colour and lighting) vary smoothly across a face (or polygon)
- Depth: adjacent areas on a surface have similar depths
- Frame: images at successive time intervals tend to be similar
- Scan line: adjacent scan lines tend to have similar spans of objects
- Area: adjacent pixels tend to be covered by the same face
- Object: if objects are separate from each other (i.e. they do not overlap) then we only need to compare polygons within the same object, not one object against another
- Edge: edges only disappear when they go behind another edge or face
- Implied edge: the line of intersection of two faces can be determined from the endpoints of the intersection


EXTENTS

Rather than dealing with a complex object, it is often easier to deal with a simpler version of the object:
- in 2D: a bounding box
- in 3D: a bounding volume (though we still call it a bounding box)

We convert a complex object into a simpler outline, generally in the shape of a box, and then we can work with the boxes. Every part of the object is guaranteed to fall within the bounding box.

Checks can then be made on the bounding box to make quick decisions (e.g. does a ray pass through the box?). For more detail, checks would then be made on the object in the box.

There are many ways to define the bounding box. The simplest way is to take the minimum and maximum x, y and z values to create a box, as sketched below. You can also have bounding boxes that rotate with the object, bounding spheres, bounding cylinders, etc.
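As a sketch of that simplest approach, here is a bounding box built from minimum and maximum coordinates, and the quick rejection test it enables (Python; the function names are our own, not from the lecture):

    def bounding_box(vertices):
        # Axis-aligned bounding box: the min and max x, y and z over all vertices
        xs, ys, zs = zip(*vertices)
        return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

    def boxes_overlap(box_a, box_b):
        # Quick rejection test: two boxes overlap only if they overlap on every axis
        (a_min, a_max), (b_min, b_max) = box_a, box_b
        return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

Only if the cheap box test passes do we go on to the detailed checks on the objects themselves.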


Back-Face Detection
The simplest thing we can do is find the faces on the backs of polyhedra and discard them.


Back-Face Detection (cont)

We know from before that a point (x, y, z) is behind a polygon surface if:

    Ax + By + Cz + D < 0

where A, B, C and D are the plane parameters for the surface.
This can actually be made even easier if we organise things to suit ourselves.

Images taken from Hearn & Baker, Computer Graphics with OpenGL (2004)


Back-Face Detection (cont)

Ensure we have a right-handed system with the viewing direction along the negative z-axis. Now we can simply say that if the z component C of the polygon's normal is less than zero, the surface cannot be seen.
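A minimal sketch of this test, assuming polygon vertices are listed counter-clockwise as seen from outside the object and the viewer looks along the negative z-axis (helper names are our own):

    def surface_normal(v0, v1, v2):
        # Normal of the polygon's plane from three of its vertices, via the cross product
        ax, ay, az = (v1[i] - v0[i] for i in range(3))
        bx, by, bz = (v2[i] - v0[i] for i in range(3))
        return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

    def is_back_face(v0, v1, v2):
        # With the viewer looking along -z, a face whose normal has C < 0 points away
        _, _, c = surface_normal(v0, v1, v2)
        return c < 0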


Back-Face Detection (cont)

In general, back-face detection can be expected to eliminate about half of the polygon surfaces in a scene from further visibility tests. More complicated surfaces, though, scupper us! We need better techniques to handle these kinds of situations.


Depth-Buffer Method
- Compares surface depth values throughout a scene for each pixel position on the projection plane
- Usually applied to scenes only containing polygons
- As depth values can be computed easily, this tends to be very fast
- Also often called the z-buffer method

Images taken from Hearn & Baker, Computer Graphics with OpenGL (2004)


Depth-Buffer Algorithm
1. Initialise the depth buffer and frame buffer so that for all buffer positions (x, y):
    depthBuff(x, y) = 1.0
    frameBuff(x, y) = bgColour


Depth-Buffer Algorithm (cont)

2. Process each polygon in a scene, one at a time:
- For each projected (x, y) pixel position of a polygon, calculate the depth z (if not already known)
- If z < depthBuff(x, y), compute the surface colour at that position and set
    depthBuff(x, y) = z
    frameBuff(x, y) = surfColour(x, y)

After all surfaces have been processed, depthBuff and frameBuff will store the correct values. A sketch of the whole algorithm follows.
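A compact sketch of the two steps in Python (the scene representation, projected_pixels and colour_at are hypothetical helpers; depths are assumed normalised so that 1.0 is the far background):

    def z_buffer(polygons, width, height, bg_colour):
        # 1. Initialise the depth buffer and frame buffer
        depth_buff = [[1.0] * width for _ in range(height)]
        frame_buff = [[bg_colour] * width for _ in range(height)]

        # 2. Process each polygon in the scene, one at a time
        for poly in polygons:
            for x, y, z in poly.projected_pixels():    # hypothetical rasteriser
                if z < depth_buff[y][x]:               # nearer than anything drawn so far
                    depth_buff[y][x] = z
                    frame_buff[y][x] = poly.colour_at(x, y)
        return frame_buff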


Calculating Depth

At any surface position the depth is calculated from the plane equation as:

    z = (-Ax - By - D) / C

For any scan line, adjacent x positions differ by 1, as do adjacent y positions:

    z' = (-A(x + 1) - By - D) / C

which simplifies to:

    z' = z - A/C


Iterative Calculations

The depth-buffer algorithm proceeds by starting at the top vertex of the polygon. Then we recursively calculate the x-coordinate values down a left edge of the polygon. The x value for the beginning position on each scan line can be calculated from the previous one:

    x' = x - 1/m

where m is the slope of the edge.


Iterative Calculations (cont)

Depth values down the edge being considered are calculated using:

    z' = z + (A/m + B) / C

A small worked check of these increments is sketched below.
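A toy check of the incremental formulas against direct evaluation of the plane equation (the plane coefficients and edge slope are chosen arbitrarily for illustration):

    # Plane: Ax + By + Cz + D = 0, so z = (-Ax - By - D) / C
    A, B, C, D = 2.0, 1.0, 4.0, -20.0
    m = 2.0                                  # slope of the left edge being followed

    def depth_at(x, y):
        return (-A * x - B * y - D) / C

    z = depth_at(3.0, 5.0)
    z_next_pixel = z - A / C                 # one pixel right along the scan line
    z_next_line = z + (A / m + B) / C        # down the edge to scan line y - 1

    # The incremental values agree with evaluating the plane equation directly
    assert abs(z_next_pixel - depth_at(4.0, 5.0)) < 1e-9
    assert abs(z_next_line - depth_at(3.0 - 1.0 / m, 4.0)) < 1e-9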


Iterative Calculations (cont)

(figure: a polygon processed scan line by scan line, from the top scan line through scan lines y and y - 1 down to the bottom scan line)


A-Buffer Method
The A-buffer method is an extension of the depth-buffer method. It is a visibility detection method developed at Lucasfilm Studios for the rendering system REYES (Renders Everything You Ever Saw).


A-Buffer Method (cont)

The A-buffer expands on the depth-buffer method to allow transparencies. The key data structure in the A-buffer is the accumulation buffer.


A-Buffer Method (cont)

- If depth >= 0, then the surface data field stores the depth of that pixel position, as before
- If depth < 0, then the data field stores a pointer to a linked list of surface data


A-Buffer Method (cont)

Surface information in the A-buffer includes:
- RGB intensity components
- Opacity parameter
- Depth
- Percent of area coverage
- Surface identifier
- Other surface rendering parameters

The algorithm proceeds just like the depth-buffer algorithm. The depth and opacity values are used to determine the final colour of a pixel, as in the sketch below.
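A minimal sketch of an accumulation-buffer cell holding this information (the field names are our own shorthand, not the exact REYES layout, and a Python list stands in for the linked list):

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class SurfaceData:
        rgb: Tuple[float, float, float]   # RGB intensity components
        opacity: float                    # opacity parameter
        depth: float
        coverage: float                   # percent of area coverage
        surface_id: int                   # surface identifier

    @dataclass
    class ABufferCell:
        depth: float = 1.0                # depth >= 0: one opaque surface, as in the z-buffer
        surfaces: List[SurfaceData] = field(default_factory=list)
        # When several (e.g. transparent) surfaces contribute to this pixel,
        # depth is set negative and the surfaces list plays the role of the
        # linked list of surface data.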


PAINTER'S ALGORITHM

(a series of figure slides illustrating the painter's algorithm: surfaces are depth-sorted and painted from back to front, so nearer surfaces overwrite farther ones)

Scan-Line Method
An image space method for identifying visible surfaces. Computes and compares depth values along the various scan-lines for a scene.


Scan-Line Method (cont)

To facilitate the search for surfaces crossing a given scan-line, an active list of edges is formed for each scan-line as it is processed. The active list stores only those edges that cross the scan-line, in order of increasing x. Also, a flag is set for each surface to indicate whether a position along a scan-line is inside or outside the surface.


Scan-Line Method (cont)

Two important tables are maintained:
- The edge table
- The surface facet table

The edge table contains:
- Coordinate endpoints of each line in the scene
- The inverse slope of each line
- Pointers into the surface facet table to connect edges to surfaces


Scan-Line Method (cont)

The surface facet table contains:
- The plane coefficients
- Surface material properties
- Other surface data
- Possibly pointers into the edge table


Scan-Line Method (cont)

Pixel positions across each scan-line are processed from left to right. At the left intersection with a surface, the surface's flag is turned on; at the right intersection point, the flag is turned off. We only need to perform depth calculations when more than one surface has its flag turned on at a certain scan-line position, as in the sketch below.
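A sketch of one scan line under this scheme, assuming each surface can report the span (x_left, x_right) where it crosses the line, its depth at a pixel, and its colour (span_at, depth_at and colour are hypothetical helpers):

    def process_scan_line(y, surfaces, width, bg_colour):
        # Active edge list for this scan line: (x, surface, entering) events,
        # kept in order of increasing x
        events = []
        for s in surfaces:
            span = s.span_at(y)                      # hypothetical: (x_left, x_right) or None
            if span is not None:
                events.append((span[0], s, True))    # left intersection: flag on
                events.append((span[1], s, False))   # right intersection: flag off
        events.sort(key=lambda e: e[0])

        line = [bg_colour] * width
        active = []                                  # surfaces whose flag is currently on
        prev_x = 0
        for x, s, entering in events:
            # Fill the span [prev_x, x) using the currently active surfaces
            for px in range(int(prev_x), int(x)):
                if len(active) == 1:
                    line[px] = active[0].colour      # only one flag on: no depth test needed
                elif len(active) > 1:
                    # Depth comparison only where more than one flag is on
                    nearest = min(active, key=lambda t: t.depth_at(px, y))
                    line[px] = nearest.colour
            if entering:
                active.append(s)
            else:
                active.remove(s)
            prev_x = x
        return line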

Images taken from Hearn & Baker, Computer Graphics with OpenGL (2004)


Scan-Line Method Example

(figure: a worked example of the scan-line method)

Images taken from Hearn & Baker, Computer Graphics with OpenGL (2004)


Scan-Line Method Limitations

The scan-line method runs into trouble when surfaces cut through each other or otherwise cyclically overlap. Such surfaces need to be divided.

WARNOCK'S ALGORITHM

(a series of figure slides illustrating Warnock's algorithm: the screen area is recursively subdivided until each region is simple enough for visibility to be resolved)

Summary
We need to make sure that we only draw visible surfaces when rendering scenes. There are a number of techniques for doing this, such as:
- Back-face detection
- Depth-buffer method
- A-buffer method
- Scan-line method

Next time we will look at some more techniques and think about which techniques are suitable for which situations.

(preview figure slides for the next lecture on illumination models: light sources, diffuse and specular reflections, ambient light, diffuse reflection)
