
Graphics and Rendering

http://www.arcsynthesis.org/gltut/Basics/Intro Graphics and Rendering.html



This is an overview of the process of rendering. Do not worry if you do not understand everything right away; every step will be covered in lavish detail in later tutorials.

Everything you see on your computer's screen, even the text you are reading right now (assuming you are reading this on an electronic display device, rather than a printout), is simply a two-dimensional array of pixels. If you take a screenshot of something on your screen and blow it up, it will look very blocky.

Figure 8. An Image

Each of these blocks is a pixel. The word pixel is derived from the term Picture Element. Every pixel on your screen has a particular color. A two-dimensional array of pixels is called an image.

The purpose of graphics of any kind is therefore to determine what color to put in what pixels. This determination is what makes text look like text, windows look like windows, and so forth.

Since all graphics are just a two-dimensional array of pixels, how does 3D work? 3D graphics is thus a system of producing colors for pixels that convince you that the scene you are looking at is a 3D world rather than a 2D image. The process of converting a 3D world into a 2D image of that world is called rendering.

There are several methods for rendering a 3D world. The process used by real-time graphics hardware, such as that found in your computer, involves a very great deal of fakery. This process is called rasterization, and a rendering system that uses rasterization is called a rasterizer.

In rasterizers, all objects that you see are empty shells. There are techniques that are used to allow you to cut open these empty shells, but this simply replaces part of the shell with another shell that shows what the inside looks like. Everything is a shell.

All of these shells are made of triangles. Even surfaces that appear to be round are merely triangles if you look closely enough. There are techniques that generate more triangles for objects that appear closer or larger, so that the viewer can almost never see the faceted silhouette of the object. But they are always made of triangles.

Note

Some rasterizers use planar quadrilaterals: four-sided objects, where all of the points lie in the same plane. One of the reasons that hardware-based rasterizers always use triangles is that all of the lines of a triangle are guaranteed to be in the same plane. Knowing this makes the rasterization process less complicated.

An object is made out of a series of adjacent triangles that define the outer surface of the object. Such series of triangles are often called geometry, a model, or a mesh. These terms are used interchangeably.

The process of rasterization has several phases. These phases are ordered into a pipeline, where triangles enter from the top and a 2D image is filled in at the bottom. This is one of the reasons why rasterization is so amenable to hardware acceleration: it operates on each triangle one at a time, in a specific order. Triangles can be fed into the top of the pipeline while triangles that were sent earlier can still be in some phase of rasterization.

The order in which triangles and the various meshes are submitted to the rasterizer can affect its output. Always remember that, no matter how you submit the triangular mesh data, the rasterizer will process each triangle in a specific order, drawing the next one only when the previous triangle has finished being drawn.

OpenGL is an API for accessing a hardware-based rasterizer. As such, it conforms to the model for rasterization-based 3D renderers. A rasterizer receives a sequence of triangles from the user, performs operations on them, and writes pixels based on this triangle data. This is a simplification of how rasterization works in OpenGL, but it is useful for our purposes.

Triangles and Vertices. Triangles consist of 3 vertices. A vertex is a collection of arbitrary data. For the sake of simplicity (we will expand upon this later), let us say that this data must contain a point in three-dimensional space. It may contain other data, but it must have at least this. Any 3 points that are not on the same line create a triangle, so the smallest information for a triangle consists of 3 three-dimensional points.

A point in 3D space is defined by 3 numbers or coordinates: an X coordinate, a Y coordinate, and a Z coordinate. These are commonly written with parentheses, as in (X, Y, Z).
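The "not on the same line" requirement can be made concrete with a small sketch (the function name is illustrative, not part of OpenGL): three points form a real triangle only when the cross product of two edge vectors is non-zero.

```python
def is_valid_triangle(a, b, c, eps=1e-9):
    """Three 3D points form a triangle only if they are not collinear.
    Test: the cross product of the two edge vectors must be non-zero."""
    ux, uy, uz = (b[0] - a[0], b[1] - a[1], b[2] - a[2])
    vx, vy, vz = (c[0] - a[0], c[1] - a[1], c[2] - a[2])
    # Cross product of the edges; its squared length is twice the
    # triangle's area, squared -- zero means the points are collinear.
    cx, cy, cz = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    return cx * cx + cy * cy + cz * cz > eps

print(is_valid_triangle((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # True: a real triangle
print(is_valid_triangle((0, 0, 0), (1, 1, 1), (2, 2, 2)))  # False: collinear points
```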

Rasterization Overview

The rasterization pipeline, particularly for modern hardware, is very complex. This is a very simplified overview of this pipeline. It is necessary to have a simple understanding of the pipeline before we look at the details of rendering things with OpenGL. Those details can be overwhelming without a high-level overview.

Clip Space Transformation. The first phase of rasterization is to transform the vertices of each triangle into a certain region of space. Everything within this volume will be rendered to the output image, and everything that falls outside of this region will not be. This region corresponds to the view of the world that the user wants to render.

The volume that the triangle is transformed into is called, in OpenGL parlance, clip space. The positions of the triangle's vertices in clip space are called clip coordinates.

Clip coordinates are a little different from regular positions. A position in 3D space has 3 coordinates. A position in clip space has four coordinates. The first three are the usual X, Y, Z positions; the fourth is called W. This last coordinate actually defines what the extents of clip space are for this vertex. Clip space can actually be different for different vertices within a triangle. It is a region of 3D space on the range [-W, W] in each of the X, Y, and Z directions. So vertices with a different W coordinate are in a different clip space cube from other vertices. Since each vertex can have an independent W component, each vertex of a triangle exists in its own clip space.

In clip space, the positive X direction is to the right, the positive Y direction is up, and the positive Z direction is away from the viewer.

The process of transforming vertex positions into clip space is quite arbitrary. OpenGL provides a lot of flexibility in this step. We will cover this step in detail throughout the tutorials.

Because clip space is the visible transformed version of the world, any triangles that fall outside of this region are discarded. Any triangles that are partially outside of this region undergo a process called clipping. This breaks the triangle apart into a number of smaller triangles, such that the smaller triangles are all entirely within clip space. Hence the name clip space.

Normalized Coordinates. Clip space is interesting, but inconvenient. The extent of this space is different for each vertex, which makes visualizing a triangle rather difficult. Therefore, clip space is transformed into a more reasonable coordinate space: normalized device coordinates.

This process is very simple. The X, Y, and Z of each vertex's position is divided by W to get normalized device coordinates. That is all. The space of normalized device coordinates is essentially just clip space, except that the range of X, Y and Z are [-1, 1]. The directions are all the same. The division by W is an important part of projecting 3D triangles onto 2D images; we will cover that in a future tutorial.
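The two rules above, the [-W, W] visibility test and the divide-by-W step, can be sketched in a few lines (illustrative helper names, not OpenGL calls):

```python
def inside_clip_volume(clip):
    """A clip-space vertex (x, y, z, w) is inside the view volume when
    each of X, Y, Z lies within [-W, W]."""
    x, y, z, w = clip
    return all(-w <= c <= w for c in (x, y, z))

def clip_to_ndc(clip):
    """Convert a clip-space position to normalized device coordinates by
    dividing each of X, Y, Z by W. After the divide, visible positions
    fall in [-1, 1] on every axis."""
    x, y, z, w = clip
    return (x / w, y / w, z / w)

print(inside_clip_volume((1.0, 2.0, 0.5, 2.0)))  # True: all within [-2, 2]
print(inside_clip_volume((3.0, 0.0, 0.0, 2.0)))  # False: X outside [-W, W]
print(clip_to_ndc((1.0, 2.0, 0.5, 2.0)))         # (0.5, 1.0, 0.25)
```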



Figure 9. Normalized Device Coordinate Space (a cube with +X to the right, +Y up, and +Z pointing away from the viewer)
The cube indicates the boundaries of normalized device coordinate space.

Window Transformation. The next phase of rasterization is to transform the vertices of each triangle again. This time, they are converted from normalized device coordinates to window coordinates. As the name suggests, window coordinates are relative to the window that OpenGL is running within.

Even though they refer to the window, they are still three-dimensional coordinates. The X goes to the right, Y goes up, and Z goes away, just as for clip space. The only difference is that the bounds for these coordinates depend on the viewable window. It should also be noted that while these are in window coordinates, none of the precision is lost. These are not integer coordinates; they are still floating-point values, and thus they have precision beyond that of a single pixel.

The bounds for Z are [0, 1], with 0 being the closest and 1 being the farthest. Vertex positions outside of this range are not visible.

Note that window coordinates have the bottom-left position as the (0, 0) origin point. This is counter to what users are used to in window coordinates, which is having the top-left position be the origin. There are transform tricks you can play to allow you to work in a top-left coordinate space if you need to.

The full details of this process will be discussed at length as the tutorials progress.

Scan Conversion. After converting the coordinates of a triangle to window coordinates, the triangle undergoes a process called scan conversion. This process takes the triangle and breaks it up based on the arrangement of window pixels over the output image that the triangle covers.

Figure 10. Scan Converted Triangle
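The window transformation described above can be sketched as a simple linear remapping, assuming a viewport whose lower-left corner sits at (0, 0) and the default [0, 1] depth range (the function name is illustrative):

```python
def ndc_to_window(ndc, win_w, win_h):
    """Map normalized device coordinates ([-1, 1] on each axis) to window
    coordinates: X in [0, win_w], Y in [0, win_h] with a bottom-left
    origin, and Z in [0, 1] (0 nearest, 1 farthest). The results remain
    floating-point, so no sub-pixel precision is lost."""
    x, y, z = ndc
    return ((x + 1.0) * 0.5 * win_w,
            (y + 1.0) * 0.5 * win_h,
            (z + 1.0) * 0.5)

print(ndc_to_window((0.0, 0.0, 0.0), 800, 600))    # (400.0, 300.0, 0.5): the center
print(ndc_to_window((-1.0, -1.0, -1.0), 800, 600)) # (0.0, 0.0, 0.0): bottom-left, nearest
```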


The center image shows the digital grid of output pixels; the circles represent the center of each pixel. The center of each pixel represents a sample: a discrete location within the area of a pixel.

During scan conversion, a triangle will produce a fragment for every pixel sample that is within the 2D area of the triangle. The image on the right shows the fragments generated by the scan conversion of the triangle. This creates a rough approximation of the triangle's general shape.

It is very often the case that triangles are rendered that share edges. OpenGL offers a guarantee that, so long as the shared edge vertex positions are identical, there will be no sample gaps during scan conversion.

Figure 11. Shared Edge Scan Conversion

To make it easier to use this, OpenGL also offers the guarantee that if you pass the same input vertex data through the same vertex processor, you will get identical output; this is called the invariance guarantee. So the onus is on the user to use the same input vertices in order to ensure gap-less scan conversion.

Scan conversion is an inherently 2D operation. This process only uses the X and Y position of the triangle in window coordinates to determine which fragments to generate. The Z value is not forgotten, but it is not directly part of the actual process of scan converting the triangle.

The result of scan converting a triangle is a sequence of fragments that cover the shape of the triangle. Each fragment has certain data associated with it. This data contains the 2D location of the fragment in window coordinates, as well as the Z position of the fragment. This Z value is known as the depth of the fragment. There may be other information that is part of a fragment, and we will expand on that in later tutorials.

Fragment Processing. This phase takes a fragment from a scan converted triangle and transforms it into one or more color values and a single depth value. The order that fragments from a single triangle are processed in is irrelevant; since a single triangle lies in a single plane, fragments generated from it cannot possibly overlap. However, the fragments from another triangle can possibly overlap. Since order is important in a rasterizer, the fragments from one triangle must all be processed before the fragments from another triangle.
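The sample-coverage test at the heart of scan conversion can be sketched with edge functions. This toy version (not how any particular GPU does it) generates a fragment for every pixel whose center lies inside a counter-clockwise 2D triangle; real rasterizers additionally apply fill rules so samples exactly on a shared edge are claimed by exactly one triangle.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed-area test: positive when (px, py) is to the left of edge a->b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def scan_convert(tri, width, height):
    """Produce a fragment (x, y) for every pixel whose center
    (x + 0.5, y + 0.5) lies inside the counter-clockwise triangle."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    frags = []
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5  # sample at the pixel center
            w0 = edge(x1, y1, x2, y2, px, py)
            w1 = edge(x2, y2, x0, y0, px, py)
            w2 = edge(x0, y0, x1, y1, px, py)
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside all three edges
                frags.append((x, y))
    return frags

frags = scan_convert([(0.0, 0.0), (8.0, 0.0), (0.0, 8.0)], 8, 8)
print((0, 0) in frags)  # True: bottom-left pixel center is inside
print((7, 7) in frags)  # False: top-right corner is outside the triangle
```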



This phase is quite arbitrary. The user of OpenGL has a lot of options for deciding what color to assign a fragment. We will cover this step in detail throughout the tutorials.

Direct3D Note
Direct3D prefers to call this stage pixel processing or pixel shading. This is a misnomer for several reasons. First, a pixel's final color can be composed of the results of multiple fragments generated by multiple samples within a single pixel. This is a common technique to remove jagged edges of triangles. Also, the fragment data has not been written to the image, so it is not a pixel yet. Indeed, the fragment processing step can conditionally prevent rendering of a fragment based on arbitrary computations. Thus a pixel in D3D parlance may never actually become a pixel at all.

Fragment Writing. After generating one or more colors and a depth value, the fragment is written to the destination image. This step involves more than simply writing to the destination image. Combining the color and depth with the colors that are currently in the image can involve a number of computations. These will be covered in detail in various tutorials.
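One such combining computation is the common depth test. As a sketch (illustrative names, and only one of several tests the fragment-writing stage can apply), a fragment is kept only if it is nearer than what the image already holds at that pixel:

```python
def write_fragment(color_buf, depth_buf, frag):
    """Write a fragment (x, y, depth, color) into the image, keeping it
    only if it is nearer than the depth already stored at that pixel
    (the common 'less-than' depth test)."""
    x, y, depth, color = frag
    if depth < depth_buf[y][x]:
        depth_buf[y][x] = depth
        color_buf[y][x] = color

W, H = 2, 2
color_buf = [[(0.0, 0.0, 0.0)] * W for _ in range(H)]
depth_buf = [[1.0] * W for _ in range(H)]  # 1.0 = farthest possible depth

write_fragment(color_buf, depth_buf, (0, 0, 0.6, (1.0, 0.0, 0.0)))  # red, nearer
write_fragment(color_buf, depth_buf, (0, 0, 0.9, (0.0, 1.0, 0.0)))  # green, farther
print(color_buf[0][0])  # (1.0, 0.0, 0.0): the nearer red fragment survived
```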

Colors
Previously, a pixel was stated to be an element in a 2D image that has a particular color. A color can be described in many ways. In computer graphics, the usual description of a color is as a series of numbers on the range [0, 1]. Each of the numbers corresponds to the intensity of a particular reference color; thus the final color represented by the series of numbers is a mix of these reference colors.

The set of reference colors is called a colorspace. The most common colorspace for screens is RGB, where the reference colors are Red, Green and Blue. Printed works tend to use CMYK (Cyan, Magenta, Yellow, Black). Since we're dealing with rendering to a screen, and because OpenGL requires it, we will use the RGB colorspace.

Note
You can play some fancy games with programmatic shaders (see below) that allow you to work in different colorspaces. So technically, we only have to output to a linear RGB colorspace.

So a pixel in OpenGL is defined as 3 values on the range [0, 1] that represent a color in a linear RGB colorspace. By combining different intensities of these 3 colors, we can generate millions of different color shades. This will get extended slightly, as we deal with transparency later.
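The "millions of shades" follows from quantization: displays typically store each [0, 1] channel as an 8-bit integer, giving 256 levels per channel and 256³ (over 16 million) combinations. A minimal sketch of that quantization (the function name is illustrative):

```python
def to_8bit(rgb):
    """Quantize a linear-RGB color with components in [0, 1] to the
    familiar 0-255 integer channels a framebuffer typically stores."""
    return tuple(max(0, min(255, int(c * 255 + 0.5))) for c in rgb)

print(to_8bit((1.0, 0.5, 0.0)))  # (255, 128, 0): full red, half green, no blue
print(256 ** 3)                  # 16777216 representable shades
```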

Shader
A shader is a program designed to be run on a renderer as part of the rendering operation. Regardless of the kind of rendering system in use, shaders can only be executed at certain points in that rendering process. These shader stages represent hooks where a user can add arbitrary algorithms to create a specific visual effect.

In terms of rasterization as outlined above, there are several shader stages where arbitrary processing is both economical for performance and offers high utility to the user. For example, the transformation of an incoming vertex to clip space is a useful hook for user-defined code, as is the processing of a fragment into final colors and depth.

Shaders for OpenGL are run on the actual rendering hardware. This can often free up valuable CPU time for other tasks, or simply perform operations that would be difficult if not impossible without the flexibility of executing arbitrary code. A downside of this is that they must live within certain limits that CPU code would not have to.

There are a number of shading languages available to various APIs. The one used in this tutorial is the primary shading language of OpenGL. It is called, unimaginatively, the OpenGL Shading Language, or GLSL for short. It looks deceptively like C, but it is very much not C.

