
Graphics is the key technology for communicating ideas, data, and trends in most areas of commerce, science, engineering, and education. Graphics provides one of the most natural means of communicating with a computer. Graphics refers to picture objects: sketches of buildings or bridges, flowcharts, control-flow diagrams, bar charts, pie charts, and so on. Computer graphics (CG) means the creation, storage, and manipulation of models and images of picture objects with the aid of computers. Such models come from a diverse and expanding set of fields, including physical, mathematical, artistic, biological, and even conceptual (abstract) structures. CG includes almost everything on computers that is not text or sound. Today almost every computer can do some graphics, and people have come to expect to control their computers through icons and pictures rather than just by typing.

Pixel: The smallest unit of a picture, addressed as P(x, y).

VOXEL
A voxel (volumetric pixel or, more correctly, volumetric picture element) is a volume element representing a value on a regular grid in three-dimensional space. This is analogous to a pixel, which represents 2D image data in a bitmap (sometimes referred to as a pixmap). As with pixels in a bitmap, voxels typically do not have their position (their coordinates) explicitly encoded along with their values. Instead, the position of a voxel is inferred from its position relative to other voxels (i.e., its position in the data structure that makes up a single volumetric image). In contrast to pixels and voxels, points and polygons are often explicitly represented by the coordinates of their vertices. A direct consequence of this difference is that polygons can efficiently represent simple 3D structures with lots of empty or homogeneously filled space, while voxels are good at representing regularly sampled spaces that are non-homogeneously filled. Voxels are frequently used in the visualization and analysis of medical and scientific data. Some volumetric displays use voxels to describe their resolution; for example, a display might be able to show 512×512×512 voxels.
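The way a voxel's coordinates are implied by its storage position can be shown in a few lines of C. This is an illustrative sketch; the array name and dimensions are made up for the example:

    #include <stdio.h>

    #define WIDTH  4   /* voxels along x */
    #define HEIGHT 4   /* voxels along y */
    #define DEPTH  4   /* voxels along z */

    /* One value per voxel; no coordinates are stored anywhere. */
    unsigned char volume[DEPTH * HEIGHT * WIDTH];

    /* A voxel's position is inferred from its offset in the array. */
    long voxel_index(int x, int y, int z)
    {
        return ((long)z * HEIGHT + y) * WIDTH + x;
    }

    int main(void)
    {
        volume[voxel_index(1, 2, 3)] = 255;               /* set one voxel */
        printf("offset = %ld\n", voxel_index(1, 2, 3));   /* 3*16 + 2*4 + 1 = 57 */
        return 0;
    }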

Resolution: The maximum number of pixels that can be displayed on a screen area without overlapping, e.g. 640×480, 1024×1024, 300×200 (it depends on the driver/adapter). Some common graphics adapters: CGA, VGA, EGA, Hercules, IBM, etc. CGA operates in modes such as CGAC0, CGAC1, and CGAC2; VGA operates in the VGALO, VGAMED, and VGAHI modes. Note that the resolution of the screen differs with the driver/adapter and the mode in use.

Aspect ratio: The ratio of the maximum number of pixels in the vertical direction to the maximum number of pixels in the horizontal direction. Usually the aspect ratio should be close to 1 (square pixels) for an undistorted appearance.

Conceptual Framework for Interactive Graphics
At the hardware level, a computer receives input from interactive devices and outputs an image to the display device. At the software level, the application model represents the data or objects to be pictured on the screen; it is retrieved by the application program to create and store the pictures. The graphics system receives views from the application program and is responsible for actually generating the picture from these detailed descriptions. The graphics library/package is the intermediary between the application and the display hardware (the graphics system): the application program maps application objects to views (images) of those objects by calling on the graphics library. This hardware and software framework is more than four decades old but is still useful, indeed dominant.

[Diagram: Application model → Application program → Graphics library (GL) → Graphics system → Display]
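As a concrete illustration, here is a minimal sketch of that division of labor, using the Borland graphics library described later in these notes. The application model is just an array of points, and the names (model, draw_model) are purely illustrative:

    #include <graphics.h>
    #include <conio.h>

    /* Application model: the data/objects to be pictured. */
    struct point { int x, y; };
    struct point model[3] = { {50, 50}, {150, 50}, {100, 120} };

    /* Application program: maps the model to a view of it by calling on
       the graphics library, which in turn drives the display hardware. */
    void draw_model(void)
    {
        int i;
        for (i = 0; i < 3; i++)
            line(model[i].x, model[i].y,
                 model[(i + 1) % 3].x, model[(i + 1) % 3].y);
    }

    int main(void)
    {
        int gd = DETECT, gm;
        initgraph(&gd, &gm, "");  /* assumes the .BGI drivers are in the current directory */
        draw_model();
        getch();
        closegraph();
        return 0;
    }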

DPU
Realizing the need for high-performance graphics processing and situational awareness at the man-machine interface, ISR has developed the Display Processing Unit (DPU). The DPU is a rugged embedded solution that consists of a 12.1" or 18.1" diagonal display, bezel controls, a special-purpose keypad, a keyboard, and the Spray Cool processing electronics. In addition, the DPU has touch-screen capabilities for flexible, user-defined interaction. The DPU uses a modular design that allows the components to be mounted together as a line-replaceable item or individually for space-constrained installations. For example, the display could be mounted in front of the user while the processing electronics box is mounted under the seat. Another design option allows the DPU to be connected directly to an ARME if additional capability is required. The DPU boasts the latest Intel Pentium 4 2.4 GHz processor and an NVIDIA Quadro4 graphics processing unit for mind-blowing 3-D graphics. In addition to military applications, Spray Cool display products provide a robust solution for industrial use.

Raster Graphics
Raster graphics is the style of graphics in which the image is broken up into a matrix of picture elements (pixels). The matrix has a certain number of rows, each containing a certain number of pixels. Each pixel can be assigned any of a number of colors; how many depends upon the depth of the image, often expressed as the number of bits needed to encode all the colors. Typical bit depths used today are 1 (two colors, usually black and white), 2 (four colors, usually shades of gray), 4 (16 colors), 8 (256 colors), 16 (65,536 colors), 24 (so-called true color), and 32 (more colors than you can shake a stick at). The numbers of rows and columns give you the resolution: if there are c columns and r rows, the image is referred to as an r × c image. If the image is to have a certain physical size, this size combined with the number of pixels gives you the number of dots per inch (DPI) of the image, another measure of its resolution. The higher the DPI, the smaller the dots, and the harder it is to see them as individuals. Raster graphics are convenient in that they can represent photo-realistic images quite easily, but they have limitations. Because the pixels are arranged in a regular pattern, strange moiré patterns can appear if they are displayed on a monitor incorrectly, or if they represent an image with a regular pattern that interacts badly with the pattern of the pixels. Likewise, if the resolution is too low and the contrast is too high, individual pixels can stand out and leave the image with jagged edges (the "jaggies").
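To make the depth and DPI arithmetic concrete, here is a small worked example in C; the image size and physical width are made-up numbers:

    #include <stdio.h>

    int main(void)
    {
        long rows = 480, cols = 640;    /* a 480 x 640 image */
        int  depth = 24;                /* bits per pixel ("true color") */
        double width_in = 6.4;          /* assumed physical width in inches */

        long bytes = rows * cols * (depth / 8);
        double dpi = cols / width_in;

        printf("storage: %ld bytes (%.0f KB)\n", bytes, bytes / 1024.0);  /* 921600 bytes, 900 KB */
        printf("horizontal resolution: %.0f DPI\n", dpi);                 /* 100 DPI */
        return 0;
    }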

Graphics Adapters
A graphics adapter is a video card that fits into the video slot or interface on a computer motherboard. Video interfaces have evolved over the years, most recently from the Accelerated Graphics Port (AGP) and all its flavors to Peripheral Component Interconnect Express, or PCI Express (PCIe). The newer interface allows for faster rendering of images to meet increasingly demanding standards.

CGA
The Color Graphics Adapter (CGA), originally also called the Color/Graphics Adapter or IBM Color/Graphics Monitor Adapter,[1] introduced in 1981, was IBM's first color graphics card, and the first color computer display standard for the IBM PC. The standard IBM CGA graphics card was equipped with 16 kilobytes of video memory, and could be connected either to an NTSC-compatible monitor or television via an RCA jack, or to a dedicated 4-bit "RGBI" interface CRT monitor, such as the IBM 5153 color display.[2] Built around the Motorola MC6845 display controller, the CGA card featured several graphics and text modes. The highest resolution of any mode was 640×200, and the highest color depth supported was 4-bit (16 colors).

VGA
Video Graphics Array (VGA) refers specifically to the display hardware first introduced with the IBM PS/2 line of computers in 1987,[1] but through its widespread adoption it has also come to mean an analog computer display standard, the 15-pin D-subminiature VGA connector, or the 640×480 resolution itself. While this resolution was superseded in the personal computer market in the 1990s, it has become a popular resolution on mobile devices.[2] VGA was the last graphical standard introduced by IBM that the majority of PC clone manufacturers conformed to, making it today (as of 2010) the lowest common denominator that all PC graphics hardware can be expected to implement without device-specific driver software. For example, the Microsoft Windows splash screen appears while the machine is still operating in VGA mode, which is why this screen always appears in reduced resolution and color depth. VGA was officially superseded by IBM's Extended Graphics Array (XGA) standard, but in reality it was superseded by numerous slightly different extensions to VGA made by clone manufacturers, which came to be known collectively as Super VGA.

NVIDIA
Nvidia is a multinational corporation which specializes in the development of graphics processing units and chipset technologies for workstations, personal computers, and mobile devices. Based in Santa Clara, California, the company has become a major supplier of integrated circuits (ICs), designing graphics processing units (GPUs) and chipsets used in graphics cards, in personal-computer motherboards, and in video game consoles.

SVGA
Super Video Graphics Array or Ultra Video Graphics Array,[1] almost always abbreviated to Super VGA, Ultra VGA, or just SVGA or UVGA, is a broad term that covers a wide range of computer display standards.

Originally, it was an extension to the VGA standard first released by IBM in 1987. Unlike VGA, a purely IBM-defined standard, Super VGA was defined by the Video Electronics Standards Association (VESA), an open consortium set up to promote interoperability and define standards. When used as a resolution specification, in contrast to VGA or XGA for example, the term SVGA normally refers to a resolution of 800×600 pixels. Though Super VGA cards appeared in the same year as VGA, it wasn't until 1989 that Super VGA was defined by VESA. That first version called for a resolution of 800×600 with 4-bit pixels, so each pixel could be any of 16 different colours. It was quickly extended to 1024×768 with 8-bit pixels, and well beyond that in the following years. Although the number of colours was defined in the original specification, this soon became irrelevant because (in contrast to the old CGA and EGA standards) the interface between the video card and the VGA or Super VGA monitor uses simple analog voltages to indicate the desired colour. In consequence, so far as the monitor is concerned, there is no theoretical limit to the number of different colours that can be displayed; this applies to any VGA or Super VGA monitor. While the output of a VGA or Super VGA video card is analog, the internal calculations the card performs in order to arrive at these output voltages are entirely digital. To increase the number of colours a Super VGA display system can reproduce, no change at all is needed in the monitor, but the video card needs to handle much larger numbers and may well need to be redesigned from scratch. Even so, the leading graphics chip vendors were producing parts for high-colour video cards within just a few months of Super VGA's introduction. On paper, the original Super VGA was to be succeeded by Super XGA, but in practice the industry soon abandoned the attempt to provide a unique name for each higher display standard, and almost all display systems made between the late 1990s and the early 2000s are classed as Super VGA.

MODE
Many video adapters support several different modes of resolution, all of which fall into two general categories: character mode and graphics mode. Of the two, graphics mode is the more sophisticated: programs that run in graphics mode can display an unlimited variety of shapes and fonts, whereas programs running in character mode are severely limited. Programs that run entirely in graphics mode are called graphics-based programs. In character mode, the display screen is treated as an array of blocks, each of which can hold one ASCII character. In graphics mode, the display screen is treated as an array of pixels; characters and other shapes are formed by turning on combinations of pixels.

INITGRAPH
Initgraph initializes the graphics system by loading a graphics driver from disk (or validating a registered driver) and putting the system into graphics mode.

To start the graphics system, first call the initgraph function. initgraph loads the graphics driver and puts the system into graphics mode. You can tell initgraph to use a particular graphics driver and mode, or to autodetect the attached video adapter at run time and pick the corresponding driver. If you tell initgraph to autodetect, it calls detectgraph to select a graphics driver and mode. initgraph also resets all graphics settings to their defaults (current position, palette, color, viewport, and so on) and resets graphresult to 0.

Normally, initgraph loads a graphics driver by allocating memory for the driver (through _graphgetmem), then loading the appropriate .BGI file from disk. As an alternative to this dynamic loading scheme, you can link a graphics driver file (or several of them) directly into your executable program file.

pathtodriver specifies the directory path where initgraph looks for graphics drivers. initgraph first looks in the path specified in pathtodriver, then (if the drivers are not there) in the current directory. Accordingly, if pathtodriver is null, the driver files (*.BGI) must be in the current directory. This is also the path settextstyle searches for the stroked character font files (*.CHR).

*graphdriver is an integer that specifies the graphics driver to be used. You can give it a value using a constant of the graphics_drivers enumeration type, which is defined in graphics.h and listed below.

graphics_drivers constant   Numeric value
DETECT                      0 (requests autodetect)
CGA                         1
MCGA                        2
EGA                         3
EGA64                       4
EGAMONO                     5
IBM8514                     6
HERCMONO                    7
ATT400                      8
VGA                         9
PC3270                      10
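A typical call sequence is sketched below, combining DETECT autodetection with the standard graphresult error check (discussed further under Return Value). The driver path C:\TC\BGI is an assumption; adjust it to wherever the .BGI files live on your system:

    #include <graphics.h>
    #include <conio.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int gd = DETECT, gm, errorcode;   /* DETECT = 0 requests autodetect */

        /* Could instead force a driver and mode, e.g. gd = CGA, gm = CGAC0. */
        initgraph(&gd, &gm, "C:\\TC\\BGI");   /* path to the .BGI driver files */

        errorcode = graphresult();            /* 0 (grOk) on success */
        if (errorcode != grOk) {
            printf("Graphics error: %s\n", grapherrormsg(errorcode));
            exit(1);
        }

        line(0, 0, getmaxx(), getmaxy());     /* draw a diagonal across the screen */
        getch();                              /* wait for a keypress */
        closegraph();                         /* restore the previous video mode */
        return 0;
    }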

*graphmode is an integer that specifies the initial graphics mode (unless *graphdriver equals DETECT, in which case *graphmode is set by initgraph to the highest resolution available for the detected driver). You can give *graphmode a value using a constant of the graphics_modes enumeration type, which is defined in graphics.h and listed below. graphdriver and graphmode must be set to valid values from the following tables, or you will get unpredictable results. The exception is graphdriver = DETECT. Palette listings C0, C1, C2, and C3 refer to the four predefined four-color palettes available on CGA (and compatible) systems. You can select the background color (entry #0) in each of these palettes, but the other colors are fixed.

Palette Number   Three Colors
0                LIGHTGREEN   LIGHTRED       YELLOW
1                LIGHTCYAN    LIGHTMAGENTA   WHITE
2                GREEN        RED            BROWN
3                CYAN         MAGENTA        LIGHTGRAY
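For instance, a minimal sketch that selects palette C0 and changes its selectable background entry (again assuming the C:\TC\BGI driver path):

    #include <graphics.h>
    #include <conio.h>

    int main(void)
    {
        int gd = CGA, gm = CGAC0;          /* 320 x 200, palette C0 */

        initgraph(&gd, &gm, "C:\\TC\\BGI");

        setbkcolor(BLUE);   /* entry #0, the background, is selectable */
        setcolor(1);        /* the other entries are fixed: 1 = LIGHTGREEN in C0 */
        circle(160, 100, 50);

        getch();
        closegraph();
        return 0;
    }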

After a call to initgraph, *graphdriver is set to the current graphics driver, and *graphmode is set to the current graphics mode.

Graphics Driver   graphics_mode   Value   Columns x Rows   Palette     Pages
CGA               CGAC0           0       320 x 200        C0          1
                  CGAC1           1       320 x 200        C1          1
                  CGAC2           2       320 x 200        C2          1
                  CGAC3           3       320 x 200        C3          1
                  CGAHI           4       640 x 200        2 color     1
MCGA              MCGAC0          0       320 x 200        C0          1
                  MCGAC1          1       320 x 200        C1          1
                  MCGAC2          2       320 x 200        C2          1
                  MCGAC3          3       320 x 200        C3          1
                  MCGAMED         4       640 x 200        2 color     1
                  MCGAHI          5       640 x 480        2 color     1
EGA               EGALO           0       640 x 200        16 color    4
                  EGAHI           1       640 x 350        16 color    2
EGA64             EGA64LO         0       640 x 200        16 color    1
                  EGA64HI         1       640 x 350        4 color     1
EGA-MONO          EGAMONOHI       3       640 x 350        2 color     1 w/64K; 2 w/256K
HERC              HERCMONOHI      0       720 x 348        2 color     2
ATT400            ATT400C0        0       320 x 200        C0          1
                  ATT400C1        1       320 x 200        C1          1
                  ATT400C2        2       320 x 200        C2          1
                  ATT400C3        3       320 x 200        C3          1
                  ATT400MED       4       640 x 200        2 color     1
                  ATT400HI        5       640 x 400        2 color     1
VGA               VGALO           0       640 x 200        16 color    2
                  VGAMED          1       640 x 350        16 color    2
                  VGAHI           2       640 x 480        16 color    1
PC3270            PC3270HI        0       720 x 350        2 color     1
IBM8514           IBM8514LO       0       640 x 480        256 color   ?
                  IBM8514HI       1       1024 x 768       256 color   ?

Return Value
initgraph always sets the internal error code; on success, it sets the code to 0. If an error occurred, *graphdriver is set to -2, -3, -4, or -5, and graphresult returns the same value, as listed below:

Constant Name     Number   Meaning
grNotDetected     -2       Cannot detect a graphics card
grFileNotFound    -3       Cannot find driver file
grInvalidDriver   -4       Invalid driver
grNoLoadMem       -5       Insufficient memory to load driver

FLICKER EFFECT
Flicker is a visible fading between cycles displayed on video displays, especially the refresh interval on cathode ray tube (CRT) based computer screens. Flicker occurs on CRTs when they are driven at a low refresh rate, allowing the brightness to drop for time intervals sufficiently long to be noticed by the human eye. For most devices, the screen's phosphors quickly lose their excitation between sweeps of the electron gun, and the afterglow is unable to fill such gaps. A similar effect occurs in plasma display panels (PDPs) during their refresh cycles. For example, if a CRT computer monitor's vertical refresh rate is set to 60 Hz, most monitors will produce a visible "flickering" effect unless they use a phosphor with long afterglow. Most people find that refresh rates of 70-90 Hz and above enable flicker-free viewing on CRTs. Refresh rates above 120 Hz are uncommon, as they provide little noticeable flicker reduction and limit the available resolution.
GRAYSCALE IMAGE

In photography and computing, a grayscale or greyscale digital image is an image in which the value of each pixel is a single sample; that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest.[1] Grayscale images are distinct from one-bit black-and-white images, which in the context of computer imaging have only the two colors black and white (also called bilevel or binary images); grayscale images have many shades of gray in between. Grayscale images are also called monochromatic, denoting the absence of any chromatic variation. Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet), and in such cases they are monochromatic proper when only a given frequency is captured. But they can also be synthesized from a full-color image, as illustrated below.
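A common way to synthesize a gray value from an RGB triple is a weighted sum of the three channels. The sketch below uses the widely used Rec. 601 luma weights; other weightings exist:

    #include <stdio.h>

    /* Convert one RGB triple to a gray level using Rec. 601 luma weights.
       Green is weighted most heavily because the eye is most sensitive to it. */
    unsigned char to_gray(unsigned char r, unsigned char g, unsigned char b)
    {
        return (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b + 0.5);
    }

    int main(void)
    {
        printf("%d\n", to_gray(255, 0, 0));      /* pure red -> 76  */
        printf("%d\n", to_gray(128, 128, 128));  /* mid gray -> 128 */
        return 0;
    }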

NAPLPS

NAPLPS (North American Presentation Level Protocol Syntax) is a graphics language originally intended for use with videotex and teletext services. NAPLPS was developed from the Telidon system created in Canada, with a small number of additions from AT&T. The basics of NAPLPS were later used as the basis for several other microcomputer-based graphics systems.

History
The Canadian Communications Research Centre (CRC), based in Ottawa, had been working on various graphics systems since the late 1960s, much of the work led by Herb Bown.[1] Through the 1970s they turned their attention to building out a system of "picture description instructions", which encoded graphics commands as a text stream. Graphics were encoded as a series of instructions (graphics primitives), each represented by a single ASCII character. Graphic coordinates were encoded in multiple 6-bit strings of XY coordinate data, flagged to place them in the printable ASCII range so that they could be transmitted with conventional text transmission techniques. ASCII SI/SO characters were used to differentiate the text portions from the graphic portions of a transmitted "page". These instructions were decoded by separate programs to produce graphics output, on a plotter for instance. Other work produced a fully interactive version. In 1975, the CRC gave a contract to Norpak to develop an interactive graphics terminal that could decode the instructions and display them on a color display.

During this period, a number of companies were developing the first teletext systems, notably the BBC's Ceefax system. Ceefax encoded character data into the lines of the vertical blanking interval of normal television signals, where they could not be seen onscreen, and then used a buffer and decoder in the user's television to convert these into "pages" of text on the display. The Independent Broadcasting Authority quickly introduced their own ORACLE system, and the two organizations subsequently agreed to use a single standard, the "Broadcast Teletext Specification". At about the same time, other organizations were developing videotex systems, similar to teletext except that they used modems to transmit their data instead of television signals. This was potentially slower and used up a telephone line, but had the major advantage of allowing the user to transmit data back to the sender. England's General Post Office developed a system using the Ceefax/ORACLE standard, launching it as Prestel, while France prepared the first steps for its ultimately very successful Minitel system, using a rival display standard called Antiope.

By 1977 the Norpak system was running, and from this work the CRC decided to create its own teletext/videotex system. Unlike the systems being rolled out in Europe, the CRC decided from the start that the system should be able to run on any combination of communications links. For instance, it could use the vertical blanking interval to send data to the user and a modem to return selections to the servers; it could be used in a one-way or two-way system.[1] In teletext mode, character codes were sent to users' televisions by encoding them as dot patterns in the vertical blanking interval of the video signal. Various technical "tweaks" and details of the NTSC signals used by North American televisions allowed the

downstream videotex channel to increase to 600 bit/s, about twice the rate used in the European systems. In videotex mode, Bell 202 modems were typical, offering a 1,200 bit/s download rate. A set-top box attached to the TV decoded these signals back into text and graphics pages, which the user could select among. The system was publicly launched as Telidon on August 15, 1978. Compared to the European standards, the CRC system was faster, bi-directional, and offered real graphics as opposed to simple character graphics. The downside of the system was that it required much more advanced decoders, typically featuring Zilog Z80 or Motorola 6809 processors with RGB and/or RF output. The Department of Communications (DOC) launched a four-year plan to fund public roll-outs of the technology in an effort to spur the development of a commercial Telidon system.[1] AT&T was so impressed by Telidon that they decided to join the project. They added a number of useful extensions, notably the ability to define your own graphics commands (macros) and character sets (DRCS). They also tabled algorithms for proportionally spaced text, which greatly improved the quality of the displayed pages. A joint CSA/ANSI working group (X3L2.1) revised the specifications, which were submitted to the ANSI board for standardization and became ANSI T500, NAPLPS. The data encoding system was also standardized as the NABTS protocol. Business models for Telidon services were poorly developed. Unlike the UK, where teletext was supported by one of only two large companies whose whole revenue model was based on a read-only medium (television), in North America Telidon was offered by companies who worked on a subscriber basis.

Representative uses of CG (Applications of CG)
Simulation and animation for scientific visualization and entertainment: computer-produced animated movies and displays of the time-varying behavior of real and simulated objects are becoming increasingly popular for scientific and engineering visualization. Using these, we can study mathematical models of fluid flow and of nuclear and chemical reactions.
Art and commerce: in advertising, to express a message and attract attention, e.g. at museums, supermarkets, transportation terminals, and hotels.
Process control: status displays in refineries, power plants, and computer networks. Military: viewing the number and position of vehicles, weapons launched, troop movements, and casualties.
Cartography: producing both accurate and schematic representations of geographical and other natural phenomena from measurement data, e.g. geographic maps, relief maps, exploration maps for drilling and mining, oceanographic charts, and weather maps.

GRAPHICS SYSTEM
A computer graphics system is a computer system which has all the components of a general-purpose computer system. The 5 major elements in the system are:
1. I/p devices
2. Processor
3. Memory
4. Frame buffer
5. O/p devices

This model is general enough to include workstations and personal computers, interactive game systems, and sophisticated image-generation systems. Pixels are stored in a part of memory called the frame buffer. The frame buffer can be viewed as the core element of a graphics system. Its resolution determines the detail that can be seen in the image. Its depth, or precision, defined as the number of bits used for each pixel, determines properties such as how many colors can be represented on a given system. The frame buffer is usually implemented with special types of memory chips that enable fast redisplay of its contents. In s/w-based systems, such as those used for high-resolution rendering or for generating complex visual effects that cannot be produced in real time, the frame buffer is part of system memory.

In a simple system, there may be only one processor, the CPU of the system, which must do both the normal processing and the graphical processing. The main graphical function of the processor is to take specifications of graphical primitives (such as lines, circles, and polygons) generated by application programs and to assign values to the pixels in the frame buffer that best represent these entities. For example, a triangle is specified by its 3 vertices, but to display its outline as the three line segments connecting the vertices, the graphics system must generate a set of pixels that appear as line segments to the viewer. The conversion of geometric entities to pixel colors and locations in the frame buffer is known as rasterization, or scan conversion. In early graphics systems, the frame buffer was part of the standard memory that could be directly addressed by the CPU. Today, virtually all graphics systems are characterized by a special-purpose GPU, custom-tailored to carry out specific graphics functions. The GPU can be either on the motherboard of the system or on a graphics card. The frame buffer is accessed through the GPU and may be included in the GPU.
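As a concrete illustration of scan conversion, here is a minimal sketch of the classic DDA method for a line segment. The put_pixel stand-in is hypothetical; with the Borland library described earlier it would be putpixel(x, y, color):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for a frame-buffer write. */
    void put_pixel(int x, int y)
    {
        printf("(%d,%d)\n", x, y);
    }

    /* DDA scan conversion: step one pixel at a time along the major axis,
       incrementing the other coordinate by a fixed fraction. */
    void dda_line(int x0, int y0, int x1, int y1)
    {
        int dx = x1 - x0, dy = y1 - y0;
        int steps = abs(dx) > abs(dy) ? abs(dx) : abs(dy);
        double x = x0, y = y0, xinc, yinc;
        int i;

        if (steps == 0) { put_pixel(x0, y0); return; }  /* degenerate segment */
        xinc = dx / (double)steps;
        yinc = dy / (double)steps;
        for (i = 0; i <= steps; i++) {
            put_pixel((int)(x + 0.5), (int)(y + 0.5));  /* round to nearest pixel */
            x += xinc;
            y += yinc;
        }
    }

    int main(void)
    {
        dda_line(0, 0, 5, 2);  /* prints the 6 pixels approximating the segment */
        return 0;
    }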

O/P devices
For many years, the dominant type of display (or monitor) has been the cathode ray tube (CRT). Although various flat-panel technologies are now more popular, the basic functioning of the CRT has much in common with these newer displays. When electrons strike the phosphor coating on the tube, light is emitted. The direction of the beam is controlled by two pairs of deflection plates. The output of the computer is converted, by digital-to-analog converters, to voltages across the x and y deflection plates. Light appears on the surface of the CRT when a sufficiently intense beam of electrons is directed at the phosphor. If the voltages steering the beam change at a constant rate, the beam will trace a straight line, visible to a viewer. Such a device is known as a random-scan, calligraphic, or vector CRT, because the beam can be moved directly from any position to any other position. If the intensity of the beam is turned off, the beam can be moved to a new position without changing the visible display. A typical CRT will emit light for only a short time, usually a few milliseconds, after the phosphor is excited by the electron beam.


For a human to see a steady, flicker-free image on most CRT displays, the same path must be retraced, or refreshed, by the beam at a sufficiently high rate, the refresh rate. In older systems, the refresh rate was determined by the frequency of the power system: 60 cycles/second, or 60 Hz, in the United States, and 50 Hz in much of the rest of the world. In a raster system, the graphics system takes pixels from the frame buffer and displays them as points on the surface of the display in one of two fundamental ways. In a noninterlaced, or progressive, display, the pixels are displayed row by row, or scan line by scan line, at the refresh rate. In an interlaced display, odd rows and even rows are refreshed alternately. Interlaced displays are used in commercial television. In an interlaced display operating at 60 Hz, the screen is redrawn in its entirety only 30 times per second, although the visual system is tricked into thinking the refresh rate is 60 Hz rather than 30 Hz.

I/P devices
Most graphics systems provide a keyboard and at least one other input device. The most common input devices are the mouse, the joystick, and the data tablet. Each provides positional information to the system, and each usually is equipped with one or more buttons to provide signals to the processor. Often called pointing devices, these devices allow a user to indicate a particular location on the display.

GRAPHICS ARCHITECTURE
Early graphics systems used general-purpose computers with the standard von Neumann architecture. Such computers are characterized by a single processing unit that processes a single instruction at a time. The display in these systems was based on a calligraphic CRT display that included the necessary circuitry to generate a line segment connecting two points. The job of the host computer was to run the application program and to compute the endpoints of the line segments in the image. This information had to be sent to the display at a rate high enough to avoid flicker on the display. In the early days of CG, computers were so slow that refreshing even simple images, containing a few hundred line segments, would burden an expensive computer.

Display processors
The earliest attempts to build special-purpose graphics systems were concerned primarily with relieving the general-purpose computer of the task of refreshing the display continuously. These display processors had conventional architectures but included instructions to display primitives on the CRT. The main advantage of the display processor was that the instructions to generate the image could be assembled once in the host and sent to the display processor, where they were stored in the display's own memory as a display list, or display file. The display processor would then execute the program in the display list repetitively, at a rate sufficient to avoid flicker, leaving the host free for other tasks.

Pipeline architectures
For CG applications, the most important use of VLSI circuits has been in creating pipeline architectures. Pipelining is similar to an assembly line in a car plant: as a chassis moves down the line, a series of operations is performed on it, each using specialized tools and workers, until at the end the assembly process is complete. At any one time, multiple cars are under construction, and there is a significant delay, or latency, between when a chassis starts down the assembly line and when the finished vehicle comes off it. However, the number of cars produced in a given time, the throughput, is much higher than if a single team built each car. Pipelining can be illustrated with a simple arithmetic calculation. Suppose our pipeline consists of an adder and a multiplier. If we use this configuration to compute a + (b * c), then the calculation takes one multiplication and one addition, the same amount of work required if we use a single processor to carry out both operations. However, suppose that we have to carry out the same computation with many values of a, b, and c. Now the multiplier can pass the result of its calculation on to the adder and start its next multiplication while the adder carries out the second step of the calculation on the first set of data. Hence, although it takes the same amount of time to calculate the result for any one set of data, when we are working on two sets of data at one time our total time for the calculation is shortened markedly: the rate at which data flows through the system, the throughput of the system, has been doubled. Note that as we add more boxes to a pipeline, the latency of the system increases, and we must balance latency against increased throughput in evaluating the performance of a pipeline.
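The adder/multiplier example can be simulated in software to see where the speedup comes from. This is only an illustration; in hardware the two stages run simultaneously:

    #include <stdio.h>

    #define N 5

    int main(void)
    {
        int a[N] = {1, 2, 3, 4, 5};
        int b[N] = {1, 1, 1, 1, 1};
        int c[N] = {10, 20, 30, 40, 50};
        int stage1 = 0;   /* latch between the multiplier and the adder */
        int cycle, result;

        for (cycle = 0; cycle <= N; cycle++) {
            /* Adder: finishes item (cycle-1) using the product latched
               on the previous cycle. */
            if (cycle > 0) {
                result = a[cycle - 1] + stage1;
                printf("cycle %d: a+(b*c) for item %d = %d\n",
                       cycle, cycle - 1, result);
            }
            /* Multiplier: starts the next item during the same cycle. */
            if (cycle < N)
                stage1 = b[cycle] * c[cycle];
        }
        /* N results in N+1 cycles instead of 2*N: throughput is roughly
           doubled once the pipeline is full, at the cost of one cycle of
           latency per result. */
        return 0;
    }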


The Graphics Pipeline
We start with a set of objects. Each object comprises a set of graphical primitives, and each primitive comprises a set of vertices. We can think of the collection of primitive types and vertices as defining the geometry of the scene. In a complex scene, there may be thousands, even millions, of vertices that define the objects. We must process all these vertices in a similar manner to form an image in the frame buffer. If we think in terms of processing the geometry of our objects to obtain an image, we can employ the block diagram shown below, which shows the 4 major steps in the imaging process:
1. Vertex processing
2. Clipping and primitive assembly
3. Rasterization
4. Fragment processing

[Diagram: the geometric pipeline: vertices → vertex processor → clipper and primitive assembler → rasterizer → fragment processor → pixels]

Vertex processing
In the first block of our pipeline, each vertex is processed independently. The two major functions of this block are to carry out coordinate transformations and to compute a color for each vertex. Many of the steps in the imaging process can be viewed as transformations between representations of objects in different coordinate systems (a small sketch of such a transformation follows this section).

Clipping and Primitive Assembly
We must do clipping because of the limitation that no imaging system can see the whole world at once. The human retina has a limited size corresponding to an approximately 90-degree field of view. Cameras have film of limited size, and we can adjust their fields of view by selecting different lenses. Within this stage of the pipeline, we must assemble sets of vertices into primitives, such as line segments and polygons, before clipping can take place. Consequently, the o/p of this stage is a set of primitives whose projections can appear in the image.

Rasterization
The primitives that emerge from the clipper are still represented in terms of their vertices and must be further processed to generate pixels in the frame buffer.
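Here is the small transformation sketch promised above: a homogeneous vertex multiplied by a 4x4 matrix, the core operation of vertex processing. The translation matrix is an arbitrary example:

    #include <stdio.h>

    /* Multiply a homogeneous vertex v by a 4x4 transformation matrix m. */
    void transform(const double m[4][4], const double v[4], double out[4])
    {
        int i, j;
        for (i = 0; i < 4; i++) {
            out[i] = 0.0;
            for (j = 0; j < 4; j++)
                out[i] += m[i][j] * v[j];
        }
    }

    int main(void)
    {
        /* A translation by (2, 3, 0), written as a 4x4 matrix. */
        double m[4][4] = {
            {1, 0, 0, 2},
            {0, 1, 0, 3},
            {0, 0, 1, 0},
            {0, 0, 0, 1}
        };
        double v[4] = {1, 1, 0, 1};   /* the point (1, 1, 0) in homogeneous form */
        double out[4];

        transform(m, v, out);
        printf("(%g, %g, %g)\n", out[0], out[1], out[2]);   /* prints (3, 4, 0) */
        return 0;
    }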


For example, if 3 vertices specify a triangle filled with a solid color, the rasterizer must determine which pixels in the frame buffer are inside the polygon. The o/p of the rasterizer is a set of fragments for each primitive. A fragment can be thought of as a potential pixel that carries with it information, including its color and location, that is used to update the corresponding pixel in the frame buffer.

Fragment Processing
The final block in our pipeline takes in the fragments generated by the rasterizer and updates the pixels in the frame buffer. If the application generated 3-dimensional data, some fragments may not be visible because the surfaces that they define are behind other surfaces. The color of a fragment may be altered by texture mapping or bump mapping. The color of the pixel that corresponds to a fragment can also be read from the frame buffer and blended with the fragment's color to create translucent effects, as in the sketch below. (For diagrams of the graphics system, refer to Edward Angel's book.)
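A minimal sketch of that blending step, using the standard alpha-weighted "over" formula (the structure and names are illustrative):

    #include <stdio.h>

    typedef struct { double r, g, b; } color;

    /* Blend a fragment's color over the pixel already in the frame buffer:
       out = alpha * fragment + (1 - alpha) * destination. */
    color blend(color frag, color dest, double alpha)
    {
        color out;
        out.r = alpha * frag.r + (1.0 - alpha) * dest.r;
        out.g = alpha * frag.g + (1.0 - alpha) * dest.g;
        out.b = alpha * frag.b + (1.0 - alpha) * dest.b;
        return out;
    }

    int main(void)
    {
        color red   = {1.0, 0.0, 0.0};    /* incoming fragment */
        color white = {1.0, 1.0, 1.0};    /* pixel currently in the frame buffer */
        color out   = blend(red, white, 0.5);   /* 50% translucent red over white */

        printf("(%.2f, %.2f, %.2f)\n", out.r, out.g, out.b);   /* (1.00, 0.50, 0.50) */
        return 0;
    }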

