
Imaging Systems

Written and Created by Timothy Zwicky

Fundamentals
Thus man of all creatures is more than a creature, he is also a creator. Man alone can direct his success by the use of imagination, or imaging ability.
-Maxwell Maltz

Imaging Systems Fundamentals


Dynamic Range refers to the range of light intensities that a digital camera's sensor can capture. Dynamic range can be measured in many ways; most often photographers measure it in f/stops of light. These f/stops form an exponential ratio with a base of two: each stop is double the amount of light of the stop below it. Exposure Latitude is the room for error that an image has during post-processing after the image has been captured. The Histogram of a photo maps the amount of tones, from dark to light, within the photograph on a graph.
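Because each stop doubles the light, the base-two relationship above can be checked with a couple of lines. This is a minimal sketch; the function names are ours, not standard photographic terminology.

```python
import math

def stops_to_ratio(stops):
    """Each f/stop doubles the light, so N stops span a 2**N intensity ratio."""
    return 2 ** stops

def ratio_to_stops(ratio):
    """Inverse: how many stops separate two light intensities."""
    return math.log2(ratio)

# A sensor with 12 stops of dynamic range captures a 4096:1 intensity range.
print(stops_to_ratio(12))   # 4096
print(ratio_to_stops(256))  # 8.0
```

So a sensor advertising 14 stops captures a 16384:1 range, four times the intensity span of a 12-stop sensor even though it is "only" two stops more.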

If at first you don't succeed, redefine success. - Nitin Sampat

An Imaging System can be defined as a specific workflow for the creation and production of an image. Simply put, this process can be divided into three parts: input, processing, and output. All of these steps will be covered in the following chapters of this book. Before we get into the details of each part of imaging systems, I will give a brief introduction to help convey the concepts of imaging and explain some of the vocabulary used in the different fields that encompass imaging systems.

Imaging Vocabulary

Input: Recording of an Image


Cameras, Scanners, Camcorders, Smart Phones, Digital Sensors

Processing: Optimizing an Image


Photoshop, Lightroom, CaptureOne, Adobe Camera Raw

Output: Product of an Image


Inkjet paper, Printing press, Transparency, Monitors, The Web, Smartphones



Image resolution describes the detail an image holds. The term applies to raster digital images, film images, and other types of images. Higher resolution means more image detail. Image resolution can be measured in various ways: spatial, spectral, tonal, and temporal. All of these resolutions come into play when overall image quality is determined.

Resolutions

Spectral Resolution
When a digital camera records pixels, size, color, and tone are all captured. The color information, or spectral resolution, is split into three tone channels: red, green, and blue. Multi-spectral images resolve even finer differences of spectrum or wavelength than is needed to reproduce color.

The source and center of all man's creative power... is his power of making images, or the power of imagination.
-Robert Collier

Spatial Resolution

The measure of how closely lines can be resolved in an image is called spatial resolution, and it depends on properties of the system creating the image, not just the pixel resolution in pixels per inch (PPI). For practical purposes the clarity of the image is decided by its spatial resolution, not the number of pixels in an image. In effect, spatial resolution refers to the number of independent pixel values per unit length.

Tonal Resolution

When an image is extracted from a camera, the tonal resolution of the image is simplified to one of 256 gray levels. This number corresponds to the gray levels the human eye can see, requiring 8 bits of data, or one byte. This data is displayed as a photo's histogram: tone (x) by pixel amount (y). Newer cameras offer image capture in 12 bit or 14 bit, which adds to the amount of tones obtained in each capture per color channel.

This is a combination of all three RGB channels to make a full color image.

As tonal resolution is decreased, so are the tones available to produce smooth gradients.
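The relationship between bit depth and available tones is a straight power of two, which a short sketch makes concrete:

```python
def tone_levels(bits_per_channel):
    # Distinct tones a channel can record: 2 raised to the bit depth.
    return 2 ** bits_per_channel

for bits in (8, 12, 14):
    print(f"{bits}-bit capture: {tone_levels(bits)} tones per channel")
# 8-bit: 256, 12-bit: 4096, 14-bit: 16384
```

Moving from 8-bit to 14-bit capture multiplies the number of recordable tones per channel by 64, which is why high-bit raw files survive aggressive tone-curve edits without visible banding.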

Red Channel

Here is an example of how lower spatial resolution affects image quality.

Green Channel

Blue Channel



Your first 10,000 photographs are your worst. - Henri Cartier-Bresson


File Types
The type of file a picture is saved as greatly affects optimization, quality, and total size. Depending on your final result, each file type brings different pros and cons to the table. One must consider what will be (or may be) needed from the picture in the future in order to make a correct judgment on how to save the file during any step of the imaging workflow.

Compression
The most common image file is the .tif, or Tagged Image File Format. TIF files use a lossless type of compression, meaning that the file doesn't throw out any data upon saving or opening. This, combined with the 24 bit, three color channel standard, makes TIFs great for image optimizing and scanning, as they allow constant changes to the file without worry. Following .tif are JFIFs, which are commonly seen as .jpeg image files. JPEG is a type of compression created by and named after the Joint Photographic Experts Group; it throws out bits of information that the image file is not using every time it is saved. This lossy compression makes the JPEG File Interchange Format, or JFIF, perfect for websites. GIFs (Graphics Interchange Format) use a lossless compression, but because of the heavy restriction on the amount of colors and tones that can be saved, JPEG is almost always a better choice for photographs. Because of its single byte channel of color data, simple website graphics tend to be saved and uploaded as GIFs. The final file type, used by most professionals for complete image control under lossless compression, goes by many names but is generalized as the raw file format. Most cameras have their own proprietary RAW formats, the single universal RAW being the digital negative (DNG).
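The lossless-versus-lossy distinction above can be demonstrated without any imaging library at all. This sketch uses Python's standard zlib (a lossless codec, as in TIF's LZW or GIF's LZW) and simulates lossy saving with crude tone quantization; it is an illustration of the concepts, not the actual JPEG algorithm.

```python
import zlib

data = bytes(range(256)) * 64  # stand-in for a strip of image pixel data

# Lossless (TIF/GIF-style): every byte survives a compress/decompress cycle.
packed = zlib.compress(data)
restored = zlib.decompress(packed)

# Lossy (JPEG-style, heavily simplified): quantizing tones throws data away,
# and nothing can bring the discarded detail back.
quantized = bytes((b // 16) * 16 for b in data)

print(restored == data)    # True  (lossless round trip is exact)
print(quantized == data)   # False (information was discarded)
```

Repeatedly re-saving a JPEG repeats the quantization step, which is why generational loss accumulates, while a TIF can be opened and saved indefinitely without change.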


JPEG quality high

JPEG quality low

GIF Adaptive Limited 256 diffusion dither

GIF Adaptive Limited 50 diffusion dither



Frequency
The dark parts of images have high frequency wavelengths, while the brighter parts have a lower frequency. The combination of each represented frequency is called spatial frequency, the digital version of the many tones that construct a photograph. The sampled image frequency is the number of times the wave is read by the computer. When the image is enlarged, more points of the wave need to be filled in to cover the gaps in the image. An image that is not sampled enough is an undersampled image: the gaps in its spatial frequency are filled in by averaging wave points, producing aliasing, the term coined for the look undersampling produces. Aliased images look jagged around lines and may have gradients rendered in a coarse, unappealing fashion. If you try to enlarge a photograph too much, the lines inside the picture will become jagged, block-like, and pixelated.
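Undersampling can be shown numerically with a plain sine wave. In this sketch (a signal-processing illustration of the same idea, not an image pipeline), a 9 Hz wave sampled at only 10 samples per second produces exactly the same readings, sign-flipped, as a 1 Hz wave — the two frequencies alias onto each other because the sample rate is below twice the signal frequency:

```python
import math

def sample_wave(freq_hz, sample_rate_hz, n):
    """Read a sine wave of freq_hz n times at the given sample rate."""
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate_hz)
            for i in range(n)]

# 9 Hz sampled at 10 Hz is undersampled (Nyquist would demand > 18 Hz),
# so its samples mirror those of a 1 Hz wave: the alias.
alias = sample_wave(9, 10, 20)
true_low = sample_wave(1, 10, 20)
print(all(abs(a + b) < 1e-9 for a, b in zip(alias, true_low)))  # True
```

The computer has no way to tell the two signals apart from the samples alone, which is exactly why undersampled image detail turns into jagged, false patterns.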

Enlarging a file
Image interpolation occurs in all digital photos at some stage, whether in Bayer demosaicing or in photo enlargement. It happens any time you resize or remap (distort) your image from one pixel grid to another. Image resizing is necessary when you need to increase or decrease the total number of pixels, whereas remapping can occur under a wider variety of scenarios: correcting for lens distortion, changing perspective, and rotating an image. Digital photo enlargement to several times its original size, while still retaining sharp detail, is perhaps the ultimate goal of many interpolation algorithms. Despite this common aim, enlargement results can vary significantly depending on the resize software, sharpening, and interpolation algorithm implemented. When attempting to enlarge an image, there are only three real options to choose from: bicubic, bilinear, and nearest-neighbor interpolation. Nearest neighbor simply copies the single closest pixel to each new location, resulting in pixel blocks and poor image quality. Bicubic and bilinear use mathematical equations to create new data points between pixels. Both perform a dual calculation from pixel A to B and back, with bilinear treating the neighbors as points and bicubic as square grids. The results depend on the image: bilinear tends to look sharper, while bicubic is generally smoother.

Center: This image uses nearest neighbor interpolation; the aliasing is very easy to see. Nearest neighbor only considers one pixel when interpolating the unknown. Right: This is the original image, which has already been resampled and sized to fit the layout, but we will use it for demonstration purposes.

Top left: Bilinear interpolation takes into consideration a 2x2 neighborhood for the unknown. Bottom left: Bicubic goes one step farther and takes into consideration a 4x4 grid for the unknown.
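The difference between the two simplest methods can be sketched directly. This is a minimal illustration of the math (one channel, one sample point), not the exact algorithm any particular resize software implements:

```python
def nearest(img, x, y):
    """Nearest neighbor: copy the single closest pixel (causes blockiness)."""
    return img[round(y)][round(x)]

def bilinear(img, x, y):
    """Bilinear: weight the 2x2 neighborhood around the unknown point."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))  # 100.0 -- blends all four neighbors
print(nearest(img, 0.6, 0.6))   # 200   -- just copies the closest pixel
```

Bicubic extends the same idea to a 4x4 grid with cubic rather than linear weights, which is why it produces smoother gradients at the cost of more computation.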



Input

Taking pictures is like tiptoeing into the kitchen late at night and stealing Oreo cookies. - Diane Arbus


Imaging Systems Input

Capturing an Image
The two most used types of digital sensors on the market today are CMOS (complementary metal oxide semiconductor) and CCD (charge-coupled device). From an engineering standpoint, both sensors are similar and use almost exactly the same materials in their construction. The real difference between them lies in how information is extracted from the light-absorbing micro readers that make up the digital sensor. The CMOS chip to the right demonstrates how each pixel within the grid has its own set of components used for signal amplification and transmission. CCD, on the other hand, registers and empties pixel information by columns, slowly passing down large amounts of data. This difference makes CMOS much faster, using less power to hold the electrical signals, but overall noisier than CCD chips.


A CMOS chip (above), with each sensor linked to the processor directly, is much faster than a CCD (below). The extra wires and components per pixel create more gaps in the recorded data and introduce more noise.


In photography there is a reality so subtle that it becomes more real than reality. - Alfred Stieglitz


Color Arrays
The diagrams to the left are colored specifically to follow the Bayer pattern, a color filter array developed by Bryce Bayer and patented in 1976. Most, if not all, digital sensors on the photography market today have a color filter array (CFA) placed over the entire digital sensor. The filter splits light into the three channels used for spectral resolution. Bayer's array contains twice the number of green filters because the eye's sensitivity to green is about double its sensitivity to any other color. The diagram below shows how light is literally broken up into three spectral channels using Bayer's CFA. Broken is a good word to use, because when the sensor is divided up this way, patches of color information are missing. A mathematical equation calculates the missing color tones and fills in the gaps for the red, green, and blue pixels blocked by the other filters.
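The repeating structure of the array is easy to sketch. This minimal model assumes the common RGGB tiling (Bayer sensors also ship in GRBG and other phase variants):

```python
# RGGB Bayer pattern: each photosite records only one channel; the two
# missing channels at every pixel are filled in later by demosaicing.
BAYER = [["R", "G"],
         ["G", "B"]]

def cfa_color(row, col):
    """Which color filter covers the photosite at (row, col)."""
    return BAYER[row % 2][col % 2]

grid = [[cfa_color(r, c) for c in range(4)] for r in range(4)]
for line in grid:
    print(" ".join(line))

# Green sites outnumber red and blue two to one, matching the eye's
# heightened sensitivity to green.
flat = [c for line in grid for c in line]
print(flat.count("G"), flat.count("R"), flat.count("B"))  # 8 4 4
```

At every photosite, two thirds of the color information is therefore interpolated rather than measured, which is the gap the demosaicing math must fill.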

Other Sensors
Another sensor commonly used is the Foveon X3 sensor. This sensor stops light at different depths within each photosite and truly captures all the information of a scene. It can be said that a Foveon records three channels simultaneously with no interpolation needed.

Scanning Images
When scanning an image it is very easy to produce unnecessarily large files. High resolution is hard to work with in any post-processing program and most of the time matters little for the desired output result. The DPI (dots per inch) of any scanner can literally be translated into PPI (pixels per inch). There is a lot of confusion in the industry about which terms mean what, as there is no organization setting standards and naming conventions for the various products. Web images need a maximum of 72 PPI, while high quality image printing looks best at 300. This rule is a generality; always check your printer's optimal PPI in the manual or on the internet, where it is usually mislabeled as DPI. The true DPI of a printer is the number of dots allocated to each inch, not the number of pixels per inch. When scanning images for the web, large file sizes are pointless, as lower resolution produces quality results for websites, monitors, and projectors. Magnifying any image for web or print requires setting the resolution at the point of its input onto the computer. This means that if you would like to double or triple the size of the original image without any pixelation occurring, it is best practice to multiply your output PPI by the desired magnification.

One doesn't stop seeing. One doesn't stop framing. It doesn't turn off and turn on. It's on all the time.

- Annie Leibovitz
Magnification
The general rule is that the scanning resolution (SR) should be equal to the output resolution (OR) multiplied by the desired magnification: SR = OR * M. Scanning images for the web or print at their original size requires no magnification, making scanning resolution equal to the output resolution. Enlarging an image can be done by logically calculating the amount of magnification necessary. A 50% increase means OR * 1.5; tripling the size means OR * 3. Enlarging a 4x5 to 11x17 requires finding the proportion between the first side that will be clipped when the photograph is printed and the corresponding dimension of the paper. In this case, the width of the picture will run off the paper before the height: 4 * M = 11 makes M = 2.75. Print: 2.75 x 360 = 990 PPI or higher. Web: 2.75 x 72 = 198 PPI or higher. If you are cropping and magnifying, know your crop area before doing any calculations. An 8.5x11 cropped in half but enlarged back to 8.5x11 requires 600 PPI for print.
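The SR = OR * M rule and the 4x5-to-11x17 worked example above reduce to a few lines of arithmetic:

```python
def scan_resolution(output_ppi, magnification):
    """SR = OR * M: scanning resolution needed for a given enlargement."""
    return output_ppi * magnification

# 4x5 enlarged to 11x17: the width clips first, so M = 11 / 4 = 2.75
m = 11 / 4
print(scan_resolution(360, m))  # 990.0 PPI or higher for print
print(scan_resolution(72, m))   # 198.0 PPI or higher for web
```

The same function covers cropping: an 8.5x11 cropped in half and printed back at full size is simply M = 2 applied to the print output resolution.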




Processing
The desire to discover, the desire to move, to capture the flavor: three concepts that describe the art of photography. - Helmut Newton

Imaging Systems Processing

You don't take a photograph, you make it. - Ansel Adams


Raw Data Processing
When raw data is captured with a digital sensor, a number of processes happen before the file is even accessed. When opened, most programs such as Adobe Photoshop and Lightroom will apply a number of corrections and interpolate the image automatically based on its color channels and the gamma required by the operating system. The next few pages help display this process and what the user can manipulate in a raw image file. We will follow a RAW (.CR2, .NEF) image through optimization to see what the image processing pipeline entails and what RAW users actually have control over. The pipeline starts with the digital sensor.

Digital Sensor
An image is captured

Pixel Defect Correction


Dead pixels are scanned for, and data is filled in by interpolating tone and color information from their surroundings.


Colorimetric Transform (RGB to XYZ)


Once interpolated, the channels are converted into a color profile based on light source, sensor response, the camera used and the human eye.

Exposure Correction (White Point)


Exposure can be altered by the user for artistic purposes to find the preferred dynamic range or to convey or hide highlight and shadow detail.

Photo-Response Non-Uniformity Correction


The response to light creates a measurement that varies from pixel to pixel. PRNU correction, derived from sensor testing, makes all values uniform.

Gamma Correction
Gamma correction sets an exponential function of contrast necessary for all computers interpreting and displaying image information.

Tone Correction
This step fills the viewable dynamic range that the human eye can see. Although output-medium dependent, tone curves bring aesthetic quality.

Raw Sensor Data


The following three steps, excluding Neutral Balance, are done automatically upon opening a .jpg or RAW file.

Open RAW Image


This is the step where a .jpg workflow would continue automatically. In a RAW workflow, the image processing starts here for the user.

Sharpening and lens correction


Image sharpening, noise reduction, vignette adjustment, and aberration correction are now edited for both artistic and aesthetic appeal.

CFA Interpolation
Color channels, recorded in black and white, are converted into red, green, and blue. Any missing pixels from the Bayer array are filled in.

Neutral Balance
Neutral balance, automatically set upon opening for easy viewing, can be corrected by the user or chosen from a list of different types of light sources.

Final Image
The final image generated is a product of these 10 steps.
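Of the steps above, gamma correction is the easiest to show numerically. This is a minimal sketch using the pure power-law form; real pipelines such as sRGB use a piecewise curve that only approximates a 2.2 gamma:

```python
def gamma_correct(value, gamma=2.2):
    """Encode a linear sensor value (0.0-1.0) with a power-law gamma curve."""
    return value ** (1 / gamma)

# Mid-gray linear light (18%) encodes to roughly 0.46 -- much closer to how
# the eye perceives its brightness than the raw linear value would suggest.
print(round(gamma_correct(0.18), 2))  # 0.46
print(gamma_correct(0.0), gamma_correct(1.0))  # endpoints are unchanged
```

The curve redistributes the limited output tones toward the shadows, where the eye is most sensitive, which is why an un-gamma-corrected raw file looks unnaturally dark on screen.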



Image Sharpness
There are a number of ways to sharpen in each image processing program. One of the most useful is Photoshop's Unsharp Mask. The images above are an example of what most sharpening algorithms, including unsharp mask, attempt to do when a photo is processed for increased detail. As mentioned in the last chapter, a frequency jump from high to low and vice-versa indicates a jump in tones. A sharpening filter literally filters through the spatial frequency of a photograph by subtracting a blurred version of the image from itself, leaving the tone jumps, and overlaying that line detail onto the original picture. This image math must be done for each of the color channels. To illustrate the point, the example is simplified to black and white.

Noise Reduction
Image noise can compromise the level of detail in your digital or film photos, so reducing this noise can greatly enhance your final image or print. The problem is that most techniques to reduce or remove noise end up softening the image as well. Some softening may be acceptable for images consisting primarily of smooth water or skies, but foliage in landscapes can suffer from even conservative attempts to reduce noise. There are multiple ways to reduce noise in Photoshop, Lightroom, and various third-party plug-ins for Photoshop. The key is to find what works best in each situation.

Above: Original image. Top left: 15 percent noise added for demonstration. Middle left: 35 percent noise reduction. Bottom left: 60 percent noise reduction.

Above: Original image. Top right: Blurred image of original. Middle right: Original minus blurred image equals the unsharp mask. Bottom right: Mask added back to the original.
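The subtract-and-overlay arithmetic behind unsharp masking can be sketched in one dimension. This is a simplified single-channel illustration (a crude box blur, one row of tones), not Photoshop's actual implementation:

```python
def box_blur(row, radius=1):
    """Crude 1-D box blur (edge pixels are left unblurred for simplicity)."""
    out = list(row)
    for i in range(radius, len(row) - radius):
        out[i] = sum(row[i - radius:i + radius + 1]) / (2 * radius + 1)
    return out

def unsharp_mask(row, amount=1.0):
    """Sharpened = original + amount * (original - blurred)."""
    blurred = box_blur(row)
    return [p + amount * (p - b) for p, b in zip(row, blurred)]

edge = [50, 50, 50, 200, 200, 200]  # a soft tone jump (an "edge")
print(unsharp_mask(edge))
# [50.0, 50.0, 0.0, 250.0, 200.0, 200.0] -- the jump is exaggerated
```

Note the overshoot on either side of the edge (0 and 250 where the original had 50 and 200): that is exactly the halo effect visible when sharpening is pushed too far.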




The moment you take the leap of understanding to realize you are not photographing a subject but are photographing light is when you have control over the medium. - Daryl Benson

Output


Imaging Systems Output

Printing Overview
Within the last decade the digital printing of images has evolved up to the quality standards of analog and beyond. The photographic paper used in the analog film print process contained light-sensitive chemicals that, like the film itself, recorded images in continuous tonality. In comparison, the digital printers used by both professionals and consumers on the market today produce small drops of ink to create the illusion of a gradient. Current technology can be compared to the halftone presses of the past century, in that drops of ink are placed in close enough clusters to fool the eye into thinking the printed photograph is continuous. No matter the quality of the technology used for the application of dye, toner, pigment, or ink, four steps must be performed in the preprint stage in order to assemble a quality image. Before producing an actual print, the printer receives the document and takes notice of all the vector images present. A vector image is a graphic, such as text or an illustration, that is saved as a mathematical equation. Next, the vector equations are extracted from the document as PDL (Page Description Language). The PDL is then sent to the Raster Image Processor, which scales the equations to the size specified and formulates a raster image. Raster images are X,Y coordinate-mapped images and the normal file types of saved photographs, such as .tif, .jpg, and .DNG. Rasters must be created from vectors in order to bitmap the equations and coordinate where the printing will take place on the paper. Now that all the graphics and text have been converted, they are paired up with any other raster images in the document and sent to the printer's marking engine. This final step translates the numerical values of color and tone in each bitmapped pixel into printed images by removing gamma correction, profiling colors for the paper, printer, and color space, and converting pixel tonality into the language the device prints in.
For example, inkjets would print in lines per inch (LPI) and use clusters of dots per line to create the image.
Original (left); simulated halftone (right)

Left: Raster image; graphed data represents tones and colors; size fixed. Right: Vector image; math equations simulate the outline of the graphic; size variable.

A continuous tone image such as the photograph on the left is actually made up of a number of dots, akin to the patterns of the halftone processes of the past. By using the halftone filter in Photoshop (right) it is possible to see the illusion of continuous tone and how the eye is tricked. The dotted pattern of a halftone print resembles the pixelation of a printed low resolution image because the dots used in an inkjet printer, now more visible as they fill in the lacking pixel data, follow patterns similar to those we have long used to fool the viewer's eye.



From Bit to Print


Many people, with good reason, are baffled by resolution and how a printer can completely pixelate a giant image that looks beautiful on screen. This has to do with the number of dots printed per inch in comparison to the number of pixels required by the printer. Lines per inch (LPI) is what printers print with, different from DPI (dots per inch) and PPI (pixels per inch). What makes matters worse is that printers and scanners can be mislabeled as having a DPI when the specification actually means PPI, helping to continue this confusing trend. The fewer dots per inch, the fewer ink tones we have per pixel, resulting in poor image quality. The main reason the screen looks so different from the print is that pixels are single tones while prints are created from many small dots of differently colored ink. Here is some food for thought before you go.

And if a day goes by without my doing something related to photography, its as though Ive neglected something essential to my existence, as though I had forgotten to wake up. I know that the accident of my being a photographer has made my life possible. - Richard Avedon -

Some simple equations include:
PPI = DPI / sqrt(dots per pixel)
dots per pixel = (DPI / PPI)^2
LPI = (16 / dots per pixel) * PPI

Printers speak in LPI:
50 LPI = Magazine
75 LPI = Newspaper
100 LPI = Photograph

Real example: an Epson Stylus Pro 3800 printing 4x5 at 360 PPI with 2880 DPI.
dots per pixel = (DPI / PPI)^2 = (2880 / 360)^2 = 64 dots per pixel
LPI = (16 / dots per pixel) * PPI = (16 / 64) * 360 = 90 lines per inch
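The worked example above can be checked in code. The functions follow the rules of thumb as given here (the 16-dot factor is the booklet's convention, not a universal standard):

```python
def dots_per_pixel(dpi, ppi):
    # Each image pixel is rendered by a square cluster of printer dots.
    return (dpi / ppi) ** 2

def lpi(dpi, ppi):
    # Halftone lines per inch, per the rule of thumb used in this chapter.
    return (16 / dots_per_pixel(dpi, ppi)) * ppi

# Epson Stylus Pro 3800: a 360 PPI image printed at 2880 DPI
print(dots_per_pixel(2880, 360))  # 64.0 dots per pixel
print(lpi(2880, 360))             # 90.0 lines per inch
```

Lowering the print resolution to 1440 DPI at the same 360 PPI leaves only 16 dots per pixel, so the same formula predicts a much coarser screen, which is exactly the pixelation effect described above.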

