
Basic Concepts in Digital Image Processing

The widespread availability of relatively low-cost personal computers has heralded a revolution in digital image processing activities among scientists and the consumer population in general. Coupled to digitization of analog images (mainly photographs) by inexpensive scanners, and image acquisition with electronic sensors (primarily through charge-coupled devices, or CCDs), user-friendly image-editing software packages have led to a dramatic increase in the ability to enhance features, extract information, and easily modify the properties of a digital image.

Digital image processing enables the enhancement of visibility of detail in images using algorithms that apply arithmetic and statistical procedures to stored pixel values, instead of the classical darkroom manipulations or filtration of time-dependent voltages necessary for analog images and video signals. Even though many image processing algorithms are extremely powerful, the average user often applies operations to digital images without concern for the underlying principles behind these manipulations. The images that result from careless manipulation are often severely degraded or otherwise compromised with respect to those that could be produced if the power and versatility of the digital processing software were correctly utilized. Optical microscopy is a rapidly developing field that has come to be highly dependent upon digital image processing techniques, both for aesthetic balance and cosmetic touches, as well as for rehabilitation and analytical purposes. However, even when the microscope is configured correctly and performing optimally, captured digital images often display uneven backgrounds, excessive noise, aberration artifacts, poor contrast, out-of-focus regions, and intensity fluctuations, and can also suffer from color shifts and color balance errors. In addition, images that appear perfectly sharp and crisp with excellent color saturation in the microscope can be mangled by the image sensor to produce artifacts such as aliasing, camera noise, improper gamma correction, white balance shifts, poor contrast, and brightness fluctuations. Presented in Figure 1 is a digital image of a thin section of stained dicot leaf epidermis captured in brightfield illumination with a standard optical microscope. As originally imaged (Figure 1(a)), the thin section displays a considerable amount of noise and suffers from uneven illumination throughout the viewfield, leading to poor contrast and lack of definition in specimen detail.
Following background subtraction, gamma correction, histogram stretching, and adjustment of hue, color balance, and saturation, the processed image (Figure 1(b)) is considerably improved.

Pre-Processing Evaluation of Digital Images

After digital images have been captured, and prior to initiating processing algorithm applications, each image should be evaluated with regard to its general characteristics, including noise, blur, background intensity variations, brightness and contrast, and the general pixel value distribution (histogram profile). Attention should be given to shadowed regions to determine how much detail is present, as well as to bright features (or highlights) and areas of intermediate pixel intensity. This task is most easily accomplished by importing the image into one of the popular software editing programs, such as Adobe Photoshop, Corel Photo-Paint, Macromedia Fireworks, or Paint Shop Pro. Each image-editing program has a statistics or status window that enables the user to translate the mouse cursor over the image and obtain information about specific pixel values at any location in the image. For example, the Photoshop Info Palette provides continuously updated pixel information, including x and y coordinates, RGB (red, green, and blue) color values, CMYK (cyan, magenta, yellow, black) conversion percentages, and the height and width of a marquee selection within the image. Preference options in the palette display include selecting alternative color-space models for information readout. Among the models available in Photoshop are grayscale, HSB (hue, saturation, and brightness), web color (the 216 colors that overlap in the Windows and Macintosh 8-bit or 256-color display palettes), actual color, opacity, and Lab color (a device-independent color space). By evaluating the intensities (grayscale and color) and histogram positions of various image features, the black and white set points for stretching and sliding of the entire histogram for contrast adjustments can be determined. The image should also be checked for clipping, which is manifested by the appearance of saturated white or underexposed black regions in the image. In general, clipping should be avoided, both during image acquisition and while the image is being processed. Images that have been adversely affected by background intensity variations should be corrected by flat-field techniques or background subtraction prior to applying histogram manipulations.

Look-Up Tables

Several of the fundamental digital image processing algorithms commonly employed in optical microscopy function through a technique known as single-image pixel point operations, which perform manipulations on sequential individual pixels rather than large arrays. The general equation utilized to describe single-image pixel point processes for an entire image array is given by the relationship:

O(x,y) = M[I(x,y)]

where I(x,y) represents the input image pixel at coordinate location (x,y), O(x,y) is the output image pixel having the same coordinates, and M is a linear mapping function. In general, the mapping function is an equation that converts the brightness value of the input pixel to another value in the output pixel. Because some of the mapping functions utilized in image processing can be quite complex, performing these operations on a large image, pixel by pixel, can be extremely time-consuming and wasteful of computer resources. An alternative technique used to map large images is known as a look-up table (LUT), which stores an intensity transformation function (mapping function) designed so that its output gray-level values are a selected transformation of the corresponding input values.

When quantized to 8 bits (256 gray levels), each pixel has a brightness value that ranges between 0 (black) and 255 (white), to yield a total of 256 possible output values. A look-up table utilizes a 256-element array of computer memory, which is preloaded with a set of integer values defining the look-up table mapping function. Thus, when a single-pixel process must be applied to an image using a look-up table, the integer gray value for each input pixel is utilized as an address specifying a single element in the 256-element array. The memory content of that element (also an integer between 0 and 255) overrides the brightness value (gray level) of the input pixel and becomes the output gray value for the pixel. For example, if a look-up table is configured to return a value of 0 for input values between 0 and 127 and to return a value of 1 for input values between 128 and 255, then the overall point process will result in binary output images that have only two sets of pixels (0 and 1). Alternatively, to invert contrast in an image, a look-up table can return inverse values: 0 for 255, 1 for 254, 2 for 253, and so forth. Look-up tables have a significant amount of versatility and can be utilized to produce a wide variety of manipulations on digital images. Image transformations that involve look-up tables can be implemented by either one of two mechanisms: at the input, so that the original image data are transformed, or at the output, so that the transformed image is displayed but the original image remains unmodified. A permanent transformation of the original input image may be necessary to correct for known defects in detector properties (for example, nonlinear gain characteristics) or to transform the data to a new coordinate system (from linear to logarithmic or exponential). When only the output image should be modified, the image transformation is performed just before the digital image is converted back to analog form by the digital-to-analog converter for display on a computer monitor.
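As an illustration of how such a table is applied, the inversion and thresholding examples above can be sketched with numpy (a minimal sketch; the small 2 x 3 image is hypothetical, and array indexing stands in for the hardware memory register):

```python
import numpy as np

# Hypothetical 8-bit grayscale image (values 0-255).
image = np.array([[0, 100, 127],
                  [128, 200, 255]], dtype=np.uint8)

# Contrast-inversion LUT: element i holds 255 - i, so input gray
# level i is mapped to its inverse on output (0 for 255, 1 for 254, ...).
invert_lut = np.arange(256, dtype=np.uint8)[::-1]

# Thresholding LUT: 0 for inputs 0-127, 1 for inputs 128-255,
# producing a binary output image with only two pixel values.
threshold_lut = np.where(np.arange(256) < 128, 0, 1).astype(np.uint8)

# Applying a LUT is a single indexing step: each pixel's gray value
# addresses one element of the 256-entry table.
inverted = invert_lut[image]
binary = threshold_lut[image]
```

Because the table has only 256 entries, even a very complex mapping function is evaluated once per gray level rather than once per pixel.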
In some cases, the results of the transformation specified by the output look-up table(s) are displayed visually on the monitor, but the original image data are not altered. Look-up tables are not restricted to linear or monotonic functions, and a variety of nonlinear look-up tables are utilized in signal processing to correct for camera response characteristics or to emphasize a narrow region of gray levels. A good example of the utility of a nonlinear look-up table is the correction of recorded images that have been inadvertently captured with an incorrect camera gamma adjustment. In addition, monochrome or color images can also be converted to generate negatives for photography. Other applications include pseudocoloring and sigmoidal look-up tables that emphasize a selected range of gray values, targeted to enhance desired features or to adjust the amount of image contrast. Presented in Figure 2 are look-up table mapping functions for image contrast inversion, using both a 256-element memory pre-loaded register and a table map (Figure 2(a)), and a thresholding operation using only a table map (Figure 2(b)). The input pixel gray level is utilized to specify the address of the look-up table element whose content provides the gray level of the output pixel in the memory register (Figure 2(a)). The square look-up table map presents an alternative method of calculating output pixel values based on those of the input pixel. To use the map, first determine the input pixel gray-level value, and then extend a vertical line from the input value to the mapping function. A horizontal line is then drawn from the intersection of the vertical line and the mapping function to produce the output pixel gray level on the vertical axis of the map (Figures 2(b) and 2(c)). In the case of the thresholding operation (Figure 2(c)), all pixels having an input value below 100 are mapped to black (0), while other input pixel intensities are unaltered.
Flat-Field Correction and Background Subtraction

A digital image acquired from a microscope, camera, or other optical device is often described as a raw image prior to processing and adjustment of critical pixel values (see Figure 3). In many cases, the raw image is suitable for use in target applications (printing, web display, reports, etc.), but such an image usually exhibits a significant level of noise and other artifacts arising from the optical and capture system, such as distortions from lens aberrations, detector irregularities (pixel non-uniformity and fixed-pattern noise), dust, scratches, and uneven illumination. In addition, improper bias signal adjustment can increase pixel values beyond their true photometric values, a condition that leads to significant errors in measuring the amplitudes of specific image features. Errors in the raw image are manifested as dark shadows, excessively bright highlights, specks, mottles, and intensity gradients that alter the true pixel values. In general, these errors are particularly evident in digital images having bright, uniform backgrounds, which are produced by a variety of common microscope illumination modes, including brightfield, oblique, phase contrast, and differential interference contrast (DIC). Fluorescence images having medium gray or bright backgrounds, though relatively rare, may suffer from similar errors.

Applying flat-field correction techniques to raw digital images can often ensure photometric accuracy and remove common image defects to restore the fidelity of features and achieve a visual balance. These correction steps should be undertaken before measuring light amplitudes or obtaining other quantitative information from pixel intensity values, although the corrections are not necessary in order to display or print an image. Flat-field and background subtraction techniques usually require collection of additional image frames under conditions similar to those employed to capture the primary raw specimen image. Most of the flat-field correction schemes utilize two supplemental image frames, in addition to the raw image, to calculate final image parameters (Figure 3). A flat-field reference frame can be obtained by removing the specimen and capturing the featureless viewfield at the same focus level as the raw image frame. Flat-field reference frames should display the same brightness level as the raw image and take advantage of the full dynamic range of the camera system to minimize noise in the corrected image. If both the raw image and flat-field reference frame have low signal amplitudes and contain a significant amount of noise, the corrected image will also be dark and noisy. In order to compensate for noise and low intensity, flat-field reference frames can be exposed for longer periods than those used for capturing raw images. Several averaged frames (3-20) can be added together to create a master flat-field reference frame with a very low noise level. In addition to a flat-field reference frame, a dark reference frame is collected, which effectively records the output level of each pixel when the image sensor is exposed to a dark scene, absent microscope illumination. The dark frame contains the pixel bias offset level and noise acquired from electronic and thermal sources that contaminate the raw image.
Offset pixel values derive from the positive voltage applied to the image sensor in order to digitize analog intensity information from each photodiode. Electronic noise originates from camera readout and related sources, and thermal noise is generated by kinetic vibration of silicon atoms in the collection wells and substrate of semiconductor-based sensors. Collectively, these noise sources are referred to as dark noise, a common artifact in digital image sensors that can contribute up to 20 percent of apparent pixel amplitudes. In order to ensure photometric accuracy, these sources must be subtracted from the flat-field reference frame and raw image. Dark frames are generated by integrating the image sensor output for the same period as the raw image, but without opening the camera shutter. Master dark frames can be prepared by averaging several individual dark frames together to reduce the contribution of random noise.

Once the necessary frames have been collected, flat-field correction is a relatively simple operation that involves several sequential functions. First, the master dark frame is subtracted from both the raw image and flat-field reference frames, followed by the division of the resulting values (Figure 3). In effect, the raw frame is divided by the flat-field frame after the dark frame has been subtracted from each, and the quotient is multiplied by the mean pixel value in order to maintain consistency between the raw and corrected image intensities. Individual pixels in the corrected image are constrained to have a gray-level value between 0 and 255, as a precaution against sign inversion in cases where the dark reference frame pixel value exceeds that of the raw image. The flat-field correction illustrated in Figure 3 shows a plot of intensity profile across a selected region of the image versus pixel number for the raw, flat-field, and dark frames, as well as for the corrected image. Background subtraction is a technique that results in localized alterations of each pixel value in the raw image, depending upon the intensity of a corresponding pixel at the same coordinate location in the background image. As a result, nonuniformities in detector sensitivity or illumination (including mottle, dirt, scratches, and intensity gradients) can be compensated by storing a background image of an empty microscope field as a reference image. Video-enhanced contrast (VEC) microscopy is critically dependent on background subtraction for removal of both stray light and artifacts from highly magnified images of specimens having poor contrast. In this case, the background image is obtained by defocusing or displacing the specimen from the field of view. The resulting background image is stored and continuously subtracted from the raw image, producing a dramatic improvement in contrast.
This technique is also useful for temporal comparisons to display changes or motion between viewfields.
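The dark-subtraction and division procedure outlined above can be sketched with numpy (a minimal illustration; the function name is hypothetical, and multiplying by the mean of the dark-subtracted flat field is one common way to realize the "multiply by the mean pixel value" step):

```python
import numpy as np

def flat_field_correct(raw, flat, dark):
    """Flat-field correction: subtract the master dark frame from both
    the raw image and the flat-field reference, divide, and rescale by
    the mean of the dark-subtracted flat field so the corrected image
    retains the overall brightness of the raw image."""
    raw = raw.astype(np.float64)
    flat = flat.astype(np.float64)
    dark = dark.astype(np.float64)
    numerator = raw - dark
    denominator = flat - dark
    # Guard against division by zero at dead flat-field pixels.
    denominator[denominator == 0] = 1.0
    corrected = numerator / denominator * denominator.mean()
    # Clamp to 0-255 as a precaution against sign inversion when the
    # dark frame exceeds the raw image at a given pixel.
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

With a perfectly flat specimen of uniform transmission, this correction removes an illumination gradient entirely, which is the sense in which it restores photometric accuracy.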

When it is not feasible to capture a background image in the microscope, a surrogate image can be created artificially by fitting a surface function to the background of the captured specimen image (see Figure 4). This artificial background image can then be subtracted from the specimen image. By selecting a number of points in the image that are located in the background, a list of brightness values at various positions is obtained. The resulting information can then be utilized to obtain a least-squares fit of a surface function that approximates the background. In Figure 4, eight adjustable control points are used to obtain a least-squares fit of the background image with a surface function B(x,y) of the form:

B(x,y) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*xy

where c0 ... c5 are the least-squares solutions, and (x,y) represents the coordinates of a pixel in the fitted background image. The specimen presented in Figure 4 is a young starfish captured digitally with an optical microscope configured to operate in oblique illumination. The control points should be chosen so that they are evenly distributed across the image, and the brightness level at each control point should be representative of the background intensity. Placing many points within a small region of the image while very few or none are distributed in the surrounding regions will result in a poorly constructed background image. In general, background subtraction is utilized as an initial step in improving image quality, although (in practice) additional image enhancement techniques must often be applied to the subtraction image in order to obtain a useful result.
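A least-squares fit of this surface function can be sketched with numpy (illustrative only; the control-point coordinates in the usage below and the function name are hypothetical):

```python
import numpy as np

def fit_background(points, values, shape):
    """Fit B(x,y) = c0 + c1*x + c2*y + c3*x^2 + c4*y^2 + c5*xy to a
    set of background control points by linear least squares, then
    evaluate the fitted surface over the whole image grid."""
    x = np.array([p[0] for p in points], dtype=np.float64)
    y = np.array([p[1] for p in points], dtype=np.float64)
    # Design matrix: one column per coefficient c0..c5.
    A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(values, dtype=np.float64),
                                 rcond=None)
    c0, c1, c2, c3, c4, c5 = coeffs
    # Evaluate B(x,y) at every pixel; rows index y, columns index x.
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    return c0 + c1 * xx + c2 * yy + c3 * xx**2 + c4 * yy**2 + c5 * xx * yy
```

At least six well-spread control points are required to determine the six coefficients; clustering the points in one region leaves the fit unconstrained elsewhere, which is the failure mode described above.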

Images modified by flat-field correction appear similar to those obtained with background subtraction, but performing the operation by division (flat-field correction) is preferred because the technique yields images that are photometrically more accurate. The primary reason for this difference is that images result from light amplitude values derived by a multiplicative process that combines the luminous flux and exposure time. After application of flat-field correction techniques (but not necessarily background subtraction algorithms), the relative amplitudes of specimen features will be photometrically accurate. As an added benefit, flat-field correction removes a majority of the optical defects that are present in the raw image.

Image Integration

Because a digital image is composed of a matrix of integers, operations such as the summation or integration of images can readily be conducted at high speed. If the original images were digitized with 8-bit resolution, the storage region, or digital frame memory, which holds the accumulated images must have sufficient capacity to accommodate a sum that exceeds 8 bits. If it is assumed that a few pixels in an 8-bit digital image have the maximum gray-level value of 255, then summation of 30 frames would result in a local pixel gray-level value of 7650 and require a storage register with 13-bit capacity. To sum 256 frames, the storage capacity must equal 65,536 gray levels, or 16 bits, to accommodate the brightest pixels. Although modern computer monitors are capable of displaying images having more than 256 gray levels, the limited response of the human eye (35-50 gray levels) suggests that 16-bit digital images should be scaled to match the limitations of the display and human visual ability. When the useful information of the image resides only in a subregion of the 16-bit stored image, only this portion should be displayed.
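The accumulation-and-rescaling scheme can be sketched with numpy (a minimal illustration; the synthetic 96-frame stack of a constant dim scene is hypothetical):

```python
import numpy as np

# Hypothetical stack of 96 8-bit frames of the same dim viewfield:
# a constant gray level of 64 plus a little synthetic noise.
rng = np.random.default_rng(seed=0)
frames = np.clip(64 + rng.normal(0, 5, size=(96, 4, 4)),
                 0, 255).astype(np.uint8)

# Accumulate into a wider register: 96 frames of up to 255 can reach
# 24,480, which exceeds 8 bits, so a 16-bit frame memory is required.
accumulator = frames.sum(axis=0, dtype=np.uint16)

# Rescale for an 8-bit display.  Dividing the 96-frame sum by 32 is
# a threefold gain increase (96/32); clipping at 255 is where
# saturation, and hence information loss, would occur.
display = np.clip(accumulator // 32, 0, 255).astype(np.uint8)
```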
This is a beneficial approach when displaying images captured by a slow-scan CCD camera of a viewfield with a large intrascene range of intensities. The process involves searching through the 16-bit image for the visually meaningful portion. When images obtained with a video-rate analog or CCD camera are summed into a 16-bit frame memory, display of a meaningful 8-bit image is usually accomplished by dividing the stored sum by a constant. For example, a 96-frame summation of a viewfield can be divided by 96, 64, 32, or 24. Division by 32 is equivalent to a threefold increase in gain and results in utilization of the full 255-gray-level range. However, division by 24 is equivalent to a fourfold gain increase and results in image saturation and loss of information. Image integration using digital image processing techniques often enables visualization of a faint object that is barely detectable above the camera noise. Integration may be of particular value in low-light-level imaging when the brightness of the image cannot be increased by additional image intensification. However, it is important to realize that, from signal-to-noise considerations, integration directly on the sensor is always preferable to integration in the processing software. Each image integration step in the software introduces analog-to-digital noise as well as camera readout noise.

Digital Image Histogram Adjustment

A majority of the digital images captured in an optical device, such as a camera or microscope, require adjustments to either the look-up table or the image histogram to optimize brightness, contrast, and general image visibility. Histograms of digital images provide a graphical representation of image contrast and brightness characteristics, and are useful in evaluating contrast deficiencies such as low or high contrast and inadequate dynamic range. An image histogram is a graphical plot displaying input pixel values on the x-axis (referred to as a bin) versus the number (or relative number) of pixels for any given bin value on the y-axis. Each bin in a grayscale histogram depicts a subgroup of pixels in the image, sorted by gray level. The numeric range of input values, or bins, on the x-axis usually corresponds to the bit depth of the captured image (0-255 for 8-bit images, 0-1023 for 10-bit images, and 0-4095 for 12-bit images). Mathematical operations may be performed on the histogram itself to alter the relative distribution of bins at any gray level. Manipulation of the histogram can correct poor contrast and brightness to dramatically improve the quality of digital images. Histogram stretching involves modifying the brightness (intensity) values of pixels in the image according to a mapping function that specifies an output pixel brightness value for each input pixel brightness value (see Figure 5). For a grayscale digital image, this process is straightforward. For an RGB color-space digital image, histogram stretching can be accomplished by converting the image to a hue, saturation, intensity (HSI) color-space representation and applying the brightness mapping operation to the intensity information alone. The following mapping function is often utilized to compute pixel brightness values:

Output(x,y) = (Input(x,y) - B) / (W - B)

In the above equation, the intensity range is assumed to lie between 0.0 and 1.0, with 0.0 representing black and 1.0 representing white. The variable B represents the intensity value corresponding to the black level, while the intensity value corresponding to the white level is represented by the variable W. In some instances, it is desirable to apply a nonlinear mapping function to a digital image in order to selectively modify portions of the image. Histogram equalization (also referred to as histogram leveling) is a related technique that results in the reassignment of pixel gray-level values so that the entire range of gray levels is utilized and the number of counts per bin remains constant. The process yields a flat image histogram with a horizontal profile that is devoid of peaks. Pixel values are reassigned to ensure that each gray level contains an equal number of pixels while retaining the rank order of pixel values in the original image.
Equalization is often utilized to enhance contrast in images with extremely low contrast, where a majority of the pixels have nearly the same value and do not respond well to conventional histogram stretching algorithms. The technique is effective in treating featureless dark and flat-field frames, and in rescuing images with low-amplitude gradients. In contrast, histogram stretching spaces gray-level values to cover the entire range evenly. The auto-enhance or automatic levels (contrast) features of many image processing software packages utilize one of these histogram-based transformations of the image.
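Both operations can be sketched with numpy (minimal illustrations, not production code; the equalization routine assumes an 8-bit image that is not a single constant gray level):

```python
import numpy as np

def stretch_histogram(image, black, white):
    """Linear histogram stretch: map the chosen black level B to 0.0
    and white level W to 1.0 via Output = (Input - B) / (W - B),
    clipping values that fall outside the [B, W] interval."""
    out = (image.astype(np.float64) - black) / (white - black)
    return np.clip(out, 0.0, 1.0)

def equalize_histogram(image):
    """Histogram equalization for an 8-bit grayscale image: the
    normalized cumulative histogram becomes a look-up table, so pixel
    rank order is preserved while gray levels spread over the full
    0-255 range."""
    hist = np.bincount(image.ravel(), minlength=256)
    cdf = hist.cumsum()
    # Normalize the CDF so the lowest occupied bin maps to 0 and the
    # highest maps to 255.
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[image]
```

Note that equalization is itself just another look-up table, computed from the image's own histogram rather than specified in advance.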

Digital image histograms can be displayed in several motifs that differ from the conventional linear x and y plots of pixel number versus gray-level value. Logarithmic histograms chart the input pixel value on the x-axis versus the number of pixels having that value on the y-axis, using a log scale. These histograms are useful for examining pixel values that comprise a minority of the image but exhibit a strong response to histogram stretching. Another commonly employed variation, the integrated or cumulative histogram, plots input pixel values on the x-axis and the cumulative number of all pixels having a value of x, and lower, on the y-axis. Cumulative histograms are often utilized to adjust contrast and brightness for images gathered in phase contrast, DIC, and brightfield illumination modes, which tend to have light backgrounds. In some cases, images have regions of very high intensity, manifested by large peaks near the 255 gray level of the histogram, where the video signal is saturated and all pixels have been rendered at the maximum gray value. This situation is termed gray-level clipping and usually indicates that a certain degree of detail has been lost in the digital image, because some regions of the original image that might have different intensities have each been assigned the same gray value. Clipping of the histogram may be acceptable in some circumstances if detail is lost only from unimportant parts of the image. Such a situation might occur, for example, if the system has been adjusted to maximize the contrast of stained histological slides under brightfield illumination, with the clipping occurring only in bright background regions where there is no cellular structure.

Spatial Convolution Kernels (or Masks)

Some of the most powerful image processing tools utilize multipixel operations, in which the integer value of each output pixel is altered by contributions from a number of adjoining input pixel values.
These operations are classically referred to as spatial convolutions and involve multiplication of a selected set of pixels from the original image with a corresponding array of pixels in the form of a convolution kernel or convolution mask. Convolutions are mathematical transformations of pixels, carried out in a manner that differs from simple addition, multiplication, or division, as illustrated in Figure 6 for a simple sharpening convolution kernel mask. In the simplest form, a two-dimensional convolution operation on a digital image utilizes a box convolution kernel. Convolution kernels typically feature an odd number of rows and columns in the form of a square; a 3 x 3 pixel mask (convolution kernel) is the most common form, but 5 x 5 and 7 x 7 kernels are also frequently employed. The convolution operation is performed individually on each pixel of the original input image, and involves three sequential operations, which are presented in Figure 6. The operation begins when the convolution kernel is overlaid on the original image in such a manner that the center pixel of the mask is matched with the single pixel location to be convolved from the input image. This pixel is referred to as the target pixel.

Next, each pixel integer value in the original (often termed the source) image is multiplied by the corresponding value in the overlying mask (Figure 6). These products are summed, and the grayscale value of the target pixel in the destination image is replaced by the sum of all the products, ending the operation. The convolution kernel is then translocated to the next pixel in the source image, which becomes the target pixel in the destination image, until every pixel in the original image has been targeted by the kernel. Convolution kernels may contain all positive values, or positive and negative values, and thus can produce negative totals, or results that exceed the maximum 255 limit that a pixel can hold. Appropriate divisor and offset values are needed to correct this. The smoothing convolution kernel illustrated in Figure 7(a) has a value of unity for each cell in the matrix, with a divisor value of 9 and an offset of zero. Kernel matrices for 8-bit grayscale images are often constrained with divisors and offsets that are chosen so that all processed values following the convolution fall between 0 and 255. Many of the popular software packages have user-specified convolution kernels designed to fine-tune the type of information that is extracted for a particular application. Convolution kernels are useful for a wide variety of digital image processing operations, including smoothing of noisy images (spatial averaging) and sharpening images by edge enhancement utilizing Laplacian, sharpening, or gradient filters (in the form of a convolution kernel). In addition to convolution operations, local contrast can be adjusted through the application of maximum, minimum, or median filters that rank the pixels within each local neighborhood. Furthermore, the use of a Fourier transform to convert images from the spatial to the frequency domain makes possible another class of filtering operations.
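The three-step kernel operation described above can be sketched directly (a naive numpy loop written for clarity rather than speed; leaving border pixels unchanged is one of several common edge-handling choices and is an assumption here, as is the function name):

```python
import numpy as np

def convolve3x3(image, kernel, divisor=1, offset=0):
    """Direct 3 x 3 spatial convolution: overlay the kernel on each
    target pixel, multiply corresponding values, sum the products,
    then apply a divisor and offset so results stay within 0-255.
    For symmetric kernels this correlation-style loop is identical to
    a true convolution.  Border pixels are left unchanged."""
    src = image.astype(np.float64)
    out = src.copy()
    rows, cols = src.shape
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = src[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = (window * kernel).sum() / divisor + offset
    return np.clip(out, 0, 255).astype(np.uint8)

# A 3 x 3 kernel of unit weights with a divisor of 9 replaces each
# pixel with the average of itself and its eight nearest neighbors.
smooth = np.ones((3, 3))
```

Calling `convolve3x3(image, smooth, divisor=9)` performs the spatial averaging described in the next section; swapping in a kernel with a large positive center and negative neighbors gives a sharpening filter instead.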
The total number of algorithms developed for image processing is enormous, but several operations enjoy widespread application among many of the popular image processing software packages.

Smoothing Convolution Filters (Spatial Averaging)

Specialized convolution kernels, often termed smoothing filters, are frequently used for reducing random noise in digital images. A typical smoothing convolution filter is illustrated in Figure 7(a), and is essentially a matrix having an integer value of 1 for each row and column. When an image is convolved with this type of kernel, the gray value of each pixel is replaced by the average intensity of its eight nearest neighbors and itself. Random noise in digital images is manifested by spurious pixels having unusually high or low intensity values. If the gray value of any pixel overlaid by the convolution kernel is dramatically different from that of its neighbors, the averaging effect of the filter will tend to reduce the effect of the noise by distributing it among all of the neighboring pixels.

The nine integers in each smoothing kernel illustrated in Figure 7 add to a value of 1 when summed and divided by the number of values in the matrix. These kernels are designed so that the convolution operation will produce an output image having an average brightness equal to that of the input image (although in some cases this is only approximate). In general, the sum of terms in most convolution kernels will add to a value between zero and one in order to avoid creating an output image having gray values that exceed the dynamic range of the digital-to-analog converter utilized to display the image.

Smoothing convolution kernels act as low-pass filters to suppress the contribution of high spatial frequencies in the image. The term spatial frequency is analogous to the concept of frequency with respect to time (temporal frequency), and describes how rapidly a signal changes with respect to position in the image. A low spatial frequency might exhibit only a few cycles across the width of an image (manifested as widely spaced stripes, for example), whereas a high spatial frequency undergoes numerous cycles across the lateral dimensions of the image. An excellent example of high spatial frequency is the minute, orderly array of pores and striae exhibited by diatom frustules, which alternate between very high and low intensities over very short distances. The highest spatial frequency that can be displayed in a digital image has a period equal to the width of two pixels.

The type of random noise typically observed in digital images has a high spatial frequency that can be effectively removed by applying a smoothing convolution kernel to the image, pixel by pixel. However, other "real" image features that are desirable, such as object boundaries and fine structural details, may also have high spatial frequencies that can unfortunately be suppressed by the smoothing filter. Consequently, application of a smoothing convolution kernel will often have the undesirable effect of blurring an input image. Furthermore, the larger the kernel (5 x 5, 7 x 7, and 9 x 9), the more severe this blurring effect will be (Figure 1). For most applications, the size and form of the smoothing kernel must be carefully chosen to optimize the tradeoff between noise reduction and image degradation. A Gaussian filter is a smoothing filter based on a convolution kernel that is a Gaussian function, and it provides the least amount of spatial blurring for any desired degree of random noise reduction. Smoothing filters are good tools for making simple cosmetic improvements to grainy images that have a low signal-to-noise ratio, but these filters can also undesirably reduce the image resolution as a consequence.
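The tradeoff between kernel size and blurring can be demonstrated with a one-dimensional sketch (hypothetical helper names; the convention of leaving border samples untouched is assumed): widening a mean kernel from three to five samples smears a sharp step across proportionally more positions.

```python
def convolve1d(signal, kernel):
    """Apply a 1-D averaging kernel; samples near the borders, where the
    kernel does not fit, are left unchanged."""
    k = len(kernel) // 2
    out = signal[:]
    for i in range(k, len(signal) - k):
        out[i] = sum(kernel[j] * signal[i - k + j] for j in range(len(kernel)))
    return out

# A sharp step edge between gray levels 0 and 10.
step = [0] * 5 + [10] * 5

mean3 = convolve1d(step, [1 / 3] * 3)  # three-sample mean kernel
mean5 = convolve1d(step, [1 / 5] * 5)  # five-sample mean kernel

# Count how many samples end up at intermediate gray levels: the wider
# kernel spreads the transition over more of them.
blur3 = sum(0 < v < 10 for v in mean3)  # 2 intermediate samples
blur5 = sum(0 < v < 10 for v in mean5)  # 4 intermediate samples
```

The same scaling holds in two dimensions, which is why a 9 x 9 smoothing kernel blurs edges far more visibly than a 3 x 3 kernel.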

Sharpening Convolution Filters

In direct contrast to the action of smoothing convolution filters, sharpening filters are designed to enhance the higher spatial frequencies in a digital image, while simultaneously suppressing the lower frequencies. A typical 3 x 3 convolution mask and its effect on a digital image captured with an optical microscope are illustrated in Figure 7(c). In addition to enhancing specimen boundaries and fine details, sharpening filters also have the effect of removing slowly varying background shading. Thus, these filters can sometimes be utilized to correct for shading distortion in an image without having to resort to background subtraction algorithms. Unfortunately, sharpening convolution filters have the undesirable effect of enhancing random noise in digital images.
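A minimal sketch of such a filter (pure Python, illustrative names; borders left unchanged for simplicity): a common 3 x 3 sharpening kernel whose weights sum to one leaves flat regions untouched while exaggerating a step edge.

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to interior pixels; borders are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[dy + 1][dx + 1] * image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

# A common 3x3 sharpening kernel: the weights sum to one, so flat
# regions keep their brightness while any local deviation is amplified.
SHARPEN = [[ 0, -1,  0],
           [-1,  5, -1],
           [ 0, -1,  0]]

# A vertical step edge between gray levels 10 and 20.
step = [[10, 10, 20, 20] for _ in range(4)]
sharp = convolve3x3(step, SHARPEN)
# The dark side of the edge dips to 0 and the bright side overshoots
# to 30, exaggerating the transition; a flat field passes unchanged.
```

The overshoot on either side of the edge is exactly the halo effect that over-sharpened micrographs exhibit, and it amplifies single-pixel noise in the same way.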

The kernel size can be adjusted to optimize the effects of sharpening filters and to fine-tune the masks to operate on a specific range of spatial frequencies. A typical 3 x 3 mask (see Figures 4 and 7) has the greatest effect on image features that vary over the spacing interval of a single pixel. Doubling or tripling the size of the kernel will target lower spatial frequencies that extend over two or more pixels.

Median Filters

Median filters are primarily designed to remove image noise, but they are also very effective at eliminating faulty pixels (those having unusually high or low brightness values) and at reducing the deterioration caused by fine scratches. These filters are often more effective at removing noise than smoothing (low-pass) convolution kernels. Median kernels are applied in a manner that differs from standard smoothing or sharpening kernels: although the median filter operates in a local neighborhood that is translated from pixel to pixel, no convolution matrix is applied. At each successive pixel location, the pixels under scrutiny are ranked according to their intensity magnitude, the median value of all the pixels covered by the neighborhood is determined, and that value is assigned to the central pixel location in the output image. Median filters are useful for removing the random intensity spikes that often occur in digital images captured in the microscope. Pixels contributing to a spike are replaced with the median value of the local neighborhood pixels, which produces a more uniform appearance in the processed image, and background regions that contain infrequent intensity spikes are rendered uniformly. In addition, because the median filter preserves edges, fine specimen detail, and boundaries, it is often employed for processing images having high contrast.
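The rank-ordering procedure can be sketched in a few lines of pure Python (illustrative names; interior pixels only, with borders copied unchanged), showing both properties at once: a spike is eliminated while a sharp edge survives.

```python
def median_filter(image):
    """Replace each interior pixel with the median of its 3x3 neighborhood.

    No convolution matrix is involved: the nine covered pixels are simply
    ranked by intensity and the middle value is taken.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ranked = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = ranked[4]  # 5th of the 9 ranked values
    return out

# An intensity spike of 255 in a background of 10s, next to a 10/50 edge.
img = [[10,  10, 10, 50, 50],
       [10, 255, 10, 50, 50],
       [10,  10, 10, 50, 50],
       [10,  10, 10, 50, 50]]
clean = median_filter(img)
# The spike becomes the local median (10), while the edge survives intact.
```

A mean kernel applied to the same image would smear the 255 spike across its neighbors and soften the 10/50 boundary; the median does neither, which is why it is preferred for high-contrast images.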
Specialized Convolution Filters

Derivative filters provide a quantitative measurement of the rate of change in pixel brightness across a digital image. When a derivative filter is applied to a digital image, the resulting data concerning brightness fluctuation rates can be used to enhance contrast, detect edges and boundaries, and measure feature orientation. One of the most important derivative filters is the Sobel filter, which combines two orthogonal derivatives (produced by 3 x 3 kernel convolutions) to calculate the vector gradient of brightness. These convolutions are very useful for edge enhancement of digital images captured in the microscope. Edges are usually among the most important features of a microscopic structure, and can often be utilized for measurements after appropriate enhancement algorithms have been applied. Laplacian filters (often termed operators) are employed to calculate the second derivative of intensity with respect to position and are useful for determining whether a pixel resides on the dark or light side of an edge. The Laplacian enhancement operation generates sharp peaks at edges, and any brightness slope, regardless of whether it is positive or negative, is accentuated, bestowing an omnidirectional quality on this filter. It is interesting to note that in the human visual system, the eye-brain network applies a Laplacian-style enhancement to every object in the viewfield. Human vision can be simulated by adding a Laplacian-enhanced image to the original image, using a dual-image point process, to produce a modified image that appears much sharper and more pleasing.

An important issue that arises within the convolution methodology is that the convolution kernel extends beyond the borders of the image when it is applied to border pixels. One technique commonly utilized to remedy this problem, referred to as centered, zero boundary superposition, is simply to ignore the problematic pixels and to perform the convolution operation only on those pixels located at a sufficient distance from the borders. This method has the disadvantage of producing an output image that is smaller than the input image. A second technique, called centered, zero padded superposition, involves padding the missing pixels with zeroes.
Yet a third technique regards the image as a single element in a tiled array of identical images, so that the missing pixels are taken from the opposite side of the image. This method is called centered, reflected boundary superposition and has the advantage of allowing for the use of modulo arithmetic in the calculation of pixel addresses to eliminate the need for considering border pixels as a special case. Each of these techniques is useful for specific image-processing applications. The zero padded and reflected boundary methods are commonly applied to image enhancement filtering techniques, while the zero boundary method is often utilized in edge detection and in the computation of spatial derivatives.

Unsharp Mask Filtering

Unsharp mask algorithms operate by subtraction of a blurred image from the original image, followed by adjustment of the gray level values in the difference image. This operation enables preservation of high-frequency detail while allowing shading correction and background suppression. The popular technique is an excellent vehicle to enhance fine specimen detail and sharpen edges that are not clearly defined in the original image. The first step in an unsharp mask process is to produce a slight blur (by passage through a Gaussian low-pass filter) and a reduction in amplitude of the original image, which is then subtracted from the unmodified original to produce a sharpened image. Regions in the image that have uniform amplitude are rendered in a medium gray brightness level, whereas regions with larger slopes (edges and boundaries) appear as lighter or darker gradients. In general, unsharp mask filters operate by subtracting appropriately weighted segments of the unsharp mask (the blurred original) from the original image. Such a subtraction operation enhances high-frequency spatial detail at the expense (attenuation) of low-frequency spatial information in the image.
This effect occurs because the high-frequency spatial detail removed from the unsharp mask by the Gaussian filter is not subtracted from the original image, whereas the low-frequency spatial detail that is passed by the Gaussian filter (into the unsharp mask) is almost entirely subtracted from the original image. Increasing the size of the Gaussian filter allows the smoothing operation to remove larger-scale detail, so that detail of that scale is retained in the difference image. One of the primary advantages of the unsharp mask filter over other sharpening filters is its flexibility of control, because the majority of the other filters do not provide any user-adjustable parameters. Like other sharpening filters, the unsharp mask filter enhances edges and fine detail in a digital image. Because sharpening filters also suppress low-frequency detail, these filters can be used to correct shading distortion throughout an image, which is commonly manifested in the form of slowly varying background intensities. Unfortunately, sharpening filters also have the undesirable side effect of increasing noise in the filtered image. For this reason, the unsharp mask filter should be used conservatively, and a reasonable balance should always be sought between the enhancement of detail and the propagation of noise.

Fourier Transforms

The Fourier transform is based on the theorem that any harmonic function can be represented by a series of sine and cosine functions, differing only in frequency, amplitude, and phase. These transforms display the frequency and amplitude relationships between the harmonic components of the original functions from which they were derived. The Fourier transform converts a function that varies in space into another function that varies with frequency. It should also be noted that the highest spatial frequencies of the original function are found farthest away from the origin in the Fourier transform.
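The discrete version of this idea can be illustrated with a brief pure-Python sketch (a naive O(N²) DFT, adequate for illustration though far slower than an FFT): each harmonic component of a sampled signal appears as a discrete peak in the transform, and the faster component sits farther from the zero-frequency origin.

```python
import cmath
import math

def dft(signal):
    """Naive O(N^2) discrete Fourier transform, for illustration only."""
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

N = 32
# One slowly varying component (2 cycles across the field) plus a
# weaker, rapidly varying component (12 cycles).
signal = [math.sin(2 * math.pi * 2 * n / N)
          + 0.5 * math.sin(2 * math.pi * 12 * n / N)
          for n in range(N)]

magnitudes = [abs(c) for c in dft(signal)]
# Energy appears only at bins 2 and 12 (plus their mirror bins 30 and
# 20); the faster component lies farther from the zero-frequency origin.
```

The peak heights (N/2 times each component's amplitude for a real sinusoid) also recover the amplitude relationship between the two harmonics.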

Spatial filtering involving Fourier techniques can be utilized to manipulate images through deletion of high or low spatial-frequency information from an image by designing a Fourier filter that is nontransmitting at the appropriate frequency. This technique is especially useful for removing harmonic noise from an image, such as the herringbone or sawtooth patterns often apparent in video images (see Figure 9). Because the added noise is harmonic, it will be found in localized discrete regions of the Fourier transform. When these local peaks are removed from the transform with the appropriate filter, the re-formed image is essentially unaltered except that the offending pattern is absent. Similar filtering techniques can also be applied to remove sine wave, moiré, halftone, and interference patterns, as well as noise from video signals, CCDs, power supplies, and electromagnetic induction. Illustrated in Figure 9(a) is a video image of a diatom frustule imaged in darkfield illumination with a superimposed sawtooth interference pattern. Adjacent to the diatom image (Figure 9(b)) is the Fourier transform power spectrum for the image, which contains the spatial frequency information. After applying several filters (Figure 9(d)) and re-forming the image, the sawtooth pattern has been effectively eliminated (Figure 9(c)), leaving only the image of the frustule.
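The notch-filtering procedure can be sketched in one dimension with a naive pure-Python DFT (illustrative code, not a production filter): harmonic noise occupies discrete bins of the transform, and zeroing those bins removes the pattern while leaving the underlying signal essentially unaltered.

```python
import cmath
import math

def dft(signal):
    """Naive O(N^2) discrete Fourier transform."""
    N = len(signal)
    return [sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(spectrum):
    """Inverse transform; returns the real part of each sample."""
    N = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

N = 32
# A slowly varying pattern (one cycle) plus harmonic noise at eight
# cycles, mimicking a sawtooth or herringbone interference pattern.
signal = [math.cos(2 * math.pi * n / N)
          + 0.5 * math.cos(2 * math.pi * 8 * n / N)
          for n in range(N)]

spectrum = dft(signal)
for k in (8, N - 8):      # the harmonic noise occupies two discrete bins
    spectrum[k] = 0
cleaned = idft(spectrum)  # one-cycle pattern survives; the noise is gone
```

For a real two-dimensional image the same logic applies to the localized peaks in the power spectrum, with the mirror-symmetric bins removed in pairs just as here.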

The decision as to whether to utilize Fourier filtering or convolution kernel masks depends on the application being considered. The Fourier transform is an involved operation that requires more computer horsepower and memory than a convolution using a small mask; however, the Fourier filtering technique is generally faster than the equivalent convolution operation when the convolution mask is large and approaches the size of the original image. Appropriate choice between equivalent Fourier and convolution operations may also reduce the complexity of the respective masks. For example, a simple Fourier filter, such as one designed to remove harmonic noise, would correspond to a large and complex convolution mask that would be difficult to use. Another useful feature of the Fourier transform stems from its relationship to the convolution operation, which involves several multiplication and addition operations, dictated by the contents of the convolution mask, to determine the intensity of each target pixel. This operation can be compared to Fourier filtering, where each value in the Fourier filter is simply multiplied by its corresponding pixel in the Fourier transform of the image. The two operations are related because the convolution operation is identical to the Fourier filtering operation when the Fourier filter is the Fourier transform of the convolution mask. This equivalence indicates that either of the two techniques can be employed to obtain identical results from an image, depending only on whether the operator decides to work in image space or Fourier space.

Conclusions

Digitization of a video or CCD-generated electronic image captured with the microscope results in a dramatic increase in the ability to enhance features, extract information, or modify the image.
The extent of the increased processing power of the digital approach may not be appreciated at first glance, particularly in comparison to older and apparently simpler analog methods, such as traditional photomicrography on film. In fact, digital image processing enables reversible, virtually noise-free modification of an image as a matrix of integers, instead of as a series of time-dependent voltages or, even more primitively, with a photographic enlarger in the darkroom. Much of the recent progress in high-resolution transmitted optical microscopy and low-light-level reflected fluorescence microscopy of living cells has relied heavily on digital image processing. In addition, most confocal and multiphoton microscopes depend strictly on high-speed, high-fidelity digitization of the scanned image, and on the subsequent digital manipulation of the viewfield to be displayed. Newer microscope designs lacking eyepieces (oculars) and coupled directly to image capture software also depend on image processing technology to produce high-quality digital images from the microscope.

The power of digital image processing to extract information from noisy or low-contrast images, and to enhance the appearance of these images, has led some investigators to rely on the technology instead of optimally adjusting and using the microscope or image sensor. Invariably, however, beginning with a higher-quality optical image, free of dirt, debris, noise, aberration, glare, scratches, and artifacts, yields a superior electronic image. Careful adjustment and proper calibration of the image sensor will likewise lead to a higher-quality digital image that fully utilizes the dynamic range of both the sensor and the digital image processing system.