
MC0086 - DIGITAL IMAGE PROCESSING

1. Explain the significance of digital image processing in gamma-ray imaging and in imaging in the visible and infrared bands.
Ans: Major uses of imaging based on gamma rays include nuclear medicine and astronomical observations. In nuclear medicine, the approach is to inject a patient with a radioactive isotope that emits gamma rays as it decays. Images are produced from the emissions collected by gamma-ray detectors. Figure 1.5(a) shows an image of a complete bone scan obtained by using gamma-ray imaging. Images of this sort are used to locate sites of bone pathology, such as infections or tumors.

Figure 1.5: Examples of gamma-ray imaging. (a) Bone scan. (b) PET image.

Fig. 1.5(b) shows another major modality of nuclear imaging called positron emission tomography (PET). The principle is the same as with X-ray tomography. However, instead of using an external source of X-ray energy, the patient is given a radioactive isotope that emits positrons as it decays. When a positron meets an electron, both are annihilated and two gamma rays are given off. These are detected, and a tomographic image is created using the basic principles of tomography. The image shown in Fig. 1.5(b) is one sample of a sequence that constitutes a 3-D rendition of the patient. This image shows a tumor in the brain and one in the lung, easily visible as small white masses.

Imaging in the Visible and Infrared Bands: Considering that the visual band of the electromagnetic spectrum is the most familiar in all our activities, it is not surprising that imaging in this band outweighs by far all the others in terms of scope of application. The infrared band is often used in conjunction with visual imaging, so we have grouped the visible and infrared bands in this section for the purpose of illustration. Fig. 1.8 shows several examples of images obtained with a light microscope. The examples range from pharmaceuticals and micro-inspection to materials characterization.

Even in just microscopy, the application areas are too numerous to detail here. It is not difficult to conceptualize the types of processes one might apply to these images, ranging from enhancement to measurements.

Figure 1.8: Examples of light-microscopy images. (a) Taxol (anticancer agent), magnified 250×. (b) Cholesterol, 40×. (c) Microprocessor, 60×. (d) Nickel oxide thin film, 600×. (e) Surface of audio CD, 1750×. (f) Organic superconductor, 450×.

2. Explain the properties and uses of the electromagnetic spectrum.
Ans: The electromagnetic spectrum can be expressed in terms of wavelength, frequency, or energy. Wavelength (λ) and frequency (ν) are related by the expression λ = c/ν, where c is the speed of light (2.998 × 10^8 m/s). The energy of the various components of the electromagnetic spectrum is given by the expression E = hν, where h is Planck's constant. The units of wavelength are meters, with the terms micron (denoted μm and equal to 10^−6 m) and nanometer (10^−9 m) being used frequently. Frequency is measured in Hertz (Hz), with one Hertz being equal to one cycle of a sinusoidal wave per second. Electromagnetic waves can be visualized as propagating sinusoidal waves of wavelength λ, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy, and each bundle of energy is called a photon. We see from E = hν that energy is proportional to frequency, so the higher-frequency (shorter-wavelength) electromagnetic phenomena carry more energy per photon. Thus, radio waves have photons with low energies; microwaves have more energy than radio waves, infrared still more, then visible, ultraviolet, X-rays and, finally, gamma rays, the most energetic of all. This is the reason that gamma rays are so dangerous to living organisms.

Three basic quantities are used to describe the quality of a chromatic light source: radiance, luminance and brightness. Radiance is the total amount of energy that flows from the light source, and it is usually measured in watts (W). Luminance, measured in lumens (lm), gives a measure of the amount of energy an observer perceives from a light source. For example, light emitted from a source operating in the far-infrared region of the spectrum could have significant energy (radiance), but an observer would hardly perceive it; its luminance would be almost zero.
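As a quick numerical check of the relations λ = c/ν and E = hν above, the following minimal Python sketch computes the per-photon energy for one representative wavelength from each band. The sample wavelengths are illustrative choices, not values from the text; the output confirms that photon energy rises monotonically from radio waves to gamma rays.

```python
# Photon energy E = h*nu = h*c/lambda for representative wavelengths.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

bands = {
    "radio (1 m)":           1.0,
    "microwave (1 cm)":      1e-2,
    "infrared (10 um)":      1e-5,
    "visible (550 nm)":      550e-9,
    "ultraviolet (100 nm)":  100e-9,
    "X-ray (1 nm)":          1e-9,
    "gamma ray (1 pm)":      1e-12,
}

for name, wavelength in bands.items():
    nu = C / wavelength     # frequency in Hz
    energy = H * nu         # photon energy in joules
    print(f"{name:22s} nu = {nu:.3e} Hz, E = {energy:.3e} J")
```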

3. Differentiate between monochromatic photography and color photography.
Ans: Monochromatic photography: The most common material for photographic image recording is silver halide emulsion. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate or paper backing. If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electrochemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains.

The photographic process described above is called a nonreversal process. It produces a negative image in the sense that the silver density is inversely proportional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials. First, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency. A positive transparency of an image can be obtained with a reversal type of film. This film is exposed and undergoes a first development similar to that of a nonreversal film. At this stage in the photographic process, all grains that have been exposed to light are converted completely to metallic silver. In the next step, the metallic silver grains are chemically removed. The film is then uniformly exposed to light, or alternatively, a chemical process is performed to expose the remaining silver halide grains. Then the exposed grains are developed and fixed to produce a positive transparency whose density is proportional to the original light exposure.

Color photography: Modern color photography systems utilize an integral tripack film, as illustrated in Fig. 5.4, to produce positive or negative transparencies. In a cross section of this film, the first layer is a silver halide emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents blue light from passing through to the green and red silver emulsions that follow in consecutive layers and are naturally sensitive to blue light. A transparent base supports the emulsion layers. Upon development, the blue emulsion layer is converted into a yellow dye transparency whose dye concentration is proportional to the blue exposure for a negative transparency and inversely proportional for a positive transparency. Similarly, the green and red emulsion layers become magenta and cyan dye layers, respectively. Color prints can be obtained by a variety of processes. The most common technique is to produce a positive print from a color negative transparency onto nonreversal color paper.
In the establishment of a mathematical model of the color photographic process, each emulsion layer can be considered to react to light as does an emulsion layer of a monochrome photographic material. To a first approximation, this assumption is correct. However, there are often significant interactions between the emulsion and dye layers. Each emulsion layer possesses a characteristic spectral sensitivity, as shown by the typical curves of Fig. 5.5. The integrated exposures of the layers are given by

X_R = d_R ∫ C(λ) S_R(λ) dλ
X_G = d_G ∫ C(λ) S_G(λ) dλ
X_B = d_B ∫ C(λ) S_B(λ) dλ

where C(λ) is the spectral energy distribution of the exposing light, S_R(λ), S_G(λ), S_B(λ) are the spectral sensitivities of the red, green and blue emulsion layers, and d_R, d_G, d_B are proportionality constants whose values are adjusted so that the exposures are equal for a reference white illumination and so that the film is not saturated.
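To make the exposure integrals concrete, here is a small Python sketch that approximates them as discrete sums over sampled wavelengths. The spectral curves are made-up placeholder shapes (Gaussian-like sensitivities and a flat illuminant), not the curves of Fig. 5.5; only the balancing of d_R, d_G, d_B against reference white follows the text.

```python
import numpy as np

# Wavelength axis over the visible band, sampled at 1 nm intervals.
lam = np.arange(400.0, 701.0, 1.0)       # nm
dlam = 1.0                               # sample spacing, nm

# Placeholder spectral curves (illustrative shapes, not the Fig. 5.5 data).
def bump(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

C = np.ones_like(lam)                    # flat "reference white" illuminant
S = {"R": bump(610, 40),                 # red-layer sensitivity
     "G": bump(540, 40),                 # green-layer sensitivity
     "B": bump(450, 40)}                 # blue-layer sensitivity

# Unscaled integrals, approximated as Riemann sums.
raw = {k: float(np.sum(C * S[k]) * dlam) for k in S}

# Pick d_R, d_G, d_B so all three exposures equal 1 for reference white.
d = {k: 1.0 / raw[k] for k in raw}
X = {k: d[k] * raw[k] for k in raw}
print(d, X)                              # X == {'R': 1.0, 'G': 1.0, 'B': 1.0}
```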

In the chemical development process of the film, a positive transparency is produced with three absorptive dye layers of cyan, magenta and yellow dyes.

4. Define and explain the dilation and erosion concepts.
Ans: Dilation: With dilation, an object grows uniformly in spatial extent. Generalized dilation is expressed symbolically as

G(j,k) = F(j,k) ⊕ H(j,k)

where F(j,k), for 1 ≤ j,k ≤ N, is a binary-valued image and H(j,k), for 1 ≤ j,k ≤ L, where L is an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F(j,k) and H(j,k) are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The Minkowski addition definition is

G(j,k) = ⋃ F(j − r, k − c), the union taken over all (r,c) for which H(r,c) = 1, with the offsets (r,c) measured from the center of the structuring element.

It states that G(j,k) is formed by the union of all translates of F(j,k) with respect to itself in which the translation distance is the row and column index of the pixels of H(j,k) that are a logical 1.
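A minimal NumPy sketch of this union-of-translates definition follows; the function name, the single-pixel test image and the 3×3 structuring element are illustrative choices, not from the text.

```python
import numpy as np

def dilate(F, H):
    """Generalized dilation: union of translates of F at the 1-pixels of H."""
    L = H.shape[0]
    half = L // 2                       # offsets measured from the center of H
    G = np.zeros_like(F)
    for r in range(L):
        for c in range(L):
            if H[r, c]:                 # translate F by (r - half, c - half), OR it in
                shifted = np.roll(np.roll(F, r - half, axis=0), c - half, axis=1)
                G |= shifted
    return G

F = np.zeros((7, 7), dtype=np.uint8)
F[3, 3] = 1                             # single object pixel
H = np.ones((3, 3), dtype=np.uint8)     # 3x3 structuring element
print(dilate(F, H))                     # the pixel grows into a 3x3 block
```

Note that np.roll wraps around at the image borders; for this interior example the wraparound has no effect.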

Erosion: With erosion, an object shrinks uniformly. Generalized erosion is expressed symbolically as

G(j,k) = F(j,k) ⊖ H(j,k)

where H(j,k) is an odd-size L × L structuring element. Generalized erosion is defined to be

G(j,k) = ⋂ F(j + r, k + c), the intersection taken over all (r,c) for which H(r,c) = 1, with the offsets (r,c) again measured from the center of the structuring element.

The meaning of this relation is that the erosion of F(j,k) by H(j,k) is the intersection of all translates of F(j,k) in which the translation distance is the row and column index of the pixels of H(j,k) that are in the logical-one state. The figure below illustrates generalized dilation and erosion.

Properties of dilation and erosion:
i. Dilation is commutative, A ⊕ B = B ⊕ A, but in general erosion is not commutative: A ⊖ B ≠ B ⊖ A.
ii. Dilation and erosion are opposite in effect; dilation of the background of an object behaves like erosion of the object. This statement can be quantified by the duality relationship.
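Continuing the sketch above (it reuses dilate and the numpy import from that block), erosion replaces the union with an intersection, and the duality property in (ii) can be checked numerically; the test square and threshold-free predicate are again illustrative.

```python
def erode(F, H):
    """Generalized erosion: intersection of translates of F at the 1-pixels of H."""
    L = H.shape[0]
    half = L // 2
    G = np.ones_like(F)
    for r in range(L):
        for c in range(L):
            if H[r, c]:                 # translate F by (half - r, half - c), AND it in
                shifted = np.roll(np.roll(F, half - r, axis=0), half - c, axis=1)
                G &= shifted
    return G

F = np.zeros((7, 7), dtype=np.uint8)
F[2:5, 2:5] = 1                         # 3x3 square object
H = np.ones((3, 3), dtype=np.uint8)
print(erode(F, H))                      # the square shrinks to its center pixel

# Duality check: dilating the background equals complementing the eroded object.
# (H is symmetric here, so no reflection of the structuring element is needed,
# and np.roll's wraparound is harmless because the object sits away from borders.)
assert np.array_equal(dilate(1 - F, H), 1 - erode(F, H))
```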

5. What is meant by image feature evaluation? Which are the two quantitative approaches used for the evaluation of image features?
Ans: An image feature is a distinguishing primitive characteristic or attribute of an image. Some features are natural in the sense that they are defined by the visual appearance of an image, while other, artificial features result from specific manipulations of an image. Natural features include the luminance of a region of pixels and gray-scale textural regions. Image amplitude histograms and spatial frequency spectra are examples of artificial features. There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit.

In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using the various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, that which results in the least classification error. The prototype performance approach for image segmentation is similar in nature.

A prototype image with independently identified regions is segmented by a segmentation procedure using a test set of features. Then, the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication depends not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter.

The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurement between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt have utilized the Bhattacharyya distance figure of merit for texture feature evaluation. The method should be extensible to other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes, defined as

B(S1, S2) = −ln { ∫ [ p(x | S1) p(x | S2) ]^(1/2) dx }

where x denotes a vector containing individual image feature measurements with conditional density p(x | S1).
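For two classes with Gaussian feature densities, this integral has the well-known closed form B = (1/8)(μ2 − μ1)ᵀ[(Σ1 + Σ2)/2]⁻¹(μ2 − μ1) + (1/2) ln[ det((Σ1 + Σ2)/2) / √(det Σ1 det Σ2) ]. The sketch below evaluates it for two made-up feature classes; the means and covariances are illustrative, not measured texture features.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """B-distance between two multivariate Gaussian feature densities."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2.0                       # pooled covariance
    diff = mu2 - mu1
    term1 = diff @ np.linalg.inv(cov) @ diff / 8.0  # mean-separation term
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

# Two hypothetical 2-D feature classes; a larger B should mean easier separation.
print(bhattacharyya_gaussian([0, 0], np.eye(2), [1, 1], np.eye(2)))   # ~0.25
print(bhattacharyya_gaussian([0, 0], np.eye(2), [4, 4], np.eye(2)))   # ~4.0
```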

6. Explain region splitting and merging with an example.
Ans: The idea is to subdivide an image into a set of disjoint regions and then merge and/or split the regions in an attempt to satisfy the segmentation conditions. Let R represent the entire image region and select a predicate P. One approach for segmenting R is to subdivide it successively into smaller and smaller quadrant regions so that, for any region Ri, P(Ri) = TRUE. We start with the entire region. If P(R) = FALSE, the image is divided into quadrants. If P is FALSE for any quadrant, we subdivide that quadrant into sub-quadrants, and so on. This particular splitting technique has a convenient representation in the form of a so-called quadtree (that is, a tree in which each node has exactly four descendants). The root of the tree corresponds to the entire image, and each node corresponds to a subdivision; in this case, only one quadrant was subdivided further.

If only splitting were used, the final partition would likely contain adjacent regions with identical properties. This drawback may be remedied by allowing merging as well as splitting. Satisfying the constraints of Section 10.3.1 requires merging only adjacent regions whose combined pixels satisfy the predicate P. That is, two adjacent regions Rj and Rk are merged only if P(Rj ∪ Rk) = TRUE. The procedure, sketched in code below, is:
1. Split into four disjoint quadrants any region Ri for which P(Ri) = FALSE.
2. Merge any adjacent regions Rj and Rk for which P(Rj ∪ Rk) = TRUE.
3. Stop when no further merging or splitting is possible.
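The following Python sketch implements the split phase of this procedure on a toy image, using an intensity-range predicate; the threshold, the image and the helper names are illustrative assumptions, and a full implementation would add the merge pass over adjacent leaves described in step 2.

```python
import numpy as np

def P(region):
    """Predicate: TRUE when the region is nearly uniform (illustrative threshold)."""
    return region.max() - region.min() <= 10

def split(img, r, c, size, leaves):
    """Recursively split the square img[r:r+size, c:c+size] until P holds."""
    region = img[r:r + size, c:c + size]
    if P(region) or size == 1:
        leaves.append((r, c, size))      # quadtree leaf: position and side length
        return
    half = size // 2                     # subdivide into four quadrants
    for dr in (0, half):
        for dc in (0, half):
            split(img, r + dr, c + dc, half, leaves)

# Toy 8x8 image: a bright square offset from the quadrant grid, so mixed
# quadrants are subdivided again while uniform ones stay whole.
img = np.zeros((8, 8), dtype=np.int32)
img[2:8, 2:8] = 100

leaves = []
split(img, 0, 0, 8, leaves)
print(leaves)                            # 13 leaves of sizes 4 and 2
```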
