
Discrete And Stationary Wavelet Decomposition For Image Resolution Enhancement

CHAPTER 1 INTRODUCTION
Resolution has frequently been referred to as an important aspect of an image, and images are processed in order to obtain enhanced resolution. One of the commonly used techniques for image resolution enhancement is interpolation. Interpolation has been widely used in many image processing applications such as facial reconstruction, multiple description coding, and super resolution. There are three well known interpolation techniques, namely nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation. Image resolution enhancement in the wavelet domain is a relatively new research topic, and recently many new algorithms have been proposed. The discrete wavelet transform (DWT) is one of the recent wavelet transforms used in image processing. DWT decomposes an image into different sub-band images, namely low-low (LL), low-high (LH), high-low (HL), and high-high (HH). Another recent wavelet transform which has been used in several image processing applications is the stationary wavelet transform (SWT). In short, SWT is similar to DWT but it does not use down-sampling, hence the sub-bands have the same size as the input image. In this work, we propose an image resolution enhancement technique which generates a sharper high resolution image. The proposed technique uses DWT to decompose a low resolution image into different sub-bands. The three high frequency sub-band images are then interpolated using bicubic interpolation. The high frequency sub-bands obtained by applying SWT to the input image are added to the interpolated high frequency sub-bands in order to correct the estimated coefficients. In parallel, the input image is also interpolated separately. Finally, the corrected interpolated high frequency sub-bands and the interpolated input image are combined using the inverse DWT (IDWT) to achieve a high resolution output image. The proposed technique has been compared with conventional and state-of-the-art image resolution enhancement techniques.
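A minimal MATLAB sketch of this processing chain is given below. It assumes the Wavelet Toolbox functions dwt2, swt2 and idwt2 and the Image Processing Toolbox function imresize; the file name, wavelet ('db1') and enlargement factor are illustrative choices rather than the exact experimental settings.

% Sketch of the DWT+SWT resolution-enhancement chain described above (illustrative).
LR    = im2double(imread('input_lowres.png'));  % low-resolution grayscale input
alpha = 2;                                      % overall enlargement factor

% 1) DWT splits the input into LL, LH, HL and HH sub-bands (half the size each).
[LL, LH, HL, HH] = dwt2(LR, 'db1');

% 2) Bicubic interpolation of the three high-frequency sub-bands back to the
%    size of the input image.
LHi = imresize(LH, 2, 'bicubic');
HLi = imresize(HL, 2, 'bicubic');
HHi = imresize(HH, 2, 'bicubic');

% 3) One-level SWT of the input (its dimensions must be even): the SWT
%    high-frequency sub-bands have the same size as the input and are added
%    to correct the estimated coefficients.
[~, swLH, swHL, swHH] = swt2(LR, 1, 'db1');
LHc = LHi + swLH;
HLc = HLi + swHL;
HHc = HHi + swHH;

% 4) The corrected sub-bands and the input image itself are interpolated by
%    alpha/2 and combined by the inverse DWT into the high-resolution output
%    (the LL sub-band is not used further; the interpolated input takes its place).
LHc = imresize(LHc, alpha/2, 'bicubic');
HLc = imresize(HLc, alpha/2, 'bicubic');
HHc = imresize(HHc, alpha/2, 'bicubic');
LLi = imresize(LR,  alpha/2, 'bicubic');
HR  = idwt2(LLi, LHc, HLc, HHc, 'db1');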


1.1 Background
Mathematical transformations are applied to signals to obtain information that is not readily available in the raw signal. In the following discussion, a time-domain signal is treated as the raw signal, and a signal that has been "transformed" by any of the available mathematical transformations is treated as a processed signal. There are a number of transformations that can be applied, among which the Fourier transform is probably by far the most popular. Most signals in practice are TIME-DOMAIN signals in their raw format. That is, whatever the signal is measuring is a function of time. In other words, when we plot the signal, one of the axes is time (the independent variable) and the other (the dependent variable) is usually the amplitude. When we plot time-domain signals, we obtain a time-amplitude representation of the signal. This representation is not always the best representation of the signal for most signal processing applications. In many cases, the most distinguishing information is hidden in the frequency content of the signal. The frequency SPECTRUM of a signal is basically the set of frequency components (spectral components) of that signal; it shows which frequencies exist in the signal. Intuitively, we all know that frequency has to do with the rate of change of something. If something (a mathematical or physical variable, to be technically correct) changes rapidly, we say that it is of high frequency, whereas if this variable changes slowly and smoothly, we say that it is of low frequency. If this variable does not change at all, then we say it has zero frequency, or no frequency.

1.2 Need for frequency information


Oftentimes, information that cannot be readily seen in the time domain can be seen in the frequency domain. Let's take an example from biological signals. Suppose we are looking at an ECG signal (electrocardiogram, a graphical recording of the heart's electrical activity). The typical shape of a healthy ECG signal is well known to cardiologists, and any significant deviation from that shape is usually considered a symptom of a pathological condition. This pathological condition, however, may not always be obvious in the original time-domain signal. Cardiologists usually analyze ECG signals recorded in the time domain on strip-charts. Recently, the newer computerized ECG recorders/analyzers have also utilized the frequency information to decide whether a pathological condition exists; such a condition can sometimes be diagnosed more easily when the frequency content of the signal is analyzed. This, of course, is only one simple example of why frequency content might be useful. Today, Fourier transforms are used in many different areas, including all branches of engineering. Although the FT is probably the most popular transform in use (especially in electrical engineering), it is not the only one. There are many other transforms used quite often by engineers and mathematicians: the Hilbert transform, the short-time Fourier transform (more about this later), Wigner distributions, the Radon transform, and of course our featured transformation, the wavelet transform, constitute only a small portion of the huge list of transforms available at the engineer's and mathematician's disposal. Every transformation technique has its own area of application, with advantages and disadvantages, and the wavelet transform (WT) is no exception. For a better understanding of the need for the WT, let's look at the FT more closely. The FT (as well as the WT) is a reversible transform; that is, it allows one to go back and forth between the raw and the processed (transformed) signal. However, only one of them is available at any given time: no frequency information is available in the time-domain signal, and no time information is available in the Fourier transformed signal. The natural question that comes to mind is whether it is necessary to have both the time and the frequency information at the same time. As we will see soon, the answer depends on the particular application and on the nature of the signal at hand. Recall that the FT gives the frequency information of the
signal, which means that it tells us how much of each frequency exists in the signal, but it does not tell us when in time these frequency components exist. This information is not required when the signal is so-called stationary. Let's take a closer look at this concept of stationarity, since it is of paramount importance in signal analysis. Signals whose frequency content does not change in time are called stationary signals. In this case, one does not need to know at what times frequency components exist, since all frequency components exist at all times.
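For reference, the standard Fourier transform pair referred to above is

\[ X(f) = \int_{-\infty}^{\infty} x(t)\, e^{-j 2\pi f t}\, dt, \qquad x(t) = \int_{-\infty}^{\infty} X(f)\, e^{\,j 2\pi f t}\, df \]

The pair makes the reversibility explicit: no information is lost in going from x(t) to X(f), but the time variable is integrated out, so time localization is no longer visible in X(f).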

1.3 Need for Discrete Wavelet Transform


Although the discretized continuous wavelet transform enables the computation of the continuous wavelet transform by computers, it is not a true discrete transform. As a matter of fact, the wavelet series is simply a sampled version of the CWT, and the information it provides is highly redundant as far as the reconstruction of the signal is concerned. This redundancy also requires a significant amount of computation time and resources. The discrete wavelet transform (DWT), on the other hand, provides sufficient information for both analysis and synthesis of the original signal, with a significant reduction in computation time, and it is considerably easier to implement than the CWT. The basic concepts of the DWT are introduced in a later chapter, along with its properties and the algorithms used to compute it, and examples are provided to aid in its interpretation.

1.4 Image resolution enhancement


The conventional techniques used for comparison are the interpolation techniques (bilinear interpolation and bicubic interpolation) and wavelet zero padding (WZP). The state-of-the-art techniques used for comparison purposes are the following: regularity-preserving image interpolation, new edge-directed interpolation (NEDI), hidden Markov model (HMM), HMM-based image super resolution (HMM SR), WZP with cycle-spinning (WZP-CS), WZP with cycle-spinning and edge rectification (WZP-CS-ER), DWT-based super resolution (DWT SR), and complex wavelet transform based super resolution (CWT SR). According to the quantitative and qualitative experimental results, the proposed technique outperforms the aforementioned conventional and state-of-the-art techniques for image resolution enhancement.


CHAPTER 2 INTERPOLATION
Introduction
One of the commonly used techniques for image resolution enhancement is interpolation. Interpolation has been widely used in many image processing applications such as facial reconstruction, multiple description coding, and super resolution. There are three well known interpolation techniques, namely nearest neighbor interpolation, bilinear interpolation, and bicubic interpolation. Interpolation is the process of using known data values to estimate unknown data values, and various interpolation techniques are often used in the atmospheric sciences as well. More precisely, interpolation is the process of determining the values of a function at positions lying between its samples. It achieves this by fitting a continuous function through the discrete input samples, which permits input values to be evaluated at arbitrary positions in the input, not just those defined at the sample points. While sampling generates an infinite-bandwidth signal from one that is band limited, interpolation plays the opposite role: it reduces the bandwidth of a signal by applying a low-pass filter to the discrete signal. That is, interpolation reconstructs the signal lost in the sampling process by smoothing the data samples with an interpolation function. The process of interpolation is one of the fundamental operations in image processing, and the image quality depends heavily on the interpolation technique used. Interpolation techniques are divided into two categories, deterministic and statistical. The difference is that deterministic interpolation techniques assume a certain variability between the sample points, such as linearity in the case of linear interpolation, whereas statistical interpolation methods approximate the signal by minimizing the estimation error. This approximation process may result in original sample values not being replicated. Since statistical methods are computationally inefficient, only deterministic techniques are discussed here.


Common Interpolation Methods: Photo editing software generally offers a few different interpolation methods for calculating new pixels when an image is upsampled. Here are descriptions of the three methods available in Photoshop. If you don't use Photoshop, your software probably offers similar options, although it may use slightly different terminology.

Nearest Neighbor does not use true interpolation. It simply takes the value of the nearest neighboring pixel and copies it into the new pixels without averaging, which is what produces the jaggies or stair-step effect.

Bilinear is faster than bicubic, but does a poorer job. Both bicubic and bilinear interpolation result in a blurred image, especially when upsampling.

Bicubic is the slowest but produces the best estimation of new pixel values.
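If MATLAB's Image Processing Toolbox is available, the same three options can be compared directly with imresize; a small sketch (the test image and scale factor are arbitrary choices):

% Compare nearest-neighbor, bilinear and bicubic upsampling in MATLAB.
I = im2double(imread('cameraman.tif'));   % any grayscale test image
s = 4;                                    % upsampling factor

In = imresize(I, s, 'nearest');   % blocky, stair-step edges
Ib = imresize(I, s, 'bilinear');  % smoother, but blurred
Ic = imresize(I, s, 'bicubic');   % best estimate of the new pixel values

figure;
subplot(1,3,1); imshow(In); title('Nearest neighbor');
subplot(1,3,2); imshow(Ib); title('Bilinear');
subplot(1,3,3); imshow(Ic); title('Bicubic');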

2.1 Nearest neighbor interpolation


Nearest-neighbor interpolation (also known as proximal interpolation or, in some contexts, point sampling) is a simple method of multivariate interpolation in one or more dimensions. Interpolation is the problem of approximating the value at a non-given point in some space when the values of points around (neighboring) that point are given. The nearest neighbor algorithm selects the value of the nearest point and does not consider the values of the other neighboring points at all, yielding a piecewise-constant interpolant. The algorithm is very simple to implement and is commonly used (usually along with mipmapping) in real-time 3D rendering to select color values for a textured surface.


Example of Nearest-neighbor interpolation

2.2 Bilinear interpolation:


In computer vision and image processing, bilinear interpolation is one of the basic resampling techniques. In texture mapping, it is also known as bilinear filtering or bilinear texture mapping, and it can be used to produce a reasonably realistic image. An algorithm is used to map a screen pixel location to a corresponding point on the texture map. A weighted average of the attributes (color, alpha, etc.) of the four surrounding pixels is computed and applied to the screen pixel. This process is repeated for each pixel forming the object being textured. When an image needs to be scaled up, each pixel of the original image needs to be moved in a certain direction based on the scale constant. However, when scaling up an image by a non-integral scale factor, there are pixels (i.e., holes) that are not assigned appropriate pixel values. In this case, those holes should be assigned appropriate RGB or grayscale values so that the output image does not have non-valued pixels. Bilinear interpolation can be used where perfect image transformation with pixel matching is impossible, so that one can calculate and assign appropriate intensity values to pixels. Unlike other interpolation techniques such as nearest neighbor interpolation and bicubic interpolation, bilinear interpolation uses only the 4 nearest pixel values, which are located in diagonal directions from a given pixel, in order to find the appropriate color intensity values of that pixel.


Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel's computed location. It then takes a weighted average of these 4 pixels to arrive at its final, interpolated value. The weight on each of the 4 pixel values is based on the computed pixel's distance (in 2D space) from each of the known points.
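Written out explicitly (a standard formulation of the weighted average just described), for a target location with fractional offsets dx and dy (both between 0 and 1) inside its 2x2 neighborhood of known pixels, where P_ij denotes the known pixel offset by i in x and j in y,

\[ P(dx, dy) = (1-dx)(1-dy)\,P_{00} + dx\,(1-dy)\,P_{10} + (1-dx)\,dy\,P_{01} + dx\,dy\,P_{11} \]

Each weight is the area of the rectangle opposite the corresponding known pixel, so pixels closer to the computed location contribute more.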

Example of bilinear interpolation.

In the bilinear interpolation technique, the derivatives of the interpolated surface are not continuous across the square boundaries. This algorithm reduces some of the visual distortion caused by resizing an image to a non-integral zoom factor, as opposed to nearest neighbor interpolation, which will make some pixels appear larger than others in the resized image. Bilinear interpolation tends, however, to produce a greater number of interpolation artifacts (such as blurring and edge halos) than more computationally demanding techniques such as bicubic interpolation.

2.3 Bicubic interpolation


In mathematics, bicubic interpolation is an extension of cubic interpolation for interpolating data points on a two-dimensional regular grid. The interpolated surface is smoother than the corresponding surfaces obtained by bilinear interpolation or nearest-neighbor interpolation. Bicubic interpolation can be accomplished using Lagrange polynomials, cubic splines, or the cubic convolution algorithm.


In image processing, bicubic interpolation is often chosen over bilinear interpolation or nearest neighbor in image resampling, when speed is not an issue. Images resampled with bicubic interpolation are smoother and have fewer interpolation artifacts.

Example of bicubic interpolation: bicubic interpolation on a square consisting of 9 unit squares patched together, as per MATLAB's implementation. Colour indicates function value; the black dots are the locations of the prescribed data being interpolated. Note how the colour samples are not radially symmetric.


CHAPTER 3 WAVELET ANALYSIS


Wavelet analysis is similar to Fourier analysis in the sense that it breaks a signal down into its constituent parts for analysis. Whereas the Fourier transform breaks the signal into a series of sine waves of different frequencies, the wavelet transform breaks the signal into its "wavelets", scaled and shifted versions of the "mother wavelet". There are, however, some very distinct differences, as is evident in Figure 1, which compares a sine wave to the wavelet. In comparison to the sine wave, which is smooth and of infinite length, the wavelet is irregular in shape and compactly supported. It is these properties of being irregular in shape and compactly supported that make wavelets an ideal tool for analysing signals of a non-stationary nature. Their irregular shape lends them to analysing signals with discontinuities or sharp changes, while their compactly supported nature enables temporal localization of a signal's features.

Fig. 1. Demonstrations of (a) a wave and (b) a wavelet.

When analysing signals of a non-stationary nature, it is often beneficial to be able to acquire a correlation between the time and frequency domains of a signal. The Fourier transform provides information about the frequency domain; however, time-localized information is essentially lost in the process. The problem with this is the inability to associate features in the frequency domain with their location in time, as an alteration in
the frequency spectrum will result in changes throughout the time domain. In contrast to the Fourier transform, the wavelet transform allows exceptional localization in both the time domain via translations of the mother wavelet, and in the scale (frequency) domain.

3.1 The short-time Fourier transform


There is only a minor difference between the STFT and the FT. In the STFT, the signal is divided into segments small enough that each segment (portion) of the signal can be assumed to be stationary. For this purpose, a window function "w" is chosen. The width of this window must be equal to the segment of the signal over which its stationarity is valid. This window function is first located at the very beginning of the signal, that is, at t=0. Let's suppose that the width of the window is "T" seconds. At this time instant (t=0), the window function will overlap with the first T/2 seconds of the signal (I will assume that all time units are in seconds). The window function and the signal are then multiplied. By doing this, only the first T/2 seconds of the signal is chosen, with the appropriate weighting of the window (if the window is a rectangle with amplitude "1", then the product will be equal to the signal). This product is then assumed to be just another signal, whose FT is to be taken. In other words, the FT of this product is taken, just as one would take the FT of any signal. The result of this transformation is the FT of the first T/2 seconds of the signal. If this portion of the signal is stationary, as assumed, then there will be no problem and the obtained result will be a true frequency representation of the first T/2 seconds of the signal. The next step is to shift this window (by some t1 seconds) to a new location, multiply it with the signal, and take the FT of the product. This procedure is followed until the end of the signal is reached, shifting the window in steps of t1 seconds.


The following definition of the STFT summarizes all the above explanations in one line:

\[ \mathrm{STFT}_x^{(w)}(t', f) = \int_t \big[ x(t)\, w^{*}(t - t') \big]\, e^{-j 2\pi f t}\, dt \]

x(t) is the signal itself, w(t) is the window function, and * denotes the complex conjugate. As can be seen from the equation, the STFT of the signal is nothing but the FT of the signal multiplied by a window function. The problem with the STFT has its roots in what is known as the Heisenberg uncertainty principle. This principle, originally applied to the momentum and location of moving particles, can be applied to the time-frequency information of a signal. Simply stated, it says that one cannot know the exact time-frequency representation of a signal, i.e., one cannot know what spectral components exist at what instants of time. What one can know is the time intervals in which a certain band of frequencies exists, which is a resolution problem.
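The procedure above can be implemented directly with core MATLAB operations; the sketch below slides a window along a test signal whose frequency jumps halfway through and takes the FT of each windowed segment (the signal, window length and hop size are illustrative choices, not values used elsewhere in this work).

% Direct implementation of the STFT procedure described above: slide a window
% along the signal, multiply, and take the FT of each windowed segment.
fs = 1000;                                   % sampling frequency (Hz)
t  = 0:1/fs:2;
x  = sin(2*pi*50*t).*(t < 1) + sin(2*pi*250*t).*(t >= 1);   % 50 Hz, then 250 Hz

wlen = 128;                                  % window width "T" in samples
hop  = 32;                                   % shift "t1" between windows
w    = 0.54 - 0.46*cos(2*pi*(0:wlen-1)/(wlen-1));           % Hamming window

nFrames = floor((length(x) - wlen)/hop) + 1;
S = zeros(wlen, nFrames);
for k = 1:nFrames
    seg     = x((k-1)*hop + (1:wlen)) .* w;  % window the local segment
    S(:, k) = fft(seg).';                    % FT of the windowed segment
end

% Magnitude of the positive-frequency half gives the time-frequency picture:
% the 50 Hz component shows up in the first second, the 250 Hz in the second.
imagesc((0:nFrames-1)*hop/fs, (0:wlen/2-1)*fs/wlen, abs(S(1:wlen/2, :)));
axis xy; xlabel('Time (s)'); ylabel('Frequency (Hz)');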

3.2 THE WAVELET TRANSFORM


The wavelet transform is a transform of this type: it provides a time-frequency representation. (There are other transforms which give this information too, such as the short time Fourier transform, Wigner distributions, etc.) Oftentimes a particular spectral component occurring at some instant can be of particular interest, and in these cases it may be very beneficial to know the time intervals in which these particular spectral components occur. For example, in EEGs, the latency of an event-related potential is of particular interest (an event-related potential is the response of the brain to a specific stimulus like a flash of light; the latency of this response is the amount of time elapsed between the onset of the stimulus and the response).


The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal. How the wavelet transform works is a different story, and is best explained after the short time Fourier transform (STFT). The WT was developed as an alternative to the STFT. To make a long story short, we pass the time-domain signal through various high pass and low pass filters, which filter out either the high frequency or the low frequency portions of the signal. This procedure is repeated, each time removing from the signal some portion corresponding to some band of frequencies. Here is how this works: suppose we have a signal which has frequencies up to 1000 Hz. In the first stage we split the signal into two parts by passing it through a high pass and a low pass filter (the filters should satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion corresponding to 0-500 Hz (the low pass portion) and the portion corresponding to 500-1000 Hz (the high pass portion). Then we take either portion (usually the low pass portion) or both, and do the same thing again. This operation is called decomposition. Assuming that we have taken the low pass portion, we now have 3 sets of data, each corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, and 500-1000 Hz. Then we take the low pass portion again and pass it through low and high pass filters; we now have 4 sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal down to a pre-defined level. We then have a bunch of signals which actually represent the same signal, but each corresponding to a different frequency band. We know which signal corresponds to which frequency band, and if we put all of them together and plot them on a 3-D graph, we will have time on one axis, frequency on the second and amplitude on the third axis. This will show us which frequencies exist at which time (there is an issue, called the "uncertainty principle", which states that we cannot exactly know what frequency
exists at what time instant, but we can only know what frequency bands exist at what time intervals). The uncertainty principle, originally found and formulated by Heisenberg, states that the momentum and the position of a moving particle cannot be known simultaneously. This applies to our subject as follows: the frequency and time information of a signal at some certain point in the time-frequency plane cannot be known. In other words, we cannot know what spectral component exists at any given time instant. The best we can do is to investigate what spectral components exist at any given interval of time. This is a problem of resolution, and it is the main reason why researchers have switched from the STFT to the WT. The STFT gives a fixed resolution at all times, whereas the WT gives a variable resolution as follows: higher frequencies are better resolved in time, and lower frequencies are better resolved in frequency. This means that a certain high frequency component can be located better in time (with less relative error) than a low frequency component. On the contrary, a low frequency component can be located better in frequency compared to a high frequency component.

Note, however, that the frequency axis in these plots is labeled as scale. The concept of scale will be made clearer in the subsequent sections, but it should be noted at this time that scale is the inverse of frequency: high scales correspond to low frequencies, and low scales correspond to high frequencies. Consequently, the little peak in the plot corresponds to the high frequency components in the signal, and the large peak corresponds to the low frequency components (which appear before the high frequency components in time) in the signal. The translation and dilation operations applied to the mother wavelet are performed to calculate the wavelet coefficients, which represent the correlation between the wavelet and a localized section of the signal. The wavelet coefficients are calculated for each wavelet segment, giving a time-scale function relating the wavelet's correlation
to the signal. This process of translation and dilation of the mother wavelet is depicted below in Figure 2.

Fig. 2. The scaling and shifting process of the DWT.

It should be noted that the process examined here is the DWT, where the signal is broken into dyadic blocks (shifting and scaling are based on a power of 2). The continuous wavelet transform (CWT) still uses discretely sampled data; however, the shifting process is a smooth operation across the length of the sampled data, and the scaling can be defined from the minimum (original signal scale) to a maximum chosen by the user, thus giving a much finer resolution. The trade-off for this improved resolution is the increased computation time and memory required to calculate the wavelet
coefficients. The effect of this shifting and scaling process is to produce a time-scale representation, as depicted in Figure 4. As can be seen from a comparison with the STFT, which employs a windowed FFT of fixed time and frequency resolution, the wavelet transform offers superior temporal resolution of the high frequency components and superior scale (frequency) resolution of the low frequency components. This is often beneficial as it allows the low frequency components, which usually give a signal its main characteristics or identity, to be distinguished from one another in terms of their frequency content, while providing an excellent temporal resolution for the high frequency components which add the nuances to the signal's behavior.

FIGURE 3. STFT AND DWT BREAKDOWN OF A SIGNAL

The dilation function of the discrete wavelet transform can be represented as a tree of low and high pass filters, with each step transforming the low pass filter, as shown in Figure 5. The original signal is successively decomposed into components of lower resolution, while the high frequency components are not analyzed any further. The maximum number of dilations that can be performed is dependent on the input size of the data to be
analyzed, with 2^N data samples enabling the breakdown of the signal into N discrete levels using the discrete wavelet transform.

FIGURE 4 FILTER BANK REPRESENTATION OF THE DWT DILATIONS

The versatility and power of the DWT can be significantly increased by using its generalised form, wavelet packet analysis (DWPA). Unlike the DWT, which only decomposes the low frequency components (approximations), DWPA utilises both the low frequency components (approximations) and the high frequency components (details). Figure 6 shows the wavelet decomposition tree.

FIGURE 5 DWPA TREE DECOMPOSITION


DWPA allows the signal to be decomposed into any of 2^N different signal encoding schemes. Choosing the optimum scheme for a particular signal is usually achieved by determining the best basis for the tree, normally through an entropy based criterion [MISI96]. The same noisy chirp depicted in Figure 7 is shown below in terms of its best-basis wavelet packet analysis. As is clearly evident, wavelet packet analysis offers superior resolution and clarification of details about the signal.
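In MATLAB's Wavelet Toolbox, this best-basis selection is available through wpdec and besttree; a minimal sketch (the test signal, wavelet and depth are arbitrary illustrative choices):

% Sketch of wavelet packet analysis with best-basis selection.
fs = 1000;  t = 0:1/fs:1;
x  = sin(2*pi*40*t) + 0.5*sin(2*pi*180*t) + 0.3*randn(size(t));  % noisy test signal

T  = wpdec(x, 3, 'db4');   % full 3-level wavelet packet decomposition
BT = besttree(T);          % prune to the minimum-entropy (best) basis
plot(BT);                  % inspect which packet nodes the best basis keeps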

3.3 Wavelet Families


Several families of wavelets have proven to be especially useful. Some wavelet families are: Haar, Daubechies, Biorthogonal, Coiflets, Morlet, Mexican hat, Meyer, and other real wavelets.

Haar
Any discussion of wavelets begins with the Haar wavelet, the first and simplest. The Haar wavelet is discontinuous and resembles a step function. It represents the same wavelet as Daubechies db1.


Daubechies
Ingrid Daubechies, one of the brightest stars in the world of wavelet research, invented what are called compactly supported orthonormal wavelets thus making discrete wavelet analysis practicable. The names of the Daubechies family wavelets are written dbN, where N is the order, and db the surname of the wavelet. The db1 wavelet, as mentioned above, is the same as Haar wavelet. Here are the wavelet functions psi of the next nine members of the family:

Biorthogonal
This family of wavelets exhibits the property of linear phase, which is needed for signal and image reconstruction. By using two wavelets, one for decomposition (on the left side) and the other for reconstruction (on the right side) instead of the same single one, interesting properties are derived.


Coiflets
Coiflets were built by I. Daubechies at the request of R. Coifman. The wavelet function has 2N moments equal to 0 and the scaling function has 2N-1 moments equal to 0. The two functions have a support of length 6N-1. You can obtain a survey of the main properties of this family by typing waveinfo('coif') from the MATLAB command line.
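For instance, the filters and the scaling/wavelet functions of a member of this family can be inspected as follows (a small sketch; 'coif2' and the number of refinement iterations are arbitrary choices):

% Inspecting a wavelet family from the MATLAB command line.
waveinfo('coif');                           % survey of the coiflet family
[LoD, HiD, LoR, HiR] = wfilters('coif2');   % decomposition/reconstruction filters
[phi, psi, xval]     = wavefun('coif2', 8); % scaling and wavelet functions
plot(xval, psi); title('coif2 wavelet function');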

Morlet
This wavelet has no scaling function, but is explicit.

Mexican Hat
This wavelet has no scaling function and is derived from a function that is proportional to the second derivative function of the Gaussian probability density function.


Meyer
The Meyer wavelet and scaling function are defined in the frequency domain.

Other Real Wavelets


Some other real wavelets are available in the toolbox: the reverse biorthogonal wavelets, the Gaussian derivatives family, and an FIR-based approximation of the Meyer wavelet.


There is a wide range of applications for Wavelet Transforms. They are applied in different fields ranging from signal processing to biometrics, and the list is still growing. One of the prominent applications is in the FBI fingerprint compression standard. Wavelet Transforms are used to compress the fingerprint pictures for storage in their data bank. The previously chosen Discrete Cosine Transform (DCT) did not perform well at high compression ratios. It produced severe blocking effects which made it impossible to follow the ridge lines in the fingerprints after reconstruction. This does not happen with the Wavelet Transform due to its property of retaining the details present in the data. In the DWT, the most prominent information in the signal appears in high amplitudes and the less prominent information appears in very low amplitudes. Data compression can be achieved by discarding these low amplitudes. The wavelet transform enables high compression ratios with good quality of reconstruction. At present, the application of wavelets for image compression is one of the hottest areas of research. Recently, the Wavelet Transform was also chosen for the JPEG 2000 compression standard.

Input Signal -> Wavelet Transform -> Processing -> Inverse Wavelet Transform -> Output Signal

Fig. 6 Signal processing application using Wavelet Transform.

Fig. 6 shows the general steps followed in a signal processing application. Processing may involve compression, encoding, denoising, etc. The processed signal is either stored or transmitted. For most compression applications, processing involves quantization and entropy coding to yield a compressed image. During this process, all the wavelet coefficients that are below a chosen threshold are discarded. These discarded coefficients are replaced with zeros during reconstruction at the other end. To reconstruct the signal, the entropy code is decoded, the coefficients are dequantized, and the result is finally inverse wavelet transformed.
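The threshold-and-reconstruct step just described can be sketched with Wavelet Toolbox functions; the signal, wavelet, level and threshold below are illustrative choices, not the settings of any particular standard:

% Discard small wavelet coefficients, then reconstruct from the survivors.
load noisdopp;                      % example noisy signal shipped with the toolbox
x = noisdopp;

[C, L] = wavedec(x, 4, 'db4');      % 4-level DWT of the signal
thr    = 0.5;                       % threshold (normally chosen adaptively)
Ct     = wthresh(C, 'h', thr);      % hard-threshold: zero out small coefficients
xr     = waverec(Ct, L, 'db4');     % reconstruct from the surviving coefficients

kept = nnz(Ct) / numel(C)           % fraction of coefficients that survived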

Wavelets also find application in speech compression, which reduces transmission time in mobile applications. They are used in denoising, edge detection, feature extraction, speech recognition, echo cancellation and other tasks. They are very promising for real time audio and video compression applications. Wavelets also have numerous applications in digital communications; Orthogonal Frequency Division Multiplexing (OFDM) is one of them. Wavelets are used in biomedical imaging; for example, the ECG signals measured from the heart are analyzed using wavelets or compressed for storage. The popularity of the Wavelet Transform is growing because of its ability to reduce distortion in the reconstructed signal while retaining all the significant features present in the signal.


CHAPTER 4 THE DISCRETE WAVELET TRANSFORM


4.1 INTRODUCTION

The transform of a signal is just another form of representing the signal; it does not change the information content present in the signal. The Wavelet Transform provides a time-frequency representation of the signal. It was developed to overcome the shortcoming of the Short Time Fourier Transform (STFT), which can also be used to analyze non-stationary signals. While the STFT gives a constant resolution at all frequencies, the Wavelet Transform uses a multi-resolution technique by which different frequencies are analyzed with different resolutions. A wave is an oscillating function of time or space and is periodic. In contrast, wavelets are localized waves. They have their energy concentrated in time or space and are suited to the analysis of transient signals. While the Fourier Transform and STFT use waves to analyze signals, the Wavelet Transform uses wavelets of finite energy.

Fig. 7 Demonstrations of (a) a Wave and (b) a Wavelet.

The wavelet analysis is done in a way similar to the STFT analysis. The signal to be analyzed is multiplied with a wavelet function, just as it is multiplied with a window function in the STFT, and then the transform is computed for each segment generated. However, unlike the STFT, in the Wavelet Transform the width of the wavelet function changes
with each spectral component. The Wavelet Transform, at high frequencies, gives good time resolution and poor frequency resolution, while at low frequencies, the Wavelet Transform gives good frequency resolution and poor time resolution.

4.2 The Continuous Wavelet Transform and the Wavelet Series


The Continuous Wavelet Transform (CWT) is given by the equation below, where x(t) is the signal to be analyzed and ψ(t) is the mother wavelet or basis function. All the wavelet functions used in the transformation are derived from the mother wavelet through translation (shifting) and scaling (dilation or compression):

\[ X_{WT}(\tau, s) = \frac{1}{\sqrt{|s|}} \int x(t)\, \psi^{*}\!\left(\frac{t-\tau}{s}\right) dt \]

The mother wavelet used to generate all the basis functions is designed based on some desired characteristics associated with that function. The translation parameter τ relates to the location of the wavelet function as it is shifted through the signal; thus, it corresponds to the time information in the Wavelet Transform. The scale parameter s is defined as |1/frequency| and corresponds to frequency information. Scaling either dilates (expands) or compresses a signal. Large scales (low frequencies) dilate the signal and provide global information about the signal, while small scales (high frequencies) compress the signal and provide the detailed information hidden in the signal. Notice that the Wavelet Transform merely performs the convolution operation of the signal and the basis function. The above analysis becomes very useful as in most practical applications, high frequencies (low scales) do not last for a long duration, but instead appear as short bursts, while low frequencies (high scales) usually last for the entire duration of the signal. The Wavelet Series is obtained by discretizing the CWT. This aids in the computation of the CWT using computers and is obtained by sampling the time-scale plane. The sampling rate can be changed with the scale without violating the Nyquist criterion. The Nyquist criterion states that the minimum sampling rate that allows reconstruction of the original signal is 2ω radians, where ω is the highest frequency in
the signal. Therefore, as the scale goes higher (lower frequencies), the sampling rate can be decreased thus reducing the number of computations.

Fig. 8. Continuous Wavelet Transform and Wavelet Series.


4.3 The Discrete Wavelet Transform


The Wavelet Series is just a sampled version of the CWT and its computation may consume a significant amount of time and resources, depending on the resolution required. The Discrete Wavelet Transform (DWT), which is based on sub-band coding, is found to yield a fast computation of the Wavelet Transform. It is easy to implement and reduces the computation time and resources required. The foundations of the DWT go back to 1976, when techniques to decompose discrete time signals were devised [5]. Similar work was done in speech signal coding, which was named sub-band coding. In 1983, a technique similar to sub-band coding was developed which was named pyramidal coding. Later, many improvements were made to these coding schemes, which resulted in efficient multi-resolution analysis schemes.

In CWT, the signals are analyzed using a set of basis functions which relate to each other by simple scaling and translation. In the case of DWT, a time-scale representation of the digital signal is obtained using digital filtering techniques. The signal to be analyzed is passed through filters with different cutoff frequencies at different scales.

At each stage, the signal is passed through a half band lowpass (or highpass) filter and then subsampled by 2. This decomposition halves the time resolution, since only half the number of samples now characterizes the entire signal. However, this operation doubles the frequency resolution, since the frequency band of the signal now spans only half the previous frequency band, effectively reducing the uncertainty in frequency by half. The above procedure, which is also known as subband coding, can be repeated for further decomposition. At every level, the filtering and subsampling will result in half the number of samples (and hence half the time resolution) and half the frequency band spanned (and hence double the frequency resolution). Figure 9 below illustrates this procedure, where x[n] is the original signal
to be decomposed, and h[n] and g[n] are lowpass and highpass filters, respectively. The bandwidth of the signal at every level is marked on the figure as "f".

Figure 9. The Subband Coding Algorithm

The original signal x[n] has 512 sample points, spanning a frequency band of zero to π rad/s. At the first decomposition level, the signal is passed through the highpass and
lowpass filters, followed by subsampling by 2. The output of the highpass filter has 256 points (hence half the time resolution), but it only spans the frequencies π/2 to π rad/s (hence double the frequency resolution). These 256 samples constitute the first level of DWT coefficients. The output of the lowpass filter also has 256 samples, but it spans the other half of the frequency band, from 0 to π/2 rad/s. This signal is then passed through the same lowpass and highpass filters for further decomposition. The output of the second lowpass filter followed by subsampling has 128 samples spanning a frequency band of 0 to π/4 rad/s, and the output of the second highpass filter followed by subsampling has 128 samples spanning a frequency band of π/4 to π/2 rad/s. The second highpass filtered signal constitutes the second level of DWT coefficients. This signal has half the time resolution, but twice the frequency resolution, of the first level signal. In other words, the time resolution has decreased by a factor of 4, and the frequency resolution has increased by a factor of 4, compared to the original signal. The lowpass filter output is then filtered once again for further decomposition. This process continues until two samples are left. For this specific example there would be 8 levels of decomposition, each having half the number of samples of the previous level. The DWT of the original signal is then obtained by concatenating all coefficients starting from the last level of decomposition (the remaining two samples, in this case). The DWT will then have the same number of coefficients as the original signal.
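This bookkeeping can be checked directly in MATLAB (a small sketch assuming the Wavelet Toolbox; the Haar wavelet is chosen so that the coefficient counts match the idealized example exactly):

% A 512-sample signal decomposed over 8 levels: detail vectors of length
% 256, 128, ..., 2 plus a final 2-sample approximation, 512 coefficients total.
x      = randn(1, 512);             % any 512-sample signal
[C, L] = wavedec(x, 8, 'haar');     % 8-level DWT with the Haar wavelet

disp(L)           % lengths of the approximation, each detail level, and the signal
disp(length(C))   % total number of DWT coefficients (equals 512)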

4.4 Multi resolution analysis:


Although the time and frequency resolution problems are the result of a physical phenomenon (the Heisenberg uncertainty principle) and exist regardless of the transform used, it is possible to analyze any signal by using an alternative approach called multi resolution analysis (MRA). MRA, as implied by its name, analyzes the signal at different frequencies with different resolutions; every spectral component is not resolved equally, as was the case in the STFT. MRA is designed to give good time resolution and poor frequency resolution at high frequencies, and good frequency resolution and poor time resolution at low
frequencies. This approach makes sense especially when the signal at hand has high frequency components for short durations and low frequency components for long durations. Fortunately, the signals that are encountered in practical applications are often of this type. For example, Figure 10 below shows a signal of this type. It has a relatively low frequency component throughout the entire signal and relatively high frequency components for a short duration somewhere around the middle.

Figure 10 Multi resolution analysis signal

4.5 DWT and Filter Banks


4.5.1 Multi-Resolution Analysis using Filter Banks

Filters are one of the most widely used signal processing functions. Wavelets can be realized by iteration of filters with rescaling. The resolution of the signal, which is a measure of the amount of detail information in the signal, is determined by the filtering operations, and the scale is determined by upsampling and downsampling (subsampling)
operations. The DWT is computed by successive lowpass and highpass filtering of the discrete time-domain signal, as shown in Figure 11. This is called the Mallat algorithm or Mallat-tree decomposition. Its significance is in the manner in which it connects continuous-time multiresolution analysis to discrete-time filters. In the figure, the signal is denoted by the sequence x[n], where n is an integer. The low pass filter is denoted by G0 while the high pass filter is denoted by H0. At each level, the high pass filter produces detail information, d[n], while the low pass filter associated with the scaling function produces coarse approximations, a[n].

Figure 11 Three-level wavelet decomposition tree.

At each decomposition level, the half band filters produce signals spanning only half the frequency band. This doubles the frequency resolution, as the uncertainty in frequency is reduced by half. In accordance with Nyquist's rule, if the original signal has a highest frequency of ω, which requires a sampling frequency of 2ω radians, then it now has a highest frequency of ω/2 radians. It can now be sampled at a frequency of ω radians, thus discarding half the samples with no loss of information. This decimation by 2 halves the time resolution, as the entire signal is now represented by only half the number of samples. Thus, while the half band low pass filtering removes half of the frequencies and thus halves the resolution, the decimation by 2 doubles the scale. With this approach, the time resolution becomes arbitrarily good at high frequencies, while the frequency resolution becomes arbitrarily good at low frequencies.

Discrete And Stationary Wavelet Decomposition For Image Resolution Enhancement

The time-frequency plane is thus resolved in a non-uniform manner: fine time resolution at high frequencies and fine frequency resolution at low frequencies. The filtering and decimation process is continued until the desired level is reached. The maximum number of levels depends on the length of the signal. The DWT of the original signal is then obtained by concatenating all the coefficients, a[n] and d[n], starting from the last level of decomposition.

Fig. 12 Three-level wavelet reconstruction tree.

Fig. 12 shows the reconstruction of the original signal from the wavelet coefficients. Basically, the reconstruction is the reverse process of decomposition. The approximation and detail coefficients at every level are upsampled by two, passed through the low pass and high pass synthesis filters and then added. This process is continued through the same number of levels as in the decomposition process to obtain the original signal. The Mallat algorithm works equally well if the analysis filters, G0 and H0, are exchanged with the synthesis filters, G1 and H1.

4.5.2 Conditions for Perfect Reconstruction

In most Wavelet Transform applications, it is required that the original signal be synthesized from the wavelet coefficients. To achieve perfect reconstruction, the analysis and synthesis filters have to satisfy certain conditions. Let G0(z) and G1(z) be the low pass analysis and synthesis filters, respectively, and H0(z) and H1(z) the high pass
analysis and synthesis filters, respectively. Then the filters have to satisfy the following two conditions:

\[ G_0(-z)\,G_1(z) + H_0(-z)\,H_1(z) = 0 \]
\[ G_0(z)\,G_1(z) + H_0(z)\,H_1(z) = 2z^{-d} \]

The first condition implies that the reconstruction is aliasing-free, and the second condition implies that there is no amplitude distortion (the overall transfer function is simply a delay). It can be observed that the perfect reconstruction condition does not change if we switch the analysis and synthesis filters. There are a number of filters which satisfy these conditions, but not all of them give accurate Wavelet Transforms, especially when the filter coefficients are quantized. The accuracy of the Wavelet Transform can be determined after reconstruction by calculating the Signal to Noise Ratio (SNR) of the signal. Some applications, like pattern recognition, do not need reconstruction, and in such applications the above conditions need not apply.

4.5.3 Classification of wavelets

We can classify wavelets into two classes: (a) orthogonal and (b) biorthogonal. Based on the application, either of them can be used.

(a) Features of orthogonal wavelet filter banks

The coefficients of orthogonal filters are real numbers. The filters are of the same length and are not symmetric. The low pass filter G0 and the high pass filter H0 are related to each other by

\[ H_0(z) = z^{-N} G_0(-z^{-1}) \]

The two filters are alternating flips of each other. The alternating flip automatically gives double-shift orthogonality between the low-pass and high-pass filters [1], i.e., the
scalar product of the filters, for a shift by two, is zero: Σ G[k] H[k-2l] = 0, where k, l ∈ Z [4]. Filters that satisfy this relation are known as Conjugate Mirror Filters (CMF). Perfect reconstruction is possible with the alternating flip. Also, for perfect reconstruction, the synthesis filters are identical to the analysis filters except for a time reversal. Orthogonal filters offer a high number of vanishing moments. This property is useful in many signal and image processing applications. They have a regular structure, which leads to easy implementation and a scalable architecture.
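These relations can be checked numerically with the filters returned by MATLAB's wfilters; a small sketch (the wavelet 'db2' and the test-signal length are arbitrary choices):

% Numerical check of the perfect-reconstruction conditions for an orthogonal
% filter bank, using the analysis/synthesis filters of 'db2'.
[LoD, HiD, LoR, HiR] = wfilters('db2');

% Product filter G0(z)G1(z) + H0(z)H1(z): under perfect reconstruction this is
% 2*z^(-d), i.e. a single impulse of height 2 with all other entries ~0.
p = conv(LoD, LoR) + conv(HiD, HiR);
disp(p)

% Round-trip check: decompose and reconstruct a test signal.
x        = randn(1, 256);
[cA, cD] = dwt(x, 'db2');
xr       = idwt(cA, cD, 'db2');
max(abs(x - xr(1:length(x))))   % ~1e-15: reconstruction is essentially exact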

(b) Features of biorthogonal wavelet filter banks

In the case of the biorthogonal wavelet filters, the low pass and the high pass filters do not have the same length. The low pass filter is always symmetric, while the high pass filter could be either symmetric or anti-symmetric. The coefficients of the filters are either real numbers or integers. For perfect reconstruction, a biorthogonal filter bank has all odd length or all even length filters. The two analysis filters can be symmetric with odd length, or one symmetric and the other anti-symmetric with even length. Also, the two sets of analysis and synthesis filters must be dual. The linear phase biorthogonal filters are the most popular filters for data compression applications.


CHAPTER 5 STATIONARY WAVELET TRANSFORMS


The discrete stationary wavelet transform (SWT) is an undecimated version of the DWT. The main idea is to average several detail coefficients which are obtained by decomposition of the input signal without down-sampling. This approach can be interpreted as a repeated application of the standard DWT method for different time shifts. The stationary wavelet transform (SWT) is similar to the DWT except that the signal is never subsampled; instead, the filters are upsampled at each level of decomposition.

Fig. 13. A 3-level SWT filter bank

Each level's filters are up-sampled versions of the previous.

SWT filters

The SWT is an inherently redundant scheme, as each set of coefficients contains the same number of samples as the input: a decomposition of N levels produces N detail sub-bands and one approximation sub-band, each the same size as the input signal.


Applications
A few applications of the SWT are signal denoising and pattern recognition.
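A one-level SWT of a signal illustrates the redundancy just described; a minimal sketch assuming the Wavelet Toolbox functions swt and iswt (swt requires the signal length to be divisible by 2^N):

% One-level stationary wavelet transform: no subsampling, so every output
% sub-band has the same length as the input.
x          = randn(1, 256);           % length divisible by 2^1
[swa, swd] = swt(x, 1, 'db2');        % approximation and detail, both length 256

size(swa), size(swd)                  % both 1x256: the redundancy described above
xr = iswt(swa, swd, 'db2');           % inverse SWT recovers the signal
max(abs(x - xr))                      % ~1e-15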

5.1 One-Stage Filtering: Approximations and Details


For many signals, the low-frequency content is the most important part. It is what gives the signal its identity. The high-frequency content, on the other hand, imparts flavor or nuance. Consider the human voice. If you remove the high-frequency components, the voice sounds different but you can still tell what's being said. However, if you remove enough of the low-frequency components, you hear gibberish. In wavelet analysis, we often speak of approximations and details. The approximations are the high-scale, low-frequency components of the signal. The details are the low-scale, high-frequency components. The filtering process, at its most basic level, looks like this:

Figure 14: Filtering Process


The original signal S passes through two complementary filters and emerges as two signals. Unfortunately, if we actually perform this operation on a real digital signal, we wind up with twice as much data as we started with. Suppose, for instance, that the original signal S consists of 1000 samples of data. Then the resulting signals will each have 1000 samples, for a total of 2000. These signals A and D are interesting, but we get 2000 values instead of the 1000 we had. There exists a more subtle way to perform the decomposition using wavelets. By looking carefully at the computation, we may keep only one point out of two in each of the two filtered signals and still get the complete information. This is the notion of downsampling. We produce two sequences called cA and cD.

Figure 15: Sampling

The process on the right, which includes downsampling, produces the DWT coefficients. To gain a better appreciation of this process, let's perform a one-stage discrete wavelet transform of a signal. Our signal will be a pure sinusoid with high-frequency noise added to it. Here is our schematic diagram with real signals inserted into it:


Figure 16: Schematic Diagram

5.2 Multiple-Level Decomposition:


The decomposition process can be iterated, with successive approximations being decomposed in turn, so that one signal is broken down into many lower resolution components. This is called the wavelet decomposition tree.

Figure 17: Wavelet Decomposition Tree

Looking at a signal's wavelet decomposition tree can yield valuable information.


Figure 18: Wavelet Decomposition Tree

Number of Levels:
Since the analysis process is iterative, in theory it can be continued indefinitely. In reality, the decomposition can proceed only until the individual details consist of a single sample or pixel. In practice, you'll select a suitable number of levels based on the nature of the signal, or on a suitable criterion such as entropy.

5.3 Wavelet Reconstruction:


The discrete wavelet transform can be used to analyze, or decompose, signals and images. This process is called decomposition or analysis. The other half of the story is how those components can be assembled back into the original signal without loss of information. This process is called reconstruction, or synthesis. The mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). To synthesize a signal using Wavelet Toolbox software, we reconstruct it from the wavelet coefficients:


Where wavelet analysis involves filtering and downsampling, the wavelet reconstruction process consists of upsampling and filtering. Upsampling is the process of lengthening a signal component by inserting zeros between samples:

The toolbox includes commands, like idwt and waverec, that perform single-level or multilevel reconstruction, respectively, on the components of one-dimensional signals. These commands have their two-dimensional analogs, idwt2 and waverec2.
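For example, a multilevel 2-D decomposition and its reconstruction can be written as follows (a small sketch; the image, level and wavelet are arbitrary choices):

% Multilevel 2-D decomposition with wavedec2 and reconstruction with waverec2.
X      = im2double(imread('cameraman.tif'));
[C, S] = wavedec2(X, 2, 'db1');      % 2-level 2-D wavelet decomposition
Xrec   = waverec2(C, S, 'db1');      % multilevel reconstruction

max(abs(X(:) - Xrec(:)))             % ~1e-15: reconstruction is essentially exact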

5.4 Reconstruction Filters:


The filtering part of the reconstruction process also bears some discussion, because it is the choice of filters that is crucial in achieving perfect reconstruction of the original signal. The downsampling of the signal components performed during the decomposition phase introduces a distortion called aliasing. It turns out that by carefully choosing filters for the decomposition and reconstruction phases that are closely related (but not identical), we can cancel out the effects of aliasing. The low- and high-pass decomposition filters (L and H), together with their associated reconstruction filters (L' and H'), form a system of what is called quadrature mirror filters:


5.5 Reconstructing Approximations and Details


We have seen that it is possible to reconstruct our original signal from the coefficients of the approximations and details.

It is also possible to reconstruct the approximations and details themselves from their coefficient vectors. As an example, let's consider how we would reconstruct the first-level approximation A1 from the coefficient vector cA1. We pass the coefficient vector cA1 through the same process we used to reconstruct the original signal. However, instead of combining it with the level-one detail cD1, we feed in a vector of zeros in place of the detail coefficients vector:


The process yields a reconstructed approximation A1, which has the same length as the original signal S and which is a real approximation of it. Similarly, we can reconstruct the first-level detail D1, using the analogous process:

The reconstructed details and approximations are true constituents of the original signal. In fact, we find when we combine them that A1 + D1 = S.

Note that the coefficient vectors cA1 and cD1 cannot directly be combined to reproduce the signal, because they were produced by downsampling and are only half the length of the original signal. It is necessary to reconstruct the approximations and details before combining them. Extending this technique to the components of a multilevel analysis, we find that similar relationships hold for all the reconstructed signal constituents. That is, there are several ways to reassemble the original signal, for example S = A1 + D1 = A2 + D2 + D1 = A3 + D3 + D2 + D1.
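The zero-feeding idea described above can be sketched as follows (assuming the signal s and wavelet 'db2' used earlier):

[cA1, cD1] = dwt(s, 'db2');
A1 = idwt(cA1, zeros(size(cA1)), 'db2', length(s));   % zeros replace the detail coefficients
D1 = idwt(zeros(size(cD1)), cD1, 'db2', length(s));   % zeros replace the approximation coefficients
max(abs(s - (A1 + D1)))                               % essentially zero: A1 + D1 reproduces S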


CHAPTER 6
DIGITAL IMAGE PROCESSING
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing and experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches and quickly prototype candidate solutions generally plays a major role in reducing the cost and time required to arrive at a viable system implementation.

An image may be defined as a two-dimensional function f(x, y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y and the amplitude values of f are all finite discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location and value; these elements are called pixels. Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images. There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. This is a limiting and somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing and computer vision.


There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. Low-level processes involve primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images. Mid-level processes on images involve tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images. Finally, higher-level processing involves making sense of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social and economic value.

6.1 Definition of image


An image is represented as a two dimensional function f(x, y) where x and y are spatial co-ordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

6.2 Types of Images

Gray scale image:


A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane. I(x, y) is the intensity of the image at the point (x, y) on the image plane.


I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we can write I: [0, a] × [0, b] → [0, ∞).

Color image
It can be represented by three functions, R(x, y) for red, G(x, y) for green, and B(x, y) for blue. An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Coordinate convention:
The result of sampling and quantization is a matrix of real numbers. We use two principal ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting image has M rows and N columns. We say that the image is of size M x N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the actual values of the physical coordinates when the image was sampled. The following figure shows the coordinate convention. Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments. The coordinate convention used in the toolbox to denote arrays is different from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column.


The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these coordinates as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.

Image as Matrices:
The preceding discussion leads to the following representation for a digitized image function:

           f(0,0)      f(0,1)     ...   f(0,N-1)
           f(1,0)      f(1,1)     ...   f(1,N-1)
f(x, y) =    .           .                 .
             .           .                 .
           f(M-1,0)    f(M-1,1)   ...   f(M-1,N-1)

The right side of this equation is a digital image by definition. Each element of this array is called an image element, picture element, pixel, or pel. The terms image and pixel are used throughout the rest of our discussion to denote a digital image and its elements. A digital image can be represented naturally as a MATLAB matrix:

        f(1,1)   f(1,2)   ...   f(1,N)
        f(2,1)   f(2,2)   ...   f(2,N)
  f =     .        .               .
          .        .               .
        f(M,1)   f(M,2)   ...   f(M,N)

where f(1,1) = f(0,0) (note the use of a monospace font to denote MATLAB quantities). Clearly, the two representations are identical, except for the shift in origin.


The notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector, whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar. Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array, and so on. Variables must begin with a letter and contain only letters, numerals, and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.
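A small illustration of this matrix view (the 6x6 magic square is an arbitrary stand-in for a tiny digital image):

f = magic(6);       % a 6x6 matrix standing in for a small digital image
[M, N] = size(f);   % number of rows and columns
f(6, 2)             % the element in the sixth row and second column, as in the text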

6.3 Reading Images:


Images are read into the MATLAB environment using the function imread, whose syntax is

f = imread('filename')

Format name    Description                           Recognized extensions
TIFF           Tagged Image File Format              .tif, .tiff
JPEG           Joint Photographic Experts Group      .jpg, .jpeg
GIF            Graphics Interchange Format           .gif
BMP            Windows Bitmap                        .bmp
PNG            Portable Network Graphics             .png
XWD            X Window Dump                         .xwd


Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray (see the table above) into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB for suppressing output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB Command Window. When, as in the preceding command line, no path is included in filename, imread reads the file from the current directory and, if that fails, it tries to find the file in the MATLAB search path. The simplest way to read an image from a specified directory is to include a full or relative path to that directory in filename. For example,

>> f = imread('D:\myimages\chestxray.jpg');

reads the image from a folder called myimages on the D: drive, whereas

>> f = imread('.\myimages\chestxray.jpg');

reads the image from the myimages subdirectory of the current working directory. The Current Directory window on the MATLAB desktop toolbar displays MATLAB's current working directory and provides a simple, manual way to change it. The table above lists some of the most popular image/graphics formats supported by imread and imwrite.
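A hedged end-to-end example of reading, inspecting, displaying, and writing an image; the file name chestxray.jpg follows the example above and is assumed to be in the current directory or on the search path:

>> f = imread('chestxray.jpg');    % read the image into array f
>> whos f                          % report its size, bytes, and class
>> imshow(f)                       % display the image
>> imwrite(f, 'chestxray.png');    % write it back out in PNG format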


CHAPTER 7
INTRODUCTION TO MATLAB

MATLAB
MATLAB is a high-performance language for technical computing. It integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. Typical uses include

Math and computation
Algorithm development
Data acquisition
Modeling, simulation, and prototyping
Data analysis, exploration, and visualization
Scientific and engineering graphics
Application development, including graphical user interface building

MATLAB is an interactive system whose basic data element is an array that does not require dimensioning. This allows you to solve many technical computing problems, especially those with matrix and vector formulations, in a fraction of the time it would take to write a program in a scalar noninteractive language such as C or FORTRAN.

The name MATLAB stands for matrix laboratory. MATLAB was originally written to provide easy access to matrix software developed by the LINPACK and EISPACK projects. Today, MATLAB engines incorporate the LAPACK and BLAS libraries, embedding the state of the art in software for matrix computation.

MATLAB has evolved over a period of years with input from many users. In university environments, it is the standard instructional tool for introductory and advanced courses in mathematics, engineering, and science. In industry, MATLAB is the tool of choice for high-productivity research, development, and analysis.

MATLAB features a family of add-on application-specific solutions called toolboxes. Very important to most users of MATLAB, toolboxes allow you to learn and apply specialized technology. Toolboxes are comprehensive collections of MATLAB functions (M-files) that extend the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are available include signal processing, control systems, neural networks, fuzzy logic, wavelets, simulation, and many others.

The MATLAB System:


The MATLAB system consists of five main parts:

Development Environment:
This is the set of tools and facilities that help you use MATLAB functions and files. Many of these tools are graphical user interfaces. It includes the MATLAB desktop and Command Window, a command history, an editor and debugger, and browsers for viewing help, the workspace, files, and the search path.

The MATLAB Mathematical Function:


This is a vast collection of computational algorithms ranging from elementary functions, like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

The MATLAB Language:


This is a high-level matrix/array language with control flow statements, functions, data structures, input/output, and object-oriented programming features. It allows both "programming in the small" to rapidly create quick and dirty throw-away programs, and "programming in the large" to create complete large and complex application programs.
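As a small, hypothetical illustration of these language features (this helper function is not part of MATLAB or of the proposed method), an M-file might look like:

function y = cliptorange(x, lo, hi)
% CLIPTORANGE Limit each element of x to the interval [lo, hi].
y = x;
for k = 1:numel(y)          % control flow: a for loop with an if/elseif branch
    if y(k) < lo
        y(k) = lo;
    elseif y(k) > hi
        y(k) = hi;
    end
end
end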

Graphics:
MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as annotating and printing these graphs. It includes high-level functions for two-dimensional and three-dimensional data visualization, image processing, animation, and presentation graphics. It also includes low-level functions that allow you to fully customize the appearance of graphics as well as to build complete graphical user interfaces on your MATLAB applications.
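A brief sketch of these plotting facilities:

x = 0:0.1:2*pi;
subplot(1, 2, 1); plot(x, sin(x)); title('Two-dimensional line plot')
subplot(1, 2, 2); surf(peaks(25)); title('Three-dimensional surface plot')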

The MATLAB Application Program Interface (API):


This is a library that allows you to write C and Fortran programs that interact with MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling MATLAB as a computational engine, and for reading and writing MAT-files.

MATLAB working environment:

MATLAB desktop:
The MATLAB desktop is the main MATLAB application window. The desktop contains five sub windows: the Command Window, the Workspace Browser, the Current Directory window, the Command History window, and one or more Figure windows, which are shown only when the user displays a graphic.

The Command Window is where the user types MATLAB commands and expressions at the prompt (>>) and where the output of those commands is displayed. MATLAB defines the workspace as the set of variables that the user creates in a work session. The Workspace Browser shows these variables and some information about them. Double-clicking on a variable in the Workspace Browser launches the Array Editor, which can be used to obtain information and, in some instances, edit certain properties of the variable.

The Current Directory tab above the Workspace tab shows the contents of the current directory, whose path is shown in the Current Directory window. For example, in the Windows operating system the path might be as follows: C:\MATLAB\Work, indicating that the directory Work is a subdirectory of the main directory MATLAB, which is installed in drive C. Clicking on the arrow in the Current Directory window shows a list of recently used paths. Clicking on the button to the right of the window allows the user to change the current directory.


MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu on the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory. The Command History window contains a record of the commands a user has entered in the Command Window, including both current and previous MATLAB sessions. Previously entered MATLAB commands can be selected and re-executed from the Command History window by right-clicking on a command or sequence of commands.

This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

Using the MATLAB Editor to create M-Files:


The MATLAB editor is both a text editor specialized for creating M-files and a graphical MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in the desktop. M-files are denoted by the extension .m, as in pixelup.m. The MATLAB editor window has numerous pull-down menus for tasks such as saving, viewing, and debugging files. Because it performs some simple checks and also uses color to differentiate between various elements of code, this text editor is recommended as the tool of choice for writing and editing M-functions. To open the editor, type edit at the prompt; typing edit filename opens the M-file filename.m in an editor window, ready for editing. As noted earlier, the file must be in the current directory, or in a directory on the search path.


CHAPTER 8 SIMULATION RESULTS


Fig. 19 shows that the super resolved images of (a) Baboon, (b) Barbara, and (c) Lena obtained with the proposed technique are much better than the super resolved images obtained by bilinear interpolation, bicubic interpolation, and WZP. Note that the input low resolution images have been obtained by down-sampling the original high resolution images. In order to show the effectiveness of the proposed method over the conventional and state-of-the-art image resolution enhancement techniques, three well-known test images (Baboon, Barbara, and Lena) with different features are used for comparison.
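For reference, the overall processing chain of the proposed technique (as summarized in Chapter 9) can be sketched in MATLAB as follows. This is a minimal sketch, assuming the Wavelet and Image Processing Toolboxes; the wavelet 'db1', the enlargement factor of 4 (e.g., 128 x 128 to 512 x 512), and the file name lena_128.png are illustrative assumptions, not the exact implementation used for the reported results.

lowres = im2double(imread('lena_128.png'));        % hypothetical 128 x 128 low resolution input
alpha  = 4;                                        % overall enlargement factor

[LL, LH, HL, HH] = dwt2(lowres, 'db1');            % DWT subbands, half the input size
LH = imresize(LH, 2, 'bicubic');                   % interpolate the high frequency subbands
HL = imresize(HL, 2, 'bicubic');                   % back to the input size
HH = imresize(HH, 2, 'bicubic');

[A, H, V, D] = swt2(lowres, 1, 'db1');             % SWT subbands, same size as the input
LH = LH + H;  HL = HL + V;  HH = HH + D;           % correct the estimated coefficients

LH = imresize(LH, alpha/2, 'bicubic');             % interpolate the corrected subbands ...
HL = imresize(HL, alpha/2, 'bicubic');
HH = imresize(HH, alpha/2, 'bicubic');
LL2 = imresize(lowres, alpha/2, 'bicubic');        % ... and the input image with half the factor

highres = idwt2(LL2, LH, HL, HH, 'db1');           % IDWT yields the 512 x 512 output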

Fig. 19. Visual comparison on (a) Baboon, (b) Barbara, and (c) Lena. For each test image, the panels show the bilinear interpolated image, the bicubic interpolated image, the super resolved image using WZP, and the result of the proposed technique.


TABLE I
PSNR (dB) RESULTS FOR RESOLUTION ENHANCEMENT FROM 128 x 128 TO 512 x 512 OF THE PROPOSED TECHNIQUE COMPARED WITH THE CONVENTIONAL AND STATE-OF-THE-ART IMAGE RESOLUTION ENHANCEMENT TECHNIQUES

Techniques \ Images      Baboon    Barbara    Lena
Bilinear                 27.71     28.07      29.28
Bicubic                  29.69     30.11      31.30
WZP                      34.81     35.25      36.28
Proposed Technique       34.98     35.62      36.74

Table I compares the PSNR performance of the proposed technique using bicubic interpolation with conventional and state-of-the-art resolution enhancement techniques: bilinear, bicubic, WZP, NEDI, HMM, HMM SR, WZP-CS, WZP-CS-ER, DWT SR, CWT SR, and regularity-preserving image interpolation. Additionally, in order to have a more comprehensive comparison, the performance of the super resolved image obtained by using SWT only (SWT-SR) is also included in the table. The results in Table I indicate that the proposed technique outperforms the aforementioned conventional and state-of-the-art image resolution enhancement techniques.
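The PSNR figures above follow the usual definition; for 8-bit images it can be computed directly (a sketch, where original and enhanced are hypothetical same-size image arrays):

err    = double(original(:)) - double(enhanced(:));
mse    = mean(err.^2);                  % mean squared error
psnrdB = 10 * log10(255^2 / mse);       % peak signal-to-noise ratio in dB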


CHAPTER 9 CONCLUSION
This work proposed an image resolution enhancement technique based on the interpolation of the high frequency subbands obtained by DWT, correction of the high frequency subband estimates using the SWT high frequency subbands, and interpolation of the input image. The proposed technique uses DWT to decompose an image into different subbands, and then the high frequency subband images are interpolated. The interpolated high frequency subband coefficients are corrected by using the high frequency subbands obtained by applying SWT to the input image. The original image is interpolated with half of the interpolation factor used for interpolating the high frequency subbands. Afterwards, all these images are combined using IDWT to generate a super resolved image. The proposed technique has been tested on well-known benchmark images, where the PSNR and visual results show the superiority of the proposed technique over the conventional and state-of-the-art image resolution enhancement techniques.


REFERENCES
[1] L. Yi-bo, X. Hong, and Z. Sen-yue, "The wrinkle generation method for facial reconstruction based on extraction of partition wrinkle line features and fractal interpolation," in Proc. 4th Int. Conf. Image Graph., Aug. 22-24, 2007, pp. 933-937.
[2] Y. Rener, J. Wei, and C. Ken, "Downsample-based multiple description coding and post-processing of decoding," in Proc. 27th Chinese Control Conf., Jul. 16-18, 2008, pp. 253-256.
[3] H. Demirel, G. Anbarjafari, and S. Izadpanahi, "Improved motion-based localized super resolution technique using discrete wavelet transform for low resolution video enhancement," in Proc. 17th Eur. Signal Process. Conf., Glasgow, Scotland, Aug. 2009, pp. 1097-1101.
[4] Y. Piao, I. Shin, and H. W. Park, "Image resolution enhancement using inter-subband correlation in wavelet domain," in Proc. Int. Conf. Image Process., 2007, vol. 1, pp. I-445-448.
[5] H. Demirel and G. Anbarjafari, "Satellite image resolution enhancement using complex wavelet transform," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 1, pp. 123-126, Jan. 2010.
[6] C. B. Atkins, C. A. Bouman, and J. P. Allebach, "Optimal image scaling using pixel classification," in Proc. Int. Conf. Image Process., Oct. 7-10, 2001, vol. 3, pp. 864-867.
[7] W. K. Carey, D. B. Chuang, and S. S. Hemami, "Regularity-preserving image interpolation," IEEE Trans. Image Process., vol. 8, no. 9, pp. 1295-1297, Sep. 1999.
[8] S. Mallat, A Wavelet Tour of Signal Processing, 2nd ed. New York: Academic, 1999.
[9] J. E. Fowler, "The redundant discrete wavelet transform and additive noise," Mississippi State ERC, Mississippi State University, Tech. Rep. MSSU-COE-ERC-04-04, Mar. 2004.
