
Identification of Effective Method for Medical Image Fusion by Comparative Study: Curvelet Transform vs Contourlet Transform

R. Rubesh Selvakumar, R.V.S College of Engg & Tech, Dindigul, e-mail: egobikarubesh2009@rediffmail.com; P. Ramya, M.E Scholar, Anna University of Madurai, e-mail: ramyajun26@gmail.com; S. Subhaashree, M.E Scholar, Anna University of Madurai, e-mail: subhaashree.vive@gmail.com

Abstract
In recent years, image fusion has played a vital role in the medical field by providing additional information for diagnosis. The input images may be taken from the same modality or from multiple modalities to create the fused image, which gives doctors additional information for diagnosing disease more easily. Image fusion can be performed by various methods such as the Discrete Wavelet Transform (DWT), the Complex Wavelet Transform (CWT), Curvelet-based transforms, and Contourlet-based transforms. In this paper, the Curvelet-based transform is analyzed theoretically and compared with the Contourlet-based transform. Using two quality indicators, the experimental results show that the Contourlet-based transform gives the more informative fused image.

Keywords: Curvelet Transform, Contourlet Transform, CWT (Complex Wavelet Transform), DWT (Discrete Wavelet Transform), Image Fusion.
Figure 1: Graphical Representation of the Image Information Process

1. INTRODUCTION

Image fusion consists of putting together information coming from different modalities of medical images, whereas registration consists of computing the geometrical transformation between two data sets. This geometrical transformation is used to resample one image data set to match the other; an accurate registration is the prerequisite for an accurate fusion. The process of information fusion can be seen as an information transfer problem in which two or more information sets are combined into a new one that should contain all the information from the original sets. During fusion, input images A and B are combined into a new fused image F by transferring, ideally, all of their information into F. This is illustrated graphically using a simple Venn diagram (Carroll et al., 2007) in Figure 1.

The combination of images from different modalities leads to additional clinical information that is not apparent in any single imaging modality; for this reason radiologists prefer multiple imaging modalities to obtain more details. Image fusion is performed to extract all the useful information from the individual modalities and integrate it into one image. In general, a successful fusion should carry the complete information of the source images into the result without introducing artifacts or inconsistencies. Many medical image fusion algorithms have been developed since the 1980s; they fall into three categories: spatial-domain fusion, transform-domain fusion, and optimization methods. The averaging method, Principal Component Analysis (PCA), and Intensity Hue Saturation (IHS) are some of the spatial-domain techniques (a minimal PCA-weighting sketch is given below). All spatial-domain techniques introduce some spatial distortion into the fused image, which becomes a negative factor for further processing such as classification. To overcome this drawback, a multiresolution decomposition tool known as the Laplacian pyramid was developed by Burt and Adelson, but Laplacian-pyramid-based methods are not well suited to continuous functions and also produce blocking effects in regions. The wavelet transform was therefore developed as a multiresolution decomposition tool by Mallat and Meyer [9]. The wavelet transform has good frequency-division characteristics and is the most commonly used method in medical image fusion. Although the wavelet-based transform solves the low-contrast and blocking problems of the spatial domain, it provides insufficient information for curved shapes and edge representation, is not shift invariant, and lacks directional selectivity.
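As an illustration of a spatial-domain method, the following is a minimal sketch of PCA-weighted fusion, added here for illustration; it shows one common variant (eigenvector-derived weights), not necessarily the exact PCA algorithm referenced above:

```python
import numpy as np

def pca_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Fuse two registered grayscale images by PCA-derived weights.

    The weights are the components of the principal eigenvector of the
    2x2 covariance matrix of the flattened source images.
    """
    data = np.stack([img_a.ravel(), img_b.ravel()])   # shape (2, N)
    cov = np.cov(data)                                # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)            # eigenvalues ascending
    v = np.abs(eigvecs[:, -1])                        # dominant eigenvector
    w = v / v.sum()                                   # normalize to weights
    return w[0] * img_a + w[1] * img_b

# Usage: fused = pca_fusion(np.random.rand(64, 64), np.random.rand(64, 64))
```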

To resolve the problems of directional selectivity and shift variance, the Complex Wavelet Transform (CWT) was initiated by Kingsbury in 1998. Kingsbury also established the Dual Tree Complex Wavelet Transform (DTCWT), which uses two trees of real filters to provide the real and imaginary parts of the complex coefficients [6]. The DTCWT, however, handles only a limited number of directions and, because of its down-sampling and up-sampling, is still not fully shift invariant. The curvelet transform was developed as a multiscale geometric tool by Candes and Donoho in 1999, but for continuous edges the curvelet does not give a smooth edge result and approximation [7]. The contourlet transform was therefore proposed by Do and Vetterli [4] in 2002; it is also a multiscale geometric tool that addresses the lack of geometrical structure in the wavelet transform and forms a multiresolution directional tight frame designed to approximate images made of smooth regions separated by smooth boundaries. The remainder of this paper briefly describes and compares the curvelet and contourlet transforms.

2. THE CURVELET TRANSFORM

The curvelet transform was derived from the ridgelet transform by Candes and Donoho in 2000 [7]. Curvelets are two-dimensional wavelets that provide a new architecture for multiscale analysis in a multidimensional, anisotropic form. The curvelet transform, or first generation curvelet transform, consists of a special filtering process and a multiscale ridgelet transform. Using the ridgelet transform in the first generation curvelet transform causes data redundancy; to overcome this, the fast curvelet transform, known as the second generation curvelet transform, was developed in 2005 [2]. It uses a frequency partitioning technique instead of the ridgelet transform to avoid redundancy, but it brings forth an aliasing problem during reconstruction; to overcome this, a new complex curvelet transform was introduced by Yan He and Zhang Xing-lan in 2010 [3].

2.1. First Generation Curvelet Transform

Unlike in the wavelet transform, the coefficients of an edge can be concentrated in the curvelet transform. The ridgelet is constant along the lines $x_1\cos\theta + x_2\sin\theta = c$ and behaves as a wavelet function in the direction perpendicular to these lines. Given an integrable bivariate function $f(x_1, x_2)$, its ridgelet coefficients are defined by

$\mathcal{R}_f(a, b, \theta) = \int \psi_{a,b,\theta}(x)\, f(x)\, dx, \qquad \psi_{a,b,\theta}(x) = a^{-1/2}\, \psi\!\big((x_1\cos\theta + x_2\sin\theta - b)/a\big).$

The Radon transform of $f(x_1, x_2)$ is

$Rf(\theta, t) = \int f(x_1, x_2)\,\delta(x_1\cos\theta + x_2\sin\theta - t)\,dx_1\,dx_2,$

where the wavelet $\psi$ with scale $a$ and position $b$ is chosen to be both integrable and square integrable. The ridgelet transform is thus an application of the Radon transform in which the angular variable $\theta$ is held constant and $t$ varies. The discrete Radon transform converts an n x n array into an n x 2n array; applying a one-dimensional wavelet transform along each Radon projection then gives the discrete ridgelet coefficients (a minimal sketch appears after Figure 2). Curvelet transform processing can be shown as in Figure 2:

1. Decompose the image into sub-bands.
2. Apply intensity scaling to the approximation sub-bands.
3. Filter one of the sub-bands.
4. Apply inverse intensity scaling to the approximation sub-bands.
5. Recompose the sub-bands into the original image.

Figure 2: Curvelet Transform Flow Graph (a) Curvelet Decomposition (b) Curvelet Composition
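Since the ridgelet transform is a 1-D wavelet applied along the projection axis of the Radon transform, it can be sketched with standard tools. The following is a minimal illustration, not the paper's implementation, assuming scikit-image and PyWavelets are available:

```python
import numpy as np
from skimage.transform import radon
import pywt

def ridgelet_coeffs(image: np.ndarray, wavelet: str = "db4"):
    """Sketch of ridgelet analysis: Radon transform followed by a
    1-D wavelet transform along each projection (the t axis)."""
    thetas = np.arange(180.0)                 # projection angles in degrees
    sinogram = radon(image, theta=thetas)     # shape: (n_t, n_angles)
    # multilevel 1-D wavelet transform along the t axis of every projection
    return pywt.wavedec(sinogram, wavelet, axis=0)

# Usage: coeffs = ridgelet_coeffs(np.random.rand(128, 128))
```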
Continuous Curvelet Transform

In the two-dimensional space $\mathbb{R}^2$, $x$ stands for the spatial-domain variable, $\omega$ for the frequency-domain variable, and $(r, \theta)$ for polar coordinates in the frequency domain. First, a pair of window functions is introduced: the radial window $W(r)$, supported on $r \in (1/2, 2)$, and the angular (corner) window $V(t)$, supported on $t \in [-1, 1]$. They must satisfy the admissibility conditions

$\sum_{j=-\infty}^{\infty} W^2(2^j r) = 1, \quad r \in (3/4, 3/2),$

$\sum_{l=-\infty}^{\infty} V^2(t - l) = 1, \quad t \in (-1/2, 1/2).$

For all scales $j \ge j_0$, the frequency window $U_j$ in the Fourier domain is expressed as

$U_j(r, \theta) = 2^{-3j/4}\, W(2^{-j} r)\, V\!\left(\frac{2^{\lfloor j/2 \rfloor}\,\theta}{2\pi}\right),$

where $\lfloor j/2 \rfloor$ stands for the rounding of $j/2$. Note the different dilation factors applied to W and V: in the spatial domain the support of the curvelet obeys the relation width ≈ length², which is called the anisotropic scaling relation. $U_j$ stands for a wedge-shaped window in polar coordinates, as illustrated in Figure 3. The mother curvelet $\varphi_j(x)$ is then defined through its Fourier transform, $\hat{\varphi}_j(\omega) = U_j(\omega)$. A curvelet carries three parameters: the scale $j$, the direction $l$, and the spatial position $k = (k_1, k_2) \in \mathbb{Z}^2$. At scale $2^{-j}$, the equispaced rotation angles are

$\theta_l = 2\pi \cdot 2^{-\lfloor j/2 \rfloor} \cdot l, \quad l = 0, 1, \ldots, \quad 0 \le \theta_l < 2\pi,$

and the translation positions can be expressed as

$x_k^{(j,l)} = R_{\theta_l}^{-1}\big(k_1 \cdot 2^{-j},\; k_2 \cdot 2^{-j/2}\big),$

so that a curvelet can be expressed as

$\varphi_{j,l,k}(x) = \varphi_j\big(R_{\theta_l}(x - x_k^{(j,l)})\big),$

in which $R_\theta$ is the rotation by $\theta$; all curvelets at scale $2^{-j}$ can be obtained by rotation and translation of $\varphi_j$. The curvelet coefficients are defined by the inner product in $L^2(\mathbb{R}^2)$ of $f$ and $\varphi_{j,l,k}$:

$c(j,l,k) = \langle f, \varphi_{j,l,k} \rangle = \int_{\mathbb{R}^2} f(x)\, \overline{\varphi_{j,l,k}(x)}\, dx.$

The reconstruction formula is

$f = \sum_{j,l,k} \langle f, \varphi_{j,l,k} \rangle\, \varphi_{j,l,k},$

and the curvelet transform satisfies the Parseval relation

$\sum_{j,l,k} \big|\langle f, \varphi_{j,l,k} \rangle\big|^2 = \|f\|_{L^2(\mathbb{R}^2)}^2.$
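To make the anisotropic scaling concrete, the short sketch below (an illustration added here, not from the original paper) enumerates the nominal support of curvelets over a few scales, showing that the number of directions doubles every other scale while width = length²:

```python
# Illustrative enumeration of curvelet tiling parameters per scale.
for j in range(2, 8):
    length = 2.0 ** (-j / 2)     # spatial length of the curvelet support
    width = 2.0 ** (-j)          # spatial width, so width == length ** 2
    n_dirs = 2 ** (j // 2)       # from angular spacing 2*pi*2^(-floor(j/2))
    assert abs(width - length ** 2) < 1e-12
    print(f"scale 2^-{j}: length={length:.4f} width={width:.4f} dirs={n_dirs}")
```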

Discrete Curvelet Transform

In the two-dimensional Cartesian coordinate system, the polar wedge $U_j$ is replaced by a block region with the same centre, as shown in Figure 4; a schematic diagram of the Discrete Curvelet Transform is shown in Figure 5. Here the Cartesian window

$\tilde{U}_j(\omega) = \tilde{W}_j(\omega)\, V_j(\omega)$

is defined through inner products of one-dimensional low-pass windows. We introduce slopes with the same interval,

$\tan\theta_l = l \cdot 2^{-\lfloor j/2 \rfloor}, \quad l = -2^{\lfloor j/2 \rfloor}, \ldots, 2^{\lfloor j/2 \rfloor} - 1,$

in which the shear matrix is

$S_\theta = \begin{pmatrix} 1 & 0 \\ -\tan\theta & 1 \end{pmatrix},$

so the discrete curvelet is defined as

$\tilde{\varphi}_{j,l,k}(x) = 2^{3j/4}\, \tilde{\varphi}_j\big(S_{\theta_l}^{T} x - b\big).$

Here $b$ quantizes the positions $(k_1 \cdot 2^{-j},\; k_2 \cdot 2^{-j/2})$, and the Discrete Curvelet Transform is defined as

$c(j,l,k) = \int \hat{f}(\omega)\, \tilde{U}_j\big(S_{\theta_l}^{-1}\omega\big)\, e^{i\langle b,\, \omega\rangle}\, d\omega.$

Because the sheared block is not a standard rectangle, the Fast Fourier Transform algorithm cannot be applied directly, so the last formula is rewritten as

$c(j,l,k) = \int \hat{f}(S_{\theta_l}\omega)\, \tilde{U}_j(\omega)\, e^{i\langle S_{\theta_l}^{-T} b,\; \omega\rangle}\, d\omega.$

Figure 4: Discrete Curvelet Tiling of the Frequency Plane

Figure 5: Discrete Curvelet Transform for an image

2.2. SECOND GENERATION CURVELET TRANSFORM

Fast Discrete Curvelet Transform (FDCT). The traditional curvelet transform is based on ridgelet analysis theory; its digital realization is complicated, and the Laplacian pyramid decomposition brings enormous data redundancy. To overcome this, the Fast Discrete Curvelet Transform was introduced by Candes and Donoho. It comes in two variants, based on the USFFT (unequally-spaced fast Fourier transform) and on wrapping, and it is comparably accurate, much faster, and less redundant than the first generation curvelet transform. The USFFT method proceeds as follows (a simplified sketch is given after the steps):

1) Apply the 2D FFT to $f(t_1, t_2)$ and obtain the Fourier samples $\hat{f}[n_1, n_2]$.

2) For each scale/angle pair $(j, l)$, resample (or interpolate) $\hat{f}[n_1, n_2]$ to obtain the sheared samples $\hat{f}[n_1,\, n_2 - n_1\tan\theta_l]$ for $(n_1, n_2) \in P_j$, where

$P_j = \{(n_1, n_2) : n_{1,0} \le n_1 < n_{1,0} + L_{1,j},\; n_{2,0} \le n_2 < n_{2,0} + L_{2,j}\}.$

3) Multiply the interpolated (sheared) object by the parabolic window $\tilde{U}_j$ (obeying the parabolic scaling width = length²), effectively localizing it near the parallelogram with orientation $\theta_l$.

4) Apply the inverse 2D FFT to each windowed product, collecting the discrete coefficients $c^D(j, l, k)$.

Here $t_1, t_2$ are the spatial variables and $n_1, n_2$ the corresponding frequency variables. Figure 6 is an example of a curvelet function in the spatial and frequency domains at scale 4 and orientation 16.

Figure 6: Example of a Curvelet Function (a) in the Spatial Domain (b) in the Frequency Domain
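The four USFFT steps can be skeletonized with NumPy FFTs. The sketch below is a strong simplification added for illustration: it keeps the FFT/window/inverse-FFT structure of the steps above but uses plain radial (scale-only) windows and skips the shearing and angular windowing of steps 2-3, which a real FDCT implementation must perform:

```python
import numpy as np

def multiscale_fft_coeffs(img: np.ndarray, n_scales: int = 4):
    """Step 1: 2D FFT; then, per scale, window the spectrum and invert.

    Radial windows only -- the shear resampling and angular windows of
    USFFT steps 2-3 are omitted in this simplified sketch.
    """
    F = np.fft.fftshift(np.fft.fft2(img))                 # step 1
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.hypot(yy / (h / 2), xx / (w / 2))              # normalized radius
    coeffs = []
    for j in range(n_scales):
        lo, hi = 2.0 ** -(j + 1), 2.0 ** -j               # dyadic radial band
        window = ((r > lo) & (r <= hi)).astype(float)     # crude band window
        coeffs.append(np.fft.ifft2(np.fft.ifftshift(F * window)))  # step 4
    return coeffs

# Usage: bands = multiscale_fft_coeffs(np.random.rand(128, 128))
```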

3. CONTOURLET TRANSFORM

Contourlets are an extension of curvelets: curvelets are defined on the continuous domain and then approximated in the discrete domain, whereas contourlets are defined and derived in the discrete domain from the beginning. Both allow for directionality and anisotropy.

3.1 Discrete Contourlet Transform

Do and Vetterli developed the Discrete Contourlet Transform (DCT) in 2002. It is a geometrical image transform used to represent images containing contours and textures. The DCT is better than the wavelet transform (or wavelet packet transform) at dealing with singularities in two or higher dimensions; it provides abundant directional selectivity and can represent the various directional smooth contours of natural images [8], as shown in Fig. 7.

Figure 7: Comparison of Curve Representation with Wavelet and Contourlet

There are two stages in the DCT: a multiscale analysis stage and a directional analysis stage, as shown in Fig. 8. The first stage captures point discontinuities: a Laplacian pyramid (LP) decomposes the input image into a coarse (low-pass) sub-image and a band-pass image, which is the difference between the input image and the prediction image. In the second stage, the band-pass image is decomposed into 2^ld (ld = 1, 2, 3) wedge-shaped sub-images by the directional filter banks (DFB), and the coarse sub-image is then decomposed by the LP for the next loop; this stage links point discontinuities into linear structures. The whole loop can be iterated lp (lp = 1, 2, ..., n) times, and the number of directional decompositions at each level can differ, which is much more flexible than the three directions of the wavelet. A minimal sketch of one LP stage follows Figure 8.

Figure 8: Discrete Contourlet Transform Diagram
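As an illustration of the first (multiscale) stage, here is a minimal Laplacian pyramid step using SciPy; the directional filter bank of the second stage is omitted, since a faithful DFB is considerably more involved:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def lp_stage(img: np.ndarray):
    """One Laplacian pyramid step: coarse approximation + band-pass detail.

    The band-pass image is the difference between the input and the
    prediction up-sampled from the coarse image. Assumes even dimensions.
    A full contourlet transform would pass the band-pass image through a
    directional filter bank (DFB), which is not sketched here.
    """
    smooth = gaussian_filter(img, sigma=1.0)   # anti-alias before decimation
    coarse = smooth[::2, ::2]                  # down-sample by 2
    prediction = zoom(coarse, 2, order=1)      # up-sample back to input size
    bandpass = img - prediction                # band-pass (difference) image
    return coarse, bandpass

# Usage: coarse, band = lp_stage(np.random.rand(256, 256))
```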

The image fusion is performed in the contourlet domain. To perform image fusion successfully, the technique requires as a prerequisite that the input images be accurately registered. Fig. 9 illustrates the framework of the image fusion algorithm based on the contourlet transform. Suppose, without loss of generality, that there are only two input images, I1 and I2, in the fusion approach. First, the CT decomposes the input images into a set of multiscale, directional sub-band images; the number of decomposition scales Ns and the number of directional sub-bands Nmd at scale m are decided before the contourlet decomposition. Let (m, n) denote one sub-band image in the decomposition set, where m is the scale level and n the directional sub-band at scale m. Second, the sub-bands are fused according to their characteristics. The low-frequency sub-bands I1(ml, nl) and I2(ml, nl) hold most of the low-frequency information and their coefficients vary very little, so a low-frequency fusion rule, rule A, is chosen to fuse these coefficients from the two input images. The high-frequency directional sub-bands I1(mi, ni) and I2(mi, ni) contain highly fluctuating coefficients, where large absolute decomposition coefficients correspond to sharper brightness changes in different directions and thus to salient features of the image such as edges, lines, and region boundaries; these are fused with rule B. Finally, the inverse contourlet transform is applied to the fused coefficients, and the fused image is generated. The two fusion rules are sketched in code after Figure 9.

Fusion Rule A. Since the low-frequency sub-band carries the image's approximate information, the coefficients are very similar and vary slowly, so the average of the corresponding low-frequency sub-bands is chosen as the fusion rule:

$F(m_l, n_l) = \big(I_1(m_l, n_l) + I_2(m_l, n_l)\big) / 2.$

Fusion Rule B. High-frequency coefficients reflect the edge and detail information of the image, which has low correlation between pixels, so the absolute-maximum selection rule is used: the coefficient with the larger magnitude is selected in each high-frequency sub-band,

$F(m_i, n_i) = \begin{cases} I_1(m_i, n_i) & \text{if } |I_1(m_i, n_i)| \ge |I_2(m_i, n_i)| \\ I_2(m_i, n_i) & \text{if } |I_1(m_i, n_i)| < |I_2(m_i, n_i)| \end{cases}$

The DCT provides a flexible multiscale and directional representation for images (it can have a different number of directions at each scale), but it introduces redundancy (up to 33%) due to the LP stage. These properties of the CT, namely directionality and anisotropy, make it a powerful tool for content-based image retrieval.
Figure 9: Image Fusion based on the Discrete Contourlet Transform
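Fusion rules A and B are straightforward to express; the following is a minimal sketch operating on already-decomposed coefficient arrays (the contourlet decomposition itself is assumed to be provided elsewhere):

```python
import numpy as np

def fuse_lowpass(c1: np.ndarray, c2: np.ndarray) -> np.ndarray:
    """Rule A: average the low-frequency sub-band coefficients."""
    return (c1 + c2) / 2.0

def fuse_highpass(c1: np.ndarray, c2: np.ndarray) -> np.ndarray:
    """Rule B: per-coefficient absolute-maximum selection."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

# Usage, given matching sub-band sets from two registered images:
# fused_low = fuse_lowpass(low1, low2)
# fused_bands = [fuse_highpass(b1, b2) for b1, b2 in zip(bands1, bands2)]
```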

3.2 Complex Contourlet Transform

The Complex Contourlet Transform (CCT) was developed by Dipeng Chen and Qi Li in 2005. It combines the DTCWT and the DFB to provide a flexible and robust scale-direction representation of the source images. Specifically, this flexibility allows an arbitrary number of directions at any scale, which helps capture the most salient information in images, i.e., edges. The CCT is therefore well suited to image fusion, as it simultaneously provides better directional selectivity and shift invariance. The DT-CWT decomposition of the detail space Wj at the j-th scale gives six sub-bands at each scale, each capturing a distinct direction; traditionally, the three high-pass bands corresponding to the LH, HL, and HH sub-bands are indexed by i ∈ {1, 2, 3} [1], and each has two wavelets forming the real and imaginary parts. By averaging the outputs of the dual tree, an approximately shift invariant representation is obtained (Kingsbury, 1999). In the second stage, an lJ-level DFB (Bamberger and Smith, 1992) is applied to each sub-band, as shown in Figure 10. Wj of the DT-CWT is nearly shift invariant, and this property is preserved in the directional subspaces even after applying the directional filter banks on a detail subspace Wj. Each directional subspace is spanned by a family of basis functions built from the impulse response of the DFB synthesis filter, the overall down-sampling matrices of the DFB, and the wavelet functions, each consisting of a complex dual tree; the location shift is denoted m. The fusion rule used to combine the coefficients into fused coefficients is the same as in the DCT; the DT-CCT produces images with improved contours and textures while largely retaining shift invariance. A minimal DT-CWT sketch follows Figure 10.

Figure 10: DT-CCT based Image Fusion Scheme
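For reference, a minimal first-stage decomposition can be obtained with the third-party `dtcwt` Python package (assumed available here); each highpass level holds six complex directional sub-bands, which a CCT would further split with a DFB:

```python
import numpy as np
import dtcwt  # third-party package implementing the dual-tree CWT

def dtcwt_subbands(img: np.ndarray, levels: int = 3):
    """First stage of a CCT-style decomposition: DT-CWT analysis.

    Returns the real lowpass image and, per level, a complex array of
    shape (H, W, 6) -- six directional sub-bands at each scale.
    """
    transform = dtcwt.Transform2d()
    pyramid = transform.forward(img, nlevels=levels)
    return pyramid.lowpass, pyramid.highpasses

# Usage: low, highs = dtcwt_subbands(np.random.rand(256, 256))
# highs[0].shape -> (128, 128, 6) complex directional coefficients
```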

3.3 Non Sub-Sampled Contourlet Transform

However, because of their down-sampling and up-sampling, both the DCT and the DT-CCT are not fully shift invariant, which results in ringing artifacts. This is overcome by the Non Sub-Sampled Contourlet Transform (NSCT), proposed by da Cunha, Zhou and Do in 2005. The NSCT can be divided into two parts, both of which are shift invariant [3]: the non-subsampled pyramid (NSP) and the non-subsampled directional filter bank (NSDFB). The NSDFB is somewhat similar to the critically sampled DFB of the CT, but it satisfies translation invariance.

Figure 11: Schematic Diagram of the NSCT (a) NSCT Filter Bank Structure (b) Frequency Partitioning

The image fusion process based on the NSCT is shown in Figure 12. In the case of two images, the basic steps are as follows (a pipeline skeleton is sketched right after this list):
(1) Decompose the two source images A and B into a series of directional sub-band coefficients and low-frequency sub-band coefficients using the NSCT;
(2) Fuse the sub-band coefficients at each scale using the appropriate fusion operators to obtain the fused NSCT coefficients;
(3) Reconstruct from the fused NSCT coefficients to obtain the fused image.
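These three steps translate directly into a pipeline skeleton. Since no NSCT implementation is specified in the paper, `nsct_decompose` and `nsct_reconstruct` below are hypothetical placeholders standing in for whatever NSCT library is used:

```python
import numpy as np

def nsct_fuse(img_a: np.ndarray, img_b: np.ndarray, levels: int = 3):
    """Skeleton of NSCT-based fusion: decompose, fuse per band, reconstruct.

    nsct_decompose / nsct_reconstruct are hypothetical stand-ins for a
    concrete NSCT implementation; they are not defined here.
    """
    low_a, bands_a = nsct_decompose(img_a, levels)   # step (1)
    low_b, bands_b = nsct_decompose(img_b, levels)
    fused_low = (low_a + low_b) / 2.0                # step (2): rule A, lowpass
    fused_bands = [np.where(np.abs(a) >= np.abs(b), a, b)  # rule B, details
                   for a, b in zip(bands_a, bands_b)]
    return nsct_reconstruct(fused_low, fused_bands)  # step (3)
```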


First, the image is decomposed by the non-subsampled pyramid filter (NSP) and then by the non-subsampled directional filter bank (NSDFB) in each scale sub-band, as shown in Figure 11 [3]. Convolving the input image with a two-dimensional filter template yields the coarse approximation image; the high-frequency detail image is obtained by subtracting the coarse image from the original image. The non-subsampled directional filter then works on the high-frequency details, and the directional details of the image are obtained. After N-level NSCT decomposition, the sub-band images obtained all have the same size as the input image, where Lj is the number of directional decomposition levels at scale j. Here the NSCT differs entirely from the Laplacian pyramid decomposition of the corresponding CT: the NSP is constructed from two-channel non-subsampled filter banks, without up-sampling or down-sampling, so translation invariance is preserved. After the image is decomposed into different scales by the NSP, at each scale the image can be divided into any power-of-two number of directions by the NSDFB, as shown in Figure 11(b).
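An NSP-like undecimated multiscale split can be sketched with an à-trous-style scheme; this is an illustrative stand-in, not the exact two-channel filter bank of [3]. Every output keeps the input size, which is what makes pixel-wise coefficient fusion possible:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def nsp_like_decompose(img: np.ndarray, levels: int = 3):
    """Undecimated (shift invariant) multiscale split, a-trous style.

    Returns per-level detail images plus the final coarse image; all
    outputs have the same shape as the input, as in the NSP.
    """
    details, current = [], img.astype(float)
    for j in range(levels):
        coarse = gaussian_filter(current, sigma=2.0 ** j)  # no down-sampling
        details.append(current - coarse)                   # detail at scale j
        current = coarse
    return details, current

# Usage: details, coarse = nsp_like_decompose(np.random.rand(256, 256))
```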

Figure 12: Image Fusion based on NSCT

The NSCT not only provides multiresolution but also a geometrical and directional representation, and it is shift invariant, so that each pixel of the transform sub-bands corresponds to the pixel of the original image at the same location. The geometrical information can therefore be gathered pixel by pixel from the NSCT coefficients. The NSCT provides better fusion results than the DCT and the CCT.

Figure 13: Input Images (a) CT (b) MRI

Figure 14: Output Images (a) CWT (b) FDCT (c) NSCT

4. QUANTITATIVE COMPARISON

Two quality indicators are used in this paper; they can be explained as follows (a short implementation sketch is given below).

1. Information Entropy (IE): The IE of an image is an important index used to measure the amount of useful information present in the image:

$IE = -\sum_{i=0}^{L-1} h(i)\, \log_2 h(i),$

where h(i) is the ratio of the number of pixels with gray value equal to i to the total number of pixels. A larger IE value indicates that more information is contained in the image.

2. Root Mean Square Error (RMSE): The RMSE measures the difference between the source image and the fused image; the smaller the RMSE, the smaller the difference and the better the fusion performance. It is defined as

$RMSE = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \big(R(i,j) - F(i,j)\big)^2},$

where R represents a source image and F represents the fused image.
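Both indicators are a few lines of NumPy; this sketch (added for illustration) assumes 8-bit grayscale images:

```python
import numpy as np

def information_entropy(img: np.ndarray) -> float:
    """IE = -sum h(i) * log2 h(i) over the gray-level histogram."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    h = hist / hist.sum()                 # h(i): fraction of pixels at level i
    h = h[h > 0]                          # skip empty bins (0 * log 0 := 0)
    return float(-(h * np.log2(h)).sum())

def rmse(reference: np.ndarray, fused: np.ndarray) -> float:
    """Root mean square error between a source image and the fused image."""
    diff = reference.astype(float) - fused.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))
```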

5. EXPERIMENTAL RESULT

Brain CT and MRI images are used for the experimental analysis. Figure 13(a) shows the CT image and Figure 13(b) the MRI image. From the figures, we can see that the CT image does not show the soft tissues clearly, while in the MRI image the soft tissues are clear but the cranial bones are not shown clearly. The experiment compares the CWT and FDCT with the NSCT. Figure 14(a, b, c) shows the results of the fusion methods: the image fusion using the Non Sub-Sampled Contourlet Transform gives a better fusion result than the Fast Discrete Curvelet and the Complex Wavelet transforms.

Transform   Fusion rule   Entropy   RMSE
CWT         MS Rule       5.40      2.392
FDCT        MS Rule       5.361     1.530
NSCT        MS Rule       6.170     1.325

Table 1: Comparison of Fusion Results

6. CONCLUSION

This paper presents a comparison of the wavelet-based, curvelet-based, and contourlet-based transforms in terms of two performance measures, Information Entropy (IE) and Root Mean Square Error (RMSE). Among the three transforms (as shown in Table 1), the Non Sub-Sampled Contourlet Transform (NSCT) provides the best results for pixel-level fusion due to its improved directionality, better geometric representation, and good directional selectivity. Hence, using the contourlet transform, one can obtain a fused image with better direction, high geometric representation, and better visual quality. This will help physicians to diagnose diseases in a better way.

7. REFERENCES

[1] Al-Azzawi, N., Sakim, H. A. M., Abdullah, W. A. K. W. & Ibrahim, H. (2009). Medical image fusion scheme using complex contourlet transform based on PCA. Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009), pp. 5813-5816, ISSN 1557-170X.
[2] Chang Xia, Jiao Li-Cheng & Jia Jian-Hua. Multisensor Image Adaptive Fusion Based on Nonsubsampled Contourlet. Chinese Journal of Computers, 2009, 32(11): 2229-2238 (in Chinese).
[3] da Cunha, A. L., Zhou, J. & Do, M. N. The nonsubsampled contourlet transform: theory, design, and applications. IEEE Transactions on Image Processing, 2006, 15(10): 3089-3101.
[4] Do, M. N. & Vetterli, M. The Contourlet Transform: An Efficient Directional Multi-resolution Image Representation. IEEE Transactions on Image Processing, vol. 14, no. 12, pp. 2091-2106, 2005.
[5] Chen, Dipeng & Li, Qi. The Use of Complex Contourlet Transform on Fusion Scheme. World Academy of Science, Engineering and Technology 12, 2005.
[6] Kingsbury, N. G. (1998). The dual-tree complex wavelet transform: a new technique for shift invariant and directional filters. IEEE Digital Signal Processing Workshop (86).
[7] Kirankumar, Y. Comparison of Fusion Techniques applied to practical images: Discrete Curvelet Transform using Wrapping Technique and Wavelet Transform. Journal of Theoretical and Applied Information Technology.
[8] Liu Sheng-peng & Fang Yong. Infrared Image Fusion Algorithm Based On Contourlet Transform And Improved Pulse Coupled Neural Network. Journal of Infrared and Millimeter Waves, 2007, 26(3): 217-221 (in Chinese).


[9] Mallat, S. G. Multifrequency channel decompositions of images and wavelet models. IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 37, no. 12, pp. 2091-2110, 1989.
[10] Vrabel, J. Multispectral imagery band sharpening study. Photogrammetric Engineering and Remote Sensing, vol. 62, pp. 1075-1083, 1996.
[11] Xing Su-Xia & Chen Tian-Hua. Study of Driver Vision Enhancement Technology based on Image Fusion. Journal of Highway and Transportation Research and Development, 2010(8): 131-135 (in Chinese).
[12] Yan He, Zhang Xing-lan, Li Wei-wei & Chen Feng. Image Restoration using Gaussian Scale Mixtures in Complex Curvelet Transform Domain. College of Computer Science, Chongqing University of Technology, Chongqing.
