MATLAB IEEE PAPERS

1. A rough-set based binarization technique for fingerprint images


This paper appears in: Signal Processing, Computing and Control (ISPCC), 2012 IEEE International Conference on Date of Conference: 15-17 March 2012

ABSTRACT
Fingerprint is considered the most robust biometric in the sense that it can be obtained even without the willing participation of the subject (an uncontrolled situation). Fingerprints are unique in the location and direction of the minutiae points they contain, so a set of minutiae points best characterizes a fingerprint. However, extracting minutiae points from fingerprints under uncontrolled situations is a challenge, and a very robust binarization process is in high demand to obtain the correct set of minutiae points. In this paper, a rough-set based approach for binarization of fingerprint images is presented. Maximizing the rough entropy and minimizing the roughness of the image lead to an optimum threshold for binarization. The result of the proposed method is compared with the traditional Otsu thresholding method for binarization.
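The paper benchmarks against Otsu's thresholding. As a point of reference, a minimal NumPy sketch of Otsu's method (the rough-entropy approach itself is not reproduced here, and the toy bimodal "image" below is an assumption for illustration) might look like:

```python
import numpy as np

def otsu_threshold(image):
    """Exhaustively search the gray level that maximizes between-class variance."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "fingerprint": dark valleys (~40) and bright ridges (~200).
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([
    rng.normal(40, 5, 500), rng.normal(200, 5, 500)
]), 0, 255).astype(np.uint8).reshape(20, 50)
t = otsu_threshold(img)
binary = img >= t
```

The threshold lands between the two modes, splitting the image into ridge and valley pixels.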

2. Real-time recognition system of traffic light in urban environment


This paper appears in: Computational Intelligence for Security and Defence Applications (CISDA), 2012 IEEE Symposium on Date of Conference: 11-13 July 2012

ABSTRACT
Detection of arrow traffic lights is a focal research topic for autonomous vehicles and a basic technique in urban environments. However, most research mainly concerns circular traffic lights. A novel algorithm is proposed in this paper to resolve the problems of detecting and recognizing arrow traffic lights. Two submodules, a detection module and a recognition module, are introduced in the main framework. In the detection submodule, color space conversion, binarization and morphological feature filtering are performed to obtain candidate regions for the blackboards. To obtain the arrow regions of the traffic lights, segmentation based on the YCbCr color space is applied to the cropped image, which is cut from the original image using the blackboard region. In the recognition submodule, the Gabor wavelet transform and two-dimensional independent component analysis (2DICA) are used to extract the candidates' features for the arrow traffic lights. A recognition library has been built, and experimental results show that the recognition rate exceeds 91%.

3. Retinal vessel segmentation using ensemble classifier of bagged decision trees


This paper appears in: Image Processing (IPR 2012), IET Conference on Date of Conference: 3-4 July 2012

ABSTRACT
This paper presents a new supervised method for segmentation of blood vessels in retinal images. The method uses an ensemble system of bootstrapped (bagged) decision trees and a feature vector based on orientation analysis of the gradient vector field, morphological linear transformation, line strength measures and Gabor filter responses. The feature vector encodes information to handle both healthy and pathological retinal images. The method is evaluated on the publicly available DRIVE and STARE databases. Its performance on both sets of test images is better than that of the second human observer and of other methodologies in the literature. Its accuracy, speed, robustness and simplicity make the algorithm a suitable tool for automated retinal image analysis.

4. An improved support vector machine kernel for medical image retrieval system
This paper appears in: Pattern Recognition, Informatics and Medical Engineering (PRIME), 2012 International Conference on Date of Conference: 21-23 March 2012

ABSTRACT
Digital medical images take up most of the storage space in medical databases. These images, in the form of X-rays, MRI and CT scans, are extensively used in diagnosis and treatment planning, so retrieving the required medical images from the database efficiently for diagnosis, research and educational purposes is essential. Image retrieval systems retrieve images similar to an input query image: they extract features from the image into a feature vector and use similarity measures to retrieve matching images from the database. The efficiency of an image retrieval system therefore depends on the feature selection and its classification. In this paper, it is proposed to implement a novel feature selection mechanism using Discrete Sine Transforms (DST) with Information Gain for feature reduction. Classification results obtained from an existing Support Vector Machine (SVM) are compared with the proposed SVM model. The results show that the proposed SVM classifier outperforms both the conventional SVM classifier and a multilayer perceptron neural network.
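The Information Gain criterion used here for feature reduction can be sketched with NumPy (the DST step is omitted; the tiny label/feature arrays are hypothetical illustrations):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def information_gain(feature, labels):
    """IG of a discrete feature: H(Y) minus the weighted conditional entropy."""
    total = entropy(labels)
    cond = 0.0
    for v in np.unique(feature):
        mask = feature == v
        cond += mask.mean() * entropy(labels[mask])
    return total - cond

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
informative = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # perfectly predicts the class
noisy = np.array([0, 1, 0, 1, 0, 1, 0, 1])        # independent of the class
ig_good = information_gain(informative, labels)
ig_bad = information_gain(noisy, labels)
```

Ranking features by this score and keeping the top ones is the usual reduction step before feeding the vector to a classifier.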

5. Cognition and Removal of Impulse Noise With Uncertainty


This paper appears in: Image Processing, IEEE Transactions on Date of Publication: July 2012

ABSTRACT

Uncertainties are the major inherent feature of impulse noise. This fact makes image denoising a difficult task. Understanding the uncertainties can improve the performance of image denoising. This paper presents a novel adaptive detail-preserving filter based on the cloud model (CM) to remove impulse noise. It is called the CM filter. First, an uncertainty-based detector identifies the pixels corrupted by impulse noise. Then, a weighted fuzzy mean filter is applied to remove the noise candidates. The experimental results show that, compared with the traditional switching filters, the CM filter makes a great improvement in image denoising. Even at a noise level as high as 95%, the CM filter can still restore the image with good detail preservation.
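The detect-then-replace structure described above can be illustrated with a simplified switching median filter (the CM filter's cloud-model detector and weighted fuzzy mean are not reproduced; threshold and toy image are assumptions):

```python
import numpy as np

def switching_median_filter(img, thresh=40):
    """Detect pixels that deviate strongly from their 3x3 neighborhood median,
    then replace only those pixels; uncorrupted detail is left untouched."""
    padded = np.pad(img.astype(float), 1, mode='edge')
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 3, j:j + 3]
            med = np.median(window)
            if abs(img[i, j] - med) > thresh:   # noise candidate detected
                out[i, j] = med                  # replace only the candidate
    return out

clean = np.full((8, 8), 100.0)
noisy = clean.copy()
noisy[3, 4] = 255.0   # salt impulse
noisy[5, 2] = 0.0     # pepper impulse
restored = switching_median_filter(noisy)
```

Because only detected candidates are replaced, the remaining pixels pass through unchanged, which is the detail-preserving property the paper targets.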

6. Fast license plate detection based on edge density and integral edge image
This paper appears in: Applied Machine Intelligence and Informatics (SAMI), 2012 IEEE 10th International Symposium on Date of Conference: 26-28 Jan. 2012

ABSTRACT

This paper presents a robust algorithm for license plate detection that can detect multiple license plates of various sizes in unfamiliar and complex backgrounds. License plate detection is an important processing step in license plate recognition, which has many applications in intelligent transportation systems. Vertical edge and edge density features are used to find candidate regions; the candidates are then filtered based on geometrical and textural properties. The efficiency of the method is improved by using an integral edge image and two-stage candidate window detection. The experimental results confirm the robustness and efficiency of the proposed method.
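The integral edge image mentioned above allows the edge density of any candidate window to be computed in constant time. A minimal sketch (the toy edge map is an assumption; the paper's full candidate filtering is not reproduced):

```python
import numpy as np

def integral_image(binary_edges):
    """Cumulative sum table: any rectangle sum becomes 4 lookups."""
    return binary_edges.cumsum(axis=0).cumsum(axis=1)

def window_edge_density(ii, top, left, h, w):
    """Edge count inside the window via corner lookups, divided by area."""
    total = ii[top + h - 1, left + w - 1]
    if top > 0:
        total -= ii[top - 1, left + w - 1]
    if left > 0:
        total -= ii[top + h - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total / (h * w)

edges = np.zeros((10, 20), dtype=int)
edges[4:7, 5:15] = 1          # a plate-like region dense with vertical edges
ii = integral_image(edges)
dense = window_edge_density(ii, 4, 5, 3, 10)    # window over the plate region
sparse = window_edge_density(ii, 0, 0, 3, 5)    # window over background
```

Sliding a window and thresholding this density is what makes the two-stage candidate search fast.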

7. Object recognition based on gabor wavelet features


This paper appears in: Devices, Circuits and Systems (ICDCS), 2012 International Conference on Date of Conference: 15-16 March 2012

ABSTRACT

The proposed method recognizes objects from different categories of images using Gabor features. In object recognition, it is often necessary to classify objects that occupy only a limited part of the image. Hence, to identify local features and specific regions of images, salient point detection and patch extraction are used. Gabor wavelet features, namely the Gabor mean and variance using 2 scales with 2 orientations and 2 scales with 4 orientations, are computed for every patch extracted around the salient points of the original image. These features provide adequate resolution in both the spatial and spectral domains. The extracted features are used to train a learning model, then tested and classified using an SVM. Finally, the results obtained with 2 scales and 2 orientations and with 2 scales and 4 orientations are compared, and it is observed that the latter performs better than the former, with a lower error rate. The experimental evaluation of the proposed method is done using the Caltech database.
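The per-patch Gabor mean/variance features can be sketched as follows (kernel size, sigma and the striped test patch are illustrative assumptions, and filtering is done by FFT-based circular convolution rather than the authors' exact pipeline):

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam):
    """Real part of a Gabor wavelet at orientation theta and wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_mean_var(patch, scales=(2.0, 4.0), thetas=(0.0, np.pi / 2)):
    """Mean and variance of the filter response per (scale, orientation) pair."""
    feats = []
    for lam in scales:
        for theta in thetas:
            k = gabor_kernel(9, sigma=lam, theta=theta, lam=lam)
            resp = np.abs(np.fft.ifft2(np.fft.fft2(patch, s=patch.shape) *
                                       np.fft.fft2(k, s=patch.shape)).real)
            feats += [resp.mean(), resp.var()]
    return np.array(feats)

patch = np.tile(np.sin(np.linspace(0, 4 * np.pi, 32)), (32, 1))  # stripes
features = gabor_mean_var(patch)   # 2 scales x 2 orientations x (mean, var)
```

With 2 scales and 4 orientations the same loop simply yields a 16-dimensional vector instead of 8.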

8. Robust Reversible Watermarking via Clustering and Enhanced Pixel-Wise Masking

This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Aug. 2012

ABSTRACT

Robust reversible watermarking (RRW) methods are popular in multimedia for protecting copyright while preserving the intactness of host images and providing robustness against unintentional attacks. However, conventional RRW methods are not readily applicable in practice, mainly because: 1) they fail to offer satisfactory reversibility on large-scale image datasets; 2) they have limited robustness in extracting watermarks from watermarked images degraded by different unintentional attacks; and 3) some of them suffer from extremely poor invisibility for watermarked images. Therefore, a framework is needed to address these three problems and further improve performance. This paper presents a novel pragmatic framework, wavelet-domain statistical quantity histogram shifting and clustering (WSQH-SC). Compared with conventional methods, WSQH-SC ingeniously constructs new watermark embedding and extraction procedures by histogram shifting and clustering, which are important for improving robustness and reducing run-time complexity. Additionally, WSQH-SC includes a property-inspired pixel adjustment to effectively handle overflow and underflow of pixels, resulting in satisfactory reversibility and invisibility. Furthermore, to increase its practical applicability, WSQH-SC designs an enhanced pixel-wise masking to balance robustness and invisibility. We perform extensive experiments over natural, medical, and synthetic aperture radar images to show the effectiveness of WSQH-SC by comparing it with histogram rotation-based and histogram distribution constrained methods.
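The histogram-shifting idea at the core of the framework can be illustrated with the classic spatial-domain variant (a simplified sketch, not the wavelet-domain WSQH-SC procedure; the tiny image and the assumption that an empty histogram bin exists above the peak are for illustration):

```python
import numpy as np

def hs_embed(img, bits):
    """Classic histogram-shifting reversible embedding: free the bin next to
    the histogram peak, then encode each bit as peak (0) or peak+1 (1)."""
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    zero = peak + 1 + int(np.flatnonzero(hist[peak + 1:] == 0)[0])  # empty bin
    out = np.where((img > peak) & (img < zero), img + 1, img)       # shift
    flat = out.ravel().copy()
    carriers = np.flatnonzero(flat == peak)
    assert len(bits) <= len(carriers), "not enough embedding capacity"
    for i, b in zip(carriers, bits):
        flat[i] = peak + b
    return flat.reshape(img.shape), peak, zero

def hs_extract(marked, peak, zero, n_bits):
    """Read bits from the peak/peak+1 bins, then undo the shift exactly."""
    flat = marked.ravel()
    carriers = flat[(flat == peak) | (flat == peak + 1)]
    bits = [int(v) - peak for v in carriers[:n_bits]]
    restored = np.where((marked > peak) & (marked <= zero), marked - 1, marked)
    return bits, restored

img = np.array([[100, 100, 101, 102],
                [100, 103, 100, 101]])
marked, peak, zero = hs_embed(img, [1, 0, 1])
bits_out, restored = hs_extract(marked, peak, zero, 3)
```

Every pixel moves by at most one gray level, which is why this family of methods achieves both invisibility and exact reversibility.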

9. A block based segmentation of JPEG compressed document images


This paper appears in: Engineering and Systems (SCES), 2012 Students Conference on Date of Conference: 16-18 March 2012

ABSTRACT

Block-based image and video coding systems are used extensively in practice. In low bit rate applications they suffer from undesirable compression artifacts, especially for document images. Existing methods can reduce these artifacts by using post-processing methods without changing the encoding process. Some of these post-processing methods require classification of the encoded blocks into different categories. In this paper we propose a block-based segmentation of JPEG-coded document images.

10. Preconditioning for Edge-Preserving Image Super Resolution


This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Jan. 2012

ABSTRACT

We propose a simple preconditioning method for accelerating the solution of edge-preserving image super-resolution (SR) problems in which a linear shift-invariant point spread function is employed. Our technique involves reordering the high-resolution (HR) pixels in a manner similar to what is done in preconditioning methods for quadratic SR formulations. However, due to the edge-preserving requirements, the Hessian matrix of the cost function varies during the minimization process. We develop an efficient update scheme for the preconditioner in order to cope with this situation. Unlike some other acceleration strategies that round the displacement values between the low-resolution (LR) images on the HR grid, the proposed method does not sacrifice the optimality of the observation model. In addition, we describe a technique for preconditioning SR problems involving rational magnification factors. The use of such factors is motivated in part by the fact that, under certain circumstances, optimal SR zooms are non-integer. We show that, by reordering the pixels of the LR images, the structure of the problem to solve is modified in such a way that preconditioners based on circulant operators can be used.

11. Nonrigid Brain MR Image Registration Using Uniform Spherical Region Descriptor
This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Jan. 2012

ABSTRACT

There are two main issues that make nonrigid image registration a challenging task. First, voxel intensity similarity may not necessarily be equivalent to anatomical similarity in the image correspondence searching process. Second, during the imaging process, interferences such as unexpected rotations of input volumes and monotonic gray-level bias fields can adversely affect the registration quality. In this paper, a new feature-based nonrigid image registration method is proposed. It is based on a new type of image feature, the uniform spherical region descriptor (USRD), used as a signature for each voxel. The USRD is invariant to rotation and to monotonic gray-level transformations and can be efficiently calculated. The registration process is therefore formulated as a feature matching problem. The USRD feature is integrated with the Markov random field labeling framework, in which an energy function is defined for registration and then optimized by the α-expansion algorithm. The proposed method has been compared with five state-of-the-art registration approaches on both simulated and real 3-D databases obtained from the BrainWeb and Internet Brain Segmentation Repository, respectively. Experimental results demonstrate that the proposed method achieves high registration accuracy and reliable robustness.

12. A Probabilistic Model of (t, n) Visual Cryptography Scheme With Dynamic Group

This paper appears in: Information Forensics and Security, IEEE Transactions on Date of Publication: Feb. 2012

ABSTRACT

The (t, n) visual cryptography (VC) is a secret sharing scheme where a secret image is encoded into n transparencies, and the stacking of any t out of n transparencies reveals the secret image. The stacking of t - 1 or fewer transparencies is unable to extract any information about the secret. We discuss the addition and deletion of users in a dynamic user group. To reduce the overhead of generating and distributing transparencies when users change, this paper proposes a (t, n) VC scheme with unlimited n based on a probabilistic model. The proposed scheme allows n to change dynamically so that new transparencies can be included without regenerating and redistributing the original transparencies. Specifically, an extended VC scheme based on basis matrices and a probabilistic model is proposed. An equation is derived from the fundamental definitions of the (t, n) VC scheme, and then a (t, ∞) VC scheme achieving maximal contrast can be designed by using the derived equation. The maximal contrasts with t = 2 to 6 are explicitly solved in this paper.
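The basic mechanism can be seen in the simplest (2, 2) construction, sketched below (deterministic basis-matrix case with 2-subpixel expansion, not the paper's probabilistic extension; the tiny secret image is an illustration):

```python
import numpy as np

def vc_shares(secret, rng):
    """(2, 2) VC: each secret pixel expands into 2 subpixels per share.
    White pixel -> identical subpixel pairs; black pixel -> complementary
    pairs, so stacking ('ink' OR) turns black pixels fully dark."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=int)
    s2 = np.zeros((h, 2 * w), dtype=int)
    for i in range(h):
        for j in range(w):
            pattern = [0, 1] if rng.random() < 0.5 else [1, 0]
            s1[i, 2 * j:2 * j + 2] = pattern
            if secret[i, j] == 1:                       # black: complementary
                s2[i, 2 * j:2 * j + 2] = [1 - pattern[0], 1 - pattern[1]]
            else:                                        # white: identical
                s2[i, 2 * j:2 * j + 2] = pattern
    return s1, s2

rng = np.random.default_rng(1)
secret = np.array([[1, 0], [0, 1]])     # 1 = black, 0 = white
s1, s2 = vc_shares(secret, rng)
stacked = s1 | s2                        # physically stacking transparencies
recovered = stacked.reshape(2, 2, 2).sum(-1)   # ink count per secret pixel
```

Each share alone has exactly one inked subpixel per pixel regardless of the secret, so a single transparency reveals nothing; only stacking exposes the contrast gap.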

13. ECG removal in preterm EEG combining empirical mode decomposition and adaptive filtering
This paper appears in: Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on Date of Conference: 25-30 March 2012

ABSTRACT

In neonatal electroencephalography (EEG), heart activity is a major source of artifacts, which can lead to misleading results in automated analysis if not properly eliminated. In this work we propose a combination of empirical mode decomposition (EMD) and adaptive filtering (AF) to cancel electrocardiogram (ECG) noise in a simplified EEG montage for preterm infants. Introducing EMD prior to AF allows the ECG to be removed selectively while maximally preserving the original characteristics of the EEG. Compared with signals denoised solely with AF, the cleaned signals improved the correlation coefficient with the original datasets by up to 17%.
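The adaptive filtering stage can be sketched with a plain LMS noise canceller (the EMD step is omitted, and the synthetic EEG/ECG sinusoids, filter order and step size are illustrative assumptions):

```python
import numpy as np

def lms_cancel(primary, reference, order=4, mu=0.05):
    """LMS adaptive noise canceller: predict the ECG component of the
    contaminated channel from the reference and subtract it."""
    w = np.zeros(order)
    cleaned = np.zeros_like(primary)
    for n in range(len(primary)):
        x = reference[max(0, n - order + 1):n + 1][::-1]   # recent-first taps
        x = np.pad(x, (0, order - len(x)))
        y = w @ x                      # predicted interference
        e = primary[n] - y             # error = cleaned EEG estimate
        w += 2 * mu * e * x            # LMS weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(0)
n = np.arange(2000)
eeg = 0.3 * np.sin(2 * np.pi * 0.01 * n)        # slow "EEG" component
ecg_ref = np.sin(2 * np.pi * 0.05 * n)          # reference ECG channel
contaminated = eeg + 0.8 * ecg_ref
cleaned = lms_cancel(contaminated, ecg_ref)
```

After the weights converge, the output retains the EEG-band component while the correlated ECG interference is largely removed.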

14. Improved Image Recovery From Compressed Data Contaminated With Impulsive Noise
This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Jan. 2012

ABSTRACT

Compressed sensing (CS) is a new information sampling theory for acquiring sparse or compressible data with far fewer measurements than required by the Nyquist/Shannon counterpart. This is particularly important for imaging applications such as magnetic resonance imaging or astronomy. However, in the existing CS formulation, the use of the l2 norm on the residuals is not particularly efficient when the noise is impulsive, which can increase the upper bound of the recovery error. To address this problem, we consider a robust formulation for CS that suppresses outliers in the residuals. We propose an iterative algorithm for solving the robust CS problem that exploits the power of existing CS solvers. We also show that the upper bound on the recovery error in the case of non-Gaussian noise is reduced, and we demonstrate the efficacy of the method through numerical studies.

15. ECG classification using wavelet transform and Discriminant Analysis


This paper appears in: Biomedical Engineering (ICoBE), 2012 International Conference on Date of Conference: 27-28 Feb. 2012

ABSTRACT

This paper focuses on two cardiac conditions, supraventricular ectopy and ventricular ectopy. Four different mother wavelets are used to produce sets of features. The results show that the beats of each cardiac condition have their own unique characteristics, and that decomposition with different mother wavelets produces different degrees of discriminative power. Discriminant Analysis classifiers with different distance metrics (linear, quadratic and Mahalanobis) are tested. Classification performance mostly exceeds 90% for both individual-feature and combined-feature classification.
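The wavelet feature extraction step can be sketched with a multilevel Haar DWT and per-subband energies (Haar is an illustrative choice, not necessarily one of the paper's four mother wavelets, and the sinusoidal "beat" is synthetic):

```python
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar DWT: approximation and detail."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def haar_features(signal, levels=3):
    """Per-subband energies, a typical discriminative feature for ECG beats."""
    feats = []
    a = signal
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(float((d ** 2).sum()))   # detail energy per level
    feats.append(float((a ** 2).sum()))       # final approximation energy
    return feats

beat = np.sin(np.linspace(0, 2 * np.pi, 64))  # synthetic 64-sample beat
feats = haar_features(beat)
```

Because the transform is orthonormal, the subband energies sum exactly to the beat's total energy, so the feature vector is a lossless energy partition across frequency bands.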

16. Detection of the eye blinks for human's fatigue monitoring


This paper appears in: Medical Measurements and Applications Proceedings (MeMeA), 2012 IEEE International Symposium on Date of Conference: 18-19 May 2012

ABSTRACT

This paper presents a non-intrusive vision-based system for eye blink detection and fatigue level monitoring. It uses a web camera positioned in front of the face. A cascade of boosted classifiers based on Haar-like features is used for fast detection of the eye region. Frame differencing combined with thresholding is applied to detect eye closure and opening. The frame processing algorithm is designed to distinguish involuntary blinks from voluntary ones. Experimental tests that validate the proposed system are shown.
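The frame-differencing-plus-thresholding step reduces to a few lines (the synthetic "eye region" frames and the threshold value are illustrative assumptions; eye-region localization by the Haar cascade is not reproduced):

```python
import numpy as np

def blink_score(prev_frame, frame, thresh=25):
    """Fraction of eye-region pixels that changed between consecutive frames."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return (diff > thresh).mean()

open_eye = np.full((20, 40), 200, dtype=np.uint8)
open_eye[8:12, 10:30] = 30                            # dark iris/pupil band
closed_eye = np.full((20, 40), 200, dtype=np.uint8)   # lid covers the band

score_blink = blink_score(open_eye, closed_eye)   # closure event
score_still = blink_score(open_eye, open_eye)     # no motion
```

Thresholding this score over time yields closure/opening events; the duration between them is what lets the system separate involuntary blinks from the longer voluntary ones.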

17. Improved SNR Evolution for OFDM-IDMA System
This paper appears in: Wireless Communications Letters, IEEE Date of Publication: April 2012

ABSTRACT

The bit error rate performance of interleave division multiple access (IDMA) based systems can be predicted by signal-to-noise ratio (SNR) evolution, which tracks the average symbol SNR at each iteration and provides a faster solution than brute-force simulation. As the desired SNR in the evolution procedure is hard to obtain, an approximate SNR updating formula has been widely adopted in the literature. In this paper a revised SNR updating formula is proposed for orthogonal frequency division multiplexing interleave division multiple access (OFDM-IDMA) systems in Rayleigh fading channels. Theoretical analysis shows that the new updating formula provides a tighter lower bound on the expected SNR in the evolution procedure than the existing one, and we verify this by simulations.

18. An adaptive steganographic technique based on integer wavelet transform


This paper appears in: Networking and Media Convergence, 2009. ICNM 2009. International Conference on Date of Conference: 24-25 March 2009

ABSTRACT

Steganography has gained importance in the past few years due to the increasing need for secrecy in an open environment like the Internet. Since almost anyone can observe the communicated data, steganography attempts to hide the very existence of the message and make communication undetectable. Many techniques are used to secure information: cryptography aims to scramble the information sent and make it unreadable, while steganography conceals the information so that no one can sense its existence. In most algorithms, steganography and cryptography are used together to secure information. Steganography faces technical challenges such as achieving high hiding capacity and imperceptibility. In this paper, we try to optimize these two main requirements by proposing a novel technique for hiding data in digital images that combines an adaptive hiding capacity function, which hides secret data in the integer wavelet coefficients of the cover image, with the optimum pixel adjustment (OPA) algorithm. The coefficients used are selected according to a pseudorandom function generator to increase the security of the hidden data. The OPA algorithm is applied after embedding the secret message to minimize the embedding error. The proposed system shows high hiding rates with reasonable imperceptibility compared to other steganographic systems.
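The optimum pixel adjustment idea can be sketched in the plain spatial (LSB) domain; the paper embeds in integer wavelet coefficients, but the adjustment logic is the same error-minimizing step (pixel values and k = 2 bits per pixel are illustrative assumptions):

```python
import numpy as np

def embed_opa(pixels, secrets, k=2):
    """Replace the k LSBs with secret bits, then apply Optimum Pixel
    Adjustment: if moving by +/- 2^k brings the stego value closer to the
    original while keeping the embedded k bits intact, do it."""
    out = []
    for p, b in zip(pixels, secrets):
        stego = (int(p) & ~((1 << k) - 1)) | b        # plain k-LSB embedding
        for cand in (stego - (1 << k), stego + (1 << k)):
            if 0 <= cand <= 255 and abs(cand - p) < abs(stego - p):
                stego = cand                          # OPA correction
        out.append(stego)
    return np.array(out)

pixels = np.array([100, 101, 102, 103])
secrets = [3, 0, 1, 2]          # 2-bit payloads
stego = embed_opa(pixels, secrets)
recovered = stego & 3           # extraction reads the k LSBs back
```

Since the adjustment only changes bits above the payload, extraction is unaffected, while the worst-case embedding error per pixel drops from 2^k - 1 to 2^(k-1).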

19. Rhythm of Motion Extraction and Rhythm-Based Cross-Media Alignment for Dance Videos
This paper appears in: Multimedia, IEEE Transactions on Date of Publication: Feb. 2012

ABSTRACT

We present how to extract rhythm information from dance videos and music, and accordingly correlate them based on a rhythmic representation. From the dancer's movement, we construct motion trajectories, detect turnings and stops of trajectories, and then estimate the rhythm of motion (ROM). For music, beats are detected to describe the rhythm of the music. The two modalities are thus represented as sequences of rhythm information to facilitate finding cross-media correspondence. Two applications, background music replacement and music video generation, are developed to demonstrate the practicality of cross-media correspondence. We evaluate the performance of ROM extraction, and conduct subjective and objective evaluations to show that a rich browsing experience can be provided by the proposed applications.

20. Peak Minimization for Reference-Based Ultra-Wideband (UWB) Radio


This paper appears in: Communications, IEEE Transactions on Date of Publication: August 2012

ABSTRACT

We introduce a peak mitigation technique for reference based systems that is similar to the tone reservation scheme employed in orthogonal frequency division multiplexing (OFDM) systems but without the cost in data rate. A comparison of reference-based systems under either peak or average power constraints is presented.

21. A real-time license plate localization method based on vertical edge analysis
This paper appears in: Computer Science and Information Systems (FedCSIS), 2012 Federated Conference on Date of Conference: 9-12 Sept. 2012

ABSTRACT

License plate localization is the most important part of a license plate recognition system. The ability to correctly detect the license plate under different conditions directly affects overall recognition accuracy. In this paper a real-time license plate localization method is proposed. First, vertical edges are detected from the image and binarized. Then, license plate candidates are extracted by a two-stage detection process. In this process a sliding-window technique is used to mark all windows which satisfy the edge density conditions. The edge density conditions are computed on an integral edge image, allowing us to significantly increase the processing speed of the method. To better distinguish between license plates and complex backgrounds, edge analysis is performed to remove specific edges. Finally, false candidates are filtered out based on geometrical and textural properties. The proposed method can detect multiple license plates of different sizes in a complex background. The experimental results confirm its robustness and ability to localize license plates in real time: on a database of 501 images our method correctly localizes 97.4% of license plates.

22. A template selection method based on quality for fingerprint matching


This paper appears in: Fuzzy Systems and Knowledge Discovery (FSKD), 2012 9th International Conference on Date of Conference: 29-31 May 2012

ABSTRACT

A fingerprint identification system operates by acquiring a fingerprint image and comparing it with a template image; the accuracy of the system therefore depends highly on the quality of the template image. In this paper, we propose a method to select the template based on fingerprint image quality. In the proposed method, four features are first extracted from a fingerprint image. A support vector machine (SVM) model is then trained to analyze image quality, and finally the fingerprint image with the best quality is chosen as the matching template for each finger. We design an experiment to compare the proposed method with two common methods on FVC2000db1, FVC2000db2 and FVC2000db3. The experimental results show that the proposed method is more effective at improving identification accuracy than the other two methods.

23. The portable wireless aerial image transmission system based on DSP
This paper appears in: Microwave and Millimeter Wave Technology (ICMMT), 2010 International Conference on Date of Conference: 8-11 May 2010

ABSTRACT

The paper presents a portable digital aerial image system based on Blackfin DSPs. The system comprises an aeromodelling-carrier part and a ground control part; image transmission between the two parts is realized with Nordic Semiconductor's nRF24L01 RF transceiver. The aeromodelling-carrier part takes pictures using a CMOS image sensor, compresses the data with an ADSP-BF531, and then sends them to the ground part wirelessly. The ground control center, based on an ADSP-BF533, supervises the images in real time and samples data. After the relevant subsequent image processing and compression, the final image is saved locally. Within its controllable range, the system can take pictures at any place and angle and can lock onto objects neatly. The system features a double-DSP architecture, so it can transmit and process images very fast. It is small and light, with low power consumption and low cost, and is easy for a single person to carry and operate, so it has good application prospects in field reconnaissance, traffic surveillance, city layout, and bad weather or disaster situations.

24. Tracking and counting people in visual surveillance systems


This paper appears in: Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on Date of Conference: 22-27 May 2011

ABSTRACT

The greatest challenge in monitoring people in a monocular video scene is tracking targets under occlusion. In this work, we present a scheme to automatically track and count people in a surveillance system. First, a dynamic background subtraction module is employed to model light variation and then to extract pedestrian objects from a static scene. To identify foreground objects as people, the positions and sizes of foreground regions are treated as decision features. Moreover, tracking of individuals is improved by a modified overlap tracker, which uses the centroid distance between neighboring objects to help track targets through the occlusion states of merging and splitting. In experiments on tracking and counting people in three video sequences, the results show that the proposed scheme improves the average detection ratio by about 10% compared to conventional work.

25. Detecting background setting for dynamic scene


This paper appears in: Signal Processing and its Applications (CSPA), 2011 IEEE 7th International Colloquium on Date of Conference: 4-6 March 2011

ABSTRACT

Processing real-time image sequences is now possible because of technological advances in digital signal processing, wide-band communication, and high-performance VLSI. With these developments in video technology, a surveillance system can be built with low-cost gadgets such as a web camera. With the increasing crime rate in modern life, society needs security and safety, and video surveillance has become an important means of countering threats of crime and terrorism. The most fundamental part of surveillance is foreground detection, the retrieval of an object of interest, which can be recovered by the common background subtraction technique. A problem arises with this technique: because of variations in the light source, the background constantly changes, and pixel intensities change while object detection takes place. These intensity changes lead to improper foreground detection, with background detected as foreground objects. This paper proposes a method to model and update the background of the scene by an intersection solving method.
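A common background subtraction baseline for the lighting problem described above is a running-average background model, sketched below (the paper's intersection solving method is not reproduced; the learning rate, threshold and synthetic frames are illustrative assumptions):

```python
import numpy as np

def update_background(bg, frame, alpha=0.1):
    """Exponential running average: slowly adapts to gradual lighting change."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the current background model are foreground."""
    return np.abs(frame - bg) > thresh

bg = np.full((10, 10), 100.0)
# Gradual lighting drift: the scene brightens over 50 empty frames,
# and the model tracks it instead of flagging the whole image.
for _ in range(50):
    bg = update_background(bg, np.full((10, 10), 130.0))

frame = np.full((10, 10), 130.0)
frame[4:6, 4:6] = 220.0            # a small object of interest appears
mask = foreground_mask(bg, frame)
```

Because the model has converged to the new lighting level, only the object pixels are flagged; a static background model would have marked the entire brightened frame as foreground.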

26. A view on latest audio steganography techniques


This paper appears in: Innovations in Information Technology (IIT), 2011 International Conference on Date of Conference: 25-27 April 2011

ABSTRACT

Steganography has been proposed as an alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio steganographic technique aims at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people only. Hence, to date the main challenge in digital audio steganography is to obtain robust, high-capacity steganographic systems. Leaning towards designing systems that ensure high capacity or robustness and security of the embedded data has led to great diversity among existing steganographic techniques. In this paper, we present the current state of the art in digital audio steganographic techniques and explore their potentials and limitations for ensuring secure communication. A comparison and evaluation of the reviewed techniques is also presented.

27. Multitemporal Image Change Detection Using Undecimated Discrete Wavelet Transform and Active Contours
This paper appears in: Geoscience and Remote Sensing, IEEE Transactions on Date of Publication: Feb. 2011

ABSTRACT

In this paper, an unsupervised change detection method for satellite images is proposed. Owing to its robustness against noise, the undecimated discrete wavelet transform is exploited to obtain a multiresolution representation of the difference image, which is obtained from two satellite images acquired from the same geographical area but at different time instances. A region-based active contour model is then applied to the multiresolution representation of the difference image for segmenting the difference image into the changed and unchanged regions. The proposed change detection method has been conducted on two types of image data sets, i.e., the synthetic aperture radar images and the optical images. The change detection results are compared with several state-of-the-art techniques. The extensive simulation results clearly show that the proposed change detection method consistently yields superior performance.

28. Face recognition using eigenfaces

This paper appears in: Computer Vision and Pattern Recognition, 1991. Proceedings CVPR '91., IEEE Computer Society Conference on Date of Conference: 3-6 Jun 1991

ABSTRACT

An approach to the detection and identification of human faces is presented, and a working, near-real-time face recognition system which tracks a subject's head and then recognizes the person by comparing characteristics of the face to those of known individuals is described. This approach treats face recognition as a two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. Face images are projected onto a feature space (`face space') that best encodes the variation among known face images. The face space is defined by the `eigenfaces', which are the eigenvectors of the set of faces; they do not necessarily correspond to isolated features such as eyes, ears, and noses. The framework provides the ability to learn to recognize new faces in an unsupervised manner.
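The face-space projection can be sketched with a small PCA via SVD (the random 64-pixel "faces" and two hypothetical identities are purely illustrative; the original system works on real images with detection and tracking stages omitted here):

```python
import numpy as np

def eigenfaces(faces, n_components=2):
    """Faces are rows. The rows of vt are the eigenfaces: eigenvectors of
    the covariance matrix of the mean-centered face set ('face space')."""
    mean = faces.mean(axis=0)
    _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Coordinates of a face in face space."""
    return components @ (face - mean)

rng = np.random.default_rng(0)
base_a = rng.normal(0, 1, 64)       # two hypothetical identities
base_b = rng.normal(0, 1, 64)
faces = np.vstack([base_a + rng.normal(0, 0.05, 64) for _ in range(5)] +
                  [base_b + rng.normal(0, 0.05, 64) for _ in range(5)])
mean, comps = eigenfaces(faces)

# Recognize a new face of identity A by the nearest projected training face.
query = project(base_a + rng.normal(0, 0.05, 64), mean, comps)
train = np.array([project(f, mean, comps) for f in faces])
nearest = int(np.argmin(np.linalg.norm(train - query, axis=1)))
```

Recognition happens entirely in the low-dimensional face space, which is what makes the original system near-real-time.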

29. The License Plate Recognition System Based on Fuzzy Theory and BP Neural Network
This paper appears in: Intelligent Computation Technology and Automation (ICICTA), 2011 International Conference on Date of Conference: 28-29 March 2011

ABSTRACT

Car images are acquired under varying conditions such as lighting and complex backgrounds, for which traditional methods suffer from slow convergence and low accuracy. This paper presents a method that applies fuzzy theory to enhance several features of the target. To obtain the license information, we use an improved BP neural network algorithm: by setting a proper number of hidden-layer nodes in the BP network, we can solve the recognition problems of Chinese automobile licenses, such as the variety of character types, their number, and easily confused characters. This method improves the accuracy and efficiency of car license recognition and enhances the system's robustness.

30. Research and implementation of information hiding based on RSA and HVS
This paper appears in: E -Business and E -Government (ICEE), 2011 International Conference on Date of Conference: 6-8 May 2011

ABSTRACT

To better protect the security of information, this paper proposes an information transmission scheme which combines a cryptosystem with information hiding. The scheme first pretreats the information using the RSA algorithm, which belongs to the public-key cryptosystem, and then takes advantage of an improved LSB method based on the Human Visual System to hide the ciphertext in a 24-bit BMP image for transmission. The embedding and extraction processes of the secret information are discussed in detail, and the results of information hiding achieved by this scheme are given. The principle of the scheme is simple and it is easy to implement. The experimental results suggest that the scheme is efficient and has good practical value.
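The LSB stage can be illustrated independently of the RSA pretreatment: ciphertext bits simply replace the least significant bits of the 24-bit image's channel bytes. This is a plain LSB sketch on hypothetical data; the paper's HVS-based variant chooses embedding positions more carefully:

```python
import numpy as np

# Hypothetical 4x4 RGB cover image (24-bit pixels as three uint8 channels).
rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

secret_bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

# Embed: overwrite the least significant bit of the first len(bits) bytes.
flat = cover.flatten()
stego = flat.copy()
stego[:len(secret_bits)] = (stego[:len(secret_bits)] & 0xFE) | secret_bits
stego = stego.reshape(cover.shape)

# Extract: read the LSBs back out; the cover image is not needed.
recovered = stego.flatten()[:len(secret_bits)] & 1
print(recovered.tolist())  # → [1, 0, 1, 1, 0, 0, 1, 0]
```

Since each byte changes by at most 1, the distortion stays below the visual threshold that the HVS-based placement further exploits.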

31. Automatic road extraction using high resolution satellite images based on Level Set and Mean Shift methods
This paper appears in: Electronics Computer Technology (ICECT), 2011 3rd International Conference on Date of Conference: 8-10 April 2011

ABSTRACT

Analysis of high resolution satellite images has been an important research topic for urban analysis. One of the important tasks in urban analysis is automatic road network extraction. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from an original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing the noise (buildings, parking lots, vegetation regions and other open spaces); roads are first extracted as elongated regions, and nonlinear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. The 1 m resolution IKONOS data has been used for the experiments.

32. A new adaptive weight algorithm for salt and pepper noise removal
This paper appears in: Consumer Electronics, Communications and Networks (CECNet), 2011 International Conference on Date of Conference: 16-18 April 2011

ABSTRACT

A new adaptive weight algorithm is developed for the removal of salt and pepper noise. It consists of two major steps: first, noise pixels are detected according to the correlations between image pixels; then different methods are applied according to the noise level. For low noise levels, the mean of the neighboring signal pixels is used to remove the noise, and for high noise levels, an adaptive weight algorithm is used. Experiments show the proposed algorithm has advantages over regularizing methods in terms of both edge preservation and noise removal; even for heavily contaminated images with noise levels as high as 90%, it still achieves significant performance.
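The low-noise-level branch, the mean of neighboring signal pixels, can be sketched as follows; taking the extreme values 0 and 255 as the salt/pepper detector is a common simplification of the correlation-based detection the paper describes:

```python
import numpy as np

def remove_salt_pepper(img):
    out = img.astype(float).copy()
    # Simplified detector: salt/pepper pixels sit at the 8-bit extremes.
    mask = (img == 0) | (img == 255)
    h, w = img.shape
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(0, y - 1), min(h, y + 2)
        x0, x1 = max(0, x - 1), min(w, x + 2)
        win = img[y0:y1, x0:x1]
        # Neighborhood mean over the *signal* (non-noisy) pixels only.
        good = win[(win != 0) & (win != 255)]
        if good.size:
            out[y, x] = good.mean()
    return out.astype(np.uint8)

clean = np.full((5, 5), 100, dtype=np.uint8)
noisy = clean.copy()
noisy[2, 2] = 255   # salt
noisy[0, 0] = 0     # pepper
restored = remove_salt_pepper(noisy)
print(restored[2, 2], restored[0, 0])  # → 100 100
```

At high noise levels the neighborhood may contain no signal pixels at all, which is where the paper's adaptive weight branch takes over.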

33. A novel method of image steganography in DWT domain


This paper appears in: Radioelektronika (RADIOELEKTRONIKA), 2011 21st International Conference Date of Conference: 19-20 April 2011

ABSTRACT

In this paper, we present a novel steganographic method for embedding secret data in a still grayscale image. In order to provide a large capacity for the secret data while maintaining good visual quality of the stego-image, the embedding process is performed in the transform domain of the Discrete Wavelet Transform (DWT) by modifying transform coefficients in an appropriate manner. In addition, the proposed method does not require the original image for successful extraction of the secret information. The experimental results show that the proposed method provides good capacity and excellent image quality.
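A toy version of wavelet-domain embedding, which also shows why the original image is not needed for extraction, can be built on the exactly reversible integer Haar (S-) transform: a bit hidden in the LSB of each detail coefficient survives the inverse/forward round trip. This 1-D sketch is not the paper's scheme, just the underlying idea:

```python
import numpy as np

def s_transform(pair):  # integer Haar (S-transform), exactly reversible
    x0, x1 = int(pair[0]), int(pair[1])
    return (x0 + x1) // 2, x0 - x1      # approximation, detail

def inv_s_transform(a, d):
    x0 = a + (d + 1) // 2
    return x0, x0 - d

# Hypothetical image row; one secret bit goes into the LSB of each detail
# coefficient, so extraction needs only the stego samples themselves.
row = np.array([52, 55, 61, 59, 79, 61, 76, 62])
bits = [1, 0, 1, 1]

stego = []
for (x0, x1), b in zip(row.reshape(-1, 2), bits):
    a, d = s_transform((x0, x1))
    d = (d & ~1) | b          # write the bit into the detail coefficient
    stego.extend(inv_s_transform(a, d))

# Extraction: re-run the forward transform on the stego samples alone.
recovered = [s_transform(p)[1] & 1 for p in np.array(stego).reshape(-1, 2)]
print(recovered)  # → [1, 0, 1, 1]
```

Each pixel changes by at most one gray level, which is why transform-domain LSB schemes preserve visual quality so well.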

34. A new wavelet-based image watermarking technique


This paper appears in: Consumer Electronics (ICCE), 2011 IEEE International Conference on Date of Conference: 9-12 Jan. 2011

ABSTRACT

Rightful ownership of images is indispensable, and watermarking provides a means to assert such claims. In this paper, we embed our watermarks in two subbands (HL and LH) of the DWT of the image. Simulation results demonstrate that the watermark survives very high compression JPEG as well as several attacks such as Gaussian noise, resizing, and low-pass filtering.

35. Image Resolution Enhancement by Using Discrete and Stationary Wavelet Decomposition
This paper appears in: Image Processing, IEEE Transactions on Date of Publication: May 2011

ABSTRACT

In this correspondence, the authors propose an image resolution enhancement technique based on interpolation of the high-frequency subband images obtained by the discrete wavelet transform (DWT) and the input image. The edges are enhanced by introducing an intermediate stage using the stationary wavelet transform (SWT). DWT is applied in order to decompose an input image into different subbands. Then the high-frequency subbands as well as the input image are interpolated. The estimated high-frequency subbands are modified using the high-frequency subbands obtained through SWT. All these subbands are then combined to generate a new high resolution image by using the inverse DWT (IDWT). The quantitative and visual results show the superiority of the proposed technique over conventional and state-of-the-art image resolution enhancement techniques.

36. Fuzzy Random Impulse Noise Removal From Color Image Sequences
This paper appears in: Image Processing, IEEE Transactions on Date of Publication: April 2011

ABSTRACT

In this paper, a new fuzzy filter for the removal of random impulse noise in color video is presented. By working with different successive filtering steps, a very good tradeoff between detail preservation and noise removal is obtained. One strong filtering step that removed all noise at once would inevitably also remove a considerable amount of detail. Therefore, the noise is filtered step by step. In each step, noisy pixels are detected with the help of fuzzy rules, which are very useful for the processing of human knowledge where linguistic variables are used. Pixels that are detected as noisy are filtered; the others remain unchanged. Filtering of detected pixels is done by block matching based on a noise-adaptive mean absolute difference. The experiments show that the proposed method outperforms other state-of-the-art filters both visually and in terms of objective quality measures such as the mean absolute error (MAE), the peak signal-to-noise ratio (PSNR) and the normalized color difference (NCD).

37. Adaptive overlap-and-add technique in MB-OFDM based UWB receiver design


This paper appears in: Communications (NCC), 2011 National Conference on Date of Conference: 28-30 Jan. 2011

ABSTRACT

Traditional OFDM-based transmission systems use a Cyclic Prefix (CP) in an OFDM symbol in order to maintain orthogonality of transmission. Present-day Ultra-Wideband (UWB) systems use Multi-Band OFDM (MB-OFDM) techniques for transmission in applications like Wireless Personal Area Networks (WPAN). UWB-based systems are power limited by FCC regulation. The CP introduces correlation in the transmitted data sequence and hence introduces ripples in the power spectral density (PSD) of the transmitted data, which in turn reduces the range of data transmission. In contrast, a zero-padded suffix (ZPS) has a flat PSD and hence does not suffer from this range degradation problem. In the receiver, ZP removal requires a technique called overlap and add (OLA) in order to capture the multipath energy of the channel and maintain orthogonality in the received data. During transmission, the length of the ZP is fixed and equal to the channel length. During reception, in a traditional OFDM receiver, OLA is performed using a ZP length equal to the transmitted channel length. In a UWB receiver the FFT window gets smeared due to multipath fading, so the estimate of the true FFT window start point affects the OLA process and hence the overall system performance. In this paper, we propose a method which adapts the OLA length in a UWB receiver depending on the current band of reception and the band-wise estimated true FFT window start point. The proposed method is most beneficial for high delay spread channels like channel model 4 (CM4). In CM4, the technique achieves around 1 dB Eb/No gain at a BER of 10^-2, when simulated over an uncoded MB-OFDM based UWB system.

38. High capacity and inaudibility audio steganography scheme


This paper appears in: Information Assurance and Security (IAS), 2011 7th International Conference on Date of Conference: 5-8 Dec. 2011

ABSTRACT

Steganography is an information hiding technique in which a secret message is embedded into an unsuspicious cover signal. Measures of a good steganography algorithm include security, capacity, robustness and imperceptibility. These measures conflict with each other; therefore improving one affects the others. In this paper, we propose a new high capacity audio steganography algorithm based on the wavelet packet transform with adaptive hiding in least significant bits. The adaptive hiding is determined depending on the strength of the cover samples and bit-block matching between the message and cover signals. The results show that the message can be embedded up to 42% of the total size of the cover audio signal with a signal-to-noise ratio of at least 50 dB.

39. Data Hiding in Motion Vectors of Compressed Video Based on Their Associated Prediction Error
This paper appears in: Information Forensics and Security, IEEE Transactions on Date of Publication: March 2011

ABSTRACT

This paper deals with data hiding in compressed video. Unlike data hiding in images and raw video, which operates on the images themselves in the spatial or transformed domain and is vulnerable to steganalysis, we target the motion vectors used to encode and reconstruct both the forward predictive (P)-frames and bidirectional (B)-frames in compressed video. The choice of the candidate subset of these motion vectors is based on their associated macroblock prediction error, which differs from approaches based on motion vector attributes such as magnitude and phase angle. A greedy adaptive threshold is searched for every frame to achieve robustness while maintaining a low prediction error level. The secret message bitstream is embedded in the least significant bit of both components of the candidate motion vectors. The method is implemented and tested for hiding data in natural sequences of multiple groups of pictures and the results are evaluated. The evaluation is based on two criteria: minimum distortion to the reconstructed video and minimum overhead on the compressed video size. Based on these criteria, the proposed method is found to perform well and is compared to a motion vector attribute-based method from the literature.

40. A Basic Digital Watermarking Algorithm in Discrete Cosine Transformation Domain


This paper appears in: Intelligent Systems, Modelling and Simulation (ISMS), 2011 Second International Conference on Date of Conference: 25-27 Jan. 2011

ABSTRACT

Digital watermarking is a newfangled idea in digital media. As the replication and modification of digital media content is done frequently and without any significant obstruction, secrecy and authenticity become vulnerable to attacks. In the information hiding community, digital watermarking has achieved immense popularity due to its strong hold against piracy and non-repudiation. Many watermarking algorithms have been developed in recent years; they differ from each other in the purposes they serve. Here we propose some basic digital watermarking algorithms using the LSB (Least Significant Bit) and the DCT (Discrete Cosine Transformation).

41. Quality Assessment of Deblocked Images


This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Jan. 2011

ABSTRACT

We study the efficiency of deblocking algorithms for improving visual signals degraded by blocking artifacts from compression. Rather than using only the perceptually questionable PSNR, we instead propose a block-sensitive index, named PSNR-B, that produces objective judgments that accord with observations. The PSNR-B modifies PSNR by including a blocking effect factor. We also use the perceptually significant SSIM index, which produces results largely in agreement with PSNR-B. Simulation results show that the PSNR-B results in better performance for quality assessment of deblocked images than PSNR and a well-known blockiness-specific index.

42. Fast Sparse Image Reconstruction Using Adaptive Nonlinear Filtering

This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Feb. 2011

ABSTRACT

Compressed sensing is a new paradigm for signal recovery and sampling. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information and that the signal can be exactly reconstructed from these highly incomplete observations. The major challenge in practical applications of compressed sensing consists in providing efficient, stable and fast recovery algorithms which, in a few seconds, evaluate a good approximation of a compressible image from highly incomplete and noisy samples. In this paper, we propose to approach the compressed sensing image recovery problem using adaptive nonlinear filtering strategies in an iterative framework, and we prove the convergence of the resulting two-step iterative scheme. The results of several numerical experiments confirm that the corresponding algorithm possesses the required properties of efficiency, stability and low computational cost and that its performance is competitive with those of the state-of-the-art algorithms.

43. Vehicle license plate detection and recognition using symbol analysis
This paper appears in: Telecommunications and Signal Processing (TSP), 2011 34th International Conference on Date of Conference: 18-20 Aug. 2011

ABSTRACT

License plate detection and recognition is one of the most important aspects of applying computer techniques to intelligent transportation systems. Detecting the accurate location of a license plate in a vehicle image is the most crucial step of a license plate detection system. This paper proposes a new region-based license plate detection method based on symbol analysis. Throughout detection and recognition, the original image is filtered, converted to a gray-scale image and thresholded. In the next step the best candidate regions are selected. The whole system was tested on fifty different cars with various license plates. The reported recognition success rate is eighty-eight percent.

44. Perceptual Image Hashing Based on Virtual Watermark Detection


This paper appears in: Image Processing, IEEE Transactions on Date of Publication: April 2010

ABSTRACT

This paper proposes a new robust and secure perceptual image hashing technique based on virtual watermark detection. The idea is justified by the fact that the watermark detector responds similarly to perceptually close images using a non-embedded watermark. The hash values are extracted in binary form with perfect control over the probability distribution of the hash bits. Moreover, a key is used to generate pseudorandom noise whose real values contribute to the randomness of the feature vector, significantly increasing the uncertainty of the adversary, as measured by mutual information, in comparison with linear correlation. Experimentally, the proposed technique has been shown to outperform related state-of-the-art techniques recently proposed in the literature in terms of robustness with respect to image processing manipulations and geometric attacks.

45. Design and optimization of the moving object detecting system based on Blackfin DSP
This paper appears in: Information and Automation (ICIA), 2010 IEEE International Conference on Date of Conference: 20-23 June 2010

ABSTRACT

Based on a Blackfin DSP, we design and implement a compact moving object detection and video recording system. The background subtraction algorithm is applied to detect moving objects, and when any object is detected, the frames containing the moving objects are recorded automatically. In order to improve the accuracy of the detection results, instead of using the median filter, we design a block filtering algorithm that filters the noise using simply connected block regions, since the block filtering algorithm requires far less computation than median filtering. Further, in order to improve the detection speed, based on the memory configuration of the system, a new block-based parallel data processing mechanism is proposed, in which the external memory data to be processed is moved into the internal memory space in advance; this greatly reduces the external memory access requests of the DSP core, so that good parallel data processing and transmission are achieved. The experimental results demonstrate that the block filtering algorithm is more suitable for a real-time system than the median filter, because it greatly improves filtering efficiency by reducing the amount of filtering computation while maintaining satisfactory statistical accuracy. In addition, the experiments also show that the block-based parallel data processing mechanism is highly effective in improving detection speed and contributes to a more efficient parallel detection system.

46. Oversampling to reduce the effect of timing jitter on high speed OFDM systems
This paper appears in: Communications Letters, IEEE Date of Publication: March 2010

ABSTRACT

The impairments caused by timing jitter are a significant limiting factor in the performance of very high data rate OFDM systems. In this letter we show that oversampling can reduce the noise caused by timing jitter. Both fractional oversampling achieved by leaving some band-edge OFDM subcarriers unused and integral oversampling are considered. The theoretical results are compared with simulation results for the case of white timing jitter showing very close agreement. Oversampling results in a 3 dB reduction in jitter noise power for every doubling of the sampling rate.

47. New companding transform for PAPR reduction in OFDM


This paper appears in: Communications Letters, IEEE Date of Publication: April 2010

ABSTRACT

High peak-to-average power ratio (PAPR) is a major drawback of orthogonal frequency division multiplexing (OFDM) systems. Among the various PAPR reduction techniques, companding transform appears attractive for its simplicity and effectiveness. This paper proposes a new companding algorithm. Compared with the others, the proposed algorithm offers an improved bit error rate and minimized out-of-band interference while reducing PAPR effectively. Theoretical analysis and numerical simulation are presented.
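The effect a companding transform has on PAPR is easy to demonstrate with classic mu-law companding of the signal envelope; the paper's algorithm differs in detail, but any companding transform compresses large peaks in this way:

```python
import numpy as np

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(7)
# Hypothetical OFDM symbol: IFFT of 64 random QPSK subcarriers.
X = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)
x = np.fft.ifft(X)

# Classic mu-law companding of the envelope: the peak is left at v while
# smaller samples are boosted, so the peak-to-average ratio shrinks.
mu, v = 8.0, np.abs(x).max()
mag = v * np.log1p(mu * np.abs(x) / v) / np.log1p(mu)
y = mag * np.exp(1j * np.angle(x))
print(papr_db(x) > papr_db(y))  # companding lowers the PAPR → True
```

The receiver applies the inverse expanding function before demodulation; the design question the paper addresses is choosing the companding curve so that this round trip adds as little bit-error-rate penalty and out-of-band interference as possible.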

48. Multiobjective optimization for pre-DFT combining in coded SIMO-OFDM systems


Date of Publication: April 2010

ABSTRACT

For coded SIMO-OFDM systems, pre-DFT combining was previously shown to provide a good trade-off between error-rate performance and processing complexity. Max-sum SNR and max-min SNR are two reasonable criteria for obtaining the combining weights. In this letter, we employ multiobjective optimization to further reveal the suitability and limitations of these two criteria. Our results show that: (1) neither max-sum SNR nor max-min SNR is universally good; (2) for better error-rate performance, the means of weight calculation should be adapted according to the capability of the error-correcting code used, and multiobjective optimization can help in this determination.

49. Audio Compression Using a Munich and Cambridge Morlet Wavelet


This paper appears in: Advances in Multimedia, 2009. MMEDIA '09. First International Conference on Date of Conference: 20-25 July 2009

ABSTRACT

Most psycho-acoustic models for coding applications use a uniform (equal bandwidth) spectral decomposition as a first step to approximate the frequency selectivity of the human auditory system. However, the equal filter properties of the uniform sub-bands do not match the non-uniform characteristics of cochlear filters and reduce the precision of psycho-acoustic modelling. In this paper we present a new design of a psycho-acoustic model for audio coding following the model used in the standard MPEG-1 audio layer 3. This architecture is based on an appropriate wavelet packet decomposition instead of a short-term Fourier transform. Its important characteristic is to propose an analysis of frequency bands that comes closer to the critical bands of the ear. This study shows that the Munich Morlet coder gives the best performance.

50. Blurred Image Recognition by Legendre Moment Invariants


This paper appears in: Image Processing, IEEE Transactions on Date of Publication: March 2010

ABSTRACT

Processing blurred images is a key problem in many image applications. Existing methods for obtaining blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for blurred images are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. A comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments.

51. Generic Lossless Visible Watermarking: A New Approach

This paper appears in: Image Processing, IEEE Transactions on Date of Publication: May 2010

ABSTRACT

A novel method for generic visible watermarking with a capability of lossless image recovery is proposed. The method is based on the use of deterministic one-to-one compound mappings of image pixel values for overlaying a variety of visible watermarks of arbitrary sizes on cover images. The compound mappings are proved to be reversible, which allows for lossless recovery of original images from watermarked images. The mappings may be adjusted to yield pixel values close to those of desired visible watermarks. Different types of visible watermarks, including opaque monochrome and translucent full color ones, are embedded as applications of the proposed generic approach. A two-fold monotonically increasing compound mapping is created and proved to yield more distinctive visible watermarks in the watermarked image. Security protection measures by parameter and mapping randomizations have also been proposed to deter attackers from illicit image recoveries. Experimental results demonstrating the effectiveness of the proposed approach are also included.

52. Image Thumbnails That Represent Blur and Noise


This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Feb. 2010

ABSTRACT

The information about the blur and noise of an original image is lost when a standard image thumbnail is generated by filtering and subsampling. Image browsing becomes difficult since standard thumbnails do not distinguish between high-quality and low-quality originals. In this paper, an efficient algorithm with a blur-generating component and a noise-generating component preserves the local blur and the noise of the originals. The local blur is rapidly estimated using a scale-space expansion of the standard thumbnail and subsequently used to apply a space-varying blur to the thumbnail. The noise is estimated and rendered by using multirate signal transformations that allow most of the processing to occur at the lower spatial sampling rate of the thumbnail. The new thumbnails provide a quick, natural way for users to identify images of good quality. A subjective evaluation shows the new thumbnails are more representative of their originals for blurry images. The noise-generating component improves the results for noisy images, but degrades the results for textured images. The blur-generating component of the new thumbnails may always be used to advantage. The decision to use the noise-generating component of the new thumbnails should be based on testing with the particular image mix expected for the application.

53. Analysis of BER performance in presence of nonlinear distortion due to PD-HPA in downlink DS-CDMA signals
This paper appears in: Communications Letters, IEEE Date of Publication: April 2010

ABSTRACT

A predistorter-high power amplifier (PD-HPA) pair has become common practice in wireless communication to compensate for nonlinear distortion due to the HPA. However, the PD-HPA pair still produces severe signal distortion when the input signal exceeds the PD-HPA's saturation level. The effects of such distortion on bit error rate (BER) degradation in downlink direct-sequence code division multiple access (DS-CDMA) signals are analyzed. We establish which signal characteristics at the HPA input are the factors contributing to the BER. Assuming that the baseband CDMA signal is characterized as a complex Gaussian process, we develop analytic expressions for the BER and its contributing factors.

54. Copy-Move Forgery Detection Based on SVD in Digital Image


This paper appears in: Image and Signal Processing, 2009. CISP '09. 2nd International Congress on Date of Conference: 17-19 Oct. 2009

ABSTRACT

Identifying the authenticity and integrity of digital images is becoming increasingly important in digital forensics. In this paper, we propose an effective method for detecting copy-move forgery. The method first extracts SV features, which are invariant to algebraic and geometric changes and to some disturbances. Due to the similar texture characteristics of copied and pasted regions, each SV feature vector is treated as a query and matched to its nearest neighbors in the image. Experiments are provided to demonstrate the efficiency of the presented method on different forgeries and to evaluate its robustness and sensitivity on tampered images with some post-processing.
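The block-matching idea can be sketched with singular values as the block feature; this simplified version uses non-overlapping blocks and a single nearest pair on a synthetic forgery, where a practical detector would use overlapping blocks and sorted feature lists:

```python
import numpy as np

rng = np.random.default_rng(5)
img = rng.random((16, 16))
img[8:12, 8:12] = img[0:4, 0:4]          # simulated copy-move forgery

B = 4
feats, locs = [], []
for y in range(0, 16 - B + 1, B):        # coarse non-overlapping grid
    for x in range(0, 16 - B + 1, B):
        # Singular values: invariant to block rotation/reflection, robust
        # to small disturbances, hence a compact block signature.
        s = np.linalg.svd(img[y:y + B, x:x + B], compute_uv=False)
        feats.append(s)
        locs.append((y, x))

feats = np.array(feats)
# Find the closest pair of SV feature vectors from distinct blocks.
d = np.linalg.norm(feats[:, None] - feats[None, :], axis=2)
d[np.eye(len(d), dtype=bool)] = np.inf
i, j = np.unravel_index(d.argmin(), d.shape)
print(sorted([locs[i], locs[j]]))  # → [(0, 0), (8, 8)]
```

The duplicated region and its source produce identical singular-value vectors, so they surface as the minimum-distance pair.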

55. Development of EMD-Based Denoising Methods Inspired by Wavelet Thresholding


This paper appears in: Signal Processing, IEEE Transactions on Date of Publication: April 2009

ABSTRACT

One of the tasks for which empirical mode decomposition (EMD) is potentially useful is nonparametric signal denoising, an area in which wavelet thresholding has been the dominant technique for many years. In this paper, the wavelet thresholding principle is used in the decomposition modes resulting from applying EMD to a signal. We show that although a direct application of this principle is not feasible in the EMD case, it can be appropriately adapted by exploiting the special characteristics of the EMD decomposition modes. In the same manner, inspired by translation-invariant wavelet thresholding, a similar technique adapted to EMD is developed, leading to enhanced denoising performance.

56. Fast Image Recovery Using Variable Splitting and Constrained Optimization
This paper appears in: Image Processing, IEEE Transactions on Date of Publication: Sept. 2010

ABSTRACT

We propose a new fast algorithm for solving one of the standard formulations of image restoration and reconstruction, which consists of an unconstrained optimization problem whose objective includes an l2 data-fidelity term and a nonsmooth regularizer. This formulation allows both wavelet-based (with orthogonal or frame-based representations) and total-variation regularization. Our approach is based on a variable splitting to obtain an equivalent constrained optimization formulation, which is then addressed with an augmented Lagrangian method. The proposed algorithm is an instance of the so-called alternating direction method of multipliers, for which convergence has been proved. Experiments on a set of image restoration and reconstruction benchmark problems show that the proposed algorithm is faster than the current state-of-the-art methods.

57. On High-Rate Full-Diversity 2×2 Space-Time Codes with Low-Complexity Optimum Detection
This paper appears in: Communications, IEEE Transactions on Date of Publication: May 2009

ABSTRACT

The 2×2 MIMO profiles included in mobile WiMAX specifications are Alamouti's space-time code (STC) for transmit diversity and spatial multiplexing (SM). The former has full diversity and the latter has full rate, but neither of them has both of these desired features. An alternative 2×2 STC, which is both full rate and full diversity, is the Golden code. It is the best known 2×2 STC, but it has a high decoding complexity. Recently, attention has turned to decoder complexity; this issue was included in the STC design criteria, and different STCs were proposed. In this paper, we first present a full-rate full-diversity 2×2 STC design leading to substantially lower complexity of the optimum detector compared to the Golden code with only a slight performance loss. We provide the general optimized form of this STC and show that this scheme achieves the diversity-multiplexing frontier for square QAM signal constellations. Then, we present a variant of the proposed STC, which provides a further decrease in the detection complexity with a rate reduction of 25%, and show that this provides an interesting tradeoff between the Alamouti scheme and SM.
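For reference, the Alamouti building block discussed above can be written out directly; in the noise-free case the linear combiner recovers both symbols exactly with symbol-by-symbol detection complexity, which is the low-complexity baseline the proposed codes are traded off against:

```python
import numpy as np

rng = np.random.default_rng(3)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)   # two QPSK symbols

# Alamouti encoding over two time slots and two transmit antennas:
#   slot 1: antenna 1 sends s1,    antenna 2 sends s2
#   slot 2: antenna 1 sends -s2*,  antenna 2 sends s1*
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # flat-fading gains
r1 = h1 * s1 + h2 * s2
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1)

# Linear combining: full diversity, decoupled single-symbol detection.
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.allclose([s1_hat, s2_hat], [s1, s2]))  # → True (noise-free)
```

The orthogonality of the Alamouti code word is what makes the cross terms cancel in the combiner; full-rate codes like the Golden code give up this orthogonality, which is where their higher detection complexity comes from.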

58. Reversible Data Hiding Based on Histogram Modification of Pixel Differences


This paper appears in: Circuits and Systems for Video Technology, IEEE Transactions on Date of Publication: June 2009

ABSTRACT

In this letter, we present a reversible data hiding scheme based on histogram modification. We exploit a binary tree structure to solve the problem of communicating pairs of peak points. Distribution of pixel differences is used to achieve large hiding capacity while keeping the distortion low. We also adopt a histogram shifting technique to prevent overflow and underflow. Performance comparisons with other existing schemes are provided to demonstrate the superiority of the proposed scheme.
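The peak/zero-point mechanics of histogram-shift embedding can be sketched as follows. This single peak-zero-pair version leaves the paper's binary-tree communication of peak points out of scope, and it assumes a zero bin exists above the peak:

```python
import numpy as np

def embed(img, bits):
    hist = np.bincount(img.ravel(), minlength=256)
    p = int(hist.argmax())                               # peak point
    z = p + 1 + int(np.where(hist[p + 1:] == 0)[0][0])   # zero point above p
    flat = img.astype(int).ravel().copy()
    flat[(flat > p) & (flat < z)] += 1       # shift to empty bin p + 1
    k = 0
    for i in np.flatnonzero(flat == p):      # embed into peak pixels
        if k == len(bits):
            break
        flat[i] = p + bits[k]
        k += 1
    return flat.reshape(img.shape), p, z

def extract(stego, p, z, nbits):
    flat = stego.ravel().copy()
    bits = [int(v - p) for v in flat if v in (p, p + 1)][:nbits]
    flat[flat == p + 1] = p                  # undo the embedded 1s
    flat[(flat > p + 1) & (flat <= z)] -= 1  # undo the shift
    return bits, flat.reshape(stego.shape)

img = np.array([[100, 101, 100],
                [102, 100, 104],
                [100, 101, 100]], dtype=np.uint8)
bits_in = [1, 0, 1]
stego, p, z = embed(img, bits_in)
bits_out, restored = extract(stego, p, z, len(bits_in))
print(bits_out, bool(np.array_equal(restored, img)))  # → [1, 0, 1] True
```

Shifting the bins between the peak and zero points empties bin p+1, so a pixel at p or p+1 in the stego image can only be an embedded 0 or 1; that is what makes the scheme exactly reversible with low distortion.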

59. The Application of Wavelet Transform to Multimodality Medical Image Fusion


This paper appears in: Networking, Sensing and Control, 2006. ICNSC '06. Proceedings of the 2006 IEEE International Conference on

ABSTRACT

Medical image fusion has been used to derive useful information from multi-modality medical image data. The idea is to improve the image content by fusing images such as computer tomography (CT) and magnetic resonance imaging (MRI) images, so as to provide more information to the doctor and the clinical treatment planning system. This paper aims to demonstrate the application of the wavelet transform to multi-modality medical image fusion. The work covers the selection of the wavelet function, the use of wavelet-based fusion algorithms on CT and MRI medical images, and the evaluation of fused image quality. We introduce the peak signal-to-noise ratio (PSNR) method for measuring the fusion effect. The performance of two other image fusion methods, based on pyramid decomposition and on simple image fusion, is briefly described for comparison. The experimental results demonstrate the effectiveness of the fusion scheme based on the wavelet transform.
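To make the fusion idea concrete, the following sketch performs a single-level Haar decomposition of two one-dimensional signals, averages the approximation coefficients, keeps the larger-magnitude detail coefficient, and reconstructs. The paper works on 2-D images with a chosen wavelet family, so this is only an assumed minimal analogue:

```python
def haar(x):
    """Single-level Haar transform (x must have even length)."""
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]  # approximation
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]  # detail
    return a, d

def ihaar(a, d):
    """Inverse single-level Haar transform."""
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse(x, y):
    """Average the approximations, keep the stronger detail coefficient."""
    ax, dx = haar(x)
    ay, dy = haar(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return ihaar(a, d)
```

The max-magnitude rule on details is what carries sharp edges (e.g. bone boundaries from CT) into the fused result while the averaged approximations preserve overall intensity.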

60. Ensemble Methods of Face Recognition Based on Bit-plane Decomposition

This paper appears in: Computational Intelligence and Natural Computing, 2009. CINC '09. International Conference on Date of Conference: 6-7 June 2009

ABSTRACT

Face recognition has become one of the most active research subjects in pattern recognition and image processing. Although many face recognition techniques have been proposed and much has been achieved, high recognition rates remain difficult to obtain due to changes in facial expression, location, direction and lighting. In this paper we study human face recognition based on ensemble techniques. To improve the diversity of the component classifiers, the idea of bit-plane decomposition is used, and a moving-window classifier is used as the basic individual classifier. The quantized pattern representations' layers are used jointly to make a decision. We mainly study several fusion methods, including the product, sum, majority vote, max, min and median rules. Experimental results on face image databases show that fusion of multiple classifiers yields good classification performance. Moreover, we compare different multiple-classifier schemes with other human face recognition methods.
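The bit-plane decomposition that drives classifier diversity can be sketched as follows; each plane is a binary image, and the original image is recovered as the weighted sum of its planes (a generic sketch, not the authors' code):

```python
def bit_planes(img, nbits=8):
    """Split a grayscale image (list of rows of 0..255 ints) into
    nbits binary planes, least significant first."""
    return [[[(p >> k) & 1 for p in row] for row in img]
            for k in range(nbits)]

def recombine(planes):
    """Reconstruct the image as sum_k plane_k * 2**k."""
    rows, cols = len(planes[0]), len(planes[0][0])
    return [[sum(planes[k][r][c] << k for k in range(len(planes)))
             for c in range(cols)] for r in range(rows)]
```

Each binary plane carries different structure (high planes hold coarse shape, low planes fine texture and noise), so training one component classifier per plane gives the ensemble genuinely diverse views of the same face.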

61. A Pattern Similarity Scheme for Medical Image Retrieval


This paper appears in: Information Technology in Biomedicine, IEEE Transactions on Date of Publication: July 2009

ABSTRACT

In this paper, we propose a novel scheme for efficient content-based medical image retrieval, formalized according to the PAtterns for Next generation DAtabase systems (PANDA) framework for pattern representation and management. The proposed scheme involves block-based low-level feature extraction from images, followed by clustering of the feature space to form higher-level, semantically meaningful patterns. The clustering of the feature space is realized by an expectation-maximization algorithm that uses an iterative approach to automatically determine the number of clusters. Then, the two-component property of PANDA is exploited: the similarity between two clusters is estimated as a function of the similarity of both their structure and measure components. Experiments were performed on a large set of reference radiographic images, using different kinds of features to encode the low-level image content. This experimentation shows that the proposed scheme can be efficiently and effectively applied to medical image retrieval from large databases, providing unsupervised semantic interpretation of the results, which can be further extended by knowledge representation methodologies.

62. A closed-form blind CFO estimator based on frequency analysis for OFDM systems
This paper appears in: Communications, IEEE Transactions on Date of Publication: June 2009

ABSTRACT

This letter proposes a blind carrier frequency offset (CFO) estimator for orthogonal frequency-division multiplexing (OFDM) systems based on the frequency analysis of the received signal, and derives a closed-form CFO estimate. Since only one OFDM symbol is utilized instead of multi-symbol averaging, the proposed method is effective even when the CFO is time varying. Finally, analysis and simulation results indicate the outstanding performance of the proposed estimator.
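The paper's frequency-analysis estimator is not reproduced here, but the effect it measures can be illustrated with the classic repetition-based estimator: a CFO of ε subcarrier spacings rotates sample n by exp(j2πεn/N), so correlating the two identical halves of a training symbol exposes ε in the phase (a textbook sketch under that repeated-half assumption, not the proposed method):

```python
import cmath

def estimate_cfo(r, N):
    """Estimate CFO (in subcarrier spacings, |eps| < 1) from one symbol
    whose time-domain halves were identical before the CFO rotation."""
    corr = sum(r[n + N // 2] * r[n].conjugate() for n in range(N // 2))
    # the halves are N/2 samples apart, so the phase of corr is pi * eps
    return cmath.phase(corr) / cmath.pi

# build a test symbol: all-ones halves rotated by a CFO of 0.25
N, eps = 8, 0.25
r = [cmath.exp(2j * cmath.pi * eps * n / N) for n in range(N)]
```

Like the proposed method, this uses a single OFDM symbol, which is what makes such estimators usable when the CFO varies over time.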

63. A Moving Target Detection Algorithm Based on the Dynamic Background


This paper appears in: Computational Intelligence and Software Engineering, 2009. CiSE 2009. International Conference on Date of Conference: 11-13 Dec. 2009

ABSTRACT

The advantages and disadvantages of two algorithms commonly used for moving target detection, the background subtraction method and the frame difference method, are analyzed and compared in this paper. Based on the background subtraction method, a moving target detection algorithm is then proposed. The background image used to process the next frame is generated by superimposing the current frame onto the current background image with a certain probability. This algorithm makes objects that remain stationary for a long time become part of the background after a certain period, rather than continuing to be detected as foreground. The experimental results show that this algorithm detects moving targets more effectively and precisely.
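The probabilistic superposition of frame and background can be illustrated with the closely related running-average update below (a generic sketch assuming grayscale frames flattened to 1-D lists; the blending factor `alpha` plays the role of the superposition probability):

```python
def update_background(bg, frame, alpha=0.1):
    """Blend the current frame into the background estimate."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Pixels far from the background model are flagged as foreground."""
    return [abs(f - b) > thresh for b, f in zip(bg, frame)]
```

An object that stops moving keeps being blended in, so after enough frames its difference falls below the threshold and it joins the background, which is exactly the behavior the abstract describes.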

64. Image encryption using binary key-images


This paper appears in: Systems, Man and Cybernetics, 2009. SMC 2009. IEEE International Conference on Date of Conference: 11-14 Oct. 2009

ABSTRACT

This paper introduces a new concept for image encryption using a binary key-image. The key-image is either a bit plane or an edge map generated from another image, and it has the same size as the original image to be encrypted. In addition, we introduce two new lossless image encryption algorithms using this key-image technique. The performance of these algorithms is discussed against common attacks such as brute force, ciphertext and plaintext attacks. The analysis and experimental results show that the proposed algorithms can fully encrypt all types of images. This makes them suitable for securing multimedia applications and shows their potential for securing communications in a variety of wired/wireless scenarios and real-time applications such as mobile phone services.
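A minimal version of the key-image idea is an XOR cipher: wherever the binary key-image (e.g. a bit plane or edge map of another image) is 1, the pixel's bits are flipped. This sketch is an assumed simplification; the paper's two lossless algorithms are more elaborate:

```python
def key_image_encrypt(img, key):
    """XOR each 8-bit pixel with 0xFF wherever the binary key-image is 1.
    XOR is its own inverse, so the same call decrypts."""
    return [[p ^ (0xFF if k else 0x00) for p, k in zip(row, krow)]
            for row, krow in zip(img, key)]
```

Because the transform is an involution, encryption and decryption share one code path, and the scheme is trivially lossless, a property the paper's algorithms preserve.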

65. Image retrieval using both color and texture features
This paper appears in: Machine Learning and Cybernetics, 2009 International Conference on Date of Conference: 12-15 July 2009

ABSTRACT

This paper presents a further exploration and study of visual feature extraction. Color features are extracted in the HSV (Hue, Saturation, Value) color space as follows: the color space is quantized in non-equal intervals, a one-dimensional feature vector is constructed, and the color feature is represented by a cumulative histogram. Similarly, texture features are extracted using the gray-level co-occurrence matrix (GLCM) or the color co-occurrence matrix (CCM). Through the quantization of the HSV color space, we combine the color features with the GLCM and with the CCM separately. Based on the former, image retrieval using multi-feature fusion is achieved with a normalized Euclidean distance classifier. The image retrieval experiments indicate that using color features together with CCM-based texture has a clear advantage.
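The non-equal-interval quantization and cumulative histogram can be sketched as below; the hue bin edges are illustrative assumptions (published schemes differ), not the paper's exact quantization:

```python
import bisect

# illustrative non-uniform hue bin edges in degrees (assumed values)
HUE_EDGES = [20, 40, 75, 155, 190, 270, 295]

def quantize_hue(h):
    """Map a hue in [0, 360) to one of 8 non-equal-width bins."""
    return bisect.bisect_right(HUE_EDGES, h)

def cumulative_histogram(bins, nbins=8):
    """Normalized cumulative histogram used as the color feature vector."""
    counts = [0] * nbins
    for b in bins:
        counts[b] += 1
    total = sum(counts)
    acc, out = 0, []
    for c in counts:
        acc += c
        out.append(acc / total)
    return out
```

Non-equal intervals let perceptually similar hues share a bin, and the cumulative form makes the feature less sensitive to pixels falling just across a bin boundary; the GLCM or CCM texture vector would be concatenated alongside this color feature before the distance computation.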

66. An automatic wavelet-based nonlinear image enhancement technique for aerial imagery
This paper appears in: Recent Advances in Space Technologies, 2009. RAST '09. 4th International Conference on Date of Conference: 11-13 June 2009

ABSTRACT

Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured in high-dynamic-range scenes with nonuniform lighting conditions. This fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is a very good candidate for aerial imagery applications such as image interpretation for defense and security tasks. The algorithm can further be applied to video streaming for aviation safety. In this paper the latest version of the proposed algorithm is presented, which enhances aerial images so that the enhanced images are better than what direct human observation provides. The results obtained by applying the algorithm to numerous aerial images show strong robustness and high image quality.

67. An adaptive steganographic technique based on integer wavelet transform


This paper appears in: Networking and Media Convergence, 2009. ICNM 2009. International Conference on Date of Conference: 24-25 March 2009

ABSTRACT

Steganography has gained importance in the past few years due to the increasing need for providing secrecy in an open environment like the Internet. With almost anyone able to observe the communicated data all around, steganography attempts to hide the very existence of the message and make communication undetectable. Many techniques are used to secure information: cryptography aims to scramble the information sent and make it unreadable, while steganography conceals the information so that no one can sense its existence. In most algorithms, steganography and cryptography are used together to secure the information. Steganography faces technical challenges such as achieving high hiding capacity and imperceptibility. In this paper, we try to optimize these two main requirements by proposing a novel technique for hiding data in digital images that combines an adaptive hiding-capacity function, which hides secret data in the integer wavelet coefficients of the cover image, with the optimum pixel adjustment (OPA) algorithm. The coefficients used are selected according to a pseudorandom function generator to increase the security of the hidden data. The OPA algorithm is applied after embedding the secret message to minimize the embedding error. The proposed system shows high hiding rates with reasonable imperceptibility compared to other steganographic systems.
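The role of OPA can be seen in a minimal sketch: after replacing the k least significant bits of an integer coefficient, the error is pulled back by ±2^k whenever that keeps the embedded bits intact and shrinks the distortion (a generic illustration, not the paper's adaptive-capacity embedding; the 0..255 clamp assumes 8-bit values):

```python
def embed_opa(coeff, payload, k=2):
    """Replace the k LSBs of coeff with payload (an int < 2**k),
    then apply optimum pixel adjustment to minimize |stego - coeff|."""
    stego = (coeff & ~((1 << k) - 1)) | payload
    err = stego - coeff
    if err > (1 << (k - 1)) and stego - (1 << k) >= 0:
        stego -= 1 << k          # adjust the bit above position k downward
    elif err < -(1 << (k - 1)) and stego + (1 << k) <= 255:
        stego += 1 << k          # adjust it upward
    return stego
```

The embedded bits survive the adjustment because only bits above position k change, while the worst-case embedding error drops from 2^k - 1 to 2^(k-1).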

68. SNR estimation algorithm based on the preamble for OFDM systems in frequency selective channels
This paper appears in: Communications, IEEE Transactions on Date of Publication: Aug. 2009

ABSTRACT

We propose a new SNR estimation method based on the preamble for OFDM systems in frequency selective channels. The OFDM training symbols in the preamble are equalized by the known data in frequency domain and employed to estimate the noise variance. The second order moments of the received symbols are used to estimate the signal plus noise power in the OFDM packets. The SNRs on the subchannels and the average SNR of the packets can all be estimated. Simulation results show that the proposed method is robust to frequency selectivity in wireless channels, and its performance is considerably improved compared with the available methods.
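The two moments described above can be combined in a compact sketch (an assumed simplification for a single flat subchannel; the paper estimates per-subchannel SNRs after frequency-domain equalization):

```python
def estimate_snr(received, known):
    """Estimate SNR from preamble symbols: noise variance from the error
    against the known symbols, total power from the second moment."""
    n = len(received)
    noise_var = sum(abs(r - s) ** 2 for r, s in zip(received, known)) / n
    total_pow = sum(abs(r) ** 2 for r in received) / n   # signal + noise
    return (total_pow - noise_var) / noise_var
```

Subtracting the noise variance from the second moment isolates the signal power, so the ratio needs no separate knowledge of the transmitted power.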

69. Effective Fuzzy Clustering Algorithm for Abnormal MR Brain Image Segmentation
This paper appears in: Advance Computing Conference, 2009. IACC 2009. IEEE International Date of Conference: 6-7 March 2009

ABSTRACT

Clustering approaches are widely used in biomedical applications, particularly for brain tumor detection in abnormal magnetic resonance (MR) images. Fuzzy clustering using the fuzzy C-means (FCM) algorithm has proved superior to other clustering approaches in terms of segmentation efficiency. The major drawback of the FCM algorithm, however, is the long computational time required for convergence. The effectiveness of the FCM algorithm in terms of computational rate is improved by modifying the cluster center and membership value update criteria. In this paper, the application of the modified FCM algorithm to MR brain tumor detection is explored. Abnormal brain images from four tumor classes, namely metastasis, meningioma, glioma and astrocytoma, are used in this work. A comprehensive feature vector space is used for the segmentation technique. A comparative analysis in terms of segmentation efficiency and convergence rate is performed between the conventional FCM and the modified FCM. Experimental results show superior performance measures for the modified FCM algorithm.
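For reference, the standard FCM iteration that the modified algorithm accelerates alternates between a membership update and a center update; a minimal 1-D sketch (generic textbook FCM, not the authors' modification):

```python
def fcm(xs, c=2, m=2.0, iters=30):
    """Fuzzy C-means on scalar data. Returns centers and memberships."""
    centers = [min(xs), max(xs)] if c == 2 else xs[:c]  # simple init
    U = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U = []
        for x in xs:
            d = [abs(x - ci) + 1e-12 for ci in centers]
            U.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # center update: weighted mean with weights u^m
        centers = [sum(U[k][i] ** m * xs[k] for k in range(len(xs)))
                   / sum(U[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, U
```

The cost the paper attacks is visible here: every iteration touches every sample for every cluster, so tightening the update criteria directly cuts the time to convergence.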

70. Adaptive Fuzzy Filtering for Artifact Reduction in Compressed Images and Videos
This paper appears in: Image Processing, IEEE Transactions on Date of Publication: June 2009

ABSTRACT

A fuzzy filter adaptive to both a sample's activity and the relative position between samples is proposed to reduce the artifacts in compressed multidimensional signals. For JPEG images, the fuzzy spatial filter is based on the directional characteristics of ringing artifacts along strong edges. For compressed video sequences, the motion-compensated spatiotemporal filter (MCSTF) is applied to intraframe and interframe pixels to deal with both spatial and temporal artifacts. A new metric that considers the tracking characteristic of human eyes is proposed to evaluate flickering artifacts. Simulations on compressed images and videos show that the proposed adaptive fuzzy filter improves artifact reduction over conventional spatial or temporal filtering approaches.

71. Efficient Detection Ordering Scheme for MIMO Transmission Using Power Control
This paper appears in: Signal Processing Letters, IEEE Date of Publication: Aug. 2009

ABSTRACT

In this letter, an efficient ordering scheme for an ordered successive interference cancellation detector is derived under the bit-error-rate minimization criterion for multiple-antenna systems using transmission power control. From the convexity of the Q-function, we derive an ordering strategy that makes the channel gains converge to their geometric mean. Based on this approach, a fixed ordering algorithm is first designed, in which the geometric mean is used as a constant threshold. To further improve performance, a modified scheme employing adaptive thresholds is developed using the correlation among ordering results. Theoretical analysis and simulation results show that the proposed ordering schemes using QR decomposition not only require reduced computational complexity compared to the conventional scheme, but also result in improved error performance.

72. A medium access control scheme for TDD-CDMA cellular networks with two-hop relay architecture
This paper appears in: Wireless Communications, IEEE Transactions on Date of Publication: May 2009

ABSTRACT

In this paper, we propose a multihop medium access control (mMAC) scheme for time division duplexing-code division multiple access (TDD-CDMA) cellular networks with a two-hop relay architecture to support packet data transmission. The proposed mMAC is based on the joint CDMA/PRMA (packet reservation multiple access) protocol and includes BCH code selection, power control and multihop relaying. Simulation results reveal that two-hop relay cellular networks using the proposed mMAC scheme provide good performance as well as larger cell coverage compared to conventional single-hop TDD-CDMA cellular networks.

73. Automatic MRI brain tissue segmentation using a hybrid statistical and geometric model
This paper appears in: Biomedical Imaging: Nano to Macro, 2006. 3rd IEEE International Symposium on Date of Conference: 6-9 April 2006

ABSTRACT

This paper presents a novel hybrid segmentation technique incorporating a statistical as well as a geometric model in a unified segmentation scheme for brain tissue segmentation of magnetic resonance imaging (MRI) scans. We combine voxel probability with image gradient and curvature information for segmenting gray matter (GM) and white matter (WM) tissues. Both qualitative and quantitative results on synthetic and real brain MRI scans indicate superior and consistent performance when compared with standard techniques such as SPM and FAST.

74. Development of a novel voice verification system using wavelets


This paper appears in: Computer and Communication Engineering, 2008. ICCCE 2008. International Conference on Date of Conference: 13-15 May 2008

ABSTRACT

This paper presents a novel voice verification system using wavelet transforms. Conventional signal processing techniques assume the signal to be stationary and are ineffective in recognizing non-stationary signals such as voice signals. Voice signals, which are highly dynamic, can be analyzed with far better accuracy using the wavelet transform. The developed system is a word-dependent voice verification system combining RASTA and LPC. The voice signal is filtered using a special-purpose voice signal filter based on the relative spectral algorithm (RASTA). The signals are de-noised and decomposed to derive the wavelet coefficients, on which a statistical computation is carried out. Further, the formants, or resonances, of the voice signal are detected using linear predictive coding (LPC). With the statistical computation on the coefficients alone, the accuracy of verifying a sample of an individual's voice against his own voice is quite high (around 75% to 80%). The reliability of the verification is strengthened by combining evidence from these two completely different aspects of the individual voice. For voice comparison purposes, four out of five individuals are verified, and the results show a high percentage of accuracy. The accuracy of the system can be improved by incorporating advanced pattern recognition techniques such as the hidden Markov model (HMM).
