
Advanced Compression Scheme for Encrypted Images

Mathew P C (II ME Computer and Communication), M Arunkumar (Lecturer, Dept of IT), Dr A Vincent Antony Kumar (HOD, Dept of IT)
PSNA College of Engg & Technology, Dindigul, Tamil Nadu

mathewpandanad@gmail.com

ABSTRACT
When redundant data must be transmitted over an insecure channel, it is customary to encrypt it first. In this paper we propose a method for compressing an encrypted image. In the encoder, the image is first encrypted and then compressed in resolution. The cipher function scrambles only the pixel values; it does not shuffle the pixel locations. After downsampling, each sub-image is encoded independently and the resulting syndrome bits are transmitted. At the decoder, the received image undergoes joint decryption and decompression, and the image is recovered using its local statistics. The decoder first obtains only a lower-resolution version of the image; this partial access to the current source at the decoder side improves the decoder's learning of the source statistics. The source dependency is exploited to improve the compression efficiency. The scheme provides better coding efficiency and lower computational complexity.


General Terms
Encryption, Syndrome bits, Ciphertext.

Keywords

Encrypted images, Image Processing, Markov Decoding, Memoryless Coding

1. INTRODUCTION
When we speak of image compression, there are generally two different approaches: lossless and lossy operation. Lossy compression methods most often rely on transforming the spatial image domain into a domain that reveals image components according to their relevance, making it possible to employ coding methods that exploit data redundancy in order to suppress it. Since the first attempts, the discrete cosine transform (DCT) has been used. The image is divided into segments (because the DCT was designed to work with periodic signals), and each segment is then transformed, producing a series of frequency components that correspond to the detail levels of the image. Several forms of coding are applied in order to store only the coefficients found to be significant. This approach is used in the popular JPEG file format, and most video compression methods are broadly based on it. The other approach is based on the discrete wavelet transform (DWT), which produces a multi-scale image decomposition. Through filtering and subsampling, a decomposition image is produced that very effectively reveals data redundancy at several scales. A coding principle is then applied to compress the data. This article deals with a complete implementation of the SPIHT codec in the MATLAB environment. The implementation is designed to handle grayscale (256-level) images of square resolutions that are a power of two.

The overall goal of compression is to represent an image with the smallest possible number of bits, thereby speeding transmission and minimizing storage requirements. Compression is of two types: lossless and lossy. In lossless schemes the reconstructed image is numerically identical to the original; however, lossless compression achieves only a modest amount of compression. Lossy schemes, though they do not permit perfect reconstruction of the original image, can provide satisfactory quality at a fraction of the original bit rate. In the existing system, lossless compression of encrypted sources can be achieved through Slepian-Wolf coding [3]. For encrypted real-world sources such as images, the key to improving compression efficiency is how the source dependency is exploited. Trellis-coded vector quantization [3] can also be used for compressing encrypted image sources; good results have been reported for binary images, but challenges remain in practical real-world applications. The coding efficiency can be improved only by exploiting the source dependency. Both techniques have the following disadvantages: Markov decoding in Slepian-Wolf coding is computationally expensive; the source dependency is not fully utilized; and since images and video are highly nonstationary, a Markov model cannot describe their local statistics precisely. For 8-bit grayscale images, only the two most significant bit-planes are compressible by employing a 2-D Markov model in bit planes [10].
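As a small illustration of the multi-scale decomposition just described, the sketch below performs a three-level 2-D DWT and lists the resulting subband shapes. It assumes the third-party PyWavelets package (not part of the paper's MATLAB implementation) and a square grayscale image whose side is a power of two, as required by our implementation.

import numpy as np
import pywt  # PyWavelets: third-party package, assumed available

def decompose(image, levels=3, wavelet="haar"):
    """Three-level 2-D DWT: return the approximation band and the detail bands."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]   # detail tuples run coarsest -> finest
    return approx, details

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256)).astype(np.float64)
    approx, details = decompose(img)
    print("LL band:", approx.shape)           # (32, 32) after three levels
    for lh, hl, hh in details:
        print("detail bands:", lh.shape, hl.shape, hh.shape)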

2. ENCRYPTION
There are two major groups of image encryption algorithms: (a) non-chaos selective methods and (b) chaos-based selective or non-selective methods. Most of these algorithms are designed for a specific image format, compressed or uncompressed, and some of them are even format compliant. Here, encryption is performed using the RSA algorithm. RSA is widely used in electronic commerce protocols and is believed to be secure given sufficiently long keys and up-to-date implementations. The RSA algorithm involves three steps: key generation, encryption and decryption. Key generation is the most important part of RSA; it is also the hardest part to implement correctly. The chosen primes must in fact be prime; otherwise RSA will not work or will be insecure. There exist some rare composite numbers for which RSA still works, but the end result is insecure. A fast implementation of the extended Euclidean algorithm is needed. Do not select e too small, and do not use a d that is too small. Use a public key of at least 1024 bits; smaller keys are nowadays considered insecure, and for long-term security 2048-bit keys or longer should be used. Always compute a new modulus n for each key pair; never share n between key pairs (common modulus attack). Test the keys by performing RSA encryption and decryption operations. Encryption is always done with the public key, which must first be obtained and must be authentic to avoid man-in-the-middle attacks; verifying its authenticity is difficult. With certificates, a trusted third party can be used; otherwise some other means of verification (fingerprints, etc.) is required. The message to be encrypted is represented as a number m with 0 ≤ m < n; longer messages are split into smaller blocks. Encryption computes c = m^e mod n, where (e, n) is the public key and m is the message block; c is the ciphertext. Decryption uses the private key d and computes m = c^d mod n, where n is the modulus from the public key.
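The following toy sketch walks through the textbook key generation, encryption and decryption steps described above. The tiny primes and the absence of padding are for illustration only and are nothing like a secure implementation; the prime pair and names are my own choices, not from the paper.

# Sketch: textbook RSA key generation, encryption and decryption (toy parameters).
# Do NOT use in practice: no padding, tiny primes, illustrative only.

def make_keys(p, q, e=17):
    """Return public key (e, n) and private key (d, n) from primes p and q."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse (extended Euclid, Python 3.8+)
    return (e, n), (d, n)

def encrypt(m, public_key):
    e, n = public_key
    assert 0 <= m < n, "message block must satisfy 0 <= m < n"
    return pow(m, e, n)          # c = m^e mod n

def decrypt(c, private_key):
    d, n = private_key
    return pow(c, d, n)          # m = c^d mod n

if __name__ == "__main__":
    public, private = make_keys(p=61, q=53)   # toy primes only
    c = encrypt(42, public)
    assert decrypt(c, private) == 42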


3. COMPRESSION
Image compression is widely used in applications such as medical imaging and satellite imaging. It reduces the size of the image so that the compressed image can be sent over a computer network in a short amount of time. In the SPIHT algorithm, the image is first decomposed into a number of subbands by means of hierarchical wavelet decomposition. The subband coefficients are then grouped into sets known as spatial-orientation trees, which efficiently exploit the correlation between the frequency bands. The coefficients in each spatial-orientation tree are then progressively coded from the most significant bit-planes (MSB) to the least significant bit-planes (LSB), starting with the coefficients of highest magnitude at the lowest pyramid levels. The SPIHT multistage encoding process employs the following lists and sets:
1. The list of insignificant pixels (LIP) contains individual coefficients whose magnitudes are smaller than the threshold.
2. The list of insignificant sets (LIS) contains sets of wavelet coefficients that are defined by tree structures and found to have magnitudes smaller than the threshold (insignificant). The sets exclude the coefficients corresponding to the tree and all subtree roots, and they have at least four elements.
3. The list of significant pixels (LSP) is a list of pixels found to have magnitudes larger than the threshold (significant).
4. The set of offspring (direct descendants) of a tree node at pixel location (i, j) is denoted O(i, j), and the set of all descendants of that node is denoted D(i, j). L(i, j) is defined as L(i, j) = D(i, j) − O(i, j).
The threshold T for the first bit-plane is equal to 2^n, with n = floor(log2(max_(i,j) |c(i, j)|)), where c(i, j) denotes the (i, j)th wavelet coefficient. All the wavelet coefficients are searched in order to obtain the maximum |c(i, j)| after executing the discrete wavelet transform. For operations in the subsequent bit-planes of the threshold T, n is reduced by 1. For each pixel in the LIP, one bit is used to describe its significance. If it is not significant, the pixel remains in the LIP and no more bits are generated; otherwise, a sign bit is produced and the pixel is moved to the LSP. Similarly, each set in the LIS requires one bit for the significance information. The insignificant sets remain in the LIS; the significant sets are partitioned into subsets, which are processed in the same manner and at the same resolution until each significant subset contains exactly one coefficient. Finally, each pixel in the LSP is refined with one bit. The above procedure is then repeated for the subsequent resolution.

The algorithm has several advantages. The first is its fully progressive capability: the decoding (or coding) can be interrupted at any time, and a result of the maximum detail possible for the bits received can be reconstructed with one-bit precision. This is very desirable when transmitting files over the internet, since users with slower connection speeds can download only a small part of the file and still obtain a much more usable result than with other codecs such as progressive JPEG. The second advantage is a very compact output bitstream with large bit variability; no additional entropy coding or scrambling has to be applied. It is also possible to insert a watermarking scheme into the SPIHT coding domain, and such a watermarking technique is considered very strong with regard to watermark invisibility and attack resiliency.
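As a concrete illustration of the threshold rule above (not the full SPIHT codec), the sketch below computes the initial exponent n and performs the significance test applied to LIP/LIS entries; the function names are my own.

# Sketch: SPIHT-style initial threshold and significance test.
# Assumes NumPy; `coeffs` stands for the wavelet coefficients of the image.
import numpy as np

def initial_exponent(coeffs):
    """n such that the first threshold is T = 2**n."""
    return int(np.floor(np.log2(np.max(np.abs(coeffs)))))

def is_significant(coeff_set, n):
    """A pixel or set is significant at pass n if some magnitude is >= 2**n."""
    return bool(np.max(np.abs(coeff_set)) >= 2 ** n)

if __name__ == "__main__":
    coeffs = np.array([[63.0, -34.0], [49.0, 10.0]])
    n = initial_exponent(coeffs)                       # 63 -> n = 5, so T = 32
    print(n, is_significant(coeffs, n), is_significant(np.array([10.0]), n))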

Fig. 1. Layout of three-level decomposition of the encrypted image.

4. CHANNEL ESTIMATION
The decoding starts from the 00 sub-image of the lowest-resolution level, say level N. We suggest transmitting the uncompressed 00_N sub-image as the doped bits. Thus, the 00_N sub-image is known by the decoder without ambiguity, and knowledge about the local statistics is derived from it. Next, the other sub-images of the same resolution level are interpolated from the decrypted 00_N sub-image. The decoding section is illustrated in Fig. 3. A feedback channel is needed for the encoder to know how many bits to transmit for each sub-image, which generally increases the transmission delay. However, this cost is reasonable because the encoder has no idea about the source statistics and cannot determine the coding rate; it is the decoder that is able to learn such information and advise the encoder.
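For concreteness, the 00, 01, 10 and 11 sub-images at one resolution level correspond to a 2x2 polyphase split of the (encrypted) image; the sketch below (my own illustration, with variable names not taken from the paper) shows that split, which is applied recursively to the 00 sub-image to build the lower resolution levels.

# Sketch: split an (encrypted) image into its four polyphase sub-images 00, 01, 10, 11.
import numpy as np

def polyphase_split(image):
    """Return the four 2x2-downsampled sub-images of an even-sized image."""
    return {
        "00": image[0::2, 0::2],
        "01": image[0::2, 1::2],
        "10": image[1::2, 0::2],
        "11": image[1::2, 1::2],
    }

if __name__ == "__main__":
    img = np.arange(64).reshape(8, 8)
    subs = polyphase_split(img)
    print({name: sub.shape for name, sub in subs.items()})   # each sub-image is 4x4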

On the other hand, the feedback channel does consume some bandwidth, but the consumption is not directly related to the compression efficiency, and the amount of information transmitted through the feedback channel is minimal. The side information (SI) in our scheme is generated through interpolation. For simplicity, for any pixel in the target sub-image, we use only the four horizontal and vertical neighbors, or the four diagonal neighbors, in the known sub-image(s) for the interpolation. Intuitively, the SI quality will be better if the neighbors are geometrically closer to the pixel to be interpolated. Hence, we use a two-step interpolation in each resolution level to improve the SI estimation: first, sub-image 11 is interpolated from sub-image 00; after sub-image 11 is decoded, we use both 00 and 11 to interpolate 01 and 10.
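A simplified sketch of this two-step interpolation is given below, using the polyphase convention from the previous sketch. The paper's context-adaptive weighting is replaced here by a plain average of the available neighbors, so this is only an approximation of the actual SI generator.

# Sketch: two-step neighbor interpolation used to build the side information (SI).
# Step 1: estimate sub-image 11 from its four diagonal neighbors, all in sub-image 00.
# Step 2: estimate sub-image 01 from left/right neighbors in 00 and up/down neighbors
#         in the decoded 11 (sub-image 10 is handled symmetrically).
import numpy as np

def interpolate_11(sub00):
    """SI for the 11 sub-image: average of the four surrounding 00 pixels."""
    p = np.pad(sub00.astype(np.float64), ((0, 1), (0, 1)), mode="edge")
    return (p[:-1, :-1] + p[:-1, 1:] + p[1:, :-1] + p[1:, 1:]) / 4.0

def interpolate_01(sub00, sub11):
    """SI for the 01 sub-image: horizontal neighbors from 00, vertical from 11."""
    left = sub00.astype(np.float64)
    right = np.pad(left, ((0, 0), (0, 1)), mode="edge")[:, 1:]
    below = sub11.astype(np.float64)
    above = np.pad(below, ((1, 0), (0, 0)), mode="edge")[:-1, :]
    return (left + right + above + below) / 4.0

if __name__ == "__main__":
    sub00 = np.random.randint(0, 256, (4, 4)).astype(np.float64)
    si_11 = interpolate_11(sub00)          # step 1: SI for 11
    si_01 = interpolate_01(sub00, si_11)   # step 2: SI for 01 (decoded 11 in practice)
    print(si_11.shape, si_01.shape)        # both (4, 4)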

Slepian-Wolf decoding treats the SI as a noisy version of the source to be decoded; we can therefore consider that there is a virtual channel between the source and the SI. To perform Slepian-Wolf decoding, the decoder must also estimate the statistics of this virtual channel. The encoder decomposes each encrypted image into four resolution levels, and the sub-images in the lowest-resolution level are sent without compression.
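One simple way to picture this estimation is as measuring the crossover probability between a bit-plane of the SI and the corresponding bits of the source once they become known. The sketch below (a simplification of my own, not the estimator used in the paper) computes that empirical probability, which could then parameterize the Slepian-Wolf decoder.

# Sketch: empirical crossover probability of the "virtual channel" between a
# decoded bit-plane and the corresponding side-information (SI) bit-plane.
import numpy as np

def bitplane(values, k):
    """Extract bit-plane k (k = 7 is the MSB of an 8-bit image)."""
    return (values.astype(np.uint8) >> k) & 1

def crossover_probability(decoded, side_info, k):
    """Fraction of positions where the SI disagrees with the decoded pixels in plane k."""
    return float(np.mean(bitplane(decoded, k) != bitplane(side_info, k)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, (64, 64), dtype=np.uint8)
    si = np.clip(source.astype(int) + rng.normal(0, 8, source.shape), 0, 255).astype(np.uint8)
    print(crossover_probability(source, si, k=7))   # small for the MSB plane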
Fig. 3. Decoder diagram for decoding the sub-images (blocks: syndromes, channel estimation, context-adaptive interpolation, Slepian-Wolf coding).

Although the sub-images in the lowest-resolution level are sent uncompressed, the decoder still performs inter-sub-image interpolation for them. For the other sub-images, we transmit the four least significant bit-planes (LSBs) as raw bits, because there is little gain in employing Slepian-Wolf coding on them. The four LSBs are sent prior to the MSBs, so that the decoder has better knowledge about the pixels before it starts decoding the MSBs. The four MSBs, on the other hand, are Slepian-Wolf encoded using rate-compatible punctured turbo codes in a bit-plane-based fashion. The sending rate of each Slepian-Wolf coded bit-plane is determined by the decoder's feedback.
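A minimal sketch of this bit-plane split is shown below, assuming 8-bit pixels; the rate-compatible punctured turbo coding of the MSB planes is not shown.

# Sketch: separate an 8-bit sub-image into the four LSB planes (sent raw) and
# the four MSB planes (to be Slepian-Wolf encoded, MSB first).
import numpy as np

def split_planes(sub_image):
    pixels = sub_image.astype(np.uint8)
    lsb_planes = [(pixels >> k) & 1 for k in range(0, 4)]       # planes 0..3, raw
    msb_planes = [(pixels >> k) & 1 for k in range(7, 3, -1)]   # planes 7..4, coded
    return lsb_planes, msb_planes

if __name__ == "__main__":
    sub = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
    lsb, msb = split_planes(sub)
    print(len(lsb), len(msb), msb[0].shape)   # 4 4 (16, 16)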


5. PERFORMANCE ANALYSIS
Simulation results for various test images are summarized in Table I. The results show that this scheme provides better compression of encrypted images.
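For reference, the metrics reported in Table I can be computed with the standard definitions sketched below (assuming 8-bit images; CR is taken here as original bits over compressed bits, a common convention, since the paper does not spell out its exact definition).

# Sketch: compression ratio (CR), mean squared error (MSE) and PSNR for 8-bit images.
import numpy as np

def compression_ratio(original_bits, compressed_bits):
    return original_bits / compressed_bits

def mse(original, reconstructed):
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    error = mse(original, reconstructed)
    return float("inf") if error == 0 else 10.0 * np.log10(peak ** 2 / error)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    noisy = np.clip(img + np.random.normal(0, 2, img.shape), 0, 255).astype(np.uint8)
    print(round(mse(img, noisy), 3), round(psnr(img, noisy), 2))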
TABLE I. PERFORMANCE ANALYSIS FOR VARIOUS ENCRYPTED IMAGES

Image      CR            PSNR (dB)      MSE
Baby       2.8407        40.3379        6.0630
Lena       2.0259        39.0831        8.0941
Rice       2.1733        47.1089        1.2752
Average    2.34663333    42.1766333     5.1441

Fig. 2. Flow chart for the Advanced Compression and Decompression Scheme (blocks: input image data, key generation, encryption of image, encoding using SPIHT, transmission of bits, decryption and decompression, compute CR, PSNR and MSE, end).

6. CONCLUSION
An efficient scheme for compressing encrypted image data was proposed, employing the SPIHT compression algorithm and the RSA algorithm. The method provides better coding efficiency and lower computational complexity than existing approaches. The technique allows only partial access to the current source at the decoder side. In future work, the scheme could be extended to the compression of encrypted videos, where the Advanced Compression Scheme can be used for learning inter-frame and intra-frame correlation at the decoder side.

7. REFERENCES
[1] M. Johnson, P. Ishwar, V. M. Prabhakaran, D. Schonberg, and K. Ramchandran, "On compressing encrypted data," IEEE Trans. Signal Process., vol. 52, no. 10, pp. 2992-3006, Oct. 2004.
[2] A. Liveris, Z. Xiong, and C. Georghiades, "Compression of binary sources with side information at the decoder using LDPC codes," IEEE Commun. Lett., vol. 6, no. 10, pp. 440-442, Oct. 2002.
[3] Y. Yang, V. Stankovic, and Z. Xiong, "Image encryption and data hiding: Duality and code designs," in Proc. Inf. Theory Workshop, Lake Tahoe, CA, Sep. 2007, pp. 295-300.
[4] D. Schonberg, "Practical Distributed Source Coding and its Application to the Compression of Encrypted Data," Ph.D. dissertation, Univ. California, Berkeley, 2007.
[5] D. Slepian and J. K. Wolf, "Noiseless coding of correlated information sources," IEEE Trans. Inf. Theory, vol. IT-19, pp. 471-480, Jul. 1973.
[6] Li-Minn Ang and Kah Phooi Seng, "Lossless Image Compression using Tuned Degree-K Zerotree Wavelet Coding," in Proc. Int. MultiConference of Engineers and Computer Scientists (IMECS 2009), vol. I, Hong Kong, Mar. 18-20, 2009.
[7] A. A. Kassim and W. S. Lee, "Embedded Color Image Coding Using SPIHT With Partially Linked Spatial Orientation Tree," IEEE Trans. Circuits Syst. Video Technol., vol. 13, pp. 203-206, 2003.
[8] Q. Yao, W. Zeng, and W. Liu, "Multi-resolution based hybrid spatiotemporal compression of encrypted videos," in Proc. IEEE Int. Conf. Acoust., Speech and Signal Process., Taipei, Taiwan, R.O.C., Apr. 2009, pp. 725-728.
[9] J. Bajcsy and P. Mitran, "Coding for the Slepian-Wolf problem with turbo codes," in Proc. IEEE Global Telecommun. Conf., San Antonio, TX, Nov. 2001, pp. 1400-1404.
[10] W. Liu, W. Zeng, L. Dong, and Q. Yao, "Efficient Compression of Encrypted Grayscale Images," IEEE Trans. Image Process., vol. 19, no. 4, Apr. 2010.
