
CHAPTER 1 IMAGE COMPRESSION 1.1 INTRODUCTION: Image compression is the application of data compression to digital images.

In effect, the objective is to reduce the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Uncompressed multimedia (graphics, audio and video) data requires considerable storage capacity and transmission bandwidth. Despite rapid progress in mass-storage density, processor speeds, and digital communication system performance, demand for data storage capacity and data-transmission bandwidth continues to outstrip the capabilities of available technologies. The recent growth of data-intensive multimedia-based web applications has not only sustained the need for more efficient ways to encode signals and images but has made compression of such signals central to storage and communication technology. 1.2 NEED FOR COMPRESSION: Image, audio, and video data demand substantial storage space, large transmission bandwidth, and long transmission times. At the present state of technology, the only practical solution is to compress multimedia data before storage and transmission, and to decompress it at the receiver.

1.3 PRINCIPLES: A common characteristic of most images is that neighboring pixels are correlated and therefore carry redundant information. The foremost task then is to find a less correlated representation of the image. Two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS). In general, three types of redundancy can be identified:

Spatial redundancy, or correlation between neighboring pixel values. Spectral redundancy, or correlation between different color planes or spectral bands. Temporal redundancy, or correlation between adjacent frames in a sequence of images (in video applications). Image compression aims at reducing the number of bits needed to

represent an image by removing the spatial and spectral redundancies as much as possible. 1.4 COMPRESSION STRATEGIES: Compression techniques are classified into two categories. 1.4.1 Lossless compression: Also known as information-preserving compression, lossless compression comprises those techniques guaranteed to generate an exact duplicate of the input data set after a compress/decompress cycle.
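The spatial redundancy described in section 1.3 can be made concrete by measuring the correlation between horizontally adjacent pixels. The sketch below is illustrative only: the synthetic "images" and the helper `horizontal_correlation` are not from this report, but they show why a smooth natural-looking image is far more redundant than noise.

```python
# Illustrative sketch: estimate spatial redundancy by correlating each
# pixel with its right-hand neighbour. A smooth gradient stands in for a
# natural image; uniform noise stands in for an incompressible one.
import random

def horizontal_correlation(img):
    """Pearson correlation between pixels and their right neighbours."""
    xs, ys = [], []
    for row in img:
        for a, b in zip(row, row[1:]):
            xs.append(a)
            ys.append(b)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

random.seed(0)
# Smooth image: neighbouring pixels differ only by small amounts.
smooth = [[10 * r + c + random.random() for c in range(32)] for r in range(32)]
# Pure noise: neighbouring pixels are unrelated.
noise = [[random.random() * 255 for c in range(32)] for r in range(32)]

print(horizontal_correlation(smooth) > 0.9)   # True: highly redundant
print(horizontal_correlation(noise) < 0.3)    # True: little redundancy
```

A high neighbour correlation is exactly the redundancy that the decorrelating transform of section 1.5 is designed to remove.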

It is mainly used in medical applications. The maximum compression ratio typically achieved with this approach is about 2:1. 1.4.2 Lossy compression: Lossy compression provides a higher level of data reduction but results in a less than perfect reproduction of the original image. Lossy image compression is useful in applications such as broadcast television, video conferencing, and facsimile transmission, in which a certain amount of error is an acceptable trade-off for increased compression performance. The compression ratio is higher with this technique, and the lost information is generally imperceptible to the human eye. 1.4.3 Lossless vs. lossy compression: In lossless compression schemes, the reconstructed image, after compression, is numerically identical to the original image; however, lossless compression can only achieve a modest amount of compression. An image reconstructed following lossy compression contains degradation relative to the original, because the compression scheme discards information judged irrelevant. In exchange, lossy schemes are capable of achieving much higher compression, and under normal viewing conditions no visible loss is perceived (visually lossless).
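The lossless/lossy distinction can be demonstrated with any general-purpose lossless codec. The sketch below uses Python's zlib (not the wavelet coder developed in this report) on artificial, highly regular data, which is why the ratio here far exceeds the roughly 2:1 typical of natural images.

```python
# Hedged illustration of lossless compression using zlib: the
# decompressed output is bit-for-bit identical to the input.
import zlib

original = bytes(range(256)) * 64           # 16 KiB of highly regular data
compressed = zlib.compress(original, level=9)
restored = zlib.decompress(compressed)

print(restored == original)                 # True: exact reconstruction
print(len(original) / len(compressed) > 2)  # True: regular data compresses well
```

A lossy coder would instead trade away exact reconstruction for a much smaller `compressed` stream.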

1.5 BLOCK DIAGRAM: As shown in fig. 1.1, compression is accomplished by applying a


Fig. 1.1 Block diagram: Image signal/Image → Source encoder (Wavelet Transform) → Quantizer (SPIHT Algorithm) → Entropy Encoder (SPIHT Algorithm) → Compressed signal/Image

linear transform to decorrelate the image data, quantizing the resulting transform coefficients, and entropy coding the quantized values. 1.5.1 Source encoder (linear transforms): Over the years, a variety of linear transforms have been developed, including the Discrete Fourier Transform (DFT), the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), each with its own advantages and disadvantages. Wavelet transform: The wavelet transform may be described as a series of filter banks, as shown in figure 1.2. The filter bank is made up of a series of lowpass and highpass filters. The number of decomposition levels depends on the image dimensions: if the width and height of the image are equal to 2^N, then N levels are possible. After each level is processed, more of the

wavelet-transformed data is high-frequency data. If the maximum number of levels is selected, then after the transformation only one lowpass coefficient remains, with the remainder being high-frequency data.
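As a concrete illustration of this filter-bank view, the following sketch implements the orthonormal Haar filters, the simplest possible choice (the filters actually used in the report's implementation may differ). Each level splits the signal into a lowpass (approximation) half and a highpass (detail) half, and the lowpass branch is iterated.

```python
# Sketch of a wavelet filter bank using Haar filters.
import math

def haar_level(signal):
    """One analysis level: returns (lowpass, highpass) halves."""
    s = 1 / math.sqrt(2)
    low = [s * (a + b) for a, b in zip(signal[0::2], signal[1::2])]
    high = [s * (a - b) for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

def haar_decompose(signal, levels):
    """Iterate the filter bank on the lowpass branch."""
    details = []
    low = list(signal)
    for _ in range(levels):
        low, high = haar_level(low)
        details.append(high)
    return low, details

# A length-2^N signal admits N levels; here N = 3, leaving a single
# lowpass value, with the remainder being detail (high-frequency) data.
low, details = haar_decompose([1, 2, 3, 4, 5, 6, 7, 8], levels=3)
print(len(low))                   # 1
print([len(d) for d in details])  # [4, 2, 1]
```

For a 2-D image the same pair of filters is applied along rows and then columns at each level, producing the LL/LH/HL/HH subbands discussed in chapter 3.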

Fig. 1.2 Decomposition levels. 1.5.2 Quantizer: Quantizing refers to a reduction of the precision of the floating-point values of the wavelet transform. A quantizer reduces the number of bits needed to store the transformed coefficients by reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process, and it is the main source of compression in an encoder. Quantization can be performed on each individual coefficient, which is known as Scalar Quantization (SQ), or on a group of coefficients together, which is known as Vector Quantization (VQ). 1.5.3 Entropy encoder: An entropy encoder further compresses the quantized values to give better overall compression. It uses a model to determine the probabilities of the quantized values and produces an appropriate code based on these probabilities, so that the resultant output code stream is smaller than the input stream. The most commonly used entropy encoders are the Huffman encoder and the arithmetic encoder, although for

applications requiring fast transmission with high quality, the SPIHT encoding scheme is very effective. CHAPTER 2 WAVELET TRANSFORMATION 2.1 INTRODUCTION: The wavelet transform provides a time-frequency representation of a signal. Often a particular spectral component occurring at a particular instant is of special interest, and in such cases it is beneficial to know the time intervals in which those spectral components occur. The wavelet transform is capable of providing time and frequency information simultaneously, hence giving a time-frequency representation of the signal. The frequency and time of a signal cannot both be known exactly at a single point in the time-frequency plane: we cannot know which spectral component exists at a given time instant, only which spectral components exist within a given interval of time. This is a problem of resolution, and it is the main reason why researchers have switched from the Short-Time Fourier Transform (STFT) to the Wavelet Transform (WT). The STFT gives a fixed resolution at all times, whereas the WT gives a variable resolution: higher frequencies are better resolved in time, and lower frequencies are better resolved in frequency. This means that a high-frequency component can be located more accurately in time (with less relative error) than a low-frequency component.

Conversely, a low-frequency component can be located more accurately in frequency than a high-frequency component. The fundamental idea behind wavelets is to analyze according to scale. The wavelet analysis procedure adopts a wavelet prototype function called an analyzing wavelet or mother wavelet; any signal can then be represented by translated and scaled versions of the mother wavelet. Wavelet analysis is capable of revealing aspects of data that other signal analysis techniques such as Fourier analysis miss: trends, breakdown points, discontinuities in higher derivatives, and self-similarity. Furthermore, because it affords a different view of data than that presented by traditional techniques, it can compress or de-noise a signal without appreciable degradation. The purpose of this project is to compress an image using the wavelet transform. After successful compression, the file size is reduced without losing important details. The final product will allow people to send images more quickly over the Internet, save space, and have higher-quality digital images. 2.2 WHY WAVELETS?

Wavelet transforms are effective for image compression: they reduce file size without losing important details, save space and bandwidth, and produce higher-quality digital images. Unlike the DFT or DCT, wavelet-transformed data has its energy more concentrated, which allows coarser quantization and a higher compression ratio, with fewer of the blocking effects that commonly occur in other image compression formats. A useful wavelet may be visualized as a small wave that approaches

zero quickly as time approaches infinity. Wavelet transforms are ideal for representing signals with low-frequency components mixed with a small number of sharp transitions. The wavelet-transformed data has a useful energy distribution: the energy is concentrated in a few large coefficients, while most of the high-frequency coefficients are near zero. This concentration of energy helps to reconstruct a good approximation of the original image from relatively few coefficients. 2.2.1 Time-frequency resolution: A major drawback of Fourier analysis is that in transforming to the frequency domain, the time-domain information is lost; when looking at the Fourier transform of a signal, it is impossible to tell when a particular event took place. In an effort to correct this deficiency, Dennis Gabor (1946) adapted the Fourier transform to analyze only a small section of the signal at a time. This technique, called windowing the signal, yields the Windowed Fourier Transform (WFT), which gives information about a signal simultaneously in the time domain and the frequency domain. To illustrate the time-frequency resolution differences between the Fourier transform and the wavelet transform, consider figures 2.1 and 2.2.

Fig. 2.1 WFT resolution. Figure 2.1 shows a windowed Fourier transform, where the window is simply a square wave. The square-wave window truncates the sine or cosine function to fit a window of a particular width. Because a single window is used for all frequencies in the WFT, the resolution of the analysis is the same at all locations in the time-frequency plane. An advantage of wavelet transforms is that the windows vary: wavelet analysis allows the use of long time intervals where more precise low-frequency information is wanted, and shorter regions where high-frequency information is wanted. A way to achieve this is to have short high-frequency basis functions and long low-frequency ones. Figure 2.2 shows a time-scale view for wavelet analysis rather than a time-frequency region. Scale is inversely related to frequency: a low-scale, compressed wavelet with rapidly changing details corresponds to a high frequency, and a high-scale, stretched wavelet that changes slowly corresponds to a low frequency.

Fig. 2.2 Wavelet resolution. The purpose served by the wavelet transform is that it produces a large number of values having zero, or near-zero, magnitudes. 2.3 TYPES: There are two classes of wavelet transform: 1. the continuous wavelet transform and 2. the discrete wavelet transform. 2.3.1 Continuous Wavelet Transformation (CWT): The Morlet-Grossmann definition of the continuous wavelet transform of a 1-D signal f(x) is

    W(a, b) = (1/√a) ∫ f(x) ψ*((x − b)/a) dx

where ψ*(z) denotes the complex conjugate of ψ(z), ψ(x) is the analyzing wavelet, a (> 0) is the scale parameter and b is the position parameter. The transform is characterized by the following three properties: 1. It is a linear transformation. 2. It is covariant under translations:

    f(x) → f(x − u)   implies   W(a, b) → W(a, b − u)

3. It is covariant under dilations:

    f(x) → f(x/s)   implies   W(a, b) → √s · W(a/s, b/s),   s > 0

2.3.2 Discrete Wavelet Transformation (DWT): In general, when dealing with stationary signals, whose statistical properties are invariant over time, the ideal tool is the Fourier transform, which represents a signal as an infinite linear combination of dilated cosine and sine waves. When we encounter non-stationary signals, we can instead represent them by linear combinations of atomic decompositions known as wavelets. These wavelets, or atoms, allow us to extract the simple constituents that make up a complicated structure or signal. The atoms used in signal processing can be grouped into time (space)-scale, time (space)-frequency, or a combination of both. In the time (space)-scale group we find the Grossmann-Morlet and Daubechies wavelet bases, defined by the functions shown below. These functions are used to obtain the wavelet coefficients used in wavelet analysis and synthesis.

Grossmann-Morlet wavelets:

    ψ_{a,b}(x) = (1/√a) ψ((x − b)/a),   a > 0, b real

Daubechies wavelets:

    ψ_{j,k}(x) = 2^{j/2} ψ(2^j x − k),   j, k integers

In the Grossmann-Morlet wavelet function, the parameter a gives the scaling factor and b the center (location) of the function. In the case of the Daubechies wavelet function, scaling is given by the parameter j and the center by the parameter k. DWTs are particularly effective for analyzing waveforms which have spikes or pulses buried in noise: the noise may be removed more effectively than with Fourier-transform filtering, while the shape of the pulses is preserved. The orthogonal DWT also conserves energy, in the sense of a Parseval-type theorem. 2.3.3 Choosing the right wavelet: The low- and highpass filters (subband coders) are, in reality, the wavelet that is used, and a wide variety of wavelets have been created over time. The lowpass filter is called the scaling function, and the highpass filter is the wavelet function. Different wavelets give better results depending on the type of data and its characteristics.
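The Parseval-type energy conservation mentioned above can be checked numerically for the orthonormal Haar wavelet (the simplest Daubechies wavelet, db1). This is an illustrative sketch, not part of the report's implementation.

```python
# Energy conservation of one orthonormal Haar analysis level:
# sum of squared samples equals sum of squared coefficients.
import math

def haar_level(signal):
    s = 1 / math.sqrt(2)
    low = [s * (a + b) for a, b in zip(signal[0::2], signal[1::2])]
    high = [s * (a - b) for a, b in zip(signal[0::2], signal[1::2])]
    return low, high

signal = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
low, high = haar_level(signal)

energy_in = sum(x * x for x in signal)
energy_out = sum(x * x for x in low + high)
print(abs(energy_in - energy_out) < 1e-9)   # True
```

Algebraically this holds because ((a+b)² + (a−b)²)/2 = a² + b² for every input pair, so iterating the filter bank preserves energy at every level.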


CHAPTER 3 ENCODING TECHNIQUES 3.1 INTRODUCTION: Encoding is the process of transforming information from one format into another. The encoding techniques used here can effectively compress the image and also transform it into a form suitable for transmission over various media. 3.2 TYPES: The encoding techniques employed with wavelet transforms are 1. EZW, 2. STW and 3. SPIHT. 3.2.1 Embedded Zerotree Wavelet (EZW): The EZW algorithm was one of the first algorithms to show the full power of wavelet-based image compression. An embedded coding is a process of encoding the transform magnitudes that allows for progressive transmission of the compressed image. Zerotrees are a concept that allows for a concise encoding of the positions of significant values that arise during the embedded coding process.


3.2.2 Spatial-orientation Tree Wavelet (STW): The only difference between STW and EZW is that STW uses a different approach to encoding the zerotree information: a state-transition model. From one threshold to the next, the locations of transform values undergo state transitions, which allows STW to reduce the number of bits needed for encoding. 3.2.3 Set Partitioning In Hierarchical Trees (SPIHT): SPIHT is a coding algorithm for progressive image transmission. The proposed SPIHT coding maintains quality at low bit-rates and has three improved features. First, to reduce the amount of memory used, SPIHT coding introduces three lists to store the significance information, classifying the coefficients of a wavelet-transformed image into three sets:

the list LIP of insignificant pixels, which contains the coordinates of those coefficients that are insignificant with respect to the current threshold T;

the list LSP of significant pixels, which contains the coordinates of those coefficients that are significant with respect to T; and the list LIS of insignificant sets, which contains the coordinates of the roots of insignificant subtrees. During the compression procedure, the sets of coefficients in LIS

are refined, and if coefficients become significant they are moved from LIP to LSP. Given the hierarchical pyramid nature of the spatial orientation tree, breadth-first search (BFS) provides a better architecture.

3.3 PROGRESSIVE IMAGE TRANSMISSION: A key to progressive image transmission is to apply a multi-resolution decomposition to the target image. The multi-resolution decomposition provides a multi-resolution representation of an image: at different resolutions, the details of an image characterize different physical structures of the scene, and at a coarse resolution these details correspond to the larger structures which provide the image content. It is therefore natural to analyze the image details at a coarse resolution first and then gradually increase the resolution. 3.4 EMBEDDED CODING: In embedded coding, the coded bits are ordered in accordance with their importance, and all lower-rate codes are provided at the beginning of the bit-stream. Using an embedded code, the encoder can terminate the encoding process at any stage, so as to exactly satisfy the target bit-rate specified by the channel: the encoder maintains a bit count and truncates the bit-stream whenever the target bit-rate is reached. The embedded coding used in SPIHT is more general and sophisticated than simple bit-plane coding, where encoding commences with the most significant bit-plane and progressively continues with the next most significant bit-plane, and so on. If the target bit-rate is reached before the less significant bit-planes are added to the bit-stream, there will be reconstruction error at the receiver, but the significance ordering of the embedded bit-stream helps to reduce the reconstruction error at the given target bit-rate.


3.4.1 Relationship between subbands: Every coefficient at a given scale can be related to a set of coefficients at the next finer scale of similar orientation. Only the highest-frequency subbands are exceptions, since no finer scale exists beyond them. The coefficient at the coarser scale is called the parent, and the coefficients at the next finer scale in similar orientation and the same spatial location are its children. For a given parent, the set of all coefficients at all finer scales of similar orientation and spatial location are called its descendants. Similarly, for a given child, the set of coefficients at all coarser scales of similar orientation and the same spatial location are called its ancestors.
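Under the usual dyadic indexing (an assumption here; the report does not state the index convention explicitly), the children of a coefficient at (row, col) lie at the same spatial location one scale finer. A hedged sketch:

```python
# Illustrative parent-child mapping following the common EZW/SPIHT
# convention: each coefficient has four children in the next finer subband.
def children(row, col):
    """Four children of a coefficient under the standard dyadic mapping."""
    return [(2 * row, 2 * col), (2 * row, 2 * col + 1),
            (2 * row + 1, 2 * col), (2 * row + 1, 2 * col + 1)]

def descendants(row, col, depth):
    """All descendants down to the given number of finer scales."""
    out, frontier = [], [(row, col)]
    for _ in range(depth):
        frontier = [c for node in frontier for c in children(*node)]
        out.extend(frontier)
    return out

# A coefficient has 4 children one level down and 16 grandchildren two
# levels down, matching the HH3 -> HH2 -> HH1 counts described for fig. 3.1.
print(len(children(3, 5)))         # 4
print(len(descendants(3, 5, 2)))   # 20 = 4 + 16
```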

Fig. 3.1 Parent-child dependencies of subbands


Figure 3.1 illustrates this concept, showing the descendants of a DWT coefficient in the HH3 subband. Note that the coefficient under consideration has four children in the HH2 subband, since HH2 has four times the resolution of HH3. Likewise, the coefficient in HH3 has sixteen descendants in subband HH1, which in this case is the highest-resolution subband. For a coefficient in the LL subband, which exists only at the coarsest scale (in this case, LL3), the hierarchical concept is slightly different: a coefficient in LL3 has three children, one in HL3, one in LH3 and one in HH3, all at the same spatial location. Thus every coefficient in any subband other than LL3 has its ultimate ancestor in the LL3 subband. The relationship defined above best depicts the space-frequency localization of wavelet transforms: if we form a descendant tree starting with a coefficient in LL3 as the root node, the tree spans all coefficients at all higher-frequency subbands at the same spatial location. 3.4.2 Encoding the significance map: To efficiently encode the significance map at any pass, a tree-like significance-map data structure (a zerotree) is defined that includes an insignificant coefficient provided all the descendants of that coefficient are also insignificant. A zerotree must therefore have a root which is itself insignificant but whose parent is significant at that threshold. If all the ancestors up to the coarsest-frequency LL subband form the zerotree, then the ancestor in the LL subband is declared the zerotree root. The zerotree


concept is based on the hypothesis that if a DWT coefficient at a coarse scale is insignificant with respect to a given threshold, then all its higher-frequency descendants are likely to be insignificant with respect to the same threshold. Although this is not always true, it generally holds; it is possible for a coefficient to be insignificant yet have some significant descendants. The encoding process is shown as a flowchart in figure 3.2.

Fig. 3.2 Flowchart for encoding


CHAPTER 4 SPIHT ALGORITHM 4.1 PROPOSED SPIHT CODING ALGORITHM: 4.1.1 Principles: The proposed SPIHT (Set Partitioning in Hierarchical Trees) coding is based on the EZW algorithm. To implement SPIHT in silicon, we examine the performance and memory requirements of a hardware implementation. In our opinion, the SPIHT coding has three essential advantages, as follows. Less memory required: other algorithms may occupy a large amount of memory to store the wavelet coefficients. The SPIHT coding algorithm uses three lists: LSP (List of Significant Pixels), LIP (List of Insignificant Pixels) and LIS (List of Insignificant Sets). The proposed SPIHT coding therefore occupies less memory than the EZW algorithm. Improved refinement pass: in the EZW algorithm, the refinement pass outputs the n-th most significant bit of |c(i,j)| for each coefficient in the list, except those included in the last sorting pass. To implement the coding algorithm on a chip, EZW needs more hardware control and more memory space to store extra information for the refinement pass. A better approach, adopted in SPIHT, is to put the sorting pass before the refinement pass. Thus SPIHT does not need to store the last address


or other information for the refinement pass, and is more efficient than EZW coding. Efficient breadth-first search (BFS): the spatial orientation tree is defined on the hierarchical pyramid, and the proposed SPIHT algorithm traverses it with a breadth-first search (BFS), as illustrated in figure 4.1.

Fig. 4.1 Breadth-first search (BFS)



1. Enqueue the root node.
2. Pull a node from the beginning of the queue and examine it.
   i. If the searched element is found in this node, quit the search.
   ii. Otherwise push all the (so-far-unexamined) successors (the direct child nodes) of this node onto the end of the queue, if there are any.
3. If the queue is empty, every node on the graph has been examined; quit the search and report that the element was not found.
4. Otherwise, repeat from step 2.
4.2 THEORY: The SPIHT wavelet compression method is a refined form of zerotree coding with improved performance. It has been modified to work with the proposed processing of wavelet-transformed data, for better coding efficiency. Three lists are maintained: significant pixels (LSP), insignificant pixels (LIP), and insignificant sets (LIS). The algorithm checks the coefficients in the insignificant-pixel list (LIP) against the current threshold; if a coefficient is significant (i.e., greater than the current threshold), it is moved to the significant list (LSP) and a 1 is output, otherwise a 0 is output. Then LIS is scanned to check the significance of the four descendants of each set root. If any of the children is significant, a 1 is output, the significant child is moved to LSP, and the set is split further into four (child) sets.
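The BFS procedure described in section 4.1 can be sketched with an explicit FIFO queue; the example tree here is hypothetical.

```python
# Breadth-first search with a FIFO queue (collections.deque).
from collections import deque

def bfs(tree, root, target):
    """Return True if target is reachable from root, visiting level by level."""
    queue = deque([root])
    while queue:                          # empty queue => every node examined
        node = queue.popleft()            # pull from the beginning of the queue
        if node == target:
            return True                   # searched element found: quit
        queue.extend(tree.get(node, []))  # push direct children onto the end
    return False                          # not found

# A small pyramid-like tree given as an adjacency map.
tree = {1: [2, 3], 2: [4, 5], 3: [6, 7], 6: [8]}
print(bfs(tree, 1, 8))    # True
print(bfs(tree, 1, 99))   # False
```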


If all the children are insignificant, then a 0 is output and no operation is done. The modified algorithm no longer checks all the descendants D(m) for significance, or the children-onward descendants G(m), because the absence of isolated significant coefficients from the processed wavelet data eliminates the need to check all of D(m) and/or G(m). It only needs to check the four children C(m), since all the significant coefficients within a subband tree are linked together in the processed wavelet data. 4.3 ALGORITHM STEPS:
D(m) = {descendant indices of the index m}
C(m) = {child indices of the index m}
G(m) = D(m) − C(m) = {grandchildren and beyond of m, i.e., descendants which are not children}
∅ = the empty set
Step 1 (Initialize): Choose an initial threshold T0 such that all transform values satisfy |w(m)| < T0 and at least one value satisfies |w(m)| ≥ T0/2. Set LIP equal to H, set LSP equal to ∅, and set LIS equal to all the indices in H that have descendants (assigning them all type D).
Step 2 (Update threshold): Let Tk = Tk−1 / 2.
Step 3 (Sorting pass):

For each m in LIP do:
    Output Sk[m]
    If Sk[m] = 1 then
        Move m to end of LSP
        Output sign of w(m); set wQ(m) = Tk
Continue until end of LIP
For each m in LIS do:
    If m is of type D then
        Output Sk[D(m)]
        If Sk[D(m)] = 1 then
            For each n ∈ C(m) do:
                Output Sk[n]
                If Sk[n] = 1 then
                    Append n to LSP
                    Output sign of w(n); set wQ(n) = Tk
                Else if Sk[n] = 0 then
                    Append n to LIP
            If G(m) ≠ ∅ then
                Move m to end of LIS as type G
            Else
                Remove m from LIS
    Else if m is of type G then
        Output Sk[G(m)]
        If Sk[G(m)] = 1 then
            Append C(m) to LIS, all as type D indices
            Remove m from LIS
Continue until end of LIS
Notice that the set LIS can undergo many changes during this procedure; it typically does not remain fixed throughout.
Step 4 (Refinement pass): Scan through the indices m in LSP found with higher threshold values Tj, for j < k (if k = 1, skip this step). For each value w(m), do the following:
    If |w(m)| ∈ [wQ(m), wQ(m) + Tk) then
        Output bit 0
    Else, if |w(m)| ∈ [wQ(m) + Tk, wQ(m) + 2Tk) then
        Output bit 1
        Replace the value of wQ(m) by wQ(m) + Tk
Step 5 (Loop): Repeat steps 2 through 4.
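A heavily simplified sketch of steps 1 through 5 on a flat list of hypothetical coefficients follows. It keeps the threshold halving, the sorting pass (LIP → LSP) and the refinement pass, but omits the LIS set partitioning, so it illustrates the embedded-coding idea rather than the full algorithm.

```python
# SPIHT-like progressive quantization on a flat coefficient list
# (no spatial-orientation trees; illustration only).
def spiht_like(coeffs, passes):
    """Return decoder-side quantized magnitudes after the given passes."""
    t = max(abs(c) for c in coeffs)   # all |w(m)| < 2t, at least one >= t
    lip = list(range(len(coeffs)))    # insignificant indices
    lsp = []                          # significant indices
    wq = [0.0] * len(coeffs)          # quantized magnitudes
    for _ in range(passes):
        # Refinement pass over previously significant coefficients.
        for m in lsp:
            if abs(coeffs[m]) >= wq[m] + t:
                wq[m] += t
        # Sorting pass: test insignificant coefficients against t.
        for m in list(lip):
            if abs(coeffs[m]) >= t:
                lip.remove(m)
                lsp.append(m)
                wq[m] = t
        t /= 2                        # Step 2: halve the threshold
    return wq

w = [26.0, -13.0, 10.0, 6.0, 1.0]
# The embedded property: error shrinks as more passes are decoded.
mags = [abs(x) for x in w]
e1 = sum(abs(a - b) for a, b in zip(mags, spiht_like(w, 1)))
e4 = sum(abs(a - b) for a, b in zip(mags, spiht_like(w, 4)))
print(e4 < e1)    # True
```

Stopping after any pass yields a valid (coarser) reconstruction, which is exactly what lets the encoder truncate the bit-stream at the target bit-rate.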

CHAPTER 5 XILINX ISE OVERVIEW 5.1 Xilinx ISE: The Integrated Software Environment (ISE) is the Xilinx design software suite that allows you to take a design from design entry through Xilinx device programming. The ISE Project Navigator manages and processes the design through the steps of the ISE design flow described below. 5.2 XST: Xilinx Synthesis Technology (XST) is a Xilinx application that synthesizes Hardware Description Language (HDL) designs to create Xilinx-specific netlist files called NGC files. The NGC file is a netlist that contains both logical design data and constraints; it takes the place of both the Electronic Data Interchange Format (EDIF) and Netlist Constraints File (NCF) files. 5.3 SOFTWARE REQUIREMENTS:


Xilinx ISE 9.1i 5.4 HARDWARE REQUIREMENTS: Spartan-3 Startup Kit, containing the Spartan-3 Startup Kit Demo Board

5.5 DESIGN ENTRY: Design entry is the first step in the ISE design flow. During design entry, you create your source files based on your design objectives. You can create your top-level design file using a Hardware Description Language (HDL), such as VHDL, Verilog or ABEL, or using a schematic, and you can use multiple formats for the lower-level source files in your design. Note that if you are working with a synthesized EDIF or NGC/NGO file, you can skip design entry and synthesis and start with the implementation process. 5.5.1 Synthesis: After design entry and optional simulation, you run synthesis. During this step, VHDL, Verilog, or mixed-language designs become netlist files that are accepted as input to the implementation step. 5.5.2 Implementation:


After synthesis, you run design implementation, which converts the logical design into a physical file format that can be downloaded to the selected target device. From Project Navigator, you can run the implementation process in one step, or you can run each of the implementation processes separately. Implementation processes vary depending on whether you are targeting a Field Programmable Gate Array (FPGA) or a Complex Programmable Logic Device (CPLD).

5.5.3 Verification: You can verify the functionality of your design at several points in the design flow. You can use simulator software to verify the functionality and timing of your design or a portion of your design. The simulator interprets VHDL or Verilog code into circuit functionality and displays logical results of the described HDL to determine correct circuit operation. Simulation allows you to create and verify complex functions in a relatively small amount of time. You can also run in-circuit verification after programming your device. 5.5.4 Device configuration: After generating a programming file, you configure your device. During configuration, you generate configuration files and download the programming files from a host computer to a Xilinx device. Designs are usually made up of combinatorial logic and macros such as flip-flops, adders, subtractors, counters, FSMs, and RAMs. The

macros greatly improve the performance of the synthesized designs. It is important to use coding techniques to model the macros so that they are optimally processed by XST. XST first tries to recognize (infer) as many macros as possible. These macros are then passed to the low-level optimization step. To obtain better optimization results, the macros are either preserved as separate blocks or merged with the surrounding logic; this filtering depends on the type and size of the macro. For example, by default, 2-to-1 multiplexers are not preserved by the optimization engine. Synthesis constraints control the processing of inferred macros. 5.5.5 Signed and unsigned support in XST: When using Verilog or VHDL in XST, some macros, such as adders or counters, can be implemented for signed and unsigned values. To enable support for signed and unsigned values in Verilog, enable Verilog-2001 as follows:

In Project Navigator, select Verilog 2001 as instructed in the Synthesis Options topic of ISE Help, or

Set the -verilog2001 command line option to yes.

For VHDL, depending on the operation and type of the operands, you must include additional packages in your code. For example, to create an unsigned adder, use the arithmetic packages and types that operate on unsigned values shown in Table 5.1, Unsigned Adder.


PACKAGE              TYPE
numeric_std          unsigned
std_logic_arith      unsigned
std_logic_unsigned   std_logic_vector

Table 5.1 Unsigned Adder

To create a signed adder, use the arithmetic packages and types that operate on signed values shown in Table 5.2, Signed Adder.

PACKAGE              TYPE
numeric_std          signed
std_logic_arith      signed
std_logic_signed     std_logic_vector

Table 5.2 Signed Adder

5.6 XST VHDL LANGUAGE SUPPORT: VHDL offers a broad set of constructs for compactly describing complicated logic. VHDL allows the description of the structure of a system: how it is decomposed into subsystems, and how those subsystems are interconnected.


VHDL allows the specification of the function of a system using familiar programming-language forms. VHDL allows the design of a system to be simulated before being implemented and manufactured; this allows you to test for correctness without the delay and expense of hardware prototyping. VHDL provides a mechanism for easily producing a detailed, device-dependent version of a design to be synthesized from a more abstract specification; this allows you to concentrate on more strategic design decisions and reduce the overall time to market for the design.

CHAPTER 6 DESCRIPTION OF VHDL 6.1 INTRODUCTION: VHDL is the VHSIC Hardware Description Language; VHSIC is an abbreviation for Very High Speed Integrated Circuits. VHDL can describe the behavior and structure of electronic systems, but it is particularly suited to describing the structure and behavior of digital electronic hardware designs, such as ASICs and FPGAs, as well as conventional digital circuits. VHDL is a notation, precisely and completely defined by the Language Reference Manual (LRM). This sets VHDL apart from other hardware description languages, which are to some extent defined in an ad hoc way by the behavior of the tools that use them. VHDL is an international


standard, regulated by the IEEE; the definition of the language is not proprietary. VHDL is not an information model, a database schema, a simulator, a tool set or a methodology. Simulation and synthesis are the two main kinds of tools that operate on the VHDL language. 6.2 DESIGN FLOW USING VHDL: The basic design flow of VHDL consists of synthesis and implementation parts.

6.2.1 Analysis: It consists of compiling the .vhd file and storing it in a design library. During compilation, the compiler checks the syntax and semantics of the .vhd file and converts it into an intermediate form, which is stored in a specific design library. Finally, during analysis, the analyzer maps the equation-based form of the design to a standard cell library. 6.2.2 Elaboration: During the elaboration phase, flattening of the design hierarchy takes place. Elaboration involves the following tasks:

Signals, processes and variables are initialized; components are bound to architectures; memory is allocated for the storage of various objects.
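The binding of components to architectures performed during elaboration can also be directed explicitly with a configuration declaration. A sketch, in which all entity, architecture and instance names are illustrative:

```vhdl
-- Hypothetical configuration binding instance U1 of component my_comp
-- (inside architecture rtl of entity top) to a specific architecture.
configuration cfg_top of top is
  for rtl
    for U1 : my_comp
      use entity work.my_comp(behavioral);
    end for;
  end for;
end configuration cfg_top;
```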

The final result of the elaboration process is a flat collection of signal nets and processes. 6.2.3 Initialization: The initialization process follows elaboration. Initialization involves the following tasks: simulation time is reset to 0 ns; explicit signals of ports and signals of architectures are initialized, resolved and assigned values; signals declared implicitly by attributes are assigned values; processes are executed until they suspend. 6.2.4 Simulation: Simulation follows initialization. During simulation, execution of the processes takes place. 6.2.5 Synthesis: After the design has been successfully simulated, the synthesis stage converts the text-based design into a gate-level netlist. This netlist is a non-human-readable file that describes the actual circuit to be implemented at a very low level. 6.2.6 Translate:


This stage prepares the synthesized design for use within the FPGA. It checks the design and ensures that the netlist is consistent with the chosen architecture. The result is stored in a file in a tool-specific binary format that describes the logic design in terms of design primitives such as latches, flip-flops and function generators. 6.2.7 Mapping: In this stage, the design is distributed to the resources in the FPGA. Thus, mapping assigns a design's logic elements to the specific physical elements, such as CLBs and IOBs, that actually implement logic functions in the device. 6.2.8 Place and Route (PAR): During the placement phase of PAR, the design blocks created during mapping are assigned specific locations in the FPGA. The routing phase assigns the interconnect paths in the FPGA. 6.2.9 Bit Stream Generation: The bit file (.bit) generated in the implementation stage is used to transfer the program to the FPGA kit for implementation and verification. 6.2.10 Configuration: During configuration, the bit file is downloaded from the programming file into the SRAM of the target device. The complete design flow can be represented graphically as depicted in Figure 6.1.


6.2.11 System level verification: As a first step, VHDL may be used to model and simulate aspects of the complete system containing one or more devices. This may be a fully functional description of the system, allowing the FPGA/ASIC specification to be validated prior to commencing detailed design. Alternatively, it may be a partial description that abstracts certain properties of the system, such as a performance model to detect system performance bottlenecks.


Fig. 6.1 VHDL design flow


6.2.12 RTL design and Test Bench Creation: Once the overall system architecture and partitioning is stable, the detailed design of each FPGA/ASIC can commence. This starts by capturing the design in VHDL at the RTL level, and capturing a set of test cases in VHDL. These two tasks are complementary, and are sometimes performed by different design teams in isolation to ensure that the specification is correctly interpreted. The RTL VHDL should be synthesizable if automatic logic synthesis is to be used. Test case generation is a major task that requires a disciplined approach and much engineering ingenuity; the quality of the final FPGA/ASIC depends on the coverage of these test cases. 6.2.13 RTL Verification: The RTL VHDL is then simulated to validate the functionality against the specification. RTL simulation is usually one or two orders of magnitude faster than gate-level simulation, and experience has shown that this speed is best exploited by doing more simulation, not by spending less time on simulation. In practice it is common to spend 70-80% of the design cycle writing and simulating VHDL at and above the register transfer level, and 20-30% of the time synthesizing and verifying the gates. 6.2.14 Look-Ahead Synthesis: Although some exploratory synthesis will be done early in the design process, to provide accurate speed and area data to aid in the evaluation of architectural decisions and to check the engineer's understanding


of how the VHDL will be synthesized, the main synthesis production run is deferred until functional simulation is complete. It is pointless to invest a lot of time and effort in synthesis until the functionality of the design is validated. 6.3 GENERAL VHDL CODE:

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity <entity_name> is
  -- the entity name is declared here
  port (
    -- all ports are described here, each with its mode
    -- (in or out) and its data type
  );
end <entity_name>;

architecture <architecture_name> of <entity_name> is
begin
  -- VHDL statements
end <architecture_name>;
-- end of VHDL code
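The general skeleton above, filled in for a two-input AND gate, together with a minimal self-checking test bench of the kind discussed in section 6.2.12. All entity, architecture and signal names here are illustrative:

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity and2 is
  port (a, b : in  STD_LOGIC;
        y    : out STD_LOGIC);
end and2;

architecture dataflow of and2 is
begin
  y <= a and b;
end dataflow;

-- A test bench has an empty entity: it has no ports of its own.
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity tb_and2 is
end tb_and2;

architecture test of tb_and2 is
  signal a, b, y : STD_LOGIC;
begin
  dut : entity work.and2(dataflow) port map (a => a, b => b, y => y);

  stim : process
  begin
    a <= '0'; b <= '1'; wait for 10 ns;
    assert y = '0' report "0 and 1 should be 0" severity error;
    a <= '1'; b <= '1'; wait for 10 ns;
    assert y = '1' report "1 and 1 should be 1" severity error;
    wait;  -- suspend the process forever
  end process;
end test;
```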


Similar to many programming languages, VHDL supports comments. Comments are not part of the VHDL design, but allow the user to make notes referring to the VHDL code, usually as an aid to understanding it. Here the comment is a header that tells us that the VHDL describes an AOI gate. A VHDL comment, which is ignored by the VHDL compiler, can be on a separate line or at the end of a line of VHDL code, but in any case stops at the end of the line. 6.3.1 Library IEEE: Use IEEE.STD_LOGIC_1164.ALL; The entity declaration above is preceded by a library clause (library IEEE) and a use clause (use IEEE.STD_LOGIC_1164.ALL). This gives the entity access to all the names declared within the package STD_LOGIC_1164 in particular. 6.3.2 Entity<entity name> is: The name of the entity is just an arbitrary label invented by the user. It does not correspond to a predefined entity in a VHDL component library; entity and is are VHDL keywords. This line defines the start of a new VHDL design unit definition. The library and use clauses, although written before the entity declaration, do not define the start of the VHDL description of the design unit; they are context clauses. We can think of an entity declaration as corresponding to a chip package.

38

6.3.3 Port ( --VHDL statements ); The entity declaration includes the name of the entity and a set of port declarations. A port may correspond to a pin on an IC, an edge connector on a board, or any logical channel of communication. Each port declaration specifies the names of one or more ports (for example a, b), the direction in which information is allowed to flow through the ports (in, out or inout), and the data type of the ports (i.e., STD_LOGIC). In our example each port declaration corresponds to a pin of our chip. The data type of a port defines the set of values that may flow through the port. The ports are of type STD_LOGIC, which is found in the package STD_LOGIC_1164 on library IEEE. A package is a VHDL language construct where new data types may be defined, and the particular package STD_LOGIC_1164 is an IEEE standard for representing digital signals in VHDL. The concept of data type is borrowed by VHDL from the world of software. It allows the VHDL compiler to ensure that the design is at least reasonably robust before beginning simulation. 6.3.4 End<entity name>: The entity declaration is terminated by the VHDL keyword end. Here we indulge in a little programming robustness by adding the name of the design entity after the end keyword. Including the name of the design entity is particularly relevant in large descriptions where the port list may extend over many screens (or pages).

6.4 EXAMPLE:

entity example is
  port (clk : in  bit;
        a   : in  integer;
        b   : out integer);
end example;

6.4.1 Architecture<architecture name> of <entity name> is: The name of the architecture body is just an arbitrary label invented by the user. It is possible to define several alternative architecture bodies for a single design entity, and the only purpose of the architecture name is to distinguish between these alternatives. Note that when we define an architecture, we have to tell the VHDL analyzer which design entity the architecture corresponds to; you might think that it would be enough to specify the name of the architecture and that the architecture would automatically correspond to the previously declared entity, but this is not the case. 6.4.2 Begin: The VHDL keyword begin denotes the end of the architecture declarative part and the start of the architecture statement part. In this architecture there is one statement, and all the names referenced in this statement are in fact the ports of the design. Because all the names used in the architecture statement part are declared in the entity declaration, the architecture declarative part is empty.
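One possible architecture body for the example entity above. The behavior chosen here, registering a into b on the rising clock edge, is purely illustrative:

```vhdl
-- Illustrative architecture for entity "example": b follows a,
-- updated on each rising edge of clk.
architecture behavioral of example is
begin
  process (clk)
  begin
    if clk'event and clk = '1' then
      b <= a;
    end if;
  end process;
end behavioral;
```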


6.4.3 end<architecture name>: The architecture is terminated by the VHDL keyword end. Once again, we repeat the architecture name at the end of the architecture body, for the same reason as we did with the entity. Usually the architecture body requires significantly more code than the entity declaration, hence repeating the name of the architecture is even more relevant. 6.4.4 End of VHDL code: --end is another VHDL comment that ends the VHDL description. 6.5 BASIC TERMINOLOGY: 6.5.1 Entity declaration: The entity declaration describes the external view of the entity, e.g., the input and output signal names. 6.5.2 Architecture body: The architecture body contains the internal description of the entity, e.g., as a set of internal components that represents the structure of the entity, or as a set of concurrent or sequential statements that represents the behavior of the entity.


6.5.3 Configuration declaration: A configuration declaration is used to create a configuration for an entity. It specifies the binding of one architecture body from the many architecture bodies that may be associated with the entity. 6.5.4 Package declaration: A package declaration encapsulates a set of related declarations, such as type declarations, subtype declarations and subprogram declarations, which can be shared across two or more design units. 6.5.5 Package body: A package body contains the definitions of the subprograms declared in a package declaration. 6.5.6 Signals: The architecture contains three signals, AB, CD and O, used internally within the architecture. A signal is declared before the begin of the architecture, with its own data type (e.g., STD_LOGIC). Technically, ports are signals, so signals and ports are read and assigned in the same way. 6.5.7 Assignment: The assignments within the architecture are concurrent signal assignments. Such an assignment executes whenever a signal on the right hand side of the assignment changes value. Because of this, the order in which


concurrent assignments are written has no effect on their execution. The assignments are concurrent because two assignments could execute at the same time (if two inputs changed simultaneously). The style of description that uses concurrent assignments is sometimes termed data-flow. 6.5.8 Delays: Each of the concurrent signal assignments has a delay. The expression on the right hand side is evaluated whenever a signal on the right hand side changes value, and the signal on the left hand side of the assignment is updated with the new value after the given delay. 6.5.9 Components: The port declarations in a component declaration must usually match the ports in the entity declaration one-for-one. The component declaration defines the names, order, modes and types of the ports to be used when the component is instanced in a design. A component is declared once within any architecture, but may be instanced any number of times. 6.5.10 Association: Signals in an architecture are associated with ports on a component using a port map. In effect, the port map makes an electrical connection between pieces of wire in the architecture (signals) and pins on a component (ports). The same signal may be associated with several ports; this is how interconnections between components are defined.
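An AOI (AND-OR-Invert) gate written in this data-flow style shows internal signals like AB and CD and delayed concurrent assignments together; the names and delay values here are illustrative:

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity aoi is
  port (a, b, c, d : in  STD_LOGIC;
        o          : out STD_LOGIC);
end aoi;

-- Data-flow style: three concurrent signal assignments with delays.
-- The textual order of the assignments has no effect on execution.
architecture dataflow of aoi is
  signal ab, cd : STD_LOGIC;  -- internal signals
begin
  ab <= a and b after 2 ns;
  cd <= c and d after 2 ns;
  o  <= not (ab or cd) after 3 ns;  -- AND-OR-Invert output
end dataflow;
```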


6.5.11 Process: To create software-style VHDL, we first have to deal with processes. Think of a VHDL process as a block of hardware. Instead of instantiating a component in the architecture, we can instantiate a process. A process differs from a component in the way it creates functionality: a process describes the design's functionality using statements executing in sequence, and can contain signal assignments to describe that functionality. Processes execute concurrently with respect to each other. Unlike component instances, processes cannot be embedded within each other, so there is no way to build a process hierarchy.

architecture <architecture name> of <entity name> is
  -- signal and component declarations
begin
  -- component instances
end;

We can also write the above without components:

architecture <architecture name> of <entity name> is
  -- signal declarations
begin
  G1: process
    -- software-style VHDL for component 1
  end process;
  G2: process
    -- software-style VHDL for component 2
  end process;
end;
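A concrete process in this software style, describing a 2-to-1 multiplexer with sequential statements; the entity, label and port names are illustrative:

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity mux2 is
  port (sel, d0, d1 : in  STD_LOGIC;
        q           : out STD_LOGIC);
end mux2;

architecture behavioral of mux2 is
begin
  -- The statements inside the process execute in sequence each time
  -- a signal in the sensitivity list changes; the process as a whole
  -- runs concurrently with any other processes in the architecture.
  sel_p : process (sel, d0, d1)
  begin
    if sel = '1' then
      q <= d1;
    else
      q <= d0;
    end if;
  end process sel_p;
end behavioral;
```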

6.6 BENEFITS AND SCOPE OF VHDL: 6.6.1 Executable specification: It is often the case that ASIC designs meet their specifications first time, but fail to work when plugged into a system. VHDL allows this issue to be addressed in two ways: a VHDL specification can be executed in order to achieve a high level of confidence in its correctness, and a VHDL specification for a part can form the basis for a simulation model to verify the operation of the part in the wider system context. How useful this is depends on how accurately the specification handles aspects such as timing and initialization. Behavioral simulation can reduce design time by allowing design problems to be detected early on, avoiding the need to rework designs at gate level. Behavioral simulation also permits design optimization by exploring alternative architectures, resulting in better designs. 6.6.2 Tools: VHDL descriptions of hardware designs and test benches are portable between design tools, and portable between design centers and project partners. You can safely invest in VHDL modeling effort and training, knowing that you will not be tied to a single tool vendor, but will be free to preserve your investment across tools and platforms. Competition among vendors also helps ensure a continuous supply of state-of-the-art VHDL tools. VHDL is well suited to the specification, design and description of digital electronic hardware.


6.6.3 System level and digital: VHDL is not ideally suited for abstract system-level simulation, prior to the hardware/software split. VHDL is suitable for use today in the digital hardware design process, from specification through high-level functional simulation.


CHAPTER 7 SYNTHESIS RESULTS

7.1 HDL SYNTHESIS REPORT:

Initialization:

DEVICE NAME      DESCRIPTION                      NUMBER
COMPARATORS      9-bit comparator greater-equal   1
                 9-bit comparator less-equal      1
REGISTERS        1-bit register                   2
                 8-bit register                   33
MULTIPLEXERS     32-bit 4-to-1 multiplexer        1
                 8-bit 16-to-1 multiplexer        2
COUNTERS         32-bit up counter                2
ADDER            32-bit adder                     1
REGISTER         flip-flops                       334

Table 7.1 Initialization

Sorting:


DEVICE NAME      DESCRIPTION                      NUMBER
COMPARATORS      9-bit comparator greater-equal   1
                 9-bit comparator greater         1
                 9-bit comparator less            1
                 9-bit comparator less-equal      1
REGISTERS        1-bit register                   3
                 8-bit register                   6
RAM              16x8-bit dual port RAM           2
                 16x8-bit single port RAM         2

Table 7.2 Sorting
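A 9-bit "greater or equal" comparator of the kind the synthesis reports list might be written as follows; the entity and port names are illustrative:

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

-- Illustrative 9-bit greater-or-equal comparator.
entity cmp_ge9 is
  port (a, b : in  unsigned(8 downto 0);
        ge   : out STD_LOGIC);
end cmp_ge9;

architecture rtl of cmp_ge9 is
begin
  ge <= '1' when a >= b else '0';
end rtl;
```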

Refinement:

DEVICE NAME      DESCRIPTION                      NUMBER
COMPARATORS      9-bit comparator greater-equal   1
                 9-bit comparator less-equal      1
                 9-bit comparator greater         2
                 9-bit comparator less            2
REGISTERS        1-bit register                   1
                 8-bit register                   6
                 flip-flops                       29

Table 7.3 Refinement

7.2 SYNTHESIS WAVEFORMS:

7.2.1 Initialization input:


Fig 7.1 initialization input

7.2.2 Initialization output:


Fig 7.2 initialization output

7.2.3 Sorting input:


Fig 7.3 sorting input

7.2.4 Sorting output:


Fig 7.4 sorting output

7.2.5 Refinement input:


Fig 7.5 refinement input

7.2.6 Refinement output:


Fig 7.6 refinement output

CHAPTER 8


CONCLUSION: The wavelet-transformed coefficients of a standard image are obtained using MATLAB and encoded using the SPIHT algorithm in VHDL, in a form suitable for transmission over media such as the Internet. With wqm and an array (lsp) we can recover the original image at the receiving end. We can achieve a compression ratio of 4:1 or 8:1 by simply assigning various threshold values.

REFERENCES:

J. M. Shapiro, "Embedded image coding using zerotrees of wavelet coefficients," IEEE Transactions on Signal Processing, Dec. 1993.
A. Said and W. A. Pearlman, "A new, fast, and efficient image codec based on set partitioning in hierarchical trees," IEEE Trans. Circuits Syst. Video Technol., vol. 6, pp. 243-250, June 1996.
J. Wu, C. Olivier, C. Chatellier, "Embedded zerotree runlength wavelet image coding," ICIP Proceedings, pp. 784-787, Oct. 2001.
Douglas L. Perry, VHDL, Third Edition, TMH Publishing, New Delhi.
J. Bhasker, A VHDL Primer, Third Edition, Pearson Education.

