
Introduction to the Fourier Transform

Virtually everything in the world can be described via a waveform - a function of time, space or some other variable. For instance, sound waves, electromagnetic fields, the elevation of a hill versus location, the price of your favorite stock versus time, etc. The Fourier Transform gives us a unique and powerful way of viewing these waveforms.

The Fourier Transform of the Box Function

Continuing the study of the Fourier Transform, we'll look at the box function (also called a square pulse or square wave):

Figure 1. The box function.

In Figure 1, the function g(t) has amplitude A, and extends from t = -T/2 to t = T/2. For |t| > T/2, g(t) = 0. Using the definition of the Fourier Transform (Equation [1] above), the integral is evaluated:

G(f) = ∫ from -T/2 to T/2 of A e^(-2πi f t) dt = A T sinc(f T)    [Equation 4]

The solution, G(f), is often written in terms of the sinc function, which is defined as:

sinc(x) = sin(πx) / (πx)    [Equation 5]

[While sinc(0) isn't immediately apparent, using L'Hopital's rule or whatever special powers you have, you can show that sinc(0) = 1.] The Fourier Transform of g(t) is G(f), and is plotted in Figure 2 using the result of equation [4].

Figure 2. The sinc function is the Fourier Transform of the box function.

To learn some things about the Fourier Transform that will hold in general, consider the square pulses defined for T=10 and T=1. These functions, along with their Fourier Transforms, are shown in Figures 3 and 4, for the amplitude A=1.

Figure 3. The Box Function with T=10, and its Fourier Transform.

Figure 4. The Box Function with T=1, and its Fourier Transform.

A fundamental lesson can be learned from Figures 3 and 4. From Figure 3, note that the wider square pulse produces a narrower, more constrained spectrum (the Fourier Transform). From Figure 4, observe that the thinner square pulse produces a wider spectrum than in Figure 3. This fact will hold in general: rapidly changing functions require more high frequency content (as in Figure 4). Functions that are moving more slowly in time will have less high frequency energy (as in Figure 3). Further, notice that when the box function is shorter in time (Figure 4), so that it has less energy, there appears to be less energy in its Fourier Transform. We'll explore this equivalence later. In the next section, we'll look at properties of the Fourier Transform.
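As a minimal numerical sketch (not part of the original text), the closed-form result G(f) = A·T·sinc(fT) from Equation [4] can be evaluated for the two pulses discussed above; the helper name box_spectrum is our own.

```python
import numpy as np

# Evaluate G(f) = A*T*sinc(f*T) for the T=10 and T=1 pulses of Figures 3-4.
def box_spectrum(f, T, A=1.0):
    """Fourier transform of a width-T, amplitude-A box pulse."""
    return A * T * np.sinc(f * T)      # np.sinc(x) = sin(pi*x)/(pi*x), as in Equation [5]

f = np.linspace(-2.0, 2.0, 1001)
wide, narrow = box_spectrum(f, T=10), box_spectrum(f, T=1)

# The spectrum's main lobe ends at its first zero, f = 1/T: 0.1 Hz for the
# T=10 pulse versus 1 Hz for T=1, so the wider pulse in time has the
# narrower, more constrained spectrum in frequency.
print(wide.max(), narrow.max())        # peak G(0) = A*T: 10.0 vs 1.0
```

The peak values also echo the energy remark above: the longer pulse carries more energy, and its transform is correspondingly larger.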
Properties of the Fourier transform
An integrable function is a function on the real line that is Lebesgue-measurable and satisfies ∫ from -∞ to ∞ of |f(x)| dx < ∞.

Basic properties

Given integrable functions f(x), g(x), and h(x), denote their Fourier transforms by F(ξ), G(ξ), and H(ξ) respectively. The Fourier transform has the following basic properties (Pinsky 2002).

- Linearity: For any complex numbers a and b, if h(x) = a f(x) + b g(x), then H(ξ) = a F(ξ) + b G(ξ).
- Translation: For any real number x0, if h(x) = f(x - x0), then H(ξ) = e^(-2πi x0 ξ) F(ξ).
- Modulation: For any real number ξ0, if h(x) = e^(2πi x ξ0) f(x), then H(ξ) = F(ξ - ξ0).
- Scaling: For a non-zero real number a, if h(x) = f(ax), then H(ξ) = (1/|a|) F(ξ/a). The case a = -1 leads to the time-reversal property, which states: if h(x) = f(-x), then H(ξ) = F(-ξ).
- Conjugation: If h(x) is the complex conjugate of f(x), then H(ξ) is the complex conjugate of F(-ξ). In particular, if f is real, then one has the reality condition: F(-ξ) is the complex conjugate of F(ξ). And if f is purely imaginary, then F(-ξ) is the negative of the complex conjugate of F(ξ).
- Duality: If h(x) = F(x), then H(ξ) = f(-ξ).
- Convolution: If h(x) = (f ∗ g)(x), then H(ξ) = F(ξ) G(ξ).
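These properties have exact discrete analogues that are easy to check numerically. Below is a hedged sketch (our own, not from the source) verifying the translation and convolution properties for the DFT via np.fft.fft; the circular shift and circular convolution are the discrete counterparts of the continuous statements above.

```python
import numpy as np

# Check discrete analogues of the translation and convolution properties.
rng = np.random.default_rng(0)
N = 256
x, g = rng.standard_normal(N), rng.standard_normal(N)
k = np.arange(N)

# Translation: shifting x by x0 samples multiplies the spectrum by a phase.
x0 = 5
lhs = np.fft.fft(np.roll(x, x0))
rhs = np.exp(-2j * np.pi * x0 * k / N) * np.fft.fft(x)
print(np.allclose(lhs, rhs))     # True

# Convolution: circular convolution in time is multiplication in frequency.
conv = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(g)))
direct = np.array([np.sum(x * np.roll(g[::-1], n + 1)) for n in range(N)])
print(np.allclose(conv, direct)) # True
```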

LTI system theory

Linear time-invariant system theory, commonly known as LTI system theory, comes from applied mathematics and has direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. It investigates the response of a linear and time-invariant system to an arbitrary input signal. Trajectories of these systems are commonly measured and tracked as they move through time (e.g., an acoustic waveform), but in applications like image processing and field theory, the LTI systems also have trajectories in spatial dimensions. Thus these systems are also called linear translation-invariant to give the theory the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. A good example of an LTI system is an electrical circuit that can be made up of resistors, capacitors and inductors.[1]

The defining properties of any LTI system are linearity and time invariance.

Linearity means that the relationship between the input and the output of the system is a linear map: if input x1(t) produces response y1(t) and input x2(t) produces response y2(t), then the scaled and summed input a1 x1(t) + a2 x2(t) produces the scaled and summed response a1 y1(t) + a2 y2(t), where a1 and a2 are real scalars. It follows that this can be extended to an arbitrary number of terms, and so for real numbers c1, c2, ..., ck:

Input Σk ck xk(t) produces output Σk ck yk(t).    (Eq. 1)

In particular, input ∫ cω xω(t) dω produces output ∫ cω yω(t) dω, where cω and xω are scalars and inputs that vary over a continuum indexed by ω. Thus if an input function can be represented by a continuum of input functions, combined "linearly", as shown, then the corresponding output function can be represented by the corresponding continuum of output functions, scaled and summed in the same way.

Time invariance means that whether we apply an input to the system now or T seconds from now, the output will be identical except for a time delay of the T seconds. That is, if the output due to input x(t) is y(t), then the output due to input x(t - T) is y(t - T). Hence, the system is time invariant because the output does not depend on the particular time the input is applied. The fundamental result in LTI system theory is that any LTI system can be characterized entirely by a single function called the system's impulse response. The output of the system is simply the convolution of the input to the system with the system's impulse response. This method of analysis is often called the time domain point of view. The same result is true of discrete-time linear shift-invariant systems in which signals are discrete-time samples, and convolution is defined on sequences.
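A short discrete-time sketch of this fundamental result (values here are arbitrary illustration choices, not from the text): the impulse response fully characterizes the system, and the output is the convolution of the input with it.

```python
import numpy as np

# Any discrete LTI system is characterized by its impulse response h;
# its output is the convolution of the input x with h.
h = np.array([0.5, 0.3, 0.2])        # an assumed 3-tap impulse response
x = np.array([1.0, 0.0, 0.0, 2.0])   # an arbitrary input signal

y = np.convolve(x, h)                # y[n] = sum_k h[k] * x[n-k]
print(y)                             # [0.5 0.3 0.2 1.  0.6 0.4]

# Time invariance: delaying the input delays the output identically.
x_delayed = np.concatenate(([0.0], x))
print(np.convolve(x_delayed, h))     # same values, shifted by one sample
```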

Basic Definition of the Z-Transform


The z-transform of a sequence is defined as

X(z) = Σ from n=-∞ to ∞ of x[n] z^(-n)    (1)

Sometimes this equation is referred to as the bilateral z-transform. At times the z-transform is defined as

X(z) = Σ from n=0 to ∞ of x[n] z^(-n)    (2)

which is known as the unilateral z-transform. There is a close relationship between the z-transform and the Fourier transform of a discrete time signal, which is defined as

X(e^(iω)) = Σ from n=-∞ to ∞ of x[n] e^(-iωn)    (3)

Notice that when z^(-n) is replaced with e^(-iωn), the z-transform reduces to the Fourier transform. The Fourier transform exists when z = e^(iω), which is to have the magnitude of z equal to unity.
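As a hedged sketch of this relationship (our own example sequence, not from the text), evaluating X(z) of a finite causal sequence at points on the unit circle reproduces the DFT samples of that sequence:

```python
import numpy as np

# Evaluate X(z) = sum_n x[n] z^-n on the unit circle z = e^{i w} and
# confirm it matches the DFT of the sequence.
x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)

def X_of_z(z, x=x):
    """z-transform of a finite causal sequence, by definition (1)."""
    n = np.arange(len(x))
    return np.sum(x * z ** (-n))

w = 2 * np.pi * np.arange(N) / N                 # DFT frequencies w_k
on_circle = np.array([X_of_z(np.exp(1j * wk)) for wk in w])
print(np.allclose(on_circle, np.fft.fft(x)))     # True
```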


Elliptic filter

An elliptic filter (also known as a Cauer filter, named after Wilhelm Cauer) is a signal processing filter with equalized ripple (equiripple) behavior in both the passband and the stopband. The amount of ripple in each band is independently adjustable, and no other filter of equal order can have a faster transition in gain between the passband and the stopband, for the given values of ripple (whether the ripple is equalized or not). Alternatively, one may give up the ability to independently adjust the passband and stopband ripple, and instead design a filter which is maximally insensitive to component variations. As the ripple in the stopband approaches zero, the filter becomes a type I Chebyshev filter. As the ripple in the passband approaches zero, the filter becomes a type II Chebyshev filter and finally, as both ripple values approach zero, the filter becomes a Butterworth filter.

The gain of a lowpass elliptic filter as a function of angular frequency ω is given by:

Gn(ω) = 1 / √(1 + ε² Rn²(ξ, ω/ω0))

where Rn is the nth-order elliptic rational function (sometimes known as a Chebyshev rational function), ω0 is the cutoff frequency, ε is the ripple factor, and ξ is the selectivity factor. The value of the ripple factor specifies the passband ripple, while the combination of the ripple factor and the selectivity factor specify the stopband ripple.

Properties

- In the passband, the elliptic rational function varies between zero and unity. The gain of the passband therefore will vary between 1 and 1/√(1 + ε²).
- In the stopband, the elliptic rational function varies between infinity and the discrimination factor Ln, which is defined as Ln = Rn(ξ, ξ). The gain of the stopband therefore will vary between 0 and 1/√(1 + ε² Ln²).
- In the limit of ξ → ∞ the elliptic rational function becomes a Chebyshev polynomial, and therefore the filter becomes a Chebyshev type I filter, with ripple factor ε.
- Since the Butterworth filter is a limiting form of the Chebyshev filter, it follows that in the limit of ξ → ∞, ω0 → 0, and ε → 0 such that ε Rn(ξ, 1/ω0) = 1, the filter becomes a Butterworth filter.
- In the limit of ξ → ∞, ε → 0, and ω0 → 0 such that ξ ω0 = 1 and ε Ln = α, the filter becomes a Chebyshev type II filter.
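In practice one rarely evaluates the elliptic rational function by hand; SciPy provides a designer. Below is a hedged sketch with arbitrary example parameters (order, ripple, and cutoff are ours, not from the text):

```python
import numpy as np
from scipy import signal

# A 4th-order lowpass elliptic (Cauer) filter: 1 dB passband ripple,
# 40 dB stopband attenuation, cutoff at 0.3 x Nyquist.
b, a = signal.ellip(4, 1, 40, 0.3, btype='low')

w, h = signal.freqz(b, a)
gain_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))
print(gain_db[0])   # ~0 dB at DC; the passband ripples within -1 dB
```

The equiripple behavior in both bands is visible if gain_db is plotted against w: the passband oscillates between 0 and -1 dB, and the stopband between -40 dB and lower.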

Addressing mode
Addressing modes are an aspect of the instruction set architecture in most central processing unit (CPU) designs. The various addressing modes that are defined in a given instruction set architecture define how machine language instructions in that architecture identify the operand (or operands) of each instruction. An addressing mode specifies how to calculate the effective memory address of an operand by using information held in registers and/or constants contained within a machine instruction or elsewhere. In computer programming, addressing modes are primarily of interest to compiler writers and to those who write code directly in assembly language.

Address modes

The basic addressing modes are: register direct, moving data to or from a specific register; register indirect, using a register as a pointer to memory; program counter-based, using the program counter as a reference point in memory; absolute, in which the memory address is contained in the instruction; and immediate, in which the data is contained in the instruction. Some instructions will have an inherent or implicit address (usually a specific register or the memory contents pointed to by a specific register) that is implied by the instruction without explicit declaration. One approach to processors places an emphasis on flexibility of addressing modes. Some engineers and programmers believe that the real power of a processor lies in its addressing modes. Most addressing modes can be created by combining two or more basic addressing modes, although building the combination in software will usually take more time than if the combination addressing mode existed in hardware (although there is a trade-off that slows down all operations to allow for more complexity). In a purely orthogonal instruction set, every addressing mode would be available for every instruction. In practice, this isn't the case; a toy model of the basic modes is sketched below.

Virtual memory, memory pages, and other hardware mapping methods may be layered on top of the addressing modes.
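The following toy model illustrates the basic modes listed above. The machine here (the regs and mem dictionaries and the fetch_operand helper) is invented for illustration and does not correspond to any real instruction set:

```python
# A toy operand-fetch routine modeling four basic addressing modes.
regs = {"R0": 7, "R1": 100}          # R1 holds a pointer into memory
mem = {100: 42, 200: 99}

def fetch_operand(mode, field):
    if mode == "register":            # register direct: operand is in a register
        return regs[field]
    if mode == "register_indirect":   # register holds the memory address
        return mem[regs[field]]
    if mode == "absolute":            # instruction contains the address
        return mem[field]
    if mode == "immediate":           # instruction contains the data itself
        return field
    raise ValueError(mode)

print(fetch_operand("register", "R0"))           # 7
print(fetch_operand("register_indirect", "R1"))  # 42
print(fetch_operand("absolute", 200))            # 99
print(fetch_operand("immediate", 5))             # 5
```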
Discrete Fourier transform

In mathematics, the discrete Fourier transform (DFT) is a specific kind of discrete transform, used in Fourier analysis. It transforms one function into another, which is called the frequency domain representation, or simply the DFT, of the original function (which is often a function in the time domain). But the DFT requires an input function that is discrete and whose non-zero values have a limited (finite) duration. Such inputs are often created by sampling a continuous function, like a person's voice. Unlike the discrete-time Fourier transform (DTFT), it only evaluates enough frequency components to reconstruct the finite segment that was analyzed. Using the DFT implies that the finite segment that is analyzed is one period of an infinitely extended periodic signal; if this is not actually true, a window function has to be used to reduce the artifacts in the spectrum. For the same reason, the inverse DFT cannot reproduce the entire time domain, unless the input happens to be periodic (forever). Therefore it is often said that the DFT is a transform for Fourier analysis of finite-domain discrete-time functions. The sinusoidal basis functions of the decomposition have the same properties.

The DFT is widely employed in signal processing and related fields to analyze the frequencies contained in a sampled signal, to solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers. A key enabling factor for these applications is the fact that the DFT can be computed efficiently in practice using a fast Fourier transform (FFT) algorithm, making the DFT ideal for processing information stored in computers. FFT algorithms are so commonly employed to compute DFTs that the term "FFT" is often used to mean "DFT" in colloquial settings. Formally, there is a clear distinction: "DFT" refers to a mathematical transformation or function, regardless of how it is computed, whereas "FFT" refers to a specific family of algorithms for computing DFTs. The terminology is further blurred by the (now rare) synonym finite Fourier transform for the DFT, which apparently predates the term "fast Fourier transform" (Cooley et al., 1969) but has the same initialism.
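The DFT/FFT distinction above is easy to see in code. Below is a hedged sketch (our own) computing the DFT directly from its definition and checking it against np.fft.fft, which is one of the fast algorithms for computing the same transform:

```python
import numpy as np

# The DFT by definition, X[k] = sum_n x[n] e^{-2*pi*i*k*n/N},
# checked against an FFT implementation of the same transform.
def naive_dft(x):
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N DFT matrix
    return W @ x

x = np.random.default_rng(1).standard_normal(8)
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True
```

The naive version costs O(N²) operations, the FFT O(N log N); the results are identical because "FFT" names a family of algorithms, not a different transform.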


Finite impulse response (FIR) filter

A finite impulse response (FIR) filter is a type of a signal processing filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which have internal feedback and may continue to respond indefinitely (usually decaying). The impulse response of an Nth-order discrete-time FIR filter (i.e. with a Kronecker delta impulse input) lasts for N+1 samples, and then dies to zero. FIR filters can be discrete-time or continuous-time, and digital or analog.

Definition

A discrete-time FIR filter of order N. The top part is an N-stage delay line with N+1 taps. Each unit delay is a z^(-1) operator in Z-transform notation.

The output y of a linear time invariant system is determined by convolving its input signal x with its impulse response b.


For a discrete-time FIR filter, the output is a weighted sum of the current and a finite number of previous values of the input. The operation is described by the following equation, which defines the output sequence y[n] in terms of its input sequence x[n]:

y[n] = b0 x[n] + b1 x[n-1] + ... + bN x[n-N] = Σ from i=0 to N of bi x[n-i]

where:
- x[n] is the input signal,
- y[n] is the output signal,
- bi are the filter coefficients, also known as tap weights, that make up the impulse response,
- N is the filter order; an Nth-order filter has (N + 1) terms on the right-hand side. The x[n-i] in these terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a 5th order/6-tap filter, for instance.

Properties

An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:
- Are inherently stable. This is due to the fact that, because there is no feedback, all the poles are located at the origin and thus are located within the unit circle.
- Require no feedback. This means that any rounding errors are not compounded by summed iterations. The same relative error occurs in each calculation. This also makes implementation simpler.
- Can easily be designed to be linear phase by making the coefficient sequence symmetric; linear phase, or phase change proportional to frequency, corresponds to equal delay at all frequencies. This property is sometimes desired for phase-sensitive applications, for example data communications, crossover filters, and mastering.

The main disadvantage of FIR filters is that considerably more computation power in a general purpose processor is required compared to an IIR filter with similar sharpness or selectivity, especially when low frequency (relative to the sample rate) cutoffs are needed. However, many digital signal processors provide specialized hardware features to make FIR filters approximately as efficient as IIR for many applications.
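The difference equation above translates directly into code. This is a hedged sketch with arbitrary example coefficients (a 3rd-order moving average), not a filter from the text:

```python
import numpy as np

# Direct implementation of y[n] = sum_i b[i] * x[n-i].
b = np.array([0.25, 0.25, 0.25, 0.25])   # N = 3, so N+1 = 4 taps
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

y = np.zeros(len(x))
for n in range(len(x)):
    for i, bi in enumerate(b):           # each term taps the delay line at x[n-i]
        if n - i >= 0:
            y[n] += bi * x[n - i]

print(y)                                            # [0.25 0.75 1.5 2.5 3.5]
print(np.allclose(y, np.convolve(x, b)[:len(x)]))   # True: FIR filtering is convolution
```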

Filter design

To design a filter means to select the coefficients such that the system has specific characteristics. The required characteristics are stated in filter specifications. Most of the time filter specifications refer to the frequency response of the filter. There are different methods to find the coefficients from frequency specifications:
1. Window design method
2. Frequency Sampling method
3. Weighted least squares design
4. Parks-McClellan method (also known as the Equiripple, Optimal, or Minimax method). The Remez exchange algorithm is commonly used to find an optimal equiripple set of coefficients. Here the user specifies a desired frequency response, a weighting function for errors from this response, and a filter order N. The algorithm then finds the set of (N + 1) coefficients that minimize the maximum deviation from the ideal. Intuitively, this finds the filter that is as close as you can get to the desired response given that you can use only (N + 1) coefficients. This method is particularly easy in practice since at least one text[1] includes a program that takes the desired filter and N, and returns the optimum coefficients.
5. Equiripple FIR filters can be designed using the FFT algorithms as well.[2] The algorithm is iterative in nature. You simply compute the DFT of an initial filter design that you have using the FFT algorithm (if you don't have an initial estimate you can start with h[n]=delta[n]). In the Fourier domain or FFT domain you correct the frequency response according to your desired specs and compute the inverse FFT. In time-domain you retain only N of the coefficients (force the other coefficients to zero). Compute the FFT once again. Correct the frequency response according to specs.

Software packages like MATLAB, GNU Octave, Scilab, and SciPy provide convenient ways to apply these different methods. Some filter specifications refer to the time-domain shape of the input signal the filter is expected to "recognize". The optimum matched filter for separating any waveform from white noise is obtained by sampling that shape and using those samples in reverse order as the coefficients of the filter -- giving the filter an impulse response that is the time-reverse of the expected input signal.
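For example, the Parks-McClellan method (item 4 above) is available in SciPy as signal.remez. This is a hedged sketch; the band edges and tap count are arbitrary illustration values:

```python
from scipy import signal

# Parks-McClellan design via the Remez exchange algorithm.
numtaps = 31                     # filter order N = 30, i.e. N+1 coefficients
edges = [0, 0.2, 0.3, 0.5]       # passband 0-0.2, stopband 0.3-0.5 (fs = 1)
desired = [1, 0]                 # unity gain in the passband, zero in the stopband

b = signal.remez(numtaps, edges, desired, fs=1.0)
print(len(b))                    # 31 equiripple-optimal coefficients
```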

Window design method


For more details on this topic, see Window function.

In the Window Design Method, one designs an ideal IIR filter, then applies a window function to it in the time domain, multiplying the infinite impulse response by the window function. This results in the frequency response of the IIR being convolved with the frequency response of the window function;[3] thus the imperfections of the FIR filter (compared to the ideal IIR filter) can be understood in terms of the frequency response of the window function. The ideal frequency response of a window is a Dirac delta function, as that results in the frequency response of the FIR filter being identical to that of the IIR filter, but this is not attainable for finite windows, and deviations from this yield differences between the FIR response and the IIR response.
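A minimal sketch of the method, assuming an ideal lowpass prototype and a Hamming window (the cutoff and length are arbitrary example values, not from the text):

```python
import numpy as np

# Window design method: sample the ideal (infinitely long) lowpass impulse
# response, then truncate it by multiplying with a finite window.
M = 51                                   # number of taps (odd, for symmetry)
fc = 0.25                                # cutoff as a fraction of the sample rate
n = np.arange(M) - (M - 1) / 2

ideal = 2 * fc * np.sinc(2 * fc * n)     # ideal lowpass impulse response
window = np.hamming(M)                   # the window function
b = ideal * window                       # FIR coefficients: ideal response x window

print(round(np.sum(b), 4))               # ~1.0: approximately unity gain at DC
```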


Frequency Transformation

We need to apply a suitable frequency transformation if we wish to design bandpass, bandstop and high-pass filters, using the low-pass approximating function analysis previously covered. The block diagram, shown in figure 5.25, illustrates the procedure, which produces, from the specification supplied, the required high-pass approximating function.

Microcontroller

A microcontroller (sometimes abbreviated µC, uC or MCU) is a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals. Program memory in the form of NOR flash or OTP ROM is also often included on chip, as well as a typically small amount of RAM. Microcontrollers are designed for embedded applications, in contrast to the microprocessors used in personal computers or other general purpose applications. Microcontrollers are used in automatically controlled products and devices, such as automobile engine control systems, implantable medical devices, remote controls, office machines, appliances, power tools, and toys. By reducing the size and cost compared to a design that uses a separate microprocessor, memory, and input/output devices, microcontrollers make it economical to digitally control even more devices and processes. Mixed signal microcontrollers are common, integrating analog components needed to control non-digital electronic systems. Some microcontrollers may use four-bit words and operate at clock rate frequencies as low as 4 kHz, for low power consumption (milliwatts or microwatts). They will generally have the ability to retain functionality while waiting for an event such as a button press or other interrupt; power consumption while sleeping (CPU clock and most peripherals off) may be just nanowatts, making many of them well suited for long lasting battery applications. Other microcontrollers may serve performance-critical roles, where they may need to act more like a digital signal processor (DSP), with higher clock speeds and power consumption.


Difference between data and program memory

Data memory = where you place your variables. You can read and write values. Program memory = where the application is stored. Some chips allow parts of the program memory to be modified in blocks (segments), but you can't store variables in the program memory. It is normally possible to store constants - i.e. initialized variables that you do not change - in the program memory. Your PC also has data memory and program memory. But the program memory is very small in the PC - it is just for storage of the BIOS - the boot messages you see when the PC boots, and (often, but not always) the configuration pages where you define if you have a floppy installed, if the computer should support a USB keyboard, etc.

Watchdog timer

A watchdog timer (or computer operating properly (COP) timer) is a computer hardware or software timer that triggers a system reset or other corrective action if the main program, due to some fault condition, such as a hang, neglects to regularly service the watchdog (writing a "service pulse" to it, also referred to as "kicking the dog", petting the dog, "feeding the watchdog"[1] or "waking the watchdog"). The intention is to bring the system back from the unresponsive state into normal operation. Watchdog timers can be more complex, attempting to save debug information onto a persistent medium; i.e. information useful for debugging the problem that caused the fault. In this case a second, simpler, watchdog timer ensures that if the first watchdog timer does not report completion of its information saving task within a certain amount of time, the system will reset with or without the information saved. The most common use of watchdog timers is in embedded systems, where this specialized timer is often a built-in unit of a microcontroller. Even more complex watchdog timers may be used to run untrusted code in a sandbox.[2] Watchdog timers may also trigger fail-safe control systems to move into a safety state, such as turning off motors, high-voltage electrical outputs, and other potentially dangerous subsystems until the fault is cleared. For those embedded systems that can't be constantly watched by a human, watchdog timers may be the solution. For example, most embedded systems need to be self-reliant, and it's not usually possible to wait for someone to reboot them if the software hangs. Some embedded designs, such as space probes, are simply not accessible to human operators. If their software ever hangs, such systems are permanently disabled. In cases similar to these, a watchdog timer can help in solving the problem. The watchdog timer is a chip external to the processor. However, it could also be included within the same chip as the CPU; this is done in many microcontrollers. In either case, the watchdog timer is tied directly to the processor's reset signal. Expansion card based watchdog timers exist and can be fitted to computers without an onboard watchdog.
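The kick-or-reset cycle described above can be sketched in software. This is a toy illustration only (a real watchdog is hardware tied to the reset line, and names like WatchdogTimer and kick are our own):

```python
import threading
import time

# A minimal software watchdog: if kick() is not called within `timeout`
# seconds, the corrective action fires (here, just a message).
class WatchdogTimer:
    def __init__(self, timeout, on_timeout):
        self.timeout, self.on_timeout = timeout, on_timeout
        self.timer = None
        self.kick()                       # arm the watchdog

    def kick(self):                       # "kicking the dog": the service pulse
        if self.timer:
            self.timer.cancel()
        self.timer = threading.Timer(self.timeout, self.on_timeout)
        self.timer.daemon = True
        self.timer.start()

wd = WatchdogTimer(0.5, lambda: print("watchdog expired: resetting system"))
for _ in range(3):
    time.sleep(0.2)
    wd.kick()          # healthy main loop services the watchdog in time
time.sleep(1.0)        # simulated hang: no kicks, so the timeout fires
```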

Mnemonic
Mnemonic, which comes from the Greek word mnemon, meaning mindfulness, is a device to aid the memory. Sometimes called a mnemonic device, a mnemonic captures information in a memorable way to help a person remember something that is important to him or her. Mnemonics are used to help students quickly recall information that is frequently used and needs to be at their fingertips, but it is also used to help jog the memory of less frequently used information, for example, to recall symptoms or procedures for rarely encountered situations. People may also use a mnemonic to remember on a single occasion, such as a shopping list.
Assembly mnemonics
In assembly language a mnemonic is a code, usually from 1 to 5 letters, that represents an opcode, a number. Programming in machine code, by supplying the computer with the numbers of the operations it must perform, can be quite a burden, because for every operation the corresponding number must be looked up or remembered. Looking up all numbers takes a lot of time, and mis-remembering a number may introduce computer bugs.

Therefore a set of mnemonics was devised. Each number was represented by an alphabetic code. So instead of entering the number corresponding to addition to add two numbers one can enter "add". Although mnemonics differ between different CPU designs some are common, for instance: "sub" (subtract), "div" (divide), "add" (add) and "mul" (multiply). This type of mnemonic is different from the ones listed above in that instead of a way to make remembering numbers easier, it is a way to make remembering numbers unnecessary (e.g. by relying on the computer's assembler program to do the lookup work).
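A hedged sketch of the lookup work an assembler does on the programmer's behalf; the opcode values below are invented for illustration and do not belong to any real CPU:

```python
# A toy mnemonic table: map codes like "add" to opcode numbers so the
# programmer never has to remember the numbers themselves.
OPCODES = {"add": 0x01, "sub": 0x02, "mul": 0x03, "div": 0x04}

def assemble(line):
    mnemonic, *operands = line.split()
    return [OPCODES[mnemonic]] + [int(op) for op in operands]

print(assemble("add 2 3"))   # [1, 2, 3]: 'add' is looked up, not remembered
```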

Digital signal processing


Digital signal processing (DSP) is concerned with the representation of signals by a sequence of numbers or symbols and the processing of these signals. Digital signal processing and analog signal processing are subfields of signal processing. DSP includes subfields like: audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, seismic data processing, etc. The goal of DSP is usually to measure, filter and/or compress continuous real-world analog signals. The first step is usually to convert the signal from an analog to a digital form, by sampling it using an analog-to-digital converter (ADC), which turns the analog signal into a stream of numbers. However, often, the required output signal is another analog output signal, which requires a digital-to-analog converter (DAC). Even if this process is more complex than analog processing and has a discrete value range, the application of computational power to digital signal processing allows for many advantages over analog processing in many applications, such as error detection and correction in transmission as well as data compression.[1]

DSP algorithms have long been run on standard computers, on specialized processors called digital signal processors (DSPs), or on purpose-built hardware such as application-specific integrated circuits (ASICs). Today there are additional technologies used for digital signal processing including more powerful general purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial apps such as motor control), and stream processors, among others.[2]

Applications
The main applications of DSP are audio signal processing, audio compression, digital image processing, video compression, speech processing, speech recognition, digital communications, RADAR, SONAR, seismology and biomedicine. Specific examples are speech compression and transmission in digital mobile phones, room correction of sound in hi-fi and sound reinforcement applications, weather forecasting, economic forecasting, seismic data processing, analysis and control of industrial processes, medical imaging such as CAT scans and MRI, MP3 compression, computer graphics, image manipulation, hi-fi loudspeaker crossovers and equalization, and audio effects for use with electric guitar amplifiers.

Differences between Frequency Domain and Time Domain


The time-domain representation gives the amplitudes of the signal at the instants of time during which it was sampled. However, in many cases you need to know the frequency content of a signal rather than the amplitudes of the individual samples. Fourier's theorem states that any waveform in the time domain can be represented by the weighted sum of sines and cosines. The same waveform then can be represented in the frequency domain as a pair of amplitude and phase values at each component frequency. You can generate any waveform by adding sine waves, each with a particular amplitude and phase. The following figure shows the original waveform, labeled sum, and its component frequencies. The fundamental frequency is shown at the frequency f0, the second harmonic at frequency 2f0, and the third harmonic at frequency 3f0.

In the frequency domain, you can conceptually separate the sine waves that add to form the complex time-domain signal. The previous figure shows single frequency components, which spread out in the time domain, as distinct impulses in the frequency domain. The amplitude of each frequency line is the amplitude of the time waveform for that frequency component. The representation of a signal in terms of its individual frequency components is the frequency-domain representation of the signal. The frequency-domain representation might provide more insight about the signal and the system from which it was generated. The samples of a signal obtained from a DAQ device constitute the time-domain representation of the signal. Some measurements, such as harmonic distortion, are difficult to quantify by inspecting the time waveform on an oscilloscope. When the same signal is displayed in the frequency domain by an FFT Analyzer, also known as a Dynamic Signal Analyzer, you can easily measure the harmonic frequencies and amplitudes.
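A hedged sketch of the figure described above: a waveform synthesized from a fundamental f0 plus two harmonics, whose component amplitudes the FFT then recovers as distinct spectral lines. All parameter values here are our own illustration choices:

```python
import numpy as np

# Build sum = fundamental + 2nd + 3rd harmonic, then read the amplitudes
# back off the frequency-domain representation.
fs, f0, N = 1000, 50, 1000                # sample rate, fundamental, samples
t = np.arange(N) / fs

sum_wave = (1.0 * np.sin(2 * np.pi * f0 * t)
            + 0.5 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.25 * np.sin(2 * np.pi * 3 * f0 * t))

spectrum = np.abs(np.fft.rfft(sum_wave)) * 2 / N   # scale bins to sine amplitudes
for harmonic in (1, 2, 3):
    k = harmonic * f0 * N // fs                    # bin index of each component
    print(harmonic * f0, "Hz:", round(spectrum[k], 3))
# 50 Hz: 1.0, 100 Hz: 0.5, 150 Hz: 0.25
```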

Parseval's Theorem
Parseval's Theorem states that the total energy computed in the time domain must equal the total energy computed in the frequency domain. It is a statement of conservation of energy. The following equation defines the continuous form of Parseval's theorem.

∫ from -∞ to ∞ of |x(t)|² dt = ∫ from -∞ to ∞ of |X(f)|² df    (A)

The following equation defines the discrete form of Parseval's theorem.

Σ from n=0 to N-1 of |x[n]|² = (1/N) Σ from k=0 to N-1 of |X[k]|²    (B)

where x[n] and X[k] are a discrete Fourier transform pair and N is the number of elements in the sequence.

The following figure shows the block diagram of a VI that demonstrates Parseval's theorem.

The VI in the previous figure produces a real input sequence. The upper branch on the block diagram computes the energy of the time-domain signal using the left side of Equation B. The lower branch on the block diagram converts the time-domain signal to the frequency domain and computes the energy of the frequency-domain signal using the right side of Equation B. The following figure shows the results returned by the VI in the previous figure.

In the previous figure, the total computed energy in the time domain equals the total computed energy in the frequency domain.
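The same check the VI performs can be sketched in a few lines (our own example sequence, following Equation (B)):

```python
import numpy as np

# Discrete Parseval's theorem: the same energy in both domains.
x = np.random.default_rng(2).standard_normal(1024)
X = np.fft.fft(x)

time_energy = np.sum(np.abs(x) ** 2)            # left side of Equation (B)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)   # right side of Equation (B)
print(np.allclose(time_energy, freq_energy))    # True
```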

Digital filter

In electronics, computer science and mathematics, a digital filter is a system that performs mathematical operations on a sampled, discrete-time signal to reduce or enhance certain aspects of that signal. This is in contrast to the other major type of electronic filter, the analog filter, which is an electronic circuit operating on continuous-time analog signals. An analog signal may be processed by a digital filter by first being digitized and represented as a sequence of numbers, then manipulated mathematically, and then reconstructed as a new analog signal (see digital signal processing). In an analog filter, the input signal is "directly" manipulated by the circuit. A digital filter system usually consists of an analog-to-digital converter to sample the input signal, followed by a microprocessor and some peripheral components such as memory to store data and filter coefficients, etc. Finally a digital-to-analog converter completes the output stage. Program instructions (software) running on the microprocessor implement the digital filter by performing the necessary mathematical operations on the numbers received from the ADC. In some high performance applications, an FPGA or ASIC is used instead of a general purpose microprocessor, or a specialized DSP with specific parallel architecture for expediting operations such as filtering. Digital filters may be more expensive than an equivalent analog filter due to their increased complexity, but they make practical many designs that are impractical or impossible as analog filters. Since digital filters use a sampling process and discrete-time processing, they experience latency (the difference in time between the input and the response), which is almost irrelevant in analog filters. Digital filters are commonplace and an essential element of everyday electronics such as radios, cellphones, and stereo receivers.

Impulse response

The impulse response, often denoted h[k] or hk, is a measurement of how a filter will respond to the Kronecker delta function. For example, given a difference equation, one would set x0 = 1 and xk = 0 for k ≠ 0 and evaluate. The impulse response is a characterization of the filter's behaviour. Digital filters are typically considered in two categories: infinite impulse response (IIR) and finite impulse response (FIR). In the case of linear time-invariant FIR filters, the impulse response is exactly equal to the sequence of filter coefficients:

h[n] = bn for n = 0, 1, ..., N, and h[n] = 0 otherwise.

IIR filters on the other hand are recursive, with the output depending on both current and previous inputs as well as previous outputs. The general form of an IIR filter is thus:

y[n] = (1/a0) ( Σ from k=0 to N of bk x[n-k] - Σ from m=1 to M of am y[n-m] )

Plotting the impulse response will reveal how a filter will respond to a sudden, momentary disturbance.
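A hedged sketch of exactly this experiment, driving each filter class with a Kronecker delta (coefficients are arbitrary examples, not from the text):

```python
import numpy as np
from scipy import signal

# Impulse responses of an FIR and an IIR filter, via a delta input.
delta = np.zeros(8)
delta[0] = 1.0

# FIR: the impulse response equals the coefficient sequence, then dies to zero.
b_fir = [0.5, 0.3, 0.2]
print(signal.lfilter(b_fir, [1.0], delta))   # [0.5 0.3 0.2 0. 0. 0. 0. 0.]

# IIR: feedback (the a-coefficients) makes the response persist indefinitely.
b_iir, a_iir = [1.0], [1.0, -0.5]            # y[n] = x[n] + 0.5*y[n-1]
print(signal.lfilter(b_iir, a_iir, delta))   # [1. 0.5 0.25 0.125 ...]
```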

Types of Digital Filters

There are two types of digital filters:

1. Recursive (Infinite Impulse Response)
2. Non-Recursive (Finite Impulse Response)

1. Recursive (Infinite Impulse Response): A recursive filter is one which, in addition to input values, also uses previous output values. These, like the previous input values, are stored in the processor's memory. The word recursive literally means "running back", and refers to the fact that previously calculated output values go back into the calculation of the latest output. The expression for a recursive filter therefore contains not only terms involving the input values (xn, xn-1, xn-2, ...) but also terms in yn-1, yn-2, ... From this explanation, it might seem as though recursive filters require more calculations to be performed, since there are previous output terms in the filter expression as well as input terms. In fact, the reverse is usually the case. To achieve a given frequency response characteristic using a recursive filter generally requires a much lower order filter, and therefore fewer terms to be evaluated by the processor, than the equivalent non-recursive filter.

2. Non-Recursive (Finite Impulse Response): The current output (yn) is calculated solely from the current and previous input values (xn, xn-1, xn-2, ...). This type of filter is said to be non-recursive.

Special Function Registers


The 8051 is a flexible microcontroller with a relatively large number of modes of operation. Your program may inspect and/or change the operating mode of the 8051 by manipulating the values of the 8051's Special Function Registers (SFRs). SFRs are accessed as if they were normal Internal RAM. The only difference is that Internal RAM is from address 00h through 7Fh whereas SFR registers exist in the address range of 80h through FFh. Each SFR has an address (80h through FFh) and a name. The following chart provides a graphical presentation of the 8051's SFRs, their names, and their addresses. As you can see, although the address range of 80h through FFh offers 128 possible addresses, there are only 21 SFRs in a standard 8051. All other addresses in the SFR range (80h through FFh) are considered invalid. Writing to or reading from these registers may produce undefined values or behavior. Programming Tip: It is recommended that you not read or write to SFR addresses that have not been assigned to an SFR. Doing so may provoke undefined behavior and may cause your program to be incompatible with other 8051-derivatives that use the given SFR for some other purpose.

SFR Types


As mentioned in the chart itself, the SFRs that have a blue background are SFRs related to the I/O ports. The 8051 has four I/O ports of 8 bits, for a total of 32 I/O lines. Whether a given I/O line is high or low and the value read from the line are controlled by the SFRs in blue. The SFRs with yellow backgrounds are SFRs which in some way control the operation or the configuration of some aspect of the 8051. For example, TCON controls the timers, SCON controls the serial port. The remaining SFRs, with green backgrounds, are "other SFRs." These SFRs can be thought of as auxiliary SFRs in the sense that they don't directly configure the 8051 but obviously the 8051 cannot operate without them. For example, once the serial port has been configured using SCON, the program may read or write to the serial port using the SBUF register. Programming Tip: The SFRs whose names appear in red in the chart above are SFRs that may be accessed via bit operations (i.e., using the SETB and CLR instructions). The other SFRs cannot be accessed using bit operations. As you can see, all SFRs whose addresses are divisible by 8 can be accessed with bit operations.

SFR Descriptions
This section will endeavor to quickly overview each of the standard SFRs found in the above SFR chart map. It is not the intention of this section to fully explain the functionality of each SFR--this information will be covered in separate chapters of the tutorial. This section is to just give you a general idea of what each SFR does. P0 (Port 0, Address 80h, Bit-Addressable): This is input/output port 0. Each bit of this SFR corresponds to one of the pins on the microcontroller. For example, bit 0 of port 0 is pin P0.0, bit 7 is pin P0.7. Writing a value of 1 to a bit of this SFR will send a high level on the corresponding I/O pin whereas a value of 0 will bring it to a low level. Programming Tip: While the 8051 has four I/O ports (P0, P1, P2, and P3), if your hardware uses external RAM or external code memory (i.e., your program is stored in an external ROM or EPROM chip or if you are using external RAM chips) you may not use P0 or P2. This is because the 8051 uses ports P0 and P2 to address the external memory. Thus if you are using external RAM or code memory you may only use ports P1 and P3 for your own use. SP (Stack Pointer, Address 81h): This is the stack pointer of the microcontroller. This SFR indicates where the next value to be taken from the stack will be read from in Internal RAM. If you push a value onto the stack, the value will be written to the address of SP + 1. That is to say, if SP holds the value 07h, a PUSH instruction will push the value onto the stack at address 08h. This SFR is modified by all instructions which modify the stack, such as PUSH, POP, LCALL, RET, RETI, and whenever interrupts are provoked by the microcontroller. Programming Tip: The SP SFR, on startup, is initialized to 07h. This means the stack will start at 08h and start expanding upward in internal RAM. Since alternate register banks 1, 2, and 3 as well as the user bit variables occupy internal RAM from addresses 08h through 2Fh, it is necessary to initialize SP in your program to some other value if you will be using the alternate register banks and/or bit memory. It's not a bad idea to initialize SP to 2Fh as the

first instruction of every one of your programs unless you are 100% sure you will not be using the register banks and bit variables. DPL/DPH (Data Pointer Low/High, Addresses 82h/83h): The SFRs DPL and DPH work together to represent a 16-bit value called the Data Pointer. The data pointer is used in operations regarding external RAM and some instructions involving code memory. Since it is an unsigned two-byte integer value, it can represent values from 0000h to FFFFh (0 through 65,535 decimal). Programming Tip: DPTR is really DPH and DPL taken together as a 16-bit value. In reality, you almost always have to deal with DPTR one byte at a time. For example, to push DPTR onto the stack you must first push DPL and then DPH. You can't simply push DPTR onto the stack. Additionally, there is an instruction to "increment DPTR." When you execute this instruction, the two bytes are operated upon as a 16-bit value. However, there is no instruction that decrements DPTR. If you wish to decrement the value of DPTR, you must write your own code to do so. PCON (Power Control, Addresses 87h): The Power Control SFR is used to control the 8051's power control modes. Certain operation modes of the 8051 allow the 8051 to go into a type of "sleep" mode which requires much less power. These modes of operation are controlled through PCON. Additionally, one of the bits in PCON is used to double the effective baud rate of the 8051's serial port. TCON (Timer Control, Addresses 88h, Bit-Addressable): The Timer Control SFR is used to configure and modify the way in which the 8051's two timers operate. This SFR controls whether each of the two timers is running or stopped and contains a flag to indicate that each timer has overflowed. Additionally, some non-timer related bits are located in the TCON SFR. These bits are used to configure the way in which the external interrupts are activated and also contain the external interrupt flags which are set when an external interrupt has occurred. TMOD (Timer Mode, Addresses 89h): The Timer Mode SFR is used to configure the mode of operation of each of the two timers. Using this SFR your program may configure each timer to be a 16-bit timer, an 8-bit autoreload timer, a 13-bit timer, or two separate timers. Additionally, you may configure the timers to only count when an external pin is activated or to count "events" that are indicated on an external pin. TL0/TH0 (Timer 0 Low/High, Addresses 8Ah/8Ch): These two SFRs, taken together, represent timer 0. Their exact behavior depends on how the timer is configured in the TMOD SFR; however, these timers always count up. What is configurable is how and when they increment in value. TL1/TH1 (Timer 1 Low/High, Addresses 8Bh/8Dh): These two SFRs, taken together, represent timer 1. Their exact behavior depends on how the timer is configured in the TMOD SFR; however, these timers always count up. What is configurable is how and when they increment in value. P1 (Port 1, Address 90h, Bit-Addressable): This is input/output port 1. Each bit of this SFR corresponds to one of the pins on the microcontroller. For example, bit 0 of port 1 is pin P1.0,

bit 7 is pin P1.7. Writing a value of 1 to a bit of this SFR will send a high level on the corresponding I/O pin whereas a value of 0 will bring it to a low level. SCON (Serial Control, Addresses 98h, Bit-Addressable): The Serial Control SFR is used to configure the behavior of the 8051's on-board serial port. This SFR controls the baud rate of the serial port, whether the serial port is activated to receive data, and also contains flags that are set when a byte is successfully sent or received. Programming Tip: To use the 8051's on-board serial port, it is generally necessary to initialize the following SFRs: SCON, TCON, and TMOD. This is because SCON controls the serial port. However, in most cases the program will wish to use one of the timers to establish the serial port's baud rate. In this case, it is necessary to configure timer 1 by initializing TCON and TMOD. SBUF (Serial Buffer, Addresses 99h): The Serial Buffer SFR is used to send and receive data via the on-board serial port. Any value written to SBUF will be sent out the serial port's TXD pin. Likewise, any value which the 8051 receives via the serial port's RXD pin will be delivered to the user program via SBUF. In other words, SBUF serves as the output port when written to and as an input port when read from. P2 (Port 2, Address A0h, Bit-Addressable): This is input/output port 2. Each bit of this SFR corresponds to one of the pins on the microcontroller. For example, bit 0 of port 2 is pin P2.0, bit 7 is pin P2.7. Writing a value of 1 to a bit of this SFR will send a high level on the corresponding I/O pin whereas a value of 0 will bring it to a low level. Programming Tip: While the 8051 has four I/O ports (P0, P1, P2, and P3), if your hardware uses external RAM or external code memory (i.e., your program is stored in an external ROM or EPROM chip or if you are using external RAM chips) you may not use P0 or P2. This is because the 8051 uses ports P0 and P2 to address the external memory. Thus if you are using external RAM or code memory you may only use ports P1 and P3 for your own use. IE (Interrupt Enable, Addresses A8h): The Interrupt Enable SFR is used to enable and disable specific interrupts. The low 7 bits of the SFR are used to enable/disable the specific interrupts, whereas the highest bit is used to enable or disable ALL interrupts. Thus, if the high bit of IE is 0 all interrupts are disabled regardless of whether an individual interrupt is enabled by setting a lower bit. P3 (Port 3, Address B0h, Bit-Addressable): This is input/output port 3. Each bit of this SFR corresponds to one of the pins on the microcontroller. For example, bit 0 of port 3 is pin P3.0, bit 7 is pin P3.7. Writing a value of 1 to a bit of this SFR will send a high level on the corresponding I/O pin whereas a value of 0 will bring it to a low level. IP (Interrupt Priority, Addresses B8h, Bit-Addressable): The Interrupt Priority SFR is used to specify the relative priority of each interrupt. On the 8051, an interrupt may either be of low (0) priority or high (1) priority. An interrupt may only interrupt interrupts of lower priority. For example, if we configure the 8051 so that all interrupts are of low priority except the serial interrupt, the serial interrupt will always be able to interrupt the system, even if another interrupt is currently executing. However, if a serial interrupt is executing no other interrupt will be able to interrupt the serial interrupt routine since the serial interrupt routine has the highest priority.

PSW (Program Status Word, Addresses D0h, Bit-Addressable): The Program Status Word is used to store a number of important bits that are set and cleared by 8051 instructions. The PSW SFR contains the carry flag, the auxiliary carry flag, the overflow flag, and the parity flag. Additionally, the PSW register contains the register bank select flags which are used to select which of the "R" register banks are currently selected. Programming Tip: If you write an interrupt handler routine, it is a very good idea to always save the PSW SFR on the stack and restore it when your interrupt is complete. Many 8051 instructions modify the bits of PSW. If your interrupt routine does not guarantee that PSW is the same upon exit as it was upon entry, your program is bound to behave rather erratically and unpredictably--and it will be tricky to debug since the behavior will tend not to make any sense. ACC (Accumulator, Addresses E0h, Bit-Addressable): The Accumulator is one of the most-used SFRs on the 8051 since it is involved in so many instructions. The Accumulator resides as an SFR at E0h, which means the instruction MOV A,#20h is really the same as MOV E0h,#20h. However, it is a good idea to use the first method since it only requires two bytes whereas the second option requires three bytes. B (B Register, Addresses F0h, Bit-Addressable): The "B" register is used in two instructions: the multiply and divide operations. The B register is also commonly used by programmers as an auxiliary register to temporarily store values. Other SFRs The chart above is a summary of all the SFRs that exist in a standard 8051. All derivative microcontrollers of the 8051 must support these basic SFRs in order to maintain compatibility with the underlying MCS-51 standard. A common practice when semiconductor firms wish to develop a new 8051 derivative is to add additional SFRs to support new functions that exist in the new chip. For example, the Dallas Semiconductor DS80C320 is upwards compatible with the 8051. This means that any program that runs on a standard 8051 should run without modification on the DS80C320. This means that all the SFRs defined above also apply to the Dallas component. However, since the DS80C320 provides many new features that the standard 8051 does not, there must be some way to control and configure these new features. This is accomplished by adding additional SFRs to those listed here. For example, since the DS80C320 supports two serial ports (as opposed to just one on the 8051), the SFRs SBUF2 and SCON2 have been added. In addition to all the SFRs listed above, the DS80C320 also recognizes these two new SFRs as valid and uses their values to determine the mode of operation of the secondary serial port. Obviously, these new SFRs have been assigned to SFR addresses that were unused in the original 8051. In this manner, new 8051 derivative chips may be developed which will run existing 8051 programs. Programming Tip: If you write a program that utilizes new SFRs that are specific to a given derivative chip and not included in the above SFR list, your program will not run properly on a standard 8051 where that SFR does not exist. Thus, only use non-standard SFRs if you are

sure that your program will only have to run on that specific microcontroller. Likewise, if you write code that uses non-standard SFRs and subsequently share it with a third-party, be sure to let that party know that your code is using non-standard SFRs to save them the headache of discovering it through strange behavior at run-time.

Periodic Signals

Periodic signals are signals that repeat themselves after a certain amount of time. More formally, a function f(t) is periodic if f(t + T) = f(t) for some T and all t. The classic example of a periodic function is sin(x), since sin(x + 2π) = sin(x). However, we do not restrict attention to sinusoidal functions. An important class of signals that we will encounter frequently throughout this book is the class of periodic signals.

Terminology

We will discuss here some of the common terminology that pertains to a periodic function. Let g(t) be a periodic function satisfying g(t + T) = g(t) for all t.

Period

The period is the smallest value of T satisfying g(t + T) = g(t) for all t. The period is defined so because if g(t + T) = g(t) for all t, it can be verified that g(t + T') = g(t) for all t where T' = 2T, 3T, 4T, ... In essence, it's the smallest amount of time it takes for the function to repeat itself. If the period of a function is finite, the function is called "periodic". Functions that never repeat themselves have an infinite period, and are known as "aperiodic functions". The period of a periodic waveform will be denoted with a capital T. The period is measured in seconds.

Frequency

The frequency of a periodic function is the number of complete cycles that can occur per second. Frequency is denoted with a lower-case f. It is defined in terms of the period, as follows:

f = 1/T

Frequency has units of hertz, or cycles per second.

Radial Frequency

The radial frequency is the frequency in terms of radians. It is defined as follows:

ω = 2πf

Amplitude
The amplitude of a given wave is the value of the wave at that point. Amplitude is also known as the "Magnitude" of the wave at that particular point. There is no particular variable that is used with amplitude, although capital A, capital M and capital R are common. The amplitude can be measured in different units, depending on the signal we are studying. In an electric signal the amplitude will typically be measured in volts. In a building or other such structure, the amplitude of a vibration could be measured in meters.

Continuous Signal

A continuous signal is a "smooth" signal, where the signal is defined over a certain range. For example, a sine function is a continuous signal, as is an exponential function or a constant function. A portion of a sine signal over a range of time 0 to 6 seconds is also continuous. Examples of functions that are not continuous would be any discrete signal, where the value of the signal is only defined at certain intervals.

DC Offset
A DC offset is an amount by which the average value of the periodic function is displaced from the x-axis. A periodic signal has a DC offset component if it is not centered about the x-axis. In general, the DC value is the amount that must be subtracted from the signal to center it on the x-axis. By definition, it is the average value of the signal over one period:

A_0 = (1/T) ∫ g(t) dt, integrated over one period T

With A_0 being the DC offset. If A_0 = 0, the function is centered and has no offset.
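As a quick numerical illustration (the signal and values are our own, not from the text), the DC offset of a sampled periodic signal can be estimated as its mean over one period and subtracted out. A minimal Python sketch, assuming NumPy is available:

import numpy as np

T = 1.0                                   # period in seconds (illustrative)
t = np.linspace(0.0, T, 1000, endpoint=False)
g = 2.5 + np.sin(2 * np.pi * t / T)       # sine riding on a DC offset of 2.5

A0 = g.mean()                             # discrete estimate of (1/T) * integral of g over one period
print(round(A0, 3))                       # ~2.5
print(round((g - A0).mean(), 6))          # ~0.0: the signal is now centered on the x-axis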
Half-Wave Symmetry

Nonperiodic/Aperiodic Signals
The opposite of a periodic signal is an aperiodic signal. An aperiodic function never repeats, although technically an aperiodic function can be considered as a periodic function with an infinite period. This chapter will formally introduce the Fourier Transform, discuss some of the properties of the transform, and then talk about power spectral density functions.

Background
If we consider aperiodic signals, it turns out that we can generalize the Fourier Series sum into an integral named the Fourier Transform. The Fourier Transform is used similarly to the Fourier Series, in that it converts a time-domain function into a frequency domain representation. However, there are a number of differences:
1. The Fourier Transform can work on aperiodic signals.
2. The Fourier Transform is an infinite sum of infinitesimal sinusoids.
3. The Fourier Transform has an inverse transform that allows for conversion from the frequency domain back to the time domain.

Fourier Transform


This operation can be performed using this MATLAB command: fft

The Fourier Transform is the following integral:

F(jω) = ∫ f(t)·e^(−jωt) dt, integrated over all t from −∞ to +∞
Inverse Fourier Transform


And the inverse transform is given by a similar integral:

f(t) = (1/2π) ∫ F(jω)·e^(jωt) dω, integrated over all ω from −∞ to +∞
Using these formulas, time-domain signals can be converted to and from the frequency domain, as needed.
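As a concrete sketch of this round trip (the signal is our own illustration; numpy's fft/ifft stand in for the transform pair in sampled, discrete form):

import numpy as np

fs = 100.0                                 # sample rate in Hz (illustrative)
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * 5 * t)              # a 5 Hz cosine

X = np.fft.fft(x)                          # time domain -> frequency domain
x_back = np.fft.ifft(X)                    # frequency domain -> time domain

freqs = np.fft.fftfreq(len(x), d=1.0 / fs)
print(freqs[np.argmax(np.abs(X))])         # 5.0: the dominant frequency component
print(np.allclose(x, x_back.real))         # True: the round trip recovers the signal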

Partial Fraction Expansion


One of the most important tools when attempting to find the inverse Fourier Transform is the theory of partial fractions. The theory of partial fractions allows a complicated rational expression to be decomposed into a sum of small, simple fractions. This technique is highly important when dealing with other transforms as well, such as the Laplace transform and the Z-Transform.
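As an illustrative sketch (the rational function here is our own example, not from the text), SciPy's signal.residue performs exactly this decomposition for a ratio of polynomials:

from scipy.signal import residue

# H(s) = (s + 3) / (s^2 + 3s + 2) = 2/(s + 1) - 1/(s + 2)
b = [1, 3]         # numerator coefficients
a = [1, 3, 2]      # denominator coefficients

r, p, k = residue(b, a)
print(r)           # residues 2 and -1, paired elementwise with the poles in p
print(p)           # poles at -1 and -2 (order may vary)
print(k)           # direct polynomial term (empty here)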

Advantages & Disadvantages of FIR Filters

The choice of an FIR design versus an IIR design can spark much debate among designers. It is the opinion of the author that new users of digital filters should start out with FIR filters. This is recommended for the following reasons:
FIR filters can easily be designed for constant phase delay and/or constant group delay (which affects the distortion of passband signals with broadband characteristics).

Stability is inherent and limit cycling is not a problem as it is with IIR designs (provided the user implements the filter with nonrecursive techniques).

Round-off errors can be controlled in a straightforward fashion in order to keep their effects insignificant.

The drawbacks of using an FIR are:

An FIR generally requires more stages than an IIR to obtain sharp filter bands. Additional stages add to memory requirements and slow processing speed.

Designs that demand high performance usually justify an effort to trade off FIR versus IIR filter implementations. But for a first application it is recommended that aggressive filter designs be avoided until one has the experience to avoid the various pitfalls that await the designer.

Finite impulse response

A finite impulse response (FIR) filter is a type of signal processing filter whose impulse response (or response to any finite-length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which have internal feedback and may continue to respond indefinitely (usually decaying). The impulse response of an Nth-order discrete-time FIR filter (i.e. with a Kronecker delta impulse input) lasts for N+1 samples, and then dies to zero.

Definition

A discrete-time FIR filter of order N. The top part is an N-stage delay line with N+1 taps. Each unit delay is a z^(−1) operator in Z-transform notation.

The output y of a linear time-invariant system is determined by convolving its input signal x with its impulse response b. For a discrete-time FIR filter, the output is a weighted sum of the current and a finite number of previous values of the input. The operation is described by the following equation, which defines the output sequence y[n] in terms of its input sequence x[n]:

y[n] = b_0·x[n] + b_1·x[n−1] + ... + b_N·x[n−N] = Σ_{i=0}^{N} b_i·x[n−i]
where:

- x[n] is the input signal,
- y[n] is the output signal,
- b_i are the filter coefficients, also known as tap weights, that make up the impulse response,
- N is the filter order; an Nth-order filter has (N + 1) terms on the right-hand side.

The x[n−i] terms are commonly referred to as taps, based on the structure of a tapped delay line that in many implementations or block diagrams provides the delayed inputs to the multiplication operations. One may speak of a "5th order/6-tap filter", for instance.
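A minimal numerical sketch of this equation (the coefficients are illustrative, and SciPy is assumed to be available; lfilter with a = [1] implements the pure feedforward sum):

import numpy as np
from scipy.signal import lfilter

b = np.array([0.25, 0.25, 0.25, 0.25])    # 3rd-order (4-tap) moving-average filter
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

y = lfilter(b, [1.0], x)                  # y[n] = sum_i b[i] * x[n-i], no feedback
y_direct = np.convolve(x, b)[:len(x)]     # the same weighted sum via direct convolution

print(y)                                  # [0.25 0.75 1.5  2.5  3.5  4.5]
print(np.allclose(y, y_direct))           # True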

Properties
An FIR filter has a number of useful properties which sometimes make it preferable to an infinite impulse response (IIR) filter. FIR filters:

- Require no feedback. This means that any rounding errors are not compounded by summed iterations. The same relative error occurs in each calculation. This also makes implementation simpler.
- Are inherently stable. This is due to the fact that, because there is no required feedback, all the poles are located at the origin and thus are located within the unit circle.
- Can easily be designed to be linear phase by making the coefficient sequence symmetric; linear phase, or phase change proportional to frequency, corresponds to equal delay at all frequencies. This property is sometimes desired for phase-sensitive applications, for example data communications, crossover filters, and mastering. A numerical check of this property is sketched below.
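A minimal check of the linear-phase property (the symmetric tap values are arbitrary, and SciPy is assumed):

import numpy as np
from scipy.signal import freqz

b = np.array([1.0, 2.0, 3.0, 2.0, 1.0])   # symmetric taps (order N = 4)
w, h = freqz(b, worN=512)

# For symmetric coefficients the delay is N/2 = 2 samples, so the
# unwrapped phase should equal -2*w across the band.
phase = np.unwrap(np.angle(h))
print(np.max(np.abs(phase + 2 * w)))      # ~0: phase is proportional to frequency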

The main disadvantage of FIR filters is that considerably more computation power in a general-purpose processor is required compared to an IIR filter with similar sharpness or selectivity, especially when low-frequency (relative to the sample rate) cutoffs are needed. However, many digital signal processors provide specialized hardware features to make FIR filters approximately as efficient as IIR filters for many applications.

Infinite impulse response

Infinite impulse response (IIR) is a property of signal processing systems. Systems with this property are known as IIR systems or, when dealing with filter systems, as IIR filters. IIR systems have an impulse response function that is non-zero over an infinite length of time. This is in contrast to finite impulse response (FIR) filters, which have fixed-duration impulse responses. The simplest analog IIR filter is an RC filter made up of a single resistor (R) feeding into a node shared with a single capacitor (C). This filter has an exponential impulse response characterized by an RC time constant. IIR filters may be implemented as either analog or digital filters. In digital IIR filters, the output feedback is immediately apparent in the equations defining the output. Note that unlike FIR filters, in designing IIR filters it is necessary to carefully consider the "time zero" case, in which the outputs of the filter have not yet been clearly defined. Design of digital IIR filters is heavily dependent on that of their analog counterparts, because there are plenty of resources, works, and straightforward design methods concerning analog feedback filter design, while there are hardly any for digital IIR filters. As a result, usually, when a digital IIR filter is going to be implemented, an analog filter (e.g. Chebyshev filter, Butterworth filter, Elliptic filter) is first designed and then converted to a digital filter by applying discretization techniques such as the Bilinear transform or Impulse invariance. Example IIR filters include the Chebyshev filter, Butterworth filter, and the Bessel filter.

Transfer function derivation


Digital filters are often described and implemented in terms of the difference equation that defines how the output signal is related to the input signal:

y[n] = (1/a_0)·(b_0·x[n] + b_1·x[n−1] + ... + b_P·x[n−P] − a_1·y[n−1] − ... − a_Q·y[n−Q])
where:

- P is the feedforward filter order,
- b_i are the feedforward filter coefficients,
- Q is the feedback filter order,
- a_j are the feedback filter coefficients,
- x[n] is the input signal,
- y[n] is the output signal.

A more condensed form of the difference equation is:

y[n] = (1/a_0)·(Σ_{i=0}^{P} b_i·x[n−i] − Σ_{j=1}^{Q} a_j·y[n−j])
which, when rearranged, becomes:

Σ_{j=0}^{Q} a_j·y[n−j] = Σ_{i=0}^{P} b_i·x[n−i]
To find the transfer function of the filter, we first take the Z-transform of each side of the above equation, where we use the time-shift property to obtain:

Σ_{j=0}^{Q} a_j·z^(−j)·Y(z) = Σ_{i=0}^{P} b_i·z^(−i)·X(z)
We define the transfer function to be:

H(z) = Y(z)/X(z) = (Σ_{i=0}^{P} b_i·z^(−i)) / (Σ_{j=0}^{Q} a_j·z^(−j))
Considering that in most IIR filter designs the coefficient a_0 is 1, the IIR filter transfer function takes the more traditional form:

H(z) = (Σ_{i=0}^{P} b_i·z^(−i)) / (1 + Σ_{j=1}^{Q} a_j·z^(−j))

Description of block diagram

Simple IIR filter block diagram

A typical block diagram of an IIR filter looks like the above. The z^(−1) block is a unit delay. The coefficients and number of feedback/feedforward paths are implementation-dependent.

Stability
The transfer function allows us to judge whether or not a system is bounded-input, bounded-output (BIBO) stable. To be specific, the BIBO stability criterion requires that the ROC of the system include the unit circle. For example, for a causal system, all poles of the transfer function must have an absolute value smaller than one. In other words, all poles must be located within the unit circle in the z-plane. The poles are defined as the values of z which make the denominator of H(z) equal to 0:

1 + Σ_{j=1}^{Q} a_j·z^(−j) = 0

Clearly, if any a_j is non-zero, then the poles are not all located at the origin of the z-plane. This is in contrast to the FIR filter, where all poles are located at the origin, and which is therefore always stable. IIR filters are sometimes preferred over FIR filters because an IIR filter can achieve a much sharper transition-region roll-off than an FIR filter of the same order.
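A minimal numerical sketch of this stability test (the coefficients are an arbitrary example of ours): the poles can be found as polynomial roots and compared against the unit circle:

import numpy as np

# Denominator of H(z): 1 - 1.1 z^-1 + 0.3 z^-2 (illustrative coefficients)
a = [1.0, -1.1, 0.3]

poles = np.roots(a)                # same locations as the poles of H(z)
print(poles)                       # roots 0.6 and 0.5 (order may vary)
print(np.all(np.abs(poles) < 1))   # True: inside the unit circle, so the causal filter is BIBO stable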

Example
Let the transfer function of a filter H be

H(z) = 1 / (1 − a·z^(−1))

with ROC a < |z| and 0 < a < 1,

which has a pole at z = a and is stable and causal. The time-domain impulse response is

h(n) = a^n·u(n)

which is non-zero for n ≥ 0.
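A quick numerical check of this example (with a = 0.5 chosen arbitrarily in (0, 1), and SciPy assumed available):

import numpy as np
from scipy.signal import lfilter

a = 0.5
n = np.arange(10)
impulse = np.zeros(10)
impulse[0] = 1.0

h = lfilter([1.0], [1.0, -a], impulse)    # impulse response of H(z) = 1/(1 - a z^-1)
print(np.allclose(h, a ** n))             # True: h(n) = a^n u(n)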

Discrete Fourier transform

In mathematics, the discrete Fourier transform (DFT) is a specific kind of discrete transform, used in Fourier analysis. It transforms one function into another, which is called the frequency-domain representation, or simply the DFT, of the original function (which is often a function in the time domain). But the DFT requires an input function that is discrete and whose non-zero values have a limited (finite) duration. Such inputs are often created by sampling a continuous function, like a person's voice. Unlike the discrete-time Fourier transform (DTFT), it only evaluates enough frequency components to reconstruct the finite segment that was analyzed. Using the DFT implies that the finite segment that is analyzed is one period of an infinitely extended periodic signal; if this is not actually true, a window function has to be used to reduce the artifacts in the spectrum. For the same reason, the inverse DFT cannot reproduce the entire time domain, unless the input happens to be periodic

(forever). Therefore it is often said that the DFT is a transform for Fourier analysis of finite-domain, discrete-time functions. The sinusoidal basis functions of the decomposition have the same properties. The input to the DFT is a finite sequence of real or complex numbers (with more abstract generalizations discussed below), making the DFT ideal for processing information stored in computers. In particular, the DFT is widely employed in signal processing and related fields to analyze the frequencies contained in a sampled signal, to solve partial differential equations, and to perform other operations such as convolutions or multiplying large integers. A key enabling factor for these applications is the fact that the DFT can be computed efficiently in practice using a fast Fourier transform (FFT) algorithm. FFT algorithms are so commonly employed to compute DFTs that the term "FFT" is often used to mean "DFT" in colloquial settings. Formally, there is a clear distinction: "DFT" refers to a mathematical transformation or function, regardless of how it is computed, whereas "FFT" refers to a specific family of algorithms for computing DFTs. The terminology is further blurred by the (now rare) synonym finite Fourier transform for the DFT, which apparently predates the term "fast Fourier transform" (Cooley et al., 1969) but has the same initialism.
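A minimal sketch of the definition (our own example, with numpy's FFT as the reference): the DFT X_k = Σ_n x_n·e^(−2πi·kn/N) can be evaluated directly and compared against the FFT:

import numpy as np

def naive_dft(x):
    """Evaluate X_k = sum_n x_n exp(-2*pi*i*k*n/N) straight from the definition."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

x = np.random.randn(8)
print(np.allclose(naive_dft(x), np.fft.fft(x)))   # True: the FFT computes the same transform, faster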

Properties
Completeness
The discrete Fourier transform is an invertible, linear transformation

F : C^N → C^N

with C denoting the set of complex numbers. In other words, for any N > 0, an N-dimensional complex vector has a DFT and an IDFT which are in turn N-dimensional complex vectors.

Orthogonality
The vectors e_k[n] = e^(2πi·kn/N) form an orthogonal basis over the set of N-dimensional complex vectors:

Σ_{n=0}^{N−1} e^(2πi·kn/N)·e^(−2πi·k'n/N) = N·δ_{kk'}

where δ_{kk'} is the Kronecker delta. This orthogonality condition can be used to derive the formula for the IDFT from the definition of the DFT, and is equivalent to the unitarity property below.

The Plancherel theorem and Parseval's theorem


If X_k and Y_k are the DFTs of x_n and y_n respectively, then the Plancherel theorem states:

Σ_{n=0}^{N−1} x_n·y_n* = (1/N)·Σ_{k=0}^{N−1} X_k·Y_k*

where the star denotes complex conjugation. Parseval's theorem is a special case of the Plancherel theorem and states:

Σ_{n=0}^{N−1} |x_n|² = (1/N)·Σ_{k=0}^{N−1} |X_k|²
These theorems are also equivalent to the unitary condition below.
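A quick numerical check of Parseval's theorem in this convention (unnormalized forward DFT, hence the 1/N factor on the frequency side):

import numpy as np

x = np.random.randn(16) + 1j * np.random.randn(16)
X = np.fft.fft(x)

lhs = np.sum(np.abs(x) ** 2)              # energy in the time domain
rhs = np.sum(np.abs(X) ** 2) / len(x)     # energy in the frequency domain, with the 1/N factor
print(np.isclose(lhs, rhs))               # True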

Periodicity
If the expression that defines the DFT is evaluated for all integers k instead of just for k = 0, ..., N−1, then the resulting infinite sequence is a periodic extension of the DFT, periodic with period N. The periodicity can be shown directly from the definition:

X_{k+N} = Σ_{n=0}^{N−1} x_n·e^(−2πi·(k+N)n/N) = Σ_{n=0}^{N−1} x_n·e^(−2πi·kn/N)·e^(−2πi·n) = X_k

since e^(−2πi·n) = 1 for every integer n. Similarly, it can be shown that the IDFT formula leads to a periodic extension.

The shift theorem


Multiplying x_n by a linear phase e^(2πi·nm/N) for some integer m corresponds to a circular shift of the output X_k: X_k is replaced by X_{k−m}, where the subscript is interpreted modulo N (i.e., periodically). Similarly, a circular shift of the input x_n corresponds to multiplying the output X_k by a linear phase. Mathematically, if {x_n} represents the vector x, then:

if h_n = x_n·e^(2πi·nm/N), then H_k = X_{k−m}
and if h_n = x_{n−m}, then H_k = X_k·e^(−2πi·km/N)

Circular convolution theorem and cross-correlation theorem


The convolution theorem for the continuous and discrete-time Fourier transforms indicates that a convolution of two infinite sequences can be obtained as the inverse transform of the product of the individual transforms. With sequences and transforms of length N, a circularity arises:

The quantity in parentheses is 0 for all values of m except those of the form n − l − pN, where p is any integer. At those values, it is 1. It can therefore be replaced by an infinite sum of Kronecker delta functions, and we continue accordingly. Note that we can also extend the limits of m to infinity, with the understanding that the x and y sequences are defined as 0 outside [0, N−1]:

which is the circular convolution of the x sequence with a periodically extended y sequence, i.e. the inverse DFT of the product of the two transforms:

IDFT(X_k·Y_k)_n = Σ_{l=0}^{N−1} x_l·y_{(n−l) mod N}

Similarly, it can be shown that the inverse DFT of X_k*·Y_k gives the (circular) cross-correlation of x and y.
A direct evaluation of the convolution or correlation summation (above) requires O(N²) operations for an output sequence of length N. An indirect method, using transforms, can take advantage of the O(N log N) efficiency of the fast Fourier transform (FFT) to achieve much better performance. Furthermore, convolutions can be used to efficiently compute DFTs via Rader's FFT algorithm and Bluestein's FFT algorithm. Methods have also been developed to use circular convolution as part of an efficient process that achieves normal (non-circular) convolution with an x or y sequence potentially much longer than the practical transform size (N). Two such methods are called overlap-save and overlap-add.[1]
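A minimal sketch of the circular convolution theorem (the sequences are random examples of ours): the inverse FFT of the product of FFTs matches the direct modulo-N convolution:

import numpy as np

def circular_convolve_direct(x, y):
    """O(N^2) modulo-N convolution, straight from the summation."""
    N = len(x)
    return np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

x = np.random.randn(8)
y = np.random.randn(8)

via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)).real    # O(N log N) route
print(np.allclose(via_fft, circular_convolve_direct(x, y)))  # True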

Convolution theorem duality


It can also be shown that the dual result holds: the DFT of a pointwise product is (1/N) times the circular convolution of the individual transforms,

DFT(x_n·y_n)_k = (1/N)·Σ_{l=0}^{N−1} X_l·Y_{(k−l) mod N}

which is the circular convolution of X and Y, scaled by 1/N.

Trigonometric interpolation polynomial


The trigonometric interpolation polynomial

p(t) = (1/N)·[X_0 + X_1·e^(it) + ... + X_{N/2−1}·e^(i(N/2−1)t) + X_{N/2}·cos(Nt/2) + X_{N/2+1}·e^(−i(N/2−1)t) + ... + X_{N−1}·e^(−it)]   for N even,

p(t) = (1/N)·[X_0 + X_1·e^(it) + ... + X_{(N−1)/2}·e^(i(N−1)t/2) + X_{(N+1)/2}·e^(−i(N−1)t/2) + ... + X_{N−1}·e^(−it)]   for N odd,

where the coefficients X_k are given by the DFT of x_n above, satisfies the interpolation property p(2πn/N) = x_n for n = 0, ..., N−1.

For even N, notice that the Nyquist component X_{N/2}·cos(Nt/2)/N is handled specially.

This interpolation is not unique: aliasing implies that one could add N to any of the complex-sinusoid frequencies (e.g. changing e^(−it) to e^(i(N−1)t)) without changing the interpolation property, but giving different values in between the x_n points. The choice above, however, is typical because it has two useful properties. First, it consists of sinusoids whose frequencies have the smallest possible magnitudes: the interpolation is bandlimited. Second, if the x_n are real numbers, then p(t) is real as well. In contrast, the most obvious trigonometric interpolation polynomial is the one in which the frequencies range from 0 to N−1 (instead of roughly −N/2 to +N/2 as above), similar to the inverse DFT formula. This interpolation does not minimize the slope, and is not generally real-valued for real x_n; its use is a common mistake.

The unitary DFT


Another way of looking at the DFT is to note that in the above discussion, the DFT can be expressed as a Vandermonde matrix F with entries

F_{nk} = ω^(nk),   n, k = 0, ..., N−1

where ω = e^(−2πi/N) is a primitive Nth root of unity. The inverse transform is then given by the inverse of the above matrix:

F^(−1) = (1/N)·F*

With unitary normalization constants 1/√N, the DFT becomes a unitary transformation, defined by a unitary matrix:

U = F/√N,   U^(−1) = U*,   |det(U)| = 1

where det() is the determinant function. The determinant is the product of the eigenvalues, which are always ±1 or ±i as described below. In a real vector space, a unitary transformation can be thought of as simply a rigid rotation of the coordinate system, and all of the properties of a rigid rotation can be found in the unitary DFT. The orthogonality of the DFT is now expressed as an orthonormality condition (which arises in many areas of mathematics as described in root of unity):

Σ_{m=0}^{N−1} U_{km}·U_{mn}* = δ_{kn}

If X is defined as the unitary DFT of the vector x, then

X_k = Σ_{n=0}^{N−1} U_{kn}·x_n

and the Plancherel theorem is expressed as:

Σ_{n=0}^{N−1} x_n·y_n* = Σ_{k=0}^{N−1} X_k·Y_k*

If we view the DFT as just a coordinate transformation which simply specifies the components of a vector in a new coordinate system, then the above is just the statement that the dot product of two vectors is preserved under a unitary DFT transformation. For the special case x = y, this implies that the length of a vector is preserved as well; this is just Parseval's theorem:

Σ_{n=0}^{N−1} |x_n|² = Σ_{k=0}^{N−1} |X_k|²

Expressing the inverse DFT in terms of the DFT


A useful property of the DFT is that the inverse DFT can be easily expressed in terms of the (forward) DFT, via several well-known "tricks". (For example, in computations, it is often convenient to only implement a fast Fourier transform corresponding to one transform direction and then to get the other transform direction from the first.) First, we can compute the inverse DFT by reversing the inputs:

IDFT(X)_n = (1/N)·DFT(X)_{N−n}

(As usual, the subscripts are interpreted modulo N; thus, for n = 0, we have X_{N−0} = X_0.) Second, one can also conjugate the inputs and outputs:

IDFT(X) = (1/N)·[DFT(X*)]*

Third, a variant of this conjugation trick, which is sometimes preferable because it requires no modification of the data values, involves swapping real and imaginary parts (which can be done on a computer simply by modifying pointers). Define swap(x_n) as x_n with its real and imaginary parts swapped; that is, if x_n = a + bi then swap(x_n) is b + ai. Equivalently, swap(x_n) equals i·x_n*. Then

IDFT(X) = (1/N)·swap(DFT(swap(X)))

That is, the inverse transform is the same as the forward transform with the real and imaginary parts swapped for both input and output, up to a normalization (Duhamel et al., 1988). The conjugation trick can also be used to define a new transform, closely related to the DFT, that is involutary; that is, which is its own inverse. In particular, T(x) = DFT(x*)/√N is clearly its own inverse: T(T(x)) = x. A closely related involutary transformation (by a factor of (1+i)/√2) is H(x) = DFT((1+i)·x*)/√(2N), since the (1+i) factors in H(H(x)) cancel the 2. For real inputs x, the real part of H(x) is none other than the discrete Hartley transform, which is also involutary.
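A quick numerical sketch of the conjugation trick (using numpy's FFT as the forward transform and a random example spectrum):

import numpy as np

X = np.fft.fft(np.random.randn(8))               # some spectrum to invert

x1 = np.fft.ifft(X)                              # the library's inverse transform
x2 = np.conj(np.fft.fft(np.conj(X))) / len(X)    # inverse via the forward DFT only
print(np.allclose(x1, x2))                       # True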

Eigenvalues and eigenvectors


The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research. Consider the unitary form U defined above for the DFT of length N, where

U_{mn} = (1/√N)·e^(−2πi·mn/N)

This matrix satisfies the matrix polynomial equation:

U⁴ = I

This can be seen from the inverse properties above: operating twice gives the original data in reverse order, so operating four times gives back the original data and is thus the identity matrix. This means that the eigenvalues λ satisfy the equation:

λ⁴ = 1

Therefore, the eigenvalues of U are the fourth roots of unity: λ is +1, −1, +i, or −i.

Since there are only four distinct eigenvalues for this N×N matrix, they have some multiplicity. The multiplicity gives the number of linearly independent eigenvectors corresponding to each eigenvalue. (Note that there are N independent eigenvectors; a unitary matrix is never defective.)
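A numerical sketch of these facts (N = 8 chosen arbitrarily): build the unitary DFT matrix, verify U⁴ = I, and check that every eigenvalue is a fourth root of unity:

import numpy as np

N = 8
n = np.arange(N)
U = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)    # unitary DFT matrix

print(np.allclose(np.linalg.matrix_power(U, 4), np.eye(N)))  # True: U^4 = I
lam = np.linalg.eigvals(U)
print(np.allclose(lam ** 4, 1))                              # True: eigenvalues are +1, -1, +i, or -i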

Properties of the Discrete Fourier Transform

As a special case of the general Fourier transform, the discrete-time transform shares all the properties (and their proofs) of the Fourier transform discussed above, except that some of these properties may take different forms. In the following, we always assume that X denotes the transform of x[n] and Y the transform of y[n]. The properties covered are:

- Linearity
- Time Shifting
- Time Reversal
- Frequency Shifting
- Differencing, which is the discrete-time counterpart of differentiation
- Differentiation in frequency, which follows by differentiating the definition of the discrete Fourier transform with respect to the frequency variable
- The Convolution Theorem
- Parseval's Relation

The convolution theorem states that convolution in the time domain corresponds to multiplication in the frequency domain, and vice versa. Recall that the convolution of periodic signals x[n] and y[n] is taken over a single period, and that the convolution of the periodic spectra X and Y is similarly defined.

Properties of the Z-Transform

The z-transform has a set of properties in parallel with those of the Fourier transform (and Laplace transform). The difference is that we need to pay special attention to the ROCs (regions of convergence). In the following, we always assume that X(z) is the z-transform of x[n] with ROC R_x, and Y(z) is the z-transform of y[n] with ROC R_y.

Linearity

Z{a·x[n] + b·y[n]} = a·X(z) + b·Y(z), with an ROC containing R_x ∩ R_y

While it is obvious that the ROC of the linear combination should be the intersection of the individual ROCs, in which both X(z) and Y(z) exist, note that in some cases the ROC of the linear combination could be larger. For example, for both the unit step u[n] and the shifted step u[n−1] the ROC is |z| > 1, but the ROC of their difference u[n] − u[n−1] = δ[n] is the entire z-plane.

Time Shifting

Z{x[n − n₀]} = z^(−n₀)·X(z)

The new ROC is the same as the old one except for the possible addition or deletion of the origin or infinity, as the shift may change the duration of the signal.

Time Expansion (Scaling)

The discrete signal x[n] cannot be continuously scaled in time, as the time index has to be an integer (for a non-integer index, the sample is zero). Therefore the expanded version x_k[n] of x[n] is defined as

x_k[n] = x[n/k] if n is an integer multiple of k, and x_k[n] = 0 otherwise

where k is a positive integer. For example, if x[n] is the ramp 1, 2, 3, 4, 5, 6, its time-expanded version inserts zeros between the original samples. The z-transform of such an expanded signal is

Z{x_k[n]} = X(z^k)

Note that in the proof the change of the summation index has no effect, as the terms skipped are all zeros.

Convolution

Z{x[n] * y[n]} = X(z)·Y(z)

The ROC of the convolution could be larger than the intersection of R_x and R_y, due to possible pole-zero cancellation caused by the convolution.

Time Difference

Z{x[n] − x[n−1]} = (1 − z^(−1))·X(z)

Note that due to the additional zero at z = 1 and pole at z = 0, the resulting ROC is the same as R_x except for the possible deletion of z = 0 caused by the added pole and/or the addition of z = 1 caused by the added zero, which may cancel an existing pole.

Time Accumulation

The accumulation of x[n] can be written as its convolution with the unit step u[n]. Applying the convolution property, we get

Z{ Σ_{k=−∞}^{n} x[k] } = X(z) / (1 − z^(−1))

Time Reversal

Z{x[−n]} = X(1/z), with ROC {z : 1/z ∈ R_x}

Conjugation

Z{x*[n]} = X*(z*), with ROC R_x

Proof: the complex conjugate of the z-transform of x[n] is (Σ_n x[n]·z^(−n))* = Σ_n x*[n]·(z*)^(−n). Replacing z by z*, we get the desired result.

Scaling in the z-Domain

Z{a^n·x[n]} = X(z/a)

In particular, if a = e^(jω₀), the above becomes

Z{e^(jω₀n)·x[n]} = X(e^(−jω₀)·z)

The multiplication by e^(jω₀n) in the time domain corresponds to a rotation by angle ω₀ in the z-plane, i.e., a frequency shift by ω₀. The rotation is either counterclockwise (ω₀ > 0) or clockwise (ω₀ < 0), corresponding to, respectively, either a left shift or a right shift in the frequency domain. The property is essentially the same as the frequency shifting property of the discrete Fourier transform.

Differentiation in the z-Domain

Z{n·x[n]} = −z·dX(z)/dz, with ROC R_x

Proof: taking the derivative with respect to z of both sides of the definition X(z) = Σ_n x[n]·z^(−n) gives dX(z)/dz = Σ_n (−n)·x[n]·z^(−n−1), and multiplying both sides by −z yields Σ_n n·x[n]·z^(−n).
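A small numerical sketch of the convolution property for finite sequences (our own example): the z-transform of a finite sequence is a polynomial in z^(−1), so time-domain convolution corresponds to polynomial multiplication:

import numpy as np

x = np.array([1.0, 2.0, 3.0])     # X(z) = 1 + 2 z^-1 + 3 z^-2
h = np.array([1.0, -1.0])         # H(z) = 1 - z^-1

print(np.convolve(x, h))          # [ 1.  1.  1. -3.]: time-domain convolution
print(np.polymul(x, h))           # identical: coefficients of the product X(z)H(z)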

8051 Tutorial: Addressing Modes


An "addressing mode" refers to how you are addressing a given memory location. In summary, the addressing modes are as follows, with an example of each:
Immediate Addressing: MOV A,#20h
Direct Addressing: MOV A,30h
Indirect Addressing: MOV A,@R0
External Direct: MOVX A,@DPTR
Code Indirect: MOVC A,@A+DPTR

Each of these addressing modes provides important flexibility.

Immediate Addressing

Immediate addressing is so-named because the value to be stored in memory immediately follows the operation code in memory. That is to say, the instruction itself dictates what value will be stored in memory. For example, the instruction: MOV A,#20h This instruction uses Immediate Addressing because the Accumulator will be loaded with the value that immediately follows; in this case 20 (hexadecimal). Immediate addressing is very fast since the value to be loaded is included in the instruction. However, since the value to be loaded is fixed at compile-time, it is not very flexible.

Direct Addressing
Direct addressing is so-named because the value to be stored in memory is obtained by directly retrieving it from another memory location. For example: MOV A,30h This instruction will read the data out of Internal RAM address 30 (hexadecimal) and store it in the Accumulator. Direct addressing is generally fast since, although the value to be loaded isn't included in the instruction, it is quickly accessible since it is stored in the 8051's Internal RAM. It is also much more flexible than Immediate Addressing since the value to be loaded is whatever is found at the given address--which may be variable. Also, it is important to note that when using direct addressing any instruction which refers to an address between 00h and 7Fh is referring to Internal Memory. Any instruction which refers to an address between 80h and FFh is referring to the SFR control registers that control the 8051 microcontroller itself. The obvious question that may arise is, "If direct addressing an address from 80h through FFh refers to SFRs, how can I access the upper 128 bytes of Internal RAM that are available on the 8052?" The answer is: You can't access them using direct addressing. As stated, if you directly refer to an address of 80h through FFh you will be referring to an SFR. However, you may access the 8052's upper 128 bytes of RAM by using the next addressing mode, "indirect addressing."

Indirect Addressing
Indirect addressing is a very powerful addressing mode which in many cases provides an exceptional level of flexibility. Indirect addressing is also the only way to access the extra 128 bytes of Internal RAM found on an 8052. Indirect addressing appears as follows: MOV A,@R0

This instruction causes the 8051 to analyze the value of the R0 register. The 8051 will then load the accumulator with the value from Internal RAM which is found at the address indicated by R0. For example, let's say R0 holds the value 40h and Internal RAM address 40h holds the value 67h. When the above instruction is executed the 8051 will check the value of R0. Since R0 holds 40h the 8051 will get the value out of Internal RAM address 40h (which holds 67h) and store it in the Accumulator. Thus, the Accumulator ends up holding 67h. Indirect addressing always refers to Internal RAM; it never refers to an SFR. Recall that in a prior example we mentioned that SFR 99h can be used to write a value to the serial port. Thus one may think that the following would be a valid solution to write the value 1 to the serial port: MOV R0,#99h ;Load the address of the serial port MOV @R0,#01h ;Send 01 to the serial port -- WRONG!! This is not valid. Since indirect addressing always refers to Internal RAM, these two instructions would write the value 01h to Internal RAM address 99h on an 8052. On an 8051 these two instructions would produce an undefined result since the 8051 only has 128 bytes of Internal RAM.

External Direct
External Memory is accessed using a suite of instructions which use what I call "External Direct" addressing. I call it this because it appears to be direct addressing, but it is used to access external memory rather than internal memory. There are only two commands that use External Direct addressing mode: MOVX A,@DPTR MOVX @DPTR,A As you can see, both commands utilize DPTR. In these instructions, DPTR must first be loaded with the address of external memory that you wish to read or write. Once DPTR holds the correct external memory address, the first command will move the contents of that external memory address into the Accumulator. The second command will do the opposite: it will allow you to write the value of the Accumulator to the external memory address pointed to by DPTR.

External Indirect
External memory can also be accessed using a form of indirect addressing which I call External Indirect addressing. This form of addressing is usually only used in relatively small projects that have a very small amount of external RAM. An example of this addressing mode is: MOVX @R0,A Once again, the value of R0 is first read and the value of the Accumulator is written to that address in External RAM. Since the value of @R0 can only be 00h through FFh the project

would effectively be limited to 256 bytes of External RAM. There are relatively simple hardware/software tricks that can be implemented to access more than 256 bytes of memory using External Indirect addressing; however, it is usually easier to use External Direct addressing if your project has more than 256 bytes of External RAM.

Tutorial I: Basic Elements of a Digital Communication System

The whole digital communication system is divided as per the figure shown below. These are the basic elements of any digital communication system, and this gives a basic understanding of communication systems. We will discuss these basic elements.

basic elements of digital communication system

1. Information Source and Input Transducer: The source of information can be analog or digital, e.g. analog: audio or video signal; digital: like a teletype signal. In digital communication the signal produced by this source is converted into a digital signal consisting of 1s and 0s. For this we need a source encoder.

2. Source Encoder: In digital communication we convert the signal from the source into a digital signal as mentioned above. The point to remember is that we should use as few binary digits as possible to represent the signal. Such an efficient representation of the source output results in little or no redundancy. This sequence of binary digits is called the information sequence. Source Encoding or Data Compression: the process of efficiently converting the output of either an analog or digital source into a sequence of binary digits is known as source encoding.

3. Channel Encoder:

The information sequence is passed through the channel encoder. The purpose of the channel encoder is to introduce, in a controlled manner, some redundancy in the binary information sequence that can be used at the receiver to overcome the effects of noise and interference encountered in the transmission of the signal through the channel. For example, take k bits of the information sequence and map those k bits to a unique n-bit sequence called a code word. The amount of redundancy introduced is measured by the ratio n/k, and the reciprocal of this ratio (k/n) is known as the rate of the code or code rate.

4. Digital Modulator: The binary sequence is passed to the digital modulator, which in turn converts the sequence into electric signals so that we can transmit them on the channel (we will see channels later). The digital modulator maps the binary sequences into signal waveforms; for example, if we represent 1 by sin x and 0 by cos x, then we will transmit sin x for 1 and cos x for 0 (a case similar to BPSK).

5. Channel: The communication channel is the physical medium that is used for transmitting signals from transmitter to receiver. In a wireless system, this channel consists of the atmosphere; for traditional telephony, this channel is wired; there are optical channels, underwater acoustic channels, etc. We further discriminate between channels on the basis of their properties and characteristics, like the AWGN channel.

6. Digital Demodulator: The digital demodulator processes the channel-corrupted transmitted waveform and reduces the waveform to a sequence of numbers that represents estimates of the transmitted data symbols.

7. Channel Decoder: This sequence of numbers is then passed through the channel decoder, which attempts to reconstruct the original information sequence from knowledge of the code used by the channel encoder and the redundancy contained in the received data. The average probability of a bit error at the output of the decoder is a measure of the performance of the demodulator-decoder combination. THIS IS THE MOST IMPORTANT POINT; we will discuss a lot about this BER (Bit Error Rate) stuff in coming posts.

8. Source Decoder: At the end, if an analog signal is desired, the source decoder tries to decode the sequence from knowledge of the encoding algorithm, which results in an approximate replica of the input at the transmitter end.

9. Output Transducer: Finally we get the desired signal in the desired format, analog or digital.

The points worth noting are:
1. the source coding algorithm plays an important role in achieving a higher code rate
2. the channel encoder introduces redundancy in the data
3. the modulation scheme plays an important role in deciding the data rate and the immunity of the signal to the errors introduced by the channel
4. the channel introduces many types of errors, like multipath errors and errors due to thermal noise
5. the demodulator and decoder should provide a low BER.

Advantage of Digital over analog signal processing


1. A digitally programmable system allows flexibility in reconfiguring the digital signal processing operation simply by changing the program, whereas reconfiguration of an analog system usually implies a redesign of hardware followed by testing and verification to see that it operates properly.
2. Digital systems provide much better control of accuracy requirements.
3. Digital signals are easily stored on magnetic media without loss of fidelity beyond that introduced in A/D conversion. As a consequence, the signals become transportable and can be processed offline in a remote laboratory.
4. Digital signal processing methods also allow for the implementation of more sophisticated signal processing algorithms.
5. In some cases a digital implementation of a signal processing system is cheaper than its analog counterpart.

Decimation-In-Frequency FFT Algorithm

The DFT equation:

X[k] = Σ_{n=0}^{N−1} x[n]·W_N^(nk),   where W_N = e^(−j2π/N)

Split the DFT equation into even and odd frequency indexes. For the even-numbered frequencies:

X[2r] = Σ_{n=0}^{N−1} x[n]·W_N^(2rn) = Σ_{n=0}^{N/2−1} x[n]·W_N^(2rn) + Σ_{n=N/2}^{N−1} x[n]·W_N^(2rn)

Substitute variables (n → n + N/2 in the second sum) to get:

X[2r] = Σ_{n=0}^{N/2−1} x[n]·W_N^(2rn) + Σ_{n=0}^{N/2−1} x[n + N/2]·W_N^(2r(n + N/2)) = Σ_{n=0}^{N/2−1} (x[n] + x[n + N/2])·W_{N/2}^(rn)

Similarly for the odd-numbered frequencies:

X[2r+1] = Σ_{n=0}^{N/2−1} (x[n] − x[n + N/2])·W_N^(n)·W_{N/2}^(rn)

Decimation-In-Frequency FFT Algorithm: final flow graph for 8-point decimation in frequency (see figure).
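A numerical sketch of the split derived above (N = 8 and a random input, both our own choices): the even-indexed DFT bins come from the half-length DFT of x[n] + x[n+N/2], and the odd-indexed bins from the half-length DFT of (x[n] − x[n+N/2])·W_N^n:

import numpy as np

N = 8
x = np.random.randn(N)
X = np.fft.fft(x)

half = N // 2
Wn = np.exp(-2j * np.pi * np.arange(half) / N)    # twiddle factors W_N^n

even = np.fft.fft(x[:half] + x[half:])            # half-length DFT -> X[2r]
odd = np.fft.fft((x[:half] - x[half:]) * Wn)      # half-length DFT -> X[2r+1]

print(np.allclose(even, X[0::2]))                 # True
print(np.allclose(odd, X[1::2]))                  # True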

Comparison of analog and digital filters


Digital filters are not subject to the component non-linearities that greatly complicate the design of analog filters. Analog filters consist of imperfect electronic components, whose values are specified to a limit tolerance (e.g. resistor values often have a tolerance of +/- 5%) and which may also change with temperature and drift with time. As the order of an analog

filter increases, and thus its component count, the effect of variable component errors is greatly magnified. In digital filters, the coefficient values are stored in computer memory, making them far more stable and predictable.[8] Because the coefficients of digital filters are definite, they can be used to achieve much more complex and selective designs: specifically, with digital filters one can achieve a lower passband ripple, faster transition, and higher stopband attenuation than is practical with analog filters. Even if the design could be achieved using analog filters, the engineering cost of designing an equivalent digital filter would likely be much lower. Furthermore, one can readily modify the coefficients of a digital filter to make an adaptive filter or a user-controllable parametric filter. While these techniques are possible in an analog filter, they are again considerably more difficult. Digital filters can be used in the design of finite impulse response filters. Analog filters do not have the same capability, because finite impulse response filters require delay elements. Digital filters rely less on analog circuitry, potentially allowing for a better signal-to-noise ratio. A digital filter will introduce noise to a signal during analog low-pass filtering, analog-to-digital conversion, and digital-to-analog conversion, and may introduce digital noise due to quantization. With analog filters, every component is a source of thermal noise (such as Johnson noise), so as the filter complexity grows, so does the noise. However, digital filters do introduce a higher fundamental latency to the system. In an analog filter, latency is often negligible; strictly speaking, it is the time for an electrical signal to propagate through the filter circuit. In digital filters, latency is a function of the number of delay elements in the system. Digital filters also tend to be more limited in bandwidth than analog filters. High-bandwidth digital filters require expensive ADC/DACs and fast computer hardware for processing. In very simple cases, it is more cost-effective to use an analog filter. Introducing a digital filter requires considerable overhead circuitry, as previously discussed, including two low-pass analog filters.

8051
A microcontroller can be compared to a small stand-alone computer; it is a very powerful device, which is capable of executing a series of pre-programmed tasks and interacting with other hardware devices. Being packed in a tiny integrated circuit (IC) whose size and weight is usually negligible, it is becoming the perfect controller for robots or any machines requiring some kind of intelligent automation. A single microcontroller can be sufficient to control a small mobile robot, an automatic washing machine or a security system. Any microcontroller contains a memory to store the program to be executed, and a number of input/output lines that can be used to interact with other devices, like reading the state of a sensor or controlling a motor. Nowadays, microcontrollers are so cheap and easily available that it is common to use them instead of simple logic circuits like counters for the sole purpose of gaining some design flexibility and saving some space. Some machines and robots will even rely on a multitude of microcontrollers, each one dedicated to a certain task. Most recent microcontrollers are 'In System Programmable', meaning that you can modify the program being executed without removing the microcontroller from its place. Today, microcontrollers are an indispensable tool for the robotics hobbyist as well as for the engineer. Starting in this field can be a little difficult, because you usually can't understand how everything works inside that integrated circuit, so you have to study the system gradually, a small part at a time, until you can figure out the whole image and understand how the system works. The 8051 is the name of a big family of microcontrollers. The figure below shows the main features and components that the designer can interact with.

a block diagram of the 8051 main components.

As seen in the figure above, the 8051 microcontroller has nothing impressive in appearance:

- 4 KB of ROM is not much at all.
- 128 bytes of RAM (including SFRs) satisfies the user's basic needs.
- 4 ports having in total 32 input/output lines are in most cases sufficient to make all necessary connections to the peripheral environment.

The whole configuration is obviously thought out so as to satisfy the needs of most programmers working on the development of automation devices. One of its advantages is that nothing is missing and nothing is too much. In other words, it is created exactly in accordance with the average user's taste and needs. Other advantages are the RAM organization, the operation of the Central Processing Unit (CPU) and the ports, which completely use all resources and enable further upgrades.

2.2 Pinout Description


Each port has 8 pins, and will be treated from the software point of view as an 8-bit variable called a 'register', each bit being connected to a different Input/Output pin.

2.3 Input/Output Ports (I/O Ports)


All 8051 microcontrollers have 4 I/O ports, each comprising 8 bits, which can be configured as inputs or outputs. Accordingly, a total of 32 input/output pins enabling the microcontroller to be connected to peripheral devices are available for use.

Pin configuration, i.e. whether it is to be configured as an input (1) or an output (0), depends on its logic state. In order to configure a microcontroller pin as an output, it is necessary to apply a logic zero (0) to the appropriate I/O port bit. In this case, the voltage level on the appropriate pin will be 0.

Similarly, in order to configure a microcontroller pin as an input, it is necessary to apply a logic one (1) to the appropriate port bit. In this case, the voltage level on the appropriate pin will be 5V (as is the case with any TTL input). This may seem confusing but don't lose patience. It all becomes clear after studying simple electronic circuits connected to an I/O pin.

2.4 Memory Organization


The 8051 has two types of memory: Program Memory and Data Memory. Program Memory (ROM) is used to permanently save the program being executed, while Data Memory (RAM) is used for temporarily storing data and intermediate results created and used during the operation of the microcontroller. Depending on the model in use (we are still talking about the 8051 microcontroller family in general), at most a few KB of ROM and 128 or 256 bytes of RAM are used.

However, all 8051 microcontrollers have a 16-bit addressing bus and are capable of addressing 64 KB of memory. This is neither a mistake nor a big ambition of the engineers who were working on the basic core development. It is a matter of smart memory organization which makes these microcontrollers a real programmer's goody.

Impulse response
In signal processing, the impulse response, or impulse response function (IRF), of a dynamic system is its output when presented with a brief input signal, called an impulse. More generally, an impulse response refers to the reaction of any dynamic system in response to some external change. In both cases, the impulse response describes the reaction of the system as a function of time (or possibly as a function of some other independent variable that parameterizes the dynamic behavior of the system). For example, the dynamic system might be a planetary system in orbit around a star; the external influence in this case might be another massive object arriving from elsewhere in the galaxy; the impulse response is the change in the motion of the planetary system caused by interaction with the new object. In all these cases, the 'dynamic system' and its 'impulse response' may refer to actual physical objects, or to a mathematical system of equations describing these objects

Practical applications
In practical systems, it is not possible to produce a perfect impulse to serve as input for testing; therefore, a brief pulse is sometimes used as an approximation of an impulse. Provided that the pulse is short enough compared to the impulse response, the result will be close to the true, theoretical, impulse response. In many systems, however, driving with a very short strong pulse may drive the system into a nonlinear regime, so instead the system is driven with a pseudo-random sequence, and the impulse response is computed from the input and output signals.[1]
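A minimal sketch of this pseudo-random-input idea (the "unknown" system here is an arbitrary stand-in of ours, and SciPy is assumed): drive the system with white noise and estimate the impulse response from the input-output cross-correlation:

import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
u = rng.standard_normal(4096)              # pseudo-random (white noise) excitation
y = lfilter([0.5, 0.3, 0.2], [1.0], u)     # hypothetical "unknown" system

# For a white noise input, the input-output cross-correlation at lag k,
# normalized by the input energy, estimates h[k].
c = np.correlate(y, u, mode='full')[len(u) - 1:]
h_est = c / np.dot(u, u)

print(np.round(h_est[:5], 2))   # close to [0.5, 0.3, 0.2, 0, 0], up to estimation noise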

Loudspeakers
An application that demonstrates this idea was the development of impulse response loudspeaker testing in the 1970s. Loudspeakers suffer from phase inaccuracy, a defect unlike other measured properties such as frequency response. Phase inaccuracy is caused by small delayed sounds that are the result of resonance, energy storage in the cone, the internal volume, or the enclosure panels vibrating. Measuring the impulse response, which is a direct plot of this "time-smearing," provided a tool for use in reducing resonances by the use of improved materials for cones and enclosures, as well as changes to the speaker crossover. The need to limit input amplitude to maintain the linearity of the system led to the use of inputs such as pseudo-random maximum length sequences, and to the use of computer processing to derive the impulse response.[2]

Digital filtering


Impulse response is a very important concept in the design of digital filters for audio processing, because digital filters can differ from 'real' filters in often having a pre-echo, which the ear is not accustomed to.

Electronic processing


Impulse response analysis is a major facet of radar, ultrasound imaging, and many areas of digital signal processing. An interesting example would be broadband internet connections. DSL/Broadband services use adaptive equalisation techniques to help compensate for signal distortion and interference introduced by the copper phone lines used to deliver the service.

Control systems


In control theory the impulse response is the response of a system to a Dirac delta input. This proves useful in the analysis of dynamic systems: the Laplace transform of the delta function is 1, so the impulse response is equivalent to the inverse Laplace transform of the system's transfer function.

Acoustic and audio applications


In acoustic and audio applications, impulse responses enable the acoustic characteristics of a location, such as a concert hall, to be captured. Various commercial packages are available containing impulse responses from specific locations, ranging from small rooms to large concert halls. These impulse responses can then be utilized in convolution reverb applications to enable the acoustic characteristics of a particular location to be applied to target audio.[3]

Economics
In economics, and especially in contemporary macroeconomic modeling, impulse response functions describe how the economy reacts over time to exogenous impulses, which economists usually call 'shocks', and are often modeled in the context of a vector autoregression. Impulses that are often treated as exogenous from a macroeconomic point of view include changes in government spending, tax rates, and other fiscal policy parameters; changes in the monetary base or other monetary policy parameters; changes in productivity or

other technological parameters; and changes in preferences, such as the degree of impatience. Impulse response functions describe the reaction of endogenous macroeconomic variables such as output, consumption, investment, and employment at the time of the shock and over subsequent points in time.[4][5]

Status register

A status register or flag register (also: condition code register, program status word, PSW, etc.) is a collection of flag bits for a processor. An example is the FLAGS register of the x86 architecture. The status register is a hardware register which contains information about program state. Individual bits are implicitly or explicitly read and/or written by the machine code instructions executing on the processor. The status register in a traditional processor design includes at least three central flags: Zero, Carry, and Overflow, which are tested via the condition codes that are part of many machine code instructions. A status register may often have other fields as well, such as more specialized flags, interrupt enable bits, and similar types of information. During an interrupt, the status of the thread currently executing can be preserved (and later recalled) by storing the current value of the status register along with the program counter and other active registers into the machine stack or a reserved area of memory.

The most common flags

Flag | Name | Description
Z | Zero flag | Indicates that the result of an arithmetic or logical operation (or, sometimes, a load) was zero.
C | Carry flag | Enables numbers larger than a single word to be added/subtracted by carrying a binary digit from a less significant word to the least significant bit of a more significant word as needed. It is also used to extend bit shifts and rotates in a similar manner on many processors (sometimes done via a dedicated X flag).
S / N | Sign flag / Negative flag | Indicates that the result of a mathematical operation is negative. In some processors,[1] the N and S flags are distinct with different meanings and usage: one indicates whether the last result was negative whereas the other indicates whether a subtraction or addition has taken place.
V / O / W | Overflow flag | Indicates that the signed result of an operation is too large to fit in the register width using two's complement representation.
P | Parity flag | Indicates whether the number of set bits of the last result is odd or even.


Applications of IIR filters

The main advantage of IIR filters over FIR filters is that, through recursion, they need fewer taps to meet a given specification. Therefore, in digital signal applications, an IIR filter uses fewer hardware/software resources than a comparable FIR filter. On the other hand, IIR filters can become unstable, particularly when processing high-frequency components. Because of this potential instability, IIR filters require more fine-tuning than FIR filters to achieve the result of an FIR filter.
