
Presented by:

DEBASHISH DASH
ROLL-0701216109
7TH SEMESTER
ELECTRONICS AND
TELECOMMUNICATION
ENGINEERING

CONTENTS
1. Abstract
2. Introduction
3. Generations of Mobile Communication
4. 4-G architecture
5. 64-point Fourier Transform Chip
5.1. Introduction
5.2. Algorithm Formulation
5.3. Features
5.4. Description of the chip
6. Applications of the chip
7. Conclusion
8. Bibliography
1. ABSTRACT
As the generations of mobile communication systems evolved, it became
necessary to have faster, more efficient and cost-effective communication
mechanisms. The recent development that furthered the growth of
communication is the 4-G communication system.

4G refers to the fourth generation of cellular wireless standards. It is a
successor to the 3G and 2G families of standards. A 4G system is expected to
provide a comprehensive and secure all-IP based solution where facilities
such as IP telephony, ultra-broadband Internet access, gaming services and
streamed multimedia may be provided to users.

This 4-G technology thrives on high data rate systems. The most
computationally intensive parts of such a high data rate system are the 64-
point inverse FFT in the transmit direction and the Viterbi decoder in the
receive direction. The 64-point FFT in the transmit direction is realized by
the 64-point Fourier transform chip.

The chip uses a radix-8 FFT decomposition for faster computation and finds
application in systems involving heavy mathematical computation. In practice
it is applied to video motion compensation, low-power WLAN (with or without
OFDM), digital TV applications, and wireless telephony.

The chip is paving a new avenue for the development of 5-G systems, which
are still in a developmental phase.
2. Introduction

Mobile technology has grown considerably in the past few years; a major
reason for the rapid advancement of mobile network technology is the
requirement for connectivity on the move. The latest mobile handsets offer
features one had never thought of, which ultimately forces mobile network
companies to bring these features into practical use to gain commercial
advantage.

These mobile phones evolved over generations and have assumed today's
features.

Mobile phones and their networks vary significantly from provider to
provider and country to country. However, the basic communication method
of all of them is through electromagnetic microwaves exchanged with a cell
base station. The cellular companies have large antennas, which are usually
mounted on towers, buildings and poles. The cell phones have low-power
transceivers that transmit voice and data to the nearest cell sites, usually
within 5 to 8 miles (8 to 13 kilometers).

When a mobile device or phone is turned on, it registers with the mobile
telephone exchange or switch using a unique identifier, and it is alerted by
the mobile switch when there is an incoming phone call.

The handset listens for the strongest signal from the nearest base stations.
When a user moves, the mobile device hands off to the nearest sites during
phone calls, and while waiting between calls it reselects the nearest cell
phone sites.

Cell sites have relatively low-power radio transmitters. They broadcast their
presence and relay communications between the mobile handsets and the switch.
The switch, in turn, connects the call to the same or another mobile
network or subscriber.

The dialogue between the mobile phone handset and the cell phone site is a stream
of digital data, which includes the digitized audio. The technology used depends on
the mobile phone operator's system. Mobile phone technologies
have adopted AMPS for analog communication and D-AMPS,
CDMA2000, EVDO, GSM, UMTS, and GPRS for digital communication.
Each mobile phone network operates on its own assigned radio frequencies.

There are different mobile communication methods, such as SMS, WAP,
WLAN, Wi-Fi, GPRS, Bluetooth, infrared (IrDA) and I-Phone. Mobile
phones differ from cordless telephones, which operate only within the short
range of a single base station. Many types of mobile computers have been
introduced, including the laptop computer, subnotebook, portable data
terminal (PDT), personal digital assistant (PDA), tablet personal computer
and smartphone.

3. GENERATIONS OF MOBILE
COMMUNICATION
The generations of mobile communication are as follows:

1-G:

1G (or 1-G) refers to the first generation of wireless telephone technology
for mobile telecommunications. These are the analog telecommunications
standards that were introduced in the 1980s.

First Generation mobile phone networks were the earliest cellular systems to
develop, and they relied on a network of distributed transceivers to
communicate with the mobile phones. First Generation phones were also
analogue, used for voice calls only, and their signals were transmitted by the
method of frequency modulation. These systems typically allocated one 25
MHz frequency band for the signals to be sent from the cell base station to
the handset, and a second different 25 MHz band for signals being returned
from the handset to the base station. These bands were then split into a
number of communications channels, each of which would be used by a
particular caller.
These systems used analogue circuit-switched technology, with FDMA
(Frequency Division Multiple Access), and worked mainly in the 800-900
MHz frequency bands. The networks had a low traffic capacity, unreliable
handover, poor voice quality, and poor security.

2-G:

2G - Second Generation mobile telephone networks were the logical next
stage in the development of wireless systems after 1G, and they introduced
for the first time a mobile phone system that used purely digital technology.
The demands placed on the networks, particularly in the densely populated
areas within cities, meant that increasingly sophisticated methods had to be
employed to handle the large number of calls, and so avoid the risks of
interference and dropped calls at handoffs. Although many of the principles
involved in a 1G system also apply to 2G - they both use the same cell
structure - there are also differences in the way that the signals are handled,
and the 1G networks are not capable of providing the more advanced
features of the 2G systems, such as caller identity and text messaging.

In GSM 900, for example, two frequency bands of 25 MHz bandwidth are
used. The band 890-915 MHz is dedicated to uplink communications from
the mobile station to the base station, and the band 935-960 MHz is used for
the downlink communications from the base station to the mobile station.
Each band is divided into 124 carrier frequencies, spaced 200 kHz apart, in a
similar fashion to the FDMA method used in 1G systems. Then, each carrier
frequency is further divided using TDMA into eight 577 µs long "time
slots", every one of which represents one communication channel – the total
number of possible channels available is therefore 124 × 8, producing a
theoretical maximum of 992 simultaneous conversations. In the USA, a
different form of TDMA is used in the system known as IS-136 D-AMPS,
and there is another US system called IS-95 (CDMAone), which is a spread
spectrum code division multiple access (CDMA) system. CDMA is the
technique used in 3G systems.
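
As a quick check of these figures, the channel count follows directly from the band plan. A back-of-the-envelope sketch in Python (the subtraction of one carrier reflects the edge guard band, which is why 25 MHz at 200 kHz spacing yields 124 rather than 125 usable carriers):

```python
# GSM 900 channel arithmetic from the figures above.
band_khz, spacing_khz, slots_per_carrier = 25_000, 200, 8
carriers = band_khz // spacing_khz - 1   # 124 usable carriers (edge guard band)
channels = carriers * slots_per_carrier
print(carriers, channels)                # 124 carriers -> 992 channels
```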
2.5G: 2.5G (Second Generation Enhanced) is a generic term used to refer to
a standard of wireless mobile telephone networks that lies somewhere
between 2G and 3G. The development of 2.5G has been viewed as a
stepping-stone towards 3G, which was prompted by the demand for better
data services and access to the Internet. In the evolution of mobile
communications, each generation provides a higher data rate and additional
capabilities, and 2.5G is no exception, as it provides faster services than
2G, but not as fast or as advanced as the newer 3G systems.

Some observers have seen 2.5G as an alternative route to 3G, but this
appears to be short-sighted as 2.5G is several times slower than the full 3G
service. In technical terms 2.5G extends the capabilities of 2G systems by
providing additional features, such as a packet-switched connection (GPRS)
in the TDMA-based GSM system, and enhanced data rates (HSCSD and
EDGE).

These enhancements in 2.5G systems permit data speeds of 64-144 kbps,
which enables these phones to feature web browsing, the use of navigational
maps, voice mail, fax, and the sending and receiving of large email
messages.

3-G:

Third Generation mobile telephone networks are the latest stage in the
development of wireless communications technology. Significant features of
3G systems are that they support much higher data transmission rates and
offer increased capacity, which makes them suitable for high-speed data
applications as well as for the traditional voice calls. In fact, 3G systems are
designed to process data, and since voice signals are converted to digital
data, this results in speech being dealt with in much the same way as any
other form of data. Third Generation systems use packet-switching
technology, which is more efficient and faster than the traditional circuit-
switched systems, but they do require a somewhat different infrastructure to
the 2G systems.

Compared to earlier mobile phones, a 3G handset provides many new
features, and the possibilities for new services are almost limitless, including
many popular applications such as TV streaming, multimedia,
videoconferencing, Web browsing, e-mail, paging, fax, and navigational
maps.

Japan was the first country to introduce a 3G system, which was largely
because the Japanese PDC networks were under severe pressure from the
vast appetite in Japan for digital mobile phones. Unlike the GSM systems,
which developed various ways to deal with demand for improved services,
Japan had no 2.5G enhancement stage to bridge the gap between 2G and 3G,
and so the move into the new standard was seen as a solution to their
capacity problems.

It is generally accepted that CDMA is a superior transmission technology
when it is compared to the old techniques used in GSM/TDMA. WCDMA
systems make more efficient use of the available spectrum, because the
CDMA technique enables all base stations to use the same frequency. In the
WCDMA system, the data is split into separate packets, which are then
transmitted using packet switching technology, and the packets are
reassembled in the correct sequence at the receiver end by using the code
that is sent with each packet. WCDMA has a potential problem, caused by
the fact that, as more users simultaneously communicate with a base station,
then a phenomenon known as “cell breathing” can occur. This effect means
that the users will compete for the finite power of the base station’s
transmitter, which can reduce the cell’s range – W-CDMA and cdma2000
have been designed to alleviate this problem.

The operating frequencies of many 3G systems will typically use parts of the
radio spectrum in the region of approximately 2GHz (the IMT-2000 core
band), which were not available to operators of 2G systems, and so are away
from the crowded frequency bands currently being used for 2G and 2.5G
networks. UMTS systems are designed to provide a range of data rates,
depending on the user’s circumstances, providing up to 144 kbps for moving
vehicles (macrocellular environments), up to 384 kbps for pedestrians
(microcellular environments) and up to 2 Mbps for indoor or stationary users
(picocellular environments). In contrast, the data rates supported by the basic
2G networks were only 9.6 kbps, such as in GSM, which was inadequate to
provide any sophisticated digital services.

4-G:

Owing to the limitations of 3G, a new generation of mobile communication,
the fourth generation, is being developed. This 4G system is intended to be
more reliable.

Nowadays, some companies have started developing 4G communication
systems. This technology can offer uplink rates of up to 200 Mbps, so more
data can be transferred to and from the mobile phone. A 4G mobile can
therefore take on more functions, such as working as a television. Some
telecommunication companies have claimed that they will apply this 4G
system to business, and that it will bring more convenience to people.

Transition from 1-G to 4-G

4. 4-G ARCHITECTURE
To reap the economic and developmental benefits of competition, namely
diversified service offerings and rapid technological evolution, the mobile value
chain must be open so as to foster and harbor the participation of multiple new
players (e.g., value-added service providers, content providers, application
developers, etc.). These players will cooperate with the incumbent mobile operators
to contribute additional value to the mobile service provision process but will also
compete for the lion's share of user revenue.

Analyzing the “ABC” vision

In the 4G mobile communication era, a plethora of disparate services and
multimedia applications will have to be flexibly yet efficiently deployed over a
heterogeneous multinetwork environment, raising service management
requirements. Nonetheless, mobile users will expect seamless global roaming
across these different wireless networks and ubiquitous access to personalized
applications and rich content via a universal and user-friendly interface. In
studying the implications of the heralded "Always Best Connected" vision of 4G
mobile systems, we identify the notion of utility, implicitly embedded in the "best"
adjective. Utility is a fundamental concept in microeconomic theory that concerns
a typically continuous function representation of the consumer's preference
relation over a set of commodities.
User utility issues

Users engage communication-based applications to realize various subjective
benefits. These applications depend on the timely and orderly provision of network
bearer services to exchange application-specific signaling and to move various
classes of user information (e.g., image, video, corporate data) between
communicating application endpoints. Inasmuch as the network is unable to
provide the required levels of service, applications will become dysfunctional and
any user-perceived benefits of these applications will remain elusive, thus leading
to a degraded user experience. Performance of communication-based applications
depends on the accommodation of QoS requirements for their native signaling and
the exchange of arbitrary user information. From a network viewpoint, these
factors translate to traffic flows with different QoS requirements that will – in
principle – levy different charges, thereby decreasing user satisfaction. Thus,
ensuring adequate performance for communication-based applications so as to
maximize user satisfaction translates to honoring the QoS requirements of their
traffic flows while minimizing the overall charges incurred, i.e., solving the user's
utility maximization problem. Providers of network bearer services face the dual
problem, i.e., maximizing revenue and minimizing network resource usage whilst
meeting QoS requirements for all serviced traffic flows.

However, having the network meet the QoS requirements of communication-based
applications does not – necessarily – maximize user utility. Considering QoS as a
multidimensional space, user utility is a diminishing function of quantity along
each of the individual dimensions of QoS (e.g., packet delay). For
communication-based applications that operate on multiple QoS levels and content
resolutions, requesting the highest QoS level possible does not necessarily increase
user utility. For, in general, the higher the QoS level chosen by the application, the
more network resources must be allocated to support it and the more costly the use
of the network will be.

Given the multitude and diversity in the product offerings of the value chain
participants, the technological complexity of the overall heterogeneous system and
the IT illiteracy of the major consumer segment, it is understandable that most
users will be unable to engage and coordinate such service provision matters all by
themselves so as to maximize their utility. Consequently, some kind of intelligent
mediation as part of the mobile service provision process should be introduced to
efficiently cater for the utility-related aspects. We believe that such mediation is a
task that cannot – and should not – be undertaken by any of the aforementioned
roles in the mobile value chain (e.g., value-added service provider, mobile network
operator), for each of them will find interest in biasing a solution to his/her own
preference – and monetary benefit, of course. Thereupon, we claim that a trusted
user delegate (e.g., an intelligent agent) should always provide for the mediation
between the value chain participants in providing services and applications, as well
as for an unbiased solution to the user's utility maximization problem.
Fundamentally, that constitutes the emergence of a new role in the value chain; a
role that will maintain the customer relationship and provide the user with a
universal roaming and service access capability whilst accommodating personal
preferences, regardless of the access network(s) and terminal equipment in use.
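
To make the utility maximization argument concrete, here is a toy sketch. All numbers and the simple utility-minus-charge objective are hypothetical illustrations, not part of any 4G specification; the point is only that the delegate picks the QoS level where net benefit peaks, not the highest level offered:

```python
# Hypothetical figures: utility diminishes along the QoS dimension while
# charges grow, so the net benefit peaks below the top QoS level.
qos_levels = ["low", "medium", "high", "premium"]
utility = [2.0, 5.0, 6.5, 7.0]   # diminishing returns (illustrative values)
charge  = [1.0, 2.0, 4.0, 6.5]   # cost of the network resources tied up
net = [u - c for u, c in zip(utility, charge)]
print(qos_levels[net.index(max(net))])   # "medium", not "premium"
```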
Regardless of whether this new role will emerge through fission from the incumbent
mobile network operator role or through further evolution of existing MVNO
approaches, one of its major tasks will be to provide billing services for its customer
(i.e., the mobile user) by collecting related charging information from other players'
equipment that is engaged in the mobile service provision process, correlating it, and
issuing a single itemized bill to the customer, thus fulfilling user requirements for
one-stop billing. In addition, it will act as a clearinghouse, realizing accounting
procedures that apportion revenue between the interested players according to
bilateral or multilateral accounting agreements. The intelligent mediation process
could be part of a service provision platform that mediates between independent
application providers' and mobile network operators' domains to accomplish a
flexible deployment model for value-added services and multimedia applications
developed by the former over the network infrastructure managed by the latter.

The alternative service management option is to impose several bilateral customer
relationships between the user and all kinds of wireless access network operators
he/she may contact when accessing any value-added service or application.
However, that significantly complicates service provision by mandating the
resolution of all technical issues (e.g., deployment details, activation preconditions,
pricing structures, etc.) on a per service/application provider-mobile network
operator basis for each particular user – an approach that is clearly non-scalable.
For intelligent mediation to work, network and terminal domain functionality must
be controllable by higher-layer or third-party entities besides the network and
terminal entities engaging in protocol signaling related to the particular
functionality. That is, the network control and management plane should be
exposed to higher-layer entities (e.g., the "mediator" agent), and technologically
agnostic interactions (e.g., IDL) that allow such higher-layer entities to monitor
and control network protocol signaling, and thereby overall network behavior,
should be specified. Thus, rather than a concrete, all-encompassing architecture, a
set of interworking approaches with standard technological solutions seems to be a
more suitable approach for 4G.

5. 64-point Fourier transform chip

FFT CHIP LAYOUT


5.1. Introduction:

Fourth generation wireless and mobile systems are currently the focus of research
and development. Broadband wireless systems based on orthogonal frequency
division multiplexing will allow packet-based high data rate communication
suitable for video transmission and mobile internet applications. Considering this
fact, a data path architecture using dedicated hardware is proposed for the
baseband processor. The most computationally intensive parts of such a high data
rate system are the 64-point inverse FFT in the transmit direction and the Viterbi
decoder in the receive direction. Accordingly, an appropriate design methodology
for constructing them has to be chosen, considering: a) how much silicon area is
needed; b) how easily the particular architecture can be made flat for
implementation in VLSI; c) how many wire crossings and how many long wires
carrying signals to remote parts of the design are necessary in the actual
implementation; d) how small the power consumption can be.
5.2 Algorithm formulation:
• Discrete Fourier Transform (DFT) is a fundamental digital signal
processing algorithm used in many applications, including frequency
analysis and frequency domain processing.
• DFT is the decomposition of a sampled signal in terms of sinusoidal
(complex exponential) components.
• The symmetry and periodicity properties of the DFT are exploited to
significantly lower its computational requirements.
• The resulting algorithms are known as Fast Fourier Transforms (FFTs).

• A 64-point DFT computes a sequence X(k) of 64 complex-valued
numbers from a data sequence x(n) of length 64 according to the
formula: X(k) = Σ_{n=0}^{63} x(n) e^(–j2πnk/64), k = 0 to 63.
• To simplify the notation, the complex-valued phase factor e^(–j2πnk/64)
is usually written as W64^(nk), where W64 = cos(2π/64) – j sin(2π/64).
• The basis of the FFT is that a DFT can be divided into smaller DFTs.
• In the processor USFFT64 a radix-8 FFT algorithm is used.
• It divides the 64-point DFT into two smaller DFTs of length 8, as shown
in the formula (a numerical check follows this list):
X(k) = X(8r+s) = Σ_{m=0}^{7} W8^(mr) W64^(ms) Σ_{l=0}^{7} x(8l+m) W8^(sl),
r = 0 to 7, s = 0 to 7.
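
The decomposition above can be checked numerically. The following NumPy sketch implements the two-stage radix-8 formula exactly as written (the arrangement x[l, m] = x(8l+m) and the matrix form are our own illustrative choices, not the chip's internal layout) and compares it against a reference 64-point DFT:

```python
import numpy as np

def fft64_radix8(x):
    """64-point DFT via X(8r+s) = sum_m W8^(mr) W64^(ms) sum_l x(8l+m) W8^(sl)."""
    x = np.asarray(x, dtype=complex).reshape(8, 8)     # x[l, m] = x(8l + m)
    idx = np.arange(8)
    F8 = np.exp(-2j * np.pi * np.outer(idx, idx) / 8)  # 8-point DFT matrix W8^(sl)
    G = F8 @ x                                         # inner 8-point DFTs over l
    s, m = np.meshgrid(idx, idx, indexing="ij")
    T = G * np.exp(-2j * np.pi * s * m / 64)           # twiddle factors W64^(ms)
    return (F8 @ T.T).reshape(64)                      # outer DFTs over m; index 8r + s

x = np.random.randn(64) + 1j * np.random.randn(64)
assert np.allclose(fft64_radix8(x), np.fft.fft(x))     # matches the direct DFT
```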
5.3 Features:
• 64 -point radix-8 FFT.
• Forward and inverse FFT.
• Pipelined mode operation: each result is output in one clock cycle; the
latent delay from input to output is equal to 163 clock cycles;
simultaneous loading/downloading is supported (see the throughput
sketch after this list).
• Input data, output data, and coefficient widths are parameterizable in
range 8 to 16.
• Either two or three data buffers can be selected.
• FFT for 10-bit data and coefficient width is calculated on a Xilinx
XC4SX25-12 FPGA at a 250 MHz clock frequency, and on a Xilinx
XC5SX25-12 FPGA at a 300 MHz clock frequency, respectively.
• The FFT unit for 10-bit data and coefficients and 2 data buffers occupies
1513 CLB slices, 4 DSP48 blocks, and 2.5 kbit of RAM in the Xilinx
XC4SX25 FPGA, and 700 CLB slices, 4 DSP48E blocks, and 2.5 kbit
of RAM in the Xilinx XC5SX25 FPGA; the data buffers are implemented
in distributed RAM.
• Overflow detectors of intermediate and resulting data are present.
• Two normalizing shifter stages provide the optimum data magnitude
bandwidth.
• Structure can be configured in Xilinx, Altera, Actel, Lattice FPGA
devices, and ASIC.
• Can be used in OFDM modems, software defined radio, multichannel
coding.
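
A rough sketch of what the pipelined-mode figures above imply for throughput, as simple arithmetic on the quoted numbers (not vendor data):

```python
# One complex result per clock in pipelined mode, 163-cycle latent delay.
clock_hz = 250e6                     # quoted Xilinx XC4SX25-12 clock rate
print(clock_hz / 64)                 # ~3.9 million 64-point transforms per second
print(163 / clock_hz * 1e9, "ns")    # ~652 ns input-to-output latent delay
```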

5.4 Description of the chip:


Input and output data are represented by nb-bit and (nb+3)-bit two's complement
complex integers, respectively. The twiddle coefficients are nw-bit wide numbers.

Typical core interconnection:


The core interconnection depends on the nature of the application in which it is
used. The simplest core interconnection handles the calculation of an unlimited
data stream whose samples are input in each clock cycle. This interconnection is
shown in the figure.

Here DATA_SRC is the data source, for example an analog-to-digital converter,
and USFFT64 is the core, customized here with 3 inner data buffers. The FFT
algorithm starts with the impulse START. The respective results are output after
the READY impulse and accompanied by the address code ADDR. The signal
START is needed for global synchronization, and can be generated once before
the system operation.

The input data have the natural order, and can be numbered from 0 to 63. When 3
inner data buffers are configured, the output data have the natural order. When 2
inner data buffers are configured, the output data have the 8th-inverse (radix-8
digit-reversed) order, i.e. the order is 0, 8, 16, ..., 56, 1, 9, 17, ..., as illustrated
by the sketch below.
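
The 8th-inverse order is a radix-8 digit reversal of the 6-bit index: writing an index k as 8a + b, the sample at output position k is input sample 8b + a. A one-line check:

```python
# Radix-8 digit-reversed ("8th inverse") output order for 64 points.
order = [8 * (k % 8) + k // 8 for k in range(64)]
print(order[:10])   # [0, 8, 16, 24, 32, 40, 48, 56, 1, 9] -- as in the text
```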
Block diagram of the chip:
The basic block diagram of the USFFT64 core with two data buffers is shown in
the figure.

Interface:

Components:
BUFRAM64 – data buffer with row writing and column reading;
FFT8 – datapath, which calculates the 8-point DFT;
CNORM – shifter providing a 0-, 1-, 2- or 3-bit left shift;
ROTATOR64 – complex multiplier with twiddle factor ROM;
CT64 – counter modulo 64.
Unique Features of the Chip:
*Highly pipelined calculations:

Each base FFT operation is computed by the computational unit called FFT8,
which is the datapath for the FFT calculations. FFT8 calculates the 8-point DFT in
a highly pipelined mode. Therefore, in each clock cycle one complex number is
read from the input data buffer RAM and one complex result is written to the
output buffer RAM. The 8-point DFT algorithm is divided into several stages,
which are implemented in the stages of the FFT8 pipeline. This supports
increasing the clock frequency up to 250 MHz and higher. The latent delay of the
FFT8 unit from the input of the first data to the output of the first result is equal to
14 clock cycles. A staged software sketch follows.
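
As a software stand-in for the staged structure just described (the real datapath's stage partitioning is not detailed here; this sketch uses a standard three-stage radix-2 decimation-in-frequency decomposition of the 8-point DFT purely for illustration):

```python
import numpy as np

def dft8_staged(x):
    """8-point DFT as three radix-2 butterfly stages (DIF), one per pipeline step."""
    x = np.asarray(x, dtype=complex)
    for half in (4, 2, 1):                       # three successive stages
        y = np.empty(8, dtype=complex)
        for base in range(0, 8, 2 * half):
            for i in range(half):
                a, b = x[base + i], x[base + i + half]
                y[base + i] = a + b              # butterfly sum
                y[base + i + half] = (a - b) * np.exp(-2j * np.pi * i * (4 // half) / 8)
        x = y
    return x[[0, 4, 2, 6, 1, 5, 3, 7]]           # undo DIF bit-reversed order

x = np.random.randn(8) + 1j * np.random.randn(8)
assert np.allclose(dft8_staged(x), np.fft.fft(x))
```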
High precision computations:
In the core, the inner data bit width is 4 bits wider than the input data bit width.
The main error source is the result truncation after multiplication by the twiddle
factors W64^(ms). Because most of the base FFT operation calculations are
additions, they are calculated without errors. The FFT results have a data bit width
which is 3 bits wider than the input data bit width, which provides a high dynamic
range of results when the input data is a sinusoidal signal. The maximum result
error is less than 1 least significant bit of the input data. Besides, normalizing
shifters are attached to the outputs of the FFT8 pipelines, which provide the proper
bandwidth of the resulting data. The overflow detector outputs provide the
opportunity to input the proper left-shift bit number for these shifters.
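
A crude software illustration of the truncation error source named above. This quantizes only the twiddle factors of a direct DFT to 10 bits and measures the effect on a sinusoidal input; it is an assumption-laden stand-in, not a model of the chip's fixed-point datapath:

```python
import numpy as np

def quantize(v, bits=10):
    scale = 2.0 ** (bits - 1)
    return np.round(v * scale) / scale           # round to 10-bit fractions

n = np.arange(64)
x = np.exp(2j * np.pi * 3 * n / 64)              # sinusoidal test input
F = np.exp(-2j * np.pi * np.outer(n, n) / 64)    # exact 64-point DFT matrix
Fq = quantize(F.real) + 1j * quantize(F.imag)    # 10-bit twiddle factors
err = np.max(np.abs((Fq - F) @ x)) / np.max(np.abs(F @ x))
print(f"relative error with 10-bit twiddles: {err:.1e}")
```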
Low hardware volume:
The USFFT64 processor has a minimal multiplier count, equal to 4. This fact
makes the core attractive for implementation in an ASIC. When configured in a
Xilinx FPGA, these multipliers are implemented in 4 DSP48 units. The user can
select the input data, output data, and coefficient widths that suit the dynamic
range needs of the application. This can minimize both logic hardware and
memory volume.

6. Applications of the chip


a) Video motion compensation:
A block diagram of a phase-correlated motion estimator is shown in the figure. An
image is first partitioned into blocks which are typically 64 pixels by 64 lines. Each
image block, in two successive fields, is then phase correlated. This is achieved by
first applying a 2-D FFT to each block. The phase of each transformed block is
then subtracted from the corresponding block of the previous field. Meanwhile,
the amplitudes are normalized to eliminate any variations in illumination which
may confuse the motion measurement. The phase differences and the normalized
amplitudes are then subjected to a 2-D inverse FFT to produce a phase correlation
surface. The surface obtained contains peaks whose coordinates from the center
correspond to the vertical and horizontal velocity components of dominant motions
in the scene. A list of trial motion vectors is therefore compiled by locating the
peaks. The figure demonstrates some examples of the correlation surface. The trial
motion vectors are then passed to a vector assignment unit as candidate vectors and
are used to derive a valid motion vector for each pixel in the scene. The two
image fields are spatially shifted by each candidate vector in turn and correlated by
an image correlator. During the spatial correlation, the modulus of the luminance
difference is calculated and analyzed. The vector which gives the lowest value is
regarded as the valid vector. Other applications of phase correlation include slow
motion generation, noise reduction, correction of film unsteadiness, and HDTV
bandwidth reduction. The availability of cost-effective real-time FFT VLSI chips
therefore becomes a prerequisite for the hardware implementation of such systems.

Prior work on Fourier transform systems typically falls into one of two categories:
a) methods based on direct discrete Fourier transform implementations [5]–[7] and
b) methods based on direct hardware implementations of established FFT signal
flow graphs [8]–[18]. The problem with such solutions is that the approach adopted
at the algorithmic level typically takes little account of implications at the
architecture, data flow, or chip design levels. Consequently, many such designs
may be irregular, dominated by wiring, and may have heavy overheads in terms of
data storage. The approach adopted here has been to develop
algorithm-to-architecture mappings directly from first principles with the aim of
producing an efficient silicon solution. In particular, the inherent structure within
the DFT matrix has been exploited to tailor the organization and flow of data to
that best suited to candidate architectural solutions. In the case of a radix-4 matrix
factorization, this involves partitioning the DFT matrix into regular blocks of
smaller matrices, with these then being mapped onto a regular silicon structure.
The merits of using a radix-4 decomposition (rather than other radices) are well
documented [19]. These include regularity, simplicity, and a reduction in the
number of complex multiplications required. Conventional Cooley–Tukey radix-4
FFT algorithms [20] are based on the decimation of either the time or the
frequency samples, leaving the other in a natural order. However, investigations
undertaken in the course of this research suggested that greater architectural
advantage can be achieved by using a base-4 decomposition in which both the time
and frequency transform data are reversed in order prior to the matrix
factorization. This decimation-in-time-and-frequency (DITF) algorithm leads not
only to computational efficiencies similar to those of more conventional
Cooley–Tukey algorithms but also to regular architectures where data flow is
tailored to that required in typical real-time image processing applications. As will
be discussed, it also leads to important reductions in data storage requirements.
The approach adopted here can be explained by considering the example of a
16-point DFT matrix. A hedged software sketch of the phase correlation step
follows.
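
A compact NumPy sketch of the phase correlation procedure just described (function and variable names are our own; a real motion estimator would add windowing and sub-pixel peak interpolation):

```python
import numpy as np

def phase_correlate(shifted, reference):
    """Estimate the (dy, dx) displacement of `shifted` relative to `reference`."""
    cross = np.fft.fft2(shifted) * np.conj(np.fft.fft2(reference))
    cross /= np.abs(cross) + 1e-12           # keep phase, normalize amplitude
    surface = np.fft.ifft2(cross).real       # phase correlation surface
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # Peaks past the midpoint wrap around to negative displacements.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, surface.shape))

rng = np.random.default_rng(0)
block = rng.random((64, 64))                 # a 64 x 64 block, as in the text
moved = np.roll(block, (5, -3), axis=(0, 1))
print(phase_correlate(moved, block))         # (5, -3)
```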
b) WLAN applications:
A novel 64-point FFT/IFFT architecture for high-speed WLAN systems based on
OFDM transmission has been presented. This architecture is based on a
decomposition of the 64-point FFT into two 8-point FFTs so that the resulting
algorithm-to-architecture mapping is well suited for silicon implementation. It
exhibits numerous attractive features from a VLSI point of view, which include
regularity, modularity, simple wiring, and high throughput. A new low-power,
high-performance 64-point FFT/IFFT chip for WLAN applications has been
successfully designed and fabricated based on the architecture described. The chip
includes all the necessary interfaces to integrate it directly with the other data path
elements of an IEEE 802.11a modem or similar applications. The chip computes a
64-point parallel-to-parallel FFT/IFFT in 23 clock cycles, yet dissipates only 41
mW at a 20 MHz clock frequency and a 1.8 V supply voltage. The new chip is
expected to result in a considerable reduction in cost, size, and power dissipation
for existing WLAN systems. The work undertaken demonstrates how combining
higher-level system requirements with efficient algorithm-to-architecture mapping
strategies can produce efficient silicon solutions for high-performance WLAN
systems.
c) Miscellaneous applications:
The 64-point Fourier transform chip is also used in other applications, such as
wireless telephony and digital TV.

7. CONCLUSION
With the communication industry growing and expanding, faster and cheaper
methods of communication are being devised. The aforementioned chip is an
instance of this.

The number of non-trivial complex multiplications for the conventional 64-point
radix-2 DIT FFT is 66. The approach described here results in a reduction of about
26% in complex multiplications compared to the conventional radix-2 64-point
FFT. This reduction in arithmetic complexity further enhances the scope for
realizing a low-power 64-point FFT processor.

This in turn reduces the cost of developing complex systems and paves the way
for a more advanced genre of communication systems.
8. BIBLIOGRAPHY
1. www.wikipedia.org
2. www.google.com
3. http://cellphones.about.com/
4. http://it.med.miami.edu/x675.xml
5. www.citeseerx.ist.psu.edu
6. www.ieeexplore.ieee.org
7. www.seminarprojects.com
8. Digital Signal Processing by P. Ramesh Babu
